EP1794743B1 - Dispositif et procede pour regrouper des segments temporels d'un morceau de musique - Google Patents

Dispositif et procede pour regrouper des segments temporels d'un morceau de musique

Info

Publication number
EP1794743B1
EP1794743B1 (application EP05760763.2A)
Authority
EP
European Patent Office
Prior art keywords
segment
class
similarity
segments
designed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Not-in-force
Application number
EP05760763.2A
Other languages
German (de)
English (en)
Other versions
EP1794743A1 (fr)
Inventor
Markus Van Pinxteren
Michael Saupe
Markus Cremer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of EP1794743A1 publication Critical patent/EP1794743A1/fr
Application granted granted Critical
Publication of EP1794743B1 publication Critical patent/EP1794743B1/fr
Not-in-force legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/061 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal, for extraction of musical phrases, isolation of musically relevant segments, e.g. musical thumbnail generation, or for temporal structure analysis of a musical piece, e.g. determination of the movement sequence of a musical work

Definitions

  • The present invention relates to audio segmentation, and more particularly to the analysis of pieces of music with respect to the individual main parts they contain, which may occur repeatedly within the piece of music.
  • Rock and pop music mostly consists of more or less distinct segments, such as intro, verse, chorus, bridge, outro, etc. Detecting the start and end times of such segments and grouping the segments according to their affiliation with the most important classes (verse and chorus) is the goal of audio segmentation. Correct segmentation and labeling of the computed segments can be put to good use in various areas. For example, pieces of music from online providers, such as Amazon, Musicline, etc., can be intelligently "previewed".
  • Another application example of the technique of audio segmentation is to integrate the segmentation / grouping / marking algorithm into a music player.
  • The information about segment beginnings and segment ends enables targeted navigation through a piece of music. Based on the class affiliation of the segments, i.e. whether a segment is a verse, a chorus, etc., it is, e.g., also possible to jump directly to the next chorus or the next verse.
  • Such an application is of interest to large music stores that offer their customers the opportunity to listen to complete albums. It saves the customer the annoying, searching fast-forwarding to characteristic parts of the song, which might in the end actually persuade him to buy a piece of music.
  • a WAV file 500 is provided.
  • A feature extraction then takes place, extracting as features the spectral coefficients themselves or, alternatively, the mel frequency cepstral coefficients (MFCCs).
  • MFCC: mel frequency cepstral coefficient
  • STFT: short-time Fourier transform
  • the MFCC features are then extracted in the spectral range.
  • the extracted features are then stored in a memory 504.
  • The feature extraction is now followed by a segmentation algorithm that results in a similarity matrix, as shown in block 506.
  • The feature matrix is read in (508) in order to then group feature vectors (510) and to build, based on the grouped feature vectors, a similarity matrix which results from a distance measurement between all pairs of features.
  • All pairs of audio windows are compared using a quantitative similarity measure, namely a distance.
  • The structure of the similarity matrix is shown in Fig. 8. There, the piece of music is represented as a stream 800 of audio samples. The audio piece is windowed, as has been described, with a first window labeled i and a second window labeled j. Overall, the audio piece has, e.g., K windows, so the similarity matrix has K rows and K columns. For each window i and each window j, a similarity measure to the respective other window is calculated, and the calculated similarity or distance measure D(i, j) is entered into the row and column of the similarity matrix designated by i and j. A column therefore shows the similarity of the window designated by j to all other audio windows in the piece of music.
  • the similarity of the window j to the very first window of the piece of music would then be in the column j and in the line 1.
  • the similarity of the window j to the second window of the piece of music would then be in the column j, but now in the line 2.
  • The similarity of the second window to the first window would then be in the second column of the matrix and in the first row of the matrix.
  • The matrix is redundant in that it is symmetric about the main diagonal, and the main diagonal holds the similarity of each window to itself, which is the trivial case of 100% similarity.
  • An example of a similarity matrix of a piece is shown in Fig. 6.
  • The structure of the matrix, completely symmetric with respect to the main diagonal, is recognizable, and the main diagonal is visible as a light stripe.
  • The main diagonal is not visible as a lighter continuous line, but is only approximately recognizable in Fig. 6.
  • A kernel correlation 512 is then performed using a kernel matrix 514 in order to obtain a novelty measure, also known as a novelty score, which, in an averaged and smoothed form, is shown in Fig. 9.
  • the smoothing of this Novelty Score is in Fig. 5 schematically represented by a block 516.
  • The segment boundaries are read off using the smoothed novelty curve: the local maxima in the smoothed novelty curve are determined and, if necessary, shifted by a constant number of samples caused by the smoothing, so that they actually represent the correct segment boundaries of the audio piece as absolute or relative times.
  • An example of a segment similarity matrix is shown in Fig. 7.
  • The segment similarity matrix in Fig. 7 is basically similar to the feature similarity matrix of Fig. 6, but now features are no longer used per window, as in Fig. 6, but features from a whole segment.
  • The segment similarity matrix therefore has a similar meaning to the feature similarity matrix, but with a much coarser resolution, which is of course desired when one considers that window lengths are in the order of 0.05 seconds, while reasonably long segments of a piece are in the order of perhaps 10 seconds.
  • a clustering is carried out, ie an arrangement of the segments into segment classes (an arrangement of similar segments in the same segment class), in order then to mark the found segment classes in a block 524, which is also referred to as "labeling".
  • In the labeling it is determined which segment class contains segments that are verses, which contains the choruses, and which contains intros, outros, bridges, and so on.
  • A user can then, e.g., be given the possibility of hearing, without redundancy, only a verse, a chorus and the intro of a piece.
  • the corresponding feature matrix is read out and loaded into a main memory for further processing.
  • the feature matrix has the dimension number of analysis windows times the number of feature coefficients.
  • The similarity matrix brings the feature course of a piece into a two-dimensional representation. For each pairwise combination of feature vectors, a distance measure is computed and recorded in the similarity matrix. There are various possibilities for calculating the distance measure between two vectors, in particular the Euclidean distance and the cosine distance.
  • A result D(i, j) between the two feature vectors is stored in the (i, j)-th element of the window similarity matrix (block 506).
  • The main diagonal of the similarity matrix represents the time course over the entire piece. The elements of the main diagonal result from the comparison of each window with itself and therefore always have the value of the greatest similarity. For the cosine distance measure this is the value 1; for the simple scalar difference and the Euclidean distance this value is 0.
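To make the construction of the window similarity matrix concrete, the following sketch computes such a matrix from a feature matrix using the cosine or Euclidean distance measure mentioned above. It is a minimal illustration under freely chosen names and parameters, not the implementation of the patent:

```python
import numpy as np

def window_similarity_matrix(features, metric="cosine"):
    """Compute a K x K self-similarity matrix from a feature matrix.

    features: array of shape (K, F), one feature vector (e.g. MFCCs) per analysis window.
    Returns S with S[i, j] describing how similar window i is to window j.
    """
    K = features.shape[0]
    S = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            a, b = features[i], features[j]
            if metric == "cosine":
                # cosine similarity: value 1 on the main diagonal (maximum similarity)
                S[i, j] = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            else:
                # Euclidean distance: value 0 on the main diagonal (maximum similarity)
                S[i, j] = np.linalg.norm(a - b)
    return S

# example: 200 analysis windows with 13 MFCC coefficients each
S = window_similarity_matrix(np.random.rand(200, 13))
```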
  • each element i, j is assigned a gray value.
  • The gray values are graded in proportion to the similarity values, so that the maximum similarity (on the main diagonal) corresponds to the maximum gray value.
  • the structure of the similarity matrix is important to the novelty measure calculated in kernel correlation 512.
  • the novelty measure arises from the correlation of a particular kernel along the main diagonal of the similarity matrix.
  • An exemplary kernel K is shown in Fig. 5. If one correlates this kernel matrix along the main diagonal of the similarity matrix S and, for each time point i of the piece, sums all the products of the superimposed matrix elements, one obtains the novelty measure, which is shown in a smoothed form by way of example in Fig. 9.
  • Preferably, not the kernel K as shown in Fig. 5 is used, but an enlarged kernel which is additionally superimposed with a Gaussian distribution, so that the edges of the kernel matrix go towards zero.
  • the selection of the striking maxima in the novelty course is important for the segmentation.
  • the selection of all maxima of the unsmoothed novelty course would lead to a strong over-segmentation of the audio signal.
  • the novelty measure should be smoothed with different filters, such as IIR filters or FIR filters.
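As an illustration of the kernel correlation and smoothing described above, the following sketch correlates a checkerboard kernel, optionally tapered with a Gaussian, along the main diagonal of the similarity matrix and then smooths the resulting novelty curve with a simple FIR (moving-average) filter. Kernel size and filter width are assumptions made only for this example:

```python
import numpy as np

def novelty_curve(S, kernel_size=64, gaussian_taper=True):
    """Correlate a checkerboard kernel along the main diagonal of the similarity matrix S."""
    half = kernel_size // 2
    # checkerboard kernel: +1 within a homogeneous region, -1 across a boundary
    kernel = np.ones((kernel_size, kernel_size))
    kernel[:half, half:] = -1
    kernel[half:, :half] = -1
    if gaussian_taper:
        ax = np.linspace(-1.0, 1.0, kernel_size)
        g = np.exp(-4.0 * ax ** 2)
        kernel *= np.outer(g, g)          # edges of the kernel go towards zero
    K = S.shape[0]
    novelty = np.zeros(K)
    for i in range(half, K - half):
        patch = S[i - half:i + half, i - half:i + half]
        novelty[i] = np.sum(patch * kernel)   # sum of products of superimposed elements
    return novelty

def smooth_novelty(novelty, width=8):
    """FIR smoothing (moving average) of the novelty score."""
    return np.convolve(novelty, np.ones(width) / width, mode="same")
```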
  • Once the segment boundaries of a piece of music have been extracted, similar segments must be identified as such and grouped into classes.
  • Foote and Cooper describe the calculation of a segment-based similarity matrix using a Kullback-Leibler distance.
  • individual segment feature matrices are extracted from the entire feature matrix on the basis of the segment boundaries obtained from the novelty process, ie each of these matrices is a submatrix of the entire feature matrix.
  • the resulting segment similarity matrix 520 is now subjected to Singular Value Decomposition (SVD). Then one obtains singular values in descending order.
  • an automatic digest of a piece is then performed based on the segments and clusters of a piece of music. For this purpose, first the two clusters with the largest singular values are selected. Then, the segment with the maximum value of the corresponding cluster indicator is added to this summary. This means that the summary includes a stanza and a chorus. Alternatively, all repeated segments can also be removed to ensure that all piece information is provided, but always exactly once.
  • a disadvantage of the known method is the fact that the singular value decomposition (SVD) for segment class formation, that is to say for the assignment of segments to clusters, is very computationally intensive and is problematic in the evaluation of the results. Thus, if the singular values are nearly equal, then a possibly wrong decision is made that the two similar singular values actually represent the same segment class and not two different segment classes.
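As a rough sketch of how such a clustering via singular value decomposition could look (an assumption about the approach outlined above, not a reproduction of it), each segment can be assigned to the cluster whose singular vector has the largest component for that segment:

```python
import numpy as np

def svd_clustering(segment_similarity, num_clusters=2):
    """Cluster segments by a singular value decomposition of the segment similarity matrix.

    segment_similarity: (N, N) matrix of segment-to-segment similarities.
    Returns cluster labels per segment and the leading singular values (descending).
    """
    U, s, Vt = np.linalg.svd(segment_similarity)
    indicators = np.abs(U[:, :num_clusters])   # cluster indicator per segment and cluster
    labels = np.argmax(indicators, axis=1)     # cluster with the maximum indicator value
    return labels, s[:num_clusters]
```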
  • SVD: singular value decomposition
  • the EP 1577877 A1 discloses a method of automatically detecting a chorus portion, thereby solving several problems.
  • the first problem is the study of acoustic features and a similarity of a portion of the audio signal to other portions.
  • the second problem is the criterion of how high a similarity must be in order for a section to be understood as a repetition. This criterion depends on the audio piece itself.
  • the third problem is the determination of both ends, ie the beginning and the end of repeated sections, and the fourth problem is the detection of a modulated repetition. For this purpose, a time-delay diagram is first created by generating the similarity between a 12-dimensional chroma vector of a section with a corresponding vector of each preceding section.
  • A threshold value is determined to detect line-segment candidate peaks, with only peaks in R_all(t, l) above the threshold being selected.
  • the threshold is set to determine an intermediate class distribution, the threshold depending on both the number of peaks in each class, the total number of peaks, and the average of peak heights in each class.
  • Integrated repeated sections are determined, taking into account whether corresponding similarity line segments exist at previous lag positions with respect to the lag position of the repeated section.
  • the object of the present invention is to provide an improved and at the same time efficient concept for grouping temporal segments of a piece of music.
  • The present invention is based on the recognition that the assignment of a segment to a segment class is to be performed on the basis of an adaptive similarity mean value for that segment, such that the similarity mean value takes into account which overall similarity score the segment has within the piece as a whole.
  • For a segment that occurs only once in the piece, the similarity mean value will be lower than for a segment that is a verse or a chorus.
  • The concept according to the invention is therefore suitable not only for pieces of music which consist solely of verses and choruses, i.e. in which the segments belonging to a segment class have roughly the same similarity values, but also for pieces that have other parts in addition to verse and chorus, namely an introduction (intro), an interlude (bridge) or a conclusion (outro).
  • the calculation of the adaptive similarity mean and the assignment of a segment are performed iteratively, ignoring assigned segments on the next iteration run.
  • The absolute similarity value, that is to say the sum of the similarity values in a column of the similarity matrix, changes again for the next iteration run, since already assigned segments have been set to 0.
  • In addition, a segmentation post-correction is performed: after the segmentation, e.g. based on the novelty value (the local maxima of the novelty curve), and after the subsequent assignment to segment classes, relatively short segments are examined to see whether they can be assigned to the predecessor segment or the successor segment, since segments below a minimum segment length are likely to indicate over-segmentation.
  • a labeling is performed using a special selection algorithm to obtain the most correct labeling of the segment classes as a stanza or chorus.
  • Fig. 1 shows a device for grouping temporal segments of a piece of music, which is divided into main parts repeatedly occurring in the piece of music, into different segment classes, one segment class being assigned to a main part.
  • the present invention thus relates particularly to pieces of music which are subject to a certain structure in which similar sections appear several times and alternate with other sections.
  • Most rock and pop songs have a clear structure with respect to their main parts.
  • The literature deals with the topic of musical analysis mainly on the basis of classical music, but much of it also applies to rock and pop music.
  • The main parts of a piece of music are also called large-scale formal parts.
  • A large-scale formal part of a piece is understood to mean a section which, with respect to various features, e.g. melody, rhythm, texture, etc., has a relatively uniform character. This definition applies generally in music theory.
  • A typical sequence is, e.g., ABABCDAB, where A stands for the verse, B for the chorus, C for the bridge and D for the solo. Often a piece of music is introduced with a prelude. Intros often consist of the same chord progression as the verse, but with different instrumentation, e.g. without drums, without bass, or without distortion of the guitar in rock songs, etc.
  • the device according to the invention initially comprises a device 10 for providing a similarity representation for the segments, wherein the similarity representation for each segment has an associated plurality of similarity values, the similarity values indicating how similar the segment is to each other segment.
  • The similarity representation is preferably the segment similarity matrix shown in Fig. 7. It has a separate column for each segment (segments 1-10 in Fig. 7), with the column index "j". Further, the similarity representation has a separate row for each segment, with each row labeled with a row index i. This will be explained below with reference to the exemplary segment 5.
  • the element (5,5) in the main diagonal of the matrix of Fig. 7 is the similarity value of the segment 5 with itself, ie the maximum similarity value.
  • Segment 5 is moderately similar to segment no. 6, as designated by the element (6,5) or the element (5,6) of the matrix in Fig. 7. Moreover, segment 5 is still somewhat similar to segments 2 and 3, as represented by the elements (2,5) and (3,5), or (5,2) and (5,3), in Fig. 7. To the other segments 1, 4, 7, 8, 9 and 10, segment no. 5 has a similarity that is so low that it is no longer visible in Fig. 7.
  • A plurality of similarity values associated with a segment is, for example, a column or a row of the segment similarity matrix in Fig. 7, where the column or row indicates by its index which segment it refers to, for example the fifth segment, and comprises the similarities of the fifth segment to each other segment in the piece.
  • The plurality of similarity values is thus, for example, a row of the similarity matrix or, alternatively, a column of the similarity matrix of Fig. 7.
  • The device for grouping temporal segments of the piece of music further comprises means 12 for calculating a similarity mean value for a segment, using the plurality of similarity values associated with that segment.
  • The means 12 is designed to calculate a similarity mean value, e.g. for column 5 in Fig. 7.
  • means 12 will add the similarity values in the column and divide by the number of segments in total. In order to eliminate self-similarity, the similarity of the segment to itself could also be deducted from the result of the addition, whereby, of course, a division should no longer be performed by all elements, but by all elements less 1.
  • The means 12 for calculating could also compute a quadratic mean (root mean square), that is, square each similarity value of a column separately, sum the squared results, and then take the root of the summation result divided by the number of elements in the column (or by the number of elements in the column minus one).
  • Any other mean values, such as the median value, etc., are usable as long as the mean value for each column of the similarity matrix is adaptively calculated, that is, a value calculated using the similarity values of the plurality of similarity values associated with the segment.
  • The adaptively calculated similarity threshold is then provided to a means 14 for assigning a segment to a segment class.
  • The means 14 for assigning is arranged to assign a segment to a segment class if the similarity value of the segment satisfies a predetermined condition with respect to the similarity mean value. For example, if the similarity values are such that a larger value indicates a greater similarity and a smaller value indicates a lower similarity, the predetermined condition will be that the similarity value of a segment must be equal to or above the similarity mean value in order for the segment to be assigned to a segment class.
  • In a preferred embodiment of the present invention, there are further devices which realize specific embodiments and will be discussed later. These devices are a segment selection device 16, a segment assignment conflict device 18, a segmentation correction device 20 and a segment class designation device 22.
  • The segment selector 16 will first calculate the value V(j) for each segment j, in order then to find the element of the vector V having the maximum value. In other words, the column in Fig. 7 is chosen which achieves the greatest value or score when the individual similarity values in the column are added up.
  • This segment could be segment no. 5, i.e. column 5 of the matrix in Fig. 7, since this segment has at least a certain similarity with three other segments.
  • Another candidate in the example of Fig. 7 could also be the segment with the number 7, since this segment likewise has a certain similarity to three other segments, which is even greater than the similarity of segment 5 to segments 2 and 3 (stronger shade of gray in Fig. 7).
  • V (7) is the component of the vector V which has the maximum value among all the components of V.
  • In the segment similarity matrix it is checked, for the seventh row or column, which segment similarities are above the calculated threshold, i.e. with which segments the seventh segment has an above-average similarity. All these segments are now assigned, like the seventh segment, to a first segment class.
  • Segment no. 4 and segment no. 1 are classified in the first segment class in addition to segment no. 7.
  • Segment no. 10 is not classified in the first segment class due to its below-average similarity to segment no. 7.
  • the corresponding vector elements V (j) of all segments which have been assigned to a cluster in this threshold value analysis are set to 0.
  • These are, besides V(7), also the components V(4) and V(1). This immediately means that the 7th, 4th and 1st columns of the matrix will no longer be available for a later maximum search, since they are zero and can therefore not become a maximum.
  • A new maximum is now searched for among the remaining elements of V, that is to say V(2), V(3), V(5), V(6), V(8), V(9) and V(10).
  • the segment no. 5, ie V (5), is expected to yield the largest similarity score.
  • The second segment class then obtains segments 5 and 6. Due to the fact that the similarities to segments 2 and 3 are below average, segments 2 and 3 are not placed in this second segment class.
  • In the vector V, the elements V(5) and V(6) are set to 0 due to the assignment made, while the components V(2), V(3), V(8), V(9) and V(10) of the vector remain for the selection of the third segment class.
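The iterative grouping just walked through (column sums V(j), maximum search, adaptive column mean as threshold, zeroing of assigned columns) can be condensed into the following sketch. It is a simplified illustration of the described procedure; conflict handling and tendencies are omitted, and all names are chosen freely:

```python
import numpy as np

def group_segments(S_seg):
    """Group segments into segment classes using an adaptive similarity mean value.

    S_seg: (N, N) segment similarity matrix, larger value = more similar.
    Returns a list of segment classes, each a list of segment indices.
    """
    N = S_seg.shape[0]
    S = S_seg.copy()
    assigned = np.zeros(N, dtype=bool)
    classes = []
    while not assigned.all():
        V = S.sum(axis=0)                    # overall similarity score per segment
        V[assigned] = 0
        i = int(np.argmax(V))                # segment with the maximum overall similarity
        if V[i] <= 0:                        # remaining segments have no similarity left
            classes.extend([[k] for k in range(N) if not assigned[k]])
            break
        column = S[:, i]
        threshold = column[~assigned].mean() # adaptive similarity mean for segment i
        members = [k for k in range(N) if not assigned[k] and column[k] >= threshold]
        if i not in members:
            members.append(i)
        classes.append(sorted(members))
        for k in members:                    # assigned segments are ignored in later passes
            assigned[k] = True
            S[:, k] = 0
            S[k, :] = 0
    return classes
```

Applied to the example of Fig. 7, this corresponds to first forming the class around segment 7 (with segments 1 and 4) and then the class around segment 5 (with segment 6), as described above.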
  • A simple kind of resolution could be simply not to assign segment 7 to the third segment class and, e.g., to assign segment 4 instead, if a conflict did not also exist for segment 4.
  • Instead, the similarity between segments 7 and 10 is taken into account in the following algorithm.
  • The invention is designed not to disregard the similarity between i and k. Therefore, the similarity value S_s(i, k) of segments i and k is compared with the similarity value S_s(i*, k), where i* is the first segment assigned to the cluster C*.
  • the cluster or the segment class C * is the cluster to which segment k is already assigned on the basis of a previous examination.
  • The similarity value S_s(i*, k) is decisive for the fact that the segment k belongs to the cluster C*. If S_s(i*, k) is greater than S_s(i, k), segment k remains in cluster C*.
  • If, however, S_s(i*, k) is smaller than S_s(i, k), the segment k is taken out of the cluster C* and assigned to the cluster C.
  • If the segment k remains in the cluster C*, a tendency towards the cluster of segment i is noted for it.
  • A tendency is also noted when segment k changes its cluster membership.
  • In that case, a tendency of this segment towards the cluster in which it was originally recorded is noted.
  • the similarity value check will be in favor of the first segment class due to the fact that the segment 7 is the "source segment” in the first segment class.
  • The segment 7 thus retains its cluster affiliation (segment class membership) and remains in the first segment class.
  • This fact is taken into account in that segment no. 10 in the third segment class is noted as having a tendency towards the first segment class.
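The conflict handling described above can be sketched as follows: when a segment k would be claimed both by its current class C* (with source segment i*) and by a new class C (with source segment i), the two direct similarity values decide, and a tendency towards the class the segment did not end up in is noted. The sketch below is a free illustration of that rule with assumed data structures:

```python
def resolve_conflict(k, i, i_star, S_seg, membership, tendency):
    """Decide whether segment k stays in its current class C* or moves to the new class C.

    k        -- index of the conflicting segment
    i        -- source segment of the new candidate class C
    i_star   -- source segment of the class C* that k currently belongs to
    S_seg    -- segment similarity matrix
    membership, tendency -- dicts mapping segment index -> class id
    """
    if S_seg[i_star][k] >= S_seg[i][k]:
        # k remains in C*; a tendency towards the new class C is noted
        tendency[k] = membership[i]
    else:
        # k is taken out of C* and assigned to C; a tendency towards C* is noted
        tendency[k] = membership[i_star]
        membership[k] = membership[i]
    return membership, tendency
```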
  • an over-segmentation of a piece often occurs, ie too many segment boundaries or generally too short segments are calculated.
  • An over-segmentation, e.g. caused by an incorrect subdivision of a verse, is corrected according to the invention on the basis of the segment length and of the information into which segment class a predecessor or successor segment has been sorted.
  • The correction serves to merge segments that are clearly too short completely with adjacent segments, and to subject segments that are short, but not too short, i.e. shorter than a certain length but longer than the minimum length, to a special examination of whether they might not be merged with a predecessor segment or a successor segment after all.
  • Only relatively short segments, shorter than 11 seconds, are examined at all; later, even shorter segments, shorter than 9 seconds (a second threshold smaller than the first), are examined, and finally remaining segments shorter than 6 seconds (a third threshold smaller than the second) are treated in yet another way.
  • The segment length check in block 31 is initially directed to finding the segments shorter than 11 seconds. For segments longer than 11 seconds, no post-processing is done, as indicated by the "No" branch at block 31. For segments shorter than 11 seconds, a tendency check (block 32) is first performed. Thus, it is first examined whether a segment has an associated tendency due to the functionality of the segment assignment conflict device 18 of Fig. 1. In the example of Fig. 7, this would be segment 10, which has a tendency towards segment 7, i.e. towards the first segment class. If the tenth segment is shorter than 11 seconds, the procedure shown in Fig. ... is applied.
  • segment no. 10 is the only segment in the third segment class. If it was shorter than 9 seconds, it will automatically be assigned to the segment class to which segment no. 9 belongs. This automatically leads to a fusion of the segment 10 with the segment 9. If the segment 10 is longer than 9 seconds, then this merger is not performed.
  • In a block 33c, an examination is then made of segments shorter than 9 seconds which are not the only segment in a corresponding cluster X, i.e. in a corresponding segment group. These undergo closer scrutiny in order to establish a regularity in the clustering.
  • A novelty value check is performed by resorting to the novelty curve, which is shown in Fig. 9.
  • The novelty curve resulting from the kernel correlation is read out at the locations of the affected segment boundaries, and the maximum of these values is determined. If the maximum occurs at the beginning of a segment, the too-short segment is assigned to the cluster of the successor segment. If the maximum occurs at a segment end, the too-short segment is assigned to the cluster of the predecessor segment.
  • In Fig. 9, the segment labeled 90 is assumed to be a segment that is shorter than 9 seconds.
  • The novelty check at the beginning of segment 90 would yield a higher novelty value 91 than at the end of the segment, where the novelty value at the end of the segment is labeled 92. This would mean that segment 90 is assigned to the successor segment, since the novelty value towards the successor segment is lower than the novelty value towards the predecessor segment.
  • This procedure according to the invention has the advantage that no parts of the piece are eliminated, i.e. the segments that are too short are not simply discarded by setting them to zero; instead, the complete piece of music is still represented by the entirety of the segments. The segmentation therefore causes no loss of information, which would, however, occur if, e.g. as a reaction to over-segmentation, all too-short segments were simply eliminated "regardless of losses".
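A compact sketch of this post-correction of too-short segments, following the steps described above (length thresholds, tendency check, novelty check at the segment boundaries), could look like the following; the thresholds of 11 and 9 seconds are taken from the text, everything else, including the data structures, is an assumption made for illustration:

```python
def correct_short_segment(seg, prev_seg, next_seg, novelty, labels, tendency,
                          frames_per_second=20.0):
    """Decide with which neighbour a too-short segment should be merged.

    seg, prev_seg, next_seg -- dicts with 'id', 'start' and 'end', the boundaries being
                               indices into the (smoothed) novelty curve
    novelty  -- smoothed novelty curve
    labels   -- dict: segment id -> segment class
    tendency -- dict: segment id -> segment class the segment tends towards
    Returns 'prev', 'next' or None (no merge).
    """
    length_s = (seg['end'] - seg['start']) / frames_per_second
    if length_s >= 11.0:
        return None                               # long enough, no post-processing
    # 1) tendency check: merge towards the class the segment tends to
    if seg['id'] in tendency:
        if tendency[seg['id']] == labels[prev_seg['id']]:
            return 'prev'
        if tendency[seg['id']] == labels[next_seg['id']]:
            return 'next'
    # 2) novelty check for segments shorter than 9 s: merge across the weaker boundary
    if length_s < 9.0:
        n_start, n_end = novelty[seg['start']], novelty[seg['end']]
        return 'next' if n_start >= n_end else 'prev'
    return None
```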
  • In Fig. 4a and Fig. 4b, a preferred implementation of the segment class designator 22 of Fig. 1 is shown.
  • two clusters are assigned the labels "stanza" and "refrain” during labeling.
  • a largest singular value of a singular value decomposition and the associated cluster are used as a refrain and the cluster for the second largest singular value as a stanza.
  • each song starts with a stanza, so that the cluster with the first segment is the stanza cluster and the other cluster is the refrain cluster.
  • the cluster in the candidate selection having the last segment is called a refrain, and the other cluster is called a stanza.
  • the last segment may actually be the last segment in the song, or a segment later in the song than any segment of the other segment class. If this segment is not the actual last segment in the song, this means that there is still an outro.
  • If the last segment belongs to the first segment group, then all segments of that first (most significant) segment class are labeled as chorus, as shown by a block 41 in Fig. 4b.
  • All segments of the other segment class in the selection are then marked as "verse", since typically, of the two candidate segment classes, one class will contain the choruses and thus immediately the other class will contain the verses.
  • If the examination in block 40, which determines which segment class in the selection has the last segment in the course of the piece, reveals that this is the second, i.e. rather lower-ranked, segment class, it is examined in block 42 whether the second segment class contains the first segment in the piece of music. This examination is based on the knowledge that the likelihood is very high that a song starts with a verse, not a chorus.
  • If the query in block 42 is answered with "no", the second segment class is labeled as chorus and the first segment class as verse, as indicated in a block 43.
  • If the query in block 42 is answered with "yes", then, contrary to the rule, the second segment group is labeled as verse and the first segment group as chorus, as indicated in a block 44.
  • The designation in block 44 is made because the probability that the second segment class corresponds to the chorus is then already quite low. If the improbable case that a piece of music is introduced with a chorus comes into play, there are some indications of an error in the clustering, e.g. that the last considered segment was erroneously assigned to the second segment class.
  • In Fig. 4b it was shown how the verse/chorus determination is performed on the basis of two available segment classes. After this verse/chorus determination, the remaining segment classes may then be designated in a block 45, where an outro will possibly be the segment class having the last segment of the piece, while an intro will be the segment class comprising the first segment of the piece.
  • Fig. 4a shows how the two segment classes are determined that represent the candidates for the algorithm given in Fig. 4b.
  • an assignment of the label "stanza” and "refrain” is performed in the labeling, whereby one segment group is marked as a stanza segment group, while the other segment group is marked as a refrain segment group.
  • This concept is based on the assumption (A1) that the two clusters (segment groups) with the highest similarity values, i.e. cluster 1 and cluster 2, correspond to the chorus and verse clusters. The later of these two clusters is the chorus cluster, based on the assumption that a chorus follows a verse.
  • For cluster 1, this holds in most cases: cluster 1 corresponds to the chorus.
  • For cluster 2, however, the assumption (A1) is often not fulfilled.
  • This situation usually occurs when there is either a third, frequently repeated part in the piece, e.g. a bridge, or a high similarity between intro and outro, or in the not uncommon case that a segment in the piece has a high similarity to the chorus, and thus also a high overall similarity, but the similarity to the chorus is just not large enough for it to belong to cluster 1.
  • In a step 46, the cluster or segment group with the highest similarity value (the value of the component of V which was the maximum for the segment class determined first, i.e. segment 7 in the example of Fig. 7), that is, the segment group identified as the first candidate in the first pass of Fig. 1, is included in the verse/chorus selection.
  • If the second-highest segment class has, e.g., at least three segments, or two segments one of which lies within the piece and not at the "edge" of the piece, then this second segment class initially remains in the selection and is henceforth referred to as "SecondCluster".
  • SecondCluster still has to measure itself against a third segment class (48b), referred to as "ThirdCluster", in order to ultimately survive the selection process as a candidate.
  • The segment class "ThirdCluster" corresponds to the cluster which occurs most frequently in the entire song but corresponds neither to the highest segment class (cluster 1) nor to the segment class "SecondCluster", i.e., so to speak, the next most frequently (often equally frequently) occurring cluster after cluster 1 and "SecondCluster".
  • The first examination in block 49a is whether each segment of ThirdCluster has a certain minimum length; as the threshold, e.g. 4% of the total song length is preferred. Other values between 2% and 10% can also lead to meaningful results.
  • In a block 49b it is then examined whether ThirdCluster accounts for a greater total share of the song than SecondCluster. To do this, the total duration of all segments in ThirdCluster is added up and compared with the corresponding total of all segments in SecondCluster; ThirdCluster has a larger overall song share than SecondCluster if the accumulated segments in ThirdCluster are longer than the accumulated SecondCluster segments.
  • If the conditions are met, ThirdCluster enters the verse/chorus selection. If, on the other hand, at least one of these conditions is not met, ThirdCluster does not get into the verse/chorus selection. Instead, SecondCluster enters the verse/chorus selection, as shown by a block 50 in Fig. 4a. This completes the candidate search for the verse/chorus selection, and the algorithm of Fig. 4b is carried out, which finally determines which segment class comprises the verses and which segment class comprises the choruses.
  • The three conditions in blocks 49a, 49b, 49c could alternatively also be weighted, so that, e.g., a "no" answer in block 49a is "overruled" if both the query in block 49b and the query in block 49c are answered with "yes".
  • One of the three conditions could also be emphasized, so that, e.g., only the regularity of the sequence between the third segment class and the first segment class is examined, while the queries in blocks 49a and 49b are not performed, or are performed only if the query in block 49c is answered with "no" but, e.g., a relatively large total share in block 49b and relatively large minimum lengths in block 49a are found.
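The candidate selection of Fig. 4a and the final verse/chorus decision of Fig. 4b (blocks 40 to 44 in the description above) can be condensed into the following sketch; the data representation and function name are assumptions for illustration:

```python
def label_verse_chorus(first_class, second_class):
    """Label two candidate segment classes as 'chorus' and 'verse'.

    first_class, second_class -- lists of segment indices in temporal order,
                                 first_class being the class with the highest score.
    Returns a dict mapping 'first'/'second' to a label.
    """
    last_segment = max(max(first_class), max(second_class))
    if last_segment in first_class:
        # block 40/41: the candidate class containing the last segment is the chorus
        return {'first': 'chorus', 'second': 'verse'}
    if 0 not in second_class:
        # block 42/43: the second class has the last segment but not the first segment
        return {'second': 'chorus', 'first': 'verse'}
    # block 42/44: a piece rarely starts with a chorus, so swap contrary to the rule
    return {'second': 'verse', 'first': 'chorus'}

# example: first class = segments 1, 4, 7; second class = segments 0, 2, 5, 8
print(label_verse_chorus([1, 4, 7], [0, 2, 5, 8]))  # -> {'second': 'verse', 'first': 'chorus'} (block 44)
```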
  • The chorus option consists of choosing one version of the chorus as the summary. An attempt is made to choose a chorus version that lasts between 20 and 30 seconds if possible. If a segment of such a length is not contained in the chorus cluster, the version is chosen whose length deviates as little as possible from 25 seconds. If the selected chorus is longer than 30 seconds, it is faded out after 30 seconds in this embodiment, and if it is shorter than 20 seconds, it is extended to 30 seconds with the following segment.
  • The second option, the creation of a medley, corresponds more to an actual summary of a piece of music.
  • the third segment is selected from a cluster that has the largest total portion of the song and is not a verse or chorus.
  • the selected segments are not installed in their full length in the medley.
  • the length is preferably set to a fixed 10 seconds per segment, so that a total of 30 seconds is created again.
  • alternative values are also readily feasible.
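The two summary options just described (a single chorus version of roughly 20 to 30 seconds, or a medley of fixed-length excerpts from verse, chorus and the largest remaining class) could be sketched as follows; the exact cutting rules and names are assumptions:

```python
def choose_chorus_summary(chorus_segments):
    """Pick one chorus version between 20 and 30 s, else the one closest to 25 s.

    chorus_segments -- list of (start, end) tuples in seconds.
    """
    in_range = [s for s in chorus_segments if 20.0 <= s[1] - s[0] <= 30.0]
    if in_range:
        return in_range[0]
    return min(chorus_segments, key=lambda s: abs((s[1] - s[0]) - 25.0))

def build_medley(verse, chorus, third, excerpt_len=10.0):
    """Concatenate excerpts of a verse, a chorus and a segment of the largest other class."""
    return [(start, min(end, start + excerpt_len)) for (start, end) in (verse, chorus, third)]
```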
  • a grouping of a plurality of feature vectors is performed in block 510 by forming an average over the grouped feature vectors.
  • In the calculation of the similarity matrix, the grouping can save computing time.
  • A distance is determined between all possible combinations of two feature vectors. For n vectors over the entire piece, this results in n x n calculations.
  • a grouping factor g indicates how many consecutive feature vectors are grouped into a vector by averaging. This can reduce the number of calculations.
  • the grouping is also a kind of noise suppression in which small changes in the feature expression of successive vectors are compensated on average. This property has a positive effect on finding large song structures.
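The computational saving can be illustrated briefly: with a grouping factor g, every g consecutive feature vectors are averaged into one vector, so that instead of n x n distance computations only about (n/g) x (n/g) remain; for example, with n = 6000 windows and g = 10, 360,000 instead of 36,000,000 pairwise computations are needed (these concrete numbers are an illustrative example, not values from the patent). A minimal sketch of such a grouping:

```python
import numpy as np

def group_features(features, g=10):
    """Average every g consecutive feature vectors (grouping factor g).

    features: (n, F) feature matrix; returns an (n // g, F) matrix.
    The averaging also compensates small frame-to-frame variations (noise suppression).
    """
    n, F = features.shape
    usable = (n // g) * g                       # drop the incomplete last group, if any
    return features[:usable].reshape(-1, g, F).mean(axis=1)
```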
  • The concept according to the invention makes it possible to navigate through the calculated segments by means of a special music player and to select individual segments in a targeted manner, so that a consumer in a music shop can, for example by pressing a certain key or activating a certain software command, jump directly to the chorus to see whether it pleases him, and then perhaps listen to a verse, so that he can finally make a purchase decision.
  • This makes it easy for a prospective buyer to hear exactly those parts of a piece he is particularly interested in, while saving, e.g., the solo or the bridge for the listening pleasure at home.
  • The concept according to the invention is also of great advantage for a music store, since customers can listen in quickly and in a targeted manner and therefore also decide to buy quickly, so that other customers do not have to wait long before it is their turn to listen in. This is because a user does not have to fast-forward and rewind constantly, but quickly gets exactly the information about the piece that he would like to have.
  • the present invention is also applicable in other application scenarios, for example in advertising monitoring, ie where an advertiser wants to check whether the audio piece for which he has bought advertising time, has actually been played over the entire length.
  • An audio piece may include, for example, music segments, speaker segments, and noise segments.
  • The segmentation algorithm, i.e. the segmentation and subsequent classification into segment groups, then makes it possible to carry out a check that is quick and considerably less complicated than a complete sample-by-sample comparison.
  • The efficient check would simply consist of segment class statistics, that is, a comparison of how many segment classes have been found and how many segments are in each segment class, with a reference given for the ideal advertising piece.
  • the present invention is further advantageous in that it can be used for searching in large music databases, for example, to listen only to the choruses of many pieces of music in order to then perform a music program selection.
  • Only individual segments from the segment class labeled "chorus" of many different pieces would then be selected and provided by a program provider.
  • These can also easily be provided by taking one or more segments (if present) from the segment class labeled "solo" from a large number of pieces of music and, e.g., assembling and providing them as a file.
  • The inventive concept can easily be automated, since it requires no user intervention at any point. This means that users of the inventive concept by no means require special training beyond, e.g., a common skill in dealing with normal software user interfaces.
  • the inventive concept can be implemented in hardware or in software.
  • the implementation may be on a digital storage medium, in particular a floppy disk or CD with electronically readable control signals, which may interact with a programmable computer system such that the corresponding method is executed.
  • the invention thus also consists in a computer program product with a program code stored on a machine-readable carrier for carrying out the method according to the invention, when the computer program product runs on a computer.
  • the invention thus represents a computer program with a program code for carrying out the method when the computer program runs on a computer.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Claims (21)

  1. Apparatus for grouping temporal segments of an audio piece, which is divided into main parts that recur repeatedly in the audio piece, into different segment classes, a segment class being associated with a main part, the apparatus comprising:
    a means (10) for providing a similarity representation for the segments, the similarity representation comprising, for each segment, an associated plurality of similarity values, the associated plurality of similarity values indicating how similar the segment is to each other segment of the audio piece,
    a means (12) for calculating a similarity threshold value for a specific segment exclusively using the plurality of similarity values associated with the segments, and
    a means (14) for assigning a segment to a segment class if the similarity value of the segment meets a predetermined condition with respect to the similarity threshold value.
  2. Apparatus according to claim 1, further comprising:
    a segment selection means (16) for determining an extreme segment whose associated plurality of similarity values add up to an extremum,
    the calculating means (12) being configured to calculate the similarity threshold value for the extreme segment, and
    the assigning means (14) being configured to characterize the segment class by an indication of the extreme segment.
  3. Apparatus according to claim 1 or 2, wherein the assigning means (14) is configured not to assign to the segment class a segment that does not meet the predetermined condition with respect to the similarity threshold value, but to assign it to another segment class,
    the assigning means (14) being configured to no longer take into account, for an assigned segment, the similarity value of the assigned segment once it has been assigned to another segment class.
  4. Apparatus according to one of the preceding claims, wherein the means (12) for calculating the similarity threshold value is configured to, after a previous assignment to a segment class, ignore in a subsequent pass, among the plurality of similarity values, the similarity values of previously assigned segments,
    the assigning means (14) being configured to perform, in a subsequent pass, the assignment to a segment class other than the segment class of a previous pass.
  5. Apparatus according to one of the preceding claims, further comprising:
    a segment assignment conflict means (18) configured to, in case the assigning means (14) would assign a conflicting segment to two different segment classes, determine a first similarity value of the conflicting segment with respect to a segment of a first segment class and determine a second similarity value of the conflicting segment with respect to a segment of a second segment class, and
    the assigning means (14) being configured to, in case the second similarity value indicates a stronger similarity of the conflicting segment to the segment of the second segment class, remove the conflicting segment from the first segment class and assign it to the second segment class.
  6. Apparatus according to claim 5, wherein the segment assignment conflict means (18) is configured to assign to the segment a tendency towards the first segment class in case the segment is removed from the first segment class, or to assign to the segment a tendency towards the second segment class in case the segment has not been removed.
  7. Apparatus according to one of the preceding claims, further comprising:
    a segmentation correction means (20) configured to correct the segmentation of the audio piece, the segmentation correction means (20) being configured to merge segments with a preceding segment or a succeeding segment depending on classification information of the segments.
  8. Apparatus according to claim 7, wherein the segmentation correction means (20) is configured to check, for a segment shorter than a predetermined minimum length, whether it has a tendency towards a segment class to which an immediately preceding segment belongs and, in that case, to merge the segment with the immediately preceding segment, or which is configured to check, for a segment shorter than a predetermined minimum length, whether the segment has a tendency towards a segment class to which an immediately succeeding segment belongs and, in that case, to merge the segment with the immediately succeeding segment.
  9. Apparatus according to one of the preceding claims, comprising a segmentation correction means (20) configured to merge successive segments that belong to the same segment class.
  10. Apparatus according to one of claims 7 to 9, wherein the segmentation correction means (20) is configured to select for segment correction only those segments whose temporal length is shorter than a predetermined minimum length.
  11. Apparatus according to claim 10, wherein the segmentation correction means (20) is configured to merge a selected segment, which comes from a second segment class and whose preceding segment and succeeding segment belong to a first segment class, with the preceding segment and the succeeding segment.
  12. Apparatus according to claim 10 or 11, wherein the segmentation correction means (20) is configured to merge with the preceding segment or the succeeding segment a segment that is in a segment class containing only a single segment.
  13. Apparatus according to claim 10, 11 or 12, wherein the segmentation correction means (20) is configured to merge several selected segments located in the same segment class with a respective preceding segment or a respective succeeding segment, when all the selected segments of the segment class have preceding segments of one and the same segment class or succeeding segments of one and the same segment class.
  14. Apparatus according to one of claims 7 to 13, wherein the segmentation correction means is configured to determine a first novelty value at the beginning of a segment whose temporal length is shorter than a predetermined minimum length, to determine a second novelty value at an end of the segment, and to merge the segment with a succeeding segment when the first novelty value is greater than the second novelty value, or to merge the segment with a preceding segment when the first novelty value is smaller than the second novelty value.
  15. Apparatus according to one of claims 7 to 14, wherein the segmentation correction means (20) is configured to carry out different correction measures depending on different predetermined segment lengths.
  16. Apparatus according to one of the preceding claims, further comprising a segment class designation means configured to carry out a designation of segment classes as different main parts depending on a temporal position of segments of different segment classes.
  17. Apparatus according to claim 16, wherein the segment class designation means (22) is configured to select two segment class candidates, taking into account the segments in the segment classes, prior to a designation of the segment class as a "verse" main part and as a "chorus" main part.
  18. Apparatus according to claim 16 or 17, wherein the segment class designation means (22) is configured to designate a segment class candidate as the chorus class when the segment class candidate comprises the segment that occurs in the audio piece after all the other segments of the other segment class candidate.
  19. Apparatus according to one of claims 16 to 18, wherein the segment class designation means (22) is configured to designate a segment class candidate as the verse class when the segment class candidate does not comprise the segment that occurs in the audio piece after all the other segments of the other segment class candidate.
  20. Method for grouping temporal segments of an audio piece, which is divided into main parts that recur repeatedly in the audio piece, into different segment classes, a segment class being associated with a main part, the method comprising the following steps:
    providing (10) a similarity representation for the segments, the similarity representation comprising, for each segment, an associated plurality of similarity values, the associated plurality of similarity values indicating how similar the segment is to each other segment of the audio piece,
    calculating (12) a similarity threshold value for a specific segment exclusively using the plurality of similarity values associated with the segments, and
    assigning (14) a segment to a segment class if the similarity value of the segment meets a predetermined condition with respect to the similarity threshold value.
  21. Computer program having a program code for carrying out the method according to claim 20 when the computer program is executed on a computer.
EP05760763.2A 2004-09-28 2005-07-15 Dispositif et procede pour regrouper des segments temporels d'un morceau de musique Not-in-force EP1794743B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102004047068A DE102004047068A1 (de) 2004-09-28 2004-09-28 Vorrichtung und Verfahren zum Gruppieren von zeitlichen Segmenten eines Musikstücks
PCT/EP2005/007751 WO2006034743A1 (fr) 2004-09-28 2005-07-15 Dispositif et procede pour regrouper des segments temporels d'un morceau de musique

Publications (2)

Publication Number Publication Date
EP1794743A1 EP1794743A1 (fr) 2007-06-13
EP1794743B1 true EP1794743B1 (fr) 2013-04-24

Family

ID=35005745

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05760763.2A Not-in-force EP1794743B1 (fr) 2004-09-28 2005-07-15 Dispositif et procede pour regrouper des segments temporels d'un morceau de musique

Country Status (4)

Country Link
EP (1) EP1794743B1 (fr)
JP (1) JP4775380B2 (fr)
DE (1) DE102004047068A1 (fr)
WO (1) WO2006034743A1 (fr)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4948118B2 (ja) 2005-10-25 2012-06-06 ソニー株式会社 情報処理装置、情報処理方法、およびプログラム
JP4465626B2 (ja) 2005-11-08 2010-05-19 ソニー株式会社 情報処理装置および方法、並びにプログラム
JP4906565B2 (ja) * 2007-04-06 2012-03-28 アルパイン株式会社 メロディー推定方法及びメロディー推定装置
JP5083951B2 (ja) * 2007-07-13 2012-11-28 学校法人早稲田大学 音声処理装置およびプログラム
EP2180463A1 (fr) * 2008-10-22 2010-04-28 Stefan M. Oertl Procédé destiné à la reconnaissance de motifs de notes dans des morceaux de musique
WO2016152132A1 (fr) * 2015-03-25 2016-09-29 日本電気株式会社 Dispositif de traitement vocal, procédé de traitement vocal et support d'enregistrement
US10629173B2 (en) 2016-03-30 2020-04-21 Pioneer DJ Coporation Musical piece development analysis device, musical piece development analysis method and musical piece development analysis program
WO2017195292A1 (fr) 2016-05-11 2017-11-16 Pioneer DJ株式会社 Dispositif, structure et programme d'analyse de structure musicale
CN109979418B (zh) * 2019-03-06 2022-11-29 腾讯音乐娱乐科技(深圳)有限公司 音频处理方法、装置、电子设备及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5918223A (en) * 1996-07-22 1999-06-29 Muscle Fish Method and article of manufacture for content-based analysis, storage, retrieval, and segmentation of audio information
US6542869B1 (en) * 2000-05-11 2003-04-01 Fuji Xerox Co., Ltd. Method for automatic analysis of audio including music and speech
AUPS270902A0 (en) * 2002-05-31 2002-06-20 Canon Kabushiki Kaisha Robust detection and classification of objects in audio using limited training data
JP4243682B2 (ja) * 2002-10-24 2009-03-25 独立行政法人産業技術総合研究所 音楽音響データ中のサビ区間を検出する方法及び装置並びに該方法を実行するためのプログラム
EP1577877B1 (fr) * 2002-10-24 2012-05-02 National Institute of Advanced Industrial Science and Technology Dispositif et procede de reproduction de composition musicale et procede de detection d'une section de motif representatif dans des donnees de composition musicale
JP4203308B2 (ja) * 2002-12-04 2008-12-24 パイオニア株式会社 楽曲構造検出装置及び方法
JP4079260B2 (ja) * 2002-12-24 2008-04-23 独立行政法人科学技術振興機構 楽曲ミキシング装置、方法およびプログラム

Also Published As

Publication number Publication date
WO2006034743A1 (fr) 2006-04-06
DE102004047068A1 (de) 2006-04-06
JP4775380B2 (ja) 2011-09-21
EP1794743A1 (fr) 2007-06-13
JP2008515012A (ja) 2008-05-08

Similar Documents

Publication Publication Date Title
EP1794745B1 (fr) Device and method for changing the segmentation of an audio piece
EP1774527B1 (fr) Device and method for designating different segment classes
EP1794743B1 (fr) Device and method for grouping temporal segments of a piece of music
EP1523719B1 (fr) System and method for characterizing an information signal
EP1407446B1 (fr) Method and device for characterizing a signal and producing an indexed signal
EP1745464B1 (fr) Device and method for analyzing an information signal
DE69122017T2 (de) Method and device for signal recognition
EP2099024B1 (fr) Method for sound-object-oriented analysis and for the sound-object-oriented processing of notes in polyphonic sound recordings
EP1371055B1 (fr) Device for analyzing an audio signal with regard to its rhythm information using an autocorrelation function
EP2351017B1 (fr) Method for detecting note patterns in pieces of music
DE10058811A1 (de) Method for identifying pieces of music
WO2003007185A1 (fr) Method and device for producing a fingerprint, and method and device for identifying an audio signal
EP1388145B1 (fr) Device and method for analyzing an audio signal to obtain rhythm information
WO2006039993A1 (fr) Method and device for smoothing a melody line segment
DE102004028693A1 (de) Device and method for determining a chord type underlying a test signal
EP1377924B1 (fr) Method and device for extracting a signal identifier, method and device for creating a database from signal identifiers, and method and device for referencing a search signal
EP1671315B1 (fr) Method and device for characterizing an audio signal
WO2009013144A1 (fr) Method for determining a similarity, device therefor, and use

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070301

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: GRACENOTE, INC.

17Q First examination report despatched

Effective date: 20100729

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SONY CORPORATION

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 609032

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130515

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 502005013660

Country of ref document: DE

Effective date: 20130620

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20130424

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130826

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130824

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130725

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130804

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

BERE Be: lapsed

Owner name: SONY CORP.

Effective date: 20130731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20130724

26N No opposition filed

Effective date: 20140127

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20140331

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130731

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130731

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130724

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130731

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 502005013660

Country of ref document: DE

Effective date: 20140127

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130715

REG Reference to a national code

Ref country code: AT

Ref legal event code: MM01

Ref document number: 609032

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130715

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20140721

Year of fee payment: 10

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130715

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130424

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20050715

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130715

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 502005013660

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160202