EP1377924B1 - Method and device for extracting a signal identifier, method and device for generating an associated database, and method and device for referencing a search time signal
- Publication number
- EP1377924B1 (EP02714186A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- signal
- time
- database
- identifier
- search
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
- G10H2240/131—Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
- G10H2240/135—Library retrieval index, i.e. using an indexing scheme to efficiently retrieve a music piece
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2250/00—Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
- G10H2250/005—Algorithms for electrophonic musical instruments or musical processing, e.g. for automatic composition or resource allocation
- G10H2250/011—Genetic algorithms, i.e. using computational steps analogous to biological selection, recombination and mutation on an initial population of, e.g. sounds, pieces, melodies or loops to compose or otherwise generate, e.g. evolutionary music or sound synthesis
Definitions
- The present invention relates to the processing of time signals that have a harmonic component and, in particular, to the generation of a signal identifier for a time signal so that the time signal can be described using a database in which a plurality of signal identifiers for a plurality of time signals is stored.
- Time signals with a harmonic component are, for example, audio data.
- This may lead, for example, to the wish to acquire a CD by the artist in question.
- If the audio signal at hand contains only the time signal content but no name of the performer, the music publisher, etc., an identification of the origin of the audio signal, i.e. of who the song comes from, is not possible. The only hope then is to hear the audio piece again together with reference data regarding the author or the source from which the audio signal can be purchased, in order to obtain the desired title.
- A realistic inventory of audio files ranges from several thousand stored audio files up to hundreds of thousands of audio files.
- Music database information can be stored on a central Internet server, and search queries could be made over the Internet.
- Alternatively to central music databases, music databases on users' local hard disk systems are conceivable. It is desirable to be able to search such music databases in order to learn reference data about an audio file of which only the file itself, but no reference data, is known.
- It is also desirable to be able to search music databases using predetermined criteria, for example for similar pieces.
- Similar pieces are, for example, pieces with a similar melody, a similar set of instruments, or simply with similar noises, such as the sound of the sea, birdsong, male voices, female voices, etc.
- U.S. Patent No. 5,918,223 discloses a method and a device for content-based analysis, storage, retrieval and segmentation of audio information. This method relies on extracting several acoustic features from an audio signal. Loudness, bass, pitch, brightness and mel-frequency cepstral coefficients are measured in a time window of a certain length at periodic intervals.
- Each measurement data set consists of a sequence of measured feature vectors. Each audio file is specified by the complete set of feature sequences calculated per feature. Furthermore, the first derivatives are calculated for each sequence of feature vectors. Then statistical values such as mean and standard deviation are calculated. This set of values is stored in an N-vector, i.e. a vector with N elements.
- This procedure is applied to a multiplicity of audio files in order to derive an N-vector for each audio file. In this way a database consisting of a large number of N-vectors is gradually built up. For an unknown audio file, a search N-vector is then extracted using the same procedure. In the case of a search query, the distance between the given N-vector and the N-vectors stored in the database is determined. Finally, the N-vector that has the minimum distance to the search N-vector is output. The output N-vector is assigned data about the author, title, source of supply, etc., so an audio file can be identified with regard to its origin.
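- For illustration only, the following sketch shows how such an N-vector scheme with a minimum-distance search could look; the helper functions, the choice of statistics and the Euclidean distance are assumptions for this sketch and are not taken from U.S. Patent No. 5,918,223.

```python
import numpy as np

def n_vector(feature_frames: np.ndarray) -> np.ndarray:
    """Collapse a (frames x features) matrix into one N-vector of per-feature
    mean and standard deviation, plus the same statistics of the first
    derivatives (a sketch of the prior-art scheme, not the patented method)."""
    deriv = np.diff(feature_frames, axis=0)
    parts = [feature_frames.mean(0), feature_frames.std(0),
             deriv.mean(0), deriv.std(0)]
    return np.concatenate(parts)

def nearest(database: np.ndarray, query: np.ndarray) -> int:
    """Return the row index of the stored N-vector with minimum Euclidean
    distance to the query N-vector."""
    return int(np.argmin(np.linalg.norm(database - query, axis=1)))
```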
- This method has the disadvantage that several features are calculated and that arbitrary heuristics are introduced to calculate the parameters.
- Mean and standard deviation calculations across all feature vectors of an entire audio file reduce the information given by the time course of the feature vectors to a few feature quantities. This leads to a high loss of information.
- The object of the present invention is to create a method and a device for extracting a signal identifier from a time signal which enable a meaningful identification of a time signal without too great a loss of information.
- This object is achieved by a method for extracting a signal identifier from a time signal according to claim 1 or by a device for extracting a signal identifier from a time signal according to claim 19.
- Another object of the present invention is to create a method and an apparatus for generating a database of signal identifiers as well as a method and a device for referencing a search time signal by means of such a database.
- This object is achieved by a method for generating a database according to claim 13, a device for generating a database according to claim 20, a method for referencing a search time signal according to claim 14, or a device for referencing a search time signal according to claim 21.
- The present invention is based on the finding that, for time signals having a harmonic component, the time course of the time signal can be used to extract from the time signal a signal identifier which on the one hand is a good fingerprint for the time signal, but on the other hand is manageable in terms of its amount of data, in order to enable an efficient search among a multiplicity of signal identifiers in a database.
- The audio signal is thus characterized in that a tone, i.e. a frequency, is present at a certain point in time and that this tone, i.e. this frequency, is later followed by another tone, i.e. another frequency.
- The signal identifier or, in other words, the feature vector (MV) used to describe the time signal thus comprises a sequence of signal identifier values that reproduces the time course of the time signal more or less coarsely, depending on the embodiment.
- The time signal is therefore not characterized on the basis of its spectral properties, as in the prior art, but on the basis of the temporal sequence of frequencies in the time signal.
- To calculate a frequency value from the detected signal edges, at least two detected signal edges are needed.
- The selection of these two signal edges from all detected signal edges, on the basis of which frequency values are calculated, can be varied.
- First, two consecutive signal edges of essentially the same length can be used.
- The frequency value is then the reciprocal of the time interval between these edges.
- A selection can also be made according to the amplitude of the detected signal edges. Thus, two successive signal edges of the same amplitude can be taken to determine a frequency value. However, it does not always have to be two consecutive signal edges; for example, always the second, third, fourth, ... signal edge of the same amplitude or length can be used.
- Any two signal edges can also be taken in order to obtain the coordinate tuple using statistical methods and on the basis of the superposition laws.
- A flute sound, for example, provides two signal edges with high amplitude between which there is a wave crest with a lower amplitude.
- In this case, a selection of the two detected signal edges according to amplitude can be made.
- The temporal sequence of tones represents, in particular for audio signals, the most natural way of characterization, because the easiest thing to recognize in music signals, the essence of the audio signal, is the chronological sequence of tones.
- The most immediate sensation that a listener receives from a music signal is the sequence of tones.
- The concept according to the invention is based on this knowledge and provides a signal identifier that consists of a temporal sequence of frequencies or, depending on the embodiment, is derived from a chronological sequence of frequencies, i.e. tones, by statistical methods.
- An advantage of the present invention is that the signal identifier, as a chronological sequence of frequencies, is a fingerprint with high information content for time signals with a harmonic component and to a certain extent captures the essence or core of a time signal.
- Another advantage of the present invention is that the signal identifier extracted according to the invention represents a strong compression of the time signal while still retaining its timing, and is thus adapted to the natural view of time signals, e.g. pieces of music.
- Another advantage of the present invention is that, due to the sequential nature of the signal identifier, one can move away from the distance-calculation referencing algorithms of the prior art and use algorithms known from DNA sequencing for referencing the time signal in a database, and that in addition similarity calculations can also be performed using DNA sequencing algorithms with replace/insert/delete operations.
- Another advantage of the present invention is that the Hough transform, for which efficient algorithms from image processing and image recognition exist, can be used in a favourable manner to detect the time of occurrence of signal edges in the time signal.
- Another advantage of the present invention is that the signal identifier extracted according to the invention for a time signal is independent of whether the search signal identifier is derived from the entire time signal or only from a section of the time signal, because with the DNA sequencing algorithms a stepwise comparison of the search signal identifier with a reference signal identifier can be carried out; due to the temporally sequential comparison of the section of the time signal to be identified, the reference time signal for which the highest match between search signal identifier and reference signal identifier exists is, to a certain extent, identified automatically.
- Fig. 1 shows a block diagram of a device for extracting a signal identifier from a time signal.
- The device comprises a device 12 for performing signal edge detection, a device 14 for determining the distance between two selected detected edges, a device 16 for frequency calculation, and a device 18 for signal identifier generation using coordinate tuples output by the device 16 for frequency calculation, each of which has a frequency value and an occurrence time for this frequency value.
- The device 12 for detecting the time of occurrence of signal edges in the time signal preferably carries out a Hough transformation.
- The Hough transform is described in U.S. Patent No. 3,069,654 by Paul V. C. Hough.
- The Hough transformation is used to recognize complex structures and, in particular, for the automatic detection of complex lines in photographs or other images.
- The Hough transformation is thus generally a technique that can be used to extract features of a particular shape within an image.
- The Hough transform is used here to extract from the time signal signal edges with specified temporal lengths; a signal edge is initially specified by its temporal length.
- For a sine wave, a signal edge would be defined by the rising edge of the sine function from 0 to 90°.
- If the time signal is present as a sequence of time samples ("samples"), the temporal length of a signal edge corresponds, taking into account the sampling frequency with which the samples were generated, to a certain number of samples. The length of a signal edge can thus be specified simply by stating the number of samples that the signal edge is to encompass.
- It is preferred that a signal edge only be detected as a signal edge if it is continuous and predominantly monotonic, i.e., in the case of a positive signal edge, has a predominantly monotonically increasing course. Of course, negative signal edges, i.e. monotonically falling signal edges, can also be detected.
- Another criterion for classifying signal edges is that a signal edge is only detected as a signal edge if it exceeds a certain level range. In order to mask out noise disturbances, it is preferred to specify a minimum level range or amplitude range for a signal edge, so that monotonically rising signal edges below this level range are not detected as signal edges.
- In a preferred embodiment of the present invention for referencing audio signals, the further restriction is used that only signal edges are searched for whose specified temporal length is greater than a minimum limit length and less than a maximum limit length.
- In other words, this means that only signal edges are searched for that point to frequencies less than an upper cutoff frequency and greater than a lower cutoff frequency.
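- As an illustration of these constraints, the following sketch accepts a candidate rising run of samples as a signal edge only if it is predominantly monotonic, spans a minimum amplitude range and has a temporal length between the two limit lengths. It is not the Hough-transform detection preferred by the patent; the parameter values, the 90% monotonicity criterion and the quarter-period mapping between edge length and frequency are assumptions.

```python
def is_valid_edge(samples, start, end, fs,
                  f_min=27.5, f_max=4186.0, min_amplitude=0.05):
    """Accept the rising run samples[start:end] as a signal edge only if it is
    predominantly monotonically increasing, spans at least min_amplitude, and
    its temporal length lies between the limit lengths derived from f_max and
    f_min (here assuming an edge is roughly a quarter period of a sine)."""
    run = samples[start:end]
    if len(run) < 2 or run[-1] - run[0] < min_amplitude:
        return False
    # predominantly monotonic: at least 90% of the steps are non-decreasing
    rising = sum(b >= a for a, b in zip(run, run[1:]))
    if rising / (len(run) - 1) < 0.9:
        return False
    # quarter-period assumption: an edge of length L seconds maps to f = 1/(4*L)
    length_s = len(run) / fs
    return 1.0 / (4 * f_max) <= length_s <= 1.0 / (4 * f_min)
```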
- The signal edge detection unit 12 thus provides a signal edge and the time of occurrence of the signal edge. It is irrelevant whether the time of occurrence of the signal edge is taken as the time of the first sample of the signal edge, the time of the last sample of the signal edge, or the time of any sample within the signal edge, as long as all signal edges are treated the same.
- Based on this, the device 16 calculates a frequency value from the determined time interval. The frequency value corresponds to the inverse of the determined time interval.
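- A minimal sketch of devices 14 and 16 under the simplest selection rule (two consecutive detected edges): the occurrence times are subtracted and the reciprocal of the interval is taken as the frequency value, yielding one coordinate tuple per edge pair. The function and variable names are assumptions.

```python
def edge_times_to_tuples(edge_times):
    """Turn a sorted list of edge occurrence times (in seconds) into
    (frequency_hz, occurrence_time) coordinate tuples, taking each pair of
    consecutive edges and using the reciprocal of their time interval.
    The occurrence time of the first edge of the pair is used; per the
    description, any consistent choice would do."""
    tuples = []
    for t0, t1 in zip(edge_times, edge_times[1:]):
        dt = t1 - t0
        if dt > 0:
            tuples.append((1.0 / dt, t0))
    return tuples
```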
- Fig. 5 shows a section of about 13 seconds in length of the clarinet quintet in A major, Larghetto, KV 581 by Wolfgang Amadeus Mozart, as it would appear at the output of the device 16 for frequency calculation.
- A clarinet, which plays a melody-leading solo part, can be heard, as well as an accompanying string quartet.
- The result is the coordinate tuples shown in Fig. 5, as they could be generated by the device 16 for frequency calculation.
- The device 18 is finally used to generate, from the results of the device 16, a signal identifier that is favourable and suitable for a signal identifier database.
- The signal identifier is generally generated from a plurality of coordinate tuples, each coordinate tuple comprising a frequency value and an occurrence time, so that the signal identifier comprises a sequence of signal identifier values that reproduces the time course of the time signal.
- The device 18 serves to extract the essential information from the frequency-time diagram of Fig. 5, which could be generated by the device 16, in order to generate a fingerprint of the time signal that on the one hand is compact and on the other hand describes the time signal precisely enough that it can be distinguished from other time signals.
- Fig. 2 shows a device for extracting a signal identifier according to a preferred embodiment of the present invention.
- An audio file 20 is input to an audio I/O handler 22.
- The audio I/O handler 22 reads the audio file, for example from a hard drive.
- The audio data stream can also be read in directly via a sound card.
- After reading a portion of the audio data stream, the device 22 retrieves the audio file and loads the next audio file to be processed or terminates the import process.
- The device 24 serves, on the one hand, to perform a sample rate conversion if necessary and, on the other hand, a volume modification of the audio signal.
- Audio signals are present on different media at different sampling frequencies.
- The time of occurrence of a signal edge in the audio signal is used to describe the audio signal, so the sampling rate must be known in order to detect the times of occurrence of signal edges correctly and thus also to determine frequency values correctly.
- Alternatively, a sample rate conversion by decimation or interpolation can be performed in order to bring audio signals of different sampling rates to the same sampling rate.
- The device 24 is therefore provided to perform a sampling rate adjustment.
- The PCM samples are also subjected to an automatic level adjustment, which is likewise provided in the device 24.
- For the automatic level adjustment in the device 24, the mean signal power of the audio signal is determined in a look-ahead buffer.
- The audio signal section that lies between two signal power minima is multiplied by a scaling factor that is the product of a weighting factor and the quotient of full scale and maximum level within the segment.
- The length of the look-ahead buffer is variable.
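- A sketch of this automatic level adjustment could look as follows; the default weighting factor and full-scale value, and the assumption that the segment between two power minima is already available as an array, are illustrative only.

```python
import numpy as np

def level_adjust(segment: np.ndarray, weight: float = 0.8,
                 full_scale: float = 1.0) -> np.ndarray:
    """Scale an audio segment (taken between two signal-power minima) by
    weight * full_scale / max_level, as a sketch of the automatic level
    adjustment performed in device 24."""
    max_level = np.max(np.abs(segment))
    if max_level == 0:
        return segment
    return segment * (weight * full_scale / max_level)
```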
- The audio signal preprocessed in this way is then fed into the device 12, which performs a signal edge detection as has been described with reference to Fig. 1.
- For the signal edge detection, the Hough transform is preferably used.
- A circuit implementation of the Hough transformation is described in WO 99/26167.
- The amplitude of a signal edge determined by the Hough transformation and the time of detection of the signal edge are then transferred to the device 14 of Fig. 1.
- In this unit, two successive detection times are subtracted from each other, the reciprocal of the difference of the occurrence times being taken as the frequency value.
- This is accomplished by the arrangement of Fig. 1 and leads, when a piece of music is processed, to the frequency-time diagram of Fig. 5, in which the frequency-time coordinate tuples obtained for Mozart, Köchel catalogue 581, are graphically represented.
- The representation of Fig. 5 could already be used as a signal identifier for the time signal, since the temporal sequence of the coordinate tuples reproduces the time course of the time signal.
- It is preferred, however, to post-process the frequency-time diagram of Fig. 5 in order to extract the essential information and to deliver a fingerprint for the time signal that is as small as possible and yet meaningful for signal referencing.
- The signal identifier generator 18 can be constructed as shown in Fig. 3.
- The device 18 is structured into a device 18a for determining cluster areas, a device 18b for grouping, a device 18c for averaging over a group, a device 18d for setting intervals, a device 18e for quantizing and finally a device 18f for post-processing, in order to obtain the signal identifier for the time signal.
- In the device 18a, characteristic distribution point clouds, which are referred to as clusters, are worked out in order to determine the cluster areas. This is done by deleting all isolated frequency-time tuples that exceed a given minimum distance to their closest spatial neighbour. Such isolated frequency-time tuples are, for example, the dots in the top right corner of Fig. 5. What remains is a so-called pitch contour strip band, which is outlined in Fig. 5 with the reference symbol 50.
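- A sketch of device 18a as described above: every frequency-time tuple whose nearest spatial neighbour is farther away than a minimum distance is deleted, so that only the cluster areas of the pitch contour strip band remain. The Euclidean metric and the threshold value are assumptions.

```python
import math

def remove_isolated(tuples, max_neighbor_distance=0.5):
    """Keep only coordinate tuples (frequency, time) whose nearest neighbour
    lies within max_neighbor_distance (Euclidean distance, in whatever units
    the tuples use); isolated points are treated as outliers and dropped."""
    kept = []
    for i, (f1, t1) in enumerate(tuples):
        nearest = min(
            (math.hypot(f1 - f2, t1 - t2)
             for j, (f2, t2) in enumerate(tuples) if j != i),
            default=float("inf"))
        if nearest <= max_neighbor_distance:
            kept.append((f1, t1))
    return kept
```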
- The pitch contour strip band consists of clusters of a certain frequency width and temporal length, these clusters being caused by the played notes. These tones are shown in Fig. 5.
- The tone a1 has a frequency of 440 Hz.
- The tone h1 has a frequency of 494 Hz.
- The tone c2 has a frequency of 523 Hz, the tone cis2 has a frequency of 554 Hz, while the tone d2 has a frequency of 587 Hz.
- The stripe width for single tones also depends on any vibrato of the musical instrument producing the single tones.
- To form blocks, the coordinate tuples of the pitch contour strip within a time window of n samples are combined or grouped into a processing block to be processed separately.
- The block size can be equidistant or variable.
- A relatively coarse division can be chosen, for example a one-second grid, which, given the present sampling rate, corresponds to a certain number of samples per block, or a finer division.
- Alternatively, in the case of pieces of music, in order to take account of the underlying notation, the grid can be chosen such that one tone always falls into a grid cell. For this it is necessary to estimate the length of a tone, which is possible by means of the polynomial fit function 54 shown in Fig. 5.
- A group or block is then determined by the temporal distance between two local extreme values of the polynomial.
- This approach delivers relatively large groups in particular for relatively monophonic sections, as occur between 6 and 12 seconds, while for relatively polyphonic passages of the piece of music, where the coordinate tuples are distributed over a larger frequency range, such as at about 2 seconds or at 12 seconds in Fig. 5, smaller groups are determined; this in turn means that the signal identifier, determined on the basis of relatively small groups, is compressed less, so that the information compression is lower than with fixed block formation.
- In block 18c for averaging over a group of samples, a weighted average is determined, as needed, over all coordinate tuples present in a block.
- Preferably, the tuples outside the pitch contour strip band have been "masked out" beforehand.
- However, this masking out can also be dispensed with, which means that all coordinate tuples calculated by the device 16 are taken into account in the averaging performed by the device 18c.
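- A sketch of devices 18b and 18c in the simple equidistant variant: coordinate tuples are grouped into fixed one-second blocks and an unweighted mean is taken per block. The fixed grid and the unweighted (rather than weighted) average are just one of the alternatives described above.

```python
def group_and_average(tuples, block_seconds=1.0):
    """Group (frequency, time) tuples into consecutive time blocks of
    block_seconds and average the frequency values per block, yielding one
    feature value per block (devices 18b/18c, equidistant variant)."""
    blocks = {}
    for freq, t in tuples:
        blocks.setdefault(int(t // block_seconds), []).append(freq)
    return [sum(freqs) / len(freqs) for _, freqs in sorted(blocks.items())]
```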
- The value that has been calculated by the device 18c is quantized to non-equidistant grid values.
- Preferably, the division is made according to the tone frequency scale, which, as already stated, is classified according to the frequency range supplied by a standard piano, extending from 27.5 Hz (tone A2) to 4186 Hz (tone c5) and comprising 88 tone levels. If the averaged value at the output of the device 18c lies between two adjacent semitones, it receives the value of the closest reference tone.
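- A sketch of the non-equidistant quantization in device 18e, using the 88 semitone frequencies of a standard piano from 27.5 Hz to about 4186 Hz and snapping each averaged value to the closest reference tone; measuring closeness on the linear frequency axis is an assumption.

```python
# 88 piano semitones: key n (1..88) has frequency 440 * 2**((n - 49) / 12) Hz,
# so key 1 is 27.5 Hz and key 88 is about 4186 Hz.
PIANO_FREQS = [440.0 * 2 ** ((n - 49) / 12) for n in range(1, 89)]

def quantize_to_semitone(freq_hz: float) -> float:
    """Snap an averaged frequency value to the closest piano semitone,
    i.e. to a non-equidistant grid of reference tones."""
    return min(PIANO_FREQS, key=lambda ref: abs(ref - freq_hz))
```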
- The quantized values can be post-processed by the device 18f, the post-processing consisting, for example, of a pitch offset correction, a transposition into another tone scale, etc.
- Fig. 4 schematically shows a device for referencing a search time signal in a database 40, the database 40 containing signal identifiers of a plurality of database time signals Track_1 to Track_m, which are stored in a library 42 that is preferably separate from the database 40.
- Audio files 41 are fed one after another to a vector generator 43, which generates a reference identifier for each audio file and stores it in the database so that it can be recognized to which audio file, e.g. in the library 42, the signal identifier belongs.
- The signal identifier MV11, ..., MV1n corresponds to the time signal Track_1.
- The signal identifier MV21, ..., MV2n belongs to the time signal Track_2.
- The signal identifier MVm1, ..., MVmn belongs to the time signal Track_m.
- The vector generator 43 is designed to generally perform the functions shown in Fig. 1 and, according to a preferred embodiment, is implemented as in Figs. 2 and 3. In "learn" mode, the vector generator 43 processes different audio files (Track_1 to Track_m) one after another in order to store signal identifiers for the time signals in the database, i.e. in order to fill the database.
- An audio file 41 is to be referenced on the basis of the database 40.
- For this purpose, the search time signal 41 is processed by the vector generator 43 in order to generate a search identifier 45.
- The search identifier 45 is then fed into a DNA sequencer 46 in order to be compared with the reference identifiers in the database 40.
- The DNA sequencer 46 is also arranged to make a statement about the search time signal with respect to the plurality of database time signals from the library 42.
- Using the search identifier 45, the DNA sequencer searches the database 40 for a matching reference identifier and returns a pointer to the corresponding audio file in the library 42 that is associated with the reference identifier.
- The DNA sequencer 46 thus compares the search identifier 45, or parts thereof, with the reference identifiers in the database. If the given sequence or a partial sequence thereof is present, the associated time signal in the library 42 is referenced.
- The DNA sequencer 46 preferably uses a Boyer-Moore algorithm, which is described, for example, in the textbook "Algorithms on Strings, Trees and Sequences", Dan Gusfield, Cambridge University Press, 1997. According to a first alternative, an exact match is checked. The resulting statement is then that the search time signal is identical to a time signal in the library 42. Alternatively or additionally, the similarity of two sequences can also be examined using replace/insert/delete operations and a pitch offset correction.
- The database 40 is structured such that it is composed of the concatenation of signal identifier sequences, the end of each signal identifier vector of a time signal being marked by a separator so that the search does not continue beyond time signal file boundaries. If multiple matches are found, all referenced time signals are specified.
- A similarity measure can be introduced with which the time signal in the library 42 is referenced that is most similar to the search time signal 41 on the basis of a predetermined similarity measure. Furthermore, it is preferred to measure the similarity of the search audio signal to several signals in the library and then to output the n most similar sections in the library 42 in order of descending similarity.
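- As a sketch of the two comparison modes of the DNA sequencer 46: an exact-containment search of the search sequence within the reference sequences (shown here with a naive scan rather than the Boyer-Moore implementation the patent cites) and an edit-distance similarity with replace/insert/delete operations. The dictionary-based encoding of the reference identifiers is an assumption.

```python
def exact_match(search_seq, reference_seqs):
    """Return the names of all reference identifiers that contain the search
    identifier as a contiguous subsequence (exact-match mode)."""
    return [name for name, ref in reference_seqs.items()
            if any(search_seq == ref[i:i + len(search_seq)]
                   for i in range(len(ref) - len(search_seq) + 1))]

def edit_distance(a, b):
    """Levenshtein distance with replace/insert/delete operations, usable as a
    simple similarity measure between two signal identifier sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # delete
                            curr[j - 1] + 1,          # insert
                            prev[j - 1] + (x != y)))  # replace / match
        prev = curr
    return prev[-1]
```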
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Radar Systems Or Details Thereof (AREA)
- Mobile Radio Communication Systems (AREA)
Description
- Fig. 1
- a block diagram of the device according to the invention for extracting a signal identifier from a time signal;
- Fig. 2
- a block diagram of a preferred embodiment in which preprocessing of the audio signal is shown;
- Fig. 3
- a block diagram of an embodiment of the signal identifier generation;
- Fig. 4
- a block diagram of a device according to the invention for generating a database and for referencing a search time signal in the database.
- Fig. 5
- a graphical representation of an excerpt of Mozart KV 581 by means of frequency-time coordinate tuples.
Claims (22)
- Method for extracting a signal identifier from a time signal having a harmonic component, comprising the following steps: detecting (12) the time of occurrence of signal edges in the time signal; determining (14) a time interval between two selected detected signal edges; calculating (16) a frequency value from the determined time interval and assigning the frequency value to an occurrence time of the frequency value in the time signal, in order to obtain a coordinate tuple consisting of the frequency value and the occurrence time for this frequency value; and generating (18) the signal identifier from a plurality of coordinate tuples, each coordinate tuple comprising a frequency value and an occurrence time, whereby the signal identifier comprises a sequence of signal identifier values that reproduces the time course of the time signal.
- Method according to claim 1, wherein in the step of detecting (12) a signal edge is only detected as a signal edge if, over its specified temporal length, it has an amplitude that is greater than a predetermined amplitude threshold.
- Method according to claim 1 or 2, wherein in the step of detecting (12) a signal edge is only detected as a signal edge if its specified temporal length is greater than a minimum limit length and smaller than a maximum limit length.
- Method according to claim 3, wherein the time signal is an audio signal, and wherein the minimum temporal limit length is defined on the basis of a maximum audible cutoff frequency and the maximum temporal limit length on the basis of a minimum audible cutoff frequency.
- Method according to claim 3, wherein the time signal is an audio signal, and wherein the minimum temporal limit length is defined on the basis of a maximum tone frequency that can be produced by an instrument and the maximum temporal limit length on the basis of a minimum tone frequency that can be produced by an instrument.
- Method according to one of the preceding claims, wherein the step of generating (18) the signal identifier comprises the following step: eliminating (18a) coordinate tuples that are spaced more than a predetermined threshold distance from a neighbouring coordinate tuple in a frequency-time diagram, in order to determine clusters of coordinate tuples.
- Method according to claim 5 or 6, wherein the step of generating (18) comprises the following step: grouping (18b) coordinate tuples in successive time intervals into blocks of coordinate tuples.
- Method according to claim 7, wherein the successive time intervals have a fixed and/or a variable length.
- Method according to claim 7 or 8, wherein the step of generating (18) the signal identifier comprises the following step: averaging (18c) the frequency values of coordinate tuples in the time intervals in order to obtain a sequence of averaged frequency values for a sequence of time intervals, the sequence of averaged frequency values representing a feature vector.
- Method according to claim 9, wherein the step (18) of generating the signal identifier comprises the following step: quantizing (18e) the feature vector in order to obtain a quantized feature vector.
- Method according to claim 10, wherein the step of quantizing (18e) is performed using non-equidistantly distributed grid points, distances between two adjacent grid points being determined according to a tone frequency scale.
- Method according to one of the preceding claims, wherein a Hough transformation is used in the step (12) of detecting signal edges.
- Method for generating a database (40) of reference signal identifiers for a plurality of time signals, comprising the following steps: extracting a first signal identifier for a first time signal by the method according to one of claims 1 to 12; extracting a second signal identifier for a second time signal by a method according to one of claims 1 to 12; storing the extracted first signal identifier in association with the first time signal in the database (40); and storing the extracted second signal identifier in association with the second time signal in the database (40).
- Method for referencing a search time signal using a database (40), the database comprising reference signal identifiers of a plurality of database time signals, a reference signal identifier of a database time signal having been determined by a method according to one of claims 1 to 12, comprising the following steps: providing at least a section of a search time signal (41); extracting (43) a search signal identifier from the search time signal by a method according to one of claims 1 to 12; and comparing (46) the search signal identifier with the plurality of reference signal identifiers and, in response to the step of comparing, making a statement about the search time signal with respect to the plurality of database time signals.
- Method according to claim 14, wherein in the step of making a statement a search time signal is identified as a reference time signal if the search signal identifier matches at least a section of a reference signal identifier.
- Method according to claim 14, wherein in the step of making a statement a similarity between a search time signal and a database time signal is established if the search signal identifier and/or at least a section of a database signal identifier can be brought into agreement by a reproducible manipulation.
- Method according to one of claims 14 to 16,
wherein the database signal identifier comprises a sequence of database signal identifier values that reproduce the time course of the database time signal,
wherein the search signal identifier comprises a search sequence of search signal identifier values that reproduce the time course of the search time signal,
wherein the length of the database sequence is greater than the length of the search sequence, and
wherein the search sequence is compared sequentially with the database sequence.
- Method according to claim 17, wherein during the sequential comparison of the search sequence with the database sequence a correction of the values of the search and/or database signal identifier is performed by a replace, insert or delete operation on at least one value of the search and/or database signal identifier, in order to determine a similarity of the search time signal and the database time signal.
- Method according to one of claims 14 to 18,
wherein the step of comparing (46) is performed using a DNA sequencing algorithm and/or using the Boyer-Moore algorithm.
- Device for extracting a signal identifier from a time signal having a harmonic component, comprising: a device for detecting (12) the time of occurrence of signal edges in the time signal; a device for determining (14) a time interval between two selected detected signal edges; a device for calculating (16) a frequency value from the determined time interval and assigning the frequency value to an occurrence time of the frequency value in the time signal, in order to obtain a coordinate tuple consisting of the frequency value and the occurrence time for this frequency value; and a device for generating (18) the signal identifier from a plurality of coordinate tuples, each coordinate tuple comprising a frequency value and an occurrence time, whereby the signal identifier comprises a sequence of signal identifier values that reproduces the time course of the time signal.
- Device for generating a database (40) of reference signal identifiers for a plurality of time signals, comprising: a device for extracting a first signal identifier for a first time signal according to claim 20; a device for extracting a second signal identifier for a second time signal according to claim 20; a device for storing the extracted first signal identifier in association with the first time signal in the database (40); and a device for storing the extracted second signal identifier in association with the second time signal in the database (40).
- Device for referencing a search time signal using a database (40), the database comprising reference signal identifiers of a plurality of database time signals, a reference signal identifier of a database time signal having been determined by a method according to one of claims 1 to 12, comprising: a device for providing at least a section of a search time signal (41); a device for extracting (43) a search signal identifier according to claim 20; and a device for comparing (46) the search signal identifier with the plurality of reference signal identifiers and, in response to the comparison, making a statement about the search time signal with respect to the plurality of database time signals.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE10117871A DE10117871C1 (de) | 2001-04-10 | 2001-04-10 | Verfahren und Vorrichtung zum Extrahieren einer Signalkennung, Verfahren und Vorrichtung zum Erzeugen einer Datenbank aus Signalkennungen und Verfahren und Vorrichtung zum Referenzieren eines Such-Zeitsignals |
DE10117871 | 2001-04-10 | ||
PCT/EP2002/002703 WO2002084539A2 (de) | 2001-04-10 | 2002-03-12 | Verfahren und vorrichtung zum extrahieren einer signalkennung, verfahren und vorrichtung zum erzeugen einer dazugehörigen databank |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1377924A2 EP1377924A2 (de) | 2004-01-07 |
EP1377924B1 true EP1377924B1 (de) | 2004-09-22 |
Family
ID=7681083
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP02714186A Expired - Lifetime EP1377924B1 (de) | 2001-04-10 | 2002-03-12 | VERFAHREN UND VORRICHTUNG ZUM EXTRAHIEREN EINER SIGNALKENNUNG, VERFAHREN UND VORRICHTUNG ZUM ERZEUGEN EINER DAZUGEHÖRIGEN DATABANK und Verfahren und Vorrichtung zum Referenzieren eines Such-Zeitsignals |
Country Status (9)
Country | Link |
---|---|
US (1) | US20040158437A1 (de) |
EP (1) | EP1377924B1 (de) |
JP (1) | JP3934556B2 (de) |
AT (1) | ATE277381T1 (de) |
AU (1) | AU2002246109A1 (de) |
CA (1) | CA2443202A1 (de) |
DE (2) | DE10117871C1 (de) |
HK (1) | HK1059492A1 (de) |
WO (1) | WO2002084539A2 (de) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE10232916B4 (de) * | 2002-07-19 | 2008-08-07 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und Verfahren zum Charakterisieren eines Informationssignals |
EP1684263B1 (de) * | 2005-01-21 | 2010-05-05 | Unlimited Media GmbH | Vervahren zum Erzeugen eines Abdrucks eines Audiosignals |
US7996212B2 (en) | 2005-06-29 | 2011-08-09 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Device, method and computer program for analyzing an audio signal |
DE102005030326B4 (de) * | 2005-06-29 | 2016-02-25 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung, Verfahren und Computerprogramm zur Analyse eines Audiosignals |
WO2010135623A1 (en) * | 2009-05-21 | 2010-11-25 | Digimarc Corporation | Robust signatures derived from local nonlinear filters |
DE102017213510A1 (de) * | 2017-08-03 | 2019-02-07 | Robert Bosch Gmbh | Verfahren und Vorrichtung zum Erzeugen eines maschinellen Lernsystems, und virtuelle Sensorvorrichtung |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR772961A (fr) * | 1934-05-07 | 1934-11-09 | Procédé d'enregistrement de la musique jouée sur un instrument à clavier, et appareil basé sur ce procédé | |
US3069654A (en) * | 1960-03-25 | 1962-12-18 | Paul V C Hough | Method and means for recognizing complex patterns |
US3979557A (en) * | 1974-07-03 | 1976-09-07 | International Telephone And Telegraph Corporation | Speech processor system for pitch period extraction using prediction filters |
US4697209A (en) * | 1984-04-26 | 1987-09-29 | A. C. Nielsen Company | Methods and apparatus for automatically identifying programs viewed or recorded |
DE4324497A1 (de) * | 1992-07-23 | 1994-04-21 | Roman Koller | Verfahren und Anordnung zur ferngewirkten Schaltung eines Verbrauchers |
US5918223A (en) * | 1996-07-22 | 1999-06-29 | Muscle Fish | Method and article of manufacture for content-based analysis, storage, retrieval, and segmentation of audio information |
CN1291324A (zh) * | 1997-01-31 | 2001-04-11 | T-内提克斯公司 | 检测录制声音的系统和方法 |
DE19948974A1 (de) * | 1999-10-11 | 2001-04-12 | Nokia Mobile Phones Ltd | Verfahren zum Erkennen und Auswählen einer Tonfolge, insbesondere eines Musikstücks |
US6990453B2 (en) * | 2000-07-31 | 2006-01-24 | Landmark Digital Services Llc | System and methods for recognizing sound and music signals in high noise and distortion |
-
2001
- 2001-04-10 DE DE10117871A patent/DE10117871C1/de not_active Expired - Fee Related
-
2002
- 2002-03-12 AT AT02714186T patent/ATE277381T1/de active
- 2002-03-12 JP JP2002582410A patent/JP3934556B2/ja not_active Expired - Lifetime
- 2002-03-12 CA CA002443202A patent/CA2443202A1/en not_active Abandoned
- 2002-03-12 EP EP02714186A patent/EP1377924B1/de not_active Expired - Lifetime
- 2002-03-12 US US10/473,801 patent/US20040158437A1/en not_active Abandoned
- 2002-03-12 DE DE50201116T patent/DE50201116D1/de not_active Expired - Lifetime
- 2002-03-12 WO PCT/EP2002/002703 patent/WO2002084539A2/de active IP Right Grant
- 2002-03-12 AU AU2002246109A patent/AU2002246109A1/en not_active Abandoned
-
2004
- 2004-04-02 HK HK04102412A patent/HK1059492A1/xx not_active IP Right Cessation
Also Published As
Publication number | Publication date |
---|---|
JP3934556B2 (ja) | 2007-06-20 |
WO2002084539A3 (de) | 2003-10-02 |
DE10117871C1 (de) | 2002-07-04 |
US20040158437A1 (en) | 2004-08-12 |
ATE277381T1 (de) | 2004-10-15 |
AU2002246109A1 (en) | 2002-10-28 |
EP1377924A2 (de) | 2004-01-07 |
DE50201116D1 (de) | 2004-10-28 |
WO2002084539A2 (de) | 2002-10-24 |
HK1059492A1 (en) | 2004-07-02 |
CA2443202A1 (en) | 2002-10-24 |
JP2004531758A (ja) | 2004-10-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE10117870B4 (de) | Verfahren und Vorrichtung zum Überführen eines Musiksignals in eine Noten-basierte Beschreibung und Verfahren und Vorrichtung zum Referenzieren eines Musiksignals in einer Datenbank | |
DE10232916B4 (de) | Vorrichtung und Verfahren zum Charakterisieren eines Informationssignals | |
EP1368805B1 (de) | Verfahren und vorrichtung zum charakterisieren eines signals und verfahren und vorrichtung zum erzeugen eines indexierten signals | |
EP1407446B1 (de) | Verfahren und vorrichtung zum charakterisieren eines signals und zum erzeugen eines indexierten signals | |
EP1405222B1 (de) | Verfahren und vorrichtung zum erzeugen eines fingerabdrucks und verfahren und vorrichtung zum identifizieren eines audiosignals | |
EP1371055B1 (de) | Vorrichtung zum analysieren eines audiosignals hinsichtlich von rhythmusinformationen des audiosignals unter verwendung einer autokorrelationsfunktion | |
EP2099024B1 (de) | Verfahren zur klangobjektorientierten Analyse und zur notenobjektorientierten Bearbeitung polyphoner Klangaufnahmen | |
EP1388145B1 (de) | Vorrichtung und verfahren zum analysieren eines audiosignals hinsichtlich von rhythmusinformationen | |
DE10157454B4 (de) | Verfahren und Vorrichtung zum Erzeugen einer Kennung für ein Audiosignal, Verfahren und Vorrichtung zum Aufbauen einer Instrumentendatenbank und Verfahren und Vorrichtung zum Bestimmen der Art eines Instruments | |
DE102004028693B4 (de) | Vorrichtung und Verfahren zum Bestimmen eines Akkordtyps, der einem Testsignal zugrunde liegt | |
EP1377924B1 (de) | VERFAHREN UND VORRICHTUNG ZUM EXTRAHIEREN EINER SIGNALKENNUNG, VERFAHREN UND VORRICHTUNG ZUM ERZEUGEN EINER DAZUGEHÖRIGEN DATABANK und Verfahren und Vorrichtung zum Referenzieren eines Such-Zeitsignals | |
EP1671315B1 (de) | Vorrichtung und verfahren zum charakterisieren eines tonsignals | |
DE68911858T2 (de) | Verfahren und Vorrichtung zum automatischen Transkribieren. | |
EP1743324B1 (de) | Vorrichtung und verfahren zum analysieren eines informationssignals | |
EP1381024B1 (de) | Verfahren zum Auffinden einer Tonfolge |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20030930 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
AX | Request for extension of the european patent |
Extension state: AL LT LV MK RO SI |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: KATAI, ANDRAS Inventor name: KAUFMANN, MATTHIAS Inventor name: UHLE, CHRISTIAN Inventor name: RICHTER, CHRISTIAN Inventor name: HIRSCH, WOLFGANG Inventor name: KLEFENZ, FRANK Inventor name: BRANDENBURG, KARLHEINZ |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 1059492 Country of ref document: HK |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT;WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED. Effective date: 20040922 Ref country code: IE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040922 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040922 Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040922 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040922 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
GBT | Gb: translation of ep patent filed (gb section 77(6)(a)/1977) |
Effective date: 20040922 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: GERMAN |
|
REF | Corresponds to: |
Ref document number: 50201116 Country of ref document: DE Date of ref document: 20041028 Kind code of ref document: P |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20041222 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20041222 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20041222 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20050102 |
|
LTIE | Lt: invalidation of european patent or patent extension |
Effective date: 20040922 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20050312 Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20050312 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20050331 Ref country code: MC Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20050331 |
|
NLV1 | Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act | ||
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FD4D |
|
ET | Fr: translation filed | ||
REG | Reference to a national code |
Ref country code: HK Ref legal event code: GR Ref document number: 1059492 Country of ref document: HK |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20050623 |
|
BERE | Be: lapsed |
Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FORDERUNG DER ANGEWAND Effective date: 20050331 |
|
BERE | Be: lapsed |
Owner name: *FRAUNHOFER-GESELLSCHAFT ZUR FORDERUNG DER ANGEWAN Effective date: 20050331 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20050222 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R082 Ref document number: 50201116 Country of ref document: DE Representative=s name: SCHOPPE, ZIMMERMANN, STOECKELER, ZINKLER & PAR, DE |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R081 Ref document number: 50201116 Country of ref document: DE Owner name: MUFIN GMBH, DE Free format text: FORMER OWNER: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V., 80686 MUENCHEN, DE Effective date: 20111109 Ref country code: DE Ref legal event code: R082 Ref document number: 50201116 Country of ref document: DE Representative=s name: SCHOPPE, ZIMMERMANN, STOECKELER, ZINKLER, SCHE, DE Effective date: 20111109 Ref country code: DE Ref legal event code: R082 Ref document number: 50201116 Country of ref document: DE Representative=s name: SCHOPPE, ZIMMERMANN, STOECKELER, ZINKLER & PAR, DE Effective date: 20111109 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 732E Free format text: REGISTERED BETWEEN 20120105 AND 20120111 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: CA Effective date: 20120207 Ref country code: FR Ref legal event code: TP Owner name: MUFIN GMBH, DE Effective date: 20120207 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: PC Ref document number: 277381 Country of ref document: AT Kind code of ref document: T Owner name: MUFIN GMBH, DE Effective date: 20120402 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 15 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 16 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 17 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20190322 Year of fee payment: 18 Ref country code: CH Payment date: 20190322 Year of fee payment: 18 Ref country code: GB Payment date: 20190322 Year of fee payment: 18 Ref country code: DE Payment date: 20181211 Year of fee payment: 18 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: AT Payment date: 20190328 Year of fee payment: 18 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R119 Ref document number: 50201116 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MM01 Ref document number: 277381 Country of ref document: AT Kind code of ref document: T Effective date: 20200312 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200331 Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20201001 Ref country code: AT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200312 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200331 Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200331 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20200312 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200312 |