WO2005004002A2 - Procede de traitement d’une sequence sonore, telle qu’un morceau musical - Google Patents
Procede de traitement d’une sequence sonore, telle qu’un morceau musical Download PDFInfo
- Publication number
- WO2005004002A2 (PCT/FR2004/001493)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sequence
- sub
- subsequence
- piece
- sound
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
- G10H2210/061—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for extraction of musical phrases, isolation of musically relevant segments, e.g. musical thumbnail generation, or for temporal structure analysis of a musical piece, e.g. determination of the movement sequence of a musical work
Definitions
- the present invention relates to the processing of a sound sequence, such as a piece of music or, more generally, a sound sequence comprising the repetition of a sub-sequence.
- the distributors of musical productions make kiosks available to potential customers, where customers can listen to pieces of their choice, or to pieces promoted for their novelty.
- if a customer recognizes a verse or a chorus of the musical piece he is listening to, he can decide to buy the corresponding musical production.
- sound summaries can be downloaded to a computer station communicating with a remote server via a wide-area network of the Internet type. The user of the computer station can thus order a musical production whose sound summary he appreciates.
- the present invention improves the situation.
- One of the aims of the present invention is to propose an automated detection of a repeated subsequence in a sound sequence.
- Another object of the present invention is to propose an automated creation of sound summaries of the type described above.
- the present invention relates firstly to a method of processing a sound sequence, in which: a) a spectral transform is applied to said sequence in order to obtain spectral coefficients varying as a function of time in said sequence.
- the method within the meaning of the invention further comprises the following steps: b) at least one sub-sequence repeated in said sequence is determined by statistical analysis of said spectral coefficients, and c) the start and end instants of said sub-sequence in the sound sequence are evaluated.
- in a step d), the above-mentioned sub-sequence is extracted in order to store, in a memory, sound samples representing said sub-sequence.
- the extraction of step d) relates to at least one sub-sequence whose duration is the greatest and/or whose repetition frequency is the greatest in said sequence.
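As an illustration of step a), the time-varying spectral coefficients can be obtained with a short-time Fourier transform. This is a minimal sketch, not the patent's implementation; the frame length, hop size and window are assumptions:

```python
import numpy as np

def spectral_coefficients(signal, frame_len=1024, hop=512):
    """Step a) sketch: short-time FFT giving spectral coefficients F_i(t),
    one row per time frame, one column per frequency bin."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        windowed = signal[start:start + frame_len] * np.hanning(frame_len)
        frames.append(np.abs(np.fft.rfft(windowed)))
    return np.array(frames)

# Toy sequence: a 440 Hz tone sampled at 8 kHz for 2 seconds
sr = 8000
t = np.arange(2 * sr) / sr
coeffs = spectral_coefficients(np.sin(2 * np.pi * 440 * t))
```

With these assumed sizes, each row of `coeffs` describes the spectrum of one analysis frame; the repeated-sub-sequence analysis of steps b) and c) operates on such rows.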
- the present invention finds an advantageous application in assisting in the detection of failures of industrial machines or of engines, in particular by obtaining sound recording sequences of acceleration and deceleration phases of the engine speed.
- the application of the method within the meaning of the invention makes it possible to isolate a sound sub-sequence corresponding, for example, to full speed or to an acceleration phase, this sub-sequence being, if necessary, compared to a reference sub-sequence.
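The comparison of an isolated sub-sequence against a reference could, for instance, use a spectral distance. The following sketch is an assumption for illustration only (the patent does not specify the comparison metric); the function name and frame size are invented:

```python
import numpy as np

def spectral_distance(sub_seq, reference, frame_len=256):
    """Hypothetical comparison: Euclidean distance between the average
    short-time spectra of an isolated sub-sequence and a reference one."""
    def mean_spectrum(x):
        n = (len(x) // frame_len) * frame_len
        frames = x[:n].reshape(-1, frame_len)
        return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)
    return float(np.linalg.norm(mean_spectrum(sub_seq) - mean_spectrum(reference)))

sr = 4000
t = np.arange(sr) / sr
healthy = np.sin(2 * np.pi * 50 * t)                   # nominal engine hum
faulty = healthy + 0.5 * np.sin(2 * np.pi * 630 * t)   # added high-pitched rattle
d_same = spectral_distance(healthy, healthy)
d_diff = spectral_distance(faulty, healthy)
```

A sub-sequence whose distance to the reference exceeds a chosen threshold would then flag a possible failure.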
- the aforementioned sound sequence is a piece of music comprising a succession of sub-sequences among at least an introduction, a verse, a chorus, a transition bridge, a theme, a motif, or a movement which is repeated in the sequence.
- in step c), the respective start and end instants of at least a first sub-sequence and of a second sub-sequence are preferably determined.
- in step d), a first and a second sub-sequence are then extracted to obtain, on a memory medium, a sound summary of said piece of music comprising at least the first sub-sequence chained with the second sub-sequence.
- the first sub-sequence corresponds to a verse and the second sub-sequence corresponds to a chorus.
- the first and second sub-sequences extracted from a sound sequence are not contiguous in time.
- d1) at least one cadence (tempo) of the first sub-sequence and/or of the second sub-sequence is detected in order to estimate the average duration of a measure at said cadence, and at least one end segment of the first sub-sequence and at least one start segment of the second sub-sequence are isolated, of respective durations corresponding substantially to said average duration and separated in the sequence by a whole number of average durations; d2) at least one transition measure, of duration corresponding to said average duration, is generated, comprising an addition of the sound samples of at least said end segment and at least said start segment; d3) the first sub-sequence, the transition measure(s) and the second sub-sequence are concatenated to obtain the chaining of the first and second sub-sequences.
- the implementation of steps d1) to d3) finds, beyond the automatic generation of sound summaries, an advantageous application in computer-assisted musical creation.
- a user can himself create two sub-sequences of a musical piece, while software comprising instructions for carrying out steps d1) to d3) ensures a concatenation of the two sub-sequences that is free of artifacts and pleasant to the ear.
- the present invention also relates to a computer program product, stored in a computer memory or on a removable medium suitable for cooperating with a corresponding computer reader, and comprising instructions for carrying out the steps of the method within the meaning of the invention.
- the audio signal in FIG. 1a represents the sound intensity (on the ordinate) as a function of time (on the abscissa) of a musical piece (here, the song "Head Over Feet" by the artist Alanis Morissette).
- a spectral transform is applied (for example of the fast Fourier transform FFT type) to obtain a temporal variation of the spectral energy of the type represented in FIG. 1b.
- the result of which is applied to a filter bank covering several frequency ranges (preferably with bandwidths increasing with frequency, for example as the logarithm of the frequency).
- Another Fourier transform is then applied to obtain dynamic parameters of the audio signal (referenced PD in FIG. 1b).
- the ordinate scale of FIG. 1b indicates the amplitude of the variations of the components at different speeds in a given frequency domain.
- the index 0 or 2 of the arbitrary ordinate scale of FIG. 1b corresponds to a slow variation in the low frequencies
- the index 12 of this same scale corresponds to a rapid variation in the high frequencies.
- These variations are expressed as a function of time, on the abscissa (seconds).
- the intensities associated with these dynamic parameters PD over time are illustrated by different levels of gray, whose relative values are indicated by the reference column COL (on the right of FIG. 1b).
- the variables deduced from the audio signal and making it possible to characterize the piece of music can be of different types, including the so-called "Mel Frequency Cepstral Coefficients" (MFCC). Overall, these coefficients (known per se) are again obtained by short-term fast Fourier transform.
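The pipeline described above (short-time FFT, log-spaced filter bank, then a second Fourier transform along time yielding the dynamic parameters PD) can be sketched as follows. All numeric parameters here are assumptions, not values from the patent:

```python
import numpy as np

def dynamic_parameters(signal, frame_len=512, hop=256, n_bands=8):
    """Sketch of the described pipeline: short-time FFT -> energies in
    log-spaced frequency bands -> second Fourier transform along time,
    giving 'dynamic parameters' describing how fast each band varies."""
    # Short-time spectral magnitudes, one row per frame
    spec = np.array([np.abs(np.fft.rfft(signal[i:i + frame_len] * np.hanning(frame_len)))
                     for i in range(0, len(signal) - frame_len + 1, hop)])
    # Log-spaced band edges: bandwidth grows with frequency
    edges = np.unique(np.rint(np.geomspace(1, spec.shape[1] - 1, n_bands + 1)).astype(int))
    band_energy = np.stack([spec[:, a:b].sum(axis=1)
                            for a, b in zip(edges[:-1], edges[1:])], axis=1)
    # Second Fourier transform, along the time axis this time
    return np.abs(np.fft.rfft(band_energy, axis=0))

sr = 8000
t = np.arange(2 * sr) / sr
# Amplitude-modulated tone: its band energy varies slowly over time
sig = (1 + 0.8 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 440 * t)
pd = dynamic_parameters(sig)
```

Each column of `pd` is one frequency band; low rows correspond to slow variations and high rows to rapid variations, matching the ordinate scale of FIG. 1b.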
- FIG. 1c provides a visual representation of the evolution of the spectral energy of FIG. 1b.
- the abscissa represents time (in seconds) and the ordinates represent the different parts of the piece, such as verses, choruses, introduction, theme, or others.
- the repetition over time of a similar part, such as a verse or a chorus, is represented by shaded rectangles which appear at different abscissas in time (and which can be of different temporal widths), but at the same ordinate.
- a statistical analysis is implemented using, for example, the "K-means" algorithm, or the "fuzzy K-means" algorithm, or a hidden Markov chain, with learning by the Baum-Welch algorithm followed by an evaluation by the Viterbi algorithm.
- the determination of the number of states (the parts of the piece of music) necessary for the representation of a piece of music is performed in an automated manner, by comparing the similarity of the states found at each iteration of the above algorithms and by eliminating redundant states.
- This technique known as "pruning” thus makes it possible to isolate each redundant part of the piece of music and to determine its time coordinates (its start and end times, as indicated above).
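As a toy illustration of step b), frame features can be clustered and consecutive runs of the same cluster label turned into parts with start and end times. This minimal K-means is a stand-in for the statistical analyses named above (it omits pruning, fuzzy K-means and HMMs); the feature values and hop duration are invented:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal K-means with deterministic initialization (for the sketch)."""
    centers = X[[0, len(X) // 2]].astype(float)  # assumes k == 2 here
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def segments(labels, hop_seconds):
    """Turn a frame-label sequence into (state, start_s, end_s) triples."""
    out, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            out.append((int(labels[start]), start * hop_seconds, i * hop_seconds))
            start = i
    return out

# Toy feature track: state A, then B, then A again (a repeated part)
X = np.array([[0.0]] * 10 + [[5.0]] * 10 + [[0.0]] * 10)
segs = segments(kmeans(X, 2), hop_seconds=0.5)
```

The two segments sharing a state label here play the role of a repeated sub-sequence, and their boundaries are the start/end instants of step c).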
- for most variety pieces, one can choose to isolate the chorus parts, whose repetition is generally the most frequent, then the verse parts, whose repetition is frequent, then, if necessary, other parts if they are repeated. Other types of sub-sequences representative of the piece of music can be extracted, as soon as these sub-sequences are repeated in the piece. For example, one can choose to extract a musical motif, generally shorter than a verse or a chorus, such as a percussion passage repeated in the song, or a vocal phrase punctuating the song several times. A theme can also be extracted from the piece of music, for example a musical phrase repeated in a piece of jazz or classical music. In classical music, a passage such as a movement can also be extracted.
- the shaded rectangles indicate the presence of a part of the song, such as the introduction ("intro"), a verse or a chorus, in a time window indicated by the time abscissa (in seconds).
- the piece of music starts with an introduction (indexed by the number 2 on the ordinate scale).
- the introduction is followed by two alternations of verse (indexed by the number 3) and refrain (indexed by the number 1) up to approximately 100 seconds.
- in FIG. 5, the audio signals of the left channel "audio L" and of the right channel "audio R" are obtained in respective steps 10 and 11, when the initial sound sequence is represented in stereophonic mode.
- the signals from these two channels are added in step 12 to obtain an audio signal of the type shown in FIG. 1a.
- This audio signal is, if necessary, stored in sampled form in a working memory with sound intensity values arranged as a function of their associated time coordinates (step 14).
- a spectral transform (of FFT type in the example shown) is applied, in step 16, to obtain, in step 18, the spectral coefficients Fi(t) and/or their variation ΔFi(t) as a function of time.
- a statistical analysis module operates on the basis of the coefficients obtained in step 18 to isolate instants t0, t1, ..., t7 which correspond to the start and end instants of the various sub-sequences which are repeated in the audio signal of step 14.
- the piece of music has a structure (classic in variety music) of the type comprising: - an introduction at the start of the piece, between an instant t0 and an instant t1, - a verse between t1 and t2, - a chorus between t2 and t3, - a second verse between t3 and t4, - a second chorus between t4 and t5, - an introduction again, if necessary with an instrumental solo, between instants t5 and t6, and - the repetition of two choruses at the end of the piece, between instants t6 and t7.
- in step 22, the instants t0 to t7 are listed and indexed as a function of the corresponding musical passage (introduction, verse or chorus) and stored, if necessary, in a working memory.
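The indexed table of step 22 can be represented very simply. The boundary instants below are invented illustrative values laid out to match the intro/verse/chorus structure described above, not data from the patent:

```python
# Hypothetical boundary instants (seconds) matching the described structure
t = [0.0, 12.0, 40.0, 68.0, 96.0, 124.0, 136.0, 192.0]
parts = ["intro", "verse", "chorus", "verse", "chorus", "intro", "chorus"]

# Step 22 sketch: list and index the instants by musical passage
table = [(part, t[i], t[i + 1]) for i, part in enumerate(parts)]

# Querying the table, e.g. to find all verses for the summary of step 24
verses = [row for row in table if row[0] == "verse"]
```

Each row carries the passage label with its start and end instants, which is all the later concatenation steps need.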
- in step 23, a visual summary of this piece of music can then be constructed, as shown in the figure.
- the sound summary is constructed from a verse extracted from the piece, followed by a chorus extracted from the piece.
- a concatenation of the sound samples of the audio signal is prepared between the instants t1 and t2, on the one hand, and between the instants t2 and t3, on the other hand, in the example described. If necessary, the result of this concatenation is stored in a permanent memory MEM for later use, in step 26.
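In its simplest form, this step amounts to slicing the sampled signal at the stored instants and chaining the slices. The sketch below is deliberately naive (straight concatenation, no transition measure; the refined overlap-add comes later in the description), and the sample values are a stand-in:

```python
import numpy as np

def sound_summary(samples, sr, t1, t2, t3):
    """Naive step 24: chain the verse (t1..t2) with the chorus (t2..t3)
    by straight sample concatenation (no crossfade, no transition bar)."""
    verse = samples[int(t1 * sr):int(t2 * sr)]
    chorus = samples[int(t2 * sr):int(t3 * sr)]
    return np.concatenate([verse, chorus])

sr = 1000
audio = np.arange(10 * sr, dtype=float)   # stand-in for a sampled piece
summary = sound_summary(audio, sr, t1=2.0, t2=5.0, t3=8.0)
```

Because t2 is both the end of the verse and the start of the chorus in this example, the two slices happen to be contiguous; the beat-synchronous machinery below handles the general, non-contiguous case.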
- the end instant of an isolated verse and the start instant of an isolated chorus are not necessarily identical; alternatively, one can choose to construct the sound summary from the first verse and the second chorus (between t4 and t5) or the final chorus.
- One of the aims of this concatenation construction is to locally preserve the tempo of the sound signal.
- Another aim is to ensure a temporal distance between concatenation points (or "alignment" points) equal to an integer multiple of the duration of a measure.
- this concatenation is carried out by superposition/addition of sound segments selected and isolated from the two aforementioned respective parts of the piece of music.
- beat synchronization (called "beat-synchronous")
- measure synchronization, according to a preferred embodiment.
- - bpm: the number of beats per minute of a piece of music
- - T: the duration (expressed in seconds) of a beat, that is to say of the reference note D (in the previous example, D is a quarter note).
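The relation between these two quantities is simply T = 60 / bpm, which the following one-liner illustrates (the function name is ours):

```python
def beat_duration(bpm):
    """T, the duration in seconds of one beat at a tempo of bpm beats/min."""
    return 60.0 / bpm

T = beat_duration(120)   # 120 bpm -> half a second per beat
```

Durations such as kT (k beats) or k'NT then follow by multiplication.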
- the segments si(t) and sj(t) are first formed by cutting the audio signal using a time window hL(t), of width L and defined (of non-zero value) between 0 and L.
- This window can be of rectangular type, of so-called “hanning” type, of so-called “level hanning” type, or other.
- a preferred type of time window is obtained by concatenating a rising edge, a landing and a falling edge. The preferred time width of this window is shown below.
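A window of this shape (rising edge, flat landing, falling edge) is a trapezoid. The sketch below builds one; the ramp length relative to the window width is an assumption for illustration:

```python
import numpy as np

def ramp_window(length, ramp):
    """Time window made of a rising edge, a flat landing and a falling
    edge, as described; 'ramp' is the edge length in samples (assumed)."""
    w = np.ones(length)
    w[:ramp] = np.linspace(0.0, 1.0, ramp)    # rising edge
    w[-ramp:] = np.linspace(1.0, 0.0, ramp)   # falling edge
    return w

w = ramp_window(100, 10)
```

Multiplying an audio slice by such a window yields the segments si(t) and sj(t) with smooth boundaries, avoiding clicks at the superposition points.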
- let bi and bj be two respective positions inside the first and second segments, called "synchronization positions", with respect to which the superposition/addition takes place, such that: 0 ≤ bi ≤ L and 0 ≤ bj ≤ L [2]
- T: duration of a beat
- kT: duration of k beats
- the distance between the instants mi and mj is chosen equal to an integer multiple of k'NT, in which N denotes the numerator of the metric.
- FIG. 4 illustrates this situation. Note in FIG. 4 that the width L of the aforementioned time window is close to k'NT (up to the rising and falling edges). One will preferentially choose in this case edge ramps such that k'T ≤ L − 2(bi − mi).
- the instants mi and mj are chosen so that they correspond to first beats of measures. Under these conditions, a so-called "aligned" beat-synchronous superposition/addition is advantageously obtained.
- each integer kj' is defined as the largest integer such that kj'T ≤ Lj − (bj − mj), where Lj corresponds to the width of the window of the j-th musical passage to be concatenated.
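The superposition/addition around synchronization positions can be sketched as follows. This is a simplified illustration of the principle only: it aligns two already-windowed segments so that their positions bi and bj coincide, and omits the patent's bar-duration and first-beat constraints. All index arithmetic is ours:

```python
import numpy as np

def overlap_add(seg_i, seg_j, b_i, b_j):
    """Superpose/add two windowed segments so that their synchronization
    positions b_i and b_j (sample indices inside each segment) coincide."""
    length = max(b_i, b_j) + max(len(seg_i) - b_i, len(seg_j) - b_j)
    out = np.zeros(length)
    off_i = max(b_j - b_i, 0)   # shift so both b's land on the same output index
    off_j = max(b_i - b_j, 0)
    out[off_i:off_i + len(seg_i)] += seg_i
    out[off_j:off_j + len(seg_j)] += seg_j
    return out

a = np.ones(8)   # stand-in for a windowed end segment of the first passage
b = np.ones(6)   # stand-in for a windowed start segment of the second passage
mix = overlap_add(a, b, b_i=6, b_j=2)
```

In the output, both synchronization positions fall on the same sample (index 6 here), which is where the two segments add; with trapezoid windows the overlap becomes a crossfade rather than a doubling.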
- the first beats of measures, the metric, or even the tempo of a piece of music can be detected automatically, for example by using existing software applications.
- the MPEG-7 standard (Audio Version 2) provides for the determination and description of the tempo and the metric of a piece of music, using such software applications.
- the sound summary may include more than two musical passages (for example an introduction, a verse and a chorus), or two passages other than a verse and a chorus, such as the introduction and a chorus.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Electrophonic Musical Instruments (AREA)
- Auxiliary Devices For Music (AREA)
Abstract
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006516296A JP2007520727A (ja) | 2003-06-25 | 2004-06-16 | 楽曲のようなサウンドシーケンスを処理する方法 |
US10/562,242 US20060288849A1 (en) | 2003-06-25 | 2004-06-16 | Method for processing an audio sequence for example a piece of music |
EP04767355A EP1636789A2 (fr) | 2003-06-25 | 2004-06-16 | Procede de traitement d'une sequence sonore, telle qu'un morceau musical |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0307667A FR2856817A1 (fr) | 2003-06-25 | 2003-06-25 | Procede de traitement d'une sequence sonore, telle qu'un morceau musical |
FR03/07667 | 2003-06-25 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2005004002A2 true WO2005004002A2 (fr) | 2005-01-13 |
WO2005004002A3 WO2005004002A3 (fr) | 2005-03-24 |
Family
ID=33515393
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/FR2004/001493 WO2005004002A2 (fr) | 2003-06-25 | 2004-06-16 | Procede de traitement d’une sequence sonore, telle qu’un morceau musical |
Country Status (5)
Country | Link |
---|---|
US (1) | US20060288849A1 (fr) |
EP (1) | EP1636789A2 (fr) |
JP (1) | JP2007520727A (fr) |
FR (1) | FR2856817A1 (fr) |
WO (1) | WO2005004002A2 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009536368A (ja) * | 2006-05-08 | 2009-10-08 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 歌曲を歌詞と並べる方法及び電気デバイス |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7521623B2 (en) | 2004-11-24 | 2009-04-21 | Apple Inc. | Music synchronization arrangement |
US7563971B2 (en) * | 2004-06-02 | 2009-07-21 | Stmicroelectronics Asia Pacific Pte. Ltd. | Energy-based audio pattern recognition with weighting of energy matches |
US7626110B2 (en) * | 2004-06-02 | 2009-12-01 | Stmicroelectronics Asia Pacific Pte. Ltd. | Energy-based audio pattern recognition |
DE102004047069A1 (de) * | 2004-09-28 | 2006-04-06 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und Verfahren zum Ändern einer Segmentierung eines Audiostücks |
DE102004047032A1 (de) * | 2004-09-28 | 2006-04-06 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und Verfahren zum Bezeichnen von verschiedenen Segmentklassen |
US7668610B1 (en) | 2005-11-30 | 2010-02-23 | Google Inc. | Deconstructing electronic media stream into human recognizable portions |
US7826911B1 (en) | 2005-11-30 | 2010-11-02 | Google Inc. | Automatic selection of representative media clips |
US7645929B2 (en) * | 2006-09-11 | 2010-01-12 | Hewlett-Packard Development Company, L.P. | Computational music-tempo estimation |
US8084677B2 (en) * | 2007-12-31 | 2011-12-27 | Orpheus Media Research, Llc | System and method for adaptive melodic segmentation and motivic identification |
EP2096626A1 (fr) * | 2008-02-29 | 2009-09-02 | Sony Corporation | Procédé de visualisation de données audio |
EP2491560B1 (fr) * | 2009-10-19 | 2016-12-21 | Dolby International AB | Metadonnes avec marqueurs temporels pour indiquer des segments audio |
CN102541965B (zh) | 2010-12-30 | 2015-05-20 | 国际商业机器公司 | 自动获得音乐文件中的特征片断的方法和系统 |
FR3028086B1 (fr) * | 2014-11-04 | 2019-06-14 | Universite de Bordeaux | Procede de recherche automatise d'au moins une sous-sequence sonore representative au sein d'une bande sonore |
US10681408B2 (en) | 2015-05-11 | 2020-06-09 | David Leiberman | Systems and methods for creating composite videos |
US9691429B2 (en) * | 2015-05-11 | 2017-06-27 | Mibblio, Inc. | Systems and methods for creating music videos synchronized with an audio track |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001069575A1 (fr) * | 2000-03-13 | 2001-09-20 | Perception Digital Technology (Bvi) Limited | Systeme d'extraction de melodie |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4633749A (en) * | 1984-01-12 | 1987-01-06 | Nippon Gakki Seizo Kabushiki Kaisha | Tone signal generation device for an electronic musical instrument |
JPS61204693A (ja) * | 1985-03-08 | 1986-09-10 | カシオ計算機株式会社 | 自動演奏装置を備えた電子楽器 |
US4926737A (en) * | 1987-04-08 | 1990-05-22 | Casio Computer Co., Ltd. | Automatic composer using input motif information |
US6316712B1 (en) * | 1999-01-25 | 2001-11-13 | Creative Technology Ltd. | Method and apparatus for tempo and downbeat detection and alteration of rhythm in a musical segment |
US7212972B2 (en) * | 1999-12-08 | 2007-05-01 | Ddi Corporation | Audio features description method and audio video features description collection construction method |
-
2003
- 2003-06-25 FR FR0307667A patent/FR2856817A1/fr active Pending
-
2004
- 2004-06-16 WO PCT/FR2004/001493 patent/WO2005004002A2/fr not_active Application Discontinuation
- 2004-06-16 US US10/562,242 patent/US20060288849A1/en not_active Abandoned
- 2004-06-16 JP JP2006516296A patent/JP2007520727A/ja active Pending
- 2004-06-16 EP EP04767355A patent/EP1636789A2/fr not_active Withdrawn
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001069575A1 (fr) * | 2000-03-13 | 2001-09-20 | Perception Digital Technology (Bvi) Limited | Systeme d'extraction de melodie |
Non-Patent Citations (3)
Title |
---|
BARTSCH M A ET AL: "To catch a chorus: using chroma-based representations for audio thumbnailing" IEEE WORKSHOP ON APPLICATIONS OF SIGNAL PROCESSING TO AUDIO AND ACOUSTICS, 21 October 2001 (2001-10-21), pages 15-18, XP010566863 New Paltz, NY * |
SHIH H-H ET AL: "COMPARISON OF DICTIONARY-BASED APPROACHES TO AUTOMATIC REPEATING MELODY EXTRACTION" PROCEEDINGS OF THE SPIE, SPIE, BELLINGHAM, VA, US, vol. 4676, January 2002 (2002-01), pages 306-317, XP001189011 ISSN: 0277-786X * |
YANASE T ET AL: "Phrase based feature extraction for musical information retrieval" COMMUNICATIONS, COMPUTERS AND SIGNAL PROCESSING, 1999 IEEE PACIFIC RIM CONFERENCE ON VICTORIA, BC, CANADA 22-24 AUG. 1999, PISCATAWAY, NJ, USA, IEEE, US, 22 August 1999 (1999-08-22), pages 396-399, XP010356677 ISBN: 0-7803-5582-2 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009536368A (ja) * | 2006-05-08 | 2009-10-08 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 歌曲を歌詞と並べる方法及び電気デバイス |
Also Published As
Publication number | Publication date |
---|---|
US20060288849A1 (en) | 2006-12-28 |
FR2856817A1 (fr) | 2004-12-31 |
JP2007520727A (ja) | 2007-07-26 |
WO2005004002A3 (fr) | 2005-03-24 |
EP1636789A2 (fr) | 2006-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1636789A2 (fr) | Procede de traitement d'une sequence sonore, telle qu'un morceau musical | |
US6910035B2 (en) | System and methods for providing automatic classification of media entities according to consonance properties | |
US7065416B2 (en) | System and methods for providing automatic classification of media entities according to melodic movement properties | |
CA2563420C (fr) | Procede de recherche de contenu, notamment d'extraits communs entre deux fichiers informatiques | |
US20040060426A1 (en) | System and methods for providing automatic classification of media entities according to tempo properties | |
US20030045953A1 (en) | System and methods for providing automatic classification of media entities according to sonic properties | |
LU88189A1 (fr) | Procédés de codage de segments de parole et de controlôle de hauteur de son pour des synthèse de la parole | |
JP2002014691A (ja) | ソース音声信号内の新規点の識別方法 | |
CA2909401C (fr) | Correction de perte de trame par injection de bruit pondere | |
EP1970894A1 (fr) | Procédé et dispositif de modification d'un signal audio | |
KR20080066007A (ko) | 재생용 오디오 프로세싱 방법 및 장치 | |
BE1010336A3 (fr) | Procede de synthese de son. | |
FR2911426A1 (fr) | Modification d'un signal de parole | |
EP3040989A1 (fr) | Procédé de séparation amélioré et produit programme d'ordinateur | |
FR2827069A1 (fr) | Dispositifs et procede de production de musique en fonction de parametres physiologiques | |
FR3013885A1 (fr) | Procede et systeme de separation de contributions specifique et de fond sonore dans un signal acoustique de melange | |
WO2012143659A1 (fr) | Procede d'analyse et de synthese de bruit de moteur, son utilisation et systeme associe | |
WO2022129104A1 (fr) | Procédé et système de synchronisation automatique d'un contenu vidéo et d'un contenu audio | |
FR3028086B1 (fr) | Procede de recherche automatise d'au moins une sous-sequence sonore representative au sein d'une bande sonore | |
Desblancs | Self-supervised beat tracking in musical signals with polyphonic contrastive learning | |
FR2713006A1 (fr) | Appareil et procédé de synthèse de la parole. | |
WO2002097793A1 (fr) | Procede d'extraction de la frequence fondamentale d'un signal sonore | |
WO2007068861A2 (fr) | Procede d'estimation de phase pour la modelisation sinusoidale d'un signal numerique | |
CN114677995A (zh) | 音频处理方法、装置、电子设备及存储介质 | |
Schweitzer | Lully et la prosodie française à la fin du XVIIe siècle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2004767355 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006516296 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006288849 Country of ref document: US Ref document number: 10562242 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 2004767355 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 10562242 Country of ref document: US |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 2004767355 Country of ref document: EP |