EP3457395A1 - Music piece structure analysis device, music piece structure analysis method, and music piece structure analysis program
- Publication number
- EP3457395A1 (application EP16901640.9A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- sections
- transition points
- music piece
- section
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
- G10H2210/061—Musical analysis for extraction of musical phrases, isolation of musically relevant segments, e.g. musical thumbnail generation, or for temporal structure analysis of a musical piece, e.g. determination of the movement sequence of a musical work
- G10H2210/076—Musical analysis for extraction of timing, tempo; Beat detection
- G10H2210/081—Musical analysis for automatic key or tonality recognition, e.g. using musical rules or a knowledge base
- G10H2210/571—Chords; Chord sequences
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
Definitions
- the present invention relates to a music piece structure analyzer, a music piece structure analysis method, and a music piece structure analysis program.
- Patent Literature 1 discloses allocating characteristic sections such as stanza and refrain to music piece data by determining similarity between segments (characteristic sections) having been defined in the music piece data.
- Patent Literature 1: Japanese Patent No. 4775380
- An object of the invention is to provide a music piece structure analyzer, a music piece structure analysis method, and a music piece structure analysis program, which are capable of easily allocating characteristic sections that characterize music piece data.
- according to an aspect of the invention, a music piece structure analyzer is configured to allocate characteristic sections, which characterize a structure of music piece data, to the music piece data comprising transition points that delimit the characteristic sections.
- according to another aspect of the invention, a music piece structure analysis method allocates characteristic sections, which characterize a structure of music piece data, to the music piece data comprising transition points that delimit the characteristic sections.
- according to still another aspect of the invention, a music piece structure analysis program allows a computer to function as the music piece structure analyzer according to the above aspect of the invention.
- Fig. 1 shows a sound control system 1 according to an exemplary embodiment of the invention.
- the sound control system 1 includes two digital players 2, a digital mixer 3, a computer 4, and a speaker 5.
- Each of the digital players 2 includes a jog dial 2A, a plurality of control buttons (not shown), and a display 2B.
- An operator of the digital players 2 operates the jog dial 2A and/or the control buttons to output sound control information corresponding to the operation.
- the sound control information is outputted to the computer 4 through a USB (Universal Serial Bus) cable 6 capable of two-way communication.
- the digital mixer 3 includes control switches 3A, volume adjusting levers 3B, and a right-left switching lever 3C.
- the digital mixer 3 is configured to output sound control information corresponding to an operation on the switches 3A and/or levers 3B, 3C.
- the sound control information is outputted to the computer 4 through a USB cable 7.
- the digital mixer 3 is also configured to receive music piece information processed by the computer 4, where the music piece information, which is inputted in a form of digital signals, is converted into analog signals to be outputted through an analog cable 8 from the speaker 5 as sound.
- the digital players 2 and the digital mixer 3 are connected to each other through an IEEE 1394-compliant LAN (Local Area Network) cable 9, so that the sound control information generated in response to an operation on at least one of the digital players 2 can be outputted directly to the digital mixer 3 for DJ performance without passing through the computer 4.
- Fig. 2 is a functional block diagram of the computer 4 (music piece structure analyzer).
- the computer 4 includes a position information acquiring unit 11, a sound-number analyzing unit 12, a bass-level analyzing unit 13, a ratio calculator 14, a characteristic section allocating unit 15, and a display information generation unit 16, which are provided by a music piece structure analysis program run on an arithmetic processor 10.
- the position information acquiring unit 11 is configured to acquire a bar number of each of transition points set in the music piece data M1 as position information of each of the transition points. Specifically, as shown in Fig. 3 , the position information acquiring unit 11 is configured to acquire the position information (the bar number in the exemplary embodiment) of the transition points P1, P2, ..., Pn, Pn+3, ...Pe-4, which are defined between bars in the music piece data M1.
- the transition points are determined at points between bars by: frequency-analyzing each bar by FFT or the like to obtain the number of sounds (sound number) of different frequencies; counting the peaks of the sound-pressure levels; calculating the ratio of the sound number of each bar to the sound number of the bar having the maximum sound number in the music piece data M1; and finding points between bars at which this ratio changes greatly (a minimal sketch of this process is given below).
- while the transition points can be set by the computer 4 using the sound-number analyzing unit 12, the transition points P1, P2, ..., Pn, Pn+3, ...Pe-4 may be set in advance in the music piece data M1 by analyzing the sound number, as in the exemplary embodiment.
- transition points P1, P2, ..., Pn, Pn+3, ...Pe-4 are not necessarily set according to the above-described process.
- the transition points may be set based on similarity of phrases in the music piece data M1.
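A minimal Python sketch of this bar-wise analysis, assuming the audio has already been decoded into one sample array per bar; the function names, the peak criterion, and the 0.5 ratio-change threshold are illustrative assumptions, not values taken from the patent:

```python
import numpy as np


def sound_number(bar_samples, min_level_db=-60.0):
    """Count spectral peaks in one bar as a rough 'sound number'."""
    windowed = bar_samples * np.hanning(len(bar_samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    level_db = 20 * np.log10(spectrum + 1e-12)
    # A bin counts as a peak if it exceeds both neighbours and a minimum level.
    peaks = (level_db[1:-1] > level_db[:-2]) & \
            (level_db[1:-1] > level_db[2:]) & \
            (level_db[1:-1] > min_level_db)
    return int(np.count_nonzero(peaks))


def transition_points(bars, change_threshold=0.5):
    """Return bar boundaries where the sound-number ratio changes sharply."""
    counts = [sound_number(bar) for bar in bars]
    densest = max(counts) or 1                      # guard against an all-silent piece
    ratios = [c / densest for c in counts]          # ratio to the bar with the most sounds
    points = []
    for i in range(1, len(ratios)):
        if abs(ratios[i] - ratios[i - 1]) >= change_threshold:
            points.append(i)                        # boundary between bar i-1 and bar i
    return points
```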
- the sound-number analyzing unit 12 is configured to detect the signal level in each frequency zone for each of the sections between the transition points P1, P2, ..., Pn, Pn+3, ...Pe-4 of the inputted music piece data M1 in order to analyze the sound number.
- the "sound number" herein may be determined by counting the number of sounds of different frequencies or, alternatively, by counting sounds of fundamental and harmonic frequencies as one (i.e. the same) scale.
- the inputted music piece data M1 may be stored in a hard disk in the computer 4, may be stored on a CD, Blu-ray (trade name) disc, or the like inserted in a slot of at least one of the digital players 2, or may be downloaded from a network through a communication line.
- the sound-number analyzing unit 12 is configured to count the number of frequency peaks in each of the sections between the transition points P1, P2, ..., Pn, Pn+3, ...Pe-4 of the music piece data M1 as the sound number. It should be noted that, although the sounds of different frequencies are analyzed using FFT in the exemplary embodiment, discrete cosine transform or discrete Fourier transform is used for the frequency transformation in some embodiments of the invention.
- the sound-number analyzing unit 12 is configured to output analysis results to the ratio calculator 14.
- the bass-level analyzing unit 13 is configured to determine the chorus section(s) among the sections between the transition points P1, P2, ..., Pn, Pn+3, ...Pe-4 by additionally considering the average of the bass pressure peak levels (bass signal levels) in each of the bar sections.
- the bass-level analyzing unit 13 is configured to analyze the bass pressure peak level of bass sounds of frequencies lower than a predetermined frequency (e.g. 100 Hz) in each of the sections between the transition points P1, P2, ..., Pn, Pn+3, ...Pe-4 of the music piece data M1. Specifically, the bass-level analyzing unit 13 is configured to acquire the bass pressure levels (e.g. bass drum, bass) and calculate the average of the bass pressure peak levels in each of the bar sections to determine the signal level of each of the sections between the transition points P1, P2, ..., Pn, Pn+3, ...Pe-4.
- the bass-level analyzing unit 13 is configured to output the analysis results to the characteristic section allocating unit 15.
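A rough sketch of this low-frequency analysis under the same assumptions as above (per-bar sample arrays); the 100 Hz cutoff follows the example in the text, while the helper names and the definition of the peak level as the maximum spectral magnitude are assumptions:

```python
import numpy as np


def bass_peak_level(bar_samples, sample_rate, cutoff_hz=100.0):
    """Peak spectral magnitude below the cutoff frequency for one bar."""
    spectrum = np.abs(np.fft.rfft(bar_samples))
    freqs = np.fft.rfftfreq(len(bar_samples), d=1.0 / sample_rate)
    low = spectrum[freqs < cutoff_hz]
    return float(low.max()) if low.size else 0.0


def section_bass_level(section_bars, sample_rate):
    """Average of the per-bar bass peak levels over one section."""
    peaks = [bass_peak_level(bar, sample_rate) for bar in section_bars]
    return float(np.mean(peaks)) if peaks else 0.0
```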
- the ratio calculator 14 calculates, for each of the other sections, the ratio of its sound number to that of the nth section (the section with the largest sound number) between the transition points P1, P2, ..., Pn, Pn+3, ...Pe-4, as shown by the figures in the rectangles representing the bars.
- the ratio calculator 14 is configured to output the calculation results to the characteristic section allocating unit 15.
- the characteristic section allocating unit 15 is configured to allocate the characteristic sections (e.g. introduction (Intro) section, A verse (Verse 1) section, B verse (Verse 2) section, chorus (Hook) section, C verse (Verse 3) section, and ending (Outro) section) to the sections between the transition points P1, P2, ..., Pn, Pn+3, ...Pe-4 based on the calculation results of the ratio calculator 14 and the analysis results of the bass-level analyzing unit 13.
- the characteristic section allocating unit 15 is configured to search for sections with local maximums 1, 2, which each have a sound number larger than those of preceding and succeeding sections, based on the analysis results of the sound-number analyzing unit 12 as shown in a graph G1 in Fig. 5 , and determine the sections with the local maximums 1, 2 as possible chorus (Hook) sections.
- the characteristic section allocating unit 15 is configured to acquire the average of the bass pressure peak levels of each of the characteristic sections based on the analysis results of the bass-level analyzing unit 13, and to judge whether the average of the bass pressure peak levels of each of the sections with the local maximums 1, 2 is above a predetermined threshold in order to determine the location of the chorus (Hook) section.
- the characteristic section allocating unit 15 is configured to subsequently allocate the A verse (Verse 1) section to the section before each of the sections with the local maximums 1, 2 (local-maximum sections), the B verse (Verse 2) section to the section after the local-maximum section, and the C verse (Verse 3) section to the section after the B verse section.
- the type of the characteristic sections is determined based on whether the sound number exceeds a predetermined threshold.
- the predetermined threshold may be a fixed threshold smaller than the local maximum. Alternatively, the threshold may be a predetermined ratio to the local maximum (i.e. a threshold variable depending on the local maximum).
- the determined characteristic sections may be named as desired.
- the sections shown in Fig. 5 may be named as A-Verse, B-Verse or the like.
- the characteristic section allocating unit 15 is configured to allocate the introduction section and the ending section in advance.
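A hedged sketch of these allocation rules, operating on precomputed per-section sound numbers and bass averages (for example from the helpers sketched above). The `bass_threshold` and `ratio_threshold` parameters and the label strings are illustrative assumptions, and the Verse 1 / Verse 2 split follows the threshold rule of Steps S12 to S14 described later rather than a fixed positional order:

```python
def allocate_sections(sound_numbers, bass_averages, bass_threshold, ratio_threshold=0.6):
    """Assign one label to each section between transition points."""
    n = len(sound_numbers)
    labels = [None] * n
    labels[0], labels[-1] = "Intro", "Outro"          # end sections allocated in advance

    densest = max(sound_numbers) or 1
    ratios = [c / densest for c in sound_numbers]     # ratio to the largest sound number

    # Hook: a local maximum of the sound number whose bass average is high enough.
    for i in range(1, n - 1):
        if (sound_numbers[i] > sound_numbers[i - 1]
                and sound_numbers[i] > sound_numbers[i + 1]
                and bass_averages[i] > bass_threshold):
            labels[i] = "Hook"

    # Remaining sections: Verse 1 above the ratio threshold, Verse 2 otherwise.
    for i in range(1, n - 1):
        if labels[i] is None:
            labels[i] = "Verse 1" if ratios[i] > ratio_threshold else "Verse 2"
    return labels
```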
- the display information generation unit 16 is configured to generate display information including the characteristic sections allocated by the characteristic section allocating unit 15 together with the music piece data M1. Specifically, the display information generation unit 16 is configured to generate the display information for allowing the characteristic sections to be displayed with the color thereof being changed in conjunction with the progression of the music piece data M1 as shown in Fig. 6 .
- the display information generated by the display information generation unit 16 is outputted to the display 2B (display device of the digital players 2) to allow a DJ performer to know the currently reproduced characteristic section in accordance with the progression of the music piece of the music piece data M1.
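A small sketch of the kind of display information that could be generated here, assuming each allocated section carries a name plus start and end positions; the record layout and the colour map are assumptions rather than the patent's actual display format:

```python
SECTION_COLOURS = {
    "Intro": "grey", "Verse 1": "blue", "Verse 2": "green",
    "Hook": "red", "Outro": "grey",
}


def build_display_info(sections, playback_position):
    """One display record per section; the active flag follows playback progress."""
    info = []
    for name, start, end in sections:
        info.append({
            "name": name,
            "start": start,
            "end": end,
            "colour": SECTION_COLOURS.get(name, "white"),
            "active": start <= playback_position < end,  # the highlighted section changes as the piece plays
        })
    return info
```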
- the position information acquiring unit 11 acquires the position information of the transition points P1, P2, ..., Pn, Pn+3 in the music piece data M1 (Step S1).
- the sound-number analyzing unit 12 analyzes the sound number of each of the sections between the transition points P1, P2, ..., Pn, Pn+3, ...Pe-4 (Step S2).
- the ratio calculator 14 calculates the ratio of the sound number of each of the sections between the transition points P1, P2, ...Pe-4 to the section (between the transition points Pn and Pn+3) with the largest sound number based on the analysis result of the sound-number analyzing unit 12 (Step S3).
- the characteristic section allocating unit 15 allocates the introduction section to the section from the start point of the music piece data M1 to the first transition point P1 (Step S4).
- the characteristic section allocating unit 15 allocates the ending section to the section from the last transition point Pe-4 to the end point of the music piece data M1 (Step S5).
- the characteristic section allocating unit 15 searches for a local maximum in the sections between the transition points other than the introduction section and the ending section (Step S6).
- the searching may be started from the section next to the introduction section or from the section preceding the ending section.
- the characteristic section allocating unit 15 calculates the average of the bass pressure peak levels of each of the sections between the transition points P1, P2, ..., Pn, Pn+3, ...Pe-4 (Step S7).
- the characteristic section allocating unit 15 judges whether the average of the bass pressure peak levels of the section with the local maximum exceeds the predetermined threshold (Step S8).
- when the average of the bass pressure peak levels does not exceed the predetermined threshold, the characteristic section allocating unit 15 searches for the next local maximum.
- when the average exceeds the predetermined threshold, the characteristic section allocating unit 15 allocates the chorus (Hook) section to the section in question (Step S9).
- the characteristic section allocating unit 15 repeats the series of Step S6 to Step S9 for every section with the local maximum in the music piece data M1 (Step S10).
- Step S8 and Step S9 are conducted to improve the detection accuracy of the chorus section; alternatively, the chorus section may be allocated to the corresponding section between the transition points simply by searching for the section between transition points having the local maximum of the sound number.
- the characteristic section allocating unit 15 acquires the respective sound numbers of the sections between the transition points preceding/succeeding each of the sections determined to be the chorus section(s) (Step S11).
- the characteristic section allocating unit 15 judges whether the sound number of each of the sections other than the chorus section(s) exceeds a predetermined threshold (Step S12).
- when the sound number exceeds the predetermined threshold, the characteristic section allocating unit 15 determines that the corresponding section is the A verse (Verse 1) section (Step S13). When the ratio of the sound number is not more than the predetermined ratio, the characteristic section allocating unit 15 determines that the corresponding section is the B verse (Verse 2) section (Step S14).
- the characteristic section allocating unit 15 repeats the above steps until the characteristic section is allocated to all of the sections between the transition points P1, P2, ..., Pn, Pn+3, ...Pe-4 (Step S15).
- when the characteristic sections have been allocated to all of the sections, the characteristic section allocating unit 15 outputs the allocation results to the display information generation unit 16.
- the display information generation unit 16 generates the display information based on the allocation results, and outputs the generated display information to the display 2B of each of the digital players 2 (Step S16).
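For orientation only, a hedged end-to-end sketch of Steps S1 to S16 that reuses the illustrative helpers from the earlier sketches (`sound_number`, `transition_points`, `section_bass_level`, `allocate_sections`, and `build_display_info` are assumed names, not an API defined by the patent); section boundaries are expressed as bar indices here:

```python
def analyze_structure(bars, sample_rate):
    # Steps S1-S2: transition points and per-section sound numbers.
    points = transition_points(bars)
    boundaries = [0] + points + [len(bars)]
    sections = [bars[a:b] for a, b in zip(boundaries, boundaries[1:])]
    counts = [sum(sound_number(bar) for bar in sec) for sec in sections]

    # Step S7: average bass peak level per section.
    bass = [section_bass_level(sec, sample_rate) for sec in sections]

    # Steps S4-S15: Intro/Outro at the ends, Hook at bass-heavy local maxima,
    # Verse 1 / Verse 2 for the rest (ratios are computed inside allocate_sections, Step S3).
    labels = allocate_sections(counts, bass, bass_threshold=0.5 * max(bass))  # illustrative threshold

    # Step S16: hand the result to the display side (playback position 0 as a placeholder).
    records = list(zip(labels, boundaries, boundaries[1:]))
    return build_display_info(records, playback_position=0)
```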
- according to the exemplary embodiment, all of the characteristic sections can be allocated simply by analyzing the sound number with the sound-number analyzing unit 12, thereby allowing easy and rapid allocation of the characteristic sections to the music piece data M1.
- the display information is outputted from the display information generation unit 16 to the display 2B, so that a user conducting a DJ performance can visually understand which characteristic section is currently being played, allowing a higher-level DJ performance.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Auxiliary Devices For Music (AREA)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/063981 WO2017195292A1 (ja) | 2016-05-11 | 2016-05-11 | 楽曲構造解析装置、楽曲構造解析方法および楽曲構造解析プログラム |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3457395A1 (de) | 2019-03-20 |
EP3457395A4 (de) | 2019-10-30 |
Family
ID=60266426
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16901640.9A Withdrawn EP3457395A4 (de) | 2016-05-11 | 2016-05-11 | Musikstrukturanalysevorrichtung, verfahren zur analyse von musikstruktur und musikstrukturanalyseprogramm |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3457395A4 (de) |
JP (1) | JPWO2017195292A1 (de) |
WO (1) | WO2017195292A1 (de) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022070396A1 (ja) * | 2020-10-02 | 2022-04-07 | AlphaTheta株式会社 | 楽曲解析装置、楽曲解析方法およびプログラム |
WO2023054237A1 (ja) * | 2021-09-30 | 2023-04-06 | パイオニア株式会社 | 効果音出力装置 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4926737A (en) * | 1987-04-08 | 1990-05-22 | Casio Computer Co., Ltd. | Automatic composer using input motif information |
DE102004047068A1 (de) | 2004-09-28 | 2006-04-06 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und Verfahren zum Gruppieren von zeitlichen Segmenten eines Musikstücks |
EP2088518A1 (de) * | 2007-12-17 | 2009-08-12 | Sony Corporation | Verfahren zur Musikstrukturanalyse |
JP5760543B2 (ja) * | 2011-03-16 | 2015-08-12 | ヤマハ株式会社 | 抑揚判定装置 |
-
2016
- 2016-05-11 WO PCT/JP2016/063981 patent/WO2017195292A1/ja unknown
- 2016-05-11 EP EP16901640.9A patent/EP3457395A4/de not_active Withdrawn
- 2016-05-11 JP JP2018516262A patent/JPWO2017195292A1/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2017195292A1 (ja) | 2017-11-16 |
EP3457395A4 (de) | 2019-10-30 |
JPWO2017195292A1 (ja) | 2019-03-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3723080A1 (de) | Verfahren zur klassifizierung von musik und vefahren zur erkennung von rhythmuspunkten, speichervorrichtung und computervorrichtung | |
US7507901B2 (en) | Signal processing apparatus and signal processing method, program, and recording medium | |
US11354355B2 (en) | Apparatus, method, and computer-readable medium for cue point generation | |
US10497348B2 (en) | Evaluation device and evaluation method | |
US8865993B2 (en) | Musical composition processing system for processing musical composition for energy level and related methods | |
KR101025163B1 (ko) | 진동 및 소음 전달경로 해석 시스템과 진동 및 소음 전달경로 해석 방법 | |
US10492276B2 (en) | Lighting control device, lighting control method, and lighting control program | |
CN109979418B (zh) | 音频处理方法、装置、电子设备及存储介质 | |
JP5605574B2 (ja) | 多チャンネル音響信号処理方法、そのシステム及びプログラム | |
CN103959031A (zh) | 用于分析音频信息以确定音高和/或分数线性调频斜率的系统及方法 | |
US20170135649A1 (en) | Breath sound analyzing apparatus, breath sound analyzing method, computer program, and recording medium | |
EP3457395A1 (de) | Musikstrukturanalysevorrichtung, verfahren zur analyse von musikstruktur und musikstrukturanalyseprogramm | |
JP6197569B2 (ja) | 音響解析装置 | |
SE1451583A1 (en) | Computer program, apparatus and method for generating a mix of music tracks | |
JP2015114361A (ja) | 音響信号分析装置及び音響信号分析プログラム | |
JP5035815B2 (ja) | 周波数測定装置 | |
JPWO2020245970A1 (ja) | 分析装置 | |
JP6812273B2 (ja) | 楽器音認識装置及び楽器音認識プログラム | |
Ward et al. | Estimating the loudness balance of musical mixtures using audio source separation | |
JP5513074B2 (ja) | グリッド検出装置及びプログラム | |
JP7176113B2 (ja) | 楽曲構造解析装置および楽曲構造解析プログラム | |
US12080262B2 (en) | Musical piece analysis device, program, and musical piece analysis method | |
JP6625202B2 (ja) | 楽曲構造解析装置、楽曲構造解析方法および楽曲構造解析プログラム | |
CN112908289B (zh) | 节拍确定方法、装置、设备和存储介质 | |
JP7175395B2 (ja) | 楽曲構造解析装置および楽曲構造解析プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20181108 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| AX | Request for extension of the european patent | Extension state: BA ME |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| A4 | Supplementary search report drawn up and despatched | Effective date: 20191002 |
| RIC1 | Information provided on ipc code assigned before grant | Ipc: G10H 1/00 20060101AFI20190926BHEP |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 20200603 |