WO2022202199A1 - Chord estimation device, training device, chord estimation method, and training method - Google Patents
- Publication number
- WO2022202199A1 (PCT/JP2022/009233)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- series data
- string
- information
- time
- chord
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10G—REPRESENTATION OF MUSIC; RECORDING MUSIC IN NOTATION FORM; ACCESSORIES FOR MUSIC OR MUSICAL INSTRUMENTS NOT OTHERWISE PROVIDED FOR, e.g. SUPPORTS
- G10G3/00—Recording music in notation form, e.g. recording the mechanical operation of a musical instrument
- G10G3/04—Recording music in notation form, e.g. recording the mechanical operation of a musical instrument using electrical means
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/36—Accompaniment arrangements
- G10H1/38—Chord
Definitions
- The present invention relates to a chord estimation device and method for estimating chords for playing a musical instrument, and to a training device and method for constructing such a chord estimation device.
- Conventionally, a chord is estimated for each specific section; for example, one chord is estimated per bar. If chord estimation could be performed with a higher degree of freedom from the given notes, it is expected that the production of musical scores with chords could be supported more appropriately.
- The purpose of the present invention is to perform chord estimation with a high degree of freedom based on a string of musical notes.
- A chord estimation apparatus includes a receiving unit that receives time-series data including a note string composed of a plurality of notes, and an estimating unit that uses a trained model to estimate chord string information indicating a chord string corresponding to the note string, based on the time-series data.
- A training apparatus includes a first acquisition unit that acquires input time-series data including a reference note string composed of a plurality of notes, a second acquisition unit that acquires output chord string information indicating a chord string corresponding to the reference note string, and a construction unit that constructs a trained model that has learned the input/output relationship between the input time-series data and the output chord string information.
- A chord estimation method is executed by a computer: it accepts time-series data including a note string and, using a trained model, estimates chord string information indicating a chord string corresponding to the note string, based on the time-series data.
- A training method is executed by a computer: it acquires input time-series data including a reference note string consisting of a plurality of notes, acquires output chord string information indicating a chord string corresponding to the reference note string, and constructs a trained model that has learned the input/output relationship between the input time-series data and the output chord string information.
- With these, chord estimation with a high degree of freedom can be performed based on a string of musical notes.
- FIG. 1 is a block diagram showing the configuration of a processing system including a chord estimation device and a training device according to one embodiment of the present invention.
- FIG. 2 is a diagram showing an example of input time-series data included in training data.
- FIG. 3 is a diagram showing an example of output chord string information included in the training data.
- FIG. 4 is a block diagram showing the configuration of the training device and chord estimation device.
- FIG. 5 shows an example of an arranged musical score displayed on the display unit.
- FIG. 6 is a flowchart showing an example of training processing.
- FIG. 7 is a flowchart showing an example of chord estimation processing.
- FIG. 8 is a diagram showing a modified example of output chord string information included in the training data.
- FIG. 1 is a block diagram showing the configuration of a processing system including a chord estimation device and a training device according to one embodiment of the present invention.
- The processing system 100 includes a RAM (Random Access Memory) 110, a ROM (Read Only Memory) 120, a CPU (Central Processing Unit) 130, a storage unit 140, an operation unit 150, and a display unit 160.
- the processing system 100 is implemented by a computer such as a personal computer, tablet terminal, or smart phone.
- the processing system 100 may be realized by cooperative operation of a plurality of computers connected by a communication path such as Ethernet, or may be realized by an electronic musical instrument such as an electronic piano having performance functions.
- The RAM 110, ROM 120, CPU 130, storage unit 140, operation unit 150, and display unit 160 are connected to a bus 170.
- The RAM 110, ROM 120, and CPU 130 constitute the training device 10 and the chord estimation device 20.
- Although the training device 10 and the chord estimation device 20 are configured by the common processing system 100 in this embodiment, they may be configured by separate processing systems.
- the RAM 110 consists of, for example, a volatile memory, and is used as a work area for the CPU 130.
- The ROM 120 is, for example, a non-volatile memory and stores a training program and a chord estimation program.
- The CPU 130 performs training processing by executing the training program stored in the ROM 120 on the RAM 110. The CPU 130 also performs chord estimation processing by executing the chord estimation program stored in the ROM 120 on the RAM 110. Details of the training processing and the chord estimation processing will be described later.
- The training program or the chord estimation program may be stored in the storage unit 140 instead of the ROM 120.
- The training program or the chord estimation program may be provided in a form stored in a computer-readable storage medium and installed in the ROM 120 or the storage unit 140.
- Alternatively, a training program or chord estimation program distributed from a server (including a cloud server) on a network may be installed in the ROM 120 or the storage unit 140.
- the storage unit 140 includes a storage medium such as a hard disk, an optical disk, a magnetic disk, or a memory card, and stores a trained model M and a plurality of training data D.
- The trained model M or each piece of training data D may be stored in a computer-readable storage medium instead of the storage unit 140.
- Alternatively, the trained model M or the training data D may be stored on a server on a network.
- The trained model M is a machine learning model that has been trained to present chord strings to be referred to when the user of the chord estimation device 20 (hereinafter referred to as a performer) plays a piece of music.
- a trained model M is constructed using a plurality of training data D.
- a user of the training device 10 can generate the training data D by operating the operation unit 150 .
- the training data D is data created based on the musical knowledge or musical sense of the reference performer.
- the reference performer has a relatively high level of skill in playing the piece of music.
- a reference performer may be the performer's mentor or teacher in the performance of the musical composition.
- The training data D indicates a set of input time-series data and output chord string information.
- The input time-series data indicates a reference note string consisting of a plurality of notes.
- The input time-series data is data in which a plurality of notes form a melody or an accompaniment.
- The input time-series data may be image data representing an image of a musical score.
- The output chord string information is data in which chords corresponding to the reference note string are arranged in time series. The chord string corresponding to the reference note string is provided by the reference performer.
- FIGS. 2 and 3 are diagrams showing an example of the training data D.
- The example in FIG. 2 shows input time-series data including a reference note string consisting of a plurality of notes.
- The example in FIG. 3 shows output chord string information indicating a chord string corresponding to the reference note string.
- the input time-series data has a metrical structure and additional information in addition to the reference note string.
- the input time-series data A shown in FIG. 2 is data obtained by extracting data for the first two bars of a song. In the input time-series data A, bars are separated by "bar", and beats are separated by "beat". In this way, the input time-series data A has a metrical structure with the "bar” and "beat” information.
- Elements A1 to A37 indicate the reference note string of the first bar. That is, the elements A1 to A37 are separated into bars by the "bar” before the element A1 and the "bar” after the element A37. In addition, it is divided into beats by "beat” after elements A8, A18, and A26.
- the element A0 is additional information.
- As the additional information, for example, key information, genre information, difficulty level information, and the like are used.
- The key information is added by a "Key" element.
- The key information is information specifying the key of the music represented by the reference note string.
- The numerical value following "Key" designates the key.
- Genre information is information that designates the genre of music represented by the reference note string.
- As the genre information, for example, a genre such as rock, pop, or jazz is specified. By designating genre information as additional information, reference note strings and chord strings corresponding to the genre are machine-learned.
- the difficulty level information is information indicating the difficulty level of the musical score indicated by the reference note string.
- By designating difficulty level information as additional information, a chord string corresponding to the difficulty level of the reference note string and score is machine-learned. For example, in the case of a score with a low difficulty level, machine learning is performed while interpolating notes from a small number of tones; in the case of a score with a high difficulty level, machine learning is performed while selecting the notes that form chords from an excessive number of tones.
- elements other than the element A0, "bar” and “beat” correspond to the reference note string.
- Elements A1 to A37 indicate the reference note string of the first measure.
- the element A0 is placed at the beginning of the input time-series data A, that is, before the reference note string (elements A1 to A37), but it may be placed at any position in the input time-series data A.
- In the elements A1 to A37 of the reference note string, "L" means the left hand, "R" means the right hand, and the number following "L" or "R" indicates the pitch. "on" and "off" mean key depression and key release, respectively. "wait" means waiting, and the number following "wait" indicates the length of time.
- Elements A1 to A5 indicate pressing the keys of pitches 77 and 74 with the right hand while simultaneously pressing the keys of pitches 53 and 46 with the left hand, followed by holding for 11 units of time.
- Elements A6 to A8 indicate that the left-hand keys of pitches 53 and 46 are released at the same time and then held for one unit of time. Elements A9 to A11 indicate that the left hand then presses pitches 53 and 46 again and waits for five units of time.
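The token layout described above can be sketched as follows. This is an illustrative Python reconstruction; the exact element syntax (e.g. whether a key-press token reads "L53 on") is an assumption, since the publication describes the elements only informally.

```python
# Illustrative sketch of building an input token sequence like the
# input time-series data A ("Key<n>", "bar", "beat", hand/pitch on/off,
# "wait<t>"). The concrete token spelling is an assumption.
def note_events_to_tokens(key, events):
    """Convert event tuples into a flat token list.

    events: ("bar", None), ("beat", None), ("wait", t),
            or (hand, action, pitch) such as ("L", "on", 53).
    """
    tokens = [f"Key{key}"]          # additional information placed first
    for ev in events:
        if ev[0] in ("bar", "beat"):    # metrical-structure delimiters
            tokens.append(ev[0])
        elif ev[0] == "wait":           # duration token
            tokens.append(f"wait{ev[1]}")
        else:                           # key press/release token
            hand, action, pitch = ev
            tokens.append(f"{hand}{pitch} {action}")
    return tokens

tokens = note_events_to_tokens(3, [
    ("bar", None),
    ("L", "on", 53), ("L", "on", 46),
    ("R", "on", 77), ("R", "on", 74),
    ("wait", 11),
    ("beat", None),
    ("L", "off", 53), ("L", "off", 46),
    ("wait", 1),
])
print(tokens)
```

A sequence built this way mirrors the structure of elements A0 to A11: the key designation, a "bar" delimiter, simultaneous left- and right-hand presses, and a hold of 11 time units.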
- The output chord string information B shown in FIG. 3 indicates a chord string corresponding to the reference note string included in the input time-series data A.
- The chord string corresponding to elements A1 to A37 of the input time-series data A is represented by elements B1 to B3 and elements B4 to B6. That is, elements B1 to B6 indicate the chord string corresponding to the first bar of the input time-series data A.
- In the output chord string information B, bars are also separated by "bar" and beats by "beat". The range delimited by the "bar" before element B1 and the "bar" after element B6 corresponds to the first bar.
- One chord is indicated by three elements.
- Elements B1 to B3 define the chord of the first beat of the first bar.
- Elements B4 to B6 define the chord of the fourth beat of the first bar.
- Elements B7 to B9 define the chord of the first beat of the fourth bar.
- Of the three elements indicating a chord, the first element (B1, B4, B7) represents basic chord information.
- The basic chord information (chord) indicates a numerical value from 1 to 24 that designates a major or minor chord for each of the 12 tones (C, C#, D, D#, ..., A, A#, B).
- The second element (B2, B5, B8) of the three elements indicating a chord represents chord type information.
- The chord type information indicates a numerical value designating the type of tension chord.
- The third element (B3, B6, B9) represents chord root information.
- The chord root information (root) indicates a numerical value designating the root note of an on-chord (slash chord).
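As a rough illustration, the 1-to-24 basic chord numbering described above might be produced as follows. The specific mapping used here (majors 1 to 12, minors 13 to 24) is an assumption for illustration; the publication does not disclose the exact numbering.

```python
# Illustrative sketch of the first of the three chord elements: a number
# from 1 to 24 selecting a major or minor chord on one of the 12 tones.
# The concrete numbering (majors 1-12, minors 13-24) is assumed.
PITCHES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def encode_basic_chord(root_name, quality):
    """Map a root name and 'major'/'minor' quality to a value in 1..24."""
    idx = PITCHES.index(root_name) + 1   # 1-based tone index
    return idx if quality == "major" else idx + 12

print(encode_basic_chord("C", "major"))  # 1
print(encode_basic_chord("A", "minor"))  # 22
```

The chord type (tension) and chord root (on-chord bass) elements would be separate numerical codes alongside this value, completing the three-element representation of one chord.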
- FIG. 4 is a block diagram showing the configuration of the training device 10 and the chord estimation device 20.
- the training device 10 includes a first acquisition unit 11, a second acquisition unit 12, and a construction unit 13 as functional units.
- the functional units of the training device 10 are implemented by the CPU 130 of FIG. 1 executing the training program. At least part of the functional units of the training device 10 may be realized by hardware such as an electronic circuit.
- The first acquisition unit 11 acquires the input time-series data A from each piece of training data D stored in the storage unit 140 or the like.
- The second acquisition unit 12 acquires the output chord string information B from each piece of training data D.
- The construction unit 13 performs machine learning using the input time-series data A acquired by the first acquisition unit 11 as an input element and the output chord string information B acquired by the second acquisition unit 12 as an output element. By repeating the machine learning for a plurality of pieces of training data D, the construction unit 13 constructs a trained model M indicating the input/output relationship between the input time-series data A and the output chord string information B.
- In this embodiment, the construction unit 13 constructs the trained model M by training a Transformer, but the embodiment is not limited to this.
- The construction unit 13 may construct the trained model M by training another type of machine learning model that handles time series.
- the trained model M constructed by the construction unit 13 is stored in the storage unit 140, for example.
- the trained model M constructed by the construction unit 13 may be stored in a server or the like on the network.
- The chord estimation device 20 includes a reception unit 21, an estimation unit 22, and a generation unit 23 as functional units.
- The functional units of the chord estimation device 20 are implemented by the CPU 130 of FIG. 1 executing the chord estimation program. At least part of the functional units of the chord estimation device 20 may be realized by hardware such as an electronic circuit.
- the reception unit 21 receives time-series data including a string of notes made up of a plurality of notes.
- the performer can give image data representing an image of the musical score to the reception unit 21 as time-series data.
- the performer can generate time-series data by operating the operation unit 150 and provide it to the reception unit 21 .
- The time-series data has the same configuration as the input time-series data A in FIG. 2. In other words, the time-series data has a metrical structure and additional information in addition to a string of musical notes.
- The estimation unit 22 estimates chord string information using the trained model M stored in the storage unit 140 or the like.
- The chord string information indicates a chord string corresponding to the note string accepted by the reception unit 21, and is estimated based on the note string and the additional information. Since the time-series data has the same configuration as the input time-series data A, the chord string information has the same configuration as the output chord string information B.
- The generation unit 23 generates musical score information based on the note string of the time-series data received by the reception unit 21 and the chord string information estimated by the estimation unit 22.
- The musical score information is, for example, information on a musical score arranged for piano, in which chord information is added to staff notation.
- Alternatively, the musical score information is MIDI data to which the chord string information is added.
- The display unit 160 displays a musical score with chords based on the musical score information generated by the generation unit 23.
- FIG. 5 shows an example of a musical score with chords displayed on the display unit 160.
- In the musical score with chords, the chord string information estimated by the estimation unit 22 is shown so as to correspond to the notes of the note string accepted by the reception unit 21.
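The role of the generation unit 23 can be sketched as pairing each estimated chord with the bar of notes it governs. The data shapes and the plain-text rendering below are illustrative assumptions, not the MIDI or staff-notation output the embodiment describes.

```python
# Illustrative sketch of score generation: align estimated chords with
# bars of the note string and render a simple chord-over-notes text view.
# Data shapes are assumed for illustration.
def attach_chords(bars, chords_per_bar):
    """bars: list of note-name lists, one list per bar.
    chords_per_bar: list of chord-name lists aligned with bars."""
    lines = []
    for bar_notes, bar_chords in zip(bars, chords_per_bar):
        # chord symbols on the left, the notes of the bar on the right
        lines.append(" ".join(bar_chords).ljust(16) + "| " + " ".join(bar_notes))
    return "\n".join(lines)

score = attach_chords(
    [["C4", "E4", "G4", "C5"], ["F4", "A4", "C5"]],
    [["C"], ["F", "G7"]],
)
print(score)
```

A real generation unit would emit structured score information instead of text, but the alignment step, one chord (or several) per delimited region of the note string, is the same.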
- FIG. 6 is a flowchart showing an example of training processing by the training apparatus 10 of FIG.
- the training process in FIG. 6 is performed by CPU 130 in FIG. 1 executing a training program.
- First, the first acquisition unit 11 acquires the input time-series data A from each piece of training data D (step S1).
- Next, the second acquisition unit 12 acquires the output chord string information B from each piece of training data D (step S2). Either of steps S1 and S2 may be performed first, or they may be performed simultaneously.
- The construction unit 13 performs machine learning using the input time-series data A obtained in step S1 as an input element and the output chord string information B obtained in step S2 as an output element (step S3). Subsequently, the construction unit 13 determines whether or not sufficient machine learning has been performed (step S4). If the machine learning is insufficient, the construction unit 13 returns to step S3. Steps S3 and S4 are repeated, with the parameters being updated, until sufficient machine learning has been performed. The number of iterations of the machine learning changes according to the quality conditions that the trained model M to be constructed should satisfy.
- Finally, the construction unit 13 saves the input/output relationship between the input time-series data A and the output chord string information B learned by the machine learning in step S3 as the trained model M (step S5). This completes the training processing.
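The loop of steps S1 to S5 can be sketched as follows, with a trivial lookup table standing in for the Transformer-based trained model M; the accuracy check is a stand-in for the quality condition mentioned above, and all names are illustrative.

```python
# Illustrative sketch of the training flow in Fig. 6 (steps S1-S5).
# A dictionary stands in for the trained model M; real training would
# update Transformer parameters instead.
def train(training_data, quality_threshold=1.0):
    model = {}                              # stands in for trained model M
    accuracy = 0.0
    while accuracy < quality_threshold:     # step S4: repeat until sufficient
        for inp, out in training_data:      # step S3: learn input/output pairs
            model[tuple(inp)] = out
        correct = sum(model.get(tuple(inp)) == out
                      for inp, out in training_data)
        accuracy = correct / len(training_data)
    return model                            # step S5: save the learned relation

data = [
    (["bar", "L53 on", "beat"], ["bar", "chord1"]),   # (input A, output B)
    (["bar", "R77 on", "beat"], ["bar", "chord8"]),
]
M = train(data)
print(M[("bar", "L53 on", "beat")])
```

The structure, acquire inputs and outputs, fit until a quality condition holds, then persist the model, matches steps S1 through S5 regardless of which time-series learner is used.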
- FIG. 7 is a flowchart showing an example of chord estimation processing by the chord estimation device 20 of FIG.
- the chord estimation process in FIG. 7 is performed by CPU 130 in FIG. 1 executing a chord estimation program.
- the receiving unit 21 receives time-series data (step S11).
- The estimation unit 22 estimates chord string information from the time-series data received in step S11, using the trained model M saved in step S5 of the training processing (step S12).
- Chord string information including one or more chord strings is estimated from the note string included in the time-series data.
- Chord estimation is thus performed with a high degree of freedom.
- Since the chord change timing is also estimated along the course of time, more appropriate chord estimation is performed.
- The time-series data does not contain information that serves as a chord change delimiter, yet the estimation unit 22 performs chord estimation including the chord change timing.
- After that, the generation unit 23 generates musical score information based on the note string of the time-series data received in step S11 and the chord string information estimated in step S12 (step S13). A musical score with chords may be displayed on the display unit 160 based on the generated musical score information. This completes the chord estimation processing.
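Steps S11 to S13 can be sketched end to end as follows. The dictionary stands in for the trained model M, and all names, including the "N.C." (no chord) fallback, are illustrative assumptions rather than details from the publication.

```python
# Illustrative sketch of the chord estimation flow in Fig. 7 (S11-S13).
def estimate_chords(model, time_series):
    """Steps S11-S12: accept time-series data and estimate chord string
    information with the stand-in model; fall back to 'no chord'."""
    return model.get(tuple(time_series), ["N.C."])

def generate_score_info(time_series, chord_info):
    """Step S13: combine the note string and the estimated chords into
    a simple score-information record (shape assumed for illustration)."""
    return {"notes": list(time_series), "chords": chord_info}

M = {("bar", "L53 on", "R77 on", "beat"): ["bar", "C", "beat"]}
ts = ("bar", "L53 on", "R77 on", "beat")
info = generate_score_info(ts, estimate_chords(M, ts))
print(info["chords"])  # ['bar', 'C', 'beat']
```

Note that the estimated output carries its own "bar" and "beat" delimiters, which is how chord change timing can be expressed even though the input contains no chord-change markers.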
- As described above, the chord estimation device 20 includes the reception unit 21 that receives time-series data including a note string composed of a plurality of notes, and the estimation unit 22 that uses the trained model M to estimate chord string information indicating a chord string corresponding to the note string, based on the time-series data.
- The trained model M is used to estimate appropriate chord string information from the temporal flow of the plurality of notes in the time-series data. This makes it possible to present a musical score with chords based on time-series data including a note string. Since one or more chord strings are estimated from the note string, chord estimation is performed with a high degree of freedom.
- The trained model M may be a machine learning model that has learned the input/output relationship between input time-series data A including a reference note string consisting of a plurality of notes and output chord string information B indicating a chord string corresponding to the reference note string. In this case, chord string information can be easily estimated from the time-series data.
- The estimation unit 22 may also estimate the chord change timing in the chord string. As a result, more appropriate chord estimation corresponding to the note string is performed.
- the input time-series data A may include genre information specifying the genre of music represented by the reference note string.
- the time-series data may also include genre information that designates the genre of music represented by a string of musical notes.
- The estimation unit 22 may estimate the chord string information based on the time-series data including the genre information. In this way, chord estimation suitable for the genre of the music is performed.
- the input time-series data A may include key information that specifies the key of the music represented by the reference note string.
- the time-series data may also include key information that specifies the key of music represented by a string of notes.
- The estimation unit 22 may estimate the chord string information based on the time-series data including the key information. This provides chord estimation appropriate for the key of the music.
- the input time-series data A may include difficulty level information specifying the difficulty level of the musical score indicated by the reference note string.
- the time-series data may also include difficulty level information that designates the difficulty level of the musical score indicated by the note string.
- The estimation unit 22 may estimate the chord string information based on the time-series data including the difficulty level information. As a result, appropriate chord estimation is performed according to the difficulty level of the musical score indicated by the note string.
- The chord estimation device 20 may further include the generation unit 23 that generates musical score information indicating a musical score with chords, in which the chord string information is added so as to correspond to the notes of the note string.
- The training device 10 includes the first acquisition unit 11 that acquires input time-series data A including a reference note string composed of a plurality of notes, the second acquisition unit 12 that acquires output chord string information B indicating a chord string corresponding to the reference note string, and the construction unit 13 that constructs a trained model M that has learned the input/output relationship between the input time-series data A and the output chord string information B.
- With this configuration, a trained model M that has learned the input/output relationship between the input time-series data A and the output chord string information B can be easily constructed.
- In the above embodiment, the input time-series data A and the time-series data include additional information, but the embodiment is not limited to this.
- the input time-series data A only needs to include the reference note string, and does not have to include additional information.
- the time-series data may include musical note sequences and may not include additional information.
- the input time-series data A has "bar” and "beat” information as the metrical structure, but the embodiment is not limited to this.
- the input time-series data A may not have a metrical structure.
- FIG. 8 is a diagram showing an example of output chord string information B prepared for input time-series data A having no metrical structure. As shown in FIG. 8, this output chord string information B does not have the metrical structure consisting of "bar" and "beat" information.
- the construction unit 13 may construct different trained models M according to the type of additional information, or may construct one trained model M.
- the input time-series data A may include, as additional information, a plurality of information out of key information, genre information, and difficulty level information.
- In the above embodiment, the chord estimation device 20 includes the generation unit 23, but the embodiment is not limited to this.
- The performer can create a musical score with chords by transcribing the chord string information estimated by the estimation unit 22 onto a desired musical score. Therefore, the chord estimation device 20 does not have to include the generation unit 23.
- In the above embodiment, the trained model M is trained, using the training data D, to estimate chord string information for performance on the piano, but the embodiment is not limited to this.
- The trained model M may be trained to estimate chord string information for performance with other musical instruments such as guitars and drums.
- the user of the chord estimation device 20 is a performer.
- the machine learning by the training device 10 may be performed in advance by the staff of the musical score production company.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202280023333.9A CN117043852A (zh) | 2021-03-26 | 2022-03-03 | 和弦推定装置、训练装置、和弦推定方法及训练方法 |
JP2023508892A JPWO2022202199A1 (fr) | 2021-03-26 | 2022-03-03 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-052532 | 2021-03-26 | ||
JP2021052532 | 2021-03-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022202199A1 true WO2022202199A1 (fr) | 2022-09-29 |
Family
ID=83396894
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/009233 WO2022202199A1 (fr) | 2021-03-26 | 2022-03-03 | Dispositif d'estimation de code, dispositif d'apprentissage, procédé d'estimation de code et procédé d'apprentissage |
Country Status (3)
Country | Link |
---|---|
JP (1) | JPWO2022202199A1 (fr) |
CN (1) | CN117043852A (fr) |
WO (1) | WO2022202199A1 (fr) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015031738A (ja) * | 2013-07-31 | 2015-02-16 | 株式会社河合楽器製作所 | コード進行推定検出装置及びコード進行推定検出プログラム |
WO2020145326A1 (fr) * | 2019-01-11 | 2020-07-16 | ヤマハ株式会社 | Procédé d'analyse acoustique et dispositif d'analyse acoustique |
-
2022
- 2022-03-03 CN CN202280023333.9A patent/CN117043852A/zh active Pending
- 2022-03-03 JP JP2023508892A patent/JPWO2022202199A1/ja active Pending
- 2022-03-03 WO PCT/JP2022/009233 patent/WO2022202199A1/fr active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015031738A (ja) * | 2013-07-31 | 2015-02-16 | 株式会社河合楽器製作所 | コード進行推定検出装置及びコード進行推定検出プログラム |
WO2020145326A1 (fr) * | 2019-01-11 | 2020-07-16 | ヤマハ株式会社 | Procédé d'analyse acoustique et dispositif d'analyse acoustique |
Also Published As
Publication number | Publication date |
---|---|
CN117043852A (zh) | 2023-11-10 |
JPWO2022202199A1 (fr) | 2022-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111630590B (zh) | 生成音乐数据的方法 | |
KR101942814B1 (ko) | 사용자 허밍 멜로디 기반 반주 제공 방법 및 이를 위한 장치 | |
EP3489946A1 (fr) | Assistance de performance en temps réel pour groupe pour musiciens | |
US7411125B2 (en) | Chord estimation apparatus and method | |
JP2019152716A (ja) | 情報処理方法および情報処理装置 | |
JP6565528B2 (ja) | 自動アレンジ装置及びプログラム | |
JP6760450B2 (ja) | 自動アレンジ方法 | |
US6984781B2 (en) | Music formulation | |
CN112420003B (zh) | 伴奏的生成方法、装置、电子设备及计算机可读存储介质 | |
WO2022202199A1 (fr) | Dispositif d'estimation de code, dispositif d'apprentissage, procédé d'estimation de code et procédé d'apprentissage | |
JP6693176B2 (ja) | 歌詞生成装置および歌詞生成方法 | |
JP6645085B2 (ja) | 自動アレンジ装置及びプログラム | |
JP7375302B2 (ja) | 音響解析方法、音響解析装置およびプログラム | |
US20220383843A1 (en) | Arrangement generation method, arrangement generation device, and generation program | |
CN113870818B (zh) | 歌曲和弦编配模型的训练方法、装置、介质和计算设备 | |
KR102490769B1 (ko) | 음악적 요소를 이용한 인공지능 기반의 발레동작 평가 방법 및 장치 | |
JP2019109357A (ja) | 音楽情報の特徴解析方法及びその装置 | |
JPWO2022113914A5 (fr) | ||
Vargas et al. | Artificial musical pattern generation with genetic algorithms | |
Suthaphan et al. | Music generator for elderly using deep learning | |
WO2022244403A1 (fr) | Dispositif et procédé d'écriture de partition de musique, dispositif et procédé d'entraînement | |
JPH04157499A (ja) | 自動リズム生成装置 | |
JP7552740B2 (ja) | 音響解析システム、電子楽器および音響解析方法 | |
US20240420668A1 (en) | Systems, methods, and computer program products for generating motif structures and music conforming to motif structures | |
WO2022215250A1 (fr) | Dispositif de sélection de musique, dispositif de création de modèle, programme, procédé de sélection de musique, et procédé de création de modèle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22774995 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280023333.9 Country of ref document: CN Ref document number: 2023508892 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22774995 Country of ref document: EP Kind code of ref document: A1 |