CN117043852A - Chord estimation device, training device, chord estimation method, and training method - Google Patents


Info

Publication number
CN117043852A
Authority
CN
China
Prior art keywords: chord, string, series data, information, time
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280023333.9A
Other languages
Chinese (zh)
Inventor
铃木正博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yamaha Corp
Original Assignee
Yamaha Corp
Application filed by Yamaha Corp
Publication of CN117043852A


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10G: REPRESENTATION OF MUSIC; RECORDING MUSIC IN NOTATION FORM; ACCESSORIES FOR MUSIC OR MUSICAL INSTRUMENTS NOT OTHERWISE PROVIDED FOR, e.g. SUPPORTS
    • G10G3/00: Recording music in notation form, e.g. recording the mechanical operation of a musical instrument
    • G10G3/04: Recording music in notation form using electrical means
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00: Details of electrophonic musical instruments
    • G10H1/36: Accompaniment arrangements
    • G10H1/38: Chord

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

The chord estimation device includes: a receiving unit that receives time-series data including a musical note string composed of a plurality of musical notes; and an estimating unit that estimates chord string information indicating a chord string corresponding to the note string based on the time-series data, using the trained model.

Description

Chord estimation device, training device, chord estimation method, and training method
Technical Field
The present invention relates to a chord estimation device and chord estimation method for estimating chords for playing a musical instrument, and to a training device and training method for constructing such a chord estimation device.
Background
Some scores are provided with chords. A player can enjoy instrumental performance by playing the chords on an instrument such as a piano or guitar. When producing a score with chords, the creator assigns chords based on the melody, accompaniment, and the like indicated by the notes. Assigning chords requires musical knowledge and judgment. Patent document 1 discloses a chord progression estimation and detection device that estimates chords from performance information or acoustic signals.
Patent document 1: Japanese Patent No. 6151121
Disclosure of Invention
In patent document 1, one chord is estimated for each fixed section, for example one chord per bar. If chord estimation with a higher degree of freedom could be performed from the given notes, the production of scores with chords could be assisted more appropriately.
An object of the present invention is to perform chord estimation with a high degree of freedom based on a note string.
The chord estimation device according to an aspect of the present invention includes: a receiving unit that receives time-series data including a musical note string composed of a plurality of musical notes; and an estimating unit that estimates chord string information indicating a chord string corresponding to the note string based on the time-series data, using the trained model.
Another aspect of the present invention relates to a training device including: a 1st acquisition unit that acquires input time-series data including a reference note string composed of a plurality of notes; a 2nd acquisition unit that acquires output chord string information indicating a chord string corresponding to the reference note string; and a construction unit that constructs a trained model in which the input-output relationship between the input time-series data and the output chord string information is learned.
A chord estimation method according to still another aspect of the present invention is a method executed by a computer, in which time-series data including a note string composed of a plurality of notes is received, and chord string information indicating a chord string corresponding to the note string is estimated based on the time-series data using a trained model.
A training method according to still another aspect of the present invention is a method executed by a computer, in which input time-series data including a reference note string composed of a plurality of notes is acquired, output chord string information indicating a chord string corresponding to the reference note string is acquired, and a trained model in which the input-output relationship between the input time-series data and the output chord string information is learned is constructed.
ADVANTAGEOUS EFFECTS OF INVENTION
According to the present invention, chord estimation with a high degree of freedom can be performed based on a note string.
Drawings
Fig. 1 is a block diagram showing a configuration of a processing system including a chord estimation device and a training device according to an embodiment of the present invention.
Fig. 2 is a diagram showing an example of input time-series data included in training data.
Fig. 3 is a diagram showing an example of output chord string information included in training data.
Fig. 4 is a block diagram showing the configuration of the training device and the chord estimation device.
Fig. 5 shows an example of an arranged score displayed on the display unit.
Fig. 6 is a flowchart showing an example of the training process.
Fig. 7 is a flowchart showing an example of chord estimation processing.
Fig. 8 is a diagram showing a modification of the output chord string information included in the training data.
Detailed Description
[1] Embodiment 1
(1) Structure of processing system
The chord estimation device, the training device, the chord estimation method, and the training method according to the embodiment of the present invention will be described in detail with reference to the drawings. Fig. 1 is a block diagram showing a configuration of a processing system including a chord estimation device and a training device according to embodiment 1 of the present invention. As shown in fig. 1, the processing system 100 includes a RAM (random access memory) 110, a ROM (read only memory) 120, a CPU (central processing unit) 130, a storage unit 140, an operation unit 150, and a display unit 160.
The processing system 100 is implemented by a computer such as a personal computer, a tablet terminal, or a smartphone. Alternatively, the processing system 100 may be realized by the cooperative operation of a plurality of computers connected via a communication channel such as Ethernet, or by an electronic musical instrument with a performance function, such as an electronic piano.
The RAM 110, the ROM 120, the CPU 130, the storage unit 140, the operation unit 150, and the display unit 160 are connected to the bus 170. The training device 10 and the chord estimation device 20 are constituted by the RAM 110, the ROM 120, and the CPU 130. In the present embodiment, the training device 10 and the chord estimation device 20 are configured by a common processing system 100, but may be configured by different processing systems.
The RAM 110 consists of, for example, volatile memory and serves as a work area for the CPU 130. The ROM 120 consists of, for example, nonvolatile memory and stores a training program and a chord estimation program. The CPU 130 performs the training process by executing the training program stored in the ROM 120 on the RAM 110, and performs the chord estimation process by executing the chord estimation program stored in the ROM 120 on the RAM 110. Details of the training process and the chord estimation process will be described later.
The training program or the chord estimation program may be stored in the storage unit 140 instead of the ROM 120. Alternatively, the training program or the chord estimation program may be provided in a form stored on a computer-readable storage medium and installed in the ROM 120 or the storage unit 140. Alternatively, when the processing system 100 is connected to a network such as the Internet, a training program or chord estimation program distributed from a server (including a cloud server) on the network may be installed in the ROM 120 or the storage unit 140.
The storage unit 140 includes a storage medium such as a hard disk, an optical disk, a magnetic disk, or a memory card, and stores the trained model M and a plurality of training data D. The trained model M or each training data D may be stored in a computer-readable storage medium, instead of the storage unit 140. Alternatively, in the case where the processing system 100 is connected to a network, the trained model M or each training data D may be stored on a server on the network.
(2) Training data
The trained model M is a machine learning model trained to present a chord string that a user of the chord estimation device 20 (hereinafter referred to as a player) can refer to when playing a musical piece. The trained model M is constructed using a plurality of training data D. A user of the training device 10 can generate training data D by operating the operation unit 150. The training data D is created based on, for example, the musical knowledge or musical style of a reference player. The reference player is highly skilled in performing the piece, and may be someone who directs or instructs the player's performance of the piece.
Each training data D represents a pair of input time-series data and output chord string information. The input time-series data represents a reference note string composed of a plurality of notes; for example, it is data for a melody and accompaniment composed of a plurality of notes. The input time-series data may also be image data representing an image of a score. The output chord string information is data in which the chords corresponding to the reference note string are arranged in time series. The chord string corresponding to the reference note string is assigned by the reference player.
Figs. 2 and 3 show an example of the training data D. Fig. 2 shows an example of input time-series data including a reference note string composed of a plurality of notes, and Fig. 3 shows the output chord string information indicating the chord string corresponding to that reference note string.
In the present embodiment, the input time-series data has a beat structure and additional information in addition to the reference note string. The input time-series data A shown in Fig. 2 is an extract of the first two bars of a musical piece. In the input time-series data A, bars are delimited by "bar" and beats by "beat"; this "bar" and "beat" information gives the input time-series data A its beat structure. Elements A1 to A37 represent the reference note string of the first bar; that is, elements A1 to A37 are enclosed by the "bar" before element A1 and the "bar" after element A37. Beats are delimited by the "beat" markers after elements A8, A18, and A26.
Element A0 is additional information. As the additional information, tone information, genre information, and difficulty information, for example, are used. In the example of Fig. 2, tone information is added by the Key element. The tone information specifies the key of the music represented by the reference note string, and the value following Key designates that key. By specifying tone information as additional information, the chord string corresponding to the reference note string and the key is machine-learned. The genre information specifies the genre, such as rock, pop, or jazz, of the music represented by the reference note string. By specifying genre information as additional information, the chord string corresponding to the reference note string and the genre is machine-learned. The difficulty information indicates the difficulty of the score represented by the reference note string. By specifying difficulty information as additional information, the chord string corresponding to the reference note string and the difficulty of the score is machine-learned. For example, for a low-difficulty score, machine learning is performed such that chord notes are interpolated from a small number of notes; for a high-difficulty score, machine learning is performed such that the notes constituting a chord are selected from an abundance of notes.
Among the elements of the input time-series data A, the elements other than element A0, "bar", and "beat" correspond to the reference note string. Elements A1 to A37 represent the reference note string of the first bar. In this example, element A0 is placed at the beginning of the input time-series data A, that is, before the reference note string (elements A1 to A37), but it may be placed at any position in the input time-series data A.
As illustrated by elements A1 to A37, in the reference note string "L" denotes the left hand, "R" denotes the right hand, and the numeral following "L" or "R" denotes the pitch. "on" and "off" denote a key press and a key release, respectively. "wait" denotes waiting, and the number following "wait" denotes a length of time. Thus, elements A1 to A5 indicate that the keys for pitches 77 and 74 are pressed simultaneously with the right hand, the keys for pitches 53 and 46 are pressed simultaneously with the left hand, and this state is held for 11 time units. Elements A6 to A8 then indicate that the left-hand keys for pitches 53 and 46 are released simultaneously and 1 time unit elapses. Elements A9 to A11 indicate that pitches 53 and 46 are pressed again with the left hand, followed by a wait of 5 time units.
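By way of illustration, the following is a minimal Python sketch of how such an event sequence might be assembled into the token style of Fig. 2. The token spellings, the NoteEvent helper, the separator characters, and the Key value are assumptions made for this sketch (the figure does not fix an exact textual format), and beat markers are omitted for brevity.

    from dataclasses import dataclass

    @dataclass
    class NoteEvent:
        hand: str   # "L" (left hand) or "R" (right hand)
        kind: str   # "on" (key press) or "off" (key release)
        pitch: int  # pitch number, e.g. 77

    def encode_bar(events, key=None):
        """Encode one bar of events; an int n in the list means "wait n units"."""
        tokens = []
        if key is not None:
            tokens.append(f"Key:{key}")  # additional information (element A0)
        tokens.append("bar")             # bar delimiter
        for ev in events:
            if isinstance(ev, int):
                tokens.append(f"wait:{ev}")
            else:
                tokens.append(f"{ev.hand}:{ev.kind}:{ev.pitch}")
        tokens.append("bar")
        return tokens

    # Elements A1 to A5 of Fig. 2: right hand presses 77 and 74, left hand
    # presses 53 and 46, then 11 time units elapse. Key value is illustrative.
    print(encode_bar([NoteEvent("R", "on", 77), NoteEvent("R", "on", 74),
                      NoteEvent("L", "on", 53), NoteEvent("L", "on", 46), 11],
                     key=3))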
The output chord string information B shown in Fig. 3 indicates the chord string corresponding to the reference note string included in the input time-series data A. The chord string corresponding to elements A1 to A37 of the input time-series data A is represented by elements B1 to B3 and elements B4 to B6; that is, elements B1 to B6 represent the chord string corresponding to the first bar of the input time-series data A. In the output chord string information B, bars are likewise delimited by "bar" and beats by "beat". The range enclosed by the "bar" before element B1 and the "bar" after element B6 corresponds to the first bar.
In the output chord string information B, one chord is represented by three elements. Elements B1 to B3 specify the chord on the 1st beat of the 1st bar, elements B4 to B6 specify the chord on the 4th beat of the 1st bar, and elements B7 to B9 specify the chord on the 1st beat of the 4th bar. The 1st of the three elements representing a chord (B1, B4, B7) carries the basic chord information: the basic chord information (chord) is a numerical value from 1 to 24 that designates a major or minor chord for each of the 12 pitch classes (C, C#, D, D#, ..., A#, B). The 2nd element (B2, B5, B8) carries the chord type information: the chord type information (type) is a numerical value designating the kind of tension chord. The 3rd element (B3, B6, B9) carries the chord root information: the chord root information (root) is a numerical value designating the root of an on-chord (slash chord).
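To make this three-element encoding concrete, here is a small sketch assuming the layout described above; the exact 1-to-24 numbering and the type/root code tables are not given in the text, so the choices below (1 to 12 for major, 13 to 24 for minor, 0 meaning "none") are illustrative assumptions.

    PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                     "F#", "G", "G#", "A", "A#", "B"]

    def basic_chord_number(pitch_class: str, minor: bool = False) -> int:
        """Map pitch class and major/minor quality to a value in 1..24.

        Assumed layout: 1..12 = C..B major, 13..24 = C..B minor.
        """
        return PITCH_CLASSES.index(pitch_class) + 1 + (12 if minor else 0)

    def chord_triple(pitch_class, minor=False, tension_type=0, on_chord_root=0):
        """Build a (chord, type, root) triple in the style of Fig. 3.

        tension_type: assumed numeric code for the tension-chord kind (0 = none).
        on_chord_root: assumed numeric code for the on-chord root (0 = own root).
        """
        return (basic_chord_number(pitch_class, minor),
                tension_type, on_chord_root)

    print(chord_triple("C"))              # plain C major -> (1, 0, 0)
    print(chord_triple("A", minor=True))  # A minor -> (22, 0, 0)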
(3) Training device and chord estimating device
Fig. 4 is a block diagram showing the configuration of the training device 10 and the chord estimation device 20. As shown in Fig. 4, the training device 10 includes a 1st acquisition unit 11, a 2nd acquisition unit 12, and a construction unit 13 as functional units. These functional units are realized by the CPU 130 of Fig. 1 executing the training program. At least some of the functional units of the training device 10 may be realized by hardware such as electronic circuits.
The 1st acquisition unit 11 acquires the input time-series data A from each training data D stored in the storage unit 140 or the like. The 2nd acquisition unit 12 acquires the output chord string information B from each training data D. The construction unit 13 performs, for each training data D, machine learning that takes the input time-series data A acquired by the 1st acquisition unit 11 as the input element and the output chord string information B acquired by the 2nd acquisition unit 12 as the output element. By repeating this machine learning over the plurality of training data D, the construction unit 13 constructs a trained model M representing the input-output relationship between the input time-series data A and the output chord string information B.
In the present example, the construction unit 13 constructs the trained model M by training a transformer, but the embodiment is not limited to this. The construction unit 13 may construct the trained model M by training another type of machine learning model that processes time series. The trained model M constructed by the construction unit 13 is stored, for example, in the storage unit 140. The trained model M constructed by the construction unit 13 may instead be stored on a server or the like on the network.
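The text specifies a transformer but no architecture details, so the following PyTorch sketch shows only one plausible shape for the model M: an encoder-decoder transformer over the token vocabularies of Figs. 2 and 3. The class name, all layer sizes, and the vocabulary handling are assumptions; positional encodings are omitted for brevity.

    import torch.nn as nn

    class ChordEstimator(nn.Module):
        """Encoder-decoder transformer mapping note-event tokens to chord tokens."""

        def __init__(self, note_vocab: int, chord_vocab: int, d_model: int = 256):
            super().__init__()
            self.src_emb = nn.Embedding(note_vocab, d_model)
            self.tgt_emb = nn.Embedding(chord_vocab, d_model)
            self.transformer = nn.Transformer(
                d_model=d_model, nhead=4,
                num_encoder_layers=4, num_decoder_layers=4,
                batch_first=True)
            self.out = nn.Linear(d_model, chord_vocab)

        def forward(self, src_ids, tgt_ids):
            # Causal mask so each chord token attends only to earlier ones.
            mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
            h = self.transformer(self.src_emb(src_ids), self.tgt_emb(tgt_ids),
                                 tgt_mask=mask)
            return self.out(h)  # (batch, tgt_len, chord_vocab) logits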
The chord estimation device 20 includes a receiving unit 21, an estimating unit 22, and a generating unit 23 as functional units. These functional units are realized by the CPU 130 of Fig. 1 executing the chord estimation program. At least some of the functional units of the chord estimation device 20 may be realized by hardware such as electronic circuits.
In the present embodiment, the receiving unit 21 receives time-series data including a note string composed of a plurality of notes. The player can supply image data representing an image of a score to the receiving unit 21 as the time-series data. Alternatively, the player can generate time-series data by operating the operation unit 150 and supply it to the receiving unit 21. In this example, the time-series data has the same structure as the input time-series data A of Fig. 2; that is, it has a beat structure and additional information in addition to the note string.
The estimating unit 22 estimates chord string information using the trained model M stored in the storage unit 140 or the like. The chord string information indicates the chord string corresponding to the note string received by the receiving unit 21, and is estimated based on the note string and the additional information. Because the time-series data has the same structure as the input time-series data A, the chord string information has the same structure as the output chord string information B. The generating unit 23 generates score information based on the note string of the time-series data received by the receiving unit 21 and the chord string information estimated by the estimating unit 22. The score information is, for example, information on a score arranged for piano, that is, data in which the chord string information is attached to a staff. Alternatively, the score information is MIDI data to which the chord string information is attached.
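As one way the generating unit 23 might produce such MIDI data, the sketch below attaches each estimated chord as a text meta event on its own track. The use of the mido library and the text-event convention are assumptions, since the text does not fix a concrete MIDI encoding.

    import mido

    def attach_chords(notes_track: mido.MidiTrack, chords, ticks_per_beat=480):
        """Return MIDI data whose second track carries the chord labels.

        chords: list of (beat_position, chord_name), e.g. [(0, "C"), (3, "G/B")].
        """
        mid = mido.MidiFile(type=1, ticks_per_beat=ticks_per_beat)
        mid.tracks.append(notes_track)            # the note string itself
        chord_track = mido.MidiTrack()
        prev_tick = 0
        for beat, name in sorted(chords):
            tick = int(beat * ticks_per_beat)     # absolute position of the chord
            chord_track.append(mido.MetaMessage("text", text=name,
                                                time=tick - prev_tick))
            prev_tick = tick
        mid.tracks.append(chord_track)
        return mid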
The display unit 160 displays a score with chords based on the score information generated by the generating unit 23. Fig. 5 shows an example of the score with chords displayed on the display unit 160. As shown in Fig. 5, the score is displayed with the chord string information estimated by the estimating unit 22 attached so as to correspond to the notes of the note string received by the receiving unit 21.
(4) Training process and chord estimation process
Fig. 6 is a flowchart showing an example of the training process performed by the training device 10 of Fig. 4. The training process of Fig. 6 is performed by the CPU 130 of Fig. 1 executing the training program. First, the 1st acquisition unit 11 acquires the input time-series data A from each training data D (step S1). The 2nd acquisition unit 12 acquires the output chord string information B from each training data D (step S2). Steps S1 and S2 may be executed in either order, or simultaneously.
Next, the construction unit 13 performs, for each training data D, machine learning that takes the input time-series data A acquired in step S1 as the input element and the output chord string information B acquired in step S2 as the output element (step S3). The construction unit 13 then determines whether sufficient machine learning has been performed (step S4). If machine learning is insufficient, the construction unit 13 returns to step S3, and steps S3 and S4 are repeated with updated parameters until sufficient machine learning has been performed. The number of machine-learning iterations varies according to the quality requirements that the constructed trained model M must satisfy.
When sufficient machine learning has been performed, the construction unit 13 saves the input-output relationship between the input time-series data A and the output chord string information B learned in step S3 as the trained model M (step S5). The training process then ends.
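Steps S1 to S5 amount to an ordinary supervised training loop. A minimal sketch follows, reusing the hypothetical ChordEstimator above with teacher forcing; the loss-threshold stopping rule stands in for the "sufficient machine learning" check of step S4 and is an assumption.

    import torch
    import torch.nn.functional as F

    def train(model, dataset, epochs=50, lr=1e-4, target_loss=0.1):
        """dataset: list of (src_ids, tgt_ids) pairs built from training data D."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for epoch in range(epochs):
            total = 0.0
            for src, tgt in dataset:                  # steps S1 and S2
                logits = model(src, tgt[:, :-1])      # step S3: teacher forcing
                loss = F.cross_entropy(logits.transpose(1, 2), tgt[:, 1:])
                opt.zero_grad()
                loss.backward()
                opt.step()
                total += loss.item()
            if total / len(dataset) < target_loss:    # step S4: sufficient?
                break
        return model                                  # step S5: save as model M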
Fig. 7 is a flowchart showing an example of the chord estimation process performed by the chord estimation device 20 of Fig. 4. The chord estimation process of Fig. 7 is performed by the CPU 130 of Fig. 1 executing the chord estimation program. First, the receiving unit 21 receives time-series data (step S11). Next, the estimating unit 22 estimates chord string information from the time-series data received in step S11, using the trained model M saved in step S5 of the training process (step S12). Here, chord string information containing one or more chords is estimated from the note string included in the time-series data, so chord estimation with a high degree of freedom can be performed. Furthermore, since the timing of each chord change along the time axis is also estimated, more appropriate chord estimation can be performed. That is, although the time-series data contains no information about the division points at which chords change, the estimating unit 22 performs chord estimation that includes the timing of chord changes.
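Step S12 corresponds to autoregressive decoding with the trained model M. The greedy-decoding sketch below makes this explicit, under the same assumptions as the model sketch above; the BOS/EOS token ids are illustrative.

    import torch

    @torch.no_grad()
    def estimate_chords(model, src_ids, bos_id=1, eos_id=2, max_len=512):
        """Greedily decode chord string tokens from note-event tokens."""
        model.eval()
        tgt = torch.tensor([[bos_id]])
        for _ in range(max_len):
            logits = model(src_ids, tgt)
            next_id = logits[0, -1].argmax().item()  # most likely next token
            tgt = torch.cat([tgt, torch.tensor([[next_id]])], dim=1)
            if next_id == eos_id:                    # chord string complete
                break
        return tgt[0, 1:].tolist()  # bar/beat markers and chord triples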
Then, the generating unit 23 generates score information based on the note string of the time-series data received in step S11 and the chord string information estimated in step S12 (step S13). A score with chords may be displayed on the display unit 160 based on the generated score information. The chord estimation process then ends.
(5) Effects of the embodiments
As described above, the chord estimation device 20 according to the present embodiment includes: the receiving unit 21, which receives time-series data including a note string composed of a plurality of notes; and the estimating unit 22, which uses the trained model M to estimate chord string information indicating a chord string corresponding to the note string. With this configuration, appropriate chord string information is estimated from the temporal profile of the plurality of notes in the time-series data using the trained model M. A score with chords can thereby be presented based on time-series data including a note string. Since one or more chords are estimated from the note string, chord estimation with a high degree of freedom can be performed.
The trained model M may be a machine learning model in which the input-output relationship between input time-series data A including a reference note string composed of a plurality of notes and output chord string information B indicating a chord string corresponding to the reference note string has been learned. In this case, chord string information can easily be estimated from the time-series data.
The estimating unit 22 may estimate the timing of the chord change of the chord string. This enables more appropriate chord estimation corresponding to the note string.
The input time-series data A may contain genre information specifying the genre of the music represented by the reference note string. Likewise, the time-series data may contain genre information specifying the genre of the music represented by the note string, and the estimating unit 22 may estimate the chord string information based on the time-series data including the genre information. This enables chord estimation suited to the genre of the music.
The input time-series data A may contain tone information specifying the key of the music represented by the reference note string. Likewise, the time-series data may contain tone information specifying the key of the music represented by the note string, and the estimating unit 22 may estimate the chord string information based on the time-series data including the tone information. This enables chord estimation suited to the key of the music.
The input time-series data A may contain difficulty information specifying the difficulty of the score represented by the reference note string. Likewise, the time-series data may contain difficulty information specifying the difficulty of the score represented by the note string, and the estimating unit 22 may estimate the chord string information based on the time-series data including the difficulty information. This enables chord estimation appropriate to the difficulty of the score represented by the note string.
The chord estimation device 20 may further include the generating unit 23, which generates score information indicating a score with chords to which the chord string information is attached so as to correspond to each note of the note string.
The training device 10 according to the present embodiment includes: the 1st acquisition unit 11, which acquires input time-series data A including a reference note string composed of a plurality of notes; the 2nd acquisition unit 12, which acquires output chord string information B indicating a chord string corresponding to the reference note string; and the construction unit 13, which constructs the trained model M in which the input-output relationship between the input time-series data A and the output chord string information B is learned. With this configuration, a trained model M that has learned the input-output relationship between the input time-series data A and the output chord string information B can easily be constructed.
(6) Other embodiments
In the above embodiment, both the input time-series data A and the time-series data contain additional information, but the embodiment is not limited to this. The input time-series data A may include only the reference note string, without additional information. Similarly, the time-series data need not include additional information as long as it includes a note string.
In the above embodiment, the input time-series data A has "bar" and "beat" information as its beat structure, but the embodiment is not limited to this. The input time-series data A need not have a beat structure. Fig. 8 shows an example of output chord string information B prepared for input time-series data A without a beat structure. As shown in Fig. 8, this output chord string information B has no beat structure composed of "bar" and "beat" information.
In the above embodiment, the case where the input time-series data A has tone information, genre information, and difficulty information as additional information was described as an example. The construction unit 13 may construct a separate trained model M for each type of additional information, or may construct a single trained model M. Alternatively, the input time-series data A may include, as additional information, two or more of the tone information, the genre information, and the difficulty information.
In the above embodiment, the chord estimation device 20 includes the generating unit 23, but the embodiment is not limited to this. A player can create a score with chords by transcribing the chord string information estimated by the estimating unit 22 onto a desired score. Therefore, the chord estimation device 20 need not include the generating unit 23.
In the above embodiment, the training data D was for estimating chord string information for performance on a piano, but the embodiment is not limited to this. The training data D may be for estimating chord string information for performance on a guitar, drums, or another instrument.
In the above embodiment, the case where the user of the chord estimation device 20 is a player was described as an example, but the user of the chord estimation device 20 may be, for example, a staff member of a score production company. The machine learning by the training device 10 may likewise be performed in advance by staff members of the score production company.

Claims (16)

1. A chord estimation device, comprising:
a receiving unit that receives time-series data including a musical note string composed of a plurality of musical notes; and
an estimating unit that estimates chord string information indicating a chord string corresponding to the note string based on the time-series data, using a trained model.
2. The chord estimation device according to claim 1, wherein,
the trained model is a model in which input-output relationships between input time-series data including a reference note string composed of a plurality of notes and output chord string information indicating a chord string corresponding to the reference note string are learned.
3. The chord estimation device according to claim 1 or 2, wherein,
the estimating unit also estimates the timing of chord change of the chord string.
4. The chord estimation device according to claim 2, wherein,
the input time-series data includes genre information specifying a genre of music represented by the reference note string,
the time-series data includes genre information specifying a genre of music represented by the note string,
the estimating unit estimates the chord string information based on the time-series data including genre information.
5. The chord estimation device according to claim 2, wherein,
the input time-series data includes tone information specifying a tone of music represented by the reference note string,
the time-series data includes tone information specifying a tone of music represented by the note string,
the estimating unit estimates the chord string information based on the time-series data including tone information.
6. The chord estimation device according to claim 2, wherein,
the input time-series data contains difficulty information specifying difficulty of a score represented by the reference note string,
the time-series data contains difficulty information specifying difficulty of a score represented by the note string,
the estimating unit estimates the chord string information based on the time-series data including the difficulty level information.
7. The chord estimation device according to any one of claims 1 to 6, further comprising
a generating unit that generates score information indicating a score with chords, to which the chord string information is attached so as to correspond to each note of the note string.
8. A training device, comprising:
a1 st acquisition unit that acquires input time-series data including a reference note string composed of a plurality of notes;
a2 nd acquisition unit that acquires output chord string information indicating a chord string corresponding to the reference note string; and
a construction unit that constructs a trained model in which the input-output relationship between the input time-series data and the output chord string information is learned.
10. A chord estimation method executed by a computer, the method comprising:
receiving time-series data including a note string composed of a plurality of notes; and
estimating, using a trained model, chord string information indicating a chord string corresponding to the note string based on the time-series data.
10. The chord estimation method performed by the computer according to claim 9, wherein,
the trained model is a model in which input-output relationships between input time-series data including a reference note string composed of a plurality of notes and output chord string information indicating a chord string corresponding to the reference note string are learned.
11. The chord estimation method performed by the computer according to claim 9 or 10, wherein,
the estimating further estimates the timing of chord changes in the chord string.
12. The chord estimation method performed by the computer according to claim 10, wherein,
the input time-series data includes genre information specifying a genre of music represented by the reference note string,
the time-series data includes genre information specifying a genre of music represented by the note string,
the estimating estimates the chord string information based on the time-series data including the genre information.
13. The chord estimation method performed by the computer according to claim 10, wherein,
the input time-series data includes tone information specifying a tone of music represented by the reference note string,
the time-series data includes tone information specifying a tone of music represented by the note string,
the estimating estimates the chord string information based on the time-series data including the tone information.
14. The chord estimation method performed by the computer according to claim 10, wherein,
the input time-series data contains difficulty information specifying difficulty of a score represented by the reference note string,
the time-series data contains difficulty information specifying difficulty of a score represented by the note string,
the estimating estimates the chord string information based on the time-series data including the difficulty information.
15. The chord estimation method performed by the computer according to any one of claims 9 to 14, wherein,
score information indicating a score with chords, to which the chord string information is attached so as to correspond to each note of the note string, is further generated.
16. A training method executed by a computer, the method comprising:
acquiring input time-series data including a reference note string composed of a plurality of notes;
acquiring output chord string information indicating a chord string corresponding to the reference note string; and
constructing a trained model in which the input-output relationship between the input time-series data and the output chord string information is learned.
CN202280023333.9A 2021-03-26 2022-03-03 Chord estimation device, training device, chord estimation method, and training method Pending CN117043852A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2021-052532 2021-03-26
PCT/JP2022/009233 WO2022202199A1 (en) 2021-03-26 2022-03-03 Code estimation device, training device, code estimation method, and training method

Publications (1)

Publication Number | Publication Date
CN117043852A (en) | 2023-11-10

Family

ID=83396894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280023333.9A Pending CN117043852A (en) 2021-03-26 2022-03-03 Chord estimation device, training device, chord estimation method, and training method

Country Status (3)

Country Link
JP (1) JPWO2022202199A1 (en)
CN (1) CN117043852A (en)
WO (1) WO2022202199A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6151121B2 (en) * 2013-07-31 2017-06-21 株式会社河合楽器製作所 Chord progression estimation detection apparatus and chord progression estimation detection program
JP7375302B2 (en) * 2019-01-11 2023-11-08 ヤマハ株式会社 Acoustic analysis method, acoustic analysis device and program

Also Published As

Publication number Publication date
WO2022202199A1 (en) 2022-09-29
JPWO2022202199A1 (en) 2022-09-29

Similar Documents

Publication Publication Date Title
JP6019858B2 (en) Music analysis apparatus and music analysis method
JP5454317B2 (en) Acoustic analyzer
CN111630590B (en) Method for generating music data
KR101942814B1 (en) Method for providing accompaniment based on user humming melody and apparatus for the same
EP3489946A1 (en) Real-time jamming assistance for groups of musicians
CN111613199A (en) A MIDI sequence generation device based on music theory and statistical rules
JP6565528B2 (en) Automatic arrangement device and program
JP7343012B2 (en) Information processing device and information processing method
CN112420003B (en) Accompaniment generation method and device, electronic equipment and computer readable storage medium
US8704067B2 (en) Musical score playing device and musical score playing program
CN113196381B (en) Acoustic analysis method and acoustic analysis device
CN117043852A (en) Chord estimation device, training device, chord estimation method, and training method
CN110959172B (en) Performance analysis method, performance analysis device, and storage medium
US20220383843A1 (en) Arrangement generation method, arrangement generation device, and generation program
CN113870817A (en) Automatic song editing method, automatic song editing device and computer program product
JPWO2022113914A5 (en)
JP3531507B2 (en) Music generating apparatus and computer-readable recording medium storing music generating program
JP6077492B2 (en) Information processing apparatus, information processing method, and program
WO2022244403A1 (en) Musical score writing device, training device, musical score writing method and training method
US20240420668A1 (en) Systems, methods, and computer program products for generating motif structures and music conforming to motif structures
WO2022190453A1 (en) Fingering presentation device, training device, fingering presentation method, and training method
US20250014543A1 (en) Data output method and data output device
JPH04157499A (en) Automatic rhythm creation device
JP5960488B2 (en) Musical score playing apparatus and musical score playing program
JP5988672B2 (en) Musical score playing apparatus and musical score playing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination