CN111613199B - MIDI sequence generating device based on music theory and statistical rule - Google Patents

MIDI sequence generating device based on music theory and statistical rule

Info

Publication number
CN111613199B
CN111613199B CN202010398381.9A CN202010398381A CN 111613199 B
Authority
CN
China
Prior art keywords
pitch
tone
chord
note
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010398381.9A
Other languages
Chinese (zh)
Other versions
CN111613199A (en)
Inventor
计紫豪 (Ji Zihao)
李晨啸 (Li Chenxiao)
张克俊 (Zhang Kejun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuyiyue Technology Hangzhou Co ltd
Zhejiang University ZJU
Original Assignee
Fuyiyue Technology Hangzhou Co ltd
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuyiyue Technology Hangzhou Co ltd, Zhejiang University ZJU filed Critical Fuyiyue Technology Hangzhou Co ltd
Priority to CN202010398381.9A priority Critical patent/CN111613199B/en
Publication of CN111613199A publication Critical patent/CN111613199A/en
Application granted granted Critical
Publication of CN111613199B publication Critical patent/CN111613199B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 7/00 - Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2210/00 - Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H 2210/101 - Music Composition or musical creation; Tools or processes therefor
    • G10H 2210/111 - Automatic composing, i.e. using predefined musical rules
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2210/00 - Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H 2210/101 - Music Composition or musical creation; Tools or processes therefor
    • G10H 2210/145 - Composing rules, e.g. harmonic or musical rules, for use in automatic composition; Rule generation algorithms therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Auxiliary Devices For Music (AREA)

Abstract

The invention discloses a MIDI sequence generating device based on music theory and statistical rules. The device comprises a computer system configured to: receive a chord information sequence, a rhythm pattern and a key; initialize the initial pitch probabilities of the 12 tones within one octave according to the received key; perform chord correction, melody motion correction, repeated-note correction and register correction on the initial pitch probabilities of the 12 tones according to a chord constraint, a melody motion constraint, a repeated-note constraint and a register constraint, obtaining the pitch probability distribution of the 12 tones; select notes from the pitch probability distribution in descending order of pitch probability, screen each candidate against a set of conditions, and add the notes satisfying the screening conditions to the generated melody; and combine the generated melody with the rhythm pattern to generate a MIDI file. The MIDI sequence generating device can generate diverse MIDI sequences from a given chord progression, rhythm pattern and key.

Description

MIDI sequence generating device based on music theory and statistical rule
Technical Field
The invention relates to the technical field of music science and technology, in particular to a MIDI sequence generating device based on music theory and statistical rules.
Background
MIDI is an industry-standard electronic communication protocol that defines notes and control signals for a variety of electronic musical instruments and other performance equipment. It is therefore a way of representing music symbolically and is widely used in music production.
With the development and popularization of artificial intelligence technology, computer composition has become a new branch of music production, and symbolic music composition, i.e. MIDI sequence generation, has become an important part of it. Much previous work on MIDI sequence generation has used different approaches, each with its own background and advantages.
Currently mainstream MIDI sequence generation methods fall broadly into two categories. The first generates music entirely from music theory rules, i.e., composition rules are extracted from existing works and MIDI is created according to those rules. This approach can generate music that closely resembles existing music; examples include David Cope's EMI (Experiments in Musical Intelligence) system and Ebcioğlu's expert system for harmonizing Bach chorales. However, it is very difficult to extract all the music theory rules of a given style from existing works and apply them during generation, and style-based rules also keep evolving over time.
The second category uses machine learning and deep learning techniques to generate MIDI. These methods can make the generated music more diverse and can generate music end to end; typical examples are the MusicVAE multi-track music creation system by Adam Roberts et al. and the MuseGAN symbolic music and accompaniment creation system by Hao-Wen Dong et al. However, existing work still has many shortcomings: the quality of the generated music is generally not high, single-track MIDI output contains many wrong notes, and the harmony between the tracks of multi-track MIDI output is poor.
In summary, existing MIDI sequence generation methods cannot guarantee the diversity of the generated MIDI while keeping melody generation controllable, and the uniformity of existing MIDI generation frameworks has also become a bottleneck for the MIDI sequence generation problem.
Disclosure of Invention
The invention aims to provide a MIDI sequence generating device based on music theory and statistical rules that can generate diverse MIDI sequences from a given chord progression, rhythm pattern and key.
The technical scheme of the invention is as follows:
a MIDI sequence generation apparatus based on music theory and statistical rules, comprising a computer system configured to:
receiving a chord information sequence, a rhythm type and a tone;
initializing the initial probability of the pitches of 12 tones in one octave according to the received tone;
chord correction, melody motion correction, repeat sound correction and sound threshold correction are carried out on the tone pitch initial probabilities of the 12 tones according to chord restriction, melody motion restriction, repeat sound restriction and sound threshold restriction, and the tone pitch probability distribution of the 12 tones is obtained;
selecting a note from the pitch probability distribution in sequence according to the sequence of the pitch from high to low to carry out condition screening, and adding the note meeting the screening condition to the generated melody;
the generated melody and rhythm are integrated to generate a MIDI file.
The chord information sequence contains chords and their corresponding numbers of beats. A chord progression usually implies a key, so the key can also be derived automatically from the input chord information sequence.
Preferably, the MIDI sequence generating apparatus has built-in popular-music rhythm patterns, and any one of these rhythm patterns can be used directly when synthesizing the MIDI file.
When initializing the initial pitch probabilities, major and minor keys correspond to different pitch probabilities. When the key is major, the initial pitch probabilities of the 12 tones are:

Tone                        1      #1/b2  2      #2/b3  3      4      #4/b5  5      #5/b6  6      #6/b7  7
Initial pitch probability   0.184  0.001  0.155  0.003  0.191  0.109  0.005  0.214  0.001  0.078  0.004  0.055

When the key is minor, the initial pitch probabilities of the 12 tones are:

Tone                        1      #1/b2  2      #2/b3  3      4      #4/b5  5      #5/b6  6      #6/b7  7
Initial pitch probability   0.192  0.005  0.149  0.179  0.002  0.144  0.002  0.201  0.038  0.012  0.053  0.022
Preferably, the chord correction according to the chord constraint includes:
setting a chord correction coefficient in the range of 2.3 to 3.5;
multiplying, according to the received chord information, the pitch probabilities of the component tones of the corresponding chord by the chord correction coefficient to obtain the chord correction result.
Preferably, the melody motion correction according to the melody motion constraint includes:
multiplying the pitch probability of the pitch to be predicted by a melody motion constraint coefficient that depends on the interval between the pitch to be predicted and the previously generated pitch, the coefficient being obtained by the following formula:
Figure BDA0002488427420000031 (equation image in the original filing; it defines the function f used below)
F(x, root) = f(x, root+2, 0.5)*0.2 + f(x, root+4, 1)*0.05 + f(x, root-2.3, 1)*0.1 + f(x, root-5, 0.8)*0.06 + f(x, root+7, 0.8)*0.03 + f(x, root+1, 0.7)*0.05 + f(x, root-8, 3)*0.02
where F(x, root) is the melody motion constraint coefficient, x is the pitch to be predicted, and root is the previously generated pitch.
Preferably, the repeated-note correction according to the repeated-note constraint includes:
setting a repeated-note penalty coefficient in the range of 0.7 to 0.9;
comparing the pitch to be predicted with all generated pitches, and, if the predicted pitch is contained in the set of all generated pitches, multiplying the pitch probability of the predicted pitch by the repeated-note penalty coefficient to obtain the repeated-note correction result.
Preferably, the register correction according to the register constraint comprises:
setting a register range, a register center value and a register standard deviation value, and generating a normal distribution whose mean is the register center value and whose standard deviation is the register standard deviation value;
multiplying the pitch probability of each note whose pitch lies within the register range by that note's probability value under the normal distribution, and normalizing to obtain the output of the register evaluation.
Preferably, the following conditional screening is performed on each note selected from the pitch probability distribution:
if the predicted note is not in the key, it is marked as a wrong note and cleared;
if the predicted note forms a diminished interval with the chord, it is cleared directly;
if the predicted note forms a leap larger than an octave with the previously generated note, or forms consecutive leaps in the same direction with the previous two notes, it is cleared directly;
if the predicted note would be the second non-chord tone to appear within a bar, it is cleared directly.
Compared with the prior art, the invention has the following beneficial effects:
the MIDI sequence generating device based on music theory and statistical rules can generate diverse MIDI sequences from a given chord progression, rhythm pattern and key while keeping melody generation controllable.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of MIDI sequence generation based on music theory and statistical rules according to an embodiment of the present invention.
FIG. 2 is a pitch initial probability distribution diagram at major key provided by an embodiment of the present invention;
FIG. 3 is a pitch initial probability distribution diagram for a minor key provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in FIG. 1, the embodiment provides a MIDI sequence generating apparatus based on music theory and statistical rules, comprising a computer system configured to:
step 1, receiving a chord information sequence, a rhythm type and a tone.
The chord information sequence is a sequence containing chords and their corresponding numbers of beats, and the chord progression is consistent with the key. The device has a built-in chord progression list that can also be customized. Its data type alternates between str and float, where the str entries represent chord types and the float entries represent the number of beats each chord lasts. For example, ['C', 4.0, 'F', 4.0, 'G', 4.0, 'C', 4.0] means a C chord lasting 4 beats, an F chord lasting 4 beats, a G chord lasting 4 beats and a C chord lasting 4 beats.
The rhythm pattern defines the number of notes and the duration of each note, i.e., the user can specify the length of each note to be generated, with a minimum unit of one beat in 4/4 time. The device has a built-in rhythm pattern list recording popular-music rhythm patterns. The user may also define a custom rhythm list, in which each number represents the number of beats of the corresponding generated note. The start position of each note defaults to the end position of the previous note, so a number in the rhythm pattern may be negative, representing the number of beats counted from the end of the previous note.
The key is the key in which the melody to be generated lies, i.e., a tonic together with a major or minor mode. It can be set by the user or derived automatically from the input chord information sequence.
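As an illustration only, the three inputs can be written down as simple Python values; the variable names and the parse_chord_info helper below are assumptions made for this sketch, not structures defined by the patent.

# Hypothetical representation of the three inputs described above.
chord_info = ['C', 4.0, 'F', 4.0, 'G', 4.0, 'C', 4.0]  # alternating chord type / beat count
rhythm_pattern = [1.0, 1.0, 2.0, 0.5, 0.5, 1.0]        # beats per generated note; a negative
                                                        # value starts before the previous note ends
key = 'C major'                                         # may also be derived from chord_info

def parse_chord_info(seq):
    """Split the alternating str/float list into (chord, beats) pairs."""
    return [(seq[i], float(seq[i + 1])) for i in range(0, len(seq), 2)]

print(parse_chord_info(chord_info))
# -> [('C', 4.0), ('F', 4.0), ('G', 4.0), ('C', 4.0)]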
Step 2: initialize the initial pitch probabilities of the 12 tones within one octave according to the received key.
The output of the MIDI sequence generating device is the note sequence of a melody, containing the pitch and duration of each note; the final result is a MIDI file rendered with a piano timbre that contains two tracks in total: a block-chord accompaniment track following the chord progression, and the generated melody track.
Melody pitch generation mainly uses a probabilistic model that assigns a probability to each note to be generated. This probability consists of a base probability and additional corrections. The base probability is derived from the user-set key; each pitch has a different probability under a given key. The additional corrections come from higher-level melodic characteristics such as melodic motion, repeated notes, chord tones and register, and they are applied directly to the base probability of the corresponding note. The final probability of each candidate note is obtained by combining the two.
In the present invention, the minimum unit of MIDI sequence generation is a note, whose basic attributes are pitch and duration. The probability of each possible pitch for the next note is therefore evaluated according to the given key; major and minor keys correspond to different pitch probabilities, and this probability is used as the base probability of the next note.
Figs. 2 and 3 show the initial pitch probability distributions of the scale degrees for the major and minor keys, respectively. The initial pitch probability is used as the base probability; for example, the base probability of scale degree 1 is 0.184 in a major key and 0.192 in a minor key.
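A minimal sketch of this initialization step, assuming the probabilities are stored as a 12-element list indexed by chromatic degree (the layout and function name are assumptions; the values are the ones tabulated above):

# Initial pitch probabilities of the 12 tones in one octave, taken from the tables above.
# Index 0 is scale degree 1, index 1 is #1/b2, and so on up the chromatic scale.
MAJOR_INIT = [0.184, 0.001, 0.155, 0.003, 0.191, 0.109,
              0.005, 0.214, 0.001, 0.078, 0.004, 0.055]
MINOR_INIT = [0.192, 0.005, 0.149, 0.179, 0.002, 0.144,
              0.002, 0.201, 0.038, 0.012, 0.053, 0.022]

def init_pitch_probabilities(key):
    """Return the base (initial) pitch probabilities for a major or minor key."""
    return list(MAJOR_INIT if 'major' in key.lower() else MINOR_INIT)

base_probs = init_pitch_probabilities('C major')   # base_probs[0] == 0.184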
Step 3: perform chord correction, melody motion correction, repeated-note correction and register correction on the initial pitch probabilities of the 12 tones according to the chord constraint, melody motion constraint, repeated-note constraint and register constraint, to obtain the pitch probability distribution of the 12 tones.
Starting from the initial pitch probabilities, chord correction, melody motion correction, repeated-note correction and register correction are applied. The order of the corrections is not restricted, and all four corrections are applied to the same set of initial pitch probabilities.
Once the key is given, the chords of the melody section are specified in beats. On top of the base probability of each pitch to be predicted, the probabilities of the chord tones of the current chord are boosted. Specifically, the chord correction according to the chord constraint includes:
setting the chord correction coefficient to 3;
and multiplying the tone pitch probability of the chord component tone corresponding to the chord information by the chord modification coefficient according to the received chord information to obtain a chord modification result.
Melodic motion typically proceeds in characteristic ways such as parallel (level) motion, melodic sequence and the like. This embodiment therefore gives the pitch to be predicted an additional probability adjustment based on the previously generated pitch. Specifically, the melody motion correction according to the melody motion constraint includes:
multiplying the pitch probability of the pitch to be predicted by a melody motion constraint coefficient that depends on the interval between the pitch to be predicted and the previously generated pitch, the coefficient being obtained by the following formula:
Figure BDA0002488427420000071 (equation image in the original filing; it defines the function f used below)
F(x, root) = f(x, root+2, 0.5)*0.2 + f(x, root+4, 1)*0.05 + f(x, root-2.3, 1)*0.1 + f(x, root-5, 0.8)*0.06 + f(x, root+7, 0.8)*0.03 + f(x, root+1, 0.7)*0.05 + f(x, root-8, 3)*0.02
where F(x, root) is the melody motion constraint coefficient, x is the pitch to be predicted, and root is the previously generated pitch.
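The following sketch implements F(x, root) as written above. Because the definition of f is given only as an equation image, a Gaussian kernel f(x, mu, sigma) is assumed here purely so the sketch runs; it is not the patent's definition.

import math

def f(x, mu, sigma):
    # ASSUMED stand-in kernel: a Gaussian bump centred at mu with width sigma.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def melody_motion_coefficient(x, root):
    """F(x, root): weighted sum of kernels placed around typical melodic intervals."""
    return (f(x, root + 2, 0.5) * 0.2 + f(x, root + 4, 1) * 0.05 + f(x, root - 2.3, 1) * 0.1
            + f(x, root - 5, 0.8) * 0.06 + f(x, root + 7, 0.8) * 0.03
            + f(x, root + 1, 0.7) * 0.05 + f(x, root - 8, 3) * 0.02)

# Scale the probability of each candidate pitch (MIDI 60..71) given the last generated pitch.
last_pitch = 60
probs = [p * melody_motion_coefficient(60 + i, last_pitch)
         for i, p in enumerate([1 / 12.0] * 12)]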
If the same pitch appears too often in a melody, it sounds unpleasant. Therefore, if a pitch already appears frequently in the generated melody, the probability of predicting that pitch again is reduced. Specifically, the repeated-note correction according to the repeated-note constraint includes:
setting the repeated-note penalty coefficient to 0.8;
comparing the pitch to be predicted with all generated pitches, and, if the predicted pitch is contained in the set of all generated pitches, multiplying the pitch probability of the predicted pitch by the repeated-note penalty coefficient to obtain the repeated-note correction result.
A melody usually lies within a specific register; an excessively wide register leaves the melody without an aural centre of gravity and makes it very hard to sing. The register in which the predicted pitch may occur is therefore specified, and a penalty is applied to probabilities that deviate from the centre of the register. Specifically, the register correction according to the register constraint includes:
setting a register range, a register center value and a register standard deviation value, and generating a normal distribution whose mean is the register center value and whose standard deviation is the register standard deviation value;
multiplying the pitch probability of each note whose pitch lies within the register range by that note's probability value under the normal distribution, and normalizing to obtain the output of the register evaluation.
Step 4: select notes from the pitch probability distribution in descending order of pitch probability for conditional screening, and add the notes satisfying the screening conditions to the generated melody.
After the pitch probability distribution is obtained, the note with the highest pitch probability is taken as the predicted note, compared with the already generated notes, and post-processed as follows (a sketch of these checks follows the list):
(a) Out-of-key note removal
If the predicted note is not in the key, it is marked as a wrong note. An occasional out-of-key note can add interest to the melody, but too many of them undermine its stability, so such a note is cleared and regenerated with high probability.
(b) Diminished-interval removal
If the predicted note forms a diminished interval with the current chord, it is cleared directly.
(c) Melodic leap control
If the predicted note forms a leap larger than an octave with the previously generated note, or forms consecutive leaps in the same direction with the previous two notes, it is cleared directly.
(d) Non-chord-tone control
If the predicted note would be the second non-chord tone to appear within a bar, it is cleared directly.
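A sketch of checks (a)-(d) as a single screening function. The diminished-interval test (simplified to the tritone), the "large leap" threshold of 4 semitones, and the hard rejection of out-of-key notes are assumptions made for illustration; the patent leaves these details open.

def passes_screening(pitch, key_pcs, chord_pcs, history, bar_nonchord_count):
    """Return True if a predicted pitch (MIDI number) passes checks (a)-(d).
    key_pcs / chord_pcs are pitch-class sets; history is the generated melody so far."""
    # (a) out-of-key notes rejected (shown as a hard reject rather than "with high probability")
    if pitch % 12 not in key_pcs:
        return False
    # (b) diminished interval against a chord tone (simplified to the tritone, 6 semitones)
    if any(abs(pitch - ct) % 12 == 6 for ct in chord_pcs):
        return False
    if history:
        # (c) leap larger than an octave, or two consecutive large leaps in the same direction
        if abs(pitch - history[-1]) > 12:
            return False
        if len(history) >= 2:
            prev_leap, this_leap = history[-1] - history[-2], pitch - history[-1]
            if abs(prev_leap) > 4 and abs(this_leap) > 4 and prev_leap * this_leap > 0:
                return False
    # (d) at most one non-chord tone per bar
    if pitch % 12 not in chord_pcs and bar_nonchord_count >= 1:
        return False
    return True

ok = passes_screening(65, {0, 2, 4, 5, 7, 9, 11}, {0, 4, 7}, [60, 64], 0)   # F over a C chord -> True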
Step 5: combine the generated melody with the rhythm pattern to generate the MIDI file.
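A sketch of this final synthesis step using the third-party mido package; the patent does not name a MIDI library, so this choice is an assumption, and the block-chord accompaniment track is omitted here for brevity.

import mido

def write_midi(melody, beats_per_note, path='output.mid', ticks_per_beat=480):
    """Write the generated melody (MIDI pitches) with the given rhythm to a single-track MIDI file."""
    mid = mido.MidiFile(ticks_per_beat=ticks_per_beat)
    track = mido.MidiTrack()
    mid.tracks.append(track)
    track.append(mido.Message('program_change', program=0, time=0))   # program 0 = acoustic grand piano
    for pitch, beats in zip(melody, beats_per_note):
        duration = int(abs(beats) * ticks_per_beat)
        track.append(mido.Message('note_on', note=pitch, velocity=80, time=0))
        track.append(mido.Message('note_off', note=pitch, velocity=0, time=duration))
    mid.save(path)

write_midi([60, 62, 64, 65, 67], [1.0, 1.0, 1.0, 0.5, 0.5])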
The computer system includes one or more non-transitory computer-readable storage devices storing instructions that, when executed by a processor, cause the computer system to perform the computing operations described above. The computer may be a desktop computer, a laptop computer, a workstation, a cloud server, a personal digital assistant, or any other computer system. The computer system includes a processor, read-only memory (ROM), random-access memory (RAM), input/output adapters for connecting peripheral devices (e.g., input devices, output devices and storage devices), user interface adapters for connecting input devices (e.g., keyboard, mouse, touch screen or voice input) and/or other devices, communication adapters for connecting the computer to networks, display adapters for connecting the computer to displays, and the like.
The MIDI sequence generating device based on music theory and statistical rules can generate diverse MIDI sequences from a given chord progression, rhythm pattern and key while keeping melody generation controllable.
The above embodiments are intended to illustrate the technical solutions and advantages of the present invention. It should be understood that they are only preferred embodiments of the present invention and are not intended to limit it; any modifications, additions or equivalent substitutions made within the scope of the principles of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. A MIDI sequence generation apparatus based on music theory and statistical rules, comprising a computer system, wherein the computer system is configured to:
receive a chord information sequence, a rhythm pattern and a key;
initialize initial pitch probabilities of 12 tones within one octave according to the received key;
perform chord correction, melody motion correction, repeated-note correction and register correction on the initial pitch probabilities of the 12 tones according to a chord constraint, a melody motion constraint, a repeated-note constraint and a register constraint, to obtain a pitch probability distribution of the 12 tones;
select notes from the pitch probability distribution as predicted notes in descending order of pitch probability and perform the following conditional screening: if the predicted note is not in the key, it is marked as a wrong note and cleared; if the predicted note forms a diminished interval with the chord, it is cleared directly; if the predicted note forms a leap larger than an octave with the previously generated note or forms consecutive leaps in the same direction with the previous two notes, it is cleared directly; if the predicted note would be the second non-chord tone to appear within a bar, it is cleared directly; notes satisfying the screening conditions are added to the generated melody;
combine the generated melody and the rhythm pattern to generate a MIDI file.
2. The MIDI sequence generation apparatus of claim 1, wherein the key is generated automatically from the input chord information sequence.
3. The MIDI sequence generation apparatus of claim 1, wherein the apparatus has built-in popular-music rhythm patterns, and any one of them can be used directly to generate the MIDI file.
4. The MIDI sequence generation apparatus of claim 1, wherein, when the key is major, the initial pitch probabilities of the 12 tones are:
the 12 tones are 1, #1/b2, 2, #2/b3, 3, 4, #4/b5, 5, #5/b6, 6, #6/b7 and 7, and the corresponding initial pitch probabilities are 0.184, 0.001, 0.155, 0.003, 0.191, 0.109, 0.005, 0.214, 0.001, 0.078, 0.004 and 0.055, respectively;
and when the key is minor, the initial pitch probabilities of the 12 tones are:
the 12 tones are 1, #1/b2, 2, #2/b3, 3, 4, #4/b5, 5, #5/b6, 6, #6/b7 and 7, and the corresponding initial pitch probabilities are 0.192, 0.005, 0.149, 0.179, 0.002, 0.144, 0.002, 0.201, 0.038, 0.012, 0.053 and 0.022, respectively.
5. The MIDI sequence generation apparatus of claim 1, wherein the chord correction according to the chord constraint comprises:
setting a chord correction coefficient in the range of 2.3 to 3.5;
multiplying, according to the received chord information, the pitch probabilities of the component tones of the corresponding chord by the chord correction coefficient to obtain a chord correction result.
6. The MIDI sequence generation apparatus of claim 1, wherein the melody motion correction according to the melody motion constraint comprises:
multiplying the pitch probability of the pitch to be predicted by a melody motion constraint coefficient that depends on the interval between the pitch to be predicted and the previously generated pitch, the coefficient being obtained by the following formula:
Figure FDA0003635285840000021 (equation image in the original filing; it defines the function f used below)
F(x, root) = f(x, root+2, 0.5)*0.2 + f(x, root+4, 1)*0.05 + f(x, root-2.3, 1)*0.1 + f(x, root-5, 0.8)*0.06 + f(x, root+7, 0.8)*0.03 + f(x, root+1, 0.7)*0.05 + f(x, root-8, 3)*0.02
where F(x, root) is the melody motion constraint coefficient, x is the pitch to be predicted, and root is the previously generated pitch.
7. The MIDI sequence generation apparatus of claim 1, wherein the repeated-note correction according to the repeated-note constraint comprises:
setting a repeated-note penalty coefficient in the range of 0.7 to 0.9;
comparing the predicted pitch with all generated pitches, and, if the predicted pitch is contained in the set of all generated pitches, multiplying the pitch probability of the predicted pitch by the repeated-note penalty coefficient to obtain a repeated-note correction result.
8. The MIDI sequence generation apparatus of claim 1, wherein the register correction according to the register constraint comprises:
setting a register range, a register center value and a register standard deviation value, and generating a normal distribution whose mean is the register center value and whose standard deviation is the register standard deviation value;
multiplying the pitch probability of each note whose pitch lies within the register range by that note's probability value under the normal distribution, and normalizing to obtain the output of the register evaluation.
CN202010398381.9A 2020-05-12 2020-05-12 MIDI sequence generating device based on music theory and statistical rule Active CN111613199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010398381.9A CN111613199B (en) 2020-05-12 2020-05-12 MIDI sequence generating device based on music theory and statistical rule

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010398381.9A CN111613199B (en) 2020-05-12 2020-05-12 MIDI sequence generating device based on music theory and statistical rule

Publications (2)

Publication Number Publication Date
CN111613199A CN111613199A (en) 2020-09-01
CN111613199B true CN111613199B (en) 2022-08-09

Family

ID=72201195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010398381.9A Active CN111613199B (en) 2020-05-12 2020-05-12 MIDI sequence generating device based on music theory and statistical rule

Country Status (1)

Country Link
CN (1) CN111613199B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365868B (en) * 2020-11-17 2024-05-28 北京达佳互联信息技术有限公司 Sound processing method, device, electronic equipment and storage medium
CN112820255A (en) * 2020-12-30 2021-05-18 北京达佳互联信息技术有限公司 Audio processing method and device
CN113012665B (en) * 2021-02-19 2024-04-19 腾讯音乐娱乐科技(深圳)有限公司 Music generation method and training method of music generation model
CN113571030B (en) * 2021-07-21 2023-10-20 浙江大学 MIDI music correction method and device based on hearing harmony evaluation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06167977A (en) * 1993-05-31 1994-06-14 Casio Comput Co Ltd Rhythm composing device and rhythm analyzing device
US5736666A (en) * 1996-03-20 1998-04-07 California Institute Of Technology Music composition
US6100462A (en) * 1998-05-29 2000-08-08 Yamaha Corporation Apparatus and method for generating melody
CN1753080A (en) * 2004-09-22 2006-03-29 雅马哈株式会社 Apparatus and program for displaying musical information
JP2007193222A (en) * 2006-01-20 2007-08-02 Casio Comput Co Ltd Melody input device and musical piece retrieval device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10854180B2 (en) * 2015-09-29 2020-12-01 Amper Music, Inc. Method of and system for controlling the qualities of musical energy embodied in and expressed by digital music to be automatically composed and generated by an automated music composition and generation engine

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06167977A (en) * 1993-05-31 1994-06-14 Casio Comput Co Ltd Rhythm composing device and rhythm analyzing device
US5736666A (en) * 1996-03-20 1998-04-07 California Institute Of Technology Music composition
US6100462A (en) * 1998-05-29 2000-08-08 Yamaha Corporation Apparatus and method for generating melody
CN1753080A (en) * 2004-09-22 2006-03-29 雅马哈株式会社 Apparatus and program for displaying musical information
JP2007193222A (en) * 2006-01-20 2007-08-02 Casio Comput Co Ltd Melody input device and musical piece retrieval device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Lan Fan et al., "Application of an improved melody matching algorithm in a MIDI performance system," Computer and Modernization, 2009, No. 6. *
Cao Xizheng et al., "An intelligent composition method applied to Henan folk songs," Journal of Computer Applications, 2017. *

Also Published As

Publication number Publication date
CN111613199A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN111613199B (en) MIDI sequence generating device based on music theory and statistical rule
US5736666A (en) Music composition
Barbancho et al. Automatic transcription of guitar chords and fingering from audio
CN112382257B (en) Audio processing method, device, equipment and medium
US20120246209A1 (en) Method for creating a markov process that generates sequences
Nakamura et al. Statistical piano reduction controlling performance difficulty
CN113192471B (en) Musical main melody track recognition method based on neural network
Nakamura et al. Automatic piano reduction from ensemble scores based on merged-output hidden markov model
CN113010730A (en) Music file generation method, device, equipment and storage medium
CN109841202B (en) Rhythm generation method and device based on voice synthesis and terminal equipment
CN110867174A (en) Automatic sound mixing device
CN113178182A (en) Information processing method, information processing device, electronic equipment and storage medium
Glickman et al. (A) Data in the Life: Authorship Attribution of Lennon-McCartney Songs
CN110517655B (en) Melody generation method and system
CN110134823B (en) MIDI music genre classification method based on normalized note display Markov model
JPH0736478A (en) Calculating device for similarity between note sequences
Kumar et al. MellisAI—An AI generated music composer using RNN-LSTMs
Trochidis et al. CAMeL: Carnatic percussion music generation using n-gram models
CN112951183B (en) Music automatic generation and evaluation method based on deep learning
JP2019109357A (en) Feature analysis method for music information and its device
Hori et al. Jazz piano trio synthesizing system based on hmm and dnn
Manilow et al. Unsupervised source separation by steering pretrained music models
Paiement Probabilistic models for music
CN112992106B (en) Music creation method, device, equipment and medium based on hand-drawn graph
Tamás et al. Development of a music generator application based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Ji Zihao

Inventor after: Li Chenxiao

Inventor after: Zhang Kejun

Inventor before: Li Chenxiao

Inventor before: Ji Zihao

Inventor before: Zhang Kejun

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant