CN110675849A - Method for generating Bossa Nova style music rhythm based on Bayesian network - Google Patents

Method for generating Bossa Nova style music rhythm based on Bayesian network

Info

Publication number
CN110675849A
Authority
CN
China
Prior art keywords
music
bossa nova
inference model
rhythm
music rhythm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910868054.2A
Other languages
Chinese (zh)
Other versions
CN110675849B (en)
Inventor
杨可舟
任涛
刘昕靓
刘子榆
王逸群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201910868054.2A priority Critical patent/CN110675849B/en
Publication of CN110675849A publication Critical patent/CN110675849A/en
Application granted granted Critical
Publication of CN110675849B publication Critical patent/CN110675849B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/36 Accompaniment arrangements
    • G10H1/40 Rhythm
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00 Computing arrangements based on specific mathematical models
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/0033 Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H1/0041 Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
    • G10H1/0058 Transmission between separate instruments or between individual components of a musical system
    • G10H1/0066 Transmission between separate instruments or between individual components of a musical system using a MIDI interface
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/341 Rhythm pattern selection, synthesis or composition
    • G10H2210/371 Rhythm syncopation, i.e. timing offset of rhythmic stresses or accents, e.g. note extended from weak to strong beat or started before strong beat
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/375 Tempo or beat alterations; Music timing control
    • G10H2210/391 Automatic tempo adjustment, correction or control

Abstract

The invention discloses a method for generating a Bossa Nova style music rhythm based on a Bayesian network. First, a Bossa Nova style music rhythm inference model is established and optimized. Next, a prior knowledge base of music rhythm is built and combined with the model to produce a rhythm inference model equipped with that knowledge base. Finally, the VE algorithm is used to infer the music rhythm: the conditional probability table information is generated recursively, the condition with the highest probability is selected from the resulting table, and the Bossa Nova style rhythm is obtained by looking up the BN parameter configuration table. The results show that the proposed scheme can effectively improve the efficiency of rhythm generation from relatively little prior music input, generate Bossa Nova style rhythms quickly, and obtain more accurate Bossa Nova style rhythms.

Description

Method for generating Bossa Nova style music rhythm based on Bayesian network
Technical Field
The invention relates to the technical field of Bayesian networks, and in particular to a method for generating a Bossa Nova style music rhythm based on a Bayesian network.
Background
Algorithmic music-making covers many tasks, such as generating chords, generating melodies, assigning chords to a melody, generating four-part harmony, and generating jazz improvisation. Several classes of algorithms are currently applied in this field. The first is composition based on rules and music knowledge, which does literally what its name suggests: existing composition rules are applied to make music, for example using Schoenberg's twelve-tone system as the algorithm's rules. A related approach analyzes the grammar of musical symbols and composes through that grammar; it has also been refined with probabilistic methods, such as assigning probabilities to the leaps and durations of notes. The second is machine learning, one of the popular tools in algorithmic composition, but it has the disadvantages of requiring a large amount of training data and generating music slowly.
Bayesian networks have been widely applied in natural language processing, medical diagnosis, weather prediction, and similar areas, but they are not yet widely used in automatic composition systems. Existing music generation algorithms require excessive training data and generate slowly, whereas a Bayesian network (abbreviated BN) needs little prior knowledge; applying one can effectively address these problems and improve the efficiency of music generation.
Disclosure of Invention
The invention provides a method for generating a Bossa Nova style music rhythm based on a Bayesian network. The method models the music rhythm of the Bossa Nova style, establishes a prior knowledge base of music rhythm, and then uses the VE algorithm (variable elimination) to infer the rhythm; the rhythms generated with the Bayesian network are shown to conform to the music theory of the style.
In order to solve the technical problem, the invention provides a method for generating a Bossa Nova style music rhythm based on a Bayesian network, which comprises the following steps:
Step 1: establishing and optimizing a Bossa Nova style music rhythm inference model: first constructing a fully connected graph from the connections among all elements of the music, then reducing the number of edges in the graph by abstracting the rests, anacrusis (weak-beat start) notes, dotted notes, and triplets into a separate layer, to obtain the optimized music rhythm inference model;
Step 2: establishing a prior knowledge base of music rhythm: first quantitatively analyzing the musical features of the knowledge base in the Bossa Nova style rhythm inference model obtained in step 1, then storing the statistics of these features in the optimized inference model, to generate a rhythm inference model equipped with the prior knowledge base;
Step 3: inferring the music rhythm with the VE algorithm: generating the conditional probability table information recursively, selecting the condition with the highest probability from the resulting table, and obtaining the Bossa Nova style music rhythm by looking up the BN parameter configuration table.
The specific steps of step 2 (establishing the prior knowledge base of music rhythm: quantitatively analyzing the musical features in the inference model obtained in step 1 and storing their statistics in the optimized model) are as follows:
1) Prepare n representative Bossa Nova style pieces for machine learning, where n is determined according to actual conditions;
2) Generate MIDI music through the machine learning, score the MIDI music, send the MIDI music and its score to the database for learning, and perform inference on the Bossa Nova style rhythm inference model obtained in step 1 to obtain its joint probability distribution:
P(A,B,C,D,E,F,G,H,I,J,K) = P(A)P(B)P(C)P(K)P(D|A,B,C,E)P(E|B,C)P(J|A,B,C,E)P(F|D)P(G|D)P(H|D)P(I|D)    (1)
3) Decompose the joint probability distribution of formula (1): first obtain the conditional joint distribution, then marginalize it to obtain the conditional probability:
P(F,G,H,I,J|A,B,C,K)    (2)
4) Quantitatively analyze the conditional probability obtained in step 3), as follows:
S1, processing of a rest (node F): set a note with pitch 0 and strength 0;
S2, processing of an anacrusis (node G): split the note into two notes, the first being a rest;
S3, processing of a dotted note (node H): extend the duration to 1.5 times the note value;
S4, processing of a triplet (node I): change the duration to 1/3 and generate three notes;
S5, the rhythm types (node J): whole, half, quarter, eighth, sixteenth, dotted, and triplet.
The invention has the beneficial effects that:
according to the method for generating the Bossa Nova style music tempo based on the Bayesian network, the generation efficiency of the music tempo can be effectively improved through less prior music input, the Bossa Nova style music tempo can be quickly generated, and the obtained Bossa Nova style music tempo is more accurate.
Drawings
Fig. 1 is a flowchart of a method of generating a Bossa Nova style music tempo based on a bayesian network in the present embodiment.
Fig. 2 is a fully connected graph of the Bossa Nova style music tempo in the present embodiment.
Fig. 3 is a fully connected graph of the optimized Bossa Nova style music tempo in the present embodiment.
Fig. 4 is a relationship diagram among the root node table, the conditional probability table, and the configuration table in this embodiment.
Fig. 5 is conditional probability table information in the present embodiment.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples.
The software environment of the embodiment is a WINDOWS 10 system, the development tool is Android Studio with SDK r24, the database is SQLite, the virtual machine used is NEXUS, and the simulation environment consists of Android smartphones of different models.
The specific implementation of this example is described below:
As shown in Fig. 1, the flowchart of the method in this embodiment of the invention, the method for generating a Bossa Nova style music rhythm based on a Bayesian network includes the following steps:
Step 1: An inference model of the Bossa Nova style music rhythm is established and optimized. A fully connected graph (Fig. 2) is constructed from the connections among all the elements of the music; then, by abstracting the rests, anacrusis (weak-beat start) notes, dotted notes, and triplets in the graph into a separate layer, the number of edges is reduced, yielding the optimized inference model shown in Fig. 3. Abstracting these special notes into one layer removes 6 edges and reduces the prediction time by one third, which simplifies the model and makes it convenient to extend the handling of special notes in the future.
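The optimized model's structure can be sketched as parent lists. A minimal Python illustration follows; the node letters match the embodiment, but the exact edge set is an assumption read off from the factorization in Eq. (1):

```python
# Parent sets of the optimized Bossa Nova rhythm BN, read off from the
# factorization P(A)P(B)P(C)P(K)P(D|A,B,C,E)P(E|B,C)P(J|A,B,C,E)
# P(F|D)P(G|D)P(H|D)P(I|D) given later as Eq. (1).
optimized_parents = {
    "A": [], "B": [], "C": [], "K": [],   # root nodes
    "E": ["B", "C"],
    "D": ["A", "B", "C", "E"],
    "J": ["A", "B", "C", "E"],
    # special-note nodes abstracted into one layer under D:
    "F": ["D"],   # rest
    "G": ["D"],   # anacrusis (weak-beat start)
    "H": ["D"],   # dotted note
    "I": ["D"],   # triplet
}

def edge_count(parents):
    """Number of directed edges in the network."""
    return sum(len(p) for p in parents.values())

print(edge_count(optimized_parents))
```

With this assumed structure the optimized graph has 14 directed edges, far fewer than a fully connected graph over the same 11 nodes.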
Step 2: Establish a prior knowledge base of music rhythm. First quantitatively analyze the musical features of the knowledge base in the Bossa Nova style rhythm inference model obtained in step 1, then store the statistics of these features in the optimized inference model, generating a rhythm inference model equipped with the prior knowledge base. The specific steps are as follows:
1) Generally, about 20 representative Bossa Nova style pieces are prepared for machine learning;
2) Generate MIDI music through the machine learning, score the MIDI music, send the MIDI music and its score to the database for learning, and perform inference on the Bossa Nova style rhythm inference model obtained in step 1 to obtain its joint probability distribution:
P(A,B,C,D,E,F,G,H,I,J,K) = P(A)P(B)P(C)P(K)P(D|A,B,C,E)P(E|B,C)P(J|A,B,C,E)P(F|D)P(G|D)P(H|D)P(I|D)    (1)
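Equation (1) factorizes the joint distribution into a product of local conditional probability tables (CPTs). A hedged sketch of evaluating it follows; the CPT entries are illustrative placeholders, not the learned values:

```python
# Evaluate P(A,...,K) as the product of local CPTs per Eq. (1).
# Each CPT maps (child value, parent values...) -> probability;
# the numbers below are placeholders for illustration only.
cpts = {
    "A": {("A1",): 1 / 16},   # e.g. uniform over 16 bar positions
    "B": {("B1",): 0.5},
    "C": {("C1",): 0.2},
    "K": {("K1",): 0.25},
    "E": {("E1", "B1", "C1"): 0.7},
    "D": {("D1", "A1", "B1", "C1", "E1"): 0.6},
    "J": {("J1", "A1", "B1", "C1", "E1"): 0.3},
    "F": {("F1", "D1"): 0.9},
    "G": {("G1", "D1"): 0.8},
    "H": {("H1", "D1"): 0.85},
    "I": {("I1", "D1"): 0.95},
}
parents = {"A": [], "B": [], "C": [], "K": [], "E": ["B", "C"],
           "D": ["A", "B", "C", "E"], "J": ["A", "B", "C", "E"],
           "F": ["D"], "G": ["D"], "H": ["D"], "I": ["D"]}

def joint_probability(assignment):
    """P(assignment) = product over nodes of P(node | its parents)."""
    p = 1.0
    for node, pa in parents.items():
        key = (assignment[node],) + tuple(assignment[q] for q in pa)
        p *= cpts[node][key]
    return p

x = {n: n + "1" for n in parents}  # every node takes its first value
print(joint_probability(x))
```

Evaluating the factorization node by node like this is exactly what makes the BN cheaper than storing the full joint table over all 11 variables.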
3) Decompose the joint probability distribution of formula (1): first obtain the conditional joint distribution, then marginalize it to obtain the conditional probability:
P(F,G,H,I,J|A,B,C,K)    (2)
4) Quantitatively analyze the conditional probability obtained in step 3), as follows:
S1, processing of a rest (node F): set a note with pitch 0 and strength 0;
S2, processing of an anacrusis (node G): split the note into two notes, the first being a rest;
S3, processing of a dotted note (node H): extend the duration to 1.5 times the note value;
S4, processing of a triplet (node I): change the duration to 1/3 and generate three notes;
S5, the rhythm types (node J): whole, half, quarter, eighth, sixteenth, dotted, and triplet.
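The special-note rules S1-S4 can be sketched as transformations on a simple note record. The (pitch, velocity, duration) representation and the 50/50 split in S2 are assumptions for illustration; the source only states that the anacrusis note is divided into two notes with the first a rest:

```python
from dataclasses import dataclass, replace

@dataclass
class Note:
    pitch: int       # MIDI pitch; 0 is used for rests per S1
    velocity: int    # strength; 0 for rests
    duration: float  # note value in beats

def apply_rest(note):        # S1: pitch 0, strength 0
    return replace(note, pitch=0, velocity=0)

def apply_anacrusis(note):   # S2: split into a rest plus the note
    half = note.duration / 2  # equal split is an assumption
    return [Note(0, 0, half), replace(note, duration=half)]

def apply_dot(note):         # S3: duration becomes 1.5x the note value
    return replace(note, duration=note.duration * 1.5)

def apply_triplet(note):     # S4: duration 1/3, three notes generated
    third = replace(note, duration=note.duration / 3)
    return [third, replace(third), replace(third)]
```

Each rule maps one abstract note to the concrete note(s) emitted into the MIDI output, so the BN only has to decide which rule fires.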
The music characteristics to be counted in the embodiment include the following parts:
1) Chapter: denoted Chap; Ci represents the chapter number of a note in the music. We generally divide a piece into five chapters: intro, verse, chorus, interlude, and outro;
2) Bar: denoted Bar; Bi represents which bar of a chapter a note is in. Bar positions within a chapter are divided into 4 types, B1 to B4, representing the first through fourth positions in each chapter, repeating cyclically;
3) Position of the note: each bar of the music is divided into 16 equal parts, denoted A1 to A16.
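Under this quantization, a note's place in the piece maps to a (chapter, bar, sixteenth-position) triple of node values. A minimal sketch, assuming the cyclic B1-B4 numbering described above:

```python
CHAPTERS = ["intro", "verse", "chorus", "interlude", "outro"]  # C1-C5

def encode_position(chapter_index, bar_in_chapter, sixteenth):
    """Map a note position to the node values (C_i, B_i, A_i).

    chapter_index: 1-5; bar_in_chapter: any positive bar number,
    cycling through positions B1-B4; sixteenth: 1-16 within the bar.
    """
    assert 1 <= chapter_index <= len(CHAPTERS)
    assert 1 <= sixteenth <= 16
    b = (bar_in_chapter - 1) % 4 + 1   # bar positions repeat B1-B4
    return (f"C{chapter_index}", f"B{b}", f"A{sixteenth}")

print(encode_position(2, 5, 16))  # fifth bar of the verse wraps to B1
```

These string values are exactly the states the BN nodes A, B, and C range over when the prior knowledge base is populated.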
In designing inference on the Bayesian network, we attempt to build a relatively general inference model. The model's inference parameters are separated from the actual business data and connected to it through a configuration table. The benefit of separating the model from the data is that a relatively uniform inference model can be provided: when requirements change, inference can be adapted simply by modifying the configuration.
The Bayesian network is expressed in data using three tables: the root node table, the conditional probability table, and the configuration table; their relationship is shown in Fig. 4. t_spec is the configuration table of the Bayesian network, in which the meaning of each parameter is represented by a variable; t_pre holds the conditional probabilities of the root nodes; t_parentTochild holds all the conditional probability tables of the network. The corresponding parameters are shown in Table 1.
TABLE 1 conditional probability table for Bayesian networks
The 11 nodes are denoted by the letters A, B, C, D, E, F, G, H, I, J, and K, with corresponding value sets {A1-A16}, {B1-B2}, {C1-C5}, {D1-D2}, {E1-E2}, {F1-F2}, {G1-G2}, {H1-H2}, {I1-I2}, {J1-J5}, and {K1-K4}. A parameter configuration table of the Bayesian network is then established; as shown in Table 2, all the corresponding parameter information is placed in the configuration table.
TABLE 2 parameter configuration Table for Bayesian networks
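Since the embodiment stores these tables in SQLite, the three-table layout (t_spec, t_pre, t_parentTochild) can be sketched as a schema. Only the table names come from the text; the column names here are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- t_spec: BN configuration table mapping parameter names to meanings
CREATE TABLE t_spec (
    param_name TEXT PRIMARY KEY,  -- e.g. 'A1'
    node       TEXT NOT NULL,     -- e.g. 'A'
    meaning    TEXT NOT NULL      -- e.g. 'position 1 of 16 in the bar'
);
-- t_pre: probabilities of the root nodes (A, B, C, K)
CREATE TABLE t_pre (
    node  TEXT NOT NULL,
    value TEXT NOT NULL,
    prob  REAL NOT NULL
);
-- t_parentTochild: all conditional probability tables of the network
CREATE TABLE t_parentTochild (
    child         TEXT NOT NULL,
    child_value   TEXT NOT NULL,
    parent_values TEXT NOT NULL,  -- e.g. 'A1,B1,C1,E1'
    prob          REAL NOT NULL
);
""")
conn.execute("INSERT INTO t_pre VALUES ('B', 'B1', 0.5)")
row = conn.execute("SELECT prob FROM t_pre WHERE node = 'B'").fetchone()
print(row[0])
```

Keeping parameter meanings in t_spec is what lets the same inference code serve different requirements by editing configuration rows rather than code.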
Step 3: The VE algorithm is used to infer the music rhythm, and the conditional probability table information is generated recursively, as shown in Fig. 5. In this embodiment, the parent nodes of node D in the BN are A, B, C, and E. When the parents take the values A1, B1, C1, and E1 and the child node takes the value D1, the corresponding probability is 0.6, the highest among all condition combinations, so this set of conditions is judged to be the obtained rhythm combination.
According to the Bayesian network parameter configuration table given in Table 2, the rhythm combination is obtained by looking up the parameter names, and the rhythm combinations are spliced in sequence to finally obtain the Bossa Nova style music rhythm.
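The selection rule in Step 3 (pick the child value with the highest conditional probability given the parent values) can be sketched as follows. The 0.6 entry mirrors the Fig. 5 example; the remaining probabilities are illustrative assumptions:

```python
# Rows of the CPT for node D: parent values -> {child value: probability}.
cpt_D = {
    ("A1", "B1", "C1", "E1"): {"D1": 0.6, "D2": 0.4},
    ("A1", "B1", "C1", "E2"): {"D1": 0.45, "D2": 0.55},
}

def most_probable_child(cpt, parent_values):
    """Return the child value with maximal conditional probability."""
    dist = cpt[parent_values]
    return max(dist, key=dist.get)

choice = most_probable_child(cpt_D, ("A1", "B1", "C1", "E1"))
print(choice)
```

Applying this argmax at each node and splicing the chosen values in sequence yields the generated rhythm, as described above.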

Claims (2)

1. A method for generating a Bossa Nova style music rhythm based on a Bayesian network, characterized by comprising the following steps:
Step 1: establishing and optimizing a Bossa Nova style music rhythm inference model: first constructing a fully connected graph from the connections among all elements of the music, then reducing the number of edges in the graph by abstracting the rests, anacrusis (weak-beat start) notes, dotted notes, and triplets into a separate layer, to obtain the optimized music rhythm inference model;
Step 2: establishing a prior knowledge base of music rhythm: first quantitatively analyzing the musical features of the knowledge base in the Bossa Nova style rhythm inference model obtained in step 1, then storing the statistics of these features in the optimized inference model, to generate a rhythm inference model equipped with the prior knowledge base;
Step 3: inferring the music rhythm with the VE algorithm: generating the conditional probability table information recursively, selecting the condition with the highest probability from the resulting table, and obtaining the Bossa Nova style music rhythm by looking up the BN parameter configuration table.
2. The method as claimed in claim 1, wherein the step 2 of establishing the prior knowledge base of music rhythm (quantitatively analyzing the musical features of the knowledge base in the inference model obtained in step 1, then storing their statistics in the optimized inference model) specifically comprises the following steps:
1) Prepare n representative Bossa Nova style pieces for machine learning;
2) Generate MIDI music through the machine learning, score the MIDI music, send the MIDI music and its score to the database for learning, and perform inference on the Bossa Nova style rhythm inference model obtained in step 1 to obtain its joint probability distribution:
P(A,B,C,D,E,F,G,H,I,J,K) = P(A)P(B)P(C)P(K)P(D|A,B,C,E)P(E|B,C)P(J|A,B,C,E)P(F|D)P(G|D)P(H|D)P(I|D)    (1)
3) Decompose the joint probability distribution of formula (1): first obtain the conditional joint distribution, then marginalize it to obtain the conditional probability:
P(F,G,H,I,J|A,B,C,K)    (2)
4) Quantitatively analyze the conditional probability obtained in step 3), as follows:
S1, processing of a rest (node F): set a note with pitch 0 and strength 0;
S2, processing of an anacrusis (node G): split the note into two notes, the first being a rest;
S3, processing of a dotted note (node H): extend the duration to 1.5 times the note value;
S4, processing of a triplet (node I): change the duration to 1/3 and generate three notes;
S5, the rhythm types (node J): whole, half, quarter, eighth, sixteenth, dotted, and triplet.
CN201910868054.2A 2019-09-11 2019-09-11 Method for generating Bossa Nova style music rhythm based on Bayesian network Active CN110675849B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910868054.2A CN110675849B (en) 2019-09-11 2019-09-11 Method for generating Bossa Nova style music rhythm based on Bayesian network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910868054.2A CN110675849B (en) 2019-09-11 2019-09-11 Method for generating Bossa Nova style music rhythm based on Bayesian network

Publications (2)

Publication Number Publication Date
CN110675849A true CN110675849A (en) 2020-01-10
CN110675849B CN110675849B (en) 2022-11-15

Family

ID=69078134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910868054.2A Active CN110675849B (en) 2019-09-11 2019-09-11 Method for generating Bossa Nova style music rhythm based on Bayesian network

Country Status (1)

Country Link
CN (1) CN110675849B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634842A (en) * 2020-12-14 2021-04-09 湖南工程学院 New music generation method based on dual-mode network wandering fusion
CN113780566A (en) * 2021-06-23 2021-12-10 核动力运行研究所 Bayesian network parameter initialization method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04157499A (en) * 1990-10-20 1992-05-29 Yamaha Corp Automatic rhythm creation device
WO2007119221A2 (en) * 2006-04-18 2007-10-25 Koninklijke Philips Electronics, N.V. Method and apparatus for extracting musical score from a musical signal
US20140116233A1 (en) * 2012-10-26 2014-05-01 Avid Technology, Inc. Metrical grid inference for free rhythm musical input
CN109754773A (en) * 2018-11-26 2019-05-14 成都云创新科技有限公司 Creative method is assisted based on big data audio
CN110134823A (en) * 2019-04-08 2019-08-16 华南理工大学 The MIDI musical genre classification method of Markov model is shown based on normalization note

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04157499A (en) * 1990-10-20 1992-05-29 Yamaha Corp Automatic rhythm creation device
WO2007119221A2 (en) * 2006-04-18 2007-10-25 Koninklijke Philips Electronics, N.V. Method and apparatus for extracting musical score from a musical signal
US20140116233A1 (en) * 2012-10-26 2014-05-01 Avid Technology, Inc. Metrical grid inference for free rhythm musical input
CN109754773A (en) * 2018-11-26 2019-05-14 成都云创新科技有限公司 Creative method is assisted based on big data audio
CN110134823A (en) * 2019-04-08 2019-08-16 华南理工大学 The MIDI musical genre classification method of Markov model is shown based on normalization note

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634842A (en) * 2020-12-14 2021-04-09 湖南工程学院 New music generation method based on dual-mode network wandering fusion
CN112634842B (en) * 2020-12-14 2024-04-05 湖南工程学院 New song generation method based on dual-mode network migration fusion
CN113780566A (en) * 2021-06-23 2021-12-10 核动力运行研究所 Bayesian network parameter initialization method

Also Published As

Publication number Publication date
CN110675849B (en) 2022-11-15

Similar Documents

Publication Publication Date Title
Chuan et al. A hybrid system for automatic generation of style-specific accompaniment
CN110675849B (en) Method for generating Bossa Nova style music rhythm based on Bayesian network
Schulze et al. Music generation with Markov models
CN107229733B (en) Extended question evaluation method and device
Järveläinen Algorithmic musical composition
KR101795706B1 (en) Method and recording medium for automatic composition using artificial neural network
WO2021161429A1 (en) Program generation device, program generation method, and program
CN107993636A (en) Music score modeling and generation method based on recurrent neural network
McLeod et al. A modular system for the harmonic analysis of musical scores using a large vocabulary
CN110517655B (en) Melody generation method and system
Okumura et al. Laminae: A stochastic modeling-based autonomous performance rendering system that elucidates performer characteristics.
Lou et al. Communicating with sentences: A multi-word naming game model
CN116229922A (en) Drum music generation method based on Bi-LSTM deep reinforcement learning network
CN113096624B (en) Automatic creation method, device, equipment and storage medium for symphony music
Consoli et al. Heuristic approaches for the quartet method of hierarchical clustering
Schankler et al. Emergent formal structures of factor oracle-driven musical improvisations
Kitahara et al. An interactive music composition system based on autonomous maintenance of musical consistency
Verbeurgt et al. A hybrid Neural-Markov approach for learning to compose music by example
Mo et al. A music generation model for robotic composers
CN109033110B (en) Method and device for testing quality of extended questions in knowledge base
Della Ventura Human-centred artificial intelligence in sound perception and music composition
Komatsu et al. A Music Composition Model with Genetic Programming.
CN109508185A (en) A kind of Code Review method and apparatus
Shirai et al. A proposal of an interactive music composition system using Gibbs sampler
Hastuti et al. Gamelan composer: a rule-based interactive melody generator for Gamelan music

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant