CN113571030B - MIDI music correction method and device based on hearing harmony evaluation - Google Patents


Info

Publication number
CN113571030B
CN113571030B (application CN202110825341.2A)
Authority
CN
China
Prior art keywords
chord
harmony
music
notes
evaluation
Prior art date
Legal status
Active
Application number
CN202110825341.2A
Other languages
Chinese (zh)
Other versions
CN113571030A (en)
Inventor
张克俊
陈其航
吴鑫达
许家瑞
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110825341.2A
Publication of CN113571030A
Application granted
Publication of CN113571030B
Active
Anticipated expiration


Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H — ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 — Details of electrophonic musical instruments
    • G10H 1/0008 — Associated control or indicating means
    • G10H 1/0025 — Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
    • G10H 1/0033 — Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H 1/0041 — Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
    • G10H 1/0058 — Transmission between separate instruments or between individual components of a musical system
    • G10H 1/0066 — Transmission between separate instruments or between individual components of a musical system using a MIDI interface

Abstract

The invention discloses a MIDI music correction method and device based on hearing harmony evaluation, comprising the following steps: acquiring MIDI music; performing tonality recognition on the MIDI music; performing chord identification and evaluating the rationality of the chords according to music theory rules; evaluating the harmony of the notes in four respects: pitch, intensity, duration and playing position; improving chord rationality according to the chord rationality evaluation result; and improving note harmony according to the note harmony evaluation. The chord rationality improvement comprises avoiding rule-violating chords, beautifying plain chords, avoiding and resolving out-of-key chords, and supplementing the opening and closing chords. The method and device effectively identify and analyze deficiencies in chord progression and note harmony according to music theory rules, and polish the chord progression and notes accordingly, so as to improve the harmony of intelligently generated music.

Description

MIDI music correction method and device based on hearing harmony evaluation
Technical Field
The invention belongs to the field of MIDI music, and particularly relates to a MIDI music correction method and device based on hearing harmony evaluation.
Background
MIDI is the most widely used standard music format in the field of music editing. A standard MIDI file is a special music file that records music with digital note-control signals, i.e., it consists of a set of serialized note commands rather than a recording of sound. Compared with common audio files such as WAV or MP3, MIDI can record the most original and rich information of music in very little space; its core content includes the onset and offset, pitch, intensity and timbre of each note, which makes it convenient to migrate and reproduce across devices. A music generation model learns the temporal distribution of notes from MIDI file data and can predict and generate music. Music generation models have made remarkable progress in intelligent music composition, but there is still room for improvement in the stability of the generation quality, in particular because a general evaluation index for judging generated music is lacking. Therefore, for intelligently generated music, a music evaluation system needs to be designed to control its quality.
Currently, the evaluation of AI music takes two main forms: subjective evaluation and objective evaluation. In the later stages of using an AI music generation model to assist people in creating music, the generated music is evaluated subjectively by humans and then modified and polished. In subjective evaluation, a person with musical experience repeatedly analyzes and modifies the music, which yields high-quality results. However, the degree of automation is clearly insufficient: for highly automated intelligent generation it cannot meet the efficiency requirements of intelligent music evaluation, and it depends heavily on the experience and level of the evaluator, making it hard to apply generally. Objective evaluation can meet the efficiency requirement and evaluates the generated music automatically without relying on many professionals; however, because it is highly automated, its musical expertise is weaker than that of subjective evaluation.
Among current methods for evaluating music generation, the work of Li-Chia Yang et al., "On the evaluation of generative models in music", is representative. For objective evaluation, Li-Chia Yang et al. propose a set of formative evaluation strategies split into two objectives: (1) absolute metrics that give insight into the nature and characteristics of a generated or collected data set; and (2) relative metrics for comparing two sets of data. Evaluating music with these two kinds of metrics yields quality scores for melody and rhythm, which can then be analyzed further. This is a formative evaluation strategy for systems that generate symbolic music; the idea is to apply multiple criteria so as to provide metrics that evaluate the basic technical properties of the generated music. Furthermore, Chuan et al. ("Modeling temporal tonal relations in polyphonic music through deep networks with a novel image-based representation") use modeled pitch tension and interval frequency as metrics to compare how well different feature representations perform. Metrics that take domain knowledge into account are advantageous not only in interpretability but also in generality and effectiveness, at least as long as the model is intended to generate music according to established rules. There are also objective evaluations that use probabilistic measures as metrics, i.e., the quality of the generated music is evaluated by statistically measuring polyphony, scale consistency, repetition and pitch span while monitoring the characteristics of the model's training process.
Disclosure of Invention
In view of the above, the present invention aims to provide a MIDI music correction method and apparatus based on hearing harmony evaluation, which effectively identify and analyze deficiencies in the chord progression and note harmony of music according to music theory rules and polish the chord progression and notes accordingly, so as to improve the harmony of intelligently generated music.
In a first aspect, an embodiment provides a MIDI music correction method based on hearing harmony evaluation, comprising the steps of:
obtaining MIDI music that has a stable, popular-music tonality and whose chords change in units of one bar;
performing music harmony evaluation on the MIDI music, the music harmony evaluation comprising tonality recognition, rationality evaluation of the chord progression and harmony evaluation of the notes; the rationality evaluation of the chord progression comprising chord identification and rationality evaluation of the chords according to music theory rules, and the harmony evaluation of the notes evaluating the notes in four respects: pitch, intensity, duration and playing position;
performing music harmony promotion according to the music harmony evaluation result, comprising improving chord rationality according to the rationality evaluation result of the chord progression and improving note harmony according to the harmony evaluation of the notes; the chord rationality improvement comprising avoiding rule-violating chords, beautifying plain chords, avoiding and resolving out-of-key chords, and supplementing the opening and closing chords.
In a second aspect, an embodiment provides a MIDI music correction apparatus based on hearing harmony evaluation, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the MIDI music correction method based on hearing harmony evaluation according to the first aspect when executing the computer program.
Compared with the prior art, the MIDI music correction method and device based on hearing harmony evaluation can effectively solve the harmony problems of generated MIDI music. They combine the respective advantages of subjective and objective evaluation while compensating for their disadvantages, incorporating human subjective experience into objective evaluation as much as possible, so that generated MIDI fragments are polished into complete pieces of music and the quality of intelligently generated music is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of music harmony evaluation of MIDI music according to an embodiment;
FIG. 2 is a flow chart of music harmony promotion provided by an embodiment;
FIG. 3 is a schematic diagram of major triads and their constituent notes provided by an embodiment;
FIG. 4 is a schematic diagram of minor triads and their constituent notes provided by an embodiment;
FIG. 5 is a graph of intensity-dissonance curves for notes provided by an embodiment;
FIG. 6 is a graph of duration-dissonance curves for notes provided by an embodiment;
FIG. 7 is a schematic diagram of a 4/4 beat strength relationship provided by an embodiment;
FIG. 8 is a plot of performance position versus dissonance curves for notes provided by an embodiment;
FIG. 9 is a flow chart of the calculation of harmony scores for notes provided by one embodiment;
FIG. 10 is a near-chord cycle diagram provided by an embodiment;
FIG. 11 is a piano-score excerpt from "Summer" (from the film Kikujiro) provided by an embodiment.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and embodiments, in order to make the objects, technical solutions and advantages of the present invention clearer. It should be understood that the detailed description is presented by way of example only and is not intended to limit the scope of the invention.
Prior art of this kind can provide objective evaluation metrics for features of the generated music such as note pitch, rhythm and melody from the perspective of rules and probabilities, and can reflect the quality of the generated music to a certain extent. For a piece of music, besides the main melody features that listeners attend to most, many notes also carry a hidden temporal feature governed by music theory, namely the chord progression. The chord progression reflects the quality and emotional color of the music to a great extent, and the degree of matching between the notes and their chords is therefore a major factor affecting how harmonious the music sounds. Based on this, the invention provides a MIDI music correction method and device based on hearing harmony evaluation, which convert subjective experience grounded in music theory rules and common manual practice into an objectively applicable form, analyze the chord information of MIDI music and its degree of matching with the notes to evaluate the quality of the music, and then automatically polish the chords and notes of the generated music according to the evaluation result, thereby improving the harmony of the music.
FIG. 1 is a flow chart of music harmony evaluation of MIDI music according to an embodiment, and FIG. 2 is a flow chart of music harmony promotion provided by an embodiment. As shown in FIG. 1 and FIG. 2, the MIDI music correction method based on hearing harmony evaluation provided by the embodiment includes a music harmony evaluation stage and a music harmony promotion stage. Each stage is described in detail below.
Stage one: musical harmony evaluation phase
The music harmony evaluation stage mainly evaluates the music harmony of the acquired MIDI music, and specifically comprises three parts: tonality recognition, rationality evaluation of the chord progression, and harmony evaluation of the notes.
In an embodiment, the acquired MIDI music may be MIDI music fragments generated by a music generation model, and may be single-track or multi-track music. The acquired MIDI music is also required to have a stable, popular-music tonality, with chord changes in units of one bar; for such MIDI music, tonality recognition, rationality evaluation of the chord progression and harmony evaluation of the notes can be performed as follows.
The tonality of the music is a reference condition for judging the rationality of the chord progression and the harmony of the notes; therefore, tonality recognition is performed before the rationality evaluation of the chord progression and the harmony evaluation of the notes.
Tonality recognition
In an embodiment, tonality recognition includes two parts: key signature identification and major/minor identification. Regarding key signature identification, each key signature contains seven in-key notes; correspondingly, if a note is an in-key note, there are seven possible key signatures it may belong to, i.e., each note has a fixed correspondence with the key signatures it may belong to. Based on this, key signature identification includes: for each note, according to the correspondence between the note and the key signatures it may belong to, the playing weight of the note is added, as an accumulated value, to each of those key signatures as the likelihood weight of that key signature; the key signature with the largest likelihood weight is then the key signature of the MIDI music. The playing weight is determined from the pitch, volume and duration of the note; preferably, the product of pitch and duration can be used as the playing weight of the note.
Regarding major/minor identification, once the key signature has been determined, the main task in identifying the major or minor key of the music is to distinguish between the relative major and minor keys. The relative major and minor keys share the same key signature, i.e., the interval relationships of their constituent notes are identical, but their tonics differ. Therefore, after the key signature has been determined, major/minor identification includes: for MIDI music under each key signature, the major and minor keys are distinguished by the counts of their tonics; when the count of the major-key tonic is greater than or equal to the count of the minor-key tonic, the music is regarded as a major key, otherwise as a minor key. For example, under a given key signature, if the number of C notes (the major-key tonic) is greater than or equal to the number of A notes (the minor-key tonic), the music is classified as major; if the number of C notes is less than the number of A notes, the music is classified as minor.
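The weighted-accumulation idea behind key signature and major/minor identification can be sketched as follows. This is a minimal illustration, assuming a simplified note representation (MIDI pitch number, volume, duration) and major-scale pitch-class templates for the twelve candidate key signatures; the use of pitch × duration as the playing weight follows the text, while the data structures and function names are illustrative only, not the patent's exact implementation.

```python
from collections import Counter

# Pitch classes of a major scale relative to its key signature (illustrative template)
MAJOR_SCALE_STEPS = {0, 2, 4, 5, 7, 9, 11}

def key_signature_of(notes):
    """notes: list of dicts with 'pitch' (MIDI number), 'volume', 'duration'.
    Add each note's playing weight to every key signature that contains its
    pitch class, then return the key signature (0-11) with the largest weight."""
    weights = Counter()
    for n in notes:
        play_weight = n['pitch'] * n['duration']   # playing weight, per the text
        pc = n['pitch'] % 12
        for tonic in range(12):                    # twelve candidate key signatures
            if (pc - tonic) % 12 in MAJOR_SCALE_STEPS:
                weights[tonic] += play_weight
    return max(weights, key=weights.get)

def is_major(notes, key_sig):
    """Distinguish the relative major and minor keys by counting their tonics
    (e.g. C vs. A for the no-sharp/no-flat key signature)."""
    major_tonic = key_sig % 12
    minor_tonic = (key_sig + 9) % 12               # relative minor tonic
    n_major = sum(1 for n in notes if n['pitch'] % 12 == major_tonic)
    n_minor = sum(1 for n in notes if n['pitch'] % 12 == minor_tonic)
    return n_major >= n_minor

if __name__ == '__main__':
    demo = [{'pitch': p, 'volume': 80, 'duration': 1.0}
            for p in (60, 62, 64, 65, 67, 69, 71, 72)]      # a C major scale
    sig = key_signature_of(demo)
    print(sig, 'major' if is_major(demo, sig) else 'minor')  # -> 0 major
```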
Rationality assessment of chord progression
In an embodiment, the rationality evaluation of the chord progression mainly evaluates whether the chord progression is reasonable, and specifically comprises chord identification and rationality evaluation of the chords, where the chords are evaluated according to music theory rules.
Chords, like keys, can be regarded as an abstraction of the melody. For a key, a note is an in-key note if it belongs to that key, otherwise it is an out-of-key note. Likewise, for a chord, a note is a chord tone if it belongs to that chord, otherwise it is a non-chord tone. Based on this, and similar to key signature identification, chord identification includes:
For each note of each bar, according to the correspondence between the note and the chords it may belong to, the chord weight of the note is added, as an accumulated value, to each of those chords as the likelihood weight of that chord; the chord with the largest likelihood weight is then taken as the chord of that bar, and the chords of all bars form the chord sequence of the MIDI music. The chord weight is determined from the pitch, volume and duration of the note; preferably, the product of pitch, volume and duration can be used as the chord weight of the note. The correspondence between major triads and their constituent notes is shown in FIG. 3, and the correspondence between minor triads and their constituent notes is shown in FIG. 4.
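A per-bar chord identification sketch in the same spirit follows, assuming major and minor triad templates as in FIG. 3 and FIG. 4 and the pitch × volume × duration chord weight mentioned above; the chord naming and the template table are illustrative assumptions rather than the patent's exact implementation.

```python
from collections import Counter

# Candidate triads: name -> constituent pitch classes (cf. FIG. 3 and FIG. 4)
ROOTS = ['C', '#C', 'D', 'Eb', 'E', 'F', '#F', 'G', 'Ab', 'A', 'Bb', 'B']
TRIADS = {}
for i, name in enumerate(ROOTS):
    TRIADS[name] = {i, (i + 4) % 12, (i + 7) % 12}          # major triad
    TRIADS[name + 'm'] = {i, (i + 3) % 12, (i + 7) % 12}    # minor triad

def chord_of_bar(bar_notes):
    """Add each note's chord weight to every candidate chord containing its
    pitch class; the chord with the largest accumulated weight wins the bar."""
    weights = Counter()
    for n in bar_notes:
        w = n['pitch'] * n['volume'] * n['duration']        # chord weight, per the text
        pc = n['pitch'] % 12
        for chord, pcs in TRIADS.items():
            if pc in pcs:
                weights[chord] += w
    return max(weights, key=weights.get) if weights else None

def chord_sequence(bars):
    """bars: list of bars, each a list of note dicts; returns one chord per bar."""
    return [chord_of_bar(bar) for bar in bars]
```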
Once chord identification is complete, the chord sequence can be evaluated. In intelligently generated music it is inevitable that chord sequences are rather random and unlike most human-composed music, so a rationality evaluation of the chords is required. In an embodiment, the rationality evaluation of the chords according to music theory rules includes:
analyzing the chord distribution of popular songs and the music theory rules commonly accepted by the public to determine a plurality of music theory rules, and assigning a different violation weight to each rule;
analyzing each chord in the chord sequence of the MIDI music, matching it against the music theory rules for violations, and cumulatively multiplying a set initial value by the violation weights of the violated rules to obtain the rationality score of each chord.
The plurality of music theory rules determined in the embodiment include:
Music theory rule 1: the first chord of a song is usually the tonic chord or the dominant chord. For example, a major-key song typically begins with the C chord or the G chord, and a minor-key song with the Am chord or the E chord. If the opening chord is not C or G (for a minor key, not Am or E), multiply by the coefficient k1.
Music theory rule 2: in a major-key song, a major chord whose root is an out-of-key note rarely occurs, and a minor chord on such a root occurs even less often. If an out-of-key major chord appears in a major-key song, multiply by the coefficient k2; if an out-of-key minor chord appears, multiply by 2/3 × k2.
Music theory rule 3: in a minor-key song, a minor chord rooted on an out-of-key note rarely occurs (although slightly more often than in rule 2), and a major chord rooted on an out-of-key note also rarely occurs. If an out-of-key minor chord appears in a minor-key song, multiply by the coefficient k3; if an out-of-key major chord appears, multiply by 1.5 × k3.
Music theory rule 4: popular-style songs are mostly phrased in units of two or four bars, and the chord that starts a new phrase is likely to differ from the chord that ended the previous phrase, i.e., to change. That is, with four bars as the phrase unit, if a phrase does not end on the tonic chord and the first chord of the next phrase is the same as that last chord, multiply by the coefficient k4. The same applies when two bars are used as the phrase unit.
Music theory rule 5: a single chord sometimes extends over two bars, but using the same chord in two consecutive bars is usually rather monotonous. If a chord is the same as the chord of the previous bar, multiply by the coefficient k5.
Music theory rule 6: if a song's chords are rich and its color changes are clear, a bar's chord rarely coincides with the chords of the two preceding bars. For a song with rich chords, when a chord is the same as both the previous chord and the one before it, cumulatively multiply by the coefficient k6.
Music theory rule 7: within a phrase, if an F chord is followed by a C chord, the probability that the next chord is A or Am is low. That is, if the chord two bars back is F and the previous chord is C, multiply by the coefficient k7 when the current chord is A or Am.
Music theory rule 8: after a C chord, an E chord or Em chord is relatively unlikely to follow. That is, if the previous chord is C, multiply by the coefficient k8 when the current chord is E or Em.
Music theory rule 9: after an E chord, a C chord or Cm chord rarely follows. That is, if the previous chord is E, multiply by the coefficient k9 when the current chord is C or Cm.
Music theory rule 10: when an out-of-key chord appears, the next chord usually resolves the previous chord back toward the key; a dominant-tonic resolution may be used, i.e., the out-of-key chord is treated as a dominant chord and resolved to its tonic. For example, if a D chord occurs, the next chord may be G; if a B chord occurs, an E chord may follow.
Music theory rule 11: if a phrase is longer than 4 bars, the dominant chord is unlikely to appear in the penultimate bar of the phrase. That is, in phrases longer than 4 bars, if the penultimate bar of the phrase is the dominant chord, multiply by the coefficient k11.
Music theory rule 12: the last chord of a phrase is very likely to be the dominant chord or the tonic chord. That is, if the last chord of a phrase is neither the dominant chord nor the tonic chord, multiply by the coefficient k12.
Music theory rule 13: a song ends on the tonic chord of its major or minor key, and the chord before it is most often the dominant chord; only in a minority of cases does another chord precede the ending. That is, if the penultimate chord of the song is not the dominant chord, multiply by the coefficient k13.
In the embodiment, the violation weights of the above 13 music theory rules are given by way of example, as shown in Table 1:
TABLE 1. Violation weights of the music theory rules
Chord evaluation is performed based on the above music theory rules and violation weights: when a violation of a rule is found in the music, the chord is treated as matching that violation and is scored with the violation weight reflecting how much the violation may affect the listening experience. When the rationality score of each bar's chord is evaluated, a cumulative-multiplication method is used: starting from a set initial value of 1, the violation weights of all violated rules are multiplied in, and the resulting value is the rationality score of the chord. For MIDI music with n chords, the mean of the n chord rationality scores is taken as the rationality score of the MIDI music.
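The cumulative-multiplication scoring can be illustrated as below. The actual violation weights of Table 1 are not reproduced in this text, so the coefficients used here are placeholders, and only a few of the thirteen rules are encoded to show the pattern; this is a sketch of the scoring scheme, not the full rule set.

```python
# Placeholder violation weights; the actual Table 1 values are not reproduced here
K = {1: 0.9, 5: 0.85, 8: 0.8, 12: 0.75}

def chord_rationality(chords, is_major_key=True, phrase_len=4):
    """Score each chord of the sequence: start from the initial value 1 and
    multiply in the violation weight of every rule the chord breaks.
    Only rules 1, 5, 8 and 12 are encoded here to show the pattern."""
    scores = []
    for i, ch in enumerate(chords):
        s = 1.0
        # Rule 1: the opening chord should be the tonic or dominant chord
        if i == 0 and is_major_key and ch not in ('C', 'G'):
            s *= K[1]
        # Rule 5: the same chord in two consecutive bars is monotonous
        if i > 0 and ch == chords[i - 1]:
            s *= K[5]
        # Rule 8: an E or Em chord right after a C chord is unlikely
        if i > 0 and chords[i - 1] == 'C' and ch in ('E', 'Em'):
            s *= K[8]
        # Rule 12: a phrase should end on the dominant or tonic chord
        if (i + 1) % phrase_len == 0 and ch not in ('G', 'C'):
            s *= K[12]
        scores.append(s)
    piece_score = sum(scores) / len(scores)   # mean over the n chords
    return scores, piece_score

# e.g. chord_rationality(['C', 'C', 'F', 'G', 'Am', 'F', 'E', 'C'])
```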
Harmony assessment of notes
In the embodiment, the listening harmony of notes is evaluated mainly in four respects: pitch, intensity, duration and playing position.
The dissonance of a note with respect to pitch is judged mainly by the following indicators: whether it is an out-of-key note, and whether it is a chord tone. Specifically, when notes are evaluated for dissonance with respect to pitch, the pitch harmony of a note is evaluated according to the following priorities:
First priority: for each note, the pitch harmony of an in-key note is greater than that of an out-of-key note;
Second priority: for both out-of-key and in-key notes, the pitch harmony is further determined by the relationship between the note and the chord tones: the pitch harmony of a note that forms no semitone interval with any chord tone is greater than that of a note that forms a semitone interval with a non-root chord tone, which in turn is greater than that of a note that forms a semitone interval with the root.
In an embodiment, the dissonance indicator of a note is judged according to pitch, and the quantized indicator is used as its degree of dissonance, as shown in Table 2:
TABLE 2
Intensity is the volume: it determines how strongly a note is played and reflects how prominent the note is. The stronger a note's intensity, the louder it is played and the stronger and more noticeable it sounds to the listener. Conversely, the weaker the intensity, the quieter the playing and the weaker the listening impression. Whether or not a note is harmonious in pitch, its auditory dissonance increases with increasing intensity.
To evaluate a note's dissonance with respect to intensity, an intensity-dissonance curve is first constructed with intensity on the horizontal axis and auditory dissonance on the vertical axis, as follows: taking a bar as the unit, the mean volume of all notes in the bar is used as the bar's mean volume (if the bar contains no notes, the previous bar's mean volume is used); taking this mean volume as a boundary, the starting point of the intensity-dissonance curve for notes with harmonious pitch is constrained to lie above the boundary, and the starting point of the curve for pitch-dissonant notes is constrained to lie below it; within the curve, intensity and auditory dissonance are positively correlated, as shown in FIG. 5. Each note's intensity dissonance is then evaluated against the intensity-dissonance curve to determine its auditory dissonance.
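A possible reading of the intensity-dissonance curve is sketched below, assuming a simple linear curve whose starting point is the bar's mean volume for pitch-harmonious notes and zero for pitch-dissonant notes; the slope and the exact curve shape of FIG. 5 are not specified in the text and are assumed here.

```python
def bar_mean_volume(bar_notes, previous_mean):
    """Mean volume of the bar; an empty bar inherits the previous bar's mean."""
    if not bar_notes:
        return previous_mean
    return sum(n['volume'] for n in bar_notes) / len(bar_notes)

def intensity_dissonance(volume, mean_volume, pitch_harmonious, slope=0.01):
    """Intensity-dissonance curve: dissonance rises with intensity.  The curve
    for pitch-harmonious notes starts at the bar's mean-volume boundary, the
    curve for pitch-dissonant notes starts below it (here at zero)."""
    start = mean_volume if pitch_harmonious else 0.0
    return max(0.0, slope * (volume - start))
```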
Duration refers to how long a note is played, i.e., how long its sound lasts. Like intensity, duration reflects how prominent a note is. The longer a note lasts, the longer it takes to play and the more persistent and prominent the listening impression it leaves; conversely, the shorter the duration, the shorter the playing time and the briefer and weaker the impression.
Based on this, when a note is evaluated for dissonance with respect to duration, any duration of a note that is harmonious in both pitch and intensity is likewise regarded as harmonious. For notes that are not harmonious in pitch and intensity, a duration-dissonance curve is constructed with duration on the horizontal axis and auditory dissonance on the vertical axis; duration and auditory dissonance are positively correlated, and the starting point of the curve lies on the vertical axis, as shown in FIG. 6, i.e., for such notes even a very short duration already carries some auditory dissonance. In an embodiment, notes that are not harmonious in pitch and intensity include notes that are dissonant in pitch only, in intensity only, or in both.
The playing position of a note refers to the beat point at which it is played. The same note (same pitch, intensity and duration) played at different beat points has different effects on the emotional interpretation and on the harmony of the listening experience. In general, on the strong beat of a measure, a pitch-dissonant note produces relatively high auditory dissonance. However, the same pitch-dissonant note played at a non-strong beat point produces less auditory dissonance, because it can act as a passing embellishment. If a pitch-dissonant note is strong but short and appears just before a strong beat, it may serve as a decoration: not only does it not offend the ear, it can even enhance the color.
Based on this, when a note is evaluated for dissonance with respect to playing position, any playing position of a note that is harmonious in pitch, intensity and duration is likewise regarded as harmonious. For notes that are not harmonious in pitch, intensity and duration, the auditory dissonance increases when they are played on a strong beat point and decreases when they are played off the strong beat points. In an embodiment, notes that are not harmonious in pitch, intensity and duration refers to notes for which at least one of pitch, intensity and duration is dissonant.
In the embodiment, taking 4/4 time as an example, the strength relationship of the beats can be divided into: strong, weak, secondary strong, weak, as shown in FIG. 7; the strong beats are beats 1 and 3. Under this meter, the relationship between a note's auditory dissonance and its playing position is shown in FIG. 8.
In summary, when the MIDI music is single-track music, each note of the MIDI music is given a harmony evaluation based on its dissonance evaluation results for pitch, intensity, duration and playing position. Similar to the rationality evaluation of chords, each dissonance evaluation result is assigned a corresponding evaluation weight, and the harmony score of each note is obtained by cumulatively multiplying a set initial value by the evaluation weights corresponding to its evaluation results.
When the MIDI music is multi-track music, the harmony score of each note is computed as for single-track music except that an additional track weight is multiplied into the set initial value; the mean of the harmony scores of all notes is the harmony score of the MIDI music.
FIG. 9 is a flow chart of the calculation of a note's harmony score according to an embodiment. As shown in FIG. 9, when calculating the harmony score of a note, the initial value of the score is set to 1, and the harmony score is then obtained by following the calculation flow of FIG. 9 with the corresponding evaluation weights.
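The multiply-from-1 score flow of FIG. 9 can be summarized as follows; the concrete evaluation weights for each aspect are not reproduced in this text and are passed in as parameters, and the per-track weight for multi-track music is included as an optional factor.

```python
def note_harmony_score(aspect_weights, track_weight=1.0):
    """aspect_weights: evaluation weights for pitch, intensity, duration and
    playing position (1.0 = the aspect is fully harmonious).  Start from the
    initial value 1 and multiply the weights in; for multi-track music an
    extra per-track weight is multiplied in as well."""
    score = 1.0
    for w in aspect_weights:
        score *= w
    return score * track_weight

def midi_harmony_score(per_note_scores):
    """The harmony score of the piece is the mean over all of its notes."""
    return sum(per_note_scores) / len(per_note_scores)

# e.g. a pitch-dissonant note played loudly on a strong beat:
# note_harmony_score([0.7, 0.8, 0.9, 0.85])  # -> 0.4284
```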
Stage two: musical harmony enhancing stage
The music harmony promotion stage mainly improves the music harmony according to the music harmony evaluation result, and specifically includes: improving chord rationality according to the rationality evaluation result of the chord progression, and improving note harmony according to the harmony evaluation of the notes.
Rationality improvement of chord
In an embodiment, improving chord rationality according to the rationality evaluation result of the chord progression includes: avoiding rule-violating chords, beautifying plain chords, avoiding and resolving out-of-key chords, and supplementing the opening and closing chords.
Regarding the avoidance of rule-violating chords: in the rationality evaluation of the chords, several music theory rules are used and the algorithm scores every candidate chord of each bar. In other words, as the chord assigned to a bar changes, the evaluation scores obtained differ, i.e., there is a lowest and a highest value. The chord with the highest score is considered by the algorithm to be the most reasonable chord for that bar, so it can serve as the recommended chord, which to a certain degree already avoids rule-violating chords.
On this basis, when rule-violating chords are avoided, the rationality score of each candidate chord is directly multiplied by a low probability coefficient for violating chords such as out-of-key chords, so that the probability of recommending them is greatly reduced and the rule-violating chords are avoided.
Regarding the beautification of plain chords: in MIDI music, several consecutive bars occasionally show no chord change. When the melodic color changes little and plain chords are beautified, then for a chord that appears several times in succession, one of its occurrences is adjusted to the corresponding near chord; alternatively, the main melody is treated as the chord and the part originally treated as the chord is treated as the main melody, to enrich the melody.
In an embodiment, the first method of beautifying plain chords is substitution by a near chord. A near chord is a chord whose color does not differ significantly from the original. For example, the Am chord and the F chord, whose constituent tones are A, C, E and F, A, C respectively, differ by only one tone; the F chord and the Dm chord are related in the same way. The near-chord cycle is shown in FIG. 10. For example, when four consecutive Am chords are encountered, the third one may be modified to an F chord.
In the embodiment, the second method of beautifying plain chords is to treat the main melody as the chord and the part originally treated as the chord as the main melody. If a section of melody is recognized as the same chord throughout during chord identification, the background emotional color it exhibits is essentially unchanged, so the underlying chords can be considered identical. For example, in "Summer" from Kikujiro, the melody of one phrase is a strictly periodic cycle, so the progression is carried by the chords, as shown in FIG. 11. Although the melody of this passage cycles as "G5-G4-C5-D5-G5-D5-C5-G4" (in D major), the chords used are not "C-C-C-C" or "G-G-G-G" but "C-Am-F-G", which carries emotional movement. Therefore, when the chords of eight consecutive bars are identical, the first method is not used; instead the second method is applied, adjusting in a "descending second - ascending first" manner.
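A sketch of the first beautification method (near-chord substitution) is given below, assuming a small illustrative near-chord table in the spirit of the cycle in FIG. 10 (each entry differs from its key by one constituent tone); the run length of four and the choice of replacing the third chord of the run follow the Am example above, while everything else is an assumption.

```python
# Illustrative near-chord table: each chord mapped to a chord that differs
# from it by a single constituent tone (in the spirit of FIG. 10)
NEAR_CHORD = {'C': 'Am', 'Am': 'F', 'F': 'Dm', 'Dm': 'Bb', 'G': 'Em', 'Em': 'C'}

def beautify_plain_chords(chords, run_length=4):
    """If the same chord fills run_length consecutive bars, replace the third
    chord of the run with its near chord (first beautification method)."""
    out = list(chords)
    i = 0
    while i + run_length <= len(out):
        window = out[i:i + run_length]
        if len(set(window)) == 1 and window[0] in NEAR_CHORD:
            out[i + 2] = NEAR_CHORD[window[0]]   # adjust the third chord of the run
            i += run_length
        else:
            i += 1
    return out

# beautify_plain_chords(['Am', 'Am', 'Am', 'Am', 'G', 'G'])
# -> ['Am', 'Am', 'F', 'Am', 'G', 'G']
```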
Regarding the avoidance and resolution of out-of-key chords: out-of-key chords may appear in the chords identified from the generated music. The appearance of an out-of-key chord can produce a color change that sounds like a modulation even though no modulation actually occurs. An out-of-key chord must be short-lived and resolved quickly, otherwise it will sound like a change of key. If the key really did change, the chord would probably not be defined as out-of-key but rather as an in-key chord of the new key. So a chord defined as out-of-key should occur at an intermediate position and needs to be resolved to an in-key chord as soon as possible.
Therefore, the first step of out-of-key chord improvement is to modify any out-of-key chord that appears at a position where out-of-key chords should not occur into the in-key chord whose interval relationship to it is closest, so as to preserve the emotional motivation of the original music as much as possible; the beginning and end of each phrase are defined as positions where out-of-key chords should not occur.
In the middle of a phrase (excluding its beginning and end), occasional out-of-key chords are allowed, but they must not occur too frequently, otherwise the tonality of the song becomes blurred and its emotional color confused. During chord promotion, the probability of out-of-key chords occurring in the middle of a phrase is therefore greatly reduced. In the embodiment, the second step of out-of-key chord improvement is to correct, using preset reduction probability coefficients, the rationality scores of the out-of-key chords determined from the key, so as to lower them. For example, in a major key, when the chord promotion algorithm identifies and evaluates the possible chords of each bar, the probability of out-of-key chords such as Cm, #C, #Cm, Eb and Ebm appearing is greatly reduced. The reduction probability coefficients for major-key and minor-key songs are shown in Table 3, where the values are the probability coefficients for reducing out-of-key chords. Avoidance of out-of-key chords is achieved by modifying their rationality scores with these probability coefficients.
TABLE 3
The corrected rationality scores of the out-of-key chords, together with the rationality scores of the in-key chords, are then used as the probability values of the candidate chords of each bar in the chord sequence; a roulette-wheel algorithm draws the chord of each bar according to these probability values, and the drawn chord replaces the original chord of the bar in the chord sequence.
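The score reduction for out-of-key chords and the roulette-wheel draw can be sketched as follows; the reduction coefficient stands in for the Table 3 values, which are not reproduced in this text, and the roulette draw is a standard proportional selection.

```python
import random

OFF_KEY_COEFF = 0.1   # placeholder for the Table 3 reduction coefficients

def candidate_probabilities(rationality_scores, off_key_flags):
    """Lower the score of every out-of-key candidate, then use the adjusted
    scores as (unnormalised) probabilities for the roulette draw."""
    return [s * (OFF_KEY_COEFF if off_key else 1.0)
            for s, off_key in zip(rationality_scores, off_key_flags)]

def roulette_draw(candidates, probabilities, rng=random):
    """Roulette-wheel selection: draw one candidate in proportion to its probability."""
    total = sum(probabilities)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for item, p in zip(candidates, probabilities):
        acc += p
        if r <= acc:
            return item
    return candidates[-1]

# Per-bar candidate chords with adjusted scores, Eb flagged as out of key:
# roulette_draw(['C', 'Am', 'Eb'],
#               candidate_probabilities([0.9, 0.8, 0.7], [False, False, True]))
```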
After the first and second steps of out-of-key chord improvement, the chord sequence may still contain out-of-key chords. If so, the third step of out-of-key chord improvement is performed: the chord immediately following an out-of-key chord in the chord progression is modified into the tonic chord of that out-of-key chord according to the dominant-tonic relationship, and when the tonic chord of the out-of-key chord is itself still out of key, the following chord is instead modified into the in-key chord with the closest interval relationship.
In embodiments, the dominant-tonic relationship of chords is a relative concept that can exist independently of the key. For example, the E chord is the dominant chord of the Am chord, so the Am chord is the tonic chord of the E chord. Similarly, the D chord is the dominant chord of the G chord, and the G chord is the tonic chord of the D chord. Thus, after an out-of-key chord occurs in the chord progression, it can be resolved to its tonic chord. For example, if a B chord appears, the next chord may resolve to an E chord.
If the tonic chord of the out-of-key chord is itself still out of key, the out-of-key chord cannot be resolved well in this way. In that case it is resolved as soon as possible to the in-key chord with the closest interval relationship; an ascending resolution is generally used. For example, if a #C chord appears, its tonic chord is the #F chord, which is still out of key, so the chord can instead be resolved to the Dm chord, resolving the out-of-key chord.
Supplementation of the opening and closing chords: MIDI music fragments without a beginning and an ending need the chord characteristics of a beginning and an ending added in order to become a complete piece. Thus, when the opening and closing chords are supplemented, the progression starts with the tonic or dominant chord and ends by resolving from the dominant chord to the tonic chord. For example, major-key music may start with the C chord or the G chord and develop through other chords, and in the penultimate bar the G or G7 chord resolves to the tonic C chord. A minor-key piece may start with the Am chord or the E chord and finally resolve from the E chord to the tonic Am chord.
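A minimal sketch of supplementing the opening and closing chords for a fragment follows, assuming the C/G7 (major) and Am/E (minor) choices named above; whether the opening chord is inserted as a new bar or the existing first bar is rewritten is an implementation detail not specified in the text.

```python
def supplement_head_tail(chords, major=True):
    """Prepend a tonic opening chord if the fragment does not already start on
    the tonic or dominant, and append a dominant-to-tonic ending
    (G7 -> C for a major key, E -> Am for a minor key)."""
    tonic, dominant, opening = ('C', 'G7', ('C', 'G')) if major else ('Am', 'E', ('Am', 'E'))
    out = list(chords)
    if not out or out[0] not in opening:
        out.insert(0, tonic)            # start on the tonic chord
    out.extend([dominant, tonic])       # penultimate bar dominant, resolve to the tonic
    return out

# supplement_head_tail(['F', 'G', 'Am'])  # -> ['C', 'F', 'G', 'Am', 'G7', 'C']
```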
Harmony enhancement of notes
Improving the harmony of the melody notes mainly means improving their pitch harmony. The harmony of the melody notes has already been scored above, and this score serves as an important reference for modifying the notes.
When the score of a note is less than 1, it is considered to carry a certain degree of dissonance; the lower the score, the higher the dissonance, but dissonance is not necessarily undesirable. A graceful song obviously cannot consist of chord tones alone: the chord tones are the outline of the emotion, while much of the melody's color comes from pitches with richer sensations such as non-chord tones and out-of-key notes. The presence of non-chord tones in the main melody does not necessarily harm the harmony. Therefore, when polishing the pitches of the notes, not all non-chord tones should be modified into chord tones, which would make the music lose some of its essence. However, if a note is too dissonant, it clearly harms the listening experience no matter how much variety it adds, so the principle of harmony promotion is to reduce the dissonant component while retaining a certain diversity. Based on this, in the embodiment, when the harmony of the notes is improved, the difference between 1 and each note's harmony score is used as its probability of being modified, a draw is made according to this probability, and when the draw decides that a note is to be modified, the note is modified into the chord tone with the closest interval relationship to it.
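The probabilistic note modification can be sketched as follows, assuming that the "closest chord tone" is the chord tone at the smallest semitone distance from the original pitch; the note and chord representations are illustrative.

```python
import random

def nearest_chord_tone(pitch, chord_pcs):
    """Return the chord tone closest to `pitch` in semitones
    (chord_pcs: set of pitch classes of the bar's chord)."""
    best, best_dist = pitch, 12
    for candidate in range(pitch - 12, pitch + 13):
        if candidate % 12 in chord_pcs and abs(candidate - pitch) < best_dist:
            best, best_dist = candidate, abs(candidate - pitch)
    return best

def promote_note_harmony(notes, chord_pcs, scores, rng=random):
    """Each note is modified with probability 1 - harmony score; a modified
    note is replaced by the chord tone nearest to it in interval."""
    out = []
    for n, s in zip(notes, scores):
        if rng.random() < 1.0 - s:     # drawn for modification
            n = dict(n, pitch=nearest_chord_tone(n['pitch'], chord_pcs))
        out.append(n)
    return out
```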
In an embodiment, the score obtained from the evaluation can be regarded as the "probability of not being modified": if a note's score is 0.8, it has an 80% probability of not being modified (i.e., a 20% probability of being modified). When the roulette-wheel draw decides that it is to be modified, it is modified into the chord tone closest to it in interval, to ensure that the harmony is not further reduced.
The embodiment also provides a MIDI music correction device based on the hearing harmony assessment, which comprises a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor realizes the MIDI music correction method based on the hearing harmony assessment when executing the computer program.
In practical applications, the memory may be a near-end volatile memory such as a RAM, a nonvolatile memory such as a ROM, FLASH, a floppy disk or a mechanical hard disk, or remote cloud storage. The processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP) or a field-programmable gate array (FPGA), i.e., the steps of the MIDI music correction method based on hearing harmony evaluation can be implemented by these processors.
The MIDI music correction method and device based on hearing harmony evaluation provided by the embodiment can effectively solve the harmony problems of generated MIDI music. They combine the respective advantages of subjective and objective evaluation while compensating for their disadvantages, incorporating human subjective experience into objective evaluation as much as possible, so that generated MIDI fragments are polished into complete pieces of music and the quality of intelligently generated music is improved.
The foregoing detailed description of the preferred embodiments and their advantages is merely illustrative of the presently preferred embodiments of the invention and is not intended to limit it; any changes, additions, substitutions and equivalents made within them are intended to be included within the scope of the invention.

Claims (9)

1. A MIDI music correction method based on hearing harmony evaluation, comprising the steps of:
acquiring MIDI music, wherein the chords of the MIDI music change in units of one bar;
performing music harmony evaluation on the MIDI music, the music harmony evaluation comprising tonality recognition, rationality evaluation of the chord progression and harmony evaluation of the notes; the rationality evaluation of the chord progression comprising chord identification and rationality evaluation of the chords according to music theory rules, and the harmony evaluation of the notes evaluating the notes in four respects: pitch, intensity, duration and playing position;
performing music harmony promotion according to the music harmony evaluation result, comprising improving chord rationality according to the rationality evaluation result of the chord progression and improving note harmony according to the harmony evaluation of the notes; the chord rationality improvement comprising avoiding rule-violating chords, beautifying plain chords, avoiding and resolving out-of-key chords, and supplementing the opening and closing chords;
wherein, when the notes are evaluated for dissonance with respect to pitch, the pitch harmony of the notes is evaluated according to the following priorities:
first priority: for each note, the pitch harmony of an in-key note is greater than that of an out-of-key note;
second priority: for both out-of-key and in-key notes, the pitch harmony is further determined by the relationship between the note and the chord tones: the pitch harmony of a note that forms no semitone interval with any chord tone is greater than that of a note that forms a semitone interval with a non-root chord tone, which in turn is greater than that of a note that forms a semitone interval with the root.
2. The MIDI music correction method based on hearing harmony evaluation according to claim 1, wherein the tonality recognition comprises key signature identification and major/minor identification;
the key signature identification comprises: for each note, according to the correspondence between the note and the key signatures it may belong to, accumulating the playing weight of the note onto each of those key signatures as the likelihood weight of the key signature, and then taking the key signature with the largest likelihood weight as the key signature of the MIDI music; wherein the playing weight is determined from the pitch, volume and duration of the note;
the major/minor identification comprises: for MIDI music under each key signature, distinguishing the major and minor keys by the counts of their tonics; when the count of the major-key tonic is greater than or equal to the count of the minor-key tonic, the music is regarded as a major key, otherwise as a minor key.
3. The MIDI music correction method based on hearing harmony evaluation according to claim 1, wherein the chord identification comprises:
for each note of each bar, according to the correspondence between the note and the chords it may belong to, accumulating the chord weight of the note onto each of those chords as the likelihood weight of the chord, then taking the chord with the largest likelihood weight as the chord of the bar, the chords of all bars forming the chord sequence of the MIDI music;
wherein the chord weight is determined from the pitch, volume and duration of the note.
4. The MIDI music correction method based on hearing harmony evaluation according to claim 1, wherein the rationality evaluation of the chords according to music theory rules comprises:
analyzing the chord distribution of popular songs and commonly accepted music theory rules to determine a plurality of music theory rules, and assigning a different violation weight to each music theory rule;
analyzing each chord in the chord sequence of the MIDI music, matching it against the music theory rules for violations, and cumulatively multiplying a set initial value by the violation weights of the violated rules to obtain the rationality score of each chord.
5. The MIDI music correction method based on hearing harmony evaluation according to claim 1, wherein, when the notes are evaluated for dissonance with respect to intensity, an intensity-dissonance curve with intensity on the horizontal axis and auditory dissonance on the vertical axis is first constructed, comprising: taking a bar as the unit, using the mean volume of all notes in the bar as the bar's mean volume, and using the previous bar's mean volume if the bar has no notes; taking this mean volume as a boundary, constraining the starting point of the intensity-dissonance curve for notes with harmonious pitch to lie above the boundary and the starting point of the curve for pitch-dissonant notes to lie below the boundary, intensity and auditory dissonance being positively correlated within the curve; and then evaluating each note's intensity dissonance against the intensity-dissonance curve to determine the auditory dissonance of the note;
when the notes are evaluated for dissonance with respect to duration, any duration of a note that is harmonious in pitch and intensity is likewise regarded as harmonious; for notes that are not harmonious in pitch and intensity, a duration-dissonance curve with duration on the horizontal axis and auditory dissonance on the vertical axis is constructed, duration and auditory dissonance being positively correlated and the starting point of the curve lying on the vertical axis, i.e., for notes that are not harmonious in pitch and intensity, even a very short duration carries a certain auditory dissonance;
when the notes are evaluated for dissonance with respect to playing position, any playing position of a note that is harmonious in pitch, intensity and duration is likewise regarded as harmonious; for notes that are not harmonious in pitch, intensity and duration, the auditory dissonance increases when they are played on a strong beat point and decreases when they are played off the strong beat points.
6. The MIDI music correction method based on hearing harmony evaluation according to claim 1 or 5, wherein, when the MIDI music is single-track music, each note's dissonance evaluation results for pitch, intensity, duration and playing position are given corresponding evaluation weights, the evaluation weights corresponding to the evaluation results are cumulatively multiplied with a set initial value to obtain the harmony score of each note, and the mean of the harmony scores of all notes is the harmony score of the MIDI music;
when the MIDI music is multi-track music, the harmony score of each note is calculated as for single-track music except that an additional track weight is multiplied into the set initial value, and the mean of the harmony scores of all notes is the harmony score of the MIDI music.
7. The MIDI music correction method based on hearing harmony evaluation according to claim 1 or 4, wherein, when rule-violating chords are avoided, the rationality score of each chord is multiplied by a low probability coefficient for out-of-key and other rule-violating chords, so that their recommendation probability is greatly reduced and the rule-violating chords are avoided;
when plain chords are beautified, for a chord that appears several times in succession, one of its occurrences is adjusted to the corresponding near chord; or the main melody is treated as the chord and the part originally treated as the chord is treated as the main melody;
when out-of-key chords are avoided and resolved, an out-of-key chord at a position where out-of-key chords should not occur is modified into the in-key chord with the closest interval relationship to it; the rationality scores of the out-of-key chords determined from the key are corrected with preset reduction probability coefficients, the corrected rationality scores of the out-of-key chords and the rationality scores of the in-key chords are then used as the probability values of the candidate chords of each bar in the chord sequence, a roulette-wheel algorithm draws the chord of each bar according to these probability values, and the drawn chord is used as the polished suggested chord to replace the original chord of the bar; the chord immediately following an out-of-key chord in the chord progression is modified into the tonic chord of the out-of-key chord according to the dominant-tonic relationship, and when the tonic chord of the out-of-key chord is itself still out of key, the following chord is modified into the in-key chord with the closest interval relationship;
when the opening and closing chords are supplemented, the tonic or dominant chord is used as the beginning, and the ending proceeds through the dominant chord and finishes on the tonic chord.
8. The MIDI music correction method based on hearing harmony evaluation according to claim 1, wherein, when the harmony of the notes is improved, the difference between 1 and each note's harmony score is used as its probability of being modified, a draw is made according to this probability, and when the draw decides that a note is to be modified, the note is modified into the chord tone with the closest interval relationship to it.
9. A MIDI music correction device based on hearing harmony evaluation, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the MIDI music correction method based on hearing harmony evaluation according to any one of claims 1 to 8 when executing the computer program.
CN202110825341.2A 2021-07-21 2021-07-21 MIDI music correction method and device based on hearing harmony evaluation Active CN113571030B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110825341.2A CN113571030B (en) 2021-07-21 2021-07-21 MIDI music correction method and device based on hearing harmony evaluation

Publications (2)

Publication Number Publication Date
CN113571030A CN113571030A (en) 2021-10-29
CN113571030B true CN113571030B (en) 2023-10-20

Family

ID=78165958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110825341.2A Active CN113571030B (en) 2021-07-21 2021-07-21 MIDI music correction method and device based on hearing harmony evaluation

Country Status (1)

Country Link
CN (1) CN113571030B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112951183B (en) * 2021-02-25 2022-08-16 西华大学 Music automatic generation and evaluation method based on deep learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7834260B2 (en) * 2005-12-14 2010-11-16 Jay William Hardesty Computer analysis and manipulation of musical structure, methods of production and uses thereof
US20080184872A1 (en) * 2006-06-30 2008-08-07 Aaron Andrew Hunt Microtonal tuner for a musical instrument using a digital interface
US9977645B2 (en) * 2015-10-01 2018-05-22 Moodelizer Ab Dynamic modification of audio content

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0436976A1 (en) * 1989-12-18 1991-07-17 Meta-C Corporation Musical instrument, electronic and/or fretted, employing modified eastern music tru-scale octave transformation to avoid overtone collisions
JPH09230857A (en) * 1996-02-23 1997-09-05 Yamaha Corp Musical performance information analyzing device and automatic music arrangement device using it
US6057502A (en) * 1999-03-30 2000-05-02 Yamaha Corporation Apparatus and method for recognizing musical chords
US6365808B1 (en) * 2000-03-10 2002-04-02 Paul Murrell Method of constructing stringed instruments
GB0422418D0 (en) * 2003-10-10 2004-11-10 Univ Sussex The Music composing system
CN101313477A (en) * 2005-12-21 2008-11-26 Lg电子株式会社 Music generating device and operating method thereof
WO2008018056A2 (en) * 2006-08-07 2008-02-14 Silpor Music Ltd. Automatic analasis and performance of music
JP2008164932A (en) * 2006-12-28 2008-07-17 Sony Corp Music editing device and method, and program
CN101800046A (en) * 2010-01-11 2010-08-11 北京中星微电子有限公司 Method and device for generating MIDI music according to notes
FR2994015A1 (en) * 2012-07-27 2014-01-31 Techlody Musical improvisation method for musical instrument e.g. piano, involves generating audio signal representing note or group of notes, and playing audio signal immediately upon receiving signal of beginning of note
CN103035253A (en) * 2012-12-20 2013-04-10 成都玉禾鼎数字娱乐有限公司 Method of automatic recognition of music melody key signatures
WO2015066204A1 (en) * 2013-10-30 2015-05-07 Music Mastermind, Inc. System and method for enhancing audio, conforming an audio input to a musical key, and creating harmonizing tracks for an audio input
JP2015191194A (en) * 2014-03-28 2015-11-02 パイオニア株式会社 Musical performance evaluation system, server device, terminal device, musical performance evaluation method and computer program
JP2016161831A (en) * 2015-03-03 2016-09-05 ヤマハ株式会社 Device and program for singing evaluation
CN108877749A (en) * 2018-04-25 2018-11-23 杭州回车电子科技有限公司 A kind of generation method and system of E.E.G AI music
CN109935222A (en) * 2018-11-23 2019-06-25 咪咕文化科技有限公司 A kind of method, apparatus and computer readable storage medium constructing chord converting vector
CN111613195A (en) * 2019-02-22 2020-09-01 浙江大学 Audio splicing method and device and storage medium
CN111613199A (en) * 2020-05-12 2020-09-01 浙江大学 MIDI sequence generating device based on music theory and statistical rule
CN112309409A (en) * 2020-10-28 2021-02-02 平安科技(深圳)有限公司 Audio correction method and related device
CN112927667A (en) * 2021-03-26 2021-06-08 平安科技(深圳)有限公司 Chord identification method, apparatus, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Study on normal values of the music assessment system MuSIC; 董瑞娟; 王硕; 周芸; 亓贝尔; 陈雪清; 刘博; Journal of Clinical Otorhinolaryngology Head and Neck Surgery (Issue 13); full text *

Also Published As

Publication number Publication date
CN113571030A (en) 2021-10-29

Similar Documents

Publication Publication Date Title
JP6735100B2 (en) Automatic transcription of music content and real-time music accompaniment
Moore The so-called 'flattened seventh' in rock
US9449083B2 (en) Performance data search using a query indicative of a tone generation pattern
JP5594052B2 (en) Information processing apparatus, music reconstruction method, and program
CN112382257B (en) Audio processing method, device, equipment and medium
EP2515249B1 (en) Performance data search using a query indicative of a tone generation pattern
US20170084261A1 (en) Automatic arrangement of automatic accompaniment with accent position taken into consideration
CN108766407B (en) Audio connection method and device
Herbst Heaviness and the electric guitar: Considering the interaction between distortion and harmonic structures
Eggink et al. Extracting Melody Lines From Complex Audio.
CN113571030B (en) MIDI music correction method and device based on hearing harmony evaluation
Penttinen et al. Model-based sound synthesis of the guqin
CN106652655A (en) Musical instrument capable of audio track replacement
CN104036764B (en) Musical sound information processing equipment and method
Ramirez et al. Automatic performer identification in commercial monophonic jazz performances
CN106844639B (en) Method and system for matching music with sports
JP2013097302A (en) Automatic tone correction device, automatic tone correction method and program therefor
CN115177956A (en) Music game score generation method, storage medium and terminal
Chen et al. Sound synthesis of the pipa based on computed timbre analysis and physical modeling
Dressler Towards computational auditory scene analysis: Melody extraction from polyphonic music
Kirke et al. Emergent Construction of melodic pitch and hierarchy through agents communicating emotion without melodic intelligence.
JP2007241026A (en) Simple musical score creating device and simple musical score creating program
JP6693596B2 (en) Automatic accompaniment data generation method and device
Kang et al. Automatic music generation and machine learning based evaluation
Capobianco et al. Assessing Acoustic Parameters in Early Music and Romantic Operatic Singing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant