WO2022143679A1 - Musical score analysis and labeling method, device and electronic device - Google Patents
- Publication number
- WO2022143679A1 (application PCT/CN2021/142134)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords: labeling, musical, notes, analysis, score
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B15/00—Teaching music
Definitions
- the present application relates to the technical field of data analysis, and more particularly to a musical score analysis and labeling method, a musical score analysis and labeling device, and an electronic device.
- the analysis of a musical work generally refers to exploring the work in depth at both the technical and the literary level.
- exploration at the literary level helps one understand the background of the work, the inner world of the composer, and so on, while exploration at the technical level uses professional musical knowledge to analyze the pitch, rhythm, texture, harmony and musical form of the work, strengthening the overall understanding of the work.
- musical elements are the basic elements that constitute music, generally including the melody line, rhythm, beat, harmony and tonality, tempo, dynamics, texture, and so on.
- Music elements constitute the basic means of musical performance.
- the embodiments of the present application provide a method, device, electronic device and system for analyzing and labeling a musical score, which can trace and disassemble a musical composition based on its own characteristics, and analyze and label the score through feature extraction and analysis of rhythm, melody, mode, fingering, structure, musical terms and symbols, and period, so that score recognition and labeling are carried out effectively.
- a method for analyzing and labeling musical scores, comprising: determining the time signature of an electronic musical score; performing beat labeling and rhythm pattern analysis and labeling on the electronic musical score based on the determined time signature; performing pitch and interval labeling on the electronic musical score; and analyzing and labeling musical patterns based on the labeling results of the pitch, the interval and the rhythm pattern.
- performing rhythm pattern analysis on the electronic musical score includes: determining a preset basic rhythm pattern, together with the doubled and halved rhythm patterns derived from it, based on the rhythm combination characteristics of the electronic musical score; and performing rhythm pattern analysis on the electronic musical score by comparison against the basic, doubled and halved rhythm patterns.
- performing interval labeling on the electronic musical score includes: performing intra-bar interval labeling and inter-bar interval labeling on the electronic musical score.
- analyzing and labeling musical patterns based on the labeling results of the pitch, the interval and the rhythm pattern includes: determining a reference note group with a predetermined number of notes, the reference note group constituting a reference musical pattern; and traversing the other note groups with the predetermined number of notes in the electronic score, so as to determine, based on the pitch, duration and interval of each group of notes, the musical patterns that are the same as, displaced from, mirrored from, or similar to the reference musical pattern.
- the above method for analyzing and labeling musical scores includes at least one of the following: two groups of notes are the same musical pattern if they have the same rhythm pattern, the same pitches and the same intervals; two groups are similar musical patterns if their rhythm patterns differ but their pitches and intervals are the same; two groups are similar musical patterns if their rhythm patterns differ, their pitches differ, and their intervals are the same; two groups are similar musical patterns if their rhythm patterns are the same, or stand in a doubled or halved relation, and more than 50% of their pitches or intervals are identical; two groups are a displaced musical pattern if their rhythm patterns and intervals are the same but their pitches differ; and two groups are a mirrored musical pattern if, for each pair of corresponding notes, the intervals to the symmetry axis, i.e. the value midway between the highest and the lowest pitch of the group, are opposite to each other.
- extracting and comparing the notes includes at least one of the following situations: if both groups of notes to be compared are double notes, the upper notes (higher-pitched notes) of the two groups are compared with each other, the lower notes (lower-pitched notes) are compared with each other, and then the upper and lower notes of the two groups are cross-compared; if one group is double notes and the other is single notes, the single-note group serves as the reference group, and the upper and lower notes of the double-note group are each compared with the single-note group.
- the musical score analysis and labeling method further comprises: performing key signature and accidental (temporary sharp and flat) analysis and labeling on the electronic musical score; performing scale, chord and arpeggio analysis and labeling based on the analysis result of the key signature; and performing mode analysis and labeling based on the analysis results of the key signature, the accidentals, the chords, and the pitch.
- the method further comprises: performing musical section analysis and labeling on the electronic musical score; and performing phrase analysis and labeling based on the labeled musical patterns, rhythm patterns, scales, chords, arpeggios, musical terms and symbols, and the musical section analysis results.
- a musical score analysis and labeling device, comprising: a time signature determination unit, for determining the time signature of an electronic musical score; a beat and rhythm pattern analysis unit, for performing beat labeling and rhythm pattern analysis and labeling on the electronic musical score based on the determined time signature; a pitch and interval labeling unit, for performing pitch and interval labeling on the electronic musical score; and a musical pattern analysis unit, for analyzing and labeling musical patterns based on the labeling results of the pitch, the interval and the rhythm pattern.
- an electronic device, comprising: a processor; and a memory in which computer program instructions are stored, the computer program instructions, when executed by the processor, causing the processor to perform the musical score analysis and labeling method described above.
- a computer-readable medium having computer program instructions stored thereon, the computer program instructions, when executed by a processor, causing the processor to perform the musical score analysis and labeling method described above.
- the present application provides a method, device and electronic device for analyzing and labeling a musical score, which can trace and disassemble a musical composition based on the characteristics of its melody line, rhythm, beat, harmony and tonality, tempo, dynamics and pitch, and which analyzes and labels the score through note-based feature extraction and analysis of rhythm, melody, mode, fingering, structure, musical terms and symbols, and period, so that the score is identified and labeled effectively.
- the present invention also relates to the following content:
- Instrumental music teaching needs to integrate the training of basic finger skills, playing skills, sight-reading ability, sense of rhythm, listening and singing ability, music analysis, music theory, music history, and performance and expression ability; a weak link in any of these will affect the student's overall musical level.
- many students' learning lacks this kind of disassembly and systematic specialized training, leaving many weak "planks" that have been missing or under-exercised since the very beginning of their study. For example, if a student's sight-reading ability cannot keep up, it becomes difficult to learn new repertoire at that level and mistakes keep recurring,
- or the ability of the fingers cannot keep up with the commands of the brain.
- the teaching charm of instrumental music teachers is reflected not only in superb performance skills, solid professional theoretical knowledge, and mastery of modern teaching methods, psychology and related humanistic knowledge, but also in the ability to integrate the work represented by the musical score with the playing technique, the feeling for and understanding of the work's connotation, and its expressive force, and to apply this integration consciously in teaching practice.
- from the perspective of students, influenced by personality, level of training and other factors, different students often show different abilities in mastering the score, technique and expressiveness of a work; meanwhile, the same student's grasp of a work also differs at different stages of study.
- the so-called ability here refers not only to playing technique and the difficulty of the repertoire, but also to the ability to understand and express music.
- the present application provides a modular music database in which a special training library interacts with a classified music score library, which can improve the accuracy of label classification.
- a modular music database comprising:
- a classified score library, used to classify scores according to extracted features of different categories and to store the scores by category;
- an encyclopedic knowledge base, used to store, through a knowledge graph tool, knowledge content associated with the analysis and annotation of the scores stored in the classified score library;
- a special training library, used to store special training tasks converted from the scores stored in the classified score library, the special training tasks being used to interact with the user; the results of the interaction are recorded, judged and evaluated so that the difficulty ranking of the special training tasks in the special training library is optimized.
- the classified music score library includes:
- a performance skill knowledge point feature extraction unit used for extracting music features according to the performance skill knowledge points, the music features including tonality, beat, rhythm pattern, hand position, musical notation, playing method, interval, and chord;
- an encyclopedic knowledge feature extraction unit, used for extracting knowledge features related to encyclopedic knowledge from musical scores, where the knowledge features include the period, composer, genre and style of the score;
- a theme feature extraction unit configured to extract theme features according to the theme of the musical score, where the theme features include a general theme and a title theme;
- the label classification unit is used to classify labels according to the difficulty of the score.
- the classification process of the label classification unit includes:
- Step 1 Preset features for the materials in the libraries of different modules, where the preset features include features extracted from basic elements of music;
- Step 2 Using materials of predetermined difficulty, apply machine learning to learn difficulty-grading rules over the features preset in Step 1 and their combinations, and generate the first-level difficulty level label;
- Step 3 Define the first-level and second-level difficulty level labels for the features and feature combinations preset in step 1;
- Step 4 According to the second-level difficulty level label generated in step 3, select the corresponding features and their combinations, randomly arrange the combinations, and automatically generate training materials;
- Step 5 According to the second-level difficulty level label generated in Step 3, extract the qualified features and their combinations from the existing scores, and automatically generate the materials to be used;
- Step 6 Unify the ready-to-use materials generated in Step 5 according to the format requirements of each module data to generate training materials;
- Step 7 Compare the training materials generated in Step 4 and Step 6 with the first-level difficulty level label defined in Step 2 to verify whether there is an inclusion relationship (a code sketch of this verification loop follows Step 17 below);
- Step 8 If the result of Step 7 is not inclusion, place the training material in the verification library and, through further verification, determine whether the material can be attributed to the first-level difficulty level label defined in Step 2;
- Step 9-1 If the result of the further verification described in Step 8 is attributable, perform the machine learning of Step 2 on the material to optimize the machine's definition of the first-level difficulty level label;
- Step 9-2 If the result of the further verification described in Step 8 is not attributable, then further adjust and refine the difficulty label defined in Step 3 to make it consistent with the result of the machine judgment;
- the preset features include: (a) interval; (b) range; (c) note duration; (d) clef; (e) time signature; (f) rhythm pattern; (g) rest; (h) tie; (i) key; (j) number of bars; (k) musical term; (l) musical notation; (m) musical pattern; (n) chord; (o) hand position; (p) fingering; (q) accidental; (r) playing method; (s) phrase structure; (t) dynamics; (u) accompaniment texture; (v) pedal; (w) ornament; (x) piece structure; (y) voice/part.
- the classification process of the label classification unit further includes:
- Step 10 Preset the sorting rules of materials under the same difficulty level label
- Step 11 Compare and sort the training materials that can pass the verification library in Step 8 according to the sorting rules preset in Step 10;
- Step 12 Store the training materials sorted in Step 11 into the database.
- the classification process of the label classification unit further includes:
- Step 13 Perform unified format processing on the materials stored in Step 12 according to the standard score presentation format of each module database, and generate a special library for each module's data;
- Step 14 Convert the music chart in the special library described in Step 13 into an interactive task
- Step 15 The user performs an interactive task
- Step 16 Collect and track user training data and feedback;
- Step 17 According to the user data collected in step 16, continue to optimize the difficulty ranking under the second-level difficulty label, and return to step 12 to update the database ranking.
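- To make the interaction between the machine-learned first-level labels (Step 2) and the manually defined labels (Step 3) concrete, the following minimal Python sketch routes one generated material through the Step 7-9 inclusion check. The two stand-in graders and all names are hypothetical illustrations, not the patent's models.

```python
# Hypothetical sketch of the Step 7-9 verification loop; not the patent's code.

def ml_first_level(features: dict) -> int:
    """Stand-in for the Step-2 machine-learned difficulty grader."""
    # toy rule: wider intervals and denser rhythms imply higher difficulty
    return min(3, features.get("max_interval", 1) // 3 + features.get("rhythm_density", 0))

def defined_level(features: dict) -> int:
    """Stand-in for the Step-3 manually defined difficulty labels."""
    return 1 if features.get("max_interval", 1) <= 3 else 2

database, verification_library = [], []

def classify(material: dict) -> None:
    predicted = ml_first_level(material["features"])   # Step 2 label
    defined = defined_level(material["features"])      # Step 3 label
    if predicted == defined:                           # Step 7: inclusion holds
        database.append(material)
    else:                                              # Step 8: further verification
        verification_library.append((material, predicted, defined))
        # Step 9-1 / 9-2 would then retrain the model or refine the manual
        # label definitions until the two labelings become consistent.

classify({"name": "etude-1", "features": {"max_interval": 5, "rhythm_density": 1}})
```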
- the special training library includes:
- a rhythm library, used to train students' ability to apply note durations and rhythm patterns in different time signatures;
- a sight-singing library, used to train students' music-reading and intonation abilities in different clefs;
- a sight-reading library, used to train students' ability to read and play music quickly;
- a listening library, used to train students' inner hearing and aural discrimination skills;
- a skill library, used to train students' basic finger skills and playing skills.
- the specific implementation process of the rhythm library includes:
- Step 1 Difficulty grading and difficulty ranking are defined by extracting features, the features including: (a) time signature; (b) rhythm pattern; (c) rest; (d) tie; (e) number of bars; (f) voice/part;
- Step 2 Based on the graded-difficulty materials, take the existing sight-reading data at each level, remove the pitch, and apply machine learning to learn difficulty-grading rules over the features preset in Step 1 and their combinations, generating the first-level difficulty level label;
- Step 3 Define the first-level and second-level difficulty level labels for the features and feature combinations preset in Step 1 according to the advanced learning rules of rhythm;
- Step 4 According to the second-level difficulty level label generated in step 3, select the corresponding features and their combinations, randomly arrange the combinations, and automatically generate training materials;
- Step 5 According to the second-level difficulty level label generated in Step 3, extract the qualified features and their combinations from the existing scores, and automatically generate the materials to be used;
- Step 6 Unify the format of the ready-to-use materials generated in Step 5 according to the format requirements of the rhythm training library to generate training materials;
- Step 7 Compare the training materials generated in Step 4 and Step 6 with the first-level difficulty level label defined in Step 2 to verify whether there is an inclusion relationship;
- Step 8 If the result of Step 7 is not inclusion, put the training material in the verification library, and determine through verification whether the material can be attributed to the first-level difficulty level label defined in Step 2;
- Step 9-1 If the judgment result described in Step 8 is attributable, perform the machine learning process of Step 2 on the material to optimize the machine's definition of the first-level difficulty level label;
- Step 9-2 If the judgment result described in step 8 is not attributable, adjust and refine the manually defined difficulty label described in step 3 to make it consistent with the machine judgment result.
- the specific implementation process of the rhythm library further includes:
- Step 10 Preset the sorting rules of materials under the same difficulty level label
- Step 11 Compare and sort the training materials that can pass the verification library in Step 8 according to the sorting rules preset in Step 10;
- Step 12 Store the training materials sorted in Step 11 into the rhythm library.
- the specific implementation process of the rhythm library further includes:
- Step 13 Perform unified format processing on the materials stored in Step 12 according to the standard notation presentation format of the special training library, to generate a question bank for special rhythm training;
- Step 14 Convert the music chart in the question bank described in Step 13 into an interactive rhythm training task
- Step 15 The user performs the task
- Step 16 Collect and track user training data and feedback;
- Step 17 According to the user data collected in step 16, continue to optimize the difficulty ranking under the second-level difficulty label, and return to step 12 to update the ranking in the rhythm library.
- the sight-singing library includes a pitch and pitch training sub-library and a single-part melody sight-singing training sub-library,
- the material of the pitch and pitch training sub-library has six features: (a) pitch; (b) interval; (c) range; (d) number of notes; (e) clef; (f) key;
- the material of the single-part melody sight-singing training sub-library has thirteen features: (a) interval; (b) range; (c) note duration; (d) clef; (e) time signature; (f) rhythm pattern; (g) rest; (h) tie; (i) key; (j) number of bars; (k) musical term; (l) musical notation; (m) musical pattern.
- the materials in the sight-reading library have 23 features: (a) interval; (b) range; (c) note duration; (d) clef; (e) time signature; (f) rhythm pattern; (g) rest; (h) tie; (i) key; (j) number of bars; (k) musical term; (l) musical notation; (m) melody line shape (musical pattern); (n) chord; (o) hand position; (p) fingering; (q) accidental; (r) playing method; (s) phrase structure; (t) dynamics; (u) accompaniment texture; (v) pedal; (w) ornament.
- the listening and distinguishing library includes:
- a listening training sub-library, including six features: (a) pitch; (b) range; (c) clef; (d) chord; (e) interval; (f) key;
- a tonality listening sub-library, including five features: (a) chord; (b) interval; (c) key; (d) number of bars; (e) chord;
- a rhythm listening training sub-library, including five features: (a) rhythm pattern; (b) rest; (c) tie; (d) number of bars; (e) time signature;
- a melody listening sub-library, including five features: (a) rhythm pattern; (b) rest; (c) tie; (d) number of bars; (e) pitch;
- a melody analysis sub-library, including eighteen features: (a) dynamics; (b) articulation; (c) tempo; (d) key; (e) period; (f) accompaniment texture; (g) phrase structure; (h) range; (i) time signature; (j) melody line progression (musical pattern); (k) musical notation; (l) musical terms; (m) rhythm patterns; (n) pedal; (o) ornaments; (p) scales and arpeggios; (q) cadences; (r) number of bars.
- the specific implementation process of the skill library includes the following steps:
- Step 1 Define the difficulty grading and difficulty ranking;
- Step 2 Automatically generate teaching materials and store them in the library;
- Step 3 Convert the materials into skill-training software content;
- Step 4 Record user operation data, and re-optimize the difficulty of the materials according to the user operation data.
- the modular music database provided by this application can improve the accuracy of label classification through the interaction between the special training library and the classification music library.
- FIG. 1 illustrates a schematic flowchart of beat labeling in the musical score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the beat labeling subunit in the musical score analysis and labeling system according to an embodiment of the present application.
- FIG. 2A illustrates an example of a basic rhythm pattern in the musical score analysis and labeling method/system according to an embodiment of the present application.
- FIG. 2B illustrates an example of a doubled rhythm pattern in the musical score analysis and labeling method/system according to an embodiment of the present application.
- FIG. 2C illustrates an example of a halved rhythm pattern in the musical score analysis and labeling method/system according to an embodiment of the present application.
- FIG. 3 illustrates a schematic flowchart of rhythm pattern labeling in the musical score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the rhythm pattern analysis and labeling subunit in the musical score analysis and labeling system according to an embodiment of the present application.
- FIG. 4 illustrates a schematic diagram of the musical intervals in the musical score analysis and labeling method/system according to an embodiment of the present application.
- FIG. 5 illustrates a schematic diagram of labeling thirds and sixths within a bar in the musical score analysis and labeling method/system according to an embodiment of the present application.
- FIG. 6 illustrates a flowchart of musical pattern labeling in the musical score analysis and labeling method according to an embodiment of the present application, which is also a flowchart of the operation of the musical pattern analysis and labeling subunit in the musical score analysis and labeling system according to an embodiment of the present application.
- FIG. 7 illustrates a flowchart of a musical score analysis and labeling method according to an embodiment of the present application.
- FIG. 8 illustrates a schematic flowchart of the scale labeling process in the musical score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the scale labeling subunit in the musical score analysis and labeling system according to an embodiment of the present application.
- FIG. 9 illustrates a schematic flowchart of the chord and broken chord labeling process in the musical score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the chord labeling subunit in the musical score analysis and labeling system according to an embodiment of the present application.
- FIG. 10 illustrates a schematic flowchart of mode labeling in the musical score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the mode labeling subunit in the musical score analysis and labeling system according to an embodiment of the present application.
- FIG. 11 illustrates a schematic flowchart of musical section marking in the musical score analysis and marking method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the musical section marking subunit in the musical score analysis and marking system according to an embodiment of the present application.
- FIG. 12 illustrates a block diagram of a musical score analysis and labeling apparatus according to an embodiment of the present application.
- FIG. 13 illustrates a block diagram of an electronic device according to an embodiment of the present application.
- FIG. 14 illustrates a schematic block diagram of a musical score analysis and labeling system according to an embodiment of the present application.
- FIG. 15 illustrates a schematic diagram of the overall architecture of a musical score analysis and labeling system according to an embodiment of the present application.
- FIG. 16 illustrates a block diagram of an example of a modular music database according to an embodiment of the present application.
- FIG. 17 illustrates a block diagram of an example of the classified score library of a modular music database according to an embodiment of the present application.
- FIG. 18 illustrates tag features of the classified score library of a modular music database according to an embodiment of the present application.
- FIG. 19 illustrates a flowchart of the tag classification process of a modular music database according to an embodiment of the present application.
- FIG. 20 illustrates a block diagram of an example of the special training library of a modular music database according to an embodiment of the present application.
- FIG. 21 illustrates an example of difficulty labels in the rhythm library of a modular music database according to an embodiment of the present application.
- FIG. 22 illustrates the first application scenario of the pitch and pitch training sub-library of the modular music database according to an embodiment of the present application, namely the musical chart application scenario.
- FIG. 23 illustrates the second application scenario of the pitch and pitch training sub-library of the modular music database according to an embodiment of the present application, namely the sight-singing software application scenario.
- FIG. 24 illustrates the relationship of the modular music database to the musical score analysis and labeling method/system.
- the digitization of music on computers has become relatively widespread, including digital music sequences (MIDI) and the commonly used MusicXml electronic score.
- the electronic musical score refers to a file in which layout information of musical scores is stored in a computer.
- existing electronic musical scores only contain the typesetting information of the piece, so they can only be used to browse the score and cannot support more professional score analysis.
- the purpose of the present application is to provide a method/system for analyzing and labeling musical scores that can automatically analyze electronic musical scores, such as MusicXml electronic musical scores, and mark the corresponding musical elements on the staff, so as to satisfy the player's score-reading and playing needs.
- the musical score analysis and labeling method/system of the present application applies the concept of first principles: it traces and disassembles the musical piece according to the characteristics of the music itself, and extracts and analyzes various features of the electronic musical score from the different angles of score analysis.
- said features include the musical elements described above and their derivatives, such as pitch, interval, range, melodic line, rhythm, beat, scale, arpeggio, chord, harmony, tonality, fingering usage, musical structure analysis, musical terminology, musical notation, dynamics, tempo, creative background, author, genre and theme, etc.
- the musical score analysis and labeling method/system of the present application uses the logical relationships among the musical elements to analyze and label the score automatically through the correlations between the extracted features, so that the various musical elements in the score are marked accurately without manual participation, improving user convenience while ensuring the accuracy of the labeling.
- FIG. 14 illustrates a schematic block diagram of a musical score analysis and labeling system according to an embodiment of the present application.
- the system 100 for analyzing and labeling musical scores includes: a beat and rhythm pattern analysis and labeling unit 110, for labeling the time signature and beats of an electronic musical score and for analyzing and labeling rhythm patterns; a pitch, interval and musical pattern analysis and labeling unit 120, for labeling the pitches and intervals of the electronic musical score and for analyzing and labeling musical patterns; a scale, chord and arpeggio labeling unit 130, for labeling the scales, chords and arpeggios in the electronic musical score; a key signature, accidental and mode labeling unit 140, for labeling the key signature, accidentals and mode of the electronic musical score; a special fingering labeling unit 150, for labeling the special fingerings of the electronic musical score; a musical section and phrase labeling unit 160, for labeling the musical sections and phrases of the electronic musical score; and a musical term and symbol labeling unit 170, for labeling the musical terms and symbols of the electronic musical score.
- the beat and rhythm pattern analysis and labeling unit includes: a time signature labeling subunit, for labeling the time signature of the electronic score; a beat labeling subunit, for marking the beats of the electronic musical score based on the labeled time signature; and a rhythm pattern analysis and labeling subunit, for performing rhythm pattern analysis and labeling on the electronic musical score based on the marked beats.
- the time signature marking is to mark the time signature in the electronic musical score.
- the time signature can be extracted from the electronic musical score, for example, the time signature information can be read from the MusicXml electronic musical score, such as 3/4, 2/4, 4/4, 3/8 etc.
- FIG. 2 illustrates a schematic diagram of the digital labeling of beats in the musical score analysis and labeling method/system according to an embodiment of the present application.
- other methods can also be used for the digital labeling of beats; the core is to calculate the ratio of the time values of the notes contained in one beat, divide the line segment according to the calculated ratio, break the line segment at each note, and combine the numbers with the line segments.
- the digital labeling method needs to consider the positions of the numbers and short lines when more than one beat is involved; as shown in FIG. 2, the positions and lengths of the line segments are divided equally.
- FIG. 1 illustrates a schematic flowchart of beat labeling in the musical score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the beat labeling subunit in the musical score analysis and labeling system according to an embodiment of the present application.
- the beat labeling is performed according to the digital labeling method above: a line is drawn under the note and the beat number is marked; it is then determined whether the accumulated beats of the current measure equal the number of beats per measure of the piece; if so, it is determined whether all measures have been traversed, otherwise the method continues to underline notes and mark beats.
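- As an illustration of the beat-labeling loop just described, the following minimal Python sketch accumulates note durations (expressed as fractions of a whole note) and assigns each note the beat on which it starts, checking that each measure sums to the time signature. The function and data layout are assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the FIG. 1 beat-labeling loop.
from fractions import Fraction

def label_beats(measures, beats_per_measure: int, beat_unit: Fraction):
    """Emit (duration, beat_number) labels; durations are fractions of a whole note."""
    labels = []
    for measure in measures:
        accumulated = Fraction(0)
        for duration in measure:
            beat_number = int(accumulated / beat_unit) + 1   # beat this note starts on
            labels.append((duration, beat_number))           # "underline and mark the beat"
            accumulated += duration
        # accumulated beats must equal the beats per measure of the piece
        assert accumulated == beats_per_measure * beat_unit
    return labels

# 3/4 time, beat unit = quarter note: quarter, two eighths, quarter
print(label_beats([[Fraction(1, 4), Fraction(1, 8), Fraction(1, 8), Fraction(1, 4)]],
                  3, Fraction(1, 4)))
# -> [(1/4, 1), (1/8, 2), (1/8, 2), (1/4, 3)]
```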
- FIG. 2A illustrates an example of a basic rhythm pattern in a musical score analysis and labeling method/system according to an embodiment of the present application.
- FIG. 2A shows, taking a quarter note as one beat, nine basic rhythm patterns marked with a V-shaped notation.
- doubling with a half note as one beat: for the 9 basic rhythm patterns shown in FIG. 2A (a quarter note is one beat), the time value of each rhythm pattern is doubled to obtain 9 new rhythm patterns, called for example "half-note doubled rhythm patterns".
- FIG. 2B illustrates an example of a doubled rhythm pattern in the musical score analysis and labeling method/system according to an embodiment of the present application.
- halving with an eighth note as one beat: for the 9 basic rhythm patterns shown in FIG. 2A (a quarter note is one beat), the time value of each rhythm pattern is halved to obtain 9 rhythm patterns, called for example "eighth-note halved rhythm patterns".
- FIG. 2C illustrates an example of a halved rhythm pattern in the musical score analysis and labeling method/system according to an embodiment of the present application.
- in this way, a full set of rhythm patterns can be obtained from the 9 basic rhythm patterns together with their doubled and halved versions.
- the basic rhythm pattern and its doubled and halved versions may be determined based on the time signature that has been read; that is, the time signature data provides the unit on which the rhythm patterns are doubled and halved.
- the basic, doubled and halved rhythm patterns described above assume a time signature in which a quarter note is one beat; if the time signature determines a half note or an eighth note as one beat, the basic, doubled and halved rhythm patterns can be determined correspondingly.
- the rhythm pattern analysis and labeling subunit is used to: determine a preset basic rhythm pattern, and the doubled and halved rhythm patterns of the basic rhythm pattern, based on the rhythm combination characteristics of the electronic musical score; and perform rhythm pattern analysis on the electronic score by comparison against the basic, doubled and halved rhythm patterns.
- FIG. 3 illustrates a schematic flowchart of rhythm pattern labeling in the musical score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the rhythm pattern analysis and labeling subunit in the musical score analysis and labeling system according to an embodiment of the present application.
- the basic rhythm patterns for comparison are first determined, for example the 49 basic rhythm patterns described above; then all note sequences of the piece are read from an electronic score, such as a MusicXML file, and the current note is taken as the target note.
- corresponding beat marks may also be made under the matched notes according to the matched rhythm pattern.
- if more than one matched basic rhythm pattern is adjacent, the result is called a combined rhythm pattern.
- the doubled and halved versions of a rhythm pattern can also be regarded as similar rhythm patterns; identical or similar rhythm patterns can be selectively marked, or displayed at the same time.
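- The comparison against the basic, doubled and halved pattern sets might be sketched as follows; the four sample patterns are only a small illustrative subset, durations are fractions of a whole note, and all names are assumptions.

```python
# Hypothetical sketch of rhythm-pattern matching; not the patent's pattern table.
from fractions import Fraction

QUARTER = Fraction(1, 4)

BASIC_PATTERNS = {                      # a few basic one-beat patterns (quarter = beat)
    "single": (QUARTER,),
    "two eighths": (Fraction(1, 8), Fraction(1, 8)),
    "four sixteenths": (Fraction(1, 16),) * 4,
    "dotted eighth + sixteenth": (Fraction(3, 16), Fraction(1, 16)),
}

def scaled(patterns, factor):
    """Double (factor=2) or halve (factor=1/2) every duration in each pattern."""
    return {name: tuple(d * factor for d in durations)
            for name, durations in patterns.items()}

DOUBLED = scaled(BASIC_PATTERNS, 2)               # half note as one beat
HALVED = scaled(BASIC_PATTERNS, Fraction(1, 2))   # eighth note as one beat

def match_pattern(note_durations):
    """Return (set name, pattern name) for the first exact duration match."""
    target = tuple(note_durations)
    for set_name, patterns in (("basic", BASIC_PATTERNS),
                               ("doubled", DOUBLED), ("halved", HALVED)):
        for name, durations in patterns.items():
            if durations == target:
                return set_name, name
    return None

print(match_pattern([Fraction(1, 16), Fraction(1, 16)]))  # ('halved', 'two eighths')
```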
- the rhythm pattern analysis and labeling subunit is used to perform rhythm pattern labeling with each beat of accumulated note duration as a unit, including: defining the starting and ending coordinates of the rhythm pattern mark; when a note contains multiple beats, defining the position coordinates of multiple rhythm pattern marks; and, within one unit rhythm pattern mark, calculating the ratio of the time values of the notes contained in the beat, dividing the mark equally according to the calculated ratio, and breaking the mark at each note.
- the pitch, interval and musical pattern analysis and labeling unit includes: a pitch labeling subunit, used to label the pitches of the electronic score; an interval labeling subunit, used to label the intervals of the electronic score; and a musical pattern analysis and labeling subunit, used to analyze and label musical patterns based on the labeled pitch, interval and rhythm pattern.
- the interval labeling includes an intra-bar interval label and an inter-bar interval label.
- the interval labeling within a measure refers to labeling the interval relationship between two adjacent tones.
- the marking method can be to connect a line between two adjacent notes, and mark the interval with a number next to it.
- an interval of a particular degree, such as a third or a fourth, can be selected for marking.
- for example, if all fourths in the electronic score are selected for marking, all adjacent tones a fourth apart are connected by line segments and marked with the number 4.
- one or more of thirds, fourths, fifths, sixths, sevenths and octaves may be marked, as shown in FIG. 4.
- FIG. 4 illustrates a schematic diagram of the musical intervals in the musical score analysis and labeling method/system according to an embodiment of the present application.
- FIG. 5 illustrates a schematic diagram of labeling thirds and sixths within a bar in the musical score analysis and labeling method/system according to an embodiment of the present application.
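- A minimal sketch of intra-bar interval labeling as described above: the degree between two adjacent notes is derived from their staff steps, and only selected degrees (here thirds and sixths) are collected for marking. The note spelling (letter plus octave, no accidentals) and all helper names are assumptions.

```python
# Hypothetical sketch of interval labeling between adjacent notes.
STEPS = {"C": 0, "D": 1, "E": 2, "F": 3, "G": 4, "A": 5, "B": 6}

def interval_degree(note1: str, note2: str) -> int:
    """Degree between notes written as letter+octave, e.g. 'C4' and 'E4' -> 3."""
    s1 = STEPS[note1[0]] + 7 * int(note1[1:])
    s2 = STEPS[note2[0]] + 7 * int(note2[1:])
    return abs(s2 - s1) + 1

def label_intervals(notes, wanted=(3, 6)):
    """Connect adjacent notes whose interval has a selected degree."""
    return [(a, b, d) for a, b in zip(notes, notes[1:])
            if (d := interval_degree(a, b)) in wanted]

print(label_intervals(["C4", "E4", "G4", "E5"]))
# -> [('C4', 'E4', 3), ('E4', 'G4', 3), ('G4', 'E5', 6)]
```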
- musical pattern analysis and labeling refers to finding all identical, displaced, mirrored and similar musical patterns according to the contour features of the melody and labeling them in the electronic score; for example, under the notes of the musical patterns, identical, displaced, mirrored and similar patterns are marked with line segments of different forms and colors. The minimum number of notes contained in a musical pattern can also be specified; for example, if the specified minimum is 4, then at least 4 consecutive notes must match as identical, displaced, mirrored or similar musical patterns.
- the minimum number of notes may be specified as 4 through a received user instruction.
- the minimum of the note-number range is 4; the maximum is derived from the total number of bars: total bars/4 gives M measures (because one musical section contains at least two phrases and one phrase contains at least two musical patterns), and the maximum number of notes within M measures is taken as the maximum of the range, i.e., the maximum number of notes that may appear in a single phrase (see the phrase analysis in 6.2).
- each note group contains a set of data including the pitch, duration and interval relation of each note. For example, the data of a group of notes can be described in JSON format as follows (taking the minimum number of 4 notes as an example):
- noteName is the annotation result of pitch
- noteLength is the annotation result of beat and rhythm
- noteInterval is the annotation result of interval
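- The JSON object itself does not survive in this text; a hypothetical four-note group consistent with the three fields just described might look like the following (all values are illustrative):

```json
{
  "notes": [
    {"noteName": "C4", "noteLength": "quarter", "noteInterval": null},
    {"noteName": "E4", "noteLength": "quarter", "noteInterval": 3},
    {"noteName": "G4", "noteLength": "eighth",  "noteInterval": 3},
    {"noteName": "E4", "noteLength": "eighth",  "noteInterval": -3}
  ]
}
```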
- for example, in the first note group, the lowest note is the first note and the highest note is the second note; their midline, the third line, is the axis of symmetry; if the first note lies a fifth from the axis of symmetry, the corresponding note of the other group lies a fifth from the axis on the opposite side, so that the second group of notes constitutes a mirror of the first.
- a phrase is composed of two or more musical patterns. To achieve more accurate, finer-grained musical pattern identification, the patterns identified in steps 5-9 above need to be compared, one musical pattern at a time, with the phrase recognition results in 6.2 for their inclusion relationship; where a musical pattern overlaps the boundary of the phrase in which it lies, the pattern needs to be further divided into a narrower range.
- the colors follow the rainbow order (red, orange, yellow, green, cyan, blue, purple): the first group of identical musical patterns is marked with a red connecting line, the second group with orange, and so on; if all seven colors have been used once, their hex values are divided by 2 and the next round of labeling proceeds, and so on.
- the coordinates of the lower left corner of the first note in the musical pattern can be used as the starting point, and the point at the lower right corner of each note in the pattern is connected by a line to mark this melody.
- the same musical pattern is connected with a solid line, a similar pattern with a dotted line, a displaced pattern with a dash-dot line, and a mirrored pattern with a dot-plus-dash line; if the note preceding this musical pattern is the last note of another pattern, the lower right corner of that preceding note is connected with the lower left corner of the first note of the current pattern to form one continuous long pattern.
- there is a logical relationship between musical pattern annotation and rhythm pattern annotation: (1) identical musical patterns are a sufficient but not necessary condition for identical rhythm patterns; (2) similar musical patterns can be the case in which the rhythm patterns of the two groups of notes differ but the pitches and intervals are the same; (3) displaced musical patterns are the case in which the two groups of notes have the same rhythm pattern and the same intervals but different pitches; (4) mirrored musical patterns are the case in which the intervals of each pair of corresponding notes to the axis of symmetry, i.e. the middle value between the highest and lowest notes in the group, are opposite to each other.
- a musical section includes at least two musical phrases, and a musical phrase includes at least two musical patterns; therefore, when the program performs recognition and analysis, it needs to compare the recognition results of musical patterns, phrases and sections to analyze the structure of the piece more accurately.
- FIG. 6 illustrates a flowchart of musical pattern labeling in the musical score analysis and labeling method according to an embodiment of the present application, which is also a flowchart of the operation of the musical pattern analysis and labeling subunit in the musical score analysis and labeling system according to an embodiment of the present application.
- a reference note group is taken, which can be specified by the user or identified automatically as a typical musical pattern, and compared with the next adjacent group to determine whether the musical patterns are the same, displaced or mirrored; if there is a match, it is determined whether the current reference group has finished its comparisons; if so, the method proceeds to the labeling step and takes another reference note group; if not, comparison continues with the next adjacent group.
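- The comparison step of FIG. 6 might be sketched as below, using MIDI-style pitch numbers and the classification rules summarized earlier; for brevity the doubled/halved rhythm relation in the similarity rule is omitted, and all names are assumptions.

```python
# Hypothetical sketch of classifying two note groups as same/displaced/mirrored/similar.

def intervals(pitches):
    """Successive pitch differences within a group."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def classify_pair(group1, group2):
    """Each group is (pitches, durations); returns a pattern relation or None."""
    p1, d1 = group1
    p2, d2 = group2
    same_rhythm = d1 == d2
    same_pitch = p1 == p2
    same_interval = intervals(p1) == intervals(p2)

    if same_rhythm and same_pitch and same_interval:
        return "same"
    if same_rhythm and same_interval and not same_pitch:
        return "displaced"
    # mirrored: each note's interval to the group's symmetry axis is opposite
    axis1 = (max(p1) + min(p1)) / 2
    axis2 = (max(p2) + min(p2)) / 2
    if same_rhythm and all((a - axis1) == -(b - axis2) for a, b in zip(p1, p2)):
        return "mirrored"
    if same_pitch and same_interval:          # different rhythm, same pitch/interval
        return "similar"
    matching = sum(a == b for a, b in zip(intervals(p1), intervals(p2)))
    if same_rhythm and matching > 0.5 * max(len(intervals(p1)), 1):
        return "similar"                      # more than 50% of intervals identical
    return None

# C-D-E-C vs D-E-F#-D: same rhythm, same intervals, different pitches -> displaced
print(classify_pair(([60, 62, 64, 60], [1, 1, 1, 1]),
                    ([62, 64, 66, 62], [1, 1, 1, 1])))
```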
- FIG. 7 illustrates a flowchart of a musical score analysis and labeling method according to an embodiment of the present application.
- the method for analyzing and labeling a musical score includes: S110, determining the time signature of the electronic musical score; S120, performing beat labeling and rhythm pattern analysis and labeling on the electronic musical score based on the determined time signature; S130, performing pitch and interval labeling on the electronic musical score; and, S140, analyzing and labeling musical patterns based on the labeling results of the pitch, the interval and the rhythm pattern.
- performing rhythm pattern analysis on the electronic musical score includes: determining a preset basic rhythm pattern, and the doubled and halved rhythm patterns of the basic rhythm pattern, based on the rhythm combination characteristics of the electronic musical score; and performing rhythm pattern analysis on the electronic musical score by comparison against the basic, doubled and halved rhythm patterns.
- performing rhythm pattern marking on the electronic musical score includes performing the marking with each beat of note duration as a unit, including: defining the starting and ending coordinates of the rhythm pattern mark; when a note contains multiple beats, defining the position coordinates of multiple rhythm pattern marks; and, within one unit rhythm pattern mark, calculating the ratio of the time values of the notes contained in the beat, dividing the mark equally according to the calculated ratio, and breaking the mark at each note.
- performing interval labeling on the electronic musical score includes: performing intra-bar interval labeling and inter-bar interval labeling on the electronic musical score.
- analyzing and labeling musical patterns based on the labeling results of the pitch, the interval and the rhythm pattern includes: determining a reference note group with a predetermined number of notes, the reference note group constituting a reference musical pattern; and traversing the other note groups with the predetermined number of notes in the electronic score, so as to determine, based on the pitch, duration and interval of each group, the musical patterns that are the same as, displaced from, mirrored from, or similar to the reference musical pattern.
- the above method for analyzing and labeling musical scores includes at least one of the following: two groups of notes are the same musical pattern if they have the same rhythm pattern, the same pitches and the same intervals; two groups are similar musical patterns if their rhythm patterns differ but their pitches and intervals are the same; two groups are similar musical patterns if their rhythm patterns differ, their pitches differ, and their intervals are the same; two groups are similar musical patterns if their rhythm patterns are the same, or stand in a doubled or halved relation, and more than 50% of their pitches or intervals are identical; two groups are a displaced musical pattern if their rhythm patterns and intervals are the same but their pitches differ; and two groups are a mirrored musical pattern if, for each pair of corresponding notes, the intervals to the symmetry axis, i.e. the value midway between the highest and the lowest pitch of the group, are opposite to each other.
- correspondingly, the musical pattern analysis and labeling subunit is used for at least one of the following: determining that two groups of notes are the same musical pattern when they have the same rhythm pattern, the same pitches and the same intervals; determining that two groups are similar musical patterns when their rhythm patterns differ but their pitches and intervals are the same; determining that two groups are similar musical patterns when their rhythm patterns differ, their pitches differ, and their intervals are the same; determining that two groups are similar musical patterns when their rhythm patterns are the same or stand in a doubled or halved relation and more than 50% of their pitches or intervals are identical; determining that two groups are a displaced musical pattern when their rhythm patterns and intervals are the same but their pitches differ; and determining that two groups are a mirrored musical pattern when the intervals of each pair of corresponding notes to the symmetry axis are opposite to each other.
- extracting and comparing the notes includes at least one of the following situations: if both groups of notes to be compared are double notes, the upper notes (higher-pitched notes) of the two groups are compared with each other, the lower notes (lower-pitched notes) are compared with each other, and then the upper and lower notes of the two groups are cross-compared; if one group is double notes and the other is single notes, the single-note group serves as the reference group, and the upper and lower notes of the double-note group are each compared with the single-note group.
- the scale, chord and arpeggio labeling unit includes: a scale labeling subunit for performing scale labeling on the electronic musical score; a chord labeling subunit for labeling the electronic musical score chord annotation; and an arpeggio annotation subunit for performing arpeggio annotation on the electronic score.
- scale labeling is performed as follows: lines are connected below the staff under consecutive scale passages, and the scale name is marked next to them (for example, C+ represents the C major scale, ch- represents the C harmonic minor scale, and cx- represents the C melodic minor scale).
- note data are preset for 194 scales recognized by one hand.
- one-handed recognition (i.e., within the treble staff or the bass staff alone) covers: 72 scales of 6 categories (natural major, harmonic major, melodic major, natural minor, harmonic minor, melodic minor); 2 chromatic scales; 24 double-note thirds; 24 double-note sixths; and 72 octaves. Note data are also preset for 146 scales recognized by both hands.
- two-hand recognition (recognition of both staves simultaneously) covers the 6 categories of tonal scales in the same or opposite direction, and two-hand thirds and sixths.
- the matching items in 3) are prioritized and recommended in order; that is, among the results listed in 3), the key scales that conform to the mode identification result are ranked first. For example, if the key of a piece is judged to be A minor, the five consecutive tones A-B-C-D-E in the score admit two scale identifications: C major and A minor; since the key is A minor, the A minor option is presented as the preferred option for the user to choose.
- the matching items in 6) are likewise prioritized and recommended in order; that is, among the results listed in 6), the key scales that conform to the mode identification result are ranked first.
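- A minimal sketch of this priority ordering, assuming scale candidates and the detected key are plain strings (all names are illustrative):

```python
# Hypothetical sketch: candidates matching the detected key are ranked first.
def prioritize(candidates, detected_key):
    return sorted(candidates, key=lambda scale: scale != detected_key)

# five tones A-B-C-D-E match both C major and A minor; the key is A minor
print(prioritize(["C major", "A minor"], "A minor"))  # ['A minor', 'C major']
```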
- FIG. 8 illustrates a schematic flowchart of the scale labeling process in the musical score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the scale labeling subunit in the musical score analysis and labeling system according to an embodiment of the present application.
- arpeggio labeling is as follows: connecting consecutively appearing arpeggios with line segments below the staff, and marking the name of the arpeggio next to them.
- chord and broken chord labeling is performed as follows: the chord name is marked above a chord whose notes are played simultaneously (a columnar chord), or a line is connected below a broken chord played consecutively and the chord name is marked next to it, such as C or Em.
- the interval relationships of the various chords are preset; for example, the interval relationship of the three notes of a major triad is a major third plus a minor third, and that of a minor triad is a minor third plus a major third.
- for broken chords, all non-simultaneously played notes on the same staff are traversed from beginning to end, and it is determined whether the last few notes ending at a target note satisfy the interval relationship of some chord; if a chord is matched, the chord name is marked above the notes.
- FIG. 9 illustrates a schematic flowchart of the chord and broken chord labeling process in the musical score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the chord labeling subunit in the musical score analysis and labeling system according to an embodiment of the present application.
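- Chord matching by preset interval relationships, as just described, might be sketched as follows. The pattern table lists semitone gaps of root-position chords and is a small illustrative subset, not the patent's full table.

```python
# Hypothetical sketch of chord identification from stacked interval gaps.
CHORD_PATTERNS = {
    (4, 3): "major triad",        # major third + minor third
    (3, 4): "minor triad",        # minor third + major third
    (3, 3): "diminished triad",
    (4, 4): "augmented triad",
    (4, 3, 3): "dominant seventh",
}

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def identify_chord(midi_pitches):
    """Match simultaneous (or consecutive broken-chord) notes to a chord name."""
    pitches = sorted(midi_pitches)
    gaps = tuple(b - a for a, b in zip(pitches, pitches[1:]))
    quality = CHORD_PATTERNS.get(gaps)
    return f"{NOTE_NAMES[pitches[0] % 12]} {quality}" if quality else None

print(identify_chord([60, 64, 67]))  # C-E-G  -> 'C major triad'
print(identify_chord([64, 67, 71]))  # E-G-B  -> 'E minor triad'
```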
- the key signature, accidental and mode labeling in the musical score analysis and labeling method/system includes key signature labeling, accidental (temporary sharp and flat) labeling, and mode identification and labeling.
- the key signature, accidental and mode labeling unit includes: a key signature labeling subunit, for performing key signature analysis and labeling on the electronic musical score; an accidental labeling subunit, for performing accidental analysis and labeling on the electronic score; and a mode analysis and labeling subunit, for performing mode analysis and labeling based on the labeled key signature, accidentals, chords and pitches.
- a specific implementation is, for example: first, the key signature of the score is read from an electronic score, such as MusicXml, and highlighted; then, the tones that the key signature requires to be raised or lowered are identified throughout the score and highlighted in the same color.
- mode identification is expressed in the following form: the mode and key of the piece are identified and marked in the upper left corner of the score.
- a conditional threshold is given for judging whether the music piece has a definite mode key.
- FIG. 10 illustrates a schematic flowchart of mode labeling in the musical score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the mode labeling subunit in the musical score analysis and labeling system according to an embodiment of the present application.
- the musical score analysis and labeling method further comprises: performing key signature and accidental analysis and labeling on the electronic musical score; performing scale, chord and arpeggio analysis and labeling based on the analysis result of the key signature; and performing mode analysis and labeling based on the analysis results of the key signature, the accidentals, the chords and the pitch.
- the special fingering labeling unit includes: an extended fingering subunit, for labeling extended fingerings in the electronic musical score; a contracted fingering subunit, for labeling contracted fingerings in the electronic musical score; a crossed fingering subunit, for labeling crossed fingerings in the electronic musical score; a same-note finger substitution subunit, for labeling same-note finger substitutions in the electronic musical score; and a hand position change subunit, for labeling hand position changes in the electronic musical score.
- the musical section and phrase labeling unit includes: a musical section analysis and labeling subunit, for performing section analysis and labeling on the electronic musical score; and a phrase analysis and labeling subunit, for performing phrase analysis and labeling based on the analysis and labeling results of the musical patterns, rhythm patterns, scales, chords, arpeggios, musical terms, musical symbols and sections.
- musical section marking is explained as follows: the section structure of the piece is identified, and markings are given by measure.
- Example: a piece has 32 bars in total; then 32/4 = 8, where 8 is the theoretical minimum number of consecutive bars N; the system takes N-1 as the minimum number of consecutive bars M for judging a section, so M is 7; that is, if the piece contains two identical/similar passages of more than 7 bars each, two identical/similar sections are considered to have been identified.
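- The bar-count arithmetic of this example can be stated directly (a trivial sketch of the stated rule, not the patent's code):

```python
# Hypothetical sketch: minimum consecutive bars M for judging a musical section.
def min_section_bars(total_bars: int) -> int:
    n = total_bars // 4      # theoretical minimum number of consecutive bars N
    return n - 1             # the system uses M = N - 1

print(min_section_bars(32))  # -> 7
```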
- the musical section must consist of at least two or more phrases.
- the upper left corner of the first measure of each identified passage is marked with a combination of letters such as a square, a circle, a triangle, etc.
- the same passage is marked with the same shape and letter. For example: A and a square represent a piece, and the same piece is also marked with A and a square; the next piece is marked with a combination of B and a circle; and so on.
- FIG. 11 illustrates a schematic flow chart of musical section marking in the musical score analysis and marking method according to an embodiment of the present application, and is also a schematic diagram of the work of the musical section marking sub-unit in the musical score analysis and marking system according to an embodiment of the present application Sex Flowchart.
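- the bar-count rule above can be made concrete with a short sketch; it is a simplified illustration, assuming each bar has already been reduced to a comparable fingerprint (for example a tuple of (pitch, duration) pairs) by the earlier labeling steps, with the equality test standing in for the identical/similar comparison.

# A minimal sketch of the section rule: N = bars/4, M = N - 1, and two
# runs of more than M equal bars count as two identical sections.
def find_repeated_passages(bars):
    n = max(len(bars) // 4, 2)          # theoretical minimum length N
    m = n - 1                           # system threshold M = N - 1
    matches = []
    for i in range(len(bars)):
        for j in range(i + 1, len(bars)):
            length = 0
            while (j + length < len(bars) and i + length < j
                   and bars[i + length] == bars[j + length]):
                length += 1
            if length > m:              # passages longer than M bars
                matches.append((i, j, length))
    return matches

# Example: in a 32-bar piece N = 8 and M = 7, so two identical runs of 8
# or more bars are reported as the same section (e.g., A and A).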
- phrase identification and marking are described as follows: identify the phrase structure of the piece and mark the phrases with arcs between them.
- the recognition results for musical patterns, rhythm patterns, scales, arpeggios, and sections can all serve as weighting conditions for phrase recognition, and there are also cadential and semi-cadential forms.
- phrase recognition works as follows:
- a phrase is composed of musical patterns. Look at the first musical pattern at the beginning of the next phrase: beginning with the same material (a similar musical pattern) as the previous phrase is an important basis for dividing the musical structure; that is, identical material can divide the structure. Therefore, if the musical pattern of the preceding passage is similar to that of the following passage, they should be divided into two phrases. For example, when two musical patterns A and B are compared and found similar (exactly the same rhythm pattern, with different pitches and different intervals), and the ending note is a long note, the music is divided into two phrases.
- a phrase must have some form of full or half cadence: for example, the ending note is the tonic, or a cadential or semi-cadential chord progression occurs (tonic chord, dominant chord, dominant seventh chord, subdominant chord, cadential six-four chord); various cadential chord patterns can be preset.
- the types of cadence include:
- Authentic cadence: a harmonic progression containing the dominant resolving to the tonic; V-I or vii-I. A Perfect Authentic Cadence (PAC) is the complete form; an Imperfect Authentic Cadence (IAC) is the incomplete or inverted form.
- Plagal cadence (PC): IV-I, a progression to the tonic without the dominant, also called the "Amen" or church cadence.
- Half cadence (HC): I-V, or any chord moving to V; the Phrygian cadence, iv6-V in minor, is a special half cadence.
- Deceptive (false) cadence (DC): V-vi is the most common.
- Elision: the cadence is elided, so that the end of one phrase is simultaneously the beginning of the next.
- Picardy third: the third of the final tonic chord in a minor key is raised, so the piece ends on a major chord.
- a phrase should have a certain length, generally about 4 bars, possibly 8 bars or even more. The phrase must be shorter than the section containing it and longer than the musical patterns at that position (musical pattern < phrase < section).
- the ending of each phrase is a long note, so the duration of a phrase's ending note is at least half of the beat unit given by the denominator of the time signature.
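- the ending test can be sketched as follows; it is a minimal illustration in which durations are given in quarter-note units, and reading "half of the denominator" as half of the beat unit is an interpretive assumption.

# A minimal sketch of the phrase-ending test; the duration encoding and
# the reading of the rule are assumptions, not a definitive implementation.
def is_possible_phrase_ending(duration_quarters, ts_denominator,
                              ends_on_tonic=False, cadence_matched=False):
    beat_unit = 4.0 / ts_denominator        # denominator 4 -> one quarter note
    long_enough = duration_quarters >= beat_unit / 2
    # A phrase must also show some form of full or half cadence.
    return long_enough and (ends_on_tonic or cadence_matched)

# Example: in 3/4, a half note (2.0 quarters) ending on the tonic qualifies.
print(is_possible_phrase_ending(2.0, 4, ends_on_tonic=True))  # True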
- the score analysis and labeling method further comprises: performing musical section analysis and labeling on the electronic musical score; and performing phrase analysis and labeling based on the musical patterns, the rhythm patterns, the scales, the arpeggios, the chords, the musical term and symbol labels, and the analysis results of the musical sections.
- the musical term and symbol labeling unit includes: a musical term labeling subunit for labeling the musical terms in the electronic score; a musical symbol labeling subunit for labeling the musical symbols in the electronic score; and a work period characteristic analysis and labeling subunit for analyzing and labeling the period characteristics of the electronic score based on the labeled musical terms and symbols, combined with the labeling results for the musical patterns, scales, chords, arpeggios, tonality, rhythm patterns, phrases, and sections.
- musical symbol and term labeling works as follows: identify the musical terms and symbols on the score (covering dynamics, tempo, expression, style, playing technique, period and genre, etc.), and optionally display a paraphrase at the bottom of the score, with a number associating it with the symbol on the score for printing.
- the knowledge graph is expressed as follows: when the user selects an element such as the title or author of the piece, a button for jumping to the Baidu Encyclopedia entry is displayed; clicking the button shows the entry's Baidu Encyclopedia content in a pop-up window beside it. The user can take any part of it (or edit it manually) and display it at the bottom of the score, associated with a number label.
- period analysis of a musical work is expressed as follows: according to the above tonal characteristics, musical symbols and terms, scale/arpeggio/chord characteristics, beat and rhythm characteristics, phrase characteristics, and so on, the characteristics of the piece are judged comprehensively and a conclusion is given on the score; the corresponding features extracted from the score are highlighted in color according to the above features, the user can click on each highlighted part, and the number is marked on the score.
- the score analysis and labeling method/system may also adopt artificial intelligence deep learning technology to improve the accuracy of analyses such as musical pattern analysis, phrase analysis, tonality analysis, and period characteristic analysis.
- the phrases are marked, as described above, with arcs between them, and the user can manually drag the arcs to adjust their positions.
- FIG. 12 illustrates a block diagram of a score analysis and labeling apparatus according to an embodiment of the present application.
- the score analysis and labeling apparatus 200 includes: a time signature determination unit 210 for determining the time signature of an electronic musical score; a beat and rhythm pattern analysis unit 220 for performing beat labeling and rhythm pattern analysis and labeling on the electronic score based on the determined time signature; a pitch and interval labeling unit 230 for performing pitch and interval labeling on the electronic score; and a musical pattern analysis and labeling unit 240 for analyzing and labeling musical patterns based on the labeling results for the pitches, the intervals, and the rhythm patterns.
- the beat and rhythm pattern analysis unit 220 is configured to: determine preset basic rhythm patterns, and the doubled and subdivided variants of the basic rhythm patterns, based on the rhythm combination characteristics of the electronic score; and perform rhythm pattern analysis on the electronic score by comparison against the basic, doubled, and subdivided rhythm patterns.
- the beat and rhythm pattern analysis unit 220 performing rhythm pattern labeling on the electronic score includes labeling with the accumulated note durations of each beat as one unit, including: defining the start and end coordinates of a rhythm pattern mark; defining the position coordinates of multiple rhythm pattern marks when one note spans multiple beats; and, within one unit of rhythm pattern mark, calculating the proportional durations of the notes contained in the beat, dividing the mark equally according to the calculated proportions, and breaking the mark at each note.
- the pitch and interval labeling unit 230 is configured to perform intra-bar interval labeling and inter-bar interval labeling on the electronic score.
- the musical pattern analysis and labeling unit 240 is configured to: determine a reference note group with a predetermined number of notes, the reference note group constituting a reference musical pattern; and traverse the other note groups with the predetermined number of notes in the electronic score to determine, based on the pitch, duration, and intervals of each group, the musical patterns that are identical, displaced, mirrored, or similar with respect to the reference musical pattern.
- the above apparatus for analyzing and labeling musical scores includes at least one of the following definitions: an identical musical pattern requires two note groups with the same rhythm pattern, the same pitches, and the same intervals; a similar musical pattern may have different rhythm patterns but the same pitches and intervals; a similar musical pattern may have different rhythm patterns, different pitches, and the same intervals; a similar musical pattern may have rhythm patterns that are the same or in a doubled/subdivided relationship, with more than 50% of the pitches or intervals the same; a displaced musical pattern has the same rhythm pattern and the same intervals but different pitches; and a mirrored musical pattern has the same rhythm pattern and the same intervals, with the pitches arranged in reverse order.
- extracting and comparing the tones includes at least one of the following cases: if the two note groups being compared are both double notes, the upper notes (higher-pitched notes) of the two groups are compared with each other, the lower notes (lower-pitched notes) are compared with each other, and then the upper and lower notes of the two groups are cross-compared; and if one group is double notes and the other single notes, the single-note group serves as the reference group, and the upper and lower notes of the double-note group are each compared against it.
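- the double-note comparison rule can be sketched as follows; the note-group encoding and the compare callback are illustrative assumptions, standing in for whatever pitch/duration/interval test the unit applies.

# A minimal sketch: upper voices compare with upper, lower with lower,
# then cross-compare; with one dyad group, the single-note group is the
# reference. "compare" is a hypothetical callable returning True/False.
def compare_groups(group_a, group_b, compare):
    # Each group is a list of (lower_pitch, upper_pitch) or (pitch,) tuples.
    a_dyads = all(len(n) == 2 for n in group_a)
    b_dyads = all(len(n) == 2 for n in group_b)
    if a_dyads and b_dyads:
        upper = compare([n[1] for n in group_a], [n[1] for n in group_b])
        lower = compare([n[0] for n in group_a], [n[0] for n in group_b])
        cross = (compare([n[1] for n in group_a], [n[0] for n in group_b]) or
                 compare([n[0] for n in group_a], [n[1] for n in group_b]))
        return upper or lower or cross
    if a_dyads != b_dyads:  # one dyad group, one single-note group
        single, dyad = (group_b, group_a) if a_dyads else (group_a, group_b)
        ref = [n[0] for n in single]
        return (compare([n[1] for n in dyad], ref) or
                compare([n[0] for n in dyad], ref))
    return compare([n[0] for n in group_a], [n[0] for n in group_b])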
- a mode analysis and labeling unit is used for analyzing and labeling key signatures and accidentals on the electronic musical score; performing scale, chord, and arpeggio analysis and labeling based on the key signature analysis results; and performing mode analysis and labeling based on the analysis results for the key signature, the accidentals, the chords, and the pitches.
- the apparatus for analyzing and labeling musical scores further comprises a section and phrase analysis and labeling unit for performing section analysis and labeling on the electronic musical score, and for performing phrase analysis and labeling based on the musical patterns, the rhythm patterns, the scales, the arpeggios, the chords, the musical terms and symbols, and the analysis results of the musical sections.
- the apparatus further comprises a work period characteristic analysis and labeling unit for labeling the electronic score with musical terms and symbols, and for analyzing and labeling the period characteristics of the work based on those labels combined with the labeling results for the musical patterns, the scales, the arpeggios, the chords, the tonality, the beat and rhythm patterns, and the phrases.
- the score analysis and labeling apparatus 200 may be implemented in various terminal devices, such as smart phones, computers, and servers.
- the score analysis and labeling apparatus 200 according to the embodiment of the present application may be integrated into a terminal device as a software module and/or a hardware module, for example as a software module in the operating system of the terminal device or as an application program developed for it; of course, the apparatus 200 may also be one of many hardware modules of the terminal device.
- the score analysis and labeling apparatus 200 and the terminal device may also be separate devices connected through a wired and/or wireless network, transmitting interactive information according to an agreed data format.
- FIG. 13 illustrates a block diagram of an electronic device according to an embodiment of the present application.
- the electronic device 10 includes one or more processors 11 and a memory 12 .
- Processor 11 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in electronic device 10 to perform desired functions.
- Memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
- the volatile memory may include, for example, random access memory (RAM) and/or cache memory, or the like.
- the non-volatile memory may include, for example, read only memory (ROM), hard disk, flash memory, and the like.
- One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may execute the program instructions to implement the score analysis and labeling methods of the various embodiments of the present application described above and/or other desired functionality.
- Various contents such as rhythm patterns, pitches, intervals, and musical patterns may also be stored in the computer-readable storage medium.
- the electronic device 10 may also include an input device 13 and an output device 14 interconnected by a bus system and/or other form of connection mechanism (not shown).
- the input device 13 may include, for example, a keyboard, a mouse, and the like.
- the output device 14 can output various information to the outside, including the labeled score and the like.
- the output device 14 may include, for example, displays, speakers, printers, and communication networks and their connected remote output devices, among others.
- the electronic device 10 may also include any other suitable components according to the specific application.
- embodiments of the present application may also be computer program products comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the score analysis and labeling methods according to the various embodiments of the present application described in the "Exemplary Methods" section above.
- the computer program product can have its program code for performing the operations of the embodiments of the present application written in any combination of one or more programming languages, including object-oriented languages such as Java and C++, as well as conventional procedural languages such as the "C" language or similar.
- the program code may execute entirely on the first user's computing device, partly on the first user's device as a stand-alone software package, partly on the first user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
- embodiments of the present application may also be computer-readable storage media having computer program instructions stored thereon, which, when executed by a processor, cause the processor to perform the steps in the score analysis and labeling methods according to the various embodiments of the present application described in the "Exemplary Methods" section above.
- the computer-readable storage medium may employ any combination of one or more readable media.
- the readable medium may be a readable signal medium or a readable storage medium.
- the readable storage medium may include, for example, but not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses or devices, or a combination of any of the above. More specific examples (non-exhaustive list) of readable storage media include: electrical connections with one or more wires, portable disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
- the knowledge graph unit includes: an author information subunit for labeling author-related information of the electronic musical score; and a piece information subunit for labeling piece-related information of the electronic musical score.
- FIG. 15 illustrates a schematic diagram of the overall architecture of a score analysis and labeling system according to an embodiment of the present application. As shown in FIG. 15, each module has been described in detail above and is not repeated here.
- the score analysis and labeling system according to the embodiments of the present application may be implemented in various terminal devices, such as smart phones, computers, and servers.
- the score analysis and labeling system according to the embodiment of the present application may be integrated into a terminal device as a software module and/or a hardware module, for example as a software module in the operating system of the terminal device or as an application program developed for it; of course, the system may also be one of many hardware modules of the terminal device.
- the score analysis and labeling system and the terminal device may also be separate devices connected through a wired and/or wireless network, transmitting interactive information according to an agreed data format.
- Another object of the present application is to support an efficient music teaching system through a modular music database.
- the generation of teaching content, the refinement of difficulty classification labels, and the accuracy of association recommendations are enhanced through the application of technologies such as machine learning, artificial intelligence, and knowledge graphs.
- FIG. 24 illustrates the relationship of the modular music database to the score analysis and labeling method/system.
- the user interaction unit 005 interacts with the difficulty label refinement training unit 003 to refine the difficulty labels and return them to the modular music database 004.
- the MusicXML feature analysis and labeling unit is the foundation and technical support for building the modular music database.
- the MusicXML feature analysis and labeling unit is also the basic data source of the collaborative editing unit; after collaborative editing, the results are stored in the intelligent score library for user interaction, making them more accurate and applicable to adaptive learning recommendation.
- the modular music database of this application is an open library: by interacting with other tool software, such as score analysis software, it forms an open and variable library, so that a large number of users, in the course of accumulation, can feed back teaching content and upload materials that enrich the database.
- the basic expressive elements of music include: mode, tonality, melodic line, rhythm, meter, range, timbre, dynamics, tempo, harmony, and texture.
- Point: the smallest element of data collection, the "pattern" (including notes, time signatures, rhythm patterns, chords, scales, and arpeggios).
- Tree: structural development (the form, theme, and development of the music) based on voice and texture progression.
- the expressive elements can be further stratified.
- the modular music teaching database of the present application is a database that extracts a large number of score features according to teaching logic in the field of music teaching and combines them with difficulty label classification. It has a machine learning function and can automatically create a back-end teaching database of materials according to the difficulty labels. After the materials in the database are generated, they are processed and transformed by corresponding algorithm tools into a system that users can interact with.
- the modular music database of the present application includes a classified score library, an encyclopedia knowledge base and a special training library.
- the scores in the classified score library can be retrieved through a score analysis tool for intelligent score analysis and labeling; after labeling is completed they are stored as an intelligent score library, and the tracking and analysis of a large number of users can be used to optimize the system's intelligent analysis results.
- the content in the encyclopedia knowledge base can be associated with score analysis and labeling through the knowledge graph tool, and the system's recommendation function is optimized through the selections and excerpts of a large number of users, so that the content of the encyclopedia knowledge base is continuously optimized and precisely streamlined.
- the content in the special training library can be converted into special training tasks through conversion tools; these interactive tasks can record, judge, and evaluate user feedback, which is used to optimize the difficulty ordering.
- the modular music teaching database of the present application may further include a single-piece teaching library, which can identify and crawl the teaching materials (video, audio) related to a single piece by using an identification tool for melody waveform extraction and comparison; based on the selection and usage results of a large number of users, the system's recommendation function is optimized, so that the single-piece content is continuously optimized and precisely streamlined.
- the single-piece teaching library is also used to associate and extract the optimized results of the encyclopedia knowledge base, and can extract the relevant features of a single piece to form special training for that piece.
- the modular music database of the present application is used to create difficulty label classifications at the back end and generate teaching materials; the materials generated by the back end interact with users through front-end user interaction software, which receives usage data, records and analyzes them, and feeds back to optimize the back-end materials and difficulty ordering.
- FIG. 16 illustrates a block diagram of an example of a modular music database according to an embodiment of the present application.
- the modular music teaching database 400 includes a classified score library 410 , an encyclopedia knowledge base 420 and a special training library 430 .
- the classified music score library 410, the encyclopedia knowledge base 420 and the special training library 430 will be described in detail.
- FIG. 17 illustrates a block diagram of an example of a classified score library of a modular music database according to an embodiment of the present application.
- the classified score library 410 of the modular music database 400 includes: a performance skill knowledge point feature extraction unit 411 for extracting musical features according to performance skill knowledge points, the musical features including tonality, meter, rhythm pattern, hand position, musical symbols, articulation, intervals, and the like;
- an encyclopedic knowledge feature extraction unit 412 for extracting encyclopedic knowledge features from the score,
- the knowledge features including the score's period, author, genre, style, mode, and the like;
- a theme feature extraction unit 413 for extracting theme features according to the theme of the score, the theme features including general themes, programmatic (titled) themes, and the like; and
- a label classification unit 414 for label classification according to score difficulty.
- the classified score library 410 classifies scores according to features extracted in different categories; it can complete score recognition and automatic classification and, accordingly, also supports users in intelligent retrieval by tags.
- the label categories include but are not limited to the categories corresponding to the extracted features: A1 classifies by the score's key, meter, rhythm, hand position, musical symbols, harmony, articulation, intervals, and melody/musical patterns; A2 classifies by the score's period, author, genre, style, and mode; A3 classifies by the score's theme characteristics; A4 classifies by the score's difficulty labels.
- FIG. 18 illustrates tag features of a classified score library of a modular music database according to an embodiment of the present application.
- the theme labels may also include world-famous pieces, children's songs, humanities, common knowledge, geography, festivals, animals, emotions, transportation, and so on.
- FIG. 19 illustrates a flowchart of a tag classification process of a modular teaching database according to an embodiment of the present application.
- the label classification unit 414 and the special training library 430 perform label classification through interactive iteration.
- the specific process is as follows:
- the preset features may include features extracted according to the basic elements of music, such as: a interval feature, b range feature, c note duration feature, d clef feature, e time signature feature, f rhythm pattern feature, g rest feature, h tie feature, i key feature, j bar count feature, k musical term feature, l musical symbol feature, m melodic-line progression (musical pattern) feature, n chord feature, o hand position feature, p fingering feature, q accidental feature, r articulation feature, s phrase structure feature, t tempo feature, u accompaniment texture feature, v pedal feature, w ornament feature, x musical form structure feature, y voice feature (counterpoint);
- S2 According to graded exam materials over the years (taking the ABRSM graded exams as an example), use machine learning to learn the difficulty grading rules for the features preset in S1 and their combinations, and generate the first-level difficulty labels;
- S3 Define first-level and second-level difficulty labels for the features and feature combinations preset in S1 (the second-level difficulty labels are more detailed and are placed under the corresponding first-level difficulty labels);
- S4 According to the second-level difficulty labels generated in S3, select the corresponding features and their combinations, arrange them in random combinations, and automatically generate training materials;
- S5 According to the second-level difficulty labels generated in S3, extract qualifying features and their combinations from existing scores (which can be extracted from the classified score library or crawled from the external network), and automatically generate candidate materials;
- S6 Unify the candidate materials generated in S5 according to the format requirements of each module's teaching data (for example, materials in the rhythm library need their pitch labels removed), and generate training materials;
- S7 Compare the training materials generated in S4 and S6 with the first-level difficulty labels defined in S2 to verify whether an inclusion relationship holds (that is, whether the material can be attributed to the corresponding first-level difficulty label);
- S8 If the result of S7 is non-inclusion, place the training material in a verification library and perform further verification to determine whether the material can be attributed to a first-level difficulty label defined in S2;
- S9 If the further verification finds that the material can be attributed, feed it into the machine learning of S2 to optimize the machine's definition of the first-level difficulty labels; if it cannot be attributed, adjust and refine the difficulty labels defined in S3 to match the machine's judgment.
- S2-S9 may be referred to as a difficulty label definition cycle.
- the label classification unit 414 further performs sorting and storage under labels of the same difficulty level, including:
- S10 Preset the sorting rules for materials under the same difficulty label (the refined second-level difficulty label), for example: by bar count from fewest to most; under the same bar count, by a preset ordering of time signatures; if both are the same, by note count from fewest to most; and so on;
- S11 Compare and sort the training materials that pass the S8 verification library according to the sorting rules preset in S10;
- S12 Store the training materials sorted in S11 into the database.
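- S10-S12 amount to an ordinary multi-key sort; the following is a minimal sketch in which the material records and the preset time-signature order are illustrative assumptions.

# A minimal sketch of the S10-S12 sorting rule: fewer bars first, then a
# preset time-signature order, then fewer notes. Field names are assumed.
TIME_SIGNATURE_ORDER = {"2/4": 0, "3/4": 1, "4/4": 2, "3/8": 3, "6/8": 4}

def sort_key(material):
    return (material["bar_count"],
            TIME_SIGNATURE_ORDER.get(material["time_signature"], 99),
            material["note_count"])

materials = [
    {"id": 7, "bar_count": 8, "time_signature": "3/4", "note_count": 30},
    {"id": 3, "bar_count": 4, "time_signature": "4/4", "note_count": 12},
]
store_order = sorted(materials, key=sort_key)  # S11; S12 stores this order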
- the label classification unit 414 can optimize the ranking through interaction with the user, including:
- S13 Process the materials in the S12 database into a unified format according to the standard score presentation format of each module's database, generating each module's special data library;
- S14 Convert the scores in the S13 special libraries into interactive tasks;
- S15 The user performs the interactive tasks;
- S16 Collect tracking data and feedback from user training (such as the completion time and accuracy of a large number of users on the same exercise);
- S17 According to the user data collected in S16, continuously optimize the difficulty ranking under the second-level difficulty labels, and return to S12 to update the database ordering.
- This database can continuously generate and optimize the content of the corresponding modules in the process of machine learning, automatic content generation and user feedback.
- the encyclopedia knowledge base 420 includes knowledge related to musical works, such as the score's period, author, genre, style, mode and form, and theme. And, as mentioned above, the content in the encyclopedia knowledge base 420 is associated with score analysis and labeling through the knowledge graph tool; through interaction with the user, the content to be used can be selected and excerpted and the system's recommendation function optimized, so that the content of the encyclopedia knowledge base is continuously optimized, precise, and streamlined.
- FIG. 20 illustrates a block diagram of an example of a specialized training library of a modular music database according to an embodiment of the present application.
- the special training library 430 of the modular music database 400 includes: a rhythm library 431, a sight-singing library 432, a sight-reading library 433, a listening library 434, and a technique library 435.
- the purpose of the materials in the rhythm library 431 is to train learners to apply note durations and rhythm patterns in different time signatures, to become familiar with different rhythm patterns and styles, and to read the rhythms in notation quickly, laying a good foundation for the score-reading difficulties of vocal or instrumental learning while cultivating a good sense of rhythm.
- the specific implementation process of the rhythm library 431 includes the following steps:
- S1 Difficulty grading and difficulty ordering definition: six features need to be extracted: a time signature feature, b rhythm pattern feature, c rest feature, d tie feature, e bar count feature, f voice feature (ensemble, round, canon);
- S2 According to graded exam materials over the years (taking the ABRSM graded exams as an example), take the existing sight-reading materials of each level, remove the pitches, and extract the above six features; use machine learning to learn the difficulty grading rules for the features preset in S1 and their combinations, and generate the first-level difficulty labels;
- S3 According to the progression rules of rhythm learning, define first-level and second-level difficulty labels for the features and feature combinations preset in S1 (the second-level difficulty labels are more detailed and are placed under the corresponding first-level difficulty labels);
- S4 According to the second-level difficulty labels generated in S3, select the corresponding features and their combinations, arrange them in random combinations, and automatically generate training materials;
- S5 According to the second-level difficulty labels generated in S3, extract qualifying features and their combinations from existing scores (which can be extracted from the classified score library or crawled from the external network), and automatically generate candidate materials;
- S6 Unify the format of the candidate materials generated in S5 according to the format requirements of the rhythm training library (remove pitch marks) to generate training materials;
- S7 Compare the training materials generated in S4 and S6 with the first-level difficulty labels defined in S2 to verify whether an inclusion relationship holds (that is, whether the material can be attributed to the corresponding first-level difficulty label);
- S8 If the result of S7 is non-inclusion, place the training material in the verification library and verify whether it can be attributed to a first-level difficulty label defined in S2; S9 if it can be attributed, feed it into the machine learning of S2 to optimize the label definitions, and if not, refine the manually defined labels of S3 to match the machine's judgment.
- S2-S9 may be referred to as the difficulty label definition loop of the rhythm library.
- the specific implementation of the rhythm library 431 further includes sorting and storage under labels of the same difficulty level, including:
- S10 Preset the sorting rules for materials under the same difficulty label (the refined second-level difficulty label), for example: by bar count from fewest to most; under the same bar count, by a preset ordering of time signatures; if both are the same, by note count from fewest to most; and so on;
- S11 Compare and sort the training materials that pass the S8 verification library according to the sorting rules preset in S10;
- S12 Store the training materials sorted in S11 into the rhythm library 431.
- the rhythm library 431 can optimize the sorting through interaction with the user, including:
- S16 Collect tracking data and feedback from user training (such as the completion time and accuracy of a large number of users on the same exercise);
- S17 According to the user data collected in S16, continuously optimize the difficulty ranking under the second-level difficulty labels, and return to S12 to update the ordering in the rhythm library 431.
- S13-S17 may be referred to as the user interaction sorting optimization loop of the rhythm library 431.
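- the S16-S17 loop can be sketched as a re-ranking from aggregated user statistics; the weighting of completion time against error rate below is an assumption, since the text only states that the ordering is continuously optimized.

# A minimal sketch: re-rank materials under one second-level difficulty
# label from user data. The weights 0.5 and 100.0 are assumptions.
def observed_difficulty(stats):
    # stats: {"avg_seconds": float, "accuracy": float in [0, 1]}
    return stats["avg_seconds"] * 0.5 + (1.0 - stats["accuracy"]) * 100.0

def reorder(materials, user_stats):
    # Easier materials (lower observed difficulty) come first; the new
    # order is written back to the rhythm library (S12).
    return sorted(materials, key=lambda m: observed_difficulty(user_stats[m]))

print(reorder(["m1", "m2"],
              {"m1": {"avg_seconds": 40, "accuracy": 0.95},
               "m2": {"avg_seconds": 25, "accuracy": 0.99}}))  # ['m2', 'm1']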
- FIG. 21 illustrates an example of difficulty labels in the rhythm library of the modular music teaching database according to an embodiment of the present application.
- the sight-singing library 432 can be divided into two parts: the first is a sub-library for pitch and intonation training, and the second is a sub-library for single-voice melody sight-singing training.
- the teaching purpose of the materials in the pitch and intonation training sub-library is to train learners' score reading and intonation in different clefs, familiarity with standard pitch and the distances between pitches, the ability to sing pitches, and a gradually widening range.
- the pitch and intonation training sub-library has six features (monophonic training, no rhythm): a pitch feature, b interval feature, c range feature, d note count feature, e clef feature, f key feature (key signature, accidentals, and tonic chord).
- the specific implementation steps are similar to S1-S17 of the rhythm library, but the extracted features are different.
- FIG. 22 illustrates the first application scenario of the pitch and intonation training sub-library of the modular music teaching database according to an embodiment of the present application, namely the score application scenario.
- FIG. 23 illustrates the second application scenario of the pitch and intonation training sub-library of the modular music teaching database according to an embodiment of the present application, namely the sight-singing software application scenario.
- the teaching purpose of the materials in the single-voice melody sight-singing training sub-library is to train learners' score reading in different clefs and their control of a melody's rhythm, musicality, melodic beauty, phrased breathing, and so on. Its application scenarios are also of two kinds: the first is direct use as score material for the learner to practice; the second is conversion, through a converter, into training content in the sight-singing software, which can compare and demonstrate pitch and rhythm through the microphone, similar to the sing-along scoring function of KTV or the mobile "Sing it" application.
- the material of the single-voice melody sight-singing training sub-library has 13 features: a interval feature, b range feature, c note duration feature, d clef feature, e time signature feature, f rhythm pattern feature, g rest feature, h tie feature, i key feature, j bar count feature, k musical term feature, l musical symbol feature, m musical pattern feature.
- a, b, c, d, e, f, i, j are mandatory;
- g, h, k, l, m are options added gradually according to level.
- the specific implementation steps are similar to S1-S17 of the rhythm library, but the extracted features are different.
- FIG. 33 illustrates the second application scenario of the single-voice melody sight-singing training sub-library of the modular music teaching database according to an embodiment of the present application, namely the sight-singing software application scenario.
- the teaching purpose of the materials in the sight-reading library 433 is to train the learner's ability to read and play scores quickly, that is, the ability to play a new score correctly in the shortest time after receiving it.
- the sight-reading software can compare, correct, and score performances through the microphone or through data transmission between an electric piano and the computer.
- the material in the sight-reading library 433 has 23 features: a interval feature, b range feature, c note duration feature, d clef feature, e time signature feature, f rhythm pattern feature, g rest feature, h tie feature, i key feature, j bar count feature, k musical term feature, l musical symbol feature, m melodic-line progression (musical pattern feature), n chord feature, o hand position feature, p fingering feature, q accidental feature, r articulation feature, s phrase structure feature, t tempo feature, u accompaniment texture, v pedal, w ornament.
- a, b, c, d, e, f, i, j, o, p, q, r are mandatory; g, h, k, l, m, n, s, t, u, v, w are options added gradually as the level progresses.
- the specific implementation steps of the sight-reading library 433 are similar to S1-S17 of the rhythm library, but the extracted features are different.
- the teaching purpose of the materials in the listening library 434 is to train inner hearing and aural discrimination; its sub-libraries are further divided into the following, 434-1 to 434-6.
- Listening training sub-library 434-1 (guess the key/interval/chord), including extraction of six features: a pitch, b range, c clef, d chord, e interval, f key.
- Tonality identification sub-library 434-2 (guess the scale/chord/tonality), including extraction of five features: a chord, b interval, c key feature, d bar count feature, e chord feature.
- Rhythm training sub-library 434-3 (clap the rhythm), including extraction of five features: a rhythm pattern feature, b rest feature, c tie feature, d bar count feature, e time signature feature.
- Beat training sub-library 434-4 (clap the time), including extraction of five features: a rhythm pattern feature, b rest feature, c tie feature, d bar count feature, e time signature feature.
- Melody identification sub-library 434-5 (telling the difference), including extraction of five features: a rhythm pattern feature, b rest feature, c tie feature, d bar count feature, e pitch feature.
- Melody analysis sub-library 434-6 (music analysis), including extraction of eighteen features: a dynamics, b articulation, c tempo, d key, e period, f accompaniment texture, g phrase structure, h range, i time signature, j melodic-line progression (musical pattern feature), k musical symbol, l musical term, m rhythm pattern, n pedal, o ornament, p scale and arpeggio, q cadence, r bar count.
- the specific implementation steps of these libraries are similar to S1-S17 of the rhythm library, but the extracted features are different.
- the teaching purpose of the technique training library 435 is to train learners' basic finger technique and playing technique, so that their playing skills can support the expression of musical works.
- the specific implementation process of the skill training library 435 includes the following steps:
- S1 Difficulty grading and difficulty ordering definition: define the material classification, levels, and difficulty ordering (this part can be defined by machine learning and difficulty labels; in addition, since this is all content with clear difficulty levels, it can also be defined by presets).
- S2 Automatically generate teaching materials and store them: according to the classification, difficulty, and ordering definitions of S1, identify, sort, and enter the materials into the library according to the template.
- S3 Convert the content into technique training software, for example converting the prepared material through a converter into a piece of training content in the technique training software.
- S4 Record user operation data in the software, and re-optimize the difficulty ordering of the materials according to the operation data of a large number of users (the time to complete a given piece of material, the accuracy rate, etc.).
- the special training library 430 can interact with the label classification unit 414 in the classified score library 410 to improve the accuracy of label classification.
- each component or step can be decomposed and/or recombined; such decompositions and/or recombinations should be regarded as equivalents of the present application.
Abstract
A score analysis and labeling method, apparatus, and electronic device. The score analysis and labeling method includes: determining the time signature of an electronic musical score; performing beat labeling and rhythm pattern analysis and labeling on the electronic score based on the determined time signature; performing pitch and interval labeling on the electronic score; and analyzing and labeling musical patterns based on the labeling results for pitch, interval, and rhythm pattern. In summary, the score analysis and labeling method can trace and decompose a musical piece based on its rhythm, melody, mode characteristics, and period style, and analyzes and labels the score through extraction and analysis of note-based rhythm, melody, mode, fingering, structure, musical terms and symbols, and period features, thereby performing score recognition and labeling effectively.
Description
The present application relates to the technical field of data analysis, and more specifically to a score analysis and labeling method, a score analysis and labeling apparatus, and an electronic device.

With economic development, people's cultural needs are gradually increasing. Music is an important form of artistic work, and how to combine it with current data analysis methods is drawing more and more attention.

Analyzing a musical work generally means mining the work in depth at both the technical and the literary level during the analysis. Mining at the literary level helps in understanding the background of the work and the inner world of the composer, while mining at the technical level applies professional musical knowledge to analyze the pitch, rhythm, texture, harmony, and form of the work, reinforcing the overall understanding of it.

Just as poetry is built from language, painting from lines and colors, and architecture from bricks and stones, music is built from basic musical elements. Musical elements are the basic components of music, generally including (1) melodic line; (2) rhythm; (3) meter; (4) harmony, mode, and tonality; (5) tempo; (6) dynamics; (7) register and range; (8) timbre; (9) singing/playing technique; and (10) texture. These elements constitute the basic means of musical expression.

Correct and effective score analysis plays an important role in many respects; for example, score reading is decisive for a performer's understanding of a work's musical content and style.

In teaching, an experienced teacher explains the piece and marks up the score for students, helping them analyze the structure and basic elements of the piece the way they would read an article: outlining related musical patterns and characteristic rhythm patterns, judging the tonality, marking special articulations and special fingerings, and interpreting the musical terms and symbols on the score. The teacher may also prepare lessons that explain, from a literary angle, the background of a complex piece, the composer's life, and the piece's themes and stylistic characteristics. This takes up much of the teacher's time and requires a certain level of expertise and teaching experience.

Therefore, it is desirable to provide an automated score analysis and labeling method and system, so that score labeling can be carried out effectively.
Summary of the Invention
The present application is proposed to solve the above technical problems. Embodiments of the present application provide a score analysis and labeling method, apparatus, electronic device, and system, which can trace and decompose a musical piece based on its characteristics, and analyze and label the score through extraction and analysis of note-based rhythm, melody, mode, fingering, structure, musical terms and symbols, and period features, thereby performing score recognition and labeling effectively.

According to one aspect of the present application, a score analysis and labeling method is provided, including: determining the time signature of an electronic musical score; performing beat labeling and rhythm pattern analysis and labeling on the electronic score based on the determined time signature; performing pitch and interval labeling on the electronic score; and analyzing and labeling musical patterns based on the labeling results for pitch, interval, and rhythm pattern.

In the above score analysis and labeling method, performing rhythm pattern analysis on the electronic score includes: determining preset basic rhythm patterns, and the doubled and subdivided variants of the basic rhythm patterns, based on the rhythm combination characteristics of the electronic score; and performing rhythm pattern analysis on the electronic score by comparison against the basic, doubled, and subdivided rhythm patterns.

In the above score analysis and labeling method, performing interval labeling on the electronic score includes intra-bar interval labeling and inter-bar interval labeling.

In the above score analysis and labeling method, analyzing and labeling musical patterns based on the labeling results for pitch, interval, and rhythm pattern includes: determining a reference note group with a predetermined number of notes, the reference note group constituting a reference musical pattern; and traversing the other note groups with the predetermined number of notes in the electronic score to determine, based on the pitch, time value, and intervals of each group, the musical patterns that are identical, displaced, mirrored, or similar with respect to the reference musical pattern.

In the above score analysis and labeling method, at least one of the following holds: an identical musical pattern requires two note groups with the same rhythm pattern, the same pitches, and the same intervals; a similar musical pattern may have different rhythm patterns but the same pitches and intervals; a similar musical pattern may have different rhythm patterns, different pitches, and the same intervals; a similar musical pattern may have rhythm patterns that are the same or in a doubled/subdivided relationship, with more than 50% of the pitches or intervals the same; a displaced musical pattern has the same rhythm pattern and the same intervals but different pitches; and a mirrored musical pattern is one in which, for each corresponding note of the two groups, the intervals to the axis of symmetry (the midpoint between the highest and lowest notes of the group) are opposite numbers.

In the above score analysis and labeling method, if the score contains double notes, extracting the tones for comparison covers at least one of the following cases: if both note groups being compared are double notes, the upper notes (higher-pitched notes) of the two groups are compared with each other, the lower notes (lower-pitched notes) are compared with each other, and then the upper and lower notes of the two groups are cross-compared; and if one group is double notes and the other single notes, the single-note group serves as the reference group, and the upper and lower notes of the double-note group are each compared against it.

In the above score analysis and labeling method, the method further includes: performing key signature and accidental analysis and labeling on the electronic score; performing scale, chord, and arpeggio analysis and labeling based on the key signature analysis results; and performing mode analysis and labeling based on the analysis results for the key signature, the accidentals, the chords, and the pitches.

In the above score analysis and labeling method, the method further includes: performing musical section analysis and labeling on the electronic score; and performing phrase analysis and labeling based on the analysis results for the musical patterns, the rhythm patterns, the scales, the arpeggios, the chords, the musical term and symbol labels, and the musical sections.

In the above score analysis and labeling method, the method further includes: labeling musical terms and symbols on the electronic score; and, based on the term and symbol labeling results combined with the labeling results for the musical patterns, the scales, the arpeggios, the chords, the tonality, the beat and rhythm patterns, and the phrases and sections, analyzing and labeling the period characteristics of the work.

According to another aspect of the present application, a score analysis and labeling apparatus is provided, including: a time signature determination unit for determining the time signature of an electronic musical score; a beat and rhythm pattern analysis unit for performing beat labeling and rhythm pattern analysis and labeling on the electronic score based on the determined time signature; a pitch and interval labeling unit for performing pitch and interval labeling on the electronic score; and a musical pattern analysis and labeling unit for analyzing and labeling musical patterns based on the labeling results for pitch, interval, and rhythm pattern.

According to a further aspect of the present application, an electronic device is provided, including: a processor; and a memory storing computer program instructions which, when executed by the processor, cause the processor to perform the score analysis and labeling method described above.

According to yet another aspect of the present application, a computer-readable medium is provided, on which computer program instructions are stored, the instructions, when executed by a processor, causing the processor to perform the score analysis and labeling method described above.

The score analysis and labeling method, apparatus, and electronic device provided by the present application can trace and decompose a musical piece based on characteristics such as melodic line, rhythm, meter, harmony, mode and tonality, tempo, dynamics, pitch, and playing technique, and analyze and label the score through extraction and analysis of note-based rhythm, melody, mode, fingering, structure, musical terms and symbols, and period features, thereby performing score recognition and labeling effectively.
The present invention further relates to the following.

Instrumental teaching needs to integrate the training of basic finger technique, playing technique, sight-reading ability, sense of rhythm, listening and singing ability, analysis of musical works, music theory, music history, and performance and expression, much like the barrel principle: any single short stave lowers how much overall musicianship the barrel can hold. However, many students' learning lacks decomposition, let alone systematic specialized training, so many "staves" are missing or under-trained from the very beginning. For example, if students' sight-reading cannot keep up, learning new pieces at their level becomes difficult and error-prone; if their basic finger technique or playing technique is insufficiently honed, playing becomes laborious and the fingers cannot keep up with the brain. The American pianist and educator Adele Marcus pointed out that technique is the soul of piano playing: only through technical means can one play what one intends. Likewise, students who lack study of music history and theory are limited in their understanding, analysis, and expression of musical works; their playing may be correct but not beautiful, and can hardly restore the artistic character of the work.

From the teacher's side, an instrumental teacher's appeal lies not only in superb playing skill, solid theoretical knowledge, and command of modern teaching methods, psychology, and related humanities, but also in the ability to integrate the playing technique represented by the score with a felt understanding and expression of the work's content, and to apply this consciously in teaching practice. From the student's side, different students, influenced by personality, degree of training, and other factors, develop differently in reading scores, technique, and expressiveness; and the same student's playing level at different stages affects how well he or she grasps a work. In instrumental performance, ability means not only playing skill and repertoire difficulty, but also the ability to understand and to express music.

The above outlines a multidimensional and complex discipline of arts education. It is therefore desirable to provide an efficient, learner-centered music classification data system: combining long-accumulated teaching research with algorithms and data processing, and building an efficient music system with the technical support of a modular music database.
The present application provides a modular music database in which the interaction between a special training library and a classified score library can improve the accuracy of label classification.

According to one aspect of the present application, a modular music database is provided, including:

a classified score library for classifying scores according to features extracted in different categories and storing the scores by category;

an encyclopedia knowledge base for storing knowledge content associated, through a knowledge graph tool, with the analysis and labeling of the scores stored in the classified score library; and

a special training library for storing special training tasks converted from the scores stored in the classified score library, the special training tasks being used for interaction with users, the interaction results being recorded, judged, and evaluated so as to optimize the difficulty ordering of the special training tasks in the special training library.
In the above modular music database, the classified score library includes:

a performance skill knowledge point feature extraction unit for extracting musical features according to performance skill knowledge points, the musical features including tonality, meter, rhythm pattern, hand position, musical symbols, articulation, intervals, and chords;

an encyclopedic knowledge feature extraction unit for extracting encyclopedic knowledge features from the score, the knowledge features including the score's period, author, genre, style, and mode/form;

a theme feature extraction unit for extracting theme features according to the theme of the score, the theme features including general themes and programmatic (titled) themes; and

a label classification unit for label classification according to score difficulty.
In the above modular music database, the classification process of the label classification unit includes:

Step 1: preset features for the materials in the libraries of the different modules, the preset features including features extracted according to the basic elements of music;

Step 2: according to materials of predetermined difficulty, use machine learning to learn difficulty grading rules for the features preset in Step 1 and their combinations, and generate first-level difficulty labels;

Step 3: define first-level and second-level difficulty labels for the features and feature combinations preset in Step 1;

Step 4: according to the second-level difficulty labels generated in Step 3, select the corresponding features and their combinations, arrange them in random combinations, and automatically generate training materials;

Step 5: according to the second-level difficulty labels generated in Step 3, extract qualifying features and their combinations from existing scores to automatically generate candidate materials;

Step 6: unify the candidate materials generated in Step 5 according to the data format requirements of each module, to generate training materials;

Step 7: compare the training materials generated in Steps 4 and 6 with the first-level difficulty labels defined in Step 2, and verify whether an inclusion relationship holds;

Step 8: if the result of Step 7 is non-inclusion, place the training material in a verification library, and use further verification to judge whether the material can be attributed to a first-level difficulty label defined in Step 2;

Step 9-1: if the further verification in Step 8 finds that the material can be attributed, feed it into the machine learning of Step 2 to optimize the machine's definition of the first-level difficulty labels;

Step 9-2: if the further verification in Step 8 finds that the material cannot be attributed, further adjust and refine the difficulty labels defined in Step 3 to make them consistent with the machine's judgment.
In the above modular music database, the preset features include: a interval feature, b range feature, c note duration feature, d clef feature, e time signature feature, f rhythm pattern feature, g rest feature, h tie feature, i key feature, j bar count feature, k musical term feature, l musical symbol feature, m musical pattern feature, n chord feature, o hand position feature, p fingering feature, q accidental feature, r articulation feature, s phrase structure feature, t tempo feature, u accompaniment texture, v pedal, w ornament, x musical form structure feature, y voice feature.
In the above modular music database, the classification process of the label classification unit further includes:

Step 10: preset the ordering rules for materials under labels of the same difficulty level;

Step 11: compare and sort the training materials that pass the Step 8 verification library according to the ordering rules preset in Step 10;

Step 12: store the training materials sorted in Step 11 into the database.

In the above modular music database, the classification process of the label classification unit further includes:

Step 13: apply unified formatting to the materials in the Step 12 database according to each module database's standard score presentation format, generating each module's special library;

Step 14: convert the scores in the special libraries of Step 13 into interactive tasks;

Step 15: the user performs the interactive tasks;

Step 16: collect tracking data and feedback from user training;

Step 17: according to the user data collected in Step 16, continuously optimize the difficulty ordering under the second-level difficulty labels, and return to Step 12 to update the database ordering.
In the above modular music teaching database, the special training library includes:

a rhythm library for training learners to apply note durations and rhythm patterns in different time signatures;

a sight-singing library for training learners' score reading and intonation in different clefs;

a sight-reading library for training learners' ability to read and play at sight;

a listening library for training learners' inner hearing and aural discrimination; and

a technique library for training learners' basic finger technique and playing technique.
In the above modular music teaching database, the specific implementation of the rhythm library includes:

Step 1: define difficulty grading and difficulty ordering by extracting features, the features including: a time signature feature, b rhythm pattern feature, c rest feature, d tie feature, e bar count feature, f voice feature;

Step 2: according to materials of each difficulty level, take the existing sight-reading materials of each level, remove the pitches, and use machine learning to learn difficulty grading rules for the features preset in Step 1 and their combinations, generating first-level difficulty labels;

Step 3: according to the progression rules of rhythm learning, define first-level and second-level difficulty labels for the features and feature combinations preset in Step 1;

Step 4: according to the second-level difficulty labels generated in Step 3, select the corresponding features and their combinations, arrange them in random combinations, and automatically generate training materials;

Step 5: according to the second-level difficulty labels generated in Step 3, extract qualifying features and their combinations from existing scores to automatically generate candidate materials;

Step 6: unify the format of the candidate materials generated in Step 5 according to the format requirements of the rhythm training library, to generate training materials;

Step 7: compare the training materials generated in Steps 4 and 6 with the first-level difficulty labels defined in Step 2, and verify whether an inclusion relationship holds;

Step 8: if the result of Step 7 is non-inclusion, place the training material in the verification library, and use verification to judge whether the material can be attributed to a first-level difficulty label defined in Step 2;

Step 9-1: if the judgment of Step 8 is that the material can be attributed, feed it into the machine learning process of Step 2 to optimize the machine's definition of the first-level difficulty labels;

Step 9-2: if the judgment of Step 8 is that the material cannot be attributed, adjust and refine the manually defined difficulty labels of Step 3 to make them consistent with the machine's judgment.
In the above modular music teaching database, the specific implementation of the rhythm library further includes:

Step 10: preset the ordering rules for materials under labels of the same difficulty level;

Step 11: compare and sort the training materials that pass the Step 8 verification library according to the ordering rules preset in Step 10;

Step 12: store the training materials sorted in Step 11 into the rhythm library.

In the above modular music database, the specific implementation of the rhythm library further includes:

Step 13: apply unified formatting to the materials stored in Step 12 according to the special training library's standard score presentation format, generating an exercise bank for special rhythm training;

Step 14: convert the scores in the exercise bank of Step 13 into interactive rhythm training tasks;

Step 15: the user performs the tasks;

Step 16: collect tracking data and feedback from user training;

Step 17: according to the user data collected in Step 16, continuously optimize the difficulty ordering under the second-level difficulty labels, and return to Step 12 to update the ordering in the rhythm library.
In the above modular music database, the sight-singing library includes a pitch and intonation training sub-library and a single-voice melody sight-singing training sub-library. The materials of the pitch and intonation training sub-library have six features: a pitch feature, b interval feature, c range feature, d note count feature, e clef feature, f key feature. The materials of the single-voice melody sight-singing training sub-library have thirteen features: a interval feature, b range feature, c note duration feature, d clef feature, e time signature feature, f rhythm pattern feature, g rest feature, h tie feature, i key feature, j bar count feature, k musical term feature, l musical symbol feature, m musical pattern feature.

In the above modular music teaching database, the materials in the sight-reading library have 23 features: a interval feature, b range feature, c note duration feature, d clef feature, e time signature feature, f rhythm pattern feature, g rest feature, h tie feature, i key feature, j bar count feature, k musical term feature, l musical symbol feature, m melodic-line progression (musical pattern feature), n chord feature, o hand position feature, p fingering feature, q accidental feature, r articulation feature, s phrase structure feature, t tempo feature, u accompaniment texture, v pedal, w ornament.
In the above modular music teaching database, the listening library includes:

a listening training sub-library with six features: a pitch, b range, c clef, d chord, e interval, f key;

a tonality identification sub-library with five features: a chord, b interval, c key feature, d bar count feature, e chord feature;

a rhythm training sub-library with five features: a rhythm pattern feature, b rest feature, c tie feature, d bar count feature, e time signature feature;

a beat training sub-library with five features: a rhythm pattern feature, b rest feature, c tie feature, d bar count feature, e time signature feature;

a melody identification sub-library with five features: a rhythm pattern feature, b rest feature, c tie feature, d bar count feature, e pitch feature; and

a melody analysis sub-library with eighteen features: a dynamics, b articulation, c tempo, d key, e period, f accompaniment texture, g phrase structure, h range, i time signature, j melodic-line progression (musical pattern feature), k musical symbol, l musical term, m rhythm pattern, n pedal, o ornament, p scale and arpeggio, q cadence, r bar count.
In the above modular music database, the specific implementation of the technique library includes the following steps:

Step 1: define difficulty grading and difficulty ordering;

Step 2: automatically generate teaching materials and store them;

Step 3: convert the content into technique training software; and

Step 4: record user operation data, and re-optimize the difficulty ordering of the materials according to the user operation data.

Therefore, the modular music database provided by the present application can improve the accuracy of label classification through the interaction between its special training library and the classified score library.
The above and other objects, features, and advantages of the present application will become more apparent from the more detailed description of its embodiments in conjunction with the accompanying drawings. The drawings provide a further understanding of the embodiments of the present application and form a part of the specification; together with the embodiments they serve to explain the present application and do not limit it. In the drawings, like reference numerals generally denote like components or steps.
FIG. 1 illustrates a schematic flowchart of beat labeling in the score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the beat labeling subunit in the score analysis and labeling system according to an embodiment of the present application.

FIG. 2A illustrates examples of basic rhythm patterns in the score analysis and labeling method/system according to an embodiment of the present application.

FIG. 2B illustrates examples of doubled rhythm patterns in the score analysis and labeling method/system according to an embodiment of the present application.

FIG. 2C illustrates examples of subdivided rhythm patterns in the score analysis and labeling method/system according to an embodiment of the present application.

FIG. 3 illustrates a schematic flowchart of rhythm pattern labeling in the score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the rhythm pattern analysis and labeling subunit in the score analysis and labeling system according to an embodiment of the present application.

FIG. 4 illustrates a schematic diagram of intervals of various degrees in the score analysis and labeling method/system according to an embodiment of the present application.

FIG. 5 illustrates a schematic diagram of the labeling of thirds and sixths within a bar in the score analysis and labeling method/system according to an embodiment of the present application.

FIG. 6 illustrates a flowchart of musical pattern labeling in the score analysis and labeling method according to an embodiment of the present application, which is also a flowchart of the operation of the musical pattern analysis and labeling subunit in the score analysis and labeling system according to an embodiment of the present application.

FIG. 7 illustrates a flowchart of the score analysis and labeling method according to an embodiment of the present application.

FIG. 8 illustrates a schematic flowchart of the scale labeling process in the score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the scale labeling subunit in the score analysis and labeling system according to an embodiment of the present application.

FIG. 9 illustrates a schematic flowchart of the chord and broken-chord labeling process in the score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the chord labeling subunit in the score analysis and labeling system according to an embodiment of the present application.

FIG. 10 illustrates a schematic flowchart of mode labeling in the score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the mode labeling subunit in the score analysis and labeling system according to an embodiment of the present application.

FIG. 11 illustrates a schematic flowchart of musical section marking in the score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the musical section labeling subunit in the score analysis and labeling system according to an embodiment of the present application.

FIG. 12 illustrates a block diagram of the score analysis and labeling apparatus according to an embodiment of the present application.

FIG. 13 illustrates a block diagram of the electronic device according to an embodiment of the present application.

FIG. 14 illustrates a schematic block diagram of the score analysis and labeling system according to an embodiment of the present application.

FIG. 15 illustrates a schematic diagram of the overall architecture of the score analysis and labeling system according to an embodiment of the present application.

FIG. 16 illustrates a block diagram of an example of the modular music database according to an embodiment of the present application.

FIG. 17 illustrates a block diagram of an example of the classified score library of the modular music database according to an embodiment of the present application.

FIG. 18 illustrates label features of the classified score library of the modular music database according to an embodiment of the present application.

FIG. 19 illustrates a flowchart of the label classification process of the modular music database according to an embodiment of the present application.

FIG. 20 illustrates a block diagram of an example of the special training library of the modular music database according to an embodiment of the present application.

FIG. 21 illustrates examples of difficulty labels in the rhythm library of the modular music database according to an embodiment of the present application.

FIG. 22 illustrates the first application scenario of the pitch and intonation training sub-library of the modular music database according to an embodiment of the present application, namely the score application scenario.

FIG. 23 illustrates the second application scenario of the pitch and intonation training sub-library of the modular music database according to an embodiment of the present application, namely the sight-singing software application scenario.

FIG. 24 illustrates the relationship between the modular music database and the score analysis and labeling method/system.
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described here.

Overview of the Application

At present, the digitization of music on computers is relatively widespread, including digital music sequences (MIDI) and the commonly used MusicXML electronic scores. Here, an electronic score is a file that stores the engraving information of a score on a computer. Electronic scores come in many kinds and huge numbers, including oVe, gtp, mjp, and so on; that is, the various existing electronic scores actually contain a large amount of valuable musical information. However, existing electronic scores contain only the engraving information of the piece, which can only be used for browsing the score and cannot be applied to more professional score analysis.

Therefore, an object of the present application is to provide a score analysis and labeling method/system that can automatically analyze an electronic score, such as a MusicXML score, and mark the corresponding musical elements on the staff, satisfying performers' needs for score reading and performance.

Specifically, the score analysis and labeling method/system of the present application applies first-principles thinking, tracing and decomposing a piece according to the characteristics of music itself and extracting and analyzing various features of the electronic score from the different analytical angles of score analysis. In the present application, the features include the musical elements described above and their derivatives, such as pitch, interval, range, melodic line, rhythm, meter, scale, arpeggio, chord, harmony, tonality, fingering, musical structure analysis, musical terms, musical symbols, dynamics, tempo, creation background, author, and genre/theme.

Moreover, the score analysis and labeling method/system of the present application uses the logical relationships among musical elements, automatically analyzing and labeling the score through the correlations among the extracted features, so that the various musical elements on the score can be labeled accurately and automatically without manual participation, ensuring labeling accuracy while improving user convenience.
FIG. 14 illustrates a schematic block diagram of the score analysis and labeling system according to an embodiment of the present application. As shown in FIG. 14, the score analysis and labeling system 100 includes: a beat and rhythm pattern analysis and labeling unit 110 for labeling the time signature and beats of an electronic score and performing rhythm pattern analysis and labeling; a pitch, interval, and musical pattern analysis and labeling unit 120 for labeling the pitches and intervals of the electronic score and performing musical pattern analysis and labeling; a scale, chord, and arpeggio labeling unit 130 for labeling the scales, chords, and arpeggios in the electronic score; a key signature, accidental, and mode labeling unit 140 for labeling the key signature, accidentals, and mode of the electronic score; a special fingering labeling unit 150 for labeling the special fingerings of the electronic score; a section and phrase labeling unit 160 for labeling the sections and phrases of the electronic score; a musical term and symbol labeling unit 170 for labeling the musical terms and symbols of the electronic score; and a knowledge graph unit 180 for labeling the musical knowledge information associated with the electronic score.

Hereinafter, each part of the score analysis and labeling method and each unit of the system of the present application will be described in detail.
Beat and Rhythm Analysis and Labeling

In an embodiment of the present application, in the above score analysis and labeling system, the beat and rhythm pattern analysis and labeling unit includes: a time signature labeling subunit for labeling the time signature of the electronic score; a beat labeling subunit for labeling the beats of the electronic score based on the labeled time signature; and a rhythm pattern analysis and labeling subunit for performing rhythm pattern analysis and labeling on the electronic score based on the labeled beats.

In an embodiment of the present application, time signature labeling marks the time signature in the electronic score; specifically, the time signature can be extracted from the electronic score, for example read from a MusicXML score, such as 3/4, 2/4, 4/4, or 3/8.

FIG. 2 illustrates a schematic diagram of the numeric labeling of beats in the score analysis and labeling method/system according to an embodiment of the present application. Other numeric labeling schemes can also be used; the core idea is to calculate the proportional time values of the notes contained in one beat, divide a line segment equally according to the calculated proportions, break the segment between notes, and combine digits with the segments. Note that when a note lasts more than one beat, the positions of the digits and dashes must be considered. As shown in FIG. 2, the position and length of the segments follow an equal-division rule.
FIG. 1 illustrates a schematic flowchart of beat labeling in the score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the beat labeling subunit in the score analysis and labeling system according to an embodiment of the present application.

As shown in FIG. 1, the time value of each beat is first determined according to the piece's time signature; then the notes are read bar by bar and their time values accumulated. If the accumulated time value equals the piece's per-beat time value, beat labeling begins; otherwise reading and accumulation continue.

In beat labeling, the numeric method described above is used: a line is drawn under the notes and the beat number is marked; it is then determined whether the accumulated beat count of the current bar equals the piece's beats per bar; if so, it is determined whether all bars have been traversed; otherwise underlining and beat marking continue.

Next, if not all bars have been traversed, the process enters the next bar and beat labeling continues; once all bars have been traversed, the flow ends.

After beat labeling is completed, rhythm pattern analysis and labeling can further be performed.
FIG. 2A illustrates examples of the basic rhythm patterns in the score analysis and labeling method/system according to an embodiment of the present application. As shown in FIG. 2A, taking the quarter note as one beat, nine basic rhythm patterns marked with the V-shaped notation are shown. As shown in FIG. 2B, doubling with the half note as one beat: for the nine basic rhythm patterns of FIG. 2A (quarter note as one beat), doubling the time value of each pattern yields nine new patterns, called, for example, "half-note doubled rhythm patterns". There can also be doubling with the whole note as one beat: quadrupling the time value of each of the nine basic patterns yields nine patterns, called, for example, "whole-note doubled rhythm patterns". FIG. 2B illustrates examples of such doubled rhythm patterns.

As shown in FIG. 2C, subdivision with the eighth note as one beat: halving the time value of each of the nine basic patterns (quarter note as one beat) yields nine patterns, called, for example, "eighth-note subdivided rhythm patterns". There can also be subdivision with the sixteenth note as one beat: taking one quarter of the time value of each of the nine basic patterns yields nine patterns, called, for example, "sixteenth-note subdivided rhythm patterns". FIG. 2C illustrates examples of such subdivided rhythm patterns.

Therefore, in an embodiment of the present application, the nine basic rhythm patterns together with their doubled and subdivided variants give 45 rhythm patterns in total. Supplemented with the dotted quarter note, dotted eighth note, dotted half note, and thirty-second note as individual notes, there are 49 rhythm patterns in all.

Here, in an embodiment of the present application, the basic rhythm patterns and their doubled and subdivided variants can take the read time signature as their basis; that is, the time signature is the unit datum for doubling and subdividing rhythm patterns. For example, the basic, doubled, and subdivided patterns above are based on a time signature that sets the quarter note as one beat; if the time signature sets the half note or the eighth note as one beat, the basic, doubled, and subdivided patterns can be determined analogously.

Therefore, in the above score analysis and labeling system, the rhythm pattern analysis and labeling subunit is configured to: determine the preset basic rhythm patterns, and the doubled and subdivided variants of the basic rhythm patterns, based on the rhythm combination characteristics of the electronic score; and perform rhythm pattern analysis on the electronic score by comparison against the basic, doubled, and subdivided rhythm patterns.
FIG. 3 illustrates a schematic flowchart of rhythm pattern labeling in the score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the rhythm pattern analysis and labeling subunit in the score analysis and labeling system according to an embodiment of the present application.

As shown in FIG. 3, the rhythm patterns used for comparison are first determined, for example the 49 patterns described above. Then all note sequences of the piece are read from the electronic score, for example a MusicXML file, and the current note is taken as the target note.

Then, the time value of the current note is compared with the time value of the first note of each of the 49 preset rhythm patterns; if one or more patterns match, the next note is read and compared with the second note of the matched patterns, and so on, until all notes have been traversed and all note sequences matching the 49 rhythm patterns have been found.
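As one possible realization, this matching loop can be sketched as follows; it is a minimal sketch only, in which the nine placeholder base patterns and the encoding of durations in quarter-note units are assumptions.

# A minimal sketch: patterns are duration lists (quarter-note units); the
# doubled/subdivided variants are built by scaling; a match is a run of
# note durations equal to one pattern. The base patterns are placeholders.
BASE_PATTERNS = [[1.0], [0.5, 0.5], [0.25, 0.25, 0.25, 0.25],
                 [0.75, 0.25], [0.25, 0.75], [0.5, 0.25, 0.25],
                 [0.25, 0.25, 0.5], [0.25, 0.5, 0.25], [1.5, 0.5]]

def all_patterns():
    out = []
    for p in BASE_PATTERNS:
        # base, doubled (x2, x4), and subdivided (x1/2, x1/4): 45 in total
        for scale in (1.0, 2.0, 4.0, 0.5, 0.25):
            out.append([d * scale for d in p])
    return out

def match_patterns(durations):
    matches = []
    for start in range(len(durations)):
        for pat in all_patterns():
            end = start + len(pat)
            if durations[start:end] == pat:
                matches.append((start, end, pat))
    return matches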
In addition, in an embodiment of the present application, if the piece has not yet been beat-labeled, the corresponding beat labels can also be drawn under the matched notes according to the matched rhythm patterns.

Here, if several matched basic rhythm patterns are adjacent, they are called a combined rhythm pattern. Also, the subdivided and doubled variants of a rhythm pattern can be regarded as similar rhythm patterns; identical or similar patterns can be labeled selectively, or displayed at the same time.

That is, in the score analysis and labeling system according to an embodiment of the present application, the rhythm pattern analysis and labeling subunit is configured to perform rhythm pattern labeling with the accumulated note time values of each beat as one unit, including: defining the start and end coordinates of a rhythm pattern mark; defining the position coordinates of multiple rhythm pattern marks when one note spans multiple beats; and, within one unit of rhythm pattern mark, calculating the proportional time values of the notes contained in the beat, dividing the mark equally according to the calculated proportions, and breaking the mark at each note.
Pitch, Interval, and Musical Pattern Analysis and Labeling Unit

In the above score analysis and labeling system, the pitch, interval, and musical pattern analysis and labeling unit includes: a pitch labeling subunit for labeling the pitches of the electronic score; an interval labeling subunit for labeling the intervals of the electronic score; and a musical pattern analysis and labeling subunit for analyzing and labeling musical patterns based on the labeled pitches, intervals, and rhythm patterns.

In an embodiment of the present application, interval labeling includes intra-bar interval labeling and inter-bar interval labeling.

First, intra-bar interval labeling marks the interval relationship between two adjacent notes. Specifically, a line can be drawn between the two adjacent notes and the interval marked beside it with a digit. In an embodiment of the present application, intervals of a chosen degree can be labeled, for example thirds or fourths; in that case, if all fourths in the electronic score are selected for labeling, all adjacent notes a fourth apart are connected with a segment and marked with the digit 4. Specifically, one or more of thirds, fourths, fifths, sixths, sevenths, and octaves can be labeled, as shown in FIG. 4. Here, FIG. 4 illustrates a schematic diagram of intervals of various degrees in the score analysis and labeling method/system according to an embodiment of the present application.
The specific implementation of labeling intervals of a given degree is as follows:

(1) Read all note sequences of the piece from the electronic score, for example a MusicXML file.

(2) Traverse all notes; take the absolute value of the difference between the pitch of the current note and that of the next note to obtain the interval number of the two adjacent notes.

(3) Determine the interval relationship to be marked, which can, for example, be specified by the user, and obtain all adjacent notes satisfying that interval relationship.

(4) For each pair of notes satisfying the interval relationship, draw a line from the point of maximal X of the first note's notehead to the point of minimal X of the second note's notehead.

(5) Take the maximal Y coordinate of the current note's stem and mark the interval relationship at a position offset a predetermined number of pixels above it, for example 5 pixels.

The labeling result is shown, for example, in FIG. 5. Here, FIG. 5 illustrates a schematic diagram of the labeling of thirds and sixths within a bar in the score analysis and labeling method/system according to an embodiment of the present application.
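A minimal sketch of steps (1)-(5), reduced to the interval computation itself, follows; the diatonic step encoding is an assumption, and the MusicXML reading and drawing layers are taken as given elsewhere.

# A minimal sketch: the interval number is derived from diatonic step
# positions; interval 1 = unison, 4 = fourth, and so on.
STEP_INDEX = {"C": 0, "D": 1, "E": 2, "F": 3, "G": 4, "A": 5, "B": 6}

def interval_number(note_a, note_b):
    # A note is (step, octave), e.g., ("C", 4).
    pos_a = note_a[1] * 7 + STEP_INDEX[note_a[0]]
    pos_b = note_b[1] * 7 + STEP_INDEX[note_b[0]]
    return abs(pos_a - pos_b) + 1

def label_intervals(notes, wanted=4):
    # Return index pairs of adjacent notes forming the chosen interval.
    return [(i, i + 1) for i in range(len(notes) - 1)
            if interval_number(notes[i], notes[i + 1]) == wanted]

print(label_intervals([("C", 4), ("F", 4), ("G", 4)], wanted=4))  # [(0, 1)]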
Musical pattern analysis and labeling means finding, according to the contour characteristics of the melody, all identical, displaced, mirrored, and similar musical patterns and marking them in the electronic score, for example with segments of different forms and colors under the notes of each musical pattern. The minimum number of notes a musical pattern contains can also be specified; for example, with a minimum note count of 4, a match requires at least 4 consecutive notes forming an identical, displaced, mirrored, or similar musical pattern.

The specific implementation includes:

(1) Read all note sequences of the piece from the electronic score, for example a MusicXML file.

(2) Specify the minimum note count, for example 4 via a received user instruction. In practice, the minimum note count ranges from 4 at the least to the total bar count / 4 at the most, which equals M bars (since one section contains at least two phrases and one phrase contains at least two musical patterns); the maximum of the note counts of 4 groups of M bars is taken as the upper bound of the range, i.e., the maximum number of notes that a single phrase may contain in the phrase analysis of 6.2.

(3) Traverse all notes and group them with the minimum note count per group: if the minimum note count is 4, notes 1, 2, 3, 4 form a group, notes 2, 3, 4, 5 form a group, notes 3, 4, 5, 6 form a group, and so on. The last few notes that cannot fill a group are not grouped.

(4) For each group, record the pitches, time values, and interval relationships of its notes; that is, each note group carries a data record containing the pitch and time value of each note and the interval relationships among them. For example, a group's data described in JSON (taking a minimum note count of 4 as an example) is:
{
  "noteGroup1": {
    "noteName": ["C", "D", "E", "G"],
    "noteLength": ["quarter", "half", "whole", "eighth"],
    "noteInterval": [2, 2, 3]
  }
}
Here, "noteName" is the pitch labeling result, "noteLength" is the beat and rhythm-pattern labeling result, and "noteInterval" is the interval labeling result.
(5) Select the musical patterns with identical melodic contours across the whole piece: compare each note group with all other note groups (for the first group, comparison can start from the fifth group, i.e. the group of notes 5-6-7-8). If the data of the group under consideration (the reference group) equals the data of the compared group, the two groups are completely identical. Next, the group adjacent to the reference group and the group adjacent to the compared group are compared; if their data are also identical, the 8 consecutive notes starting from the first note of the reference group are identical to the 8 consecutive notes starting from the first note of the compared group, and so on. If they are not equal, it is checked whether the first note of that adjacent group equals the first note of the compared group; if not, the reference group matches only the compared group; if so, the second note is checked, and so on. In this way all musical patterns identical to the reference group are obtained.

(6) Determine the displaced-contour musical patterns of the whole piece: using the comparison of (5), compare only the interval relationships of each note group; that is, find the groups whose noteName differs but whose noteLength and noteInterval are the same.

(7) Determine the mirrored musical patterns of the whole piece: compare each note group pairwise with all other groups (for the first group, starting from the fifth group, i.e. notes 5-6-7-8). Take the midpoint between the highest and lowest notes of the first group as the axis of symmetry, take the interval of each note of this group to the axis, and compare each in turn with the notes of the second group. If the interval of each corresponding note of the second group to this axis is the opposite number of the interval (the noteInterval value) of the corresponding note of the first group to the axis, the first and second groups are mirrored musical patterns.

For example, if the lowest note of the first group is its first note and the highest is its second, their midline, say the third staff line, is the axis of symmetry. If the first note lies a fifth below the axis, its mirror image lies a fifth above the axis (as a difference, the opposite number), i.e. on the fifth line, and so on; the second note group then forms the mirror image of the first.

(8) Determine the similar musical patterns of the whole piece: using the comparison of (5), compare the note data of each group; two stretches of music are deemed similar if they satisfy at least one preset condition. For example, in an embodiment of the present application, the preset conditions can be:

1. similar musical patterns satisfy that the rhythm patterns of the two groups differ but the pitches and intervals are the same;
2. similar musical patterns satisfy that the rhythm patterns of the two groups differ and the pitches differ while the intervals are the same;
3. similar musical patterns satisfy that the rhythm patterns of the two groups are the same or stand in a doubled or subdivided relationship, with more than 50% of the pitches or intervals the same;

(9) For double-note lines: if both compared note groups are double-note lines, the upper voices are compared with each other and the lower voices with each other. If one group is a double-note line and the other a single-note line, the note in the double-note line that matches the corresponding note of the single-note line is taken as the principal note of the double-note line for musical-pattern labeling.

(10) A phrase consists of two or more musical patterns. To achieve more accurate and finer-grained musical-pattern recognition, the results of steps (5)-(9) are compared, pattern by pattern, with the phrase recognition results described below to examine their containment relationship; if a musical pattern coincides in length with the phrase containing it, that pattern is split further into a narrower range.

(11) Identical, displaced, mirrored and similar musical patterns can be connected below their notes with line segments of different styles, for example different colors, to mark the musical patterns on the score.

(12) When different colors are used to mark identical, displaced, mirrored and similar patterns, the rainbow ordering (red, orange, yellow, green, cyan, blue, purple) can be applied group by group: the first group of identical patterns is connected in red, the second in orange, and so on. Once all seven colors have been used, their hex values are each divided by 2 and the next round of labeling begins, and so on.

(13) To draw the marks described in (11), the lower-left corner of the first note of a musical pattern is taken as the start point and lines are drawn to the lower-right corner of each note in the pattern, thereby marking that stretch of melody. Identical patterns are connected with solid lines, similar patterns with dashed lines, displaced patterns with alternating long-short dashed lines, and mirrored patterns with dot-and-dash lines. If the note preceding a pattern is the last note of another pattern, the lower-right corner of that preceding note is connected to the lower-left corner of the first note of the current pattern, forming one continuous long pattern.

(14) After the identical, displaced, mirrored and similar patterns have been recognized automatically and the connecting lines drawn, the user can still drag the line below a pattern forward or backward to cover more notes.
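As announced above the list, the following Python sketch consolidates steps (2)-(7) under simplifying assumptions that are not prescribed by the source: pitches are plain semitone numbers, and the mirror axis is taken as the midpoint of the reference group's pitch range rather than a literal staff line. All function and field names are illustrative.

```python
# Sliding-window grouping plus same/displaced/mirror classification.
def make_groups(notes, min_notes=4):
    """notes: list of (pitch, duration); trailing notes that cannot start a
    full window never form a group, matching step (3)."""
    groups = []
    for i in range(len(notes) - min_notes + 1):
        window = notes[i:i + min_notes]
        pitches = [p for p, _ in window]
        durations = [d for _, d in window]
        intervals = [pitches[k + 1] - pitches[k] for k in range(len(pitches) - 1)]
        groups.append({"pitch": pitches, "dur": durations, "itv": intervals})
    return groups

def classify(a, b):
    """Classify group b relative to reference group a (steps (5)-(7))."""
    if a == b:
        return "same"
    if a["dur"] == b["dur"] and a["itv"] == b["itv"] and a["pitch"] != b["pitch"]:
        return "displaced"   # same contour and rhythm, transposed
    # mirror: distances to the axis (midpoint of a's range) are opposite numbers
    axis = (max(a["pitch"]) + min(a["pitch"])) / 2
    da = [p - axis for p in a["pitch"]]
    db = [p - axis for p in b["pitch"]]
    if all(abs(x + y) < 1e-9 for x, y in zip(da, db)):
        return "mirror"
    return None

notes = [(60, 1), (64, 1), (67, 1), (72, 1),   # reference group
         (62, 1), (66, 1), (69, 1), (74, 1)]   # same shape, 2 semitones up
g = make_groups(notes)
print(classify(g[0], g[4]))  # -> 'displaced'
```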
In an embodiment of the present application, musical-pattern labeling and rhythm-pattern labeling are logically related. It can be seen that: (1) an identical musical pattern is a sufficient but not necessary condition for an identical rhythm pattern; (2) similar musical patterns may have different rhythm patterns but the same pitches and intervals; (3) displaced musical patterns have the same rhythm pattern and the same intervals but different pitches; and (4) mirrored musical patterns are those in which the intervals of each pair of corresponding notes to the axis of symmetry (the midpoint between the highest and lowest notes of the group) are opposite numbers.

In an embodiment of the present application, musical-pattern labeling and phrase labeling are also logically related. In terms of containment, the structure of a piece is such that one passage comprises at least two phrases and one phrase comprises at least two musical patterns. Therefore, during recognition and analysis, the program compares the recognition results for musical patterns, phrases and passages against each other to analyze the structure of the piece more precisely.
FIG. 6 illustrates a flowchart of musical-pattern labeling in the score analysis and labeling method according to an embodiment of the present application, which is also a flowchart of the operation of the musical-pattern analysis and labeling subunit in the score analysis and labeling system according to an embodiment of the present application.

As shown in FIG. 6, the notes are first grouped, for example in groups of 4 according to a user instruction, and the data of each group is generated. Next, a reference note group is taken, which can likewise be specified by the user or found by automatically recognizing a typical musical pattern, and compared with the adjacent next group to determine whether the musical patterns are identical, displaced or mirrored. On a match, it is determined whether comparison for the current reference group is complete; if so, the process proceeds to the labeling stage and another reference group is taken; if not, comparison with the adjacent next group continues.

That is, the reference group is matched against a compared group; on a successful match, it is checked whether the group following the reference group matches the group following the compared group; if it does, they can be joined into a longer musical pattern. (For example, if notes 1, 2, 3, 4 match notes 9, 10, 11, 12 and notes 5, 6, 7, 8 match notes 13, 14, 15, 16, then notes 1 through 8 should be joined into one pattern.) Matching continues in this way, and the maximum extent that can be joined does not exceed the total number of measures divided by 4; note that the bound is the total measure count divided by 4, not a full passage.

Continuing with FIG. 6, in the labeling stage, a connecting line is drawn below each note of the matched segments (identical, displaced, mirrored), and it is determined whether the preceding note belongs to the preceding segment; if so, the segment is connected to the preceding one; if not, it is determined whether all segments have been connected. If all segments are connected, the process ends; otherwise labeling continues.
In summary, the technical solution of the score analysis and labeling method according to an embodiment of the present application is as follows.

FIG. 7 illustrates a flowchart of the score analysis and labeling method according to an embodiment of the present application. As shown in FIG. 7, the method includes: S110, determining the time signature of an electronic musical score; S120, performing beat labeling and rhythm-pattern analysis and labeling on the electronic musical score based on the determined time signature; S130, performing pitch and interval labeling on the electronic musical score; and S140, analyzing and labeling musical patterns based on the labeling results of the pitches, the intervals and the rhythm patterns.

In the above method, performing rhythm-pattern analysis on the electronic musical score includes: determining, based on the rhythm combination characteristics of the electronic musical score, preset basic rhythm patterns as well as their doubled and subdivided rhythm patterns; and performing rhythm-pattern analysis on the electronic musical score by comparison with the basic, doubled and subdivided rhythm patterns.

In the above method, performing rhythm-pattern labeling on the electronic musical score includes labeling with the note durations accumulated over each beat as one unit, including: defining the start and end coordinates of a rhythm-pattern mark; defining the position coordinates of multiple rhythm-pattern marks when one note spans multiple beats; and, within one unit of the rhythm-pattern mark, computing the proportional durations of the notes contained in the beat, dividing the mark equally according to the computed proportions, and breaking the mark at each note.

In the above method, performing interval labeling on the electronic musical score includes performing intra-measure interval labeling and inter-measure interval labeling on the electronic musical score.

In the above method, analyzing and labeling musical patterns based on the labeling results of the pitches, the intervals and the rhythm patterns includes: determining a reference note group with a predetermined number of notes, the reference note group constituting a reference musical pattern; and traversing the other note groups with the predetermined number of notes in the electronic musical score to determine, based on the pitches, durations and intervals of each group, the musical patterns that are identical to, displaced from, mirrored from or similar to the reference musical pattern.

The above method includes at least one of the following: identical musical patterns satisfy that the two note groups have the same rhythm pattern, the same pitches and the same intervals; similar musical patterns satisfy that the rhythm patterns of the two groups differ but the pitches and intervals are the same; similar musical patterns satisfy that the rhythm patterns differ and the pitches differ while the intervals are the same; similar musical patterns satisfy that the rhythm patterns are the same or stand in a doubled or subdivided relationship, with more than 50% of the pitches or intervals the same; displaced musical patterns satisfy that the rhythm patterns and intervals are the same while the pitches differ; and mirrored musical patterns satisfy that the intervals of each pair of corresponding notes to the axis of symmetry (the midpoint between the highest and lowest notes of the group) are opposite numbers.

Also, in the above score analysis and labeling system, the musical-pattern analysis and labeling subunit is configured for at least one of the following: determining two note groups as identical musical patterns based on their having the same rhythm pattern, the same pitches and the same intervals; determining two note groups as similar musical patterns based on their rhythm patterns differing while their pitches and intervals are the same; determining two note groups as similar musical patterns based on their rhythm patterns differing and their pitches differing while their intervals are the same; determining two note groups as similar musical patterns based on their rhythm patterns being the same or standing in a doubled or subdivided relationship, with more than 50% of their pitches or intervals the same; determining two note groups as displaced musical patterns based on their rhythm patterns and intervals being the same while their pitches differ; and determining two note groups as mirrored musical patterns based on the intervals of each pair of corresponding notes to the axis of symmetry (the midpoint between the highest and lowest notes of the group) being opposite numbers.

In addition, in the above method/system, if the notes on the score include double notes, extracting the notes to compare covers at least one of the following cases: if both compared note groups are double notes, the upper notes (higher in pitch) of the two groups are compared with each other, the lower notes (lower in pitch) are compared with each other, and the upper and lower notes of the two groups are then cross-compared; and if one of the compared groups is double notes and the other single notes, the single-note group is taken as the reference group, and the upper notes and lower notes of the double-note group are each compared against the single-note group.
Scale, Chord and Arpeggio Labeling Unit
Hereinafter, scale, arpeggio and chord labeling in the score analysis and labeling method/system according to an embodiment of the present application will be described.

In the above score analysis and labeling system, the scale, chord and arpeggio labeling unit includes: a scale labeling subunit for performing scale labeling on the electronic musical score; a chord labeling subunit for performing chord labeling on the electronic musical score; and an arpeggio labeling subunit for performing arpeggio labeling on the electronic musical score.

Scale labeling is presented as follows: consecutive scale notes are connected with a line segment below the staff, and the scale name is written next to it (e.g., C+ for the C major scale, ch- for the C harmonic minor scale, cx- for the C melodic minor scale).
The implementation is as follows:

1) Preset the note data of 194 scales for single-hand recognition (i.e., recognition within the treble staff or the bass staff alone): 72 scales across 6 tonality classes (natural major, harmonic major, melodic major, natural minor, harmonic minor, melodic minor); 2 chromatic and octave-chromatic scales; 24 double-note third scales; 24 double-note sixth scales; and 72 octave scales. Also preset the note data of 146 scales for two-hand recognition (recognition across both staves simultaneously): the 6 tonality classes in parallel or contrary motion between the hands, two-hand third scales, sixth scales, chromatic third scales and chromatic sixth scales.

2) From the electronic musical score, for example a MusicXML file, read all note sequences of the right hand (usually the treble staff) and of the left hand (usually the bass staff), and compare them against the scale data of 1).

3) Traverse all notes from beginning to end. If the current note belongs to one of the preset scales, examine its next note, and so on. Once 5 or more consecutive notes all lie within a preset scale, draw a connecting line below these notes and list next to them the candidate scales for the user to select and label.

4) Combine the mode recognition result to order the candidates of 3) by recommendation priority, i.e., rank first the candidate whose tonality agrees with the mode recognition result. For example, if the tonality of a piece is judged to be A minor, and the five consecutive notes A-B-C-D-E admit both C major and A minor as scale candidates, A minor is offered as the first choice for the user to select.

5) From the electronic musical score, for example a MusicXML file, read all note sequences of both hands and compare them simultaneously against the 146 two-hand scale data of 1).

6) Traverse all notes from beginning to end. If the notes currently played by both hands belong to one of the preset scales of 1), examine the next pair of simultaneously played notes, and so on. Once 5 or more consecutive pairs all lie within a preset scale, draw a connecting line below these notes and list next to them the candidate scales for the user to select and label.

7) Combine the mode recognition result to order the candidates of 6) by recommendation priority, i.e., rank first the candidate whose tonality agrees with the mode recognition result.

8) When 7) and 4) both produce candidates for the user to select, only the result of 7) is recommended; if the result of 7) or 4) is unique, it is labeled directly without requiring the user to judge and select.

Here, FIG. 8 illustrates a schematic flowchart of the scale labeling process in the score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the scale labeling subunit in the score analysis and labeling system according to an embodiment of the present application.
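The single-hand scan of steps 2)-4) can be sketched as below. The two preset scales are toy stand-ins for the 194 of step 1), given as pitch-class sets under the naming convention above; the narrowing of the candidate set as a run extends is an implementation assumption.

```python
# Sketch: extend a run while the notes stay inside some preset scale;
# a run of 5 or more notes is reported with its candidate scale names.
PRESET_SCALES = {
    "C+":  {0, 2, 4, 5, 7, 9, 11},   # C major, as pitch classes
    "ah-": {9, 11, 0, 2, 4, 5, 8},   # A harmonic minor
}

def find_scale_runs(pitches, min_run=5):
    runs, start = [], 0
    while start < len(pitches):
        live = set(PRESET_SCALES)      # scales still compatible with the run
        end = start
        while end < len(pitches):
            pc = pitches[end] % 12
            still = {n for n in live if pc in PRESET_SCALES[n]}
            if not still:
                break
            live, end = still, end + 1
        if end - start >= min_run:
            # the UI would underline pitches[start:end] and list `live`,
            # ordered by the mode-recognition result, for the user to pick
            runs.append((start, end, sorted(live)))
        start = max(end, start + 1)
    return runs

print(find_scale_runs([57, 59, 60, 62, 64]))  # A B C D E
# -> [(0, 5, ['C+', 'ah-'])]  both candidates offered, as in step 4)
```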
Arpeggio labeling is presented as follows: consecutive arpeggio notes are connected with a line segment below the staff, and the arpeggio name is written next to it.

The implementation is as follows:

1) Preset the arpeggios for single-hand recognition: the 36 major and minor arpeggios with their inversions, and the 36 dominant-seventh/diminished-seventh chord arpeggios of the major and minor keys with their inversions. Preset the arpeggios for two-hand recognition: the 36 major and minor arpeggios with inversions in parallel or contrary motion, the 36 dominant-seventh/diminished-seventh arpeggios with inversions in parallel or contrary motion, and the 36 major and minor third arpeggios.

2) From the electronic musical score, for example a MusicXML file, read all note sequences of the right hand (usually the treble staff) and of the left hand (usually the bass staff), and compare them against the single-hand arpeggio data of 1).

3) Traverse all notes from beginning to end. If the current note belongs to one of the preset arpeggios, examine its next note, and so on. Once no fewer than 4 consecutive notes all lie within a preset arpeggio, draw a connecting line below these notes and list next to them the candidate arpeggios for the user to select and label.

4) From the electronic musical score, for example a MusicXML file, read all note sequences of both hands and compare them simultaneously against the two-hand arpeggio data of 1).

5) Traverse all notes from beginning to end. If the notes currently played by both hands belong to one of the preset two-hand arpeggios of 1), examine the next pair of simultaneously played notes, and so on. Once 4 consecutive pairs all lie within a preset arpeggio, draw a connecting line below these notes and list next to them the candidate arpeggios for the user to select and label.

6) When 5) and 3) both produce candidates for the user to select, only the result of 5) is recommended; if the result of 5) or 3) is unique, it is labeled directly without requiring the user to judge and select.
Chord and broken-chord labeling is presented as follows: a chord name is written above simultaneously played (block) chords, or a connecting line is drawn below consecutively played broken chords with the chord name, such as C or Em, written next to it.

The implementation is as follows:

1) Preset the interval relationships of the various chords; for example, the three notes of a major triad stand at a major third plus a minor third, and those of a minor triad at a minor third plus a major third.

2) Read all note sequences of the piece from the electronic musical score, for example a MusicXML file.

3) Recognize and label block chords: first find all simultaneously played sets of three or more notes (i.e., notes at the same X coordinate on the same staff), and determine whether their interval relationships satisfy those of some chord; if a chord is matched, write the abbreviated chord name above it.

4) Recognize and label broken chords: traverse all non-simultaneous notes on the same staff from beginning to end, and determine whether the notes following a target note satisfy the interval relationships of some chord with it. If a chord is matched, write the chord name above these notes.

Here, FIG. 9 illustrates a schematic flowchart of the chord and broken-chord labeling process in the score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the chord labeling subunit in the score analysis and labeling system according to an embodiment of the present application.
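Step 3) amounts to comparing the semitone steps of a bottom-to-top stack against preset patterns. The sketch below presets only two root-position qualities; as the source does for arpeggios, a full implementation would also preset the interval patterns of each inversion, and the root lookup here assumes root position.

```python
# Sketch of block-chord identification from stacked intervals.
CHORD_PATTERNS = {
    (4, 3): "major triad",   # major third + minor third, e.g. C E G
    (3, 4): "minor triad",   # minor third + major third, e.g. A C E
}

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def identify_block_chord(pitches):
    """pitches: MIDI numbers sounded simultaneously (same X coordinate)."""
    stack = sorted(pitches)
    steps = tuple(stack[i + 1] - stack[i] for i in range(len(stack) - 1))
    quality = CHORD_PATTERNS.get(steps)
    if quality is None:
        return None
    root = NAMES[stack[0] % 12]      # valid for root position only
    return f"{root} {quality}"       # rendered above the chord in the score

print(identify_block_chord([60, 64, 67]))  # -> 'C major triad'
print(identify_block_chord([57, 60, 64]))  # -> 'A minor triad'
```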
Key-Signature, Accidental and Mode Labeling Unit
Hereinafter, key-signature, accidental and mode labeling in the score analysis and labeling method/system according to an embodiment of the present application will be described. It includes key-signature labeling, accidental labeling, and mode recognition and labeling.

That is, in the above score analysis and labeling system, the key-signature, accidental and mode labeling unit includes: a key-signature labeling subunit for performing key-signature analysis and labeling on the electronic musical score; an accidental labeling subunit for performing accidental analysis and labeling on the electronic musical score; and a mode analysis and labeling subunit for performing mode analysis and labeling based on the labeled key signature, accidentals, chords and pitches.

First, the key signature is highlighted. A specific implementation is, for example: first read the key signature from the electronic musical score, for example from MusicXML, and highlight it; then identify throughout the score the notes that the key signature raises or lowers, and highlight them in the same color.

Next, accidentals and natural signs are highlighted. A specific implementation is, for example:

1) Read all note sequences from the electronic musical score, for example MusicXML, and generate the staff.

2) Identify all accidentals and natural signs on the score together with the notes they modify (i.e., the note with the smallest X coordinate after the sign and a Y coordinate equal to that of the sign), and highlight them.

3) Identify all notes within the current measure that have the same pitch as the note of 2) and come after the accidental or natural sign, and highlight them.

Then, mode recognition is performed. It is presented as follows: the mode and key of the piece are recognized and marked at the upper-left corner of the score.
The specific implementation is as follows:

1) Assign weights to the key signature, the first and last notes, the root of the final chord, the frequency of structural tones, and the chord progression, summing to 100%.

2) Set a condition threshold for judging whether the piece has one definite key and mode.

3) Read all note sequences and the key-signature information of the piece from the electronic musical score, for example a MusicXML file.

4) Read the key signature and obtain the weighted value of that key according to its weight.

5) Read and compare the first and last notes of the piece; if they are the same, obtain a weighted value for the key named by that note.

6) Determine the root of the chord at the end of the piece, and obtain a weighted value for the key named by that note.

7) Map all notes of the piece, from low to high, into one octave to form a scale; find the frequently occurring structural tones (scale degrees 1, 3, 5, 6) and obtain the weighted values corresponding to their frequency ranking.

8) Find the chords in the opening and closing phrases and determine their progression. If the progression runs tonic chord -> dominant-seventh chord -> tonic chord, obtain a weighted value for the key named by the root of that tonic chord.

9) Map all notes of the piece, from low to high, into one octave to form a scale, and examine the inflections and intervals of the sixth and seventh degrees to determine whether the piece is in a major key, harmonic minor or melodic minor.

10) Sum all the weighted values obtained; if the sum reaches the condition threshold, the key of the piece is obtained, and the mode follows from 9). The mode can, for example, be labeled at the upper-left corner of the score.

FIG. 10 illustrates a schematic flowchart of mode labeling in the score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the mode labeling subunit in the score analysis and labeling system according to an embodiment of the present application.
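The weighted vote of steps 1)-10) can be sketched as follows. The weights and threshold here are illustrative assumptions; the source assigns weights summing to 100% but does not fix their values.

```python
# Sketch of the weighted key/mode vote: each cue contributes its weight to
# the key it suggests; a key passing the threshold wins.
from collections import defaultdict

WEIGHTS = {"key_signature": 0.30, "first_last_note": 0.20,
           "final_chord_root": 0.20, "frequent_tones": 0.15,
           "cadence_progression": 0.15}
THRESHOLD = 0.60

def decide_key(cues):
    """cues: dict cue_name -> suggested key, or None if the cue is silent."""
    score = defaultdict(float)
    for cue, key in cues.items():
        if key is not None:
            score[key] += WEIGHTS[cue]
    if not score:
        return None
    best = max(score, key=score.get)
    return best if score[best] >= THRESHOLD else None  # None: no clear tonality

cues = {"key_signature": "a", "first_last_note": "a",
        "final_chord_root": "a", "frequent_tones": "C",
        "cadence_progression": None}
print(decide_key(cues))  # -> 'a' (0.70 >= 0.60); major vs. minor variant is
                         #    then refined by the 6th/7th degrees of step 9)
```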
That is, the above score analysis and labeling method further includes: performing key-signature and accidental analysis and labeling on the electronic musical score; performing scale, chord and arpeggio analysis and labeling based on the key-signature analysis result; and performing mode analysis and labeling based on the analysis results of the key signature, the accidentals, the chords and the pitches.

In the score analysis and labeling system according to an embodiment of the present application, the special-fingering labeling unit includes: a finger-stretch labeling subunit for labeling stretched fingerings in the electronic musical score; a finger-contraction labeling subunit for labeling contracted fingerings in the electronic musical score; a finger-crossing labeling subunit for labeling crossed fingerings in the electronic musical score; a repeated-note finger-substitution labeling subunit for labeling finger substitutions on repeated notes in the electronic musical score; and a hand-position-change labeling subunit for labeling hand-position changes in the electronic musical score.
Passage and Phrase Labeling Unit
Hereinafter, the analysis and marking of musical structure in the score analysis and labeling method/system according to an embodiment of the present application will be described.

In the above score analysis and labeling system, the passage and phrase labeling unit includes: a passage analysis and labeling subunit for performing passage analysis and labeling on the electronic musical score; and a phrase analysis and labeling subunit for performing phrase analysis and labeling based on the analysis and labeling results of the musical patterns, rhythm patterns, scales, chords, arpeggios, musical terms, musical symbols and passages.

First, passage marking is described. It is presented as follows: the sectional structure of the piece is recognized and marked measure by measure.

The specific implementation is as follows:

1) Read all note sequences of the piece from the electronic musical score, for example a MusicXML file.

2) Group all notes by measure. By observation, the vast majority of pieces have no more than 4 passages, so the total number of measures is divided by 4 and rounded down to obtain N, the theoretical minimum number of consecutive measures for a passage. Since phrases do not always end exactly at a measure boundary, N-1 is used as the system's minimum number of consecutive measures for a passage, M. For example, for a piece of 32 measures, 32/4 = 8 gives the theoretical minimum N; the system takes N-1, so M is 7; if the piece contains two identical/similar segments longer than 7 measures, two identical/similar passages are deemed recognized.

3) Traverse all measures, comparing the current measure I with measure I+M; if they are completely identical, compare measure I+1 with measure I+M+1, and so on, finally obtaining two segments. If both segments exceed M measures, the two passages have identical structure.

4) Set an intra-measure note similarity threshold P: if P% of the notes within two measures are the same, the two measures are considered to have similar structure.

5) Traverse all measures, comparing the current measure I with measure I+M; if more than P% of the notes are the same, compare measure I+1 with measure I+M+1, and so on, finally obtaining two segments. If both segments exceed M measures and are not completely identical, the two passages have similar structure.

6) Every passage not recognized as identical to another is recognized as an independent, non-repeating passage.

7) A passage always consists of at least two phrases.

8) The first measure of each recognized passage is marked at its upper-left corner with a shape (square, circle, triangle, etc.) combined with a letter; identical passages receive the same shape and letter. For example, the letter A with a square denotes one passage, and an identical passage is likewise marked A with a square; the next passage is marked with B and a circle, and so on.

9) Similar passages receive the same shape, with a prime added at the upper right of the letter. For example, if A with a square denotes a passage, a similar passage is marked A' with a square.

FIG. 11 illustrates a schematic flowchart of passage marking in the score analysis and labeling method according to an embodiment of the present application, which is also a schematic flowchart of the operation of the passage marking subunit in the score analysis and labeling system according to an embodiment of the present application.
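The offset-M scan of steps 2)-5) can be sketched as below. This is a simplification under stated assumptions: measures are compared as pitch sets, overlapping runs are reported without pruning, and the similarity measure is an illustrative stand-in for the source's P% rule.

```python
# Sketch of passage detection: compare measure I with I+M and extend.
def measure_similarity(m1, m2):
    shared = len(set(m1) & set(m2))
    return shared / max(len(set(m1) | set(m2)), 1)

def find_repeated_passages(measures, p=0.8):
    total = len(measures)
    M = max(total // 4 - 1, 1)            # steps 1)-2): M = N - 1
    found = []
    for i in range(total - M):
        length = 0
        while (i + length + M < total
               and measure_similarity(measures[i + length],
                                      measures[i + length + M]) >= p):
            length += 1
        if length > M:                    # both segments exceed M measures
            kind = "same" if all(measures[i + k] == measures[i + k + M]
                                 for k in range(length)) else "similar"
            found.append((i, i + M, length, kind))
    return found

pattern = [[60], [62], [64]]
bars = (pattern * 6)[:16]                 # 16 measures repeating every 3
print(find_repeated_passages(bars)[0])    # -> (0, 3, 13, 'same')
```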
Next, phrase recognition and marking is described. It is presented as follows: the phrase structure of the piece is recognized, and phrase boundaries are marked with arcs between phrases.

In an embodiment of the present application, the recognition results for musical patterns, rhythm patterns, scales, arpeggios, passages and so on can all serve as weighted conditions for phrase recognition, together with cadences and half cadences, among others.

The specific implementation of phrase recognition is as follows:

1) Read all note sequences of the piece from the electronic musical score, for example a MusicXML file.

2) Phrases are built from musical patterns. If the first musical pattern opening the next phrase begins with the same material (a similar musical pattern) as the first phrase, this is an important basis for dividing musical structure, namely "identity divides": identical material can delimit musical structure. Therefore, if the musical-pattern structure of one stretch of music is similar to that of the following stretch, they should be divided into two phrases. For example, two stretches A and B that compare as similar musical patterns (exactly the same rhythm pattern, different pitches, different intervals) and end on a long note are divided into two phrases.

3) A phrase should exhibit some form of half cadence or cadence, for example a tail note on the tonic, or a cadential or half-cadential chord progression (tonic chord, dominant chord, dominant-seventh chord, subdominant chord, cadential six-four chord); the various cadential chord patterns can be preset.

Here, the cadence types include: Authentic: harmonic progressions containing leading tone -> tonic chord, such as V-I or vii-I; the Perfect Authentic Cadence (PAC) and the Imperfect Authentic Cadence (IAC, inverted). Plagal: progressions without a leading tone moving to the tonic, such as the Plagal Cadence (PC), IV-I, the "Amen" or church cadence. Half cadence: I-V or ?-V (Half Cadence, HC); in minor, iv6-V, the Phrygian cadence. Deceptive/false cadence: the Deceptive Cadence (DC), most commonly V-vi. Elision: the close of one phrase coincides with the start of the next. Picardy third: in a minor-key cadence, the third of the tonic chord is raised to form a major triad.

4) A phrase should have a certain length, generally around 4 measures, with longer phrases of 8 measures or even more. The length of a phrase is always shorter than the passage containing it and always longer than the musical pattern at that position (musical pattern < phrase < passage).

5) Scales and arpeggios, as basic musical patterns, cannot be split across two phrases; whenever a scale, arpeggio or chromatic-scale pattern occurs, it must be contained within the current phrase.

6) The tail note of each phrase is a long note, so the duration of a phrase's tail note is at least half the value of the denominator of the time signature.

7) Repeat signs, double bar lines, fermatas, terminal marks among the musical terms, and the like always indicate a phrase boundary.

8) Moreover, the recognition of musical symbols and musical terms described below also assists phrase recognition; for example, legato slurs usually guide the segmentation, and a phrase boundary will not cut through a legato slur.
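A minimal sketch of how the cues of steps 3), 6) and 7) could be combined into a boundary score is given below. The weights are illustrative assumptions (the source names the cues but not their weights), and the long-tail rule is one possible reading of "at least half the value of the denominator".

```python
# Sketch: score a candidate phrase boundary from weighted cues.
def tail_is_long(note_duration, time_sig_denominator):
    # step 6), one reading: the tail note lasts at least half the beat unit
    # implied by the denominator (durations here in quarter-note units)
    beat_unit = 4.0 / time_sig_denominator
    return note_duration >= beat_unit / 2

def boundary_score(note, denominator):
    """note: dict with 'dur' and marker flags; returns a confidence in [0, 1]."""
    score = 0.0
    if note.get("double_bar") or note.get("repeat") or note.get("fermata"):
        score += 0.6                        # step 7): near-certain cues
    if tail_is_long(note["dur"], denominator):
        score += 0.25                       # step 6): long tail note
    if note.get("cadence"):                 # step 3): cadence detected here
        score += 0.25
    return min(score, 1.0)

print(boundary_score({"dur": 2.0, "cadence": True}, 4))  # -> 0.5
print(boundary_score({"dur": 2.0, "fermata": True}, 4))  # -> 0.85
```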
That is, the above score analysis and labeling method further includes: performing passage analysis and labeling on the electronic musical score; and performing phrase analysis and labeling based on the analysis results of the musical patterns, the rhythm patterns, the scales, the arpeggios, the chords, the musical term and symbol labeling, and the passages.
Musical Term and Symbol Labeling Unit
Hereinafter, musical term and symbol labeling in the score analysis and labeling system according to an embodiment of the present application will be described.

In the above score analysis and labeling system, the musical term and symbol labeling unit includes: a musical term labeling subunit for labeling the musical terms in the electronic musical score; a musical symbol labeling subunit for labeling the musical symbols in the electronic musical score; and a period-characteristic analysis and labeling subunit for analyzing and labeling the period characteristics of the work in the electronic musical score based on the labeled musical terms and symbols, combined with the labeling results of the musical patterns, scales, chords, arpeggios, tonality, rhythm patterns, phrases and passages.

Musical symbol and term annotation is presented as follows: the musical terms and symbols on the score (covering dynamics, tempo, expression, style, playing technique, period and school, etc.) are recognized; a definition is shown on mouse hover, and the user may choose to display a definition at the bottom of the score, linked by a number to the symbol on the score for use after printing.

The specific implementation is as follows:

1) Preset the various musical terms and symbols together with their definitions.

2) Read the musical terms and symbols from the electronic musical score, for example MusicXML.

3) When the mouse moves over a musical term or symbol, read the preset definition and display it as a tooltip.

4) If the user clicks the "show at bottom" button in the tooltip, display a numeric marker next to the term or symbol and show the definition at the bottom of the score, linked by the same numeric marker.
The knowledge graph is presented as follows: when the user selects an element such as the piece title or the author, a button linking to the corresponding Baidu Baike entry is shown; clicking the button displays the Baike content of the entry in a pop-up alongside. The user may select any part of this content (or edit it manually) and display it at the bottom of the score, linked by a numeric marker.

The implementation is as follows:

1) Read elements such as the piece title and author from the electronic musical score, for example MusicXML.

2) When the user selects the title or author, display a button alongside that links to the corresponding Baidu Baike entry.

3) When the user clicks the button, fetch the entry's content through the Baidu Baike API and present it alongside in a pop-up.

4) When the user selects some of the content (which may be edited manually), show a "display at bottom" button; on clicking it, display a numeric marker next to the entry and show the selected content at the bottom of the score, linked by the same numeric marker.
Period analysis of the musical work is presented as follows: based on the tonality characteristics, musical symbols and terms, scale/arpeggio/chord characteristics, beat and rhythm characteristics, and phrase and passage characteristics described above, the characteristics of the piece are judged comprehensively and a conclusion is presented on the score; the corresponding extracted features are highlighted in the score in different colors, and the user can click each highlighted part to place a numbered label on the score.

The implementation is as follows:

1) Make the determination according to the following table;

2) Combined with the keyword-association function of the knowledge graph, the period conclusion for the great majority of works can also be found; this conclusion is compared with that of 1). If they agree, the process is complete; if not, a red "inconclusive" warning must be raised so that the user can prune the judgment features selected by the machine in 1) until the conclusion of 1) matches that of 2), improving the accuracy of the analysis.
In addition, the score analysis and labeling method/system according to an embodiment of the present application can employ artificial-intelligence deep-learning techniques to improve accuracy in musical-pattern analysis, phrase analysis, tonality analysis and period-characteristic analysis.

Specifically: as described above, musical patterns are marked with line segments of different styles and colors below the notes of the pattern on the staff, and the user can manually drag to adjust where a segment starts;

phrases, as described above, are marked with arcs between phrases, and the user can manually drag to adjust the position of an arc;

for the tonality-characteristic and period-characteristic analyses described above, there is a small probability that the judgment disagrees with the result crawled through the knowledge graph;

in summary, a process of manual adjustment and intervention to produce a definite result is needed. As users work with the labels, the program records the results that were adjusted; when the adjustments and interventions of multiple users agree, machine deep learning records and learns from these continually updated and optimized results, and the optimized score-labeling recommendations follow.
Exemplary Apparatus
FIG. 12 illustrates a block diagram of the score analysis and labeling apparatus according to an embodiment of the present application.

As shown in FIG. 12, the score analysis and labeling apparatus 200 according to an embodiment of the present application includes: a time-signature determination unit 210 for determining the time signature of an electronic musical score; a beat and rhythm-pattern analysis unit 220 for performing beat labeling and rhythm-pattern analysis and labeling on the electronic musical score based on the determined time signature; a pitch and interval labeling unit 230 for performing pitch and interval labeling on the electronic musical score; and a musical-pattern analysis and labeling unit 240 for analyzing and labeling musical patterns based on the labeling results of the pitches, the intervals and the rhythm patterns.

In the above apparatus, the beat and rhythm-pattern analysis unit 220 is configured to: determine, based on the rhythm combination characteristics of the electronic musical score, preset basic rhythm patterns as well as their doubled and subdivided rhythm patterns; and perform rhythm-pattern analysis on the electronic musical score by comparison with the basic, doubled and subdivided rhythm patterns.

In the above apparatus, the rhythm-pattern labeling performed by the beat and rhythm-pattern analysis unit 220 takes the note durations accumulated over each beat as one unit, including: defining the start and end coordinates of a rhythm-pattern mark; defining the position coordinates of multiple rhythm-pattern marks when one note spans multiple beats; and, within one unit of the rhythm-pattern mark, computing the proportional durations of the notes contained in the beat, dividing the mark equally according to the computed proportions, and breaking the mark at each note.

In the above apparatus, the pitch and interval labeling unit 230 is configured to perform intra-measure interval labeling and inter-measure interval labeling on the electronic musical score.

In the above apparatus, the musical-pattern analysis and labeling unit 240 is configured to: determine a reference note group with a predetermined number of notes, the reference note group constituting a reference musical pattern; and traverse the other note groups with the predetermined number of notes in the electronic musical score to determine, based on the pitches, durations and intervals of each group, the musical patterns that are identical to, displaced from, mirrored from or similar to the reference musical pattern.

The above apparatus includes at least one of the following: identical musical patterns satisfy that the two note groups have the same rhythm pattern, the same pitches and the same intervals; similar musical patterns satisfy that the rhythm patterns of the two groups differ but the pitches and intervals are the same; similar musical patterns satisfy that the rhythm patterns differ and the pitches differ while the intervals are the same; similar musical patterns satisfy that the rhythm patterns are the same or stand in a doubled or subdivided relationship, with more than 50% of the pitches or intervals the same; displaced musical patterns satisfy that the rhythm patterns and intervals are the same while the pitches differ; and mirrored musical patterns satisfy that the rhythm patterns and intervals of the two groups are the same and the pitches are arranged in reverse order.

In the above apparatus, if the notes on the score include double notes, extracting the notes to compare covers at least one of the following cases: if both compared note groups are double notes, the upper notes (higher in pitch) of the two groups are compared with each other, the lower notes (lower in pitch) are compared with each other, and the upper and lower notes of the two groups are then cross-compared; and if one of the compared groups is double notes and the other single notes, the single-note group is taken as the reference group, and the upper notes and lower notes of the double-note group are each compared against the single-note group.

The above apparatus further includes a mode analysis and labeling unit for performing key-signature and accidental analysis and labeling on the electronic musical score; performing scale, chord and arpeggio analysis and labeling based on the key-signature analysis result; and performing mode analysis and labeling based on the analysis results of the key signature, the accidentals, the chords and the pitches.

The above apparatus further includes a passage and phrase analysis and labeling unit for performing passage analysis and labeling on the electronic musical score, and performing phrase analysis and labeling based on the analysis results of the musical patterns, the rhythm patterns, the scales, the arpeggios, the chords, the musical term and symbol labeling, and the passages.

The above apparatus further includes a period-characteristic analysis and labeling unit for performing musical term and symbol labeling on the electronic musical score, and analyzing and labeling the period characteristics of the work in the electronic musical score based on the term and symbol labeling results, combined with the labeling results of the musical patterns, the scales, the arpeggios, the chords, the tonality, the beat and rhythm patterns, and the phrases and passages.

Here, those skilled in the art will appreciate that the specific functions and operations of the units and modules of the above score analysis and labeling apparatus 200 have been described in detail above with reference to FIGS. 1 to 11 for the score analysis and labeling method, and repeated description is therefore omitted.

As described above, the score analysis and labeling apparatus 200 according to an embodiment of the present application can be implemented in various terminal devices, such as smartphones, computers and servers. In one example, the apparatus 200 can be integrated into a terminal device as a software module and/or hardware module: it may be a software module within the operating system of the terminal device, or an application developed for the terminal device; of course, the apparatus 200 may equally be one of the many hardware modules of the terminal device.

Alternatively, in another example, the score analysis and labeling apparatus 200 and the terminal device may be separate devices, with the apparatus connected to the terminal device through a wired and/or wireless network and exchanging interaction information in an agreed data format.
Exemplary Electronic Device
Hereinafter, an electronic device according to an embodiment of the present application is described with reference to FIG. 13.

FIG. 13 illustrates a block diagram of the electronic device according to an embodiment of the present application.

As shown in FIG. 13, the electronic device 10 includes one or more processors 11 and a memory 12.

The processor 11 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components of the electronic device 10 to perform desired functions.

The memory 12 may include one or more computer program products, which may include computer-readable storage media in various forms, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache. The non-volatile memory may include, for example, read-only memory (ROM), hard disks and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may run the program instructions to implement the score analysis and labeling methods of the embodiments of the present application described above and/or other desired functions. Content such as rhythm patterns, pitches, intervals and musical patterns may also be stored in the computer-readable storage medium.

In one example, the electronic device 10 may further include an input device 13 and an output device 14, interconnected by a bus system and/or another form of connection mechanism (not shown).

The input device 13 may include, for example, a keyboard and a mouse.

The output device 14 may output various information externally, including the labeled score, and may include, for example, a display, speakers, a printer, and a communication network with its connected remote output devices.

Of course, for simplicity, FIG. 13 shows only some of the components of the electronic device 10 relevant to the present application, omitting components such as buses and input/output interfaces. Depending on the specific application, the electronic device 10 may further include any other appropriate components.
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the above methods and devices, embodiments of the present application may also be a computer program product comprising computer program instructions which, when run by a processor, cause the processor to perform the steps of the score analysis and labeling methods according to the various embodiments of the present application described in the "Exemplary Method" portion of this specification.

The computer program product may carry program code for performing the operations of the embodiments of the present application written in any combination of one or more programming languages, including object-oriented languages such as Java and C++ as well as conventional procedural languages such as the "C" language or similar. The program code may execute entirely on the first user computing device, partly on the first user device, as a stand-alone software package, partly on the first user computing device and partly on a remote computing device, or entirely on a remote computing device or server.

Embodiments of the present application may also be a computer-readable storage medium on which computer program instructions are stored which, when run by a processor, cause the processor to perform the steps of the score analysis and labeling methods according to the various embodiments of the present application described in the "Exemplary Method" portion of this specification.

The computer-readable storage medium may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, but is not limited to, electric, magnetic, optical, electromagnetic, infrared or semiconductor systems, apparatuses or devices, or any combination thereof. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
Knowledge Graph Unit
In the above score analysis and labeling system, the knowledge graph unit includes: an author-information subunit for labeling author-related information of the electronic musical score; and a piece-information subunit for labeling piece-related information of the electronic musical score.

FIG. 15 illustrates a schematic diagram of the overall architecture of the score analysis and labeling system according to an embodiment of the present application. As those skilled in the art will appreciate, each module shown in FIG. 15 has been described in detail above and is not repeated here.

The score analysis and labeling system according to an embodiment of the present application can be implemented in various terminal devices, such as smartphones, computers and servers. In one example, the system can be integrated into a terminal device as a software module and/or hardware module: it may be a software module within the operating system of the terminal device, or an application developed for the terminal device; of course, the system may equally be one of the many hardware modules of the terminal device.

Alternatively, in another example, the score analysis and labeling system and the terminal device may be separate devices, with the system connected to the terminal device through a wired and/or wireless network and exchanging interaction information in an agreed data format.
Another object of the present application is to support an efficient music teaching system through a modular music database. In the modular music database of the present application, techniques such as machine learning, artificial intelligence and knowledge graphs are applied to enhance the generation of teaching content, the refinement of difficulty classification labels and the accuracy of associated recommendations.

FIG. 24 illustrates the relationship between the modular music database and the score analysis and labeling method/system. A user interaction unit 005 interacts with a difficulty-label refinement training unit 003 to refine the difficulty labels, which are returned and applied to the modular music database 004. The MusicXML feature analysis and labeling unit is the foundation and technical support for building the modular music database; it is also the base data source of the collaborative editing unit. After collaborative editing, the data is stored in the smart score library for user interaction, making it more accurate and feeding the adaptive learning recommendations.

Moreover, the modular music database of the present application is an open library: by interacting with other software tools, such as score analysis software, it forms an open, mutable library, allowing a large user base, through accumulated use, to feed teaching content back and upload material, enriching the database.

In building the modular music teaching database of the present application, a first-principles approach is taken first: a large body of pieces is traced to its sources and disassembled according to certain characteristics. Specifically, the basic expressive elements of music include: mode, tonality, melodic line, rhythm, meter, register, timbre, dynamics, tempo, harmony and texture.

- Points: the minimal elements of data collection, "patterns" (including notes, time signatures, rhythm patterns, chords, scales, arpeggios);
- Lines: the basic regularities governing the use of the data (interval regularities, tonality regularities, fingering regularities);
- Surfaces: multi-voice extensions built on the basic regularities (harmony, chord connection, contrapuntal techniques, the progression of texture);
- Trees: structural development built on voice and texture progression (musical form, theme and development);
- Networks: the interwoven classification of the collected data (characteristics of works from different periods, stylistic attributes of music, examples from famous composers).

In addition, the expressive elements can be further stratified according to common music grading systems, for example the syllabus knowledge points and difficulty progressions of the ABRSM, Central Conservatory or Trinity examination systems.

The modular music teaching database of the present application is therefore a database in the field of music teaching that extracts features from a large number of scores according to teaching logic and classifies them with difficulty labels. It has machine-learning capability and serves as a back-end teaching database that automatically creates teaching material by difficulty label; once material is generated, corresponding algorithmic tools process and transform it into a system with which users can interact.

Specifically, the modular music database of the present application includes a classified score library, an encyclopedia knowledge library and a special-training library. Scores in the classified score library can be retrieved for intelligent score analysis and labeling by the score analysis tool; once labeled, they are stored as the smart score library, and the analysis and labeling of a large number of users can be tracked to optimize the results of the system's intelligent analysis. The content of the encyclopedia knowledge library can be linked to score analysis and labeling through the knowledge graph tool; the content selected and excerpted by many users optimizes the system's recommendation function, keeping the encyclopedia content refined and accurate. The content of the special-training library can be transformed into special training tasks by the transformation tool; these interactive tasks record, judge and assess user feedback, and the interaction records of many users are used to optimize the difficulty ordering of the training content.

Optionally, the modular music teaching database of the present application may further include a single-piece teaching library which, using melody-waveform extraction and comparison tools, recognizes and crawls teaching material (video, audio) related to a single piece; the material selected and used by many users optimizes the system's recommendation function, keeping the in-depth single-piece content refined and accurate.

Optionally, the single-piece teaching library of the modular music database may also be used to associate with and extract the optimized results of the encyclopedia knowledge library, and to extract the relevant features of the piece to form special training targeted at that piece.

In summary, the modular music database of the present application creates difficulty-label classifications at the back end and generates teaching material; through front-end user interaction software, the generated material is presented to users, whose usage data are received, recorded and analyzed to give feedback and optimize the back-end material and its difficulty ordering.
Having introduced the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary Database
FIG. 16 illustrates a block diagram of an example of the modular music database according to an embodiment of the present application.

As shown in FIG. 16 and described above, the modular music teaching database 400 according to an embodiment of the present application includes a classified score library 410, an encyclopedia knowledge library 420 and a special-training library 430, each of which is described in detail below.

FIG. 17 illustrates a block diagram of an example of the classified score library of the modular music database according to an embodiment of the present application.

As shown in FIG. 17, building on the embodiment of FIG. 16, the classified score library 410 of the modular music database 400 includes: a playing-technique knowledge-point feature extraction unit 411 for extracting musical features according to playing-technique knowledge points, the musical features including tonality, meter, rhythm patterns, hand positions, musical signs, articulation, intervals, etc.; an encyclopedia knowledge feature extraction unit 412 for extracting, from a score, knowledge features related to encyclopedia knowledge, including the score's period, author, genre, style, and mode or musical form, etc.; a theme feature extraction unit 413 for extracting theme features from the theme of a score, including general themes and programmatic (titled) themes, etc.; and a label classification unit 414 for classification by score difficulty labels.

That is, in an embodiment of the present application, the classified score library 410 classifies musical scores through feature extraction of different categories; it accomplishes score recognition and automatic classification and accordingly supports intelligent retrieval by label. Its label categories include, without limitation, the categories corresponding to the extracted features: A1 classification by the tonality, meter, rhythm patterns, hand positions, musical signs, harmony, articulation, intervals, and melodic/musical patterns of the score; A2 classification by the period, author, genre, style, and mode or musical form of the score; A3 classification by the theme features of the score; A4 classification by the difficulty labels of the score.

FIG. 18 illustrates the label features of the classified score library of the modular music database according to an embodiment of the present application. For example, theme labels may further include world-famous pieces, nursery rhymes and folk songs, culture and customs, general knowledge, geography, festivals, animals, emotions, transportation, etc.
The label classification method of the label classification unit 414 is now described in detail with reference to FIG. 19. Here, FIG. 19 illustrates a flowchart of the label classification process of the modular teaching database according to an embodiment of the present application.

As shown in FIG. 19, the label classification unit 414 performs label classification through interactive iteration with the special-training library 430. The specific process is as follows:

S1: Preset features for the material in the libraries of the different modules. Specifically, the preset features may include features extracted from the basic elements of music, for example: a interval features, b register features, c note-duration features, d clef features, e time-signature features, f rhythm-pattern features, g rest features, h tie features, i tonality features, j measure-count features, k musical-term features, l musical-symbol features, m melodic sequence (musical-pattern) features, n chord features, o hand-position features, p fingering features, q accidental features, r articulation features, s phrase-structure features, t tempo features, u accompaniment texture, v pedaling, w ornaments, x musical-form structure features, y voice (counterpoint) features;

S2: Based on graded-examination material over the years (taking the ABRSM examinations as an example), apply machine learning to learn the difficulty-grading regularities of the features preset in S1 and their combinations, generating first-level difficulty labels;

S3: Define first-level and second-level difficulty labels for the features and feature combinations preset in S1 (the second-level labels are finer-grained and are placed under their respective first-level labels);

S4: According to the second-level difficulty labels generated in S3, select the corresponding features and combinations and permute them randomly to automatically generate training material;

S5: According to the second-level difficulty labels generated in S3, extract qualifying features and combinations from existing scores (drawn from the classified score library, or crawled from the web) to automatically generate candidate material;

S6: Unify the candidate material generated in S5 according to the format requirements of each module's teaching data (for example, material for the rhythm library must have pitch labels removed), generating training material;

S7: Compare the training material generated in S4 and S6 with the first-level difficulty labels defined in S2 to verify the containment relationship (i.e., whether the material can be assigned to the corresponding first-level label);

S8: If the result of S7 is non-containment, place the training material in a verification library and determine through further verification whether it can be assigned to a first-level label defined in S2;

S9-1: If the further verification of S8 finds that it can be assigned, feed the material into the machine learning of S2 to optimize the machine's definition of the first-level difficulty labels;

S9-2: If the further verification of S8 finds that it cannot be assigned, further adjust and refine the difficulty labels defined in S3 so that they agree with the machine's judgment;

Here, the portion S2-S9 above may be called the difficulty-label definition loop.

Optionally, the label classification unit 414 further performs ordering and storage under labels of equal difficulty, including:

S10: Preset ordering rules for material under the same difficulty label (the refined second-level labels), for example ordering by measure count from fewest to most, then by a time-signature order under equal measure counts, then by note count from fewest to most when the first two agree (a code sketch of this rule is given below);

S11: Compare and order the training material that has passed the verification library of S8 according to the ordering rules preset in S10;

S12: Store the training material ordered in S11 into the database.

Furthermore, as described above, the label classification unit 414 can optimize the ordering through interaction with users, including:

S13: Apply uniform formatting to the material in the database of S12 according to each module database's standard score presentation format, generating each module's special library;

S14: Convert the scores in the special library of S13 into interactive tasks;

S15: Users perform the interactive tasks (i.e., train);

S16: Collect tracking data and feedback on user training (for example, the completion times and accuracy of many users on the same exercise);

S17: Based on the user data collected in S16, continually optimize the difficulty ordering under the second-level labels, and return to S12 to update the database ordering.

The portion S13-S17 above may therefore be called the user-interaction ordering optimization loop.

Through the above process, a continually updated and optimized modular music database is finally formed; it continually produces and optimizes the content of its modules through machine learning, automatic content generation and user feedback.
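The ordering rule of S10 is a lexicographic comparison and can be sketched as follows. The field names and the particular time-signature ranking are illustrative assumptions; the source only fixes the ordering criteria, not a concrete representation.

```python
# Sketch of the S10 ordering rule within one second-level difficulty label:
# fewer measures first, then a preset time-signature order, then fewer notes.
TIME_SIG_ORDER = {"4/4": 0, "3/4": 1, "2/4": 2, "3/8": 3, "6/8": 4}

def material_sort_key(material):
    """material: dict with 'measures', 'time_sig' and 'note_count'."""
    return (material["measures"],
            TIME_SIG_ORDER.get(material["time_sig"], len(TIME_SIG_ORDER)),
            material["note_count"])

pool = [
    {"id": "a", "measures": 8, "time_sig": "3/4", "note_count": 20},
    {"id": "b", "measures": 4, "time_sig": "4/4", "note_count": 18},
    {"id": "c", "measures": 8, "time_sig": "3/4", "note_count": 14},
]
print([m["id"] for m in sorted(pool, key=material_sort_key)])  # -> ['b', 'c', 'a']
```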
The encyclopedia knowledge library 420 contains knowledge related to the pieces, such as a score's period, author, genre, style, mode and musical form, and theme. As described above, its content is linked to score analysis and labeling through the knowledge graph tool, and through user interaction, namely the selection and excerpting of content, the system's recommendation function is optimized, keeping the encyclopedia content refined and accurate.
FIG. 20 illustrates a block diagram of an example of the special-training library of the modular music database according to an embodiment of the present application.

As shown in FIG. 20, building on the embodiment of FIG. 16, the special-training library 430 of the modular music database 400 includes: a rhythm library 431, a sight-singing library 432, a sight-reading library 433, a listening library 434 and a technique library 435.

Each of these libraries is described in further detail below.
The purpose of the material in the rhythm library 431 is to train the learner's command of note durations and rhythm patterns across different time signatures, to build familiarity with different rhythm patterns and rhythmic styles, and to enable quick reading of the rhythms on a score, laying a solid foundation for the score-reading hurdle of music or instrument learning while cultivating a good sense of rhythm.

The specific implementation of the rhythm library 431 includes the following steps:

S1: Difficulty grading and ordering definition; that is, six features are extracted: a time-signature features, b rhythm-pattern features, c rest features, d tie features, e measure-count features, f voice features (ensemble, alternation, canon);

S2: Based on graded-examination material over the years (taking the ABRSM examinations as an example) and the existing sight-reading material of each grade, remove the pitches and extract the above six features; apply machine learning to learn the difficulty-grading regularities of the features preset in S1 and their combinations, generating first-level difficulty labels;

S3: According to the progression regularities of rhythm learning, define first-level and second-level difficulty labels for the features and feature combinations preset in S1 (the second-level labels are finer-grained and are placed under their respective first-level labels);

S4: According to the second-level difficulty labels generated in S3, select the corresponding features and combinations and permute them randomly to automatically generate training material;

S5: According to the second-level difficulty labels generated in S3, extract qualifying features and combinations from existing scores (drawn from the classified score library, or crawled from the web) to automatically generate candidate material;

S6: Unify the candidate material generated in S5 according to the format requirements of the rhythm training library (removing pitch labels) to generate training material;

S7: Compare the training material generated in S4 and S6 with the first-level difficulty labels defined in S2 to verify the containment relationship (i.e., whether the material can be assigned to the corresponding first-level label);

S8: If the result of S7 is non-containment, place the training material in the verification library and determine through verification whether it can be assigned to a first-level label defined in S2;

S9-1: If the judgment of S8 is that it can be assigned, feed the material into the machine-learning process of S2 to optimize the machine's definition of the first-level difficulty labels;

S9-2: If the judgment of S8 is that it cannot be assigned, adjust and refine the manually defined difficulty labels of S3 so that they agree with the machine's judgment;

The portion S2-S9 above may be called the difficulty-label definition loop of the rhythm library.

Optionally, the implementation of the rhythm library 431 further includes ordering and storage under labels of equal difficulty, including:

S10: Preset ordering rules for material under the same difficulty label (the refined second-level labels), for example by measure count from fewest to most, then by a time-signature order under equal measure counts, then by note count from fewest to most when the first two agree;

S11: Compare and order the training material that has passed the verification library of S8 according to the ordering rules preset in S10;

S12: Store the training material ordered in S11 into the rhythm library 431.

Furthermore, as described above, the rhythm library 431 can optimize the ordering through user interaction, including:

S13: Apply uniform formatting to the material stored in the rhythm library 431 at S12 according to the special-training library's standard score presentation format, generating the exercise bank for special rhythm training;

S14: Convert the scores in the exercise bank of S13 into interactive rhythm training tasks;

S15: Users perform the tasks (i.e., train);

S16: Collect tracking data and feedback on user training (for example, the completion times and accuracy of many users on the same exercise);

S17: Based on the user data collected in S16, continually optimize the difficulty ordering under the second-level labels, and return to S12 to update the ordering in the rhythm library 431.

The portion S13-S17 above may be called the user-interaction ordering optimization loop of the rhythm library 431.
FIG. 21 illustrates examples of difficulty labels in the rhythm library of the modular music teaching database according to an embodiment of the present application.
The sight-singing library 432 can be divided into two parts: a pitch-and-intonation training sub-library, and a single-voice melodic sight-singing training sub-library.

The teaching purpose of the pitch-and-intonation material is to train the learner's score reading and intonation across different clefs, building familiarity with standard pitch and with the distances between notes, so that the learner can sing in tune and gradually widen their range. There are two application scenarios: first, the material is used directly as score material for the learner to practice; second, a converter turns it into an exercise in the sight-singing software, which compares and demonstrates intonation from microphone input, visualizing the otherwise invisible sound as a visible melodic line, so that the learner can see the direction of the melody, the distances between notes, and the gap between the standard pitch and their own singing, guiding the learner to correct errors and improve.

The material of the pitch-and-intonation training sub-library has 6 features (single-note training, no rhythm): a pitch features, b interval features, c register features, d note-count features, e clef features, f tonality features (key signature, accidentals and tonic triad). The implementation steps approximate S1-S17 of the rhythm library, with only the extracted features differing.

FIG. 22 illustrates the first application scenario of the pitch-and-intonation training sub-library of the modular music teaching database according to an embodiment of the present application, i.e., the score scenario.

FIG. 23 illustrates the second application scenario of the pitch-and-intonation training sub-library of the modular music teaching database according to an embodiment of the present application, i.e., the sight-singing software scenario.

The teaching purpose of the single-voice melodic sight-singing material is to train the learner's score reading across different clefs so that, while singing in tune, the learner controls the rhythm and meter of a melody, musicality, melodic shaping, and phrasing and breathing. Its application scenarios likewise divide in two: direct use as score material for the learner to practice, or conversion by a converter into an exercise in the sight-singing software, which compares and demonstrates intonation, rhythm and meter from microphone input, similar to the sing-along scoring functions of KTV or mobile karaoke apps.

The material of the single-voice melodic sight-singing sub-library has 13 features: a interval features, b register features, c note-duration features, d clef features, e time-signature features, f rhythm-pattern features, g rest features, h tie features, i tonality features, j measure-count features, k musical-term features, l musical-symbol features, m melodic sequence (musical-pattern) features. Of these, a, b, c, d, e, f, i, j are mandatory, while g, h, k, l, m are added progressively by grade. The implementation steps approximate S1-S17 of the rhythm library, with only the extracted features differing.

FIG. 33 illustrates the second application scenario of the single-voice melodic sight-singing training sub-library of the modular music teaching database according to an embodiment of the present application, i.e., the sight-singing software scenario.
The teaching purpose of the material in the sight-reading library 433 is to train the learner's ability to read and play at sight, i.e., to play a new score correctly within the shortest time. There are two application scenarios: direct use as score material for the learner to practice, or conversion by a converter into an exercise in the sight-reading software, which compares, corrects and grades the performance from microphone input or from data exchanged between a digital piano and a computer.

The material in the sight-reading library 433 has 23 features: a interval features, b register features, c note-duration features, d clef features, e time-signature features, f rhythm-pattern features, g rest features, h tie features, i tonality features, j measure-count features, k musical-term features, l musical-symbol features, m melodic sequence (musical-pattern) features, n chord features, o hand-position features, p fingering features, q accidental features, r articulation features, s phrase-structure features, t tempo features, u accompaniment texture, v pedaling, w ornaments. Of these, a, b, c, d, e, f, i, j, o, p, q, r are mandatory, while g, h, k, l, m, n, s, t, u, v, w are added progressively by grade. The implementation steps of the sight-reading library 433 approximate S1-S17 of the rhythm library, with only the extracted features differing.
The teaching purpose of the material in the listening library 434 is to train the learner's inner hearing and aural discrimination; its sub-libraries are 434-1 to 434-6 as follows.

Ear-training library 434-1 (Guess Key / Guess Interval / Guess Chord), with 6 feature extractions: a pitch, b register, c clef, d chords, e intervals, f tonality.

Tonality-discrimination library 434-2 (Guess Scales / Guess Chord / Guess the Tonality), with 5 feature extractions: a chords, b intervals, c tonality features, d measure-count features, e chord features.

Rhythm-training library 434-3 (Clap the Rhythm), with 5 feature extractions: a rhythm-pattern features, b rest features, c tie features, d measure-count features, e time-signature features.

Meter-training library 434-4 (Clap the Time), with 5 feature extractions: a rhythm-pattern features, b rest features, c tie features, d measure-count features, e time-signature features.

Melody-discrimination library 434-5 (Telling the Difference), with 5 feature extractions: a rhythm-pattern features, b rest features, c tie features, d measure-count features, e pitch features.

Melody-analysis library 434-6 (Music Analysis), with 18 feature extractions: a dynamics, b articulation, c tempo, d tonality, e period, f accompaniment texture, g phrase structure, h register, i time signature, j melodic sequence (musical-pattern features), k musical symbols, l musical terms, m rhythm patterns, n pedaling, o ornaments, p scales and arpeggios, q cadences, r measure count. The implementation steps of these libraries approximate S1-S17 of the rhythm library, with only the extracted features differing.
The teaching purpose of the technique training library 435 (Finger Boogie) is to train the learner's finger fundamentals and playing technique, so that the expression of musical works is supported at the level of playing technique. The implementation of the technique training library 435 includes the following steps:

S1: Difficulty grading and ordering definition; that is, define the material classification, levels and difficulty ordering (this part can be defined through machine learning and difficulty labels; in addition, since the content has clearly stratified difficulty throughout, it can also be defined by presets).

S2: Automatically generate teaching material and store it; that is, according to the classification, difficulty and ordering definitions of S1, recognize and order the material and lay it out into the library by template.

S3: Convert it into technique-training software content; for example, the finished material is turned by a converter into an exercise in the technique-training software.

S4: Record user operation data in the software, and re-optimize the difficulty ordering of the material based on the operation data of many users (time to complete an exercise, accuracy, etc.).
That is, in an embodiment of the present application, the special-training library 430 can interact with the label classification unit 414 of the classified score library 410 to improve the accuracy of label classification.
The basic principles of the present application have been described above in connection with specific embodiments. However, it should be noted that the advantages, benefits and effects mentioned in the present application are merely examples and not limitations; they cannot be considered as necessarily possessed by every embodiment of the present application. The specific details disclosed above serve only the purposes of illustration and ease of understanding, and the present application is not limited to being implemented with those specific details.

The block diagrams of components, apparatuses, devices and systems involved in the present application are only illustrative examples and are not intended to require or imply that they must be connected, arranged or configured in the manner shown; as those skilled in the art will recognize, they may be connected, arranged or configured in any manner. Words such as "include", "comprise" and "have" are open-ended terms meaning "including but not limited to" and are used interchangeably with it. The words "or" and "and" as used here mean "and/or" and are used interchangeably with it, unless the context clearly indicates otherwise. The words "such as" mean "such as but not limited to" and are used interchangeably with it.

It should also be noted that in the apparatuses, devices and methods of the present application, the components or steps may be decomposed and/or recombined; such decompositions and/or recombinations shall be regarded as equivalent solutions of the present application.

The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present application. Accordingly, the present application is not intended to be limited to the aspects shown here, but accords with the widest scope consistent with the principles and novel features disclosed here.

The foregoing description has been given for the purposes of illustration and description. Furthermore, the description is not intended to limit the embodiments of the present application to the forms disclosed here. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions and sub-combinations thereof.
Claims (26)
- A method for analyzing and labeling a musical score, comprising: determining a time signature of an electronic musical score; performing beat labeling and rhythm-pattern analysis and labeling on the electronic musical score based on the determined time signature; performing pitch and interval labeling on the electronic musical score; and analyzing and labeling musical patterns based on the labeling results of the pitch, the interval and the rhythm pattern.
- The method for analyzing and labeling a musical score according to claim 1, wherein performing rhythm-pattern analysis on the electronic musical score comprises: determining, based on the rhythm combination characteristics of the electronic musical score, preset basic rhythm patterns as well as the doubled rhythm patterns and subdivided rhythm patterns of the basic rhythm patterns; and performing rhythm-pattern analysis on the electronic musical score through comparison with the basic rhythm patterns, the doubled rhythm patterns and the subdivided rhythm patterns.
- The method for analyzing and labeling a musical score according to claim 1, wherein performing rhythm-pattern labeling on the electronic musical score comprises labeling with the note durations accumulated over each beat as one unit, including: defining the start and end coordinates of a rhythm-pattern mark; defining the position coordinates of multiple rhythm-pattern marks when one note spans multiple beats; and, within one unit of the rhythm-pattern mark, computing the proportional durations of the notes contained in one beat, dividing the mark equally according to the computed proportions, and breaking the mark at each note.
- The method for analyzing and labeling a musical score according to claim 1, wherein performing interval labeling on the electronic musical score comprises: performing intra-measure interval labeling and inter-measure interval labeling on the electronic musical score.
- The method for analyzing and labeling a musical score according to claim 4, wherein analyzing and labeling musical patterns based on the labeling results of the pitch, the interval and the rhythm pattern comprises: determining a reference note group with a predetermined number of notes, the reference note group constituting a reference musical pattern; and traversing the other note groups with the predetermined number of notes in the electronic musical score to determine, based on the pitch, duration and interval of each group of notes of the predetermined number, the musical patterns that are identical to, displaced from, mirrored from or similar to the reference musical pattern.
- The method for analyzing and labeling a musical score according to claim 5, comprising at least one of the following: identical musical patterns satisfy that the two note groups have the same rhythm pattern, the same pitches and the same intervals; similar musical patterns satisfy that the rhythm patterns of the two groups differ but the pitches and intervals are the same; similar musical patterns satisfy that the rhythm patterns of the two groups differ and the pitches differ while the intervals are the same; similar musical patterns satisfy that the rhythm patterns of the two groups are the same or stand in a doubled or subdivided relationship, with more than 50% of the pitches or intervals the same; displaced musical patterns satisfy that the rhythm patterns and intervals of the two groups are the same while the pitches differ; and mirrored musical patterns satisfy that the intervals of each pair of corresponding notes of the two groups to the axis of symmetry, i.e. the midpoint between the highest and lowest notes of the group, are opposite numbers.
- The method for analyzing and labeling a musical score according to claim 6, wherein, if the notes on the score include double notes, extracting the notes to compare covers at least one of the following cases: if both compared note groups are double notes, the upper notes of the two groups, i.e., the higher-pitched notes, are compared with each other, the lower notes, i.e., the lower-pitched notes, are compared with each other, and the upper and lower notes of the two groups are then cross-compared; and if one of the compared groups is double notes and the other single notes, the single-note group is taken as the reference group, and the upper notes and lower notes of the double-note group are each compared against the single-note group.
- The method for analyzing and labeling a musical score according to claim 1, further comprising: performing key-signature and accidental analysis and labeling on the electronic musical score; performing scale, chord and arpeggio analysis and labeling based on the key-signature analysis result; and performing mode analysis and labeling based on the analysis results of the key signature, the accidentals, the chords and the pitches.
- The method for analyzing and labeling a musical score according to claim 8, further comprising: performing passage analysis and labeling on the electronic musical score; and performing phrase analysis and labeling based on the analysis results of the musical patterns, the rhythm patterns, the scales, the arpeggios, the chords, the musical term and symbol labeling, and the passages.
- The method for analyzing and labeling a musical score according to claim 9, further comprising: performing musical term and symbol labeling on the electronic musical score; and analyzing and labeling the period characteristics of the work in the electronic musical score based on the results of the musical term and symbol labeling, combined with the labeling results of the musical patterns, the scales, the arpeggios, the chords, the tonality, the beat and rhythm patterns, and the phrases and passages.
- An apparatus for analyzing and labeling a musical score, comprising: a time-signature determination unit for determining the time signature of an electronic musical score; a beat and rhythm-pattern analysis unit for performing beat labeling and rhythm-pattern analysis and labeling on the electronic musical score based on the determined time signature; a pitch and interval labeling unit for performing pitch and interval labeling on the electronic musical score; and a musical-pattern analysis and labeling unit for analyzing and labeling musical patterns based on the labeling results of the pitch, the interval and the rhythm pattern.
- An electronic device, comprising: a processor; and a memory in which computer program instructions are stored, the computer program instructions, when run by the processor, causing the processor to perform the method for analyzing and labeling a musical score according to any one of claims 1-13.
- A system for analyzing and labeling a musical score, comprising: a beat, rhythm and rhythm-pattern analysis and labeling unit for labeling the time signature and beats of an electronic musical score and performing rhythm-pattern analysis and labeling; a pitch, interval and musical-pattern analysis and labeling unit for labeling the pitches and intervals of the electronic musical score and performing musical-pattern analysis and labeling; a scale, chord and arpeggio labeling unit for labeling the scales, chords and arpeggios in the electronic musical score; a key-signature, accidental and mode labeling unit for labeling the key signature, accidentals and mode of the electronic musical score; a special-fingering labeling unit for labeling special fingerings of the electronic musical score; a passage and phrase labeling unit for labeling the passages and phrases of the electronic musical score; a musical term and symbol labeling unit for labeling the musical terms and musical symbols of the electronic musical score; and a knowledge graph unit for labeling musical knowledge information associated with the electronic musical score.
- The system for analyzing and labeling a musical score according to claim 13, wherein the beat, rhythm and rhythm-pattern analysis and labeling unit comprises: a time-signature labeling subunit for labeling the time signature of the electronic musical score; a beat labeling subunit for labeling the beats of the electronic musical score based on the labeled time signature; and a rhythm-pattern analysis and labeling subunit for performing rhythm-pattern analysis and labeling on the electronic musical score based on the labeled beats.
- The system for analyzing and labeling a musical score according to claim 14, wherein the rhythm-pattern analysis and labeling subunit is configured to: determine, based on the rhythm combination characteristics of the electronic musical score, preset basic rhythm patterns as well as the doubled and subdivided rhythm patterns of the basic rhythm patterns; and perform rhythm-pattern analysis on the electronic musical score through comparison with the basic, doubled and subdivided rhythm patterns.
- The system for analyzing and labeling a musical score according to claim 15, wherein the rhythm-pattern analysis and labeling subunit is configured to perform rhythm-pattern labeling with the note durations accumulated over each beat as one unit, including: defining the start and end coordinates of a rhythm-pattern mark; defining the position coordinates of multiple rhythm-pattern marks when one note spans multiple beats; and, within one unit of the rhythm-pattern mark, computing the proportional durations of the notes contained in one beat, dividing the mark equally according to the computed proportions, and breaking the mark at each note.
- The system for analyzing and labeling a musical score according to claim 13, wherein the pitch, interval and musical-pattern analysis and labeling unit comprises: a pitch labeling subunit for labeling the pitches of the electronic musical score; an interval labeling subunit for labeling the intervals of the electronic musical score; and a musical-pattern analysis and labeling subunit for analyzing and labeling musical patterns based on the labeled pitches, intervals and rhythm patterns.
- The system for analyzing and labeling a musical score according to claim 17, wherein the interval labeling subunit is configured to perform intra-measure interval labeling and inter-measure interval labeling on the electronic musical score.
- The system for analyzing and labeling a musical score according to claim 18, wherein the musical-pattern analysis and labeling subunit is configured to: determine a reference note group with a predetermined number of notes, the reference note group constituting a reference musical pattern; and traverse the other note groups with the predetermined number of notes in the electronic musical score to determine, based on the pitch, duration and interval of each group of notes of the predetermined number, the musical patterns that are identical to, displaced from, mirrored from or similar to the reference musical pattern.
- The system for analyzing and labeling a musical score according to claim 19, wherein the musical-pattern analysis and labeling subunit is configured for at least one of the following: determining two note groups as identical musical patterns based on their having the same rhythm pattern, the same pitches and the same intervals; determining two note groups as similar musical patterns based on their rhythm patterns differing while their pitches and intervals are the same; determining two note groups as similar musical patterns based on their rhythm patterns differing and their pitches differing while their intervals are the same; determining two note groups as similar musical patterns based on their rhythm patterns being the same or standing in a doubled or subdivided relationship, with more than 50% of their pitches or intervals the same; determining two note groups as displaced musical patterns based on their rhythm patterns and intervals being the same while their pitches differ; and determining two note groups as mirrored musical patterns based on the intervals of each pair of corresponding notes to the axis of symmetry, i.e. the midpoint between the highest and lowest notes of the group, being opposite numbers.
- The system for analyzing and labeling a musical score according to claim 20, wherein the musical-pattern analysis and labeling subunit is configured so that, if the notes on the score include double notes, extracting the notes to compare covers at least one of the following cases: if both compared note groups are double notes, the upper notes of the two groups, i.e., the higher-pitched notes, are compared with each other, the lower notes, i.e., the lower-pitched notes, are compared with each other, and the upper and lower notes of the two groups are then cross-compared; and if one of the compared groups is double notes and the other single notes, the single-note group is taken as the reference group, and the upper notes and lower notes of the double-note group are each compared against the single-note group.
- The system for analyzing and labeling a musical score according to claim 13, wherein the scale, chord and arpeggio labeling unit comprises: a scale labeling subunit for performing scale labeling on the electronic musical score; a chord labeling subunit for performing chord labeling on the electronic musical score; and an arpeggio labeling subunit for performing arpeggio labeling on the electronic musical score.
- The system for analyzing and labeling a musical score according to claim 13, wherein the key-signature, accidental and mode labeling unit comprises: a key-signature labeling subunit for performing key-signature analysis and labeling on the electronic musical score; an accidental labeling subunit for performing accidental analysis and labeling on the electronic musical score; and a mode analysis and labeling subunit for performing mode analysis and labeling based on the labeled key signature, accidentals, chords and pitches.
- The system for analyzing and labeling a musical score according to claim 13, wherein the passage and phrase labeling unit comprises: a passage analysis and labeling subunit for performing passage analysis and labeling on the electronic musical score; and a phrase analysis and labeling subunit for performing phrase analysis and labeling based on the analysis and labeling results of the labeled musical patterns, rhythm patterns, scales, chords, arpeggios, musical terms, musical symbols and passages.
- The system for analyzing and labeling a musical score according to claim 13, wherein the musical term and symbol labeling unit comprises: a musical term labeling subunit for labeling the musical terms in the electronic musical score; a musical symbol labeling subunit for labeling the musical symbols in the electronic musical score; and a period-characteristic analysis and labeling subunit for analyzing and labeling the period characteristics of the work in the electronic musical score based on the labeled musical terms and symbols, combined with the labeling results of the labeled musical patterns, scales, chords, arpeggios, tonality, rhythm patterns, phrases and passages.
- The system for analyzing and labeling a musical score according to claim 13, wherein the knowledge graph unit comprises: an author-information subunit for labeling author-related information of the electronic musical score; and a piece-information subunit for labeling piece-related information of the electronic musical score.
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011577843.X | 2020-12-28 | ||
CN202011577843 | 2020-12-28 | ||
CN202011578343.8A CN116704850A (zh) | 2020-12-28 | 2020-12-28 | 谱面分析和标识系统 |
CN202011578345.7A CN117496792A (zh) | 2020-12-28 | 2020-12-28 | 谱面分析和标注方法、装置及电子设备 |
CN202011578343.8 | 2020-12-28 | ||
CN202011578345.7 | 2020-12-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022143679A1 true WO2022143679A1 (zh) | 2022-07-07 |
Family
ID=82259080
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/142134 WO2022143679A1 (zh) | 2020-12-28 | 2021-12-28 | 谱面分析和标注方法、装置及电子设备 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2022143679A1 (zh) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101702316A (zh) * | 2009-11-20 | 2010-05-05 | 北京中星微电子有限公司 | 一种将midi音乐转化为颜色信息的方法和系统 |
CN102663998A (zh) * | 2012-04-09 | 2012-09-12 | 谷文慧 | 一种适用于视唱的五线谱记谱法 |
US20150179156A1 (en) * | 2013-12-19 | 2015-06-25 | Yamaha Corporation | Associating musical score image data and logical musical score data |
CN103778821A (zh) * | 2014-03-03 | 2014-05-07 | 罗淑文 | 乐谱、指法电子灯光模拟显示方法及键盘类乐器辅助教学器 |
JP3201408U (ja) * | 2015-09-09 | 2015-12-10 | 昭郎 伊東 | 五線譜上に記載されている調性を色彩にて表現させた楽譜 |
CN105931621A (zh) * | 2016-04-19 | 2016-09-07 | 北京理工大学 | 一种由midi到盲文乐谱的翻译方法及系统 |
CN108766463A (zh) * | 2018-04-28 | 2018-11-06 | 平安科技(深圳)有限公司 | 电子装置、基于深度学习的乐曲演奏风格识别方法及存储介质 |
CN110544411A (zh) * | 2019-09-03 | 2019-12-06 | 玖月音乐科技(北京)有限公司 | 五线谱指法快速标注方法及系统 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115527035A (zh) * | 2022-11-01 | 2022-12-27 | 北京安德医智科技有限公司 | 图像分割模型优化方法、装置、电子设备及可读存储介质 |
CN115527035B (zh) * | 2022-11-01 | 2023-04-28 | 北京安德医智科技有限公司 | 图像分割模型优化方法、装置、电子设备及可读存储介质 |
CN115862573A (zh) * | 2022-11-23 | 2023-03-28 | 成都潜在人工智能科技有限公司 | 基于织体音程关系解析逆构的伴奏合成方法与系统 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Bell et al. | Integrating computational thinking with a music education context | |
Bozkurt et al. | Computational analysis of Turkish makam music: Review of state-of-the-art and challenges | |
WO2022143679A1 (zh) | 谱面分析和标注方法、装置及电子设备 | |
Ferretti | On the modeling of musical solos as complex networks | |
McVicar et al. | AutoLeadGuitar: Automatic generation of guitar solo phrases in the tablature space | |
Cook | Computational and comparative musicology | |
US7576280B2 (en) | Expressing music | |
Benetos et al. | Automatic transcription of Turkish microtonal music | |
Yang et al. | Development status, frontier hotspots, and technical evaluations in the field of AI music composition since the 21st century: a systematic review | |
Onyeji | Composing art music from indigenous African musical paradigms | |
Dean | Pat Metheny's Finger Routes: the role of muscle memory in guitar Improvisation | |
CN117496792A (zh) | 谱面分析和标注方法、装置及电子设备 | |
Johnson | The Standard, Power, and Color Model of Instrument Combination in Romantic-Era Symphonic Works. | |
WO2024002070A1 (zh) | 乐谱训练数据库的构建和应用 | |
Wang et al. | Interactive teaching system for remote vocal singing based on decision tree algorithm | |
CN105551472A (zh) | 具有指法标示的乐谱产生方法及其系统 | |
Chiu et al. | Automatic system for the arrangement of piano reductions | |
Knopke et al. | Symbolic data mining in musicology | |
Shafiei | Extracting Theory from Practice: A Computational Analysis of the Persian Radif | |
Sébastien et al. | Dynamic music lessons on a collaborative score annotation platform | |
Serna | Guitar Theory For Dummies with Online Practice | |
Schön | PAUL-2: a transformer-based algorithmic composer of two-track piano pieces | |
Sutcliffe et al. | The C@ merata task at MediaEval 2016: Natural Language Queries Derived from Exam Papers, Articles and Other Sources against Classical Music Scores in MusicXML. | |
Han | The use of digital technologies in teaching the saxophone in a Chinese conservatory: learning based on the experience of saxophonists Du Yinjiao and Liu Yuan | |
McFarland | Dave Brubeck and Polytonal Jazz |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21914413; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21914413; Country of ref document: EP; Kind code of ref document: A1 |