CN117496792A - Spectral plane analysis and labeling method and device and electronic equipment
- Publication number: CN117496792A
- Application number: CN202011578345.7A (CN202011578345A)
- Authority: CN (China)
- Prior art keywords: notes, labeling, music, analysis, type
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G - PHYSICS
- G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B15/00 - Teaching music
Abstract
A spectral plane analysis and labeling method, an apparatus and an electronic device are disclosed. The spectral plane analysis and labeling method comprises the following steps: determining the time signature of an electronic music score; performing beat labeling and rhythm pattern analysis and labeling on the electronic music score based on the determined time signature; performing pitch and interval labeling on the electronic music score; and analyzing and labeling the music types based on the labeling results for pitch, interval and rhythm pattern. In summary, the spectral plane analysis and labeling method can trace and decompose a piece of music according to its rhythm, melody, tonality and period style, and can analyze and label the spectral plane by extracting and analyzing features based on note rhythm, melody, mode, fingering, structure, musical terms and symbols, and period, thereby effectively identifying and labeling the spectral plane.
Description
Technical Field
The present disclosure relates to the field of data analysis technologies, and more particularly, to a spectral plane analysis and labeling method, a spectral plane analysis and labeling apparatus, and an electronic device.
Background
At present, the digitization of musical works on computers is relatively widespread; it includes the digital music sequence MIDI as well as the commonly used MusicXML electronic score. Here, an electronic music score is a file that stores score layout information on a computer. Electronic scores come in many formats and enormous numbers, including oVe, gtp, mjp, etc.; that is, the various existing types of electronic scores actually contain a great deal of valuable musical information. However, an existing electronic score contains only the typesetting information of a piece, which supports browsing the spectral plane of the piece but cannot be applied to more specialized spectral plane analysis, and thus cannot meet a player's needs for score reading and performance.
Disclosure of Invention
The present application has been made to solve the above technical problems. The embodiments of the application provide a spectral plane analysis and labeling method, apparatus and electronic device, which can trace and decompose a piece of music according to its characteristics, and analyze and label the spectral plane by extracting and analyzing features based on note rhythm, melody, mode, fingering, structure, musical terms and symbols, and period, so that spectral plane identification and labeling are performed effectively.
According to an aspect of the present application, there is provided a spectral plane analysis and labeling method, including: determining the time signature of an electronic music score; performing beat labeling and rhythm pattern analysis and labeling on the electronic music score based on the determined time signature; performing pitch and interval labeling on the electronic music score; and analyzing and labeling the music types based on the labeling results for pitch, interval and rhythm pattern.
In the above spectral plane analysis and labeling method, performing rhythm pattern analysis on the electronic music score includes: determining preset basic rhythm patterns, together with their doubled and halved variants, based on the rhythm combination characteristics of the electronic music score; and performing rhythm pattern analysis on the electronic music score by comparison against the basic, doubled and halved rhythm patterns.
In the above spectral plane analysis and labeling method, labeling the rhythm patterns of the electronic music score includes labeling beat by beat, accumulating note duration values within each beat, and includes: defining the start and end coordinates of each rhythm pattern mark; defining the position coordinates of multiple rhythm pattern marks when one note spans several beats; and, within one unit mark, computing the proportions of the durations of the notes contained in the beat, dividing the mark equally according to the computed proportions, and breaking the mark between notes.
In the above spectral plane analysis and labeling method, performing pitch labeling on the electronic music score includes: performing C positioning, FG positioning and EFGA positioning on the electronic music score.
In the above spectral plane analysis and labeling method, performing interval labeling on the electronic music score includes: labeling the intra-bar intervals and the inter-bar intervals of the electronic music score.
In the above spectral plane analysis and labeling method, performing interval labeling on the electronic music score further includes: labeling the notes played by the left hand and the right hand of the electronic music score.
In the above spectral plane analysis and labeling method, analyzing and labeling the music types based on the labeling results for pitch, interval and rhythm pattern includes: determining a reference group of notes having a predetermined number of notes, the reference group constituting a reference music type; and traversing the other groups of notes in the electronic score having the predetermined number of notes to determine music types that are the same as, displaced from, mirror images of, or similar to the reference music type, based on the pitch, duration and intervals of each group of notes.
The above spectral plane analysis and labeling method includes at least one of the following: two groups of notes are the same music type if they have the same rhythm pattern, the same pitches and the same intervals; two groups are a similar music type if their rhythm patterns differ but their pitches and intervals are the same, or if their rhythm patterns and pitches differ but their intervals are the same, or if their rhythm patterns are the same or related by doubling or halving and more than 50% of their pitches or intervals coincide; two groups are a displacement music type if their rhythm patterns and intervals are the same but their pitches differ; and two groups are a mirror music type if the intervals from each pair of corresponding notes to the symmetry axis, taken as the midpoint between the highest and lowest notes in the group, are opposite numbers of each other.
In the above spectral plane analysis and labeling method, if two-note chords occur among the notes on the spectral plane, extracting the tones to be compared includes at least one of the following cases: if both groups of notes consist of two-note chords, the upper notes (the notes of higher pitch) of the two groups are compared with each other, the lower notes (the notes of lower pitch) are compared with each other, and the upper and lower notes of the two groups are then cross-compared; and if one of the two groups consists of two-note chords and the other of single notes, the single-note group is used as the reference group, and the upper-layer and lower-layer notes of the chord group are compared with the single-note group respectively.
In the above spectral plane analysis and labeling method, further comprising: analyzing and labeling the key signature and accidentals of the electronic music score; analyzing and labeling scales, chords and arpeggios based on the key signature analysis results; and performing tonality analysis and labeling based on the analysis results for the key signature, accidentals, chords and pitches.
In the above spectral plane analysis and labeling method, further comprising: performing musical passage analysis and labeling on the electronic music score; and performing phrase analysis and labeling based on the music types, rhythm patterns, scales, arpeggios and chords, the musical term and symbol labeling, and the passage analysis results.
In the above spectral plane analysis and labeling method, further comprising: labeling the musical terms and symbols of the electronic music score; and analyzing and labeling the period characteristics of the work based on the term and symbol labeling results, combined with the labeling results for music types, scales, arpeggios, chords, tonality, beat and rhythm patterns, and phrases and passages.
In the above spectral plane analysis and labeling method, further comprising: performing special fingering analysis and labeling on the electronic music score based on the interval labeling results.
According to another aspect of the present application, there is provided a spectral plane analysis and labeling apparatus, including: a time signature determining unit for determining the time signature of an electronic music score; a beat and rhythm pattern analysis unit for labeling the beats and analyzing and labeling the rhythm patterns of the electronic music score based on the determined time signature; a pitch and interval labeling unit for performing pitch and interval labeling on the electronic music score; and a music type analysis and labeling unit for analyzing and labeling the music types based on the labeling results for pitch, interval and rhythm pattern.
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory having stored therein computer program instructions that, when executed by the processor, cause the processor to perform the spectral plane analysis and labeling method as described above.
According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the spectral plane analysis and labeling method as described above.
The spectral plane analysis and labeling method, apparatus and electronic device provided by the present application can trace and decompose a piece of music according to characteristics such as its melodic line, rhythm, beat, harmony and tonality, tempo, dynamics, pitch and playing technique, and can analyze and label the spectral plane by extracting and analyzing features based on note rhythm, melody, mode, fingering, structure, musical terms and symbols, and period, thereby effectively identifying and labeling the spectral plane.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application and do not constitute a limitation of it. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 illustrates a schematic diagram of V-type labeling of beats in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 2 illustrates a schematic diagram of digital labeling of beats in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 3 illustrates a schematic flow chart of beat labeling in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 4A illustrates examples of basic rhythm patterns in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 4B illustrates examples of doubled rhythm patterns in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 4C illustrates examples of halved rhythm patterns in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 5 illustrates a schematic flow chart of rhythm pattern labeling in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 6 illustrates a schematic diagram of 5-C positioning in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 7 illustrates a schematic diagram of FG positioning in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 8 illustrates a schematic diagram of EFGA positioning in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 9 illustrates a schematic diagram of intervals of each degree in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 10 illustrates a schematic diagram of third and sixth interval labeling within a bar in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 11 illustrates a schematic diagram of inter-bar third interval labeling in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 12 illustrates a schematic diagram of labeling the notes played simultaneously by the left and right hands in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 13 illustrates a flowchart of music type labeling in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 14 illustrates a flow chart of the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 15 illustrates a schematic flow chart of the scale labeling process in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 16 illustrates a schematic flow chart of the chord and broken chord labeling process in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 17 illustrates a schematic flow chart of tonality labeling in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 18 illustrates a schematic flow chart of extended fingering labeling in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 19 illustrates a schematic flow chart of contracted fingering labeling in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 20 illustrates a schematic flow chart of musical passage labeling in the spectral plane analysis and labeling method according to an embodiment of the present application.
Fig. 21 illustrates a block diagram of the spectral plane analysis and labeling apparatus according to an embodiment of the present application.
Fig. 22 illustrates a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Summary of the application
As noted above, existing electronic music scores contain only the typesetting information of a piece, which supports browsing the spectral plane but cannot be applied to more specialized spectral plane analysis.
It is therefore an object of the present application to provide a spectral plane analysis and labeling method capable of automatically analyzing an electronic score, such as a MusicXML score, and labeling the corresponding musical elements on the staff, thereby meeting a player's needs for score reading and performance.
Specifically, the spectral plane analysis and labeling method of the present application applies first-principles thinking: it traces a piece of music back to its source according to the characteristics of music, and extracts and analyzes the various features of the electronic score from the different analytical angles of score analysis. In this application, the features include the musical elements and their derivatives described above, such as pitch, interval, range, melodic line, rhythm, beat, scale, arpeggio, chord, harmony, tonality, fingering, musical structure, musical terms, musical symbols, dynamics, tempo, compositional background, author, and genre and theme.
In addition, the spectral plane analysis and labeling method exploits the logical relations among musical elements and labels the spectral plane automatically through the correlations among the extracted features, so that the various musical elements on the spectral plane can be labeled accurately and automatically without manual involvement, which guarantees labeling accuracy and improves user convenience.
Next, each part of the spectral plane analysis and labeling method of the present application will be described in detail.
Beat and rhythm analysis and labeling
In the embodiments of the application, time signature labeling means marking the time signature in the electronic score; specifically, the time signature may be extracted from the electronic score, e.g., time signature information such as 3/4, 2/4, 4/4 or 3/8 may be read from a MusicXML electronic score.
Then, according to the time signature of the piece, the beats of the notes in each bar are labeled. In the embodiments of the application, beats can be labeled with a V-type marking method or a digital marking method, selectable by the user.
First, as described above, the time signature of the piece, such as 2/4, is read from the electronic score, for example a MusicXML file; this gives the duration of one beat (e.g., a quarter note gets one beat) and the number of beats per bar (e.g., 2 beats). Then the notes of the piece are read in sequence and their duration values accumulated.
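As a minimal illustration of this step, the following Python sketch reads the time signature and accumulates note durations in beats (assuming an uncompressed MusicXML file in the standard score-partwise layout; the element names follow the MusicXML specification, while the function name is illustrative):

import xml.etree.ElementTree as ET

def read_time_and_beats(path):
    root = ET.parse(path).getroot()
    divisions = 1            # MusicXML duration units per quarter note
    beats, beat_type = 4, 4  # defaults until a <time> element is seen
    for measure in root.iter('measure'):
        attrs = measure.find('attributes')
        if attrs is not None:
            if attrs.find('divisions') is not None:
                divisions = int(attrs.find('divisions').text)
            time = attrs.find('time')
            if time is not None:
                beats = int(time.find('beats').text)
                beat_type = int(time.find('beat-type').text)
        acc = 0.0
        for note in measure.findall('note'):
            if note.find('chord') is not None:
                continue     # chord tones share the previous note's duration
            dur = note.find('duration')
            if dur is None:
                continue     # e.g. grace notes carry no duration
            # one beat lasts (4 / beat_type) quarter notes
            acc += int(dur.text) / divisions / (4 / beat_type)
        yield measure.get('number'), (beats, beat_type), acc

Each yielded tuple gives the bar number, the time signature in force, and the accumulated beat count of the bar, which the labeling loop of fig. 3 below can consume. (Multi-voice bars using <backup> elements would need extra handling.)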
In "type V notation": if the current note has a time value exceeding 1 beat, an equivalent V-shaped folding line is drawn below the note according to the number of beats occupied by the note; if the current accumulated time value is 1 beat, making a V-shaped broken line under the group of notes; each "V" shaped polyline represents a beat. Further, the notes of the group need to be divided equally according to the time length occupied by each note in the beat, and the fold lines are broken between notes, so that the time value of the notes in each beat can be intuitively represented by the length and the proportion of the fold lines.
The specific implementation method is as follows:
(1) The starting point Y coordinate of the "V" shaped fold line: the Y-coordinate of the first line of the staff is shifted downward by a predetermined pixel value, e.g., 10 pixels (i.e., the pixel value of the coordinate is subtracted by 10 pixels).
(2) The starting point X coordinate of the V-shaped broken line is as follows: if the group has only one note, the X coordinate of the start point is the X coordinate of the note minus a predetermined pixel, e.g., 10 pixels, and the X coordinate of the end point is the X coordinate of the note plus a predetermined pixel, e.g., 10 pixels; if the set has two or more notes, the start point is the X coordinate of the first note of the set and the end point is the X coordinate of the last note of the set.
(3) If the duration of a note is greater than 1 beat, the number of "V" fold lines needed for the note is calculated from the number of beats it occupies (rounded up, and denoted by the letter M). For example, for a piece in which a quarter note gets one beat, a half note occupies 2 beats, i.e., 2 "V" fold lines are needed, M=2; a whole note occupies 4 beats, i.e., 4 "V" fold lines are needed, M=4; a dotted quarter note occupies one and a half beats, i.e., one "V" fold line plus one "\" is needed, M=2.
(4) Coordinates of the bottom points of the "V" fold lines: if the current note exceeds 1 beat, the progressive X step of the top points is obtained as (end point X coordinate - start point X coordinate)/(M+1), which gives the spacing of the top X coordinates and hence the X coordinates of all top and bottom points; the bottom Y coordinate is the starting Y coordinate minus the pixel depth of the "V", e.g., 30 pixels. If the current accumulated duration is 1 beat, the bottom X coordinate of the "V" fold line is obtained as (end point X coordinate - start point X coordinate)/2; the bottom Y coordinate is again the starting Y coordinate minus the depth of the "V", e.g., 30 pixels.
(5) If the current note exceeds 1 beat, all the top and bottom points obtained in step (4) are connected alternately in sequence.
(6) If the group is 1 beat and contains only one note (i.e., that note is 1 beat): draw a line from the starting point to the bottom center point, then from the bottom center point to the end point.
(7) If the group has two notes and one of them is dotted: if the dot is on the first note, draw a line from the starting point to the bottom center point and two broken segments from the bottom center point to the end point; if the dot is on the second note, draw two broken segments from the starting point to the bottom center and a single line from the bottom center to the end point.
(8) If the group meets neither of the two conditions, read each note in turn and accumulate the duration values. When the accumulated value reaches half a beat, check how many notes were accumulated: if one note (i.e., an eighth note when a quarter note gets the beat), subtract a predetermined offset, e.g., 3 pixels, from the X coordinate of the bottom center point and connect it to the starting point; if two notes (i.e., two sixteenth notes), subtract the offset from the bottom center X coordinate and connect it to the starting point with a fold line broken into two segments. Then read the notes of the second half beat: if one note (an eighth note), add the predetermined offset, e.g., 3 pixels, to the bottom center X coordinate and connect it to the end point; if two notes (two sixteenth notes), add the offset and connect two equally divided broken segments to the end point.
Fig. 1 illustrates a schematic diagram of V-type labeling of beats in the spectral plane analysis and labeling method according to an embodiment of the application. Other V-type beat marking methods can also be used; the core is to accumulate note duration values beat by beat and draw one "V" per beat. The start and end coordinates and the lowest point of each "V" must be defined, as well as the position of the second "V" when one note spans multiple beats. In addition, within one "V" unit, the proportions of the durations of the notes contained in the beat are computed, the line segments of the "V" are divided according to the computed proportions, and the line is broken between notes.
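A small geometry sketch of steps (1) to (5) follows (Python; the function name, the downward-growing Y axis and the default pixel values are illustrative assumptions, not taken from the patent drawings):

def v_points(x_start, x_end, y_start, m, depth=30):
    """Top and bottom points of M 'V' shapes between x_start and x_end.

    Follows step (4): the top points advance by (end X - start X) / (M + 1);
    each bottom point sits midway between two adjacent tops, offset by
    `depth` pixels (whether the offset is added or subtracted depends on
    the renderer's Y-axis direction).
    """
    step = (x_end - x_start) / (m + 1)
    tops = [(x_start + i * step, y_start) for i in range(m + 1)]
    bottoms = [((tops[i][0] + tops[i + 1][0]) / 2, y_start + depth)
               for i in range(m)]
    points = []                      # step (5): alternate tops and bottoms
    for i in range(m):
        points += [tops[i], bottoms[i]]
    points.append(tops[m])
    return points

# half note spanning 2 beats (M = 2), drawn between x = 100 and x = 160:
print(v_points(100, 160, 50, 2))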
In the digital labeling method, notes are read in sequence, bar by bar, and their durations accumulated. If the currently accumulated duration equals one beat as defined by the time signature, a numeric mark is made under that group of notes, each number representing one beat; further, the notes of the group are divided according to the time each occupies within the beat and separated by short dashes, so that the duration of the notes within each beat is represented intuitively by the numbers and the distribution of the dashes. The length of each short dash also corresponds to the note's duration.
The specific implementation method is as follows:
(1) Acquiring a digital marking position: taking the X-axis center point of the first note of the current group as an X coordinate, taking a preset pixel offset below the first line of the staff, for example, 10 pixels as a Y coordinate, and taking the point as the labeling position of each beat of numbers.
(2) If only one note exists for the current group, the number of beats in the bar of the current group is marked on the marking position of each beat number.
(3) If the current group has two notes, mark the beat count of the current group within the bar at the labeling position of each beat number, then read all the notes of the group in turn starting from the second note. If the current note occupies half a beat (i.e., an eighth note when a quarter note gets the beat), draw a short dash of a first length, e.g., 20 pixels, below it (X coordinate: the X coordinate of the note head; Y coordinate: the Y coordinate of the first staff line offset by a predetermined value, e.g., 10 pixels); if the current note occupies a quarter beat, draw a dash of a second length, e.g., 10 pixels. And so on, so that the dash length corresponds to the note's duration.
(4) Other cases, such as dotted notes, several notes within one beat (group), or a note spanning more than one beat, are labeled likewise with numbers and proportionally divided dashes.
Fig. 2 illustrates a schematic diagram of digital labeling of beats in the spectral plane analysis and labeling method according to an embodiment of the application. Other digital beat marking methods can also be used; the core is to compute the proportions of the durations of the notes contained in a beat, divide the line segments according to the computed proportions, break the line between notes, and combine the numbers with the line segments. Note that the digital method must consider the label positions of the numbers and dashes when a note spans more than one beat. As shown in fig. 2, the positions and lengths of the line segments are divided equally.
Fig. 3 illustrates a schematic flow chart of beat annotation in a spectral plane analysis and annotation method according to an embodiment of the present application.
As shown in fig. 3, each beat of the piece is first determined from its time signature; notes are then read bar by bar and their durations accumulated. If the accumulated duration equals the duration of one beat, beat labeling starts; otherwise reading and accumulating note durations continues.
In beat labeling, marks are made according to the V-type or digital method described above, i.e., the beat count is marked under the notes. It is then determined whether the accumulated beat count of the current bar equals the piece's number of beats per bar; if so, whether all bars have been traversed is checked, and if not, marking under the notes continues.
Then, if not all bars have been traversed, the next bar is entered and beat labeling continues; otherwise, once all bars have been traversed, the flow ends.
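The accumulation loop of fig. 3 can be sketched as follows (plain Python over note durations already expressed in beats; the representation of a "group" as a list of durations is an assumption for illustration):

def group_into_beats(durations):
    """Split a bar's note durations (in beats) into per-mark groups.

    A group closes when the accumulated value reaches one beat; a value
    above 1.0 means the closing note spans several beats and needs
    ceil(value) 'V' marks (or numbers) of its own.
    """
    groups, current, acc = [], [], 0.0
    for d in durations:
        current.append(d)
        acc += d
        if acc >= 1.0:
            groups.append((current, acc))
            current, acc = [], 0.0
    return groups

# a 2/4 bar: two eighths, then a quarter -> two beat groups
print(group_into_beats([0.5, 0.5, 1.0]))   # [([0.5, 0.5], 1.0), ([1.0], 1.0)]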
After beat labeling is completed, rhythm pattern analysis and labeling can be performed.
Fig. 4A illustrates examples of basic rhythm patterns in the spectral plane analysis and labeling method according to an embodiment of the present application. As shown in fig. 4A, taking the quarter note as one beat, nine basic rhythm patterns marked with the V-type method are shown. As shown in fig. 4B, the half note may be taken as one beat: doubling the duration of each of the 9 basic patterns of fig. 4A (quarter note = one beat) yields 9 new patterns, called, for example, "half-note doubled rhythm patterns". Likewise, the whole note may be taken as one beat: multiplying the duration of each of the 9 basic patterns by 4 yields 9 patterns called, for example, "whole-note doubled rhythm patterns". Fig. 4B illustrates examples of doubled rhythm patterns in the spectral plane analysis and labeling method according to an embodiment of the present application.
As shown in fig. 4C, the eighth note may be taken as one beat: halving the duration of each of the 9 basic patterns (quarter note = one beat) yields 9 patterns, called, for example, "eighth-note halved rhythm patterns". Similarly, the sixteenth note may be taken as one beat: taking 1/4 of the duration of each pattern yields 9 patterns, called, for example, "sixteenth-note halved rhythm patterns". Fig. 4C illustrates examples of halved rhythm patterns in the spectral plane analysis and labeling method according to an embodiment of the present application.
Thus, in the embodiments of the application, 45 rhythm patterns are obtained in total from the 9 basic patterns and their doubled and halved variants. In addition, the dotted quarter note, the dotted eighth note, the dotted half note and the 32nd-note triplet can be supplemented, giving 49 rhythm patterns.
Here, in the embodiments of the application, the basic, doubled and halved rhythm patterns are all relative to the beat read from the time signature; that is, the beat is the unit by which rhythm patterns are doubled or halved. For example, the basic, doubled and halved patterns described above assume that the time signature assigns one beat to the quarter note; if it assigns the beat to the half note or the eighth note instead, the basic, doubled and halved rhythm patterns are determined analogously.
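The doubling and halving of the pattern table amounts to scaling duration values. In the following sketch, the nine one-beat patterns are a plausible reconstruction of fig. 4A, which is not reproduced here, stated in beats with the quarter note as the beat; the family names are illustrative:

BASIC_PATTERNS = [                     # assumed contents of fig. 4A
    [1.0],                             # quarter note
    [0.5, 0.5],                        # two eighths
    [0.25, 0.25, 0.25, 0.25],          # four sixteenths
    [0.5, 0.25, 0.25],                 # eighth + two sixteenths
    [0.25, 0.25, 0.5],                 # two sixteenths + eighth
    [0.25, 0.5, 0.25],                 # sixteenth, eighth, sixteenth
    [0.75, 0.25],                      # dotted eighth + sixteenth
    [0.25, 0.75],                      # sixteenth + dotted eighth
    [1 / 3, 1 / 3, 1 / 3],             # eighth-note triplet
]

def scaled(patterns, factor):
    return [[d * factor for d in p] for p in patterns]

FAMILIES = {
    'basic (quarter = beat)':       BASIC_PATTERNS,
    'half-note doubled (x2)':       scaled(BASIC_PATTERNS, 2),
    'whole-note doubled (x4)':      scaled(BASIC_PATTERNS, 4),
    'eighth-note halved (x1/2)':    scaled(BASIC_PATTERNS, 0.5),
    'sixteenth-note halved (x1/4)': scaled(BASIC_PATTERNS, 0.25),
}                                      # 5 families x 9 = 45 patterns

The four supplementary patterns (dotted quarter, dotted eighth, dotted half, 32nd-note triplet) would be appended to this table to reach the 49 patterns used below.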
Fig. 5 illustrates a schematic flow chart of rhythm type labeling in a spectral face analysis and labeling method according to an embodiment of the present application.
As shown in fig. 5, the rhythm patterns used for comparison are determined first, e.g., the 49 rhythm patterns described above. Then all note sequences of the piece are read from the electronic score, such as a MusicXML file, and the current note is taken as the target note.
The duration of the current note is then compared with the duration of the first note of each of the 49 preset rhythm patterns. If one or more patterns match, the next note is read and compared with the second note of each still-matching pattern, and so on, until all notes have been traversed and all note sequences matching the 49 patterns have been found.
In addition, in the embodiments of the application, if the piece has not yet been beat-labeled, corresponding beat marks can also be drawn under the matched notes according to the matched rhythm pattern.
Here, if several matched basic rhythm patterns are adjacent, they are called a combined rhythm pattern. Moreover, the doubled and halved variants of a rhythm pattern can be regarded as similar rhythm patterns and can be labeled selectively. The same and similar rhythm patterns can also be displayed at the same time.
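The traversal of fig. 5 amounts to matching the note duration sequence against each pattern as a prefix. A compact sketch follows (a direct sliding-window comparison rather than the incremental note-by-note narrowing described above, but with the same result; durations are in beats):

def match_rhythm_patterns(durations, patterns, eps=1e-9):
    """Return (start_index, pattern_index) for every full pattern match."""
    matches = []
    for start in range(len(durations)):
        for pi, pat in enumerate(patterns):
            window = durations[start:start + len(pat)]
            if len(window) == len(pat) and \
               all(abs(a - b) < eps for a, b in zip(window, pat)):
                matches.append((start, pi))
    return matches

Adjacent matches can then be merged into the combined rhythm patterns mentioned above.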
Pitch and interval labeling
In the present embodiment, pitch labeling includes C positioning, FG positioning and EFGA positioning.
First, C positioning means labeling the lines or spaces where the C's are located in the electronic score; e.g., in a grand staff with a treble clef and a bass clef, the 5 lines or spaces where C occurs are labeled across the treble staff and the bass staff.
The specific implementation method is as follows:
(1) Mark the ledger line below the treble staff on which middle C sits, i.e., the line where the first C is located.
(2) Mark the third space of the treble staff, i.e., the space where the second C is located, specifically:
acquiring Y coordinate positions of a third line and a fourth line in a staff of a treble clef;
acquiring an X coordinate position of a starting point of a staff of a treble clef; and
connecting the 4 coordinate points into a rectangle;
the rectangle may then be marked, for example, by filling the rectangle with color as described above, or highlighting the edges of the rectangle.
(3) Mark the second ledger line above the treble staff, i.e., the line where the third C is located.
(4) Obtain the second space of the bass staff, i.e., the space where the fourth C is located, specifically:
acquiring Y coordinate positions of a second line and a third line in a staff of a bass clef;
acquiring an X coordinate position of a starting point of a staff of a bass spectrum; and
connecting the 4 coordinate points into a rectangle;
the rectangle may then be marked, for example, by filling the rectangle with color as described above, or highlighting the edges of the rectangle.
(5) Mark the second ledger line below the bass staff, i.e., the line where the fifth C is located. The results are shown in fig. 6. Here, fig. 6 illustrates a schematic diagram of 5-C positioning in the spectral plane analysis and labeling method according to an embodiment of the present application.
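As an illustration, the rectangle of step (2) (and analogously step (4)) reduces to four corner points built from the staff's coordinates; the argument names are hypothetical:

def space_rect(staff_x_start, staff_x_end, upper_line_y, lower_line_y):
    """Corner points of the rectangle marking a staff space, e.g. the third
    space of the treble staff between its third and fourth lines; the
    renderer then fills the rectangle with color or highlights its edges."""
    return [(staff_x_start, upper_line_y), (staff_x_end, upper_line_y),
            (staff_x_end, lower_line_y), (staff_x_start, lower_line_y)]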
Second, FG positioning means labeling the fourth line of the bass staff, where F sits, and the second line of the treble staff, where G sits.
The specific implementation method comprises the following steps:
(1) The fourth line in the bass clef staff, i.e., the line where F is located, is noted.
(2) The second line in the treble clef staff, the line where G is located, is noted.
The labeling results are shown in fig. 7. Here, fig. 7 illustrates a schematic diagram of FG positioning in a spectral face analysis and labeling method according to an embodiment of the present application.
Third, EFGA positioning means labeling the first and fifth lines of the treble staff, where E and F are located, and the first and fifth lines of the bass staff, where G and A are located.
The specific implementation method comprises the following steps:
(1) The first line in the treble clef staff, i.e., the line where E is located, is marked.
(2) The fifth line in the treble clef staff, i.e., the line where F is located, is noted.
(3) The first line in the bass clef staff, the line where G is located, is noted.
(4) The fifth line in the bass clef staff, the line where a is located, is noted.
The labeling results are shown in fig. 8. Here, fig. 8 illustrates a schematic diagram of EFGA localization in a spectral plane analysis and labeling method according to an embodiment of the present application.
In the embodiments of the present application, pitch labeling helps with spectral plane recognition. Traditionally, whether players learn a score by rote or with other score-reading methods, they memorize (or count) the note names line by line and space by space, which is easily confused when the treble and bass clefs are read together (e.g., when both hands play simultaneously), and the error rate of score learning is very high; this is a pain point of learning to read music. With the pitch labeling of the embodiments of the present application, the spectral plane can be recognized by positioning, much like reading a map in an unfamiliar city: other buildings are found from the landmark buildings. The three positioning modes above are the landmark buildings of the staff.
In the embodiments of the application, interval labeling includes intra-bar interval labeling and inter-bar interval labeling.
First, intra-bar interval labeling means labeling the interval relationship between two adjacent notes. Specifically, a line may be drawn between the two adjacent notes and the interval marked beside it with a number. In the embodiments of the application, intervals of a chosen degree may be selected for labeling, e.g., thirds or fourths; for instance, if all fourths in the electronic score are selected, all adjacent fourths are connected and the number 4 is marked. Specifically, one or more of thirds, fourths, fifths, sixths, sevenths and octaves may be labeled, as shown in fig. 9. Here, fig. 9 illustrates a schematic diagram of intervals of each degree in the spectral plane analysis and labeling method according to an embodiment of the present application.
The specific implementation method for labeling the musical interval of a certain degree is as follows:
(1) All note sequences of the piece of music are read from an electronic score, e.g. MusicXML music file.
(2) Traverse all the notes, and take the absolute value of the difference between the pitch of the current note and the pitch of the next note to obtain the interval number of the two adjacent notes.
(3) The musical interval relation to be marked is determined, for example, the musical interval relation can be specified by a user, and all adjacent notes conforming to the musical interval relation are acquired according to the determined musical interval relation.
(4) For each pair of notes meeting the interval relationship, draw a line from the point of maximum X on the first note's head to the point of minimum X on the second note's head.
(5) Take the maximum Y coordinate of the current note's stem, and mark the interval number at a position offset above it by a predetermined number of pixels, e.g., 5 pixels.
The labeling result is shown, for example, in fig. 10. Here, fig. 10 illustrates a schematic diagram of third and sixth interval labeling within a bar in the spectral plane analysis and labeling method according to an embodiment of the present application.
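A sketch of steps (2) and (3) follows. Interval numbers are counted on lines and spaces, so the letter names of the diatonic steps suffice and accidentals can be ignored (C up to E is a third regardless of sharps or flats); the tuple representation of a note is an assumption:

STEPS = 'CDEFGAB'

def staff_degree(step, octave):
    """Position of a note counted in lines and spaces, e.g. ('C', 4) -> 28."""
    return octave * 7 + STEPS.index(step)

def interval_number(note_a, note_b):
    """|difference of staff degrees| + 1: ('C', 4) to ('E', 4) -> 3, a third."""
    return abs(staff_degree(*note_a) - staff_degree(*note_b)) + 1

notes = [('C', 4), ('E', 4), ('A', 4), ('F', 4)]
wanted = 3                               # label all thirds, say
for a, b in zip(notes, notes[1:]):
    if interval_number(a, b) == wanted:
        print('third between', a, b)     # C-E and A-F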
Second, inter-bar interval labeling means connecting the two notes on either side of a bar line with a line segment and marking the interval number beside it.
The specific implementation method comprises the following steps:
(1) The last note in each bar and the first note in the next bar of the piece of music are read from an electronic score, e.g. a MusicXML music file.
(2) Take the difference of the two notes to obtain the interval relation, and take its absolute value.
(3) A line is drawn from the rightmost edge of the header of the first note to the leftmost edge of the second note.
(4) If the line drawn in (3) runs from upper left to lower right, mark the interval number at a position offset above and to the right of the line's center point by a predetermined number of pixels, e.g., 5 pixels; if it runs from lower left to upper right, mark the number at a position offset above and to the left of the center point by a predetermined number of pixels, e.g., 5 pixels.
The labeling result is shown, for example, in fig. 11. Here, fig. 11 illustrates a schematic diagram of inter-bar third interval labeling in the spectral plane analysis and labeling method according to an embodiment of the present application.
In the present embodiment, interval labeling is used in combination with pitch labeling to aid spectral plane recognition. The difficulty in reading a score for performance lies not only in knowing what the notes are, but also in knowing the relationships between notes. For example, when playing the piano, the positions of the keys can be inferred from the relations between notes: since staff positions and interval distances correspond to distances on the keyboard, the player can move directly by distance, with either hand.
That is, by measuring the direction of motion of the notes (higher or lower) and the distance (interval) between them, the position of the next note can be found, which is very useful for spectral-plane-based automatic recognition and subsequent audio processing.
In addition, a keyboard instrument is played with both hands most of the time, so there are notes to be struck simultaneously by the left and right hands. Therefore, in the embodiments of the application, besides labeling intervals, the notes played by both hands together can be marked; for example, all notes played simultaneously by the left and right hands may be framed together.
The specific implementation method comprises the following steps:
(1) All note sequences of the piece of music are read from an electronic score, e.g. MusicXML music file.
(2) The piece is presented as a grand staff containing the treble and bass staves.
(3) Notes with the same X coordinate are collected from the staves played by the left hand and the right hand respectively.
(4) The specific position coordinates and dimensions of notes for which the X-axis coordinates of each set are identical are obtained.
(5) And drawing a border outside the notes with the same X-axis coordinates in each group to frame the notes.
Of course, unneeded labels can be cancelled at will according to the user's instructions.
The labeling results are shown in fig. 12. Here, fig. 12 illustrates a schematic diagram of the annotation of sounds played by both left and right hands in the spectral plane analysis and annotation method according to the embodiment of the present application.
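Steps (3) to (5) reduce to grouping by X coordinate and taking a bounding box per group. In this sketch the note record fields ('x', 'y', 'w', 'h', 'hand') are illustrative assumptions about the layout data, not MusicXML fields:

from collections import defaultdict

def both_hand_boxes(notes):
    """Bounding boxes around the notes the two hands strike simultaneously."""
    by_x = defaultdict(list)
    for n in notes:
        by_x[n['x']].append(n)
    boxes = []
    for group in by_x.values():
        if {n['hand'] for n in group} == {'left', 'right'}:
            boxes.append((min(n['x'] for n in group),            # left
                          min(n['y'] for n in group),            # top
                          max(n['x'] + n['w'] for n in group),   # right
                          max(n['y'] + n['h'] for n in group)))  # bottom
    return boxes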
Music type analysis and labeling
Music type analysis and labeling means finding all identical, displaced, mirrored and similar music types according to the contour characteristics of the melody and labeling them in the electronic score. For example, under the notes of each music type, identical, displaced, mirrored and similar music types are marked with line segments of different forms and colors. The minimum number of notes contained in a music type can also be specified; for example, if the minimum is specified as 4, at least 4 consecutive notes must match to establish an identical, displaced, mirrored or similar music type.
The specific implementation method comprises the following steps:
(1) All note sequences of the piece of music are read from an electronic score, e.g. MusicXML music file.
(2) The minimum number of notes is specified, e.g., 4, by a received user instruction. In practice, the minimum note count has a value range whose lower bound is 4 and whose upper bound corresponds to the total number of bars / 4 taken in bars: since a musical passage contains at least two phrases and a phrase contains at least two music types, the maximum note count of a group spanning that many bars is taken as the upper bound of the range, i.e., the maximum number of notes a single phrase may contain (see the phrase analysis of section 6.2).
(3) All notes are traversed and grouped by the minimum note count: if the minimum is 4, notes 1, 2, 3, 4 form a group, notes 2, 3, 4, 5 form a group, notes 3, 4, 5, 6 form a group, and so on. Finally, any trailing notes fewer than one full group are left ungrouped.
(4) The pitch, duration and interval relations of each group of notes are recorded; i.e., each group of notes carries a data set including the pitch, duration and interval relations of its notes. For example, the data of a group of notes described in JSON format is (taking a minimum note count of 4 as an example):
{
  "noteGroup1": {
    "noteName": ["C", "D", "E", "G"],
    "noteLength": ["quarter", "half", "whole", "eighth"],
    "noteInterval": [2, 2, 3]
  }
}
Here, "noteName" is the pitch labeling result, "noteLength" the beat and rhythm labeling result, and "noteInterval" the interval labeling result.
(5) Select the music types with the same melodic contour in the whole piece: each note group is compared with all the remaining note groups (for the first group, comparison starts from group 5, i.e., the group of notes 5, 6, 7, 8); if the data of the group (called the reference group) equal the data of the current comparison group, the two groups of notes are considered identical. Next, the group adjacent to the reference group is compared with the group adjacent to the current comparison group; if the data still agree completely, the 8 consecutive notes starting from the first note of the reference group coincide with the 8 consecutive notes starting from the first note of the comparison group, and so on. If they do not agree, it is checked whether the first note of the adjacent group is identical to the corresponding note on the comparison side; if not, only the reference group and the current comparison group match; if yes, the second note is checked, and so on. This yields all music types identical to the reference group.
(6) Determine the music types with displaced melodic contours in the whole piece: compare the interval relations within each note group using the comparison method of (5), i.e., find groups whose noteName is not identical but whose noteLength and noteInterval are the same.
(7) Determine the mirrored melodies in the whole piece: each note group is compared with all the remaining groups (for the first group, from group 5 onward). Take the midpoint between the highest and the lowest note of the first group as the symmetry axis, take the interval from each note of the group to the axis, and compare note by note with the second group. If the interval of each corresponding note of the second group to this axis is the opposite number of the interval of the corresponding note of the first group to the axis (i.e., of its noteInterval relative to the axis), the first and second groups are mirrored music types.
For example, suppose the lowest note of the first group lies on the first line and the highest on the fifth line; their midline, the third line, is the symmetry axis. A note a fifth below the axis corresponds, in the mirror, to a note a fifth above it (the opposite number when the difference is taken), and so on; a second group built this way constitutes the mirror image of the first group.
(8) Determine the similar music types in the whole piece: compare the note data of each note group using the comparison method of (5); two music types are considered similar if at least one preset condition is met. For example, in the embodiments of the application, the preset conditions may be the following (see the sketch after this list):
1. the rhythm patterns of the two groups of notes differ, but the pitches and intervals are the same;
2. the rhythm patterns and the pitches of the two groups differ, but the intervals are the same;
3. the rhythm patterns of the two groups are the same, or related by doubling or halving, and more than 50% of the pitches or intervals coincide.
(9) For double-note lines: if both note groups are double-note lines, the upper melodies are compared with each other and the lower melodies with each other. If one group is a double-note line and the other a single-note line, the voice matching the single line is found within the double line and used as its principal voice for music type labeling.
(10) A phrase consists of two or more music types. To achieve more accurate and finer-grained music type recognition, the results of steps (5)-(9) must be compared, music type by music type, with the phrase recognition results of section 6.2, checking the containment relation; if the current music type coincides in length with the phrase containing it, it must be split again within a narrower range.
(11) Identical, displaced, mirrored and similar music types can be marked on the spectral plane by connecting line segments of different forms, e.g., line segments of different colors, under their notes.
(12) When marking identical, displaced, mirrored and similar music types with line segments of different colors, the rainbow order (red, orange, yellow, green, cyan, blue, purple) can be applied group by group: the first group of identical music types is marked with red lines, the second with orange, and so on. Once all seven colors have been used, their hex values are each divided by 2 and the next round of labeling proceeds, and so on.
(13) To draw the labels described in (11) and (12), the lower-left corner of the first note of the music type is used as the starting point and the lower-right corner points of each note in the music type are connected, which also traces the melody. Identical music types are connected with solid lines, similar music types with dashed lines, displaced music types with dashes of alternating long and short segments, and mirrored music types with dot-dash lines. If the note immediately preceding a music type is the last note of another music type, the lower-right corner point of that note is connected with the lower-left corner point of the first note of the current music type to form one continuous long line.
(14) After the identical, displaced, mirrored and similar music types have been recognized automatically and their connecting lines drawn, the user can still drag the lines under the music types forward or backward to cover more notes.
In the embodiment of the application, the music type labels and the rhythm type labels are logically related, and it can be seen that: (1) identical music types are a sufficient but not necessary condition for identical rhythm types; (2) similar music types have different rhythm types for the two note groups but the same pitches and intervals; (3) shifted music types have the same rhythm types and intervals for the two note groups but different pitches; (4) mirrored music types have, for each pair of corresponding notes, intervals to the symmetry axis (the intermediate value between the highest and lowest notes of the group) that are opposite numbers of each other.

In the embodiment of the application, the music type labels and the phrase labels are also logically related. Structurally, in terms of inclusion, one music passage comprises at least two phrases and one phrase comprises at least two music types. Therefore, during program identification and analysis, the identification results for music types, phrases and music passages must be compared against each other so that the structure of the piece is analyzed more accurately.
Fig. 13 illustrates a flowchart of music type labeling in a spectral plane analysis and labeling method according to an embodiment of the present application.

As shown in fig. 13, notes are first grouped, for example into groups of 4 according to a user instruction, and data for each group are generated. Next, a reference note group is taken; it may be specified by the user, or a typical music type may be identified automatically. The reference group is compared with the next adjacent group to determine whether the music type is the same, shifted or mirrored. If the comparison for the current reference group is complete, the process moves to the labeling stage and another reference note group is taken; if not, comparison continues with the next adjacent group.
That is, the reference group is matched against a comparison group; if the match succeeds, the process checks whether the group of notes following the reference group also matches the group following the comparison group. If so, they can be connected into a longer music type. (For example, if notes 1,2,3,4 match 9,10,11,12 and notes 5,6,7,8 match 13,14,15,16, then notes 1 through 8 should be connected as one music type.) Matching continues in this way, but the maximum range that may be connected does not exceed the total number of bars divided by 4, so that a whole music passage is not matched as a single music type.
Continuing with fig. 13, in the labeling stage a line is drawn under each note of a matched music type (same, shifted or mirrored). It is then judged whether the preceding note belongs to the preceding music type; if so, the line is connected with that of the preceding music type; if not, it is judged whether all music types have been connected. If all have been connected, the flow ends; otherwise labeling continues. A sketch of this matching flow follows.
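One reading of the flow is sketched below; the group size, the cap of total groups divided by 4, and the predicate same_type are assumptions standing in for the comparisons of steps (5) to (7).

```python
def label_music_types(groups, same_type):
    """Match a reference note group against later groups and extend
    successful matches into longer music types (sketch of Fig. 13).

    groups: note groups of a fixed size (e.g. 4 notes each);
    same_type(a, b): True if b is the same as, shifted from or
    mirrored from a. Returns (reference_range, match_range) pairs."""
    matches = []
    max_len = max(1, len(groups) // 4)  # do not grow into a whole passage
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            if same_type(groups[i], groups[j]):
                length = 1
                # Extend: do the groups after i keep matching those after j?
                while (length < max_len and i + length < j
                       and j + length < len(groups)
                       and same_type(groups[i + length], groups[j + length])):
                    length += 1
                matches.append(((i, i + length), (j, j + length)))
                break
    return matches
```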
In summary, the technical scheme of the spectral plane analysis and labeling method according to the embodiment of the application is as follows.
FIG. 14 illustrates a flowchart of a spectral plane analysis and labeling method according to an embodiment of the present application. As shown in fig. 14, the method includes: S110, determining the time signature (beat number) of the electronic music score; S120, performing beat labeling and rhythm type analysis and labeling on the electronic music score based on the determined time signature; S130, performing pitch and interval labeling on the electronic music score; and S140, analyzing and labeling music types based on the labeling results for pitch, interval and rhythm type.
In the above spectral plane analysis and labeling method, performing rhythm type analysis on the electronic music score includes: determining a preset basic rhythm type, and a double-augmented rhythm type and a double-divided rhythm type of the basic rhythm type, based on the rhythm combination characteristics of the electronic music score; and performing rhythm type analysis on the electronic music score by comparison with the basic rhythm type, the double-augmented rhythm type and the double-divided rhythm type.

In the above spectral plane analysis and labeling method, rhythm type labeling of the electronic music score includes labeling rhythm marks with note durations accumulated beat by beat, and includes: defining the start and end coordinates of each rhythm mark; defining the position coordinates of the several rhythm marks when one note spans several beats; and, within the rhythm mark of one unit, calculating the proportion of the duration of each note contained in the beat, dividing the rhythm mark according to the calculated proportions, and truncating the mark for each note. A sketch of this proportional division follows.
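The division within one beat can be sketched as follows, assuming mark positions are plain X coordinates and durations are given in beats; split_beat_mark is a hypothetical helper name.

```python
def split_beat_mark(x_start, x_end, durations):
    """Divide one beat's rhythm mark among its notes in proportion
    to each note's duration; returns one (x0, x1) segment per note."""
    total = sum(durations)
    segments, x = [], x_start
    for d in durations:
        width = (x_end - x_start) * d / total
        segments.append((x, x + width))
        x += width
    return segments

# Example: one beat containing an eighth note and two sixteenth notes.
print(split_beat_mark(0.0, 1.0, [0.5, 0.25, 0.25]))
# [(0.0, 0.5), (0.5, 0.75), (0.75, 1.0)]
```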
In the above spectral plane analysis and labeling method, pitch labeling of the electronic music score includes: performing C localization, FG localization and EFGA localization on the electronic music score.

In the above spectral plane analysis and labeling method, interval labeling of the electronic music score includes: labeling the intra-measure intervals and inter-measure intervals of the electronic music score.

In the above spectral plane analysis and labeling method, interval labeling of the electronic music score further includes: labeling the tones played by the left hand and the right hand of the electronic music score.
In the above spectral plane analysis and labeling method, analyzing and labeling music types based on the labeling results for pitch, interval and rhythm type includes: determining a reference note group having a predetermined number of notes, the reference note group constituting a reference music type; and traversing the other note groups in the electronic music score having the predetermined number of notes to determine, based on the pitch, duration and interval of each group, the music types that are the same as, shifted from, mirrored from or similar to the reference music type.

The above spectral plane analysis and labeling method includes at least one of the following: the same music type requires that the two note groups have the same rhythm type, the same pitches and the same intervals; a similar music type requires that the rhythm types of the two note groups differ but the pitches and intervals are the same, or that the rhythm types and the pitches differ but the intervals are the same, or that the rhythm types are the same or stand in a doubled or halved relationship and more than 50% of the pitches or intervals are the same; a shifted music type requires that the two note groups have the same rhythm type and the same intervals but different pitches; and a mirrored music type requires that the intervals of each pair of corresponding notes to the symmetry axis, taken as the intermediate value between the highest and lowest notes in the group, are opposite numbers of each other.

In the above spectral plane analysis and labeling method, if the notes appearing on the score include double notes, extracting the tones to be compared includes at least one of the following cases: if both note groups are double notes, the upper notes (the notes of higher pitch) of the two groups are compared with each other, the lower notes (the notes of lower pitch) are compared with each other, and the upper and lower notes of the two groups are then cross-compared; and if one of the two note groups is a double note and the other a single note, the single-note group is used as the reference group, and the upper and lower notes of the double-note group are each compared with the single-note group.
Next, the labeling of scales, arpeggios and chords in the spectral plane analysis and labeling method according to the embodiment of the present application will be described.

Scale labeling is expressed as follows: consecutively occurring scale notes are connected by line segments below the staff, and the scale name is marked alongside (e.g., 'C+' for the C major scale, 'Ch-' for the c harmonic minor scale, 'Cx-' for the c melodic minor scale).
The implementation mode is as follows:
1) Note data for 194 single-hand scales are preset. Single-hand identification (i.e., identification within either the treble staff or the bass staff alone) covers: 72 scales in six tonality classes (natural major, harmonic major, melodic major, natural minor, harmonic minor, melodic minor); 2 chromatic scales; 24 diatonic scales in thirds; 24 diatonic scales in sixths; and 72 scales in octaves. Note data for 146 two-hand scales are also preset. Two-hand identification (identification across both staves simultaneously) covers the six tonality classes played by both hands in the same or in opposite directions, as well as two-hand scales in thirds, scales in sixths, chromatic scales in thirds and chromatic scales in sixths.

2) All note sequences for the right hand (typically the treble staff) and for the left hand (typically the bass staff) of the piece are read separately from an electronic score, such as a MusicXML file, and compared with the scale data in 1).

3) All notes are traversed from beginning to end; if the current note belongs to a preset scale, the next note is judged, and so on. If 5 or more consecutive notes lie in a preset scale, a line is drawn below these notes and the possibly matching scales are listed alongside for the user to click and label.

4) The matches in 3) are ranked in recommendation order using the tonality recognition result, i.e., among the results listed in 3), the scale that agrees with the recognized tonality is ranked first. For example, if the tonality of a piece is judged to be a minor while five consecutive tones A-B-C-D-E in the score could belong to either C major or a minor, then a minor is selected as the first recommended option for the user to click.
5) All note sequences for both hands of the piece are read from the electronic score, such as a MusicXML file, and compared simultaneously with the note data of the 146 two-hand scales in 1).

6) All notes are traversed from beginning to end; if the notes currently played by both hands simultaneously belong to a preset scale in 1), the next pair of simultaneously played notes is judged, and so on. If 5 or more consecutive pairs of notes lie in a preset scale, a line is drawn under the notes and the possibly matching scales are listed alongside for the user to click and label.

7) The matches in 6) are ranked in recommendation order using the tonality recognition result, i.e., among the results listed in 6), the scale that agrees with the recognized tonality is ranked first;

8) When both 7) and 4) yield matching options for the user to click and label, only the result of 7) is recommended; if the result obtained in 7) or 4) is unique, it is labeled directly without requiring the user to judge and select again. A sketch of the run detection in steps 2)-3) follows.
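Here the preset scale tables are reduced to pitch-class sets and only two scales are shown, a strong simplification of the 194 preset single-hand entries; the names and data structure are assumptions.

```python
# Preset scales reduced to pitch-class sets (illustrative subset).
SCALES = {
    "C major": {0, 2, 4, 5, 7, 9, 11},
    "a harmonic minor": {0, 2, 4, 5, 8, 9, 11},
}

def find_scale_runs(pitches, min_len=5):
    """Return (start, end, scale_names) spans in which at least min_len
    consecutive notes (MIDI numbers) all belong to some preset scale;
    the candidate names are listed for the user to click and label."""
    runs, i = [], 0
    while i < len(pitches):
        ends = {}
        for name, pcs in SCALES.items():
            j = i
            while j < len(pitches) and pitches[j] % 12 in pcs:
                j += 1
            ends[name] = j
        end = max(ends.values())
        if end - i >= min_len:
            runs.append((i, end, [n for n, j in ends.items() if j == end]))
            i = end
        else:
            i += 1
    return runs
```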
Here, fig. 15 illustrates a schematic flowchart of the scale labeling process in the spectral plane analysis and labeling method according to the embodiment of the present application.

Arpeggio labeling is expressed as follows: consecutively occurring arpeggio notes are connected by line segments below the staff, and the arpeggio name is marked alongside.
The implementation mode is as follows:
1) Single-hand arpeggios are preset: 36 major- and minor-key arpeggios and their inversions, and 36 major- and minor-key dominant-seventh/diminished-seventh chord arpeggios and their inversions. Two-hand arpeggios are preset: the 36 major- and minor-key arpeggios and their inversions played by both hands in the same or in opposite directions, the 36 dominant-seventh/diminished-seventh chord arpeggios and their inversions played by both hands in the same or in opposite directions, and the 36 major- and minor-key arpeggios in thirds.
2) All note sequences for the right hand (typically the treble staff) and for the left hand (typically the bass staff) of the piece are read separately from an electronic score, such as a MusicXML file, and compared with the single-hand arpeggio data in 1).

3) All notes are traversed from beginning to end; if the current note belongs to a preset arpeggio, the next note is judged, and so on. If no fewer than 4 consecutive notes lie in a preset arpeggio, connecting lines are drawn below the notes and the possibly matching arpeggios are listed alongside for the user to click and label.

4) All note sequences for both hands of the piece are read separately from the electronic score, such as a MusicXML file, and compared simultaneously with the two-hand arpeggio data in 1).

5) All notes are traversed from beginning to end; if the notes currently played by both hands simultaneously belong to one of the preset two-hand arpeggios in 1), the next pair of simultaneously played notes is judged, and so on. If 4 or more consecutive pairs of notes lie in a preset arpeggio, connecting lines are drawn below the notes and the possibly matching arpeggios are listed alongside for the user to click and label.

6) When both 5) and 3) yield matching options for the user to click and label, only the result of 5) is recommended; if the result obtained in 5) or 3) is unique, it is labeled directly without requiring the user to judge and select again.
Chord and broken-chord labeling is expressed as follows: the chord name, such as C or Em, is marked below chords played simultaneously (columnar chords) or broken chords played in succession.
The implementation mode is as follows:
1) The interval relations of various chords are preset; for example, the major triad is a major third with a minor third stacked above it, and the minor triad is a minor third with a major third stacked above it.

2) All note sequences of the composition are read from an electronic score, such as a MusicXML file.

3) Identifying and labeling columnar chords: first, all sets of three or more simultaneously played notes (i.e., notes at the same X-axis coordinate position on the same staff) are found, and it is judged whether their interval relations match those of a chord; if a chord is matched, the chord name shorthand is marked above the chord.

4) Identifying and labeling broken chords: all non-simultaneously played notes on the same staff are traversed from beginning to end, and it is judged whether a target note together with the notes preceding it satisfies the interval relations of a chord. If a chord is matched, the chord name is marked above the notes. A sketch of the interval-pattern test follows.
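The test can be sketched as follows; the pattern table is an illustrative subset and classify_chord is a hypothetical helper. Broken chords would apply the same test to a sliding window of successive notes.

```python
# Interval patterns in semitones, stacked upward from the root.
CHORD_PATTERNS = {
    (4, 3): "major triad",      # major third + minor third
    (3, 4): "minor triad",      # minor third + major third
    (3, 3): "diminished triad",
    (4, 4): "augmented triad",
}

def classify_chord(pitches):
    """Name a chord from its pitches (MIDI numbers), or return None.
    Each rotation of the pitch classes is tried as a candidate root,
    so inversions are recognized as well."""
    ps = sorted(set(p % 12 for p in pitches))
    for k in range(len(ps)):
        rot = ps[k:] + [p + 12 for p in ps[:k]]
        steps = tuple(b - a for a, b in zip(rot, rot[1:]))
        if steps in CHORD_PATTERNS:
            return CHORD_PATTERNS[steps], rot[0] % 12
    return None

print(classify_chord([60, 64, 67]))  # ('major triad', 0), i.e. C major
```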
Here, fig. 16 illustrates a schematic flowchart of the chord and broken-chord labeling process in the spectral plane analysis and labeling method according to the embodiment of the present application.

Next, the key notation in the spectral plane analysis and labeling method according to the embodiment of the present application will be described. Here, the key notation includes temporary accidental (sharp/flat/natural) marks and tonality identification.

First, key signature highlighting is implemented, for example, as follows: first, the key signature is read from an electronic score such as a MusicXML file and highlighted; then, the tones raised or lowered by the key signature are identified throughout the score and highlighted in the same color.
Next, temporary accidental marks and natural (restore) marks are highlighted. The specific implementation is as follows:

1) All note sequences are read from an electronic score, such as a MusicXML file, and the staff is generated.

2) All temporary sharp/flat marks and natural marks on the score are identified, and the notes they modify (i.e., the note whose X-axis coordinate is smallest after the mark and whose Y-axis coordinate equals that of the mark) are highlighted.

3) All notes within the current measure that have the same pitch as the note modified in 2) and that follow the temporary mark in sequence are identified and highlighted. A sketch of steps 2)-3) follows.
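A sketch of this accidental propagation is given below, assuming each measure is a list of simple note records; the field names are assumptions.

```python
def accidental_highlights(measure):
    """Indices of notes to highlight in one measure: every note carrying
    a temporary accidental, plus every later note of the same pitch.

    measure: list of dicts such as
    {"pitch": 61, "accidental": "sharp"}  # accidental may be None
    """
    marked = set()
    for i, note in enumerate(measure):
        if note["accidental"]:
            marked.add(i)
            for j in range(i + 1, len(measure)):
                if measure[j]["pitch"] == note["pitch"]:
                    marked.add(j)
    return sorted(marked)
```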
Then tonality recognition is performed, expressed as follows: the key of the piece is identified and marked in the upper-left corner of the score.
The specific implementation mode is as follows:
1) The key signature, the first and last tones, the root of the final chord, the frequency of tonic-chord tones, and the chord progression are each assigned a weight, the weights summing to 100%.

2) A condition threshold is set for deciding whether the piece has a clear tonality.

3) All note sequences and the key signature information of the piece are read from an electronic score, such as a MusicXML file.

4) The key signature is read and its weighted value is obtained according to its weight.

5) The first and last tones of the piece are read and judged; if they are the same, a weighted value with that tone as the key is obtained.

6) The root of the final chord of the composition is judged, and a weighted value with that root as the key is obtained.

7) All tones of the piece are mapped from low to high into the same octave to form a scale, and the weighted value corresponding to the high-frequency occurrence of the tonic-chord tones (degrees 1, 3, 5, 6) is obtained according to their frequency of occurrence.

8) The chords in the melodies of the first and last sentences are found and their progression is judged. If the progression runs tonic chord -> dominant-seventh chord -> tonic chord, a weighted value with the root of that tonic chord as the key is obtained.

9) All tones of the piece are mapped from low to high into the same octave to form a scale; the raising or lowering of the sixth and seventh degrees and the intervals are inspected to determine whether the mode is major or minor.

10) All the obtained weighted values are added; if the sum is greater than or equal to the condition threshold, the key of the piece is established, with major or minor determined according to 9). The key may then be marked, for example, in the upper-left corner of the score. A sketch of this weighted voting follows.
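A minimal sketch of the voting is given below; the concrete weights and threshold are assumptions (the text only requires that the weights sum to 100%), and the evidence extraction of steps 4)-8) is abstracted into a prepared dictionary.

```python
# Illustrative weights; the actual values are design choices summing to 1.0.
WEIGHTS = {"key_signature": 0.30, "first_last": 0.15, "final_root": 0.20,
           "tonic_freq": 0.20, "progression": 0.15}
THRESHOLD = 0.60  # assumed condition threshold for a "clear tonality"

def identify_key(evidence):
    """evidence: criterion name -> key it points to (or None if unknown),
    e.g. {"key_signature": "C", "final_root": "C", ...}.
    Returns (key, score) if the accumulated weight reaches the threshold."""
    scores = {}
    for criterion, key in evidence.items():
        if key is not None:
            scores[key] = scores.get(key, 0.0) + WEIGHTS[criterion]
    if not scores:
        return None
    key, score = max(scores.items(), key=lambda kv: kv[1])
    return (key, score) if score >= THRESHOLD else None

print(identify_key({"key_signature": "C", "first_last": "C",
                    "final_root": "C", "tonic_freq": "C",
                    "progression": None}))  # ('C', 0.85)
```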
Fig. 17 illustrates a schematic flowchart of tonality labeling in the spectral plane analysis and labeling method according to an embodiment of the present application.

That is, the above spectral plane analysis and labeling method further comprises: analyzing and labeling the key signature and temporary accidental marks of the electronic music score; analyzing and labeling scales, chords and arpeggios based on the key signature analysis result; and performing tonality analysis and labeling based on the analysis results for the key signature, the temporary accidentals, the chords and the pitches.
Next, special fingering in the spectral plane analysis and labeling method according to the embodiment of the present application will be described. Special fingering labels include the following aspects.

1. Finger extension refers to the fingering in which the hand opens laterally: the two striking fingers plus the fingers between them number fewer than the keys spanned between the two notes struck.

It is expressed as follows: the notes to be played with extended fingers are marked above with a label, such as the capital letter "S".
The implementation method comprises the following steps:
1) All note sequences of the composition and the fingering of each note (i.e., which finger plays the note) are read from an electronic score, such as a MusicXML file.

2) All notes are traversed; if the absolute value of the fingering difference between the current note and the next note is smaller than the absolute value of the interval difference between them, and the absolute interval difference lies in [2,8], then the passage from the current note to the next must be played with an extended finger. For example, if the current tone is C with fingering 1 and the next tone is E with fingering 2, then 2-1 < 2 (the interval difference from C to E being 2), so the next tone must be played with an extended finger.

3) The capital letter "S" is marked above all notes that must be played with extended fingers. A sketch of the test in 2) follows.
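The test can be sketched as follows, assuming each note record carries a diatonic degree number and a finger number; needs_extension is a hypothetical helper, and the symmetric test with ">" gives the contracted-finger case described next.

```python
def needs_extension(curr, nxt):
    """Extension: the finger-number span is smaller than the interval
    span, with the absolute interval difference within [2, 8].

    curr, nxt: dicts such as {"degree": 1, "finger": 1}, where degree
    is a diatonic step number and finger is 1-5."""
    finger_diff = abs(nxt["finger"] - curr["finger"])
    interval_diff = abs(nxt["degree"] - curr["degree"])
    return finger_diff < interval_diff and 2 <= interval_diff <= 8

# Text example: C with finger 1, then E with finger 2 (interval difference 2).
print(needs_extension({"degree": 1, "finger": 1},
                      {"degree": 3, "finger": 2}))  # True
```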
Fig. 18 illustrates a schematic flowchart of extended-finger labeling in the spectral plane analysis and labeling method according to the embodiment of the present application.

2. Finger contraction refers to the fingering in which the hand closes laterally: the two striking fingers plus the fingers between them number more than the keys spanned between the two notes struck. It is generally required when a group of alternately ascending or descending notes with a large interval span is encountered, e.g., C, G, E, high C.

Contracted-finger labeling is expressed as follows: the notes to be played with contracted fingers are marked above with a label, such as the lowercase letter "s".
The implementation method comprises the following steps:
1) All note sequences of the composition and the fingering of each note (i.e., which finger plays the note) are read from an electronic score, such as a MusicXML file.

2) All notes are traversed; if the absolute value of the fingering difference between the current note and the next note is greater than the absolute value of the interval difference between them, then the passage from the current note to the next must be played with a contracted finger. For example, if the current tone is C with fingering 1 and the next tone is E with fingering 5, then 5-1 > 2 (the interval difference from C to E being 2), so the next tone must be played with a contracted finger.

3) The lowercase letter "s" is marked above all notes that must be played with contracted fingers.
Fig. 19 illustrates a schematic flowchart of contracted-finger labeling in the spectral plane analysis and labeling method according to an embodiment of the present application.

3. Thumb-under (finger passing) is the fingering in which finger 1 (the thumb) passes beneath finger 2, 3 or 4 to play a higher tone.

It is expressed as follows: the notes to be played with the thumb passing under are marked above with a label, such as a dedicated symbol.
The implementation method comprises the following steps:
1) All note sequences of the composition and the fingering of each note (i.e., which finger plays the note) are read from an electronic score, such as a MusicXML file.

2) All notes are traversed. If the current note and the next note form an ascending pattern, i.e., the pitch of the current note is lower than that of the next, and the next note is fingered 1, then the next note must be played by passing the thumb under. If the current note and the next note form a descending pattern, i.e., the pitch of the current note is higher than that of the next, the current note is fingered 1 and the next note is not fingered 1, then that note must likewise be marked for crossing.

3) The symbol is marked above all notes to be played with the thumb passing under.

4. Same-note finger substitution is the case in which two adjacent, identical tones are played with different fingers.

It is expressed as follows: the fingering numbers of the notes requiring same-note substitution are enclosed with a label, such as a circle.
The implementation mode is as follows:
1) All note sequences of the composition and the fingering of each note (i.e., which finger plays the note) are read from an electronic score, such as a MusicXML file.

2) All notes are traversed; if the current note and the next note have the same pitch but different fingerings, the next note requires same-note finger substitution.

3) Circles are drawn around the fingering numbers of all notes requiring same-note substitution.

5. Hand position change means moving the whole hand left or right in order to play the subsequent notes.

It is expressed as follows: the note requiring a hand position change is marked above with a label, such as the capital letter "C".
The implementation mode is as follows:
1) All note sequences of the composition and the fingering of each note (i.e., which finger plays the note) are read from an electronic score, such as a MusicXML file.

2) Two adjacent tones are marked as a hand position change when they are neither played with directly consecutive fingers nor covered by any of the fingering arrangements in items 1-4 above (not extended, not contracted, not passed under, not crossed over, not same-note substituted).

3) The capital letter "C" is marked above all positions where the hand position must change.
Next, musical structure analysis and labeling in the spectral plane analysis and labeling method according to the embodiment of the present application will be described.

First, music passage labeling is described. It is expressed as follows: the paragraph structure of the piece is identified and marked by measures.
The specific implementation mode is as follows:
1) All note sequences of the composition are read from an electronic score, such as a MusicXML music file.
2) All notes are grouped by measure. It has been observed that most pieces contain no more than 4 passages, so the total number of measures can be divided by 4 and rounded down to obtain the theoretical minimum consecutive measure count N of a passage; since phrase breaks do not always fall at the end of a complete measure, the system uses N-1 as its minimum passage length M. Example: a piece has 32 measures; 32/4 = 8, so 8 is the theoretical minimum N and the system takes M = N-1 = 7; i.e., if two identical/similar segments longer than 7 measures appear in the piece, two identical/similar passages are considered identified.

3) All measures are traversed, comparing measure I with measure I+M; if they are identical, measure I+1 is compared with measure I+M+1, and so on, finally yielding two segments. If both segments are longer than M measures, the two passages have the same structure.

4) An intra-measure note similarity threshold N% is set, i.e., if the tones in two measures are at least N% identical, the measures are considered to belong to a similar structure.

5) All measures are traversed, comparing measure I with measure I+M; if more than N% of the tones are identical, measure I+1 is compared with measure I+M+1, and so on, finally yielding two segments. If both segments are longer than M measures and are not identical, the two passages have a similar structure.

6) All passages not identified as identical to another passage are identified as independent, unrepeated passages.

7) A passage must consist of at least 2 phrases.

8) The upper-left corner of the first measure of each identified passage is marked with a shape (square, circle, triangle, etc.) combined with a letter; identical passages receive the same shape and letter. Example: A with a square denotes one passage, and an identical passage is also marked A with a square; the next passage is marked B with a circle; and so on.

9) Similar passages are marked with the same shape, with a prime added at the upper right of the letter. Example: if A with a square denotes a passage, a similar passage is marked A' with a square. A sketch of the matching in steps 2)-5) follows.
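One reading of steps 2)-5) is sketched below; the bar representation and the predicate same_bar (exact equality, or at least N% identical tones for the similar case) are assumptions, and the two segments are kept non-overlapping, so a match of M consecutive measures is taken as sufficient.

```python
def min_passage_bars(total_bars, max_passages=4):
    """M = floor(total/4) - 1: the theoretical minimum N with one
    measure of tolerance for phrase breaks off the bar line."""
    return max(1, total_bars // max_passages - 1)

def find_passage_pairs(bars, same_bar):
    """Return (start1, start2, length) for passage pairs of at least
    M matching measures, comparing measure I with measure I+M."""
    m = min_passage_bars(len(bars))
    results = []
    for i in range(len(bars) - m):
        j = i + m
        length = 0
        while (i + length < j and j + length < len(bars)
               and same_bar(bars[i + length], bars[j + length])):
            length += 1
        if length >= m:
            results.append((i, j, length))
    return results
```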
Fig. 20 illustrates a schematic flowchart of music passage labeling in the spectral plane analysis and labeling method according to an embodiment of the present application.

Next, phrase identification and labeling are described. They are expressed as follows: the phrase structure of the piece is identified and the phrases are marked with arcs.

In the embodiment of the application, the recognition results for music types, rhythm types, scales, arpeggios, passages and the like can serve as weighting conditions for phrase recognition; in addition, cadence types such as full cadences and half cadences are used.
The specific implementation mode of phrase identification is as follows:
1) All note sequences of the composition are read from an electronic score, such as a MusicXML music file.
2) A phrase is made up of music types. Looking at the first music type at the beginning of the next phrase: it starts from the same material (a similar music type) as the first phrase, which is an important basis for dividing musical structure ("same head"), namely: the same material can divide the musical structure. Therefore, if the music type structure of a preceding segment is similar to that of the following segment, the music should be divided into two phrases. For example, two music types A and B compare as similar (identical rhythm type, different pitches, different intervals), and the tail of the first is a long tone; the music is then split into two phrases.

3) A phrase has some form of half cadence or full cadence, e.g., its tail is the dominant or the tonic; various cadence chord rules, such as full or half cadential progressions (including the cadential six-four), may be preset.

Here, the cadence types include:

Authentic cadence: a progression from the dominant (or leading-tone chord) to the tonic, such as V-I or vii-I; it may be a Perfect Authentic Cadence (PAC) or an Imperfect Authentic Cadence (IAC), e.g., with a chord inverted. Plagal cadence: a progression from the subdominant to the tonic, such as IV-I, the Plagal Cadence (PC), also known as the church or "Amen" cadence. Half cadence: a progression ending on the dominant, such as I-V, the Half Cadence (HC); in minor, iv6-V forms the Phrygian cadence. Deceptive (false) cadence: the Deceptive Cadence (DC), of which V-vi is the most common. Elided cadence: the ending point of one phrase is simultaneously the beginning point of the next. Picardy third: in a minor-key cadence, the minor third of the final tonic chord is raised to a major third.

4) A phrase has a certain length, typically around 4 measures, possibly 8 or more. A phrase is necessarily shorter than the passage containing it and longer than the music types inside it (music type < phrase < passage).

5) A scale or arpeggio, as a basic figure, cannot be split across two phrases; therefore a basic figure in which a scale, arpeggio or chromatic scale appears must be contained entirely within the current phrase.

6) The tail of each phrase is a long tone, so the duration of the final note of a phrase is at least half of the beat unit given by the time-signature denominator.

7) Repeat signs, double bar lines, fermata marks, termination marks among the musical terms, and the like always fall at phrase boundaries.

8) The recognition of musical symbols and musical terms described below can also assist phrase recognition; for example, a legato slur generally guides sentence breaks, which should not cut through the slur. A sketch of the boundary heuristics in 6)-7) follows.
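A rough sketch of the heuristics in rules 6) and 7) is given below; the note representation, the marker vocabulary and the reading of "half of the time-signature denominator" are all assumptions, and a full implementation would weight these cues together with cadence, music type and passage evidence.

```python
def likely_phrase_end(note, beat_denominator):
    """Rule 7): repeat signs, double bar lines, fermatas and termination
    marks always mark a boundary. Rule 6): a phrase-final note is a long
    tone, read here as at least half the beat-unit denominator."""
    hard_marker = note.get("marker") in {"repeat", "double_bar",
                                         "fermata", "termination"}
    long_tone = note["duration"] >= beat_denominator / 2
    return hard_marker or long_tone
```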
That is, the above spectral plane analysis and labeling method further comprises: performing passage analysis and labeling on the electronic music score; and performing phrase analysis and labeling based on the analysis results for music types, rhythm types, scales, arpeggios and chords, musical term and symbol labels, and passages.

Next, other contents of the spectral plane analysis and labeling method according to the embodiment of the present application will be described.

Musical symbol and term labeling is presented as follows: the musical terms and symbols on the score (covering dynamics, tempo, expression, style, performance technique, period and genre, etc.) are identified; a definition is shown while the mouse hovers over them, and the user may choose to display a given definition at the bottom of the score, associated by number with the symbol on the score for use after printing.
The specific implementation mode is as follows:
1) Various musical terms and symbols and their definitions are preset.

2) The musical terms and symbols are read from the electronic score, e.g., a MusicXML file.

3) When the mouse moves over a musical term or symbol, its preset definition is read and displayed as a tooltip.

4) If the user clicks the "show at bottom" button in the tooltip, a numerical label is displayed next to the term or symbol and the definition is displayed at the bottom of the score, associated by the same numerical label.

The knowledge graph is expressed as follows: when the user selects elements such as the title or composer of the piece, a button for jumping to the corresponding Baidu Baike encyclopedia entry is displayed, and the entry's content is shown in a popup beside it. The user may display any selected content (or edit it manually) at the bottom of the score and associate it with a numerical label.
The implementation mode is as follows:
1) Elements such as the title and composer are read from the electronic score, e.g., a MusicXML file.

2) When the user selects the title or composer, a button that jumps to the corresponding Baidu Baike entry is displayed beside it.

3) When the user clicks the button, the encyclopedia content of the entry is obtained through the Baidu Baike API and displayed in a popup.

4) If the user selects content (which may be edited manually), a "show at bottom" button is displayed; clicking it displays a numerical label beside the entry and the selected content at the bottom of the score, associated by the numerical label.

The period analysis of the composition is presented as follows: based on the features described above (tonality, musical symbols and terms, scale/arpeggio/chord features, beat and rhythm features, phrase and passage features, and others), the period characteristics of the work are judged comprehensively and a conclusion is given on the score; the corresponding extracted features are highlighted in color on the score, each highlighted portion can be clicked by the user, and its number is marked on the score.
The implementation mode is as follows:
1) Making a judgment according to the following table;
2) Through the keyword-association function of the knowledge graph, the period conclusion for most works can be found and compared with the conclusion of 1); if they agree, the conclusion stands; if they disagree, an "uncertain" red warning must appear so that the user can adjust the judgment features selected by the machine in 1) until the conclusion of 1) matches that of 2), thereby improving analysis accuracy.

In addition, the spectral plane analysis and labeling method according to the embodiment of the application can also employ artificial intelligence deep learning techniques to improve the accuracy of music type analysis, phrase analysis, tonality analysis, period feature analysis and the like.

The concrete expression is as follows: the music type labeling described above marks line segments of different forms and colors below the notes of a music type in the staff, and the user can manually drag and adjust the beginnings of the segments;

the phrase labeling described above marks arcs for the phrases, and the user can manually drag and adjust the positions of the arcs;

in the tonality feature analysis and the period feature analysis described above, there is a very small probability that the judgment disagrees with the result obtained by crawling the knowledge graph.

In summary, a process of manual adjustment and intervention to settle the result is needed; during user labeling, the program therefore records divergent results, and if many users make the same adjustment, machine deep learning can record and learn from these continuously updated and optimized results and make improved spectral plane labeling recommendations.
Exemplary apparatus
FIG. 21 illustrates a block diagram of a spectral plane analysis and labeling apparatus according to an embodiment of the present application.

As shown in fig. 21, the spectral plane analysis and labeling apparatus 200 according to the embodiment of the present application includes: a time signature determining unit 210 for determining the time signature of the electronic music score; a beat and rhythm type analysis unit 220 for performing beat labeling and rhythm type analysis and labeling on the electronic music score based on the determined time signature; a pitch and interval labeling unit 230 for performing pitch and interval labeling on the electronic music score; and a music type analysis and labeling unit 240 for analyzing and labeling music types based on the labeling results for pitch, interval and rhythm type.

In the above spectral plane analysis and labeling apparatus, the beat and rhythm type analysis unit 220 is configured to: determine a preset basic rhythm type, and a double-augmented rhythm type and a double-divided rhythm type of the basic rhythm type, based on the rhythm combination characteristics of the electronic music score; and perform rhythm type analysis on the electronic music score by comparison with the basic rhythm type, the double-augmented rhythm type and the double-divided rhythm type.

In the above spectral plane analysis and labeling apparatus, the rhythm type labeling performed by the beat and rhythm type analysis unit 220 on the electronic music score includes labeling rhythm marks with note durations accumulated beat by beat, including: defining the start and end coordinates of each rhythm mark; defining the position coordinates of the several rhythm marks when one note spans several beats; and, within the rhythm mark of one unit, calculating the proportion of the duration of each note contained in the beat, dividing the rhythm mark according to the calculated proportions, and truncating the mark for each note.
In the above spectral plane analysis and labeling apparatus, the pitch and interval labeling unit 230 is configured to: perform C localization, FG localization and EFGA localization on the electronic music score.

In the above spectral plane analysis and labeling apparatus, the pitch and interval labeling unit 230 is configured to: label the intra-measure intervals and inter-measure intervals of the electronic music score.

In the above spectral plane analysis and labeling apparatus, the pitch and interval labeling unit 230 is further configured to: label the tones played by the left hand and the right hand of the electronic music score.

In the above spectral plane analysis and labeling apparatus, the music type analysis and labeling unit 240 is configured to: determine a reference note group having a predetermined number of notes, the reference note group constituting a reference music type; and traverse the other note groups in the electronic music score having the predetermined number of notes to determine, based on the pitch, duration and interval of each group, the music types that are the same as, shifted from, mirrored from or similar to the reference music type.

In the above spectral plane analysis and labeling apparatus, at least one of the following holds: the same music type requires that the two note groups have the same rhythm type, the same pitches and the same intervals; a similar music type requires that the rhythm types of the two note groups differ but the pitches and intervals are the same, or that the rhythm types and the pitches differ but the intervals are the same, or that the rhythm types are the same or stand in a doubled or halved relationship and more than 50% of the pitches or intervals are the same; a shifted music type requires that the two note groups have the same rhythm type and the same intervals but different pitches; and a mirrored music type requires that the two note groups have the same rhythm type and the same intervals, with the pitch arrangement reversed.

In the above spectral plane analysis and labeling apparatus, if the notes appearing on the score include double notes, extracting the tones to be compared includes at least one of the following cases: if both note groups are double notes, the upper notes (the notes of higher pitch) of the two groups are compared with each other, the lower notes (the notes of lower pitch) are compared with each other, and the upper and lower notes of the two groups are then cross-compared; and if one of the two note groups is a double note and the other a single note, the single-note group is used as the reference group, and the upper and lower notes of the double-note group are each compared with the single-note group.
In the above-mentioned spectral plane analysis and labeling device, further comprising: the adjusting analysis and marking unit is used for carrying out the analysis and marking of the tuning and temporary lifting marks on the electronic music score; analyzing and labeling musical scales, chords and arpeggies based on the analysis result of the key numbers; and performing adjustment analysis and labeling based on the analysis results of the key, the temporary lifting mark, the chord and the pitch.
In the above-mentioned spectral plane analysis and labeling device, further comprising: the music piece phrase analysis and marking unit is used for performing music piece analysis and marking on the electronic music score; and performing phrase analysis and annotation based on the music type, the rhythm type, the musical scale, the arpeggio and the chord, the music term and symbol annotation and the analysis result of the music piece.
In the above-mentioned spectral plane analysis and labeling device, further comprising: the composition period characteristic analysis and marking unit is used for marking music terms and symbols on the electronic music score; and analyzing and labeling the work period characteristics of the electronic music score based on the labeling results of the music terms and the symbols and combining the labeling results of the music type, the musical scale, the arpeggio, the chord, the tonality, the beat rhythm type and the phrase music piece.
In the above-mentioned spectral plane analysis and labeling device, further comprising: and the special fingering analysis and labeling unit is used for carrying out special fingering analysis and labeling on the electronic music score based on the musical interval labeling result.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above spectral plane analysis and labeling apparatus 200 have been described in detail in the description of the spectral plane analysis and labeling method with reference to figs. 1 to 20, and repeated description is therefore omitted.

As described above, the spectral plane analysis and labeling apparatus 200 according to the embodiment of the present application may be implemented in various terminal devices, such as a smartphone, a computer or a server. In one example, the apparatus 200 may be integrated into the terminal device as a software module and/or a hardware module, for example as a software module in the operating system of the terminal device or as an application developed for the terminal device; of course, the apparatus 200 may equally be one of many hardware modules of the terminal device.

Alternatively, in another example, the spectral plane analysis and labeling apparatus 200 and the terminal device may be separate devices connected through a wired and/or wireless network, transmitting interaction information in an agreed data format.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 22.
Fig. 22 illustrates a block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 22, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), hard disks, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the spectral plane analysis and labeling methods of the various embodiments of the present application described above and/or other desired functions. Various contents such as rhythm types, pitches, intervals and music types may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 can output various information to the outside, including the labeled spectral plane and the like. The output device 14 may include, for example, a display, speakers, a printer, and a communication network with the remote output devices connected to it.
Of course, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 22 for simplicity, components such as buses, input/output interfaces, etc. being omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the spectral plane analysis and labeling method according to various embodiments of the present application described in the "exemplary methods" section of this specification.
The computer program product may write program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the first user computing device, partly on the first user device, as a stand-alone software package, partly on the first user computing device, partly on a remote computing device, or entirely on a remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps of the spectral plane analysis and labeling method according to various embodiments of the present application described in the "exemplary methods" section above.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, equipment and systems referred to in this application are only illustrative examples and are not intended to require or imply that connections, arrangements and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, equipment and systems may be connected, arranged and configured in any manner. Words such as "including", "comprising" and "having" are open-ended, mean "including but not limited to", and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as but not limited to".
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent to the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.
Claims (15)
1. A spectral plane analysis and labeling method for automatically analyzing and labeling an electronic music score, comprising:

acquiring an electronic music score;

determining a time signature of the electronic music score;

performing beat labeling and rhythm type analysis and labeling on the electronic music score based on the determined time signature;

performing pitch and interval labeling on the electronic music score;

analyzing and labeling music types based on the labeling results for pitch, interval and rhythm type; and

displaying the labeled information.
2. The spectral plane analysis and labeling method of claim 1, wherein performing rhythm type analysis on the electronic music score comprises:

determining a preset basic rhythm type, and a double-augmented rhythm type and a double-divided rhythm type of the basic rhythm type, based on the rhythm combination characteristics of the electronic music score; and

performing rhythm type analysis on the electronic music score by comparison with the basic rhythm type, the double-augmented rhythm type and the double-divided rhythm type.
3. The spectral plane analysis and labeling method of claim 1, wherein rhythm type labeling of the electronic music score comprises labeling rhythm marks with note durations accumulated beat by beat, comprising:

defining the start and end coordinates of each rhythm mark;

defining the position coordinates of the several rhythm marks when one note spans several beats; and

within the rhythm mark of one unit, calculating the proportion of the duration of each note contained in the beat, dividing the rhythm mark according to the calculated proportions, and truncating the mark for each note.
4. The spectral face analysis and labeling method of claim 1, wherein pitch labeling the electronic score comprises:
c positioning, FG positioning and EFGA positioning are carried out on the electronic music score.
5. The spectral plane analysis and labeling method of claim 1, wherein interval labeling the electronic music score comprises:
labeling the intra-measure intervals and the inter-measure intervals of the electronic music score.
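One concrete (hypothetical) reading of interval labeling: measure each successive interval in semitones from MIDI pitch numbers and attach its conventional name. The naming table is standard music theory rather than a detail specified by the claim:

```python
# Minimal sketch: label the interval between successive notes.
# Pitches are MIDI note numbers (60 = middle C).

INTERVAL_NAMES = {
    0: "unison", 1: "minor 2nd", 2: "major 2nd", 3: "minor 3rd",
    4: "major 3rd", 5: "perfect 4th", 6: "tritone", 7: "perfect 5th",
    8: "minor 6th", 9: "major 6th", 10: "minor 7th", 11: "major 7th",
    12: "octave",
}

def label_intervals(pitches):
    """Return (semitones, name) for each pair of successive pitches."""
    out = []
    for a, b in zip(pitches, pitches[1:]):
        semitones = abs(b - a)
        out.append((semitones, INTERVAL_NAMES.get(semitones, f"{semitones} semitones")))
    return out

print(label_intervals([60, 64, 67, 72]))
# [(4, 'major 3rd'), (3, 'minor 3rd'), (5, 'perfect 4th')]
```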
6. The spectral plane analysis and labeling method of claim 5, wherein interval labeling the electronic music score further comprises:
labeling the notes played by the left hand and the right hand of the electronic music score.
7. The spectral plane analysis and labeling method of claim 5, wherein analyzing and labeling music types based on the labeling results of the pitch, interval and rhythm type comprises:
determining a reference group of notes having a predetermined number of notes, the reference group of notes constituting a reference music type; and
traversing the other groups of notes in the electronic music score having the predetermined number of notes, to determine music types that are the same as, displacements of, mirror images of, or similar to the reference music type, based on the pitch, duration and interval of each group of the predetermined number of notes.
8. The spectral plane analysis and labeling method of claim 7, comprising at least one of:
two groups of notes are of the same music type if they have the same rhythm type, the same pitches and the same intervals;
two groups of notes are of a similar music type if their rhythm types differ but their pitches and intervals are the same;
two groups of notes are of a similar music type if their rhythm types differ and their pitches differ, but their intervals are the same;
two groups of notes are of a similar music type if their rhythm types are the same, or in a multiplied or divided relationship, and more than 50% of their pitches or intervals are the same;
two groups of notes are of the displacement music type if they have the same rhythm type and the same intervals but different pitches;
two groups of notes are of the mirror image music type if the intervals from each corresponding note of the two groups to the symmetry axis, taken as the middle value between the highest and lowest notes of the group, are opposite numbers of each other.
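The rules above can be condensed into a small classifier. The sketch below is a simplified illustration, assuming each group is a list of (MIDI pitch, duration) pairs of equal length; it implements the same/displacement/mirror rules and only the rhythm-varied similarity cases, omitting the 50% threshold for brevity. Names are illustrative, not from the patent:

```python
# Pairwise music type classification per the rules of this claim (simplified).

def intervals(pitches):
    """Successive pitch differences in semitones."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def classify_pair(g1, g2):
    p1, d1 = [n[0] for n in g1], [n[1] for n in g1]
    p2, d2 = [n[0] for n in g2], [n[1] for n in g2]
    same_rhythm = d1 == d2
    same_pitch = p1 == p2
    same_interval = intervals(p1) == intervals(p2)

    if same_rhythm and same_pitch and same_interval:
        return "same"
    if same_rhythm and same_interval:
        return "displacement"  # same rhythm and intervals, pitches shifted

    # Mirror image: offsets to each group's symmetry axis (midpoint of the
    # highest and lowest note in the group) are opposite numbers.
    axis1, axis2 = (max(p1) + min(p1)) / 2, (max(p2) + min(p2)) / 2
    if same_rhythm and all(a - axis1 == -(b - axis2) for a, b in zip(p1, p2)):
        return "mirror"
    if not same_rhythm and (same_interval or same_pitch):
        return "similar"       # rhythm differs, pitch or interval preserved
    return "unrelated"

print(classify_pair([(60, 1), (62, 1), (64, 1)], [(65, 1), (67, 1), (69, 1)]))  # displacement
print(classify_pair([(60, 1), (62, 1), (64, 1)], [(64, 1), (62, 1), (60, 1)]))  # mirror
```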
9. The spectral plane analysis and labeling method of claim 8, wherein, if the notes appearing on the spectral plane include double notes, extracting the notes to be compared comprises at least one of:
if both groups of notes contain double notes, comparing the upper notes of the two groups, i.e. the notes with the higher pitch, with each other, comparing the lower notes, i.e. the notes with the lower pitch, with each other, and then cross-comparing the upper and lower notes of the two groups;
if one of the two groups of notes contains double notes and the other group contains single notes, taking the double-note group as the reference group and comparing the upper notes and the lower notes of the double-note group respectively with the notes of the single-note group.
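A minimal sketch of the pairing logic in claim 9, assuming each onset is given as a list of simultaneous MIDI pitches (one entry for a single note, two for a double note); function and variable names are illustrative:

```python
# Enumerate the line-versus-line comparisons for two note groups.

def voices(group):
    """Split a group into its upper line (higher pitches) and lower line."""
    return [max(o) for o in group], [min(o) for o in group]

def comparison_pairs(g1, g2):
    """List the (line, line) comparisons to run for the two groups."""
    u1, l1 = voices(g1)
    u2, l2 = voices(g2)
    double1 = any(len(o) > 1 for o in g1)
    double2 = any(len(o) > 1 for o in g2)
    if double1 and double2:
        # upper vs upper, lower vs lower, then cross comparisons
        return [(u1, u2), (l1, l2), (u1, l2), (l1, u2)]
    if double1 or double2:
        # The double-note group is the reference group; compare both of its
        # lines against the single-note group's sole line (upper == lower).
        ref_u, ref_l = (u1, l1) if double1 else (u2, l2)
        single = u2 if double1 else u1
        return [(ref_u, single), (ref_l, single)]
    return [(u1, u2)]
```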
10. The spectral plane analysis and labeling method of claim 1, further comprising:
analyzing and labeling the key signature and accidentals of the electronic music score;
analyzing and labeling scales, chords and arpeggios based on the analysis result of the key signature; and
performing tonality analysis and labeling based on the analysis results of the key signature, accidentals, chords and pitches.
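As one hedged illustration of the key signature analysis step, the sketch below reads candidate keys off the circle of fifths; this is standard music theory, not the patent's specific procedure, and all names are illustrative:

```python
# Candidate keys implied by a key signature (circle of fifths).

SHARP_MAJORS = ["C", "G", "D", "A", "E", "B", "F#", "C#"]
FLAT_MAJORS = ["C", "F", "Bb", "Eb", "Ab", "Db", "Gb", "Cb"]
RELATIVE_MINOR = {
    "C": "A", "G": "E", "D": "B", "A": "F#", "E": "C#", "B": "G#",
    "F#": "D#", "C#": "A#", "F": "D", "Bb": "G", "Eb": "C", "Ab": "F",
    "Db": "Bb", "Gb": "Eb", "Cb": "Ab",
}

def candidate_keys(n_sharps=0, n_flats=0):
    """Return the major key and its relative minor for a key signature;
    accidentals and cadences would be needed to disambiguate the two."""
    major = SHARP_MAJORS[n_sharps] if n_sharps else FLAT_MAJORS[n_flats]
    return f"{major} major", f"{RELATIVE_MINOR[major]} minor"

print(candidate_keys(n_sharps=2))  # ('D major', 'B minor')
```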
11. The spectral plane analysis and labeling method of claim 10, further comprising:
performing music section analysis and labeling on the electronic music score; and
performing phrase analysis and labeling based on the music types, rhythm types, scales, arpeggios and chords, the music term and symbol labeling, and the analysis results of the music sections.
12. The spectral plane analysis and labeling method of claim 11, further comprising:
labeling the music terms and symbols of the electronic music score; and
analyzing and labeling the period characteristics of the work in the electronic music score based on the labeling results of the music terms and symbols, together with the labeling results of the music types, scales, arpeggios, chords, tonality, meter and rhythm types, and phrases and music sections.
13. The spectral plane analysis and labeling method of claim 11, further comprising:
performing special fingering analysis and labeling on the electronic music score based on the interval labeling results.
14. A spectral plane analysis and labeling device for automatically analyzing and labeling an electronic music score, comprising:
an electronic music score acquisition unit for acquiring an electronic music score;
a time signature determination unit for determining the time signature of the electronic music score;
a beat and rhythm type analysis unit for performing beat labeling and rhythm type analysis and labeling on the electronic music score based on the determined time signature;
a pitch and interval labeling unit for performing pitch and interval labeling on the electronic music score;
a music type analysis and labeling unit for analyzing and labeling music types based on the labeling results of the pitch, interval and rhythm type; and
a labeling display unit for displaying the information labeled by each of the above units.
15. An electronic device, comprising:
a processor; and
a memory having stored therein computer program instructions that, when executed by the processor, cause the processor to perform the spectral plane analysis and labeling method of any one of claims 1-13.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011578345.7A CN117496792A (en) | 2020-12-28 | 2020-12-28 | Spectral plane analysis and labeling method and device and electronic equipment |
PCT/CN2021/142134 WO2022143679A1 (en) | 2020-12-28 | 2021-12-28 | Sheet music analysis and marking method and apparatus, and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011578345.7A CN117496792A (en) | 2020-12-28 | 2020-12-28 | Spectral plane analysis and labeling method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117496792A true CN117496792A (en) | 2024-02-02 |
Family
ID=89666488
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011578345.7A Pending CN117496792A (en) | 2020-12-28 | 2020-12-28 | Spectral plane analysis and labeling method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117496792A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wattenberg | Arc diagrams: Visualizing structure in strings | |
US10614786B2 (en) | Musical chord identification, selection and playing method and means for physical and virtual musical instruments | |
JP6197631B2 (en) | Music score analysis apparatus and music score analysis method | |
US7875787B2 (en) | Apparatus and method for visualization of music using note extraction | |
US8912418B1 (en) | Music notation system for two dimensional keyboard | |
US8835737B2 (en) | Piano tablature system and method | |
JP5790686B2 (en) | Chord performance guide apparatus, method, and program | |
US20060191399A1 (en) | Fingering guidance apparatus and program | |
CN102663423A (en) | Method for automatic recognition and playing of numbered musical notation image | |
CN105280170A (en) | Method and device for playing music score | |
JPH09293083A (en) | Music retrieval device and method | |
Cook | Computational and comparative musicology | |
JP6889420B2 (en) | Code information extraction device, code information extraction method and code information extraction program | |
Rocamora et al. | Tools for detection and classification of piano drum patterns from candombe recordings | |
WO1994011857A1 (en) | Improvements in and relating to musical computational devices | |
US20150310876A1 (en) | Raw sound data organizer | |
JP3963112B2 (en) | Music search apparatus and music search method | |
CN116034421A (en) | Musical composition analysis device and musical composition analysis method | |
CN108806394A (en) | A kind of piano playing software fingering display methods and fingering storage device | |
CN117496792A (en) | Spectral plane analysis and labeling method and device and electronic equipment | |
JP5598937B2 (en) | Keyboard instrument learning system and keyboard instrument learning method | |
CN116704850A (en) | Spectral plane analysis and identification system | |
WO2017195106A1 (en) | Method and system for writing and editing common music notation | |
KR102395137B1 (en) | Kalimba musical note and auto generation system thereof | |
JP2001265324A (en) | Method for judging similarity of music melody |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||