CN110299049B - Intelligent display method of electronic music score - Google Patents

Intelligent display method of electronic music score

Info

Publication number
CN110299049B
CN110299049B (application CN201910519625.1A)
Authority
CN
China
Prior art keywords
user
music score
played
skill
phrases
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910519625.1A
Other languages
Chinese (zh)
Other versions
CN110299049A (en)
Inventor
沈之锐
韩玉梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xinqi Intelligent Technology Co ltd
Original Assignee
Shaoguan Qizhi Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaoguan Qizhi Information Technology Co ltd filed Critical Shaoguan Qizhi Information Technology Co ltd
Priority to CN201910519625.1A priority Critical patent/CN110299049B/en
Publication of CN110299049A publication Critical patent/CN110299049A/en
Application granted granted Critical
Publication of CN110299049B publication Critical patent/CN110299049B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B15/00 Teaching music
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/18 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters, the extracted parameters being spectral information of each sub-band
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination

Abstract

The invention provides an intelligent display method and device for an electronic music score. The method comprises: setting the music score to be played by a user, extracting its content, obtaining the key, the melody and the annotations of playing skill notes in the score, and performing spectrum analysis on the skill notes; segmenting the score into phrases and clustering the phrases; presetting phrases of the same category in similar colors with different depths; highlighting the note differences between phrases of the same category; identifying the notes the user plays at the current moment and mapping them to the score; identifying the phrase being played, judging the user's intention when the played notes differ from the score notes, and giving the user different reminders accordingly; and identifying the user's performance of the skill sounds and changing the skill sound level of the score according to the user's proficiency with each skill sound.

Description

Intelligent display method of electronic music score
Technical Field
The invention relates to the field of audio and text processing, in particular to an intelligent presentation method of an electronic music score.
Background
When beginning or practicing an instrument, players often do not yet know the notes of a piece by heart and must read the score while playing. Looking down at the keys and then back up at the score, they easily lose their place and cannot tell which note on the score they are currently playing. Moreover, the phrases of a piece are often highly similar, and time is repeatedly lost searching back and forth through the score. Highlighting similar passages with color therefore helps the user locate the current position. Similar phrases also tempt users to overlook the small differences between them, which degrades the performance, so color-highlighting the small differences between similar phrases plays an important role.
Some existing software can already recognize whether the note currently shown on the score has been played, but music practice differs from music performance: a learner often replays an unfamiliar passage several times. It is not enough to merely check whether the current note on the score has been played; the system must also recognize when the user has gone back to repeat the previous one or two phrases, so that it does not keep reporting these repetitions as playing errors.
In instrument practice, playing skills are often used in the more expressive phrases of a piece, and instrument playing skills are graded by difficulty. If a player keeps failing at certain high-level skill notes, the user's enthusiasm for practice suffers. The music score should therefore be able to automatically adapt its skill sounds to the current player's performance level.
Disclosure of Invention
The invention provides an intelligent display method for an electronic music score, aiming to solve the problem that existing electronic music scores are not intelligent enough.
In a first aspect, an embodiment of the present invention provides an intelligent electronic music score display method, including:
setting the music score to be played by a user and extracting its content, including obtaining the note names, the key and the playing skill sounds in the score, and performing spectrum analysis on the skill sounds; a playing skill sound is a specific mark placed at the upper right corner of a note in the score, indicating which skill should be applied when playing that note;
segmenting the score into phrases and clustering the phrases;
presetting similar colors with different depths for phrases of the same category, where presetting means assigning colors in advance according to the analysis results and displaying them when the colors need to be shown;
highlighting the note differences between phrases of the same category;
identifying the notes played by the user at the current moment and mapping them to the score, where mapping means that the software associates the notes the user plays from the score with the corresponding score content;
identifying the phrase played by the user, recording the position in the score of the phrase currently being played, judging the user's intention when the played notes differ from the score notes, and giving the user different reminders accordingly;
and identifying the user's performance of the skill sounds and changing the skill sound level of the score according to the user's proficiency with each skill sound.
With reference to the first aspect, in a second implementation of the first aspect, segmenting the score into phrases and clustering the phrases includes:
obtaining the duration of each note in the score and taking the last note of each measure as a candidate segmentation note; obtaining the durations of the candidate segmentation notes and comparing them; extracting the n candidate segmentation notes with the longest durations as segmentation notes; dividing the score into phrases at the segmentation notes; and, when a phrase is shorter than a preset length after segmentation, merging it with the shorter of its two neighboring phrases and re-allocating the phrases;
calculating the similarity between phrases from their note names and intervals, clustering the phrases, and treating similar phrases as the same phrase category;
the similarity being computed with an edit-distance measure, and the clustering being performed with a text clustering toolkit from the scikit-learn package, to obtain the phrase categories of the score.
With reference to the first aspect or any implementation of the first aspect, in a third implementation of the first aspect, presetting similar colors with different depths for phrases of the same category includes:
for phrases of the same category, presetting with similar colors the background color to be displayed when the phrase is played, displaying the background colors of phrases of different categories with different colors, and displaying different phrases within the same category with similar colors of different depths.
With reference to the first aspect or any implementation of the first aspect, in a fourth implementation of the first aspect, highlighting the note differences between phrases of the same category includes:
when the user plays the current phrase, highlighting the notes that differ from the other phrases of the same category;
the phrase difference analysis first uses a hash algorithm to locate the phrases of the same category that differ, then uses a diff function to align the differing phrases, obtains the small differences between them, and highlights those differences.
With reference to the first aspect or any implementation thereof, in a fifth implementation of the first aspect, identifying the note currently played by the user and mapping it to a note in the score includes:
acquiring the note currently played by the player and converting it into a standard musical tone according to the fundamental vibration frequency of the tone produced by the instrument; the conversion includes identifying the frequency of the current note, mapping the audio frequency onto the musical pitch spectrum, matching the frequency to a pitch, and mapping the converted note to the score to obtain the score note that corresponds to the currently played tone;
With reference to the first aspect or any implementation thereof, in a sixth implementation of the first aspect, identifying the phrase played by the user, recording the position in the score of the phrase currently being played, judging the user's intention when the played notes differ from the score notes, and giving the user different reminders includes:
recording the last n consecutive notes played by the user, searching the score for the phrase the user is currently in, and having the system continuously record the progress of the user's playing through the score;
when the content played by the user is inconsistent with the score, judging the user's intention in turn, which mainly includes judging whether the user has jumped to another similar phrase through misreading, judging whether the user is repeatedly practicing a phrase he is not yet familiar with, and judging whether the user has genuinely played a wrong note.
With reference to the first aspect or any implementation thereof, in a seventh implementation of the first aspect, judging whether the user has jumped to another similar phrase through misreading includes:
when the content played by the user is inconsistent with the score, matching the phrase currently being played against the phrases of the same cluster category, and, if one of them is more similar to what the user is playing, displaying both phrases in their preset colors, highlighting the phrase that is currently correct in the score, and giving a prompt.
Judging whether the user is repeatedly practicing a phrase he is not yet familiar with includes:
when the content played by the user is inconsistent with the score, searching whether the phrase content the user is playing is being repeated, computing the similarity of the user's mistaken content with a similarity measure, and, if the passages played several times belong to the same phrase, searching the score using the most recently played content as the query sequence and highlighting the matched phrase in its preset color.
Judging whether the user has genuinely played a wrong note includes: when it is judged that the user has neither jumped to a similar phrase nor practiced the same phrase repeatedly, judging that the phrase has been played wrongly, and having the system prompt the user.
With reference to the first aspect or any one of its implementations, in an eighth implementation of the first aspect, identifying the user's performance of the skill sounds and changing the skill sound level of the score according to the user's proficiency with each skill sound includes:
acquiring the skill sound currently played by the user, performing spectrum analysis on it, and matching its similarity against the audio spectrum of an authoritative performance; if the similarity exceeds a threshold, judging that the user has played the skill correctly, and if the similarity does not reach the relevant threshold, judging that the user has played the skill wrongly;
acquiring every skill sound played by the user and determining, for each skill sound, the playing accuracy and the number of playing errors; when a skill is played correctly, or incorrectly, many times, changing the skill sound level of the score;
the changing of the skill sound level including raising the skill on the score to a more difficult one when the number of times the user plays the skill sound correctly exceeds a threshold, and lowering the skill sound to an easier skill when the number of times the user plays it wrongly exceeds a threshold.
In a second aspect, an embodiment of the present invention further provides an intelligent electronic music score display apparatus, including:
a score content extraction module, configured to acquire the content of the score to be played by the user, obtain the score melody and the playing skill note annotations, and extract authoritative performance audio for the annotated skill notes;
a score phrase segmentation and clustering module, configured to segment the score into phrases and cluster them;
a score color presetting module, configured to let the user preset the rules by which the score colors change;
a performance sound identification and recording module, configured to identify the sound currently played by the user, map it to the score, and record the notes the user has played;
a score color control module, configured to preset the color of the phrase currently being played according to the score color presetting module, highlight the score according to the currently played note, and give different error prompts when the user plays wrongly;
and a score playing skill modification module, configured to dynamically change the playing skill sounds according to how the user performs them.
With the method and apparatus, the score is segmented, the phrase currently being played by the user is shown with different color changes, and the currently played note is highlighted. The current phrase can also be backtracked and highlighted according to the user's practice behaviour. When the user's playing skill does not reach the required standard, the corresponding skill sound on the score is changed to a lower-level skill in time.
The method achieves the following four technical effects:
First, the small differences within similar phrases of the score are highlighted, which helps the user pay attention to the differing notes in similar phrases and improves the performance.
Second, the user can quickly see from the colors of the score which phrases are similar and how passages echo one another, and therefore which playing method and fingering pattern should be used; the user can also find which section of the score the current phrase belongs to, so the frequent misplacement caused by visually similar lines of the score no longer occurs.
Third, when the user repeatedly practices a phrase, the user's intention is recognized and the highlighting follows the user's playing progress.
Finally, the difficulty of the playing skills added to the score is automatically corrected according to the user's proficiency with them; rather than changing the difficulty wholesale, the score shows each individual user the skill sounds best adapted to his or her proficiency.
Drawings
Fig. 1 is a flowchart of an embodiment of an intelligent presentation method for electronic music scores according to the present invention.
Fig. 2 is a schematic diagram of an electronic music score according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the phrase segmentation of the score and the clustering of similar phrases according to the embodiment of the present invention.
Fig. 4 is a schematic diagram illustrating that the phrases in the musical score are displayed in different colors according to the embodiment of the present invention.
Fig. 5 is a block diagram of an embodiment of an intelligent presentation apparatus for electronic music score according to the present invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
The embodiment of the invention provides an intelligent display method for an electronic music score. The specific steps are as follows:
Step 100: set the music score of the piece the current user is going to play. This includes obtaining the key of the score, segmenting the score into phrases, identifying the skill sounds in the score, and obtaining correct performance audio of the skill sounds.
Referring to fig. 1, the score contains key information: the Bb key signature in the upper left corner can be obtained, for example by OCR image recognition or from the score's own file format, and stored in a database. The pitch name and interval of each note can be obtained in the same way. The performance skill marks are also obtained, such as the trill (tr), the turn, the accent (>) and other skill symbols. Correct, authoritative audio of famous performers playing these skill sounds can be obtained from the network.
Playing skills are the different notations and playing methods, specific to each instrument, that a player must learn in order to improve musical expression; for the playing skills of the various instruments the reader is referred to textbook descriptions, which are not repeated here.
Spectrum analysis is performed on the authoritative skill sound recordings obtained online; librosa can be used for this analysis to obtain the spectra of the skill sounds, which makes later comparison and analysis convenient.
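As a rough illustration of this step (a minimal sketch, not the patented implementation; the file name trill_reference.wav is a hypothetical placeholder), an authoritative skill-sound recording can be loaded with librosa and reduced to a magnitude spectrogram plus MFCC features for later comparison:

```python
import librosa
import numpy as np

def analyze_skill_sound(path, sr=22050):
    """Load a skill-sound recording and return its spectral features."""
    y, sr = librosa.load(path, sr=sr)                    # mono audio at a fixed sample rate
    spectrum = np.abs(librosa.stft(y))                   # magnitude spectrogram
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # compact spectral features
    return spectrum, mfcc

# Example: analyze a (hypothetical) authoritative trill recording.
# spectrum, mfcc = analyze_skill_sound("trill_reference.wav")
```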
Step 101: the score is segmented into phrases according to note durations. The content of the electronic score is extracted to obtain the duration of each note. The last note of each measure is examined, the n such notes with the longest durations in the score are obtained, and the score is segmented at these long notes. When a phrase is shorter than a preset length after segmentation, it is merged with the shorter of its two neighboring phrases, and the phrases are formed again.
For example, as shown in fig. 1, the first and second phrases can be separated by this method: the last note of the measure is examined, and the longest duration found there is a half note, so the cut between the first and second phrases can be made after that note 5. When a phrase is still too long after segmentation, it can be further split so that the resulting phrases have similar lengths.
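A minimal sketch of this segmentation rule, assuming the score has already been parsed into a list of (pitch, duration_in_beats, measure_index) tuples — a hypothetical representation, not the patent's data format:

```python
def segment_phrases(notes, n_splits, min_len):
    """notes: list of (pitch, duration_beats, measure_index) in score order."""
    # Candidate split points: index of the last note in each measure.
    candidates = []
    for i, (_, _, measure) in enumerate(notes):
        if i + 1 == len(notes) or notes[i + 1][2] != measure:
            candidates.append(i)
    # Keep the n candidates with the longest durations as split points.
    splits = sorted(sorted(candidates, key=lambda i: -notes[i][1])[:n_splits])
    # Cut the score after each split note.
    phrases, start = [], 0
    for i in splits:
        phrases.append(notes[start:i + 1])
        start = i + 1
    if start < len(notes):
        phrases.append(notes[start:])
    # Merge undersized phrases (simplified: merged into the previous phrase;
    # the patent merges with the shorter of the two neighbours).
    merged = []
    for ph in phrases:
        if merged and len(ph) < min_len:
            merged[-1] = merged[-1] + ph
        else:
            merged.append(ph)
    return merged
```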
Step 102: the similarity between phrases is calculated from the note names and intervals of the notes in each phrase, and the phrases are clustered. The similarity is computed with an edit-distance measure, and the clustering is performed with a text clustering toolkit from the scikit-learn package to obtain the phrase categories of the score.
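The description names an edit-distance similarity and a scikit-learn clustering step without fixing the exact algorithm. The sketch below makes two assumptions of its own: each phrase is encoded as a sequence of note names, and DBSCAN over a precomputed edit-distance matrix stands in for the unspecified text-clustering tool:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def edit_distance(a, b):
    """Classic Levenshtein distance between two note-name sequences."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def cluster_phrases(phrases, eps=3.0):
    """phrases: list of note-name sequences, e.g. ['C4', 'D4', 'E4', ...]."""
    n = len(phrases)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = edit_distance(phrases[i], phrases[j])
    # Phrases within eps edit operations of each other fall into one category.
    labels = DBSCAN(eps=eps, min_samples=1, metric="precomputed").fit_predict(dist)
    return labels
```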
Phrases grouped into the same category are preset with similar colors of different depths and are displayed in these preset colors while the user plays. The user can then recognize the current phrase quickly and apply roughly the same playing skills to similar phrases. Phrases are often similar and easily confused, a small delay is enough to upset the rhythm, and beginners, who must look at the score and then back at the keys, often lose track of which note on the score they are playing. Distinguishing similar phrases at different positions by color therefore lets the user locate a phrase from the color difference, find the place quickly by eye and keep the playing fluent, and it also shows roughly how far the similar phrases extend in the score.
As shown in fig. 3, the second and sixth phrases are grouped into one category, and the third and seventh phrases are grouped into another.
Step 103: as shown in fig. 4, phrases of the same category use similar colors as the preset background colors to be displayed when the phrase is played, phrases of different categories use different background colors, and different phrases within the same category are displayed with similar colors of different depths.
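One way to realize "same hue per category, different depth per member" — an illustrative choice, not a scheme specified in the patent — is to fix an HSL hue for each cluster and vary only the lightness within the cluster:

```python
import colorsys

def preset_phrase_colors(labels):
    """labels: cluster label per phrase. Returns a hex background color per phrase."""
    categories = sorted(set(labels))
    hues = {c: i / max(len(categories), 1) for i, c in enumerate(categories)}
    colors, seen = [], {}
    for label in labels:
        k = seen.get(label, 0)            # how many phrases of this category so far
        seen[label] = k + 1
        lightness = 0.85 - 0.1 * k        # same hue, progressively deeper shade
        r, g, b = colorsys.hls_to_rgb(hues[label], max(lightness, 0.35), 0.6)
        colors.append("#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255)))
    return colors
```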
When the user plays the current phrase, the notes that differ from the other phrases of the same category are highlighted.
The phrase difference analysis first uses a hash algorithm to locate the phrases of the same category that differ, then uses a diff function to align the differing phrases, obtains the small differences between them and highlights those differences. The highlighting, as shown in fig. 3, can enlarge the differing notes or change their foreground color.
For example, in fig. 3 the second and sixth phrases belong to the same category yet still differ: the fourth note 5 from the left in the second phrase is played differently from the two notes 55 in the sixth phrase. If the player does not notice such notes they are easily overlooked. By highlighting these differences, the score ensures the differences within similar passages are actually played, producing a more vivid musical effect.
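The diff step can be illustrated with Python's standard difflib (an assumption; the patent only speaks of "a diff function"). SequenceMatcher aligns two same-category phrases and returns the index ranges where their notes differ, which can then be highlighted:

```python
from difflib import SequenceMatcher

def phrase_differences(phrase_a, phrase_b):
    """Return the index ranges in phrase_a whose notes differ from phrase_b."""
    matcher = SequenceMatcher(None, phrase_a, phrase_b)
    diffs = []
    for tag, a_start, a_end, b_start, b_end in matcher.get_opcodes():
        if tag != "equal":                 # 'replace', 'delete' or 'insert'
            diffs.append((a_start, a_end))
    return diffs

# Hypothetical numbered-notation phrases: the returned ranges mark where they diverge.
# phrase_differences(["3", "4", "5", "5", "6"], ["3", "4", "5", "6", "6"])
```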
Step 104: identify the note currently played by the user and map the played note to a note in the score.
The note currently played is acquired and converted into a standard musical tone according to the fundamental vibration frequency of the tone produced by the instrument. The conversion includes identifying the frequency of the current note, mapping the audio frequency onto the musical pitch spectrum, matching the frequency to a pitch, and mapping the converted note to the score to obtain the score note that corresponds to the currently played tone.
This process is similar to humming recognition or music retrieval and belongs to the prior art; it can be done with existing music retrieval software.
Step 105: identify the phrase the user is playing, record the position in the score of the phrase currently being played, judge the user's intention when the played notes differ from the score, and give the user different reminders accordingly.
The last n consecutive notes played by the user are recorded, the score is searched for the phrase the user is currently in, and the system continuously records the progress of the user's playing through the score.
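Locating the player in the score can be sketched as a sliding comparison of the last n played notes against every window of the score's note sequence; this is a simple illustration, and the patent does not fix the matching algorithm:

```python
def locate_in_score(score_notes, recent_notes):
    """Return (best_index, mismatches): where the last played notes fit best in the score."""
    n = len(recent_notes)
    best_index, best_mismatches = None, n + 1
    for start in range(len(score_notes) - n + 1):
        window = score_notes[start:start + n]
        mismatches = sum(a != b for a, b in zip(window, recent_notes))
        if mismatches < best_mismatches:
            best_index, best_mismatches = start, mismatches
    return best_index, best_mismatches

# An exact match (0 mismatches) means the user is on track at best_index;
# a nonzero count triggers the intention analysis described next.
```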
When the content played by the user is inconsistent with the score, the user's intention is judged in turn, which mainly includes judging whether the user has jumped to another similar phrase through misreading, judging whether the user is repeatedly practicing a phrase he is not yet familiar with, and judging whether the user has genuinely played a wrong note.
Judging whether the user has jumped to another similar phrase through misreading includes:
when the content played by the user is inconsistent with the score, matching the phrase currently being played against the phrases of the same cluster category, and, if one of them is more similar to what the user is playing, displaying both phrases in their preset colors, highlighting the phrase that is currently correct in the score, and giving a prompt.
The prompt can flash both similar phrases in their preset colors and then highlight normally the phrase that should currently be played, reminding the user which phrase to play. A text prompt such as "are you playing this phrase? The phrase to play is here" may also be displayed near the correct phrase in the score.
Judging whether the user is repeatedly practicing a phrase he is not yet familiar with includes:
when the content played by the user is inconsistent with the score, searching whether the phrase content the user is playing is being repeated, computing the similarity of the user's mistaken content with a similarity measure, and, if the passages played several times belong to the same phrase, searching the score using the most recently played content as the query sequence and highlighting the matched phrase in its preset color.
The highlighting may flash in the preset color or in a fixed color, and the more closely the user's playing matches the rhythm the phrase should originally have, the lighter the displayed color becomes; encouragement such as "well played!" can also be given.
Judging whether the user has genuinely played a wrong note includes: when it is judged that the user has neither jumped to a similar phrase nor practiced the same phrase repeatedly, judging that the phrase has been played wrongly, and having the system prompt the user.
The prompt may flash the phrase in its preset color, or display text such as "playing error here!" near the note in the score where the mismatch was found.
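The three checks can be combined roughly as below; the similarity measure (difflib's ratio) and the thresholds are placeholders chosen for this sketch, not values taken from the patent:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Similarity in [0, 1] between two note sequences."""
    return SequenceMatcher(None, a, b).ratio()

def judge_intention(played, expected_phrase, same_category_phrases, recent_history,
                    jump_threshold=0.8, repeat_threshold=0.8):
    """Classify a mismatch as 'misread_jump', 'repeat_practice' or 'wrong_note'."""
    # 1. Misreading: the played notes match another phrase of the same category better.
    for other in same_category_phrases:
        if similarity(played, other) >= jump_threshold and \
           similarity(played, other) > similarity(played, expected_phrase):
            return "misread_jump", other
    # 2. Repeated practice: the same passage has already been played several times in a row.
    repeats = sum(1 for past in recent_history if similarity(played, past) >= repeat_threshold)
    if repeats >= 2:
        return "repeat_practice", played
    # 3. Otherwise treat it as a genuine playing error and prompt the user.
    return "wrong_note", expected_phrase
```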
Step 106: identify the user's performance of the skill sounds and change the skill sound level of the score according to the user's proficiency with each skill sound.
The skill sound currently played by the user is acquired, spectrum analysis is performed on it, and its similarity is matched against the audio spectrum of an authoritative performance. If the similarity exceeds a threshold, the user is judged to have played the skill correctly; if the similarity does not reach the relevant threshold, the user is judged to have played the skill wrongly.
The spectrum analysis can use librosa, where the MFCC algorithm extracts features from the audio spectrum of the skill sound; similarity is then computed on the extracted features to decide whether the skill sound played by the user resembles the authoritative player's.
Every skill sound played by the user is acquired, and the playing accuracy and the number of playing errors are determined for each skill sound. When a skill is played correctly, or incorrectly, many times, the skill sound level of the score is changed.
Changing the skill sound level includes raising the skill on the score to a more difficult one when the number of times the user plays the skill sound correctly exceeds a threshold, and lowering the skill sound to an easier skill when the number of times the user plays it wrongly exceeds a threshold.
The skill levels can be switched according to the skill grading defined in textbooks.
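The level adjustment itself reduces to per-skill bookkeeping; the counters and thresholds below are placeholders for whatever grading a textbook defines:

```python
from collections import defaultdict

class SkillLevelAdjuster:
    """Raise or lower a skill sound's level after repeated correct/incorrect attempts."""
    def __init__(self, promote_after=5, demote_after=5):
        self.correct = defaultdict(int)
        self.wrong = defaultdict(int)
        self.promote_after = promote_after
        self.demote_after = demote_after

    def record(self, skill_id, played_correctly):
        """Return 'promote', 'demote' or None after recording one attempt."""
        if played_correctly:
            self.correct[skill_id] += 1
            if self.correct[skill_id] >= self.promote_after:
                self.correct[skill_id] = 0
                return "promote"      # replace with a more difficult skill on the score
        else:
            self.wrong[skill_id] += 1
            if self.wrong[skill_id] >= self.demote_after:
                self.wrong[skill_id] = 0
                return "demote"       # replace with an easier skill on the score
        return None
```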
Through the above description of the embodiments, it will be clear to those skilled in the art that the embodiments can be implemented by software, or by software plus a necessary general hardware platform. With this understanding, the technical solutions of the embodiments can be embodied as a software product stored in a non-volatile storage medium (such as a CD-ROM, a USB disk or a removable hard disk), including several instructions that enable a computer device (a personal computer, a server, a network device, etc.) to execute the methods of the embodiments of the present invention.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. An intelligent electronic music score display method, characterized by comprising the following steps:
setting the music score to be played by a user and extracting its content, including obtaining the note names, the key and the playing skill sounds in the score, and performing spectrum analysis on the skill sounds; a playing skill sound is a specific mark placed at the upper right corner of a note in the score, indicating which skill should be applied when playing that note;
segmenting the score into phrases and clustering the segmented phrases;
presetting similar colors with different depths for phrases of the same category, where presetting means assigning colors in advance according to the analysis results and displaying them when the colors need to be shown;
highlighting the note differences between phrases of the same category;
identifying the notes played by the user at the current moment and mapping them to the score, where mapping means that the software associates the notes the user plays from the score with the corresponding score content;
identifying the phrase played by the user, recording the position in the score of the phrase currently being played, judging the user's intention when the played notes differ from the score notes, and giving the user different reminders accordingly; the user's intention includes whether the user has jumped to another similar phrase through misreading, whether an unfamiliar phrase is being practiced repeatedly, and whether a wrong note has been played;
and identifying the user's performance of the skill sounds and changing the skill sound level of the score according to the user's proficiency with each skill sound.
2. The method of claim 1, wherein the setting of the music score to be played by a user, the extraction of the score content, the acquisition of the note names, key and playing skill note annotations in the score, and the spectrum analysis of the skill sounds comprise:
obtaining the note annotations related to playing skills in the score, extracting authoritative performance audio of each skill from teaching audio, and performing spectrum analysis on the performance audio with software to obtain the playing skill audio spectrum.
3. The method of claim 1, wherein the segmenting of the score into phrases and the clustering of the phrases comprise:
obtaining the duration of each note in the score, taking the last note of each measure as a candidate segmentation note, obtaining the durations of the candidate segmentation notes and comparing them, extracting the n candidate segmentation notes with the longest durations as segmentation notes, dividing the score into phrases at the segmentation notes, and, when a phrase is shorter than a preset length after segmentation, merging it with the shorter of its two neighboring phrases and re-allocating the phrases;
calculating the similarity between phrases from their note names and intervals, clustering the phrases, and treating similar phrases as the same phrase category;
the similarity being computed with an edit-distance measure, and the clustering being performed with a text clustering toolkit from the scikit-learn package, to obtain the phrase categories of the score.
4. The method according to claim 1 or 3, wherein the presetting of similar colors with different depths for phrases of the same category comprises:
for phrases of the same category, presetting with similar colors the background color to be displayed when the phrase is played, displaying the background colors of phrases of different categories with different colors, and displaying different phrases within the same category with similar colors of different depths.
5. The method according to claim 1 or 3, wherein the highlighting of differing notes between phrases of the same category comprises:
when the user plays the current phrase, highlighting the notes that differ from the other phrases of the same category;
the highlighting of differing notes comprising first using a hash algorithm to locate the phrases of the same category that differ, then using a diff function to align the differing phrases, obtaining the small differences between them, and highlighting those differences.
6. The method of claim 1, wherein the identifying of the notes played by the user at the current moment and the mapping of the notes to the score comprise:
acquiring the note currently played by the player and converting it into a standard musical tone according to the fundamental vibration frequency of the tone produced by the instrument, the conversion including identifying the frequency of the current note, mapping the frequency onto the musical pitch spectrum, matching the frequency to a pitch, and mapping the converted note to the score to obtain the score note that corresponds to the currently played tone.
7. The method of claim 1, wherein the identifying of the phrase played by the user, the recording of the position in the score of the phrase currently being played, the judging of the user's intention when the played notes differ from the score notes, and the giving of different reminders to the user comprise:
recording the last n consecutive notes played by the user, searching the score for the phrase the user is currently in, and having the system continuously record the progress of the user's playing through the score; when the content played by the user is inconsistent with the score, judging the user's intention in turn, which mainly includes judging whether the user has jumped to another similar phrase through misreading, judging whether the user is repeatedly practicing a phrase he is not yet familiar with, and judging whether the user has genuinely played a wrong note.
8. The method of claim 7, wherein the judging of whether the user has jumped to another similar phrase through misreading comprises:
when the content played by the user is inconsistent with the score, matching the phrase currently being played against the phrases of the same cluster category, and, if one of them is more similar to what the user is playing, displaying both phrases in their preset colors, highlighting the phrase that is currently correct in the score, and giving a prompt;
the judging of whether the user is repeatedly practicing a phrase he is not yet familiar with comprises:
when the content played by the user is inconsistent with the score, searching whether the phrase content currently played by the user is being repeated, computing the similarity of the user's mistaken content with a similarity measure, and, if the passages played several times belong to the same phrase, searching the score using the most recently played content as the query sequence and highlighting the matched phrase in its preset color;
the judging of whether the user has genuinely played a wrong note comprises: when it is judged that the user has neither jumped to a similar phrase nor practiced the same phrase repeatedly, judging that the playing is wrong, and having the electronic music score system prompt the user.
9. The method of claim 1, wherein the identifying of the user's performance of the skill sounds and the changing of the skill sound level of the score according to the user's proficiency with each skill sound comprise:
acquiring the skill sound currently played by the user, performing spectrum analysis on it, and matching its similarity against the audio spectrum of an authoritative performance; if the similarity exceeds a threshold, judging that the user has played the skill correctly, and if the similarity does not reach the relevant threshold, judging that the user has played the skill wrongly;
acquiring every skill sound played by the user and determining, for each skill sound, the playing accuracy and the number of playing errors; when a skill is played correctly, or incorrectly, many times, changing the skill sound level of the score;
the changing of the skill sound level comprising raising the skill on the score to a more difficult one when the number of times the user plays the skill sound correctly exceeds a threshold, and lowering the skill sound to an easier skill when the number of times the user plays it wrongly exceeds a threshold.
10. An intelligent electronic music score display apparatus, characterized by comprising:
a score content extraction module, configured to acquire the content of the score to be played by the user, obtain the score melody and the playing skill note annotations, and extract authoritative performance audio for the annotated skill notes;
a score phrase segmentation and clustering module, configured to segment the score into phrases and cluster them;
a score color presetting module, configured to let the user preset the rules by which the score colors change;
a performance sound identification and recording module, configured to identify the sound currently played by the user, map it to the score, and record the notes the user has played;
a score color control module, configured to preset the color of the phrase currently being played according to the score color presetting module, highlight the score according to the currently played note, and give different error prompts when the user plays wrongly;
and a score playing skill modification module, configured to dynamically change the playing skill sounds according to how the user performs them.
CN201910519625.1A 2019-06-17 2019-06-17 Intelligent display method of electronic music score Active CN110299049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910519625.1A CN110299049B (en) 2019-06-17 2019-06-17 Intelligent display method of electronic music score

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910519625.1A CN110299049B (en) 2019-06-17 2019-06-17 Intelligent display method of electronic music score

Publications (2)

Publication Number Publication Date
CN110299049A CN110299049A (en) 2019-10-01
CN110299049B (en) 2021-12-17

Family

ID=68027974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910519625.1A Active CN110299049B (en) 2019-06-17 2019-06-17 Intelligent display method of electronic music score

Country Status (1)

Country Link
CN (1) CN110299049B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113076967B (en) * 2020-12-08 2022-09-23 无锡乐骐科技股份有限公司 Image and audio-based music score dual-recognition system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6538187B2 (en) * 2001-01-05 2003-03-25 International Business Machines Corporation Method and system for writing common music notation (CMN) using a digital pen
CN204496755U (en) * 2015-03-16 2015-07-22 周友仁 A kind of intelligent piano Partner training device
CN106340286B (en) * 2016-09-27 2020-05-19 华中科技大学 Universal real-time musical instrument playing evaluation system

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002006835A (en) * 2000-06-21 2002-01-11 Yamaha Corp Method and device for displaying data
CN1591517A (en) * 2003-09-01 2005-03-09 刘见平 Music labelling method
EP1835503A3 (en) * 2006-03-16 2008-03-26 Sony Corporation Method and apparatus for attaching metadata
CN106203465A (en) * 2016-06-24 2016-12-07 百度在线网络技术(北京)有限公司 A kind of method and device generating the music score of Chinese operas based on image recognition
CN106157973A (en) * 2016-07-22 2016-11-23 南京理工大学 Music detection and recognition methods
CN109791758A (en) * 2016-09-21 2019-05-21 雅马哈株式会社 Musical performance training device and method
CN107039024A (en) * 2017-02-10 2017-08-11 美国元源股份有限公司 Music data processing method and processing device
CN107680614A (en) * 2017-09-30 2018-02-09 广州酷狗计算机科技有限公司 Acoustic signal processing method, device and storage medium
CN108364528A (en) * 2018-04-17 2018-08-03 南通理工学院 Piano playing note correction system and method
CN109522959A (en) * 2018-11-19 2019-03-26 哈尔滨理工大学 A kind of music score identification classification and play control method
CN109473084A (en) * 2018-12-14 2019-03-15 广州沛乐科技有限公司 Electronic music device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a Piano Music Recommendation Algorithm Based on Convolutional Neural Networks; Li Bi; Master's thesis, Wuhan Institute of Technology; 2017-12-05; abstract, chapters 4-5 *

Also Published As

Publication number Publication date
CN110299049A (en) 2019-10-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230714

Address after: Room 424, Building 2, No. 318, Waihuan West Road, University Town, Xiaoguwei Street, Panyu District, Guangzhou, Guangdong 510000

Patentee after: Guangzhou Xinqi Intelligent Technology Co.,Ltd.

Address before: Room f101-12, No.1 incubation and production building, guanshao shuangchuang (equipment) center, Huake City, 42 Baiwang Avenue, Wujiang District, Shaoguan City, Guangdong Province, 512026

Patentee before: Shaoguan Qizhi Information Technology Co.,Ltd.