CN105845115B - Song mode determining method and song mode determining device - Google Patents

Song mode determining method and song mode determining device

Info

Publication number
CN105845115B
Authority
CN
China
Prior art keywords
note
natural
song
mode
level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610149513.8A
Other languages
Chinese (zh)
Other versions
CN105845115A (en)
Inventor
冯穗豫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201610149513.8A priority Critical patent/CN105845115B/en
Publication of CN105845115A publication Critical patent/CN105845115A/en
Application granted granted Critical
Publication of CN105845115B publication Critical patent/CN105845115B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/18 Selecting circuits
    • G10H1/20 Selecting circuits for transposition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/325 Musical pitch modification
    • G10H2210/331 Note pitch correction, i.e. modifying a note pitch or replacing it by the closest one in a given scale

Abstract

The invention provides a song mode determining method comprising the following steps: acquiring music score information of a song; acquiring the intonation levels of each mode; determining, according to the intonation levels of each mode, the natural notes in the note sequence of the song corresponding to each mode; determining, according to the note sequence of the song and the duration of each note in the note sequence, the natural note proportion of the note sequence corresponding to each mode; and determining the mode with the largest natural note proportion as the mode of the song. The invention also provides a song mode determining device. The song mode determining method and device of the invention determine the mode of a song from the natural note proportion of its note sequence and correct pitch according to the intonation levels corresponding to that mode, so that both the mode determination and the pitch correction have high accuracy.

Description

Song mode determining method and song mode determining device
Technical Field
The present invention relates to the field of audio processing, and in particular, to a method and an apparatus for determining a song mode.
Background
Some existing karaoke software provides an electronic (auto-tune) sound effect. The effect detects the pitch of the notes sung by the karaoke user and, when a pitch does not fall on an intonation level of the mode of the song, forcibly snaps it to the nearest intonation level of that mode.
For example, for a song in C major, the intonation levels of C major are C, D, E, F, G, A and B. When the karaoke user intends to sing the level E but actually sings 20 cents above the level D# (i.e. 80 cents below the level E), then if no mode is set, or the wrong mode is set, the pitch corrector forces the inaccurately sung pitch onto the wrong level.
If a song in C major is incorrectly set as E major, whose intonation levels are E, F#, G#, A, B, C# and D#, then when the karaoke user intends to sing the level E but sings 20 cents above D# (i.e. 80 cents below E), the pitch corrector of the karaoke software forcibly corrects the sung pitch to the level D# rather than to the level E. The correction result of the pitch corrector is therefore inaccurate, and the resulting electronic sound effect is degraded.
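To make the nearest-level snapping concrete, the following minimal Python sketch (illustrative only, not part of the patent; all names are assumptions) snaps a sung pitch, expressed in cents above C within one octave, to the nearest intonation level of a given mode, and shows how the correct C major setting and the wrong E major setting treat the same slightly flat E differently:

```python
# Illustrative sketch of nearest-level pitch snapping (names are assumptions).
# Pitches are expressed in cents above C within one octave (C = 0, C# = 100, ...);
# octave wrap-around is ignored for simplicity.
LEVEL_CENTS = {"C": 0, "C#": 100, "D": 200, "D#": 300, "E": 400, "F": 500,
               "F#": 600, "G": 700, "G#": 800, "A": 900, "A#": 1000, "B": 1100}

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]
E_MAJOR = ["E", "F#", "G#", "A", "B", "C#", "D#"]

def snap_to_mode(sung_cents, mode_levels):
    """Snap a sung pitch (in cents) to the nearest intonation level of the mode."""
    return min(mode_levels, key=lambda lvl: abs(LEVEL_CENTS[lvl] - sung_cents))

# The user aims at E (400 cents) but sings 20 cents above D#, i.e. 320 cents.
sung = 320
print(snap_to_mode(sung, C_MAJOR))  # 'E'  (correct mode, correct snap)
print(snap_to_mode(sung, E_MAJOR))  # 'D#' (wrong mode, snapped to the wrong level)
```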
Disclosure of Invention
The embodiments of the invention provide a song mode determining method and a song mode determining device with higher mode determination accuracy, so as to solve the technical problem that existing song mode determining methods and devices have low mode determination accuracy.
The embodiment of the invention provides a song mode determining method, which comprises the following steps:
acquiring music score information of a song, wherein the music score information comprises a note sequence forming the song and the duration of each note in the note sequence;
acquiring the intonation level of each mode;
according to the intonation level of each mode, determining natural notes in the note sequence corresponding to each mode of the song;
determining, according to the note sequence of the song and the duration of each note in the note sequence, the natural note proportion of the note sequence corresponding to each mode of the song; and
determining the mode with the largest natural note proportion as the mode of the song.
An embodiment of the present invention further provides a song mode determining device, including:
a note sequence acquiring module, configured to acquire music score information of a song, where the music score information includes a note sequence forming the song and the duration of each note in the note sequence, and to acquire the intonation levels of each mode;
the natural note setting module is used for determining natural notes in the note sequence corresponding to each mode of the song according to the intonation level of each mode;
a note proportion determining module, configured to determine, according to the note sequence of the song and the duration of each note in the note sequence, the natural note proportion of the note sequence corresponding to each mode of the song; and
a mode determining module, configured to determine the mode with the largest natural note proportion as the mode of the song.
Compared with song mode determining methods and devices in the prior art, the song mode determining method and device of the invention determine the mode of a song from the natural note proportion of its note sequence and correct pitch according to the intonation levels corresponding to that mode. The mode determination accuracy and the pitch correction accuracy are therefore high, which solves the technical problem that existing song mode determining methods have low mode determination accuracy and low pitch correction accuracy.
Drawings
FIG. 1 is a schematic diagram of the intonation levels corresponding to different modes;
FIG. 2 is a flow chart of a first preferred embodiment of a song mode determination method of the present invention;
FIG. 3 is a flow chart of a second preferred embodiment of a song mode determination method of the present invention;
FIG. 4 is a flow chart of a third preferred embodiment of the song mode determination method of the present invention;
fig. 5 is a schematic structural diagram of a first preferred embodiment of the song mode determining apparatus of the present invention;
fig. 6 is a schematic structural diagram of a second preferred embodiment of the song mode determining apparatus of the present invention;
FIG. 7 is a schematic diagram of a third preferred embodiment of the song mode determining apparatus according to the present invention;
FIG. 8 is a schematic structural diagram of a note proportion determining module of a third preferred embodiment of the song mode determining apparatus according to the present invention;
fig. 9 is a schematic view of an operating environment structure of an electronic device in which the song mode determination apparatus of the present invention is located.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
In the description that follows, embodiments of the invention are described with reference to steps and symbols of operations performed by one or more computers, unless otherwise indicated. It will thus be appreciated that those steps and operations, which are referred to herein several times as being computer-executed, include being manipulated by a computer processing unit in the form of electronic signals representing data in a structured form. This manipulation transforms the data or maintains it at locations in the computer's memory system, which may reconfigure or otherwise alter the computer's operation in a manner well known to those skilled in the art. The data maintains a data structure that is a physical location of the memory that has particular characteristics defined by the data format. However, while the principles of the invention have been described in language specific to above, it is not intended to be limited to the specific details shown, since one skilled in the art will recognize that various steps and operations described below may be implemented in hardware.
The song mode determining apparatus of the present invention may be implemented using a variety of electronic devices including, but not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, personal digital assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The electronic device is preferably a computer or mobile device running karaoke software, so that the notes sung for a song can be pitch-corrected accurately.
The process of determining the intonation levels of the mode of a song is briefly described below. Referring to fig. 1, fig. 1 is a schematic diagram of the intonation levels corresponding to different modes.
Wherein C, C #, D, D #, E, F, F #, G, G #, A, A # and B in the figure are 12 notes of music, i.e. all the notes of the song are selected from the 12 notes.
In addition, according to the mode set for a song, the 12 levels are divided into seven natural levels and five unnatural levels, where the natural levels are the levels used with high probability in a song whose tonic has been set. The tonic is the first of the seven natural levels. The intervals between the seven natural levels are, in order, whole, whole, half, whole, whole, whole and half steps.
When the mode of the song is C major, the level C is the tonic of the song and serves as its first natural level, and the seven natural levels corresponding to this mode are C, D, E, F, G, A and B; the interval between the levels E and F is 100 cents, the interval between the levels B and C is also 100 cents, and the intervals between the other adjacent natural levels are 200 cents.
If the mode of the song is D major, the level D is the tonic of the song and serves as its first natural level, and the seven natural levels corresponding to this mode are D, E, F#, G, A, B and C#; the interval between the levels F# and G is 100 cents, the interval between the levels C# and D is 100 cents, and the intervals between the other adjacent natural levels are 200 cents. The natural levels of the other modes can be obtained in the same way.
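As an illustration of the whole/half-step pattern just described, the following Python sketch (not part of the patent; names are assumptions) derives the seven natural levels of the major mode built on any tonic:

```python
# Illustrative sketch: derive the seven natural levels of a major mode from its tonic
# using the whole/whole/half/whole/whole/whole/half step pattern described above.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # semitone steps between successive natural levels

def natural_levels(tonic):
    """Return the seven natural levels of the major mode whose tonic is given."""
    idx = NOTES.index(tonic)
    levels = []
    for step in MAJOR_STEPS[:-1]:     # the final half step only returns to the tonic
        levels.append(NOTES[idx % 12])
        idx += step
    levels.append(NOTES[idx % 12])
    return levels

print(natural_levels("C"))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
print(natural_levels("D"))  # ['D', 'E', 'F#', 'G', 'A', 'B', 'C#']
```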
Once the mode of a song is determined, the song is generally expressed using only the seven natural levels corresponding to that mode; if the song includes other levels, for example the level F# under C major, such a level is an unnatural level for that mode. In a typical pop song whose mode is set correctly, unnatural levels are very few or absent.
Referring to fig. 2, fig. 2 is a flowchart illustrating a song mode determining method according to a first preferred embodiment of the present invention. The song mode determination method of the preferred embodiment may be implemented using the electronic device described above, and includes:
step S201, acquiring the music score information of a song and acquiring the intonation levels of each mode;
step S202, determining, according to the intonation levels of each mode, the natural notes in the note sequence of the song corresponding to each mode;
step S203, determining, according to the note sequence of the song and the duration of each note in the note sequence, the natural note proportion of the note sequence corresponding to each mode;
step S204, determining the mode with the largest natural note proportion of the note sequence of the song as the mode of the song;
and step S205, performing pitch correction on the notes sung for the song according to the intonation levels of the mode of the song.
The specific flow of each step of the song mode determination method of the present preferred embodiment is described in detail below.
In step S201, the song mode determining device, for example a computer running karaoke software, acquires the music score information of a song from the song's MIDI file; the score information includes the note sequence forming the song and the duration of each note in the note sequence. The note sequence refers to all notes in the music score of the song, and music has 12 notes in total, namely C, C#, D, D#, E, F, F#, G, G#, A, A# and B.
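One possible way to obtain such a note sequence is sketched below, assuming the third-party mido library and a simple monophonic melody track; a real implementation would also need melody-track selection and handling of overlapping notes (illustrative only, not part of the patent):

```python
# Hedged sketch: read a MIDI file into (pitch class, duration in ms) pairs with mido,
# assuming a simple monophonic melody.
import mido

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_sequence(midi_path):
    """Return a list of (pitch_class, duration_ms) pairs for the notes in the file."""
    sequence = []
    now = 0.0      # absolute time in seconds
    started = {}   # MIDI note number -> start time of the currently sounding note
    for msg in mido.MidiFile(midi_path):   # iteration yields delta times in seconds
        now += msg.time
        if msg.type == "note_on" and msg.velocity > 0:
            started[msg.note] = now
        elif msg.type in ("note_off", "note_on") and msg.note in started:
            duration_ms = (now - started.pop(msg.note)) * 1000.0
            sequence.append((NOTES[msg.note % 12], duration_ms))
    return sequence
```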
The intonation levels of the major mode built on each of the 12 notes are then obtained:
the intonation levels of C major are C, D, E, F, G, A and B;
the intonation levels of C# major are C#, D#, F, F#, G#, A# and C;
the intonation levels of D major are D, E, F#, G, A, B and C#;
the intonation levels of D# major are D#, F, G, G#, A#, C and D;
the intonation levels of E major are E, F#, G#, A, B, C# and D#;
the intonation levels of F major are F, G, A, A#, C, D and E;
the intonation levels of F# major are F#, G#, A#, B, C#, D# and F;
the intonation levels of G major are G, A, B, C, D, E and F#;
the intonation levels of G# major are G#, A#, C, C#, D#, F and G;
the intonation levels of A major are A, B, C#, D, E, F# and G#;
the intonation levels of A# major are A#, C, D, D#, F, G and A;
the intonation levels of B major are B, C#, D#, E, F#, G# and A#. Subsequently, the process goes to step S202.
In step S202, the song mode determining device sets, for each mode, the notes in the note sequence of the song acquired in step S201 that fall on the intonation levels of that mode as the natural notes of that mode.
If the mode of the song is set to C major, the natural notes in the note sequence of the song are C, D, E, F, G, A and B; the other levels in the song, such as C#, D#, F#, G# and A#, are unnatural notes under C major of the song.
If the mode of the song is set to D major, the natural notes in the note sequence of the song are D, E, F#, G, A, B and C#; the other levels in the song, such as C, D#, F, G# and A#, are unnatural notes under D major of the song.
In this way, 12 sets of natural notes, one for each of the above 12 modes, can be determined for the note sequence of the song. Subsequently, the process goes to step S203.
In step S203, the song mode determining device determines, for each mode, the natural note proportion of the note sequence of the song according to the durations of the natural notes determined in step S202 within the note sequence. Natural note proportions of the note sequence of the song are thus obtained for all 12 modes.
If the song is in C major, the tonic of the song is C and the corresponding intonation levels are C, D, E, F, G, A and B, i.e. the levels C, D, E, F, G, A and B are the natural notes of C major of the song; the song mode determining device then calculates the share of the durations of the levels C, D, E, F, G, A and B within the whole note sequence of the song as the natural note proportion of the note sequence under C major.
If the song is in D major, the tonic of the song is D and the corresponding intonation levels are D, E, F#, G, A, B and C#, i.e. the levels D, E, F#, G, A, B and C# are the natural notes of D major of the song; the song mode determining device then calculates the share of the durations of the levels D, E, F#, G, A, B and C# within the whole note sequence of the song as the natural note proportion of the note sequence under D major.
If the note sequence of a song consists of, in order, a C of 1000 ms, a C of 2000 ms, an E of 1000 ms and an F# of 500 ms, then the natural notes under C major among these notes are C and E, so the natural note duration under C major is 1000 + 2000 + 1000 = 4000 ms; the natural notes under D major among these notes are E and F#, so the natural note duration under D major is 1000 + 500 = 1500 ms. The natural note proportions of a song therefore differ from mode to mode. Subsequently, the process goes to step S204.
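A minimal Python sketch of this duration tally for the example above (illustrative names, not part of the patent):

```python
# Hedged sketch of step S203: total duration of natural notes in the example
# note sequence, evaluated for two candidate modes.
sequence = [("C", 1000), ("C", 2000), ("E", 1000), ("F#", 500)]   # (level, duration in ms)

MODES = {
    "C major": {"C", "D", "E", "F", "G", "A", "B"},
    "D major": {"D", "E", "F#", "G", "A", "B", "C#"},
}

def natural_duration(seq, levels):
    """Sum the durations of the notes that fall on the given intonation levels."""
    return sum(dur for level, dur in seq if level in levels)

for mode, levels in MODES.items():
    print(mode, natural_duration(sequence, levels))
# C major 4000
# D major 1500
```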
In step S204, the song mode determining device finds the largest of the natural note proportions of the note sequence of the song under the different modes obtained in step S203, and determines the mode with the largest natural note proportion as the mode of the song. Since the natural note proportion corresponding to the correct mode of the song should be the largest, determining the mode of the song from the natural note proportion of its note sequence improves the accuracy of the subsequent pitch correction. Subsequently, the process goes to step S205.
In step S205, the song mode determining device performs pitch correction on the notes sung by the karaoke user according to the intonation levels of the mode determined in step S204.
If the mode determined in step S204 is C major, the notes sung by the karaoke user are forcibly corrected to the levels C, D, E, F, G, A and B according to the nearest-level correction principle, and corrections to unnatural levels of C major such as C#, D#, F#, G# and A# no longer occur; the generation of unnatural notes is thus effectively avoided and the accuracy of pitch correction is improved.
This completes the tune determination and the song pitch correction process of the song tune determination method of the preferred embodiment.
The song mode determining method of the preferred embodiment determines the mode of the song according to the natural note proportion of the note sequence of the song, and performs pitch correction according to the pitch level in the tone corresponding to the mode, and the accuracy rate of the pitch correction is high.
Referring to fig. 3, fig. 3 is a flowchart illustrating a song mode determining method according to a second preferred embodiment of the present invention. The song mode determination method of the preferred embodiment may be implemented using the electronic device described above, and includes:
step S301, acquiring the music score information of a song and acquiring the intonation levels of each mode;
step S302, determining, according to the intonation levels of each mode, the natural notes in the note sequence of the song corresponding to each mode;
step S303, determining the natural note proportion of the note sequence of the song corresponding to each mode according to the note time ratio of the natural notes within all notes of the sequence;
step S304, determining the mode with the largest natural note proportion of the note sequence of the song as the mode of the song;
step S305, performing pitch correction on the notes sung for the song according to the intonation levels of the mode of the song.
The specific flow of each step of the song mode determination method of the present preferred embodiment is described in detail below.
Step S301 and step S302 of the present preferred embodiment are the same as or similar to the descriptions in step S201 and step S202 of the first preferred embodiment of the song mode determination method, and refer to the related descriptions in the first preferred embodiment of the song mode determination method.
In step S303, the song mode determining device takes, for each mode, the ratio of the note time of the natural notes determined in step S302 to the note time of all notes in the note sequence (i.e. the natural notes plus the unnatural notes) as the natural note proportion of the note sequence of the song under that mode.
For example, the total note duration of the note sequence of a song is 35000 ms, in which the level C appears for 7000 ms, the level D for 5000 ms, the level E for 6000 ms, the level F for 2000 ms, the level G for 7000 ms, the level A for 6000 ms, the level B for 1000 ms, the level F# for 500 ms and the level C# for 500 ms.
The natural notes under C major are C, D, E, F, G, A and B, so the natural note duration under C major is 7000 + 5000 + 6000 + 2000 + 7000 + 6000 + 1000 = 34000 ms. The natural notes under D major are D, E, F#, G, A, B and C#, so the natural note duration under D major is 5000 + 6000 + 500 + 7000 + 6000 + 1000 + 500 = 26000 ms. The natural note proportion of the note sequence of the song is therefore 34000/35000 = 0.97 under C major and 26000/35000 = 0.74 under D major. Subsequently, the process goes to step S304.
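The same example as a minimal Python sketch of step S303 (illustrative names, not part of the patent):

```python
# Hedged sketch of step S303: natural note time as a fraction of the total note time,
# for the example durations above.
durations = {"C": 7000, "D": 5000, "E": 6000, "F": 2000, "G": 7000,
             "A": 6000, "B": 1000, "F#": 500, "C#": 500}   # ms per level
total = sum(durations.values())                              # 35000 ms

def natural_ratio(levels):
    """Fraction of total note time spent on the mode's natural levels."""
    return sum(durations.get(level, 0) for level in levels) / total

print(round(natural_ratio({"C", "D", "E", "F", "G", "A", "B"}), 2))     # 0.97 (C major)
print(round(natural_ratio({"D", "E", "F#", "G", "A", "B", "C#"}), 2))   # 0.74 (D major)
```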
In step S304, the song mode determining device determines the mode with the largest natural note proportion of the note sequence obtained in step S303 as the mode of the song. The natural note proportion calculated in step S303 is 0.97 for C major of the song, 0.74 for D major, and correspondingly …… for the other modes; if 0.97 is the maximum, C major is determined as the mode of the song.
Since the natural note proportion corresponding to the correct mode of the song should be the largest, determining the mode of the song from the natural note proportion of its note sequence improves the accuracy of the subsequent pitch correction. Subsequently, the process goes to step S305.
In step S305, the song mode determining device performs pitch correction on the notes sung by the karaoke user according to the intonation levels of the mode determined in step S304.
If the mode determined in step S304 is C major, the notes sung by the karaoke user are forcibly corrected to the levels C, D, E, F, G, A and B according to the nearest-level correction principle, and corrections to unnatural levels of C major such as C#, D#, F#, G# and A# no longer occur; the generation of unnatural notes is thus effectively avoided and the accuracy of pitch correction is improved.
This completes the tune determination and the song pitch correction process of the song tune determination method of the preferred embodiment.
On the basis of the first preferred embodiment, the song mode determining method of the present preferred embodiment determines the mode of the song according to the note time ratio of the natural notes in all note sequences under different modes, and the obtained mode or song tune is more accurate; and the pitch correction is carried out according to the intonation level corresponding to the mode, and the accuracy of the pitch correction is higher.
Referring to fig. 4, fig. 4 is a flowchart illustrating a song mode determination method according to a third preferred embodiment of the present invention. The song mode determination method of the preferred embodiment may be implemented using the electronic device described above, and includes:
step S401, acquiring the music score information of a song and acquiring the intonation levels of each mode;
step S402, determining, according to the intonation levels of each mode, the natural notes in the note sequence of the song corresponding to each mode;
step S403, acquiring the level weights of the intonation levels corresponding to each mode;
step S404, determining the natural note weights of the natural notes in the note sequence according to the level weights of the intonation levels;
step S405, determining the natural note proportion of the note sequence of the song corresponding to each mode according to the note time ratio of the natural notes within all notes of the sequence and the natural note weights of the natural notes in the note sequence;
step S406, determining the mode with the largest natural note proportion of the note sequence of the song as the mode of the song;
step S407, performing pitch correction on the notes sung for the song according to the intonation levels of the mode of the song.
The specific flow of each step of the song mode determination method of the present preferred embodiment is described in detail below.
Step S401 and step S402 of the present preferred embodiment are the same as or similar to the descriptions in step S201 and step S202 of the first preferred embodiment of the song mode determination method, and refer to the related descriptions in the first preferred embodiment of the song mode determination method.
In step S403, the level weights are introduced because, if the mode of a song were determined only from the note time ratio of the natural notes of each mode within all notes, the mode of some songs that use relatively few distinct levels could not be determined accurately.
For example, if a song uses only the levels C, D, E, G and A, its natural note proportion under C major (whose intonation levels are C, D, E, F, G, A and B) and its natural note proportion under G major (whose intonation levels are G, A, B, C, D, E and F#) are the same. In this preferred embodiment, therefore, the intonation levels of each mode are given weights according to the stability of each level within the mode, so that the natural note proportions under different modes can be better distinguished, i.e. the more a song dwells on the stable levels of a mode, the higher its natural note proportion under that mode.
Therefore, in this step, the song mode determination means acquires the level weight of the intonation level corresponding to each mode.
For a song in any mode, the seven natural levels are numbered, from low to high, the first, second, third, fourth, fifth, sixth and seventh levels.
If the song is in C major, the corresponding intonation levels are the first level C, the second level D, the third level E, the fourth level F, the fifth level G, the sixth level A and the seventh level B. Of these, the first level C, the second level D, the third level E, the fifth level G and the sixth level A are first weighted levels, and the fourth level F and the seventh level B are second weighted levels.
Since the first level of any mode is the tonic, the second level the supertonic, the third level the mediant, the fifth level the dominant and the sixth level the submediant, the first weighted levels can appear at the beginning and at the end of a song passage; they are therefore the more stable levels and occur more frequently in the corresponding mode. The fourth level of any mode is the subdominant and the seventh level is the leading tone; a second weighted level cannot end a song passage on its own and must be followed by a first weighted level for resolution, so the second weighted levels are less stable and occur less frequently in the corresponding mode.
Therefore, the tone scale weight of the first weighted tone scale of the intonation tone scale corresponding to each mode is set to be greater than the tone scale weight of the second weighted tone scale. Subsequently, the process goes to step S404.
In step S404, the song mode determining means determines the natural note weights of the corresponding natural notes in the note sequence according to the scale weights of the intonation scale corresponding to the respective modes acquired in step S403. That is, the natural note weight of the natural note corresponding to the first weighted scale is greater than the natural note weight of the natural note corresponding to the second weighted scale. Subsequently, the process goes to step S405.
In step S405, the song mode determination apparatus obtains the note-time ratio of the natural note corresponding to each mode in all note sequences (i.e., the sum of the natural note and the unnatural note) according to the natural note corresponding to each mode obtained in step S402.
And then the song mode determining device determines the natural note proportion of the mode corresponding to the note sequence of the song according to the note time proportion of the natural notes in all the note sequences and the natural note weight of the natural notes in the note sequences.
Specifically: if the level weight of the first weighted levels of the intonation levels corresponding to the mode is set to 2, the level weight of the second weighted levels is set to 1.
Meanwhile, if the total note duration of the note sequence of the song is 35000 ms, in which the level C appears for 7000 ms, the level D for 5000 ms, the level E for 6000 ms, the level F for 2000 ms, the level G for 7000 ms, the level A for 6000 ms, the level B for 1000 ms, the level F# for 500 ms and the level C# for 500 ms:
the natural notes under C major are C, D, E, F, G, A and B, and the weighted natural note proportion of the note sequence of the song under C major is (7000 × 2 + 5000 × 2 + 6000 × 2 + 2000 × 1 + 7000 × 2 + 6000 × 2 + 1000 × 1)/35000 = 1.857; the natural notes under D major are D, E, F#, G, A, B and C#, and the weighted natural note proportion of the note sequence of the song under D major is (5000 × 2 + 6000 × 2 + 500 × 2 + 7000 × 1 + 6000 × 2 + 1000 × 2 + 500 × 1)/35000 = 1.271. The natural note proportion of the song under each mode is determined in this way. Subsequently, the process goes to step S406.
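A minimal Python sketch of the weighted proportion of steps S403 to S405 for this example (illustrative names, not part of the patent):

```python
# Hedged sketch of steps S403-S405: weighted natural note proportion, giving weight 2
# to the 1st, 2nd, 3rd, 5th and 6th degrees and weight 1 to the 4th and 7th degrees.
durations = {"C": 7000, "D": 5000, "E": 6000, "F": 2000, "G": 7000,
             "A": 6000, "B": 1000, "F#": 500, "C#": 500}   # ms per level
total = sum(durations.values())                              # 35000 ms
DEGREE_WEIGHTS = [2, 2, 2, 1, 2, 2, 1]                       # weights for degrees 1..7

def weighted_ratio(levels):
    """Weighted note time of the mode's seven levels, divided by the total note time."""
    return sum(durations.get(level, 0) * w for level, w in zip(levels, DEGREE_WEIGHTS)) / total

print(round(weighted_ratio(["C", "D", "E", "F", "G", "A", "B"]), 3))     # 1.857 (C major)
print(round(weighted_ratio(["D", "E", "F#", "G", "A", "B", "C#"]), 3))   # 1.271 (D major)
```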
In step S406, the song mode determining device determines the mode with the largest natural note proportion of the note sequence obtained in step S405 as the mode of the song. The natural note proportion calculated in step S405 is 1.857 for C major of the song, 1.271 for D major, and correspondingly …… for the other modes; if 1.857 is the maximum, C major is determined as the mode of the song, i.e. the mode of the song is set to C major.
Since the natural note proportion corresponding to the correct mode of the song should be the largest, determining the mode of the song from the natural note proportion of its note sequence improves the accuracy of the subsequent pitch correction. Subsequently, the process goes to step S407.
In step S407, the song mode determining device performs pitch correction on the notes sung by the karaoke user according to the intonation levels of the mode determined in step S406.
If the mode determined in step S406 is C major, the notes sung by the karaoke user are forcibly corrected to the levels C, D, E, F, G, A and B according to the nearest-level correction principle, and corrections to unnatural levels of C major such as C#, D#, F#, G# and A# no longer occur; the generation of unnatural notes is thus effectively avoided and the accuracy of pitch correction is improved.
This completes the tune determination and the song pitch correction process of the song tune determination method of the preferred embodiment.
On the basis of the second preferred embodiment, the song mode determining method of this preferred embodiment determines the mode of the song from both the note time ratio of the natural notes within the note sequence under each mode and the weights of those natural notes. This avoids the problem that the mode is hard to determine from the note time ratio alone when the song uses too few distinct levels, and further improves the accuracy of the obtained mode of the song.
Preferably, in step S403, the intonation levels corresponding to each mode may be divided into a first weighted level, a second weighted level, and a third weighted level.
If the song is in C major, the corresponding intonation levels are the first level C, the second level D, the third level E, the fourth level F, the fifth level G, the sixth level A and the seventh level B. The first level C, the third level E, the fifth level G and the sixth level A are first weighted levels, the second level D is a second weighted level, and the fourth level F and the seventh level B are third weighted levels.
The tone scale weight of the first weighted tone scale of the intonation tone scale corresponding to each mode is set to be greater than the tone scale weight of the second weighted tone scale, and the tone scale weight of the second weighted tone scale is greater than the tone scale weight of the third weighted tone scale.
Thus, in step S404, the song mode determining apparatus determines that the natural note weight of the natural note corresponding to the first weighted scale is greater than the natural note weight of the natural note corresponding to the second weighted scale, and the natural note weight of the natural note corresponding to the second weighted scale is greater than the natural note weight of the third weighted scale.
Due to the fact that the weights of the natural notes are divided accurately, accuracy of the obtained mode or song tune is further improved.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a first preferred embodiment of the song mode determining apparatus according to the present invention. The song mode determining apparatus of the present preferred embodiment can be implemented using the first preferred embodiment of the song mode determining method described above, and the song mode determining apparatus 50 includes a note sequence obtaining module 51, a natural note setting module 52, a note proportion determining module 53, a mode determining module 54, and a pitch correcting module 55.
The note sequence acquiring module 51 is configured to acquire the music score information of a song, where the score information includes the note sequence forming the song and the duration of each note in the note sequence, and to acquire the intonation levels of each mode; the natural note setting module 52 is configured to determine, according to the intonation levels of each mode, the natural notes in the note sequence of the song corresponding to each mode; the note proportion determining module 53 is configured to determine, according to the note sequence of the song and the duration of each note in the note sequence, the natural note proportion of the note sequence corresponding to each mode; the mode determining module 54 is configured to determine the mode with the largest natural note proportion as the mode of the song; and the pitch correction module 55 is configured to perform pitch correction on the notes sung for the song according to the intonation levels of the mode of the song.
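Purely as an illustration of how these modules cooperate (not part of the patent; class and method names are assumptions), the mode-determination part of the decomposition could be sketched as follows, omitting the score acquisition and pitch correction modules:

```python
# Hedged sketch of the module decomposition described above (names are assumptions).
class SongModeDeterminer:
    def __init__(self, mode_levels):
        # mode name -> its seven intonation levels, e.g. {"C major": ["C", "D", ...], ...}
        self.mode_levels = mode_levels

    def natural_notes(self, sequence, mode):                 # natural note setting module
        levels = set(self.mode_levels[mode])
        return [(level, dur) for level, dur in sequence if level in levels]

    def natural_proportion(self, sequence, mode):            # note proportion determining module
        total = sum(dur for _, dur in sequence)
        natural = sum(dur for _, dur in self.natural_notes(sequence, mode))
        return natural / total if total else 0.0

    def determine_mode(self, sequence):                      # mode determining module
        return max(self.mode_levels, key=lambda m: self.natural_proportion(sequence, m))
```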
When the song mode determining device 50 of the preferred embodiment is used, the note sequence acquiring module 51, for example in a computer running karaoke software, first acquires the music score information of a song from the song's MIDI file; the score information includes the note sequence forming the song and the duration of each note in the note sequence. The note sequence refers to all notes in the music score of the song, and music has 12 notes in total, namely C, C#, D, D#, E, F, F#, G, G#, A, A# and B.
The note sequence acquiring module 51 then acquires the intonation levels of the major mode built on each of the 12 notes:
the intonation levels of C major are C, D, E, F, G, A and B;
the intonation levels of C# major are C#, D#, F, F#, G#, A# and C;
the intonation levels of D major are D, E, F#, G, A, B and C#;
the intonation levels of D# major are D#, F, G, G#, A#, C and D;
the intonation levels of E major are E, F#, G#, A, B, C# and D#;
the intonation levels of F major are F, G, A, A#, C, D and E;
the intonation levels of F# major are F#, G#, A#, B, C#, D# and F;
the intonation levels of G major are G, A, B, C, D, E and F#;
the intonation levels of G# major are G#, A#, C, C#, D#, F and G;
the intonation levels of A major are A, B, C#, D, E, F# and G#;
the intonation levels of A# major are A#, C, D, D#, F, G and A;
the intonation levels of B major are B, C#, D#, E, F#, G# and A#.
Then, the natural note setting module 52 sets, for each mode, the notes in the note sequence of the song acquired by the note sequence acquiring module 51 that fall on the intonation levels of that mode as the natural notes of that mode.
If the mode of the song is set to C major, the natural notes in the note sequence of the song are C, D, E, F, G, A and B; the other levels in the song, such as C#, D#, F#, G# and A#, are unnatural notes under C major of the song.
If the mode of the song is set to D major, the natural notes in the note sequence of the song are D, E, F#, G, A, B and C#; the other levels in the song, such as C, D#, F, G# and A#, are unnatural notes under D major of the song.
In this way, 12 sets of natural notes, one for each of the above 12 modes, can be determined for the note sequence of the song.
Then, the note proportion determining module 53 determines, for each mode, the natural note proportion of the note sequence of the song according to the durations of the natural notes set by the natural note setting module 52 within the note sequence. Natural note proportions of the note sequence of the song are thus obtained for all 12 modes.
If the song is in C major, the tonic of the song is C and the corresponding intonation levels are C, D, E, F, G, A and B, i.e. the levels C, D, E, F, G, A and B are the natural notes of C major of the song; the device then calculates the share of the durations of the levels C, D, E, F, G, A and B within the whole note sequence of the song as the natural note proportion of the note sequence under C major.
If the song is in D major, the tonic of the song is D and the corresponding intonation levels are D, E, F#, G, A, B and C#, i.e. the levels D, E, F#, G, A, B and C# are the natural notes of D major of the song; the device then calculates the share of the durations of the levels D, E, F#, G, A, B and C# within the whole note sequence of the song as the natural note proportion of the note sequence under D major.
If the note sequence of a song consists of, in order, a C of 1000 ms, a C of 2000 ms, an E of 1000 ms and an F# of 500 ms, then the natural notes under C major among these notes are C and E, so the natural note duration under C major is 1000 + 2000 + 1000 = 4000 ms; the natural notes under D major among these notes are E and F#, so the natural note duration under D major is 1000 + 500 = 1500 ms. The natural note proportions of a song therefore differ from mode to mode.
Then, the mode determining module 54 finds the largest of the natural note proportions of the note sequence of the song under the different modes obtained by the note proportion determining module 53, and determines the mode with the largest natural note proportion as the mode of the song. Since the natural note proportion corresponding to the correct mode of the song should be the largest, determining the mode of the song from the natural note proportion of its note sequence improves the accuracy of the subsequent pitch correction.
Finally, the pitch correction module 55 performs pitch correction on the notes sung by the karaoke user according to the intonation levels of the mode determined by the mode determining module 54.
If the mode determined by the mode determining module 54 is C major, the notes sung by the karaoke user are forcibly corrected to the levels C, D, E, F, G, A and B according to the nearest-level correction principle, and corrections to unnatural levels of C major such as C#, D#, F#, G# and A# no longer occur; the generation of unnatural notes is thus effectively avoided and the accuracy of pitch correction is improved.
This completes the tune determination by the song tune determination apparatus of the present preferred embodiment and the song pitch correction process.
The song mode determining device of the preferred embodiment determines the mode of the song according to the natural note proportion of the note sequence of the song, and performs pitch correction according to the pitch level in the tone corresponding to the mode, and the accuracy rate of the pitch correction is high.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a song mode determining apparatus according to a second preferred embodiment of the present invention. The song mode determining apparatus of the present preferred embodiment can be implemented using the second preferred embodiment of the song mode determining method described above, and the song mode determining apparatus 60 includes a note sequence obtaining module 61, a natural note setting module 62, a note proportion determining module 63, a mode determining module 64, and a pitch correcting module 65.
The note sequence acquiring module 61 is configured to acquire the music score information of a song, where the score information includes the note sequence forming the song and the duration of each note in the note sequence, and to acquire the intonation levels of each mode; the natural note setting module 62 is configured to determine, according to the intonation levels of each mode, the natural notes in the note sequence of the song corresponding to each mode; the note proportion determining module 63 is configured to determine the natural note proportion of the note sequence of the song corresponding to each mode according to the note time ratio of the natural notes within the note sequence; the mode determining module 64 is configured to determine the mode with the largest natural note proportion as the mode of the song; and the pitch correction module 65 is configured to perform pitch correction on the notes sung for the song according to the intonation levels of the mode of the song.
When the song mode determining device 60 of the preferred embodiment is used, the note sequence acquiring module 61 first acquires the music score information of the song and acquires the intonation levels of each mode; the natural note setting module 62 then determines, according to the intonation levels of each mode, the natural notes in the note sequence of the song corresponding to each mode.
Then, the note proportion determining module 63 obtains the note time ratio of the natural note corresponding to each mode in all note sequences (i.e. the sum of the natural note and the unnatural note) according to the natural note corresponding to each mode obtained by the natural note setting module 62, and uses the note time ratio as the natural note proportion of the note sequence of the song in the mode.
For example, the total note length of the sequence of notes in a song is 35000ms, wherein the C scale appears 7000ms, the D scale appears 5000ms, the E scale appears 6000ms, the F scale appears 2000ms, the G scale appears 7000ms, the A scale appears 6000ms, the B scale appears 1000ms, the F # scale appears 500ms, and the C # scale appears 500 ms.
The natural notes under C major are C, D, E, F, G, A and B, so the natural note duration under C major is 7000 + 5000 + 6000 + 2000 + 7000 + 6000 + 1000 = 34000 ms. The natural notes under D major are D, E, F#, G, A, B and C#, so the natural note duration under D major is 5000 + 6000 + 500 + 7000 + 6000 + 1000 + 500 = 26000 ms. The natural note proportion of the note sequence of the song is therefore 34000/35000 = 0.97 under C major and 26000/35000 = 0.74 under D major.
Then, the mode determining module 64 determines the mode with the largest natural note proportion of the note sequence obtained by the note proportion determining module 63 as the mode of the song. If the natural note proportion calculated by the note proportion determining module 63 is 0.97 for C major of the song, 0.74 for D major, and correspondingly …… for the other modes, and 0.97 is the maximum, then C major is determined as the mode of the song.
Since the natural note proportion corresponding to the correct mode of the song should be the largest, determining the mode of the song from the natural note proportion of its note sequence improves the accuracy of the subsequent pitch correction.
Finally, the pitch correction module 65 performs pitch correction on the notes sung by the karaoke user according to the intonation levels of the mode determined by the mode determining module 64.
If the mode determined by the mode determining module 64 is C major, the notes sung by the karaoke user are forcibly corrected to the levels C, D, E, F, G, A and B according to the nearest-level correction principle, and corrections to unnatural levels of C major such as C#, D#, F#, G# and A# no longer occur; the generation of unnatural notes is thus effectively avoided and the accuracy of pitch correction is improved.
This completes the tune determination by the song tune determination apparatus of the present preferred embodiment and the song pitch correction process.
On the basis of the first preferred embodiment, the song mode determining device of the present preferred embodiment determines the mode of the song according to the note time ratio of the natural notes in all note sequences under different modes, and the obtained mode or song tune is more accurate; and the pitch correction is carried out according to the intonation level corresponding to the mode, and the accuracy of the pitch correction is higher.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a song mode determining apparatus according to a third preferred embodiment of the present invention. The song mode determining apparatus of the preferred embodiment can be implemented using the third preferred embodiment of the song mode determining method described above, and the song mode determining apparatus 70 includes a note sequence obtaining module 71, a natural note setting module 72, a note proportion determining module 73, a mode determining module 74 and a pitch correcting module 75.
The note sequence acquiring module 71 is configured to acquire the music score information of a song, where the score information includes the note sequence forming the song and the duration of each note in the note sequence, and to acquire the intonation levels of each mode; the natural note setting module 72 is configured to determine, according to the intonation levels of each mode, the natural notes in the note sequence of the song corresponding to each mode; the note proportion determining module 73 is configured to determine the natural note proportion of the note sequence of the song corresponding to each mode according to the note time ratio of the natural notes within the note sequence and the weights of the natural notes; the mode determining module 74 is configured to determine the mode with the largest natural note proportion as the mode of the song; and the pitch correction module 75 is configured to perform pitch correction on the notes sung for the song according to the intonation levels of the mode of the song.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a note proportion determining module of a third preferred embodiment of the song mode determining apparatus of the present invention. The note proportion determining module 73 includes a scale weight obtaining unit 81, a natural note weight determining unit 82, and a natural note proportion determining unit 83.
The scale weight acquiring unit 81 is configured to acquire the scale weights of the intonation levels corresponding to each mode; the natural note weight determining unit 82 is configured to determine the natural note weights of the natural notes in the note sequence according to those scale weights; and the natural note proportion determining unit 83 is configured to determine the natural note proportion of the note sequence of the song corresponding to each mode according to the note time ratio of the natural notes within all notes of the sequence and the natural note weights of the natural notes in the note sequence.
When the song mode determining apparatus 70 of the preferred embodiment is used, first, the note sequence obtaining module 71 obtains the score information of the song, and obtains the intonation level of each mode; the natural note setting module 72 then determines the natural notes in the sequence of notes corresponding to each mode of the song based on the level of pitch in each mode.
If the mode of a song were determined only from the note time ratio of the natural notes of each mode within all notes of the note sequence, the mode of some songs that use relatively few distinct levels could not be determined accurately.
For example, if a song uses only the levels C, D, E, G and A, its natural note proportion under C major (whose intonation levels are C, D, E, F, G, A and B) and its natural note proportion under G major (whose intonation levels are G, A, B, C, D, E and F#) are the same. In this preferred embodiment, therefore, the intonation levels of each mode are given weights according to the stability of each level within the mode, so that the natural note proportions under different modes can be better distinguished, i.e. the more a song dwells on the stable levels of a mode, the higher its natural note proportion under that mode.
The scale weight acquiring unit 81 of the note proportion determining module 73 acquires the scale weights of the intonation levels corresponding to each mode.
For a song in any mode, the seven natural levels are numbered, from low to high, the first, second, third, fourth, fifth, sixth and seventh levels.
If the song is in C major, the corresponding intonation levels are the first level C, the second level D, the third level E, the fourth level F, the fifth level G, the sixth level A and the seventh level B. Of these, the first level C, the second level D, the third level E, the fifth level G and the sixth level A are first weighted levels, and the fourth level F and the seventh level B are second weighted levels.
Since the first level of any mode is the tonic, the second level the supertonic, the third level the mediant, the fifth level the dominant and the sixth level the submediant, the first weighted levels can appear at the beginning and at the end of a song passage; they are therefore the more stable levels and occur more frequently in the corresponding mode. The fourth level of any mode is the subdominant and the seventh level is the leading tone; a second weighted level cannot end a song passage on its own and must be followed by a first weighted level for resolution, so the second weighted levels are less stable and occur less frequently in the corresponding mode.
Therefore, the tone scale weight of the first weighted tone scale of the intonation tone scale corresponding to each mode is set to be greater than the tone scale weight of the second weighted tone scale.
The natural note weight determining unit 82 of the note proportion determining module 73 determines the natural note weights of the corresponding natural notes in the note sequence according to the scale weights of the intonation scales corresponding to the respective scales acquired by the scale weight acquiring unit 81. That is, the natural note weight of the natural note corresponding to the first weighted scale is greater than the natural note weight of the natural note corresponding to the second weighted scale.
The natural note ratio determining unit 83 obtains the note time ratio of the natural note corresponding to each mode in all note sequences (i.e., the sum of the natural note and the unnatural note) according to the natural note corresponding to each mode obtained by the natural note setting module 72.
Then, the natural note proportion determining unit 83 determines the natural note proportion of the mode corresponding to the note sequence of the song according to the note time proportion of the natural notes in all note sequences and the natural note weight of the natural notes in the note sequence.
The method specifically comprises the following steps: if the scale factor of the first weighted scale factor of the intonation scale corresponding to the mode is set to be 2, the scale factor of the second weighted scale factor is set to be 1.
Meanwhile, if the total note length of the note sequence of the song is 35000ms, 7000ms appears at the C scale, 5000ms appears at the D scale, 6000ms appears at the E scale, 2000ms appears at the F scale, 7000ms appears at the G scale, 6000ms appears at the A scale, 1000ms appears at the B scale, 500ms appears at the F scale, and 500ms appears at the C scale.
Thus, the natural note in the major key of C is C, D, E, F, G, A, B, and the natural note ratio of the sequence of notes in the major key of C is (7000 × 2+5000 × 2+6000 × 2+2000 × 1+7000 × 2+6000 × 2+1000 × 1)/35000 ═ 1.857; the natural notes in the major key D are D, E, F #, G, A, B, C #, and the natural note ratio of the sequence of notes in the song in the major key D is (5000 × 2+6000 × 2+500 × 2+7000 × 1+6000 × 2+1000 × 2+500 × 1)/35000 ═ 1.271; this determines the natural note proportion of the song at each tune.
Then, the mode determining module 74 determines the mode corresponding to the maximum natural note proportion of the note sequence of the song obtained by the note proportion determining module 73 as the mode of the song. The natural note proportion corresponding to the C mode of the song calculated in step S405 is 1.857, the natural note proportion corresponding to the D mode is 1.271, and the natural note proportions corresponding to the other modes of the song are ……; if 1.857 is the maximum, the C mode is determined as the mode of the song, i.e. the tune of the song is set to C major.
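To make the calculation above concrete, the following Python sketch recomputes the weighted natural note proportions for the example durations and then picks the mode with the largest value. The function name natural_note_ratio, the dictionary layout of the durations and the use of major-scale degree offsets are illustrative assumptions, not the patented implementation itself.

# A minimal sketch of the weighted natural-note-proportion calculation above.
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Semitone offsets of the seven in-key sound levels of a major mode.
MAJOR_DEGREES = [0, 2, 4, 5, 7, 9, 11]
# Degree weights: levels 1, 2, 3, 5, 6 -> 2 (first weighted level); levels 4, 7 -> 1.
DEGREE_WEIGHTS = [2, 2, 2, 1, 2, 2, 1]

def natural_note_ratio(durations_ms, tonic):
    """Weighted duration of the in-key (natural) notes divided by the total note length."""
    tonic_idx = PITCH_CLASSES.index(tonic)
    total = sum(durations_ms.values())
    weighted = 0
    for degree, offset in enumerate(MAJOR_DEGREES):
        pitch_class = PITCH_CLASSES[(tonic_idx + offset) % 12]
        weighted += durations_ms.get(pitch_class, 0) * DEGREE_WEIGHTS[degree]
    return weighted / total

# Note durations per pitch class from the example (total 35000 ms).
durations = {"C": 7000, "D": 5000, "E": 6000, "F": 2000, "G": 7000,
             "A": 6000, "B": 1000, "F#": 500, "C#": 500}

print(round(natural_note_ratio(durations, "C"), 3))  # 1.857
print(round(natural_note_ratio(durations, "D"), 3))  # 1.271

# The mode with the largest natural note proportion is taken as the song's mode.
best_tonic = max(PITCH_CLASSES, key=lambda t: natural_note_ratio(durations, t))
print(best_tonic)  # "C" for this example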
Since the natural note proportion corresponding to the correct mode of the song should be the largest, determining the mode of the song from the natural note proportion of its note sequence improves the accuracy of the subsequent pitch correction.
The pitch correction module 75 performs pitch correction on the sung output notes of the karaoke user according to the intonation sound levels corresponding to the mode obtained by the mode determining module 74.
If the mode determined by the mode determining module 74 is C major, the output notes sung by the karaoke user are corrected onto the sound levels C, D, E, F, G, A and B according to the correction rule, and they will never be corrected onto unnatural notes of C major such as C#, D#, F#, G# and A#; the generation of unnatural notes is thus effectively avoided and the accuracy of pitch correction is improved.
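A minimal sketch of this kind of correction is given below, assuming the sung notes arrive as MIDI note numbers and that an out-of-key note is snapped to the nearest in-key sound level; the snapping direction and the MIDI representation are assumptions, the text only requires that corrected notes stay on the in-key levels of the detected mode.

# In-key pitch classes of C major as semitone classes: C D E F G A B.
C_MAJOR_PCS = {0, 2, 4, 5, 7, 9, 11}

def correct_pitch(midi_note, in_key_pcs=C_MAJOR_PCS):
    """Return the note unchanged if it is in key, otherwise the nearest in-key note."""
    if midi_note % 12 in in_key_pcs:
        return midi_note
    # Search outward for the closest in-key note, preferring the lower neighbor on ties.
    for delta in (-1, 1, -2, 2):
        if (midi_note + delta) % 12 in in_key_pcs:
            return midi_note + delta
    return midi_note

print(correct_pitch(61))  # C#4 (61) is corrected down to C4 (60)
print(correct_pitch(64))  # E4 (64) is already in key and stays unchanged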
This completes the mode determination and pitch correction process performed by the song mode determining apparatus of the present preferred embodiment.
On the basis of the second preferred embodiment, the song mode determining apparatus of the present preferred embodiment determines the mode of the song according to the note time ratios of the natural notes in the note sequence under the different modes together with the natural note weights, which avoids the difficulty of determining the mode from the note time ratio alone when the differences between the candidate modes are small, and further improves the accuracy of the obtained mode, i.e. of the song's tune.
Preferably, the tone scale weight obtaining unit 81 may also divide the intonation tone scale corresponding to each mode into a first weighted tone scale, a second weighted tone scale and a third weighted tone scale.
If the song is in C major, the corresponding intonation sound levels are the first sound level C, the second sound level D, the third sound level E, the fourth sound level F, the fifth sound level G, the sixth sound level A and the seventh sound level B. The first sound level C, the third sound level E, the fifth sound level G and the sixth sound level A are the first weighted sound level, the second sound level D is the second weighted sound level, and the fourth sound level F and the seventh sound level B are the third weighted sound level.
The tone scale weight of the first weighted tone scale of the intonation tone scale corresponding to each mode is set to be greater than the tone scale weight of the second weighted tone scale, and the tone scale weight of the second weighted tone scale is greater than the tone scale weight of the third weighted tone scale.
Thus, the natural note weight determining unit 82 determines that the natural note weight of the natural note corresponding to the first weighted sound level is greater than that of the natural note corresponding to the second weighted sound level, which in turn is greater than that of the natural note corresponding to the third weighted sound level.
Because the natural note weights are divided more finely, the accuracy of the obtained mode, i.e. of the song's tune, is further improved.
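Expressed in the same per-degree form as the earlier sketch, this three-tier variant might look as follows; the numeric values 3/2/1 are purely illustrative assumptions, since the text only requires the first tier to outweigh the second and the second to outweigh the third.

# Illustrative three-tier degree weights (the values 3/2/1 are assumptions).
# Levels 1, 3, 5, 6 -> first tier; level 2 -> second tier; levels 4, 7 -> third tier.
DEGREE_WEIGHTS_THREE_TIER = [3, 2, 3, 1, 3, 3, 1]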
The following describes a specific working principle of the song mode determining method and the song mode determining apparatus according to the present invention with an embodiment.
For a given song, the natural note ratios (time ratios) of the whole note sequence, computed for the intonation sound levels of each of the 12 modes, are as shown in the following table:
TABLE 1 (provided as images in the original publication; it lists, for each of the 12 candidate modes, the natural note ratio, i.e. the note time ratio, of the song's note sequence)
Since the natural note proportion corresponding to D# major, i.e. the D# mode, is the largest, the song mode determining apparatus determines D# as the mode of the song, and therefore uses the intonation sound levels of D# major, namely D#, F, G, G#, A#, C and D, to perform pitch correction on the sung output notes of the song.
Preferably, the scale weights of the intonation sound levels corresponding to each mode are also taken into account in the natural note proportion. Here the scale weight of the first sound level of each mode is set to 7, that of the second sound level to 5, the third to 6, the fourth to 2, the fifth to 7, the sixth to 6 and the seventh to 1.
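In the per-degree form used in the earlier sketches, these refined weights would read as follows; substituting them for the earlier weight vector yields weighted proportions of the kind shown in Table 2 below (the exact figures depend on the song's per-level note durations, which are not given here).

# Refined per-mode scale weights from the text, listed for degrees 1 to 7.
DEGREE_WEIGHTS_REFINED = [7, 5, 6, 2, 7, 6, 1]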
Thus, the natural note ratio is determined according to the scale weight of the intonation scale corresponding to each mode and the note time ratio of the natural notes of the intonation scale in all note sequences, and the specific results are shown in the following table:
Mode  Note time ratio  Natural note proportion
C 0.801036 4.255036
C# 0.640429 2.756964
D 0.427214 1.245536
D# 0.996393 4.892429
E 0.266821 1.103321
F 0.836429 4.182643
F# 0.444536 1.59925
G 0.658714 3.04375
G# 0.925286 4.394679
A 0.249179 0.640857
A# 0.925179 4.483536
B 0.302214 1.564714
TABLE 2
Here too, the natural note proportion corresponding to D# major, i.e. the D# mode, is the largest, so D# is determined as the mode of the song.
Since the modes adjacent to D# major, namely D major and E major, necessarily contain more unnatural notes, calculating the natural note proportions with the scale weights of the in-key sound levels brings the proportions closer to their true values; that is, it widens the gap between the natural note proportion of D# major and those of the neighboring modes.
Therefore, determining the natural note proportion of each mode from the scale weights of the intonation sound levels corresponding to that mode together with the note time ratios of the natural notes of those levels over the whole note sequence is more accurate, so the obtained mode is more accurate and the pitch correction accuracy of the song mode determining apparatus is improved.
The song mode determining method and the song mode determining device determine the mode of the song according to the natural note proportion of the note sequence of the song, correct the pitch according to the pitch level in the tone corresponding to the mode, have high accuracy of pitch correction, and solve the technical problem of low accuracy of pitch correction of the existing song mode determining method.
As used herein, the terms "component," "module," "system," "interface," "process," and the like are generally intended to refer to a computer-related entity: hardware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components can reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
Fig. 9 and the following discussion provide a brief, general description of an operating environment of an electronic device in which a song tune determination apparatus of the present invention may be implemented. The operating environment of FIG. 9 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment. Example electronic devices 912 include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Although not required, embodiments are described in the general context of "computer readable instructions" being executed by one or more electronic devices. Computer readable instructions may be distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
Fig. 9 illustrates an example of an electronic device 912 including one or more embodiments of the song tune determination apparatus of the present invention. In one configuration, electronic device 912 includes at least one processing unit 916 and memory 918. Depending on the exact configuration and type of electronic device, memory 918 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. This configuration is illustrated in fig. 9 by dashed line 914.
In other embodiments, electronic device 912 may include additional features and/or functionality. For example, device 912 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic storage, optical storage, and the like. Such additional storage is illustrated in fig. 9 by storage 920. In one embodiment, computer readable instructions to implement one or more embodiments provided herein may be in storage 920. Storage 920 may also store other computer readable instructions to implement an operating system, an application program, and the like. Computer readable instructions may be loaded in memory 918 for execution by processing unit 916, for example.
The term "computer readable media" as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data. Memory 918 and storage 920 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can accessed by electronic device 912. Any such computer storage media may be part of electronic device 912.
Electronic device 912 may also include communication connection 926 that allows electronic device 912 to communicate with other devices. Communication connection 926 may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting electronic device 912 to other electronic devices. Communication connection 926 may include a wired connection or a wireless connection. Communication connection 926 may transmit and/or receive communication media.
The term "computer readable media" may include communication media. Communication media typically embodies computer readable instructions or other data in a "modulated data signal" such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" may include signals that: one or more of the signal characteristics may be set or changed in such a manner as to encode information in the signal.
The electronic device 912 may include input device(s) 924 such as keyboard, mouse, pen, voice input device, touch input device, infrared camera, video input device, and/or any other input device. Output device(s) 922 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 912. Input device 924 and output device 922 may be connected to electronic device 912 via a wired connection, wireless connection, or any combination thereof. In one embodiment, an input device or an output device from another electronic device may be used as input device 924 or output device 922 for electronic device 912.
Components of electronic device 912 may be connected by various interconnects, such as a bus. Such interconnects may include Peripheral Component Interconnect (PCI), such as PCI express, Universal Serial Bus (USB), firewire (IEEE1394), optical bus structures, and the like. In another embodiment, components of electronic device 912 may be interconnected by a network. For example, memory 918 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, an electronic device 930 accessible via a network 928 may store computer readable instructions to implement one or more embodiments provided by the present invention. Electronic device 912 may access electronic device 930 and download a part or all of the computer readable instructions for execution. Alternatively, electronic device 912 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at electronic device 912 and some at electronic device 930.
Various operations of embodiments are provided herein. In one embodiment, the one or more operations may constitute computer readable instructions stored on one or more computer readable media, which when executed by an electronic device, will cause the computing device to perform the operations. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Those skilled in the art will appreciate alternative orderings having the benefit of this description. Moreover, it should be understood that not all operations are necessarily present in each embodiment provided herein.
Also, as used herein, the word "preferred" is intended to serve as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word "preferred" is intended to present concepts in a concrete fashion. The term "or" as used in this application is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, "X employs A or B" is intended to include any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied in any of the foregoing instances.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The present disclosure includes all such modifications and alterations, and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for a given or particular application. Furthermore, to the extent that the terms "includes," "has," "contains," or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Each apparatus or system described above may perform the method in the corresponding method embodiment.
In summary, although the present invention has been described with reference to the preferred embodiments, the above-described preferred embodiments are not intended to limit the present invention, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention, therefore, the scope of the present invention shall be determined by the appended claims.

Claims (14)

1. A method for determining a tune of a song, comprising:
acquiring music score information of a song, wherein the music score information comprises a note sequence forming the song and the duration of each note in the note sequence;
acquiring the intonation level of each mode;
according to the intonation level of each mode, determining natural notes in the note sequence corresponding to each mode of the song;
determining the proportion of each mode of the song corresponding to the natural notes in the note sequence according to the note sequence of the song and the duration of each note in the note sequence; and
determining the mode corresponding to the maximum ratio of the natural notes as the mode of the song.
2. The method for determining the tune of a song according to claim 1, wherein the step of determining the tune corresponding to the largest proportion of the natural note as the tune of the song further comprises the steps of:
performing pitch correction on the sung output notes of the song according to the tone internal sound level of the mode of the song.
3. A method as claimed in claim 1, wherein the step of determining the proportion of each mode of the song corresponding to the natural notes in the note sequence according to the note sequence of the song and the duration of each note in the note sequence is specifically:
determining the natural note proportion of the note sequence of the song corresponding to each mode according to the note time proportion of the natural notes in the note sequence, wherein the note time proportion of the natural notes in the note sequence is the ratio of the natural note length to the total note length under the corresponding mode, and the total note length and the natural note length are calculated according to the duration of each note in the note sequence.
4. A method as claimed in claim 1, wherein the step of determining the proportion of each mode of the song corresponding to the natural notes in the note sequence according to the note sequence of the song and the duration of each note in the note sequence is specifically:
determining the natural note proportion of the note sequence of the song corresponding to each mode according to the note time proportion of the natural notes in the note sequence and the natural note weight, wherein the note time proportion of the natural notes in the note sequence is the ratio of the natural note length to the total note length under the corresponding mode, and the total note length and the natural note length are calculated according to the duration of each note in the note sequence.
5. The method for determining the tone of a song according to claim 4, wherein the step of determining the natural note ratio of the note sequence of the song corresponding to each tone according to the note time ratio of the natural notes in the note sequence and the natural note weight comprises:
acquiring the scale weight of the intonation sound level corresponding to the mode;
determining natural note weights of the natural notes in the note sequence according to the scale weights of the intonation scale; and
determining the natural note ratios of the note sequence of the song corresponding to the various modes according to the note time ratios of the natural notes in all the note sequences and the natural note weights of the natural notes in the note sequences.
6. The song mode determination method according to claim 5, wherein the intonation sound level comprises a first weighted sound level and a second weighted sound level, the sound level weight of the first weighted sound level being greater than the sound level weight of the second weighted sound level;
the natural note weight of the natural note corresponding to the first weighted scale is greater than the natural note weight of the natural note corresponding to the second weighted scale.
7. The song mode determination method according to claim 5, wherein the intonation levels include a first weighted level, a second weighted level, and a third weighted level, the first weighted level having a level weight greater than the second weighted level, the second weighted level having a level weight greater than the third weighted level;
the natural note weight of the natural note corresponding to the first weighted scale is greater than the natural note weight of the natural note corresponding to the second weighted scale; the natural note weight of the natural note corresponding to the second weighted scale is greater than the natural note weight of the natural note corresponding to the third weighted scale.
8. A song mode determination apparatus, comprising:
a note sequence acquisition module, used for acquiring music score information of a song, wherein the music score information comprises a note sequence forming the song and the duration of each note in the note sequence, and for obtaining the tone internal sound level of each mode;
the natural note setting module is used for determining natural notes in the note sequence corresponding to each mode of the song according to the intonation level of each mode;
a note proportion determining module, configured to determine, according to the note sequence of the song and the duration of each note in the note sequence, a proportion of each tone of the song corresponding to the natural note in the note sequence; and
a mode determining module, used for determining the mode corresponding to the maximum ratio of the natural notes as the mode of the song.
9. The song tune determination apparatus according to claim 8, further comprising:
a pitch correction module, used for performing pitch correction on the sung output notes of the song according to the tone internal sound level of the mode of the song.
10. The apparatus of claim 8, wherein the note proportion determining module is specifically configured to determine a natural note proportion of the note sequence of the song corresponding to each mode according to a note time proportion of the natural note in the note sequence, the note time proportion of the natural note in the note sequence is a ratio of a natural note length to a total note length in the corresponding mode, and the total note length and the natural note length are calculated according to a duration of each note in the note sequence.
11. The apparatus of claim 8, wherein the note proportion determining module is specifically configured to determine a natural note proportion of the note sequence of the song corresponding to each mode according to the note time proportion of the natural note in the note sequence and a natural note weight, the note time proportion of the natural note in the note sequence is a ratio of a natural note length to a total note length in the corresponding mode, and the total note length and the natural note length are calculated according to a duration of each note in the note sequence.
12. The apparatus of claim 11, wherein the note proportion determination module comprises:
the tone level weight acquiring unit is used for acquiring the tone level weight of the intonation tone level corresponding to the mode;
a natural note weight determining unit, configured to determine a natural note weight of the natural note in the note sequence according to the scale weight of the pitch level in the key; and
a natural note ratio determining unit, used for determining the natural note ratio of the note sequence of the song corresponding to each mode according to the note time ratios of the natural notes in all the note sequences and the natural note weights of the natural notes in the note sequences.
13. The song mode determination device of claim 12, wherein the intonation level comprises a first weighted level and a second weighted level, the first weighted level having a level weight greater than the second weighted level;
the natural note weight of the natural note corresponding to the first weighted scale is greater than the natural note weight of the natural note corresponding to the second weighted scale.
14. The mode determination device of claim 12, wherein the intonation sound level comprises a first weighted sound level, a second weighted sound level, and a third weighted sound level, wherein the sound level weight of the first weighted sound level is greater than the second weighted sound level, and wherein the sound level weight of the second weighted sound level is greater than the third weighted sound level;
the natural note weight of the natural note corresponding to the first weighted scale is greater than the natural note weight of the natural note corresponding to the second weighted scale; the natural note weight of the natural note corresponding to the second weighted scale is greater than the natural note weight of the natural note corresponding to the third weighted scale.
CN201610149513.8A 2016-03-16 2016-03-16 Song mode determining method and song mode determining device Active CN105845115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610149513.8A CN105845115B (en) 2016-03-16 2016-03-16 Song mode determining method and song mode determining device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610149513.8A CN105845115B (en) 2016-03-16 2016-03-16 Song mode determining method and song mode determining device

Publications (2)

Publication Number Publication Date
CN105845115A CN105845115A (en) 2016-08-10
CN105845115B true CN105845115B (en) 2021-05-07

Family

ID=56587479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610149513.8A Active CN105845115B (en) 2016-03-16 2016-03-16 Song mode determining method and song mode determining device

Country Status (1)

Country Link
CN (1) CN105845115B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548768B (en) * 2016-10-18 2018-09-04 广州酷狗计算机科技有限公司 A kind of modified method and apparatus of note
CN108231046B (en) * 2017-12-28 2020-07-07 腾讯音乐娱乐科技(深圳)有限公司 Song tone identification method and device
CN112435680A (en) * 2019-08-08 2021-03-02 北京字节跳动网络技术有限公司 Audio processing method and device, electronic equipment and computer readable storage medium
CN111081209B (en) * 2019-12-19 2022-06-07 中国地质大学(武汉) Chinese national music mode identification method based on template matching
CN111404808B (en) * 2020-06-02 2020-09-22 腾讯科技(深圳)有限公司 Song processing method
CN116312636B (en) * 2023-03-21 2024-01-09 广州资云科技有限公司 Method, apparatus, computer device and storage medium for analyzing electric tone key

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004246069A (en) * 2003-02-13 2004-09-02 Yamaha Corp Electronic music device and transposition setting program
CN102521281A (en) * 2011-11-25 2012-06-27 北京师范大学 Humming computer music searching method based on longest matching subsequence algorithm
CN103035253A (en) * 2012-12-20 2013-04-10 成都玉禾鼎数字娱乐有限公司 Method of automatic recognition of music melody key signatures
CN104008747A (en) * 2013-02-27 2014-08-27 雅马哈株式会社 Apparatus and method for detecting music chords
CN104091594A (en) * 2013-08-16 2014-10-08 腾讯科技(深圳)有限公司 Audio classifying method and device
CN105161087A (en) * 2015-09-18 2015-12-16 努比亚技术有限公司 Automatic harmony method, device, and terminal automatic harmony operation method


Also Published As

Publication number Publication date
CN105845115A (en) 2016-08-10

Similar Documents

Publication Publication Date Title
CN105845115B (en) Song mode determining method and song mode determining device
CN109272975B (en) Automatic adjustment method and device for singing accompaniment and KTV jukebox
CN109166564B (en) Method, apparatus and computer readable storage medium for generating a musical composition for a lyric text
CN109065008B (en) Music performance music score matching method, storage medium and intelligent musical instrument
CN108257613B (en) Method and device for correcting pitch deviation of audio content
US9607593B2 (en) Automatic composition apparatus, automatic composition method and storage medium
CN101996232B (en) Information processing apparatus, method for processing information, and program
US9460694B2 (en) Automatic composition apparatus, automatic composition method and storage medium
CN108206026B (en) Method and device for determining pitch deviation of audio content
CN106157979B (en) A kind of method and apparatus obtaining voice pitch data
US20190051275A1 (en) Method for providing accompaniment based on user humming melody and apparatus for the same
US9245508B2 (en) Music piece order determination device, music piece order determination method, and music piece order determination program
JP2007121457A (en) Information processor, information processing method, and program
CN111613199B (en) MIDI sequence generating device based on music theory and statistical rule
US11074897B2 (en) Method and apparatus for training adaptation quality evaluation model, and method and apparatus for evaluating adaptation quality
CN112861521B (en) Speech recognition result error correction method, electronic device and storage medium
CN111462748A (en) Voice recognition processing method and device, electronic equipment and storage medium
JP5196550B2 (en) Code detection apparatus and code detection program
CN106847273B (en) Awakening word selection method and device for voice recognition
CN109841202B (en) Rhythm generation method and device based on voice synthesis and terminal equipment
CN113140230B (en) Method, device, equipment and storage medium for determining note pitch value
CN111863030A (en) Audio detection method and device
JPH0736478A (en) Calculating device for similarity between note sequences
US20210390932A1 (en) Methods and systems for vocalist part mapping
JP2007122186A (en) Information processor, information processing method and program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant