CN100373383C - Music rhythm sectionalized automatic marking method based on eigen-note - Google Patents

Music rhythm sectionalized automatic marking method based on eigen-note

Info

Publication number
CN100373383C
Authority
CN
China
Prior art keywords
note
melody
feature
segmentation
length
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2005100294966A
Other languages
Chinese (zh)
Other versions
CN1737798A (en)
Inventor
Wu Yadong (吴亚栋)
Zhao Fang (赵芳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CNB2005100294966A priority Critical patent/CN100373383C/en
Publication of CN1737798A publication Critical patent/CN1737798A/en
Application granted granted Critical
Publication of CN100373383C publication Critical patent/CN100373383C/en

Abstract

The present invention relates to a method for automatically marking melody segments in music based on feature notes, belonging to the technical field of computer applications. The method comprises the following steps. Step 1: digital music file reading and melody feature extraction, which controls the reading of a digital music file from a digital music material library and the extraction of its melody features. Step 2: melody segment feature note detection, which performs melody segment detection and marking on the extracted melody features on the basis of the note class and note length features of the melody. Step 3: melody feature template generation, which finally produces a melody feature template file carrying the melody segment information and stores it in a music melody feature database; the melody features are described by note features characterized as a vector sequence of pitch difference and time. The advantage of the present invention is that it preserves tolerance to errors in the user's hummed input while allowing the system to raise the speed of matching and retrieving hummed queries substantially.

Description

Method for automatically marking melody segments in music based on feature notes
Technical field
The present invention relates to a method in the field of computer application technology, and specifically to a method for automatically marking melody segments in music based on feature notes.
Background art
Digital music retrieval driven by hummed input is known as query by humming (QBH). It allows a user to retrieve a desired song simply by humming part of it: as long as the user remembers a fragment of the melody and hums it into a microphone, a QBH retrieval system can locate the song for the user. For QBH, the melody-matching techniques proposed at home and abroad include approximate string matching algorithms, typically dynamic programming (DP) and fast approximate matching, pitch-contour geometric matching, and the linear alignment approximate melody matching algorithm (LAN). In these melody-matching methods the melody feature pattern is described by a note feature sequence, where each note feature is generally characterized by note pitch (or relative pitch difference) and note length (or relative note length ratio). When the note feature sequence of a hummed input fragment is matched against the non-equal-length note feature sequence of a complete standard melody, the note sequence of the input fragment is shifted note by note along the note sequence of the standard melody and matched at every position.
A search of the prior art literature found the article "Linear alignment approximate melody matching algorithm", published in Journal of Computer Research and Development, Vol. 40, No. 11, November 2003, pp. 1554-1560. When the input melody fragment is shifted along the note sequence of the standard melody and reaches its K-th note, the core procedure of that algorithm is: (1) align the first note of the input fragment with the K-th note of the standard melody and, starting from the K-th note, take from the standard melody a note subsequence 1.3 times the length of the input fragment as the standard matching section for this alignment; (2) perform linear-alignment approximate melody matching of the two sections on the time axis, that is, linearly stretch the input fragment to the same length as the note subsequence of the standard melody, align the notes that fall within a given error range of each other, and compute the rhythm similarity, while also comparing the pitch difference of the two equal-length sections at each time point to compute the pitch similarity; finally, combine the rhythm and pitch similarities into a matching score between the input fragment and this section of the standard melody. The first note of the input fragment is then shifted one note further along the note sequence of the standard melody and the core procedure is repeated, until the end of the melody is reached. The linear alignment approximate melody matching algorithm is tolerant of rhythm errors and achieves high matching precision, but because this shifting is performed one note at a time its response time is long, making it unsuitable for matching and retrieval over a large-scale digital music library. As the scale of digital music libraries grows, the conflict between the response speed and the retrieval precision of melody matching becomes increasingly prominent and has become a major bottleneck on the way to practical digital music retrieval systems based on hummed input.
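For concreteness, the note-by-note shifting that such methods perform can be outlined in a few lines. The sketch below is illustrative only: the callable lan_match_section stands in for the linear-alignment scoring of two sections described above and is an assumption, not the published interface of the cited algorithm.

```python
def slide_match(query, reference, lan_match_section, stretch=1.3):
    """Baseline note-unit sliding match (illustrative sketch).

    query, reference: note feature sequences, e.g. lists of (pitch_diff, onset) pairs.
    lan_match_section: assumed scoring callable comparing the query with one
    reference section; a placeholder for the linear-alignment comparison.
    """
    best_score, best_pos = float("-inf"), None
    span = int(len(query) * stretch)          # reference section is ~1.3x the query length
    for k in range(len(reference)):           # shift the query head one note at a time
        section = reference[k:k + span]
        if len(section) < len(query):
            break
        score = lan_match_section(query, section)
        if score > best_score:
            best_score, best_pos = score, k
    return best_score, best_pos
```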
Summary of the invention
The object of the present invention is to address the deficiencies of the prior art by providing a method for automatically marking melody segments in music based on feature notes, so that a content-based digital music retrieval system adopting the melody feature data structure produced by this marking method remains tolerant of errors in the user's hummed input while greatly raising the speed of matching and retrieval of hummed queries.
The present invention is achieved through the following technical solution; the method of the invention consists of the steps below:
(1) Digital music file reading and melody feature extraction step: used to control the reading of a digital music file from the digital music material library and the extraction of its melody features;
(2) Melody segment feature note detection step: used to perform melody segment detection and marking on the extracted melody features;
(3) Melody feature template generation step: used to finally generate a melody feature template file carrying the melody segment information and save it in the music melody feature database.
The digital music file may be a music performance data file in MIDI format, or an audio waveform file such as WAVE or MP3. The melody segment feature note detection step is carried out on the basis of the note class feature and the note length feature, and the method has the following characteristic: the melody features are described by note features characterized as a vector sequence of pitch difference and time. Here the "pitch difference" of a note, its pitch feature, is the difference between its frequency (pitch) and that of the previous note; it is measured in semitones, so that humming by arbitrary singers in arbitrary keys can be accommodated. The "time" of a note, its note length feature, is the onset of that note, which expresses the rhythm of the melody. The onset rather than the duration is chosen as the rhythm parameter because of how users hum: a user generally finds it fairly easy to reproduce where each note begins, but is far less consistent about how long each note lasts. If only pitch information were used and this timing information ignored, the matching success rate would inevitably decline as the music library grows. Moreover, in view of the limitation of matching on pitch alone, the present invention describes the melody by the vector sequence of pitch difference and time. The standard feature template generated for a melody is not expressed simply as the note feature sequence of that melody, but as a note feature sequence carrying melody segment information: each note feature information block automatically carries a mark indicating whether the note is a feature note characterizing a melody segment boundary, and the note feature sequence between every two marked feature notes constitutes one melody segment. Consequently, when an input melody fragment is matched against the standard melody of a whole piece, a system that adopts this segment-aware standard melody feature template no longer needs to slide the note sequence of the input fragment continuously, note by note, along the note sequence of the whole melody; it can instead take the pre-marked melody segments of each piece as the unit of movement and perform leaping matching. This offers a practical solution for raising the matching speed of content-based music retrieval substantially.
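For illustration, the note feature sequence with segment information can be held in a small data structure such as the sketch below; the field and helper names are assumptions chosen for readability and do not reproduce the patent's own file layout, which is shown in the table of Embodiment 1.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NoteFeature:
    pitch_diff: int                 # semitone difference from the previous note
                                    # (absolute pitch for the very first note)
    onset: int                      # note start time in ticks; stored as a negative
                                    # value when the note is a segment feature note
    duration: Optional[int] = None  # optional field: actual sounding length,
                                    # recorded only for feature notes

    @property
    def is_feature_note(self) -> bool:
        return self.onset < 0

def split_into_segments(notes: List[NoteFeature]) -> List[List[NoteFeature]]:
    """Cut the sequence after every marked feature note, so the notes between
    two marks form one melody segment (illustrative)."""
    segments, current = [], []
    for note in notes:
        current.append(note)
        if note.is_feature_note:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments
```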
Consider matching a hummed input fragment of N notes using the standard melody feature template with melody segment information generated according to the present invention, compared with existing methods that shift note by note, such as the linear alignment approximate melody matching method (LAN). For matching against one marked standard melody segment of M notes, the LAN method needs |M-N|K + K/2 matching operations, that is, at least |M-N|K operations, where K is the tolerance of the standard melody segment with respect to the note sequence length of the input fragment when the LAN method is used (K = 0.58N in the LAN method), measured in notes. With the standard melody feature template with segment information of the method of the invention, at most 2K operations are needed: the first K linear-alignment matchings align the head of the input fragment with the head of the melody segment, and the second K allow for the possibility that the user's hummed fragment has dropped notes at its beginning, the head of the input fragment being shifted one note backwards along the note sequence of the melody segment, which is equivalent to aligning the head of the input fragment with the note following the segment head and performing K linear-alignment matchings. Thus, for the matching and retrieval of a melody segment, the matching method using the segment-aware melody feature template of the invention is at least |M-N|/2 times faster than the LAN method. For matching a whole melody of R notes containing L melody segments, the LAN method needs (R-N)K + K/2 operations, that is, at least (R-N)K matchings, whereas the method of the invention needs at most 2LK, so the matching and indexing method using the standard melody feature template with segment information of the present invention is at least |R-N|/2L times faster than the LAN method.
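The operation counts in this paragraph can be checked with a short calculation; the values below are illustrative numbers substituted into the formulas above, not experimental results.

```python
def lan_ops(m, n, k):
    """Matchings needed by note-unit shifting over one segment of m notes: |m - n|*k + k/2."""
    return abs(m - n) * k + k // 2

def segment_ops(k):
    """Matchings needed per segment with the segment-aware template: at most 2k."""
    return 2 * k

n = 12                # notes in the hummed fragment
k = round(0.58 * n)   # tolerance used by LAN (K = 0.58N) -> 7
m = 30                # notes in one marked standard melody segment
print(lan_ops(m, n, k), "vs", segment_ops(k))    # 129 vs 14

r, l = 300, 20        # whole melody: R notes and L marked segments
print((r - n) * k + k // 2, "vs", 2 * l * k)     # 2019 vs 280
```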
Therefore, because the concept of melody segments is introduced into the standard melody features and the segments are used as the unit of shift matching, large leaping moves become possible simply by detecting the positions of the feature notes that characterize the segment boundaries, which solves the problem of high-speed melody retrieval. The longer the standard note sequence of a whole melody, the more pronounced the advantage of the method of the invention.
The melody segment feature note detection step may further be constituted as follows. First, to avoid an excessive number of meaningless segments, a step that eliminates negligible silent sections (equivalent to rests) searches the note feature sequence of the standard melody; if the length of a found silent section is smaller than a predefined silence length threshold, that note is deleted and the section is merged into the voiced part of the previous note. Because the silence length threshold is generally set low (for example, the length of an eighth note), this deletion has almost no influence on the result of matching and retrieval. After the meaningless silent sections have been deleted, the feature note detection step examines each note of the standard melody according to its note class feature and its note length feature. Feature notes are divided into two classes, location-class notes and rest-class notes, and for both classes a note is determined to be a segment feature note according to whether its note length exceeds the corresponding preset feature note threshold.
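A minimal sketch of these two sub-steps is given below, assuming the melody is available as (onset_tick, pitch, duration_tick) triples and that the thresholds are expressed in ticks (for example eighth-note and half-note lengths derived from the file's resolution); the function names are hypothetical.

```python
def merge_short_silences(notes, silence_threshold):
    """Sub-step 1: a silent gap shorter than the threshold (e.g. an eighth note)
    is absorbed by extending the previous note up to the next onset."""
    merged = []
    for onset, pitch, dur in notes:
        if merged:
            p_onset, p_pitch, p_dur = merged[-1]
            gap = onset - (p_onset + p_dur)
            if 0 < gap < silence_threshold:
                merged[-1] = (p_onset, p_pitch, p_dur + gap)
        merged.append((onset, pitch, dur))
    return merged

def classify_feature_notes(notes, half_note, eighth_note):
    """Sub-step 2: a note lasting at least a half note is a location-class feature
    note; a note followed by a rest of at least an eighth note becomes the
    rest-class feature note (the rest itself is later dropped)."""
    marked = []
    for i, (onset, pitch, dur) in enumerate(notes):
        gap_after = (notes[i + 1][0] - (onset + dur)) if i + 1 < len(notes) else 0
        if dur >= half_note:
            mark = "location"
        elif gap_after >= eighth_note:
            mark = "rest"
        else:
            mark = None
        marked.append(((onset, pitch, dur), mark))
    return marked
```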
The melody segmentation described here is performed on the basis of feature notes; its rationale and its suitability for query by humming follow from elementary music theory and from statistical analysis and verification over a large number of melodies. First, as positions at which a whole melody may be divided, rest positions can be regarded as one kind of feature note characterizing a segment boundary. In a typical theme, however, rests are not numerous; using rests alone as cut points would make the melody segments too long, so the segmentation would not serve its real purpose. Analysis of the note features of a large number of melodies, songs in particular, shows that coherent melody segments mostly end on a half note or a whole note. The underlying reason is the notion of a sense group: a phrase or short clause with a relatively independent meaning. Any communication is an exchange of concepts and combinations of concepts; symbols or sounds that carry no concept are meaningless and cannot become language, and any written sentence or article is constructed from basic concepts in some way. Just as everyday conversation proceeds in complete statements or phrases, and no one begins an exchange from the middle of a phrase, humming behaves in the same way, because people think in such units. Taking the lyric "five thousand years of wind and rain" as an example, a person will generally not hum a fragment such as "years of wind and", because such a fragment does not form an independent sense group. In a melody, sense groups mostly correspond to musical phrases, separated from one another by long notes or rests; a pause in the performance is in fact a change of sense group, much like a change of topic. The positions of these feature notes can therefore serve as the basis for the matching jumps made during retrieval.
The melody feature template generation step may further have the following characteristics. The melody feature template generated by this step is the note feature sequence carrying melody segment information, in which each note feature contains two numerical values, the pitch feature (described by the pitch difference) and the note length feature (described by the note onset). If a note is a feature note, its note length feature value is set to a negative value and an optional mark field is inserted into the note feature to record the actual note length (note duration) of this feature note. When the feature note is a location-class note, the value recorded in the optional mark field is the sounding duration of that note. When the feature note is a rest-class note, the rest itself is deleted and its length is merged into the voiced part of the previous note, that is, the length of the previous note is extended by the deleted rest section; the note resulting from this merge is designated a rest-class feature note, its note length feature value is likewise set to a negative value, and an optional mark field is inserted into the note feature to record the actual note length (the sounding duration) of this rest-class feature note. Finally, the generated melody feature template is saved into the music melody feature template library and associated with the corresponding digital music file in the digital music material library through the sequence number of the template file.
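The resulting records can then be laid out in the two-or-three-field form shown in the table of Embodiment 1. The sketch below continues the assumptions of the previous sketch (notes as (onset, pitch, duration) triples paired with a class mark) and is illustrative only; the layout of the melody information header is not reproduced.

```python
def build_template_records(marked_notes):
    """Turn (note, mark) pairs into template records: [pitch_diff, onset] for an
    ordinary note, [pitch_diff, -onset, duration] for a feature note. The first
    record stores the absolute pitch instead of a difference."""
    records, prev_pitch = [], None
    for (onset, pitch, dur), mark in marked_notes:
        first_field = pitch if prev_pitch is None else pitch - prev_pitch
        if mark is None:
            records.append([first_field, onset])
        else:
            # feature note: the negative onset flags the segment boundary and the
            # optional third field keeps the actual sounding duration
            records.append([first_field, -onset, dur])
        prev_pitch = pitch
    return records
```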
When the present invention is applied to a large-scale digital music database, the performance of music retrieval is improved markedly. The average retrieval time of a humming-based music retrieval system implemented with the present invention is about 2/3 lower than that of a conventional retrieval system that shifts and matches note by note; the improvement in system performance is highly significant.
Description of drawings
Fig. 1 is a workflow diagram of the present invention.
Fig. 2 shows comparison curves of the average query-by-humming retrieval time, illustrating the effect of the present invention on digital music material libraries of different scales.
Fig. 3A-Fig. 3F illustrate the detection and marking of location-class and rest-class feature notes for melody segmentation carried out in the embodiments of the present invention.
Embodiments
The embodiments of the present invention are described below with reference to the figures.
Embodiment 1 (location-class feature note detection)
In Fig. 1, the digital music file reading and melody feature extraction step reads a digital music file (MIDI format) and extracts its melody features, yielding the original note feature sequence shown in Fig. 3A (melody: "A Night At Moscow Suburb").
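As an illustration of this extraction step, raw (onset, pitch, duration) triples can be pulled out of a melody track with a general-purpose MIDI library such as mido. This choice is an assumption made for the example only; the patent does not prescribe a particular parser, and the sketch handles just a simple monophonic track.

```python
import mido

def extract_notes(path, track_index=0):
    """Collect (onset_tick, pitch, duration_tick) triples from one MIDI track.
    Assumes a monophonic melody track; overlapping notes are not handled."""
    mid = mido.MidiFile(path)
    notes, pending, tick = [], {}, 0
    for msg in mid.tracks[track_index]:
        tick += msg.time                              # delta time in ticks
        if msg.type == "note_on" and msg.velocity > 0:
            pending[msg.note] = tick                  # note starts sounding here
        elif msg.type in ("note_off", "note_on") and msg.note in pending:
            start = pending.pop(msg.note)
            notes.append((start, msg.note, tick - start))
    return notes
```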
In the figure the horizontal axis represents the note onset time, in ticks, the unit of note timing peculiar to MIDI files, and the vertical axis represents pitch, in semitones. The tenth note and the last note can be seen to have longer durations; whether such notes act as feature notes marking melody segments in the sense of the present invention is decided by the melody segment feature note detection step. That step checks each note, using its note class feature and its note length feature, to decide whether it is a melody segment feature note. Feature notes characterizing segment boundaries are divided into location-class feature notes and rest-class feature notes, and for both classes the decision depends on whether the note length exceeds the corresponding preset threshold. In the present embodiment the rule for location-class feature notes is: if the note length is greater than or equal to a half note, the note is designated a segment feature note, the note length feature value of the current location-class feature note is set to a negative value as the segment mark, and an optional mark field recording the actual duration of this location-class feature note is inserted into the note information block so as to improve matching precision. The note sequence between every two adjacent location-class feature notes in the whole standard melody is defined as one melody segment. As shown in Fig. 3A and Fig. 3B, two location-class feature notes (indicated in colour) are detected in this melody, which is thereby divided into two melody segments.
According to the method for the invention,, in Fig. 1, generate the standard melody characteristics template that step can obtain having melody segmentation feature note mark by the melody characteristics template for the digital music melody of " A Night At Moscow Suburb " by name.The form of this melody characteristics template is made up of melodic information head and melody note characteristic sequence.Wherein first field of the data structure of each note feature is represented the pitch difference with last note, but first note is the perfect pitch value this field record, though when coupling, do not use the perfect pitch value, yet this first note record mode is essential (as the pitch curve among the embodiment 1) for calculating the pitch curve that reappears the description melody characteristics; Its second field record be the initial moment of pronunciation of this note of representing with tick, if this note is common note, this the value be exactly on the occasion of, if and this note is the segmentation feature note, so just before this value, add negative sign, the 3rd field of note feature is optional mark position, is used for the actual tone period of recording feature note, equally also is to represent with tick.As shown in the table, to have only when note is the feature note of sign melody segmentation, the field of this note just has 3 fields, rather than all notes all have 3 fields.
Music ID: 3309    Number of notes: 18    Average pitch: 69.39    Loudness: 127    Beat: 2/4

Note  1:  69     0              (first note: absolute pitch, onset in ticks)
Note  2:   3    48
Note  3:   4    96
Note  4:  -4   144
Note  5:   2   192
Note  6:  -2   288
Note  7:  -1   336
Note  8:   5   432
Note  9:  -2   528
Note 10:  -5  -624   192        (feature note: negative onset, third field = duration)
Note 11:   3   816
Note 12:   4   864
Note 13:   3   912
Note 14:   0   960
Note 15:   2  1008
Note 16:  -2  1104
Note 17:  -2  1152
Note 18:  -1 -1200   192        (feature note)
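Reading such a template back, the sign of the second field recovers the segment marks and the running sum of the first fields recovers the absolute pitches. A minimal decoding sketch against the record layout above (the function name is hypothetical):

```python
def decode_records(records):
    """Recover (onset, absolute_pitch, is_feature, duration) for each note from
    template records laid out as in the table above."""
    decoded, abs_pitch = [], 0
    for i, rec in enumerate(records):
        abs_pitch = rec[0] if i == 0 else abs_pitch + rec[0]   # first field: abs pitch, then diffs
        is_feature = rec[1] < 0
        onset = -rec[1] if is_feature else rec[1]
        duration = rec[2] if is_feature and len(rec) > 2 else None
        decoded.append((onset, abs_pitch, is_feature, duration))
    return decoded

# Example records in the layout of the table above (not the complete melody).
sample = [[69, 0], [3, 48], [4, 96], [-5, -624, 192]]
print(decode_records(sample))
```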
Embodiment 2 (rest-class feature note detection)
According to the method for the invention, another situation is, occurred quiet section in the note characteristic sequence, promptly during rest, also will be defined as the node of melody melody segmentation.That is, adjacent location category feature note and the sequence of notes between the rest class in the whole accurate melody of head, perhaps two adjacent sequence of notes of stopping between the category feature note also will be defined as a melody segmentation.To the processing of rest, have two kinds of situations, a kind of situation is that quiet period is shorter relatively, another kind of situation is that quiet period is longer relatively.For the former, in the present invention, find that length quiet section below quaver can be ignored by statistics, this disposal route is to extend to the concluding time with previous note till next note pronunciation zero hour, and melody segmentation feature note zone bit is not changed.Fig. 3 C has provided to be read by digital music file and has read digital music file (midi format) with the melody characteristics extraction step and by it being extracted the original note characteristic sequence (music rhythm: " wholehearted hero ") that melody characteristics obtains.Fig. 3 D has provided by melody segmentation feature note detection step it has been carried out the quiet section note characteristic sequence of eliminating this melody characteristics after handling.
For the latter, that is, when the rest lasts long enough to exceed the preset length threshold for judging feature notes (in the present embodiment this threshold is the length of an eighth note), the processing flow for rest-class feature notes is entered. The rule in the present embodiment for rest-class feature notes is: if the rest length is greater than or equal to an eighth note, the note is designated a segment feature note. When the melody segment feature note detection step of Fig. 1 judges the current feature note to be of the rest class, the note length feature value (note onset) of the note immediately preceding the rest is set to a negative value, so that this note serves as the rest-class feature note marking the segment, and an optional mark field recording the actual sounding duration of this note is inserted into the feature note information block so as to improve the matching precision of the melody segments; the detected rest itself is removed. Fig. 3E shows the original note feature sequence obtained by the digital music file reading and melody feature extraction step from a MIDI file (melody: "Love Is Exactly You"). Fig. 3F shows the note feature sequence of this melody after melody segment detection and marking by the melody segment feature note detection step; right-leaning hatching indicates location-class feature notes and left-leaning hatching indicates rest-class feature notes.
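Once the feature notes are marked, retrieval can leap from segment head to segment head instead of sliding note by note. The driver below is a minimal sketch of that control flow; segments is assumed to be the list of marked segments of one template and lan_match_section again stands in for the linear-alignment scoring of two sections.

```python
def leap_match(query, segments, lan_match_section):
    """Segment-unit leaping match (illustrative sketch). Each segment is tried
    twice: aligned at its head, and aligned at the following note to tolerate a
    dropped note at the start of the hummed fragment."""
    best = float("-inf")
    for segment in segments:
        for offset in (0, 1):                 # segment head, then head + 1
            section = segment[offset:]
            if not section:
                continue
            best = max(best, lan_match_section(query, section))
    return best
```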
To examine the effect of the present embodiment, a performance comparison was carried out using MIDI music libraries of different scales, from 1,000 to 10,000 pieces. For each library scale, standard melody feature template libraries with melody segment information were generated with the method of the invention, and the average matching and retrieval time under the invention was compared with the average retrieval time under the original linear alignment approximate matching method (LAN); the comparison used identical hummed input under identical experimental conditions. The results show that the average retrieval time of the humming-based music retrieval system implemented with the present invention is about 2/3 lower than that of a conventional system that shifts and matches note by note, a highly significant improvement in system performance (see Fig. 2). When retrieving with a hummed fragment of "A Night At Moscow Suburb", the comparison is as follows:
System used                                                                   Retrieved melody ID   Retrieval time (ms)
Database built with the original linear alignment approximate matching method        3309                 4109
Database built with the melody feature data of the present invention                 3309                 1110

Claims (7)

1. A method for automatically marking melody segments in music based on feature notes, characterized in that it comprises the following steps:
(1) a digital music file reading and melody feature extraction step: used to control the reading of a digital music file from a digital music material library and the extraction of its melody features;
(2) a melody segment feature note detection step: used to perform melody segment detection and marking on the melody features extracted from the digital music file, this step being carried out on the basis of the note class feature and the note length feature;
(3) a melody feature template generation step: used to finally generate a melody feature template carrying the melody segment information and save it in a music melody feature template library;
wherein the melody features are described by note features characterized as a vector sequence of pitch difference and time, the "pitch difference" of a note being the difference between its frequency and that of the previous note, measured in semitones so as to adapt to humming by arbitrary singers in arbitrary keys, and the "time" of a note being its onset, which expresses the rhythm of the melody.
2. The method for automatically marking melody segments in music based on feature notes according to claim 1, characterized in that the generated melody feature template is expressed as a note feature sequence carrying melody segment information, that is, each note feature information block automatically carries a mark indicating whether the note is a feature note characterizing a melody segment boundary, and the note feature sequence between every two marked feature notes constitutes one melody segment; when an input melody fragment is matched against the standard melody of a whole piece, the pre-marked melody segments of each melody are taken as the matching movement unit and leaping matching is performed.
3. The method for automatically marking melody segments in music based on feature notes according to claim 1, characterized in that the melody segment feature note detection step is carried out in two sub-steps, elimination of negligible silent sections and detection of melody segment feature notes; the sub-step of eliminating negligible silent sections searches the note feature sequence of the standard melody for notes whose length is smaller than a predefined silence length threshold, deletes them, and merges their length into the voiced part of the previous note, that is, extends the length of the previous note by the deleted silent section; the sub-step of detecting melody segment feature notes is carried out on the basis of the note class feature and the note length feature, the feature notes being divided into location-class notes and rest-class notes, and for both classes a note is determined to be a segment feature note according to whether its note length exceeds the corresponding preset feature note threshold.
4. The method for automatically marking melody segments in music based on feature notes according to claim 3, characterized in that the rule for location-class notes is: if its note length is greater than or equal to a half note, the note is designated a segment feature note.
5. The method for automatically marking melody segments in music based on feature notes according to claim 3, characterized in that the rule for rest-class notes is: if its note length is greater than or equal to an eighth note, the note is designated a segment feature note.
6. The method for automatically marking melody segments in music based on feature notes according to claim 3, characterized in that the silence length threshold is the length of an eighth note.
7. The method for automatically marking melody segments in music based on feature notes according to claim 2, characterized in that each note feature in the melody feature template contains two numerical values, the pitch feature and the note time feature, and that if a note is a feature note its time feature value is set to a negative value and an optional mark field is inserted into the note feature information block to record the actual note length of the feature note, wherein, when the feature note is a location-class note, the value recorded in the optional mark field is the sounding duration of that note, and when the feature note is a rest-class note, the rest is deleted and its length is merged into the voiced part of the previous note.
CNB2005100294966A 2005-09-08 2005-09-08 Music rhythm sectionalized automatic marking method based on eigen-note Expired - Fee Related CN100373383C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005100294966A CN100373383C (en) 2005-09-08 2005-09-08 Music rhythm sectionalized automatic marking method based on eigen-note

Publications (2)

Publication Number Publication Date
CN1737798A CN1737798A (en) 2006-02-22
CN100373383C true CN100373383C (en) 2008-03-05

Family

ID=36080595

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005100294966A Expired - Fee Related CN100373383C (en) 2005-09-08 2005-09-08 Music rhythm sectionalized automatic marking method based on eigen-note

Country Status (1)

Country Link
CN (1) CN100373383C (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5402339A (en) * 1992-09-29 1995-03-28 Fujitsu Limited Apparatus for making music database and retrieval apparatus for such database
US20030023421A1 (en) * 1999-08-07 2003-01-30 Sibelius Software, Ltd. Music database searching
WO2005057429A1 (en) * 2003-12-08 2005-06-23 Koninklijke Philips Electronics N.V. Searching in a melody database

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A New Approximate Melody Matching Method and Its Application in Query-by-Humming Retrieval Systems. Li Yang, Wu Yadong, Liu Baolong. Journal of Computer Research and Development, Vol. 40, No. 11, 2003. *

Also Published As

Publication number Publication date
CN1737798A (en) 2006-02-22


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080305

Termination date: 20100908