CN1953046B - Automatic selection device and method for music based on humming sing - Google Patents
- Publication number
- CN1953046B (application CN2006101224306A)
- Authority
- CN
- China
- Prior art keywords
- melody
- user
- result
- humming
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
This invention discloses an automatic music selection device and method. The device comprises an audio collection device, a server, and an output device. The method selects songs by singing or by spoken attributes: the user need only hum a song or speak one of its basic attributes; the audio collection device captures the data and sends it to the server; the server analyzes the audio data and searches the music database for matching songs; finally, the matching results are displayed on the output device.
Description
Technical field
The present invention relates to an automatic song selection device and method based on humming. Specifically, it relates to an apparatus and method that allow a user who remembers only the melody of a song, but has forgotten its title, its singer, and similar information, to select the song by humming it.
Background technology
1. Introduction to content-based audio retrieval technology
Computers can retrieve audio clips using text labels based on titles or filenames, for example by tagging audio data as "music" or "speech". But because filenames and textual descriptions are incomplete and subjective, it is hard to find audio clips that satisfy a specific requirement. Content-based audio retrieval arose to address this problem: by analyzing audio features, it assigns different semantics to different audio data, so that audio with the same semantics remains acoustically similar. The simplest form of content-based audio retrieval compares the query clip with the stored clips sample by sample. But audio signals are highly variable: different clips may be recorded at different sampling rates, and each sample may use a different bit depth, so this method performs inconsistently. Content-based audio retrieval is therefore usually based on extracting feature sets such as average amplitude and frequency distribution.
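As a rough illustration of such feature sets, the sketch below computes an average-amplitude feature and a coarse frequency-distribution feature with NumPy. The band count and the exact feature definitions are illustrative assumptions, not the patent's method:

```python
import numpy as np

def audio_features(x, n_bands=4):
    """Two simple content-based retrieval features for a mono signal x:
    the average absolute amplitude, and the fraction of spectral energy
    falling into n_bands equal-width frequency bands (a coarse
    frequency-distribution feature)."""
    x = np.asarray(x, dtype=float)
    avg_amplitude = np.mean(np.abs(x))
    power = np.abs(np.fft.rfft(x)) ** 2            # power spectrum
    bands = np.array_split(power, n_bands)         # equal-width bands
    energy = np.array([b.sum() for b in bands])
    freq_dist = energy / energy.sum()              # normalized per-band energy
    return avg_amplitude, freq_dist

# A 440 Hz tone sampled at 8 kHz keeps nearly all its energy in the lowest band.
sr = 8000
t = np.arange(sr) / sr
amp, dist = audio_features(np.sin(2 * np.pi * 440 * t))
```

Features of this kind are robust to sampling differences precisely because they summarize the signal rather than compare it sample by sample.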
Speech recognition technology also helps content-based audio search. If the audio clips to be matched contain lyrics, speech recognition can convert the speech signal into text, which can then be indexed and retrieved with information retrieval (IR) techniques. Beyond the actual spoken words, other information contained in speech, such as the speaker's identity and mood, also helps speech indexing and retrieval.
2. Introduction to existing manual song-selection modes
Karaoke venues today generally offer customers a large library of backing tracks to sing along to. Songs are usually chosen in one of the following ways: 1) by the number of characters in the song title; 2) by the number of characters in the singer's name; 3) by first classifying songs into three categories (male, female, chorus) and then performing a secondary search by title or singer-name character count; 4) by first classifying songs by language and then performing a secondary search.
Because every search uses conditions that are very easily duplicated, such as the character count of the song title or of the singer's name, each result set contains a large amount of irrelevant information, which forces second and third searches. Even the final result set usually still contains many songs, while a display terminal can show only a limited number of entries at a time, so the results must be paginated. Each time a user searches, he must browse and screen a large number of entries by eye before he can pick the song he wants.
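A hypothetical sketch of this multi-pass manual screening; the catalog entries, field names, and conditions are invented for illustration:

```python
def screen(songs, title_len=None, singer_len=None, category=None):
    """One pass of manual screening: narrow the candidate list by the
    easily-duplicated conditions described above (character counts of the
    title or singer name, or a coarse category)."""
    out = songs
    if title_len is not None:
        out = [s for s in out if len(s["title"]) == title_len]
    if singer_len is not None:
        out = [s for s in out if len(s["singer"]) == singer_len]
    if category is not None:
        out = [s for s in out if s["category"] == category]
    return out

# Hypothetical catalog: title character count alone leaves many candidates,
# so a secondary search by category is needed.
catalog = [
    {"title": "月亮代表我的心", "singer": "邓丽君", "category": "female"},
    {"title": "吻别", "singer": "张学友", "category": "male"},
    {"title": "朋友", "singer": "周华健", "category": "male"},
]
first_pass = screen(catalog, title_len=2)          # by title character count
second_pass = screen(first_pass, category="male")  # secondary search
```

Even after two passes, the user may still face several candidates to screen by eye, which is exactly the inconvenience the invention targets.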
3. Related products and patents
Many research institutions are currently studying content-based audio search.
The content-based audio matching research at the National University of Singapore (NUS) was carried out by Jonathan Foote. First, this research requires accumulating an audio file sample library of a certain scale and automatically processing it into feature vectors. Second, every audio file in the sample library must be manually labeled, that is, each file must be assigned to a class.
The main goal of the research at the University of Mannheim, Germany, is to analyze advertisements in television programs. The researchers first coarsely segment the audio portion of pre-recorded TV commercials using audio features such as loudness, obtaining separate audio files for music and for environmental sound (noise). They then automatically analyze the separated music files with a fundamental-frequency sequence extraction method, extract the corresponding fundamental-frequency time series, and use it to index the music files.
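Fundamental-frequency extraction of the kind mentioned here is commonly done with the autocorrelation method; the sketch below is a minimal, assumed implementation for one audio frame, not the Mannheim group's actual code:

```python
import numpy as np

def fundamental_frequency(frame, sr, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of one audio frame via the
    autocorrelation method: the lag of the strongest autocorrelation
    peak within the plausible pitch range gives the period."""
    frame = np.asarray(frame, dtype=float)
    frame = frame - frame.mean()                 # remove DC component
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)      # lag search range
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# A frame of a 220 Hz tone should yield an estimate near 220 Hz.
sr = 8000
t = np.arange(1024) / sr
f0 = fundamental_frequency(np.sin(2 * np.pi * 220 * t), sr)
```

Applying this per frame over a recording yields the fundamental-frequency time series used for indexing.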
MIT, the University of Southern California, and others have also launched audio retrieval research, covering query by humming, audio classification, structured audio representation, and speaker-based segmentation and indexing.
Content-based audio search technology is still imperfect. Our system adopts one of the better search algorithms from the prior art to make matching more accurate. But because of the limits of current technology, the system's matching precision still cannot fully satisfy users' needs, so we combine conventional manual search with humming search when searching the music database. In this way, whenever the humming search cannot retrieve a satisfactory result, the system falls back on the traditional manual search method.
The device of the present invention overcomes the cumbersome, time-consuming traditional search and, at the same time, works around the immaturity of content-based search technology; it therefore has practical commercial value.
Summary of the invention
Prior techniques suffer from shortcomings such as the long time needed to enter search conditions and the complexity of the search procedure. Moreover, people commonly remember only the melody of a song while forgetting its title, singer, and other information, a situation traditional search cannot fully handle. The present invention therefore proposes an automatic song selection device and method based on humming.
The humming-based automatic song selection device comprises an audio collection device, a server, an output terminal, and an input terminal.
The audio collection device captures the melody hummed by the user and the basic song attributes spoken by the user, and sends the collected data to the system's server.
The server stores the songs, receives the data collected by the audio collection device, matches the data against the songs, sends the matching results back to the output terminal, and analyzes the various commands sent by the input terminal.
The output terminal displays the server's matching results and the responses to the user's various inputs.
The input terminal is used to enter matching conditions and to send various commands to the server.
An automatic song selection method based on humming comprises song selection by humming and song selection by speaking a basic song attribute.
Song selection by humming proceeds as follows:
1) The user sends the server, via the input terminal, a command to prepare for song selection by humming;
2) The audio collection device is switched on, and the user hums the melody into it;
3) The server analyzes the audio data sent over by the audio collection device and matches it against the songs in the music database;
4) If there are one or more matching results, the server sends them to the output terminal; if there is exactly one result, the system waits for the user to confirm it, and if there are several, it waits for the user to select and confirm one. If the user decides that none of the matching results is the song he wants, go to step 6); if one of them is, go to step 7);
5) If there are zero matching results, the server returns a match-failure message to the output terminal, and the system automatically goes to step 6);
6) The system enters the conventional manual song-selection mode, and the user screens songs manually via the input terminal using conditions such as singer name, singer gender, and song title;
7) The system plays the selected song.
Song selection by speaking a basic song attribute proceeds as follows:
1) The user sends the server, via the input terminal, a command to prepare for song selection by speaking a song attribute;
2) The audio collection device is switched on, and the user speaks an attribute of the song into it;
3) The server analyzes the audio data sent over by the audio collection device and matches it against the songs in the music database;
4) If there are one or more matching results, the server sends them to the output terminal; if there is exactly one result, the system waits for the user to confirm it, and if there are several, it waits for the user to select and confirm one. If the user decides that none of the matching results is the song he wants, go to step 6); if one of them is, go to step 7);
5) If there are zero matching results, the server returns a match-failure message to the output terminal, and the system automatically goes to step 6);
6) The system enters the conventional manual song-selection mode, and the user screens songs manually via the input terminal using conditions such as singer name, singer gender, and song title;
7) The system plays the selected song.
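The shared decision flow of steps 3) through 7) in both methods can be sketched as a single server-side routine. Here `match_songs`, `choose`, and `manual_search` are hypothetical stand-ins for the matching engine and the terminal interactions described above:

```python
def select_song(audio, database, match_songs, choose, manual_search):
    """Decision flow of steps 3)-7): match the captured audio, let the
    user confirm or select a result, and fall back to manual search
    when matching fails or the user rejects every match."""
    results = match_songs(audio, database)       # step 3)
    if results:                                  # step 4): one or more matches
        picked = choose(results)                 # user confirms / selects
        if picked is not None:
            return picked                        # step 7): play this song
    # steps 5)-6): zero matches, or the user rejected all of them
    return manual_search()

# Minimal usage with stand-in callables:
song = select_song(
    audio="hummed clip",
    database=["A", "B"],
    match_songs=lambda a, db: ["B"],
    choose=lambda rs: rs[0],
    manual_search=lambda: "manual pick",
)
```

The fallback branch is what lets the system degrade gracefully to the conventional manual mode of step 6).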
Compared with conventional techniques, the present invention has the following features:
1) The user need only hum a song, with or without the lyrics; the system matches the hummed melody (which may contain lyrics) against the songs in the music database and returns the successfully matched song titles for the user to choose from. This removes the inconvenience of having to remember a song's title in order to select it, and humming also makes song selection more efficient.
2) The user can select a song by humming only part of it; he does not need to hum the whole song into the microphone, and humming the chorus is usually sufficient.
3) The user can enter restriction conditions to improve the precision of humming-based selection. Restrictions can be entered by voice through the microphone, or through an external keyboard. They can be a few words or characters of the song title, or a few characters of the singer's name.
4) The system keeps the manual song-selection mode. When the user cannot find the desired song by humming, manual selection serves as a supplement to the system.
Description of drawings
Fig. 1 is the basic architecture diagram of the device;
Fig. 2 is the flowchart of song selection by humming;
Fig. 3 is the flowchart of song selection by speaking a song attribute;
Fig. 4 is an implementation case diagram of the device.
Embodiment
The present invention is further elaborated below with reference to the accompanying drawings.
As shown in Fig. 1, an automatic song selection device based on humming comprises an audio collection device, a server, an output terminal, and an input terminal.
The audio collection device is a microphone, or equipment with the same function as a microphone that can capture surrounding sound. In this system it captures the melody hummed by the user and the basic song attributes spoken by the user, and it can transmit the collected data to the system's server, either while capturing or after capture finishes. The device also has two states, on and off, and captures surrounding sound only when switched on.
The server is a computer that contains the music database and can store, access, and analyze songs. The information collected by the audio collection device is sent to the server, which analyzes the fragment the user hummed, or the basic song attribute the user spoke, and matches it against the songs in the database; the matching computation runs on the server, which sends the results back to the output terminal. The server also analyzes the various commands sent by the input terminal, including: the matching conditions entered by the user before humming; the user's re-selection commands after the results reach the output terminal; and the commands entered during fully manual screening. The server matches a song in the following specific steps:
1) The captured humming audio is sent to the server.
2) The audio is pre-processed. Pre-processing makes the audio easier for the computer to handle in the subsequent steps. The mean of the audio is equivalent to a DC component; the mean μx of the audio x(n) is estimated by
μ̂x = (1/N) Σ_{n=1..N} x(n),
where x(n), n = 1, …, N, are the N recorded sample points, and μ̂x is the estimate of the true mean μx of x(n).
3) Audio features are extracted. The captured sound is random data; to describe it with random-process statistics, we first compute the mean square value of the audio to obtain the amplitude-domain statistics, then derive the time-domain statistics from the autocorrelation function, and finally the frequency-domain statistics from the auto-power spectral density function. Once the amplitude-domain feature A, the time-domain feature T, and the frequency-domain feature F of the audio are obtained, the useful features of the captured audio are considered complete.
4) The features of the query audio are compared with the features of the music in the database. Suppose the database contains n pieces of music; for each piece i (i = 1, 2, …, n) we precompute its amplitude-domain feature A_i, time-domain feature T_i, and frequency-domain feature F_i. Comparing the query features against each piece yields a distance D_i (i = 1, 2, …, n); let D_j denote the minimum of the D_i, so that piece j is the best match.
5) Obtain the set K = {k | D_k ≤ D_j + Δ}, where Δ is a coefficient that controls the range of matching results. The set K, i.e., all successfully matched music, is returned.
The output terminal can be any equipment with a display function, such as an ordinary monitor or a projector; it displays the results returned by the server and the responses to the user's various inputs.
The input terminal can be any dedicated or general-purpose equipment with an input function, such as a keyboard or a touch screen; the user uses it to enter matching conditions and to send various commands to the server.
An automatic song selection method based on humming comprises song selection by humming and song selection by speaking a basic song attribute.
Fig. 2 shows the steps of song selection by humming.
Fig. 3 shows the steps of song selection by speaking a basic song attribute.
Fig. 4 illustrates an implementation of our automatic song selection system. The implementation case diagram differs from the system architecture diagram: the architecture diagram briefly describes the components of the system, whereas in deployment one server can simultaneously handle the requests sent over by many sets of song-selection equipment and return the results to each of them.
Suppose a user now stands at one of our song-selection stations and wants to select songs. For the first song, he has forgotten both the title and the original singer's name, so a traditional manual query cannot find it; but he still remembers the tune, so he hums the fragment he remembers into the microphone. Because the pitch and tempo of his humming are not very accurate, the matching returns several results on the display; prompted by the titles in the results, he recognizes and selects the song he wants. He then selects the next song, again by humming, but this time he first enters a restriction condition, namely that the original singer is male, and then hums. Because the humming is too inaccurate, matching fails, and the system prompts him to enter the manual selection mode, where he searches by title and original singer. For the third song, he remembers the full title, so he selects it by speaking the title into the microphone; the system matches his speech against the song titles in the database. Again, because of various kinds of interference, matching cannot be fully accurate, so the system returns several matched titles and waits for a manual choice, and the user selects the song he wants from the returned results with the keyboard.
Through this flow, the user has completed the selection and playback of three songs with this system.
Claims (8)
- 1. An automatic song selection device based on humming, comprising: an audio collection device, which, when switched on, captures the melody hummed by the user and the basic song attributes spoken by the user, and sends the collected data to the system's server; a server, which stores the songs, receives the data captured by the audio collection device, matches the data against the songs, sends the matching results back to an output terminal, and analyzes the various commands sent by an input terminal; said output terminal, which displays the matching results of said server and the responses to the user's various inputs; and said input terminal, which is used to enter matching conditions and to send various commands to the server.
- 2. The automatic song selection device based on humming according to claim 1, characterized in that said audio collection device has two states, on and off, and captures ambient sound data only when on.
- 3. The automatic song selection device based on humming according to claim 1, characterized in that said audio collection device is a microphone or equipment with the same function as a microphone.
- 4. The automatic song selection device based on humming according to claim 1, characterized in that said server is a computer that contains the music database and can store, access, and analyze songs.
- 5. The automatic song selection device based on humming according to claim 1, characterized in that said output terminal is equipment with a display function.
- 6. The automatic song selection device based on humming according to claim 1, characterized in that said input terminal is equipment with an input function.
- 7. An automatic song selection method based on humming, characterized in that it comprises song selection by humming and song selection by speaking a basic song attribute, wherein song selection by humming is specifically: 1) the user sends the server, via the input terminal, a command to prepare for song selection by humming; 2) the audio collection device is switched on, and the user hums the melody into it; 3) the server analyzes the audio data sent over by the audio collection device and matches it against the songs in the music database; 4) if there are one or more matching results, the server sends them to the output terminal; if there is exactly one result, the system waits for the user to confirm it, and if there are several, it waits for the user to select and confirm one; if the user decides that none of the matching results is the song he wants, go to step 6), and if one of them is, go to step 7); 5) if there are zero matching results, the server returns a match-failure message to the output terminal, and the system automatically goes to step 6); 6) the system enters the conventional manual song-selection mode, and the user screens songs manually via the input terminal using conditions such as singer name, singer gender, and song title; 7) the system plays the selected song.
- 8. The automatic song selection method based on humming according to claim 7, characterized in that song selection by speaking a basic song attribute is specifically: 1) the user sends the server, via the input terminal, a command to prepare for song selection by speaking a song attribute; 2) the audio collection device is switched on, and the user speaks an attribute of the song into it; 3) the server analyzes the audio data sent over by the audio collection device and matches it against the songs in the music database; 4) if there are one or more matching results, the server sends them to the output terminal; if there is exactly one result, the system waits for the user to confirm it, and if there are several, it waits for the user to select and confirm one; if the user decides that none of the matching results is the song he wants, go to step 6), and if one of them is, go to step 7); 5) if there are zero matching results, the server returns a match-failure message to the output terminal, and the system automatically goes to step 6); 6) the system enters the conventional manual song-selection mode, and the user screens songs manually via the input terminal using conditions such as singer name, singer gender, and song title; 7) the system plays the selected song.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2006101224306A CN1953046B (en) | 2006-09-26 | 2006-09-26 | Automatic selection device and method for music based on humming sing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1953046A CN1953046A (en) | 2007-04-25 |
CN1953046B true CN1953046B (en) | 2010-09-01 |
Family
ID=38059346
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2006101224306A Expired - Fee Related CN1953046B (en) | 2006-09-26 | 2006-09-26 | Automatic selection device and method for music based on humming sing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN1953046B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105118518A (en) * | 2015-07-15 | 2015-12-02 | 百度在线网络技术(北京)有限公司 | Sound semantic analysis method and device |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI386916B (en) * | 2008-12-31 | 2013-02-21 | Univ Far East | Voice song system |
CN101741975B (en) * | 2009-12-18 | 2012-12-05 | 上海华勤通讯技术有限公司 | Method for processing music fragment to obtain song information by using mobile phone and mobile phone thereof |
CN102063904B (en) * | 2010-11-30 | 2012-06-27 | 广州酷狗计算机科技有限公司 | Melody extraction method and melody recognition system for audio files |
CN102332262B (en) * | 2011-09-23 | 2012-12-19 | 哈尔滨工业大学深圳研究生院 | Method for intelligently identifying songs based on audio features |
CN103594083A (en) * | 2012-08-14 | 2014-02-19 | 韩凯 | Technology of television program automatic identification through television accompanying sound |
CN103366784B (en) * | 2013-07-16 | 2016-04-13 | 湖南大学 | There is multi-medium play method and the device of Voice command and singing search function |
CN103559232B (en) * | 2013-10-24 | 2017-01-04 | 中南大学 | A kind of based on two points approach dynamic time consolidation coupling music singing search method |
CN106469557B (en) * | 2015-08-18 | 2020-02-18 | 阿里巴巴集团控股有限公司 | Method and device for providing accompaniment music |
CN105244021B (en) * | 2015-11-04 | 2019-02-12 | 厦门大学 | Conversion method of the humming melody to MIDI melody |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1737797A (en) * | 2005-09-08 | 2006-02-22 | 上海交通大学 | Rhythm character indexed digital music data-base based on contents and generation system thereof |
CN1737796A (en) * | 2005-09-08 | 2006-02-22 | 上海交通大学 | Across type rapid matching method for digital music rhythm |
CN1737798A (en) * | 2005-09-08 | 2006-02-22 | 上海交通大学 | Music rhythm sectionalized automatic marking method based on eigen-note |
CN1752970A (en) * | 2005-09-08 | 2006-03-29 | 上海交通大学 | Leap over type high speed matching device of numerical music melody |
Non-Patent Citations (3)
Title |
---|
李扬 et al. A new approximate melody matching method and its application in a query-by-humming retrieval system. Journal of Computer Research and Development, 40(11), 2003, pp. 1554-1559. *
李明 et al. A music retrieval method based on humming. Proceedings of the 8th National Conference on Man-Machine Speech Communication, 2005, pp. 433-436. *
李珂 et al. A song-ordering system based on audio retrieval. Journal of Beijing Normal University (Natural Science), 42(4), 2006, pp. 383-386. *
Also Published As
Publication number | Publication date |
---|---|
CN1953046A (en) | 2007-04-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1953046B (en) | Automatic selection device and method for music based on humming sing | |
Typke et al. | A survey of music information retrieval systems | |
JP5329968B2 (en) | How to store and retrieve non-text based information | |
US7031980B2 (en) | Music similarity function based on signal analysis | |
Cornelis et al. | Access to ethnic music: Advances and perspectives in content-based music information retrieval | |
JP5115966B2 (en) | Music retrieval system and method and program thereof | |
KR100895009B1 (en) | System and method for recommending music | |
Cano et al. | ISMIR 2004 audio description contest | |
Gulati et al. | Automatic tonic identification in Indian art music: approaches and evaluation | |
EP2096626A1 (en) | Method for visualizing audio data | |
US20110225153A1 (en) | Content search device and content search program | |
JPH06290574A (en) | Music retrieving device | |
CN110010159B (en) | Sound similarity determination method and device | |
Hoffmann et al. | Music recommendation system | |
JP5293018B2 (en) | Music information processing apparatus, music information processing method, and computer program | |
Ramirez et al. | Automatic performer identification in commercial monophonic jazz performances | |
Moelants et al. | Exploring African tone scales | |
Lee et al. | Korean traditional music genre classification using sample and MIDI phrases | |
Murthy et al. | Singer identification from smaller snippets of audio clips using acoustic features and DNNs | |
Nagavi et al. | Overview of automatic Indian music information recognition, classification and retrieval systems | |
Kroher et al. | Computational ethnomusicology: a study of flamenco and Arab-Andalusian vocal music | |
Gounaropoulos et al. | Synthesising timbres and timbre-changes from adjectives/adverbs | |
JP2003131674A (en) | Music search system | |
Sharma et al. | Audio songs classification based on music patterns | |
CN109710797B (en) | Audio file pushing method and device, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20100901 Termination date: 20130926 |