AU751683B2 - A system and method for querying a music database
Description
S&F Ref: 452380D1
AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT
ORIGINAL
Name and Address of Applicant: Canon Kabushiki Kaisha, 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo 146, Japan

Actual Inventor(s): Zhenya Yourlo

Address for Service: Spruson & Ferguson, St Martins Tower, 31 Market Street, Sydney NSW 2000

Invention Title: A System and Method for Querying a Music Database

The following statement is a full description of this invention, including the best method of performing it known to me/us:

A SYSTEM AND METHOD FOR QUERYING A MUSIC DATABASE

FIELD OF THE INVENTION

The present invention relates to the field of music systems and, in particular, to the identification and retrieval of particular pieces of music or, alternatively, attributes of a desired piece of music, from a music database on the basis of a query composed of desired features and conditional statements.
BACKGROUND OF THE INVENTION

Retrieval of music or music attributes from a database requires, in common with generic database functionality, a query method which is powerful and flexible, and preferably intuitively meaningful to the user. This in turn requires that the database contain music which has been classified in a manner which is conducive to systematic search and sort procedures. This latter aspect in turn requires that pieces of music be characterised in a manner which permits of such classification.
Thus a hierarchy of requirements or elements which make up a music database system is as follows:
- characterising music using attributes useful in a classification scheme;
- classifying music in a meaningful searchable structure; and
- querying the database so formed, to yield meaningful results.
The hierarchy has been defined "bottom up" since this presents a more meaningful progression by which the invention can be described.
When considering audio signals in general, and in particular those relating to music, the nature of the signals may be considered in terms of various attributes which are intuitively meaningful. These attributes include, among others, tempo, loudness, pitch and timbre. Timbre can be considered to be made up of a number of constituent sub-features including "sharpness" and "percussivity". These features can be extracted from music and are useful in characterising the music for a classification scheme.
The publication entitled "Using Bandpass and Comb Filters to Beat-track Digital Audio" by Eric D. Scheirer (MIT Media Laboratory, December 20, 1996) discloses a method for extraction of rhythm information or "beat track" from digital audio representing music. An "amplitude-modulated noise" signal is produced by processing a musical signal through a filter bank of bandpass filters. A similar operation is also performed on a white noise signal from a pseudo-random generator. Subsequently the amplitude of each band of the noise signal is modulated with the amplitude envelope of the corresponding band of the musical filter bank output. Finally the resulting amplitude modulated noise signals are summed together to form an output signal. It is claimed that the resulting noise signal has a rhythmic percept which is significantly the same as that of the original music signal. The method described can run in real-time on a very fast desktop workstation or alternately, a multi-processor architecture may be utilised. This method suffers from the disadvantage of being highly computationally intensive.
Percussivity is that attribute which relates to a family of musical instruments known as "percussion" when considering an orchestra or a band. This family includes such musical instruments as drums, cymbals, castanets and others. Processing of audio signals in general, and musical signals in particular, benefits from the ability to estimate various attributes of the signals, and the present invention is concerned with estimating the attribute of percussivity.
A number of different methods have been used to estimate the percussivity of a given signal, such methods including those broadly based upon:
- short-time power analysis;
- statistical analysis of signal amplitude; and
- comparison of harmonic spectral component power with total spectral power.

Short-time signal power estimation involves calculation of an equivalent power (or an approximation thereof) within a short segment or "window" of a signal under consideration. The power estimate can be compared to a threshold in order to determine whether the portion of the signal within the window is percussive in nature.
Alternatively, the power estimate can be compared to a sliding scale of thresholds, and the percussive content of the signal classified with reference to the range of thresholds.
Statistical analysis of signal amplitude is typically based upon a "running mean" or average signal amplitude value, where the mean is determined for a window which slides across the signal under consideration. By sliding the window, the running mean is determined over a pre-determined time period of interest. The mean value at each window position is compared to mean values for other windows in a neighbourhood in order to determine whether signal variations in the running mean are sufficiently large to signify that the signal is percussive.
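The two simpler approaches can be sketched as follows. This is a minimal illustration only, assuming digitised audio in a floating-point NumPy array; the window length, hop size and the jump ratio used to flag percussive windows are arbitrary illustrative values, not parameters taken from any of the publications discussed here.

```python
import numpy as np

def short_time_power(signal, win=1024, hop=512):
    """Mean power of each analysis window; comparing these values against a
    threshold (or a sliding scale of thresholds) gives a crude percussivity cue."""
    return np.array([np.mean(signal[i:i + win] ** 2)
                     for i in range(0, len(signal) - win + 1, hop)])

def running_mean_onsets(signal, win=1024, hop=512, ratio=2.0):
    """Flag windows whose running-mean amplitude jumps well above that of the
    preceding window; a sharp rise suggests a percussive event."""
    means = np.array([np.mean(np.abs(signal[i:i + win]))
                      for i in range(0, len(signal) - win + 1, hop)])
    flags = np.zeros(len(means), dtype=bool)
    flags[1:] = means[1:] > ratio * means[:-1]
    return flags
```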
Harmonic spectral component power analysis involves taking a windowed Fourier transform of the signal in question over the time period of interest, and then examining the resulting set of spectral components. The spectral components which are indicative of harmonic series are removed. It is noted that such harmonic series components typically represent local maxima in the overall spectral envelope of the signal. After removing the harmonic series spectral components, the remaining spectral components substantially consist only of the inharmonic components of the signal, these being considered to represent percussive components of the signal. The total power in these inharmonic components is determined and compared with a total signal power for all components, harmonic and non-harmonic, to yield an indication of percussivity.
The aforementioned analysis methods are typically intended to identify a range of signal attributes, and thus suffer from relatively limited accuracy, and a tendency to produce false or unreliable percussivity estimates. The methods are also relatively complex and thus expensive to implement, particularly the harmonic spectral component estimation method.
U.S. Patent No. 5,616,876 (Cluts et al) entitled "System and Methods for Selecting Music on the Basis of Subjective Content" describes an interactive network providing music to subscribers, which allows a subscriber to use a seed song to identify other songs similar to the seed song, the similarity between songs being based on the subjective content of the songs as reflected in style tables prepared by editors. The system and methods described in this publication are based on the manual categorisation of music, with the attendant requirement for human participation in the process; the speed, accuracy and repeatability of the process are consequently limited by human attributes.
The publication entitled "Content Based Classification, Search, and Retrieval of Audio" by Erling et al (IEEE Multimedia Vol. 3, No. 3, 1996, pp.27-36) discloses indexing and retrieving short audio files "sounds") from a database. Features from the sound in question are extracted, and feature vectors based on statistical measures relating to the features are generated. Both the sound and the set of feature vectors are stored in a database for later search and retrieval. A method of feature comparison is used to determine whether or not a selected sound is similar to another sound stored in the database. The feature set selected does not include tempo and thus the system will not perform well in differentiating between pieces of music. Furthermore, the method determines features which provide scalar statistical measures over short time windows.
Furthermore, the method uses features, such as bandwidth, which are not readily conceptualised in terms of their impact on music selection.
It is seen from the above that existing arrangements have shortcomings in all elements in the hierarchy of requirements described, and it is an object of the invention to ameliorate one or more disadvantages of the prior art.
SUMMARY OF THE INVENTION

According to one aspect of the invention, there is provided a method for querying a music database, which contains a plurality of pieces of music wherein the pieces are indexed according to one or more parameters, the method comprising the steps of:
(a) forming a request which specifies one or more pieces of music and/or associated parameters and one or more conditional expressions;
(b) determining associated parameters for the specified pieces of music if the parameters have not been specified;
(c) comparing the specified parameters and corresponding parameters associated with other pieces of music in the database;
(d) calculating a distance based on the comparisons; and
(e) identifying pieces of music which are at such distances from the specified pieces of music as to satisfy the conditional expressions.
According to another aspect of the invention, there is provided an apparatus for querying a music database, which contains a plurality of pieces of music wherein the pieces are indexed according to one or more parameters, the apparatus comprising:
a request means for forming a request which specifies one or more pieces of music and/or associated parameters and one or more conditional expressions;
a parameter determination means for determining associated parameters for the specified pieces of music if the parameters have not been specified;
a comparison means for comparing the specified parameters and corresponding parameters associated with other pieces of music in the database;
a distance determination means for calculating a distance based on the comparisons; and
a determination means for identifying pieces of music which are at such distances from the specified pieces of music as to satisfy the conditional expressions.
According to yet another aspect of the invention, there is provided a computer readable medium incorporating a computer program product for querying a music database, which contains a plurality of pieces of music wherein the pieces are indexed according to one or more parameters, said computer program product comprising:
a request means for forming a request which specifies one or more pieces of music and/or associated parameters and one or more conditional expressions;
a parameter determination means for determining associated parameters for the specified pieces of music if the parameters have not been specified;
a comparison means for comparing the specified parameters and corresponding parameters associated with other pieces of music in the database;
a distance determination means for calculating a distance based on the comparisons; and
a determination means for identifying pieces of music which are at such distances from the specified pieces of music as to satisfy the conditional expressions.
BRIEF DESCRIPTION OF THE DRAWINGS

A preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings in which:

Fig. 1 depicts a music database system in a kiosk embodiment;
Fig. 2 illustrates a music database system in a network embodiment;
Fig. 3 provides a functional description of a music database system;
Fig. 4 illustrates a generic feature extraction process;
Fig. 5 depicts the tempo feature extraction process;
Fig. 6 presents a further illustration of the tempo feature extraction process;
Fig. 7 depicts a process flow diagram for a preferred embodiment of the percussivity estimator;
Fig. 8 presents more detail of the preferred embodiment;
Fig. 9 illustrates a preferred embodiment of a comb filter;
Fig. 10 depicts a linear function obtained from the comb filter output energies;
Fig. 11 presents an accumulated histogram of a signal having an overall high percussivity;
Fig. 12 presents an accumulated histogram of a signal having an overall low percussivity;
Fig. 13 illustrates a typical percussive signal;
Fig. 14 depicts a generic feature classification process;
Fig. 15 shows a database query process (music identifiers supplied in query);
Fig. 16 illustrates a database query process (music features supplied in query);
Fig. 17 illustrates a distance metric used to assess similarity between two pieces of music;
Figs. 18-21 depict feature representations for four pieces of music; and
Fig. 22 depicts a general purpose computer upon which the preferred embodiment of the invention can be practiced.
DETAILED DESCRIPTION

In the context of this specification and claims, the word "comprising" means "including principally but not necessarily solely". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.
Fig. 1 depicts a music database system in a kiosk 102 embodiment. For the purpose of the description it is noted that "kiosk" is a term of art denoting a public access data terminal for use in, for example, information data retrieval and audio output receipt. In this embodiment, the owner/operator of the kiosk 102 inputs pieces of music 100 into the kiosk 102, where they are classified and stored in a database for future retrieval. A music lover comes up to the kiosk and inputs a music query 104 to the kiosk 102, which, after performing a search of the kiosk music database based on parameters in the music query 104, outputs a desired piece of music 106 which is based on the music query 104. The kiosk 102 also outputs music identifiers 108 associated with the desired piece of music 106. Such identifiers could include, for example, the name of the piece of music.
Fig. 2 illustrates a music database system in a network embodiment. In this embodiment, a plurality of music database servers 202 are connected to a network 206 via access lines 204. The owner/operators of the servers 202 input pieces of music 100 into the servers 202, where they are classified and stored in the database for future retrieval. Servers may be embodied in various forms, including by means of general purpose computers as described in Fig. 22 below. A plurality of music database clients 210 are also connected to the network 206 via access lines 208. An owner of a client inputs a music query 104 into the client 210, which establishes a connection to the music database server 202 via a network connection comprising access line 208, network 206 and access line 204. The server 202 performs a search of the music database based upon the user query 104, and then outputs a desired piece of music 106, which is based on the music query 104, across the same network connection 204-206-208. The server 202 also outputs music identifiers 108 associated with the desired piece of music 106. Such identifiers could include, for example, the name of the piece of music.
Fig. 3 provides a functional description of the music database system. The database system performs two high level processes, namely (i) inputting pieces of music 100, classifying them and storing them in the database for later search and retrieval, and (ii) servicing queries 104 to the music database system, consequent upon which it outputs a desired piece of music 106 and/or music identifiers 108 associated with the desired piece of music 106. Such identifiers could include, for example, the name of the piece of music.
Fig. 4 depicts a generic feature extraction process. Recalling from the functional Oooo description of the database system in Fig. 3, the piece of music 100 is input, it then oooo undergoes feature extraction in the step 304 after which the features are classified in the step 306 and stored in feature database 308. In Fig. 4, the piece of music 100 is input, and the feature extraction process 304 is seen to include, in this illustration, five parallel processes, 20 one for each feature. The tempo extraction process 402 operates upon the input piece of music 100 to produce tempo data output 404. The loudness extraction process 406 operates o upon the input piece of music 100 to produce loudness data output 408. The pitch extraction process 410 operates upon the input piece of music 100 to produce pitch data output 412.
The timbre extraction process 414 operates upon the input piece of music 100 to produce sharpness data output 416 and percussivity data output 418. The aforementioned five sets -9of feature data, namely the tempo data 404, the loudness data 408, the pitch data 412, the sharpness data 416, and the percussivity data 418 constitute the data carried by a connection 333 from the feature extraction process 304 to the feature classification process 306 (see Fig.
Thus, referring again to Fig. 3 it is seen that for this example, the output line 332 between s the feature comparison process 312 and the feature database 308 is also handling the five different data sets 404, 408, 412, 416 and 418.
Fig. 5 shows the tempo feature extraction process 402 (described in Fig. 4) which will be described in some detail. Tempo extraction firstly involves determination of the output signal 620 (see Fig. 6) from the piece of music 100, and then involves filtering this io output signal through a bank of comb filters. Finally the energy in the comb filters, accumulated over substantially the entire duration 602 of the piece of music 100, provides the raw tempo data 404 (see Fig. 4) indicative of the tempo or tempi (various tempos) present in the piece of music 100 substantially over it's duration 602. This set of processes is preferably implemented in software. Alternatively, a number of processes and sub- I 5 processes can, if desired, be performed, for example, on the audio input card 2216 (see Fig.
22), where say a Fast Fourier Transform (FFT) can be performed using a Digital Signal oooe Processor (DSP). Furthermore, comb filters, described in relation to feature extraction, can also be implemented using a DSP on the audio card 2216. Alternatively, these processes can be performed by the general purpose processor 2204. In Fig. 5, an input music signal 100 is 2o partitioned into windows in a process 502, and the Fourier coefficients in each window are determined in a process 504. These two processes 502 and 504 together form an expanded view of a Fast Fourier Transform process 522. After calculating the FFT, the coefficients in each window or "bin" are summed in a process 506 and the resulting signal, output on a connection 524 from the process 506 is low pass I .doc:kxa filtered in a process 508, then differentiated in a process 510, and then half wave rectified in a process 512 to produce an onset signal 618 (see Fig. 6).
Turning to Fig. 6, a waveform representation of the process described in Fig. 5 is shown. After windowing the input music signal 100, the signal in each time window 604 is processed by a Fast Fourier Transform (FFT) process to form an output signal 620 which is shown pictorially as frequency components 606 in frequency bins 622-624 which are divided into individual time windows 604. The output signal 620 then has its frequency components 606 in the various frequency bins 622-624 added by an addition process 608. This summed signal, which may be considered as an energy signal, has positive polarity and is passed through a low pass filter process 610, whose output signal 628, is differentiated 612 to detect peaks, and then half-wave rectified 614 to remove negative peaks, finally producing onset signal 618. The music signal is processed across substantially the full duration of the piece of music 100. hn an alternate embodiment, the onset signal 618 could be derived by sampling the signal 628 (which is output from the low pass filter 610), after the signal 628 is o 15 differentiated 612 and half-wave rectified 614 comparing consecutive samples to detect positive peaks of the signal, and generating pulses each time such a peak is detected. A brief Sexplanation about the effect of partitioning the signal into time windows is in order.
Summing frequency component amplitudes in each window is a form of decimation reduction of the sampling frequency), since the number of digitised music samples in a 20 window are summed to form one resultant point. Thus selection of the window size has the effect of reducing the number of sample points. The optimum selection of window size requires a balance between the accuracy of the resultant representation of the feature, and compression of the data in order to reduce computational burden. The inventor has found that a 256 point FFT (equivalent to an 11.6 msec music window size) yields good performance when using the resultant feature for comparing and selecting music pieces in regard to tempo. Once significant changes in 1238 1.doc:kxa -11the spectrum the starting points of notes 616) are located, the onset signal 618 is passed through a bank of comb filters in a process 514 (see Fig. 5) in order to determine the tempo. As noted previously, comb filters can be implemented using DSPs on the audio card 2216 or alternatively by using the general purpose processor 2204. Each comb filter has a transfer function of the form: t oyt-r (1 lC)x/ where: y, represents the instantaneous comb filter output yt-z represents a time delayed version of the comb filter output x, represents the onset signal (618).
Each of these comb filters has a resonant frequency (at which the output is reinforced) determined by the parameter The parameter ca (alpha) corresponds to the amount of weighting placed on previous inputs relative to the amount of weighting placed on current and future inputs. The onset signal 618 is filtered through the bank of comb filters in 0: the process 514, the comb filter resonant frequencies being placed at frequencies which are 1 5 at multiple sample spacings resulting from windowing. The filters should typically cover the range from about 0.1Hz through to about 8Hz. The filter with the highest energy output at each sample point is considered to have "won", and a tally of wins is maintained, in the process 516, for each filter in the filterbank, for example by using a power comparator to determine the highest energy and a counter to tally the "wins". After the onset signal 618 over substantially the full duration 602 of the piece of music 100 has been filtered, (noting that the onset signal is depicted by the reference number 618 on an output line from the process 512 in Fig. the filter in the comb filter bank 514 that has the greatest tally is said to be the dominant tempo present in the original music signal 100. Secondary tempo's may also be identified using the method. A set of tempos comprising 1l.doc:kxa -12the dominant tempo and the secondary tempos is designated by the reference number 518 in Fig. The timbre of a sequence of sound, which is characteristic of the difference in sounds say between two musical instruments, is largely dependent upon the frequencies present, and their respective magnitudes.
The spectral centroid provides an estimate of "brightness" or "sharpness" of the sound, and is one of the metrics used in the present embodiment in relation to extraction of timbre. This brightness characteristic is given by:
A
w where: S spectral centroid f= frequency A Amplitude W Window selected In order to differentiate between the timbral characteristics of different audio 15 signals, the present embodiment makes use of the Fourier transform of successive 0.5 second windows of the audio signal 100 in question. There is no necessary relationship between the window size used for loudness feature extraction and that used for tempo or other feature extraction. Other techniques for extracting timbre may be used.
Percussivity is that attribute which relates to a family of musical instruments known as "percussion" when considering an orchestra or a band. This family includes such musical •instruments as drums, cymbals, castanets and others.
S'OO: Fig. 7 depicts a flow diagram of a preferred embodiment of the percussivity estimator. An input signal 736 on a line 700 is analysed for percussivity during a time interval of interest 742. The input signal 736 is described in an inset 702 where the signal S736 is depicted on axes of time 706 and amplitude 704.
13 The signal 736 is operated upon by a windowing process 710 which outputs a windowed signal on a line 734, the windowed signal being shown in more detail in an inset 712. In the inset 712 windows, exemplified by a window 738, each having a predetermined width 708, are overlapped with each other to an extent 716. Each window 738 is passed through a bank of comb filters 740 which is made up of individual comb filters exemplified by a comb filter 718. The structure and operation of an embodiment of the comb filter 718 is presented in more detail in relation to Fig. 9. The comb filter 718 integrates the energy of the signal 736 across the particular window 738 being considered. The bank of comb filters 740, outputs a peak energy 726 for each comb filter 718 in the bank of comb filters 740 for the window 738 being considered, representing the energy at frequencies corresponding to the comb filter.
This is shown in an inset 724. It is noted that the outputs exemplified by the output 726 of the comb filter bank 740 are represented on axes of amplitude and frequency, and are spaced according to the frequencies corresponding to the individual comb filters 718. The output from the comb filter bank 740 on a line 720 is processed by a gradient process 722 which determines a straight line of best fit 732 which approximates the output signal exemplified by the signal 726.
Fig. 8 presents a more detailed description of the preferred embodiment of the percussivity estimator as it relates to a digitised input signal. Given an input signal on line 800 to be analysed, the signal is first digitised in process 802. The digitised signal which is 20 then output on a line 804 is windowed by a process 806 into 100 msec windows, with a overlap between windows. Each window is passed through a bank of comb filters 740 (see Fig. 7) represented by a process 810. The comb filters making up process 810 are spaced at frequencies between 200 Hz and 3000 Hz. The number and spacing of the individual comb filters 718 in the comb filter bank (see Fig. 7) are discussed in more detail in relation to Fig.
9. The linear function on line 812 which is formed from the peak energy output of SNj T i IBK]2381.doc:kxa -14each comb filter comprising the comb filter bank process 810 is passed to a gradient process 814. The gradient process 814 determines a straight line of best fit which approximates the linear function which is output by the comb filter process 810 on line 812, and outputs the straight line function on line 816 for further processing.
s Fig. 9 depicts a block diagram of a preferred embodiment of an individual comb filter 718 (see Fig. 7) used in the embodiment of the percussivity estimator. The comb filter 718 is used as a building block to implement the bank of comb filters 740 (see Fig. As described in relation to Fig. 6, each comb filter 718 has a time response which can be represented mathematically as follows: y(t) a*y(t-T) [1] where: x(t) is an input signal 900 to the comb filter; y(t) is an output signal 906 from the comb filter; T is a delay parameter determining the period of the comb filter; and a is a gain factor determining the frequency selectivity of the comb filter.
15 For each comb filter 718 in the bank of comb filters 740 (see Fig. the delay factor T is selected to be an integral number of samples long, the sample attributes beine ee determined by process 802 (see Fig. In the preferred embodiment of the comb filter bank the number of filters 718 in the bank 740 is determined by the number of integral sample lengths between the resonant frequency edges, these edges being defined in the S 20 embodiment described in relation to Fig. 8 to be 200 Hz and 3000 Hz. The individual filters 718 need not be equally spaced between the frequency edges, however they need to provide substantial coverage of the entire frequency band between the edges.
Fig. 10 depicts a linear function 1000 which is formed from the peak energy outputs of each comb filter 718 in the comb filter bank 740. The ordinate 1010 represents I.doc:kxa the peak energy output 726 of each comb filter 718 in the filterbank 740 (see Fig. while the abscissa 1004 represents the resonant frequency of each filter 718. Thus exemplary point 1012 indicates that a filter having a resonant frequency which is depicted by a reference numeral 1008 has a peak energy output depicted by a reference numeral 1010 for the particular window being considered. A line of best fit 1006 is shown, having a gradient 1014 which is representative of the percussivity of the signal 736 within the particular window in question.
Fig. 11 depicts how an aggregate of individual gradients, (say 1014 in Fig. 10), each having been determined for a particular window, (say 738 in Fig. can be consolidated and represented in the form of a histogram 1100 covering the entire period of interest 742 for the signal 736 being considered. An ordinate 1102 represents the fraction of time during the time interval 742 during which a particular percussivity is found to be present. An abscissa 1104 represents a normalised percussivity measure, which can be determined by normalising all measured percussivity values during the time interval of interest 742 by the maximum 15 percussivity value during the period 742. Thus, a point 1106 indicates that a normalised percussivity value 1110 is found to be present for a fraction 1108 of the total time 742. It is noted that the area under the curve 1100 can also be normalised for different signals being analyzed, in order to enable comparison of percussivity between the different signals. Fig.
11 represents a histogram for a signal having an overall high percussivity.
20 Fig. 12 depicts a percussivity histogram for a different signal than the one considered in Fig. 11, the signal in Fig. 12 having an overall low percussivity.
Fig. 13 depicts a typical percussive signal 1304 in the time domain, where the signal 1304 is plotted as a function of an amplitude axis 1300 and a time axis 1302.
The loudness feature is representative of the loudness over substantially the full duration of the piece of music 100 (see Fig. The piece of music 100 is first partitioned 2381.doc:kxa -16into a sequence of time windows, which for the purpose of classification and comparison on the basis of loudness, should be preferably about one half a second wide. There is no necessary relationship between the window size used for loudness feature extraction and that used for tempo or other feature extraction. The Fourier transform of the signal in each window is taken, and then the power in each window is calculated. The magnitude of this power value is an estimate of the loudness of the music within the corresponding half-second interval. Other methods of extracting loudness are known.
Pitch is another feature in the present embodiment determined by the feature extraction means in order to represent music while storing a new piece of music into the music database. The localised pitch is determined over a small window (say 0.1 seconds in this instance) by using a bank of comb filters. There is no necessary relationship ~between the window size used for pitch feature extraction and that used for tempo or other feature extraction. These comb filters have resonant frequencies covering a range of valid pitches. Advantageously this includes frequencies from around 200Hz up to around 3500Hz, and the filters are spaced at intervals determined by the rate at which the original musical signal was sampled. The sampled signal is filtered through the filter bank, and the comb filter that has the greatest output power will have a resonant frequency corresponding to the dominant pitch over the window in question. From these resulting pitches, a histogram of dominant pitches present in the original music is formed.
This procedure is followed over substantially the entire duration of the piece of music.
The method of pitch extraction employed is one of a number of methods for pitch extraction which exists and other methods may be used.
Returning to Fig. 3, and considering the music input and classification process, when the piece of music 100 is input, it then undergoes feature extraction 304 after which the features are classified 306 and stored in feature database 308. Substantially in parallel 060700; 09:14: 06/07/00; 09:18 AM 452380d1.doc -17with this process, the actual music piece itself 100 is stored in music database 302. Thus the piece of music 100 and the associated representative features are stored in two distinct but related databases 302 and 308 respectively. If the music is initially derived from an analogue source, it is first digitised before being input into the feature extraction process s 304. The digitisation step may be implemented by way of a standard soundcard or, if the music is already in digital form, this digitisation step may be bypassed and the digital music used directly for 100. Thus, arbitrary digitization structures including the Musical Instrument Digital Interface (MIDI) format and others may be supported in the system.
There are no specific requirements in terms of sampling rate, bits per sample, or channels, io but it should be noted that if higher reproduction quality is desirable it is preferable to select an audio resolution close to that ofa CD.
Fig. 14 depicts a generic feature classification process. Extracted feature signals 404, 408, 412, 416 and 418 (refer Fig. 4) are accumulated in a process step 1404 as histograms over substantially the whole duration of the piece of music 100 resulting in an indicative 15 feature output 1406 for each extracted feature signal. This output 1406 is stored in the feature database 308 (see Fig. By identifying the N highest tempo's in the manner described in Figs. 5 and 6, a histogram describing the relative occurrence of each tempo across o: substantially the whole duration of the piece of music 100 can be formed. Similarly, by identifying the M highest volumes, a histogram describing the relative occurrence of each I* 20 loudness across substantially the whole duration of the piece of music 100 can be formed.
0..0 Again, by identifying the K dominant pitches, a histogram describing the relative occurrence 0.9.
6900 •of each pitch across substantially the whole duration of the piece of music 100 can be formed.
ofg 'S *S The spectral centroid is advantageously used to describe the sharpness in a window. This can be accumulated as a histogram over substantially the full duration of the piece of music being analyzed and by identifying P sharpnesses (one -18for each window), a histogram describing the relative occurrence of each sharpness across substantially the whole duration of the piece of music 100 can be formed. Accumulation of features as histograms across substantially the entire duration of pieces of music yields a duration independent mechanism for feature classification suitable for search and comparison between pieces of music. This forms the foundation for classification in the music database system. The method described on pages 12-16 in relation to Figs. 7-10 is advantageously used to describe the percussivity in a window. This is accumulated as a histogram over substantially the full duration of the piece of music being analyzed and by identifying P percussivities (one for each window), a histogram describing the relative occurrence of each percussivity across substantially the whole duration of the piece of music 100 can be formed.
Fig. 15 describes a database query process where music identifiers are supplied in the query. A music query 104 (see Fig. 1) may take on a number of forms which include, but are not limited to: a set of names of known pieces of music and a degree of similarity/dissimilarity 15 specified by a conditional expression (shown underlined) for each piece very much like "You can hear me in the harmony" by Harry Conick Jr., a little like "1812 Overture" by Tchaikovsky, and not at all like "Breathless" by Kenny G); a set of user specified features and a similarity/dissimilarity specification in the form of a conditional expression something that has a tempo of around 120 beats per 20 minute, and is mostly loud).
In Fig. 15, the music query 104, containing music identifiers and conditional expressions is input into the feature comparison process 312 (see Fig. This process 312 includes the feature retrieval process 1502 which retrieves the features associated with the pieces of music named in the music query 104 from feature database 308. Next of music named in the music query 104 from feature database 308. Next 2381.doc:kxa -19these retrieved features are passed to similarity comparison process 1504 which searches the feature database 308 for features satisfying the conditional expression contained in music query 104 as applied to the features associated with pieces of music named in music query 104. The results of this comparison are passed to the identifier retrieval process 1506 which s retrieves the music identifiers of the pieces of music whose features satisfy the conditional expressions as applied to the identifiers specified in music query 104. These identifiers are passed to the music selection process 314 which enables the output of the desired music 106 and/or music identifiers 108 from music database 302 and feature database 308 respectively.
Fig. 16 describes a database query process where music features are supplied in the music query 104. The music query 104 contains music features and conditional expressions, at the query stage 104, and thus in this case the feature retrieval process 1502 is bypassed (see Fig. 15). These provided features are passed to a similarity comparison process 1604 which searches the feature database 308 for features satisfying the conditional expression contained in music query 104 as applied to the features provided in the music query 104. The results of this comparison are passed to an identifier retrieval process 1606 which retrieves the music identifiers of the pieces of music whose features satisfy the conditional expressions in relation to the identifiers specified in music query 104. These identifiers are passed to the music selection process 314 which ensures the output of the desired music 106 and/or music identifiers 108 from music database 302 and feature S 20 database 308 respectively.
Considering the process of feature comparison 312 (see Fig. a similarity Io:i comparison is performed between the features of music stored by the system in the feature database 308 which correspond to pieces of music 100 stored in music database 302, and features corresponding to the music query 104. Since a number of different features (and At jd. R~~eature *017 [R:\L!BK]2381 .doc:kxa representations) exist in the feature database 308, the comparisons between corresponding features are advantageously performed differently for each feature, for example: S comparison between loudness features stored as histograms are made through the use of a histogram difference, or comparison of a number of moments about the mean of each histogram, or other methods that achieve the same goal; comparison between tempo features stored as histograms are accomplished by methods such as histogram difference, or comparison of a number of moments about the mean of each histogram or other methods that achieve the same goal, S comparison between pitch features stored as histograms are performed using a histogram difference, or a comparison of a number of moments about the mean of each histogram. Other methods for comparison of pitch features may also be used, comparison between sharpness features stored as histograms are achieved through the use of methods such as histogram difference, or comparison of a number of moments about the mean of each histogram, or other methods that achieve the same goal, and S comparison between percussivity features stored as histograms are achieved through the use of methods such as histogram difference, or comparison of a number of moments about the mean of each histogram, or other methods that achieve the same goal.
Once the comparison of each of the relevant features has been made, the overall degree of similarity is ascertained. A simple, yet effective way of determining this is through the use of a distance metric (also known as the Minkowski metric with r 1), with each of the feature comparison results representing an individual difference along an orthogonal axis.
060700 09:14: 06/07/00; 09:18 AM 452380dl.doc -21 Fig. 17 illustrates a distance metric used to assess the similarity between two pieces of music where D is the distance between the two pieces of music 1708 and 1710 (only 3 features are shown for ease of representation). In this case, a smaller value of D represents a greater similarity. D is advantageously represented by: SQRT ((loudness histogram difference) 2 (tempo histogram difference) 2 (pitch histogram difference) 2 (timbre histogram difference) 2 Fig. 17 illustrates the distance between two pieces of music 1708, 1710, these pieces of music being defined in terms of three exemplary features namely pitch 1702, tempo 1704, and sharpness 1706. The distance D (ie 1712) represents the distance between the pieces of music 1710 and 1708 when measured in this context.
The above method will be partially described for a specific query 104 namely "Find a piece of music similar to piece where the database contains pieces of music A, B, C, and D. This query 104 is of a type described in Fig. 15 where music identifiers (ie the name of the piece of music and a conditional expression ("similar to") is provided in the query 104.
Each piece of music stored in the database is represented by a number of features that have been extracted when the pieces were classified and stored in the database. For the 20 sake of simplicity the example presented is restricted to two features, namely tempo and sharpness, where both features are represented by simplified histograms.
S:..:The four music pieces to be considered are named A, B, C and D. Their corresponding feature histograms are illustrated in Figs. 18-21.
Fig. 18 illustrates a tempo histogram and a timbre (alternatively called sharpness) histogram for piece of music A. This piece of music is shown to have a tempo 0o o° -22of 1 Hz (or 60 beats/min) (ie 1800) for 0.5 or 50% of the time (indicated by a reference numera: 1808) and a beat of 2 Hz (or 120 beats/minute) (ie 1802) for 50% of the time (indicated by reference numeral 1808). The piece of music displays a brightness of 22050 Hz (ie 1804) for 200: of the time (indicated by a reference numeral 1810) and a brightness of 44100 Hz (ie 1806) fo: s 80% of the time (indicated by a reference numeral 1812). Figs. 19 21 display similar features fo pieces of music B D.
When the query is presented, the following sequence of operations is performed: Comparison of the features of A and B Comparison of the features of A and C i0 Comparison of the features of A and D Selection of the music that is least distant from A Since all features of the music in the database are preferably represented as histograms comparisons between these features is based on a comparison between the histograms. Tw\\ methods that are useful in forming this comparison are the histogram difference, and the is comparison of moments.
Considering the first method, the histogram difference is performed by comparing thl relative occurrence frequencies of the different observations, taking the sum over all these comparisons and then normalising by the number of histograms being compared. If both histograms are individually normalised such that their individual integral sums are equal to 1.0, ther 20 the maximum histogram difference will be 2.0 (and if the absolute value of each comparison is taken, the minimum difference will be 0.0).
Considering the second method, comparison of moments is achieved by considering the differences between a number of moments about the origin of each histogram. The general form may be used to calculate moments about the origin: tk xk.f(x) all x R where: uk is the Kth moment about the origin -23xk is the Xth component of the histogram, and f(x) is the value of the histogram of xk.
It is also common to normalise the moments with respect to the second moment about the origin, in order to make them independent of the scale of measurement: 5 -k/2 k 2 With reference to Figs. 18 and 19, for the query 104 "similar to A" employing histogram difference, the calculation of distance is performed as follows: The difference between A and B in regard to tempo is: 0.33 0.5- 0.33 0- 0.331 0.335 2 io where the number of terms in the numerator is determined by the number of histogram points being compared, and the denominator is determined by the fact that two histograms are being compared.
Similarly, for A and B in regard to timbre: 0.2 0.9 0.8 0.1 0.7 0.7 2 Thus, distance between A and B is given by: 0.72 +0.3352 =0.776 If we consider the histograms in Figs. 18-21 for the features extracted from the piece of music A, B, C and D: Music A, tempo histogram: 20 /2 0.5X 1.0 2 0.5 X 2.0 2 0 X 3.02 2.50 3 0.5X 1.03 +0.5 X 2.0 3 0 X 3.0 3 4.50 #4 0.5 X 1.0 4 0.5 X 2.0 4 0 X 3.04 8.50 t 3 2- 3 2 1.14 81 .doc:kxa 24 I-L4 PL2 1.36 Music A, sharpness histogram: P42 1.653 x P43 7.076 x 4 3.073 x P- 3 I -12 1.05 t4 t2-4/2 1.12 109~ 1013 1018 Music B, tempo histogram: P42 P43 4 4.62 11.88 32.34 P- 3 PL -2 1.20 I-'4 It-12 1.52 Music B, sharpness histogram: /42 6.321 x 108 P43 1.823 x10 13 4 5.91 x 1017 P- 3 t31k 2 1.15 P- 4 I41k2 1.48 Comparisons for the query "similar to A": A and B tempo: 1. 14 1.201 11.36 1.521 0. 22 A and B sharpness: 060700; 09:14;:06/07/00: 09:18 AM 5301.o 452380di.doc 25 1.05- 1.15 1.12- 1.48 0.46 A and B distance: V0.222 +0.462 0.51 The above analysis is only partially shown for the sake of brevity, however if fully expanded, it is seen that in both histogram difference and moment methods, Music B would be selected by the query 104 as being "similar to since the calculated distance between Music A and Music B is smallest when compared to C, D.
In the above example, the query 104 was "find a piece of music similar to piece A" and thus the method sought to establish which pieces of music B, C and D are at a distance smallest from A.
ooooo ,In a more complex query 104 of the form, for example, "find a piece of music .ooo very similar to A, a little bit like B, and not at all like the same general form of analysis as illustrated in the previous example would be used. In this case however, the other pieces of music in the database namely D, E, K, would be assessed in order to 15 establish which piece of music had features which could simultaneously satisfy the requirements of being at a minimum distance from A, a larger distance from B, and a maximum distance from C.
It is further possible to apply a weighting to each individual feature in order to bias the overall distance metric in some fashion (for example biasing in favour of tempo similarity rather than loudness similarity).
In considering similarity assessment on the basis of either the histogram difference, or the comparison of moments, these being applied to the attributes of pitch, loudness, tempo, and timbre sharpness and percussivity), it is found that two-pass assessment provides better classification results in some cases. The two-pass assessment process performs a first assessment on the basis of loudness, percussivity and sharpness, 060700; 09:14; 06/07/00; 09:18 AM 452380d1.doc -26and then a second sorting process based on tempo. In the present embodiments, it is found that the feature of pitch may be omitted from the similarity assessment process without significantly degrading the overall similarity assessment results.
In considering similarity assessment using the comparison of moments process, good results are produced by selecting particular moments for each feature as shown in the following table: Feature Moments loudness mode, mean, variance sharpness mode, mean, variance percussivity variance tempo mode, mode tally, variance where "mean" and "variance" are determined in accordance with the following general form which expresses moments about the mean: Pk (x-x)k .f(x) all x where: "mean" [tk for k=l; and "variance" jtk for k=2; The "mode", in particular having regard to tempo, represents the most frequently occurring i.e. the "dominant" tempo in the tempo histogram, and is thus the tempo associated with the peak of the histogram. The "mode tally" is the amplitude of the peak, and represents the relative strength of the dominant tempo.
Application of clustering techniques to a complete set of moments corresponding to the extracted features, including the mode of each histogram, provides better 060700: 09:14; 06/07/00; 09:18 AM 452380dl.doc 27 classification results in some cases. Use of Bayesian estimation produces a "best" set of classes by which a given dataset may be classified.
Fig. 22 shows how the system can preferably be practised using a conventional general-purpose computer 2200 wherein the various processes described may be implemented as software executing on the computer 2200. In particular, the various process steps are effected by instructions in the software that are carried out by the computer 2200. The software may be stored in a computer readable medium, is loaded onto the computer 2200 from the medium, and then executed by the computer 2200. The use of the computer program product in the computer preferably effects an apparatus for extracting one or more features from a music signal, said features including, for instance, tempo, loudness, pitch, and timbre, (ii) classification of music using extracted features, and (iii) method of querying a music database. Corresponding systems upon which the above method steps may be practised may be implemented as described by software executing on the above mentioned general-purpose computer 2200. The computer system 2200 includes a computer module 2202, an audio input card 2216, and input devices 2218, 2220. In addition, the computer system 2200 can have any of a number of other output devices including an audio output card 2210 and output display 2224. The computer system 2200 can be connected to one or more other computers using an appropriate communication channel such as a modem communications path, a computer network, or the like. The computer network may include a local area network (LAN), a wide area network (WAN), an Intranet, and/or Internet. Thus, for example, pieces of music 100 may be input via audio input card 2216, music queries may be input via keyboard 2218, desired music 106 may be output via audio output card 2210 and desired music identifiers such as the names of the desired pieces of music may be output via display device 2224. The network embodiment shown in Fig. 2 would be 060700; 09:14; 06/07/00; 09:18 AM 452380d1.doc -28implemented by using the communication channels to connect server computers to the network 206 via access lines 204. Client computers would be connected to the network 206 via access lines 208 also using the computer communication channels. The computer 2202 itself includes a central processing unit(s) (simply referred to as a processor hereinafter) 2204, a memory 2206 which may include random access memory (RAM) and read-only memory (ROM), an input/output (10) interface 2208, an audio input interface 2222, and one or more storage devices generally represented by a block 2212. The storage device(s) 2212 can include one or more of the following: a floppy disk, a hard disk drive, a magneto-optical disk drive, CD-ROM, magnetic tape or any other of a number of non-volatile storage devices well known to those skilled in the art. Each of the components 2204, 2206, 2208, 2212 and 2222, is typically connected to one or more of the other devices via a bus 2204 that in turn can include data, address, and control buses.
The audio interface 2222 is connected to the audio input 2216 and audio output 2210 cards, and provides audio input from the audio input card 2216 to the computer 2202 and from the computer 2202 to the audio output card 2210.
The preferred embodiment of the invention can, alternatively, be implemented on one or more integrated circuits. This mode of implementation allows incorporation of the various system elements into individual pieces of equipment where the functionality provided by such apparatus is required.
The foregoing describes a number of embodiments for the present invention.
Further modifications can be made thereto without departing from the scope of the inventive concept.
060700; 09:14; 06/07/00: 09:18 AM 452380d1.doc
Claims (31)
1. A method for querying a music database, which contains a plurality of pieces of music wherein the pieces are indexed according to one or more parameters, the method comprising the steps of:
(a) forming a request which specifies one or more pieces of music and/or associated parameters and one or more conditional expressions;
(b) determining associated parameters for the specified pieces of music if the parameters have not been specified;
(c) comparing the specified parameters and corresponding parameters associated with other pieces of music in the database;
(d) calculating a distance based on the comparisons; and
(e) identifying pieces of music which are at such distances from the specified pieces of music as to satisfy the conditional expressions.

2. A method according to claim 1, including the further step of:
outputting at least one of the identified pieces of music and the names of said pieces.
3. A method according to claim 1, wherein calculating a distance in step (d) comprises the sub-steps of:
calculating a distance based on comparison of a loudness, a percussivity, and a sharpness; and
subsequently sorting on the basis of a tempo.
4. A method according to claim 1, wherein the comparing in step (c) is performed in relation to pieces of music in a class of the plurality of pieces of music in the database.
5. A method according to claim 2, wherein the at least one of identified pieces of music and the names of said pieces are in a class of the plurality of pieces of music in the database.
6. A method according to claim 1, wherein a classification according to which the pieces of music are indexed uses feature extraction, the method further comprising the steps of: (a) segmenting a piece of music over time into a plurality of windows; (b) extracting one or more features in each of said windows; and (c) arranging the features in histograms wherein the histograms are representative of the features over the entire piece of music.
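Claim 6's window-then-histogram indexing can be sketched as follows; the window length, bin count, and normalisation here are assumptions made for illustration, and `extract` stands for any per-window feature extractor such as the tempo or percussivity measures of the later claims.

```python
import numpy as np

def feature_histogram(signal, window_len, extract, bins=32, value_range=(0.0, 1.0)):
    """Segment a piece into consecutive windows, extract one feature value per
    window, and consolidate the values into a histogram that represents the
    feature over the entire piece (claim 6)."""
    n = len(signal) // window_len
    values = [extract(signal[i * window_len:(i + 1) * window_len])
              for i in range(n)]
    hist, _ = np.histogram(values, bins=bins, range=value_range)
    return hist / max(n, 1)   # normalise so pieces of any length can be compared
```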
7. A method according to claim 6, wherein a first feature extracted in step (b) is at least one tempo extracted from a digitised music signal, the feature extraction comprising the further sub-steps of: (i) segmenting the music signal into a plurality of windows; (ii) determining values indicative of the energy in each window; (iii) locating the peaks of an energy signal which signal is derived from the energy values in each window; (iv) generating an onset signal comprising pulses, where pulse peaks substantially coincide with the peaks of the energy signal; (v) filtering the onset signal through a plurality of comb filter processes with resonant frequencies located according to frequencies derived from the window segmentation; (vi) accumulating an energy in each filter process over a duration of the music signal; and (vii) identifying the filter processes having Nth highest energies wherein resonant frequencies of the identified processes are representative of at least one tempo in the music signal.
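As a rough illustration of sub-steps (v) to (vii), the sketch below runs an onset-pulse signal through a bank of feedback comb filters, one per candidate beat period, and reports the resonant frequencies of the most energetic filters. The feedback gain `alpha`, the exhaustive period search, and the `window_rate` parameter (windows per second) are assumptions; the claim itself does not fix them.

```python
import numpy as np

def tempo_candidates(onset, window_rate, n_best=3, alpha=0.9):
    """Resonant frequencies (Hz) of the N comb filter processes that
    accumulate the most energy over the onset signal (claim 7, (v)-(vii))."""
    scored = []
    # Candidate beat periods, in windows, spanning roughly 1 Hz to 4 Hz,
    # the range recited in claim 12.
    for delay in range(int(np.ceil(window_rate / 4.0)), int(window_rate) + 1):
        y = np.zeros(len(onset))
        energy = 0.0
        for n in range(len(onset)):
            fb = alpha * y[n - delay] if n >= delay else 0.0
            y[n] = onset[n] + fb   # feedback comb resonating at window_rate/delay Hz
            energy += y[n] * y[n]  # sub-step (vi): accumulate energy over the signal
        scored.append((energy, window_rate / delay))
    scored.sort(reverse=True)      # sub-step (vii): take the N highest energies
    return [freq for _, freq in scored[:n_best]]
```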
8. A method according to claim 7, wherein the determination of the energy signal in sub-step (ii) comprises the further sub-sub-steps of: determining transform components for the music signal in each window; and adding amplitudes of the components in each window to form a component sum, said component sum being indicative of energy in a window.
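Claim 8's energy measure can be sketched with an FFT standing in for the unspecified transform; the transform choice and the absence of an analysis window function are assumptions of this sketch.

```python
import numpy as np

def window_energies(signal, window_len):
    """One energy value per window: transform each window and add the
    amplitudes of its components (claim 8)."""
    n = len(signal) // window_len
    windows = np.asarray(signal[:n * window_len]).reshape(n, window_len)
    return np.abs(np.fft.rfft(windows, axis=1)).sum(axis=1)
```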
9. A method according to claim 7, wherein after locating the peaks of an energy signal in sub-step (iii) and prior to forming the onset signal generated in sub-step (iv) the method comprises the further sub-sub-step of: low pass filtering the energy signal.
10. A method according to claim 7, wherein the onset signal is formed in sub-step (iv) according to the further sub-sub-steps of: differentiating the energy signal; and half-wave rectifying the differentiated signal to form the onset signal.
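Claim 10's two sub-sub-steps fit in two lines of numpy; prepending the first sample so the output keeps the input's length is an implementation choice of this sketch, not part of the claim.

```python
import numpy as np

def onset_signal(energy):
    """Differentiate the energy signal, then half-wave rectify so that only
    energy rises (candidate note onsets) remain (claim 10)."""
    d = np.diff(energy, prepend=energy[0])   # discrete derivative, same length
    return np.maximum(d, 0.0)                # half-wave rectification
```

Claim 11 below recites the alternative construction: pick positive peaks by comparing consecutive samples and emit one pulse per detected peak.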
11. A method according to claim 7, wherein the onset signal is formed in sub-step (iv) according to the further sub-sub-steps of: sampling the energy signal; comparing consecutive samples to determine a positive peak; and generating a single pulse when each positive peak is detected.
12. A method according to claim 7, wherein the filter process resonant frequencies span a frequency range substantially between 1Hz and 4Hz.
13. A method according to claim 6, wherein a second feature extracted in step (b) is a percussivity of a signal, the method comprising the sub-steps of: (i) segmenting the signal into a plurality of windows, and for each window; (ii) filtering by a plurality of filters; (iii) determining an output for each filter; (iv) determining a function of the filter output values; (v) determining a gradient for the linear function; and (vi) determining a percussivity as a function of the gradient.
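One way to realise claim 13's per-window measure is sketched below. The claim leaves the filters and the exact function of their outputs open, so the callable filter bank, the log-magnitude output levels, and the least-squares line fit are all assumptions of this sketch, on the intuition that noise-like (percussive) windows show a flatter level profile across a filter bank than tonal ones.

```python
import numpy as np

def window_percussivity(window, filter_bank):
    """Gradient of a straight line fitted to the filter output levels of one
    window (claim 13, sub-steps (ii)-(vi)); the line of best fit follows
    claim 16.  `filter_bank` is assumed to be a sequence of callables, each
    returning the filtered window as an array."""
    levels = [np.log(np.abs(f(window)).sum() + 1e-12) for f in filter_bank]
    x = np.arange(len(levels), dtype=float)
    gradient, _intercept = np.polyfit(x, levels, 1)   # slope of best-fit line
    return gradient
```

Consolidating the per-window gradients into a histogram, as in claim 17, then yields the piece-level percussivity representation.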
14. A method according to claim 13, wherein the segmentation sub-step (i) comprises the further sub-sub-steps of: selecting a window width; selecting a window overlap extent; and segmenting the signal into windows, each window having the selected window width and the windows overlapping each other to the selected overlap extent.

15. A method according to claim 13, wherein the filtering sub-step (ii) utilises comb filters.
16. A method according to claim 13, wherein the gradient determination sub-step (v) is performed by determining a straight line of best fit to the linear function.
17. A method according to claim 13, wherein the percussivity values determined in step (vi) for each window are consolidated in a histogram.
18. An apparatus for querying a music database, which contains a plurality of pieces of music wherein the pieces are indexed according to one or more parameters, the apparatus comprising: a request means for forming a request which specifies one or more pieces of music and/or associated parameters and one or more conditional expressions; a parameter determination means for determining associated parameters for the specified pieces of music if the parameters have not been specified; a comparison means for comparing the specified parameters and corresponding parameters associated with other pieces of music in the database; a distance determination means for calculating a distance based on the comparisons; and a determination means for identifying pieces of music which are at such distances from the specified pieces of music as to satisfy the conditional expressions.
19. An apparatus according to claim 18, wherein the apparatus further comprises: an output means for outputting the identified pieces of music and/or the names of said pieces.

20. An apparatus according to claim 18, wherein the distance determination means comprises: (i) a distance determination means for a loudness, a percussivity, and a sharpness; and (ii) a sorting means for a tempo.

21. An apparatus according to claim 18, wherein the apparatus further comprises a means for clustering the pieces of music in the database into classes.

22. An apparatus according to claim 18, wherein a classification according to which the pieces of music are indexed uses feature extraction means, the means comprising: segmentation means for segmenting an entire piece of music over time into a plurality of windows; feature extraction means for extracting one or more features in each of said windows; and histogram determination means for arranging the features in histograms wherein the histograms are representative of the features over the entire piece of music.

23. An apparatus according to claim 22, wherein a first feature extracted in step (b) is at least one tempo extracted from a digitised music signal, and wherein the feature extraction means comprise: (i) segmentation means for segmenting the music signal into a plurality of windows; (ii) energy determination means for determining values indicative of the energy in each window; (iii) peak location determination means for locating the peaks of an energy signal which signal is derived from the energy values in each window; (iv) onset signal generation means for generating an onset signal comprising pulses, where pulse peaks substantially coincide with the peaks of the energy signal; (v) a plurality of comb filter means for filtering the onset signal wherein the plurality of comb filter means have resonant frequencies located according to frequencies derived from the window segmentation; (vi) energy accumulation means for accumulating an energy in each filter process over a duration of the music signal; and (vii) identification means for identifying the filter processes having Nth highest energies wherein resonant frequencies of the identified processes are representative of at least one tempo in the music signal.

24. An apparatus according to claim 23, wherein the energy determination means in sub-step (ii) comprise: transform determination means for determining transform components for the music signal in each window; and addition means for adding amplitudes of the components in each window to form a component sum, said component sum being indicative of energy in a window.

25. An apparatus according to claim 23, wherein the apparatus further comprises low pass filtering means for low pass filtering the energy signal after locating the peaks of an energy signal in sub-step (iii) and prior to forming the onset signal generated in sub-step (iv).

26. An apparatus according to claim 23, wherein the onset signal generation means in sub-step (iv) comprise: differentiating means for differentiating the energy signal; and rectification means for half-wave rectifying the differentiated signal to form the onset signal.

27. An apparatus according to claim 23, wherein the onset signal generation means in sub-step (iv) comprise: sampling means for sampling the energy signal; comparator means for comparing consecutive samples to determine a positive peak; and pulse generation means for generating a single pulse when each positive peak is detected.
28. An apparatus according to claim 23, wherein the comb filter means resonant frequencies span a frequency range substantially between 1Hz and 4Hz.

29. An apparatus according to claim 22, wherein a second feature extracted in step (b) is a percussivity of a signal, and wherein the feature extraction means comprise: (i) segmentation means for segmenting the signal into a plurality of windows, and for each window; (ii) filtering means for filtering by a plurality of filters; (iii) filter output determination means for determining an output for each filter; (iv) function determination means for determining a function of the filter output values; (v) gradient determination means for determining a gradient for the linear function; and (vi) percussivity determination means for determining a percussivity as a function of the gradient.

30. An apparatus according to claim 29, wherein the segmentation means in sub-step (i) comprise: selection means for selecting a window width; overlap determination means for selecting a window overlap extent; and segmentation means for segmenting the signal into windows, each window having the selected window width and the windows overlapping each other to the selected overlap extent.

31. An apparatus according to claim 29, wherein the filtering means in sub-step (ii) are comb filters.

32. An apparatus according to claim 29, wherein the gradient determination means in sub-step (v) comprises means for determining a straight line of best fit to the linear function.

33. An apparatus according to claim 29, wherein the percussivity determination means in sub-step (vi) consolidate the percussivity for each window into a histogram.

34. A computer readable medium incorporating a computer program product for querying a music database, which contains a plurality of pieces of music wherein the pieces are indexed according to one or more parameters, said computer program product comprising: a request means for forming a request which specifies one or more pieces of music and/or associated parameters and one or more conditional expressions; a parameter determination means for determining associated parameters for the specified pieces of music if the parameters have not been specified; a comparison means for comparing the specified parameters and corresponding parameters associated with other pieces of music in the database; a distance determination means for calculating a distance based on the comparisons; and a determination means for identifying pieces of music which are at such distances from the specified pieces of music as to satisfy the conditional expressions.

35. A computer readable medium according to claim 34, said computer program product comprising: an output means for outputting the identified pieces of music and/or the names of said pieces.
36. A computer readable medium according to claim 34, wherein the distance determination means comprises: (i) a distance determination means for a loudness, a percussivity, and a sharpness; and (ii) a sorting means for a tempo.
37. A computer readable medium according to claim 34, wherein the computer program product relating to the comparison means comprises a means for clustering the pieces of music in the database into classes.
38. A computer readable medium according to claim 34, wherein a classification according to which the pieces of music are indexed uses feature extraction means, said computer program product comprising: segmentation means for segmenting an entire piece of music over time into a plurality of windows; feature extraction means for extracting one or more features in each of said windows; and histogram determination means for arranging the features in histograms wherein the histograms are representative of the features over the entire piece of music.
39. A computer readable medium according to claim 38, wherein a first feature extracted in step (b) is at least one tempo extracted from a digitised music signal, and wherein said computer program product comprises: (i) segmentation means for segmenting the music signal into a plurality of windows; (ii) energy determination means for determining values indicative of the energy in each window; (iii) peak location determination means for locating the peaks of an energy signal which signal is derived from the energy values in each window; (iv) onset signal generation means for generating an onset signal comprising pulses, where pulse peaks substantially coincide with the peaks of the energy signal; (v) a plurality of comb filter means for filtering the onset signal wherein the plurality of comb filter means have resonant frequencies located according to frequencies derived from the window segmentation; (vi) energy accumulation means for accumulating an energy in each filter process over a duration of the music signal; and (vii) identification means for identifying the filter processes having Nth highest energies wherein resonant frequencies of the identified processes are representative of at least one tempo in the music signal.

40. A computer readable medium according to claim 39, wherein said computer program product relating to the energy determination means in sub-step (ii) comprises: transform determination means for determining transform components for the music signal in each window; and addition means for adding amplitudes of the components in each window to form a component sum, said component sum being indicative of energy in a window.

41. A computer readable medium according to claim 39, said computer program product further comprising low pass filtering means for low pass filtering the energy signal after locating the peaks of an energy signal in sub-step (iii) and prior to forming the onset signal generated in sub-step (iv).
42. A computer readable medium according to claim 39, wherein said computer program product relating to the onset signal generation means in sub-step (iv) comprises: differentiating means for differentiating the energy signal; and rectification means for half-wave rectifying the differentiated signal to form the onset signal.
43. A computer readable medium according to claim 39, wherein said computer program product relating to the onset signal generation means in sub-step (iv) comprises: sampling means for sampling the energy signal; comparator means for comparing consecutive samples to determine a positive peak; and pulse generation means for generating a single pulse when each positive peak is detected.
44. A computer readable medium according to claim 39, said computer program product relating to the filter means resonant frequencies spanning a frequency range substantially between 1Hz and 4Hz.
45. A computer readable medium according to claim 38, wherein a second feature extracted in step (b) is a percussivity of a signal, and wherein said computer program product relating to the feature extraction means comprises: (i) segmentation means for segmenting the signal into a plurality of windows, and for each window; (ii) filtering means for filtering by a plurality of filters; (iii) filter output determination means for determining an output for each filter; (iv) function determination means for determining a function of the filter output values; (v) gradient determination means for determining a gradient for the linear function; and (vi) percussivity determination means for determining a percussivity as a function of the gradient.
46. A computer readable medium according to claim 45, wherein said computer program product relating to the segmentation means in sub-step (i) comprises: selection means for selecting a window width; overlap determination means for selecting a window overlap extent; and segmentation means for segmenting the signal into windows, each window having the selected window width and the windows overlapping each other to the selected overlap extent.
47. A computer readable medium according to claim 45, wherein said computer program product relating to the filtering means in sub-step (ii) are comb filters.
48. A computer readable medium according to claim 45, wherein said computer program product relating to the gradient determination means in sub-step (v) comprises means for determining a straight line of best fit to the linear function.
49. A computer readable medium according to claim 45, wherein said computer program product relating to the percussivity determination means in sub-step (vi) consolidates the percussivity for each window into a histogram.

50. A method for querying a music database substantially as described herein with reference to any one of the embodiments, as that embodiment is shown in the accompanying drawings.
51. An apparatus for querying a music database substantially as described herein with reference to any one of the embodiments, as that embodiment is shown in the accompanying drawings.
52. A computer readable medium for querying a music database substantially as described herein with reference to any one of the embodiments, as that embodiment is shown in the accompanying drawings.

DATED this second Day of July, 2002

Canon Kabushiki Kaisha

Patent Attorneys for the Applicant
SPRUSON & FERGUSON
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU45061/00A AU751683B2 (en) | 1998-05-07 | 2000-07-06 | A system and method for querying a music database |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AUPP3405 | 1998-05-07 | ||
AUPP3410 | 1998-05-07 | ||
AUPP3408 | 1998-05-07 | ||
AU45061/00A AU751683B2 (en) | 1998-05-07 | 2000-07-06 | A system and method for querying a music database |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU23854/99A Division AU718035B2 (en) | 1998-05-07 | 1999-04-19 | A system and method for querying a music database |
Publications (2)
Publication Number | Publication Date |
---|---|
AU4506100A AU4506100A (en) | 2000-09-07 |
AU751683B2 true AU751683B2 (en) | 2002-08-22 |
Family
ID=3732217
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU45061/00A Ceased AU751683B2 (en) | 1998-05-07 | 2000-07-06 | A system and method for querying a music database |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU751683B2 (en) |
- 2000-07-06 AU AU45061/00A patent/AU751683B2/en not_active Ceased
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5756915A (en) * | 1992-10-19 | 1998-05-26 | Kabushiki Kaisha Kawai Gakki Seisakusho | Electronic musical instrument having a search function and a replace function |
JPH09293083A (en) * | 1996-04-26 | 1997-11-11 | Toshiba Corp | Music retrieval device and method |
US5986199A (en) * | 1998-05-29 | 1999-11-16 | Creative Technology, Ltd. | Device for acoustic entry of musical data |
Also Published As
Publication number | Publication date |
---|---|
AU4506100A (en) | 2000-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6201176B1 (en) | System and method for querying a music database | |
Tzanetakis et al. | Audio analysis using the discrete wavelet transform | |
US5918223A (en) | Method and article of manufacture for content-based analysis, storage, retrieval, and segmentation of audio information | |
US7022907B2 (en) | Automatic music mood detection | |
US7396990B2 (en) | Automatic music mood detection | |
Peeters et al. | The timbre toolbox: Extracting audio descriptors from musical signals | |
Gouyon et al. | On the use of zero-crossing rate for an application of classification of percussive sounds | |
Gómez | Tonal description of polyphonic audio for music content processing | |
Goto | A robust predominant-F0 estimation method for real-time detection of melody and bass lines in CD recordings | |
US20080300702A1 (en) | Music similarity systems and methods using descriptors | |
US20030205124A1 (en) | Method and system for retrieving and sequencing music by rhythmic similarity | |
Yoshii et al. | Drum sound recognition for polyphonic audio signals by adaptation and matching of spectrogram templates with harmonic structure suppression | |
EP1143409A1 (en) | Rhythm feature extractor | |
Yang | Macs: music audio characteristic sequence indexing for similarity retrieval | |
Tzanetakis et al. | Audio information retrieval (AIR) tools | |
Yoshii et al. | Automatic Drum Sound Description for Real-World Music Using Template Adaptation and Matching Methods. | |
WO2009001202A1 (en) | Music similarity systems and methods using descriptors | |
Lu et al. | Automated extraction of music snippets | |
Wold et al. | Classification, search and retrieval of audio | |
US20060075883A1 (en) | Audio signal analysing method and apparatus | |
Izmirli | Template based key finding from audio | |
Zhang et al. | System and method for automatic singer identification | |
Alonso et al. | A study of tempo tracking algorithms from polyphonic music signals | |
Jun et al. | Music structure analysis using self-similarity matrix and two-stage categorization | |
Gulati et al. | Meter detection from audio for Indian music |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGA | Letters patent sealed or granted (standard patent) |