USRE43379E1 - Music selecting apparatus and method - Google Patents

Music selecting apparatus and method

Info

Publication number
USRE43379E1
Authority
US
United States
Prior art keywords
music
sensitivity
storage device
accordance
learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US12/392,579
Inventor
Yasunori Suzuki
Yasuteru Kodama
Satoshi Odagawa
Takehiko Shioda
Shinichi Gayama
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pioneer Corp
Original Assignee
Pioneer Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/959,314 external-priority patent/US7385130B2/en
Application filed by Pioneer Corp filed Critical Pioneer Corp
Priority to US12/392,579 priority Critical patent/USRE43379E1/en
Application granted granted Critical
Publication of USRE43379E1 publication Critical patent/USRE43379E1/en
Expired - Fee Related legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/0033 Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H1/0041 Recording/reproducing or transmission of music for electrophonic musical instruments in coded form

Definitions

  • This invention relates to a music selecting apparatus and method which selects one of a plurality of music pieces.
  • a well-known method for selecting a music piece preferred by a user from among a plurality of music pieces involves extracting the physical characteristics of the music pieces as data, classifying the plurality of music pieces in accordance with the extraction results, and using the result for music selection.
  • as a method for obtaining physical characteristic data of each music piece, a method for obtaining power spectrum data from music data, for example, is widely known (see Japanese Patent Application Kokai No. 10-134549).
  • a method for obtaining physical characteristic data by patterning time-series changes using an N-gram method, based on the frequency bandwidth and the length of the reproduced sound of the music piece and the musical score, is also known.
  • however, such physical characteristic data has no correlation with the sensitivities of the user. Hence there is the problem that the music piece imagined by the user is not necessarily selected.
  • a music selecting apparatus is an apparatus for selecting a music piece from a plurality of music pieces in accordance with an input operation, comprising: a first storage device which stores, as data, a degree of chord change for each of the plurality of music pieces; a setting device which sets a sensitivity word for music selection in accordance with the input operation; and, a music selector which detects a music piece having a degree of chord change corresponding to the sensitivity word set by the setting device, in accordance with the chord change degree for each of the plurality of music pieces.
  • a music selecting method is a method for selecting a music piece from among a plurality of music pieces in accordance with an input operation, comprising the steps of: storing, as data, a degree of chord change for each of the plurality of music pieces; setting a sensitivity word for music selection in accordance with the input operation; and, detecting a music piece having a degree of chord change corresponding to the set sensitivity word, in accordance with the chord change degree for each of the plurality of music pieces.
  • a music selecting apparatus is an apparatus for selecting a music piece from among a plurality of music pieces in accordance with an input operation, comprising: a first storage device which stores, as data, a characteristic value of at least one characteristic parameter for each of the plurality of music pieces; a setting device which sets a sensitivity word for music selection from among a plurality of sensitivity words, in accordance with the input operation; a second storage device which stores, as data, a correction value for each of the plurality of sensitivity words; a reading portion which reads, from the second storage device, the correction value corresponding to the sensitivity word for the music selection set by the setting device; a correction device which corrects the characteristic value of the characteristic parameter for each of the plurality of music pieces in accordance with the correction value read by the reading portion to compute a sensitivity matching degree; a music selector which selects at least one music piece from among the plurality of music pieces, in accordance with the sensitivity matching degree for each of the plurality of music pieces, computed by the correction device; a matching
  • a music selecting method is a method for selecting a music piece from among a plurality of music pieces in accordance with an input operation, comprising the steps of: storing a characteristic value of at least one characteristic parameter as data for each of the plurality of music pieces; setting a sensitivity word for music selection from among a plurality of sensitivity words in accordance with the input operation; storing a correction value as data for each of the plurality of sensitivity words in a second storage device; reading the correction value corresponding to the sensitivity word for the music selection from the second storage device; correcting the characteristic value of the characteristic parameters for each of the plurality of music pieces in accordance with the read correction value to compute a sensitivity matching degree; selecting at least one music piece from among the plurality of music pieces in accordance with the sensitivity matching degrees computed for each of the plurality of music pieces; judging whether the selected music piece matches the sensitivity word for the music selection, in accordance with the input operation; computing a learning value in accordance with the judgment result, and storing the computed learning value in a
  • FIG. 1 is a block diagram showing the configuration of a music selecting apparatus according to the present invention
  • FIG. 2 shows a default database
  • FIG. 3 is a flowchart showing music selection operation
  • FIG. 4 is a flowchart showing the continuous portion of the music selection operation of FIG. 3 ;
  • FIG. 5 is a flowchart showing a learning routine
  • FIG. 6 is a flowchart showing personal learning value computation operation
  • FIG. 7 is a flowchart showing another example of the learning routine
  • FIG. 8 is a flowchart showing personal learning value computation operation in the learning routine of FIG. 7 ;
  • FIG. 9 shows a second personal learning value database having unmatched music data
  • FIG. 10 is a flowchart showing a portion of music selection operation to which the learning routine of FIG. 7 is applied.
  • FIG. 1 shows a music selecting apparatus according to the present invention.
  • the music selecting apparatus comprises a music input device 1 , input operation device 2 , data storing devices 3 , 4 and 5 , control device 6 , display device 7 , music reproducing device 8 , digital-analog converter 9 , and speaker 10 .
  • the music input device 1 is connected to the control device 6 and data storing device 3 , and is a device for input of audio signals (for example, PCM data) of digitized music pieces to the music selecting apparatus.
  • as the music input device 1, for example, a disc player which plays a disc such as a CD, or a streaming interface which receives streaming music data, is employed.
  • the input operation device 2 is a device operated by the user of the music selecting apparatus to input data and instructions. In addition to character keys and numeric keys, the input operation device 2 is provided with a “YES” key, a “NO” key, an “END” key, a “NEXT MUSIC” key, and other specialized keys.
  • the output of the input operation device 2 is connected to the control device 6 .
  • the types of keys of the input operation device 2 are not necessarily limited to those described above.
  • the data storing device 3, which is the third storage means, stores, as files, music data supplied from the music input device 1.
  • Music data is data indicating the reproduced sounds of a music piece, and may be, for example, PCM data, MP3 data, MIDI data, or similar.
  • the music name, singer name, and other music information is stored for each music piece in the data storing device 3 .
  • Music data accumulated in the data storing device 3 corresponds to a plurality of music pieces 1 through n (where n is greater than one).
  • the data storing device 4 stores as a characteristic parameter database (first storage device), for each of the n music pieces for which music data is accumulated in the data storing device 3 , characteristic values for the degree of chord change ( 1 ), degree of chord change ( 2 ), degree of chord change ( 3 ), beat (number of beats per unit time), maximum beat level, mean amplitude level, maximum amplitude level, and the key, as characteristic parameters.
  • the degree of chord change ( 1 ) is the number of chords per minute in the music piece;
  • the degree of chord change ( 2 ) is the number of types of chords used in the music piece;
  • the degree of chord change ( 3 ) is the number of change points, such as discord, which change an impression of the music piece during the chord progression.
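  • The three degrees of chord change defined above can be sketched in code. This is a minimal illustration only: the chord-event representation, the function names, and the notion of a fixed set of "tension" chords for counting change points are hypothetical assumptions, not definitions from the patent.

```python
# Hypothetical sketch of the three degrees of chord change defined above.
# A music piece is represented as a list of (time_in_seconds, chord_name)
# events; this representation and the "tense chord" set are assumptions.

def chord_change_degree_1(events, duration_seconds):
    """Degree of chord change (1): number of chords per minute."""
    return len(events) / (duration_seconds / 60.0)

def chord_change_degree_2(events):
    """Degree of chord change (2): number of distinct chord types used."""
    return len({chord for _, chord in events})

def chord_change_degree_3(events, tense_chords=("Cdim", "G7b9")):
    """Degree of chord change (3): number of change points, such as a
    discord, that change the impression during the chord progression.
    Here a 'change point' is simply membership in a hypothetical set of
    tension chords."""
    return sum(1 for _, chord in events if chord in tense_chords)

events = [(0, "C"), (15, "Am"), (30, "Cdim"), (45, "G7"), (60, "C")]
print(chord_change_degree_1(events, 120))  # 2.5 chords per minute
print(chord_change_degree_2(events))       # 4 distinct chords
print(chord_change_degree_3(events))       # 1 change point
```

In practice the patent derives these values from chord analysis of the audio; the point here is only that each degree reduces to a simple count over the chord progression.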
  • Chords themselves have elements which may provide depth to a music piece, or impart a sense of tension to the listener, or similar. Further, a music piece may be provided with atmosphere through a chord progression. Chords having such psychological elements are optimal as music-characterizing quantities used by a music selecting apparatus to select music pieces through sensitivity words, and in addition to the simple characteristics of the melody, it is thought that the intentions of the composer, including the contents of the lyrics, may to some extent be reflected therein; hence chords are employed as a portion of the characteristic parameters.
  • also stored in the data storing device 4, as a default database, are an average value and an unbiased variance for characteristic parameters comprising the degree of chord change (1), degree of chord change (2), degree of chord change (3), beat, maximum beat level, mean amplitude level, maximum amplitude level, and the key.
  • the average value and unbiased variance represent a characteristic value for each of the characteristic parameters, as well as a correction value used for computation of a sensitivity matching degree.
  • the average value and unbiased variance are described below.
  • FIG. 2 shows, in a table, the average values and unbiased variances of each of the characteristic parameters for different sensitivity words, which are the contents of the default database.
  • Ma1 to Ma6, Mb1 to Mb6, and similar are average values;
  • Sa1 to Sa6, Sb1 to Sb6, and similar are unbiased variances.
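  • The default database of FIG. 2 can be modeled as a nested mapping from sensitivity word to characteristic parameter to an (average, unbiased variance) pair. The sketch below is hypothetical: the parameter names and all numeric values are placeholders, not figures from the patent.

```python
# Hypothetical in-memory model of the default database of FIG. 2:
# sensitivity word -> characteristic parameter -> (average, unbiased variance).
# All names and numbers are placeholders.
DEFAULT_DB = {
    "rhythmical": {
        "chord_change_1": (12.0, 4.0),   # corresponds to a (Ma1, Sa1) pair
        "chord_change_2": (6.0, 1.5),    # (Mb1, Sb1)
        "beat": (120.0, 100.0),          # (Md1, Sd1)
    },
    "gentle": {
        "chord_change_1": (5.0, 2.0),    # (Ma2, Sa2)
        "chord_change_2": (4.0, 1.0),    # (Mb2, Sb2)
        "beat": (70.0, 50.0),            # (Md2, Sd2)
    },
}

avg, var = DEFAULT_DB["rhythmical"]["beat"]
print(avg, var)  # 120.0 100.0
```

Each (average, variance) pair doubles as the correction value used later when computing the sensitivity matching degree.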
  • the sensitivity word is a word expressing feelings felt when a listener listens to a music piece. Examples are “rhythmical”, “gentle”, “bright”, “sad”, “healing”, and “lonely”.
  • a matched music database (fourth storage device) and unmatched music database (sixth storage device) are formed in the data storing device 5 .
  • in each of these databases, data for 50 music pieces is stored for each sensitivity word.
  • when a database is full, new data is written while the oldest data is erased.
  • the number of music pieces stored for each sensitivity word in the matched music database and in the unmatched music database is not limited to 50 music pieces, but may be a different number of music pieces.
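  • The fixed-size behavior described above (50 pieces per sensitivity word, with the oldest entry overwritten by new data) matches a bounded FIFO. A minimal sketch using Python's `collections.deque`; the variable names are hypothetical:

```python
from collections import defaultdict, deque

# Hypothetical sketch: one bounded FIFO of music IDs per sensitivity word.
# deque(maxlen=50) silently discards the oldest entry when a 51st is added,
# which is exactly the overwrite-oldest behavior described above.
matched_db = defaultdict(lambda: deque(maxlen=50))

for music_id in range(55):          # write 55 pieces for one sensitivity word
    matched_db["gentle"].append(music_id)

print(len(matched_db["gentle"]))    # 50: only the newest 50 survive
print(matched_db["gentle"][0])      # 5: pieces 0 through 4 were erased
```

The same structure would serve both the matched music database and the unmatched music database, with a different `maxlen` if a number other than 50 is chosen.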
  • the control device 6 comprises, for example, a microcomputer, and performs the music selection operation, described below, in accordance with input operations by the user.
  • the display device 7 displays selection fields related to the control of the control device 6 , the contents input to the music input device 1 , and a list of music pieces presented to the user.
  • the music reproducing device 8 reads music data for a music piece selected by the user from the data storing device 3 , and reproduces a digital audio signal in accordance with the read music data.
  • the digital-analog converter 9 converts the digital audio signals reproduced by the music reproducing device 8 into analog audio signals, which are supplied to the speaker 10 .
  • next, music selection operation in a music selection system of this configuration is explained. It is assumed that a single user uses the music selecting apparatus; in the case of a device used by a plurality of users, when starting the music selection operation, a user ID identifying the user must be input via the input operation device 2. This is in order to specify the user when utilizing personal learning values, described below.
  • when music selection operation begins, the control device 6 first causes the display device 7 to display an image requesting selection of a sensitivity word, as shown in FIG. 3 and FIG. 4 (step S1). As sensitivity words for music selection, “rhythmical”, “gentle”, “bright”, “sad”, “healing”, “lonely”, and other items are displayed on the screen of the display device 7, and in addition an “other sensitivity word” item is displayed. At the same time, an instruction to select from among these displayed items is shown. The user can perform an input operation through the input operation device 2 to select one of these sensitivity words, or another sensitivity word, in response to the display. After executing step S1, the control device 6 judges whether there has been operation input (step S2).
  • if there has been operation input, the control device 6 judges whether one of the displayed sensitivity words has been selected, in accordance with the output from the input operation device 2 (step S3). That is, a judgment is made as to whether one of the displayed sensitivity words, or “other sensitivity word”, has been selected.
  • if one of the displayed sensitivity words has been selected, the control device 6 captures the selected sensitivity word (step S4), and judges whether personal learning values exist for the selected sensitivity word (step S5).
  • the personal learning values are the average value and unbiased variance, specific to the user, of each of the characteristic parameters for the selected sensitivity word; these are computed in a step described below, and stored in a personal learning value database (fifth storage device) in the data storing device 4. If personal learning values for the selected sensitivity word do not exist in the data storing device 4, an average value and an unbiased variance for each of the characteristic parameters corresponding to the selected sensitivity word are read from the default database (step S6).
  • if personal learning values do exist, an image asking the user whether to select a music piece using the personal learning values is displayed on the display device 7 (step S7).
  • the user can perform an input operation on a “YES” key or a “NO” key using the input operation device 2 , based on the display, to select whether or not to use personal learning values.
  • the control device 6 judges whether there has been input operation of the “YES” key or of the “NO” key (step S 8 ).
  • if there is input operation of the “YES” key, indicating that personal learning values are to be used, the average value and unbiased variance of each of the characteristic parameters corresponding to the selected sensitivity word are read from the personal learning value database (step S9). If there is input operation of the “NO” key, indicating that personal learning values are not to be used, processing proceeds to step S6, and the average value and unbiased variance of each of the characteristic parameters corresponding to the selected sensitivity word are read from the default database.
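  • Steps S5 through S9 amount to a lookup with fallback: use the user's personal learning values when they exist and the user asks for them, otherwise use the defaults. A minimal sketch, with all database contents and names hypothetical:

```python
# Hypothetical sketch of steps S5-S9: read (average, unbiased variance)
# pairs from the personal learning value database when available and
# requested, otherwise fall back to the default database. Values are
# placeholders, not from the patent.

DEFAULT_DB = {"gentle": {"beat": (70.0, 50.0)}}
PERSONAL_DB = {("user1", "gentle"): {"beat": (64.0, 30.0)}}

def read_learning_values(user_id, word, use_personal):
    personal = PERSONAL_DB.get((user_id, word))
    if use_personal and personal is not None:   # "YES" key, values exist: step S9
        return personal
    return DEFAULT_DB[word]                     # otherwise: step S6

print(read_learning_values("user1", "gentle", use_personal=True))
# {'beat': (64.0, 30.0)}
print(read_learning_values("user2", "gentle", use_personal=True))
# {'beat': (70.0, 50.0)}  -- no personal values, so defaults are used
```

The user ID key reflects the multi-user case mentioned earlier, where the ID input at the start selects whose personal learning values apply.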
  • upon reading the average values and unbiased variances of each of the characteristic parameters in step S6 or in step S9, the control device 6 computes a sensitivity matching degree for each of the n music pieces (step S10).
  • the sensitivity matching degree for the i-th music piece is computed as follows.
  • Sensitivity matching degree = (1/(a(i)−Ma)²)×(1/Sa) + (1/(b(i)−Mb)²)×(1/Sb) + (1/(c(i)−Mc)²)×(1/Sc) + (1/(d(i)−Md)²)×(1/Sd) + (1/(e(i)−Me)²)×(1/Se) + (1/(f(i)−Mf)²)×(1/Sf) + (1/(g(i)−Mg)²)×(1/Sg) + (1/(h(i)−Mh)²)×(1/Sh), where:
  • the degree of chord change ( 1 ) of the i-th music piece is a(i)
  • the degree of chord change ( 2 ) of the i-th music piece is b(i)
  • the degree of chord change ( 3 ) of the i-th music piece is c(i)
  • the beat of the i-th music piece is d(i)
  • the maximum beat level of the i-th music piece is e(i)
  • the mean amplitude level of the i-th music piece is f(i)
  • the maximum amplitude level of the i-th music piece is g(i)
  • the key of the i-th music piece is h(i).
  • the selected sensitivity word is A
  • the average values and unbiased variances for this sensitivity word A are Ma, Sa for the degree of chord change ( 1 ), Mb, Sb for the degree of chord change ( 2 ), Mc, Sc for the degree of chord change ( 3 ), Md, Sd for the beat, Me, Se for the maximum beat level, Mf, Sf for the mean amplitude level, Mg, Sg for the maximum amplitude level, and Mh, Sh for the key.
  • the units of numerical values differ depending on the characteristic parameter, and so levels may be adjusted.
  • for example, the term for the degree of chord change (1) may be computed as (100/(a(i)−Ma)²)×(1/Sa) to adjust its level.
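  • The matching degree above sums, over the characteristic parameters, the reciprocal of the squared distance from the average, weighted by the reciprocal of the unbiased variance, so pieces close to the learned averages on low-variance parameters score highest. A minimal sketch; the function, the two-parameter feature set, and the numbers are hypothetical:

```python
# Hypothetical sketch of step S10: the sensitivity matching degree for one
# music piece, as a sum over characteristic parameters of
#   (scale / (x(i) - M)^2) * (1 / S)
# where M is the average and S the unbiased variance for the selected
# sensitivity word, and scale is an optional per-parameter level
# adjustment (e.g. 100 for the degree of chord change (1)).
# A small offset would be needed if x(i) could equal M exactly.

def matching_degree(features, stats, scales=None):
    scales = scales or {}
    total = 0.0
    for name, x in features.items():
        avg, var = stats[name]
        scale = scales.get(name, 1.0)
        total += (scale / (x - avg) ** 2) * (1.0 / var)
    return total

features = {"chord_change_1": 10.0, "beat": 118.0}            # piece i
stats = {"chord_change_1": (12.0, 4.0), "beat": (120.0, 100.0)}

print(matching_degree(features, stats))  # ≈ 0.065
```

The music list of step S11 then follows by sorting all pieces in descending order of this degree, e.g. `sorted(pieces, key=..., reverse=True)`.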
  • upon computing the sensitivity matching degree for each of the n music pieces, the control device 6 makes up a music list showing the music pieces in descending order of sensitivity matching degree (step S11), and causes the display device 7 to display an image showing this music list (step S12).
  • the screen of the display device 7 shows music names, singer names, and other music information, read from the data storing device 3, with the music pieces displayed in descending order of sensitivity matching degree.
  • in step S3, “other sensitivity word” may be selected; that is, the user may desire a music piece which conforms to a sensitivity word other than the sensitivity words prepared in advance.
  • the control device 6 causes the display device 7 to display an image to request input of a sensitivity word (step S 13 ).
  • the user can use the input operation device 2 to input, as text, any arbitrary sensitivity word, in accordance with the displayed instructions.
  • the control device 6 then judges whether text has been input (step S14). If there has been input, the control device 6 captures and stores the input text as a sensitivity word (step S15).
  • the control device 6 uses the music pieces 1 through n for which music data is accumulated in the data storing device 3 to make up a random music list (step S 16 ), and then proceeds to the above step S 12 and causes the display device 7 to display an image showing this music list. On the screen of the display device 7 are listed, in random order, the names, singers, and other music information for the music pieces.
  • the sensitivity word captured at step S 15 can be included in the sensitivity words displayed at step S 1 of the next music selection operation.
  • after execution of step S12, the variable m is set to 1 (step S17), and music data for the m-th music piece in the music list is read from the data storing device 3 and supplied to the music reproducing device 8, to specify music reproduction (step S18).
  • the music reproducing device 8 reproduces a digital audio signal from the music data for the m-th music piece thus supplied, and the digital audio signal is supplied to the digital-analog converter 9. After conversion into an analog audio signal by the digital-analog converter 9, the reproduced sounds of the m-th music piece are output from the speaker 10. Thus, the user can listen to the reproduced sounds of the music piece.
  • an image is then displayed on the display device 7 to ask the user whether or not to perform personal learning for the music piece being reproduced (step S19).
  • the user can use the input operation device 2 to operate the “YES” key or the “NO” key, in accordance with the displayed contents, to select whether or not to perform personal learning for the music piece being reproduced.
  • the control device 6 judges whether there has been operation input of the “YES” key or of the “NO” key (step S 20 ). If there has been input due to operation of the “YES” key, indicating that personal learning is to be performed, processing proceeds to the learning routine.
  • in step S21, the display device 7 is caused to display an image asking the user whether to proceed to reproduction of the next music piece on the music list, or to halt music selection.
  • the control device 6 judges whether there has been input operation of the “NEXT MUSIC” key (step S22). If there has not been input operation of the “NEXT MUSIC” key, the control device judges whether there has been operation of the “END” key (step S23).
  • if there has been input operation of the “NEXT MUSIC” key, the variable m is increased by 1 to compute the new value of the variable m (step S24), and a judgment is made as to whether the variable m is greater than the final number MAX of the music list (step S25). If m > MAX, the music selection operation ends. On this occasion, the display device 7 may be caused to display an image informing the user that music pieces have been reproduced up to the final number of the music list. On the other hand, if m ≤ MAX, processing returns to step S18 and the above operations are repeated.
  • if there has been input operation of the “END” key, music selection by the control device 6 ends (step S26); processing may instead return to step S1.
  • the control device 6 first causes the display device 7 to display an image to ask the user whether the music piece currently being reproduced is a music piece which matches the sensitivity word which has been selected or input, as shown in FIG. 5 (step S 31 ).
  • the user can use the input operation device 2 to input “YES” or “NO”, in accordance with the displayed contents, to select whether or not the music piece being reproduced matches the sensitivity word.
  • the control device 6 judges whether there has been input using either the “YES” key or the “NO” key (step S 32 ).
  • if there is input using the “YES” key, indicating that the music piece being reproduced matches the sensitivity word, matched music data indicating this music piece is written to the matched music database of the data storing device 5 (step S33). On the other hand, if there is input using the “NO” key, indicating that the music piece being reproduced does not match the sensitivity word, the learning routine is ended and processing returns to the above step S21.
  • after step S33, the control device 6 judges whether there is a sensitivity word for which the number of matched music pieces written as matched music data to the matched music database has reached 10 (a predetermined number of music pieces) (step S34). If it is judged that there is a sensitivity word for which the number of matched music pieces is 10 or greater, matched music data is read from the matched music database of the data storing device 5 and unmatched music data is read from the unmatched music database (step S35), and the read data is used to compute personal learning values by statistical processing (step S36).
  • the predetermined number of music pieces is stipulated to be 10 music pieces, but another value for the number of music pieces may be used.
  • in this computation, a characteristic value for each of the characteristic parameters (degree of chord change (1), degree of chord change (2), degree of chord change (3), beat (number of beats per unit time), maximum beat level, mean amplitude level, maximum amplitude level, and key) of each music piece indicated by the matched music data corresponding to the sensitivity word A in the matched music database is read from the characteristic parameter database of the data storing device 4 (step S51), and the average value Mave of the read characteristic values is computed for each characteristic parameter (step S52).
  • the unbiased variance S for each characteristic parameter is also computed (step S 53 ).
  • the control device 6 writes the average value Mave and unbiased variance S computed for each characteristic parameter into fields for the respective characteristic parameters corresponding to the sensitivity word A in the personal learning value database (step S 54 ).
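  • Steps S51 through S54 reduce to computing, per characteristic parameter, a mean and an unbiased variance over the matched music pieces. Python's `statistics.variance` uses the unbiased (n − 1 denominator) estimator, so it matches directly; the parameter names and values below are hypothetical:

```python
from statistics import mean, variance

# Hypothetical sketch of steps S51-S54: per-parameter personal learning
# values (Mave, unbiased variance S) computed over the matched music
# pieces for one sensitivity word. statistics.variance divides by n - 1,
# i.e. it is the unbiased estimator used in the text.

matched_features = {                     # parameter -> values over pieces
    "chord_change_1": [10.0, 12.0, 14.0],
    "beat": [110.0, 120.0, 130.0],
}

personal_learning_values = {
    name: (mean(values), variance(values))
    for name, values in matched_features.items()
}

print(personal_learning_values["chord_change_1"])  # (12.0, 4.0)
print(personal_learning_values["beat"])            # (120.0, 100.0)
```

Writing this mapping into the fields for sensitivity word A corresponds to step S54; the same computation over the unmatched music data yields Mave′ and S′ in the variant routine described later.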
  • control device 6 After thus computing personal learning values, the control device 6 returns to the above step S 21 , and continues operation as described above.
  • in the above description, the degree of chord change (1), degree of chord change (2), degree of chord change (3), beat (number of beats per unit time), maximum beat level, mean amplitude level, maximum amplitude level, and the key are described as characteristic parameters, but others are possible. Also, the sensitivity matching degree may be computed using only at least one of the three degrees of chord change (1) through (3).
  • degrees of chord change are not limited to the above-described number of chords per minute in the music piece, number of types of chords used in the music piece, and number of change points, such as discord, which impart an impression of the music piece during the chord progression.
  • the amount of change in the chord root, or a change from a major chord to a minor chord, or the number of changes to other types of chords can also be used as degrees of chord change.
  • average values and unbiased variances are used as correction values, but other values may be used.
  • a multiplicative factor, variance or other weighting value to correct a degree of chord change or other characteristic value may be used.
  • the variance of one characteristic parameter for sensitivity word A as described above can be expressed by the following equation, in which C1 to Cj are the characteristic values of the j music pieces used:
  • Variance = {(Mave−C1)² + (Mave−C2)² + ... + (Mave−Cj)²}/j
  • if the music piece being reproduced does not match the sensitivity word, the unmatched music data for the music piece is written to the unmatched music database of the data storing device 5 (step S37).
  • FIG. 7 shows another example of a learning routine in the above step S 30 .
  • in this learning routine, if there is input operation of the “YES” key in step S32, indicating that the music piece being reproduced matches the sensitivity word, the control device 6 writes matched music data indicating the music piece to the matched music database of the data storing device 5 (step S33); on the other hand, if there is input operation of the “NO” key, indicating that the music piece being reproduced does not match the sensitivity word, unmatched music data indicating the music piece is written to the unmatched music database (sixth storage device) of the data storing device 5 (step S37), the learning routine is ended, and processing proceeds to the above step S21.
  • after step S33, the control device 6 judges whether the number of matched music pieces written as matched music data to the matched music database has reached 10 (a predetermined number of music pieces) (step S38). If the number of matched music pieces is judged to be 10 or greater, matched music data is read from the matched music database of the data storing device 5 and unmatched music data is read from the unmatched music database (step S39), and the read data is used to compute personal learning values through statistical processing (step S40). In step S38, the predetermined number of music pieces is stipulated to be 10, but of course a different value may be used.
  • an average value Mave and an unbiased variance S of a characteristic value for each characteristic parameter are computed for a sensitivity word A using the matched music data, and these values are written to the fields for the respective characteristic parameters corresponding to the sensitivity word A in the personal learning value database (steps S 51 to S 54 ).
  • a characteristic value for each of the characteristic parameters for each music piece indicated by unmatched music data for the sensitivity word A in the unmatched music database is read from the characteristic parameter database of the data storing device 4 (step S 55 ), and the average value Mave′ of characteristic values is computed for each characteristic parameter using the unmatched music data (step S 56 ).
  • the unbiased variance S′ is computed for each characteristic parameter using the unmatched music data (step S 57 ).
  • the methods for computing the average value Mave′ and unbiased variance S′ are similar to those used for the average value Mave and unbiased variance S.
  • the control device 6 writes the average value Mave′ and unbiased variance S′ computed for each characteristic parameter to the respective characteristic parameter fields corresponding to the sensitivity word A in the personal learning value database (step S58).
  • the personal learning values computed based on this unmatched music data are stored in a second personal learning value database (seventh storage device) as shown in FIG. 9 .
  • M′a1 to M′a6, M′b1 to M′b6, and so on are average values;
  • S′a1 to S′a6, S′b1 to S′b6, and so on are unbiased variances. Only the average values Mave′ may be used as personal learning values for unmatched music data.
  • when personal learning values for unmatched music data are provided, and in music selection operation there is input operation of the “YES” key in step S8 indicating that personal learning values are to be used, as shown in FIG. 10, average values and unbiased variances are read from the personal learning value databases for matched music data and for unmatched music data, for each of the characteristic parameters corresponding to the selected sensitivity word (step S61); in addition, an unmatched correction value is computed in accordance with at least one of the average value and the unbiased variance for the unmatched music data (step S62).
  • the unmatched correction value is computed by, for example, multiplying the average value by a coefficient, or by multiplying the reciprocal of the unbiased variance by a coefficient. The coefficient is specified for each of the characteristic parameters.
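  • The two computations described above (coefficient times the average, or coefficient times the reciprocal of the unbiased variance) can be sketched as follows. The per-parameter coefficients and all values are hypothetical placeholders:

```python
# Hypothetical sketch of step S62: one unmatched correction value per
# characteristic parameter, computed either as
#   coefficient * Mave'        (average over unmatched pieces), or
#   coefficient * (1 / S')     (reciprocal of the unbiased variance).
# The coefficient is specified per characteristic parameter.

unmatched_stats = {"chord_change_1": (8.0, 2.0)}   # (Mave', S'), placeholders
coefficients = {"chord_change_1": 0.01}            # hypothetical per-parameter

def correction_from_mean(name):
    mave_p, _ = unmatched_stats[name]
    return coefficients[name] * mave_p

def correction_from_variance(name):
    _, s_p = unmatched_stats[name]
    return coefficients[name] * (1.0 / s_p)

print(correction_from_mean("chord_change_1"))      # 0.08
print(correction_from_variance("chord_change_1"))  # 0.005
```

The resulting value α is then subtracted from the corresponding matched-data term of the sensitivity matching degree, so pieces resembling previously rejected ones score lower.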
  • the control device 6 then computes a sensitivity matching degree for each of the n music pieces (step S63).
  • the sensitivity matching degree is computed using the following equation.
  • ⁇ a, ⁇ b, ⁇ c, ⁇ d, ⁇ e, ⁇ f, ⁇ g, ⁇ h are unmatched correction values, computed in step S 62 , for the characteristic parameters, which are the degree of chord change ( 1 ), degree of chord change ( 2 ), degree of chord change ( 3 ), beat (number of beats per unit time), maximum beat level, mean amplitude level, maximum amplitude level, and the key, respectively.
  • Sensitivity matching degree ⁇ (1/
  • the unmatched correction values ⁇ a, ⁇ b, ⁇ c, ⁇ d, ⁇ e, ⁇ f, ⁇ g, ⁇ h act so as to reduce the sensitivity matching degree computed using matched music data based on personal learning values.
  • step S 63 after computation of sensitivity matching degrees, processing proceeds to step S 11 and a music list is made up, similarly to the music selection operation of FIG. 3 .
  • Sensitivity matching degree ⁇ (1/a(i) ⁇ Ma) 2 ) ⁇ ( ⁇ /Sa) ⁇ a ⁇ + ⁇ (1/b(i) ⁇ M ) 2 ) ⁇ ( ⁇ /Sb) ⁇ b ⁇ + ⁇ (1/c(i) ⁇ Mc) 2 ) ⁇ ( ⁇ /Sc) ⁇ c ⁇ + ⁇ (1/d(i) ⁇ Md) 2 ) ⁇ ( ⁇ /Sd) ⁇ d ⁇ + ⁇ (1/e(i) ⁇ Me) 2 ) ⁇ ( ⁇ /Se) ⁇ e ⁇ + ⁇ (1/f(i) ⁇ Mf) 2 ) ⁇ ( ⁇ /Sf) ⁇ f ⁇ + ⁇ (1/g(i) ⁇ Mg) 2 ) ⁇ ( ⁇ /Sg) ⁇ g ⁇ + ⁇ ( ⁇ /h(i) ⁇ Mh) 2 ) ⁇ ( ⁇ /Sh) ⁇ h ⁇
  • In the embodiment above, “rhythmical”, “gentle”, “bright”, “sad”, “healing”, and “lonely” are used as the selectable sensitivity words, but “joyful” or other sensitivity words may of course be used.
  • According to the present invention, music pieces matching with the sensitivities of the user can be presented to the user, so that music selection by the user becomes easy.
  • Further, the sensitivities of the user relating to music selection are learned, so that music pieces more closely matching with those sensitivities can be provided to the user, making music selection by the user easier still.
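The correction described in the bullets above can be illustrated with a short sketch in Python. This is not part of the patent; the function names, the flat per-parameter lists, and the coefficient values are illustrative assumptions.

```python
def unmatched_correction(avg_unmatched, coefficient):
    # Step S62, first variant named in the text: multiply the average value
    # computed from the unmatched music data by a per-parameter coefficient.
    return coefficient * avg_unmatched

def corrected_matching_degree(features, averages, variances, corrections):
    # Step S63: each term (1/|x - M|) * (1/S), built from the matched-data
    # personal learning values, is reduced by the unmatched correction value
    # (alpha) for that characteristic parameter.
    degree = 0.0
    for x, m, s, alpha in zip(features, averages, variances, corrections):
        degree += (1.0 / abs(x - m)) * (1.0 / s) - alpha
    return degree
```

Because each α is subtracted, a piece whose parameters resemble music the user previously rejected receives a lower sensitivity matching degree, which is the reducing behavior stated above.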


Abstract

A music selecting apparatus and method, which are capable of indicating a music piece matching with the sensitivities of the user. A degree of chord change is stored as data for each of a plurality of music pieces, a sensitivity word for music selection is set in accordance with an input operation, and a music piece having a chord change degree corresponding to the set sensitivity word is detected in accordance with the chord change degree of each of the plurality of music pieces.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to a music selecting apparatus and method for selecting one of a plurality of music pieces.
2. Description of the Related Art
A well-known method to select a music piece preferred by a user from among a plurality of music pieces involves extracting as data the physical characteristics of music pieces, classifying the plurality of music pieces in accordance with the extraction results, and using the result for music selection. As a method for obtaining physical characteristic data of each music piece, for example, a method for obtaining power spectrum data from music data is widely known (see Japanese Patent Application Kokai No. 10-134549). A method for obtaining physical characteristic data through the patterning of time-series changes using an N-gram method, based on the frequency bandwidth and the length of the reproduced sound of the music piece and the musical score, is also known.
However, in such conventional music selection methods, the physical characteristic data is not data which has a correlation with the sensitivities of the user. Hence there is the problem that the music piece imagined by the user is not necessarily selected.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a music selecting apparatus and method capable of providing a music piece appropriate to the sensitivities of the user.
A music selecting apparatus according to the present invention is an apparatus for selecting a music piece from a plurality of music pieces in accordance with an input operation, comprising: a first storage device which stores, as data, a degree of chord change for each of the plurality of music pieces; a setting device which sets a sensitivity word for music selection in accordance with the input operation; and, a music selector which detects a music piece having a degree of chord change corresponding to the sensitivity word set by the setting device, in accordance with the chord change degree for each of the plurality of music pieces.
A music selecting method according to the present invention is a method for selecting a music piece from among a plurality of music pieces in accordance with an input operation, comprising the steps of: storing, as data, a degree of chord change for each of the plurality of music pieces; setting a sensitivity word for music selection in accordance with the input operation; and, detecting a music piece having a degree of chord change corresponding to the set sensitivity word, in accordance with the chord change degree for each of the plurality of music pieces.
A music selecting apparatus according to the present invention is an apparatus for selecting a music piece from among a plurality of music pieces in accordance with an input operation, comprising: a first storage device which stores, as data, a characteristic value of at least one characteristic parameter for each of the plurality of music pieces; a setting device which sets a sensitivity word for music selection from among a plurality of sensitivity words, in accordance with the input operation; a second storage device which stores, as data, a correction value for each of the plurality of sensitivity words; a reading portion which reads, from the second storage device, the correction value corresponding to the sensitivity word for the music selection set by the setting device; a correction device which corrects the characteristic value of the characteristic parameter for each of the plurality of music pieces in accordance with the correction value read by the reading portion to compute a sensitivity matching degree; a music selector which selects at least one music piece from among the plurality of music pieces, in accordance with the sensitivity matching degree for each of the plurality of music pieces, computed by the correction device; a matching judgment device which judges whether the at least one music piece selected by the music selector matches the sensitivity word for the music selection, in accordance with an input operation; a learning value storage device which computes a learning value in accordance with a result of the judgment by the matching judgment device, and stores the computed learning value in association with the sensitivity word for the music selection; and, a learning judgment device which judges, when the sensitivity word for the music selection is set by the setting device, whether the learning value corresponding to the sensitivity word for the music selection exists in the learning value storage device; and wherein when the learning value corresponding to the sensitivity word for the music selection is judged by the learning judgment device to be stored in the learning value storage device, the correction device corrects the characteristic value of the characteristic parameter for each of the plurality of music pieces in accordance with the stored learning value to compute the sensitivity matching degree.
A music selecting method according to the present invention is a method for selecting a music piece from among a plurality of music pieces in accordance with an input operation, comprising the steps of: storing a characteristic value of at least one characteristic parameter as data for each of the plurality of music pieces; setting a sensitivity word for music selection from among a plurality of sensitivity words in accordance with the input operation; storing a correction value as data for each of the plurality of sensitivity words in a second storage device; reading the correction value corresponding to the sensitivity word for the music selection from the second storage device; correcting the characteristic value of the characteristic parameter for each of the plurality of music pieces in accordance with the read correction value to compute a sensitivity matching degree; selecting at least one music piece from among the plurality of music pieces in accordance with the sensitivity matching degrees computed for each of the plurality of music pieces; judging whether the selected music piece matches the sensitivity word for the music selection, in accordance with the input operation; computing a learning value in accordance with the judgment result, and storing the computed learning value in a learning value storage device in association with the sensitivity word for the music selection; judging whether the learning value corresponding to the sensitivity word for the music selection exists in the learning value storage device at the time the sensitivity word for the music selection is set; and, when it is judged that the learning value corresponding to the sensitivity word for the music selection is stored in the learning value storage device, correcting the characteristic value of the characteristic parameter for each of the plurality of music pieces in accordance with the stored learning value to compute the sensitivity matching degree.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing the configuration of a music selecting apparatus according to the present invention;
FIG. 2 shows a default database;
FIG. 3 is a flowchart showing music selection operation;
FIG. 4 is a flowchart showing the continuation of the music selection operation of FIG. 3;
FIG. 5 is a flowchart showing a learning routine;
FIG. 6 is a flowchart showing personal learning value computation operation;
FIG. 7 is a flowchart showing another example of the learning routine;
FIG. 8 is a flowchart showing personal learning value computation operation in the learning routine of FIG. 7;
FIG. 9 shows a second personal learning value database having unmatched music data; and,
FIG. 10 is a flowchart showing a portion of music selection operation to which the learning routine of FIG. 7 is applied.
DETAILED DESCRIPTION OF THE INVENTION
Below, embodiments of the invention are explained in detail, referring to the drawings.
FIG. 1 shows a music selecting apparatus according to the present invention. The music selecting apparatus comprises a music input device 1, input operation device 2, data storing devices 3, 4 and 5, control device 6, display device 7, music reproducing device 8, digital-analog converter 9, and speaker 10.
The music input device 1 is connected to the control device 6 and data storing device 3, and is a device for input of audio signals (for example, PCM data) of digitized music pieces to the music selecting apparatus. As the music input device 1, for example, a disc player which plays a disc such as a CD, or a streaming interface which receives streaming music data, is employed. The input operation device 2 is a device operated by the user of the music selecting apparatus to input data and instructions. In addition to character keys and numeric keys, the input operation device 2 is provided with a “YES” key, a “NO” key, an “END” key, a “NEXT MUSIC” key, and other specialized keys. The output of the input operation device 2 is connected to the control device 6. The types of keys of the input operation device 2 are not necessarily limited to those described above.
The data storing device 3, which is the third storage means, stores, as files, music data supplied from the music input device 1. Music data is data indicating the reproduced sounds of a music piece, and may be, for example, PCM data, MP3 data, MIDI data, or similar. The music name, singer name, and other music information is stored for each music piece in the data storing device 3. Music data accumulated in the data storing device 3 corresponds to a plurality of music pieces 1 through n (where n is greater than one). The data storing device 4 stores as a characteristic parameter database (first storage device), for each of the n music pieces for which music data is accumulated in the data storing device 3, characteristic values for the degree of chord change (1), degree of chord change (2), degree of chord change (3), beat (number of beats per unit time), maximum beat level, mean amplitude level, maximum amplitude level, and the key, as characteristic parameters. The degree of chord change (1) is the number of chords per minute in the music piece; the degree of chord change (2) is the number of types of chords used in the music piece; and the degree of chord change (3) is the number of change points, such as discord, which change an impression of the music piece during the chord progression.
Chords themselves have elements which may provide depth to a music piece, or impart a sense of tension to the listener, or similar. Further, a music piece may be provided with atmosphere through a chord progression. Chords having such psychological elements are optimal as music-characterizing quantities used by a music selecting apparatus to select music pieces through sensitivity words, and in addition to the simple characteristics of the melody, it is thought that the intentions of the composer, including the contents of the lyrics, may to some extent be reflected therein; hence chords are employed as a portion of the characteristic parameters.
In the data storing device 4, for each previously determined sensitivity word there are stored, as the default database (second storage device), an average value and an unbiased variance for the characteristic parameters, comprising the degree of chord change (1), degree of chord change (2), degree of chord change (3), beat, maximum beat level, mean amplitude level, maximum amplitude level, and the key. The average value and unbiased variance represent a characteristic value for each of the characteristic parameters, as well as a correction value used for computation of a sensitivity matching degree. The average value and unbiased variance are described below. FIG. 2 shows, in a table, the average values and unbiased variances of each of the characteristic parameters for the different sensitivity words, which are the contents of the default database. In FIG. 2, Ma1 to Ma6, Mb1 to Mb6, and similar are average values, and Sa1 to Sa6, Sb1 to Sb6, and similar are unbiased variances.
Here, the sensitivity word is a word expressing feelings felt when a listener listens to a music piece. Examples are “rhythmical”, “gentle”, “bright”, “sad”, “healing”, and “lonely”.
A matched music database (fourth storage device) and unmatched music database (sixth storage device) are formed in the data storing device 5. In each of these databases is stored data for 50 music pieces for each sensitivity word. When music data for more than 50 music pieces is to be written, the new data is written while erasing the oldest data. Of course the number of music pieces stored for each sensitivity word in the matched music database and in the unmatched music database is not limited to 50 music pieces, but may be a different number of music pieces.
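A minimal sketch of such a fixed-size, erase-oldest database in Python (illustrative only; the class and method names are assumptions, not terms from the patent):

```python
from collections import deque

class FeedbackDatabase:
    """Matched or unmatched music database: holds data for at most 50
    music pieces per sensitivity word, erasing the oldest entry when a
    new one is written into a full bucket."""

    def __init__(self, capacity=50):
        self._capacity = capacity
        self._buckets = {}

    def write(self, sensitivity_word, music_id):
        bucket = self._buckets.setdefault(
            sensitivity_word, deque(maxlen=self._capacity))
        bucket.append(music_id)  # a full deque silently drops the oldest

    def pieces(self, sensitivity_word):
        return list(self._buckets.get(sensitivity_word, ()))
```

Writing a 51st piece for a sensitivity word evicts the first piece written for it, matching the erase-oldest behavior described above; the capacity of 50 is the number the text itself treats as adjustable.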
The control device 6 comprises for example a microcomputer, and performs music selection operation in accordance with an input operation by a user, described below.
The display device 7 displays selection fields related to the control of the control device 6, the contents input to the music input device 1, and a list of music pieces presented to the user.
The music reproducing device 8 reads music data for a music piece selected by the user from the data storing device 3, and reproduces a digital audio signal in accordance with the read music data. The digital-analog converter 9 converts the digital audio signals reproduced by the music reproducing device 8 into analog audio signals, which are supplied to the speaker 10.
Next, music selection operation in a music selection system of this configuration is explained. It is assumed that a single user uses the music selecting apparatus; in the case of a device used by a plurality of users, when starting the music selection operation, a user ID identifying the user must be input via the input operation device 2. This is in order to specify the user utilizing personal learning values, described below.
When music selection operation begins, the control device 6 first causes the display device 7 to display an image in order to request selection of a sensitivity word, as shown in FIG. 3 and FIG. 4 (step S1). As sensitivity words for music selection, “rhythmical”, “gentle”, “bright”, “sad”, “healing”, “lonely”, and other items are displayed on the screen of the display device 7, and in addition an “other sensitivity word” item is displayed. At the same time, an instruction to select from among these displayed items is shown. The user can perform an input operation through the input operation device 2 to select one of these sensitivity words or another sensitivity word in response to the display. After executing step S1, the control device 6 judges whether there has been operation input (step S2). If there has been operation input, the control device 6 judges whether one of the sensitivity words displayed has been selected, in accordance with the output from the input operation device 2 (step S3). That is, a judgment is made as to whether one sensitivity word of the sensitivity words displayed, or “other sensitivity word”, has been selected.
If one of the displayed sensitivity words has been selected, the control device 6 captures the selected sensitivity word (step S4), and judges whether, for the selected sensitivity word, there exist personal learning values (step S5). The personal learning values are the average value and unbiased variance, specific to the user, of each of the characteristic parameters for the selected sensitivity word; the average values and unbiased variances are computed in a step described below, and stored in a personal learning value database (fifth storage device) in the data storing device 4. If personal learning values for the selected sensitivity word do not exist in the data storing device 4, an average value and an unbiased variance for each of the characteristic parameters corresponding to the selected sensitivity word are read from the default database (step S6). On the other hand, if personal learning values for the selected sensitivity word exist in the data storing device 4, an image asking the user whether to select a music piece using the personal learning values is displayed on the display device 7 (step S7). The user can perform an input operation on a “YES” key or a “NO” key using the input operation device 2, based on the display, to select whether or not to use personal learning values. After execution of step S7, the control device 6 judges whether there has been input operation of the “YES” key or of the “NO” key (step S8). If there is input operation of the “YES” key indicating that personal learning values are to be used, the average value and unbiased variance of each of the characteristic parameters corresponding to the selected sensitivity word are read from the personal learning value database (step S9).
If there is input operation of the “NO” key indicating that personal learning values are not to be used, processing proceeds to step S6, and the average value and unbiased variance of each of the characteristic parameters corresponding to the selected sensitivity word are read from the default database.
Upon reading the average values and unbiased variances of each of the characteristic parameters in step S6 or in step S9, the control device 6 computes a sensitivity matching degree for each of the n music pieces (step S10). The sensitivity matching degree for the i-th music piece is computed as follows.
Sensitivity matching degree=(1/|a(i)−Ma|)×(1/Sa)+(1/|b(i)−Mb|)×(1/Sb)+(1/|c(i)−Mc|)×(1/Sc)+(1/|d(i)−Md|)×(1/Sd)+(1/|e(i)−Me|)×(1/Se)+(1/|f(i)−Mf|)×(1/Sf)+(1/|g(i)−Mg|)×(1/Sg)+(1/|h(i)−Mh|)×(1/Sh)
In this formula, the degree of chord change (1) of the i-th music piece is a(i), the degree of chord change (2) of the i-th music piece is b(i), the degree of chord change (3) of the i-th music piece is c(i), the beat of the i-th music piece is d(i), the maximum beat level of the i-th music piece is e(i), the mean amplitude level of the i-th music piece is f(i), the maximum amplitude level of the i-th music piece is g(i), and the key of the i-th music piece is h(i). Assume that the selected sensitivity word is A, and the average values and unbiased variances for this sensitivity word A are Ma, Sa for the degree of chord change (1), Mb, Sb for the degree of chord change (2), Mc, Sc for the degree of chord change (3), Md, Sd for the beat, Me, Se for the maximum beat level, Mf, Sf for the mean amplitude level, Mg, Sg for the maximum amplitude level, and Mh, Sh for the key.
Further, when computing the sensitivity matching degree, the units of numerical values differ depending on the characteristic parameter, and so levels may be adjusted. In the formula to compute the sensitivity matching degree, for example, the degree of chord change (1) may be computed as (100/|a(i)−Ma|)×(1/Sa), increasing the value by a factor of 100. The other degrees of chord change and the beat may similarly be increased by a factor of 100.
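As a concrete reading of the formula and the level adjustment just described, the following Python sketch computes the sensitivity matching degree for one music piece and ranks pieces for the music list of step S11. The parameter names and the small guard against a zero difference are assumptions added for the sketch, not part of the patent.

```python
# Characteristic parameters in the order a(i)..h(i) of the formula above.
PARAMS = ["chord_change_1", "chord_change_2", "chord_change_3", "beat",
          "max_beat_level", "mean_amp_level", "max_amp_level", "key"]
# Level adjustment: the chord-change and beat terms use 100 in place of 1
# in the numerator so that parameters with different units are comparable.
SCALE = {"chord_change_1": 100.0, "chord_change_2": 100.0,
         "chord_change_3": 100.0, "beat": 100.0}

def matching_degree(piece, averages, variances):
    total = 0.0
    for p in PARAMS:
        # Guard added for the sketch: the formula is undefined when the
        # characteristic value equals the average (zero difference).
        diff = max(abs(piece[p] - averages[p]), 1e-9)
        total += (SCALE.get(p, 1.0) / diff) * (1.0 / variances[p])
    return total

def make_music_list(pieces, averages, variances):
    # Step S11: music pieces in order of greatest sensitivity matching degree.
    return sorted(pieces,
                  key=lambda pc: matching_degree(pc, averages, variances),
                  reverse=True)
```

A piece whose characteristic values lie close to the averages for the selected sensitivity word, weighted toward parameters with small unbiased variance, ends up at the head of the list.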
Upon computing the sensitivity matching degree for each of n music pieces, the control device 6 makes up a music list showing music pieces in order of the greatest sensitivity matching degree (step S11), and causes the display device 7 to display an image showing this music list (step S12). The screen of the display device 7 shows music names, singer names, and other music information, read from the data storing device 3, and displayed with music pieces in the order of greatest sensitivity matching degree.
There are cases in which, in step S3, “other sensitivity word” is selected; that is, the user desires a music piece which conforms to a sensitivity word other than the sensitivity words prepared in advance. In such a case, the control device 6 causes the display device 7 to display an image to request input of a sensitivity word (step S13). The user can use the input operation device 2 to input, as text, any arbitrary sensitivity word, in accordance with the displayed instructions. After execution of step S13, the control device 6 judges whether text has been input (step S14). If there has been input, the control device 6 captures and stores the input text as a sensitivity word (step S15). The control device 6 uses the music pieces 1 through n for which music data is accumulated in the data storing device 3 to make up a random music list (step S16), and then proceeds to the above step S12 and causes the display device 7 to display an image showing this music list. On the screen of the display device 7 are listed, in random order, the names, singers, and other music information for the music pieces.
The sensitivity word captured at step S15 can be included in the sensitivity words displayed at step S1 of the next music selection operation.
After execution of step S12, the variable m is set to 1 (step S17), and music data for the m-th music piece in the music list is read from the data storing device 3 and is supplied to the music reproducing device 8, to specify music reproduction (step S18). The music reproducing device 8 reproduces a digital signal from the music data for the m-th music piece thus supplied, and the digital signal is supplied to the digital-analog converter 9. After conversion into analog audio signals in the digital-analog converter 9, the reproduced sounds for the m-th music piece are output from the speaker 10. Thus, the user can listen to the reproduced sounds of the music piece.
An image is displayed on the display device 7 to ask the user whether or not to perform personal learning for the music piece being reproduced (step S19). The user can use the input operation device 2 to operate the “YES” key or the “NO” key, in accordance with the displayed contents, to select whether or not to perform personal learning for the music piece being reproduced. After execution of step S19, the control device 6 judges whether there has been operation input of the “YES” key or of the “NO” key (step S20). If there has been input due to operation of the “YES” key, indicating that personal learning is to be performed, processing proceeds to the learning routine.
If there has been input of the “NO” key indicating that personal learning is not to be performed, the display device 7 is caused to display an image asking the user whether to proceed to reproduction of the next music piece on the list of music pieces, or whether to halt music selection (step S21). By operating the input operation device 2 in accordance with the displayed contents, the user can begin reproduction of the next music piece on the displayed music list after the music piece currently being reproduced, or can halt music selection without selecting another music piece. After execution of step S21, the control device 6 judges whether there has been input operation of the “NEXT MUSIC” key (step S22). If there has not been input operation of the “NEXT MUSIC” key, the control device judges whether there has been operation of the “END” key (step S23).
If there has been input of the “NEXT MUSIC” key, the variable m is increased by 1 to compute the new value of the variable m (step S24), and a judgment is made as to whether the variable m is greater than the final number MAX of the music list (step S25). If m>MAX, the music selection operation ends. On the occasion of this ending, the display device 7 may be caused to display an image informing the user that music pieces have been reproduced up to the final number of the music list. On the other hand, if m≦MAX, processing returns to step S18 and the above operations are repeated.
If there has been input of the “END” key, the music reproducing device 8 is instructed to halt music reproduction (step S26). By this means music selection by the control device 6 ends; but processing may also return to step S1.
When execution of the above learning routine has been begun, the control device 6 first causes the display device 7 to display an image to ask the user whether the music piece currently being reproduced is a music piece which matches the sensitivity word which has been selected or input, as shown in FIG. 5 (step S31). The user can use the input operation device 2 to input “YES” or “NO”, in accordance with the displayed contents, to select whether or not the music piece being reproduced matches the sensitivity word. After execution of step S31, the control device 6 judges whether there has been input using either the “YES” key or the “NO” key (step S32). If there is input using the “YES” key, indicating that the music piece being reproduced matches the sensitivity word, matched music data indicating this music piece is written to the matched music database of the data storing device 5 (step S33). On the other hand, if there is input using the “NO” key, indicating that the music piece being reproduced does not match the sensitivity word, the learning routine is ended and processing returns to the above step S21.
After execution of step S33, the control device 6 judges whether there is a sensitivity word for which the number of matched music pieces written as matched music data to the matched music database has reached 10 music pieces (a predetermined number of music pieces) (step S34). If it is judged that there is a sensitivity word for which the number of matched music pieces is 10 music pieces or greater, matched music data is read from the matched music database of the data storing device 5, unmatched music data is read from an unmatched music database (step S35), and the read data is used to compute personal learning values using statistical processing (step S36). In step S34, the predetermined number of music pieces is stipulated to be 10 music pieces, but another value for the number of music pieces may be used.
Computation of personal learning values is explained for a sensitivity word A, for which the number of matched music pieces has reached 10 or greater. As shown in FIG. 6, a characteristic value for each of the characteristic parameters (degree of chord change (1), degree of chord change (2), degree of chord change (3), beat (number of beats per unit time), maximum beat level, mean amplitude level, maximum amplitude level, and key) for each music piece indicated by the matched music data corresponding to the sensitivity word A in the matched music database is read from the characteristic parameter database of the data storing device 4 (step S51), and the average value Mave of the read characteristic values for each characteristic parameter are computed (step S52). Further, the unbiased variance S for each characteristic parameter is also computed (step S53). When computing the unbiased variance S of one characteristic parameter of the sensitivity word A, if the music pieces indicated by the matched music data corresponding to the sensitivity word A are M1 to Mj (where for example 50≧j≧10), and the characteristic values of one characteristic parameter for the respective music pieces M1 to Mj are C1 to Cj, then the average value Mave of the characteristic values C1 to Cj for one characteristic parameter can be expressed by
Mave=(C1+C2+ . . . +Cj)/j
The unbiased variance S of a characteristic parameter of the sensitivity word A can be expressed by
S={(Mave−C1)²+(Mave−C2)²+ . . . +(Mave−Cj)²}/(j−1)
The control device 6 writes the average value Mave and unbiased variance S computed for each characteristic parameter into fields for the respective characteristic parameters corresponding to the sensitivity word A in the personal learning value database (step S54).
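The statistics of steps S52 and S53 follow directly from the two equations above. The sketch below (Python, illustrative; function names are not from the patent) also includes the plain variance that the text mentions elsewhere as an alternative weighting value:

```python
def average(values):
    # Mave = (C1 + C2 + ... + Cj) / j over the characteristic values of one
    # parameter for the matched music pieces M1..Mj.
    return sum(values) / len(values)

def unbiased_variance(values):
    # S = {(Mave - C1)^2 + ... + (Mave - Cj)^2} / (j - 1)
    mave = average(values)
    return sum((mave - c) ** 2 for c in values) / (len(values) - 1)

def variance(values):
    # Alternative weighting value: the same sum divided by j instead of j - 1.
    mave = average(values)
    return sum((mave - c) ** 2 for c in values) / len(values)
```

For each characteristic parameter, the pair (Mave, S) computed this way is what gets written into the personal learning value database fields for the sensitivity word.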
After thus computing personal learning values, the control device 6 returns to the above step S21, and continues operation as described above.
Through this music selection operation, a music list conforming to a selected sensitivity word can be presented to the user. Further, in music selection using personal learning values, as a user utilizes this music selection system, it becomes possible to provide music pieces which more closely conform to the sensitivities of the user.
In the above embodiment, the degree of chord change (1), degree of chord change (2), degree of chord change (3), beat (number of beats per unit time), maximum beat level, mean amplitude level, maximum amplitude level, and the key are described as characteristic parameters, but others are possible. Also, the sensitivity matching degree may be computed using only at least one of the three degrees of chord change (1) through (3).
Further, degrees of chord change are not limited to the above-described number of chords per minute in the music piece, number of types of chords used in the music piece, and number of change points, such as discord, which impart an impression of the music piece during the chord progression. For example, the amount of change in the chord root, or a change from a major chord to a minor chord, or the number of changes to other types of chords, can also be used as degrees of chord change.
In the above-described embodiment, average values and unbiased variances are used as correction values, but other values may be used. In place of unbiased variances, for example, a multiplicative factor, variance, or other weighting value to correct a degree of chord change or other characteristic value may be used. When using a variance in place of an unbiased variance, the variance of one characteristic parameter for the sensitivity word A described above can be expressed by the following equation.
Variance={(Mave−C1)²+(Mave−C2)²+ . . . +(Mave−Cj)²}/j
FIG. 7 shows another example of a learning routine in the above step S30. In this learning routine, if in step S32 there is input operation of the “YES” key indicating that the music piece being reproduced matches the sensitivity word, the control device 6 writes matched music data indicating the music piece to the matched music database of the data storing device 5 (step S33); on the other hand, if there is input operation of the “NO” key indicating that the music piece being reproduced does not match the sensitivity word, unmatched music data indicating the music piece is written to the unmatched music database (sixth storage device) of the data storing device 5 (step S37), the learning routine is ended, and processing proceeds to the above step S21.
After execution of step S33, the control device 6 judges whether the number of matched music pieces written as matched music data to the matched music database has reached 10 music pieces (a predetermined number of music pieces) (step S38). If the number of matched music pieces is judged to be 10 or greater, matched music data is read from the matched music database of the data storing device 5, unmatched music data is read from the unmatched music database (step S39), and the read data is used to compute personal learning values through statistical processing (step S40). In step S38, the predetermined number of music pieces is stipulated to be 10, but a different number may of course be used.
In the personal learning value computation of step S40, as shown in FIG. 8, an average value Mave and an unbiased variance S of a characteristic value for each characteristic parameter are computed for a sensitivity word A using the matched music data, and these values are written to the fields for the respective characteristic parameters corresponding to the sensitivity word A in the personal learning value database (steps S51 to S54). Thereafter, a characteristic value for each of the characteristic parameters for each music piece indicated by unmatched music data for the sensitivity word A in the unmatched music database is read from the characteristic parameter database of the data storing device 4 (step S55), and the average value Mave′ of characteristic values is computed for each characteristic parameter using the unmatched music data (step S56). Also, the unbiased variance S′ is computed for each characteristic parameter using the unmatched music data (step S57). The methods for computing the average value Mave′ and unbiased variance S′ are similar to those used for the average value Mave and unbiased variance S.
The control device 6 writes the average value Mave′ and unbiased variance S′ computed for each characteristic parameter to the respective characteristic parameter fields corresponding to the sensitivity word A in the personal learning value database (step S58). The personal learning values computed based on this unmatched music data are stored in a second personal learning value database (seventh storage device) as shown in FIG. 9. In FIG. 9, M′a1 to M′a6, M′b1 to M′b6, and so on are average values, and S′a1 to S′a6, S′b1 to S′b6, and so on are unbiased variances. Only the average values Mave′ may be used as personal learning values for unmatched music data.
When personal learning values for unmatched music data are provided and, during music selection operation, there is input operation of the “YES” key in step S8 indicating that personal learning values are to be used, then, as shown in FIG. 10, average values and unbiased variances are read from the personal learning value database, for matched music data and for unmatched music data, for each of the characteristic parameters corresponding to the selected sensitivity word (step S61); in addition, an unmatched correction value is computed in accordance with at least one of the average value and the unbiased variance for the unmatched music data (step S62). The unmatched correction value is computed by, for example, multiplying the average value by a coefficient, or by multiplying the reciprocal of the unbiased variance by a coefficient. The coefficient is specified for each of the characteristic parameters.
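The two example computations of step S62 can be sketched as follows. This Python sketch is illustrative only; the function name, parameter names, and the `use_mean` switch are assumptions, and the coefficient is specified per characteristic parameter as described above.

```python
def unmatched_correction(mean_unmatched, unbiased_var_unmatched, coeff, use_mean=True):
    # αx for one characteristic parameter (step S62): either the average
    # value Mave' multiplied by a coefficient, or the reciprocal of the
    # unbiased variance S' multiplied by a coefficient.
    if use_mean:
        return coeff * mean_unmatched
    return coeff * (1.0 / unbiased_var_unmatched)
```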
After execution of step S62, the control device 6 computes a sensitivity matching degree for each of n music pieces (step S63). The sensitivity matching degree is computed using the following equation. In this equation, αa, αb, αc, αd, αe, αf, αg, αh are unmatched correction values, computed in step S62, for the characteristic parameters, which are the degree of chord change (1), degree of chord change (2), degree of chord change (3), beat (number of beats per unit time), maximum beat level, mean amplitude level, maximum amplitude level, and the key, respectively.
Sensitivity matching degree={(1/|a(i)−Ma|)×(1/Sa)−αa}+{(1/|b(i)−Mb|)×(1/Sb)−αb}+{(1/|c(i)−Mc|)×(1/Sc)−αc}+{(1/|d(i)−Md|)×(1/Sd)−αd}+{(1/|e(i)−Me|)×(1/Se)−αe}+{(1/|f(i)−Mf|)×(1/Sf)−αf}+{(1/|g(i)−Mg|)×(1/Sg)−αg}+{(1/|h(i)−Mh|)×(1/Sh)−αh}
The unmatched correction values αa, αb, αc, αd, αe, αf, αg, αh act so as to reduce the sensitivity matching degree computed using matched music data based on personal learning values.
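The equation above can be transcribed directly, one term per characteristic parameter. This is an illustrative Python sketch, not the patent's implementation; note the text does not say how the case where a characteristic value equals its learned average (which would divide by zero) is handled, so the sketch simply mirrors the formula.

```python
def sensitivity_matching_degree(features, means, unbiased_vars, corrections):
    # features: characteristic values a(i) .. h(i) of music piece i
    # means, unbiased_vars: Mx and Sx from the personal learning values
    # corrections: unmatched correction values αx computed in step S62
    total = 0.0
    for x, m, s, alpha in zip(features, means, unbiased_vars, corrections):
        total += (1.0 / abs(x - m)) * (1.0 / s) - alpha
    return total
```

Each α term subtracts from the sum, so pieces resembling unmatched music score lower, as described above.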
In step S63, after computation of sensitivity matching degrees, processing proceeds to step S11 and a music list is made up, similarly to the music selection operation of FIG. 3.
The method for computing the sensitivity matching degree is not limited to the above example. For example, the following equation may also be used in computation. Here σ is the standard deviation computed from characteristic values of matched music data.
Sensitivity matching degree={(1/(a(i)−Ma)²)×(σ/Sa)−αa}+{(1/(b(i)−Mb)²)×(σ/Sb)−αb}+{(1/(c(i)−Mc)²)×(σ/Sc)−αc}+{(1/(d(i)−Md)²)×(σ/Sd)−αd}+{(1/(e(i)−Me)²)×(σ/Se)−αe}+{(1/(f(i)−Mf)²)×(σ/Sf)−αf}+{(1/(g(i)−Mg)²)×(σ/Sg)−αg}+{(1/(h(i)−Mh)²)×(σ/Sh)−αh}
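The alternative computation uses squared distances and scales each term by the matched-data standard deviation σ. A sketch under the same illustrative assumptions as before:

```python
def sensitivity_matching_degree_alt(features, means, unbiased_vars, sigma, corrections):
    # sigma: standard deviation computed from characteristic values of
    # matched music data; the remaining arguments are as in the basic form.
    return sum((1.0 / (x - m) ** 2) * (sigma / s) - alpha
               for x, m, s, alpha in zip(features, means, unbiased_vars, corrections))
```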
In the above embodiment, “rhythmical”, “gentle”, “bright”, “sad”, “healing”, and “lonely” are the selectable sensitivity words, but other sensitivity words, such as “joyful”, may of course be used.
Thus, according to the present invention, music pieces matching with the sensitivities of the user can be presented to the user, so that music selection by the user becomes easy.
Also, according to the present invention, the sensitivities of the user relating to music selection are learned, so that music pieces more closely matching with those sensitivities can be provided to the user, and music selection by the user is made easy.
This application is based on Japanese Applications No. 2003-350728 and No. 2004-095916, which are hereby incorporated by reference.

Claims (19)

1. A music selecting apparatus for selecting a music piece from a plurality of music pieces in accordance with an input operation, comprising:
a first storage device which stores, as data, a degree of chord change for each of the plurality of music pieces;
a setting device which sets a sensitivity word for music selection from among a plurality of sensitivity words which are previously determined, in accordance with the input operation; and,
a music selector which detects a music piece having a degree of chord change corresponding to the sensitivity word set by said setting device, in accordance with the chord change degree for each of the plurality of music pieces,
wherein said music selector includes:
a second storage device which stores, as data, a correction value for each of the plurality of sensitivity words;
a reading portion which reads, from said second storage device, the correction value corresponding to the sensitivity word set by said setting device;
a correction device which corrects the chord change degree for each of the plurality of music pieces in accordance with the correction value read by said reading portion to compute a sensitivity matching degree; and
an indicating device which indicates the plurality of music pieces in an order corresponding to the sensitivity matching degree computed for each of the plurality of music pieces by said correction device.
2. The music selecting apparatus according to claim 1, wherein said setting device includes an input device which receives a sensitivity word other than the plurality of sensitivity words in accordance with said input operation, and wherein, when the sensitivity word other than the plurality of sensitivity words is received by said input device, said indicating device indicates the plurality of music pieces in random order.
3. The music selecting apparatus according to claim 1, wherein
said first storage device stores, as data, the chord change degree for each of the plurality of music pieces, and at least one characteristic parameter indicating a characteristic other than the chord change degree for each of the plurality of music pieces;
said setting device selects and sets, in accordance with the input operation, the sensitivity word for the music selection from among a plurality of sensitivity words which are previously determined; and,
said music selector includes:
a second storage device which stores, as data, a correction value for each of the plurality of sensitivity words, with respect to the chord change degree and the characteristic parameter;
a reading portion which reads, from said second storage device, the correction value with respect to the chord change degree and the characteristic parameter corresponding to the sensitivity word set by said setting device;
a correction device which corrects the chord change degree and the characteristic parameter for each of the plurality of music pieces in accordance with the correction values read by said reading portion, and obtains the sum of the correction results as a sensitivity matching degree; and,
an indicating device which indicates the plurality of music pieces, in an order corresponding to the sensitivity matching degree of each of the plurality of music pieces computed by said correction device.
4. The music selecting apparatus according to claim 3, wherein said indicating device includes a third storage device which stores music data indicating a reproduced sound for each of the plurality of music pieces, and an audio output device which reads music data from said third storage device in the order of music pieces corresponding to the sensitivity matching degree of each of the plurality of music pieces, and outputs a reproduced sound based on the read music data.
5. The music selecting apparatus according to claim 1, further comprising:
a matching judgment device which judges, in accordance with an input operation, whether a music piece indicated by said indicating device matches the sensitivity word for the music selection;
a fourth storage device which stores, when the indicated music piece is judged to match the sensitivity word for the music selection by said matching judgment device, the matched music piece in association with the sensitivity word for the music selection;
a matched learning device which computes a correction value corresponding to a sensitivity word for which the number of music pieces stored in said fourth storage device has become equal to or greater than a predetermined number of music pieces, in accordance with the stored values of the chord change degree of the stored music pieces, the number of which is equal to or greater than the predetermined number;
a fifth storage device which stores the correction value computed by said matched learning device with respect to the chord change degree, in association with each of the plurality of sensitivity words; and,
a learning judgment device which judges whether a correction value corresponding to the sensitivity word set by said setting device exists in said fifth storage device; and wherein
when said learning judgment device judges that the correction value corresponding to the sensitivity word exists in said fifth storage device, said reading portion reads the correction value corresponding to the sensitivity word from said fifth storage device, instead of from said second storage device.
6. The music selecting apparatus according to claim 5, wherein said reading portion switches the reading of the correction value corresponding to the sensitivity word from said second storage device to said fifth storage device in accordance with an input operation.
7. The music selecting apparatus according to claim 5, further comprising:
a sixth storage device which stores, when said matching judgment device judges that the indicated music piece does not match the sensitivity word for the music selection, the unmatched music piece for each of the plurality of sensitivity words;
an unmatched learning device which computes the correction value corresponding to a sensitivity word for which the number of music pieces stored in said fourth storage device is equal to or greater than a predetermined number, in accordance with the degrees of chord change in unmatched music pieces stored in said sixth storage device; and,
a seventh storage device which stores the correction value computed by said unmatched learning device with respect to the chord change degrees, in association with each of the plurality of sensitivity words; and wherein
said correction device reads the correction value corresponding to the sensitivity word from said seventh storage device, and corrects the sensitivity matching degree in accordance with the read correction value.
8. The music selecting apparatus according to claim 3, further comprising:
a matching judgment device which judges whether a music piece indicated by said indicating device matches the sensitivity word for the music selection, in accordance with an input operation;
a fourth storage device which stores, when said matching judgment device judges that the indicated music piece matches the sensitivity word for the music selection, the matched music piece, with respect to the degree of chord change and the characteristic parameter, for each of the plurality of sensitivity words;
a matched learning device which computes the correction value for each of the chord change degree and the characteristic parameter corresponding to a sensitivity word for which the number of music pieces stored in said fourth storage device is equal to or greater than a predetermined number, in accordance with the stored values of the chord change degree and the characteristic parameter for the stored music pieces of equal to or greater than the predetermined number;
a fifth storage device which stores the correction value computed by said matched learning device for each of the chord change degree and the characteristic parameters, in association with each of the plurality of sensitivity words; and,
a learning judgment device which judges whether correction values corresponding to the sensitivity word set by said setting device exist in said fifth storage device; and wherein
when said learning judgment device judges that a correction value corresponding to the sensitivity word exists in said fifth storage device, said reading portion reads the correction value corresponding to the sensitivity word from said fifth storage device instead of from said second storage device.
9. The music selecting apparatus according to claim 1, wherein the chord change degree is at least one of the number of chords per minute in a music piece, the number of types of chords used in the music piece, and the number of change points, such as discords, each of which changes an impression of the music piece during the chord progression.
10. The music selecting apparatus according to claim 1, wherein the plurality of sensitivity words are “rhythmical”, “gentle”, “bright”, “sad”, “healing”, and “lonely”.
11. The music selecting apparatus according to claim 3, wherein the at least one characteristic parameter is any of a beat, a maximum beat level, an average amplitude level, a maximum amplitude level, and a key, of the music piece.
12. The music selecting apparatus according to claim 1, wherein the correction value includes an average value and an unbiased variance of the chord change degrees.
13. A music selection method for selecting a music piece from among a plurality of music pieces in accordance with an input operation, comprising the steps of:
storing, as data, a degree of chord change for each of the plurality of music pieces;
setting a sensitivity word for music selection from among a plurality of sensitivity words which are previously determined, in accordance with the input operation; and,
detecting a music piece having a degree of chord change corresponding to the set sensitivity word, in accordance with the chord change degree for each of the plurality of music pieces,
wherein the music piece detecting step includes:
a step for reading, from a storage device, a correction value corresponding to the sensitivity word set in the setting step, the storage device storing, as data, a correction value for each of the plurality of sensitivity words;
a step for correcting the chord change degree for each of the plurality of music pieces in accordance with the correction value read in the reading step to compute a sensitivity matching degree; and
a step for indicating the plurality of music pieces in an order corresponding to the sensitivity matching degree computed for each of the plurality of music pieces in the correction step.
14. A music selecting apparatus for selecting a music piece from among a plurality of music pieces in accordance with an input operation, comprising:
a first storage device which stores, as data, a characteristic value of at least one characteristic parameter for each of the plurality of music pieces;
a setting device which sets a sensitivity word for music selection from among a plurality of sensitivity words, in accordance with the input operation;
a second storage device which stores, as data, a correction value for each of the plurality of sensitivity words;
a reading portion which reads, from said second storage device, the correction value corresponding to the sensitivity word for the music selection set by said setting device;
a correction device which corrects the characteristic value of the characteristic parameter for each of the plurality of music pieces in accordance with the correction value read by said reading portion to compute a sensitivity matching degree;
a music selector which selects at least one music piece from among the plurality of music pieces, in accordance with the sensitivity matching degree for each of the plurality of music pieces, computed by said correction device;
a matching judgment device which judges whether the at least one music piece selected by said music selector matches the sensitivity word for the music selection, in accordance with an input operation;
a learning value storage device which computes a learning value in accordance with a result of the judgment by said matching judgment device, and stores the computed learning value in association with the sensitivity word for the music selection; and,
a learning judgment device which judges, when the sensitivity word for the music selection is set by said setting device, whether the learning value corresponding to the sensitivity word for the music selection exists in said learning value storage device; and wherein
when the learning value corresponding to the sensitivity word for the music selection is judged by said learning judgment device to be stored in said learning value storage device, said correction device corrects the characteristic value of the characteristic parameter for each of the plurality of music pieces in accordance with the stored learning value to compute the sensitivity matching degree.
15. The music selecting apparatus according to claim 14, wherein said learning value storage device includes:
a fourth storage device which stores, when said matching judgment device judges that the selected music piece matches the sensitivity word for the music selection, the matched music piece in association with the sensitivity word for the music selection;
a matched learning device which computes the learning value for each of the plurality of sensitivity words in accordance with the characteristic value of the characteristic parameter for each of the music pieces stored in said fourth storage device when the number of music pieces stored in said fourth storage device is equal to or greater than a predetermined number;
a fifth storage device which stores the learning value computed by said matched learning device with respect to the characteristic parameter, in association with each of the plurality of sensitivity words;
a sixth storage device which stores, when said matching judgment device judges that the selected music piece does not match the sensitivity word for the music selection, the unmatched music piece in association with the sensitivity word for the music selection;
an unmatched learning device which computes the learning value for each of the plurality of sensitivity words in accordance with the characteristic value of the characteristic parameter for each of the music pieces stored in said sixth storage device when the number of music pieces stored in said fourth storage device is equal to or greater than a predetermined number; and
a seventh storage device which stores the learning value computed by said unmatched learning device with respect to the characteristic parameter, in association with each of the plurality of sensitivity words.
16. The music selecting apparatus according to claim 14, wherein said correction device includes a user judgment device which, when said learning judgment device judges that the learning value corresponding to the sensitivity word is stored in said learning value storage device, judges, in accordance with an input operation, whether the learning value stored in said learning value storage device is to be used in music selection; and, when said user judgment device judges that the learning value stored in said learning value storage device is to be used in music selection, said correction device corrects the characteristic value of the characteristic parameter for each of the plurality of music pieces in accordance with the stored learning value to compute the sensitivity matching degree.
17. The music selecting apparatus according to claim 15, wherein said correction device reads the learning value corresponding to the sensitivity word for the music selection from said fifth storage device, and reads the learning value corresponding to the sensitivity word for the music selection from said seventh storage device; and,
corrects the characteristic value of the characteristic parameter for each of the plurality of music pieces in accordance with the learning value read from said fifth storage device to compute a basic degree of sensitivity matching, and corrects the basic degree in accordance with the learning value read from said seventh storage device to obtain the sensitivity matching degree.
18. The music selecting apparatus according to claim 14, wherein the at least one characteristic parameter is any of a degree of chord change, a beat, a maximum beat level, an average amplitude level, a maximum amplitude level, and a key, of the music piece.
19. A music selection method for selecting a music piece from among a plurality of music pieces in accordance with an input operation, comprising the steps of:
storing a characteristic value of at least one characteristic parameter as data for each of the plurality of music pieces;
setting a sensitivity word for music selection from among a plurality of sensitivity words in accordance with the input operation;
storing a correction value as data for each of the plurality of sensitivity words in a second storage device;
reading the correction value corresponding to the sensitivity word for the music selection from said second storage device;
correcting the characteristic value of the characteristic parameter for each of the plurality of music pieces in accordance with the read correction value to compute a sensitivity matching degree;
selecting at least one music piece from among the plurality of music pieces in accordance with the sensitivity matching degree computed for each of the plurality of music pieces;
judging whether the selected music piece matches the sensitivity word for the music selection, in accordance with the input operation;
computing a learning value in accordance with the judgment result, and storing the computed learning value in a learning value storage device in association with the sensitivity word for the music selection;
judging whether the learning value corresponding to the sensitivity word for the music selection exists in said learning value storage device at the time the sensitivity word for the music selection is set; and,
when it is judged that the learning value corresponding to the sensitivity word for the music selection is stored in said learning value storage device, correcting the characteristic value of the characteristic parameter for each of the plurality of music pieces in accordance with the stored learning value to compute the sensitivity matching degree.
US12/392,579 2003-10-09 2009-02-25 Music selecting apparatus and method Expired - Fee Related USRE43379E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/392,579 USRE43379E1 (en) 2003-10-09 2009-02-25 Music selecting apparatus and method

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2003-350728 2003-10-09
JP2003350728 2003-10-09
JP2004095916 2004-03-29
JP2004-095916 2004-03-29
US10/959,314 US7385130B2 (en) 2003-10-09 2004-10-07 Music selecting apparatus and method
US12/392,579 USRE43379E1 (en) 2003-10-09 2009-02-25 Music selecting apparatus and method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/959,314 Reissue US7385130B2 (en) 2003-10-09 2004-10-07 Music selecting apparatus and method

Publications (1)

Publication Number Publication Date
USRE43379E1 true USRE43379E1 (en) 2012-05-15

Family

ID=46033368

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/392,579 Expired - Fee Related USRE43379E1 (en) 2003-10-09 2009-02-25 Music selecting apparatus and method

Country Status (1)

Country Link
US (1) USRE43379E1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8847054B2 (en) * 2013-01-31 2014-09-30 Dhroova Aiylam Generating a synthesized melody
US20150018626A1 (en) * 2001-08-14 2015-01-15 Applied Medical Resources Corporation Access sealing apparatus and method
US9590941B1 (en) * 2015-12-01 2017-03-07 International Business Machines Corporation Message handling


Patent Citations (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5218153A (en) * 1990-08-30 1993-06-08 Casio Computer Co., Ltd. Technique for selecting a chord progression for a melody
US5510572A (en) * 1992-01-12 1996-04-23 Casio Computer Co., Ltd. Apparatus for analyzing and harmonizing melody using results of melody analysis
US5402339A (en) 1992-09-29 1995-03-28 Fujitsu Limited Apparatus for making music database and retrieval apparatus for such database
US5481066A (en) * 1992-12-17 1996-01-02 Yamaha Corporation Automatic performance apparatus for storing chord progression suitable that is user settable for adequately matching a performance style
JPH08239171A (en) 1995-03-02 1996-09-17 Fujitec Co Ltd Group control system for elevator
US5852252A (en) * 1996-06-20 1998-12-22 Kawai Musical Instruments Manufacturing Co., Ltd. Chord progression input/modification device
US5990407A (en) * 1996-07-11 1999-11-23 Pg Music, Inc. Automatic improvisation system and method
JPH10134549A (en) 1996-10-30 1998-05-22 Nippon Columbia Co Ltd Music program searching-device
US5963957A (en) 1997-04-28 1999-10-05 Philips Electronics North America Corporation Bibliographic music data base with normalized musical themes
US6166314A (en) * 1997-06-19 2000-12-26 Time Warp Technologies, Ltd. Method and apparatus for real-time correlation of a performance to a musical score
JP2000057177A (en) 1998-08-06 2000-02-25 Mitsubishi Electric Corp Device for supporting design
US20040007120A1 (en) * 1999-07-28 2004-01-15 Yamaha Corporation Portable telephony apparatus with music tone generator
US6911592B1 (en) * 1999-07-28 2005-06-28 Yamaha Corporation Portable telephony apparatus with music tone generator
JP2001306580A (en) 2000-04-27 2001-11-02 Matsushita Electric Ind Co Ltd Music database retrieving device
US6545209B1 (en) 2000-07-05 2003-04-08 Microsoft Corporation Music content characteristic identification and matching
US20020004420A1 (en) * 2000-07-10 2002-01-10 Konami Corporation Game system, and computer readable medium having recorded thereon processing program for controlling the game system
US6821203B2 (en) * 2000-07-10 2004-11-23 Konami Corporation Musical video game system, and computer readable medium having recorded thereon processing program for controlling the game system
US7024424B1 (en) * 2001-05-30 2006-04-04 Microsoft Corporation Auto playlist generator
US6993532B1 (en) * 2001-05-30 2006-01-31 Microsoft Corporation Auto playlist generator
US20030045953A1 (en) 2001-08-21 2003-03-06 Microsoft Corporation System and methods for providing automatic classification of media entities according to sonic properties
JP2003132085A (en) 2001-10-19 2003-05-09 Pioneer Electronic Corp Information selection device and method, information selection reproducing device and computer program for information selection
US6987221B2 (en) * 2002-05-30 2006-01-17 Microsoft Corporation Auto playlist generation with multiple seed songs
US7196258B2 (en) * 2002-05-30 2007-03-27 Microsoft Corporation Auto playlist generation with multiple seed songs
US20030221541A1 (en) * 2002-05-30 2003-12-04 Platt John C. Auto playlist generation with multiple seed songs
US20060032363A1 (en) * 2002-05-30 2006-02-16 Microsoft Corporation Auto playlist generation with multiple seed songs
US20050262528A1 (en) * 2002-06-26 2005-11-24 Microsoft Corporation Smart car radio
US20040002310A1 (en) * 2002-06-26 2004-01-01 Cormac Herley Smart car radio
US6996390B2 (en) * 2002-06-26 2006-02-07 Microsoft Corporation Smart car radio
US20040237759A1 (en) * 2003-05-30 2004-12-02 Bill David S. Personalizing content
US20040243592A1 (en) * 2003-05-30 2004-12-02 Bill David S. Personalizing content using an intermediary bridge
US20050103189A1 (en) * 2003-10-09 2005-05-19 Pioneer Corporation Music selecting apparatus and method
US7385130B2 (en) * 2003-10-09 2008-06-10 Pioneer Corporation Music selecting apparatus and method
US7247786B2 (en) * 2004-01-22 2007-07-24 Pioneer Corporation Song selection apparatus and method
US20060064037A1 (en) * 2004-09-22 2006-03-23 Shalon Ventures Research, Llc Systems and methods for monitoring and modifying behavior
US20090249945A1 (en) * 2004-12-14 2009-10-08 Sony Corporation Music composition data reconstruction device, music composition data reconstruction method, music content reproduction device, and music content reproduction method
US20090088877A1 (en) * 2005-04-25 2009-04-02 Sony Corporation Musical Content Reproducing Device and Musical Content Reproducing Method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hiroshi Shibata et al., "Learning meta-information of contents by agents and its evaluation," Proceedings of the 2003 Communications Society Conference 2, the Institute of Electronics, Information and Communication Engineers, Sep. 10, 2003, pp. 17-18.
Shusaku Sawato et al., "Study on Extracting Music Characteristics with Genetic Algorithm for Intelligent Retrieval," Proceedings of the 1999 Information and Systems Society Conference of IEICE, the Institute of Electronics, Information and Communication Engineers, Aug. 16, 1999, p. 25.

Cited By (4)

Publication number Priority date Publication date Assignee Title
US20150018626A1 (en) * 2001-08-14 2015-01-15 Applied Medical Resources Corporation Access sealing apparatus and method
US9878140B2 (en) * 2001-08-14 2018-01-30 Applied Medical Resources Corporation Access sealing apparatus and method
US8847054B2 (en) * 2013-01-31 2014-09-30 Dhroova Aiylam Generating a synthesized melody
US9590941B1 (en) * 2015-12-01 2017-03-07 International Business Machines Corporation Message handling

Similar Documents

Publication Publication Date Title
US7247786B2 (en) Song selection apparatus and method
US5889223A (en) Karaoke apparatus converting gender of singing voice to match octave of song
US7189912B2 (en) Method and apparatus for tracking musical score
US7288710B2 (en) Music searching apparatus and method
US20230402026A1 (en) Audio processing method and apparatus, and device and medium
WO2009104269A1 (en) Music discriminating device, music discriminating method, music discriminating program and recording medium
JP2764961B2 (en) Electronic musical instrument
USRE43379E1 (en) Music selecting apparatus and method
WO2019180830A1 (en) Singing evaluating method, singing evaluating device, and program
JPH0895585A (en) Musical piece selector and musical piece selection method
JP6539887B2 (en) Tone evaluation device and program
JP3452792B2 (en) Karaoke scoring device
US7385130B2 (en) Music selecting apparatus and method
JP2924208B2 (en) Electronic music playback device with practice function
JP6288197B2 (en) Evaluation apparatus and program
JP3484719B2 (en) Performance guide device with voice input function and performance guide method
JP6102076B2 (en) Evaluation device
JPH04277798A (en) Chord detecting apparatus and automatic accompniment apparatus
JP3290945B2 (en) Singing scoring device
JP4491743B2 (en) Karaoke equipment
US7470853B2 (en) Musical composition processing device
JP2900753B2 (en) Automatic accompaniment device
JP4723222B2 (en) Music selection apparatus and method
JP3879524B2 (en) Waveform generation method, performance data processing method, and waveform selection device
JPH1026992A (en) Karaoke device

Legal Events

Date Code Title Description
REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees