WO2006106825A1 - Music composition search device, music composition search method, music composition search program, and information recording medium - Google Patents


Info

Publication number
WO2006106825A1
Authority
WO
WIPO (PCT)
Prior art keywords
song
search
feature information
information
word
Prior art date
Application number
PCT/JP2006/306663
Other languages
French (fr)
Japanese (ja)
Inventor
Yasuteru Kodama
Yasunori Suzuki
Takehiko Shioda
Satoshi Odagawa
Original Assignee
Pioneer Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Corporation filed Critical Pioneer Corporation
Priority to JP2007512854A priority Critical patent/JP4459269B2/en
Publication of WO2006106825A1 publication Critical patent/WO2006106825A1/en

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/34 Indicating arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/63 Querying
    • G06F 16/638 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/683 Retrieval characterised by using metadata automatically derived from the content
    • G06F 16/685 Retrieval characterised by using an automatically derived transcript of audio data, e.g. lyrics
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/102 Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B 27/105 Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs

Definitions

  • The present application belongs to the technical field of song search devices, song search methods, song search programs, and information recording media; more specifically, to the technical field of song search devices and song search methods for searching for one or a plurality of songs, each comprising a song (singing) and a performance (prelude, accompaniment, interlude and postlude), from among a plurality of songs, and of song search programs used for such searching and information recording media on which such a program is recorded.
  • In the search method described in Patent Document 1 below, for each search keyword (for example, "cheerful song"), an updatable sensibility table is prepared together with a feature word list in which the presence or absence of the feature associated with each feature word (for example, "brightness" or "energy") is indicated by "1" or "0"; when a user entered a desired search keyword, a search was made based on the plurality of sensibility tables and feature word lists that matched it.
  • Patent Document 2 below describes a display method for a sorted group of files in a file management system that manages songs and general documents as files: a plurality of types of additional information are attached to each file and stored, the order of the files is determined (sorted) with respect to one piece of additional information, and the X coordinate of the file object corresponding to each file is set according to that piece of additional information (for example, registration date and time).
  • Patent Document 1: JP 2003-132085 A (FIGS. 1 to 3)
  • Patent Document 2: JP 2000-122770 A (FIG. 2)
  • The present application has been made in view of the above points. One example of the problem to be solved is to visually display the characteristics of the search keyword itself input by the user and also to visually display the characteristics of the file searched using that search keyword, so that the similarity between the characteristics of the entered search keyword and the characteristics of the extracted file can be grasped as an image.
  • In order to solve the above problem, the invention according to claim 1 is a song search device for searching for one or a plurality of songs from among a plurality of songs, comprising: song feature information storage means, such as a song feature information database, for storing song feature information indicating at least the features of the lyrics included in each song, identifiably for each song; search word input means, such as an input unit, used to input a search word, which is a word indicating the songs to be searched and indicating a subjectivity; search song feature information storage means, such as a search word feature information database, for storing, identifiably for each search word, search song feature information indicating at least the features of the lyrics included in the songs to be searched using that search word as suiting the subjectivity indicated by it; conversion means, such as a search processing unit, for generating comparison target information to be compared with the input search song feature information (the search song feature information corresponding to the input search word) by converting the stored song feature information using conversion information such as a conversion table; comparison means, such as a search processing unit, for comparing the input search song feature information with the generated comparison target information; extraction means, such as a search processing unit, for extracting, based on the comparison result, the song corresponding to the song feature information from which the comparison target information most similar to the input search song feature information was generated, as the song matching the input search word; instruction input means, such as an input unit, used to input an instruction to change the conversion information; and change means, such as a search processing unit, for changing the conversion information in accordance with the input instruction; wherein, when the conversion information has been changed, the conversion means converts the song feature information into the comparison target information using the changed conversion information.
  • The invention according to claim 5 is a song search method executed in a song search device for searching for one or a plurality of songs, the device comprising: song feature information storage means, such as a song feature information database, for storing song feature information indicating at least the features of the lyrics included in each song, identifiably for each song; search song feature information storage means, such as a search word feature information database, for storing, identifiably for each search word (a word indicating the songs to be searched and indicating a subjectivity), search song feature information indicating at least the features of the lyrics included in the songs to be searched using that search word as suiting the indicated subjectivity; and display means such as a display. The method includes: a search word input step of inputting the search word; a conversion step of generating comparison target information to be compared with the input search song feature information (the search song feature information corresponding to the input search word) by converting the stored song feature information using conversion information; and a comparison step of comparing the input search song feature information with the generated comparison target information; wherein, when the conversion information has been changed, the changed conversion information is used to convert the song feature information into the comparison target information.
  • The invention according to claim 6 causes a computer to function as the song search device according to any one of claims 1 to 4.
  • The invention according to claim 7 is an information recording medium on which the song search program according to claim 6 is recorded so as to be readable by the computer.
  • FIG. 1 is a block diagram showing a schematic configuration of a music search device according to an embodiment.
  • FIG. 2 is a diagram illustrating the data structure of information stored in the song search device according to the embodiment; (a) illustrates the data structure of song feature information, and (b) illustrates the data structure of search word feature information.
  • FIG. 3 is a flowchart showing song search processing according to the embodiment.
  • FIG. 4 is a diagram illustrating a conversion table according to the embodiment.
  • FIG. 5 is a diagram showing a display example of a search word feature information figure and a song feature information figure.
  • FIG. 6 is a diagram illustrating the data structure of history information according to the embodiment; (a) illustrates the data structure of match history information, and (b) shows an example of the data structure of non-match history information.
  • FIG. 7 is a flowchart showing details of conversion table update processing.
  • FIG. 8 is a diagram (I) showing a specific example of conversion table update processing.
  • FIG. 9 is a diagram (II) showing a specific example of conversion table update processing.
  • FIG. 10 is a diagram showing another display example of a search word feature information figure and a song feature information figure.

Explanation of symbols
  • FIG. 1 is a block diagram showing a schematic configuration of the song search device, and FIG. 2 is a diagram illustrating the data structure of information stored in the song search device.
  • As shown in FIG. 1, the song search device S comprises: a song input unit 1; a song database 2; a song feature information database 3 as song feature information storage means; a search word feature information database 4 as search song feature information storage means; a sound feature information extraction unit 5; a constituent word information extraction unit 6; a song feature information generation unit 7; a search processing unit 8 as conversion means, change means, comparison means, and extraction means; an input unit 9 as search word input means and instruction input means; a song output unit 10; a history storage unit 11; and a display 12 as display means.
  • The song database 2 stores a plurality of songs as the objects to be searched by the song search process described later.
  • Each song includes at least a song (singing) and a performance including a prelude, accompaniment, interlude and postlude.
  • A song to be stored in the song database 2 is input by supplying the song information Ssg corresponding to the song to the song input unit 1 from the outside; the song information Ssg is subjected to format conversion processing for storage in the song database 2, and the processed song information Ssg is then input to the song database 2.
  • song feature information corresponding to all songs stored in the song database 2 is stored in the song feature information database 3 so as to be identifiable for each song.
  • The song feature information is stored in the song feature information database 3 in correspondence with each song stored in the song database 2, and is information that characterizes the singing and the performance of each song.
  • The above song feature information is generated when a new song is input to the song database 2 as song information Ssg: as shown in FIG. 1, the song information Ssg corresponding to the song is read from the song database 2 and output to the sound feature information extraction unit 5 and the constituent word information extraction unit 6.
  • the sound feature information extraction unit 5 extracts a plurality of parameters indicating the acoustic features of the song from the song information Ssg and outputs the parameters to the song feature information generation unit 7 as the sound feature information Sav.
  • The plurality of parameters included in the sound feature information Sav include, for example, as shown in the right of FIG. 2(a), the tempo of the song (BPM, Beats Per Minute), the maximum output level of the song (maximum volume), the average output level of the song (average volume), the chords contained in the song, the beat level of the song (that is, the signal level (magnitude) of the beat component of the song), and the key of the song (such as C major or A minor).
  • The constituent word information extraction unit 6 extracts the lyrics included in the song from the song information Ssg and searches the extracted lyrics to determine whether or not each of a plurality of preset words (phrases; hereinafter simply referred to as constituent words) is included; it then generates constituent word information Swd indicating the search result (whether or not each constituent word is included in the lyrics) for each constituent word, and outputs it to the song feature information generation unit 7.
  • When a preset constituent word such as "Ai" (love), "Umi" (sea), "Omoi" (feeling) or "Kibo" (hope) is included in the lyrics of a song, the value of the constituent word information Swd for that constituent word is "1"; otherwise, the value of the constituent word information Swd for that constituent word is "0". More specifically, for example, as shown in the left of FIG. 2(a), the lyrics of the song with song number "0" in the song database 2 include the constituent word "eye".
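This presence/absence encoding can be sketched as follows. The word list and the substring test are illustrative only: the patent does not specify the preset word list or how lyrics are tokenized, and the romanized names are assumptions.

```python
# Hypothetical sketch of the constituent word information Swd:
# "1" if a preset constituent word appears in the lyrics, "0" otherwise.
CONSTITUENT_WORDS = ["Ai", "Umi", "Omoi", "Kibo"]  # illustrative preset list

def constituent_word_flags(lyrics: str, words=CONSTITUENT_WORDS) -> dict:
    # Simple substring test; real lyrics would need proper tokenization.
    return {w: 1 if w in lyrics else 0 for w in words}
```

For lyrics containing only "Umi" and "Kibo", the flags for those two words are 1 and the rest are 0.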
  • The song feature information generation unit 7 combines the sound feature information Sav and the constituent word information Swd for each corresponding song, and outputs song feature information Ssp, composed of a plurality of pieces of song feature information 20 corresponding to each song as shown, for example, in FIG. 2(a), to the song feature information database 3, where it is registered and stored.
  • One piece of song feature information 20 pairs, for each song, the sound features of the song extracted by the sound feature information extraction unit 5 with the constituent word information in the lyrics of the song extracted by the constituent word information extraction unit 6.
  • On the other hand, the search word feature information database 4 stores, identifiably for each search word, search word feature information corresponding to all of the search words preset as search keywords (that is, keywords by which the user subjectively indicates, in the song search process described later, the song that the user wants to listen to at that time; hereinafter simply referred to as "search words").
  • The search word feature information is information that characterizes each of the search words presented to the user for selection and input when the user searches for a song stored in the song database 2 (it is not the search word itself).
  • The search word feature information 30 for one search word comprises: search word feature information identification information (see FIG. 2(b)) for distinguishing that search word feature information 30 from the other search word feature information 30; the search word corresponding to the search word feature information 30 itself; song feature information characterizing the lyrics expected to be included in the songs to be searched for and extracted from the song database 2 using that search word; and sound feature information including a plurality of parameters indicating the acoustic features expected of the songs to be searched for and extracted.
  • The sound feature information constituting the search word feature information 30 specifically includes acoustic parameters similar to the parameters included in the sound feature information Sav.
  • The song feature information constituting the search word feature information 30 is a set of a plurality of subjective song feature information items characterizing the lyrics of the songs to be searched for and extracted using the search word feature information 30, each weighted according to the specific content of the lyrics of the songs to be extracted using the search word corresponding to that search word feature information 30.
  • More specifically, in the search word feature information 30 corresponding to, for example, the search word "heartwarming", the song feature information named "heartwarming" is included with a weight of "0.9" relative to the other song feature information, the song feature information named "encouraged" is included with a weight of "0.3", and the song feature information named "sad/lonely" is included with a weight of "0.1", while the weight of the song feature information named "bright" is "0".
  • In the search word feature information 30 corresponding to the search word "bright", the song feature information "bright" is included with a weight of "0.7", the song feature information "heartwarming" with a weight of "0.2", and the song feature information "encouraged" with a weight of "0.5", while the weight of the song feature information "sad/lonely" is "0".
  • In the search word feature information 30 corresponding to the search word "sad", the song feature information "heartwarming" is included with a weight of "0.3" and the song feature information "sad/lonely" with a weight of "0.8", while the weights of the song feature information "bright" and the song feature information "encouraged" are each "0". Furthermore, in the search word feature information 30 corresponding to other search words such as "encouraged" or "settled", each of the song feature information items "heartwarming", "bright", "sad/lonely" and "encouraged" is likewise included with a preset weight.
  • Here, the reason why each piece of song feature information within the search word feature information 30 is weighted, rather than the search word itself being used directly as a word, is, as described later, to obtain search results matched to each user's preference.
  • Each piece of song feature information itself indicates a feature of a song stored in the song database 2 (more specifically, of the lyrics included in the song) and is separate from the search word itself; details will be described later.
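The weighted search word feature information described above can be sketched as a small table. The numeric weights are the ones given in the example; the dictionary layout is only an illustration, not the patent's storage format.

```python
# Illustrative layout of search word feature information 30
# (weights taken from the example in the text).
SEARCH_WORD_FEATURES = {
    "heartwarming": {"heartwarming": 0.9, "bright": 0.0, "sad/lonely": 0.1, "encouraged": 0.3},
    "bright":       {"heartwarming": 0.2, "bright": 0.7, "sad/lonely": 0.0, "encouraged": 0.5},
    "sad":          {"heartwarming": 0.3, "bright": 0.0, "sad/lonely": 0.8, "encouraged": 0.0},
}

def lookup_search_word(word: str) -> dict:
    # Corresponds to extracting the search word feature information
    # for an input search word (steps S1-S2 below).
    return SEARCH_WORD_FEATURES[word]
```

For example, looking up "bright" yields a weight of 0.7 for the song feature information "bright" and 0.5 for "encouraged".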
  • Next, when a song that the user subjectively desires is searched for using the information stored in the song feature information database 3 and the search word feature information database 4, one of the preset search words is first input at the input unit 9, and input information Sin indicating the input search word is output to the search processing unit 8.
  • Thereby, the search processing unit 8 extracts from the search word feature information database 4, based on the input information Sin, the search word feature information 30 corresponding to the input search word as search word feature information Swds, and extracts from the song feature information database 3 the plurality of pieces of song feature information 20 corresponding to all songs stored in the song database 2 as song feature information Ssp.
  • The extracted search word feature information 30 is then compared with each piece of song feature information 20, song identification information Srz indicating the song whose song feature information 20 is most similar to the search word feature information 30 is generated, and the song identification information Srz is output to the song database 2.
  • In parallel, the search processing unit 8 generates a display signal Sdp for displaying on the display 12 a figure indicating the one search word feature information 30 extracted from the search word feature information database 4 as corresponding to the input search word (hereinafter simply referred to as the search word feature information figure), together with a figure indicating the song feature information 20 most similar to the extracted search word feature information 30 (hereinafter simply referred to as the song feature information figure), and outputs it to the display 12.
  • Specific composition examples of the search word feature information figure and the song feature information figure will be described in detail later.
  • the song database 2 outputs the song indicated by the song identification information Srz to the song output unit 10 as song information Ssg.
  • The song output unit 10 performs the necessary output interface processing or the like on the output song information Ssg, and the processed song information Ssg is output to an external amplification unit (not shown) or a broadcast transmission unit.
  • Whether or not the song corresponding to the output song information Ssg is the one the user who first input the search word actually desired is then input again at the input unit 9 as evaluation information, and the corresponding input information Sin is output to the search processing unit 8.
  • The search processing unit 8 generates history information indicating the results of past song search processing based on the evaluation information input as the input information Sin, temporarily stores it in the history storage unit 11 as history information Sm, reads it out as necessary, and performs the history management processing described later.
  • FIG. 3 is a flowchart showing the song search processing in the search device;
  • FIG. 4 is a diagram illustrating the contents of the conversion table as conversion information used in the song search processing;
  • FIG. 5 is a diagram showing a display example of the search word feature information figure corresponding to an input search word and of the song feature information figure corresponding to a song extracted accordingly;
  • FIG. 6 is a diagram illustrating the history information used for the history processing according to the embodiment;
  • FIG. 7 is a flowchart showing the history management processing using the history information;
  • FIGS. 8 and 9 are diagrams illustrating the databases used for the history management processing;
  • FIG. 10 is a diagram showing another example of the search word feature information figure corresponding to an input search word and another display example of the song feature information figure corresponding to a song extracted accordingly.
  • When one of the search words is input at the input unit 9 (step S1), the search word feature information 30 corresponding to the input search word is extracted from the search word feature information database 4 and output to the search processing unit 8 (step S2).
  • The search processing unit 8 then constructs, according to the contents of the extracted search word feature information 30, graphic information for displaying the corresponding search word feature information figure on the display 12, and temporarily stores it in a memory (not shown) in the search processing unit 8 (step S15).
  • Next, the constituent words constituting the lyrics included in all the songs are read from the song feature information database 3 for each song and output to the search processing unit 8 (step S3). Then, by an operation performed by the user using the input unit 9, the conversion table used to convert the read constituent words into song feature information corresponding to the lyrics in which those constituent words are included is determined (step S3-1).
  • Here, the conversion table is stored in a memory (not shown) in the search processing unit 8 as described later; a newly created (changed) conversion table, created as appropriate based on the user's evaluation of the search word feature information figure and the song feature information figure displayed on the display 12 according to a song search result, is likewise stored in that memory.
  • Next, using the determined conversion table, the search processing unit 8 performs, for each song, a process of converting the read constituent words into song feature information corresponding to the lyrics included in each song (step S4).
  • The song feature information generated by the processing in step S4 is of the same kind as the weighted song feature information in the search word feature information 30 described above; each is a set of weights assigned to the constituent words constituting the lyrics of the corresponding song, according to the specific content of the song feature information.
  • More specifically, in the conversion table T illustrated in FIG. 4, for the song feature information 40 "heartwarming", the constituent word "Kibo" is included with a weight of "0.4" relative to the other constituent words, and the constituent words "Umi" and "Omoi" are each included with a weight of "0.1", while the weight of the constituent word "eye" is set to "0".
  • For the song feature information 40 "bright", the constituent word "Omoi" is included with a weight of "0.8", the constituent word "eye" with a weight of "0.2", and the constituent words "Umi" and "Kibo" each with a weight of "0.1".
  • For the song feature information 40 "sad/lonely", the constituent word "Kibo" is included with a weight of "0.7" and the constituent word "Umi" with a weight of "0.2", while the weights of the constituent words "eye" and "Omoi" are "0".
  • For the song feature information 40 "encouraged", the constituent word "Kibo" is included with a weight of "0.8", the constituent word "Umi" with a weight of "0.4", and the constituent word "eye" with a weight of "0.5", while the weight of the constituent word "Omoi" is set to "0".
  • In the song search processing according to the embodiment, corresponding song feature information 40 is generated from the constituent words of each song using the conversion table T illustrated in FIG. 4. More specifically, when the conversion table T of FIG. 4 is used and, among the constituent words listed in the conversion table T, only the constituent words "Umi", "Omoi" and "Kibo" are included in the lyrics of a certain song, the value of the song feature information 40 "heartwarming" for that song becomes "0.6", the sum of the weights "0.1", "0.1" and "0.4" of the constituent words "Umi", "Omoi" and "Kibo" in the song feature information 40 "heartwarming".
  • Similarly, the value of the song feature information 40 "bright" for that song becomes "1.0", the sum of the weights "0.1", "0.8" and "0.1" of the constituent words "Umi", "Omoi" and "Kibo" in the song feature information 40 "bright".
  • In this way, for each piece of song feature information 40 listed in the conversion table T, the weight values corresponding to the constituent words included in the lyrics are added up to determine its value for the song.
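The step-S4 conversion can be sketched as a weighted sum per song feature. The table values are the ones given for the FIG. 4 example; the dictionary layout is illustrative, not the patent's actual format.

```python
# Conversion table T from the FIG. 4 example: song feature 40 -> {constituent word: weight}.
CONVERSION_TABLE_T = {
    "heartwarming": {"Umi": 0.1, "Omoi": 0.1, "Kibo": 0.4, "eye": 0.0},
    "bright":       {"Umi": 0.1, "Omoi": 0.8, "Kibo": 0.1, "eye": 0.2},
    "sad/lonely":   {"Umi": 0.2, "Omoi": 0.0, "Kibo": 0.7, "eye": 0.0},
    "encouraged":   {"Umi": 0.4, "Omoi": 0.0, "Kibo": 0.8, "eye": 0.5},
}

def convert_to_song_features(constituent_words, table=CONVERSION_TABLE_T):
    # Step S4: for each song feature 40, add up the weights of the constituent
    # words that actually occur in the song's lyrics (rounding hides float noise).
    return {
        feature: round(sum(w for word, w in weights.items() if word in constituent_words), 10)
        for feature, weights in table.items()
    }
```

For a song containing only "Umi", "Omoi" and "Kibo", this yields "heartwarming" = 0.6 and "bright" = 1.0, matching the worked example above.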
  • In parallel with the processing of steps S1 and S2 and steps S3 to S4, only the sound feature information within each piece of song feature information 20 corresponding to all songs is read from the song feature information database 3 for each song and output to the search processing unit 8 (step S5).
  • The search processing unit 8 then compares, for each song, the song feature information (including the weight of each song feature information 40) included in the one search word feature information 30 extracted in step S2 with the song feature information 40 corresponding to each song converted in step S4, and compares, for each song, the sound feature information included in that search word feature information 30 with the sound feature information corresponding to each song extracted in step S5, thereby calculating for each song the similarity between the song (its song feature information 40 and sound feature information) and the input search word (step S6).
  • Then, a reproduction list is created in which the songs to be output are arranged in descending order of similarity (step S7).
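Steps S6 and S7 can be sketched as follows. The patent does not fix a particular similarity measure, so cosine similarity over the feature-weight vectors is used here as one plausible choice, and the song identifiers are hypothetical.

```python
import math

def cosine_similarity(a: dict, b: dict) -> float:
    # Similarity between two feature-weight vectors (step S6); the measure
    # itself is an assumption, not specified by the patent.
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na > 0 and nb > 0 else 0.0

def make_playlist(query_features: dict, songs: dict) -> list:
    # songs: song id -> converted song feature information 40.
    # Step S7: arrange the songs in descending order of similarity.
    return sorted(songs,
                  key=lambda sid: cosine_similarity(query_features, songs[sid]),
                  reverse=True)
```

A song whose converted feature vector matches the search word's weights exactly gets similarity 1.0 and is placed at the head of the reproduction list.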
  • Next, from among the songs included in the created reproduction list, the search processing unit 8 constructs, according to the contents of the song feature information 20 corresponding to, for example, the song to be played first, graphic information for displaying the corresponding song feature information figure on the display 12.
  • A display signal Sdp including the graphic information corresponding to the constructed song feature information figure and the graphic information corresponding to the search word feature information figure constructed and stored in the processing of step S15 is then generated and output to the display 12, so that the search word feature information figure and the song feature information figure are displayed simultaneously on the same screen (step S16).
  • When an operation for changing the conversion table T is performed on the input unit 9 (step S17; YES), the conversion tables T accumulated so far in the memory (not shown) are displayed so that the user can select one, or a display for creating a new conversion table T is executed on the display 12; as a result, the newly changed conversion table T is stored in the memory (step S18), and the process returns to step S3-1. The new conversion table is then formally determined in the processing of step S3-1 and used thereafter in the processing from step S4 onward.
  • On the other hand, if no operation to change the conversion table T is performed in the determination of step S17 (step S17; NO), the songs indicated by the reproduction list (see step S7) are extracted from the song database 2 in the order shown in the list and output via the song output unit 10 (step S8).
  • the search word feature information figure 50 and the song feature information figure 60 are displayed side by side on the display 12 at the same time.
  • the song feature information figure 60 includes a figure showing the key (tonality) of the song characterized by the song feature information 20 corresponding to it.
  • the color, display position, and display direction of the arrow 67 indicating each key are determined in advance, and an arrow 67 indicating the key included in the song feature information 20 is displayed at one of those predetermined positions and directions.
  • the feature figures 61 to 65 each represent either one of the pieces of song feature information characterizing the lyrics of the song (obtained by converting the constituent words included in the lyrics using the conversion table T shown in FIG. 4) or one of the parameters included in the song feature information 20 as sound feature information (excluding the parameter indicating the key; see FIG. 2(a)).
  • a preset number of them (five in the case of FIG. 5) are displayed at preset display positions (in FIG. 5, at the vertices of a pentagon centered on the key figure 66).
  • the feature figure 61 corresponds to the song feature information "heartwarming" of the song,
  • the feature figure 62 corresponds to the parameter "BPM" of the song,
  • the feature figure 63 corresponds to the song feature information "bright" of the song,
  • the feature figure 64 corresponds to the parameter "maximum level" of the song, and
  • the feature figure 65 corresponds to the parameter "average level" of the song.
  • the song feature information or parameter value indicated by each feature figure is normalized using the maximum and minimum values that the corresponding song feature information or parameter can take, and is expressed by a preset color.
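The min-max normalization described above can be sketched as follows; the BPM range used in the example is an assumption for illustration only.

```python
def normalize(value, lo, hi):
    """Map a raw song feature / parameter value into [0, 1] using the
    maximum (hi) and minimum (lo) values that the feature can take."""
    if hi == lo:          # degenerate range: avoid division by zero
        return 0.0
    return (value - lo) / (hi - lo)

# e.g. a BPM of 140 within an assumed representable range of 40-240:
print(normalize(140, 40, 240))  # 0.5
```

The normalized value would then be mapped to the preset display color of the corresponding feature figure.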
  • the conversion table T is stored in a memory (not shown) in the search processing unit 8 and can be selected arbitrarily by the user (see step S18); whichever piece of song feature information is obtained with the conversion table T selected by the user is displayed at the corresponding display position. [0062] Among the feature figures 61 to 65, those indicating parameters included in the song feature information 20 as sound feature information may change color or shape during playback of the song, in accordance with changes in the value of the corresponding parameter.
  • the search word feature information figure 50 corresponds to the search word feature information 30.
  • one or more keys suitable for the songs to be searched with the search word characterized by that search word feature information 30 are displayed at the center of the search word feature information figure 50 as a key figure 56 containing one or more arrows 57, whose colors change according to their degree of suitability (hereinafter called the correlation value).
  • the key figure 56 is a figure containing one or more arrows 57 indicating any of the 24 key types described above for the song feature information figure 60.
  • the display position and display direction of the arrow 57 indicating each key are determined in advance, and an arrow 57 indicating a key included in the search word feature information 30 is displayed at one of those predetermined positions and directions.
  • the color of each arrow 57 in the key figure 56 depends, for example, on the correlation value between each of the 24 key types and the key of the songs to be searched by the search word: keys with negative correlation values are indicated by a blue arrow 57 displayed at the corresponding preset position and direction, while keys with positive correlation values are indicated by a red arrow 57 displayed at the corresponding preset position and direction.
  • the larger the absolute value of a correlation value, the darker the corresponding arrow 57 is displayed in its color (red or blue).
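One way to realize this arrow coloring is sketched below. The concrete RGB mapping is an assumption; the text only specifies red for positive correlation, blue for negative, and a darker shade for a larger absolute value.

```python
def arrow_color(corr):
    """Map a key's correlation value to an (R, G, B) color for its arrow 57.

    Positive correlation -> red family, negative -> blue family; a larger
    absolute value gives a darker shade (assumed linear mapping).
    """
    level = round(255 - 155 * min(abs(corr), 1.0))  # larger |corr| -> darker
    if corr >= 0:
        return (255, level, level)   # pale pink through deep red
    return (level, level, 255)       # pale blue through deep blue

print(arrow_color(0.8))    # (255, 131, 131)
print(arrow_color(-0.25))  # (216, 216, 255)
```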
  • the feature figures 51 to 55 each represent either one of the pieces of song feature information included in the search word feature information 30 or one of the parameters included in it as sound feature information (excluding the parameter indicating the key; see FIG. 2(b)), and a preset number of them (five in the case of FIG. 5) are displayed at preset display positions corresponding to those of the song feature information figure 60 described above (in FIG. 5, at the vertices of a pentagon centered on the key figure 56). For example, in the case of FIG. 5,
  • the feature figure 51, corresponding to the feature figure 61 in the song feature information figure 60, corresponds to the song feature information "heartwarming" included in the search word feature information 30;
  • the feature figure 52, corresponding to the feature figure 62, corresponds to the parameter "BPM" included in the search word feature information 30; the feature figure 53, corresponding to the feature figure 63, corresponds to the song feature information "bright"; the feature figure 54, corresponding to the feature figure 64, corresponds to the parameter "maximum level" included in the search word feature information 30; and
  • the feature figure 55, corresponding to the feature figure 65, corresponds to the parameter "average level" included in the search word feature information 30.
  • the song feature information or parameter value indicated by each feature figure is, as in the song feature information figure 60, normalized using the maximum and minimum values that the corresponding song feature information or parameter can take, and is expressed by a preset color.
  • the weight value of each piece of song feature information included in the search word feature information 30 is expressed by a difference in gradation: the larger the weight value, the less the background of the figure is filled with gradation, and the smaller the weight value, the more prominent the gradation against the background.
  • when one song is output by the process of step S8, the user who has listened to it inputs, using the input unit 9, an evaluation of whether the output song matches the search word input and confirmed in step S1 (step S9).
  • when the input evaluation indicates that the output song matches the search word (step S9; match), the match history information described later is updated (step S11) and the process proceeds to step S12. On the other hand, if the evaluation in step S9 indicates that the output song does not match the search word (step S9; non-match), the non-match history information described later is updated (step S10) and the process proceeds to step S12.
  • the non-match history information updated in step S10 and the match history information updated in step S11 will now be described more specifically with reference to FIG. 6.
  • the match history information G, as shown in FIG. 6(a), contains, for each song evaluated by the user as matching the input search word, the song feature information 40 corresponding to that song, generated from the constituent word information included in its song feature information 20 by the same method as described in step S4 above, using the conversion table T valid at that time.
  • the non-match history information NG, as shown in FIG. 6(b), likewise contains, for each song evaluated by the user as not matching the input search word, the song feature information 40 corresponding to that song, generated from the constituent word information included in its song feature information 20 by the same method as described in step S4 above, using the conversion table T valid at that time, as in the case of the match history information G.
  • when the conversion table T is changed in step S18, the changed conversion table T is added; in this addition process only the new conversion table T is added, and the older conversion tables T are not themselves deleted by the update.
  • it is then confirmed whether output has been completed up to the last song in the playlist created in step S7 (step S13). If output up to the last song has not been completed (step S13; NO), the process returns to step S8, the next song in the playlist is output, and steps S9 to S12 described above are repeated for that song. On the other hand, if it is determined in step S13 that output up to the last song has been completed (step S13; YES), the series of song search processes ends.
  • in step S20, it is checked whether the subjectivity of the search word input at the timing of the update process matches that of each search word feature information 30.
  • if the subjectivity does not match (step S20; NO), the conversion table T is not updated, and the process proceeds to the update processing for the next search word feature information 30.
  • when the subjectivity matches (step S20; YES), the process proceeds to the actual update processing of the conversion table T.
  • in the following, one search word is assumed to have been evaluated against a preset number of songs (40 songs in the case of FIGS. 8 and 9: 20 songs evaluated as matching and 20 songs evaluated as not matching).
  • based on the contents of the constituent words included in the songs evaluated as matching the search word "heartwarming" and the contents of the constituent words included in the songs evaluated as not matching that search word, the method of updating the conversion table T corresponding to the search word is described below.
  • FIGS. 8 and 9 show, extracted from the history information shown in FIG. 6, only the items necessary for the update processing of the conversion table T and for the later-described calculation of the correlation values used for displaying the key figure 56 in the search word feature information figure 50.
  • for each constituent word, all the "0"s and "1"s in the vertical direction of FIG. 8 are added across all the songs corresponding to the match history information (20 songs in total) and divided by the total number of songs ("20" in FIG. 8) to obtain the average value AA (step S21). For example, for the constituent word "eye" in the match history information, five songs contain that constituent word (in other words, have the value "1" in FIG. 8); dividing by the total number of songs, "20", gives an average value AA of "0.25" for the constituent word "eye".
  • this average value calculation is executed for all the constituent words.
  • similarly, for the songs corresponding to the non-match history information, all the "0"s and "1"s in the vertical direction of FIG. 8 are added across all the songs and divided by the total number of songs to obtain the average value DA (step S21).
  • for example, the average value DA for the constituent word "Ai" is "0.70". This average value calculation is likewise performed for all the constituent words.
  • the average values AA and DA calculated by the above processing are not obtained from evaluations of all the songs accumulated in the song database 2. Therefore, the confidence limit width of each calculated average value (statistically corresponding to what is called the "sample ratio") is obtained, and it is confirmed whether the difference between the average value AA and the average value DA corresponding to each constituent word (average value AA − average value DA) can be used as a weighting value in the history information of that constituent word. More specifically, assuming that the reliability of the confidence limit width is 90%, the confirmation is made using the confidence limit width calculated by the following formula (1) (step S22). That is, for each constituent word, the corresponding confidence limit width is calculated as:
  • confidence limit width = 2 × 1.65 × [(AA × (1 − AA)) / number of songs]^(1/2)   …(1)
  • when the confidence limit width has been calculated, it is used to determine whether the absolute value of the value obtained by subtracting the average value DA from the average value AA is equal to or greater than the calculated confidence limit width (step S23).
  • when the absolute value of (average value AA − average value DA) is equal to or greater than the confidence limit width (step S23; YES), the difference is regarded as a reliable value, adopted as the weighting value of the corresponding constituent word in the conversion table T (in the case of FIG. 8, the conversion table T for the search word "heartwarming"), and registered (stored) in the conversion table T (step S24). On the other hand, when the absolute value of the value obtained by subtracting the average value DA from the average value AA is less than the confidence limit width (step S23; NO), the difference is regarded as unreliable, and the weight of the corresponding constituent word in the conversion table T is updated to "0" (step S25).
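Steps S21 to S25 can be sketched as follows. The history rows and the constituent word are illustrative; formula (1) is taken directly from the text (2 × 1.65 × sqrt(AA(1−AA)/n), a 90% confidence width for a sample proportion).

```python
import math

def update_weights(match_rows, nonmatch_rows, words):
    """Return {constituent word: weight} for one search word's conversion table.

    match_rows / nonmatch_rows model the 0/1 presence tables of FIG. 8
    (one dict per evaluated song).
    """
    n = len(match_rows)
    weights = {}
    for w in words:
        aa = sum(r[w] for r in match_rows) / n                      # step S21: AA
        da = sum(r[w] for r in nonmatch_rows) / len(nonmatch_rows)  # step S21: DA
        width = 2 * 1.65 * math.sqrt(aa * (1 - aa) / n)             # formula (1), step S22
        if abs(aa - da) >= width:       # step S23: difference is reliable
            weights[w] = aa - da        # step S24: adopt AA - DA as the weight
        else:
            weights[w] = 0.0            # step S25: unreliable -> weight 0
    return weights

word = "ai"  # hypothetical constituent word
match_rows = [{word: 1}] * 5 + [{word: 0}] * 15      # AA = 5/20  = 0.25
nonmatch_rows = [{word: 1}] * 14 + [{word: 0}] * 6   # DA = 14/20 = 0.70
print(round(update_weights(match_rows, nonmatch_rows, [word])[word], 2))  # -0.45
```

With these numbers the confidence width is about 0.32, so the difference of −0.45 is judged reliable and stored as the word's weight.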
  • each history information stores preset initial values, and the number of songs held as history information is limited, so old history information is overwritten by new history information. As a result, as the evaluation of output songs progresses, the subjectivity specific to each user is gradually reflected in the weighting values of the conversion table T. The newly created conversion table T reflecting the overwritten history information is stored each time in a memory (not shown) in the search processing unit 8, and the user can select the conversion table T used for the conversion into song feature information (step S4 in FIG. 3) according to individual preference (see step S18 in FIG. 3).
  • the correlation values are calculated by the same kind of processing as the conversion table T update processing described above.
  • again, one search word is assumed to have been evaluated against a preset number of songs (40 songs in the case of FIGS. 8 and 9: 20 songs evaluated as matching and 20 songs evaluated as not matching).
  • the case is described where the correlation values of the keys corresponding to the search word "heartwarming" are calculated
  • from the keys of the songs evaluated as matching that search word and the keys of the songs evaluated as not matching it.
  • as for the key corresponding to one song, even if the song modulates partway through, one key is associated with one song, so in FIGS. 8 and 9(a) only one key has the value "1" in the horizontal line for each song.
  • the confidence limit width of each average value calculated here is also calculated, and it is determined whether the difference between the average value AA and the average value DA corresponding to each key (average value AA − average value DA) can be used as a correlation value in the history information of that key. More specifically, the reliability of the confidence limit width is assumed to be 90%, and the confirmation is performed using the confidence limit width calculated by the above formula (1) (see step S22 in FIG. 7).
  • when the confidence limit width has been calculated, it is used to determine whether the absolute value of the value obtained by subtracting the average value DA from the average value AA is equal to or greater than the calculated confidence limit width (see step S23 in FIG. 7).
  • each history information stores preset initial values of the correlation values,
  • and, as a result, new history information overwrites old history information.
  • since the song feature information 20 and the search word feature information 30 are compared to search for songs, a song suitable for the input search word can be reliably retrieved, and compared with searching for songs using the input search word directly, songs matching the user's subjectivity can be retrieved.
  • since the search word feature information figure 50 and the song feature information figure 60 are displayed on the same screen of the display 12, the contents of the search word feature information 30 on which the search is based can be compared with the contents of the song feature information 20 of the song actually extracted by that search word, and the similarity between the features of the search word entered by the user and the features of the extracted song can be grasped as an image.
  • since the conversion table T used for generating the song feature information can be selected arbitrarily from those stored in the memory (not shown) in the search processing unit 8, if the user does not like the songs actually extracted by the search word, a past conversion table T can be selected and used instead.
  • since the search word feature information figure 50 and the song feature information figure 60 are displayed on the display 12 simultaneously and with the same composition, the two can easily be compared visually.
  • since each parameter indicating an acoustic feature in the song feature information 20 and the search word feature information 30 is displayed visibly for each parameter, the detailed magnitude relationship of each parameter can be recognized visually.
  • since each piece of song feature information indicating a feature of the lyrics in the song feature information 20 and the search word feature information 30 is displayed for each piece of song feature information, the detailed magnitude relationship of each piece of song feature information can be recognized visually.
  • since the magnitude of each parameter is expressed by a difference in the color of the figure representing it, the magnitude of each parameter can easily be recognized visually.
  • since the search word feature information 30 is updated based on the evaluation information, and the appearance of the feature figures 51 to 55 reflects the weighting of the song feature information in the search word feature information 30, the weighting of the song feature information in the search word feature information 30 and the transition of the search word feature information 30 based on that weighting can be grasped visually.
  • in the embodiment described above, the weighting differences of the song feature information for each search word are expressed by changing the gradation of the feature figures 51 to 55 in the search word feature information figure 50.
  • alternatively, as in the search word feature information figure 70 shown on the left of FIG. 10, the weighting may be expressed by the length of the feature figures 71 to 75.
  • shown on the right of FIG. 10 is a song feature information figure 80 with a display form corresponding to the search word feature information figure 70; its feature figures 81 to 85 (corresponding to the feature figures 61 to 65 in the song feature information figure 60 shown in FIG. 5)
  • are likewise drawn as triangles, in accordance with the search word feature information figure 70.
  • in FIG. 10, the same members as those in the figures shown in FIG. 5 are indicated by the same member numbers.
  • the user can also change, by operating the input unit 9, the search word feature information 30 corresponding to the displayed search word feature information figure 50 or 70.
  • since the user can change the contents of the search word feature information 30 in this way, songs better matching the user's tastes can be extracted by changing the search word feature information 30 according to the user's preference.
  • it is also possible to configure the device so that the weighting of each piece of song feature information included in the search word feature information 30 corresponding to the displayed search word feature information figure 50 or 70 can be changed.
  • furthermore, while the search word feature information figure 50 or 70 and the song feature information figure 60 or 80 are displayed simultaneously, the device can be configured so that the search word feature information 30 corresponding to the displayed search word feature information figure 50 or 70
  • is replaced in the search processing unit 8 by the same contents as the song feature information 20 corresponding to the song feature information figure 60 or 80 displayed at that time, and stored again in the search word feature information database 4.
  • in this case, the search word feature information 30 corresponding to the entered search word can be replaced by the song feature information 20 of the song, so that the search can be performed again
  • and songs matching the user's tastes can be extracted.
  • although the above embodiment has been described for the case where the present application is applied to the song search device S that stores and searches a plurality of songs, the present application can also be applied to an image search device that stores still images or moving images and searches them according to the subjectivity of the user.
  • furthermore, a program corresponding to the flowcharts shown in FIGS. 3 and 7 may be recorded on an information recording medium such as a flexible disk, or acquired and recorded via a network such as the Internet,
  • and a general-purpose microcomputer that reads and executes the program can be used as the search processing unit 8 according to the embodiment.

Abstract

There is provided a music composition search device capable of visually displaying the characteristics of a search keyword itself and the characteristics of a music composition retrieved by using the search keyword, and enabling the user to change the search conditions according to the user's preference. The music composition search device includes: a music composition characteristic information database (3) for accumulating music composition characteristic information indicating the characteristics of lyrics; an input unit (9) for inputting a search word; a search word characteristic information database (4) for accumulating search word characteristic information indicating the characteristics of the lyrics to be searched in accordance with the subjectivity indicated by the inputted search word; and a search processing unit (8) for generating comparison object information to be compared with the search word characteristic information corresponding to the inputted search word by converting the music composition characteristic information using a conversion table T, comparing that search word characteristic information with the generated comparison object information, and extracting the music composition corresponding to the music composition characteristic information from which the comparison object information most similar to the search word characteristic information was generated. The search processing unit (8) modifies the conversion table T according to a modification instruction for the conversion table T.

Description

Song search device, song search method, song search program, and information recording medium

Technical field
[0001] The present application belongs to the technical fields of song search devices, song search methods, song search programs, and information recording media. More specifically, it belongs to the technical fields of a song search device and song search method for searching for one or more songs from among a plurality of songs, each consisting of a song (vocal) and a performance (including prelude, accompaniment, interlude, and postlude; the same applies hereinafter), a song search program used for such song search, and an information recording medium on which the song search program is recorded.

Background art
[0002] In recent years, devices such as in-vehicle navigation devices and home server devices have come to store digital data for reproducing a large number of songs, from which a favorite song is selected and played.
[0003] Conventionally, as a first song search method, it has been common to input part of a constituent word (phrase) of the lyrics contained in the song to be played, as it is, and to search for and play songs whose lyrics contain that constituent word.
[0004] As a second song search method, one reflecting the user's subjectivity, there has been a search method as described in Patent Document 1 below: a renewable sensitivity table containing correlation values between each search keyword (for example, "cheerful song", "refreshing song") and each feature word (for example, "brightness", "liveliness") is prepared, together with a feature word list indicating the presence or absence of the feature corresponding to each feature word by "1" or "0"; when the user inputs a desired search keyword, a plurality of matching songs are retrieved on the basis of the sensitivity table and the feature word list.
[0005] Meanwhile, as a method of displaying a sorted group of files in a file management system that manages songs, general documents, and the like as files, there has been a display method as shown in Patent Document 2 below: a plurality of types of additional information are attached to each file and stored; in response to an instruction, the files are ordered (sorted) by one piece of additional information; the X coordinate of the file object corresponding to each file is associated with the order or value of one piece of additional information (for example, registration date and time), the Y coordinate with the order or value of another piece of additional information (for example, the degree of match with a search key), and the size of the file object with the value of yet another piece of additional information (for example, reference frequency); and a plurality of file objects are thus displayed on a single XY coordinate plane.
Patent Document 1: JP 2003-132085 A (FIGS. 1 to 3)
Patent Document 2: JP 2000-122770 A (FIG. 2)
Disclosure of the invention
Problems to be solved by the invention
[0006] However, the conventional song search methods and search result display methods described above merely display how similar the retrieved file itself is to the search keyword used for the search; the concept of comparing the characteristics of the search keyword itself with the characteristics of the file, such as a song, retrieved by that search keyword had never been considered.
[0007] The present application has been made in view of the above points. One example of its objects is to provide a song search device and song search method capable of visually displaying the characteristics of the search keyword input by the user and also visually displaying the characteristics of the files retrieved using that search keyword, thereby allowing the user to grasp, as an image, the similarity between the characteristics of the search keyword they input and the characteristics of the extracted files, and to change the search conditions according to their preference; as well as a song search program used for such song search and an information recording medium on which the song search program is recorded.

Means for solving the problems
[0008] In order to solve the above problems, the invention according to claim 1 is a song search device for searching for one or more songs from among a plurality of songs, comprising: song feature information storage means, such as a song feature information database, for storing song feature information indicating at least the features of the lyrics contained in each song, identifiably for each song; search word input means, such as an input unit, used to input a search word which indicates the song to be searched and consists of a word expressing subjectivity; search song feature information storage means, such as a search word feature information database, for storing, identifiably for each search word, search song feature information indicating at least the features of the lyrics of any song to be searched using that search word as being suitable for the subjectivity indicated by the input search word; conversion means, such as a search processing unit, for generating comparison target information to be compared with the input search song feature information, i.e. the search song feature information corresponding to the input search word, by converting the stored song feature information using conversion information such as a conversion table; comparison means, such as a search processing unit, for comparing the input search song feature information with each piece of the generated comparison target information; extraction means, such as a search processing unit, for extracting, based on the comparison result of the comparison means, the song corresponding to the song feature information from which the comparison target information most similar to the input search song feature information was generated, as the song corresponding to the input search word; instruction input means, such as an input unit, used to input an instruction to change the conversion information; and change means, such as a search processing unit, for changing the conversion information based on the input instruction; wherein, when the conversion information has been changed, the conversion means converts the song feature information into the comparison target information using the changed conversion information.
[0009] In order to solve the above problem, the invention according to claim 5 is a song search method executed in a song search device for retrieving one or more songs from among a plurality of songs, the device comprising: song feature information storage means, such as a song feature information database, for storing song feature information, indicating at least the features of the lyrics contained in each song, identifiably for each song; search song feature information storage means, such as a search word feature information database, for storing, identifiably for each search word, search song feature information indicating at least the features of the lyrics of any song that should be retrieved with a search word, consisting of a word that designates the song to be retrieved and expresses a subjective impression, as suiting that subjective impression; and display means such as a display; the method comprising: a search word input step of entering the search word; a conversion step of generating comparison target information, to be compared with the input search song feature information (the search song feature information corresponding to the entered search word), by converting the stored song feature information using conversion information; a comparison step of comparing the input search song feature information with each item of the generated comparison target information; an extraction step of extracting, based on the comparison results in the comparison step, the song corresponding to the song feature information from which the comparison target information most similar to the input search song feature information was generated, as the song corresponding to the entered search word; an instruction input step of entering an instruction to change the conversion information; and a change step of changing the conversion information based on the entered instruction; wherein, in the conversion step, when the conversion information has been changed, the song feature information is converted into the comparison target information using the changed conversion information.
[0010] In order to solve the above problem, the invention according to claim 6 causes a computer to function as the song search device according to any one of claims 1 to 4.
[0011] In order to solve the above problem, the invention according to claim 7 is an information recording medium on which the song search program according to claim 6 is recorded so as to be readable by the computer.
Brief Description of the Drawings
[0012] [FIG. 1] A block diagram showing the general configuration of a song search device according to an embodiment.
[FIG. 2] Diagrams illustrating the data structures of information stored in the song search device according to the embodiment: (a) illustrates the data structure of the song feature information, and (b) illustrates the data structure of the search word feature information.
[FIG. 3] A flowchart showing the song search processing according to the embodiment.
[FIG. 4] A diagram illustrating a conversion table according to the embodiment.
[FIG. 5] A diagram showing a display example of a search word feature information figure and a song feature information figure.
[FIG. 6] Diagrams illustrating the data structures of history information according to the embodiment: (a) illustrates the data structure of the match history information, and (b) illustrates the data structure of the non-match history information.
[FIG. 7] A flowchart showing the details of the conversion table update processing.
[FIG. 8] A diagram (part I) showing a specific example of the conversion table update processing.
[FIG. 9] A diagram (part II) showing a specific example of the conversion table update processing.
[FIG. 10] A diagram showing another display example of a search word feature information figure and a song feature information figure.
Explanation of Reference Numerals
[0013] 1 Song input unit
2 Song database
3 Song feature information database
4 Search word feature information database
5 Sound feature information extraction unit
6 Constituent word information extraction unit
7 Song feature information generation unit
8 Search processing unit
9 Input unit
10 Song output unit
11 History storage unit
12 Display
20 Song feature information
30 Search word feature information
40 Lyric feature information
50, 70 Search word feature information display figures
51, 52, 53, 54, 55, 61, 62, 63, 64, 65, 71, 72, 73, 74, 75, 81, 82, 83, 84, 85 Feature figures
56, 66 Key figures
57, 67 Arrows
60, 80 Song feature information display figures
S Song search device
T Conversion table
TT Correlation value
BEST MODE FOR CARRYING OUT THE INVENTION
[0014] Next, the best mode corresponding to the present application will be described with reference to the drawings. The embodiment described below applies the present application to a song search device that stores a plurality of songs and, in response to a user's request, searches for one of them and outputs (plays back) it.
[0015] (I) Overall Configuration and Operation
First, the overall configuration and overall operation of the song search device according to the embodiment will be described with reference to FIGS. 1 and 2. FIG. 1 is a block diagram showing the general configuration of the song search device, and FIG. 2 is a diagram illustrating the data structures of the information stored in the song search device.
[0016] As shown in FIG. 1, the song search device S according to the embodiment comprises: a song input unit 1; a song database 2; a song feature information database 3 serving as the song feature information storage means; a search word feature information database 4 serving as the search song feature information storage means; a sound feature information extraction unit 5; a constituent word information extraction unit 6; a song feature information generation unit 7; a search processing unit 8 serving as the conversion means, changing means, comparison means, and extraction means; an input unit 9 serving as the search word input means and instruction input means; a song output unit 10; a history storage unit 11; and a display 12 serving as the display means.
[0017] The song database 2 stores a plurality of songs to be searched by the song search processing described later. Each song contains at least a vocal part (singing) and a performance including a prelude, accompaniment, interlude, and postlude.
[0018] A song to be stored in the song database 2 is entered as follows: when song information Ssg corresponding to the song is input to the song input unit 1 from outside, the song input unit 1 applies format conversion processing and the like to the song information Ssg for storage in the song database 2, and the processed song information Ssg is then input to the song database 2.
[0019] Next, the song feature information database 3 stores song feature information corresponding to all the songs stored in the song database 2, identifiably for each song.
[0020] Here, the song feature information is stored in the song feature information database 3 in correspondence with each song stored in the song database 2, and characterizes both the vocal part and the performance of each song.
[0021] Next, the song feature information will be described concretely with reference to FIG. 1 and FIG. 2(a).
The song feature information is newly generated when a new song is input to the song database 2 as song information Ssg, and the generated song feature information is newly registered and stored in the song feature information database 3.
[0022] When a new song is stored in the song database 2, the song information Ssg corresponding to that song is read from the song database 2, as shown in FIG. 1, and output to the sound feature information extraction unit 5 and the constituent word information extraction unit 6.
[0023] The sound feature information extraction unit 5 then extracts, from the song information Ssg, a plurality of parameters indicating the acoustic features of the song, and outputs them to the song feature information generation unit 7 as sound feature information Sav.
[0024] The plurality of parameters contained in the sound feature information Sav include, as shown on the right of FIG. 2(a), for example, the tempo of the song (BPM, beats per minute), its maximum output level (maximum volume), its average output level (average volume), the chords it contains, its beat level (that is, the signal level (magnitude) of its beat component), and its key (such as C major or A minor).
[0025] In parallel with this, the constituent word information extraction unit 6 extracts the lyrics contained in the song from the song information Ssg, searches the extracted lyrics for each of a set of preset words (phrases; hereinafter simply called constituent words), generates constituent word information Swd indicating, for each constituent word, the search result (whether or not that word is contained in the lyrics), and outputs it to the song feature information generation unit 7.
[0026] The constituent word information Swd indicates, for each song, whether each preset constituent word, such as "ai" (love), "umi" (sea), "omoi" (feeling), or "kibou" (hope), is contained in the lyrics of that song: if the constituent word is contained in the lyrics, the value of the constituent word information Swd for that word is "1"; if it is not, the value is "0". More specifically, as shown on the left of FIG. 2(a), in the song with song number "0" in the song database 2, the constituent word "ai" is contained in its lyrics, while the constituent words "umi", "omoi", and "kibou" are not. Similarly, in the song with song number "1", the constituent words "ai", "umi", and "kibou" are contained in its lyrics, while the constituent word "omoi" is not.
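As an illustrative sketch only (Python, the helper name `extract_swd`, and the naive substring check are assumptions, not part of the patent text), the 1/0 constituent word information Swd of paragraph [0026] could be modeled as:

```python
# Preset constituent words from FIG. 2(a), romanized for illustration.
CONSTITUENT_WORDS = ["ai", "umi", "omoi", "kibou"]

def extract_swd(lyrics):
    """Return {constituent word: 1 if it occurs in the lyrics, else 0}.
    A plain substring test stands in for real lyric analysis here."""
    return {word: int(word in lyrics) for word in CONSTITUENT_WORDS}

# Song number "0" of FIG. 2(a): only "ai" appears in its lyrics.
print(extract_swd("... ai ..."))  # {'ai': 1, 'umi': 0, 'omoi': 0, 'kibou': 0}
```

A real implementation would tokenize the lyrics rather than test substrings, but the output shape (one binary flag per preset word, per song) matches the table on the left of FIG. 2(a).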
[0027] The song feature information generation unit 7 then combines the sound feature information Sav and the constituent word information Swd for each corresponding song, outputs song feature information Ssp, consisting of a plurality of items of song feature information 20, one per song as illustrated in FIG. 2(a), to the song feature information database 3, and registers and stores it there. As shown in FIG. 2(a), each item of song feature information 20 consists of a pair, for each song, of the acoustic features extracted by the sound feature information extraction unit 5 and the constituent word information extracted from that song's lyrics by the constituent word information extraction unit 6.
[0028] Furthermore, the search word feature information database 4 stores search word feature information corresponding to all of the search keywords preset as the keywords to be entered by the user in the song search processing described later (that is, search keywords that subjectively characterize the song the user feels like listening to at that moment; hereinafter simply called search words), identifiably for each search word.
[0029] Here, the search word feature information characterizes each of the search words presented to the user as candidates to be selected and entered when the user searches for a song stored in the song database 2 (it is not the search word itself).
[0030] Next, the search word feature information will be described concretely with reference to FIG. 2(b).
As illustrated in FIG. 2(b), each item of search word feature information 30 consists of: search word feature information identification information (shown as "search ID" in FIG. 2(b)) for distinguishing it from the other items of search word feature information 30; the search word itself to which it corresponds; lyric feature information characterizing the lyrics of the songs that should be retrieved (in other words, that are expected to be retrieved) from the song database 2 using the corresponding search word; and sound feature information including a plurality of parameters indicating the acoustic features of the songs that should be (are expected to be) retrieved.
[0031] Here, the sound feature information constituting the search word feature information 30 contains acoustic parameters of the same kinds as the parameters contained in the sound feature information Sav described above.
[0032] Similarly, the lyric feature information constituting the search word feature information 30 is a set in which each of a plurality of subjective lyric feature items, characterizing the lyrics of the songs to be retrieved with that search word feature information 30, is weighted according to the concrete content of the individual lyrics of the songs to be retrieved with the corresponding search word.
[0033] More specifically, in the example shown in FIG. 2(b), the search word feature information 30 corresponding to the search word "heartwarming" contains the lyric feature item named "heartwarming" with a weight of 0.9 relative to the other lyric feature items, the item named "encouraging" with a weight of 0.3, and the item named "sad/lonely" with a weight of 0.1, while the weight of the item named "cheerful" is 0. In contrast, the search word feature information 30 corresponding to the search word "cheerful" contains the lyric feature item "cheerful" with a weight of 0.7, the item "heartwarming" with a weight of 0.2, and the item "encouraging" with a weight of 0.5, while the weight of the item "sad/lonely" is 0. Further, the search word feature information 30 corresponding to the search word "sad/lonely" contains the lyric feature item "heartwarming" with a weight of 0.3 and the item "sad/lonely" with a weight of 0.8, while the weights of the items "cheerful" and "encouraging" are both 0. Likewise, the search word feature information 30 corresponding to each of the other search words, such as "encouraging" or "calming", contains each of the lyric feature items "heartwarming", "cheerful", "sad/lonely", and "encouraging" with preset weights.
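Purely for illustration (the field names, the search ID value, and the sound-feature values below are invented, not taken from the patent), one record of the search word feature information 30 of FIG. 2(b), with the "heartwarming" weights quoted above, might be modeled as:

```python
# One search word feature information record 30: ID, the search word itself,
# weighted lyric feature items, and acoustic parameters (invented values).
SEARCH_WORD_FEATURES = [
    {
        "search_id": 0,                       # invented ID
        "search_word": "heartwarming",
        "lyric_features": {"heartwarming": 0.9, "cheerful": 0.0,
                           "sad/lonely": 0.1, "encouraging": 0.3},
        "sound_features": {"bpm": 80, "key": "C major"},  # invented values
    },
]

def find_by_word(word):
    """Look up the record 30 corresponding to an entered search word."""
    return next(r for r in SEARCH_WORD_FEATURES if r["search_word"] == word)

print(find_by_word("heartwarming")["lyric_features"]["encouraging"])  # 0.3
```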
[0034] Note that the reason the same subjective concept as that indicated by each lyric feature item is also expressed by one of the search words (is the search word itself) is, as described later, to obtain song search results suited to each user's individual taste. Each lyric feature item itself indicates a feature of a song stored in the song database 2 (more specifically, of the lyrics contained in that song) and is distinct from the search word itself; the details will be described later.
[0035] When the user searches for a subjectively desired song using the information stored in the song feature information database 3 and the search word feature information database 4, first, when one of the search words is entered by the user at the input unit 9, input information Sin indicating the entered search word is output to the search processing unit 8.
[0036] Based on the input information Sin, the search processing unit 8 extracts the one item of search word feature information 30 corresponding to the entered search word from the search word feature information database 4 as search word feature information Swds, and also extracts the plurality of items of song feature information 20 corresponding to all the songs stored in the song database 2 from the song feature information database 3 as song feature information Ssp. It then compares the extracted search word feature information 30 with each item of song feature information 20, generates song identification information Srz indicating the song corresponding to the item of song feature information 20 most similar to that search word feature information 30, and outputs it to the song database 2. In parallel, the search processing unit 8 generates a display signal Sdp and outputs it to the display 12 so as to display, in the same layout, a figure representing the item of search word feature information 30 extracted from the search word feature information database 4 as corresponding to the entered search word (hereinafter simply called the search word feature information figure) and a figure representing the item of song feature information 20 judged most similar to it (hereinafter simply called the song feature information figure). Concrete layout examples of the search word feature information figure and the song feature information figure will be described in detail later.
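As a hedged sketch of this comparison and extraction: the text above does not specify the similarity measure used by the search processing unit 8, so squared Euclidean distance over the lyric feature weights is assumed here purely for illustration, and the two song vectors are invented:

```python
# Pick the song whose feature vector is closest to the search word's vector.
# The distance measure is an assumption; the patent only says "most similar".

def most_similar_song(search_features, songs):
    """Return the ID of the song minimizing squared Euclidean distance."""
    def dist(song_features):
        return sum((search_features[k] - song_features.get(k, 0.0)) ** 2
                   for k in search_features)
    return min(songs, key=lambda song_id: dist(songs[song_id]))

# The "heartwarming" weights of FIG. 2(b) versus two invented song vectors.
search = {"heartwarming": 0.9, "cheerful": 0.0,
          "sad/lonely": 0.1, "encouraging": 0.3}
songs = {
    0: {"heartwarming": 0.8, "cheerful": 0.1, "sad/lonely": 0.2, "encouraging": 0.4},
    1: {"heartwarming": 0.1, "cheerful": 1.0, "sad/lonely": 0.0, "encouraging": 0.2},
}
print(most_similar_song(search, songs))  # 0
```

Song 0 wins because every component sits within 0.1 of the search vector, while song 1 differs sharply on "heartwarming" and "cheerful".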
[0037] The song database 2 then outputs the song indicated by the song identification information Srz to the song output unit 10 as song information Ssg.
[0038] The song output unit 10 applies the necessary output interface processing and the like to the received song information Ssg, and outputs the processed song information Ssg to an external amplification unit, broadcast transmission unit, or the like (not shown).
[0039] After the song information Ssg indicating one song has been output from the song output unit 10, whether or not the song corresponding to that output song information Ssg was the one desired by the user who originally entered the search word is then entered again at the input unit 9 as evaluation information, and the corresponding input information Sin is output to the search processing unit 8.
[0040] Based on the evaluation information received as the input information Sin, the search processing unit 8 generates history information indicating the results of past song search processing, temporarily stores it in the history storage unit 11 as history information Sm, reads it out as necessary, and performs the history management processing described later.
[0041] (II) Song Search Processing
Next, the song search processing according to the embodiment, executed using the song search device S configured as described above, will be described concretely with reference to FIGS. 3 to 10. FIG. 3 is a flowchart showing the song search processing in the song search device; FIG. 4 is a diagram illustrating the contents of a conversion table serving as the conversion information used in the song search processing; FIG. 5 is a diagram showing a display example of the search word feature information figure corresponding to an entered search word and the song feature information figure corresponding to the song extracted as suiting it; FIG. 6 is a diagram illustrating the history information used in the history processing according to the embodiment; FIG. 7 is a flowchart showing the history management processing using that history information; FIGS. 8 and 9 are diagrams illustrating the databases used in the history management processing; and FIG. 10 is a diagram showing another display example of the search word feature information figure corresponding to an entered search word and the song feature information figure corresponding to the song extracted as suiting it.
[0042] As shown in FIG. 3, in the song search processing of the embodiment, executed mainly by the search processing unit 8, first, when the user decides on a desired subjective search word and enters it at the input unit 9 (step S1), the search word feature information 30 corresponding to the entered search word is extracted from the search word feature information database 4 and output to the search processing unit 8 (step S2). The search processing unit 8 then constructs graphic information for displaying on the display 12 the search word feature information figure corresponding to the extracted search word feature information 30, according to the contents of that information at that time, and temporarily stores it in a memory (not shown) within the search processing unit 8 (step S15).
[0043] In parallel with the processing of steps S1 and S2, the constituent words making up the lyrics contained in all the songs are read out from the song feature information database 3, song by song, and output to the search processing unit 8 (step S3). Then, through an operation performed by the user with the input unit 9, the conversion table is determined that will be used to convert the read constituent words and generate the lyric feature information corresponding to the lyrics of each song in which those constituent words were contained (step S3-1).
[0044] As described later, the conversion table is stored in a memory (not shown) within the search processing unit 8. Conversion tables newly created (changed) as appropriate, based on the evaluation given by the user upon viewing the search word feature information figure and the song feature information figure displayed on the display 12 together with the song search result, are stored additionally in that memory.
[0045] Thereafter, the search processing unit 8 converts the read constituent words using the determined conversion table, thereby converting them into the lyric feature information corresponding to the lyrics contained in each song; this processing is executed for each set of lyrics (step S4).
[0046] Here, the conversion table will be described concretely with an example, using FIG. 4.
The lyric feature information generated by the processing in step S4 is of the same kind as the lyric feature information contained, with weights, in the search word feature information 30 described above; for each song to which it corresponds, it is a set in which each of the constituent words making up that song's lyrics is weighted according to the concrete content of each individual lyric feature item.
[0047] More specifically, in the example of the conversion table T shown in FIG. 4, the lyric feature item "heartwarming" contains the constituent word "kibou" with a weight of 0.4 relative to the other constituent words, and the constituent words "umi" and "omoi" each with a weight of 0.1, while the weight of the constituent word "ai" is 0; the "heartwarming" lyric feature information 40 is generated accordingly. In contrast, the lyric feature item "cheerful" contains the constituent word "omoi" with a weight of 0.8, the constituent word "ai" with a weight of 0.2, and the constituent words "umi" and "kibou" each with a weight of 0.1. Further, the lyric feature item "sad/lonely" contains the constituent word "kibou" with a weight of 0.7 and the constituent word "umi" with a weight of 0.2, while the weights of the constituent words "ai" and "omoi" are 0; the "sad/lonely" lyric feature information 40 is generated accordingly. Finally, the lyric feature item "encouraging" contains the constituent word "kibou" with a weight of 0.8, the constituent word "umi" with a weight of 0.4, and the constituent word "ai" with a weight of 0.5, while the weight of the constituent word "omoi" is 0; the "encouraging" lyric feature information 40 is generated accordingly.
[0048] In the processing of step S4, the conversion table T illustrated in FIG. 4 is used to generate the corresponding song feature information 40 from the constituent words of each song. More specifically, when the conversion table T of FIG. 4 is used and a given song contains, among the constituent words listed in the table, only the constituent words "Umi", "Omoi" and "Kibou", the value of the song feature information 40 "heartwarming" for that song is "0.6", obtained by adding the weights "0.1", "0.1" and "0.4" that "Umi", "Omoi" and "Kibou" respectively carry in the "heartwarming" entry. Similarly, the value of the song feature information 40 "bright" for that song is "1.0", obtained by adding the corresponding weights "0.1", "0.8" and "0.1" in the "bright" entry. In the same way, for each item of song feature information 40 listed in the conversion table T, its value for the song is determined by adding the weights corresponding to the constituent words that the song contains.
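The step-S4 conversion described above can be sketched as follows. The table layout and function name are illustrative only; the weights are the FIG. 4 example values quoted in the text.

```python
# Hypothetical encoding of the FIG. 4 conversion table T:
# feature label -> {constituent word -> weight}.
CONVERSION_TABLE_T = {
    "heartwarming": {"Kibou": 0.4, "Umi": 0.1, "Omoi": 0.1, "Ai": 0.0},
    "bright":       {"Kibou": 0.1, "Umi": 0.1, "Omoi": 0.8, "Ai": 0.2},
    "sad/lonely":   {"Kibou": 0.7, "Umi": 0.2, "Omoi": 0.0, "Ai": 0.0},
    "encouraging":  {"Kibou": 0.8, "Umi": 0.4, "Omoi": 0.0, "Ai": 0.5},
}

def song_feature_info(constituent_words, table=CONVERSION_TABLE_T):
    """For each feature, sum the weights of the constituent words the song contains."""
    words = set(constituent_words)
    return {feature: round(sum(w for word, w in weights.items() if word in words), 10)
            for feature, weights in table.items()}

# A song containing only "Umi", "Omoi" and "Kibou", as in paragraph [0048]:
features = song_feature_info(["Umi", "Omoi", "Kibou"])
print(features["heartwarming"])  # 0.6 (= 0.1 + 0.1 + 0.4)
print(features["bright"])        # 1.0 (= 0.1 + 0.8 + 0.1)
```

The rounding merely suppresses floating-point noise in the sums; the values match the worked example in the text.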
[0049] Further, in parallel with the processing of steps S1 and S2 and steps S3 to S4, only the sound feature information within each item of song feature information 20 corresponding to all the songs is read from the song feature information database 3, song by song, and output to the search processing unit 8 (step S5).
[0050] The search processing unit 8 then compares, for each song, each item of song feature information (including the weighting of each item of song feature information 40) contained in the one search word feature information 30 extracted in step S2 with the song feature information 40 obtained for each song by the conversion of step S4, and also compares, for each song, the sound feature information contained in that search word feature information 30 with the sound feature information extracted for each song in step S5. The similarity between the input search word and the song feature information 40 and sound feature information of each song is thereby calculated song by song (step S6).
[0051] Then, based on the similarity calculated for each song, a playlist is created in which the songs to be output are arranged in descending order of similarity (step S7).
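Steps S6 and S7 can be sketched as below. The text does not fix the comparison measure, so the sketch assumes a simple cosine similarity over the feature dimensions; the song data and function names are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature dicts (assumed comparison measure)."""
    keys = sorted(set(a) | set(b))
    va = [a.get(k, 0.0) for k in keys]
    vb = [b.get(k, 0.0) for k in keys]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(y * y for y in vb))
    return dot / (na * nb) if na and nb else 0.0

def make_playlist(search_features, songs):
    """Rank songs (id -> feature dict) by similarity to the search word, descending (step S7)."""
    scored = [(song_id, cosine_similarity(search_features, feats))
              for song_id, feats in songs.items()]
    return [song_id for song_id, _ in sorted(scored, key=lambda p: p[1], reverse=True)]

# Hypothetical search-word features and two songs' features:
search = {"heartwarming": 0.9, "bright": 0.1}
songs = {"A": {"heartwarming": 0.6, "bright": 1.0},
         "B": {"heartwarming": 0.8, "bright": 0.2}}
print(make_playlist(search, songs))  # ['B', 'A'] — B's profile is closer to the search word
```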
[0052] Next, the search processing unit 8 constructs graphic information for displaying on the display 12 a song feature information figure corresponding to the song feature information 20 of, for example, the song to be played first among the songs included in the created playlist, in accordance with the content of that song feature information 20. It then generates the display signal Sdp, containing both the graphic information for this song feature information figure and the graphic information for the search word feature information figure constructed and stored by the processing of step S15, outputs it to the display 12, and displays the search word feature information figure and the song feature information figure side by side on the same screen at the same time (step S16).
[0053] Next, while this display is being performed, it is checked whether an operation has been executed on the input unit 9 to change the conversion table T used for generating the song feature information (step S4) and to generate the song feature information again using another, new conversion table T, for example because the displayed search word feature information figure and song feature information figure are not similar (step S17).
[0054] When an operation to change the conversion table T is executed on the input unit 9 (step S17; YES), the conversion tables T accumulated and stored in a memory (not shown) are displayed so that the user can select one, or a display for creating a new conversion table T is presented on the display 12; the newly changed conversion table T is then stored in the memory (step S18), and the process returns to step S3-1. Thereafter, the new conversion table T is formally determined in the processing of step S3-1 and used in the processing from step S4 onward.
[0055] On the other hand, when no operation to change the conversion table T is performed in the determination of step S17 (step S17; NO), the songs indicated in the playlist (see step S7) are extracted from the song database 2 in the order indicated there and output via the song output unit 10 (step S8).
[0056] Here, the display modes of the search word feature information figure and the song feature information figure are specifically described with reference to FIG. 5.
[0057] As illustrated in FIG. 5, the search word feature information figure 50 and the song feature information figure 60 are displayed side by side on the display 12 at the same time.
[0058] More specifically, as shown on the right side of FIG. 5, the song feature information figure 60 is centered on a key figure 66, a figure indicating the key of the song characterized by the corresponding song feature information 20, and includes feature figures 61 to 65 containing preset items selected from the parameters included in the sound feature information of that song feature information 20 and from the items of song feature information generated using the constituent words contained in that song feature information 20.

[0059] Of these, the key figure 66 contains exactly one arrow 67 indicating one of 24 keys (specifically, C major, A minor, F major, D minor, B-flat major, G minor, E-flat major, C minor, A-flat major, F minor, D-flat major, B-flat minor, G-flat major, E-flat minor, B major, G-sharp minor, E major, C-sharp minor, A major, F-sharp minor, D major, B minor, G major and E minor); that is, one song is represented by one key. The color of the arrow 67 for each key, as well as its display position and direction (within the circle constituting the key figure 66), are determined in advance, and the arrow 67 indicating the key contained in the song feature information 20 is displayed at the corresponding one of these predetermined positions and directions.
[0060] As for the feature figures 61 to 65, a preset number (five in FIG. 5) of items, chosen from among the items of song feature information that characterize the song and are obtained by converting the constituent words contained in the song's lyrics using the conversion table T shown in FIG. 4, and from among the parameters included as sound feature information in the song feature information 20 (excluding the parameter indicating the key; see FIG. 2(a)), are displayed at preset display positions (in FIG. 5, the vertices of a pentagon centered on the key figure 66). For example, the feature figure 61 corresponds to the song feature information "heartwarming" of the song, the feature figure 62 to the parameter "BPM", the feature figure 63 to the song feature information "bright", the feature figure 64 to the parameter "maximum level", and the feature figure 65 to the parameter "average level". In each of the feature figures 61 to 65, the value of the song feature information or parameter it represents is expressed by a preset color, as a value normalized using the maximum and minimum values that the respective item of song feature information or parameter can take.
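As a minimal illustration of the normalization just described — the actual value ranges of each parameter are not given in the text, so the BPM range below is an assumption:

```python
def normalize(value, vmin, vmax):
    """Min-max normalize a feature value into [0, 1] for color mapping (paragraph [0060])."""
    return (value - vmin) / (vmax - vmin)

# e.g. a tempo of 120 BPM on an assumed 40-200 BPM scale
print(normalize(120, 40, 200))  # 0.5
```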
[0061] Since the conversion tables T are accumulated in a memory (not shown) in the search processing unit 8 and the user can select one of them arbitrarily (see step S18 above), one of the items of song feature information obtained on the basis of the conversion table T selected by the user is displayed at the above display positions.

[0062] Of the feature figures 61 to 65, those indicating parameters included as sound feature information in the song feature information 20 may be configured so that, when the value of the parameter displayed as a feature figure changes during playback of the song, the color or shape of the corresponding feature figure changes accordingly.
[0063] On the other hand, as for the search word feature information figure 50, more specifically, as shown on the left side of FIG. 5, a key figure 56 is first displayed at the center of the figure 50; it contains one or more arrows 57 whose colors change according to the degree of suitability (hereinafter referred to as the correlation value) of each of the one or more keys suitable for the songs to be searched using the search word characterized by the corresponding search word feature information 30. Further, centered on this key figure 56, feature figures 51 to 55 are included, containing items preset as corresponding to the feature figures 61 to 65 of the song feature information figure 60, selected from among the parameters included in the sound feature information of the search word feature information 30 and the items of song feature information included in that search word feature information 30.
[0064] Of these, the key figure 56 is a figure containing one or more arrows 57 indicating any of the 24 keys described above for the song feature information figure 60. As in the case of the song feature information figure 60, the display position and direction of the arrow 57 for each key are determined in advance, and an arrow 57 indicating a key contained in the search word feature information 30 is displayed at one of these predetermined positions and directions. The color of each arrow 57 in the key figure 56 depends on the correlation value between each of the 24 keys and the key of the songs to be searched by the search word: for example, a key having a negative correlation value is indicated by displaying a blue arrow 57 at the corresponding preset position and direction, while a key having a positive correlation value is indicated by a red arrow 57. The absolute value of each correlation value is expressed by the shade of the corresponding arrow 57 (red or blue), displayed darker as the absolute value becomes larger.
[0065] As for the feature figures 51 to 55, a preset number (five in FIG. 5) of items, chosen from among the items of song feature information contained in the search word feature information 30 and the parameters included as sound feature information in that search word feature information 30 (excluding the parameter indicating the key; see FIG. 2(b)), are displayed at display positions preset so as to correspond to the song feature information figure 60 (in FIG. 5, the vertices of a pentagon centered on the key figure 56). For example, in FIG. 5, the feature figure 51, corresponding to the feature figure 61 of the song feature information figure 60, corresponds to the song feature information "heartwarming" contained in the search word feature information 30; the feature figure 52, corresponding to the feature figure 62, corresponds to the parameter "BPM"; the feature figure 53, corresponding to the feature figure 63, corresponds to the song feature information "bright"; the feature figure 54, corresponding to the feature figure 64, corresponds to the parameter "maximum level"; and the feature figure 55, corresponding to the feature figure 65, corresponds to the parameter "average level". In each of the feature figures 51 to 55, as with the feature figures 61 to 65, the value of the song feature information or parameter it represents is expressed by a preset color, as a value normalized using the maximum and minimum values that the respective item can take. In addition, in the feature figures 51 to 55, the weighting value of each item of song feature information contained in the search word feature information 30 (see FIG. 2(b) and its related description) is expressed by a gradation against the background: the larger the weighting value, the more solidly the figure is filled with no gradation, and the smaller the weighting value, the more pronounced the gradation against the background.
[0066] When one song is output by the processing of step S8, the user who has listened to it evaluates whether the output song is suitable for the search word input and determined in step S1, and inputs the evaluation result using the input unit 9 (step S9).
[0067] When the input evaluation result indicates that the output song is suitable for the search word (step S9; match), the match history information described later is updated (step S11) and the process proceeds to step S12. On the other hand, when the determination in step S9 indicates that the output song is not suitable for the search word (step S9; non-match), the non-match history information described later is updated (step S10) and the process proceeds to step S12.
[0068] Here, the non-match history information updated in step S10 and the match history information updated in step S11 are described more specifically with reference to FIG. 6.
[0069] First, as shown in FIG. 6(a), the match history information G contains, in addition to the song feature information 20 of each song evaluated as suitable for the search word input by the user, the song feature information 40 corresponding to that song, generated from the constituent word information contained in that song feature information 20 by the same method as described in step S4 above, with reference to the conversion table T in effect at that time.
[0070] On the other hand, as shown in FIG. 6(b), the non-match history information NG contains, in addition to the song feature information 20 of each song evaluated by the user as not suitable for the input search word, the song feature information 40 corresponding to that song, generated in the same way as for the match history information G from the constituent word information contained in that song feature information 20, with reference to the conversion table T in effect at that time.
[0071] Then, when each item of history information has been updated for a preset number of songs, the contents of the conversion table T and of the search word feature information 30 are each updated based on the results (step S12).
[0072] In the present embodiment, updating the contents of the conversion table T as part of this update means adding the changed conversion table T when the conversion table T has been changed in step S18. In this addition process, a new conversion table T is merely added; the other, older conversion tables T are neither updated nor deleted themselves.
[0073] Next, it is checked whether output has been completed up to the last song in the playlist created in step S7 (step S13). When output up to the last song has not been completed (step S13; NO), the process returns to step S8, the next song in the playlist is output, and steps S9 to S12 described above are repeated for that song. On the other hand, when the determination in step S13 indicates that output up to the last song has been completed (step S13; YES), the series of song search processes ends.
[0074] Next, the process of updating the conversion table T within the update processing of step S12 is described with reference to FIGS. 7 to 9.
[0075] In the update process of the conversion table T, as shown in FIG. 7, it is first checked whether the search word input at the time the update process is to be performed expresses the same subjective impression as that indicated by any of the items of song feature information 40 (step S20). When the subjective impressions do not match (step S20; NO), the conversion table T is not updated and the process moves on to the processing for updating the next search word feature information 30.
[0076] On the other hand, when the subjective impressions match (step S20; YES), the process proceeds to the actual update of the conversion table T.
[0077] The update process described below covers the case where, for a preset number of songs (40 songs in FIGS. 8 and 9: 20 evaluated as suitable and 20 as not suitable), the conversion table T corresponding to one search word (in FIGS. 8 and 9, the search word "heartwarming") is updated based on the constituent words contained in the songs evaluated as suitable for that search word and the constituent words contained in the songs evaluated as not suitable for it. FIGS. 8 and 9 show only those items, extracted from the history information of FIG. 6, that are needed for the update process of the conversion table T and for the later-described calculation of the correlation values used for displaying the key figure 56 in the search word feature information figure 50.
[0078] In the actual update process of the conversion table T, first, for the match history information, for each constituent word of the songs evaluated as suitable (in FIG. 8, one storage address corresponds to one song), all the "0" or "1" values in the vertical direction of FIG. 8 over all 20 songs are added together and divided by the total number of songs ("20" in FIG. 8) to obtain an average value AA (step S21). For example, in the case of the constituent word "Ai" in the match history information, the word was contained in 5 songs (in other words, 5 songs have the value "1" in FIG. 8); dividing this by the total number of songs, "20", gives an average value AA of "0.25" for the constituent word "Ai". This average calculation is performed for every constituent word.

[0079] Next, in the same way, for the non-match history information, for each constituent word of the songs evaluated as not suitable, all the "0" or "1" values in the vertical direction of FIG. 9 over all songs are added together and divided by the total number of songs to obtain an average value DA (step S21). For example, in the case of the constituent word "Ai" in the non-match history information, the word was contained in 14 songs; dividing this by the total number of songs gives an average value DA of "0.70" for the constituent word "Ai". This average calculation is performed for every constituent word.
[0080] Here, for each constituent word, the larger the difference between the average value AA and the average value DA, the more likely it is that the constituent word can express the song feature information 40 corresponding to the current search word.
[0081] It should be noted, however, that the average values AA and DA calculated by the above processing are not the result of evaluating all the songs stored in the song database 2. Therefore, the confidence limit width of each calculated average value (statistically corresponding to what is called a "sample proportion") is obtained, and it is checked whether the difference between the average value AA and the average value DA for each constituent word (average value AA − average value DA) can be used as a weighting value based on the history information for that constituent word. More specifically, assuming a confidence level of 90% for the confidence limit width, the check is performed using the confidence limit width calculated by the following formula (1) (step S22). That is, for each constituent word, the corresponding confidence limit width is calculated as

confidence limit width = 2 × 1.65 × [(AA × (1 − AA)) / number of songs]^(1/2) … (1)
[0082] Next, once the confidence limit width has been calculated, it is used to check whether the absolute value of the average value AA minus the average value DA is greater than or equal to the calculated confidence limit width (step S23).
[0083] When the absolute value of the average value AA minus the average value DA is greater than or equal to the confidence limit width (step S23; YES), this difference is regarded as a reliable value, adopted as the weighting value of the corresponding constituent word in the conversion table T (in the case of FIG. 8, in the entry for the search word "heartwarming"), and registered (stored) in the conversion table T (step S24). On the other hand, when in step S23 the absolute value of the average value AA minus the average value DA is less than the confidence limit width (step S23; NO), this difference is regarded as an unreliable value and the weighting of the corresponding constituent word in the conversion table T is updated to "0" (step S25).
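The averaging and confidence check of steps S21 to S25 can be sketched as follows. The data layout (constituent word → list of 0/1 presence flags, one per evaluated song) and the function name are hypothetical; the constant 1.65 is the 90% confidence coefficient from formula (1).

```python
import math

def update_weights(match_rows, nonmatch_rows, confidence_z=1.65):
    """Recompute conversion-table weights from match/non-match history (steps S21-S25).

    match_rows / nonmatch_rows: constituent word -> list of 0/1 presence flags.
    The difference AA - DA is kept as the weight only when |AA - DA| is at least
    the confidence limit width of formula (1); otherwise the weight is set to 0.
    """
    weights = {}
    for word in match_rows:
        n = len(match_rows[word])
        aa = sum(match_rows[word]) / n                             # step S21, FIG. 8
        da = sum(nonmatch_rows[word]) / len(nonmatch_rows[word])   # step S21, FIG. 9
        width = 2 * confidence_z * math.sqrt(aa * (1 - aa) / n)    # formula (1), step S22
        diff = aa - da
        weights[word] = diff if abs(diff) >= width else 0.0        # steps S23-S25
    return weights

# "Ai" appears in 5 of 20 suitable songs (AA = 0.25) and in 14 of 20
# unsuitable songs (DA = 0.70), as in the worked example of the text.
match = {"Ai": [1] * 5 + [0] * 15}
nonmatch = {"Ai": [1] * 14 + [0] * 6}
print(update_weights(match, nonmatch))  # "Ai" keeps the weight AA - DA ≈ -0.45
```

With these numbers the confidence limit width is 2 × 1.65 × √(0.25 × 0.75 / 20) ≈ 0.32, so |−0.45| clears the threshold and the difference is registered as the weight.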
[0084] Note that in the initial state of the information search device S, each piece of history information stores a preset initial value, and the number of songs covered by the history information is finite, so as a result old history information is overwritten with new history information. Consequently, as evaluation of the output songs progresses, the subjectivity specific to each user is gradually reflected in the weighting values of the conversion table T. Each conversion table T newly created to reflect the overwritten history information is stored in a memory (not shown) in the search processing unit 8, and the user can select the conversion table T used for conversion into song feature information (see step S4 in FIG. 3) according to individual preference (see step S18 in FIG. 3).
[0085] Next, the process of calculating the correlation values used for displaying the arrow 57 in the key figure 56 within the search word feature information figure 50 will be described with reference to FIGS. 8 and 9.
[0086] Specifically, this calculation is performed by the same processing as the update processing of the conversion table T described above.
[0087] In the calculation process described below, for a preset number of songs (40 songs in the case of FIGS. 8 and 9: 20 evaluated as suitable and 20 evaluated as unsuitable), the correlation value of each key for a given search word (the search word "heartwarming" in FIGS. 8 and 9) is calculated based on the keys of the songs evaluated as suitable for that search word and the keys of the songs evaluated as unsuitable for it. As for the key corresponding to a song, even if the song modulates partway through, a single main key is associated with each song, so in each horizontal row of FIG. 8 and FIG. 9(a) only one key takes the value "1" per song.
[0088] In the actual correlation value calculation process, first, for the match history information, for each key of the songs evaluated as suitable (each corresponding to one storage address in FIG. 8 and FIG. 9(a)), all of the "0" or "1" values in the vertical direction in FIG. 8 are added over all the songs (20 each) and divided by the total number of songs ("20" in the case of FIG. 8) to obtain the average value AA (see step S21 in FIG. 7). For example, for C major in the match history information, three songs included that key (in other words, have the value "1" in FIG. 8), so dividing by the total number of songs, "20", gives an average value AA of "0.15" for C major. This average value calculation is then executed for all keys.
[0089] Next, in the same manner, for the non-match history information, for each key of the songs evaluated as unsuitable, all of the "0" or "1" values in the vertical direction in FIG. 9(a) are added over all the songs and divided by the total number of songs to obtain the average value DA (see step S21 in FIG. 7). For example, for C major in the non-match history information, four songs included that key, so dividing by the total number of songs gives an average value DA of "0.20" for C major. This average value calculation is then executed for all keys.
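The averages AA and DA of paragraphs [0088] and [0089] amount to per-key column means over the 0/1 matrices of FIG. 8 and FIG. 9(a). A minimal sketch, assuming each history is represented as a list of per-song rows with one column per key (all names here are illustrative):

```python
def key_averages(history):
    """Column mean per key: history has one row per evaluated song and
    one column per key; exactly one entry per row is 1 (the song's main
    key), the rest are 0.  Returns the per-key average, i.e. AA for the
    match history or DA for the non-match history."""
    num_songs = len(history)
    return [sum(col) / num_songs for col in zip(*history)]

# Example: 20 suitable songs, 3 of them in the first key (C major in FIG. 8)
match_history = [[1, 0]] * 3 + [[0, 1]] * 17
aa = key_averages(match_history)  # aa[0] == 0.15, as computed in [0088]
```

The same function applied to the 20 unsuitable songs of FIG. 9(a) yields the DA values of paragraph [0089].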
[0090] Here, for each key, the larger the difference between the average value AA and the average value DA, the more likely that key is to be suitable (in other words, the larger the correlation value becomes).
[0091] However, the same statistical problem as in the case of updating the conversion table T also exists for the correlation value calculation, so here too the confidence interval width of each calculated average value is obtained, and it is confirmed whether the difference between the average value AA and the average value DA corresponding to each key (average value AA - average value DA) can be used as the weighting value in the history information. More specifically, assuming the confidence level of the interval to be 90%, the confirmation is performed using the confidence interval width calculated by formula (1) above (see step S22 in FIG. 7).
[0092] Next, once the confidence interval width has been calculated, it is used to confirm whether the absolute value of the average value AA minus the average value DA is equal to or greater than the calculated confidence interval width (see step S23 in FIG. 7).
[0093] Then, when the absolute value of the average value AA minus the average value DA is equal to or greater than the confidence interval width (step S23: YES in FIG. 7), this difference is regarded as a reliable value and is newly adopted as the correlation value TT for that key (see step S24 in FIG. 7 and FIG. 9(b)). On the other hand, when the absolute value of the average value AA minus the average value DA is less than the confidence interval width (step S23: NO in FIG. 7), this difference is regarded as an unreliable value and the correlation value is not updated (see step S25 in FIG. 7).
[0094] As in the case of the update processing of the conversion table T, in the initial state of the information search device S each piece of history information stores a preset initial value of the correlation value, and the number of songs covered by the history information is finite, so as a result old history information is overwritten with new history information.
[0095] As described above, according to the operation of the song search device S of the embodiment, a song is searched for by comparing the search word feature information 30 corresponding to the input search word with the song feature information 20 corresponding to each stored song. Therefore, a song suited to the input search word can be reliably retrieved, and songs that better match the user's subjectivity can be retrieved than when the input search word is used directly to search for songs.
[0096] In addition, since the search word feature information image 50 and the song feature information image 60 are displayed on the same screen of the display 12, the user can compare the content of the search word feature information 30 to be searched with the entered search word against the content of the song feature information 20 of the song actually extracted by that search word, and can grasp, as an image, the degree of similarity between the features of the entered search word and the features of the extracted song.
[0097] Furthermore, since the conversion table T used for generating the song feature information can be arbitrarily selected by the user from those stored in a memory (not shown) in the search processing unit 8, when the user does not like the songs actually extracted by a search word, the conversion table T itself can be arbitrarily selected from past tables and changed.
[0098] Furthermore, since the search word feature information image 50 and the song feature information image 60 are displayed on the display 12 simultaneously and using images of the same composition, the two can easily be compared visually.
[0099] Furthermore, since the degree of weighting of the song feature information in the search word feature information 30 is displayed by the change in gradation of the feature figures 51 to 55, the change in weighting can easily be recognized visually.
[0100] In addition, since the magnitude of each parameter indicating the acoustic features shown by the song feature information 20 and the search word feature information 30 is displayed so as to be visible for each parameter, the detailed magnitude relationships and the like of each parameter can be recognized visually.
[0101] Furthermore, since the magnitude of each piece of song feature information indicating the features of the lyrics shown by the song feature information 20 and the search word feature information 30 is displayed so as to be visible for each piece of song feature information, the detailed magnitude relationships and the like of each piece of song feature information can be recognized visually.

[0102] Furthermore, since the magnitude of each parameter is expressed by a difference in the color itself of the figure representing that parameter, the magnitude of each parameter can easily be recognized visually.
[0103] In addition, the search word feature information 30 is updated based on the evaluation information, and the sizes of the feature figures 51 to 55 indicating the weighting of the song feature information in the search word feature information 30 are displayed together with the search word feature information 30, so the weighting of the song feature information in the search word feature information 30 and the transition of the search word feature information 30 based on that weighting can be grasped visually.
[0104] In the embodiment described above, the difference in weighting of the song feature information for each search word was expressed by varying the gradation of the feature figures 51 to 55 in the search word feature information figure 50. Alternatively, as in the search word feature information figure 70 shown on the left of FIG. 10, a change in the length of feature figures 71 to 75 (corresponding to the feature figures 51 to 55 in the search word feature information figure 50 shown on the left of FIG. 5) can be used to indicate the degree of weighting of the corresponding song feature information. Here, the right of FIG. 10 shows a song feature information figure 80 having a display mode corresponding to the search word feature information figure 70, and its feature figures 81 to 85 (corresponding to the feature figures 61 to 65 in the song feature information figure 60 shown on the right of FIG. 5) are also triangles to match the search word feature information figure 70. In FIG. 10, members identical to those in the figures shown in FIG. 5 are indicated by the same reference numerals.
[0105] In the case shown in FIG. 10, since the magnitude of the weighting of the song feature information is displayed by the change in the size of the feature figures 71 to 75, the change in weighting can easily be recognized visually.
[0106] In the embodiment described above, the device may also be configured so that the user can change, by operating the input unit 9, the search word feature information 30 corresponding to the displayed search word feature information figure 50 or 70.
[0107] In this case, since the user can change the content of the search word feature information 30, songs better suited to the user's tastes and the like can be extracted by changing the search word feature information 30 according to the user's preference.
[0108] Furthermore, the device may be configured so that, by operating the input unit 9, both the search word feature information 30 corresponding to the displayed search word feature information image 50 or 70 and the weighting of the song feature information included in the search word feature information 30 can be changed.
[0109] In this case, by changing the weighting of the search word feature information 30 and the song feature information according to the user's preference, songs better suited to the user's tastes and the like can be extracted.
[0110] Furthermore, when the search word feature information figure 50 or 70 and the song feature information figure 60 or 80 are displayed simultaneously, the device may be configured so that the content of the search word feature information 30 corresponding to the displayed search word feature information figure 50 or 70 is replaced in the search processing unit 8 with the same content as the song feature information 20 corresponding to the song feature information image 60 or 80 displayed at that time, and is stored again in the search word feature information database 4.
[0111] In this case, when the user likes an extracted song, the search word feature information 30 corresponding to the entered search word can be replaced with the song feature information 20 of that song, so that when the same search word is entered again, the same song will be extracted again, and songs can be extracted more in line with the user's tastes and the like.
[0112] Furthermore, in the embodiment described above, the case where the present application is applied to the song search device S that stores and searches a plurality of songs was described. Besides this, the present application can also be applied to an image search device that stores still images or moving images and searches them according to the subjectivity of the user.
[0113] Furthermore, a program corresponding to the flowcharts shown in FIGS. 3 and 7 may be recorded on an information recording medium such as a flexible disk, or acquired and recorded via a network such as the Internet, and read out and executed using a general-purpose microcomputer or the like, whereby the general-purpose microcomputer can be used as the search processing unit 8 according to the embodiment.

Claims

[1] A song search device for searching for one or a plurality of songs from among a plurality of songs, comprising:

song feature information storage means for storing, identifiably for each song, song feature information indicating at least features of lyrics included in the song;

search word input means used for inputting a search word which indicates the song to be searched for and which consists of a word expressing subjectivity;

search song feature information storage means for storing, identifiably for each search word, search song feature information indicating at least features of the lyrics included in any of the songs to be searched for using the search word as being suited to the subjectivity expressed by the input search word;

conversion means for generating comparison target information, to be compared with input search song feature information which is the search song feature information corresponding to the input search word, by converting the stored song feature information using conversion information (a conversion table);

comparison means for comparing the input search song feature information with each piece of the generated comparison target information;

extraction means for extracting, based on the comparison results of the comparison means, the song corresponding to the song feature information from which the comparison target information most similar to the input search song feature information was generated, as the song corresponding to the input search word;

instruction input means used for inputting an instruction to change the conversion information; and

changing means for changing the conversion information based on the input instruction,

wherein, when the conversion information has been changed, the conversion means converts the song feature information into the comparison target information using the changed conversion information.
[2] The song search device according to claim 1, further comprising:

first display control means for causing display means to display an input search song feature information image which is an image showing the content of the input search song feature information; and

second display control means for causing the display means to display an extracted song feature information image which is an image showing the content of the song feature information corresponding to the extracted song,

wherein the first display control means and the second display control means display the input search song feature information image and the extracted song feature information image on the display means when the instruction is input via the instruction input means.
[3] The song search device according to claim 2, wherein the first display control means and the second display control means cause the display means to display the input search song feature information image and the extracted song feature information image simultaneously, using images of the same composition for the two images.
[4] The song search device according to any one of claims 1 to 3, wherein the song feature information and the search song feature information each include, as parameters indicating the features of the lyrics, a plurality of symbolic parameters which subjectively symbolize the content of the lyrics constituting each song and which correspond to mutually different subjective impressions, and

the first display control means and the second display control means visibly display the magnitude of each symbolic parameter in each of the extracted song feature information image and the input search song feature information image.
[5] A song search method executed in a song search device which searches for one or a plurality of songs from among a plurality of songs and which comprises:

song feature information storage means for storing, identifiably for each song, song feature information indicating at least features of lyrics included in the song;

search song feature information storage means for storing, identifiably for each search word, search song feature information indicating at least features of the lyrics included in any of the songs to be searched for using a search word which indicates the song to be searched for and which consists of a word expressing subjectivity, as being suited to the subjectivity expressed by that search word; and

display means,

the method comprising:

a search word input step of inputting the search word;

a conversion step of generating comparison target information, to be compared with input search song feature information which is the search song feature information corresponding to the input search word, by converting the stored song feature information using conversion information (a conversion table);

a comparison step of comparing the input search song feature information with each piece of the generated comparison target information;

an extraction step of extracting, based on the comparison results in the comparison step, the song corresponding to the song feature information from which the comparison target information most similar to the input search song feature information was generated, as the song corresponding to the input search word;

an instruction input step of inputting an instruction to change the conversion information; and

a changing step of changing the conversion information based on the input instruction,

wherein, when the conversion information has been changed, the song feature information is converted into the comparison target information in the conversion step using the changed conversion information.
[6] A song search program causing a computer to function as the song search device according to any one of claims 1 to 4.
[7] An information recording medium on which the song search program according to claim 6 is recorded so as to be readable by the computer.
PCT/JP2006/306663 2005-03-31 2006-03-30 Music composition search device, music composition search method, music composition search program, and information recording medium WO2006106825A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007512854A JP4459269B2 (en) 2005-03-31 2006-03-30 Song search device, song search method, song search program, and information recording medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005102937 2005-03-31
JP2005-102937 2005-03-31

Publications (1)

Publication Number Publication Date
WO2006106825A1 true WO2006106825A1 (en) 2006-10-12

Family

ID=37073381

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/306663 WO2006106825A1 (en) 2005-03-31 2006-03-30 Music composition search device, music composition search method, music composition search program, and information recording medium

Country Status (2)

Country Link
JP (1) JP4459269B2 (en)
WO (1) WO2006106825A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997033424A2 (en) * 1996-03-04 1997-09-12 Philips Electronics N.V. A user-oriented multimedia presentation system for multiple presentation items that each behave as an agent
JP2004326840A (en) * 2003-04-21 2004-11-18 Pioneer Electronic Corp Music data selection device, music data selection method, music data selection program, and information recording medium recorded with the program


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008234519A (en) * 2007-03-23 2008-10-02 Toyota Central R&D Labs Inc Information retrieval system, information retrieval device, information retrieval method, and its program
WO2011067880A1 (en) * 2009-12-04 2011-06-09 株式会社ソニー・コンピュータエンタテインメント Music recommendation system, information processing device, and information processing method
JP2011118710A (en) * 2009-12-04 2011-06-16 Sony Computer Entertainment Inc Musical piece recommendation system, apparatus and method for processing information
CN102640149A (en) * 2009-12-04 2012-08-15 索尼计算机娱乐公司 Music recommendation system, information processing device, and information processing method
US9047320B2 (en) 2009-12-04 2015-06-02 Sony Corporation Music recommendation system, information processing device, and information processing method

Also Published As

Publication number Publication date
JPWO2006106825A1 (en) 2008-09-11
JP4459269B2 (en) 2010-04-28

Similar Documents

Publication Publication Date Title
JP4622808B2 (en) Music classification device, music classification method, music classification program
JP2005173938A (en) Musical piece search device, method and program and information recording media
US20090254554A1 (en) Music searching system and method
CN110010159B (en) Sound similarity determination method and device
TW200813759A (en) A method and apparatus for accessing an audio file from a collection of audio files using tonal matching
WO2017056982A1 (en) Music search method and music search device
JP2008084193A (en) Instance selection device, instance selection method and instance selection program
JP4196052B2 (en) Music retrieval / playback apparatus and medium on which system program is recorded
EP1531405B1 (en) Information search apparatus, information search method, and information recording medium on which information search program is recorded
US20220406283A1 (en) Information processing apparatus, information processing method, and information processing program
US11410706B2 (en) Content pushing method for display device, pushing device and display device
JP5344756B2 (en) Information processing apparatus, information processing method, and program
WO2006106825A1 (en) Music composition search device, music composition search method, music composition search program, and information recording medium
JP5618150B2 (en) Information processing apparatus, information processing method, and program
JP2002049627A (en) Automatic search system for content
JP2006317872A (en) Portable terminal device and musical piece expression method
TWI808038B (en) Media file selection method and service system and computer program product
US20220406280A1 (en) Information processing apparatus, information processing method, and information processing program
JP5713775B2 (en) Music search device
WO2021100493A1 (en) Information processing device, information processing method, and program
JP7243447B2 (en) VOICE ACTOR EVALUATION PROGRAM, VOICE ACTOR EVALUATION METHOD, AND VOICE ACTOR EVALUATION SYSTEM
JP2023162962A (en) Karaoke device
KR20100042705A (en) Method and apparatus for searching audio contents
JP2023162958A (en) Karaoke device
JP2023148576A (en) Method, program and system for managing playlist

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2007512854

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 06730611

Country of ref document: EP

Kind code of ref document: A1