EP1217603A1 - Verfahren zur Identifizierung von Musikstücken - Google Patents
Verfahren zur Identifizierung von Musikstücken
- Publication number
- EP1217603A1 EP1217603A1 EP01000660A EP01000660A EP1217603A1 EP 1217603 A1 EP1217603 A1 EP 1217603A1 EP 01000660 A EP01000660 A EP 01000660A EP 01000660 A EP01000660 A EP 01000660A EP 1217603 A1 EP1217603 A1 EP 1217603A1
- Authority
- EP
- European Patent Office
- Prior art keywords
- music
- melody
- pieces
- analysis device
- text section
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0033—Recording/reproducing or transmission of music for electrophonic musical instruments
- G10H1/0041—Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
- G10H2240/131—Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
Definitions
- The invention relates to a method for identifying pieces of music and to an analysis device for performing such a method.
- At least one section of a melody and/or a text of the piece of music to be identified, for example the first bars or a chorus, is entered into an analysis device.
- This analysis device determines matches between the melody and/or text section and other pieces of music, or parts thereof, that are known to it.
- The analysis device "knows" all pieces of music to which it has access and whose associated data, such as title, artist, composer etc., can be queried.
- These pieces of music can be stored in one or in several databases. For example, they may be stored in different databases of individual production companies, which the analysis device queries via a network such as the Internet.
- The matches are determined by comparing the melody and/or text section with the known pieces of music (or parts thereof), for example using one or more pattern-classification algorithms.
- Based on the determined matches, at least one of the known pieces of music is then selected, provided a piece of music was found at all that exhibits a defined minimum level of correspondence with the entered melody and/or text section.
- Identification data such as the title, the artist, the composer or other information is then output.
- The selected piece of music itself can also be output, for example acoustically, for verification.
- If the user hears the piece being played back, he can check for himself whether it is the piece he is looking for, and only then have the identification data output. If none of the pieces of music was selected, for example because none reached the defined minimum level of correspondence, corresponding information is output instead, for example the text "No identification possible".
- To determine the correspondence, certain features of the melody and/or text section are extracted. From these features, a feature set characterizing the melody and/or text section is determined. Such a feature set corresponds to a "fingerprint" of the respective piece of music.
- The characterizing feature set is then compared with feature sets that each characterize the pieces of music known to the analysis device. This has the advantage that the amount of data to be processed is significantly smaller, which speeds up the process overall.
- The database then no longer has to store the complete pieces of music, or parts of them with all their information; only the characterizing feature sets are stored, so that the required storage space is significantly reduced.
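The fingerprint idea described above can be sketched in code. The patent does not specify which features make up the feature set; the choice here (normalized per-band energies), the band count, and all names are illustrative assumptions only.

```python
# Illustrative sketch of a "fingerprint": reduce a signal to a small,
# fixed-length feature set instead of storing or comparing raw audio.
# The feature choice (normalized per-chunk energy) is an assumption.

def band_energies(samples, n_bands=4):
    """Split the sample list into n_bands chunks and return the mean
    absolute amplitude of each chunk, normalized so the values sum to 1."""
    chunk = max(1, len(samples) // n_bands)
    energies = []
    for i in range(n_bands):
        part = samples[i * chunk:(i + 1) * chunk]
        energies.append(sum(abs(x) for x in part) / max(1, len(part)))
    total = sum(energies) or 1.0
    return [e / total for e in energies]

signal = [0.1, 0.9, -0.8, 0.2, 0.05, -0.04, 0.5, -0.6]
print(band_energies(signal))  # four values summing to 1.0
```

A real system would use far richer features (spectral, rhythmic, textual), but the storage argument is the same: a handful of numbers per title replaces the full recording.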
- An entered melody and/or text section can be fed to a speech recognizer.
- The respective text can also be extracted and supplied separately to the speech recognizer.
- The words and/or sentences recognized by this speech recognizer are compared with the lyrics of the various pieces of music.
- This requires that the lyrics also be stored as a feature in the databases.
- To speed up speech recognition it makes sense to specify the language of the entered text section, so that the speech recognition system only has to use the libraries for the respective language and does not search unnecessarily in libraries of other languages.
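The lyric comparison step can be sketched as a simple token-overlap score between the recognizer's output and stored lyrics. The lyric database, the titles, and the scoring rule are illustrative assumptions, not the patent's specification.

```python
# Sketch: compare words recognized by a speech recognizer against
# stored lyrics using token overlap. Database contents are invented.

def overlap_score(recognized_words, lyric_text):
    """Fraction of recognized words that occur in the stored lyric."""
    lyric_tokens = set(lyric_text.lower().split())
    hits = sum(1 for w in recognized_words if w.lower() in lyric_tokens)
    return hits / max(1, len(recognized_words))

lyrics_db = {
    "Title A": "we will we will rock you",
    "Title B": "yesterday all my troubles seemed so far away",
}
query = ["rock", "you"]  # words the recognizer produced
best = max(lyrics_db, key=lambda t: overlap_score(query, lyrics_db[t]))
print(best)  # Title A
```

Restricting the recognizer to one language, as the text suggests, shrinks the vocabulary it must consider and therefore speeds up exactly this step.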
- The melody and/or text section can also be fed to a music recognizer which, for example, compares the recognized rhythms and/or pitches (intervals) with the characteristic rhythms and/or intervals of the stored pieces of music and in this way finds a piece that matches the melody.
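Interval-based matching, as mentioned above, can be sketched as follows. Comparing pitch *differences* rather than absolute pitches makes the match independent of the key in which the user hums; the melody database and the distance rule are illustrative assumptions.

```python
# Sketch: match a hummed melody by its pitch intervals (differences
# between successive MIDI note numbers). Database entries are invented.

def intervals(notes):
    return [b - a for a, b in zip(notes, notes[1:])]

def interval_distance(query, candidate):
    """Sum of absolute interval differences over the shorter length."""
    q, c = intervals(query), intervals(candidate)
    n = min(len(q), len(c))
    return sum(abs(q[i] - c[i]) for i in range(n))

melody_db = {
    "Title 1": [60, 62, 64, 65, 67],          # C D E F G
    "Title 2": [60, 60, 67, 67, 69, 69, 67],
}
hummed = [62, 64, 66, 67, 69]  # same contour as Title 1, two semitones up
best = min(melody_db, key=lambda t: interval_distance(hummed, melody_db[t]))
print(best)  # Title 1
```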
- An analysis device for performing such a method must, on the one hand, have means for entering a section of a melody and/or a text of the piece of music to be identified. It must also have a memory with a database containing various pieces of music or parts thereof, or means of access to at least one such memory, for example an Internet connection for accessing other memories located on the Internet. The analysis device further needs a comparator device for determining the correspondence of the melody and/or text section with the different pieces of music or parts thereof, and a selection device for selecting at least one of the pieces of music on the basis of the determined matches. Finally, the analysis device needs means for outputting identification data of the selected piece of music and/or the selected piece of music itself.
- Such an analysis device can be constructed as a stand-alone device which, as a means for entering the melody and/or text section, contains for example a microphone into which the user can speak the text section known to him, or sing, whistle or hum a corresponding melody.
- A piece of music can of course also be played in front of the microphone.
- The means for output preferably comprise an acoustic output device, for example a loudspeaker, with which the selected piece of music or several selected pieces of music are output wholly or partially for verification.
- The identification data can also be output acoustically via this output device.
- The analysis device may also have an optical output device, for example a display, on which the identification data are output.
- The analysis device may further have a corresponding control device for the verification of output pieces of music, for selecting pieces of music offered for output, or for entering additional information helpful for identification, for example the language of the text.
- Such a self-sufficient device can, for example, be set up in media shops and used there to advise customers.
- Alternatively, the analysis device has, as means for entering the melody and/or text section, an interface for receiving corresponding data from a terminal.
- The means for outputting the identification data and/or the selected piece of music then comprise an interface for sending corresponding data to a terminal device.
- The analysis device itself can then be located anywhere. The user enters the melody and/or text section into a communication terminal, which transfers it via a communication network to the analysis device.
- The communication terminal into which the melody and/or text section is entered can be a mobile communication terminal, for example a mobile radio device.
- A mobile radio device has a microphone anyway, as well as the means necessary to send the recorded acoustic signals over a communication network, here a cellular network, to any other device.
- This method has the advantage that the user, as soon as he hears a piece of music, for example in a discotheque or as background music in a department store, can establish a connection to the analysis device via his mobile radio device and "play" the currently playing music to the analysis device through the mobile device. With such a section of the original music, identification is considerably easier than with a melody and/or text section sung or spoken by the user, which may be significantly distorted.
- The identification data, or the selected piece of music or a part thereof, is likewise output via a corresponding interface, through which the respective data are then sent to a device of the user.
- This device can be the same terminal, for example the user's mobile device, on which the melody and/or text section was entered. This can happen "online" or "offline".
- The identification data such as title and artist, as well as any selectable offers for output, can also be transferred to the display of the terminal, for example by SMS.
- The selection of an offered piece of music, but also other control commands or additional information for the analysis device, can be entered for example via the keyboard of the terminal.
- The data can also be entered via a natural-language dialog, which requires a corresponding voice interface, i.e. voice recognition and voice output, on the part of the analysis device.
- The search can also be carried out offline, i.e. the user or the analysis device interrupts the connection after the melody and/or text section and any further commands and information have been entered.
- After the analysis device has reached a result, it sends it back to the user's communication terminal, for example by SMS or by a call via a voice channel.
- With such an offline method, it is also possible for the user to specify another communication terminal, for example his home computer or an email address, to which the result is sent.
- The result can also be sent as an HTML document or in a similar form.
- The shipping address, i.e. the communication terminal to which the results are to be sent, can be specified by appropriate commands and information either before or after entering the music and/or text section.
- It is also possible for the respective user to register explicitly beforehand with a service provider who operates the analysis device, and for the necessary data to be stored there.
- Optionally, in addition to the selected piece of music or the associated identification data, other pieces of music or their identification data can be output or offered for output that are similar to the selected piece. That is, music titles similar in style to the recognized title are given as additional information, to give the user the opportunity to get to know further titles matching his taste, which he may then wish to acquire.
- The similarity between two different pieces of music can be determined on the basis of psychoacoustic distance measures, e.g. particularly strong or weak bass, certain frequency profiles within the melody, etc.
- An alternative way to determine the similarity of two pieces of music is to use a distance matrix compiled with the aid of listening experiments and/or market analyses, for example an analysis of buying behavior.
- A mobile radio device 2 is used by a user to connect to the analysis device 1.
- A melody and/or text section MA of a piece of music currently being played by an arbitrary music source 5 near the user is recorded by a microphone of the mobile device 2.
- The melody and/or text section MA is transferred via a mobile radio network to the analysis device 1, which must have a corresponding connection to the mobile network or to a telephone landline and can accordingly be dialed by the user via this telephone network.
- A commercially available mobile device 2 can be used, modified if necessary to achieve a better transmission quality.
- Control of the analysis device 1 via the mobile radio device 2 can be performed via appropriate menu controls using the buttons (not shown) of the mobile radio device 2.
- A voice-controlled menu can also be used.
- The analysis device 1 extracts certain features from the melody and/or text section MA. From these features, a feature set characterizing the melody and/or text section MA is then determined.
- The analysis device 1 is connected to a memory 4 with a database that contains corresponding feature sets MS, each characterizing a different piece of music. This database also contains the required identification data, for example the title and artist of the respective piece of music.
- Correlation coefficients between the feature sets to be compared are determined. The level of these correlation coefficients represents the degree of correspondence between the respective feature sets.
- The feature set MS stored in the memory 4 with the highest correlation coefficient belongs to the piece of music that has the most matches with the melody and/or text section MA entered at the mobile device 2. This piece of music is then selected as the identified piece, and the associated identification data ID are returned from the analysis device 1 "online" to the mobile radio device 2 and output there, for example on the display.
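The correlation-based selection can be sketched as follows, using the Pearson coefficient as one plausible choice of correlation measure; the stored feature sets, the query values, and the threshold are illustrative assumptions.

```python
# Sketch: Pearson correlation between a query feature set and each stored
# feature set MS; the highest coefficient wins, subject to a minimum
# threshold (the "defined minimum level of correspondence").
from math import sqrt

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

stored = {  # feature sets MS in memory 4 (values invented)
    "Title 1": [0.9, 0.1, 0.3, 0.7],
    "Title 2": [0.2, 0.8, 0.6, 0.1],
}
query = [0.85, 0.15, 0.25, 0.75]  # feature set of the entered section MA
MIN_MATCH = 0.8  # assumed threshold

best = max(stored, key=lambda t: pearson(query, stored[t]))
if pearson(query, stored[best]) >= MIN_MATCH:
    print(best)  # Title 1
else:
    print("No identification possible")
```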
- When the section is recorded directly from a music source 5, the identification task is simplified compared to normal speech or pattern recognition tasks, since it can be assumed that pieces of music are always played at almost the same speed, so that at least a fixed common time grid between the entered music and/or text section and the correct piece available for selection can be assumed.
- Figure 2 shows a slightly different method in which the identification is carried out "offline".
- Here too, the piece of music to be identified, or a melody and/or text section MA of it, is entered from an external music source 5 via the user's mobile device 2, and the information is then sent to the analysis device 1. The analysis itself, with the prior determination of a feature set MS characterizing the melody and/or text section, also proceeds as in the first embodiment.
- However, the result of the identification is not sent back to the user's mobile device 2. Instead, this result is sent via the Internet, by email or as an HTML page, to a PC 3 of the user or to a PC or email address specified by him.
- The respective piece of music MT itself, or at least a portion of it, can also be transferred to the PC so that the user can listen to this piece of music for verification.
- These pieces of music MT (or the sections) are saved in the memory 4 together with the feature sets characterizing them.
- Order documents for a CD containing the desired piece of music, advertising, and additional information can also be sent.
- Additional information can, for example, consist in offering the user further music titles that are similar to the identified one.
- The similarity is determined here using a distance matrix AM as shown in Figure 3.
- The elements M of this distance matrix AM are similarity coefficients, i.e. values that specify the degree of similarity between two pieces of music.
- Every piece of music is always one hundred percent similar to itself, so that a value of 1.0 is entered in the corresponding diagonal fields.
- In the example, the piece of music with title 1 is particularly similar to the pieces with title 3 and title 5.
- Such a distance matrix AM can also be stored in the memory 4. It can be based, for example, on subjective listening experiments with a larger number of test listeners or on an analysis of buying behavior.
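Recommending further titles from such a matrix can be sketched as follows. The coefficient values are invented for illustration; only the structural properties (symmetric matrix, diagonal of 1.0) follow from the description above.

```python
# Sketch: recommend further titles from a symmetric similarity matrix AM.
# Each entry is a similarity coefficient in [0, 1]; the diagonal is 1.0
# because every piece is fully similar to itself. Values are invented.

titles = ["Title 1", "Title 2", "Title 3", "Title 4", "Title 5"]
AM = [
    [1.0, 0.2, 0.9, 0.1, 0.8],
    [0.2, 1.0, 0.3, 0.6, 0.2],
    [0.9, 0.3, 1.0, 0.2, 0.7],
    [0.1, 0.6, 0.2, 1.0, 0.1],
    [0.8, 0.2, 0.7, 0.1, 1.0],
]

def similar_titles(identified, k=2):
    """Return the k titles most similar to the identified one."""
    i = titles.index(identified)
    ranked = sorted(
        (j for j in range(len(titles)) if j != i),
        key=lambda j: AM[i][j],
        reverse=True,
    )
    return [titles[j] for j in ranked[:k]]

print(similar_titles("Title 1"))  # ['Title 3', 'Title 5']
```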
- The analysis device 1 can be arranged at any location. It merely has to have the necessary interfaces for connection with conventional mobile devices, or an Internet connection.
- The analysis device 1 is shown in the figures as one coherent device.
- However, various functions of the analysis device 1 can also be distributed over different, correspondingly networked devices.
- The functions of the analysis device can largely, or if necessary even completely, be realized in the form of software on suitable computers or servers with sufficient computing and storage capacity.
- It is not necessary to use a single central memory 4 holding one coherent database; instead, a large number of memories positioned in the most diverse places can be used, which the analysis device 1 accesses for example via the Internet or another network.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Reverberation, Karaoke And Other Acoustics (AREA)
Abstract
Description
Claims (17)
- Method for identifying pieces of music, comprising the following steps: entering at least a section (MA) of a melody and/or of a text of the piece of music to be identified into an analysis device (1); determining matches of the melody and/or text section (MA) with pieces of music (MT), or parts thereof, known to the analysis device (1); selecting at least one of the known pieces of music (MT) on the basis of the determined matches, provided a defined minimum level of correspondence is present; outputting identification data (ID) of the selected piece of music (MT) and/or outputting at least a part of the selected piece of music (MT) itself, or, if none of the pieces of music (MT) was selected, outputting corresponding information.
- Method according to claim 1,
characterized in that a plurality of pieces of music and/or their identification data, for which the most matches were determined, are output and/or offered for output. - Method according to claim 1 or 2,
characterized in that, to determine the matches, certain features of the melody and/or text section (MA) are extracted, a feature set characterizing the melody and/or text section (MA) is then determined from the extracted features, and this characterizing feature set is compared with feature sets (MS) which each characterize the known pieces of music (MT). - Method according to claim 3,
characterized in that, to compare the characterizing feature set of the melody and/or text section (MA) with the feature sets (MS) stored in the database, correlation coefficients between the feature sets to be compared are determined, whose level represents the matches between the respective feature sets. - Method according to one of claims 1 to 4,
characterized in that the entered melody and/or text section, or a text extracted from it, is fed to a speech recognizer, and words and/or sentences recognized in the speech recognition are compared with texts of the various pieces of music. - Method according to claim 5,
characterized in that the language of the entered text section is specified for the speech recognition. - Method according to one of claims 1 to 6,
characterized in that the melody and/or text section (MA) is entered by a user into a communication terminal (2) and transmitted via a communication network to the analysis device (1), and a selected piece of music (MT) and/or its identification data (ID) is transmitted for output to a communication terminal (2, 3) determined by the user. - Method according to claim 7,
characterized in that the communication terminal (2) into which the melody and/or text section (MA) is entered is a mobile communication terminal (2). - Method according to claim 7 or 8,
characterized in that the selected piece of music (MT) and/or its identification data (ID) is transmitted back for output to the communication terminal (2) into which the melody and/or text section (MA) was entered. - Method according to one of claims 1 to 9,
characterized in that, in addition to the selected piece(s) of music and/or the associated identification data, at least one further piece of music and/or its identification data is output and/or offered for output that is similar to the selected piece(s) of music. - Method according to claim 10,
characterized in that the similarity between two pieces of music is determined on the basis of psychoacoustic distance measures. - Method according to claim 10 or 11,
characterized in that the similarity between two pieces of music is determined on the basis of a distance matrix (AM) compiled with the aid of listening experiments and/or market analyses (analysis of buying behavior). - Analysis device (1) for performing a method according to one of claims 1 to 12, comprising: means for entering at least a section (MA) of a melody and/or of a text of the piece of music to be identified; a memory (4) with a database containing various pieces of music or parts thereof, or means for accessing at least one such memory; a comparator device for determining matches of the melody and/or text section (MA) with the various pieces of music (MT) or the parts thereof; a selection device for selecting at least one of the pieces of music (MT) on the basis of the determined matches, provided a defined minimum level of correspondence is present; and means for outputting identification data (ID) of the selected piece of music (MT) and/or the selected piece of music (MT) itself.
- Analysis device according to claim 13,
characterized in that the analysis device has means for extracting certain features of the melody and/or text section (MA) and for determining, from the extracted features, a feature set characterizing the melody and/or text section (MA), and in that the memory (4) contains, in a database, corresponding feature sets (MS) which each characterize the pieces of music (MT). - Analysis device according to claim 13 or 14,
characterized in that the means for entering the melody and/or text section comprise a microphone, and the means for outputting the identification data and/or the selected piece of music comprise an acoustic output unit and/or an optical output unit. - Analysis device according to one of claims 13 to 15,
characterized in that the means for entering the melody and/or text section (MA) comprise an interface for receiving corresponding data from a terminal (2), and the means for outputting the identification data (ID) and/or the selected piece of music (MT) comprise an interface for sending corresponding data to a terminal (2, 3). - Analysis device according to one of claims 13 to 16,
characterized by means for selecting further pieces of music that are similar to the selected piece of music.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE10058811A DE10058811A1 (de) | 2000-11-27 | 2000-11-27 | Verfahren zur Identifizierung von Musikstücken |
DE10058811 | 2000-11-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1217603A1 true EP1217603A1 (de) | 2002-06-26 |
Family
ID=7664809
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP01000660A Withdrawn EP1217603A1 (de) | 2000-11-27 | 2001-11-23 | Verfahren zur Identifizierung von Musikstücken |
Country Status (6)
Country | Link |
---|---|
US (1) | US20020088336A1 (de) |
EP (1) | EP1217603A1 (de) |
JP (1) | JP4340411B2 (de) |
KR (2) | KR20020041321A (de) |
CN (1) | CN1220175C (de) |
DE (1) | DE10058811A1 (de) |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7711564B2 (en) | 1995-07-27 | 2010-05-04 | Digimarc Corporation | Connected audio and other media objects |
US6505160B1 (en) * | 1995-07-27 | 2003-01-07 | Digimarc Corporation | Connected audio and other media objects |
US20050038819A1 (en) * | 2000-04-21 | 2005-02-17 | Hicken Wendell T. | Music Recommendation system and method |
US7013301B2 (en) * | 2003-09-23 | 2006-03-14 | Predixis Corporation | Audio fingerprinting system and method |
US20060217828A1 (en) * | 2002-10-23 | 2006-09-28 | Hicken Wendell T | Music searching system and method |
US8121843B2 (en) | 2000-05-02 | 2012-02-21 | Digimarc Corporation | Fingerprint methods and systems for media signals |
US8205237B2 (en) | 2000-09-14 | 2012-06-19 | Cox Ingemar J | Identifying works, using a sub-linear time search, such as an approximate nearest neighbor search, for initiating a work-based action, such as an action on the internet |
US7248715B2 (en) * | 2001-04-06 | 2007-07-24 | Digimarc Corporation | Digitally watermarking physical media |
US7046819B2 (en) | 2001-04-25 | 2006-05-16 | Digimarc Corporation | Encoded reference signal for digital watermarks |
US7824029B2 (en) | 2002-05-10 | 2010-11-02 | L-1 Secure Credentialing, Inc. | Identification card printer-assembler for over the counter card issuing |
CN1703734A (zh) * | 2002-10-11 | 2005-11-30 | 松下电器产业株式会社 | 从声音确定音符的方法和装置 |
GB0307474D0 (en) * | 2002-12-20 | 2003-05-07 | Koninkl Philips Electronics Nv | Ordering audio signals |
BRPI0407870A (pt) * | 2003-02-26 | 2006-03-01 | Koninkl Philips Electronics Nv | tratamento de silêncio digital na geração de impressão digital de áudio |
US7606790B2 (en) * | 2003-03-03 | 2009-10-20 | Digimarc Corporation | Integrating and enhancing searching of media content and biometric databases |
EP1634191A1 (de) * | 2003-05-30 | 2006-03-15 | Koninklijke Philips Electronics N.V. | Suche und speicherung von fingerabdrücken von medienobjekten |
JP5279270B2 (ja) | 2004-08-06 | 2013-09-04 | ディジマーク コーポレイション | 携帯コンピューティング装置における高速信号検出および分散コンピューティング |
US20060212149A1 (en) * | 2004-08-13 | 2006-09-21 | Hicken Wendell T | Distributed system and method for intelligent data analysis |
KR20070116853A (ko) * | 2005-03-04 | 2007-12-11 | 뮤직아이피 코포레이션 | 플레이리스트를 작성하기 위한 스캔 셔플 |
US7613736B2 (en) * | 2005-05-23 | 2009-11-03 | Resonance Media Services, Inc. | Sharing music essence in a recommendation system |
JP4534926B2 (ja) * | 2005-09-26 | 2010-09-01 | ヤマハ株式会社 | 画像表示装置及びプログラム |
CN101292280B (zh) * | 2005-10-17 | 2015-04-22 | 皇家飞利浦电子股份有限公司 | 导出音频输入信号的一个特征集的方法 |
EP1785891A1 (de) * | 2005-11-09 | 2007-05-16 | Sony Deutschland GmbH | Musikabfrage mittels 3D-Suchalgorithmus |
JP4534967B2 (ja) * | 2005-11-21 | 2010-09-01 | ヤマハ株式会社 | 音色及び/又は効果設定装置並びにプログラム |
US7915511B2 (en) * | 2006-05-08 | 2011-03-29 | Koninklijke Philips Electronics N.V. | Method and electronic device for aligning a song with its lyrics |
US7985911B2 (en) | 2007-04-18 | 2011-07-26 | Oppenheimer Harold B | Method and apparatus for generating and updating a pre-categorized song database from which consumers may select and then download desired playlists |
JP5135931B2 (ja) * | 2007-07-17 | 2013-02-06 | ヤマハ株式会社 | 楽曲加工装置およびプログラム |
KR101039762B1 (ko) * | 2009-11-11 | 2011-06-09 | 주식회사 금영 | 가사 데이터를 이용한 노래반주기의 곡 검색방법 |
US9280598B2 (en) * | 2010-05-04 | 2016-03-08 | Soundhound, Inc. | Systems and methods for sound recognition |
US8584197B2 (en) * | 2010-11-12 | 2013-11-12 | Google Inc. | Media rights management using melody identification |
US8584198B2 (en) * | 2010-11-12 | 2013-11-12 | Google Inc. | Syndication including melody recognition and opt out |
CN102419998B (zh) * | 2011-09-30 | 2013-03-20 | 广州市动景计算机科技有限公司 | 一种音频处理方法及系统 |
DE102011087843B4 (de) * | 2011-12-06 | 2013-07-11 | Continental Automotive Gmbh | Verfahren und System zur Auswahl mindestens eines Datensatzes aus einer relationalen Datenbank |
DE102013009569B4 (de) * | 2013-06-07 | 2015-06-18 | Audi Ag | Verfahren zum Betreiben eines Infotainmentsystems zum Beschaffen einer Wiedergabeliste für eine Audiowiedergabe in einem Kraftfahrzeug, Infotainmentsystem sowie Kraftwagen umfassend ein Infotainmentsystem |
US10133537B2 (en) * | 2014-09-25 | 2018-11-20 | Honeywell International Inc. | Method of integrating a home entertainment system with life style systems which include searching and playing music using voice commands based upon humming or singing |
CN104867492B (zh) * | 2015-05-07 | 2019-09-03 | 科大讯飞股份有限公司 | 智能交互系统及方法 |
US10129314B2 (en) * | 2015-08-18 | 2018-11-13 | Pandora Media, Inc. | Media feature determination for internet-based media streaming |
DE102016204183A1 (de) * | 2016-03-15 | 2017-09-21 | Bayerische Motoren Werke Aktiengesellschaft | Verfahren zur Musikauswahl mittels Gesten- und Sprachsteuerung |
JP2019036191A (ja) * | 2017-08-18 | 2019-03-07 | ヤフー株式会社 | 判定装置、判定方法及び判定プログラム |
CN109377988B (zh) * | 2018-09-26 | 2022-01-14 | 网易(杭州)网络有限公司 | 用于智能音箱的交互方法、介质、装置和计算设备 |
US10679604B2 (en) * | 2018-10-03 | 2020-06-09 | Futurewei Technologies, Inc. | Method and apparatus for transmitting audio |
CN116259292B (zh) * | 2023-03-23 | 2023-10-20 | 广州资云科技有限公司 | 基调和音阶的识别方法、装置、计算机设备和存储介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5210820A (en) * | 1990-05-02 | 1993-05-11 | Broadcast Data Systems Limited Partnership | Signal recognition system and method |
US5250745A (en) * | 1991-07-31 | 1993-10-05 | Ricos Co., Ltd. | Karaoke music selection device |
US5874686A (en) | 1995-10-31 | 1999-02-23 | Ghias; Asif U. | Apparatus and method for searching a melody |
EP0944033A1 (de) | 1998-03-19 | 1999-09-22 | Tomonari Sonoda | Vorrichtung und Verfahren zum Wiederauffinden von Melodien |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2897659B2 (ja) * | 1994-10-31 | 1999-05-31 | ヤマハ株式会社 | カラオケ装置 |
JP2000187671A (ja) | 1998-12-21 | 2000-07-04 | Tomoya Sonoda | ネットワ―クを利用した歌声による曲検索システム及び検索時に用いる歌声の入力端末装置 |
JP2002049627A (ja) | 2000-08-02 | 2002-02-15 | Yamaha Corp | コンテンツの自動検索システム |
2000
- 2000-11-27 DE DE10058811A patent/DE10058811A1/de not_active Ceased

2001
- 2001-11-23 CN CNB011456094A patent/CN1220175C/zh not_active Expired - Fee Related
- 2001-11-23 EP EP01000660A patent/EP1217603A1/de not_active Withdrawn
- 2001-11-26 JP JP2001359416A patent/JP4340411B2/ja not_active Expired - Fee Related
- 2001-11-27 KR KR1020010074285A patent/KR20020041321A/ko active Search and Examination
- 2001-11-27 US US09/995,460 patent/US20020088336A1/en not_active Abandoned

2008
- 2008-12-26 KR KR1020080134560A patent/KR100952186B1/ko not_active IP Right Cessation
Also Published As
Publication number | Publication date |
---|---|
JP2002196773A (ja) | 2002-07-12 |
JP4340411B2 (ja) | 2009-10-07 |
KR20020041321A (ko) | 2002-06-01 |
CN1220175C (zh) | 2005-09-21 |
KR20090015012A (ko) | 2009-02-11 |
KR100952186B1 (ko) | 2010-04-09 |
US20020088336A1 (en) | 2002-07-11 |
CN1356689A (zh) | 2002-07-03 |
DE10058811A1 (de) | 2002-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1217603A1 (de) | Method for identifying pieces of music | |
DE60120417T2 (de) | Method for searching an audio database | |
DE60210295T2 (de) | Method and device for speech analysis | |
Schubert | Continuous measurement of self-report emotional response to music. | |
DE60313706T2 (de) | Speech recognition and response system, speech recognition and response program, and associated recording medium | |
DE602004006641T2 (de) | Audio dialogue system and voice-controlled browsing method | |
Lychner | An empirical study concerning terminology relating to aesthetic response to music | |
Zoghaib | Persuasion of voices: The effects of a speaker’s voice characteristics and gender on consumers’ responses | |
DE69933853T2 (de) | Information processing apparatus | |
DE10306599B4 (de) | User interface, system and method for automatically naming phonetic symbols for speech signals in order to correct pronunciation | |
DE102014118075B4 (de) | Perceptual model synchronizing audio and video | |
DE60128372T2 (de) | Method and system for improving accuracy in a speech recognition system | |
DE212016000292U1 (de) | System for text-to-speech performance evaluation | |
EP1794743B1 (de) | Device and method for grouping temporal segments of a piece of music | |
DE60214850T2 (de) | Pattern processing system specific to a user group | |
KR100926982B1 (ko) | Method for converting transaction information into transaction-matched music for listening, and computer-readable recording medium storing the same | |
DE102004047032A1 (de) | Device and method for designating different segment classes | |
DE69920714T2 (de) | Speech recognition | |
WO2014131763A2 (de) | Word-choice-based speech analysis and speech analysis device | |
WO2000005709A1 (de) | Method and device for recognizing predefined keywords in spoken language | |
DE10311581A1 (de) | Method and system for the automated creation of speech vocabularies | |
CN105895079 (zh) | Method and device for processing speech data | |
DE10220522B4 (de) | Method and system for processing speech data by means of speech recognition and frequency analysis | |
DE60119643T2 (de) | Homophone selection in speech recognition | |
EP1377924B1 (de) | Method and device for extracting a signal identifier, method and device for generating an associated database, and method and device for referencing a search time signal | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |

AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |

AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |

RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: PHILIPS CORPORATE INTELLECTUAL PROPERTY GMBH Owner name: KONINKLIJKE PHILIPS ELECTRONICS N.V. |

17P | Request for examination filed |
Effective date: 20021227 |

AKX | Designation fees paid |
Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |

RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: KONINKLIJKE PHILIPS ELECTRONICS N.V. Owner name: PHILIPS INTELLECTUAL PROPERTY & STANDARDS GMBH |

17Q | First examination report despatched |
Effective date: 20061109 |
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |

GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |

GRAC | Information related to communication of intention to grant a patent modified |
Free format text: ORIGINAL CODE: EPIDOSCIGR1 |

STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |

18D | Application deemed to be withdrawn |
Effective date: 20120515 |