US6539395B1 - Method for creating a database for comparing music - Google Patents
- Publication number
- US6539395B1 (Application No. US09/533,045; published as US53304500A)
- Authority
- US
- United States
- Prior art keywords
- music
- questions
- listener
- listeners
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/683—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/63—Querying
- G06F16/635—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/075—Musical metadata derived from musical analysis or for use in electrophonic musical instruments
- G10H2240/081—Genre classification, i.e. descriptive metadata for classification or selection of musical pieces according to style
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
- G10H2240/155—Library update, i.e. making or modifying a musical database using musical parameters as indices
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99931—Database or file accessing
- Y10S707/99933—Query processing, i.e. searching
- Y10S707/99936—Pattern matching access
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99941—Database schema or data structure
- Y10S707/99943—Generating database or data structure, e.g. via user interface
Definitions
- the present Application is also related to the U.S. patent application entitled “QUALITY ASSURANCE SYSTEM FOR SCREENING MUSIC LISTENERS”, Ser. No. 09/533,013, now pending, filed on the same day as the present Application, and assigned to the Assignee of the present invention.
- the disclosure of the patent application “QUALITY ASSURANCE SYSTEM FOR SCREENING MUSIC LISTENERS” is hereby incorporated by reference in its entirety.
- the present invention relates to computerized comparison of music based upon music content and listener perception of music attributes.
- the Internet connects thousands of computers world wide through well-known protocols, for example, Transmission Control Protocol (TCP)/Internet Protocol (IP), into a vast network.
- Information on the Internet is stored world wide as computer files, mostly written in the Hypertext Mark Up Language (“HTML”).
- the collection of all such publicly available computer files is known as the World Wide Web (WWW).
- the WWW is a multimedia-enabled hypertext system used for navigating the Internet and is made up of hundreds of thousands of web pages with images and text and video files, which can be displayed on a computer monitor. Each web page can have connections to other pages, which may be located on any computer connected to the Internet.
- a typical Internet user uses a client program called a “Web Browser” to connect to the Internet.
- a user can connect to the Internet via a proprietary network, such as America Online or CompuServe, or via an Internet Service Provider, e.g., Earthlink.
- a Web Browser may run on any computer connected to the Internet. Currently, various browsers are available of which two prominent browsers are Netscape Navigator and Microsoft Internet Explorer. The Web Browser receives and sends requests to a web server and acquires information from the WWW.
- a web server is a program that, upon receipt of a request, sends the requested data to the requesting user.
- URL Uniform Resource Locator
- HTTP Hypertext Transfer Protocol
- WAIS Wide Area Information Service
- FTP File Transfer Protocol
- Music today can only be classified and searched under the name of the artist, album title, and music genre, i.e., whether the music falls under the following categories: Alternative, Blues, Country, Folk, Gospel, Jazz, Latin, New Age, R&B, Soul, Rap, Reggae, Rock, etc. If a consumer wants to search for music that has a lead female vocalist, with a prominent instrument, e.g., the saxophone, and that is a cross-over between the Pop and Country genres, current searching techniques will fail to support such a request. Current search techniques cannot compare plural aspects of different genres and provide intelligent interactive search techniques to music listeners.
- Text-based search engines have worked well with databases because text can describe variables. However, text alone cannot help in searching music since music is difficult to define by text alone.
- the present invention solves the foregoing drawbacks by providing a method and system for creating a database that allows content based searching in the music domain.
- the process provides music samples to music listeners, wherein the music listeners include a plurality of average music listeners and a plurality of expert music listeners.
- Music samples may be provided via the Internet, a private computer network or music CDs.
- the process further provides a plurality of questions to the average music listeners and the expert music listeners, wherein the plurality of questions require listener response and every listener response has a corresponding value that determines the value of a feature vector, wherein the feature vectors define music attributes.
- the process compares a plurality of music samples, wherein comparing feature vectors compares the music samples. Thereafter, the process stores the compared data. Examples of some feature vectors defined by the process are as follows:
- An emotional quality vector wherein the emotional quality vector is based upon a music listener's response to questions regarding a music sample indicating if the music sample is Intense, Happy, Sad, Mellow, Romantic, Heartbreaking, Aggressive, or Upbeat, etc.;
- a vocal quality vector wherein the vocal quality vector is based upon a music listener's response to questions regarding a music sample indicating that the music sample includes a Sexy voice, a Smooth voice, a Powerful voice, a Great voice, or a Soulful voice, etc.;
- a sound quality vector wherein the sound quality vector is based upon a music listener's response to questions regarding a music sample indicating if the music sample has a Strong beat, is simple, has a good groove, is speech like, or emphasizes a melody, etc.;
- a situational quality vector wherein the situational quality vector is based on a music listener's response to questions regarding a music sample indicating if the music sample is good for a workout, a shopping mall, a dinner party, a dance party, slow dancing, or studying;
- a genre vector wherein the genre vector depends upon an expert listener's response to the questions regarding a music sample indicating if the music sample belongs to a plurality of genres including Alternative, Blues, Country, Electronic/Dance, Folk, Gospel, Jazz, Latin, New Age, R&B, Soul, Rap, Hip-Hop, Reggae, Rock or others;
- An ensemble vector wherein the ensemble vector depends upon an expert listener's response to questions regarding a music sample indicating whether the music sample includes a female solo, male solo, female duet, male duet, mixed duet, female group, male group or instrumental;
- an instrument vector wherein the instrument vector depends upon an expert listener's response to questions regarding a music sample indicating whether the music sample includes an acoustic guitar, electric guitar, bass, drums, harmonica, organ, piano, synthesizer, horn, or saxophone.
- feature vectors can describe music content. This assists in creating a music space for various attributes of music.
- Another advantage of the present invention is that since the feature vectors define music attributes, music can be searched based upon music content.
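As an illustrative sketch (not part of the patent's disclosure), content-based comparison of feature vectors can be modeled as a distance or angle computation between attribute vectors. The attribute names and values below are hypothetical:

```python
import math

# Hypothetical feature vectors (values 0-1) for two music samples,
# loosely following the attribute groups listed above. The attribute
# names and values are illustrative only, not taken from the patent.
sample_a = {"happy": 0.9, "mellow": 0.2, "strong_beat": 0.8, "blues": 0.1}
sample_b = {"happy": 0.8, "mellow": 0.3, "strong_beat": 0.7, "blues": 0.2}

def cosine_similarity(a: dict, b: dict) -> float:
    """Compare two samples by the angle between their feature vectors;
    values near 1.0 indicate similar music content."""
    keys = sorted(a)
    dot = sum(a[k] * b[k] for k in keys)
    norm_a = math.sqrt(sum(a[k] ** 2 for k in keys))
    norm_b = math.sqrt(sum(b[k] ** 2 for k in keys))
    return dot / (norm_a * norm_b)

print(round(cosine_similarity(sample_a, sample_b), 3))
```

Any vector distance (Euclidean, cosine, etc.) could serve here; the patent specifies only that comparing feature vectors compares the music samples.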
- FIG. 1 illustrates a computing system to carry out the inventive technique.
- FIG. 2 is a block diagram of the architecture of the computing system of FIG. 1 .
- FIG. 3 is a block diagram of the Internet Topology.
- FIG. 4 is a block diagram of the various components used for creating a database structure according to one embodiment of the present invention.
- FIG. 5A is a flow diagram of computer executable process steps for creating a database, according to the present invention.
- FIG. 5B is a flow diagram of computer executable process steps for developing a questionnaire.
- FIG. 5C1 is a block diagram of a neural network as used by the present invention.
- FIG. 5C2 is a flow diagram of computer executable process steps showing various operations performed by the neural network, according to the present invention.
- FIG. 5C3 is a flow diagram of computer executable process steps showing various operations performed by a Modeling Module, according to the present invention.
- FIG. 5D is a graphical representation of a plurality of music spaces created by the present invention.
- FIG. 5E is a flow diagram of computer executable process steps showing various operations performed to calibrate a music listener, according to the present invention.
- FIG. 5F is an example of storing listener responses to music samples, according to the present invention.
- FIG. 5G is a flow diagram of computer executable process steps showing various operations performed to measure typicality of a music listener, according to the present invention.
- FIG. 5H shows another example of storing listener responses.
- FIG. 5I is a block diagram showing a quality assurance system, according to the present invention.
- FIG. 6 shows sample data fields for collecting music listener information.
- FIG. 7A1 shows sample questions for a plurality of music listeners.
- FIG. 7A2 shows sample questions for a plurality of music listeners.
- FIG. 7B shows sample questions asked to a plurality of expert music listeners for obtaining explicit similarity data for music samples.
- FIG. 8A1 shows sample questions for a plurality of expert listeners.
- FIG. 8A2 shows sample questions for a plurality of expert listeners.
- FIG. 8A3 shows sample questions for a plurality of expert listeners.
- FIG. 9 is a block diagram of the overall system, according to another embodiment of the present invention.
- FIG. 10A shows a sample User Interface, according to the present invention.
- FIG. 10B shows a genre mixer, according to the present invention.
- FIG. 10C shows an emotional quality mixer according to the present invention.
- FIG. 10D shows a vocal quality mixer, according to the present invention.
- FIG. 11 is a block diagram of a User Interface engine, according to another embodiment of the present invention.
- FIG. 12 is a flow diagram showing computer executable process steps for conducting content-based search in the music domain.
- FIG. 13 is a flow diagram showing process steps for performing content-based search for aesthetic commodities.
- FIG. 1 is a block diagram of a computing system for executing computer executable process steps according to one embodiment of the present invention.
- FIG. 1 includes a host computer 10 and a monitor 11 .
- Monitor 11 may be a CRT type, a LCD type, or any other type of color or monochrome display.
- Also provided with computer 10 is a keyboard 13 for entering text data and user commands, and a pointing device 14 for processing objects displayed on monitor 11 .
- Computer 10 includes a computer-readable memory medium such as a rotating disk 15 for storing readable data.
- disk 15 can store application programs including web browsers by which computer 10 connects to the Internet and the systems described below, according to one aspect of the present invention.
- Computer 10 can also access a computer-readable floppy disk storing data files, application program files, and computer executable process steps embodying the present invention or the like via a floppy disk drive 16 .
- a CD-ROM interface (not shown) may also be provided with computer 10 to access application program files, audio files and data files stored on a CD-ROM.
- a modem, an integrated services digital network (ISDN) connection, or the like also provides computer 10 with an Internet connection 12 to the World Wide Web (WWW).
- the Internet connection 12 allows computer 10 to download data files, audio files, application program files and computer-executable process steps embodying the present invention.
- Computer 10 is also provided with external audio speakers 17 A and 17 B to assist a listener to listen to music either on-line downloaded from the Internet or off-line using a CD. It is noteworthy that a listener may use headphones instead of audio speakers 17 A and 17 B to listen to music.
- FIG. 2 is a block diagram showing the internal functional architecture of computer 10 .
- computer 10 includes a CPU 201 for executing computer-executable process steps and interfaces with a computer bus 208 .
- a WWW interface 202 Also shown in FIG. 2 are a WWW interface 202 , a display device interface 203 , a keyboard interface 204 , a pointing device interface 205 , an audio interface 209 , and a rotating disk 15 .
- Audio Interface 209 allows a listener to listen to music, On-line (downloaded using the Internet or a private network) or off-line (using a CD).
- disk 15 stores operating system program files, application program files, web browsers, and other files. Some of these files are stored on disk 15 using an installation program. For example, CPU 201 executes computer-executable process steps of an installation program so that CPU 201 can properly execute the application program.
- a random access main memory (“RAM”) 206 also interfaces to computer bus 208 to provide CPU 201 with access to memory storage.
- CPU 201 stores and executes the process steps out of RAM 206 .
- ROM 207 is provided to store invariant instruction sequences such as start-up instruction sequences or basic input/output operating system (BIOS) sequences for operation of keyboard 13 .
- FIG. 3 shows a typical topology of a computer network with computers similar to computer 10 , connected to the Internet.
- three computers X, Y and Z are shown connected to the Internet 302 via Web interface 202 through a gateway 301, where gateway 301 can interface N number of computers.
- Web interface 202 may be a modem, network interface card or a unit for providing connectivity to other computer systems over a network using protocols such as X.25, Ethernet or TCP/IP, or any device that allows, directly or indirectly, computer-to-computer communications.
- the invention is not limited to a particular number of computers. Any number of computers that can be connected to the Internet 302 or any other computer network may be used.
- FIG. 3 further shows a second gateway 303 that connects a network of web servers 304 and 305 to the Internet 302 .
- Web servers 304 and 305 may be connected with each other over a computer network.
- Web servers 304 and 305 can provide content including music samples, audio clips and CDs to a user from database 306 and/or 307 .
- Web servers 304 and 305 can also host the present music searching system, according to the present invention.
- also shown is a client side web server 308 that can be provided by an Internet service provider.
- FIG. 4 is a block diagram showing various components that may be used to develop a database that allows music listeners to search for music based upon music content, perceptual qualities of music and music attributes, according to one embodiment of the present invention.
- Listener perception data 401, Instrument information data 402, Expert Information data 403, and Explicit Pairwise data 403A are collected and then stored as Acquired data 404 and thereafter fed into a Research database 405 (also referred to as the “R&D database”).
- Basic music fact data 402A, including the title of the music, category/genre if known, and date of recording, etc., is also sent to R&D database 405.
- Data describing music attributes may also be collected by Digital Signal Processing (“DSP”) and stored as DSP data 403B, by Radio logging and stored as Radio logged data 403D, and by Internet Harvesting, using Spider techniques, and stored as Internet Harvested data 403E.
- a Modeling Module 406 creates a multi-dimensional music space based upon the acquired data, and performs a similarity analysis on the music samples, as described below in FIG. 5C3.
- Modeled data from 409 is sent to a production database 407 that stores music data and allows a listener to search music based upon plural attributes as described below.
- a similarity database 407 A is also shown that includes similar music sets that are not accurately modeled by Modeling Module 406 , as discussed below.
- FIG. 5A is a flow chart showing process steps to create a dynamic database that allows comparison of music based upon music attributes/content and the perceptual quality of music, using data collected from actual music listeners. It is well known that music affects different people in different ways. Every piece of music provides listeners with certain experiences, including emotional experiences.
- the present invention provides descriptors/variables that can describe human experience while listening to music and link the variables/descriptors (via feature vectors) to specific music types/genres.
- in step S501A, listeners are provided music samples either on-line via the Internet or on a CD-ROM, with a list of questions corresponding to the music samples.
- the questions are used to solicit listener responses that describe music attributes and assign values to feature vectors for the attributes.
- a listener using computer X may download music samples from a web server 304 / 305 with a list of questions.
- the present invention provides a questionnaire that evaluates the cognitive, emotional, esthetical, and situational effects of music on actual listeners.
- prior to providing music samples or questions, listener information may also be collected, as shown in FIG. 6. Before a listener can start listening to sample music, a training session may be conducted to familiarize the listener with the music rating process.
- FIGS. 7A1 and 7A2 show an example of a list of questions that are provided to a listener prior to, after, or while the listener is listening to the music sample. A listener that listens to more than one song is also asked to compare songs.
- Examples of questions in FIGS. 7A1 and 7A2 may be grouped as follows:
- This song is speech-like
- This song has a strong beat
- This song has a good groove
- This song expresses a broken heart
- the singer has a smooth voice
- the singer has a soulful voice
- the singer has a powerful voice
- the singer has a truly great voice
- This song has a high voice
- This song has a sexy voice
- This song would be good in a shopping mall
- expert data 403 is collected from expert music listeners, who may be individuals trained in the field of music or who are more knowledgeable in the field of music than an average listener.
- in step S501B, expert data 403 is collected by providing music samples to experts accompanied by a plurality of questions.
- Music samples and questions to expert music listeners may be provided over the Internet, a private network and/or music CDs, etc.
- a music expert using computer X may download music samples from a web server 304 / 305 with a list of questions.
- FIGS. 8A1, 8A2 and 8A3 provide an example of the questions that a music expert may be asked for collecting expert data 403.
- An expert may be asked questions 801 (FIG. 8A1) to identify music genre, for example, whether a music sample belongs to an Alternative, Blues, Country, Electronic/Dance, Folk, Gospel, Jazz, Latin, New Age, R&B/Soul, Rap/Hip-Hop, Reggae or Rock style of music.
- the expert is not limited to choosing a single genre, instead, the expert may choose plural genres to identify a particular music sample.
- Questions 801 establish the importance of a particular music style in a given sample, and also determine crossover between different genres. For example, if an expert listens to a music sample and gives a high rating for Blues and Country in questions 801, then the music sample may be a cross-over between the Blues and Country styles.
- Question 802 (FIGS. 8A1 and 8A2) requires an expert to rate music sub-styles. This determines the sub-genre of a music sample.
- an expert also identifies whether a music sample is instrumental or vocal. If the music is primarily vocal, then the expert identifies whether the lead vocalist is male or female. In question 804, the expert describes the backup vocalist(s), if any.
- the questions shown in FIGS. 7A1, 7A2, 8A1, 8A2 or 8A3 are merely illustrative and do not limit the scope of the invention.
- the number and format of the questions as presented to music listeners or expert listeners may differ from what is shown in FIGS. 7A1, 7A2, 8A1, 8A2 or 8A3.
- in step S501C, explicit “pairwise” questions are provided to expert music listeners.
- FIG. 7B shows an example of questions 701 that may be asked.
- Expert music listeners are provided with a pair of music samples and experts rate the similarity of the samples.
- although music is provided in pairs for evaluation, the invention is not limited to providing music samples in pairs.
- Various other presentation techniques may be used, for example, music samples may be provided as a group of three, and so forth.
- Data may be collected as DSP data 403B using DSP techniques.
- DSP techniques include analyzing digitized audio files containing music to produce a set of feature vectors which can be used to characterize and compare music.
- an audio file for any piece of music is transformed into a set of numbers (feature vectors) that describes the qualities of the music. These numbers are constructed so that they represent the important or relevant features.
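As a toy sketch of this kind of DSP analysis (the patent does not specify particular signal features), a digitized signal can be reduced to a few descriptive numbers; the two features and the synthetic test tone below are illustrative assumptions:

```python
import math

def dsp_feature_vector(samples):
    """Reduce a digitized audio signal to a small feature vector
    (RMS energy and zero-crossing rate) -- a toy stand-in for the
    DSP analysis described above, not the patent's actual method."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    crossings = sum(1 for i in range(1, len(samples))
                    if (samples[i - 1] < 0) != (samples[i] < 0))
    zcr = crossings / len(samples)
    return [rms, zcr]

# Synthetic 440 Hz tone sampled at 8 kHz for one second (illustrative input).
tone = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
print(dsp_feature_vector(tone))
```

A production system would use far richer spectral features, but the principle is the same: an audio file becomes a comparable vector of numbers.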
- Radio logging is another method for collecting data that can describe music.
- Data collected via radio logging is stored as Radio logged data 403D.
- Radio stations play sets of coherent music and avoid playing music that is likely to unpleasantly surprise their listeners.
- radio station play lists provide an implicit measure of similarity based upon the assumption that music played within the same set are likely to have common features.
- co-occurrence of music in play lists may be used as a measure of similarity, similar to explicit pairwise data 403 A.
- One approach would be to measure the conditional probability of playing music B within a pre-defined time interval after music A has been played. Music with a higher conditional probability is assumed to be more similar.
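The first approach can be sketched as follows; the play log, song names and time window are hypothetical:

```python
# Hypothetical radio play log: (minutes_since_start, song) pairs.
play_log = [(0, "A"), (4, "B"), (9, "C"), (30, "A"), (33, "B"), (60, "C")]

def conditional_play_probability(log, first, second, window=10):
    """Estimate P(second is played within `window` minutes | first was played),
    the implicit-similarity measure described above."""
    plays_of_first = [t for t, s in log if s == first]
    followed = 0
    for t in plays_of_first:
        # Did `second` occur in the interval (t, t + window]?
        if any(s == second and t < t2 <= t + window for t2, s in log):
            followed += 1
    return followed / len(plays_of_first) if plays_of_first else 0.0

print(conditional_play_probability(play_log, "A", "B"))
```

Under this measure, a song pair with a higher conditional probability is assumed to be more similar.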
- a second approach would be to construct the entire conditional probability distribution over time for each pair of songs. For example, construct the distribution of time until music B is played, given that music A has already been played. These entire distributions could then be compared by using a Kullback-Leibler metric, as described in “Elements of Information Theory” by T. M. Cover and J. A. Thomas (1991), published by John Wiley & Sons, Inc., and incorporated herein by reference.
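The Kullback-Leibler comparison of two such time-gap distributions can be sketched as below; the binned distributions are invented for illustration:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions (entries strictly positive, each summing to 1)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical distributions of the time gap until song B follows song A,
# binned (e.g., 0-5 min, 5-10 min, 10+ min). Values are illustrative only.
gap_dist_pair1 = [0.6, 0.3, 0.1]
gap_dist_pair2 = [0.2, 0.3, 0.5]

print(kl_divergence(gap_dist_pair1, gap_dist_pair2))
```

A divergence of zero means identical gap distributions; larger values mean the two song pairs behave differently in play lists.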
- Internet harvesting may also be used to collect Internet harvested or “Spider data” 403E. Spiders are well known and collect data on users that browse the Internet. A strategy similar to that of radio logging can be applied to Internet harvesting. Co-occurrence analysis can be carried out on a plurality of web pages. One approach would involve computing the frequency of co-occurrence of artist names on a large sample of web pages. Those artists with higher frequencies of co-occurrence are more likely to have features in common than artists with lower frequencies of co-occurrence. A similar analysis can be conducted for music titles, albums, music labels, etc.
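The artist co-occurrence count can be sketched as below; the pages and artist names are placeholders:

```python
from collections import Counter
from itertools import combinations

# Hypothetical harvested pages, each reduced to the set of artist names found.
pages = [
    {"Artist A", "Artist B", "Artist C"},
    {"Artist A", "Artist B"},
    {"Artist B", "Artist C"},
    {"Artist A", "Artist B"},
]

def cooccurrence_counts(pages):
    """Count, for each unordered artist pair, how many pages mention both."""
    counts = Counter()
    for page in pages:
        for pair in combinations(sorted(page), 2):
            counts[pair] += 1
    return counts

counts = cooccurrence_counts(pages)
print(counts.most_common(1))
```

Pairs with high counts are candidate "similar" artists; the same counting applies unchanged to titles, albums, or labels.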
- steps S501A, S501B and S501C are designed to achieve accurate ratings for music samples.
- a question regarding a music sample may be phrased in plural ways to evoke responses from music listeners. For example, if the level of “Happiness” after listening to a piece of music is to be determined, then questions may be phrased as follows:
- Each method of asking questions may evoke similar or dissimilar results from music listeners and/or experts.
- the present invention evaluates questions for form and content to obtain responses that are accurate and can be used efficiently in rating music.
- FIG. 5B is a flow diagram of the methodology used for evaluating questions prior to presenting the questions to listeners in steps S501A, S501B and S501C (FIG. 5A).
- in step S5001, a basic set of questions is developed to ascertain predefined music attributes.
- a basic set of questions is designed with the intent to determine the degree of “happiness” that may be provided to a listener by a piece of music.
- in step S5002, plural sets of questions are developed based upon the basic question set of step S5001.
- a plural set of questions to determine the degree of “happiness” evoked by a piece of music may be stated as follows:
- in step S5003, the plural sets of questions are provided to different sets of listeners with music samples.
- the plural sets of questions are multiple ways to ask a similar question regarding a music sample.
- in step S5004, plural sets of listeners respond to the plural sets of questions after listening to music samples, and the answers to the questions are evaluated. Questions may be evaluated against plural criteria, as described below.
- a questionnaire that produces optimum and accurate results is chosen for collecting data in steps S 501 A-C (FIG. 5 A).
- Consensus in ratings may be measured in plural ways, for example:
- Consensus(i) = -1 * [Mean(music)(Std Dev(listener)(question(i)))], where:
- Consensus(i) is the measured consensus value for the ith question;
- Std Dev(listener)(question(i)) is the standard deviation of the ratings for each music sample on question (i). For example, if five listeners have rated a music sample for a particular attribute and the rating values are R1, R2, R3, R4 and R5, then the standard deviation of R1, R2, R3, R4 and R5 is labeled Std Dev(listener)(question(i)). The standard deviation of ratings for each music sample on a specific question is calculated and may be designated STD1, STD2, STD3, … STDn, where n is the nth music sample; and
- Mean(music)(Std Dev(listener)(question(i))) is the mean of STD1 … STDn.
- Multiplying by a negative number (for example, -1 as shown above) reverse-orders the statistical values, since low standard deviation values correspond to high levels of consensus among music listener ratings.
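The consensus formula above can be sketched numerically; the question names and rating values are invented, and population standard deviation is assumed:

```python
import statistics

# Hypothetical ratings[question][music_sample] -> listener ratings (1-5 scale).
ratings = {
    "has_strong_beat": {"song1": [4, 4, 5, 4, 4], "song2": [2, 2, 1, 2, 2]},
    "is_happy":        {"song1": [1, 5, 3, 2, 4], "song2": [5, 1, 2, 4, 3]},
}

def consensus(question):
    """-1 * mean over music samples of the per-sample standard deviation
    of listener ratings, per the formula above (higher = more agreement).
    Population std dev is assumed; the patent does not specify which."""
    stds = [statistics.pstdev(r) for r in ratings[question].values()]
    return -1 * statistics.mean(stds)

# Listeners agree closely on "has_strong_beat" but not on "is_happy".
print(consensus("has_strong_beat"), consensus("is_happy"))
```

The sign flip makes tightly clustered ratings score higher, so questions can be ranked by consensus directly.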
- Discrimination may be measured as follows: Discrimination(i) = Std Dev(music)(Mean(listener)(question(i))), where:
- Mean(listener)(question(i)) is calculated as follows: if music sample 1 has ratings R1, R2, R3, R4 and R5 from five different listeners, then the mean for that music sample is (R1+R2+R3+R4+R5)/5, which may be designated M1. The means for the other music samples are calculated in the same way and may be designated M2 … Mn, where n is the nth sample; and
- Std Dev(music)(Mean(listener)(question(i))) is the standard deviation of M1, M2, … Mn. Questions with low standard deviation values do not discriminate between music samples. In contrast, questions with high standard deviation values discriminate between music samples; these latter questions are more informative than questions with low standard deviation values.
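The discrimination measure can likewise be sketched; the two contrasting questions and their rating values are hypothetical:

```python
import statistics

# Hypothetical ratings[music_sample] -> listener ratings for one question.
question_good = {"song1": [5, 5, 4], "song2": [1, 1, 2], "song3": [3, 3, 3]}
question_bad  = {"song1": [3, 3, 3], "song2": [3, 3, 3], "song3": [3, 3, 3]}

def discrimination(ratings_for_question):
    """Standard deviation across music samples of the per-sample mean
    listener rating; high values mean the question separates songs.
    Population std dev is assumed, as in the consensus sketch."""
    means = [statistics.mean(r) for r in ratings_for_question.values()]
    return statistics.pstdev(means)

print(discrimination(question_good), discrimination(question_bad))
```

A question whose mean rating is identical for every song (question_bad) scores zero and would be a candidate for rejection.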
- Model-Based Variance: the usefulness of questions can also be evaluated by examining the contribution of each question within the context of a pre-defined model.
- One such model is derived by using dimensional reduction techniques such as Principal Components Analysis (“PCA”). Details of the foregoing techniques are provided in “Multivariate Analysis, Methods and Applications” by William R. Dillon & Matthew Goldstein (1984), published by John Wiley & Sons, and in “Multivariate Observations” by G. A. F. Seber (1984), published by John Wiley & Sons, both of which are incorporated herein by reference.
- a matrix of questions is created.
- the matrix can be considered as describing each piece of music as a vector in a “question space”, defined by the question matrix.
- Average listener responses may be represented as a vector corresponding to a single column of a matrix (M 1 ), where M 1 includes music samples as columns and listener responses as rows.
- M 1 includes music samples as columns and listener responses as rows.
- an ijth entry in M 1 is the average response on the ith question for the jth music sample.
- matrix M 1 can be described as a q×s matrix, where q is the number of questions and s is the number of music samples.
- every music sample is represented as vector in the question space defined by average listener responses.
- PCA also derives a rotation matrix (RM) which has dimensions q×q, where q is the number of questions used and is the same as the row dimension of M 1 .
- RM has the following properties: (1) the dimensions (or matrix entries) in RM are orthogonal, so that the matrix entries do not overlap in representing information about music samples; and (2) the dimensions or basis vectors represented as RM entries are arranged according to the amount of variance contributed by the questions in the question space.
- Matrix entries in the RM show each question's contribution to the variance in average listener responses. Questions that substantially contribute to the variance across music samples are desirable and are retained whereas questions that do not may be rejected.
- (d) Rejected questions are those that, when excluded, produce the least deterioration in a similarity model. Excluding a certain set of questions when collecting data and, as discussed below in step S 505 , evaluating the similarity model based on the included set of questions provides the relative contribution of the questions.
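The PCA-based evaluation above (deriving a rotation matrix RM whose orthogonal columns are ordered by variance) can be sketched via an eigendecomposition of the question covariance matrix; the matrix sizes and synthetic data below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
q, s = 4, 30                     # q questions, s music samples (illustrative)
M1 = rng.normal(size=(q, s))     # average listener responses, questions x samples
M1[0] *= 5.0                     # question 0 contributes most variance
M1[3] *= 0.1                     # question 3 contributes almost none

# Covariance of questions across samples, then eigendecomposition.
cov = np.cov(M1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # sort by variance, descending
RM = eigvecs[:, order]                     # q x q rotation matrix

# Columns of RM form an orthonormal basis of the question space.
assert np.allclose(RM.T @ RM, np.eye(q), atol=1e-8)

# The leading component should load mostly on the high-variance question 0,
# flagging it as one worth retaining.
print(int(np.argmax(np.abs(RM[:, 0]))))
```

Questions whose loadings on the leading components are uniformly small contribute little variance and are candidates for rejection.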
- step S 5005 questions that provide accurate results in the modeling process are retained and then eventually provided to listeners in steps S 501 A, S 501 B and S 501 C (FIG. 5 A).
- step S 502 listeners respond to plural questions from steps S 501 A-S 501 C.
- step S 503 plural listener responses to the plural questions are collected.
- the various questions answered by music listeners or by music experts provide values to a plurality of feature vectors that are used to define music attributes, and the feature vectors are then used to compare and search music based upon music content.
- various feature vectors are used to create a plurality of music spaces that define the location of a piece of music in a specific music space.
- Appendix “A” provides examples of numerous feature vectors that may be used to define music samples.
- Emotional quality vector This vector is based upon the emotional response evoked in a listener by a particular piece of music, for example, whether music samples are:
- Emotional quality vector values are based upon listener response to questions 700 B (FIGS. 7 A 1 and 7 A 2 ).
- the foregoing examples and the questions in 700 B are merely illustrative and are not intended to limit the scope of the invention.
- emotional quality vector is used to define an emotional quality space.
- Vocal quality vector A vocal quality vector is based on the vocal qualities of a particular piece of music, for example, whether a music sample has a:
- Vocal quality vector values are based upon listener response to questions 700 C, in FIG. 7 A 2 .
- the foregoing examples and the questions in 700 C are merely illustrative and are not intended to limit the scope of the invention.
- vocal quality vector is used to define a vocal quality space.
- Sound quality vector A vector based on the sound quality of a particular music sample, for example, whether a music sample has a:
- Sound quality vector values are based upon listener response to questions 700 A (FIG. 7 A 1 ).
- the foregoing examples and the questions in 700 A are merely illustrative and are not intended to limit the scope of the invention.
- sound quality vector is used to define a sound quality space.
- Situational vector A vector that establishes the optimum situation in which a particular piece of music may be used, for example, whether a music sample is:
- Situational vector values are based upon listener response to questions 700 D (FIG. 7 A 2 ).
- the foregoing examples and the questions in 700 D are merely illustrative and are not intended to limit the scope of the invention.
- situational vector is used to define a situational space.
- Genre vector A vector that determines the genre or a genre combination of a particular piece of music, for example, whether a music sample belongs to the following genres or a combination of the following genres:
- Genre vector values are based upon listener response to questions in 801 and 802 (FIGS. 8 A 1 - 8 A 2 ).
- the foregoing examples and the questions in 801 and 802 are merely illustrative and are not intended to limit the scope of the invention.
- genre vector is used to define a genre space.
- (f) Ensemble Vector A vector based upon music's ensemble, for example, if a music sample includes:
- Ensemble vector values are based upon listener response to questions in 803 and 804 (FIG. 8 A 2 ).
- the foregoing examples and the questions in 803 and 804 are merely illustrative and are not intended to limit the scope of the invention.
- ensemble vector is used to define an ensemble space.
- Instrument vector An instrument vector is based upon the level of importance of particular instruments, for example, if a music sample includes an:
- Instrument vector values are based upon listener response to questions in 806 , 807 and 808 (FIG. 8 A 2 ).
- the foregoing examples and the questions in 806 , 807 and 808 are merely illustrative and are not intended to limit the scope of the invention.
- instrument vector is used to define an instrument space.
- DSP techniques may also be used to acquire DSP data 403 B that can be used to construct feature vectors.
- One such DSP technique for constructing a DSP feature vector is as follows.
- Extracted information is represented as a long vector of numbers, which correspond to the amplitude of an audio signal as a function of time.
- This vector may be transformed into a spectrogram, which represents the audio file as a time-frequency matrix.
- Each row of the spectrogram represents instantaneous energy (as a function of time) within a particular frequency band.
- Each column of the spectrogram represents the instantaneous energy at a particular point in time across a set of frequency bands.
- the spectrogram may be large and cumbersome.
- the spectrogram may be sub-sampled. The reduced spectrogram is then processed.
- (d) Construct a representation of the periodic structure of a piece of music within each of a set of frequency bands.
- This set of numbers can be characterized as a feature vector.
- Using a metric, e.g., a Euclidean metric, these feature vectors may be compared, so that vectors with smaller distances between them are more similar than vectors that are farther apart.
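The spectrogram-based DSP feature vector described in the steps above might be sketched as follows; the frame size, band count, sub-sampling rate, and synthetic test tones are illustrative choices, not values from the specification:

```python
import numpy as np

def spectrogram_features(signal, frame=256, hop=128, bands=8, frames_keep=16):
    """Crude sketch: magnitude spectrogram, collapsed into coarse frequency
    bands and sub-sampled in time, then flattened into a feature vector."""
    windows = [signal[i:i + frame] * np.hanning(frame)
               for i in range(0, len(signal) - frame, hop)]
    spec = np.abs(np.fft.rfft(windows, axis=1))      # time x frequency matrix
    # Collapse the frequency axis into coarse bands (sub-sampling the spectrogram)
    spec = spec[:, :(spec.shape[1] // bands) * bands]
    spec = spec.reshape(spec.shape[0], bands, -1).mean(axis=2)
    # Sub-sample the time axis to a fixed number of frames
    idx = np.linspace(0, spec.shape[0] - 1, frames_keep).astype(int)
    return spec[idx].ravel()

t = np.linspace(0, 1, 8000)
a = np.sin(2 * np.pi * 440 * t)          # 440 Hz tone
b = np.sin(2 * np.pi * 440 * t)          # identical tone
c = np.sin(2 * np.pi * 1200 * t)         # different tone

fa, fb, fc = map(spectrogram_features, (a, b, c))
# Euclidean metric: smaller distance = more similar
assert np.linalg.norm(fa - fb) < np.linalg.norm(fa - fc)
```

The identical tones land at distance zero while the different tone lands farther away, matching the claim that closer vectors are more similar.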
- step S 503 listener responses are stored in R&D database 405 , and in step S 504 , acquired data 404 collected in step S 502 is transferred to Modeling Module 406 .
- Modeling Module 406 analyzes acquired data 404 and also performs a similarity computation.
- the similarity computation determines the optimum function that can represent similarity between different music samples, based upon defined music attributes (i.e. feature vector values).
- Modeling Module 406 compares vectors VA and VB using a similarity function F(VA,VB). The method for calculating F(VA,VB) is described below. The foregoing example is merely to illustrate the functionality of Modeling Module 406 and does not limit the invention.
- Modeling Module 406 The discussion below illustrates the various steps performed by Modeling Module 406 .
- matrix S can be reduced to a smaller matrix S′, where S′ is an m×p matrix with m<n.
- S represents a set of p music samples in an n dimensional space
- S′ represents the same set in an m dimensional space, where m<n.
- Subsets of each vector V may also include vectors that are defined in specific music spaces.
- vector V 1 can include vectors Vg, Ve, Vt, Vv and Vi, where Vg represents a piece of music in a genre space, Ve represents a piece of music in an emotional quality space, Vt represents a piece of music in a tempo space, Vv represents a piece of music in a voice quality space, and Vi represents a piece of music in an instrument space.
- Vg, Ve, Vt, Vv and Vi may be represented as follows:
- Vg=(Vg1, Vg2, . . . Vga)
- Ve=(Ve1, Ve2, . . . Veb)
- Vt=(Vt1, Vt2, . . . Vtc)
- Vv=(Vv1, Vv2, . . . Vvd)
- Vi=(Vi1, Vi2, . . . Vie)
- a representative matrix S 1 is created that includes perceived similarity data of plural music pairs, illustrated for convenience as pair i,j.
- matrix S 1 shall include ratings that illustrate similarity and/or dissimilarity between a pair of music.
- Modeling Module 406 calculates a distance matrix D that estimates the distances between pairs of music samples in matrix S 1 . Distances between pairs of music samples may be calculated in more than one music space.
- One method of calculating distance is the Euclidean distance, illustrated as Dij, where
- Dij=SQRT[(Vi1−Vj1)^2+(Vi2−Vj2)^2+ . . . +(Vik−Vjk)^2], where
- Vi1, Vi2, . . . Vik are feature vector values for the ith music sample, and
- Vj1, Vj2, . . . Vjk are feature vector values for the jth music sample.
- the feature vector value specifies the location of the music sample in a particular space. It is noteworthy that Dij is not limited to Euclidean distance, and that any mathematical technique that can illustrate the distance between the vectors can be used.
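The Euclidean distance Dij can be computed directly; the two feature vectors below are hypothetical:

```python
import math

def euclidean_distance(vi, vj):
    """Dij = SQRT[(Vi1-Vj1)^2 + (Vi2-Vj2)^2 + ... + (Vik-Vjk)^2]"""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(vi, vj)))

# Hypothetical feature vector values for two music samples in one space
vi = [0.9, 0.1, 0.0]   # sample i
vj = [0.7, 0.2, 0.1]   # sample j
print(round(euclidean_distance(vi, vj), 3))
```

Any other metric (e.g., cosine or Manhattan distance) could be substituted here, consistent with the note that Dij is not limited to Euclidean distance.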
- Distance matrix Dij is created for plural music spaces, and may be illustrated as Dg (distance between music sample i and j in the genre space), De(distance between music sample i and j in the emotional quality space), Dv(distance between music sample i and j in the vocal quality space), Dt (distance between music sample i and j in the tempo space) and Di (distance between music sample i and j in the instrument space).
- a function Fij represents the distance between music samples i and j and may be illustrated as Fij=Wg*Dg+We*De+Wv*Dv+Wt*Dt+Wi*Di, where
- Wg, We, Wv, Wt and Wi are individual weights allocated to individual music spaces.
- the plural weights Wg, We, Wv, Wt and Wi are calculated such that S 1 and Fij are at a minimum distance from each other.
- a function F is determined to model the observed or “true” similarity between music represented in the matrix S 1 .
- the derived function F may be applied generally to all pairs of music I and j, not just those reflected in the matrix S 1 .
- Function Fij may be fit by using linear regression or by nonlinear regression techniques as disclosed in “Generalized Linear Models” by McCullagh & Nelder, and “Generalized Additive Models” by Hastie & Tibshirani, both published by Chapman and Hall, and incorporated herein by reference in their entirety.
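One way to fit the weights so that Fij is at a minimum distance from S 1 is ordinary least squares. This sketch assumes synthetic per-space distances and "true" weights purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 100
# Per-space distances Dg, De, Dv, Dt, Di for each music pair (one column each)
D = rng.uniform(0, 1, size=(n_pairs, 5))

true_w = np.array([0.4, 0.3, 0.1, 0.15, 0.05])   # assumed "true" weights
S1 = D @ true_w + rng.normal(0, 0.01, n_pairs)   # observed similarity ratings

# Fit Fij = Wg*Dg + We*De + Wv*Dv + Wt*Dt + Wi*Di by least squares,
# i.e. choose the weights that minimize the distance between S1 and Fij.
w, *_ = np.linalg.lstsq(D, S1, rcond=None)
print(np.round(w, 2))
```

With enough rated pairs the recovered weights approach the generating weights; the nonlinear regression, Bayesian, and neural-network techniques discussed here are alternatives to this linear fit.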
- Bayesian techniques choose a model distribution for S 1 entries and then find the foregoing weights for Fij that maximize an appropriate likelihood function. For example, if the distribution of S 1 entries is Gaussian, then the likelihood function maximizes the probability of the observed values of S 1 given the parameters of the Gaussian distribution and the weights used to combine spaces.
- Neural networks are nonlinear optimization and function-learning algorithms and may be used to model the similarity between S 1 and Fij.
- a simple 3-layer feed-forward, reverse-feed network architecture as shown in FIG. 5 C 1 may be used.
- The bottom input layer is divided into 2 parts, 500 C 1 and 500 C 2 , each part corresponding to the feature vector of one of the music samples to be compared (for example, songs A and B).
- a group of network layers 500 C 4 is fully interconnected (e.g., every node in the input layer ( 500 C 1 and 500 C 2 ) is connected by a weight to every node in the middle layer ( 500 C 4 )).
- the output consists of a single node which reads out the similarity 500 C 3 between the 2 input songs, A and B.
- the neural network 500 C 5 can be trained with a random set of the pairs of music for which similarity data is available (for example in matrix S 1 ).
- FIG. 5 C 2 shows the process steps used for training network 500 C 5 :
- Step 1 Select a pair of music samples A and B.
- Step 2 Set the input layer values to the feature vectors of music samples A and B.
- Step 3 Transfer input layer values forward through the network to the output layer (output node, 500 C 3 ).
- Step 4 Compare the difference between the computed similarity value, 500 C 3 and the actual value (from matrix S 1 ).
- Step 5 Reverse feed the difference (error signal) through the network 500 C 5 and adjust weights accordingly.
- Step 6 Repeat until the network has achieved the desired performance.
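Steps 1-6 above can be sketched as a small feed-forward network trained by reverse-feeding the error (plain gradient descent). The layer sizes, learning rate, and toy similarity data standing in for matrix S 1 are assumptions, not values from the specification:

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 4                                      # feature-vector length per song
hidden = 8                                   # middle-layer size
W1 = rng.normal(0, 0.5, (2 * dim, hidden))   # input (songs A and B) -> middle
W2 = rng.normal(0, 0.5, (hidden, 1))         # middle -> single output node

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-in for matrix S1: identical pairs are fully similar (target 1.0),
# random pairs are dissimilar (target 0.0).
pairs, targets = [], []
for _ in range(200):
    a = rng.uniform(size=dim)
    pairs.append(np.concatenate([a, a])); targets.append(1.0)
    pairs.append(rng.uniform(size=2 * dim)); targets.append(0.0)
X, y = np.array(pairs), np.array(targets)    # Steps 1-2: pairs on input layer

def mse():
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2).ravel() - y) ** 2))

loss_before = mse()
for _ in range(500):                      # Step 6: repeat until trained
    h = sigmoid(X @ W1)                   # Step 3: feed forward to output
    out = sigmoid(h @ W2).ravel()
    err = out - y                         # Step 4: compare with actual value
    # Step 5: reverse feed the error signal and adjust weights
    g_out = (err * out * (1 - out))[:, None]
    g_h = (g_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ g_out) / len(y)
    W1 -= 0.5 * (X.T @ g_h) / len(y)
loss_after = mse()
print(loss_before > loss_after)
```

The training error decreases as the network absorbs the similarity data; a production system would add biases, validation pairs, and a stopping criterion.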
- Classification Trees Techniques disclosed in “Classification and Regression Trees” by L. Breiman, J. H. Friedman, R. A. Olshen & C. J. Stone (1984), published by Wadsworth, Belmont, CA, and incorporated herein by reference in its entirety, may also be used to calculate the foregoing weights and perform the similarity analysis.
- Classification trees define a hierarchical or recursive partition of a set based on the values of a set of variables. In the present case, the variables are the elements of plural feature vectors.
- a decision tree is a procedure for classifying music into categories according to their feature vector values. Expert pairwise data 403 A may be used to define a satisfactory decision tree and then the tree may be applied to a larger set of music. This method partitions music samples into mutually exclusive categories, wherein music samples within each category are considered similar.
- Hierarchical Clustering Techniques disclosed in “Multivariate Analysis: Methods and Applications” by William R. Dillon & Matthew Goldstein (1984), published by John Wiley & Sons, and “Multivariate Observations” by G. A. F. Seber (1984), published by John Wiley & Sons, both of which are incorporated herein by reference in their entirety, may also be used to calculate the foregoing weights and perform the similarity analysis.
- Hierarchical clustering methods produce a hierarchical tree structure for a set of data. These methods may be used to partition a music set into a set of similar clusters as follows:
- a hierarchical clustering algorithm assigns music samples to a cluster, wherein the cluster is based on the similarity of the feature vectors of plural music samples.
- Each cluster may belong to a higher level cluster, so that the top-level or root cluster contains all music samples.
- music samples are arranged in a hierarchy of clusters, each music sample being most similar to those songs in its most “local” or lowest level cluster and successively less similar to songs which belong to only the same higher level clusters.
- a function F may assign high similarity scores to pairs of music samples based on the lowest level of the tree structure that the samples share in common. For example, music samples which belong to the same lowest-level cluster are very similar, whereas songs which have no cluster in common except the root cluster are most dissimilar.
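The cluster-based similarity score can be computed directly from a hierarchy; the cluster names and song assignments below are hypothetical:

```python
# Each music sample is assigned a path of clusters from the root cluster down
# to its lowest-level cluster (hypothetical cluster ids for illustration).
cluster_paths = {
    "songA": ["root", "rock", "soft-rock"],
    "songB": ["root", "rock", "soft-rock"],
    "songC": ["root", "rock", "hard-rock"],
    "songD": ["root", "jazz", "bebop"],
}

def similarity(x, y):
    """Score = depth of the deepest cluster the two samples share.
    Samples in the same lowest-level cluster score highest; samples sharing
    only the root cluster score lowest."""
    shared = 0
    for cx, cy in zip(cluster_paths[x], cluster_paths[y]):
        if cx != cy:
            break
        shared += 1
    return shared

print(similarity("songA", "songB"), similarity("songA", "songC"),
      similarity("songA", "songD"))
```

Here songA and songB share their lowest-level cluster (score 3), songA and songC share only the mid-level cluster (score 2), and songA and songD share only the root (score 1).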
- Fuzzy Queries Techniques provided in “An Introduction to Fuzzy Logic Applications in Intelligent Systems” by R. R. Yager & Lotfi A. Zadeh (1992), published by Kluwer Academic Publishers, and incorporated herein by reference in their entirety, may also be used to calculate the foregoing weights and perform the similarity analysis. Fuzzy techniques essentially place graded or “soft” constraints on matching criteria rather than “hard” or Boolean constraints. A fuzzy approach is one in which the degree to which one piece of music is similar to another follows a continuous or graded function.
- Once the weights Wg, We, Wv, Wt and Wi are determined and function Fij is fit, the data can be used for comparing any pair of music. It is noteworthy that the weights can be changed dynamically if listener ratings for a specific music sample change over time. Further, weights can be varied based upon individual listeners or a group of listeners, and weights can be specified for plural spaces. The modeled attribute data is stored and can be searched to compare music based upon pre-defined attributes.
- FIG. 5 C 3 is a flow diagram showing various computerized process steps performed by Modeling Module 406 to process listener data and perform a similarity analysis.
- step S 505 A listener response data is obtained from R&D database 405 .
- step S 505 B a similarity matrix (S 1 ) is created.
- S 1 is based upon data collected in step S 501 C (FIG. 5 A).
- Matrix S 1 includes perceived similarity data of a music pair, illustrated for convenience as pair i,j.
- matrix S 1 includes ratings that illustrate similarity and/or dissimilarity between a pair of songs.
- Modeling Module 406 creates a matrix S that includes plural feature vector values as shown above. Thereafter, Modeling Module 406 performs a dimensional reduction step so as to reduce the number of dimensions in matrix S.
- a feature vector v for a set of music samples (V 1 ,V 2 , V 3 . . . Vn), where V 1 . . . Vn are based upon plural responses received in steps S 501 A and S 501 B.
- matrix S can be reduced to a smaller matrix S′, where S′ is an m×p matrix with m<n.
- S represents a set of p music samples in an n dimensional space and S′ represents the same set in an m dimensional space, where m<n.
- Various dimensional reduction techniques may be used, as described above.
- Vg=(Vg1, Vg2, . . . Vga)
- Ve=(Ve1, Ve2, . . . Veb)
- Vt=(Vt1, Vt2, . . . Vtc)
- Vv=(Vv1, Vv2, . . . Vvd)
- Vi=(Vi1, Vi2, . . . Vie)
- step S 505 E the process combines plural music spaces, i.e. genre space, vocal quality space, emotion space, sound quality space, instrument space and global space to fit the similarity matrix S 1 .
- a distance matrix D is calculated between the pair of songs in matrix S 1 .
- Distance between i and j piece of music may be calculated in more than one music space.
- One method of calculating distance is the Euclidean distance, illustrated as Dij, where
- Dij=SQRT[(Vi1−Vj1)^2+(Vi2−Vj2)^2+ . . . +(Vik−Vjk)^2], where
- Vi1, Vi2, . . . Vik are feature vector values for the ith song and specify the location of a music sample in a particular space.
- Distance matrix Dij is created for plural music spaces, and may be illustrated as Dg (Dg for genre space), De(for emotion space), Dv(for vocal space), Dt (for tempo space) and Di(for instrument space).
- a music pair may be represented by the function Fij, where Fij=Wg*Dg+We*De+Wv*Dv+Wt*Dt+Wi*Di, and
- Wg, We, Wv, Wt and Wi are individual weights allocated to individual music spaces.
- the plural weights Wg, We, Wv, Wt and Wi are calculated such that S 1 and Fij are at a minimum distance from each other. The discussion above describes how the plural weights may be calculated.
- step S 506 based upon the modeled data, production database 407 is created.
- the production database includes the set of weights calculated in step S 505 .
- a sample entry in the production database 407 may be stored as follows:
- Block I specifies column names for feature vectors, while Block II includes the actual values corresponding to the Block I column entries.
- the first entry, song_id is a unique identifier for each piece of music.
- Entries v 1 -v 54 refer to specific attributes of each piece of music.
- the last entry, release_year refers to the release year of the song.
- the following labels are used for v 1 -v 54 :
- step S 507 the process evaluates the similarity model created in step S 505 .
- a focus group of music listeners and experts will verify the similarity results by listening to music samples.
- Explicit feedback from users of the system is also used to modify the similarity model and to identify songs with poor similarity matches. All acceptable similarity matches are retained in production database 407 .
- step S 508 listeners and experts reevaluate all music samples that are rejected in step S 507 , and similarity data based upon listener response, similar to those in FIG. 7B, is obtained.
- step S 509 music samples compared in step S 508 are stored as matched sets in similarity database 407 A. It is noteworthy that the invention is not limited to a separate similarity database. Music sets obtained after step S 508 may be stored in the production database 407 , without limiting the scope of the invention.
- the present system solves this problem by providing plural music spaces that can locate music by content.
- various aspects and perceptual qualities of music are described by a plurality of feature vector values. Most of the feature vectors are defined by data acquired in process steps shown in FIG. 5 A.
- a multidimensional music space is created.
- a piece of music can be located based upon the co-ordinates that define specific music attributes.
- the plurality of feature vectors are divided into plural categories, for example, emotional quality vector, vocal quality vector, genre quality vector, ensemble vector and situational vector.
- a plurality of music spaces may be used to define and locate music based upon music content defined by plural feature vectors. Examples of such music spaces are genre space, emotional quality space, vocal quality space, and tempo space etc., as discussed below.
- Let X be a set containing elements {x 1 , x 2 , . . . }.
- Let f(xi,xj) be a real-valued function (where xi, xj are included in set X) which satisfies the following rules for any xi, xj, xk in X:
- a music space is a metric space defined by a given set of feature vectors.
- a combined music space is created based upon plural vectors such that a piece of music can be located within the combined music space with defined co-ordinates.
- the combined music space is created by providing certain weights to plural feature vectors.
- the weights for individual feature vectors may be calculated in a plurality of ways, as discussed above. Furthermore, the weights may be calculated based upon listener preferences.
- the combined music space is created based upon a listener's request and hence is dynamic in nature.
- a genre space is created based upon data collected and modeled in FIG. 5 A.
- the genre space is defined by a set of genre vectors, where the vector values are obtained from expert data collected in step S 501 A, according to questions 801 (FIG. 8 A 1 ). Based upon genre vector values, the location of a music piece may be obtained in the genre space. The distance between different music samples within the genre space indicates the similarity between the music samples with respect to genre.
- a voice quality and emotional quality space is created based upon data collected and modeled in FIG. 5 and listener responses to questions in 700 C and 700 B (FIGS. 7 A 1 and 7 A 2 ), respectively.
- the voice quality space determines the location of a piece of music in the vocal quality space.
- the voice quality space is defined by a set of feature vectors, where the feature vector values depend on listener response to questions in 700 C (FIG. 7 A 2 ). Based upon voice quality vector values the location of a music piece may be obtained in the voice quality vector space. The distance between different music samples within the voice quality space indicates the similarity between the music samples with respect to voice quality.
- the emotional quality space measures the emotional reaction to a particular piece of music.
- the emotional quality space is defined by a set of feature vectors (emotional quality vector), where the feature vector values are based upon listener responses to questions in 700 B (FIGS. 7 A 1 and 7 A 2 ). Based upon emotional quality vector values, a music piece may be located in the emotional quality space. The distance between different music samples within the emotional quality space indicates the similarity between the music samples with respect to emotional reaction evoked by a piece of music.
- a “tempo” space is created by feature vector(s) whose value depends upon the number of beats per minute and/or second.
- the number of beats may be obtained by collecting expert data or by using an algorithm(s). Details of such algorithms to collect tempo data may be obtained from “Tempo and beat analysis of acoustic music signals” by Eric D. Scheirer, Machine Listening Group, E15-401D, MIT Media Laboratory, Cambridge, Mass. 02139 (December 1996), incorporated herein by reference.
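A crude beats-per-minute estimate can be sketched by autocorrelating an energy envelope; this is not Scheirer's algorithm, merely an illustration using a synthetic 120-BPM pulse train and an assumed envelope sample rate:

```python
import numpy as np

sr = 100                      # envelope sample rate in Hz (illustrative)
bpm_true = 120
t = np.arange(0, 10, 1 / sr)  # 10 seconds of envelope
# Synthetic energy envelope with a short pulse on every beat (0.5 s apart)
env = ((t * bpm_true / 60) % 1.0 < 0.05).astype(float)

env = env - env.mean()
ac = np.correlate(env, env, mode="full")[len(env) - 1:]  # autocorrelation

# Search beat periods corresponding to 40-200 BPM
lags = np.arange(int(sr * 60 / 200), int(sr * 60 / 40))
period = lags[np.argmax(ac[lags])]        # lag with strongest periodicity
bpm_est = 60 * sr / period
print(round(bpm_est))
```

The strongest autocorrelation peak in the search range falls at the 0.5 s beat period, recovering 120 BPM; real audio would first need an onset-energy envelope extracted from the signal.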
- Step S 505 of FIG. 5 A Details of creating a similarity space are provided above in Step S 505 of FIG. 5 A.
- every piece of sampled music is located in a genre space, voice quality space, emotional quality space, tempo space and a generic similarity space.
- a combined music space is created real time based upon a listener's request for music.
- a piece of music has a location in the genre, vocal quality, emotional quality, and tempo space etc. Every space, including genre, voice quality, emotional quality, and tempo space is allocated a certain weight, wherein the value of the weight depends upon a user's preference and may be changed.
- a function defined by a weighted average of plural vectors provides a combined music space and assists in determining similar songs.
- the combined music space may be changed every time a listener provides a different request.
- An example of a combined music space that allows content based searching is given below:
- a first music sample is located at d1 in the genre space, d2 in the vocal quality space, d3 in the emotional quality space, d4 in the tempo space and d5 in the similarity space.
- The location of the first music sample is given by D, where D is equal to: D=W1*d1+W2*d2+W3*d3+W4*d4+W5*d5
- W 1 , W 2 , W 3 , W 4 and W 5 are weights allocated to different spaces and may be changed.
- W 1 , W 2 , W 3 , W 4 and W 5 are calculated by a process similar to that of step S 505 . (FIGS. 5 A and 5 C 3 ).
- The location of the second music sample is given by D′, where D′ is equal to: D′=W1′*d1′+W2′*d2′+W3′*d3′+W4′*d4′+W5′*d5′, d1′ . . . d5′ being the locations of the second music sample in the respective spaces.
- W 1 ′, W 2 ′, W 3 ′, W 4 ′ and W 5 ′ are weights allocated to different spaces and may be changed. Weights W 1 ′, W 2 ′, W 3 ′, W 4 ′ and W 5 ′ are calculated by a process similar to that of step S 505 . (FIGS. 5 A and FIG. 5 C 3 ).
- Comparing D and D′ compares the first and second music samples to each other. Details of comparing D and D′ are provided above in step S 505 of FIG. 5 A.
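The comparison of D and D′ can be sketched as follows; the weights and per-space locations are hypothetical, and each location is reduced to a scalar for brevity (in practice each di may itself be a vector within its space):

```python
weights = [0.3, 0.2, 0.2, 0.2, 0.1]        # W1..W5, user-adjustable

def combined_location(d):
    """D = W1*d1 + W2*d2 + W3*d3 + W4*d4 + W5*d5 (scalar sketch)."""
    return sum(w, * (1,))[0] if False else sum(w * x for w, x in zip(weights, d))

# Hypothetical locations d1..d5 of two samples in the five spaces
sample1 = [0.8, 0.4, 0.6, 0.5, 0.7]
sample2 = [0.7, 0.5, 0.6, 0.4, 0.6]

D, D_prime = combined_location(sample1), combined_location(sample2)
# Comparing D and D' compares the two samples: a small difference means
# the samples are close in the combined music space.
print(round(abs(D - D_prime), 3))
```

Changing the weights (for example, emphasizing the genre space for a genre-driven request) changes the combined space and therefore which samples rank as similar.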
- FIG. 5D shows sample representation of individual spaces, for example, genre space, emotion space, vocal quality space and sound space.
- FIG. 5D also shows the location of music samples A and B with respect to each other in specific spaces. It is noteworthy that FIG. 5D shows one way of presenting individual spaces and is merely illustrative. FIG. 5D does not limit the scope of the invention to the specific examples.
- a quality assurance system is provided so that only music listeners that provide accurate and consistent ratings are used for acquiring data in steps S 501 A, S 501 B and S 501 C (FIG. 5 A).
- the system uses plural techniques that evaluate music listener capabilities and consistency, including measuring “typicality”, “reliability” and “discrimination”.
- FIG. 5E shows process steps for calibrating a music listener.
- step S 500 A a set of music samples with plural questions (“calibration sample”) is provided to a music listener.
- Music samples with plural questions may be provided on-line via a computer connected to the Internet (Computer X, FIG. 3) or offline via CD's or audio tapes, etc.
- calibration music samples include music that has well known feature vector values or historical responses from other calibrated listeners.
- step S 500 B a music listener's responses to the plural questions are collected and stored.
- FIG. 5F illustrates an example of how collected data may be stored.
- Column 500 AA in FIG. 5F shows questions 1 to n that are asked for a specific music sample (Music Sample I), column 500 BB shows music listener responses (R 1 to Rn) to the questions in 500 AA, and historical responses R 1 h to Rnh, or a range of historical responses, are shown in column 500 CC.
- Historical standard deviations (σ1h to σnh) of music listener responses are stored in column 500 DD.
- σ1h is the standard deviation of the range of historical responses to question 1 for music sample I.
- FIG. 5F also shows a generic formula that may be used to calculate historical standard deviation values. Standard deviation values may be acquired from Acquired database 404 or R&D database 405 .
- FIG. 5F also shows Median values for historical responses stored in column 500 EE.
- median values for responses to question 1 may be based upon M historical responses, stored as R 1 h 1 , R 1 h 2 , R 1 h 3 , . . . R 1 hn′.
- the median value R 1 hmed for question 1 can then be determined.
- the historical responses are collected according to steps S 501 A, S 501 B and S 501 C (FIG. 5 A).
- the median values R 1 hmed to Rnhmed, as shown in column 500 EE may be obtained from Acquired database 404 and/or R&D database 405 by determining the median value of M responses for each of the n questions.
- Column 500 FF shows the time spent by a listener in listening to a sample and answering a question(s) associated with the sample.
- Column 500 GG shows the historical time spent by other listeners.
- Column 500 HH shows the mean values for the historical responses. For example, for question 1 , if the historical responses range from R 1 h 1 to R 1 hn′, then the mean response to question 1 is (R 1 h 1 +R 1 h 2 + . . . +R 1 hn′)/n′, stored as R 1 hm.
- FIG. 5F is illustrative and is not intended to limit the invention, as the data may be stored in a plurality of ways.
- step S 500 C music listener's responses are compared to historical responses. For example, music listener sample response R 1 for question 1 , music sample I, is compared with response R 1 h 1 . If R 1 −R 1 h 1 exceeds a threshold value Rth, then response R 1 is tagged as a “bad” response. Rth is based upon historical responses and is continuously refined as more data is collected from music listeners and experts.
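Step S 500 C can be sketched as a simple threshold check; the responses and the value of Rth below are hypothetical:

```python
Rth = 2.0   # threshold (in practice refined from historical data)

# Listener responses R1..Rn and reference historical responses R1h..Rnh
responses  = [3, 5, 2, 4]
historical = [3, 2, 2, 4]

# Tag any response that deviates from the historical response by more
# than the threshold Rth as a "bad" response (1-based question numbers).
bad = [i + 1 for i, (r, rh) in enumerate(zip(responses, historical))
       if abs(r - rh) > Rth]
print(bad)
```

Here only question 2 deviates by more than Rth and is tagged.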
- step S 500 D the process calculates standard deviation of music listener's responses.
- music listener standard deviation is calculated based upon responses R 1 to Rn for a specific sample.
- FIG. 5F provides the formula for calculating the standard deviation.
- step S 500 E the process compares music listener standard deviation to historical standard deviations.
- Historical standard deviation may be the mean of σ1h to σnh. For example, if the music listener standard deviation is Std(l) and the mean historical standard deviation for questions 1 to n is Std(h), then Std(l) is compared to Std(h); if the magnitude of the difference exceeds a threshold value, then the music listener may need to be trained with respect to music samples, questions and responses. Feedback is provided automatically on-line while the music listener is still listening to a sample.
- step S 500 F a music listener's Z score is calculated.
- The Z score for a question i is given by: (Xi−Mean(i))/Std(i), where Xi is the listener response to question i, Mean(i) is the historical mean for question i (column 500 HH, FIG. 5 F), and Std(i) is the historical standard deviation for question i.
- The Z score for each question is calculated, and thereafter the process calculates Σzi².
- If Σzi² exceeds a defined threshold value, then the music listener's ratings are questioned and/or tagged.
- The threshold value is again based upon historical data and may be refined as more data is collected.
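The Z-score computation of step S500F can be sketched as follows; the function names are illustrative, and the per-question historical means and standard deviations are assumed to be supplied from the historical data of FIG. 5F.

```python
def z_scores(responses, hist_means, hist_stds):
    """Per-question Z score: (Xi - Mean(i)) / Std(i)."""
    return [(x - m) / s for x, m, s in zip(responses, hist_means, hist_stds)]

def sum_sq_z(responses, hist_means, hist_stds):
    """Sum of squared Z scores; if this exceeds a defined threshold,
    the listener's ratings are questioned and/or tagged."""
    return sum(z * z for z in z_scores(responses, hist_means, hist_stds))
```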
- The Z score may also be calculated using historical median values instead of the mean values shown above (500EE, FIG. 5E).
- Steps S500C, S500D and S500F may all be performed simultaneously or may be performed selectively. For example, only S500C, or S500D, or S500E, or any other combination of the foregoing steps may be performed to adequately calibrate a listener.
- FIG. 5G shows process steps to evaluate the typicality of a music listener's responses, after the music listener is calibrated per FIG. 5E.
- In step S500G, a music listener is provided with more samples and questions regarding the music samples.
- The samples may be provided on-line via the Internet (or a private network), on CDs or audio tapes, etc.
- In step S500H, the process stores listener responses. Although listener responses are stored, some of the process steps shown below take place in real time while a listener is listening to music and answering questions.
- FIG. 5F illustrates stored listener responses and historical values.
- In step S500I, a music space is created based upon the music listener's responses to specific questions regarding a specific music sample.
- The listener-specific music space is created as described above in step S5004 (FIG. 5B).
- The listener responses shown in FIG. 5F, column 500BB, are used to create the music space.
- In step S500J, the music space for a listener is compared to the global space for a specific piece of music. Steps similar to those shown in S505 (FIG. 5A) may be used to create a global space for a specific piece of music. The space comparison is also similar to that described above in FIG. 5A (step S505). If a listener's response pattern for a music sample is farther from the global space than a pre-determined threshold value, then the listener may have to be retrained before the listener's responses are used in steps S501A, S501B and S501C (FIG. 5A).
- In step S500K, a music listener space (“People Space”) is created.
- The music listener space is based upon music listener responses to a set of music samples and a fixed set of questions. For example, as shown in FIG. 5H, a music listener provides responses Rs1 and Rs1′ to a first question for music samples I and II, respectively. Rs1 and Rs1′ are used to locate the listener in the People Space.
- A matrix (MP) may be formed with average listener responses to plural sets of music samples. Thus, for a set of listeners, matrix MP has questions as rows and listeners as columns. The ijth entry of the MP matrix is the jth listener's average response to the ith question. Thus each listener is located in a space of questions, where the location reflects the general pattern of the listener's responses to the questions.
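The MP matrix described above can be sketched as follows; the input layout (for each listener, a list of per-question response lists across the sample set) is an assumption made for illustration.

```python
def build_mp_matrix(responses_by_listener):
    """Build MP with questions as rows and listeners as columns.
    responses_by_listener[j][i] is the list of listener j's responses
    to question i across the music samples; entry (i, j) of MP is the
    jth listener's average response to the ith question."""
    n_questions = len(responses_by_listener[0])
    return [
        [sum(listener[i]) / len(listener[i]) for listener in responses_by_listener]
        for i in range(n_questions)
    ]
```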
- In step S500L, listener patterns are evaluated.
- Plural listeners can be located based upon the pattern of responses to specific questions regarding similar music. Music listeners whose responses are not typical will generally be located farther from other listeners. A pre-determined threshold may be used to determine whether a music listener is typical or not.
- In step S500M, the process calculates Σzi² for a plurality of questions, similar to process step S500F in FIG. 5E.
- Z score values, compared against a threshold, provide a measure of typicality for a music listener.
- Step S500L is conducted on-line while a listener is listening to music. Any feedback associated with deviant Z scores is provided to the listener in real time and on-line via the Internet or a private network.
- In step S500N, the process compares a music listener's response to historical responses, similar to step S500C in FIG. 5E. This step may be performed in real time and on-line. If the deviation of the listener's response from the historical responses exceeds a threshold, then the response may be tagged as “bad” and the listener may be provided feedback.
- In step S500O, the process calculates the listener response standard deviation, similar to step S500D in FIG. 5E.
- In step S500P, the process compares the listener standard deviation with the historical standard deviation, similar to step S500E in FIG. 5E. Again, this step may be performed in real time and on-line.
- In step S500Q, the process evaluates the total time spent by a listener on a specific question.
- An example of the time spent is shown as column 500FF in FIG. 5F.
- This step may also be conducted in real time and on-line while a listener is listening to music samples.
- In step S500R, the process calculates a Mahalanobis distance, as described below, for a specific user. The Mahalanobis distance is calculated for a specific feature vector. For each listener, the average response to specific questions for a similar set of music is recorded, and the responses are stored as a “listener profile”. The listener can hence be identified as a point in a multi-dimensional space created similarly to process step S500K.
- The Mahalanobis distance is the standardized distance from the center of the listener location points to the actual location of a listener (standardized by the probability density of a multivariate Gaussian distribution) and is used as a measure of typicality.
- The Mahalanobis distance is the multivariate equivalent of the Z score and is used similarly (step S500M); i.e., listeners with large Mahalanobis distances (exceeding a pre-determined threshold) are tagged as aberrant.
- The Mahalanobis distance is a multivariate way to standardize distances by a covariance matrix.
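The Mahalanobis distance of step S500R can be sketched as follows, assuming listener profiles are stored as rows of a numeric array; the function name and data layout are illustrative, not the patent's implementation.

```python
import numpy as np

def mahalanobis_distance(profile, all_profiles):
    """Standardized distance from the center of the listener location
    points to one listener's location, standardized by the covariance
    matrix; large values tag a listener as aberrant."""
    pts = np.asarray(all_profiles, dtype=float)
    center = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)            # covariance across profile dimensions
    diff = np.asarray(profile, dtype=float) - center
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```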
- The threshold values for the foregoing quality assurance processes are initially determined from data collected from music listeners and experts. However, the threshold values are dynamic and are periodically refined as more data is collected.
- The premise for measuring reliability is that a music listener is reliable if the listener's responses to the same or a similar music sample are consistent. Music listeners whose responses have been collected are provided with the same samples in random order. The responses are collected again and compared with the previous responses. If the variation between the responses exceeds a pre-determined threshold, then the music listeners are trained again.
- Discrimination evaluation identifies listeners who do not use the entire range of available responses for a question. For example, if a listener has to choose from five different options for a specific question and only chooses a few levels compared to historical responses, then the listener's responses will have low variance compared to the variance of the historical responses. Hence feedback is provided to the listener to make finer distinctions between responses and samples.
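The discrimination check can be sketched as a variance comparison. The 0.5 ratio is a placeholder assumption; the patent only says that such a listener's responses have low variance compared to the historical variance.

```python
def low_discrimination(responses, historical_variance, ratio=0.5):
    """Flag a listener whose response variance falls well below the
    historical variance, i.e. who uses only a narrow band of the
    available answer levels (the ratio threshold is assumed)."""
    n = len(responses)
    mean = sum(responses) / n
    variance = sum((r - mean) ** 2 for r in responses) / n
    return variance < ratio * historical_variance
```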
- FIG. 5I shows a block diagram of a quality assurance system for evaluating music listeners.
- Listener responses 500HH, the same as in FIG. 5F, are received and sent to the Acquired database 404 (or R&D database 405).
- Some listener responses, as shown above, are evaluated on-line (in real time via the Internet or a private network) by the On-Line testing module 500II; based upon the on-line evaluation, feedback 500MM is sent to the listener.
- Off-line evaluation is performed on some aspects of listener responses by an Off-Line testing module 500KK, and feedback 500LL is provided to listeners based upon the off-line evaluation.
- FIG. 9 is a block diagram of the overall system architecture that allows content based music searching, according to another aspect of the present invention.
- FIG. 9 shows a User Interface (UI) 901 that receives a music searcher's (“user”) request for music. A listener enters the request, based upon the listener's preferences.
- UI 901 is shown in FIG. 10 A.
- FIG. 10A shows a genre selector 100, an emotion selector 101, a vocal quality selector 102, an instrument selector 103 and a tempo selector 105.
- Genre selector 100 allows a listener to specify the level and number of descriptors that a listener desires to use.
- One such option is the use of a genre mixer 100 A as shown in FIG. 10 B.
- The genre mixer 100A includes a plurality of variable sliding rulers 108 that allow a listener to set a certain level 109 for a specific genre. For example, as shown in FIG. 10B, a listener may request music with defined levels of Rock, Jazz, Latin, Blues, Folk, etc. It is noteworthy that the present invention is not limited to using a genre mixer 100A, and other methods (e.g., a pull-down menu) may be used to indicate listener preferences for music genres. Currently a listener may select a specific level for the following genres:
- The invention is not limited to any particular number of genres; hence future genres may be added to the genre selector 100. Based upon listener selection, the current system uses genre as a filter to search for music, as described below.
- the emotion quality selector 101 enables a listener to specify the emotional quality for music.
- An emotional quality mixer 101A is shown in FIG. 10C that allows a listener to adjust emotion levels 111 for different emotions. For example, a listener may select a level of 40 for the upbeat emotion, 30 for relaxing, 25 for romantic, and zero for the rest. Listener selection of emotional quality determines another filter for the system to search for music based upon music content.
- The invention is not limited to any particular number of emotional qualities; hence future emotional qualities may be added to the emotional quality mixer 101A. Based upon listener selection, the current system uses emotional quality as a filter to search for music, as described below.
- a vocal quality selector 102 allows a listener to choose from a vocal ensemble, e.g., a female solo or a male solo.
- a vocal quality mixer 102 A as shown in FIG. 10D, also allows a listener to select from other possible ensembles, for example:
- Vocal quality selector 102 can also allow a listener to choose from various vocal quality levels that may describe qualitative aspects of the vocal components of music, for example:
- A vocal quality mixer similar to vocal quality mixer 102A may also be used to select various levels 113 of vocal quality with a sliding bar 112; for example, a listener may select a level of 50 for “smooth”, 25 for “sexy” and 25 for “great”. It is noteworthy that the invention is not limited to any particular number of vocal qualities; hence future vocal qualities may be added to the vocal quality selector 102 and vocal quality mixer 102A.
- An instrument selector 103 allows a listener to select a plurality of instruments, for example,
- An instrument mixer and tempo mixer similar to the voice quality mixer 102 A, emotion quality mixer 101 A and genre mixer 100 A may be used to select and/or vary the influence of various instruments and/or music beat.
- the instrument selector 103 and tempo selector 105 provide other filters for UI engine 902 to search for music based upon music content.
- a listener may also input standard search requests for an artist, title, label or album at the search selector 104 .
- Standard search requests generate Structured Query Language (SQL) calls for searching music.
- UI 901 as shown in FIG. 10A also provides a graphical illustration of a music space 106 .
- a listener may use the pointing device 14 or keyboard 13 (FIG. 1) to use the various options in display 107 , e.g., to view search results, play certain music selections, stop playing music etc.
- Appendix “A”, Section II, also provides a list of the filters that are used for content-based searching in the music space 106, according to the present invention.
- UI 901 is coupled to a UI engine 902 .
- a user's request is submitted to UI engine 902 that searches for a song set based upon a specific listener request.
- FIG. 11 shows a block diagram showing various components of UI engine 902 .
- FIG. 11 shows an XML parser 1001 that receives listener requests from UI 901. It is noteworthy that the invention is not limited to using an XML parser 1001; any other parser that can process UI 901's requests may be used.
- XML parser 1001 extracts calls from a listener request, and a two-step search is performed based upon the listener request. In the first step, SQL calls are used to search production database 407, as shown below and with reference to Appendix “A”:
- Vocal Ensemble — Filter 15 refers to field v2 of song_vectors.
- The SQL search provides a first set of songs. A second search then refines this first set of songs.
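The two-step search can be sketched with an in-memory database. The schema here is a simplified slice of the song_vectors table in Appendix “A” (v2 lead vocals, v13 saxophone, v37 mellow), and the sample rows, target values and threshold are illustrative assumptions, not the patent's actual data or code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table song_vectors (song_id integer, v2 real, v13 real, v37 real)")
conn.executemany("insert into song_vectors values (?, ?, ?, ?)",
                 [(1, 1, 1, 0.9), (2, 0, 1, 0.8), (3, 1, 0, 0.1)])

# Step 1: SQL call -- male solo lead vocals (v2 = 1) with a prominent
# saxophone (v13 = 1) yields the first set of songs.
first_set = conn.execute(
    "select song_id, v37 from song_vectors where v2 = 1 and v13 = 1").fetchall()

# Step 2: refine the first set in the emotion space -- keep songs whose
# 'mellow' level (v37) is close to the requested level.
target_mellow, threshold = 1.0, 0.2
refined = [sid for sid, v37 in first_set if abs(v37 - target_mellow) <= threshold]
```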
- The inferential engine 1003 interacts with production database 407 and performs a second search for songs in the genre space, emotional quality space and, if necessary, the combined space.
- The refined search is based upon the similarity modeling discussed above with respect to Modeling Module 406 (FIG. 5A, step S505). Results of the refined search are presented as a music set 1002 to the user.
- FIG. 9 shows UI engine 902 coupled to the production database 407 , similarity database 407 A and a user log database 903 .
- User log database 903 is populated by tracking a user's music listening habits and the websites that a listener may visit while listening to a particular type of music.
- a listener profile is created that can be used for selective advertising and marketing purposes.
- Data collected in user log database 903 may also be used to selectively provide music to listeners based upon collected user profile data and music listening habits.
- FIG. 9 also shows Research database 405 linked to similarity Modeling Module 406 that is linked to production database 407 , similarity database 407 A and user log database 903 .
- FIG. 12 shows computer executable process steps that allow a listener to search for music based upon music attributes and content.
- The concept of content-based searching, as illustrated below, is embodied in a Music Query Language (“MQL”).
- MQL Music Query Language
- a listener enters a request to search for music in UI 901 .
- An example of a search request may be to search for music that is mellow, has a strong beat, is a 50% blues and 50% country blend, includes a male vocalist with a powerful voice, and features the saxophone as the most important instrument.
- The request has the following parameters: “mellow”, “strong beat”, “50% Blues and 50% Country”, “male vocalist”, “powerful voice” and “saxophone”. It is noteworthy that the foregoing example is merely illustrative, and the invention is not limited to it.
- In step S1202, UI engine 902 receives and parses the listener request.
- XML parser 1001 parses the listener request.
- UI engine 902 acquires a first song set from production database 407.
- The first song set is acquired based upon SQL calls. Music can be searched by “Song Title”, “Album Title”, name of the artist, tempo, or instruments, as shown above and in Appendix “A”.
- Inferential engine 1003 searches for music using selection algorithms that allow searching in the emotional, genre, and/or combined music spaces.
- The inferential engine search is based upon data stored in production database 407 and the similarity modeling principles used by Modeling Module 406 (FIG. 5A, step S505). Based upon the search request above, one technique that may be used is illustrated below:
- (a) Locate the listener's request at a location Ls in the music space; (b) determine all the songs that are within a certain distance from location Ls.
- A predetermined threshold may be used to find the songs within a certain weighted Euclidean distance; for example, choose all songs that are within distance “X” from Ls, where X is a predetermined threshold number.
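The distance filter above can be sketched as a weighted Euclidean distance check; the function name, data layout and per-dimension weights are illustrative assumptions.

```python
import math

def songs_within(ls, songs, weights, x):
    """Return the ids of all songs whose weighted Euclidean distance
    from the request location Ls is at most the threshold X."""
    def dist(a, b):
        return math.sqrt(sum(w * (p - q) ** 2 for w, p, q in zip(weights, a, b)))
    return [song_id for song_id, loc in songs.items() if dist(ls, loc) <= x]
```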
- In step S1205, the process acquires all music that is similar to the listener request from similarity database 407A. Generally, similarity database results will be acquired if the step S1204 results are inaccurate.
- In step S1206, the process presents a music set to the user.
- UI engine 902 provides the music set to UI 901.
- The music set includes the songs obtained by inferential engine 1003 and the songs obtained from similarity database 407A.
- FIG. 13 shows a flow chart of an overall system that allows content-based searching for aesthetic commodities.
- In step S1301, the process collects data that can define a plurality of attributes of a specific aesthetic commodity or a combination of aesthetic commodities. Data may be acquired as shown above in FIG. 5A. Based upon the collected data, a plurality of feature vectors can be defined, as described above.
- In step S1302, the process models the data acquired in step S1301.
- The modeling analysis is similar to that shown above and in FIG. 5C3.
- The modeling analysis also creates plural metric spaces similar to the music spaces shown above and in FIG. 5D.
- In step S1303, the process provides a user interface that may be similar to the user interface shown in FIG. 10A.
- The user interface allows a user to define a search criterion based upon attributes of a specific aesthetic commodity.
- In step S1304, the process performs a search for an aesthetic commodity, similar to the search performed in steps S1202-S1205 (FIG. 12).
- One advantage of the present invention is that feature vectors can describe music content. This assists in creating a music space for various attributes of music.
- Another advantage of the present invention is that, since the feature vectors define music attributes, music can be searched based upon music content.
- Yet another advantage of the present invention is that any aesthetic commodity may be described by feature vectors and searched based upon content.
Data Block I. |
song_id | v1 | v2 | v3 | v4 | v5 | v6 | v7 | v8 | v9 | v10 |
v11 | v12 | v13 | v14 | v15 | v16 | v17 | v18 | v19 | v20 | v21 | |
v22 | v23 | v24 | v25 | v26 | v27 | v28 | v29 | v30 | v31 | v32 | |
v33 | v34 | v35 | v36 | v37 | v38 | v39 | v40 | v41 | v42 | v43 | |
v44 | v45 | v46 | v47 | v48 | v49 | v50 | v51 | v52 | v53 | v54 |
release_year |
Data Block II. |
6319 | 0.663043 | 1.000000 | NULL | 0.000000 | 1.000000 | 1.000000 |
1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | ||
0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | ||
0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | ||
0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | ||
1.000000 | 0.348485 | 0.560606 | 0.424242 | 0.409091 | ||
0.560606 | 0.530303 | 0.636364 | 0.590909 | 0.136364 | ||
0.166667 | 0.242424 | 0.181818 | 0.196970 | −0.080946 | ||
0.045888 | −0.132495 | 0.029958 | 0.009163 | 0.008496 | |
−0.000661 | 0.655467 | 1.317940 | 0.604017 | 0.000000 | |
0.000000 | 0.000000 | 0.000000 | 1994 | |||
6316 | 0.315217 | 1.000000 | NULL | 0.000000 | 1.000000 | |
1.000000 | 1.000000 | 0.000000 | 0.000000 | 0.000000 | ||
1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | ||
0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | ||
0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | ||
0.000000 | 1.000000 | 0.370370 | 0.425926 | 0.444444 | ||
0.296296 | 0.351852 | 0.444444 | 0.518519 | 0.481481 | ||
0.314815 | 0.259259 | 0.333333 | 0.370370 | 0.351852 | ||
0.175593 | 0.099421 | 0.026434 | 0.028079 | −0.041860 | |
−0.033818 | 0.028811 | 1.380721 | 0.924552 | 0.149940 | |
0.000000 | 0.000000 | 0.000000 | 0.000000 | 1990 | ||
NAME | Column Name | ||
Tempo | v1 | ||
Lead Vocals | v2 | ||
Focus Background Vocals | v3 | ||
Acoustic Guitar | v4 | ||
Electric Guitar | v5 | ||
Bass | v6 | ||
Drums | v7 | ||
Harmonica | v8 | ||
Organ | v9 | ||
Piano | v10 | ||
Synthesizer | v11 | ||
Horn | v12 | ||
Saxophone | v13 | ||
Strings | v14 | ||
Alternative | v15 | ||
Blues | v16 | ||
Country | v17 | ||
Electronic/Dance | v18 | ||
Folk | v19 | ||
Gospel | v20 | ||
Jazz | v21 | ||
Latin | v22 | ||
New Age | v23 | ||
R&B/Soul | v24 | ||
Rap/Hip-Hop | v25 | ||
Reggae | v26 | ||
Rock | v27 | ||
Smooth Voice | v28 | ||
Soulful Voice | v29 | ||
Sexy Voice | v30 | ||
Great Voice | v31 | ||
Powerful Voice | v32 | ||
Intense | v33 | ||
Upbeat | v34 | ||
Aggressive | v35 | ||
Relaxing | v36 | ||
Mellow | v37 | ||
Sad | v38 | ||
Romantic | v39 | ||
Broken-hearted | v40 | ||
Coord1 | v41 | ||
Coord2 | v42 | ||
Coord3 | v43 | ||
Coord4 | v44 | ||
Coord5 | v45 | ||
Coord6 | v46 | ||
Coord7 | v47 | ||
Coord8 | v48 | ||
Coord9 | v49 | ||
Coord10 | v50 | ||
Parent | v51 | ||
Level | v52 | ||
ClustVal | v53 | ||
ClustNSong | v54 | ||
Year | v55 | ||
APPENDIX A | |
I. | |
table song_vectors (
    song_id double NOT NULL PRIMARY KEY,
    v1 float,    # tempo (continuous)
    v2 float,    # lead vocal type (integer) (0-8)
    v3 float,    # focus on background vocals? (bool) (0-1)

    #***** boolean instrument filters below *****
    v4 float,    # prominent acoustic guitar (bool) (0-1)
    v5 float,    # prominent electric guitar (bool) (0-1)
    v6 float,    # prominent bass (bool) (0-1)
    v7 float,    # prominent drums (bool) (0-1)
    v8 float,    # prominent harmonica (bool) (0-1)
    v9 float,    # prominent organ (bool) (0-1)
    v10 float,   # prominent piano (bool) (0-1)
    v11 float,   # prominent synthesizer (bool) (0-1)
    v12 float,   # prominent horn (bool) (0-1)
    v13 float,   # prominent saxophone (bool) (0-1)
    v14 float,   # prominent strings (bool) (0-1)

    #***** continuous genre mixer filters below *****
    #      these are subject to change
    v15 float,   # Alternative (continuous)
    v16 float,   # Blues (continuous)
    v17 float,   # Country (continuous)
    v18 float,   # Electronic/Dance (continuous)
    v19 float,   # Folk (continuous)
    v20 float,   # Gospel (continuous)
    v21 float,   # Jazz (continuous)
    v22 float,   # Latin (continuous)
    v23 float,   # New Age (continuous)
    v24 float,   # R&B/Soul (continuous)
    v25 float,   # Rap/Hip-Hop (continuous)
    v26 float,   # Reggae (continuous)
    v27 float,   # Rock (continuous)

    #***** continuous vocal parameters, subject to change *****
    v28 float,   # Smooth Voice (continuous)
    v29 float,   # Soulful Voice (continuous)
    v30 float,   # Sexy Voice (continuous)
    v31 float,   # Great Voice (continuous)
    v32 float,   # Powerful Voice (continuous)

    #***** continuous emotion parameters *****
    v33 float,   # Intense
    v34 float,   # Upbeat
    v35 float,   # Aggressive
    v36 float,   # Relaxing
    v37 float,   # Mellow
    v38 float,   # Sad
    v39 float,   # Romantic
    v40 float,   # Broken-hearted

    #***** continuous coordinate parameters *****
    v41 float,   # coordinate 1
    v42 float,   # coordinate 2
    v43 float,   # coordinate 3
    v44 float,   # coordinate 4
    v45 float,   # coordinate 5
    v46 float,   # coordinate 6
    v47 float,   # coordinate 7
    v48 float,   # coordinate 8
    v49 float,   # coordinate 9
    v50 float,   # coordinate 10

    #***** cluster related stuff *****
    v51 int,     # uid of parent song
    v52 int,     # level of song (if it's a std candle song);
                 # will be -1 if it's a normal leaf song
    v53 float,   # continuous quantitative filter measurement
    v54 int,     # number of songs in the cluster represented by this song
    v55 int      # release year
)
II. Filter Definitions

Filter Structure:

<filter>
    <uid>5</uid>
    <value>.3</value>*
    <direction>3.14159</direction>*
    <rangelo>0</rangelo>*
    <rangehi>6.28318</rangehi>*
    <booleanlist>0 2 4 7 9</booleanlist>*
    <booleantype>0</booleantype>*
    <booleanstring>(1&&5) || (3&&8) && !(6||3)</booleanstring>* **
</filter>
* these fields are optional depending on the filter |
** this generalized boolean query mechanism is subject to change
List of Filters/controls with their corresponding fields: |
FilterName / index list | uid | value | direction | rangelo | rangehi | boolean
Genre Mixer Parameters: (uid's 0-99) |
Alternative | 0 | 0-1 (continuous) | — | — | — | — |
Blues | 1 | 0-1 (continuous) | — | — | — | — |
Country | 2 | 0-1 (continuous) | — | — | — | — |
Electronic/Dance | 3 | 0-1 (continuous) | — | — | — | — |
Folk | 4 | 0-1 (continuous) | — | — | — | — |
Gospel | 5 | 0-1 (continuous) | — | — | — | — |
Jazz | 6 | 0-1 (continuous) | — | — | — | — |
Latin | 7 | 0-1 (continuous) | — | — | — | — |
New Age | 8 | 0-1 (continuous) | — | — | — | — |
R&B/Soul | 9 | 0-1 (continuous) | — | — | — | — |
Rap/Hip-Hop | 10 | 0-1 (continuous) | — | — | — | — |
Reggae | 11 | 0-1 (continuous) | — | — | — | — |
Rock | 12 | 0-1 (continuous) | — | — | — | — |
Vocal Quality (uid's 200-299) |
Lead Vocals | 200 | — | — | — | — | 0-8 (int)
**note: For Lead Vocals, the meaning of the values is the following:
0 = female solo, 1 = male solo, 2 = female duet, 3 = male duet, 4 = mixed duet, 5 = | |
female group, 6 = male group, 7 = mixed group, 8 = instrumental. | |
The <booleantype> parameter should be frozen at 1 for ‘or.’
Thus a typical XML filter structure for this parameter may be: |
<filter> |
<uid>15</uid> | |
<booleanlist>0 2 4</booleanlist> | |
<booleantype>1</booleantype> |
</filter> |
which means, provide songs that are either ‘female solo vocals,’ ‘female duet vocals,’ or | |
‘mixed duet vocals.’ | |
***note: an additional field is included in the XML filter structure, <booleanstring> to | |
provide more powerful, arbitrary combinations of boolean values and operators. |
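Parsing a filter structure like the one above can be sketched with a standard XML parser; the patent does not specify the implementation of XML parser 1001, so the field handling below is only illustrative.

```python
import xml.etree.ElementTree as ET

doc = """<filter>
  <uid>15</uid>
  <booleanlist>0 2 4</booleanlist>
  <booleantype>1</booleantype>
</filter>"""

root = ET.fromstring(doc)
uid = int(root.findtext("uid"))                              # which filter (15 = vocal ensemble)
values = [int(v) for v in root.findtext("booleanlist").split()]
or_mode = root.findtext("booleantype") == "1"                # 1 means 'or' the listed values
```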
Vocal Mixer Parameters (uid's 300-399)

Smooth Voice | 300 | 0-1 (continuous) | — | — | — | —
Soulful Voice | 301 | 0-1 (continuous) | — | — | — | —
Sexy Voice | 302 | 0-1 (continuous) | — | — | — | —
Great Voice | 303 | 0-1 (continuous) | — | — | — | —
Powerful Voice | 304 | 0-1 (continuous) | — | — | — | —
Vocal Circumplex | 305 | 0-1 (continuous) | 0-2PI (continuous) | — | — | 0-10 int
*the circumplex arranges the previous 5 parameters on a circle. |
Instrument Parameters (uid's 400-499) |
Acoustic Guitar | 400 | 0-1 (boolean) | — | — | — | — |
Electric Guitar | 401 | 0-1 (boolean) | — | — | — | — |
Bass | 402 | 0-1 (boolean) | — | — | — | — |
Drums | 403 | 0-1 (boolean) | — | — | — | — |
Harmonica | 404 | 0-1 (boolean) | — | — | — | — |
Organ | 405 | 0-1 (boolean) | — | — | — | — |
Piano | 406 | 0-1 (boolean) | — | — | — | — |
Synthesizer | 407 | 0-1 (boolean) | — | — | — | — |
Horn | 408 | 0-1 (boolean) | — | — | — | — |
Saxophone | 409 | 0-1 (boolean) | — | — | — | — |
Strings | 410 | 0-1 (boolean) | — | — | — | — |
Emotion Mixer Parameters (uid's 500-599) |
Intense | 500 | 0-1 (continuous) | — | — | — | — |
Upbeat | 501 | 0-1 (continuous) | — | — | — | — |
Aggressive | 502 | 0-1 (continuous) | — | — | — | — |
Relaxing | 503 | 0-1 (continuous) | — | — | — | — |
Mellow | 504 | 0-1 (continuous) | — | — | — | — |
Sad | 505 | 0-1 (continuous) | — | — | — | — |
Romantic | 506 | 0-1 (continuous) | — | — | — | — |
Broken-hearted | 507 | 0-1 (continuous) | — | — | — | — |
III. Relevant Tables in the Production Database |
table songs (
    uid double NOT NULL PRIMARY KEY,
    created datetime,
    song_title varchar(255),
    artist varchar(255),
    genre double,
    album_title varchar(255),
    release_year int,    # this is used for the timeline filter
    parent double        # parent song, null if we're the highest in this genre
)
create table filters (
    uid int NOT NULL PRIMARY KEY,
    name varchar(255),
    column_name varchar(18) NOT NULL,    # maps to columns in song_vectors
    type int    # TBD, probably used for whether this is an SQL or other param
                # for now, 0=SQL only, 1=attrvector param for Matlab
)\g
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/533,045 US6539395B1 (en) | 2000-03-22 | 2000-03-22 | Method for creating a database for comparing music |
Publications (1)
Publication Number | Publication Date |
---|---|
US6539395B1 true US6539395B1 (en) | 2003-03-25 |
Family
ID=24124229
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/533,045 Expired - Lifetime US6539395B1 (en) | 2000-03-22 | 2000-03-22 | Method for creating a database for comparing music |
Country Status (1)
Country | Link |
---|---|
US (1) | US6539395B1 (en) |
US20060143190A1 (en) * | 2003-02-26 | 2006-06-29 | Haitsma Jaap A | Handling of digital silence in audio fingerprinting |
US20060137516A1 (en) * | 2004-12-24 | 2006-06-29 | Samsung Electronics Co., Ltd. | Sound searcher for finding sound media data of specific pattern type and method for operating the same |
US20060190450A1 (en) * | 2003-09-23 | 2006-08-24 | Predixis Corporation | Audio fingerprinting system and method |
US7107254B1 (en) * | 2001-05-07 | 2006-09-12 | Microsoft Corporation | Probablistic models and methods for combining multiple content classifiers |
US20060212478A1 (en) * | 2005-03-21 | 2006-09-21 | Microsoft Corporation | Methods and systems for generating a subgroup of one or more media items from a library of media items |
US20060212149A1 (en) * | 2004-08-13 | 2006-09-21 | Hicken Wendell T | Distributed system and method for intelligent data analysis |
US20060217828A1 (en) * | 2002-10-23 | 2006-09-28 | Hicken Wendell T | Music searching system and method |
US20060218292A1 (en) * | 2001-05-09 | 2006-09-28 | Woodward Mark L | Method, apparatus and program product for media identification and tracking associated user preferences |
US20060218187A1 (en) * | 2005-03-25 | 2006-09-28 | Microsoft Corporation | Methods, systems, and computer-readable media for generating an ordered list of one or more media items |
US20060224260A1 (en) * | 2005-03-04 | 2006-10-05 | Hicken Wendell T | Scan shuffle for building playlists |
US20060230065A1 (en) * | 2005-04-06 | 2006-10-12 | Microsoft Corporation | Methods, systems, and computer-readable media for generating a suggested list of media items based upon a seed |
US20060242198A1 (en) * | 2005-04-22 | 2006-10-26 | Microsoft Corporation | Methods, computer-readable media, and data structures for building an authoritative database of digital audio identifier elements and identifying media items |
US20060239254A1 (en) * | 1998-12-08 | 2006-10-26 | Nomadix, Inc. | Systems and Methods for Providing Dynamic Network Authorization, Authentication and Accounting |
US20060265349A1 (en) * | 2005-05-23 | 2006-11-23 | Hicken Wendell T | Sharing music essence in a recommendation system |
US20060288041A1 (en) * | 2005-06-20 | 2006-12-21 | Microsoft Corporation | Providing community-based media item ratings to users |
US20070016599A1 (en) * | 2005-07-15 | 2007-01-18 | Microsoft Corporation | User interface for establishing a filtering engine |
US20070038672A1 (en) * | 2005-08-11 | 2007-02-15 | Microsoft Corporation | Single action media playlist generation |
US7194477B1 (en) * | 2001-06-29 | 2007-03-20 | Revenue Science, Inc. | Optimized a priori techniques |
US20070112940A1 (en) * | 2005-10-26 | 2007-05-17 | Sony Corporation | Reproducing apparatus, correlated information notifying method, and correlated information notifying program |
US7228280B1 (en) | 1997-04-15 | 2007-06-05 | Gracenote, Inc. | Finding database match for file based on file characteristics |
US20070136221A1 (en) * | 2005-03-30 | 2007-06-14 | Peter Sweeney | System, Method and Computer Program for Facet Analysis |
US20070168388A1 (en) * | 2005-12-30 | 2007-07-19 | Microsoft Corporation | Media discovery and curation of playlists |
US7277766B1 (en) * | 2000-10-24 | 2007-10-02 | Moodlogic, Inc. | Method and system for analyzing digital audio files |
US7281034B1 (en) | 2000-01-24 | 2007-10-09 | Friskit, Inc. | System and method for media playback over a network using links that contain control signals and commands |
US20070240557A1 (en) * | 2006-04-12 | 2007-10-18 | Whitman Brian A | Understanding Music |
US20070270667A1 (en) * | 2004-11-03 | 2007-11-22 | Andreas Coppi | Musical personal trainer |
US20070297292A1 (en) * | 2006-06-21 | 2007-12-27 | Nokia Corporation | Method, computer program product and device providing variable alarm noises |
US20080046429A1 (en) * | 2006-08-16 | 2008-02-21 | Yahoo! Inc. | System and method for hierarchical segmentation of websites by topic |
US7343553B1 (en) * | 2000-05-19 | 2008-03-11 | Evan John Kaye | Voice clip identification method |
US20080133696A1 (en) * | 2006-12-04 | 2008-06-05 | Hanebeck Hanns-Christian Leemo | Personal multi-media playing system |
US20080162468A1 (en) * | 2006-12-19 | 2008-07-03 | Teravolt Gbr | Method of and apparatus for selecting characterisable datasets |
US20080168022A1 (en) * | 2007-01-05 | 2008-07-10 | Harman International Industries, Incorporated | Heuristic organization and playback system |
US20080195654A1 (en) * | 2001-08-20 | 2008-08-14 | Microsoft Corporation | System and methods for providing adaptive media property classification |
US20080201370A1 (en) * | 2006-09-04 | 2008-08-21 | Sony Deutschland Gmbh | Method and device for mood detection |
US20080228744A1 (en) * | 2007-03-12 | 2008-09-18 | Desbiens Jocelyn | Method and a system for automatic evaluation of digital files |
US20080229910A1 (en) * | 2007-03-22 | 2008-09-25 | Yamaha Corporation | Database constructing apparatus and method |
US20080235283A1 (en) * | 2007-03-21 | 2008-09-25 | The Regents Of The University Of California | Generating audio annotations for search and retrieval |
US20080256042A1 (en) * | 2007-04-10 | 2008-10-16 | Brian Whitman | Automatically Acquiring Acoustic and Cultural Information About Music |
US20080256106A1 (en) * | 2007-04-10 | 2008-10-16 | Brian Whitman | Determining the Similarity of Music Using Cultural and Acoustic Information |
US20090019996A1 (en) * | 2007-07-17 | 2009-01-22 | Yamaha Corporation | Music piece processing apparatus and method |
US7551889B2 (en) | 2004-06-30 | 2009-06-23 | Nokia Corporation | Method and apparatus for transmission and receipt of digital data in an analog signal |
US20090228796A1 (en) * | 2008-03-05 | 2009-09-10 | Sony Corporation | Method and device for personalizing a multimedia application |
US20090234888A1 (en) * | 2008-03-17 | 2009-09-17 | Disney Enterprises, Inc. | Method and system for producing a mood guided media playlist |
US20090231964A1 (en) * | 2006-06-21 | 2009-09-17 | Nokia Corporation | Variable alarm sounds |
US20090259690A1 (en) * | 2004-12-30 | 2009-10-15 | All Media Guide, Llc | Methods and apparatus for audio recognitiion |
US20090277322A1 (en) * | 2008-05-07 | 2009-11-12 | Microsoft Corporation | Scalable Music Recommendation by Search |
US20090281906A1 (en) * | 2008-05-07 | 2009-11-12 | Microsoft Corporation | Music Recommendation using Emotional Allocation Modeling |
US20100036802A1 (en) * | 2008-08-05 | 2010-02-11 | Setsuo Tsuruta | Repetitive fusion search method for search system |
US20100049766A1 (en) * | 2006-08-31 | 2010-02-25 | Peter Sweeney | System, Method, and Computer Program for a Consumer Defined Information Architecture |
US20100057664A1 (en) * | 2008-08-29 | 2010-03-04 | Peter Sweeney | Systems and methods for semantic concept definition and semantic concept relationship synthesis utilizing existing domain definitions |
US20100100826A1 (en) * | 2008-10-17 | 2010-04-22 | Louis Hawthorne | System and method for content customization based on user profile |
US20100106267A1 (en) * | 2008-10-22 | 2010-04-29 | Pierre R. Schowb | Music recording comparison engine |
US20100107075A1 (en) * | 2008-10-17 | 2010-04-29 | Louis Hawthorne | System and method for content customization based on emotional state of the user |
US7761423B1 (en) * | 2005-10-11 | 2010-07-20 | OneSpot, Inc. | System and method for indexing a network of interrelated elements |
US20100217755A1 (en) * | 2007-10-04 | 2010-08-26 | Koninklijke Philips Electronics N.V. | Classifying a set of content items |
US20100235307A1 (en) * | 2008-05-01 | 2010-09-16 | Peter Sweeney | Method, system, and computer program for user-driven dynamic generation of semantic networks and media synthesis |
US7827110B1 (en) | 2003-11-03 | 2010-11-02 | Wieder James W | Marketing compositions by using a customized sequence of compositions |
US20100281009A1 (en) * | 2006-07-31 | 2010-11-04 | Microsoft Corporation | Hierarchical conditional random fields for web extraction |
US20100279825A1 (en) * | 2006-09-07 | 2010-11-04 | Nike, Inc. | Athletic Performance Sensing and/or Tracking Systems and Methods |
US20100318586A1 (en) * | 2009-06-11 | 2010-12-16 | All Media Guide, Llc | Managing metadata for occurrences of a recording |
US20100325135A1 (en) * | 2009-06-23 | 2010-12-23 | Gracenote, Inc. | Methods and apparatus for determining a mood profile associated with media data |
WO2010146231A1 (en) * | 2009-06-18 | 2010-12-23 | Nokia Corporation | Method and apparatus for classifying content |
US20110016102A1 (en) * | 2009-07-20 | 2011-01-20 | Louis Hawthorne | System and method for identifying and providing user-specific psychoactive content |
US7884274B1 (en) | 2003-11-03 | 2011-02-08 | Wieder James W | Adaptive personalized music and entertainment |
US7890374B1 (en) | 2000-10-24 | 2011-02-15 | Rovi Technologies Corporation | System and method for presenting music to consumers |
US20110041154A1 (en) * | 2009-08-14 | 2011-02-17 | All Media Guide, Llc | Content Recognition and Synchronization on a Television or Consumer Electronics Device |
US20110060645A1 (en) * | 2009-09-08 | 2011-03-10 | Peter Sweeney | Synthesizing messaging using context provided by consumers |
US20110060644A1 (en) * | 2009-09-08 | 2011-03-10 | Peter Sweeney | Synthesizing messaging using context provided by consumers |
US20110060794A1 (en) * | 2009-09-08 | 2011-03-10 | Peter Sweeney | Synthesizing messaging using context provided by consumers |
US20110078020A1 (en) * | 2009-09-30 | 2011-03-31 | Lajoie Dan | Systems and methods for identifying popular audio assets |
US20110078729A1 (en) * | 2009-09-30 | 2011-03-31 | Lajoie Dan | Systems and methods for identifying audio content using an interactive media guidance application |
US20110112994A1 (en) * | 2007-07-31 | 2011-05-12 | National Institute Of Advanced Industrial Science And Technology | Musical piece recommendation system, musical piece recommendation method, and musical piece recommendation computer program |
US20110126114A1 (en) * | 2007-07-06 | 2011-05-26 | Martin Keith D | Intelligent Music Track Selection in a Networked Environment |
US20110145256A1 (en) * | 2009-12-10 | 2011-06-16 | Harris Corporation | Video processing system providing overlay of selected geospatially-tagged metadata relating to a geolocation outside viewable area and related methods |
US20110154197A1 (en) * | 2009-12-18 | 2011-06-23 | Louis Hawthorne | System and method for algorithmic movie generation based on audio/video synchronization |
US20110173185A1 (en) * | 2010-01-13 | 2011-07-14 | Rovi Technologies Corporation | Multi-stage lookup for rolling audio recognition |
EP2410444A2 (en) * | 2010-07-21 | 2012-01-25 | Magix AG | System and method for dynamic generation of individualized playlists according to user selection of musical features |
US8156246B2 (en) | 1998-12-08 | 2012-04-10 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US8190708B1 (en) * | 1999-10-22 | 2012-05-29 | Nomadix, Inc. | Gateway device having an XML interface and associated method |
US8195734B1 (en) | 2006-11-27 | 2012-06-05 | The Research Foundation Of State University Of New York | Combining multiple clusterings by soft correspondence |
US8204883B1 (en) * | 2008-04-17 | 2012-06-19 | Amazon Technologies, Inc. | Systems and methods of determining genre information |
US20120296776A1 (en) * | 2011-05-20 | 2012-11-22 | Microsoft Corporation | Adaptive interactive search |
US20120294457A1 (en) * | 2011-05-17 | 2012-11-22 | Fender Musical Instruments Corporation | Audio System and Method of Using Adaptive Intelligence to Distinguish Information Content of Audio Signals and Control Signal Processing Function |
US8326584B1 (en) * | 1999-09-14 | 2012-12-04 | Gracenote, Inc. | Music searching methods based on human perception |
US20130039584A1 (en) * | 2011-08-11 | 2013-02-14 | Oztan Harmanci | Method and apparatus for detecting near-duplicate images using content adaptive hash lookups |
US8396800B1 (en) | 2003-11-03 | 2013-03-12 | James W. Wieder | Adaptive personalized music and entertainment |
US20130179439A1 (en) * | 2001-05-16 | 2013-07-11 | Pandora Media, Inc. | Methods and Systems for Utilizing Contextual Feedback to Generate and Modify Playlists |
US8613053B2 (en) | 1998-12-08 | 2013-12-17 | Nomadix, Inc. | System and method for authorizing a portable communication device |
US8676732B2 (en) | 2008-05-01 | 2014-03-18 | Primal Fusion Inc. | Methods and apparatus for providing information of interest to one or more users |
US8751957B1 (en) * | 2000-11-22 | 2014-06-10 | Pace Micro Technology Plc | Method and apparatus for obtaining auditory and gestural feedback in a recommendation system |
US8849860B2 (en) | 2005-03-30 | 2014-09-30 | Primal Fusion Inc. | Systems and methods for applying statistical inference techniques to knowledge representations |
US8886531B2 (en) | 2010-01-13 | 2014-11-11 | Rovi Technologies Corporation | Apparatus and method for generating an audio fingerprint and using a two-stage query |
US8918428B2 (en) | 2009-09-30 | 2014-12-23 | United Video Properties, Inc. | Systems and methods for audio asset storage and management |
US9053181B2 (en) | 2003-11-03 | 2015-06-09 | James W. Wieder | Adaptive personalized playback or presentation using count |
US9053299B2 (en) | 2003-11-03 | 2015-06-09 | James W. Wieder | Adaptive personalized playback or presentation using rating |
US20150193196A1 (en) * | 2014-01-06 | 2015-07-09 | Alpine Electronics of Silicon Valley, Inc. | Intensity-based music analysis, organization, and user interface for audio reproduction devices |
US9092516B2 (en) | 2011-06-20 | 2015-07-28 | Primal Fusion Inc. | Identifying information of interest based on user preferences |
US9098681B2 (en) | 2003-11-03 | 2015-08-04 | James W. Wieder | Adaptive personalized playback or presentation using cumulative time |
US20150220633A1 (en) * | 2013-03-14 | 2015-08-06 | Aperture Investments, Llc | Music selection and organization using rhythm, texture and pitch |
US9104779B2 (en) | 2005-03-30 | 2015-08-11 | Primal Fusion Inc. | Systems and methods for analyzing and synthesizing complex knowledge representations |
US9177248B2 (en) | 2005-03-30 | 2015-11-03 | Primal Fusion Inc. | Knowledge representation systems and methods incorporating customization |
US9235806B2 (en) | 2010-06-22 | 2016-01-12 | Primal Fusion Inc. | Methods and devices for customizing knowledge representation systems |
US9262520B2 (en) | 2009-11-10 | 2016-02-16 | Primal Fusion Inc. | System, method and computer program for creating and manipulating data structures using an interactive graphical interface |
US9263060B2 (en) | 2012-08-21 | 2016-02-16 | Marian Mason Publishing Company, Llc | Artificial neural network based system for classification of the emotional content of digital music |
US9361365B2 (en) | 2008-05-01 | 2016-06-07 | Primal Fusion Inc. | Methods and apparatus for searching of content using semantic synthesis |
US20160162565A1 (en) * | 2014-12-09 | 2016-06-09 | Hyundai Motor Company | Method and device for generating music playlist |
US9378203B2 (en) | 2008-05-01 | 2016-06-28 | Primal Fusion Inc. | Methods and apparatus for providing information of interest to one or more users |
US9390695B2 (en) * | 2014-10-27 | 2016-07-12 | Northwestern University | Systems, methods, and apparatus to search audio synthesizers using vocal imitation |
US9460390B1 (en) * | 2011-12-21 | 2016-10-04 | Emc Corporation | Analyzing device similarity |
US9753925B2 (en) | 2009-05-06 | 2017-09-05 | Gracenote, Inc. | Systems, methods, and apparatus for generating an audio-visual presentation using characteristics of audio, visual and symbolic media objects |
US9773205B1 (en) | 2003-11-03 | 2017-09-26 | James W. Wieder | Distributing digital-works and usage-rights via limited authorization to user-devices |
US9934785B1 (en) | 2016-11-30 | 2018-04-03 | Spotify Ab | Identification of taste attributes from an audio signal |
US10002325B2 (en) | 2005-03-30 | 2018-06-19 | Primal Fusion Inc. | Knowledge representation systems and methods incorporating inference rules |
US10061476B2 (en) | 2013-03-14 | 2018-08-28 | Aperture Investments, Llc | Systems and methods for identifying, searching, organizing, selecting and distributing content based on mood |
US10225328B2 (en) | 2013-03-14 | 2019-03-05 | Aperture Investments, Llc | Music selection and organization using audio fingerprints |
US10248669B2 (en) | 2010-06-22 | 2019-04-02 | Primal Fusion Inc. | Methods and devices for customizing knowledge representation systems |
US10403304B1 (en) | 2018-03-13 | 2019-09-03 | Qbrio Studio, Inc. | Neural networks for identifying the potential of digitized audio to induce frisson in listeners |
US10595054B2 (en) | 2016-05-10 | 2020-03-17 | Google Llc | Method and apparatus for a virtual online video channel |
US10623480B2 (en) | 2013-03-14 | 2020-04-14 | Aperture Investments, Llc | Music categorization using rhythm, texture and pitch |
US10750216B1 (en) | 2016-05-10 | 2020-08-18 | Google Llc | Method and apparatus for providing peer-to-peer content delivery |
US10750248B1 (en) | 2016-05-10 | 2020-08-18 | Google Llc | Method and apparatus for server-side content delivery network switching |
US10771824B1 (en) | 2016-05-10 | 2020-09-08 | Google Llc | System for managing video playback using a server generated manifest/playlist |
US10785508B2 (en) | 2016-05-10 | 2020-09-22 | Google Llc | System for measuring video playback events using a server generated manifest/playlist |
EP3786952A1 (en) * | 2019-08-30 | 2021-03-03 | Playground Music Ltd | Assessing similarity of electronic files |
US11032588B2 (en) | 2016-05-16 | 2021-06-08 | Google Llc | Method and apparatus for spatial enhanced adaptive bitrate live streaming for 360 degree video playback |
US11039181B1 (en) | 2016-05-09 | 2021-06-15 | Google Llc | Method and apparatus for secure video manifest/playlist generation and playback |
US11069378B1 (en) | 2016-05-10 | 2021-07-20 | Google Llc | Method and apparatus for frame accurate high resolution video editing in cloud using live video streams |
US20210294840A1 (en) * | 2020-03-19 | 2021-09-23 | Adobe Inc. | Searching for Music |
US11165999B1 (en) | 2003-11-03 | 2021-11-02 | Synergyze Technologies Llc | Identifying and providing compositions and digital-works |
US11271993B2 (en) | 2013-03-14 | 2022-03-08 | Aperture Investments, Llc | Streaming music categorization using rhythm, texture and pitch |
US11294977B2 (en) | 2011-06-20 | 2022-04-05 | Primal Fusion Inc. | Techniques for presenting content to a user based on the user's preferences |
US11386262B1 (en) | 2016-04-27 | 2022-07-12 | Google Llc | Systems and methods for a knowledge-based form creation platform |
US20230056955A1 (en) * | 2018-06-05 | 2023-02-23 | Anker Innovations Technology Co., Ltd. | Deep Learning Based Method and System for Processing Sound Quality Characteristics |
US11609948B2 (en) | 2014-03-27 | 2023-03-21 | Aperture Investments, Llc | Music streaming, playlist creation and streaming architecture |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5521324A (en) * | 1994-07-20 | 1996-05-28 | Carnegie Mellon University | Automated musical accompaniment with multiple input sensors |
US5616876A (en) | 1995-04-19 | 1997-04-01 | Microsoft Corporation | System and methods for selecting music on the basis of subjective content |
US5647058A (en) * | 1993-05-24 | 1997-07-08 | International Business Machines Corporation | Method for high-dimensionality indexing in a multi-media database |
US5696964A (en) * | 1996-04-16 | 1997-12-09 | Nec Research Institute, Inc. | Multimedia database retrieval system which maintains a posterior probability distribution that each item in the database is a target of a search |
US6201176B1 (en) * | 1998-05-07 | 2001-03-13 | Canon Kabushiki Kaisha | System and method for querying a music database |
US6289354B1 (en) * | 1998-10-07 | 2001-09-11 | International Business Machines Corporation | System and method for similarity searching in high-dimensional data space |
2000
- 2000-03-22 US US09/533,045 patent/US6539395B1/en not_active Expired - Lifetime
Non-Patent Citations (13)
Title |
---|
"An Introduction to Bayesian Statistical Decision Process" by Bruce W. Morgan (1968), published by Prentice-Hall, Inc.; Englewood Cliffs, New Jersey; Chapter 6 (pp. 91-108). |
"An Introduction to Fuzzy Logic Applications in Intelligent Systems" by R.R. Yager & Lotfi A. Zadeh (1992), published by Kluwer Academic Publishers; Norwell, MA; Chapters 1 (pp. 1-25), 10 (pp. 221-233). |
"Bayesian Data Analysis" by A. Gelman, J.B. Carlin, H.S. Stern, D.B. Rubin (1995), published by CRC Press; New York; Chapters 5 (pp. 119-160), 13 (pp. 366-383), 14 (pp. 384-406), 15 (pp. 407-419), 16 (pp. 420-438). |
"Classification and Regression Trees" by L. Breiman, J.H. Friedman, R.A. Olshen & C.J. Stone (1984), published by Wadsworth; Belmont, California; Chapters 1 (pp. 1-17), 2 (pp. 18-58), 8 (pp. 216-265), 9 (pp. 266-278), 11 (pp. 297-312). |
"Elements of Information Theory" by T.M. Cover and J.A. Thomas (1991), published by John Wiley & Sons, Inc.; New York; p. 18. |
"Elements of Statistical Computing: Numerical Computation" by R.A. Thisted (1988), published by Chapman & Hall; New York; Chapters 4 (pp. 155-258), 6 (pp. 337-361). |
"Generalized Additive Models" by Hastie & Tibshirani (1990), published by Chapman and Hall; London; Chapters 4 (pp. 83-104), 6 (pp. 136-173). |
"Generalized Linear Models" by McCullagh & Nelder (1983), published by Chapman and Hall, 2nd Edition; New York; Chapters 5 (pp. 149-191), 6 (pp. 193-244). |
"Learning Bayesian Networks: The Combination of Knowledge and Statistical Data" by D. Heckerman, D. Geiger, D.M. Chickering (1994), Microsoft Research Technical Report, published by Prentice-Hall, Inc. (pp. 1-53). |
"Multivariate Analysis: Methods and Applications" by William R. Dillon & Matthew Goldstein (1984), published by John Wiley & Sons; New York; Chapters 2 (pp. 23-52), 3 (pp. 53-106), 4 (pp. 107-156), 5 (pp. 157-208). |
"Multivariate Observations" by G.A.F. Seber (1984), published by John Wiley & Sons; New York; pp. 253-278. |
"Neural Networks: A Comprehensive Foundation" by S. Haykin (1994), published by Macmillan College Publishing Co.; New York; Chapter 6 (pp. 138-235). |
"Tempo and beat analysis of acoustic music signals" by Eric D. Scheirer, Machine Listening Group, E15-401D MIT Media Laboratory, Cambridge, Massachusetts (Dec. 1996); Journal of the Acoustical Society of America, vol. 103(1), pp. 588-601. |
Cited By (344)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7228280B1 (en) | 1997-04-15 | 2007-06-05 | Gracenote, Inc. | Finding database match for file based on file characteristics |
US20060020614A1 (en) * | 1997-08-08 | 2006-01-26 | Kolawa Adam K | Method and apparatus for automated selection, organization, and recommendation of items based on user preference topography |
US8364806B2 (en) | 1998-12-08 | 2013-01-29 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US7689716B2 (en) | 1998-12-08 | 2010-03-30 | Nomadix, Inc. | Systems and methods for providing dynamic network authorization, authentication and accounting |
US8613053B2 (en) | 1998-12-08 | 2013-12-17 | Nomadix, Inc. | System and method for authorizing a portable communication device |
US8266266B2 (en) | 1998-12-08 | 2012-09-11 | Nomadix, Inc. | Systems and methods for providing dynamic network authorization, authentication and accounting |
US8725888B2 (en) | 1998-12-08 | 2014-05-13 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US10341243B2 (en) | 1998-12-08 | 2019-07-02 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US9548935B2 (en) | 1998-12-08 | 2017-01-17 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US9160672B2 (en) | 1998-12-08 | 2015-10-13 | Nomadix, Inc. | Systems and methods for controlling user perceived connection speed |
US8244886B2 (en) | 1998-12-08 | 2012-08-14 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US8606917B2 (en) | 1998-12-08 | 2013-12-10 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US8725899B2 (en) | 1998-12-08 | 2014-05-13 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US8713641B1 (en) | 1998-12-08 | 2014-04-29 | Nomadix, Inc. | Systems and methods for authorizing, authenticating and accounting users having transparent computer access to a network using a gateway device |
US8156246B2 (en) | 1998-12-08 | 2012-04-10 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US20060239254A1 (en) * | 1998-12-08 | 2006-10-26 | Nomadix, Inc. | Systems and Methods for Providing Dynamic Network Authorization, Authentication and Accounting |
US8788690B2 (en) | 1998-12-08 | 2014-07-22 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US8266269B2 (en) | 1998-12-08 | 2012-09-11 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US8370477B2 (en) | 1998-12-08 | 2013-02-05 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US10110436B2 (en) | 1998-12-08 | 2018-10-23 | Nomadix, Inc. | Systems and methods for providing content and services on a network system |
US20080215173A1 (en) * | 1999-06-28 | 2008-09-04 | Musicip Corporation | System and Method for Providing Acoustic Analysis Data |
US8805657B2 (en) | 1999-09-14 | 2014-08-12 | Gracenote, Inc. | Music searching methods based on human perception |
US8326584B1 (en) * | 1999-09-14 | 2012-12-04 | Gracenote, Inc. | Music searching methods based on human perception |
US8190708B1 (en) * | 1999-10-22 | 2012-05-29 | Nomadix, Inc. | Gateway device having an XML interface and associated method |
US8516083B2 (en) | 1999-10-22 | 2013-08-20 | Nomadix, Inc. | Systems and methods of communicating using XML |
US20040199657A1 (en) * | 2000-01-24 | 2004-10-07 | Aviv Eyal | Streaming media search and playback system |
US7281034B1 (en) | 2000-01-24 | 2007-10-09 | Friskit, Inc. | System and method for media playback over a network using links that contain control signals and commands |
US9405753B2 (en) | 2000-01-24 | 2016-08-02 | George Aposporos | Dynamic ratings-based streaming media playback system |
US20040167890A1 (en) * | 2000-01-24 | 2004-08-26 | Aviv Eyal | System and method for media search and playback |
US9547650B2 (en) | 2000-01-24 | 2017-01-17 | George Aposporos | System for sharing and rating streaming media playlists |
US10318647B2 (en) | 2000-01-24 | 2019-06-11 | Bluebonnet Internet Media Services, Llc | User input-based play-list generation and streaming media playback system |
US9779095B2 (en) | 2000-01-24 | 2017-10-03 | George Aposporos | User input-based play-list generation and playback system |
US7469283B2 (en) | 2000-01-24 | 2008-12-23 | Friskit, Inc. | Streaming media search and playback system |
US7031931B1 (en) * | 2000-03-30 | 2006-04-18 | Nokia Corporation | Portable device attached to a media player for rating audio/video contents |
US6674452B1 (en) * | 2000-04-05 | 2004-01-06 | International Business Machines Corporation | Graphical user interface to query music by examples |
US20050038819A1 (en) * | 2000-04-21 | 2005-02-17 | Hicken Wendell T. | Music Recommendation system and method |
US20090254554A1 (en) * | 2000-04-21 | 2009-10-08 | Musicip Corporation | Music searching system and method |
US7343553B1 (en) * | 2000-05-19 | 2008-03-11 | Evan John Kaye | Voice clip identification method |
US6738805B2 (en) * | 2000-05-24 | 2004-05-18 | Victor Company Of Japan, Ltd. | Audio-contents demo system connectable to a mobile telephone device |
US20010049278A1 (en) * | 2000-05-24 | 2001-12-06 | Hiromitsu Negishi | Audio-contents demo system |
US20050097138A1 (en) * | 2000-07-06 | 2005-05-05 | Microsoft Corporation | System and methods for the automatic transmission of new, high affinity media |
US7447705B2 (en) | 2000-07-06 | 2008-11-04 | Microsoft Corporation | System and methods for the automatic transmission of new, high affinity media |
US20050165779A1 (en) * | 2000-07-06 | 2005-07-28 | Microsoft Corporation | System and methods for the automatic transmission of new, high affinity media |
US20090006321A1 (en) * | 2000-07-06 | 2009-01-01 | Microsoft Corporation | System and methods for the automatic transmission of new, high affinity media |
US20020019858A1 (en) * | 2000-07-06 | 2002-02-14 | Rolf Kaiser | System and methods for the automatic transmission of new, high affinity media |
US7505959B2 (en) * | 2000-07-06 | 2009-03-17 | Microsoft Corporation | System and methods for the automatic transmission of new, high affinity media |
US7206775B2 (en) * | 2000-07-06 | 2007-04-17 | Microsoft Corporation | System and methods for the automatic transmission of new, high affinity media |
US20050076027A1 (en) * | 2000-07-06 | 2005-04-07 | Microsoft Corporation | System and methods for the automatic transmission of new, high affinity media |
US7370031B2 (en) | 2000-07-06 | 2008-05-06 | Microsoft Corporation | Generation of high affinity media |
US7312391B2 (en) * | 2000-07-06 | 2007-12-25 | Microsoft Corporation | System and methods for the automatic transmission of new, high affinity media using user profiles and musical properties |
US7756874B2 (en) * | 2000-07-06 | 2010-07-13 | Microsoft Corporation | System and methods for providing automatic classification of media entities according to consonance properties |
US20050097075A1 (en) * | 2000-07-06 | 2005-05-05 | Microsoft Corporation | System and methods for providing automatic classification of media entities according to consonance properties |
US7277766B1 (en) * | 2000-10-24 | 2007-10-02 | Moodlogic, Inc. | Method and system for analyzing digital audio files |
US7853344B2 (en) | 2000-10-24 | 2010-12-14 | Rovi Technologies Corporation | Method and system for analyzing digital audio files |
US7890374B1 (en) | 2000-10-24 | 2011-02-15 | Rovi Technologies Corporation | System and method for presenting music to consumers |
US20110035035A1 (en) * | 2000-10-24 | 2011-02-10 | Rovi Technologies Corporation | Method and system for analyzing digital audio files |
US8751957B1 (en) * | 2000-11-22 | 2014-06-10 | Pace Micro Technology Plc | Method and apparatus for obtaining auditory and gestural feedback in a recommendation system |
US20030016250A1 (en) * | 2001-04-02 | 2003-01-23 | Chang Edward Y. | Computer user interface for perception-based information retrieval |
US7107254B1 (en) * | 2001-05-07 | 2006-09-12 | Microsoft Corporation | Probabilistic models and methods for combining multiple content classifiers |
US20080147715A1 (en) * | 2001-05-09 | 2008-06-19 | Woodward Mark L | Method, apparatus and program product for media identification and tracking associated user preferences |
US8244896B2 (en) | 2001-05-09 | 2012-08-14 | Emission Limited Liability Company | Method, apparatus and program product for media identification and tracking associated user preferences |
US20060218292A1 (en) * | 2001-05-09 | 2006-09-28 | Woodward Mark L | Method, apparatus and program product for media identification and tracking associated user preferences |
US20060253585A1 (en) * | 2001-05-09 | 2006-11-09 | Fein Gene S | Method, apparatus and program product providing business processes using media identification and tracking of associated user preferences |
US20030079015A1 (en) * | 2001-05-09 | 2003-04-24 | Dotclick Corporation | Method, apparatus and program product providing business processes using media identification and tracking of associated user preferences |
US7844722B2 (en) * | 2001-05-09 | 2010-11-30 | Woodward Mark L | Method, apparatus and program product for media identification and tracking associated user preferences |
US20130179439A1 (en) * | 2001-05-16 | 2013-07-11 | Pandora Media, Inc. | Methods and Systems for Utilizing Contextual Feedback to Generate and Modify Playlists |
US7752546B2 (en) * | 2001-06-29 | 2010-07-06 | Thomson Licensing | Method and system for providing an acoustic interface |
US7194477B1 (en) * | 2001-06-29 | 2007-03-20 | Revenue Science, Inc. | Optimized a priori techniques |
US20030001881A1 (en) * | 2001-06-29 | 2003-01-02 | Steve Mannheimer | Method and system for providing an acoustic interface |
US7272629B2 (en) * | 2001-07-04 | 2007-09-18 | Yamaha Corporation | Portal server and information supply method for supplying music content of multiple versions |
US20030037124A1 (en) * | 2001-07-04 | 2003-02-20 | Atsushi Yamaura | Portal server and information supply method for supplying music content |
US20030086341A1 (en) * | 2001-07-20 | 2003-05-08 | Gracenote, Inc. | Automatic identification of sound recordings |
US7328153B2 (en) | 2001-07-20 | 2008-02-05 | Gracenote, Inc. | Automatic identification of sound recordings |
US8082279B2 (en) | 2001-08-20 | 2011-12-20 | Microsoft Corporation | System and methods for providing adaptive media property classification |
US20080195654A1 (en) * | 2001-08-20 | 2008-08-14 | Microsoft Corporation | System and methods for providing adaptive media property classification |
US7532943B2 (en) * | 2001-08-21 | 2009-05-12 | Microsoft Corporation | System and methods for providing automatic classification of media entities according to sonic properties |
US20030045953A1 (en) * | 2001-08-21 | 2003-03-06 | Microsoft Corporation | System and methods for providing automatic classification of media entities according to sonic properties |
US6913466B2 (en) * | 2001-08-21 | 2005-07-05 | Microsoft Corporation | System and methods for training a trainee to classify fundamental properties of media entities |
US7574276B2 (en) | 2001-08-29 | 2009-08-11 | Microsoft Corporation | System and methods for providing automatic classification of media entities according to melodic movement properties |
US20060096447A1 (en) * | 2001-08-29 | 2006-05-11 | Microsoft Corporation | System and methods for providing automatic classification of media entities according to melodic movement properties |
US20060111801A1 (en) * | 2001-08-29 | 2006-05-25 | Microsoft Corporation | Automatic classification of media entities according to melodic movement properties |
US20030106413A1 (en) * | 2001-12-06 | 2003-06-12 | Ramin Samadani | System and method for music identification |
US6995309B2 (en) * | 2001-12-06 | 2006-02-07 | Hewlett-Packard Development Company, L.P. | System and method for music identification |
US7096218B2 (en) * | 2002-01-14 | 2006-08-22 | International Business Machines Corporation | Search refinement graphical user interface |
US20030135725A1 (en) * | 2002-01-14 | 2003-07-17 | Schirmer Andrew Lewis | Search refinement graphical user interface |
US20050229204A1 (en) * | 2002-05-16 | 2005-10-13 | Koninklijke Philips Electronics N.V. | Signal processing method and arrangement |
US20050021470A1 (en) * | 2002-06-25 | 2005-01-27 | Bose Corporation | Intelligent music track selection |
US20040225519A1 (en) * | 2002-06-25 | 2004-11-11 | Martin Keith D. | Intelligent music track selection |
US20030236582A1 (en) * | 2002-06-25 | 2003-12-25 | Lee Zamir | Selection of items based on user reactions |
US20040002993A1 (en) * | 2002-06-26 | 2004-01-01 | Microsoft Corporation | User feedback processing of metadata associated with digital media files |
US20060217828A1 (en) * | 2002-10-23 | 2006-09-28 | Hicken Wendell T | Music searching system and method |
US20060143190A1 (en) * | 2003-02-26 | 2006-06-29 | Haitsma Jaap A | Handling of digital silence in audio fingerprinting |
US20040254957A1 (en) * | 2003-06-13 | 2004-12-16 | Nokia Corporation | Method and a system for modeling user preferences |
US7487180B2 (en) | 2003-09-23 | 2009-02-03 | Musicip Corporation | System and method for recognizing audio pieces via audio fingerprinting |
US20060190450A1 (en) * | 2003-09-23 | 2006-08-24 | Predixis Corporation | Audio fingerprinting system and method |
US20050102375A1 (en) * | 2003-10-23 | 2005-05-12 | Kivin Varghese | An Internet System for the Uploading, Viewing and Rating of Videos |
US11165999B1 (en) | 2003-11-03 | 2021-11-02 | Synergyze Technologies Llc | Identifying and providing compositions and digital-works |
US10970368B1 (en) | 2003-11-03 | 2021-04-06 | James W. Wieder | Distributing digital-works and usage-rights to user-devices |
US9053181B2 (en) | 2003-11-03 | 2015-06-09 | James W. Wieder | Adaptive personalized playback or presentation using count |
US7827110B1 (en) | 2003-11-03 | 2010-11-02 | Wieder James W | Marketing compositions by using a customized sequence of compositions |
US9053299B2 (en) | 2003-11-03 | 2015-06-09 | James W. Wieder | Adaptive personalized playback or presentation using rating |
US9098681B2 (en) | 2003-11-03 | 2015-08-04 | James W. Wieder | Adaptive personalized playback or presentation using cumulative time |
US9645788B1 (en) | 2003-11-03 | 2017-05-09 | James W. Wieder | Adaptively scheduling playback or presentation, based on user action(s) |
US8656043B1 (en) | 2003-11-03 | 2014-02-18 | James W. Wieder | Adaptive personalized presentation or playback, using user action(s) |
US7884274B1 (en) | 2003-11-03 | 2011-02-08 | Wieder James W | Adaptive personalized music and entertainment |
US9858397B1 (en) | 2003-11-03 | 2018-01-02 | James W. Wieder | Distributing digital-works and usage-rights to user-devices |
US10223510B1 (en) | 2003-11-03 | 2019-03-05 | James W. Wieder | Distributing digital-works and usage-rights to user-devices |
US9773205B1 (en) | 2003-11-03 | 2017-09-26 | James W. Wieder | Distributing digital-works and usage-rights via limited authorization to user-devices |
US8001612B1 (en) | 2003-11-03 | 2011-08-16 | Wieder James W | Distributing digital-works and usage-rights to user-devices |
US8396800B1 (en) | 2003-11-03 | 2013-03-12 | James W. Wieder | Adaptive personalized music and entertainment |
US8370952B1 (en) | 2003-11-03 | 2013-02-05 | Wieder James W | Distributing digital-works and usage-rights to user-devices |
US7576278B2 (en) * | 2003-11-05 | 2009-08-18 | Sharp Kabushiki Kaisha | Song search system and song search method |
US20050092161A1 (en) * | 2003-11-05 | 2005-05-05 | Sharp Kabushiki Kaisha | Song search system and song search method |
US20050125394A1 (en) * | 2003-11-14 | 2005-06-09 | Yasuteru Kodama | Information search apparatus, information search method, and information recording medium on which information search program is recorded |
US20060010167A1 (en) * | 2004-01-21 | 2006-01-12 | Grace James R | Apparatus for navigation of multimedia content in a vehicle multimedia system |
US20050262146A1 (en) * | 2004-01-21 | 2005-11-24 | Grace James R | System and apparatus for wireless synchronization of multimedia content |
US7885926B2 (en) | 2004-01-21 | 2011-02-08 | GM Global Technology Operations LLC | System and apparatus for wireless synchronization of multimedia content |
US7551889B2 (en) | 2004-06-30 | 2009-06-23 | Nokia Corporation | Method and apparatus for transmission and receipt of digital data in an analog signal |
US20060212149A1 (en) * | 2004-08-13 | 2006-09-21 | Hicken Wendell T | Distributed system and method for intelligent data analysis |
US20080195593A1 (en) * | 2004-09-28 | 2008-08-14 | Pasi Harju | Online Media Content Transfer |
WO2006035115A1 (en) * | 2004-09-28 | 2006-04-06 | Kutalab Oy | Online media content transfer |
US20070270667A1 (en) * | 2004-11-03 | 2007-11-22 | Andreas Coppi | Musical personal trainer |
US20060107823A1 (en) * | 2004-11-19 | 2006-05-25 | Microsoft Corporation | Constructing a table of music similarity vectors from a music similarity graph |
US7777125B2 (en) * | 2004-11-19 | 2010-08-17 | Microsoft Corporation | Constructing a table of music similarity vectors from a music similarity graph |
US20060112098A1 (en) * | 2004-11-19 | 2006-05-25 | Microsoft Corporation | Client-based generation of music playlists via clustering of music similarity vectors |
US7340455B2 (en) * | 2004-11-19 | 2008-03-04 | Microsoft Corporation | Client-based generation of music playlists from a server-provided subset of music similarity vectors |
US7571183B2 (en) * | 2004-11-19 | 2009-08-04 | Microsoft Corporation | Client-based generation of music playlists via clustering of music similarity vectors |
US20060112082A1 (en) * | 2004-11-19 | 2006-05-25 | Microsoft Corporation | Client-based generation of music playlists from a server-provided subset of music similarity vectors |
US20060137516A1 (en) * | 2004-12-24 | 2006-06-29 | Samsung Electronics Co., Ltd. | Sound searcher for finding sound media data of specific pattern type and method for operating the same |
US20090259690A1 (en) * | 2004-12-30 | 2009-10-15 | All Media Guide, Llc | Methods and apparatus for audio recognition |
US8352259B2 (en) | 2004-12-30 | 2013-01-08 | Rovi Technologies Corporation | Methods and apparatus for audio recognition |
WO2006096664A3 (en) * | 2005-03-04 | 2009-04-09 | Musicip Corp | Scan shuffle for building playlists |
US20060224260A1 (en) * | 2005-03-04 | 2006-10-05 | Hicken Wendell T | Scan shuffle for building playlists |
US7756388B2 (en) | 2005-03-21 | 2010-07-13 | Microsoft Corporation | Media item subgroup generation from a library |
US20060212478A1 (en) * | 2005-03-21 | 2006-09-21 | Microsoft Corporation | Methods and systems for generating a subgroup of one or more media items from a library of media items |
US20060218187A1 (en) * | 2005-03-25 | 2006-09-28 | Microsoft Corporation | Methods, systems, and computer-readable media for generating an ordered list of one or more media items |
US9104779B2 (en) | 2005-03-30 | 2015-08-11 | Primal Fusion Inc. | Systems and methods for analyzing and synthesizing complex knowledge representations |
US8849860B2 (en) | 2005-03-30 | 2014-09-30 | Primal Fusion Inc. | Systems and methods for applying statistical inference techniques to knowledge representations |
US9904729B2 (en) | 2005-03-30 | 2018-02-27 | Primal Fusion Inc. | System, method, and computer program for a consumer defined information architecture |
US7606781B2 (en) * | 2005-03-30 | 2009-10-20 | Primal Fusion Inc. | System, method and computer program for facet analysis |
US9177248B2 (en) | 2005-03-30 | 2015-11-03 | Primal Fusion Inc. | Knowledge representation systems and methods incorporating customization |
US10002325B2 (en) | 2005-03-30 | 2018-06-19 | Primal Fusion Inc. | Knowledge representation systems and methods incorporating inference rules |
US20070136221A1 (en) * | 2005-03-30 | 2007-06-14 | Peter Sweeney | System, Method and Computer Program for Facet Analysis |
US9934465B2 (en) | 2005-03-30 | 2018-04-03 | Primal Fusion Inc. | Systems and methods for analyzing and synthesizing complex knowledge representations |
US7533091B2 (en) * | 2005-04-06 | 2009-05-12 | Microsoft Corporation | Methods, systems, and computer-readable media for generating a suggested list of media items based upon a seed |
US20060230065A1 (en) * | 2005-04-06 | 2006-10-12 | Microsoft Corporation | Methods, systems, and computer-readable media for generating a suggested list of media items based upon a seed |
US20060242198A1 (en) * | 2005-04-22 | 2006-10-26 | Microsoft Corporation | Methods, computer-readable media, and data structures for building an authoritative database of digital audio identifier elements and identifying media items |
US7613736B2 (en) | 2005-05-23 | 2009-11-03 | Resonance Media Services, Inc. | Sharing music essence in a recommendation system |
US20060265349A1 (en) * | 2005-05-23 | 2006-11-23 | Hicken Wendell T | Sharing music essence in a recommendation system |
US20060288041A1 (en) * | 2005-06-20 | 2006-12-21 | Microsoft Corporation | Providing community-based media item ratings to users |
US7890513B2 (en) | 2005-06-20 | 2011-02-15 | Microsoft Corporation | Providing community-based media item ratings to users |
US7580932B2 (en) | 2005-07-15 | 2009-08-25 | Microsoft Corporation | User interface for establishing a filtering engine |
US20070016599A1 (en) * | 2005-07-15 | 2007-01-18 | Microsoft Corporation | User interface for establishing a filtering engine |
US7680824B2 (en) | 2005-08-11 | 2010-03-16 | Microsoft Corporation | Single action media playlist generation |
US20070038672A1 (en) * | 2005-08-11 | 2007-02-15 | Microsoft Corporation | Single action media playlist generation |
US7761423B1 (en) * | 2005-10-11 | 2010-07-20 | OneSpot, Inc. | System and method for indexing a network of interrelated elements |
US8484205B1 (en) * | 2005-10-11 | 2013-07-09 | OneSpot, Inc. | System and method for generating sources of prioritized content |
US20070112940A1 (en) * | 2005-10-26 | 2007-05-17 | Sony Corporation | Reproducing apparatus, correlated information notifying method, and correlated information notifying program |
US10002643B2 (en) | 2005-10-26 | 2018-06-19 | Sony Corporation | Reproducing apparatus, correlated information notifying method, and correlated information notifying program |
US7685210B2 (en) | 2005-12-30 | 2010-03-23 | Microsoft Corporation | Media discovery and curation of playlists |
US20070168388A1 (en) * | 2005-12-30 | 2007-07-19 | Microsoft Corporation | Media discovery and curation of playlists |
US20070240557A1 (en) * | 2006-04-12 | 2007-10-18 | Whitman Brian A | Understanding Music |
US7772478B2 (en) | 2006-04-12 | 2010-08-10 | Massachusetts Institute Of Technology | Understanding music |
US20090231964A1 (en) * | 2006-06-21 | 2009-09-17 | Nokia Corporation | Variable alarm sounds |
US8625394B2 (en) | 2006-06-21 | 2014-01-07 | Core Wireless Licensing S.A.R.L. | Variable alarm sounds |
US20070297292A1 (en) * | 2006-06-21 | 2007-12-27 | Nokia Corporation | Method, computer program product and device providing variable alarm noises |
US20100281009A1 (en) * | 2006-07-31 | 2010-11-04 | Microsoft Corporation | Hierarchical conditional random fields for web extraction |
US20080046429A1 (en) * | 2006-08-16 | 2008-02-21 | Yahoo! Inc. | System and method for hierarchical segmentation of websites by topic |
US20100049766A1 (en) * | 2006-08-31 | 2010-02-25 | Peter Sweeney | System, Method, and Computer Program for a Consumer Defined Information Architecture |
US8510302B2 (en) | 2006-08-31 | 2013-08-13 | Primal Fusion Inc. | System, method, and computer program for a consumer defined information architecture |
US7921067B2 (en) * | 2006-09-04 | 2011-04-05 | Sony Deutschland Gmbh | Method and device for mood detection |
US20080201370A1 (en) * | 2006-09-04 | 2008-08-21 | Sony Deutschland Gmbh | Method and device for mood detection |
US9643071B2 (en) | 2006-09-07 | 2017-05-09 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US9636566B2 (en) | 2006-09-07 | 2017-05-02 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US9959090B2 (en) | 2006-09-07 | 2018-05-01 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US8152695B2 (en) * | 2006-09-07 | 2012-04-10 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US10923225B2 (en) | 2006-09-07 | 2021-02-16 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US8568278B2 (en) | 2006-09-07 | 2013-10-29 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US9623315B2 (en) | 2006-09-07 | 2017-04-18 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US11955219B2 (en) * | 2006-09-07 | 2024-04-09 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US9643072B2 (en) | 2006-09-07 | 2017-05-09 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US11682479B2 (en) | 2006-09-07 | 2023-06-20 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US10168986B2 (en) | 2006-09-07 | 2019-01-01 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US10185537B2 (en) | 2006-09-07 | 2019-01-22 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US20100279825A1 (en) * | 2006-09-07 | 2010-11-04 | Nike, Inc. | Athletic Performance Sensing and/or Tracking Systems and Methods |
US11676699B2 (en) | 2006-09-07 | 2023-06-13 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US11676698B2 (en) | 2006-09-07 | 2023-06-13 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US9656145B2 (en) | 2006-09-07 | 2017-05-23 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US9656146B2 (en) * | 2006-09-07 | 2017-05-23 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US11676695B2 (en) | 2006-09-07 | 2023-06-13 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US11676696B2 (en) | 2006-09-07 | 2023-06-13 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US11676697B2 (en) * | 2006-09-07 | 2023-06-13 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US9662560B2 (en) | 2006-09-07 | 2017-05-30 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US10303426B2 (en) | 2006-09-07 | 2019-05-28 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US20220262480A1 (en) * | 2006-09-07 | 2022-08-18 | Nike, Inc. | Athletic Performance Sensing and/or Tracking Systems and Methods |
US20220262490A1 (en) * | 2006-09-07 | 2022-08-18 | Nike, Inc. | Athletic Performance Sensing and/or Tracking Systems and Methods |
US9700780B2 (en) | 2006-09-07 | 2017-07-11 | Nike, Inc. | Athletic performance sensing and/or tracking systems and methods |
US20150258377A1 (en) * | 2006-09-07 | 2015-09-17 | Nike, Inc. | Athletic Performance Sensing and/or Tracking Systems and Methods |
US8195734B1 (en) | 2006-11-27 | 2012-06-05 | The Research Foundation Of State University Of New York | Combining multiple clusterings by soft correspondence |
US20080133696A1 (en) * | 2006-12-04 | 2008-06-05 | Hanebeck Hanns-Christian Leemo | Personal multi-media playing system |
US20080162468A1 (en) * | 2006-12-19 | 2008-07-03 | Teravolt Gbr | Method of and apparatus for selecting characterisable datasets |
EP1939768A3 (en) * | 2006-12-19 | 2009-01-14 | teravolt GbR, represented by the managing partner Oliver Koch | Method and device for selecting characterisable data records |
US20080168022A1 (en) * | 2007-01-05 | 2008-07-10 | Harman International Industries, Incorporated | Heuristic organization and playback system |
US7875788B2 (en) * | 2007-01-05 | 2011-01-25 | Harman International Industries, Incorporated | Heuristic organization and playback system |
US7842876B2 (en) * | 2007-01-05 | 2010-11-30 | Harman International Industries, Incorporated | Multimedia object grouping, selection, and playback system |
US20080168390A1 (en) * | 2007-01-05 | 2008-07-10 | Daniel Benyamin | Multimedia object grouping, selection, and playback system |
US20080228744A1 (en) * | 2007-03-12 | 2008-09-18 | Desbiens Jocelyn | Method and a system for automatic evaluation of digital files |
US7873634B2 (en) * | 2007-03-12 | 2011-01-18 | Hitlab Ulc. | Method and a system for automatic evaluation of digital files |
US20120143907A1 (en) * | 2007-03-21 | 2012-06-07 | The Regents Of The University Of California | Generating audio annotations for search and retrieval |
US8112418B2 (en) * | 2007-03-21 | 2012-02-07 | The Regents Of The University Of California | Generating audio annotations for search and retrieval |
US20080235283A1 (en) * | 2007-03-21 | 2008-09-25 | The Regents Of The University Of California | Generating audio annotations for search and retrieval |
US8161056B2 (en) * | 2007-03-22 | 2012-04-17 | Yamaha Corporation | Database constructing apparatus and method |
US20080229910A1 (en) * | 2007-03-22 | 2008-09-25 | Yamaha Corporation | Database constructing apparatus and method |
US8280889B2 (en) | 2007-04-10 | 2012-10-02 | The Echo Nest Corporation | Automatically acquiring acoustic information about music |
US20080256042A1 (en) * | 2007-04-10 | 2008-10-16 | Brian Whitman | Automatically Acquiring Acoustic and Cultural Information About Music |
US20080256106A1 (en) * | 2007-04-10 | 2008-10-16 | Brian Whitman | Determining the Similarity of Music Using Cultural and Acoustic Information |
US7949649B2 (en) | 2007-04-10 | 2011-05-24 | The Echo Nest Corporation | Automatically acquiring acoustic and cultural information about music |
US20110225150A1 (en) * | 2007-04-10 | 2011-09-15 | The Echo Nest Corporation | Automatically Acquiring Acoustic Information About Music |
US8073854B2 (en) * | 2007-04-10 | 2011-12-06 | The Echo Nest Corporation | Determining the similarity of music using cultural and acoustic information |
US20110126114A1 (en) * | 2007-07-06 | 2011-05-26 | Martin Keith D | Intelligent Music Track Selection in a Networked Environment |
US7812239B2 (en) * | 2007-07-17 | 2010-10-12 | Yamaha Corporation | Music piece processing apparatus and method |
US20090019996A1 (en) * | 2007-07-17 | 2009-01-22 | Yamaha Corporation | Music piece processing apparatus and method |
US8370277B2 (en) * | 2007-07-31 | 2013-02-05 | National Institute Of Advanced Industrial Science And Technology | Musical piece recommendation system and method |
US20110112994A1 (en) * | 2007-07-31 | 2011-05-12 | National Institute Of Advanced Industrial Science And Technology | Musical piece recommendation system, musical piece recommendation method, and musical piece recommendation computer program |
US20100217755A1 (en) * | 2007-10-04 | 2010-08-26 | Koninklijke Philips Electronics N.V. | Classifying a set of content items |
US20090228796A1 (en) * | 2008-03-05 | 2009-09-10 | Sony Corporation | Method and device for personalizing a multimedia application |
US9491256B2 (en) * | 2008-03-05 | 2016-11-08 | Sony Corporation | Method and device for personalizing a multimedia application |
US20090234888A1 (en) * | 2008-03-17 | 2009-09-17 | Disney Enterprises, Inc. | Method and system for producing a mood guided media playlist |
US8204883B1 (en) * | 2008-04-17 | 2012-06-19 | Amazon Technologies, Inc. | Systems and methods of determining genre information |
US8676722B2 (en) | 2008-05-01 | 2014-03-18 | Primal Fusion Inc. | Method, system, and computer program for user-driven dynamic generation of semantic networks and media synthesis |
US11868903B2 (en) | 2008-05-01 | 2024-01-09 | Primal Fusion Inc. | Method, system, and computer program for user-driven dynamic generation of semantic networks and media synthesis |
US9378203B2 (en) | 2008-05-01 | 2016-06-28 | Primal Fusion Inc. | Methods and apparatus for providing information of interest to one or more users |
US20100235307A1 (en) * | 2008-05-01 | 2010-09-16 | Peter Sweeney | Method, system, and computer program for user-driven dynamic generation of semantic networks and media synthesis |
US9792550B2 (en) | 2008-05-01 | 2017-10-17 | Primal Fusion Inc. | Methods and apparatus for providing information of interest to one or more users |
US11182440B2 (en) | 2008-05-01 | 2021-11-23 | Primal Fusion Inc. | Methods and apparatus for searching of content using semantic synthesis |
US9361365B2 (en) | 2008-05-01 | 2016-06-07 | Primal Fusion Inc. | Methods and apparatus for searching of content using semantic synthesis |
US8676732B2 (en) | 2008-05-01 | 2014-03-18 | Primal Fusion Inc. | Methods and apparatus for providing information of interest to one or more users |
US8344233B2 (en) | 2008-05-07 | 2013-01-01 | Microsoft Corporation | Scalable music recommendation by search |
US20090281906A1 (en) * | 2008-05-07 | 2009-11-12 | Microsoft Corporation | Music Recommendation using Emotional Allocation Modeling |
US8438168B2 (en) | 2008-05-07 | 2013-05-07 | Microsoft Corporation | Scalable music recommendation by search |
US8650094B2 (en) * | 2008-05-07 | 2014-02-11 | Microsoft Corporation | Music recommendation using emotional allocation modeling |
US20090277322A1 (en) * | 2008-05-07 | 2009-11-12 | Microsoft Corporation | Scalable Music Recommendation by Search |
US20100036802A1 (en) * | 2008-08-05 | 2010-02-11 | Setsuo Tsuruta | Repetitive fusion search method for search system |
US8972370B2 (en) * | 2008-08-05 | 2015-03-03 | Tokyo Denki University | Repetitive fusion search method for search system |
US9595004B2 (en) | 2008-08-29 | 2017-03-14 | Primal Fusion Inc. | Systems and methods for semantic concept definition and semantic concept relationship synthesis utilizing existing domain definitions |
US8943016B2 (en) | 2008-08-29 | 2015-01-27 | Primal Fusion Inc. | Systems and methods for semantic concept definition and semantic concept relationship synthesis utilizing existing domain definitions |
US10803107B2 (en) | 2008-08-29 | 2020-10-13 | Primal Fusion Inc. | Systems and methods for semantic concept definition and semantic concept relationship synthesis utilizing existing domain definitions |
US20100057664A1 (en) * | 2008-08-29 | 2010-03-04 | Peter Sweeney | Systems and methods for semantic concept definition and semantic concept relationship synthesis utilizing existing domain definitions |
US8495001B2 (en) | 2008-08-29 | 2013-07-23 | Primal Fusion Inc. | Systems and methods for semantic concept definition and semantic concept relationship synthesis utilizing existing domain definitions |
US20100100826A1 (en) * | 2008-10-17 | 2010-04-22 | Louis Hawthorne | System and method for content customization based on user profile |
US20100107075A1 (en) * | 2008-10-17 | 2010-04-29 | Louis Hawthorne | System and method for content customization based on emotional state of the user |
US7994410B2 (en) * | 2008-10-22 | 2011-08-09 | Classical Archives, LLC | Music recording comparison engine |
US20100106267A1 (en) * | 2008-10-22 | 2010-04-29 | Pierre R. Schowb | Music recording comparison engine |
US9753925B2 (en) | 2009-05-06 | 2017-09-05 | Gracenote, Inc. | Systems, methods, and apparatus for generating an audio-visual presentation using characteristics of audio, visual and symbolic media objects |
US20100318586A1 (en) * | 2009-06-11 | 2010-12-16 | All Media Guide, Llc | Managing metadata for occurrences of a recording |
US8620967B2 (en) | 2009-06-11 | 2013-12-31 | Rovi Technologies Corporation | Managing metadata for occurrences of a recording |
US20100325583A1 (en) * | 2009-06-18 | 2010-12-23 | Nokia Corporation | Method and apparatus for classifying content |
RU2509352C2 (en) * | 2009-06-18 | 2014-03-10 | Nokia Corporation | Method and apparatus for classifying content |
US9514472B2 (en) * | 2009-06-18 | 2016-12-06 | Core Wireless Licensing S.A.R.L. | Method and apparatus for classifying content |
CN102612693A (en) * | 2009-06-18 | 2012-07-25 | Nokia Corporation | Method and apparatus for classifying content |
WO2010146231A1 (en) * | 2009-06-18 | 2010-12-23 | Nokia Corporation | Method and apparatus for classifying content |
US20180075039A1 (en) * | 2009-06-23 | 2018-03-15 | Gracenote, Inc. | Methods and apparatus for determining a mood profile associated with media data |
US10558674B2 (en) * | 2009-06-23 | 2020-02-11 | Gracenote, Inc. | Methods and apparatus for determining a mood profile associated with media data |
US11580120B2 (en) * | 2009-06-23 | 2023-02-14 | Gracenote, Inc. | Methods and apparatus for determining a mood profile associated with media data |
US20140330848A1 (en) * | 2009-06-23 | 2014-11-06 | Gracenote, Inc. | Methods and apparatus for determining a mood profile associated with media data |
US20100325135A1 (en) * | 2009-06-23 | 2010-12-23 | Gracenote, Inc. | Methods and apparatus for determining a mood profile associated with media data |
US11204930B2 (en) * | 2009-06-23 | 2021-12-21 | Gracenote, Inc. | Methods and apparatus for determining a mood profile associated with media data |
US20220067057A1 (en) * | 2009-06-23 | 2022-03-03 | Gracenote, Inc. | Methods and Apparatus For Determining A Mood Profile Associated With Media Data |
US8805854B2 (en) * | 2009-06-23 | 2014-08-12 | Gracenote, Inc. | Methods and apparatus for determining a mood profile associated with media data |
US9842146B2 (en) * | 2009-06-23 | 2017-12-12 | Gracenote, Inc. | Methods and apparatus for determining a mood profile associated with media data |
US20110016102A1 (en) * | 2009-07-20 | 2011-01-20 | Louis Hawthorne | System and method for identifying and providing user-specific psychoactive content |
US20110041154A1 (en) * | 2009-08-14 | 2011-02-17 | All Media Guide, Llc | Content Recognition and Synchronization on a Television or Consumer Electronics Device |
US20110060645A1 (en) * | 2009-09-08 | 2011-03-10 | Peter Sweeney | Synthesizing messaging using context provided by consumers |
US10181137B2 (en) | 2009-09-08 | 2019-01-15 | Primal Fusion Inc. | Synthesizing messaging using context provided by consumers |
US9292855B2 (en) | 2009-09-08 | 2016-03-22 | Primal Fusion Inc. | Synthesizing messaging using context provided by consumers |
US20110060644A1 (en) * | 2009-09-08 | 2011-03-10 | Peter Sweeney | Synthesizing messaging using context provided by consumers |
US20110060794A1 (en) * | 2009-09-08 | 2011-03-10 | Peter Sweeney | Synthesizing messaging using context provided by consumers |
US20110078020A1 (en) * | 2009-09-30 | 2011-03-31 | Lajoie Dan | Systems and methods for identifying popular audio assets |
US20110078729A1 (en) * | 2009-09-30 | 2011-03-31 | Lajoie Dan | Systems and methods for identifying audio content using an interactive media guidance application |
US8918428B2 (en) | 2009-09-30 | 2014-12-23 | United Video Properties, Inc. | Systems and methods for audio asset storage and management |
US8677400B2 (en) | 2009-09-30 | 2014-03-18 | United Video Properties, Inc. | Systems and methods for identifying audio content using an interactive media guidance application |
US10146843B2 (en) | 2009-11-10 | 2018-12-04 | Primal Fusion Inc. | System, method and computer program for creating and manipulating data structures using an interactive graphical interface |
US9262520B2 (en) | 2009-11-10 | 2016-02-16 | Primal Fusion Inc. | System, method and computer program for creating and manipulating data structures using an interactive graphical interface |
US8970694B2 (en) * | 2009-12-10 | 2015-03-03 | Harris Corporation | Video processing system providing overlay of selected geospatially-tagged metadata relating to a geolocation outside viewable area and related methods |
US20110145256A1 (en) * | 2009-12-10 | 2011-06-16 | Harris Corporation | Video processing system providing overlay of selected geospatially-tagged metadata relating to a geolocation outside viewable area and related methods |
US20110154197A1 (en) * | 2009-12-18 | 2011-06-23 | Louis Hawthorne | System and method for algorithmic movie generation based on audio/video synchronization |
US20110173185A1 (en) * | 2010-01-13 | 2011-07-14 | Rovi Technologies Corporation | Multi-stage lookup for rolling audio recognition |
US8886531B2 (en) | 2010-01-13 | 2014-11-11 | Rovi Technologies Corporation | Apparatus and method for generating an audio fingerprint and using a two-stage query |
US11474979B2 (en) | 2010-06-22 | 2022-10-18 | Primal Fusion Inc. | Methods and devices for customizing knowledge representation systems |
US9235806B2 (en) | 2010-06-22 | 2016-01-12 | Primal Fusion Inc. | Methods and devices for customizing knowledge representation systems |
US9576241B2 (en) | 2010-06-22 | 2017-02-21 | Primal Fusion Inc. | Methods and devices for customizing knowledge representation systems |
US10474647B2 (en) | 2010-06-22 | 2019-11-12 | Primal Fusion Inc. | Methods and devices for customizing knowledge representation systems |
US10248669B2 (en) | 2010-06-22 | 2019-04-02 | Primal Fusion Inc. | Methods and devices for customizing knowledge representation systems |
EP2410444A3 (en) * | 2010-07-21 | 2012-02-01 | Magix AG | System and method for dynamic generation of individualized playlists according to user selection of musical features |
EP2410444A2 (en) * | 2010-07-21 | 2012-01-25 | Magix AG | System and method for dynamic generation of individualized playlists according to user selection of musical features |
US20120294457A1 (en) * | 2011-05-17 | 2012-11-22 | Fender Musical Instruments Corporation | Audio System and Method of Using Adaptive Intelligence to Distinguish Information Content of Audio Signals and Control Signal Processing Function |
US20120296776A1 (en) * | 2011-05-20 | 2012-11-22 | Microsoft Corporation | Adaptive interactive search |
US9098575B2 (en) | 2011-06-20 | 2015-08-04 | Primal Fusion Inc. | Preference-guided semantic processing |
US9715552B2 (en) | 2011-06-20 | 2017-07-25 | Primal Fusion Inc. | Techniques for presenting content to a user based on the user's preferences |
US10409880B2 (en) | 2011-06-20 | 2019-09-10 | Primal Fusion Inc. | Techniques for presenting content to a user based on the user's preferences |
US11294977B2 (en) | 2011-06-20 | 2022-04-05 | Primal Fusion Inc. | Techniques for presenting content to a user based on the user's preferences |
US9092516B2 (en) | 2011-06-20 | 2015-07-28 | Primal Fusion Inc. | Identifying information of interest based on user preferences |
US20130039584A1 (en) * | 2011-08-11 | 2013-02-14 | Oztan Harmanci | Method and apparatus for detecting near-duplicate images using content adaptive hash lookups |
US9047534B2 (en) * | 2011-08-11 | 2015-06-02 | Anvato, Inc. | Method and apparatus for detecting near-duplicate images using content adaptive hash lookups |
US9460390B1 (en) * | 2011-12-21 | 2016-10-04 | Emc Corporation | Analyzing device similarity |
US9263060B2 (en) | 2012-08-21 | 2016-02-16 | Marian Mason Publishing Company, Llc | Artificial neural network based system for classification of the emotional content of digital music |
US10623480B2 (en) | 2013-03-14 | 2020-04-14 | Aperture Investments, Llc | Music categorization using rhythm, texture and pitch |
US10225328B2 (en) | 2013-03-14 | 2019-03-05 | Aperture Investments, Llc | Music selection and organization using audio fingerprints |
US10242097B2 (en) * | 2013-03-14 | 2019-03-26 | Aperture Investments, Llc | Music selection and organization using rhythm, texture and pitch |
US11271993B2 (en) | 2013-03-14 | 2022-03-08 | Aperture Investments, Llc | Streaming music categorization using rhythm, texture and pitch |
US10061476B2 (en) | 2013-03-14 | 2018-08-28 | Aperture Investments, Llc | Systems and methods for identifying, searching, organizing, selecting and distributing content based on mood |
US20150220633A1 (en) * | 2013-03-14 | 2015-08-06 | Aperture Investments, Llc | Music selection and organization using rhythm, texture and pitch |
US20150193196A1 (en) * | 2014-01-06 | 2015-07-09 | Alpine Electronics of Silicon Valley, Inc. | Intensity-based music analysis, organization, and user interface for audio reproduction devices |
US11609948B2 (en) | 2014-03-27 | 2023-03-21 | Aperture Investments, Llc | Music streaming, playlist creation and streaming architecture |
US11899713B2 (en) | 2014-03-27 | 2024-02-13 | Aperture Investments, Llc | Music streaming, playlist creation and streaming architecture |
US9390695B2 (en) * | 2014-10-27 | 2016-07-12 | Northwestern University | Systems, methods, and apparatus to search audio synthesizers using vocal imitation |
US20160162565A1 (en) * | 2014-12-09 | 2016-06-09 | Hyundai Motor Company | Method and device for generating music playlist |
US9990413B2 (en) * | 2014-12-09 | 2018-06-05 | Hyundai Motor Company | Method and device for generating music playlist |
US11386262B1 (en) | 2016-04-27 | 2022-07-12 | Google Llc | Systems and methods for a knowledge-based form creation platform |
US11647237B1 (en) | 2016-05-09 | 2023-05-09 | Google Llc | Method and apparatus for secure video manifest/playlist generation and playback |
US11039181B1 (en) | 2016-05-09 | 2021-06-15 | Google Llc | Method and apparatus for secure video manifest/playlist generation and playback |
US10750216B1 (en) | 2016-05-10 | 2020-08-18 | Google Llc | Method and apparatus for providing peer-to-peer content delivery |
US10750248B1 (en) | 2016-05-10 | 2020-08-18 | Google Llc | Method and apparatus for server-side content delivery network switching |
US11545185B1 (en) | 2016-05-10 | 2023-01-03 | Google Llc | Method and apparatus for frame accurate high resolution video editing in cloud using live video streams |
US10785508B2 (en) | 2016-05-10 | 2020-09-22 | Google Llc | System for measuring video playback events using a server generated manifest/playlist |
US11589085B2 (en) | 2016-05-10 | 2023-02-21 | Google Llc | Method and apparatus for a virtual online video channel |
US11877017B2 (en) | 2016-05-10 | 2024-01-16 | Google Llc | System for measuring video playback events using a server generated manifest/playlist |
US10595054B2 (en) | 2016-05-10 | 2020-03-17 | Google Llc | Method and apparatus for a virtual online video channel |
US11785268B1 (en) | 2016-05-10 | 2023-10-10 | Google Llc | System for managing video playback using a server generated manifest/playlist |
US10771824B1 (en) | 2016-05-10 | 2020-09-08 | Google Llc | System for managing video playback using a server generated manifest/playlist |
US11069378B1 (en) | 2016-05-10 | 2021-07-20 | Google Llc | Method and apparatus for frame accurate high resolution video editing in cloud using live video streams |
US11032588B2 (en) | 2016-05-16 | 2021-06-08 | Google Llc | Method and apparatus for spatial enhanced adaptive bitrate live streaming for 360 degree video playback |
US11683540B2 (en) | 2016-05-16 | 2023-06-20 | Google Llc | Method and apparatus for spatial enhanced adaptive bitrate live streaming for 360 degree video playback |
US10891948B2 (en) | 2016-11-30 | 2021-01-12 | Spotify Ab | Identification of taste attributes from an audio signal |
US9934785B1 (en) | 2016-11-30 | 2018-04-03 | Spotify Ab | Identification of taste attributes from an audio signal |
US10403304B1 (en) | 2018-03-13 | 2019-09-03 | Qbrio Studio, Inc. | Neural networks for identifying the potential of digitized audio to induce frisson in listeners |
US11790934B2 (en) * | 2018-06-05 | 2023-10-17 | Anker Innovations Technology Co., Ltd. | Deep learning based method and system for processing sound quality characteristics |
US20230056955A1 (en) * | 2018-06-05 | 2023-02-23 | Anker Innovations Technology Co., Ltd. | Deep Learning Based Method and System for Processing Sound Quality Characteristics |
EP3786811A1 (en) * | 2019-08-30 | 2021-03-03 | Playground Music Ltd | Assessing similarity of electronic files |
EP3786952A1 (en) * | 2019-08-30 | 2021-03-03 | Playground Music Ltd | Assessing similarity of electronic files |
US20210294840A1 (en) * | 2020-03-19 | 2021-09-23 | Adobe Inc. | Searching for Music |
US11636342B2 (en) * | 2020-03-19 | 2023-04-25 | Adobe Inc. | Searching for music |
US11461649B2 (en) * | 2020-03-19 | 2022-10-04 | Adobe Inc. | Searching for music |
US20230097356A1 (en) * | 2020-03-19 | 2023-03-30 | Adobe Inc. | Searching for Music |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6539395B1 (en) | Method for creating a database for comparing music | |
US20020002899A1 (en) | System for content based music searching | |
Casey et al. | Content-based music information retrieval: Current directions and future challenges | |
Shao et al. | Music recommendation based on acoustic features and user access patterns | |
US7756874B2 (en) | System and methods for providing automatic classification of media entities according to consonance properties | |
US7035873B2 (en) | System and methods for providing adaptive media property classification | |
US7574276B2 (en) | System and methods for providing automatic classification of media entities according to melodic movement properties | |
US7696427B2 (en) | Method and system for recommending music | |
US20120233164A1 (en) | Music classification system and method | |
US20030045953A1 (en) | System and methods for providing automatic classification of media entities according to sonic properties | |
WO2007053770A2 (en) | Audio search system | |
US7227072B1 (en) | System and method for determining the similarity of musical recordings | |
Lu et al. | A novel method for personalized music recommendation | |
KR20100095166A (en) | A personal adaptive music recommendation method using analysis of playlists of users | |
Bogdanov et al. | Content-based music recommendation based on user preference examples | |
US7890374B1 (en) | System and method for presenting music to consumers | |
Schedl et al. | User-aware music retrieval | |
Liu et al. | Adaptive music recommendation based on user behavior in time slot | |
Herrera et al. | SIMAC: Semantic interaction with music audio contents | |
Liu | Effective results ranking for mobile query by singing/humming using a hybrid recommendation mechanism | |
Lin et al. | Automated Playlist Generation from Personal Music Libraries | |
Sharma et al. | Audio songs classification based on music patterns | |
US20030120679A1 (en) | Method for creating a database index for a piece of music and for retrieval of piece of music | |
KR101968206B1 (en) | Method for automatically generating music playlist by analyzing user prior information | |
Hartmann | Testing a spectral-based feature set for audio genre classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: EMOTIONEERING, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GJERDINGEN, ROBERT O.;KHAN, REHAN M.;MATHYS, MARC;AND OTHERS;REEL/FRAME:010695/0461 Effective date: 20000315 |
|
AS | Assignment |
Owner name: MOODLOGIC, INC., A DELAWARE CORPORATION, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:EMOTIONEERING, INC., A DELAWARE CORPORATION;REEL/FRAME:011598/0993 Effective date: 20000526 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
REFU | Refund |
Free format text: REFUND - SURCHARGE, PETITION TO ACCEPT PYMT AFTER EXP, UNINTENTIONAL (ORIGINAL EVENT CODE: R2551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., NEW YORK Free format text: SECURITY AGREEMENT;ASSIGNORS:APTIV DIGITAL, INC.;GEMSTAR DEVELOPMENT CORPORATION;GEMSTAR-TV GUIDE INTERNATIONAL, INC.;AND OTHERS;REEL/FRAME:020986/0074 Effective date: 20080502 |
|
AS | Assignment |
Owner name: ROVI TECHNOLOGIES CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MOODLOGIC, INC.;REEL/FRAME:023273/0821 Effective date: 20090817 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
AS | Assignment |
Owner name: UNITED VIDEO PROPERTIES, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: ODS PROPERTIES, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: GEMSTAR DEVELOPMENT CORPORATION, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: TV GUIDE, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: INDEX SYSTEMS INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: ALL MEDIA GUIDE, LLC, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: ROVI DATA SOLUTIONS, INC. (FORMERLY KNOWN AS TV GU Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: ROVI TECHNOLOGIES CORPORATION, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: ROVI GUIDES, INC. (FORMERLY KNOWN AS GEMSTAR-TV GU Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: APTIV DIGITAL, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: TV GUIDE ONLINE, LLC, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: STARSIGHT TELECAST, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: ROVI SOLUTIONS CORPORATION (FORMERLY KNOWN AS MACR Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317
Owner name: ROVI SOLUTIONS LIMITED (FORMERLY KNOWN AS MACROVIS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (A NATIONAL ASSOCIATION);REEL/FRAME:025222/0731 Effective date: 20100317 |
|
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT, NE Free format text: SECURITY INTEREST;ASSIGNORS:APTIV DIGITAL, INC., A DELAWARE CORPORATION;GEMSTAR DEVELOPMENT CORPORATION, A CALIFORNIA CORPORATION;INDEX SYSTEMS INC, A BRITISH VIRGIN ISLANDS COMPANY;AND OTHERS;REEL/FRAME:027039/0168 Effective date: 20110913 |
|
AS | Assignment |
Owner name: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT, MARYLAND Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:APTIV DIGITAL, INC.;GEMSTAR DEVELOPMENT CORPORATION;INDEX SYSTEMS INC.;AND OTHERS;REEL/FRAME:033407/0035 Effective date: 20140702
Owner name: ROVI CORPORATION, CALIFORNIA Free format text: PATENT RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:033396/0001 Effective date: 20140702
Owner name: STARSIGHT TELECAST, INC., CALIFORNIA Free format text: PATENT RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:033396/0001 Effective date: 20140702
Owner name: ROVI TECHNOLOGIES CORPORATION, CALIFORNIA Free format text: PATENT RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:033396/0001 Effective date: 20140702
Owner name: ROVI GUIDES, INC., CALIFORNIA Free format text: PATENT RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:033396/0001 Effective date: 20140702
Owner name: APTIV DIGITAL, INC., CALIFORNIA Free format text: PATENT RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:033396/0001 Effective date: 20140702
Owner name: INDEX SYSTEMS INC., CALIFORNIA Free format text: PATENT RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:033396/0001 Effective date: 20140702
Owner name: TV GUIDE INTERNATIONAL, INC., CALIFORNIA Free format text: PATENT RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:033396/0001 Effective date: 20140702
Owner name: UNITED VIDEO PROPERTIES, INC., CALIFORNIA Free format text: PATENT RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:033396/0001 Effective date: 20140702
Owner name: ROVI SOLUTIONS CORPORATION, CALIFORNIA Free format text: PATENT RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:033396/0001 Effective date: 20140702
Owner name: ALL MEDIA GUIDE, LLC, CALIFORNIA Free format text: PATENT RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:033396/0001 Effective date: 20140702
Owner name: GEMSTAR DEVELOPMENT CORPORATION, CALIFORNIA Free format text: PATENT RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:033396/0001 Effective date: 20140702 |
|
FPAY | Fee payment |
Year of fee payment: 12 |
|
AS | Assignment |
Owner name: HPS INVESTMENT PARTNERS, LLC, AS COLLATERAL AGENT, NEW YORK Free format text: SECURITY INTEREST;ASSIGNORS:ROVI SOLUTIONS CORPORATION;ROVI TECHNOLOGIES CORPORATION;ROVI GUIDES, INC.;AND OTHERS;REEL/FRAME:051143/0468 Effective date: 20191122 |
|
AS | Assignment |
Owner name: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT, MARYLAND Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:ROVI SOLUTIONS CORPORATION;ROVI TECHNOLOGIES CORPORATION;ROVI GUIDES, INC.;AND OTHERS;REEL/FRAME:051110/0006 Effective date: 20191122
Owner name: ROVI TECHNOLOGIES CORPORATION, CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:051145/0090 Effective date: 20191122
Owner name: UNITED VIDEO PROPERTIES, INC., CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:051145/0090 Effective date: 20191122
Owner name: APTIV DIGITAL INC., CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:051145/0090 Effective date: 20191122
Owner name: VEVEO, INC., CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:051145/0090 Effective date: 20191122
Owner name: ROVI GUIDES, INC., CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:051145/0090 Effective date: 20191122
Owner name: ROVI SOLUTIONS CORPORATION, CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:051145/0090 Effective date: 20191122
Owner name: SONIC SOLUTIONS LLC, CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:051145/0090 Effective date: 20191122
Owner name: GEMSTAR DEVELOPMENT CORPORATION, CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:051145/0090 Effective date: 20191122
Owner name: INDEX SYSTEMS INC., CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:051145/0090 Effective date: 20191122
Owner name: STARSIGHT TELECAST, INC., CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:051145/0090 Effective date: 20191122 |
|
AS | Assignment |
Owner name: ROVI SOLUTIONS CORPORATION, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:HPS INVESTMENT PARTNERS, LLC;REEL/FRAME:053458/0749 Effective date: 20200601
Owner name: TIVO SOLUTIONS, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:HPS INVESTMENT PARTNERS, LLC;REEL/FRAME:053458/0749 Effective date: 20200601
Owner name: VEVEO, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:HPS INVESTMENT PARTNERS, LLC;REEL/FRAME:053458/0749 Effective date: 20200601
Owner name: ROVI GUIDES, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:HPS INVESTMENT PARTNERS, LLC;REEL/FRAME:053458/0749 Effective date: 20200601
Owner name: ROVI TECHNOLOGIES CORPORATION, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:HPS INVESTMENT PARTNERS, LLC;REEL/FRAME:053458/0749 Effective date: 20200601
Owner name: ROVI TECHNOLOGIES CORPORATION, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:053481/0790 Effective date: 20200601
Owner name: VEVEO, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:053481/0790 Effective date: 20200601
Owner name: TIVO SOLUTIONS, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:053481/0790 Effective date: 20200601
Owner name: ROVI GUIDES, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:053481/0790 Effective date: 20200601
Owner name: ROVI SOLUTIONS CORPORATION, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:053481/0790 Effective date: 20200601 |