US20120317136A1 - Systems and methods for domain-specific tokenization - Google Patents


Info

Publication number
US20120317136A1
Authority
US
Grant status
Application
Prior art keywords
content
metadata
token
seed information
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13404498
Inventor
Michael Papish
Benjamin Green
Alex Helsinger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
UV Corp
Rovi Guides Inc
TV Guide Inc
Original Assignee
United Video Properties Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date


Classifications

    • G06F16/437
    • G06F16/9535
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/30 Information retrieval; Database structures therefor; File system structures therefor
    • G06F17/30017 Multimedia data retrieval; Retrieval of more than one type of audiovisual media
    • G06F17/30023 Querying
    • G06F17/30029 Querying by filtering; by personalisation, e.g. querying making use of user profiles
    • G06F17/30035 Administration of user profiles, e.g. generation, initialisation, adaptation, distribution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/30 Information retrieval; Database structures therefor; File system structures therefor
    • G06F17/30861 Retrieval from the Internet, e.g. browsers
    • G06F17/30864 Retrieval from the Internet, e.g. browsers by querying, e.g. search engines or meta-search engines, crawling techniques, push systems
    • G06F17/30867 Retrieval from the Internet, e.g. browsers by querying, e.g. search engines or meta-search engines, crawling techniques, push systems with filtering and personalisation

Abstract

Systems and methods are provided for identifying a content domain associated with seed information, and searching metadata based on seed information. A processor receives seed information and accesses a database of tokens, each token associated with a content domain and each content domain associated with two or more tokens. For example, the content domain may be “Music” and the token may include “feat,” “with,” or “duet.” The processor determines whether a portion of the seed information matches a token in the database of tokens, and in response to determining that a portion of the seed information matches a token, the processor identifies the content domain associated with the matching token and determines whether at least some of the seed information, including the matching token, corresponds to some of the metadata in a database of data records.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application claims priority to U.S. Provisional Application No. 61/496,463, filed Jun. 13, 2011 and incorporated by reference herein in its entirety.
  • BACKGROUND
  • Systems that catalog metadata about content elements may be called upon to perform two different tasks. The first task is catalog matching, in which records from two or more catalogs (e.g., databases of media content metadata) are aggregated by matching records in one catalog to records in another catalog. The second task is searching, in which a user or client application submits a request for information (often just a text string) and, in response, the system interrogates the catalog to provide appropriate information about media content. In each of these tasks, the system is called upon to compare some seed information (i.e., a metadata field from a first record in the catalog matching task, or a text string input by a user in the searching task) with a corpus of cataloged information (i.e., the records of an existing catalog in the catalog matching and searching tasks). One way of approaching both of these tasks is to search for the identical information within the catalog, but this approach is likely to be very slow when the catalog is large. More sophisticated search and matching methodologies may be used, but these methodologies may impose greater computational requirements.
  • SUMMARY
  • Described herein are systems and methods for improving search and cataloging tasks by identifying the interest domains that are most likely to be relevant to a particular task. These content domains are identified by analyzing and transforming the information (referred to as “seed information”) provided to the search or cataloging task. Seed information may be any data that is used to describe content elements or media attributes of interest (e.g., a text string, an image, or a sound). In some implementations, a first processor receives seed information from a second processor and accesses a database of tokens, each token associated with a content domain and each content domain associated with two or more tokens. For example, the content domain may be “Music” and the token may include “feat,” “with,” or “duet.” The first processor determines whether a portion of the seed information matches a token in the database of tokens. If a portion of the seed information matches a token, the first processor identifies the content domain associated in the database with the matching token, and accesses a database of data records for the identified content domain. This database includes data records representative of content elements in the identified content domain (e.g., titles or genres in the content domain “Books”), with each data record including metadata descriptive of the associated content element. The first processor then determines whether at least some of the seed information, including the matching token, matches some of the metadata in the database of data records. If a data record with matching metadata is found, the first processor may transmit an identifier for that data record to the second processor.
  • In some implementations, if at least some of the seed information matches some of the metadata in the database of data records, the processor may store an association between at least some of the seed information and the second record. The seed information may be metadata of a first record, in which case storing an association between at least some of the seed information and the second record may include storing an association between the first and second records. The first and second records may represent a common content element, in which case storing an association between the first and second records may include storing the metadata of the first record and the metadata of the second record as metadata of a common record for the common content element. Storing an association between the first and second records may include storing the metadata of the first record as metadata of the second record in the database of records. The seed information may be transmitted from a user device, in which case storing an association between at least some of the seed information and the second record may include displaying some of the metadata of the second record at the user device.
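As a hedged illustration of storing the metadata of a first record and a second record as metadata of a common record for a common content element, consider the following naive merge. The field names and the conflict-resolution rule (prefer the existing catalog record's values) are assumptions for illustration, not the patent's method:

```python
def merge_records(first_record, second_record):
    """Combine the metadata of a first record with the metadata of a second
    (catalog) record into a common record for the common content element.
    Where both records supply the same field, the catalog record's value
    is kept (an assumed policy for this sketch)."""
    common_record = dict(second_record)
    for field, value in first_record.items():
        common_record.setdefault(field, value)
    return common_record
```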
  • In some implementations, each token in the database of tokens is also associated with an operative expression, which may indicate the meaning of the token within the seed information. In response to determining that a portion of the seed information matches a token, the processor may generate a query based on at least some of the seed information and the operative expression associated with the token. The processor may use the query to search the database of records.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and other objects and advantages of the systems and methods of the present disclosure will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
  • FIGS. 1A and 1B depict illustrative displays that may be used to provide interactive application items;
  • FIG. 2 depicts an illustrative recommendation display;
  • FIG. 3A is a block diagram of an illustrative interactive media system which may be used with various embodiments;
  • FIG. 3B depicts an illustrative client device;
  • FIG. 4 is a block diagram of a catalog and recommendation system;
  • FIG. 5 is a block diagram of a service processor;
  • FIG. 6 depicts an illustrative memory structure that may be used to store tokens;
  • FIG. 7 is a flow diagram of a process for identifying a content domain associated with seed information;
  • FIG. 8 is a flow diagram of a process for searching a record database using a query constructed from a text string and an operative expression associated with a token;
  • FIG. 9 depicts an illustrative user-facing display 900 for performing media information searches;
  • FIG. 10 is a flow diagram of a process for associating seed information with a record in a database of records;
  • FIGS. 11A-11B illustrate a catalog matching application that utilizes the processes illustrated in FIGS. 7, 8 and 10; and
  • FIG. 12 is a flow diagram of a non-token-based process for identifying a content domain.
  • DETAILED DESCRIPTION
  • The systems and methods described herein take advantage of the different characteristics of metadata in different content domains to improve existing search and cataloging methodologies. In particular, the systems and methods described herein identify tokens within seed information, which may have specific meanings within particular content domains. For example, in the music domain, seed information may take the form “A feat. B” where A and B are the names of individual artists. Once a token such as “feat.” is identified in the seed information, the systems and methods described herein may use the token to identify the most relevant content domains in which to search, use a particular meaning of the token in forming a search query, or both. For example, once a system identifies “feat.” as a token within seed information, the system may then identify “music” as the most relevant content domain associated with the seed information, identify an operative expression associated with the seed information, including the token (i.e., “Artist(A) AND Artist(B),” indicating that the “Artist” metadata field in a matching record should contain both A and B), or both. By extracting domain-specific information from tokens, the systems and methods described herein improve catalog matching and searching applications, as well as related applications undertaken in search and recommendation systems.
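The “A feat. B” example above can be made concrete with a small sketch. The regex, the operative-expression table, and the query string format are all assumptions for illustration, not the patent's implementation:

```python
import re

# Assumed operative-expression table: "feat." indicates that the Artist
# metadata field of a matching record should contain both surrounding names.
OPERATIVE_EXPRESSIONS = {
    "feat.": lambda a, b: "Artist({}) AND Artist({})".format(a, b),
}

def build_query(seed_information):
    """If a token with an operative expression appears in the seed
    information, build a search query from the text surrounding it;
    otherwise return None."""
    for token, expression in OPERATIVE_EXPRESSIONS.items():
        match = re.search(r"(.+?)\s+" + re.escape(token) + r"\s+(.+)",
                          seed_information)
        if match:
            return expression(match.group(1).strip(), match.group(2).strip())
    return None
```

For instance, `build_query("Jay-Z feat. Alicia Keys")` would yield the query `Artist(Jay-Z) AND Artist(Alicia Keys)`, which a record database could interpret as requiring both names in the “Artist” metadata field.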
  • These systems and methods may be particularly useful in cataloging and searching applications that span multiple content domains. For example, when a user is allowed to enter seed information that will simultaneously search across music, movies and celebrities, it can be difficult to determine in which content domain a user is interested. If the system identifies a domain-specific token in the seed information, the system has a place to start in determining the most appropriate content domains from which to provide information to the user. When the system recognizes a token that is used in different domains (and perhaps in different ways), the system may be configured to disambiguate the token based on additional information, such as a user-specified content domain (e.g., specified via a radio button selection in a web-based interface) or from contextual information (e.g., when the user has been browsing in the “Movies” section of an online media store).
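Token disambiguation of the kind described above might be sketched as follows. The ambiguous-token table and the use of a single context domain are illustrative assumptions:

```python
# Assumed table of tokens that are used in more than one content domain.
AMBIGUOUS_TOKENS = {
    "with": {"Music", "Movies"},
}

def disambiguate_token(token, context_domain=None):
    """Return the candidate domains for a token, narrowed to the context
    domain (e.g., a user-selected radio button, or the section of an online
    store the user has been browsing) when one is available and consistent
    with the token."""
    candidate_domains = AMBIGUOUS_TOKENS.get(token, set())
    if context_domain in candidate_domains:
        return {context_domain}
    return candidate_domains
```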
  • The search and cataloging systems and methods disclosed herein may be readily applied to any interactive application (e.g., interactive software, interactive websites, interactive television programs, and interactive presentations) or static application that includes aggregating data for transmitting recommendations to one or more users (e.g., a magazine feature providing product recommendations to different types of readers). As used herein, the term “recommendation” should be understood to mean information chosen to appeal to a user or group of users. Recommendations may be explicit (e.g., by presenting a particular book in a “Recommended For You” display on a website) or implicit (e.g., by presenting an advertisement for a particular product expected to appeal to a particular user or group of users). For illustrative purposes, this disclosure will often discuss exemplary embodiments of these systems and methods as applied in media guidance applications, but it will be understood that these illustrative examples do not limit the range of applications which may be improved by the use of the systems and methods disclosed herein.
  • The amount of information available to users in any given search, recommendation or content delivery system can be substantial. Consequently, many users desire a form of media guidance through an interface that allows users to efficiently navigate content selections and easily identify content that they may desire. An application that provides such guidance is referred to herein as an interactive media guidance application or, sometimes, a media guidance application or a guidance application. In particular, the cataloging techniques disclosed herein may be advantageously utilized by guidance applications (e.g., as part of the guidance data source from which the guidance application draws information).
  • Interactive media guidance applications may take various forms depending on the content for which they provide guidance. One typical type of media guidance application is an interactive television program guide. Interactive television program guides (sometimes referred to as electronic program guides) are well-known guidance applications that, among other things, allow users to navigate among and locate many types of content. As referred to herein, the term “content” should be understood to mean an electronically consumable user asset, such as television programming, as well as pay-per-view programs, on-demand programs (as in video-on-demand (VOD) systems), Internet content (e.g., streaming content, downloadable content, webcasts, etc.), video clips, audio, content information, pictures, rotating images, documents, playlists, websites, articles, books, electronic books, blogs, advertisements, chat sessions, social media, applications, games, descriptions of media assets (e.g., year made, genre, ratings, reviews, etc.) and/or any other media or multimedia and/or combination of the same. Guidance applications also allow users to navigate among and locate content. As referred to herein, the term “multimedia” should be understood to mean content that utilizes at least two different content forms described above, for example, text, audio, images, video, or interactivity content forms. Content may be recorded, played, displayed or accessed by client devices, but can also be part of a live performance.
  • One of the functions of the media guidance application is to provide media guidance data to users. As referred to herein, the phrase, “media guidance data” or “guidance data” should be understood to mean any data related to content, such as metadata, recommendations, media listings, media-related information (e.g., broadcast times, broadcast channels, titles, descriptions, ratings information (e.g., parental control ratings, critic's ratings, etc.), genre or category information, actor information, logo data for broadcasters' or providers' logos, etc.), media format (e.g., standard definition, high definition, 3D, etc.), advertisement information (e.g., text, images, media clips, etc.), on-demand information, blogs, websites, and any other type of guidance data that is helpful for a user to navigate among and locate desired content selections.
  • FIGS. 1 and 2 show illustrative display screens that may be used to provide media guidance data organized according to the cataloging systems and techniques disclosed herein. The display screens shown in FIGS. 1-2 may be implemented on any suitable client device or platform. While the displays of FIGS. 1-2 are illustrated as full screen displays, they may also be fully or partially overlaid over content being displayed. A user may indicate a desire to access content information by selecting a selectable option provided in a display screen (e.g., a menu option, a listings option, an icon, a hyperlink, etc.) or pressing a dedicated button (e.g., a GUIDE button) on a remote control or other user input interface or device. In response to the user's indication, the media guidance application may provide a display screen with media guidance data organized in one of several ways, such as by time and channel in a grid, by time, by channel, by source, by content type, by category (e.g., movies, sports, news, children, or other categories of programming), or other predefined, user-defined, or other organization criteria. The organization of the media guidance data is determined by guidance application data. As referred to herein, the phrase, “guidance application data” should be understood to mean data used in operating the guidance application, such as program information, guidance application settings, user preferences, or user profile information. In some implementations, the guidance application data is based on data from a catalog of data assembled and maintained in accordance with the techniques described herein. For example, the particular channels displayed in FIG. 1 may be those channels for which sufficient metadata is available in a catalog to which the guidance application has access.
  • FIG. 1A shows illustrative grid program listings display 100 arranged by time and channel that also enables access to different types of content in a single display. Display 100 may include grid 102 with: (1) a column of channel/content type identifiers 104, where each channel/content type identifier (which is a cell in the column) identifies a different channel or content type available; and (2) a row of time identifiers 106, where each time identifier (which is a cell in the row) identifies a time block of programming. Grid 102 also includes cells of program listings, such as program listing 108, where each listing provides the title of the program provided on the listing's associated channel and time. With a user input device, a user can select program listings by moving highlight region 110. Information relating to the program listing selected by highlight region 110 may be provided in program information region 112. Region 112 may include, for example, the program title, the program description, the time the program is provided (if applicable), the channel the program is on (if applicable), the program's rating, and other desired information. Program and channel information used in grid 102 may come from a content catalog assembled and maintained according to the techniques described herein.
  • Display 100 may also include advertisement 124, video region 122, and options region 126. The item advertised in advertisement 124 and/or the format of advertisement 124 (e.g., interactive or passive, animated or static) may be selected using the recommendation techniques described herein. Video region 122 may allow the user to view and/or preview programs that are currently available, will be available, or were available to the user. The content of video region 122 may correspond to, or be independent from, one of the listings displayed in grid 102. Grid displays including a video region are sometimes referred to as picture-in-guide (PIG) displays. PIG displays and their functionalities are described in greater detail in Satterfield et al. U.S. Pat. No. 6,564,378, issued May 13, 2003 and Yuen et al. U.S. Pat. No. 6,239,794, issued May 29, 2001, which are hereby incorporated by reference herein in their entireties. PIG displays may be included in other media guidance application display screens of the embodiments described herein.
  • Options region 126 may allow the user to access different types of content, media guidance application displays, and/or media guidance application features. Options region 126 may be part of display 100 (and other display screens described herein), or may be invoked by a user by selecting an on-screen option or pressing a dedicated or assignable button on a user input device. The selectable options within options region 126 may concern features related to program listings in grid 102 or may include options available from a main menu display. Features related to program listings may include searching for other air times or ways of receiving a program, requesting programs similar to or recommended based on a program, recording a program, enabling series recording of a program, setting a program and/or a channel as a favorite, purchasing a program, or other features. Options available from a main menu display may include search options, VOD options, parental control options, Internet options, cloud-based options, device synchronization options, second screen device options, options to access various types of media guidance data displays, options to subscribe to a premium service, options to edit a user's profile, options to access a browse overlay, or other options.
  • Another display arrangement for providing media guidance is shown in FIG. 1B. Video mosaic display 200 includes selectable options 202 for content information organized based on content type, genre, and/or other organization criteria. In display 200, television listings option 204 is selected, thus providing listings 206, 208, 210, and 212 as broadcast program listings. The information in one or more of listings 206, 208, 210 and 212 may include information from an aggregated content catalog, such as those disclosed herein. In display 200 the listings may provide graphical images including cover art, still images from the content, video clip previews, live video from the content, or other types of content that indicate to a user the content being described by the media guidance data in the listing. Each of the graphical listings may also be accompanied by text to provide further information about the content associated with the listing. For example, listing 208 may include more than one portion, including media portion 214 and text portion 216. Media portion 214 and/or text portion 216 may be selectable to view content in full-screen or to view information related to the content displayed in media portion 214 (e.g., to view listings for the channel on which the video is displayed). A user may also select recommendations option 218 to be provided with recommendations, as discussed below.
  • FIG. 2 is an illustrative display for providing recommendations which may be generated, for example, in response to a user selection of recommendation option 218, or in response to any other suitable user action (e.g., logging in to a search service, or launching a media guidance application), and may be based on the data stored in an aggregated catalog assembled and maintained using the techniques described herein. Display 250 includes a set of navigation elements 260, each of which may be selected by a user to change the information displayed (e.g., personal recommendations). In the current display, navigation element 262 is highlighted, indicating that “For You” information is displayed. In display 250, the “For You” information includes an array of content element indicators 252, each of which indicates a particular content element that is recommended for the user in an associated content domain 254 (e.g., “movies,” “music,” “TV,” etc.). The term “content element” is used herein to refer to any asset, category, feature, property or other characteristic of content that is catalogued by catalog and recommendation system 400 according to the methods described herein. Examples of content elements include particular assets (e.g., the Beatles' “White Album”), or descriptors such as categories (e.g., detective novels, role-playing games), attributes (e.g., actors, directors, language), or any other piece of information that may be catalogued or used to classify content. Each content element indicator may indicate an asset (e.g., the movie “Top Gun”), a genre (e.g., the musical genre of “Death Metal”), an artist (e.g., the author J. K. Rowling) or any other content element that is expected to appeal to the user. A user may select “more” icon 256 to view more recommendations on a display in a particular content domain. 
In some embodiments, recommendations are not displayed by content domain, but are displayed according to chronology, in order of user preference, clustered by common elements (e.g., common actors, common themes or common ratings), or arranged randomly. Each of the indicators 252 may be user-selectable (e.g., via mouse click, double-touch, or hover-over); when an indicator is selected, the recommendation systems described herein may provide additional information about the content element and/or allow the user to access assets associated with the content element. Display 250 includes advertisement 258, which may advertise a product, service or other purchasable item. As described above with reference to advertisement 124 of FIG. 1A, the item advertised by advertisement 258 may be selected based on a user's preference or by a determination that the advertised item is related to content elements that the media guidance application has determined that a client may like. Further discussion of various configurations for the display screens of FIGS. 1-2, as well as several other exemplary displays, is presented elsewhere herein.
  • FIG. 3A is a block diagram of an illustrative interactive media system 350. System 350 includes media content source 366 and media guidance data source 368 coupled to communications network 364 via communication paths 370 and 372, respectively. Paths 370 and 372 may separately or together include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths. Communications with media content source 366 and media guidance data source 368 may be exchanged over one or more communications paths, but are shown as a single path in FIG. 3A to avoid overcomplicating the drawing. In addition, there may be more than one of each of media content source 366 and media guidance data source 368, but only one of each is shown in FIG. 3A to avoid overcomplicating the drawing. Different possible types of each of these sources are discussed below. If desired, media content source 366 and media guidance data source 368 may be integrated as one source device. Media content source 366 and media guidance data source 368 include inputs 384 and 386, respectively, for receiving data from external sources. The search and cataloging systems and techniques disclosed herein may be implemented by media guidance data source 368, for example, which may be configured to aggregate content metadata received from multiple metadata sources via input 386.
  • In some implementations, a media guidance application is implemented on a client server, which receives data from a media guidance data source (such as media guidance data source 368) and uses that data to provide a media guidance application to one or more client devices. In some implementations, the media guidance application executes directly on the client device; in this case, the client device is itself a client of the media guidance data source. As used herein, the term “client” or “client device” should be understood to mean any device that receives media guidance data (such as recommendations) from a media guidance data source. A user device, then, is a particular example of a client device. Client devices 374 may be coupled to communications network 364. Namely, user television equipment 352, user computer equipment 354, and wireless user communications device 356 are coupled to communications network 364 via communications paths 358, 360, and 362, respectively. Client devices 374 may include client data server 376, which has additional client devices: user television equipment 378, user computer equipment 380, and wireless user communications device 382. Communications network 364 may be one or more networks including the Internet, a mobile phone network, mobile voice or data network (e.g., a 4G or LTE network), cable network, public switched telephone network, or other types of communications network or combinations of communications networks. Paths 358, 360, 362, 388, 390, 392 and 394 may include any of the communication paths described above in connection with paths 370 and 372. Paths 362 and 394 are drawn with dotted lines to indicate that, in the exemplary embodiment shown in FIG. 3A, they are wireless paths, and paths 358, 360, 388, 390 and 392 are drawn as solid lines to indicate they are wired paths (although these paths may be wireless paths, if desired). 
Various network configurations of devices may be implemented and are discussed in more detail below. Although communications between sources 366 and 368 and client devices 374 are shown as through communications network 364, in an embodiment, sources 366 and 368 may communicate directly with client devices 374 via communication paths (not shown) such as those described above in connection with paths 370 and 372. Additional discussion of suitable configurations of system 350 is presented elsewhere herein.
  • Client devices 374 of FIG. 3A can be implemented in system 350 as any type of equipment suitable for accessing content and/or media guidance data, such as a non-portable gaming machine. Client devices, on which a media guidance application may be implemented, may function as standalone devices or may be part of a network of devices. FIG. 3B shows a generalized embodiment of illustrative client device 300. More specific implementations of client devices are discussed below in connection with FIG. 3A. Client device 300 may receive content and data (such as metadata from a catalog) via input/output (hereinafter “I/O”) path 302. I/O path 302 may provide content (e.g., broadcast programming, on-demand programming, Internet content, content available over a local area network (LAN) or wide area network (WAN), and/or other content) and data (such as media guidance data) to control circuitry 304, which includes processing circuitry 306 and storage 308. Control circuitry 304 may be used to send and receive commands, requests, and other suitable data using I/O path 302. I/O path 302 may connect control circuitry 304 (and specifically processing circuitry 306) to one or more communications paths (described below). I/O functions may be provided by one or more of these communications paths, but are shown as a single path in FIG. 3B to avoid overcomplicating the drawing. In some implementations, client device 300 is a user device through which a user may access content and the media guidance application (and its display screens described above and below). In some implementations, client device 300 is a server or other processing system that acts as an intermediary between media guidance data (such as content metadata) and one or more user devices.
  • An operator may send instructions to control circuitry 304 using input interface 310. Input interface 310 may be any suitable interface, such as a remote control, mouse, trackball, keypad, keyboard, touch screen, touchpad, stylus input, joystick, voice recognition interface, or other input interfaces. Display 312 may be provided as a stand-alone device or integrated with other elements of client device 300. Display 312 may be one or more of a monitor, a television, a liquid crystal display (LCD) for a mobile device, or any other suitable equipment for displaying visual images. In some embodiments, display 312 may be HDTV-capable. In some embodiments, display 312 may be a 3D display, and the interactive media guidance application and any suitable content may be displayed in 3D. A video card or graphics card may generate the output to the display 312. The video card may offer various functions such as accelerated rendering of 3D scenes and 2D graphics, MPEG-2/MPEG-4 decoding, TV output, or the ability to connect multiple monitors. The video card may be any processing circuitry described above in relation to control circuitry 304. The video card may be integrated with the control circuitry 304. Speakers 314 may be provided as integrated with other elements of client device 300 or may be stand-alone units. The audio component of videos and other content displayed on display 312 may be played through speakers 314. In some embodiments, the audio may be distributed to a receiver (not shown), which processes and outputs the audio via speakers 314. In some implementations, client device 300 may not include one or more of display 312 and speakers 314.
  • FIG. 4 is a block diagram of catalog and recommendation system 400, one embodiment of media guidance data source 368 of FIG. 3A. In some implementations, the components of catalog and recommendation system 400 are distributed between multiple processing and storage devices; for example, the components of catalog and recommendation system 400 may be divided between media guidance data source 368, media content source 366 and client data server 376 (FIG. 3A). Catalog and recommendation system 400 is illustrated as divided into three functional components, each of which includes one or more processing devices and storage devices (such as those described above with reference to client device 300 of FIG. 3B): orchestration component 406, offline component 402 and real-time component 404. Offline component 402 may be configured to perform many of the back-end cataloging processes described herein. In particular, offline component 402 includes content information database 414, which may receive media data records from one or more data sources via input 438 (which may correspond to input 386 of media guidance data source 368 of FIG. 3A). Content information database 414 includes memory hardware configured to operate in any of a number of database architectures, such as a relational database management system or a document-based (NoSQL) database architecture. Content information database 414 also includes a processing engine executed on one or more database servers to receive, store and serve data stored in memory. Any of the database hardware and architecture configurations described herein, including those described above with reference to content information database 414, may be used for any of the databases or data storage systems described herein.
In some embodiments, the media data records received at input 438 are electronic signals representative of media content or information about media content (referred to herein as “content metadata” or “metadata”). Signals received at input 438 may be provided by third-party data providers (such as cable television head-ends, web-based data sources, catalog management organizations, or real-time or other data feeds) or from users supplying content or metadata to catalog and recommendation system 400. Signals received at input 438 may take the form of a file of multiple data records, or through a message bus that provides new data records and updates to previous data records as changes are made, for example. In some implementations, content information database 414 is coupled with one or more processing devices configured to extract metadata from data records arranged in a tabular format and to store that metadata in content information database 414. In some implementations, content information database 414 may “catalog” the information received at input 438 in a memory (e.g., local, remote or distributed) according to a data structure, as described in additional detail below.
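The extraction of metadata from tabular data records described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the tab-delimited format, the field names, and the `extract_metadata_records` helper are all assumptions.

```python
import csv
import io

def extract_metadata_records(tabular_text, delimiter="\t"):
    """Parse delimited rows (header row plus data rows) into per-record
    metadata dicts. Empty cells are dropped so that only populated
    metadata fields are stored in the catalog."""
    reader = csv.DictReader(io.StringIO(tabular_text), delimiter=delimiter)
    records = []
    for row in reader:
        records.append({field: value for field, value in row.items() if value})
    return records

# Hypothetical media data record arriving at input 438.
sample = "Title\tArtist\tDomain\nMy Way\tFrank Sinatra\tMusic\n"
records = extract_metadata_records(sample)
```

Each resulting dict would then be cataloged in content information database 414 according to whatever data structure the deployment uses.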
  • Information from database 414 may be transmitted (by one or more servers associated with database 414) to data mining processor 412. Data mining processor 412 is configured to extract information from database 414 and process the extracted information to reconcile information from multiple sources (e.g., data records from multiple catalog management systems). In some implementations, data mining processor 412 includes a memory device configured as a database for storing one or more tokens used in performing the domain-based tokenization techniques described in detail below. Data mining processor 412 may also transmit the reconciled information to core content relations management (“CCRM”) module 408. As used herein, the term “module” should be understood to mean a processing device executing programming logic, such as source code, or higher-level code (e.g., Java code compiled and executed via a Java virtual machine), stored in a memory device (e.g., RAM, ROM, removable memory media, Flash memory, optical disks, etc.). In some implementations, CCRM module 408 includes a MySQL database of reconciled data. Systems and methods for reconciling data in an aggregate catalog, which may be implemented by data mining processor 412 in conjunction with CCRM module 408 and the rest of offline component 402, are described in co-pending application ______, entitled “Systems and methods for transmitting content metadata for multiple data records” (Attorney Docket No. 003597-0608-101), which is incorporated by reference herein in its entirety.
  • CCRM module 408 may also receive information from editorial influence module 410. In some embodiments, editorial influence module 410 receives metadata from human or computer editors, and augments the information that is automatically catalogued with this “editorial” metadata. Editorial influence module 410 includes a server configured to provide a web-based interface between human editors and the database of CCRM module 408. In some implementations, editorial influence module 410 includes a Java application running on an Apache Tomcat web server, but may be executed on any processing device or devices with a user interface. Human editors may interact with the web-based interface using a personal computer connected to the Internet, a hand-held device, or any of the client devices (such as client device 300 of FIG. 3B) described herein. Editorial metadata is described in additional detail below with reference to FIG. 14.
  • Information from database 414 may also be transmitted (e.g., by one or more servers associated with database 414) to export/index processor 416. Export/index processor 416 queries CCRM module 408 to extract catalog information from CCRM module 408 and formats this information for use in different modules of real-time component 404 (as described in detail below). Export/index processor 416 may be configured to extract information in batches on a regular interval (e.g., every twenty-four hours) and format and transmit this batched information to a dependent module, or may be configured to extract information as it is updated in CCRM module 408. As shown in FIG. 4, export/index processor 416 transmits information to domain relations module 420, search indices module 424, and metadata module 422. These modules serve as “quick” sources of certain common types of information for real-time service processor 418 (described in detail below); instead of requiring real-time service processor 418 to query CCRM module 408 whenever a particular kind of metadata is desired, real-time service processor 418 may instead query one of these modules to obtain the information. Domain relations module 420 includes a data storage device configured as a database for storing metadata about relationships between media content and descriptors of media content (such as genre, actors, media domain, rating, etc.). Content metadata module 422 includes a data storage device configured as a database for storing frequently requested metadata. Additionally, metadata module 422 may include only those metadata fields that are commonly used for the recommendation techniques executed by real-time service processor 418. 
In some implementations, metadata module 422 stores a subset of the data stored in CCRM module 408 in a format that can be easily filtered according to the parameters of a search or recommendation request (e.g., a tabular format that can be quickly filtered to exclude movies rated “R” and above). Search indices module 424 includes a data storage device configured as a database for storing search heuristics that may be used by real-time service processor 418 to improve search performance. Many search techniques utilize heuristics such as removing spaces from search queries, transforming queries into lower-case characters, comparing a search query against a list of common variations and misspellings, and identifying one or more n-grams within a search query, among others.
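The search heuristics just listed can be sketched in a few lines; the function names and the word-level n-gram choice are illustrative assumptions, not the contents of search indices module 424:

```python
def normalize_query(query):
    """Apply two common search heuristics: lower-case the query and
    remove spaces, so trivially different queries compare equal."""
    return query.lower().replace(" ", "")

def ngrams(query, n=2):
    """Identify the word n-grams within a search query, another
    heuristic mentioned above (here n=2 yields bigrams)."""
    words = query.split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
```

A misspelling table (mapping common variants to canonical terms) would typically sit alongside these, but is omitted here for brevity.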
  • Real-time service processor 418 receives information from domain relations module 420, metadata module 422 and search indices module 424, as described above, and provides recommendation information to client devices (such as client device 300) via device gateway 434 in orchestration component 406. Device gateway 434 may include any transmission path suitable for communicating recommendation information, such as the path 372 between media guidance data source 368 and communication network 364 and paths 358, 360, 362 and 388 between communication network 364 and client devices 374 (FIG. 3A). In particular, real-time service processor 418 is configured to provide metadata in response to various types of client queries (e.g., for metadata matching a search term, for metadata on content related to a particular content element, etc.). Real-time service processor 418 may provide, for example, identifiers of particular content as well as metadata for that content (e.g., album art in response to a music search request). Real-time service processor 418 may execute any of a number of recommendation techniques, such as those described in co-pending application ______, titled “Systems and methods for providing media recommendations” (Attorney Docket No. 003597-0603-101), which is incorporated by reference in its entirety herein. Real-time service processor 418 may also query CCRM module 408 directly, or provide feedback to CCRM module 408 as an application on a client device interacts with catalog and recommendation system 400 through device gateway 434. 
In some implementations, real-time service processor 418 is implemented as a web service executing on an Apache Tomcat or other server.
  • Real-time service processor 418 also communicates with profiles database 426, which may include a data storage device configured as a database for storing information about client preferences, client equipment, client event history, or other information relevant for transmitting recommendations and data to a client. In some implementations, profiles database 426 stores profile information for individual users (who may be users of an intermediate client service). Media guidance applications (such as recommendation applications) may be personalized based on a client's preferences as stored in profiles database 426. A personalized media guidance application allows a client to customize displays and features to create a personalized “experience” with the media guidance application. The customizations may include preferred sources of content metadata, varying presentation schemes (e.g., color scheme of displays, font size of text, etc.), aspects of content listings displayed (e.g., only HDTV or only 3D programming, client-specified broadcast channels based on favorite channel selections, re-ordering the display of channels, recommended content, etc.), desired recording features (e.g., recording or series recordings for particular users, recording quality, etc.), parental control settings, customized presentation of Internet content (e.g., presentation of social media content, e-mail, electronically delivered articles, etc.) and other desired customizations. This personalized experience may be created by allowing a client (such as a user) to input these customizations and/or by the media guidance application monitoring user activity to determine various user preferences.
  • Clients may access their personalized guidance application by logging in, communicating with catalog and recommendation system 400 using a designated protocol over path 440, or otherwise identifying themselves to the guidance application. The media guidance application may allow a client to provide profile information for profiles database 426 or may automatically compile profile information. The media guidance application may, for example, monitor the content the client accesses and/or other interactions the user may have with the guidance application, including responses to and feedback based on recommended content. Profiles database 426 may communicate with event database 436, which may store event records that contain information about client interactions with catalog and recommendation system 400. Profiles database 426 may access event database 436 to reconstruct a client's history of use of catalog and recommendation system 400 and to determine content preferences. Additionally, the media guidance application may obtain all or part of other profiles that are related to a particular client (e.g., from other web sites on the Internet the client accesses, such as www.allrovi.com, from other media guidance applications the client accesses, from other interactive applications the client accesses, from another device of the client, etc.), and/or obtain information about the client from other sources that the media guidance application may access. As a result, a client can be provided with a unified guidance application experience across the client's different devices. This type of experience is described in greater detail below. Additional personalized media guidance application features are described in greater detail in Ellis et al., U.S. Patent Application Publication No. 2005/0251827, filed Jul. 11, 2005; Boyer et al., U.S. Pat. No. 7,165,098, issued Jan. 16, 2007; and Ellis et al., U.S. Patent Application Publication No. 2002/0174430, filed Feb. 21, 2002, which are hereby incorporated by reference herein in their entireties.
  • Real-time service processor 418 transmits information to and receives information from client devices by way of path 440 and device gateway 434. As described above with reference to paths 370 and 372 of FIG. 3A, path 440 may include one or more communication paths such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications, free-space connections, or any other suitable wired or wireless communications path or combination of such paths. Device gateway 434 may be, for example, a web service implemented on one or more server devices, configured to receive requests from client devices via path 440. The client devices that communicate with device gateway 434 may be client devices 374 of FIG. 3A. These client devices may take the form of client device 300 (FIG. 3B), for example, and may communicate with device gateway 434 via I/O path 302 (FIG. 3B). The data provided to client devices via path 440 may be supplemented by data from supplemental database 428, which may store metadata and media content that is provided along with the information transmitted from real-time service processor 418 to device gateway 434. Supplemental database 428 may include, for example, media content source 366 (FIG. 3A), and may include or be in communication with media guidance data source 368 or another content metadata catalog. For example, in response to a call to device gateway 434 from a client device, device gateway 434 may send a search or recommendation request to real-time service processor 418. Real-time service processor 418 may respond by sending a list of content identifiers that satisfy the request back to device gateway 434, at which point device gateway 434 will request appropriate supplementary information from supplemental database 428 (e.g., clips of videos whose identifiers are included in the data provided to device gateway 434 by real-time service processor 418). 
In some implementations, supplemental database 428 is populated with information from content information database 414, domain relations module 420, metadata module 422 or search indices module 424, or may be the same as one or more of these databases or modules.
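The round trip just described (client call, gateway, real-time service, supplemental lookup) can be sketched as below. The stand-in services, content identifiers, and clip names are hypothetical; a real deployment would involve networked services rather than local functions:

```python
def real_time_service(request):
    """Stand-in for real-time service processor 418: return content
    identifiers satisfying a search or recommendation request."""
    catalog = {"sinatra": ["song-001", "song-002"]}  # assumed toy catalog
    return catalog.get(request.lower(), [])

def supplemental_lookup(content_ids):
    """Stand-in for supplemental database 428: fetch supplementary data
    (e.g., video clips) for each returned identifier."""
    clips = {"song-001": "clip-001.mp4", "song-002": "clip-002.mp4"}
    return {cid: clips[cid] for cid in content_ids if cid in clips}

def device_gateway(request):
    """Stand-in for device gateway 434: forward the request, then attach
    supplementary information to the identifiers that come back."""
    ids = real_time_service(request)
    return {"ids": ids, "supplemental": supplemental_lookup(ids)}

response = device_gateway("Sinatra")
```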
  • FIG. 5 is a block diagram of service processor 500, one possible implementation of real-time service processor 418 (FIG. 4) and media guidance data source 368 (FIG. 3A). Service processor 500 may be functionally organized into web service tier 532 and orchestration tier 534. Orchestration tier 534 includes dispatcher processor 506, one or more service context modules 508, sources 510 and cache 516. Dispatcher processor 506 manages the flow of data between web service tier 532 and other components in orchestration tier 534, and in particular, responds to requests from web service tier 532 by checking to see whether data stored in cache 516 satisfies the request or by determining which of service context modules 508 to call to satisfy the request. Requests from web service tier 532 may represent requests from client devices (such as client devices 374 of FIG. 3A) received via path 440 and device gateway 434 (FIG. 4), for example. In some implementations, dispatcher processor 506 includes processing hardware configured to execute a Java application to perform the operations described herein. Cache 516 includes a memory device that stores data recently received from or transmitted to the web service tier 532, thus providing a “quick” source for data that may be requested or used multiple times. Each of sources 510 includes computer-executable code (e.g., Java code) for performing a particular search or recommendation operation. For example, a source may include code that may be executed to perform a search of a particular database, or may include code that may be executed to identify similar items to a specified item within a catalog. 
Sources 510 include primary sources 512, which include basic or common search or recommendation operations, and secondary sources 514, which include custom implementations of particular search or recommendation operations (e.g., for particular clients) or implementations of search or recommendation operations that build on or use primary source operations stored as primary sources 512. Collections of one or more sources 510 are stored as service context modules 508, each of which specifies a particular set of one or more of sources 510 to use when satisfying requests (e.g., from particular regions like North America or Europe, or from particular customers). In some implementations, service context modules 508 are represented as XML files.
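A dispatcher of this shape (cache check first, then the sources named by the matching service context) might be sketched as follows. The class layout, the context name, and the toy sources are assumptions for illustration only:

```python
class Dispatcher:
    """Sketch of dispatcher processor 506: consult cache 516 first, then
    route the request through the sources listed in the named service
    context module, caching whatever result is produced."""

    def __init__(self, contexts):
        self.cache = {}            # stand-in for cache 516
        self.contexts = contexts   # context name -> ordered list of source callables

    def dispatch(self, context_name, query):
        key = (context_name, query)
        if key in self.cache:               # cache hit: skip the sources entirely
            return self.cache[key]
        for source in self.contexts[context_name]:
            result = source(query)
            if result is not None:          # first source able to satisfy the request wins
                self.cache[key] = result
                return result
        return None

# Hypothetical primary and secondary sources for one service context.
primary = lambda q: ["movie-42"] if q == "gable" else None
secondary = lambda q: ["custom-1"] if q == "custom" else None
dispatcher = Dispatcher({"north_america": [primary, secondary]})
```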
  • Dispatcher processor 506 is also in communication with a number of service modules in web service tier 532, including REST v1 service module 520, REST v2 service module 522, and SOAP service module 524. These different service modules provide interfaces and transport mechanisms for accessing the “back-end” processing and data of orchestration tier 534. REST and SOAP are two different ways of packaging input and output data, and any other such protocols may be used. In some embodiments, service processor 500 includes processing and networking hardware configured with a software platform for serving dynamically generated recommendations applications in XML and JSON.
  • In some embodiments, export/index processor 416 (FIG. 4) or service processor 500 (FIG. 5) includes one or more data contracts. Data contracts are electronic data files, encoded in a data definition language, that define a type and structure of data available to an application that accesses a service that operates according to the contract. When an application accesses a contracted service, the application can parse the contract to determine what data (e.g., assets, metadata, recommendations, etc.) the service can provide. A single service may be instantiated multiple times with different contracts, with each contract governing a different type of data. A service may advertise the data types and structures that, according to the contract, the service can provide to applications. The application may receive this information and determine which services provide data of a type and structure that is compatible with the application's own purpose and architecture. Multiple services, each with its own contract or contracts, may communicate with each other, passing data through the services and transforming the data, repackaging the data, or adding content along the way. In some embodiments, a service determines the contracts that it advertises based on the contracts that it reads in from other services (indicating the data types and structure to which the service has access), plus additional fields and operations that represent additional functionality provided by the service itself.
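A data contract of the kind described could be sketched as below. JSON is used purely for illustration (the text requires only "a data definition language"), and the contract's operations and field names are hypothetical:

```python
import json

# A hypothetical contract advertising what a recommendations service can
# return per operation. The schema shown is an assumption, not the
# patent's actual contract format.
CONTRACT = json.dumps({
    "service": "recommendations",
    "operations": {
        "search": {"returns": "asset", "fields": ["title", "artist"]},
        "similar": {"returns": "asset", "fields": ["title"]},
    },
})

def advertised_fields(contract_text, operation):
    """Parse a contract (as a consuming application would) and report
    the fields the named operation can return."""
    contract = json.loads(contract_text)
    return contract["operations"][operation]["fields"]
```

An application could call `advertised_fields` for each operation to decide whether the service's data is compatible with its own architecture.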
  • The types and structure of data specified in a contract may take any of a number of forms. For example, a recommendation system may receive a search or other query and may return pointers to media assets and fields containing metadata about those media assets. Thus, in some configurations of service processor 500 (FIG. 5), each of services 520, 522 and 524 advertises the operations the service supports, the types of data that the service can return per operation, and the fields it can return per data type per operation. A field may be single- or multi-valued, optional or required, and stored as a string or a more complex data object (such as via a map to an internal object through JSON). The same contract can also be used in different service contexts, which utilize different sets of underlying source data (e.g., different third-party metadata catalogs).
  • The above systems may be configured to perform any of the domain-specific tokenization techniques described herein, which may improve existing search and cataloging methodologies. Although the techniques described below may be described as executed by data mining processor 412 or real-time service processor 418 (FIG. 4) for clarity of illustration, it will be understood that these techniques may be performed by any device or group of devices configured to do so; for example, any special- or general-purpose processing circuitry located within media guidance data source 368 (FIG. 3A), client device 300 (FIG. 3B), or any appropriately-configured component of catalog and recommendation system 400 (FIG. 4) such as processing circuitry associated with CCRM module 408. In some implementations, these processes are performed by multiple processing devices operating in series, in parallel, or a combination.
  • FIG. 6 illustrates a memory structure that data mining processor 412 may use to store tokens in accordance with the systems and methods described herein. Memory structure 600 may be part of a larger memory structure that may be distributed across multiple storage devices or across multiple memory locations in a single storage device. As depicted in FIG. 6, memory structure 600 includes seven token entries 608-620. For the entries 608-620 in memory structure 600, data mining processor 412 populates token fields 602, content domain fields 604 and operative expression fields 606. Token fields 602 define a token for each entry. Token entry 608 includes token “feat., ft., feat” (representing three different tokens, as discussed below), token entry 610 includes token “duet,” token entry 612 includes token “rating,” token entry 614 includes token “starring,” token entry 616 includes token “guest,” token entry 618 includes token “by”, and token entry 620 includes token “rating” (the same token as included in token entry 612, but associated with a different domain, as discussed in detail below).
  • In memory structure 600, each token is a text string. These text strings may include any combination of numbers, letters, symbols, punctuation marks and other textual characters, in any language or character set. In some implementations, one or more tokens are graphical tokens that data mining processor 412 is configured to recognize within images, video, and other graphical content. For example, a broadcaster's graphical logo may be a token that data mining processor 412 can identify within a streaming video. In some implementations, one or more tokens are audio tokens that data mining processor 412 is configured to recognize within audio files, or audio portions of multimedia files. For example, the “NBC chimes” may be a token that catalog and recommendation system 400 can identify within an audio recording.
  • Content domain fields 604 identify content domains associated with tokens. For example, token entry 608 indicates that any one of the tokens “feat.”, “ft.” and “feat” is associated with the content domain “Music.” The association between the tokens of token entry 608 and the content domain “Music” indicates that any of the tokens “feat.”, “ft.” or “feat” may be used in search queries and textual information that describe content elements in the “Music” domain. In memory structure 600, token entries 608 and 610 include content domain “Music,” token entry 612 includes content domain “Movies,” token entries 614 and 616 include content domain “Video,” token entry 618 includes content domain “Books” and token entry 620 includes content domain “Games.” The data in which a token may be identified is referred to herein as “seed information,” many examples of which are given throughout this disclosure. If the same token is associated with multiple content domains (e.g., the token “rating” associated with “Movies” and “Games” in token entries 612 and 620, respectively), memory structure 600 may reflect this by including the multiple content domains in the content domain field 604 associated with the entry for that token, or by including separate entries for the token for each associated content domain.
  • Operative expression fields 606 identify operative expressions associated with tokens. Operative expressions indicate the meaning of the token within the seed information, and may be used along with the rest of the seed information to form queries for searching a catalog. Operative expressions may include combinations of logical operators, arithmetic operators, search operators, and calls to functions such as signal processing functions, filtering functions, memory retrieval functions, or any other functions. For example, the association between the “duet” token in token entry 610 and the operative expression “STRING1 STRING2 token->Artist(STRING1) AND Artist(STRING2)” indicates that, when a text string is received that includes two sub-strings followed by the token “duet,” the text string describes a content element in which each of the two sub-strings appears in the “Artist” metadata field of that element. In memory structure 600, token entry 608 includes operative expression “STRING1 token STRING2->Artist(STRING1) AND Artist(STRING2),” token entry 612 includes operative expression “STRING token->MPAA(STRING),” token entry 614 includes operative expression “token STRING->PrimaryCast(STRING),” token entry 616 includes operative expression “token STRING->SupportingCast(STRING),” token entry 618 includes operative expression “token STRING->Author(STRING)” and token entry 620 includes operative expression “STRING token->ESRB(STRING).” The syntax used in operative expression fields 606 is purely illustrative, and any syntax may be used. When tokens are associated with operative expressions, a token that is associated with multiple content domains may also be associated with multiple different operative expressions (e.g., the token “rating” associated with “Movies” and “Games” and the operative expressions “STRING token->MPAA(STRING)” and “STRING token->ESRB(STRING)” in token entries 612 and 620, respectively). 
A token may also be associated with the same operative expression across different content domains.
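One way to sketch memory structure 600 and the application of token entry 610's operative expression in code is shown below. The dictionary layout, the handler functions standing in for operative expressions, and the single-word artist names are simplifying assumptions:

```python
# Sketch of memory structure 600 (abridged): token -> (content domain,
# handler standing in for the operative expression of field 606).
TOKEN_DB = {
    "feat.": ("Music", lambda s1, s2: f"Artist({s1}) AND Artist({s2})"),
    "ft.":   ("Music", lambda s1, s2: f"Artist({s1}) AND Artist({s2})"),
    "duet":  ("Music", lambda s1, s2: f"Artist({s1}) AND Artist({s2})"),
    "by":    ("Books", lambda s: f"Author({s})"),
}

def apply_duet_expression(seed_text):
    """Interpret 'STRING1 STRING2 duet' per token entry 610: both
    sub-strings must appear in the 'Artist' metadata field."""
    words = seed_text.split()
    if len(words) >= 3 and words[-1] == "duet":
        domain, handler = TOKEN_DB["duet"]
        return domain, handler(words[0], words[1])
    return None
```

The returned query string could then be run against the catalog for the identified domain.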
  • FIG. 7 is a flow diagram 700 of a process for identifying a content domain associated with seed information. At step 702, data mining processor 412 receives seed information, which is any data that is used to describe content elements or media attributes of interest to a client (e.g., a user input at a search dialog box, a user selection of a “More Like This” option, or a particular metadata field from a catalog record). For ease of explanation, the steps of flow diagram 700 are illustrated herein with reference to textual seed information (referred to as “the text string”), but other types of seed information may be used with any of the systems and processes described herein (e.g., graphical and audio seed information, as described above). Data mining processor 412 may receive the seed information at step 702 from a processing device associated with any source of seed information (e.g., a user input, metadata provided to catalog and recommendation system 400 by an external catalog, or a metadata field that is being parsed by data mining processor 412 during a catalog matching task). In some implementations, data mining processor 412 itself may identify the seed information, and may then process the seed information in accordance with the remaining steps of flow diagram 700.
  • At step 704, data mining processor 412 accesses a database of tokens. This database may include a memory structure like memory structure 600 of FIG. 6, for example, or may be any database or other memory structure. As discussed in detail above with reference to FIG. 6, each token in the database accessed at step 704 is associated with at least one content domain (e.g., stored in a field in the database). In some implementations, each content domain is associated with two or more tokens in the database of tokens. For ease of explanation, the following discussion and examples will focus on the case in which a token is associated with a single domain.
  • At step 706, data mining processor 412 determines whether a portion of the seed information received at step 702 matches any of the tokens in the token database (e.g., in any of token fields 602 of memory structure 600 of FIG. 6). Data mining processor 412 may make this determination using any matching technique, such as sequence alignment techniques typically applied in genetic sequencing applications, simple character-by-character comparisons between the text string and the tokens in the database, regular expression matching techniques, approximate string matching techniques, or any other strict or approximate string searching or pattern matching technique. A “match” need not mean an exact character-for-character match between a portion of the seed information and a token, but may mean a likeness with respect to certain qualities, an approximate match, or any other semantic notion of matching. For example, data mining processor 412 may be configured to link all variants of a token to a single entry in a token database (e.g., “ft.” and “feat” both linking to the entry for the token “feat.” of token entry 608 of memory structure 600 of FIG. 6). Data mining processor 412 may also be configured with separate entries in the token database for each of these variants.
  • If data mining processor 412 determines at step 706 that no portion of the seed information matches a token in the database, data mining processor 412 may return to step 702 and wait for additional seed information before invoking the token identification steps 704 and 706 again (and may instead, for example, proceed to analyze the seed information received at step 702 using other techniques). If data mining processor 412 determines at step 706 that a portion of the seed information does match a token in the database, data mining processor 412 identifies the content domain associated with the matching token at step 708, using the database of tokens (e.g., memory structure 600 of FIG. 6).
  • At step 710, after data mining processor 412 has identified a matching token at step 706 and the associated content domain at step 708, data mining processor 412 may store an association between the seed information and the content domain identified at step 708. The location and nature of the storage at step 710 may depend upon the application for which the process of FIG. 7 is performed. For example, in a catalog matching application in which the seed information received at step 702 is a metadata field name or value from a new data record that is being integrated into an existing catalog, data mining processor 412 may implement step 710 by storing the metadata field in the catalog along with additional metadata associating the metadata field with the content domain (e.g., a new “Domain” metadata field). Data mining processor 412 may implement step 710 by storing the seed information and the content domain temporarily in a RAM while steps 712 and 714 are performed.
  • At step 712, data mining processor 412 accesses a database of data records representative of content elements within the content domain identified at step 708. In some implementations, this database of data records is part of CCRM module 408 (FIG. 4). Each data record in the database accessed at step 712 includes metadata descriptive of the content element associated with the data record. For example, a data record in the “Movies” database may represent the actor Clark Gable, and have associated metadata fields Year_Born, Films, Spouse, etc. The database accessed at step 712 may be a database associated with a single content domain, or may be particular entries related to the identified content domain taken from a database that covers multiple content domains.
  • At step 714, data mining processor 412 determines whether at least some of the seed information, including the matching token, corresponds to some metadata of any of the data records in the database. Data mining processor 412 may use any of the “matching” techniques described above (with reference to step 706) in making the determination at step 714. Data mining processor 412 also looks for matches between different portions of the seed information and two or more different metadata fields within a single data record, which may indicate a stronger correspondence between the seed information and that data record than if only a single metadata field in that data record matched some of the seed information. The correspondence determined at step 714 may also include determining that the corresponding data record is consistent with any operative expressions associated with the token, as described in detail below. Based on the determination of step 714, data mining processor 412 may identify one or more records that correspond to the seed information.
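One way to realize the multi-field correspondence described for step 714 is to count how many metadata fields of a record are matched by any portion of the seed information, so that a record matched in two fields outranks a record matched in one. The record layout, field names, and the simple substring test below are assumptions made for illustration:

```python
# Illustrative scoring for step 714: a record that matches portions of the
# seed information in several metadata fields scores higher than a record
# that matches in only one field.
def correspondence_score(seed_portions, record):
    """Count the metadata fields matched by any portion of the seed."""
    matched_fields = 0
    for field_value in record["metadata"].values():
        if any(p.lower() in str(field_value).lower() for p in seed_portions):
            matched_fields += 1
    return matched_fields

def best_matching_records(seed_portions, records, min_score=1):
    """Return records meeting the minimum score, strongest match first."""
    scored = [(correspondence_score(seed_portions, r), r) for r in records]
    return [r for score, r in sorted(scored, key=lambda x: x[0], reverse=True)
            if score >= min_score]
```

For seed portions “Missy Elliott” and “Jay-Z”, a record whose Artist and Featured fields each match a portion scores 2 and is ranked above a record matching only in its Artist field.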
  • At step 716, data mining processor 412 transmits identifiers for the corresponding data records to the second processor (which provided the seed information received at step 702). The identifiers may be one or more memory addresses for the corresponding data records, a URL or other identification code that can be used to retrieve the corresponding data records, or the data records themselves. In some implementations, the second processor may use the matching data records to, for example, provide recommendations or populate a display on a client device. In some implementations, the identifiers are transmitted at step 716 to a device other than the second processor (instead of or in addition to the second processor), such as another component in catalog and recommendation system 400 or directly to a user device downstream of the second processor.
  • In implementations in which tokens are associated with operative expressions as well as content domains, data mining processor 412 or real-time service processor 418 (FIG. 4) may use these operative expressions to form queries that can be used to interrogate a database of data records of content metadata. FIG. 8 is a flow diagram 800 of a process for searching a record database using a query constructed from textual seed information and an operative expression associated with a token in the seed information. The process of FIG. 8 will be described as commencing after the identification at step 706 of FIG. 7 of a matching token within seed information. At step 802, data mining processor 412 identifies, from the token database (such as memory structure 600 of FIG. 6), an operative expression associated with the matching token. The associated operative expression may be identified within the token database in the same manner described above for identifying an associated content domain at step 708 of FIG. 7.
  • At step 804, data mining processor 412 generates a query based on the seed information and the operative expression identified at step 802. To generate this query, data mining processor 412 may follow the instructions provided by the operative expression, as described above with reference to FIG. 6. For example, data mining processor 412 may use the operative expression “STRING1 STRING2 token->Artist(STRING1) AND Artist(STRING2)” in token entry 612 of FIG. 6 to construct a query “Artist(STRING1) AND Artist(STRING2)” that can then be interpreted by a search engine or other matching processor. At step 806, data mining processor 412 searches a database of content element data records using the query generated at step 804, and at step 808, data mining processor 412 identifies one or more records that satisfy the query (representing, e.g., content elements or attributes such as actress, release year, etc.). In implementations in which data mining processor 412 has identified one or more content domains relevant to the text string (e.g., using the process of FIG. 7), data mining processor 412 may limit the search to data records associated with the identified content domains, as described above with reference to step 712 of FIG. 7. At step 810, data mining processor 412 transmits some of the metadata of the satisfactory data records to another device (e.g., for display on a handheld client device via CCRM module 408, real-time service processor 418, and device gateway 434, or for further processing at another component of catalog and recommendation system 400).
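The query construction at step 804 can be sketched as splitting the seed information around the matching token and substituting the pieces into the right-hand side of the operative expression. The expression format and the minimal parser below are assumptions for illustration, not the claimed implementation:

```python
# Sketch of step 804: apply an operative expression of the form
# "STRING1 STRING2 token -> <template>" to textual seed information.
def build_query(seed_text, token, expression):
    """Split the seed on the token and substitute the resulting pieces
    into the right-hand side of the operative expression."""
    template = expression.split("->", 1)[1].strip()
    left, _, right = seed_text.partition(token)
    return (template
            .replace("STRING1", left.strip())
            .replace("STRING2", right.strip()))

query = build_query(
    "Missy Elliott feat. Jay-Z", "feat.",
    "STRING1 STRING2 token -> Artist(STRING1) AND Artist(STRING2)")
# query is "Artist(Missy Elliott) AND Artist(Jay-Z)"
```

The resulting string can then be handed to a search engine or other matching processor, as described for steps 806 and 808.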
  • FIG. 9 is an illustrative user-facing display 900 for performing media information searches, which may be presented on display 312 of client device 300 (FIG. 3B). Display 900 includes a text input box 902 into which a user may enter seed information regarding media in which the user is interested (e.g., as shown in FIG. 9, “missy elliott feat. jay-z”). The user may input the seed information via input interface 310 of client device 300 (FIG. 3B). After entering text in input box 902, the user may execute a search using the seed information by selecting or otherwise activating “GO” button 908. In display 900, the user is also given the option to select one or more domains 906 for focusing the search. In such implementations, real-time service processor 418 may use the user-selected domain to narrow the scope of its search and/or disambiguate any tokens in the search string between multiple content domains.
  • As discussed above, the FIG. 7 process of identifying a content domain associated with seed information may be used advantageously in catalog matching applications. One such implementation is illustrated in FIG. 10, which is a flow diagram 1000 of a process for associating seed information with a record in a database of records. Data mining processor 412 may execute the steps of flow diagram 1000, for example, when importing new data records from different media catalogs into an existing database. At step 1002, data mining processor 412 accesses a database of data records belonging to a content domain (as identified, for example, using the process illustrated in FIG. 7). Each record in the record database accessed at step 1002 includes associated metadata, as described above with reference to step 712 of FIG. 7.
  • At step 1004, data mining processor 412 receives seed information (e.g., as described above with reference to step 702 of FIG. 7). At step 1006, data mining processor 412 determines whether the seed information received at step 1004 corresponds to any metadata of any data records in the records database. Data mining processor 412 may use any search or matching technique at step 1006 (including techniques which do not require an exact match) as described above. If data mining processor 412 determines that there is a correspondence at step 1006, data mining processor 412 will store an association between the seed information and the corresponding record within the record database of CCRM module 408 (FIG. 4) at step 1008 (e.g., by including some or all of the seed information as metadata of the corresponding data record, or by storing a link in memory between the seed information and the corresponding data record). If data mining processor 412 determines that there is no correspondence between the seed information and any data records in the record database at step 1006, data mining processor 412 creates a new data record for the seed information in the record database of CCRM module 408 (FIG. 4) at step 1010, and at step 1012, data mining processor 412 stores at least some of the seed information as metadata of the new data record created at step 1010.
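The match-or-create behavior of steps 1006 through 1012 might be sketched as follows; the record layout, the bidirectional substring test standing in for the matching techniques described above, and the use of an "aliases" metadata field to store the association are all assumptions for illustration:

```python
# Sketch of flow diagram 1000 (steps 1006-1012): associate seed information
# with an existing record when a correspondence is found; otherwise create
# a new record holding the seed information as metadata.
def match_or_create(seed, records, next_id):
    for record in records:
        if any(seed.lower() in str(v).lower() or str(v).lower() in seed.lower()
               for v in record["metadata"].values()):
            # Step 1008: store the seed as additional metadata of the match.
            record["metadata"].setdefault("aliases", []).append(seed)
            return record
    # Steps 1010-1012: no correspondence found, so create a new record
    # and store the seed information as its metadata.
    new_record = {"id": next_id, "metadata": {"name": seed}}
    records.append(new_record)
    return new_record
```

Calling this with seed information already represented in the database attaches the seed to the existing record; calling it with unrecognized seed information grows the database by one record.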
  • FIGS. 11A and 11B illustrate a catalog matching application that utilizes the processes illustrated in flow diagrams 700 (FIG. 7), 800 (FIG. 8), and 1000 (FIG. 10). FIG. 11A depicts a memory structure 1102 that may be included in a records database. Memory structure 1102 includes eight different data records 1110-1124. Memory structure 1102 is shown as having three fields: Record ID fields 1104, Content Domain fields 1106 and Other Metadata fields 1108. For example, record 1110 includes Record ID “111,” Content Domain “Music” and Other Metadata including “Title” and “Artist.”
  • FIG. 11B illustrates a catalog matching process that may be executed by data mining processor 412 upon receiving new record 1126 (e.g., from an external metadata catalog). New record 1126 has two fields: External ID field 1128 and Metadata field 1130. The ID in External ID field 1128 (“67341”) represents an identification number or code associated with new record 1126 by an external catalog and may have no relation to the Record IDs stored in the memory structure 1102 of FIG. 11A. The metadata included in Metadata field 1130 includes, among other things, a field for “Artist” that has the value “Missy Elliott feat. Jay-Z.” Upon receiving new record 1126, data mining processor 412 may analyze new record 1126 to determine whether new data record 1126 corresponds to any of the records already stored in memory structure 1102. To do so, data mining processor 412 may begin by examining each of the portions of metadata in Metadata field 1130 (i.e., by using each portion of metadata as seed information in turn). In FIG. 11B, data mining processor 412 first uses the “Artist” value from metadata field 1130 as seed information 1132. This value is the text string “Missy Elliott feat. Jay-Z.” Executing the process described above with reference to FIG. 7, data mining processor 412 identifies token 1134 within seed information 1132: “feat.” (see, e.g., the illustrative token database of FIG. 6). The remainder of seed information 1132 is non-token portion 1136.
  • Using token 1134 and the process described above with reference to FIG. 7, data mining processor 412 accesses a token database to identify content domain 1138 “Music” associated with the identified token (see, e.g., the illustrative token database of FIG. 6). Executing the process described above with reference to FIG. 8, data mining processor 412 also identifies operative expression 1140 associated with token 1134: “STRING1 STRING2 token->Artist(STRING1) AND Artist(STRING2)” (see, e.g., the illustrative token database of FIG. 6). Using content domain 1138, operative expression 1140 and non-token portion 1136, data mining processor 412 may construct query 1142 in accordance with the process illustrated in FIG. 8, and may use query 1142 to search memory structure 1102 for corresponding records in accordance with the process illustrated in FIG. 10. Data mining processor 412 may identify a corresponding Record ID 1142 from memory structure 1102, namely record 1110, and proceed to associate new record 1126 with record 1110 in memory structure 1102 (e.g., by including new record 1126 as metadata of record 1110).
  • As described above, in some implementations, a token may be associated with one or more content domains. In such implementations, it may be useful to have non-token-based techniques for determining which content domains are most likely to be associated with seed information (e.g., in order to narrow down the number of records that must be searched to identify data records relevant to the seed information). FIG. 12 is a flow diagram 1200 of a non-token-based process for identifying a content domain that may be executed by device gateway 434 or real-time service processor 418. For illustrative purposes, flow diagram 1200 will be described as executed by real-time service processor 418.
  • At step 1202, real-time service processor 418 determines whether a client has selected a domain (e.g., through input interface 310 of client 300 of FIG. 3B). A client (e.g., a user or a client application) may select a domain in any number of ways. For example, a client may select a domain by choosing one of a list of domains when performing a search operation (as shown, for example, in display 900 of FIG. 9). A client application may transmit a particular domain selection along with the seed information when requesting a search or recommendation operation from real-time service processor 418. If real-time service processor 418 determines that a client has selected a domain at step 1202, real-time service processor 418 proceeds to step 1212 and identifies a content domain accordingly. If real-time service processor 418 determines that a user has not selected a domain at step 1202, real-time service processor 418 executes step 1204 and determines whether another client input has indicated a domain. For example, if a user has entered seed information in an input box in a “Books” section of an online bookseller's website, real-time service processor 418 may determine that the domain “Books” is indicated. If real-time service processor 418 determines that another client input has indicated a domain at step 1204, real-time service processor 418 proceeds to step 1212 and identifies a content domain accordingly.
  • If real-time service processor 418 determines that another client input has not indicated a domain at step 1204, real-time service processor 418 executes step 1206 and determines whether a media type selected by the client indicates a domain. For example, if a client device requests information in MP3 format, real-time service processor 418 may determine that the media type (MP3) indicates the “Music” or “Audio” domains. If real-time service processor 418 determines that a selected media type has indicated a domain at step 1206, real-time service processor 418 proceeds to step 1212 and identifies a content domain accordingly. If real-time service processor 418 determines that a domain is not indicated by a selected media type at step 1206, real-time service processor 418 proceeds to execute step 1208 and determines whether an event history indicates a domain. An event history may be any stored information regarding previous actions or requests by the client, and may provide domain information in any of a number of ways. For example, if a client has issued three requests for information in the past few minutes, all of which were identified as associated with the “Games” domain, real-time service processor 418 may determine that newly received seed information is also likely to be associated with the “Games” domain. If a high percentage of a client's previous requests at a particular time of day have been associated with the “News” domain, real-time service processor 418 may determine that new seed information received at that time of day is also likely to be associated with the “News” domain. If real-time service processor 418 determines that an event history has indicated a domain at step 1208, real-time service processor 418 proceeds to step 1212 and identifies a content domain accordingly.
If real-time service processor 418 determines that the event history does not indicate a domain at step 1208, real-time service processor 418 proceeds to execute step 1210 and determines whether a client profile indicates a domain. If real-time service processor 418 determines that a client profile has indicated a domain at step 1210, real-time service processor 418 proceeds to step 1212 and identifies a content domain accordingly. If real-time service processor 418 determines that a client profile does not indicate a domain at step 1210, real-time service processor 418 may return to step 1202 and wait for additional seed information before invoking steps 1202-1210 again (and may instead, for example, proceed with any requested operations without relying on an identified content domain).
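The decision cascade of flow diagram 1200 (steps 1202 through 1212) can be sketched as an ordered chain of signal checks, where the first signal that yields a domain wins. The request dictionary keys, the media-type lookup table, and the signal names are assumptions for this sketch:

```python
# Illustrative fallback chain for flow diagram 1200: try each non-token
# signal in order and return the first content domain found.
MEDIA_TYPE_DOMAINS = {"mp3": "Music", "epub": "Books", "mp4": "Video"}

def identify_domain(request):
    """Return a content domain for the request, or None if no signal
    (steps 1202-1210) indicates one."""
    checks = [
        lambda r: r.get("selected_domain"),                  # step 1202: explicit client selection
        lambda r: r.get("context_domain"),                   # step 1204: other client input (e.g., "Books" section)
        lambda r: MEDIA_TYPE_DOMAINS.get(r.get("media_type", "")),  # step 1206: selected media type
        lambda r: r.get("event_history_domain"),             # step 1208: recent request history
        lambda r: r.get("profile_domain"),                   # step 1210: client profile
    ]
    for check in checks:
        domain = check(request)
        if domain:
            return domain  # step 1212: identify the content domain
    return None  # no signal; proceed without an identified domain
```

An explicit selection at step 1202 takes precedence over every later signal, so a request carrying both a selected domain and an MP3 media type resolves to the selected domain.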
  • The following discussion addresses further embodiments of display screens, client devices and systems suitable for use with the asset cataloging, search, and recommendation techniques described herein. As noted above, the following discussion will often be presented in the context of media guidance applications, but it will be understood that these illustrative examples do not limit the range of interactive applications which may be improved by the use of the asset cataloging, search, and recommendation techniques of the present disclosure.
  • With the advent of the Internet, mobile computing, and high-speed wireless networks, users are accessing media on client devices on which they traditionally did not. As referred to herein, the phrase “client device,” “user equipment,” “user device,” “electronic device,” “electronic equipment,” “media equipment device,” or “media device” should be understood to mean any device for accessing the content described above, such as a television, a Smart TV, a set-top box, an integrated receiver decoder (IRD) for handling satellite television, a digital storage device, a digital media receiver (DMR), a digital media adapter (DMA), a streaming media device, a DVD player, a DVD recorder, a connected DVD, a local media server, a BLU-RAY player, a BLU-RAY recorder, a personal computer (PC), a laptop computer, a tablet computer, a WebTV box, a personal computer television (PC/TV), a PC media server, a PC media center, a hand-held computer, a stationary telephone, a personal digital assistant (PDA), a mobile telephone, a portable video player, a portable music player, a portable gaming machine, a smart phone, or any other television equipment, computing equipment, or wireless device, and/or combination of the same. In some embodiments, the client device may have a front facing screen and a rear facing screen, multiple front screens, or multiple angled screens. In some embodiments, the client device may have a front facing camera and/or a rear facing camera. On these client devices, users may be able to navigate among and locate the same content available through a television. Consequently, media guidance may be available on these devices, as well. The guidance provided may be for content available only through a television, for content available only through one or more of other types of client devices, or for content available both through a television and one or more of the other types of client devices. 
The media guidance applications may be provided as on-line applications (i.e., provided on a web-site), or as stand-alone applications or clients on client devices. The various devices and platforms that may implement media guidance applications are described in more detail below.
  • In addition to providing access to linear programming (e.g., content that is scheduled to be transmitted to a plurality of client devices at a predetermined time and is provided according to a schedule), the media guidance application also provides access to non-linear programming (e.g., content that is accessible to a client device at any time and is not provided according to a schedule). Non-linear programming may include content from different content sources including on-demand content (e.g., VOD), Internet content (e.g., streaming media, downloadable media, etc.), locally stored content (e.g., content stored on any client device described above or other storage device), or other time-independent content. On-demand content may include movies or any other content provided by a particular content provider (e.g., HBO On Demand providing “The Sopranos” and “Curb Your Enthusiasm”). HBO ON DEMAND is a service mark owned by Time Warner Company L.P. et al. and THE SOPRANOS and CURB YOUR ENTHUSIASM are trademarks owned by the Home Box Office, Inc. Internet content may include web events, such as a chat session or Webcast, or content available on-demand as streaming content or downloadable content through an Internet web site or other Internet access (e.g., FTP).
  • Grid 102 may provide media guidance data for non-linear programming including on-demand listing 114, recorded content listing 116, and Internet content listing 118. A display combining media guidance data for content from different types of content sources is sometimes referred to as a “mixed-media” display. Various permutations of the types of media guidance data that differ from display 100 may be displayed based on user selection or guidance application definition (e.g., a display of only recorded and broadcast listings, only on-demand and broadcast listings, etc.). As illustrated, listings 114, 116, and 118 are shown as spanning the entire time block displayed in grid 102 to indicate that selection of these listings may provide access to a display dedicated to on-demand listings, recorded listings, or Internet listings, respectively. In some embodiments, listings for these content types may be included directly in grid 102. Additional media guidance data may be displayed in response to the user selecting one of the navigational icons 120. Pressing an arrow key on a user input device may affect the display in a similar manner as selecting navigational icons 120.
  • Advertisement 124 may provide an advertisement for content that, depending on a viewer's access rights (e.g., for subscription programming), is currently available for viewing, will be available for viewing in the future, or may never become available for viewing, and may correspond to or be unrelated to one or more of the content listings in grid 102. Advertisement 124 may also be for products or services related or unrelated to the content displayed in grid 102. Advertisement 124 may be selectable and provide further information about content, provide information about a product or a service, enable purchasing of content, a product, or a service, provide content relating to the advertisement, etc. Advertisement 124 may be targeted based on a user's profile/preferences, monitored user activity, the type of display provided, or on other suitable targeted advertisement bases.
  • While advertisement 124 is shown as rectangular or banner shaped, advertisements may be provided in any suitable size, shape, and location in a guidance application display. For example, advertisement 124 may be provided as a rectangular shape that is horizontally adjacent to grid 102. This is sometimes referred to as a panel advertisement. In addition, advertisements may be overlaid over content or a guidance application display or embedded within a display. Advertisements may also include text, images, rotating images, video clips, or other types of content described above. Advertisements may be stored in a client device having a guidance application, in a database connected to the client, in a remote location (including streaming media servers), or on other storage means, or a combination of these locations. Providing advertisements in a media guidance application is discussed in greater detail in, for example, Knudson et al., U.S. Patent Application Publication No. 2003/0110499, filed Jan. 17, 2003; Ward, III et al. U.S. Pat. No. 6,756,997, issued Jun. 29, 2004; and Schein et al. U.S. Pat. No. 6,388,714, issued May 14, 2002, which are hereby incorporated by reference herein in their entireties. It will be appreciated that advertisements may be included in other media guidance application display screens of the embodiments described herein.
  • In an embodiment, display 200 of FIG. 2 may be augmented by any of the items and features described above for display 100 of FIG. 1. For example, advertisement 205 may take the form of any of the embodiments described above for advertisement 124. The listings in display 200 are of different sizes (i.e., listing 206 is larger than listings 208, 210, and 212), but if desired, all the listings may be the same size. Listings may be of different sizes or graphically accentuated to indicate degrees of interest to the user or to emphasize certain content, as desired by the content provider or based on user preferences. Various systems and methods for graphically accentuating content listings are discussed in, for example, Yates, U.S. Patent Application Publication No. 2010/0153885, filed Dec. 29, 2005, which is hereby incorporated by reference herein in its entirety.
  • As discussed above, the systems and methods of the present disclosure may be implemented in whole or in part by client 300 of FIG. 3B, which includes control circuitry 304. Control circuitry 304 may be based on any suitable processing circuitry such as processing circuitry 306. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores) or supercomputer. In some embodiments, processing circuitry may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor). In some embodiments, control circuitry 304 executes instructions for a media guidance application stored in memory (i.e., storage 308).
  • In client-server based embodiments, control circuitry 304 may include communications circuitry suitable for communicating with a guidance application server or other networks or servers. The instructions for carrying out the above mentioned functionality may be stored on the guidance application server. Communications circuitry may include a cable modem, an integrated services digital network (ISDN) modem, a digital subscriber line (DSL) modem, a telephone modem, Ethernet card, or a wireless modem for communications with other equipment, or any other suitable communications circuitry. Such communications may involve the Internet or any other suitable communications networks or paths (which is described in more detail in connection with FIG. 3A). In addition, communications circuitry may include circuitry that enables peer-to-peer communication of client devices, or communication of client devices in locations remote from each other (described in more detail below). Server-centric and/or peer-to-peer communication may enable the pooling of preferences and behaviors between users, for use with the systems and techniques disclosed herein.
  • Memory may be an electronic storage device provided as storage 308 that is part of control circuitry 304. As referred to herein, the phrase “electronic storage device” or “storage device” should be understood to mean any device for storing electronic data, computer software, or firmware, such as random-access memory, read-only memory, hard drives, optical drives, digital video disc (DVD) recorders, compact disc (CD) recorders, BLU-RAY disc (BD) recorders, BLU-RAY 3D disc recorders, digital video recorders (DVR, sometimes called a personal video recorder, or PVR), solid state devices, quantum storage devices, gaming consoles, gaming media, or any other suitable fixed or removable storage devices, and/or any combination of the same. Storage 308 may be used to store various types of content described herein as well as media guidance information, described above, and guidance application data, described above. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions). Cloud-based storage, described in relation to FIG. 3A, may be used to supplement storage 308 or instead of storage 308.
  • Control circuitry 304 may include video generating circuitry and tuning circuitry, such as one or more analog tuners, one or more MPEG-2 decoders or other digital decoding circuitry, high-definition tuners, or any other suitable tuning or video circuits or combinations of such circuits. Encoding circuitry (e.g., for converting over-the-air, analog, or digital signals to MPEG signals for storage) may also be provided. Control circuitry 304 may also include scaler circuitry for upconverting and downconverting content into the preferred output format of the client device 300. Circuitry 304 may also include digital-to-analog converter circuitry and analog-to-digital converter circuitry for converting between digital and analog signals. The tuning and encoding circuitry may be used by the client device to receive and to display, to play, or to record content. The tuning and encoding circuitry may also be used to receive guidance data. The circuitry described herein, including for example, the tuning, video generating, encoding, decoding, encrypting, decrypting, scaler, and analog/digital circuitry, may be implemented using software running on one or more general purpose or specialized processors. Multiple tuners may be provided to handle simultaneous tuning functions (e.g., watch and record functions, picture-in-picture (PIP) functions, multiple-tuner recording, etc.). If storage 308 is provided as a separate device from client device 300, the tuning and encoding circuitry (including multiple tuners) may be associated with storage 308.
  • The guidance application may be implemented using any suitable architecture. For example, it may be a stand-alone application wholly implemented on client device 300. In such an approach, instructions of the application are stored locally, and data for use by the application is downloaded on a periodic basis (e.g., from an out-of-band feed, from an Internet resource, or using another suitable approach). In some embodiments, the media guidance application is a client-server based application. Data for use by a thick or thin client implemented on client device 300 is retrieved on-demand by issuing requests to a server remote to the client device 300. In one example of a client-server based guidance application, control circuitry 304 runs a web browser that interprets web pages provided by a remote server.
  • In some embodiments, the media guidance application is downloaded and interpreted or otherwise run by an interpreter or virtual machine (run by control circuitry 304). In some embodiments, the guidance application may be encoded in the ETV Binary Interchange Format (EBIF), received by control circuitry 304 as part of a suitable feed, and interpreted by a user agent running on control circuitry 304. For example, the guidance application may be an EBIF application. In some embodiments, the guidance application may be defined by a series of JAVA-based files that are received and run by a local virtual machine or other suitable middleware executed by control circuitry 304. In some of such embodiments (e.g., those employing MPEG-2 or other digital media encoding schemes), the guidance application may be, for example, encoded and transmitted in an MPEG-2 object carousel with the MPEG audio and video packets of a program.
  • User television equipment 352 may include a set-top box, an integrated receiver decoder (IRD) for handling satellite television, a television set, a digital storage device, a DVD recorder, a video-cassette recorder (VCR), a local media server, or other user television equipment. One or more of these devices may be integrated into a single device, if desired. User computer equipment 354 may include a PC, a laptop, a tablet, a WebTV box, a personal computer television (PC/TV), a PC media server, a PC media center, or other user computer equipment. WEBTV is a trademark owned by Microsoft Corp. Wireless user communications device 356 may include a PDA, a mobile telephone, a portable video player, a portable music player, a portable gaming machine, or other wireless devices.
  • A client device utilizing at least some of the system features described above in connection with FIG. 3A may not be classified solely as user television equipment 352, user computer equipment 354, or a wireless user communications device 356. For example, user television equipment 352 may, like some user computer equipment 354, be Internet-enabled allowing for access to Internet content, while user computer equipment 354 may, like some user television equipment 352, include a tuner allowing for access to television programming. The media guidance application may have the same layout on the various types of client devices or may be tailored to the display capabilities of the client device. For example, on user computer equipment 354, the guidance application may be provided as a web site accessed by a web browser. In another example, the guidance application may be scaled down for wireless user communications devices 356.
  • In system 350, there is typically more than one of each type of client device but only one of each is shown in FIG. 3A to avoid overcomplicating the drawing. In addition, each user may utilize more than one type of client device and also more than one of each type of client device.
  • In some embodiments, a client device (e.g., user television equipment 352, user computer equipment 354, wireless user communications device 356) may be referred to as a “second screen device.” For example, a second screen device may supplement content presented on a first client device. The content presented on the second screen device may be any suitable content that supplements the content presented on the first device. In some embodiments, the second screen device provides an interface for adjusting settings and display preferences of the first device. In some embodiments, the second screen device is configured for interacting with other second screen devices or for interacting with a social network. The second screen device can be located in the same room as the first device, a different room from the first device but in the same house or building, or in a different building from the first device.
  • The user may also set various settings to maintain consistent media guidance application settings across in-home devices and remote devices. Settings include those described herein, as well as channel and program favorites, programming preferences that the guidance application utilizes to make programming recommendations, display preferences, and other desirable guidance settings. For example, if a user sets a channel as a favorite on, for example, the web site www.allrovi.com on their personal computer at their office, the same channel would appear as a favorite on the user's in-home devices (e.g., user television equipment and user computer equipment) as well as the user's mobile devices, if desired. Therefore, changes made on one client device can change the guidance experience on another client device, regardless of whether they are the same or a different type of client device. In addition, the changes made may be based on settings input by a user, as well as user activity monitored by the guidance application.
  • Although communications paths are not drawn between client devices, these devices may communicate directly with each other via communication paths, such as those described above in connection with paths 358, 360, and 362, as well as other short-range point-to-point communication paths, such as USB cables, IEEE 1394 cables, wireless paths (e.g., Bluetooth, infrared, IEEE 802.11x, etc.), or other short-range communication via wired or wireless paths. BLUETOOTH is a certification mark owned by Bluetooth SIG, Inc. The client devices may also communicate with each other indirectly via communications network 364.
  • System 350 includes content source 366 and media guidance data source 368 coupled to communications network 364 via communication paths 370 and 372, respectively. Paths 370 and 372 may include any of the communication paths described above in connection with paths 358, 360, and 362. Communications with the content source 366 and media guidance data source 368 may be exchanged over one or more communications paths, but are shown as a single path in FIG. 3A to avoid overcomplicating the drawing. In addition, there may be more than one of each of content source 366 and media guidance data source 368, but only one of each is shown in FIG. 3A to avoid overcomplicating the drawing. (The different types of each of these sources are discussed below.) If desired, content source 366 and media guidance data source 368 may be integrated as one source device. Although communications between sources 366 and 368 and client devices 352, 354, and 356 are shown as through communications network 364, in some embodiments, sources 366 and 368 may communicate directly with client devices 352, 354, and 356 via communication paths (not shown) such as those described above in connection with paths 358, 360, and 362.
  • Content source 366 may include one or more types of content distribution equipment including a television distribution facility, cable system headend, satellite distribution facility, programming sources (e.g., television broadcasters, such as NBC, ABC, HBO, etc.), intermediate distribution facilities and/or servers, Internet providers, on-demand media servers, and other content providers. NBC is a trademark owned by the National Broadcasting Company, Inc., ABC is a trademark owned by ABC, Inc., and HBO is a trademark owned by Home Box Office, Inc. Content source 366 may be the originator of content (e.g., a television broadcaster, a Webcast provider, etc.) or may not be the originator of content (e.g., an on-demand content provider, an Internet provider of content of broadcast programs for downloading, etc.). Content source 366 may include cable sources, satellite providers, on-demand providers, Internet providers, over-the-top content providers, or other providers of content. Content source 366 may also include a remote media server used to store different types of content (including video content selected by a user), in a location remote from any of the client devices. Systems and methods for remote storage of content, and providing remotely stored content to client devices are discussed in greater detail in connection with Ellis et al., U.S. Pat. No. 7,761,892, issued Jul. 20, 2010, which is hereby incorporated by reference herein in its entirety.
  • Media guidance data source 368 may provide media guidance data, such as the media guidance data described above. Media guidance application data may be provided to the client devices using any suitable approach. In some embodiments, the guidance application may be a stand-alone interactive television program guide that receives program guide data via a data feed (e.g., a continuous feed or trickle feed). Program schedule data and other guidance data may be provided to the client device on a television channel sideband, using an in-band digital signal, using an out-of-band digital signal, or by any other suitable data transmission technique. Program schedule data and other media guidance data may be provided to client devices on multiple analog or digital television channels.
  • In some embodiments, guidance data from media guidance data source 368 may be provided to users' equipment using a client-server approach. For example, a client device may pull media guidance data from a server, or a server may push media guidance data to a client device. In some embodiments, a guidance application client residing on the user's equipment may initiate sessions with source 368 to obtain guidance data when needed, e.g., when the guidance data is out of date or when the client device receives a request from the user to receive data. Media guidance data may be provided to the client device with any suitable frequency (e.g., continuously, daily, a user-specified period of time, a system-specified period of time, in response to a request from a client device, etc.). Media guidance data source 368 may provide, to user equipment devices 352, 354, and 356, the media guidance application itself or software updates for the media guidance application.
  • Media guidance applications may be, for example, stand-alone applications implemented on client devices. In some embodiments, media guidance applications may be client-server applications where only the client resides on the client device. For example, media guidance applications may be implemented partially as a client application on control circuitry 304 of client device 300 and partially on a remote server as a server application (e.g., media guidance data source 368). The guidance application displays may be generated by the media guidance data source 368 and transmitted to the client devices. The media guidance data source 368 may also transmit data for storage on the client, which then generates the guidance application displays based on instructions processed by control circuitry.
  • Content and/or media guidance data delivered to client devices 374, such as user equipment devices 352, 354, and 356 may be over-the-top (OTT) content. OTT content delivery allows Internet-enabled user devices, including any client device described above, to receive content that is transferred over the Internet, including any content described above. OTT content is delivered via an Internet connection provided by an Internet service provider (ISP), but a third party distributes the content. The ISP may not be responsible for the viewing abilities, copyrights, or redistribution of the content, and may only transfer IP packets provided by the OTT content provider. Examples of OTT content providers include YOUTUBE, NETFLIX, and HULU, which provide audio and video via IP packets. OTT content providers may additionally or alternatively provide media guidance data described above. In addition to content and/or media guidance data, providers of OTT content can distribute media guidance applications (e.g., web-based applications or cloud-based applications), or the content can be displayed by media guidance applications stored on the client device.
  • Media guidance data source 368 may make asset cataloging or recommendation applications available to users. Such applications may be downloaded from media guidance data source 368 to a client device, or may be accessed remotely by a user. These applications, as well as other applications, features and tools, may be provided to users on a subscription basis or may be selectively downloaded or used for an additional fee. In an embodiment, media guidance data source 368 may serve as a repository for media asset data developed by users and/or third-parties, and as a distribution source for this data and related applications.
  • Media guidance system 350 is intended to illustrate a number of approaches, or network configurations, by which client devices and sources of content and guidance data may communicate with each other for the purpose of accessing content and providing media guidance. The embodiments described herein may be applied in any one or a subset of these approaches, or in a system employing other approaches for delivering content and providing media guidance. The following four approaches provide specific illustrations of the generalized example of FIG. 3A.
  • In one approach, client devices may communicate with each other within a home network. Client devices can communicate with each other directly via short-range point-to-point communication schemes described above, via indirect paths through a hub or other similar device provided on a home network, or via communications network 364. Each of the multiple individuals in a single home may operate different client devices on the home network. As a result, it may be desirable for various media guidance information or settings to be communicated between the different client devices. For example, it may be desirable for users to maintain consistent media guidance application settings on different client devices within a home network, as described in greater detail in Ellis et al., U.S. patent application Ser. No. 11/179,360, filed Jul. 11, 2005. Different types of client devices in a home network may also communicate with each other to transmit content. For example, a user may transmit content from user computer equipment to a portable video player or portable music player.
  • In a second approach, users may have multiple types of client devices by which they access content and obtain media guidance. For example, some users may have home networks that are accessed by in-home and mobile devices. Users may control in-home devices via a media guidance application implemented on a remote device. For example, users may access an online media guidance application on a website via a personal computer at their offices, or a mobile device such as a PDA or web-enabled mobile telephone. The user may set various settings (e.g., recordings, reminders, or other settings) on the online guidance application to control the user's in-home equipment. The online guide may control the user's equipment directly, or by communicating with a media guidance application on the user's in-home equipment. Various systems and methods for client devices communicating, where the client devices are in locations remote from each other, are discussed in, for example, Ellis et al., U.S. Pat. No. 8,046,801, issued Oct. 25, 2011, which is hereby incorporated by reference herein in its entirety.
  • In a third approach, users of client devices inside and outside a home can use their media guidance application to communicate directly with content source 366 to access content. Specifically, within a home, users of user television equipment 352 and user computer equipment 354 may access the media guidance application to navigate among and locate desirable content. Users may also access the media guidance application outside of the home using wireless user communications devices 356 to navigate among and locate desirable content.
  • In a fourth approach, client devices may operate in a cloud computing environment to access cloud services. In a cloud computing environment, various types of computing services for content sharing, storage or distribution (e.g., video sharing sites or social networking sites) are provided by a collection of network-accessible computing and storage resources, referred to as “the cloud.” For example, the cloud can include a collection of server computing devices, which may be located centrally or at distributed locations, that provide cloud-based services to various types of users and devices connected via a network such as the Internet via communications network 364. These cloud resources may include one or more content sources 366 and one or more media guidance data sources 368. In addition or in the alternative, the remote computing sites may include other client devices, such as user television equipment 352, user computer equipment 354, and wireless user communications device 356. For example, the other client devices may provide access to a stored copy of a video or a streamed video. In such embodiments, client devices may operate in a peer-to-peer manner without communicating with a central server.
  • The cloud provides access to services, such as content storage, content sharing, or social networking services, among other examples, as well as access to any content described above, for client devices. Services can be provided in the cloud through cloud computing service providers, or through other providers of online services. For example, the cloud-based services can include a content storage service, a content sharing site, a social networking site, or other services via which user-sourced content is distributed for viewing by others on connected devices. These cloud-based services may allow a client device to store content to the cloud and to receive content from the cloud rather than storing content locally and accessing locally-stored content.
  • A user may use various content capture devices, such as camcorders, digital cameras with video mode, audio recorders, mobile phones, and handheld computing devices, to record content. The user can upload content to a content storage service on the cloud directly, for example, from user computer equipment 354 or a wireless user communications device 356 having a content capture feature. Alternatively, the user can first transfer the content to a client device, such as user computer equipment 354. The client device storing the content uploads the content to the cloud using a data transmission service on communications network 364. In some embodiments, the client device itself is a cloud resource, and other client devices can access the content directly from the client device on which the user stored the content.
  • Cloud resources may be accessed by a client device using, for example, a web browser, a media guidance application, a desktop application, a mobile application, or any combination thereof. The client device may be a cloud client that relies on cloud computing for application delivery, or the client device may have some functionality without access to cloud resources. For example, some applications running on the client device may be cloud applications, i.e., applications delivered as a service over the Internet, while other applications may be stored and run on the client device. In some embodiments, a client device may receive content from multiple cloud resources simultaneously. For example, a client device can stream audio from one cloud resource while downloading content from a second cloud resource. Or, a client device can download content from multiple cloud resources for more efficient downloading. In some embodiments, client devices can use cloud resources for processing operations such as the processing operations performed by processing circuitry described in relation to FIGS. 3A and 3B.
  • It is to be understood that while the invention has been described in conjunction with the various illustrative embodiments, the foregoing description is intended to illustrate and not limit the scope of the invention. While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems, components, and methods may be embodied in many other specific forms without departing from the scope of the present disclosure.
  • This disclosure is not intended to be limited to the details given herein; the features described may be implemented in sub-combinations with one or more other features described herein. For example, a variety of systems and methods may be implemented based on the disclosure and still fall within the scope of the invention. Also, the various features described or illustrated above may be combined or integrated in other systems, or certain features may be omitted or not implemented.
  • Examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the scope of the information disclosed herein. Certain particular aspects, advantages, and modifications are within the scope of the following claims. All references cited herein are incorporated by reference in their entirety and made part of this application.

Claims (33)

1. A method of searching metadata based on seed information, comprising:
receiving, at a first processor from a second processor, seed information;
accessing, with the first processor, a database of tokens, each token associated with a content domain and each content domain associated with two or more tokens;
determining, with the first processor, whether a portion of the seed information matches a token in the database of tokens;
in response to determining that a portion of the seed information matches a token:
identifying, with the first processor, the content domain associated with the matching token;
accessing, with the first processor, a database of data records for content elements within the identified content domain, each record including metadata descriptive of a content element;
determining, with the first processor, that the seed information, including the matching token, corresponds to metadata of a first data record in the database of data records; and
transmitting, from the first processor to the second processor, an identifier of the first data record.
2. The method of claim 1, further comprising:
in response to determining that the seed information, including the matching token, corresponds to metadata of a first data record:
storing, with the first processor in a memory device, an association between at least some of the seed information and the first data record.
3. The method of claim 2, wherein the seed information is metadata of a second data record stored in a database accessible by the second processor, and storing an association between at least some of the seed information and the second record comprises storing an association between the first and second data records.
4. The method of claim 3, wherein the first and second records represent a common content element, and storing an association between the first and second data records comprises storing the metadata of the first data record and the metadata of the second data record as metadata of a common data record for the common content element.
5. The method of claim 3, wherein storing an association between the first and second data records comprises storing the metadata of the second data record as metadata of the first data record in the database of records.
6. The method of claim 2, wherein the second processor is included in a client device, and storing an association between at least some of the seed information and the second record comprises displaying some of the metadata of the second record at the client device.
7. The method of claim 1, wherein each token in the database of tokens is also associated with an operative expression.
8. The method of claim 7, wherein determining that the seed information, including the matching token, corresponds to metadata of a first data record comprises:
generating, with the first processor, a query based on at least some of the seed information and the operative expression associated with the token; and
with the first processor, searching the database of data records using the query.
9. The method of claim 1, wherein the content domain is music and the token comprises at least one of: feat, with, and duet.
10. The method of claim 1, wherein the seed information comprises a text string.
11. The method of claim 1, wherein the seed information comprises at least one of an image and a sound.
12. A system for searching metadata based on seed information, comprising:
a memory device configured to store a database of tokens, each token associated with a content domain and each content domain associated with two or more tokens; and
a processor configured to:
receive seed information from a second processor;
access the database of tokens;
determine whether a portion of the seed information matches a token in the database of tokens; and
in response to determining that a portion of the seed information matches a token:
identify the content domain associated with the matching token;
access a database of data records for content elements within the identified content domain, each record including metadata descriptive of a content element;
determine that the seed information, including the matching token, corresponds to metadata of a first data record in the database of data records; and
transmit, to the second processor, an identifier of the first data record.
13. The system of claim 12, the processor further configured to:
in response to determining that the seed information, including the matching token, corresponds to metadata of a first data record:
store, in a memory device, an association between at least some of the seed information and the first data record.
14. The system of claim 13, wherein the seed information is metadata of a second data record stored in a database accessible by the second processor, and storing an association between at least some of the seed information and the second record comprises storing an association between the first and second data records.
15. The system of claim 14, wherein the first and second records represent a common content element, and storing an association between the first and second data records comprises storing the metadata of the first data record and the metadata of the second data record as metadata of a common data record for the common content element.
16. The system of claim 14, wherein storing an association between the first and second data records comprises storing the metadata of the second data record as metadata of the first data record in the database of records.
17. The system of claim 13, wherein the second processor is included in a client device, and storing an association between at least some of the seed information and the second record comprises displaying some of the metadata of the second record at the client device.
18. The system of claim 12, wherein each token in the database of tokens is also associated with an operative expression.
19. The system of claim 18, wherein determining that the seed information, including the matching token, corresponds to metadata of a first data record comprises:
generating, with the processor, a query based on at least some of the seed information and the operative expression associated with the token; and
with the processor, searching the database of data records using the query.
20. The system of claim 12, wherein the content domain is music and the token comprises at least one of: feat, with, and duet.
21. The system of claim 12, wherein the seed information comprises a text string.
22. The system of claim 12, wherein the seed information comprises at least one of an image and a sound.
23. A system for searching metadata based on seed information, comprising:
means for receiving seed information from a second processor;
means for accessing a database of tokens stored in a memory device, each token associated with a content domain and each content domain associated with two or more tokens;
means for determining whether a portion of the seed information matches a token in the database of tokens;
means for identifying, in response to determining that a portion of the seed information matches a token, the content domain associated with the matching token;
means for accessing a database of data records for content elements within the identified content domain, each record including metadata descriptive of a content element;
means for determining that the seed information, including the matching token, corresponds to metadata of a first data record in the database of data records; and
means for transmitting, to the second processor, an identifier of the first data record.
24. The system of claim 23, further comprising:
means for storing an association between at least some of the seed information and the first data record, in response to determining that the seed information, including the matching token, corresponds to metadata of a first data record.
25. The system of claim 24, wherein the seed information is metadata of a second data record stored in a database accessible by the second processor, and storing an association between at least some of the seed information and the second record comprises storing an association between the first and second data records.
26. The system of claim 25, wherein the first and second records represent a common content element, and storing an association between the first and second data records comprises storing the metadata of the first data record and the metadata of the second data record as metadata of a common data record for the common content element.
27. The system of claim 25, wherein storing an association between the first and second data records comprises storing the metadata of the second data record as metadata of the first data record in the database of records.
28. The system of claim 24, wherein the second processor is included in a client device, and storing an association between at least some of the seed information and the second record comprises displaying some of the metadata of the second record at the client device.
29. The system of claim 23, wherein each token in the database of tokens is also associated with an operative expression.
30. The system of claim 29, wherein the means for determining that the seed information, including the matching token, corresponds to metadata of a first data record in the database of data records comprises:
means for generating a query based on at least some of the seed information and the operative expression associated with the token, in response to determining that a portion of the seed information matches a token; and
means for searching a database of data records using the query.
31. The system of claim 23, wherein the content domain is music and the token comprises at least one of: feat, with, and duet.
32. The system of claim 23, wherein the seed information comprises a text string.
33. The system of claim 23, wherein the seed information comprises at least one of an image and a sound.
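The method recited in claims 1 and 7-9 can be illustrated with a minimal sketch: match a domain-specific token (e.g., “feat”) in the seed information, identify the associated content domain, and search that domain's records using a query built from the token's operative expression. The patent does not prescribe any particular implementation; the token database, record metadata, and matching logic below are purely hypothetical.

```python
# Hypothetical sketch of the claimed domain-specific tokenization flow.
# All names and data are illustrative, not taken from the patent.
import re

# Token database (claims 1 and 7): each token is associated with a content
# domain and an "operative expression" describing how to split seed text.
TOKEN_DB = {
    "feat": {"domain": "music", "split": r"\bfeat\b\.?"},
    "with": {"domain": "music", "split": r"\bwith\b\.?"},
    "duet": {"domain": "music", "split": r"\bduet\b\.?"},
}

# Per-domain record databases: record identifier -> descriptive metadata.
RECORDS = {
    "music": {
        "rec-001": {"artists": ["Artist A", "Artist B"], "title": "Song X"},
        "rec-002": {"artists": ["Artist C"], "title": "Song Y"},
    }
}

def search_by_seed(seed: str):
    """Return the identifier of the first record whose metadata matches the
    seed information, or None if no token or record matches."""
    for token, info in TOKEN_DB.items():
        if re.search(info["split"], seed, flags=re.IGNORECASE):
            # Token matched: identify the content domain (claim 1) and build
            # a query from the operative expression (claim 8) by splitting
            # the seed around the token.
            parts = [p.strip(" .-") for p in
                     re.split(info["split"], seed, flags=re.IGNORECASE)]
            for rec_id, meta in RECORDS[info["domain"]].items():
                # Illustrative query: every split-out part of the seed must
                # appear among the record's artist metadata.
                if all(any(p.lower() in a.lower() for a in meta["artists"])
                       for p in parts if p):
                    return rec_id
    return None

print(search_by_seed("Artist A feat. Artist B"))  # rec-001
```

Here the token “feat” both selects the music domain and determines how the seed string is decomposed into artist names, which is the sense in which tokenization is “domain-specific” in the claims.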
US13404498 2011-06-13 2012-02-24 Systems and methods for domain-specific tokenization Abandoned US20120317136A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US201161496463 true 2011-06-13 2011-06-13
US13404498 US20120317136A1 (en) 2011-06-13 2012-02-24 Systems and methods for domain-specific tokenization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13404498 US20120317136A1 (en) 2011-06-13 2012-02-24 Systems and methods for domain-specific tokenization

Publications (1)

Publication Number Publication Date
US20120317136A1 true true US20120317136A1 (en) 2012-12-13

Family

ID=46000311

Family Applications (3)

Application Number Title Priority Date Filing Date
US13404294 Active 2032-03-16 US9235574B2 (en) 2011-06-13 2012-02-24 Systems and methods for providing media recommendations
US13404498 Abandoned US20120317136A1 (en) 2011-06-13 2012-02-24 Systems and methods for domain-specific tokenization
US13404574 Abandoned US20120317085A1 (en) 2011-06-13 2012-02-24 Systems and methods for transmitting content metadata from multiple data records

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13404294 Active 2032-03-16 US9235574B2 (en) 2011-06-13 2012-02-24 Systems and methods for providing media recommendations

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13404574 Abandoned US20120317085A1 (en) 2011-06-13 2012-02-24 Systems and methods for transmitting content metadata from multiple data records

Country Status (2)

Country Link
US (3) US9235574B2 (en)
WO (2) WO2012173670A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140067369A1 (en) * 2012-08-30 2014-03-06 Xerox Corporation Methods and systems for acquiring user related information using natural language processing techniques
US20140201224A1 (en) * 2013-01-15 2014-07-17 International Business Machines Corporation Find regular expression instruction on substring of larger string
WO2014145884A2 (en) * 2013-03-15 2014-09-18 Locus Analytics, Llc Syntactic tagging in a domain-specific context
US20150135070A1 (en) * 2013-11-11 2015-05-14 Samsung Electronics Co., Ltd. Display apparatus, server apparatus and user interface screen providing method thereof
US9104878B1 (en) * 2013-12-11 2015-08-11 Appercut Security Ltd. Automated source code scanner for backdoors and other pre-defined patterns
US9219736B1 (en) * 2013-12-20 2015-12-22 Google Inc. Application programming interface for rendering personalized related content to third party applications
US9235574B2 (en) 2011-06-13 2016-01-12 Rovi Guides, Inc. Systems and methods for providing media recommendations
US9245299B2 (en) 2013-03-15 2016-01-26 Locus Lp Segmentation and stratification of composite portfolios of investment securities
US20160125076A1 (en) * 2014-10-30 2016-05-05 Hyundai Motor Company Music recommendation system for vehicle and method thereof
US9374411B1 (en) * 2013-03-21 2016-06-21 Amazon Technologies, Inc. Content recommendations using deep data
US9898447B2 (en) 2015-06-22 2018-02-20 International Business Machines Corporation Domain specific representation of document text for accelerated natural language processing

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7434219B2 (en) 2000-01-31 2008-10-07 Commvault Systems, Inc. Storage of application specific profiles correlating to document versions
EP1442387A4 (en) 2001-09-28 2008-01-23 Commvault Systems Inc System and method for archiving objects in an information store
US9552141B2 (en) 2004-06-21 2017-01-24 Apple Inc. Methods and apparatuses for operating a data processing system
US8352954B2 (en) 2008-06-19 2013-01-08 Commvault Systems, Inc. Data storage resource allocation by employing dynamic methods and blacklisting resource request pools
US9128883B2 (en) 2008-06-19 2015-09-08 Commvault Systems, Inc Data storage resource allocation by performing abbreviated resource checks based on relative chances of failure of the data storage resources to determine whether data storage requests would fail
US8849762B2 (en) * 2011-03-31 2014-09-30 Commvault Systems, Inc. Restoring computing environments, such as autorecovery of file systems at certain points in time
US20130246385A1 (en) * 2012-03-13 2013-09-19 Microsoft Corporation Experience recommendation system based on explicit user preference
US10157184B2 (en) 2012-03-30 2018-12-18 Commvault Systems, Inc. Data previewing before recalling large data files
JP2013257696A (en) * 2012-06-12 2013-12-26 Sony Corp Information processing apparatus and method and program
US9558278B2 (en) 2012-09-11 2017-01-31 Apple Inc. Integrated content recommendation
US9397844B2 (en) 2012-09-11 2016-07-19 Apple Inc. Automated graphical user-interface layout
US9218118B2 (en) 2012-09-11 2015-12-22 Apple Inc. Media player playlist management
US9563627B1 (en) * 2012-09-12 2017-02-07 Google Inc. Contextual determination of related media content
US9852239B2 (en) * 2012-09-24 2017-12-26 Adobe Systems Incorporated Method and apparatus for prediction of community reaction to a post
US8887197B2 (en) * 2012-11-29 2014-11-11 At&T Intellectual Property I, Lp Method and apparatus for managing advertisements using social media data
US9633216B2 (en) 2012-12-27 2017-04-25 Commvault Systems, Inc. Application of information management policies based on operation with a geographic entity
US10129596B2 (en) * 2013-01-21 2018-11-13 Netflix, Inc. Adaptive row selection
CN103092466B (en) * 2013-01-22 2016-03-30 小米科技有限责任公司 A method and apparatus for operating a mobile terminal
CN103106031B (en) * 2013-01-22 2016-03-23 小米科技有限责任公司 A method and apparatus for operating a mobile terminal
US8990205B2 (en) * 2013-01-28 2015-03-24 International Business Machines Corporation Data caveats for database tables
US9678960B2 (en) * 2013-01-31 2017-06-13 Disney Enterprises, Inc. Methods and systems of dynamic content analysis
US20150012562A1 (en) * 2013-02-04 2015-01-08 Zola Books Inc. Literary Recommendation Engine
US9256269B2 (en) * 2013-02-20 2016-02-09 Sony Computer Entertainment Inc. Speech recognition system for performing analysis to a non-tactile inputs and generating confidence scores and based on the confidence scores transitioning the system from a first power state to a second power state
US9459968B2 (en) 2013-03-11 2016-10-04 Commvault Systems, Inc. Single index to query multiple backup formats
US20140278920A1 (en) * 2013-03-12 2014-09-18 Dan Holden Advertisement feedback and customization
US9485329B1 (en) * 2013-07-17 2016-11-01 Google Inc. Action-defined conditions for selecting curated content
US9313258B2 (en) 2013-08-02 2016-04-12 Nagravision S.A. System and method to manage switching between devices
US20150067505A1 (en) * 2013-08-28 2015-03-05 Yahoo! Inc. System And Methods For User Curated Media
US9326026B2 (en) 2013-10-31 2016-04-26 At&T Intellectual Property I, Lp Method and apparatus for content distribution over a network
GB2520949A (en) 2013-12-04 2015-06-10 Ibm Trustworthiness of processed data
US10169121B2 (en) 2014-02-27 2019-01-01 Commvault Systems, Inc. Work flow management for an information management system
US9648100B2 (en) 2014-03-05 2017-05-09 Commvault Systems, Inc. Cross-system storage management for transferring data across autonomous information management systems
US9823978B2 (en) 2014-04-16 2017-11-21 Commvault Systems, Inc. User-level quota management of data objects stored in information management systems
US9740574B2 (en) 2014-05-09 2017-08-22 Commvault Systems, Inc. Load balancing across multiple data paths
US10162888B2 (en) 2014-06-23 2018-12-25 Sony Interactive Entertainment LLC System and method for audio identification
US9792372B2 (en) * 2014-07-11 2017-10-17 Yahoo Holdings, Inc. Using exogenous sources for personalization of website services
US9444811B2 (en) 2014-10-21 2016-09-13 Commvault Systems, Inc. Using an enhanced data agent to restore backed up data across autonomous storage management systems
US20160132954A1 (en) * 2014-11-11 2016-05-12 Christian Guckelsberger Recommender System Employing Subjective Properties
FR3030177B1 (en) * 2014-12-16 2016-12-30 Stmicroelectronics (Rousset) Sas Electronic device comprising an alarm module of a separate electronic device of a processor core
US20160371270A1 (en) * 2015-06-16 2016-12-22 Salesforce.Com, Inc. Processing a file to generate a recommendation using a database system
US9766825B2 (en) 2015-07-22 2017-09-19 Commvault Systems, Inc. Browse and restore for block-level backups

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020107973A1 (en) * 2000-11-13 2002-08-08 Lennon Alison Joan Metadata processes for multimedia database access
US20070100908A1 (en) * 2005-11-01 2007-05-03 Neeraj Jain Method and apparatus for tracking history information of a group session
US20080208844A1 (en) * 2007-02-27 2008-08-28 Jenkins Michael D Entertainment platform with layered advanced search and profiling technology

Family Cites Families (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4996642A (en) * 1987-10-01 1991-02-26 Neonics, Inc. System and method for recommending items
US4870579A (en) * 1987-10-01 1989-09-26 Neonics, Inc. System and method of predicting subjective reactions
US6239794B1 (en) 1994-08-31 2001-05-29 E Guide, Inc. Method and system for simultaneously displaying a television program and information about the program
US6388714B1 (en) 1995-10-02 2002-05-14 Starsight Telecast Inc Interactive computer system for providing television schedule information
US6177931B1 (en) 1996-12-19 2001-01-23 Index Systems, Inc. Systems and methods for displaying and recording control interface with television programs, video, advertising information and program scheduling information
US6564378B1 (en) 1997-12-08 2003-05-13 United Video Properties, Inc. Program guide system with browsing display
CN1867068A (en) 1998-07-14 2006-11-22 联合视频制品公司 Client-server based interactive television program guide system with remote server recording
US7165098B1 (en) 1998-11-10 2007-01-16 United Video Properties, Inc. On-line schedule system with personalization features
US8043224B2 (en) * 2000-07-12 2011-10-25 Dimicine Research It, Llc Telemedicine system
EP1349080A1 (en) 2002-03-26 2003-10-01 Deutsche Thomson-Brandt Gmbh Methods and apparatus for using metadata from different sources
US20030225777A1 (en) * 2002-05-31 2003-12-04 Marsh David J. Scoring and recommending media content based on user preferences
WO2005072405A3 (en) * 2004-01-27 2007-03-01 Gary Robinson Enabling recommendations and community by massively-distributed nearest-neighbor searching
US7487072B2 (en) * 2004-08-04 2009-02-03 International Business Machines Corporation Method and system for querying multimedia data where adjusting the conversion of the current portion of the multimedia data signal based on the comparing at least one set of confidence values to the threshold
JP2006260420A (en) * 2005-03-18 2006-09-28 Fujitsu Ltd Web site analysis system
US20100153885A1 (en) 2005-12-29 2010-06-17 Rovi Technologies Corporation Systems and methods for interacting with advanced displays provided by an interactive media guidance application
US7966324B2 (en) * 2006-05-30 2011-06-21 Microsoft Corporation Personalizing a search results page based on search history
US8086758B1 (en) 2006-11-27 2011-12-27 Disney Enterprises, Inc. Systems and methods for interconnecting media applications and services with centralized services
US7953736B2 (en) * 2007-01-04 2011-05-31 Intersect Ptp, Inc. Relevancy rating of tags
US8417713B1 (en) * 2007-12-05 2013-04-09 Google Inc. Sentiment detection as a ranking signal for reviewable entities
US8010539B2 (en) * 2008-01-25 2011-08-30 Google Inc. Phrase based snippet generation
US20090234727A1 (en) * 2008-03-12 2009-09-17 William Petty System and method for determining relevance ratings for keywords and matching users with content, advertising, and other users based on keyword ratings
US8051099B2 (en) * 2008-05-08 2011-11-01 International Business Machines Corporation Energy efficient data provisioning
JP4678546B2 (en) * 2008-09-08 2011-04-27 ソニー株式会社 Recommendation apparatus and method, program, and recording medium
JP4650541B2 (en) * 2008-09-08 2011-03-16 ソニー株式会社 Recommendation apparatus and method, program, and recording medium
US8316396B2 (en) * 2009-05-13 2012-11-20 Tivo Inc. Correlation of media metadata gathered from diverse sources
US8861935B2 (en) * 2009-08-26 2014-10-14 Verizon Patent And Licensing Inc. Systems and methods for enhancing utilization of recorded media content programs
US8224818B2 (en) * 2010-01-22 2012-07-17 National Cheng Kung University Music recommendation method and computer readable recording medium storing computer program performing the method
US20120311633A1 (en) * 2010-02-19 2012-12-06 Ishan Mandrekar Automatic clip generation on set top box
US8527648B2 (en) * 2010-10-18 2013-09-03 At&T Intellectual Property I, L.P. Systems, methods, and computer program products for optimizing content distribution in data networks
US8949211B2 (en) * 2011-01-31 2015-02-03 Hewlett-Packard Development Company, L.P. Objective-function based sentiment
US8650023B2 (en) * 2011-03-21 2014-02-11 Xerox Corporation Customer review authoring assistant
US9235574B2 (en) * 2011-06-13 2016-01-12 Rovi Guides, Inc. Systems and methods for providing media recommendations
US20130018892A1 (en) * 2011-07-12 2013-01-17 Castellanos Maria G Visually Representing How a Sentiment Score is Computed
US8600796B1 (en) * 2012-01-30 2013-12-03 Bazaarvoice, Inc. System, method and computer program product for identifying products associated with polarized sentiments

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9235574B2 (en) 2011-06-13 2016-01-12 Rovi Guides, Inc. Systems and methods for providing media recommendations
US9396179B2 (en) * 2012-08-30 2016-07-19 Xerox Corporation Methods and systems for acquiring user related information using natural language processing techniques
US20140067369A1 (en) * 2012-08-30 2014-03-06 Xerox Corporation Methods and systems for acquiring user related information using natural language processing techniques
US9047343B2 (en) * 2013-01-15 2015-06-02 International Business Machines Corporation Find regular expression instruction on substring of larger string
US20140201224A1 (en) * 2013-01-15 2014-07-17 International Business Machines Corporation Find regular expression instruction on substring of larger string
US9361358B2 (en) 2013-03-15 2016-06-07 Locus Lp Syntactic loci and fields in a functional information system
WO2014145884A3 (en) * 2013-03-15 2014-10-30 Locus Analytics, Llc Syntactic tagging in a domain-specific context
US9471664B2 (en) 2013-03-15 2016-10-18 Locus Lp Syntactic tagging in a domain-specific context
WO2014145884A2 (en) * 2013-03-15 2014-09-18 Locus Analytics, Llc Syntactic tagging in a domain-specific context
US9245299B2 (en) 2013-03-15 2016-01-26 Locus Lp Segmentation and stratification of composite portfolios of investment securities
US9646075B2 (en) 2013-03-15 2017-05-09 Locus Lp Segmentation and stratification of data entities in a database system
US9910910B2 (en) 2013-03-15 2018-03-06 Locus Lp Syntactic graph modeling in a functional information system
US9374411B1 (en) * 2013-03-21 2016-06-21 Amazon Technologies, Inc. Content recommendations using deep data
US20150135070A1 (en) * 2013-11-11 2015-05-14 Samsung Electronics Co., Ltd. Display apparatus, server apparatus and user interface screen providing method thereof
US9104878B1 (en) * 2013-12-11 2015-08-11 Appercut Security Ltd. Automated source code scanner for backdoors and other pre-defined patterns
US9219736B1 (en) * 2013-12-20 2015-12-22 Google Inc. Application programming interface for rendering personalized related content to third party applications
US9705999B1 (en) 2013-12-20 2017-07-11 Google Inc. Application programming interface for rendering personalized related content to third party applications
US20160125076A1 (en) * 2014-10-30 2016-05-05 Hyundai Motor Company Music recommendation system for vehicle and method thereof
US9898447B2 (en) 2015-06-22 2018-02-20 International Business Machines Corporation Domain specific representation of document text for accelerated natural language processing
US10133713B2 (en) 2015-06-22 2018-11-20 International Business Machines Corporation Domain specific representation of document text for accelerated natural language processing

Also Published As

Publication number Publication date Type
US20120317085A1 (en) 2012-12-13 application
WO2012173672A1 (en) 2012-12-20 application
US9235574B2 (en) 2016-01-12 grant
US20120317123A1 (en) 2012-12-13 application
WO2012173670A1 (en) 2012-12-20 application

Similar Documents

Publication Publication Date Title
US20070214480A1 (en) Method and apparatus for conducting media content search and management by integrating EPG and internet search systems
US20110289419A1 (en) Browser integration for a content system
US20110078717A1 (en) System for notifying a community of interested users about programs or segments
US20110078729A1 (en) Systems and methods for identifying audio content using an interactive media guidance application
US20120117057A1 (en) Searching recorded or viewed content
US8832742B2 (en) Systems and methods for acquiring, categorizing and delivering media in interactive media guidance applications
US7966638B2 (en) Interactive media display across devices
US20110289534A1 (en) User interface for content browsing and selection in a movie portal of a content system
US20140088952A1 (en) Systems and methods for automatic program recommendations based on user interactions
US20130179925A1 (en) Systems and methods for navigating through related content based on a profile associated with a user
US20130086159A1 (en) Media content recommendations based on social network relationship
US20100114857A1 (en) User interface with available multimedia content from multiple multimedia websites
US20080086747A1 (en) Systems and methods for acquiring, categorizing and delivering media in interactive media guidance applications
US20110283232A1 (en) User interface for public and personal content browsing and selection in a content system
US20110078172A1 (en) Systems and methods for audio asset storage and management
US20120174039A1 (en) Systems and methods for navigating through content in an interactive media guidance application
US20120324504A1 (en) Systems and methods for providing parental controls in a cloud-based media guidance application
US7890490B1 (en) Systems and methods for providing advanced information searching in an interactive media guidance application
US20120078953A1 (en) Browsing hierarchies with social recommendations
US20140150009A1 (en) Systems and methods for presenting content simultaneously in different forms based on parental control settings
US20130173533A1 (en) Systems and methods for sharing profile information using user preference tag clouds
US20090055393A1 (en) Method and system for facilitating information searching on electronic devices based on metadata information
US20100122293A1 (en) Systems and methods for detecting inconsistent user actions and providing feedback
US20140223481A1 (en) Systems and methods for updating a search request
US20130081083A1 (en) Method of managing contents and image display device using the same

Legal Events

Date Code Title Description
AS Assignment

Owner name: UNITED VIDEO PROPERTIES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PAPISH, MICHAEL;GREEN, BENJAMIN;HELSINGER, ALEX;REEL/FRAME:027760/0356

Effective date: 20120210

AS Assignment

Owner name: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL

Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:APTIV DIGITAL, INC.;GEMSTAR DEVELOPMENT CORPORATION;INDEX SYSTEMS INC.;AND OTHERS;REEL/FRAME:033407/0035

Effective date: 20140702

AS Assignment

Owner name: TV GUIDE, INC., CALIFORNIA

Free format text: MERGER;ASSIGNOR:UV CORP.;REEL/FRAME:035848/0270

Effective date: 20141124

Owner name: ROVI GUIDES, INC., CALIFORNIA

Free format text: MERGER;ASSIGNOR:TV GUIDE, INC.;REEL/FRAME:035848/0245

Effective date: 20141124

Owner name: UV CORP., CALIFORNIA

Free format text: MERGER;ASSIGNOR:UNITED VIDEO PROPERTIES, INC.;REEL/FRAME:035893/0241

Effective date: 20141124