US20200192935A1 - Segmentation Of Video According To Narrative Theme - Google Patents


Info

Publication number
US20200192935A1
Authority
US
United States
Prior art keywords
relevance
interval
terms
intervals
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US16/552,375
Inventor
Geetu Ambwani
Anthony R. DAVIS
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Comcast Cable Communications LLC
Original Assignee
Comcast Cable Communications LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Comcast Cable Communications LLC filed Critical Comcast Cable Communications LLC
Priority to US16/552,375
Assigned to COMCAST CABLE COMMUNICATIONS, LLC. Assignment of assignors interest (see document for details). Assignors: Ambwani, Geetu; Davis, Anthony R.
Publication of US20200192935A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 — Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7844 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data

Definitions

  • aspects of the disclosure relate to processing data files, media files and other content items, such as files containing movies, television shows, sporting events, etc.
  • One or more aspects relate to the identification of segments in a content item according to themes.
  • Processing may be configured to, for example, capture, analyze, segment, index, retrieve, and/or distribute the massive amount of information contained within the types of media used within these technologies to help users sift through the content and find the portion(s) that will be of most interest.
  • a content supplier or provider may utilize a plurality of computing devices to allow the segmentation and distribution of media content items to user devices.
  • the plurality of computing devices may include a content database, a relevance interval database, a segment database, a content server, a data analysis server, a relevance interval calculation server, a segment calculation server, a distribution server, and a network that provides communication between the various databases and servers of the content supplier.
  • the segments of the content items may be indexed, for example, by theme, and then later retrieved and transmitted to a user device for consumption by a user.
  • computing devices may be configured to perform a method of segmenting one or more content items according to a narrative theme.
  • a textual transcript of a program e.g., closed-captioning data, transcript file, etc.
  • a relevance interval may be generated for each term.
  • the relevance interval for a term may identify an interval within the program (e.g., number of lines in the transcript above and below the line in which the term appears; or a time segment before and after the point in time at which the term appears, etc.) over which the term is deemed to be relevant.
  • a relevance interval can be determined by comparing the term with other terms in the latter/earlier lines or segments of the program, and consulting a database identifying a statistical strength value for how contextually related the two terms are (e.g., the terms “dog” and “leash” may be more likely to be contextually related than the terms “dog” and “physics,” for example).
  • a content item can be segmented by themes.
  • a theme may include one or more words and the relevance intervals for each word in the theme may be determined (e.g., retrieved from a database storing the relevance intervals for the content item).
  • the relevance intervals may be filtered and/or merged to identify a total interval that represents one or more contextually-coherent segments in the content item (e.g., a segment that relates to dogs, such as if lines 10-20 of the transcript contained a conversation mostly about dogs).
  • the total interval can also be filtered and/or merged.
  • Information related to the total interval can then be used (e.g., stored in a searchable index) to help users who wish to locate segments of interest (e.g., if a user wishes to find conversations about dogs in this evening's show).
  • FIG. 1 is a block diagram showing an illustrative system in which content items may be segmented and distributed to users in accordance with one or more aspects of the disclosure.
  • FIG. 2 illustrates a computing device that may be used to implement various methods and/or devices in accordance with one or more aspects of the disclosure.
  • FIG. 3 is a flowchart illustrating an example method for identifying intervals within a content item according to one or more aspects described herein.
  • FIG. 4 is a flowchart illustrating an example method for calculating a relevance interval for a word according to one or more aspects described herein.
  • FIG. 5 is a graphical illustration of relevance intervals according to one or more aspects described herein.
  • FIG. 6 is a flowchart illustrating an example method for determining an interval for a theme based on the relevance intervals for each word in the theme according to one or more aspects described herein.
  • FIG. 7A is a graphical illustration showing an example of relevance intervals that have been retrieved based on terms in a theme according to one or more aspects described herein.
  • FIG. 7B is a graphical illustration showing an example of relevance intervals retrieved for each term of a theme after they have been filtered, according to one or more aspects described herein.
  • FIG. 7C is a graphical illustration showing an example of relevance intervals that have been merged, according to one or more aspects described herein.
  • FIG. 8A is a graphical illustration showing example total relevance intervals for different themes according to one or more aspects described herein.
  • FIG. 8B is a graphical illustration showing example total intervals after filtering has been performed according to one or more aspects described herein.
  • Some embodiments may be described in the context of a network providing services to user devices over various networks using one or more protocols.
  • the disclosure is not limited to any particular type of network or networks using a specific type of communication medium or to a specific set of communication protocols.
  • FIG. 1 is a block diagram showing an illustrative system in which content items may be segmented or otherwise processed according to the disclosure, and distributed to or accessed by users.
  • content supplier or provider 100 may provide, or make available for access, content items.
  • Content items may take various forms, including items having video (e.g., a video program), audio (e.g., an audio file), text, data, and/or animations (e.g., Adobe® Flash files).
  • Content supplier 100 may also provide services related to the content items (e.g., video-on-demand, Internet or other network access services, etc.) to one or more users.
  • content supplier 100 may distribute to the user devices the content items, segments of the content items (e.g., portions of a television program, chapters in a story or movie, etc.) and/or services using the content items and/or the segments.
  • Content supplier 100 may provide the segments of the content items with, for example, descriptive labels and/or topics (e.g., topic names or thematic labels for different stories in this week's episode of the “60 Minutes” news program).
  • Content items that have been segmented by theme/topic can be used in various manners by the content supplier 100 to provide users with more interactive and/or information rich services related to information-rich media, such as audio or video.
  • content supplier 100 may store a searchable index of segmented content.
  • the index may label each segment according to one or more topics, and identify the start/end times in the content.
  • Content supplier 100 may receive a request from a user for segments of content items matching a narrative theme (e.g., plot line, plot point, plot device, etc.).
  • the content supplier 100 may retrieve one or more matching segments (e.g., all segments having a label matching and/or including the particular sports team) and transmit the results in a desired format, such as via an Internet Protocol (IP) video stream, or transmit a list of results (e.g., web page, electronic program guide menu display, etc.).
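  • As an illustration only (not part of the disclosure), the following Python sketch shows one possible shape for such a searchable segment index and a label-matching query; the field names and example segments are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SegmentRecord:
    content_id: str    # identifier of the content item
    label: str         # topic/thematic label for the segment
    start_time: float  # seconds from the start of the content item
    end_time: float    # seconds from the start of the content item

# Hypothetical index entries in the spirit of the "60 Minutes" example.
index: List[SegmentRecord] = [
    SegmentRecord("60_minutes_e101", "Congress's New Tax Proposal", 35.0, 840.0),
    SegmentRecord("60_minutes_e101", "The NFL's Mystery Man", 1800.0, 2650.0),
]

def find_segments(query: str) -> List[SegmentRecord]:
    """Return segments whose label matches and/or includes the query text."""
    q = query.lower()
    return [seg for seg in index if q in seg.label.lower()]

print(find_segments("NFL"))  # -> the "The NFL's Mystery Man" segment
```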
  • the segments may be transmitted as part of a service, such as a video-on-demand service, or other service.
  • content supplier 100 may organize one or more content streams (e.g., video channels or services, datastreams, etc.) according to particular labels and/or narrative theme, and stream the organized segments over the corresponding content stream. For example, one channel or service may be made to carry all sports segments from the day.
  • content supplier 100 may allow a user to receive recommendations of thematically similar content items and/or segments. In some instances, the recommendations may be based on a particular content item and/or segment, such as a content item or segment currently being viewed by a user.
  • content supplier 100 may be configured to employ processes configured to, or related to, capture, analyze, segment, index, store, and/or retrieve segments of the content items.
  • content supplier 100 may use various databases and servers to provide the desired functionality.
  • content supplier 100 includes a content database 101 , relevance interval database 102 , segment database 103 , and computing devices related to a content server 109 , data analysis server 111 , relevance interval calculation server 105 , segment processing server 107 , distribution server 113 , and a server network 115 that provides communication between the various databases and servers of the content supplier 100 .
  • Content database 101 may store a plurality of content items and other data associated with a service offered by content supplier 100 to users.
  • a content item may include data, audio and/or visual content and may be associated with particular programs, media files, or other content files.
  • a content item may be one or more video and/or audio files associated with a particular television show, movie, commercial, sporting event, news report, public service announcements and the like, because the item includes content for that show, movie, commercial, etc.
  • the content server 109 may be configured to perform various tasks related to the content and/or services of the content supplier 100 , including the tasks of receiving content items, storing content items in the content database 101 , receiving requests associated with content or services via distribution server 113 , retrieving content and/or data associated with a service from the content database 101 , and transmitting the content and/or other data to the distribution server 113 .
  • Data analysis server 111 may be configured to perform specific functions related to the analysis of content items, including data extraction (e.g., speech recognition of audio, optical character recognition of onscreen visual text, image processing algorithms of onscreen text, etc.) and analysis of the extracted data (e.g., natural language processing, logical structure analysis, anaphora resolution, etc.). Results of the data extraction and/or analysis may be stored in the content database 101 .
  • content database 101 may include text from the content items, such as textual transcripts, closed captioning data, and the like.
  • the text may include data extracted and/or generated by the data analysis server 111 .
  • Relevance interval calculation server 105 may be configured to analyze individual terms and/or phrases from the content's text (e.g., its transcript, closed captioning data, textual description, etc.), and identify relevance intervals for the various terms and phrases in that text.
  • a term's relevance interval may identify one or more sections of a content item (e.g., lines of dialog within the transcript, time portions of the show, etc.) that are deemed relevant to that term.
  • Relevance intervals and example methods of calculating relevance intervals, will be discussed in detail below.
  • the calculated relevance intervals may be stored in relevance interval database 102 .
  • Segment processing server 107 may be configured to analyze the content's text (e.g., transcript, etc.) and the relevance intervals identified by the relevance interval calculation server 105 , and identify and determine thematic segments within the content items (e.g., by creation of a searchable index of the segments). Segment processing server 107 may also otherwise process the segments and store information related to the segments in one or more of the databases, including segment database 103 . For example, segment processing server 107 may determine segments or intervals of the content item with respect to some identified themes and store data identifying the segments/intervals and the corresponding themes.
  • segment processing server 107 may analyze a content item for a television show and identify segments of the content item for a number of identified themes (e.g., one or more segments related to a first theme, one or more segments related to a second theme, etc.).
  • the segment database 103 may store data such as indexes, pointers or other indicators of the segments, including indicators of where a segment begins, how the segments relate to the theme, etc.
  • Distribution server 113 may process communication between the content supplier 100 and one or more user devices 119 a - 119 n . As illustrated in FIG. 1 , distribution server 113 may transmit information via the distribution network 117 to the user devices 119 a - 119 n . Distribution server 113 may also receive information from the user devices 119 a - 119 n via the distribution network 117 . User devices 119 a - 119 n may be a heterogeneous mix of various computing devices, including a network gateway, personal computer, web-enabled cellular phone, personal digital assistant, laptop computer, television set-top box, digital video recorder, etc.
  • Distribution network 117 may be any type of network, such as satellite, telephone, cellular, wireless, Ethernet, twisted pair, fiber, coaxial, a hybrid fiber/coax network (e.g., a coaxial/fiber network), etc., or a combination of networks.
  • the distribution network may include components not illustrated, such as modems, bridges, routers, splitters, filters, amplifiers, wireless access points, Bluetooth® devices, and other connectors of various formats (e.g., HDMI, Multimedia over Coax Alliance, etc.) to assist in conveying the signals to their destination.
  • distribution server 113 may be configured to manage communications between devices on the distribution network 117 and other devices of the content supplier 100 .
  • the distribution server 113 or network 117 may include a termination system, such as a cable modem termination system (CMTS).
  • CMTS may be as specified in the Data Over Cable Service Interface Specification (DOCSIS) standard, published by Cable Television Laboratories, Inc. (a.k.a. CableLabs), or it may be a similar or modified device instead.
  • the termination system may be configured to place data on one or more downstream frequencies to be received by modems, such as cable modems, at the premises of the user devices 119 a - 119 n , and to receive upstream communications from one or more modems on one or more upstream frequencies.
  • Distribution servers may be used for other types of distribution networks, such as an optical fiber termination system for optical media, telephone line DSLAM (Digital Subscriber Line Access Multiplexer) for telephone lines, satellite transceivers, cellular telephone stations, local area wireless (e.g., WiMax), etc.
  • FIG. 2 illustrates a computing device that may be used to implement various devices described herein (e.g., the various servers, user devices, and/or databases of FIG. 1 ).
  • the computing device 200 may include one or more processors 203 , which may execute instructions of a computer program to perform any of the features described herein. Those instructions may be stored in any type of computer-readable medium or memory, to configure the operation of the processor 203 .
  • instructions may be stored in a read-only memory (ROM) 207 , random access memory (RAM) 205 , or other media accessible by the computing device, such as a Universal Serial Bus (USB) drive, compact disk (CD), digital versatile disk (DVD), floppy disk drive, etc.
  • Input/output module 211 may include a microphone, keypad, touch screen, and/or stylus through which a user of computing device 200 may provide input, and may also include one or more of a speaker for providing audio output and a video display device for providing textual, audiovisual and/or graphical output.
  • Software may be stored within memory 213 (e.g., hard drive), RAM 205 , and/or ROM 207 to provide instructions to processor 203 for enabling the computing device 200 to perform various functions.
  • memory 213 may store software used by the computing device 200 , such as an operating system 215 , application programs 217 , and an associated database 215 .
  • some or all of computing device 200 's computer executable instructions may be embodied in hardware or firmware (not shown).
  • computing device 200 may operate in a networked environment supporting connections to one or more networked devices. Accordingly, computing device 200 may use network interface 209 to connect to the network.
  • Network interface 209 may manage communications over various networks using various formats, including protocols such as TCP/IP, Ethernet, FTP, HTTP, WiMax, MoCA, Bluetooth® and the like.
  • One or more aspects of the disclosure may be embodied in computer-useable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other data processing device.
  • the computer executable instructions may be stored on one or more computer readable media such as a hard disk, optical disk, removable storage media, solid state memory, RAM, etc.
  • the functionality of the program modules may be combined or distributed as desired in various embodiments.
  • the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like.
  • Particular data structures may be used to more effectively implement one or more aspects of the invention, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein.
  • FIG. 3 is a flowchart illustrating an example method for identifying intervals within a content item in which different themes, such as narrative themes, are discussed or appear (e.g., the times within an episode of ‘Friends’ in which Joey talks about his job).
  • the method itself may be performed by a computing device such as a processing server, such as segment processing server 107 .
  • the FIG. 3 process may involve receiving a textual transcript of a show (e.g., a show being one example of a content item, and a line-by-line script of spoken words being one example of a textual transcript), processing each individual word or data interval in the transcript to identify relevance intervals for each word (a transcript word's relevance interval may identify the number of sentences before and after that are related to the word, and can be generated using a relationship measurement database that maps strengths of relationships between various pairs of words), receiving or otherwise determining a textual description of the narrative themes in the show (e.g., a written show summary provided by the show's producers), and totaling the relevance intervals of the words in the text description to result in a total interval of the show for the narrative theme.
  • the server 107 may receive or otherwise determine a transcript of the show (e.g., television program, movie, song, webcast, or other content item).
  • the transcript may include, for example, a line-by-line script, record, or listing of the words spoken (or other suitable data interval) in the show, along with any additional relevant or identifying information, such as the identity of the speaker, the time within the show (e.g., time offset from the start, or absolute time of day when spoken if the show was transmitted according to a schedule), volume or tone, or any other characteristic of the spoken dialogue.
  • the server 107 may process each word or phrase appearing in the transcript to identify that word or phrase's relevance interval within the transcript.
  • a word's relevance interval can identify one or more portions of the content (e.g., sentences or time before and after the word's appearance in the transcript) that are contextually relevant to the word.
  • FIG. 4 illustrates an example process by which a relevance interval for a single word (or phrase) can be determined.
  • the server 107 may generate a data record for each word or phrase, where the record identifies a portion (e.g., range of sentences or times) of the show that is contextually related to that word or phrase.
  • These data records, also referred herein as relevance intervals may then be processed in the steps below, and combined to result in the relevance interval for a particular narrative theme.
  • the server 107 may receive, or otherwise determine, information identifying or describing the themes that are expected to be found in the show.
  • the information identifying the themes may be in the form of a description of the show, such as one appearing in an electronic program guide (EPG).
  • producers of the television show “60 Minutes” may provide a short textual description accompanying each episode, identifying the different topics in the episode.
  • the terms of the textual description may describe one or more narrative themes of the content item.
  • Each narrative theme may include a plurality of terms and/or sentences (e.g., a television drama may have sentences describing different plotlines in tonight's episode).
  • an episode of “60 Minutes” may contain a first segment described as “Congress's New Tax Proposal,” a second segment described as “The Killer Who Got Away,” and a third segment described as “The NFL's Mystery Man”, and the textual description can contain these segment descriptions.
  • the information identifying the narrative themes may be manually entered and/or supplied by a user and/or a content supplier.
  • a summary of the plot lines for a television show may be received in conjunction with a broadcast of the television program, such as with electronic program guide (EPG) data, or the plot line summaries can be downloaded from a third-party server (e.g., from a web server and/or extracted from a website).
  • the system may process the information identifying the narrative themes (e.g., the textual description) to identify the themes to look for in the content. Determining the one or more themes may include analyzing the textual description, and separating it into different themes that can be searched for in the program (e.g., story lines, news categories, etc.). In one example, the system may assume that each different sentence in the textual description refers to a different theme (each theme can be a different plotline, story, etc.) that can be found in the show (e.g., “Joey gets a job interview. Monica and Chandler have an argument.”), and the textual description can be parsed by sentence, with each sentence of the representation becoming its own theme for further processing. In alternative embodiments, the narrative themes may be spelled out differently. For example, the program provider can supply a database listing the different narrative themes in the program, such as by keywords, and/or delimiting the different themes by different separators, etc.
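  • A minimal sketch (in Python, assuming the simple sentence-per-theme convention described above) of how a textual description could be parsed into themes; the regular expression and the T_n labels are illustrative assumptions.

```python
import re
from typing import Dict

def description_to_themes(description: str) -> Dict[str, str]:
    """Split a textual description into one theme per sentence.

    A simple split on terminal punctuation is used here; a production
    system might instead use an NLP sentence segmenter or
    provider-supplied delimiters.
    """
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", description.strip())
                 if s.strip()]
    return {f"T_{i + 1}": sentence for i, sentence in enumerate(sentences)}

desc = "Joey gets a job interview. Monica and Chandler have an argument."
print(description_to_themes(desc))
# {'T_1': 'Joey gets a job interview.', 'T_2': 'Monica and Chandler have an argument.'}
```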
  • an iterative process can be performed for each expected theme.
  • the server 107 may perform, for example, the steps illustrated in FIG. 6 .
  • the FIG. 6 process described further below, may take each theme identified in the prior step, identify the individual words in the theme's description, find the relevance intervals for each of those words (e.g., by accessing a database storing the relevance intervals of step 302 ), and combine the relevance intervals of each word into a composite relevance interval (e.g., a total interval) to result in the intervals/segments of the content in which the theme is found.
  • data can be stored (e.g., in a memory of server 107 ), or otherwise prepared, to identify each theme and its corresponding interval(s) within the show.
  • data may optionally be stored identifying each theme and the corresponding intervals within the show. In some embodiments, this may include creating and/or storing a data structure that has a record for each theme, and in each record there would be descriptive text for that theme (e.g., the words from the textual description corresponding to the theme), and data identifying the sentences/times in the show that are in that theme.
  • the data structure could also include other data, such as a theme record handle/identifier, information related to the underlying relevance intervals forming the composite relevance interval for that theme, and other data related to the show, such as the name, filename, and/or some other identifier/descriptive data for the show.
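  • One way such a per-theme data structure could look, sketched in Python with hypothetical field names and values (the record handle, interval times, and show identifier shown here are illustrative, not drawn from the disclosure):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ThemeRecord:
    theme_id: str                # theme record handle/identifier
    description: str             # descriptive text for the theme
    show_id: str                 # name/filename or other show identifier
    # (start, end) times of the composite relevance interval(s) for the theme
    intervals: List[Tuple[float, float]] = field(default_factory=list)
    # identifiers of the underlying per-term relevance intervals
    source_interval_ids: List[str] = field(default_factory=list)

record = ThemeRecord(
    theme_id="T_1",
    description="Joey gets a job interview.",
    show_id="friends_s01e04",
    intervals=[(95.0, 240.0)],
    source_interval_ids=["RI_job_12", "RI_interview_13"],
)
```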
  • the system may receive or otherwise determine a transcript (e.g., a line-by-line script, record, or listing of the spoken words, etc.) of the content item (e.g., show, television program, etc.).
  • the transcript may be supplied with the content item and/or received in addition to the content.
  • the transcript may be determined and/or generated from the content item.
  • a speech recognition system may capture the words spoken in a content item.
  • the content item may be processed, using the speech recognition system, to extract the spoken words and record the time at which each word occurs in the content item.
  • the speech recognition system may also determine and record alternatives for extracted words or phrases, and may determine a probability for each determined alternative.
  • the speech recognition system may also capture other aspects of the content item, including pauses and speaker changes.
  • An optical character recognition (OCR) system may also be used to generate a transcript (or some of the data included in a transcript).
  • an OCR system can capture words spoken in a content item, such as a video item, when the spoken words are displayed on-screen (e.g., as with subtitles, or closed captioning) or are otherwise visible during a playback of the content item.
  • the OCR system may extract and record characteristics of the visible text such as the size, position, style and precise time interval of visibility.
  • a meta-data extraction module can also be used when generating a transcript.
  • a meta-data extraction module can capture embedded data of a content item, including filename, and meta-data of the content item, such as tagging information (e.g., MPEG-7 meta-tags, HTML tags, file header information, etc.).
  • the extracted information may be used as an indicator for important terms of the transcript.
  • summaries of content items (e.g., a capsule description of a television program for display in an electronic program guide) may also be used as a source of text when generating the transcript.
  • the transcript may optionally be processed by a word-adjustment module to increase the accuracy of the extracted data.
  • the transcript may be a combination of text extracted using different techniques (e.g., a transcript based on two sets of extracted data, the first from a speech recognition system and the second from an OCR system; or a transcript based on data from a speech recognition system and a supplied transcript; etc.).
  • a word-adjustment module may analyze a transcript sentence by sentence. Such an analysis may include determining grammatical information of the text, such as parts of speech, phrases, verb argument structure, clausal dependencies, and other features.
  • the sentences may be analyzed according to sentence boundary (e.g., parsed based on punctuation, such as periods; parsed based on capital letters; etc.). Each sentence may be given its own unique index and the time of occurrence of the sentence may be associated with an analyzed (e.g., parsed) sentence.
  • terms of the transcript may be processed into a canonical form.
  • the transcript may be processed to identify occurrences of the same or related words.
  • the system may identify that the term “mouse” and “mice” are the singular and plural of the same concept, and should be treated as the same word.
  • the system may identify that the terms “New” and “York” and “City” are a multi-word expression “New York City” (e.g., when they appear next to each other in a transcript) and should be treated as a single term when the transcript is further processed/analyzed.
  • the system may identify proper nouns with occurrences of pronouns in the transcript (e.g., identify “Bob” with occurrences of “he”, etc.)
  • the terms “dog”, “Spot” and “him” may be treated as referring to the same entity (e.g., “dog”)
  • the terms “store” and “they” may be treated as the same term (e.g., “store”)
  • the terms “grocery” and “shopping” may be treated as a multi-word expression (e.g., “grocery shopping”).
  • a thesaurus database can also be used to combine synonyms, such as “automobile” and “car,” to treat them as the same word and help identify contextually related sentences.
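  • A simplified Python sketch of the canonicalization step described above; the lookup tables are tiny hand-written stand-ins for what a lemmatizer, multi-word-expression list, coreference resolver, and thesaurus database would supply.

```python
# Hypothetical canonicalization tables; a production system would use real
# NLP resources rather than these hand-written mappings.
LEMMAS = {"mice": "mouse", "dogs": "dog"}
MULTIWORD = {("new", "york", "city"): "new_york_city",
             ("grocery", "shopping"): "grocery_shopping"}
SYNONYMS = {"automobile": "car"}

def canonicalize(tokens):
    """Map transcript tokens to canonical terms."""
    tokens = [t.lower() for t in tokens]
    out, i = [], 0
    while i < len(tokens):
        matched = False
        for phrase, canon in MULTIWORD.items():
            if tuple(tokens[i:i + len(phrase)]) == phrase:
                out.append(canon)       # treat the multi-word expression as one term
                i += len(phrase)
                matched = True
                break
        if not matched:
            t = LEMMAS.get(tokens[i], tokens[i])   # singular/plural folding
            out.append(SYNONYMS.get(t, t))         # synonym folding
            i += 1
    return out

print(canonicalize("We went grocery shopping in New York City".split()))
# ['we', 'went', 'grocery_shopping', 'in', 'new_york_city']
```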
  • the transcript may be filtered to remove one or more terms.
  • the transcript may be filtered to leave only those terms that are determined to have significance and/or semantic meaning.
  • the significance and/or semantic meaning of a term may be determined in various ways, including, for example, term-weighting techniques (e.g., term frequency—inverse document frequency (tf/idf), threshold pointwise mutual information (PMI) values with other terms in the vicinity, etc.), and other indicators of term salience.
  • any occurrences of “the” and “a” may be filtered out.
  • certain nouns, verbs, adjectives, adverbs, punctuation, etc. may be filtered out.
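  • A rough Python sketch of one possible tf-idf style filter along these lines; the stopword list, the scoring, and the `corpus_doc_freq` statistic are illustrative assumptions rather than details of the disclosure.

```python
import math
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to"}

def significant_terms(doc_tokens, corpus_doc_freq, num_docs, top_k=20):
    """Keep terms with high tf-idf; drop stopwords and low-salience terms.

    corpus_doc_freq maps a term to the number of corpus documents it
    appears in (a hypothetical precomputed statistic).
    """
    tf = Counter(t for t in doc_tokens if t not in STOPWORDS)
    scored = {
        term: count * math.log(num_docs / (1 + corpus_doc_freq.get(term, 0)))
        for term, count in tf.items()
    }
    return [t for t, _ in sorted(scored.items(), key=lambda kv: -kv[1])[:top_k]]

doc = "the dog chased the ball and the dog barked".split()
print(significant_terms(doc, corpus_doc_freq={"dog": 50, "ball": 200},
                        num_docs=10_000, top_k=3))
# ['dog', 'chased', 'barked']
```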
  • Table 1 illustrates an example transcript of a content item.
  • the transcript may contain a list of sentences (column 2). Each sentence may be associated with a unique label/index (column 1) and a time code that represents when the sentence occurs in the content item (column 3).
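  • A minimal Python sketch of the kind of record a transcript row could map to; the sentences shown are hypothetical placeholders (the actual rows of Table 1 are not reproduced here), chosen only to be consistent with the later “dog”/“leash”/“exercise” examples.

```python
from dataclasses import dataclass

@dataclass
class TranscriptSentence:
    index: str   # unique label, e.g. "S_1"  (column 1)
    text: str    # the sentence itself        (column 2)
    time: float  # seconds from the start     (column 3)

# Hypothetical rows in the spirit of Table 1.
transcript = [
    TranscriptSentence("S_1", "My dog loves going to the park.", 35.0),
    TranscriptSentence("S_2", "I just bought him a new leash.", 45.0),
    TranscriptSentence("S_3", "He gets plenty of exercise there.", 52.0),
]
```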
  • each word or phrase appearing in the transcript can be processed to identify the relevance interval for the word.
  • a word's relevance interval can identify one or more portions of the show (e.g., sentences or time before and after the word's appearance in the transcript) that are contextually relevant to the word. For example, a particular relevance interval may identify any number of sentences within the transcript that are relevant to a word.
  • the occurrence of the word “dog” in a television program transcript may, via the relevance interval calculation, be determined to be relevant to the sentence immediately previous to the occurrence of “dog” and the two sentences immediately following the occurrence of “dog.”
  • the relevance interval for the word “dog” identifies that “dog” is relevant to the previous sentence in the transcript and the two sentences following the occurrence of “dog.”
  • the relevance interval calculation for a word may be performed by a computer (e.g., server 107).
  • FIG. 4 is a flowchart illustrating an example method for calculating a relevance interval for a selected word, term and/or phrase. The method may be performed or repeated for each word or term in the content item's transcript. Relevance intervals for individual terms can be generated prior to handling the description, and without prior knowledge of the expected themes in the show.
  • the example method of calculating a relevance interval begins with initializing the relevance interval of the selected term (e.g., the first word in the first sentence of a transcript). For example, a term's initial relevance interval may simply be one sentence: the sentence in which the term appears.
  • initializing the relevance interval may include assigning it information relating to the sentence of the current term (e.g., assigning the current sentence index (e.g., an identifier or handle used to reference the sentence) and/or the time in the content item where the current sentence occurs).
  • the relevance interval may be modified to include a listing of other sentences or times that are relevant to the selected term.
  • the determination of the relevance interval for the selected term may be performed by traversing down the transcript (and up) to compare other sentences in the transcript with the selected term, and determining how contextually close (or contextually similar) those other sentences are to the selected term and/or to other terms in the sentence(s) where the selected term occurs. This can be done, for example, by doing a term-for-term comparison, successively comparing the current term with each term in the other sentence.
  • at step 403 , it is determined whether the end of the transcript (or portion of content) has been reached (e.g., whether the last sentence of the transcript has been considered). If the process is not at the end of the transcript, then the method proceeds to step 405 and the next sentence is retrieved.
  • at step 407 , upon retrieving the next sentence of the transcript, it is determined whether the next sentence is relevant to the current term. This determination can be performed in various ways. For example, the relevancy may be determined based on a comparison between the current term and each of the words in the next sentence. Similarity strengths for these term-by-term comparisons can be obtained from a similarity measurement database or other knowledge base, and totaled, averaged, or otherwise combined to yield an overall similarity strength between the current term and the next sentence.
  • the selected word (e.g., “dog”) can be compared with other terms appearing in the next sentence by forming a similarity measurement between the word “dog” and each term appearing in the next sentence, and deeming the next sentence relevant if the similarity measurement is sufficiently strong.
  • Similarity measurements may identify relationship strengths between pairs of words, and depending upon the type and other characteristics of the content item, take into account contextual information present in the transcript, as well as other contextual information drawn from an analysis of the content item.
  • the types of contextual information used can vary and may depend upon the type and/or the characteristics of the content item (e.g., relevance intervals for a written document, a video with spoken words, a video with sung words, etc., may be calculated using different types of contextual information).
  • contextual information can include the clustering of the occurrences of information representations; the natural breaks in the media, such as paragraphs, changes in slides, verbal pauses, or speaker changes; natural language processing data, such as parts of speech, grammatical structure, multi-word expressions, named entities, and references; semantic information, such as synonyms, classifications, similarities, concepts related by knowledge based connections, and topic similarity; logical structure based upon transition words, prosodic cues, visual layout, surrounding formatting data (e.g., surrounding HTML), topic flow, and statistically measured continuity of topic; and other extracted data.
  • the similarity measurements can be used to determine whether the next sentence is relevant to the selected word.
  • the comparison of the word “dog” to terms in the next sentence may include the calculation and/or retrieval of similarity measurements between “dog” and any (or all) of the other terms of the next sentence.
  • the next sentence is deemed relevant if one or more of the retrieved similarity measurements are above a certain threshold.
  • Various threshold comparison schemes can be used (e.g., determine whether one measurement is above a threshold value, determine whether two or more measurements are “strong,” determine whether an average strength value exceeds a threshold value, etc.).
  • the next sentence may be relevant if the cumulative measurement is above a threshold.
  • the similarity measurements may be represented by statistical measurements of term co-occurrence in corpus data.
  • a large amount of data may be selected as the corpus, such as, for example, a representative sample of sentences or newspaper articles.
  • This corpus may then be processed to create the statistical measurements indicating a strength of the statistical relationship between finding the occurrence of one word (e.g., “dog”) and also finding an occurrence of another word (e.g., “leash”).
  • One statistical measurement of term co-occurrence in corpus data may be a pointwise mutual information score that is calculated between a pair of terms (e.g., between a first term, A, and a second term, B, of the corpus data, where term A and term B form the pair of terms).
  • One way to calculate a pointwise mutual information score is to, for each pair of terms, create a table that tabulates the number of times that each term occurs in the corpus and the number of times one term of a pair occurs within a predetermined distance of the other term of that pair. In other words, this table records the number of times that two terms co-occur within a window of constant size in the corpus.
  • “dog” may occur 10,000 times in a corpus of 10 million words, “leash” may occur 500 times in that same corpus, and “dog” and “leash” may occur 100 times within 10 terms of each other. “Cubicle” may also occur 100 times in the corpus, but “dog” and “cubicle” may occur only 8 times within 10 terms of each other. From this data, the pointwise mutual information scores between pairs of terms can be calculated.
  • One formula for calculating the pointwise mutual information score between a pair of terms A and B may be the standard definition: PMI(A, B) = log [ P(A, B) / ( P(A) · P(B) ) ], where P(A) and P(B) are the probabilities of observing terms A and B in the corpus and P(A, B) is the probability of observing the two terms together within the co-occurrence window.
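  • A worked Python sketch of the pointwise mutual information calculation using the illustrative counts above; normalizing the co-occurrence count by the corpus size is one simplified choice among several possible normalizations.

```python
import math

def pmi(count_a, count_b, count_ab, corpus_size, base=2):
    """Pointwise mutual information between terms A and B.

    count_a, count_b -- occurrences of each term in the corpus
    count_ab         -- co-occurrences of A and B within the window
    corpus_size      -- total number of terms in the corpus
    """
    p_a = count_a / corpus_size
    p_b = count_b / corpus_size
    p_ab = count_ab / corpus_size          # simplified normalization
    return math.log(p_ab / (p_a * p_b), base)

# Figures from the example above (10-million-word corpus).
print(pmi(10_000, 500, 100, 10_000_000))   # "dog"/"leash"   ~7.64
print(pmi(10_000, 100,   8, 10_000_000))   # "dog"/"cubicle" ~6.32
```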
  • Measures of association or similarity between terms can also be calculated using other similarity measurement techniques, including, for example, Dice coefficient, chi-square statistics, log-likelihood ratios, WordNet-based similarity, Wikipedia Miner similarity, Google similarity distance, etc.
  • similarity measurements may be stored in a knowledge base (e.g., database).
  • the similarity measurement database may contain, for example, a large listing of terms, and for each term, the database may list a number of contextually-related terms along with a strength value for the match. So, for example, an entry for the word “dog” may list “park”, “bone”, “leash” and “owner” (among others), and may include a strength value for the contextual relationship between “dog” and each of the listed words.
  • the strength value can be stronger for words like “leash” and “bone”, since those terms quite often relate to dogs, while the strength value for words like “park” and “owner” may be lower, since those terms are often used in other contexts that don't involve dogs. Any desired type of similarity measurement database may be used.
  • a particular statistical measurement may be retrieved from the database by providing a query including the two terms of the desired similarity measurement.
  • the query could include the term under consideration (e.g., the current term), and a term from the next sentence (e.g., a query including “dog” and “leash” would return the similarity measurement between those terms).
  • Another way to measure the relevancy is to compare other (or all) words in the sentence of the current term with words of the next sentence.
  • This process can include gathering the similarity measurements between any (or all) of the terms in the two sentences. Once the similarity measurement(s) are gathered, the similarity strength between the two sentences may be calculated, for example, by summing and/or taking the product of the gathered similarity measurements, and then normalizing by the resulting number. In some instances, outlying (e.g., extreme) values may be discarded before performing the summation and/or product. Additionally, the similarity measurements may be normalized before performing the summation and/or product.
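  • A small Python sketch of one way the gathered term-pair measurements could be combined into a single sentence-level similarity; the `pair_strength` callback stands in for a query against a similarity measurement database and is an assumption of this sketch.

```python
def sentence_similarity(terms_a, terms_b, pair_strength, drop_outliers=0):
    """Combine term-pair similarity measurements into one sentence-level score.

    pair_strength(t1, t2) is assumed to look up a non-negative similarity
    strength (e.g., a PMI score) between two terms.
    """
    scores = sorted(pair_strength(a, b) for a in terms_a for b in terms_b)
    if drop_outliers:
        # Optionally discard the most extreme low and high values.
        scores = scores[drop_outliers:len(scores) - drop_outliers]
    if not scores:
        return 0.0
    return sum(scores) / len(scores)   # sum, then normalize by the count
```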
  • the relevance may be determined based on an analysis of the terms of the sentences.
  • the relevancy may be based on a pronoun resolution determination (e.g., if the sentences are about the same noun, the sentences may be relevant).
  • a first sentence may include the noun “Spot” and the second sentence may include the pronoun “he” which may be resolved to be referring to “Spot.” These sentences may be deemed relevant, since the pronoun in the subsequent sentence refers back to the proper noun in the prior sentence.
  • the relevancy may be based on the occurrence of a discourse phrase that indicates a continuation or the ending of an interval (e.g., the phrase “In addition” may indicate a continuation, while the phrase “In other news” may indicate the intervals ends).
  • the method can return to step 403 , to determine if this sentence is the end of the transcript, and to continue with checking the next sentence in the transcript. Alternatively, if the next sentence is not deemed relevant, then the system can conclude that the term's relevance in this direction has ended, and the process can cease the downward traversal, and proceed to step 411 for the upward traversal through the transcript. Additionally and/or alternatively, the process may proceed to step 411 if a threshold number of sentences have been determined to be not relevant. In one example, the process may proceed to step 411 if a threshold of consecutive sentences have been determined to be not relevant (e.g., if the previous sentence and the current sentence are not relevant, proceed to step 411 ).
  • a counter may be incremented for every not relevant sentence and if the counter reaches a threshold (e.g., 4, etc.), then the process proceeds to step 413 . Otherwise, the process may proceed to step 403 .
  • the method can proceed to step 409 , where that sentence is added to the current term's relevance interval.
  • the relevance interval is modified with the appropriate information of the sentence.
  • the relevance interval for the current term may be a list of sentences, sentence labels/indexes, time periods, some combination thereof, etc. associated with that current term.
  • the interval for a term is stored as a list having copies of relevant sentences, then the terms of the next sentence may be added to the relevance interval list for the current term.
  • the appropriate index value can be updated (e.g., start value may be decremented, end value may be incremented, etc.) to list the relevant sentences according to their indices.
  • the appropriate start or end value of the relevance interval may be augmented by adding the time values for the next sentence (e.g., making the end time of the next sentence the end time of the interval). Table 2, found below, and FIG. 5 illustrate example relevance intervals.
  • the method may proceed to step 403 if the next sentence was added to the relevance interval in step 409 . Otherwise, if the next sentence was not added, the method may proceed to step 411 . In other embodiments, the method may always proceed to step 403 . In yet other embodiments, the method may proceed to step 411 after two or more consecutive sentences have not been added to the current relevance interval (allowing for short gaps in conversation topic without cutting short the relevance interval).
  • If, at step 403 , it is determined that the end of the transcript has been reached, the method proceeds to step 411 , where the sentence of the current term is located (similar to step 401 ). At step 413 , it is then determined whether this location is the beginning of the transcript (since the upwards traversal would end at the beginning of the script). If this location is the beginning of the transcript, then the method ends. Otherwise, the method proceeds to step 415 , where the previous sentence in the transcript is retrieved. At step 417 , upon retrieving the previous sentence, it is determined whether the previous sentence is relevant to the current term, which may be performed similarly to the determination of step 407 . If the previous sentence is relevant, the previous sentence is added to the relevance interval at step 419 . Otherwise, the method may proceed directly to step 413 to determine whether this previous sentence is the beginning of the transcript. At step 420 , another determination is made whether to continue calculating the current relevance interval. This determination may be similar to the determination made at step 410 .
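  • Pulling the FIG. 4 traversal together, the following Python sketch shows one simplified way the downward and upward expansion could be implemented; the `is_relevant` callback and the `max_gap` tolerance for consecutive non-relevant sentences are assumptions standing in for the threshold tests described above.

```python
def relevance_interval(term_sentence_idx, sentences, is_relevant, max_gap=2):
    """Expand a term's relevance interval down and then up the transcript.

    sentences         -- ordered list of sentence records
    term_sentence_idx -- index of the sentence containing the term
    is_relevant(i)    -- hypothetical callback deciding whether sentence i is
                         contextually relevant to the term (e.g., by comparing
                         similarity measurements against a threshold)
    max_gap           -- consecutive non-relevant sentences tolerated before
                         the traversal stops in that direction
    """
    start = end = term_sentence_idx

    # Downward traversal (later sentences).
    gap, i = 0, term_sentence_idx + 1
    while i < len(sentences) and gap < max_gap:
        if is_relevant(i):
            end, gap = i, 0
        else:
            gap += 1
        i += 1

    # Upward traversal (earlier sentences).
    gap, i = 0, term_sentence_idx - 1
    while i >= 0 and gap < max_gap:
        if is_relevant(i):
            start, gap = i, 0
        else:
            gap += 1
        i -= 1

    return start, end   # inclusive sentence indices of the interval
```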
  • the process illustrated by FIG. 4 may be repeated until the relevance intervals for the desired terms (e.g., all terms, phrases, or canonical terms or phrases in the transcript) have been calculated.
  • the relevance intervals and other data related to the relevance intervals may be stored in a database (e.g., relevance interval database 102 of FIG. 1 ).
  • the relevance intervals may be subject to additional, or post-processing.
  • the relevance intervals may be indexed for easier retrieval. For example, words may appear multiple times in a document and, thus, may also have multiple relevance intervals.
  • the relevance intervals for a transcript may be indexed by the term for which the relevance intervals were calculated. The relevance intervals may be searched by term to retrieve any and/or all of the relevance intervals associated with that term.
  • the relevance intervals may be merged. For example, if two relevance intervals for a term are sufficiently close (e.g., the end of the first relevance interval lies within a predetermined number of sentences from the start of the second relevance interval), the two relevance intervals may be merged.
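  • A short Python sketch of this merging step, assuming intervals are represented as (start sentence, end sentence) pairs and a hypothetical gap threshold measured in sentences:

```python
def merge_intervals(intervals, max_gap_sentences=2):
    """Merge a term's relevance intervals that are sufficiently close.

    intervals -- list of (start_sentence, end_sentence) tuples for one term
    """
    merged = []
    for start, end in sorted(intervals):
        if merged and start - merged[-1][1] <= max_gap_sentences:
            # Close enough to the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(merge_intervals([(1, 3), (5, 8), (20, 22)]))
# [(1, 8), (20, 22)] with max_gap_sentences=2
```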
  • Table 2 illustrates three example relevance intervals that may be calculated from the transcript illustrated in Table 1. As shown in Table 2, each relevance interval is given a label according to the term for which it was calculated, the sentence in which the term occurred, the start and end sentence for the interval (e.g., sentence boundary), and the start and end time of the content item for the interval (e.g., the start and end time-codes of the sentence in the content item).
  • FIG. 5 is a graphical illustration of relevance intervals. Specifically, the example of FIG. 5 is a graphical illustration of the relevance intervals included in Table 2. As seen in FIG. 5 , the relevance interval for “Dog” is interval 501 , which spans three sentences (S_1, S_2 and S_3) and 25 seconds of the program (from the 35 second mark to the one minute mark); the relevance interval for “Leash” is interval 503 ; and the relevance interval for “Exercise” is interval 505 . Each illustrated relevance interval spans a particular time of the content item and/or sentences of the transcript. The illustration of FIG. 5 provides indications of the sentences of the transcript 507 and the time-codes of the content item 509 .
  • the system may receive information identifying different narrative themes.
  • this information may be represented as a textual description.
  • the description may include different terms and/or sentences that are meant to describe different narrative themes and/or portions of the content item.
  • a narrative theme or portion of the description may be meant to summarize and/or otherwise encapsulate related events within the content item.
  • a narrative theme may describe a plot line, plot point, a plot device, or other narrative theme of a content item, such as a television show, song, movie, etc.
  • a narrative theme may describe an emotional theme of a content item, or other thematic characteristic of the content item.
  • a narrative theme can relate to marketing, advertisement, educational, or other themes.
  • the received description may take various forms.
  • the description of a television episode may be supplied by one or more viewers (e.g., a social networking site, or other website, may allow individual users to post their own summaries of the television episode). Because each viewer is free to describe the piece of content (e.g., an episode of a show) as they interpret it, their description may differ from the descriptions of other viewers, even when they are describing similar events of the television episode. Descriptions could also be supplied by a content supplier or provider.
  • a description supplied by a content provider may be a summary for inclusion into an electronic programming guide, or some other description associated with an existing service of the content provider.
  • descriptions supplied by a content supplier may also differ from those supplied by a viewer. Therefore, a wide range of possible descriptions could be received for a content item, and different sets of relevance intervals and theme segments can be identified using the different descriptions.
  • Table 3 illustrates a sample textual description that includes one or more narrative themes of a content item. Specifically, the example description includes narrative themes for a television show.
  • Character A changes her name. Character B searches for a job. Characters C and D look for a house.
  • the textual description includes three sentences.
  • each sentence describes a plot line for the television show.
  • the illustrated text could be parsed by sentence.
  • the description could be processed differently than just by sentence to identify the themes or theme sentences.
  • processing of the description could be based on other (or additional) grammar structural cues of the text (e.g., parsed based on punctuation, such as commas, semi-colons, etc.).
  • each sentence may optionally be given its own unique index.
  • Table 4 illustrates an example of a textual description that has been processed into one or more themes.
  • each theme includes a single sentence of the descriptive text illustrated in Table 3. The first sentence is the first theme, the second sentence is the second theme, and the third sentence is the third theme.
  • Each theme has been given its own unique index (e.g., T_1, T_2, T_3).
  • T_1 Character A changes her name.
  • T_2 Character B searches for a job.
  • T_3 Characters C and D look for a house.
  • the system can seek to find where the terms of each theme appear in the transcript and/or identify the relevance intervals for those terms.
  • a total interval is determined for each theme.
  • the relevance intervals of each word in a theme description can be combined into a composite relevance interval (e.g., a total interval), which can represent an interval for the theme.
  • FIG. 6 is a flowchart illustrating an example method for determining an interval for a theme based on the relevance intervals for each word in the theme. In some embodiments, this method may be part of an iterative process that is repeated for each theme determined from the textual description (e.g., the themes identified at step 304 of FIG. 3 ).
  • the relevance intervals for each term in the theme are retrieved (e.g., from the relevance interval database 102 of FIG. 1 ). In some embodiments, only the relevance intervals with terms matching each term will be retrieved (e.g., via a query requesting all relevance intervals for the term “dog”). In other embodiments, relevance intervals having terms that are similar to one of the theme's terms may be retrieved. For example, each term in the theme can be processed through a thesaurus to identify synonyms for each term.
  • all relevance intervals that match the current term and the synonyms of the current term can be retrieved (e.g., via one or more queries requesting relevance intervals for the word “leash” and the synonyms of “leash” such as “cord” and/or “chain”, etc.).
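  • A minimal sketch of this retrieval step is shown below, assuming a simple in-memory mapping stands in for the relevance interval database and a small synonym table stands in for a thesaurus; the terms, interval times, and names are all hypothetical:

```python
# Hypothetical stand-in for the relevance interval database (e.g., relevance
# interval database 102): term -> list of (start, end) intervals in seconds.
RELEVANCE_INTERVALS = {
    "leash": [(75, 105), (610, 640)],
    "cord":  [(200, 215)],
    "chain": [(1200, 1230)],
}

# Hypothetical thesaurus lookup; a real system might consult a thesaurus database.
SYNONYMS = {"leash": ["cord", "chain"]}

def retrieve_intervals_for_term(term, include_synonyms=True):
    """Retrieve relevance intervals matching a term and, optionally,
    intervals for the term's synonyms."""
    terms = [term] + (SYNONYMS.get(term, []) if include_synonyms else [])
    intervals = []
    for t in terms:
        intervals.extend(RELEVANCE_INTERVALS.get(t, []))
    return sorted(intervals)

print(retrieve_intervals_for_term("leash"))
# [(75, 105), (200, 215), (610, 640), (1200, 1230)]
```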
  • FIG. 7A is a graphical illustration showing an example of relevance intervals that have been retrieved based on terms in a theme.
  • the relevance intervals for each term of a theme have been retrieved from a database storing the relevance intervals for the content item.
  • the illustrated theme includes three terms, Term A (e.g., “Character A” of theme T_1 of Table 4, which is treated by the system as a single word), Term B (e.g., “changes” of thematic element T_1 of Table 4), and Term C (e.g., “name” of thematic element T_1 of Table 4).
  • FIG. 7A displays relevance intervals 701 corresponding to only a portion of the content item.
  • the Figure illustrates relevance intervals 701 that begin after 1:00 and end before 5:00 of the content item, while those occurring, beginning, and/or ending outside this portion of the content item are not shown.
  • relevance intervals 701 have been illustrated in three portions.
  • the first portion 701 a are the relevance intervals retrieved based on Term A
  • the second portion 701 b are the relevance intervals retrieved based on Term B
  • the third portion 701 c are the relevance intervals retrieved based on Term C.
  • relevance intervals 705 a - 705 i are illustrated in FIG. 7A .
  • the top of a relevance interval represents the beginning of the relevance interval.
  • the bottom of the relevance interval represents the end of the relevance interval.
  • relevance interval 705 a , which is a relevance interval retrieved based on Term A, begins at approximately 1:15 of the content item and ends at approximately 1:45 of the content item.
  • the relevance intervals retrieved based on the terms in the theme can be filtered.
  • the relevance intervals may be filtered using various techniques, some of which include filtering the relevance intervals based on one or more characteristics of the relevance intervals.
  • the relevance intervals may be filtered based on the size (or length) of each relevance interval (“size filtering”). In one instance, any relevance interval that has a size less than a predetermined size (e.g., a size of 5 seconds, etc.) may be removed from the retrieved set of relevance intervals (e.g., remove relevance intervals 705 b , 705 f and 705 i of FIG. 7A from relevance intervals 701 ).
  • the size of a relevance interval may be the time spanned by the relevance interval (e.g., the difference between the end of the relevance interval and the beginning of the relevance interval). Alternatively, the size of a relevance interval may be the number of sentences spanned by the relevance interval.
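  • For example, size filtering might be sketched as follows, assuming relevance intervals are represented as (start, end) pairs in seconds and a 5-second minimum; the example intervals are illustrative, not taken from the figures:

```python
def size_filter(intervals, min_size=5.0):
    """Remove relevance intervals whose span (end - start, in seconds)
    is smaller than a minimum size."""
    return [(start, end) for (start, end) in intervals if (end - start) >= min_size]

intervals = [(75, 105), (92, 95), (130, 170)]
print(size_filter(intervals))  # the 3-second interval (92, 95) is removed
```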
  • the relevance intervals may be filtered based on a comparison between different relevance intervals.
  • Comparison filtering may take the form of overlap filtering and/or adjacency filtering.
  • Overlap filtering may include determining that multiple relevance intervals overlap with each other in the content, and removing relevance intervals that do not overlap (e.g., remove relevance interval 705 i from relevance intervals 701 of FIG. 7A , but not 705 g because 705 g overlaps with 705 c ).
  • overlap filtering may remove any relevance interval that does not overlap with a threshold number of other relevance intervals (e.g., in an example with a threshold of 2, remove relevance intervals 705 i and 705 c from relevance intervals 701 of FIG. 7A , because 705 i does not overlap with any other relevance interval and 705 c only overlaps with 705 g but not a second relevance interval).
  • Adjacency filtering may include determining whether another interval is near a particular relevance interval, and removing the relevance interval if there are no nearby relevance intervals. For example, determining whether another interval is near may include comparing the start and end times of the relevance intervals to determine if at least one relevance interval ends or begins within a threshold time of the start and end time of the particular relevance interval (e.g., remove relevance interval 705 i from relevance intervals 701 of FIG. 7A , but not 705 f , because the start of interval 705 f is sufficiently near to the end of interval 705 e ).
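  • A possible sketch of comparison filtering (overlap and adjacency) is shown below; the 10-second adjacency gap and the minimum-neighbor count are illustrative parameters, not values from the disclosure:

```python
def overlaps(a, b):
    """True if two (start, end) intervals overlap in the content item."""
    return a[0] < b[1] and b[0] < a[1]

def is_near(a, b, gap=10.0):
    """True if one interval begins or ends within `gap` seconds of the other."""
    return abs(a[0] - b[1]) <= gap or abs(b[0] - a[1]) <= gap

def comparison_filter(intervals, gap=10.0, min_neighbors=1):
    """Keep an interval only if it overlaps with, or is near,
    at least `min_neighbors` other intervals."""
    kept = []
    for i, interval in enumerate(intervals):
        neighbors = sum(
            1 for j, other in enumerate(intervals)
            if j != i and (overlaps(interval, other) or is_near(interval, other, gap))
        )
        if neighbors >= min_neighbors:
            kept.append(interval)
    return kept

intervals = [(75.0, 105.0), (100.0, 140.0), (600.0, 620.0)]
print(comparison_filter(intervals))
# (600.0, 620.0) is removed: it neither overlaps with nor is near another interval
```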
  • relevance intervals may be filtered using a hybrid technique.
  • a hybrid technique can be a combination of different filtering techniques, such as a combination of size filtering and comparison filtering, as described above.
  • relevance intervals may be filtered based on a sliding scale between relevance interval size and relevance interval comparison. In one sliding scale technique, the larger the size of the relevance interval, the less the relevance interval needs to overlap or be near another relevance interval (e.g., intervals 705 f and 705 g of FIG. 7A are candidates for removal because they only overlap with one other relevance interval, but relevance interval 705 f is removed from relevance intervals 701 , while 705 g remains, because 705 g is larger).
  • In another sliding-scale example, intervals 705 b , 705 f and 705 i of FIG. 7A are candidates for removal because their size is less than a threshold value, but only intervals 705 f and 705 i are removed from relevance intervals 701 , while 705 b remains, because 705 b overlaps with two other relevance intervals.
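  • One way such a sliding-scale (hybrid) rule could be sketched is shown below; the particular mapping from interval size to the allowed gap is an assumption chosen only for illustration:

```python
def sliding_scale_filter(intervals, base_gap=2.0, scale=0.5):
    """Hybrid filtering sketch: the larger a relevance interval is, the larger
    the gap it may have to its nearest neighbor before being removed (i.e.,
    large intervals need less overlap/adjacency support than small ones)."""
    def overlaps_or_near(a, b, allowed_gap):
        gap = max(b[0] - a[1], a[0] - b[1])   # negative when the intervals overlap
        return gap <= allowed_gap

    kept = []
    for i, interval in enumerate(intervals):
        size = interval[1] - interval[0]
        allowed_gap = base_gap + scale * size  # larger intervals tolerate larger gaps
        if any(overlaps_or_near(interval, other, allowed_gap)
               for j, other in enumerate(intervals) if j != i):
            kept.append(interval)
    return kept

print(sliding_scale_filter([(10.0, 70.0), (75.0, 78.0), (300.0, 302.0)]))
# the small isolated intervals are removed; the large interval is kept
# despite the 5-second gap to its nearest neighbor
```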
  • FIG. 7B is a graphical illustration showing an example of relevance intervals retrieved for each term of a theme after they have been filtered. As illustrated, FIG. 7B illustrates the relevance intervals of FIG. 7A after they have been filtered. The results of the filtering process are illustrated in FIG. 7B as filtered relevance intervals 707 . Filtered relevance intervals 707 are illustrated in three portions: first portion 707 a , second portion 707 b , and third portion 707 c . As compared to relevance intervals 701 of FIG. 7A , only relevance interval 705 i of FIG. 7A was removed from relevance intervals 701 .
  • relevance intervals 705 b , 705 f and 705 i are candidates for removal because of their size.
  • Relevance interval 705 b remains because it overlaps with 705 e and 705 f ;
  • relevance interval 705 f remains because it is near 705 e .
  • All other relevance intervals (e.g., relevance intervals 705 a , 705 c , 705 d , 705 e , 705 g , and 705 h ) remain because they are of a sufficient size and/or overlap and/or are near a sufficient number of other relevance intervals.
  • the retrieved (and possibly filtered) relevance intervals may be merged or otherwise combined.
  • Relevance intervals can be merged using various techniques. For example, the relevance intervals may be merged based on overlap between relevance intervals. In one instance, a relevance interval is merged into another relevance interval if the two relevance intervals overlap (e.g., relevance interval 705 a and 705 d of FIG. 7A may be merged into a single interval, etc.)
  • the relevance intervals may be merged based on the adjacency of the relevance intervals (e.g., relevance interval 705 f and 705 e of FIG. 7A may be merged into a single interval, etc.).
  • a relevance interval can be merged into another relevance interval if the two relevance intervals are sufficiently near each other (e.g., relevance interval 705 e may be merged with 705 f of FIG. 7A because the end time of 705 e is within a threshold number of seconds to the start time of 705 f ).
  • merging relevance intervals based on the adjacency of the matching relevance intervals may include merging the adjacent intervals such that the resulting merged interval spans from the beginning of the earliest relevance interval to the end of the later relevance interval (e.g., from the beginning of 705 e of FIG. 7A to the end of 705 f ). This may cause the merged relevance interval to include the gap between the adjacent relevance intervals (e.g., the merged relevance interval includes the gap from the end of 705 e of FIG. 7A to the beginning of 705 f ).
  • the merging process may continue by merging a merged relevance interval with another relevance interval. For example, two relevance intervals may be merged (e.g., relevance interval 705 a and 705 d of FIG. 7A are merged). Then, the merged relevance interval and a third relevance interval may be merged (e.g., the merged interval of 705 a and 705 d is merged again with interval 705 h ).
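  • A minimal merging sketch is shown below; it sorts the intervals by start time and sweeps left to right, so a merged interval can itself absorb further overlapping or nearby intervals, consistent with the iterative merging described above (the 10-second adjacency gap is an illustrative parameter):

```python
def merge_intervals(intervals, gap=10.0):
    """Merge relevance intervals that overlap or are within `gap` seconds of
    each other; a merged interval spans from the earliest start to the latest
    end and therefore includes any gap between adjacent intervals."""
    if not intervals:
        return []
    intervals = sorted(intervals)          # sort by start time
    merged = [list(intervals[0])]
    for start, end in intervals[1:]:
        last = merged[-1]
        if start <= last[1] + gap:         # overlapping or sufficiently near
            last[1] = max(last[1], end)    # extend the current merged interval
        else:
            merged.append([start, end])
    return [tuple(m) for m in merged]

print(merge_intervals([(75, 105), (95, 130), (150, 155)], gap=10.0))
# [(75, 130), (150, 155)] -- the first two intervals overlap and are merged
```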
  • FIG. 7C is a graphical illustration showing an example of relevance intervals that have been merged.
  • relevance intervals 701 of FIG. 7A have been merged into merged relevance intervals 709 .
  • relevance interval 711 a and relevance interval 705 i are part of merged relevance intervals 709 .
  • Relevance interval 711 a is a merged relevance interval.
  • One merging technique that may produce relevance interval 711 a includes a hybrid technique that merges based on a combination of two techniques: overlap merging and adjacency merging. With respect to relevance intervals 701 of FIG. 7A , relevance intervals 705 a -705 h are merged to create relevance interval 711 a because each of those intervals overlaps with, or is near, another interval (e.g., intervals 705 a , 705 d and 705 h all overlap with each other and may be merged) and/or an intermediate merged interval (e.g., interval 705 c overlaps with the merged interval that results from merging intervals 705 e and 705 f , and they may be merged).
  • Relevance interval 705 i is not merged with another relevance interval, because interval 705 i does not overlap with any other relevance interval (or any resulting merged relevance interval) and is not sufficiently near any other relevance interval (or any resulting merged relevance interval).
  • the relevance intervals resulting from the merging at step 605 can be treated as the total interval for the current theme. Accordingly, in some arrangements, the total interval for a theme depends on whether the relevance intervals retrieved based on the terms of a theme have been filtered and/or merged. If neither filtering nor merging was performed, the total interval may include all relevance intervals retrieved based on the theme's terms (e.g., relevance intervals 701 of FIG. 7A ). If filtering was performed, the total interval for the theme may include the relevance intervals retrieved based on the theme's terms, minus some intervals that were filtered out (e.g., the filtered relevance intervals 707 of FIG. 7B ).
  • If merging was performed, the total interval for the theme may include the merged relevance intervals (e.g., the merged relevance intervals 709 of FIG. 7C ). If filtering and merging were performed, the total interval of the theme may include the relevance intervals that result from filtering and merging the relevance intervals retrieved based on the terms of the theme (not shown).
  • FIG. 8A is a graphical illustration showing example total intervals for different themes. Specifically, FIG. 8A illustrates the total intervals for three different themes (e.g., themes T_1, T_2, T_3 of Table 4).
  • the first total interval 801 is for the first theme (e.g., T_1 of Table 4)
  • the second total interval 803 is for the second theme (e.g., T_2 of Table 4)
  • the third total interval 805 is for the third theme (e.g., T_3 of Table 4).
  • FIG. 8A displays the total intervals 801 , 803 , 805 for only a portion of a content item. Specifically, as seen by line 807 of FIG. 8A , the portion(s) of the total intervals that begin after 0:00 and end before 8:00 of the content item are illustrated, while those occurring/beginning/ending outside this portion of the content item are not shown. Thus, sub-intervals 801 a - d , 803 a - c , and 805 a - c are illustrated in FIG. 8A .
  • the total intervals may be subject to further processing.
  • the total intervals for the themes may be filtered.
  • the total intervals may be filtered using techniques similar to those described above with respect to filtering the relevance intervals of a single theme (e.g., size filtering, adjacency filtering, overlap filtering, hybrid filtering, etc.).
  • the total intervals may also be filtered using a modified technique (e.g., modified overlap filtering, etc.)
  • modified overlap filtering may include determining intervals from the total intervals that overlap and then removing the smallest of any overlapping interval.
  • modified overlap filtering may include determining intervals from the total intervals that overlap and then filtering based on the size of the overlap and/or the size of the relevance interval.
  • the larger a relevance interval is, the greater its overlap with another interval must be before it is removed (e.g., a sub-interval of a total interval with a size of 30 seconds may need to overlap for greater than 20 seconds with another sub-interval before it will be removed, while a sub-interval with a size of 10 seconds may need to overlap for greater than 2 seconds with another sub-interval for it to be removed from the total intervals).
  • in one such example, sub-interval 801 c is removed because its overlap with 805 c exceeds the amount permitted for a sub-interval of its size, while 805 b , 803 c and 801 d all remain.
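  • A sketch of one possible modified overlap filter is shown below, assuming sub-intervals are kept per theme and that the allowed overlap grows with sub-interval size; the allowed_overlap mapping and the example intervals are illustrative assumptions (the disclosure's own example uses absolute values):

```python
def modified_overlap_filter(total_intervals, allowed_overlap):
    """total_intervals maps a theme id to its list of (start, end) sub-intervals,
    in seconds. A sub-interval is removed if its overlap with a sub-interval of
    a *different* theme exceeds allowed_overlap(size_of_sub_interval)."""
    def overlap(a, b):
        # seconds by which two sub-intervals overlap (0 if disjoint)
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

    filtered = {}
    for theme, subs in total_intervals.items():
        others = [s for t, lst in total_intervals.items() if t != theme for s in lst]
        filtered[theme] = [
            s for s in subs
            if all(overlap(s, o) <= allowed_overlap(s[1] - s[0]) for o in others)
        ]
    return filtered

# Larger sub-intervals tolerate more overlap before removal; the 60% rule below
# is only one illustrative mapping.
result = modified_overlap_filter(
    {"T_1": [(60.0, 90.0), (400.0, 410.0)], "T_2": [(70.0, 85.0)]},
    allowed_overlap=lambda size: 0.6 * size,
)
print(result)  # the smaller T_2 sub-interval is removed; both T_1 sub-intervals remain
```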
  • overlap filtering may be performed because themes of some content items may be unlikely to overlap.
  • the total intervals may be filtered based on the content item.
  • the total interval may be filtered based on the location (e.g., time, sentence, etc.) of a sub-interval with respect to the content item (“location filtering”).
  • the location of a sub-interval may determine how aggressively the filtering constraints are applied.
  • relevance intervals located later in the content item may be filtered less aggressively than relevance intervals located earlier in the content item (e.g., overlapping intervals are not removed if they occur later in the content item, but are removed if they occur earlier in the content item).
  • any sub-interval occurring after 4:00 of the content item may not be removed (e.g., sub-intervals 801 b , 801 c , 801 d , 803 b , 803 c and 805 c are not removed because they occur after 4:00), while any sub-interval occurring before 4:00 will be subject to particular filtering constraints, such as overlap filtering, size filtering, etc.
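  • Location filtering might be sketched as follows, assuming a 4:00 cutoff and size filtering as the constraint applied to earlier sub-intervals; both choices are illustrative:

```python
def location_filter(sub_intervals, cutoff=240.0, constraint=None):
    """Keep any sub-interval that starts after `cutoff` seconds (e.g., 4:00);
    apply an additional filtering constraint only to earlier sub-intervals."""
    if constraint is None:
        # illustrative fallback: simple size filtering with a 5-second minimum
        constraint = lambda interval: (interval[1] - interval[0]) >= 5.0
    return [s for s in sub_intervals if s[0] >= cutoff or constraint(s)]

subs = [(30.0, 33.0), (100.0, 160.0), (250.0, 252.0)]
print(location_filter(subs))
# (30.0, 33.0) is removed; (250.0, 252.0) is kept only because it occurs after 4:00
```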
  • the total intervals may also be filtered based on the themes.
  • the segments of relevance may be filtered based on the relatedness of the themes to each other (“relatedness filtering”).
  • a first theme may be compared to a second theme to determine a relatedness score based, for example, on how closely related the terms are to one another in the respective theme descriptions (in other words, based on how different the themes are from one another—if themes are closely related, then the merging of relevance intervals to determine whether a sentence belongs to a first or second theme may require a higher relevance score between the term and neighboring terms that are deemed to be in the theme).
  • total intervals of the themes may be filtered based on the relatedness score.
  • the magnitude of the relatedness value determines how aggressively the total intervals are filtered (e.g., a higher relatedness value would cause less to be filtered, a lower relatedness value would cause more to be filtered, etc.).
  • determining the relatedness value between two themes may include calculating a similarity measurement (e.g., pointwise mutual information (PMI), WordNet-based similarity measures, Wikipedia Miner similarity measures, some combination thereof, etc.) that measures how inter-related the terms of the themes are to each other (these measurements may be combined into a single measurement).
  • the relatedness value and/or the similarity measurements may be normalized (e.g., normalized to be within a range from zero to one), and/or bounded (e.g., set to zero if negative, set to one if greater than one, etc.).
  • the magnitude of the similarity measurement may determine whether certain filtering techniques will be applied between the two total intervals. For example, if the similarity measurement is less than a threshold value, then any overlapping sub-intervals may be filtered based on size (e.g., the smallest of the overlapping sub-intervals will be removed, any overlapping sub-interval less than a threshold size will be removed, any sub-interval that is completely overlapped by a larger sub-interval will be removed, etc.).
  • If the similarity measurement is greater than the threshold, then no further filtering may be performed (e.g., overlapping sub-intervals are not removed).
  • For example, if the similarity measurement of the themes for total intervals 801 and 803 is less than the threshold, 801 b may be removed because it overlaps with 803 b . If the similarity measurement of the themes for total intervals 801 and 803 is greater than a threshold, 801 b may remain despite its overlap with 803 b . Filtering techniques similar to these may be performed because themes that are related may overlap in some content items.
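  • As a rough sketch, a PMI-based relatedness score between two themes could be computed as below; the corpus counts, the averaging of pairwise PMI values, and the normalization constant are all hypothetical stand-ins for the similarity measures named above:

```python
import math
from itertools import product

# Hypothetical corpus statistics standing in for a co-occurrence database; a
# real system might instead derive scores from WordNet- or Wikipedia-based
# similarity measures, or combine several such measures.
TOTAL_DOCS = 1_000_000
DOC_FREQ = {"job": 50_000, "house": 40_000, "name": 60_000}
CO_DOC_FREQ = {frozenset(["job", "house"]): 5_000,
               frozenset(["job", "name"]): 1_000,
               frozenset(["house", "name"]): 1_200}

def pmi(term_a, term_b):
    """Pointwise mutual information of two terms from document counts."""
    p_a = DOC_FREQ[term_a] / TOTAL_DOCS
    p_b = DOC_FREQ[term_b] / TOTAL_DOCS
    p_ab = CO_DOC_FREQ.get(frozenset([term_a, term_b]), 0) / TOTAL_DOCS
    return math.log2(p_ab / (p_a * p_b)) if p_ab else 0.0

def theme_relatedness(terms_a, terms_b):
    """Average pairwise PMI between the terms of two themes, normalized and
    bounded to [0, 1] (the divisor of 10 is an arbitrary illustrative choice)."""
    scores = [pmi(a, b) for a, b in product(terms_a, terms_b) if a != b]
    if not scores:
        return 0.0
    return min(1.0, max(0.0, (sum(scores) / len(scores)) / 10.0))

print(theme_relatedness(["job"], ["house"]))  # roughly 0.13 with the toy counts above
```

  • If the resulting score falls below a chosen threshold, the overlapping sub-intervals of the two themes' total intervals could then be filtered by size, as described above.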
  • FIG. 8B is a graphical illustration showing example total intervals after filtering has been performed on the sub-intervals.
  • As illustrated, the total intervals of FIG. 8A (e.g., total intervals 801 , 803 , 805 of FIG. 8A ) have been filtered to produce the filtered total intervals of FIG. 8B (e.g., intervals 811 , 813 , 815 of FIG. 8B ).
  • One filtering technique that may produce the filtered total intervals 811 , 813 , 815 as illustrated in FIG. 8B includes a hybrid technique that filters based on location of the sub-intervals and uses a modified overlapping filtering process.
  • sub-intervals 803 a and 805 a remain because they do not overlap any sub-interval; sub-intervals 801 a and 803 b remain because they were greater than any sub-interval that they overlapped with (e.g., 801 a overlapped with 805 b of FIG. 8A ; 803 b overlapped with 801 b of FIG. 8A ; sub-interval 801 a is greater than 805 b ; and sub-interval 803 b is greater than 801 b ); and sub-intervals 801 c , 801 d , 803 c , and 805 c remain because, even though they overlap with other sub-intervals, they all are located later in the content item (e.g., after 5:00 of the content item) and, therefore, are not removed.
  • data can be stored to identify the themes and the intervals of each theme with respect to the content item.
  • data can be stored that identifies segments of the content item by theme.
  • storing data to identify the themes and the intervals of each theme may include creating and/or storing a data structure that has a record for each theme, and in each record there would be descriptive text for that theme (e.g., the words from the textual description corresponding to the theme), and data identifying the sentences/times in the content item that are in that theme (e.g., the start and end times of the sub-intervals from the total intervals corresponding to that theme).
  • a data structure that identifies the themes and the intervals of each theme can include data representing the total intervals (and/or the sub-intervals) illustrated in FIG. 8B .
  • the data structure could include the words of the first theme (“Character A changes her name”) and data fields that include the start and/or end times (and/or start/end sentences) of each sub-interval in the corresponding total interval (e.g., total interval 811 , which includes sub-intervals 801 a , 801 c and 801 d ).
  • the data structure could also include similar data for the second theme and the third theme.
  • the data structure may also include other data such as data identifying the content item.
  • Another example would be a data structure including data representing the total intervals (and/or sub-intervals) illustrated in FIG. 8A .
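  • One possible shape for such a record is sketched below; the field names, identifiers, and times are placeholders rather than the actual structure or the actual sub-intervals of FIG. 8B:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ThemeSegmentRecord:
    """One record per theme: the theme's descriptive text plus the sub-intervals
    (start/end times, in seconds) of its total interval in the content item."""
    theme_id: str
    description: str
    content_item_id: str
    sub_intervals: List[Tuple[float, float]] = field(default_factory=list)

# Illustrative record for the first theme; the numeric times are placeholders.
record = ThemeSegmentRecord(
    theme_id="T_1",
    description="Character A changes her name",
    content_item_id="episode_123",
    sub_intervals=[(30.0, 90.0), (300.0, 345.0), (420.0, 470.0)],
)
```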
  • the stored data may be part of a searchable index (e.g., segment database 103 of FIG. 1 ) that allows searching for segments of content items based on certain queries (e.g., a query for segments matching, or similar to, a particular theme; a query for segments matching, or similar to, a particular word; etc.).
  • the system may receive a query for segments similar to a particular theme (e.g., “Character A changes her name”).
  • the index may be searched and segments of content items that match or are similar to the particular theme may be identified from the stored data structure.
  • If the data structure includes records for a theme matching “Character A changes her name”, such as a record including data based on the total interval graphically illustrated in FIG. 8B (e.g., total interval 811 ), the found data records can be used to supply a user with segments of the content item.
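  • A very small sketch of such a theme search is shown below; it ranks stored records by raw term overlap with the query, whereas a production index (e.g., segment database 103) would likely use a full search engine and richer similarity scoring:

```python
def search_segments(index, query):
    """Rank stored theme records by how many terms they share with the query
    and return the matching records (most term overlap first)."""
    query_terms = set(query.lower().split())
    scored = []
    for record in index:
        overlap = len(query_terms & set(record["description"].lower().split()))
        if overlap:
            scored.append((overlap, record))
    return [record for overlap, record in sorted(scored, key=lambda item: -item[0])]

index = [
    {"description": "Character A changes her name",
     "sub_intervals": [(30.0, 90.0), (300.0, 345.0)]},
    {"description": "Character B searches for a job",
     "sub_intervals": [(120.0, 200.0)]},
]
print(search_segments(index, "Character A changes her name")[0]["sub_intervals"])
# [(30.0, 90.0), (300.0, 345.0)] -- the best match is the first theme's record
```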
  • If the content item is a video, portions of the video corresponding to the start and end times may be transmitted (e.g., the video corresponding to the start and end times of sub-intervals 801 a , 801 b , 801 c , 801 d of FIG. 8A ).
  • the segments may be transmitted as a continuous video stream to a requesting user (e.g., a video where all segments addressing a given topic, across multiple news television programs, are played consecutively in a single on-demand stream).
  • a list identifying the segments may be transmitted that allows a user to select particular portions to view and/or store (e.g., a user may view a program guide listing television programs, enter a search query for a particular topic/theme, view results responsive to the search that identify different segments of one or more television programs, and select at least one segment for viewing)
  • the searchable index may be included as part of another service (e.g., a video-on-demand service, a web and/or Internet service, etc.) that utilizes a search function, allowing a user to conduct a search for requested themes (e.g., a user could indicate a desire to see the story arc involving Jin and Sun from all seasons of the television show “Lost”, and be presented with a single stream containing snippets with those characters, such as segments indexed with a theme including an identification of “Jin” or “Sun”).
  • portions of content items corresponding to the theme of a content item that is currently being consumed by a user can be retrieved and supplied to the user device.
  • the user may indicate their wish to view additional information about a portion of the current content item (e.g., a user may press an appropriate button on a remote control).
  • the associated command may then be communicated to the content supplier which causes a theme for the current content item (e.g., current television program) to be determined (e.g., the theme of the current segment of the current television program).
  • other portions of other content items may then be retrieved based on the determined theme and subsequently supplied to the user device for consumption by the user.
  • the user may be watching a television program and the current segment of the television program may be “The Building of the Eiffel Tower.”
  • One or more other segments from different programs having a label that is the same as, or similar to, “The Building of the Eiffel Tower” (e.g., “Eiffel Tower,” “Building of the Statue of Liberty,” etc.) may be retrieved and transmitted to the user.
  • the retrieved portions may be transmitted with additional information, such as text describing the theme, the time-code of the segment's position in the content item, etc. Such additional information may be displayed to a user upon consumption of the segment and/or transmitted information.
  • the searchable index may be used by an advertising system.
  • the advertising system may insert advertisements into content items based on the themes of a content item and/or an advertisement.
  • the advertising system may insert an advertisement into the content item near a segment (e.g., before the segment, after the segment, in a transition between the segment and another segment, etc.).
  • the advertisement may be thematically similar to the theme of that segment (e.g., a mystery plotline segment may be thematically similar to an advertisement parodying a mystery plotline, etc.).
  • the system may insert advertisements near segments based on whether the segment and an advertisement have the same or similar themes (e.g., a segment having a theme related to the “Eiffel Tower” may cause the system to insert advertisements for travel-related activities, such as hotels, airlines, travel booking services, etc.).
  • the themes of the content items that have been transmitted to a user device may be monitored to determine the consumption habits of the user (e.g., determine one or more themes that are often included in content items viewed by the user).
  • the system determines that a user typically views certain themes (e.g., travel-related themes, such as by watching programs having themes related to the “Eiffel Tower” or “Japan” or “flight”) thematically similar content items may be transmitted to a user device as a recommendation to that user (e.g., content items having themes similar to “Eiffel Tower” or “Japan” or “flight” may be transmitted to the user as a recommendation for viewing).
  • the monitored viewing habits may be used by an advertising system (e.g., targeting advertisements to the user based on viewing habits, etc.).
  • the relevance interval for that occurrence could be calculated (e.g., using the method of FIG. 4 ). Similarly, the relevance interval for other occurrences in the transcript could be calculated. This could be repeated for each term in a theme (and then repeated again for all terms in the other themes).

Abstract

Content items may be segmented and labeled by a theme to provide information, such as audio or video, with greater functionality, accuracy and speed. The segments and other related information may be stored and made accessible to users through, for example, a search service and/or an on-demand service. Also provided is automatic segmentation that may include any one of receiving a textual description of a content item, determining themes within the textual description, determining relevance intervals corresponding to terms of the theme, filtering and/or merging the relevance intervals, and determining a total interval for each theme that represents one or more contextually-coherent segments of the content item.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of prior U.S. application Ser. No. 13/195,944, filed Aug. 2, 2011, which is incorporated herein by reference in its entirety.
  • TECHNICAL FIELD
  • Aspects of the disclosure relate to processing data files, media files and other content items, such as files containing movies, television shows, sporting events, etc. One or more aspects relate to the identification of segments in a content item according to themes.
  • BACKGROUND
  • As communication technologies, such as the Internet, and interactive technologies, such as a video-on-demand service, increasingly rely on more information-rich types of media to enhance their popularity and/or capabilities, there is an increasing need to process such information. Processing may be configured to, for example, capture, analyze, segment, index, retrieve, and/or distribute the massive amount of information contained within the types of media used within these technologies to help users sift through the content and find the portion(s) that will be of most interest. However, due to the massive amount of information within media (e.g., a single day's worth of television programming may contain thousands and thousands of hours of content, addressing thousands and thousands of topics, narrative themes, etc.), attempting to capture, analyze, segment, index, retrieve and/or distribute information from a static document may be extremely difficult. Therefore, the processing of certain types of information-rich media files is often performed using manual judgments and determinations. For example, producers of the television show “60 Minutes” may manually generate a textual description of the news segments appearing in this week's episode, identify the three stories to be run in tonight's episode, and provide that textual description to broadcasters. That description can then be provided to users, and can be searched using simple text searches to find programs containing topics of interest. Users can then record those programs.
  • This approach, however, is not perfect. Content producers do not always provide descriptions, or sufficiently detailed descriptions. The summary or descriptions may be lacking; it may, for example, identify a single segment as being “discussing the latest tax proposal in Congress,” and fail to provide additional details (e.g., the formal title or number of a piece of legislation, its sponsor, etc.) that may be useful in supporting a meaningful search by the user. Thus, there remains an ever-present need to provide more useful information to users, for example, to provide for the capture, analysis, segmentation, indexing, retrieval and distribution of information related to media with greater functionality, accuracy and speed.
  • BRIEF SUMMARY
  • The following presents a simplified summary of the disclosure in order to provide a basic understanding of some aspects. It is not intended to identify key or critical elements of the disclosure or to delineate the scope thereof. The following summary merely presents some concepts of the disclosure in a simplified form as a prelude to the more detailed description provided below.
  • In some embodiments herein, a content supplier or provider may utilize a plurality of computing devices to allow the segmentation and distribution of media content items to user devices. The plurality of computing devices may include a content database, a relevance interval database, a segment database, a content server, a data analysis server, a relevance interval calculation server, a segment calculation server, a distribution server, and a network that provides communication between the various databases and servers of the content supplier. The segments of the content items may be indexed, for example, by theme, and then later retrieved and transmitted to a user device for consumption by a user.
  • In some embodiments, computing devices may be configured to perform a method of segmenting one or more content items according to a narrative theme. For example, a textual transcript of a program (e.g., closed-captioning data, transcript file, etc.) may be received and processed to calculate relevance intervals. A relevance interval may be generated for each term. The relevance interval for a term may identify an interval within the program (e.g., number of lines in the transcript above and below the line in which the term appears; or a time segment before and after the point in time at which the term appears, etc.) over which the term is deemed to be relevant. A relevance interval can be determined by comparing the term with other terms in the latter/earlier lines or segments of the program, and consulting a database identifying a statistical strength value for how contextually related the two terms are (e.g., the terms “dog” and “leash” may be more likely to be contextually related than the terms “dog” and “physics,” for example). Using the relevance intervals, a content item can be segmented by themes. A theme may include one or more words and the relevance intervals for each word in the theme may be determined (e.g., retrieved from a database storing the relevance intervals for the content item). Upon determining the relevance intervals for the individual words of the theme, the relevance intervals may be filtered and/or merged to identify a total interval that represents one or more contextually-coherent segments in the content item (e.g., a segment that relates to dogs, such as if lines 10-20 of the transcript contained a conversation mostly about dogs). The total interval can also be filtered and/or merged. Information related to the total interval can then be used (e.g., stored in a searchable index) to help users who wish to locate segments of interest (e.g., if a user wishes to find conversations about dogs in this evening's show).
  • The details of these and other embodiments of the present disclosure are set forth in the accompanying drawings and the description below. Other features and advantages of the invention will be apparent from the description and drawings, and from the claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present disclosure is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:
  • FIG. 1 is a block diagram showing an illustrative system in which content items may be segmented and distributed to users in accordance with one or more aspects of the disclosure.
  • FIG. 2 illustrates a computing device that may be used to implement various methods and/or devices in accordance with one or more aspects of the disclosure.
  • FIG. 3 is a flowchart illustrating an example method for identifying intervals within a content item according to one or more aspects described herein.
  • FIG. 4 is a flowchart illustrating an example method for calculating a relevance interval for a word according to one or more aspects described herein.
  • FIG. 5 is a graphical illustration of relevance intervals according to one or more aspects described herein.
  • FIG. 6 is a flowchart illustrating an example method for determining an interval for a theme based on the relevance intervals for each word in the theme according to one or more aspects described herein.
  • FIG. 7A is a graphical illustration showing an example of relevance intervals that have been retrieved based on terms in a theme according to one or more aspects described herein.
  • FIG. 7B is a graphical illustration showing an example of relevance intervals retrieved for each term of a theme after they have been filtered, according to one or more aspects described herein.
  • FIG. 7C is a graphical illustration showing an example of relevance intervals that have been merged, according to one or more aspects described herein.
  • FIG. 8A is a graphical illustration showing example total relevance intervals for different themes according to one or more aspects described herein.
  • FIG. 8B is a graphical illustration showing example total intervals after filtering has been performed according to one or more aspects described herein.
  • DETAILED DESCRIPTION
  • Some embodiments may be described in the context of a network providing services to user devices over various networks using one or more protocols. The disclosure is not limited to any particular type of network or networks using a specific type of communication medium or to a specific set of communication protocols.
  • FIG. 1 is a block diagram showing an illustrative system in which content items may be segmented or otherwise processed according to the disclosure, and distributed to or accessed by users. In the embodiment illustrated in FIG. 1, content supplier or provider 100 may provide, or make available for access, content items. Content items may take various forms, including items having video (e.g., a video program), audio (e.g., an audio file), text, data, and/or animations (e.g., Adobe® Flash files). Content supplier 100 may also provide services related to the content items (e.g., video-on-demand, Internet or other network access services, etc.) to one or more users. According to aspects of this disclosure, content supplier 100 may distribute to the user devices the content items, segments of the content items (e.g., portions of a television program, chapters in a story or movie, etc.) and/or services using the content items and/or the segments. Content supplier 100 may provide the segments of the content items with, for example, descriptive labels and/or topics (e.g., topic names or thematic labels for different stories in this week's episode of the “60 Minutes” news program). Content items that have been segmented by theme/topic can be used in various manners by the content supplier 100 to provide users with more interactive and/or information rich services related to information-rich media, such as audio or video. For example, in some arrangements, content supplier 100 may store a searchable index of segmented content. The index may label each segment according to one or more topics, and identify the start/end times in the content. Content supplier 100 may receive a request from a user for segments of content items matching a narrative theme (e.g., plot line, plot point, plot device, etc.). In response, the content supplier 100 may retrieve one or more matching segments (e.g., all segments having a label matching and/or including the particular sports team) and transmit the results in a desired format, such as via an Internet Protocol (IP) video stream, or transmit a list of results (e.g., web page, electronic program guide menu display, etc.). Additionally or alternatively, the segments may be transmitted as part of a service, such as a video-on-demand service, or other service.
  • In other arrangements, content supplier 100 may organize one or more content streams (e.g., video channels or services, datastreams, etc.) according to particular labels and/or narrative theme, and stream the organized segments over the corresponding content stream. For example, one channel or service may be made to carry all sports segments from the day. In yet other arrangements, content supplier 100 may allow a user to receive recommendations of thematically similar content items and/or segments. In some instances, the recommendations may be based on a particular content item and/or segment, such as a content item or segment currently being viewed by a user.
  • To provide the segmented content to the user devices, content supplier 100 may be configured to employ processes configured to, or related to, capture, analyze, segment, index, store, and/or retrieve segments of the content items. To accomplish this, content supplier 100 may use various databases and servers to provide the desired functionality. For example, in the illustrated embodiment of FIG. 1, content supplier 100 includes a content database 101, relevance interval database 102, segment database 103, and computing devices related to a content server 109, data analysis server 111, relevance interval calculation server 105, segment processing server 107, distribution server 113, and a server network 115 that provides communication between the various databases and servers of the content supplier 100.
  • Content database 101 may store a plurality of content items and other data associated with a service offered by content supplier 100 to users. A content item may include data, audio and/or visual content and may be associated with particular programs, media files, or other content files. For example, a content item may be one or more video and/or audio files associated with a particular television show, movie, commercial, sporting event, news report, public service announcements and the like, because the item includes content for that show, movie, commercial, etc. The content server 109 may be configured to perform various tasks related to the content and/or services of the content supplier 100, including the tasks of receiving content items, storing content items in the content database 101, receiving requests associated with content or services via distribution server 113, retrieving content and/or data associated with a service from the content database 101, and transmitting the content and/or other data to the distribution server 113.
  • Data analysis server 111 may be configured to perform specific functions related to the analysis of content items, including data extraction (e.g., speech recognition of audio, optical character recognition of onscreen visual text, image processing algorithms of onscreen text, etc.) and analysis of the extracted data (e.g., natural language processing, logical structure analysis, anaphora resolution, etc.). Results of the data extraction and/or analysis may be stored in the content database 101. In some embodiments, content database 101 may include text from the content items, such as textual transcripts, closed captioning data, and the like. In some instances, the text may include data extracted and/or generated by the data analysis server 111.
  • Relevance interval calculation server 105 (e.g., a computing device) may be configured to analyze individual terms and/or phrases from the content's text (e.g., its transcript, closed captioning data, textual description, etc.), and identify relevance intervals for the various terms and phrases in that text. A term's relevance interval may identify one or more sections of a content item (e.g., lines of dialog within the transcript, time portions of the show, etc.) that are deemed relevant to that term. Relevance intervals, and example methods of calculating relevance intervals, will be discussed in detail below. The calculated relevance intervals may be stored in relevance interval database 102.
  • Segment processing server 107 (e.g., a computing device) may be configured to analyze the content's text (e.g., transcript, etc.) and the relevance intervals identified by the relevance interval calculation server 105, and identify and determine thematic segments within the content items (e.g., by creation of a searchable index of the segments). Segment processing server 107 may also otherwise process the segments and store information related to the segments in one or more of the databases, including segment database 103. For example, segment processing server 107 may determine segments or intervals of the content item with respect to some identified themes and store data identifying the segments/intervals and the corresponding themes. In one example, segment processing server 107 may analyze a content item for a television show and identify segments of the content item for a number of identified themes (e.g., one or more segments related to a first theme, one or more segments related to a second theme, etc.). The segment database 103 may store data such as indexes, pointers or other indicators of the segments, including indicators of where a segment begins, how the segments relate to the theme, etc. Various manners in which a segment is identified and/or determined are discussed in detail below.
  • Distribution server 113 (e.g., a computing device) may process communication between the content supplier 100 and one or more user devices 119 a-119 n. As illustrated in FIG. 1, distribution server 113 may transmit information via the distribution network 117 to the user devices 119 a-119 n. Distribution server 113 may also receive information from the user devices 119 a-119 n via the distribution network 117. User devices 119 a-119 n may be a heterogeneous mix of various computing devices, including a network gateway, personal computer, web-enabled cellular phone, personal digital assistant, laptop computer, television set-top box, digital video recorder, etc.
  • Distribution network 117 may be any type of network, such as satellite, telephone, cellular, wireless, Ethernet, twisted pair, fiber, coaxial, a hybrid fiber/coax network (e.g., a coaxial/fiber network), etc., or a combination of networks. In some embodiments, the distribution network may include components not illustrated, such as modems, bridges, routers, splitters, filters, amplifiers, wireless access points, Bluetooth® devices, and other connectors of various formats (e.g., HDMI, Multimedia over Coax Alliance, etc.) to assist in conveying the signals to their destination. Accordingly, distribution server 113 may be configured to manage communications between devices on the distribution network 117 and other devices of the content supplier 100. For example, in a coaxial cable or hybrid fiber/coax system, the distribution server 113 or network 117 may include a termination system, such as a cable modem termination system (CMTS). The CMTS may be as specified in the Data Over cable Service Interface Specification (DOCSIS) standard, published by Cable Television Laboratories, Inc. (a.k.a. CableLabs), or it may be a similar or modified device instead. The termination system may be configured to place data on one or more downstream frequencies to be received by modems, such as cable modems, at the premises of the user devices 119 a-119 n, and to receive upstream communications from one or more modems on one or more upstream frequencies. Similar or other appropriate types of distribution servers may be used for other types of distribution networks, such as an optical fiber termination system for optical media, telephone line DSLAM (Digital Subscriber Line Access Multiplexer) for telephone lines, satellite transceivers, cellular telephone stations, local area wireless (e.g., WiMax), etc.
  • FIG. 2 illustrates a computing device that may be used to implement various devices described herein (e.g., the various servers, user devices, and/or databases of FIG. 1). The computing device 200 may include one or more processors 203, which may execute instructions of a computer program to perform any of the features described herein. Those instructions may be stored in any type of computer-readable medium or memory, to configure the operation of the processor 203. For example, instructions may be stored in a read-only memory (ROM) 207, random access memory (RAM) 205, or other media accessible by the computing device, such as a Universal Serial Bus (USB) drive, compact disk (CD), digital versatile disk (DVD), floppy disk drive, etc. Input/output module 211 may include a microphone, keypad, touch screen, and/or stylus through which a user of computing device 200 may provide input, and may also include one or more of a speaker for providing audio output and a video display device for providing textual, audiovisual and/or graphical output. Software may be stored within memory 213 (e.g., hard drive), RAM 205, and/or ROM 207 to provide instructions to processor 203 for enabling the computing device 200 to perform various functions. For example, memory 213 may store software used by the computing device 200, such as an operating system 215, application programs 217, and an associated database 215. Alternatively, some or all of computing device 200's computer executable instructions may be embodied in hardware or firmware (not shown). Additionally, computing device 200 may operate in a networked environment supporting connections to one or more networked devices. Accordingly, computing device 200 may use network interface 209 to connect to the network. Network interface 209 may manage communications over various networks using various formats, including protocols such as TCP/IP, Ethernet, FTP, HTTP, WiMax, MoCA, Bluetooth® and the like.
  • One or more aspects of the disclosure may be embodied in computer-useable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other data processing device. The computer executable instructions may be stored on one or more computer readable media such as a hard disk, optical disk, removable storage media, solid state memory, RAM, etc. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects of the invention, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein.
  • One or more aspects of this disclosure relate to providing a method for the segmentation of content items, such as identifying portions of an audiovisual program. The method may segment the content item by determining intervals within the content item that are contextually-cohesive with respect to a particular theme. FIG. 3 is a flowchart illustrating an example method for identifying intervals within a content item in which different themes, such as narrative themes, are discussed or appear (e.g., the times within an episode of ‘Friends’ in which Joey talks about his job). The method itself may be performed by a computing device such as a processing server, such as segment processing server 107. In general terms, the FIG. 3 process may involve receiving a textual transcript of a show (e.g., a show being one example of a content item, and a line-by-line script of spoken words being one example of a textual transcript), processing each individual word or data interval in the transcript to identify relevance intervals for each word (a transcript word's relevance interval may identify the number of sentences before and after that are related to the word, and can be generated using a relationship measurement database that maps strengths of relationships between various pairs of words), receiving or otherwise determining a textual description of the narrative themes in the show (e.g., a written show summary provided by the show's producers), and totaling the relevance intervals of the words in the text description to result in a total interval of the show for the narrative theme.
  • In step 301, the server 107 may receive or otherwise determine a transcript of the show (e.g., television program, movie, song, webcast, or other content item). The transcript may include, for example, a line-by-line script, record, or listing of the words spoken (or other suitable data interval) in the show, along with any additional relevant or identifying information, such as the identity of the speaker, the time within the show (e.g., time offset from the start, or absolute time of day when spoken if the show was transmitted according to a schedule), volume or tone, or any other characteristic of the spoken dialogue.
  • In step 302, the server 107 may process each word or phrase appearing in the transcript to identify that word or phrase's relevance interval within the transcript. As noted above, a word's relevance interval can identify one or more portions of the content (e.g., sentences or time before and after the word's appearance in the transcript) that are contextually relevant to the word. FIG. 4 illustrates an example process by which a relevance interval for a single word (or phrase) can be determined. As a result of step 302, the server 107 may generate a data record for each word or phrase, where the record identifies a portion (e.g., range of sentences or times) of the show that is contextually related to that word or phrase. These data records, also referred herein as relevance intervals, may then be processed in the steps below, and combined to result in the relevance interval for a particular narrative theme.
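  • For illustration only, a simplified relevance-interval routine consistent with this general description (but not the specific method of FIG. 4, which is described elsewhere) might look like the following; the sentence/term data layout, the relatedness callback, the threshold, and the maximum span are all assumptions:

```python
def relevance_interval(sentences, sent_idx, term, relatedness, threshold=0.3, max_span=5):
    """Determine a relevance interval for an occurrence of `term` in sentence
    `sent_idx`: extend the interval to earlier and later sentences for as long
    as each contains at least one term whose contextual-relatedness score with
    `term` meets a threshold.  `sentences` is a list of dicts with 'time'
    (seconds) and 'terms' (a list of words); `relatedness(a, b)` stands in for
    a lookup in a statistical term-relatedness database."""
    def related(sentence):
        return any(relatedness(term, t) >= threshold for t in sentence["terms"])

    start = end = sent_idx
    while start > 0 and sent_idx - start < max_span and related(sentences[start - 1]):
        start -= 1
    while end + 1 < len(sentences) and end - sent_idx < max_span and related(sentences[end + 1]):
        end += 1
    # Simplification: the interval runs from the start time of the first related
    # sentence to the start time of the last related sentence.
    return (sentences[start]["time"], sentences[end]["time"])

def toy_relatedness(a, b):
    # stand-in for a statistical strength value between two terms
    return 0.9 if {a, b} <= {"dog", "leash", "walk", "park"} else 0.0

sentences = [
    {"time": 10.0, "terms": ["dog", "leash"]},
    {"time": 14.0, "terms": ["walk", "park"]},
    {"time": 19.0, "terms": ["physics", "exam"]},
]
print(relevance_interval(sentences, 0, "dog", toy_relatedness))  # (10.0, 14.0)
```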
  • In step 303, the server 107 may receive, or otherwise determine, information identifying or describing the themes that are expected to be found in the show. For example, the information identifying the themes may be in the form of a description of the show, such as one appearing in an electronic program guide (EPG). For example, producers of the television show “60 Minutes” may provide a short textual description accompanying each episode, identifying the different topics in the episode. The terms of the textual description may describe one or more narrative themes of the content item. Each narrative theme may include a plurality of terms and/or sentences (e.g., a television drama may have sentences describing different plotlines in tonight's episode). For example, an episode of “60 Minutes” may contain a first segment described as “Congress's New Tax Proposal,” a second segment described as “The Killer Who Got Away,” and a third segment described as “The NFL's Mystery Man”, and the textual description can contain these segment descriptions. In some embodiments, the information identifying the narrative themes may be manually entered and/or supplied by a user and/or a content supplier. For example, a summary of the plot lines for a television show may be received in conjunction with a broadcast of the television program, such as with electronic program guide (EPG) data, or the plot line summaries can be downloaded from a third-party server (e.g., from a web server and/or extracted from a website).
  • At step 304, the system may process the information identifying the narrative themes (e.g., the textual description) to identify the themes to look for in the content. Determining the one or more themes may include analyzing the textual description, and separating it into different themes that can be searched for in the program (e.g., story lines, news categories, etc.). In one example, the system may assume that each different sentence in the textual description refers to a different theme (each theme can be a different plotline, story, etc.) that can be found in the show (e.g., “Joey gets a job interview. Monica and Chandler have an argument.”), and the textual description can be parsed by sentence, with each sentence of the representation becoming its own theme for further processing. In alternative embodiments, the narrative themes may be spelled out differently. For example, the program provider can supply a database listing the different narrative themes in the program, such as by keywords, and/or delimiting the different themes by different separators, etc.
  • At step 305, an iterative process can be performed for each expected theme. For each theme, the server 107 may perform, for example, the steps illustrated in FIG. 6. The FIG. 6 process, described further below, may take each theme identified in the prior step, identify the individual words in the theme's description, find the relevance intervals for each of those words (e.g., by accessing a database storing the relevance intervals of step 302), and combine the relevance intervals of each word into a composite relevance interval (e.g., a total interval) to result in the intervals/segments of the content in which the theme is found.
  • Once the relevance intervals for each theme have been determined, data can be stored (e.g., in a memory of server 107), or otherwise prepared, to identify each theme and its corresponding interval(s) within the show. At step 306, data may optionally be stored identifying each theme and the corresponding intervals within the show. In some embodiments, this may include creating and/or storing a data structure that has a record for each theme, and in each record there would be descriptive text for that theme (e.g., the words from the textual description corresponding to the theme), and data identifying the sentences/times in the show that are in that theme. The data structure could also include other data, such as a theme record handle/identifier, information related to the underlying relevance intervals forming the composite relevance interval for that theme, and other data related to the show, such as the name, filename, and/or some other identifier/descriptive data for the show. Once all the data has been stored, the stored data could be used as a basis for searching the content item based on theme. Additional details of the steps of FIG. 3 will be discussed throughout this document.
  • As discussed with respect to step 301 of FIG. 3, the system may receive or otherwise determine a transcript (e.g., a line-by-line script, record, or listing of the spoken words, etc.) of the content item (e.g., show, television program, etc.). In some arrangements, the transcript may be supplied with the content item and/or received in addition to the content. In other arrangements, the transcript may be determined and/or generated from the content item. For example, a speech recognition system may capture the words spoken in a content item. The content item may be processed, using the speech recognition system, to extract the spoken words and record the time at which each word occurs in the content item. The speech recognition system may also determine and record alternatives for extracted words or phrases, and may determine a probability for each determined alternative. The speech recognition system may also capture other aspects of the content item, including pauses and speaker changes.
  • An optical character recognition (OCR) system may also be used to generate a transcript (or some of the data included in a transcript). For example, an OCR system can capture words spoken in a content item, such as a video item, when the spoken words are displayed on-screen (e.g., as with subtitles, or closed captioning) or are otherwise visible during a playback of the content item. The OCR system may extract and record characteristics of the visible text such as the size, position, style and precise time interval of visibility.
  • A meta-data extraction module can also be used when generating a transcript. For example, a meta-data extraction module can capture embedded data of a content item, including filename, and meta-data of the content item, such as tagging information (e.g., MPEG-7 meta-tags, HTML tags, file header information, etc.). In some arrangements, the extracted information may be used as an indicator for important terms of the transcript. Similarly, summaries of content items (e.g., a capsule description of a television program for display in an electronic program guide) may also provide important terms of the content item, and can be used when generating the transcript.
  • Once the transcript has been obtained (e.g., supplied or generated), the transcript may optionally be processed by a word-adjustment module to increase the accuracy of the extracted data. Additionally, the transcript may be a combination of text extracted from different techniques (e.g., a transcript based on two sets of extracted data, the first from a speech recognition system and the second from an OCR system; or a transcript based on data from a speech recognition system and a supplied transcript; etc.).
  • A word-adjustment module, for example, may analyze a transcript sentence by sentence. Such an analysis may include determining grammatical information of the text, such as parts of speech, phrases, verb argument structure, clausal dependencies, and other features. In one example, the sentences may be analyzed according to sentence boundary (e.g., parsed based on punctuation, such as periods; parsed based on capital letters; etc.). Each sentence may be given its own unique index and the time of occurrence of the sentence may be associated with an analyzed (e.g., parsed) sentence.
  • In some arrangements, terms of the transcript may be processed into a canonical form. For example, the transcript may be processed to identify occurrences of the same or related words. In one example, the system may identify that the terms "mouse" and "mice" are the singular and plural of the same concept, and should be treated as the same word. In a second example, the system may identify that the terms "New" and "York" and "City" are a multi-word expression "New York City" (e.g., when they appear next to each other in a transcript) and should be treated as a single term when the transcript is further processed/analyzed. In a third example, the system may identify proper nouns with occurrences of pronouns in the transcript (e.g., identify "Bob" with occurrences of "he", etc.). With respect to the illustrated transcript of Table 1, the terms "dog", "Spot" and "him" may be treated as referring to the same entity (e.g., "dog"), the terms "store" and "they" may be treated as the same term (e.g., "store"), and the terms "grocery" and "shopping" may be treated as a multi-word expression (e.g., "grocery shopping"). A thesaurus database can also be used to combine synonyms, such as "automobile" and "car," to treat them as the same word and help identify contextually related sentences.
  • In some embodiments, the transcript may be filtered to remove one or more terms. For example, the transcript may be filtered to leave only those terms that are determined to have significance and/or semantic meaning. The significance and/or semantic meaning of a term may be determined in various ways, including, for example, term-weighting techniques (e.g., term frequency-inverse document frequency (tf/idf), threshold pointwise mutual information (PMI) values with other terms in the vicinity, etc.), and other indicators of term salience. In one example, any occurrences of "the" and "a" may be filtered out. In a second example, certain nouns, verbs, adjectives, adverbs, punctuation, etc. may be filtered out.
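  • The following Python sketch illustrates the canonicalization and term-filtering ideas above. The stop-word list, the canonical_map source (lemmatization, multi-word-expression detection, coreference resolution, thesaurus lookup), the tf-idf scoring, and the threshold are all illustrative assumptions.
    import math
    from collections import Counter

    STOP_WORDS = {"the", "a", "an", "to", "of", "so", "i", "my"}   # illustrative stop-word list

    def canonicalize(tokens, canonical_map):
        # Map related surface forms onto one canonical term (e.g., "mice" -> "mouse",
        # or "him" -> "dog" after coreference resolution). Building canonical_map is
        # assumed to happen elsewhere and is not shown here.
        return [canonical_map.get(tok.lower(), tok.lower()) for tok in tokens]

    def tf_idf_filter(sentences, min_score=0.05):
        # sentences: list of token lists (one list per transcript sentence).
        # Keep only terms whose tf-idf score within their sentence meets a threshold.
        doc_freq = Counter()
        for tokens in sentences:
            doc_freq.update(set(tokens))
        n = len(sentences)
        filtered = []
        for tokens in sentences:
            tf = Counter(tokens)
            filtered.append([
                tok for tok in tokens
                if tok not in STOP_WORDS
                and (tf[tok] / len(tokens)) * math.log(1 + n / doc_freq[tok]) >= min_score])
        return filtered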
  • Table 1 illustrates an example transcript of a content item. As seen in Table 1, the transcript may contain a list of sentences (column 2). Each sentence may be associated with a unique label/index (column 1) and a time code that represents when the sentence occurs in the content item (column 3).
  • TABLE 1
    S_1   Yesterday, I took my dog to the park.                        0:35
    S_2   While there, I took him off the leash to get some exercise.  0:39
    S_3   After 2 minutes, Spot began chasing a squirrel.              0:48
    S_4   I needed to go grocery shopping.                             1:01
    S_5   So I went later that day to the local store.                 1:04
    S_6   Unfortunately, they were out of cashews.                     1:11
  • With the transcript successfully obtained and processed, the calculation of the relevance intervals can proceed. As discussed above with respect to step 302 of FIG. 3, each word or phrase appearing in the transcript can be processed to identify the relevance interval for the word. A word's relevance interval can identify one or more portions of the show (e.g., sentences or time before and after the word's appearance in the transcript) that are contextually relevant to the word. For example, a particular relevance interval may identify any number of sentences within the transcript that are relevant to a word. For instance, the occurrence of the word "dog" in a television program transcript may, via the relevance interval calculation, be determined to be relevant to the sentence immediately previous to the occurrence of "dog" and the two sentences immediately following the occurrence of "dog." In other words, the relevance interval for the word "dog" identifies that "dog" is relevant to the previous sentence in the transcript and the two sentences following the occurrence of "dog." To calculate each relevance interval, a computer (e.g., server 107) may, for example, perform the steps illustrated in FIG. 4.
  • FIG. 4 is a flowchart illustrating an example method for calculating a relevance interval for a selected word, term and/or phrase. The method may be performed or repeated for each word or term in the content item's transcript. Relevance intervals for individual terms can be generated prior to handling the description, and without prior knowledge of the expected themes in the show. At step 401 of FIG. 4, the example method of calculating begins with initializing the relevance interval of the selected term (e.g., the first word in the first sentence of a transcript). For example, a term's initial relevance interval may simply be one sentence—the sentence in which the term appears. In some embodiments, initializing the relevance interval may include assigning it information relating to the sentence of the current term (e.g., assigning the current sentence index (e.g., an identifier or handle used to reference the sentence) and/or the time in the content item where the current sentence occurs).
  • Through the subsequent steps of the relevance interval calculation, the relevance interval may be modified to include a listing of other sentences or times that are relevant to the selected term. For example, the determination of the relevance interval for the selected term may be performed by traversing down the transcript (and up) to compare other sentences in the transcript with the selected term, and determining how contextually close (or contextually similar) those other sentences are to the selected term and/or to other terms in the sentence(s) where the selected term occurs. This can be done, for example, by doing a term-for-term comparison, successively comparing the current term with each term in the other sentence.
  • At step 403, it is determined whether the end of the transcript (or portion of content) has been reached (e.g., whether the last sentence of the transcript has been considered). If the process is not at the end of the transcript, then the method proceeds to step 405 and the next sentence is retrieved.
  • At step 407, upon retrieving the next sentence of the transcript, it is determined whether the next sentence is relevant to the current term. This determination can be performed in various ways. For example, the relevancy may be determined based on a comparison between the current term and each of the words in the next sentence. Similarity strengths for these term-by-term comparisons can be obtained from a similarity measurement database or other knowledge base, and totaled, averaged, or otherwise combined to yield an overall similarity strength between the current term and the next sentence. As one illustrative example, the selected word (e.g., “dog”) can be compared with other terms appearing in the next sentence by forming a similarity measurement between the word “dog” and each term appearing in the next sentence, and deeming the next sentence relevant if the similarity measurement is sufficiently strong.
  • Similarity measurements may identify relationship strengths between pairs of words, and depending upon the type and other characteristics of the content item, take into account contextual information present in the transcript, as well as other contextual information drawn from an analysis of the content item. The types of contextual information used can vary and may depend upon the type and/or the characteristics of the content item (e.g., relevance intervals for a written document, a video with spoken words, a video with sung words, etc., may be calculated using different types of contextual information). For example, contextual information can include the clustering of the occurrences of information representations; the natural breaks in the media, such as paragraphs, changes in slides, verbal pauses, or speaker changes; natural language processing data, such as parts of speech, grammatical structure, multi-word expressions, named entities, and references; semantic information, such as synonyms, classifications, similarities, concepts related by knowledge based connections, and topic similarity; logical structure based upon transition words, prosodic cues, visual layout, surrounding formatting data (e.g., surrounding HTML), topic flow, and statistically measured continuity of topic; and other extracted data.
  • The similarity measurements can be used to determine whether the next sentence is relevant to the selected word. For example, continuing the above example, the comparison of the word “dog” to terms in the next sentence may include the calculation and/or retrieval of similarity measurements between “dog” and any (or all) of the other terms of the next sentence. In one example, the next sentence is deemed relevant if one or more of the retrieved similarity measurements are above a certain threshold. Various threshold comparison schemes can be used (e.g., determine whether one measurement is above a threshold value, determine whether two or more measurements are “strong,” determine whether an average strength value exceeds a threshold value, etc.). In other arrangements, the next sentence may be relevant if the cumulative measurement is above a threshold.
  • The similarity measurements may be represented by statistical measurements of term co-occurrence in corpus data. In one example, a large amount of data may be selected as the corpus, such as, for example, a representative sample of sentences or newspaper articles. This corpus may then be processed to create the statistical measurements indicating a strength of the statistical relationship between finding the occurrence of one word (e.g., “dog”) and also finding an occurrence of another word (e.g., “leash”). One can expect to find a stronger statistical relationship between some words (e.g., “dog” and “leash”) than others (e.g., “dog” and “office”).
  • One statistical measurement of term co-occurrence in corpus data may be a pointwise mutual information score that is calculated between a pair of terms (e.g., between a first term, A, and a second term, B, of the corpus data, where term A and term B form the pair of terms). One way to calculate a pointwise mutual information score is to, for each pair of terms, create a table that tabulates the number of times that each term occurs in the corpus and the number of times one term of a pair occurs within a predetermined distance of the other term of that pair. In other words, this table records the number of times that two terms co-occur within a window of constant size in the corpus. Using the terms "dog", "leash" and "cubicle" as an example, "dog" may occur 10,000 times in a corpus of 10 million words, "leash" may occur 500 times in that same corpus, and "dog" and "leash" may occur 100 times within 10 terms of each other. "Cubicle" may also occur 100 times in the corpus, but "dog" and "cubicle" may occur only 8 times within 10 terms of each other. From this data, the pointwise mutual information scores between pairs of terms can be calculated. One formula for calculating the pointwise mutual information score may be:
      • Let PMI(S,P) be the pointwise mutual information score between term S and term P. Let N be the number of words in the corpus. Let #S be the number of occurrences of term S in the corpus, let #P be the number of occurrences of term P in the corpus, and let #S&P be the number of times term S occurs within K terms of term P. Then PMI(S,P)=log2((N×#S&P)/(#S×#P)).
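  • Applying the formula to the example counts above gives a stronger score for "dog"/"leash" than for "dog"/"cubicle", as expected. The short Python sketch below simply evaluates the formula; it is an illustration, not a required implementation.
    import math

    def pmi(count_s, count_p, count_s_and_p, corpus_size):
        # PMI(S, P) = log2((N * #S&P) / (#S * #P)), using the counts defined above.
        return math.log2((corpus_size * count_s_and_p) / (count_s * count_p))

    N = 10_000_000                      # corpus of 10 million words
    print(pmi(10_000, 500, 100, N))     # "dog"/"leash":   log2(200), approximately 7.64
    print(pmi(10_000, 100, 8, N))       # "dog"/"cubicle": log2(80),  approximately 6.32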
  • Measures of association or similarity between terms can also be calculated using other similarity measurement techniques, including, for example, Dice coefficient, chi-square statistics, log-likelihood ratios, WordNet-based similarity, Wikipedia Miner similarity, Google similarity distance, etc.
  • In some embodiments, similarity measurements may be stored in a knowledge base (e.g., database). The similarity measurement database may contain, for example, a large listing of terms, and for each term, the database may list a number of contextually-related terms along with a strength value for the match. So, for example, an entry for the word “dog” may list “park”, “bone”, “leash” and “owner” (among others), and may include a strength value for the contextual relationship between “dog” and each of the listed words. The strength value can be stronger for words like “leash” and “bone”, since those terms quite often relate to dogs, while the strength value for words like “park” and “owner” may be lower, since those terms are often used in other contexts that don't involve dogs. Any desired type of similarity measurement database may be used.
  • When comparing the relevance of the next sentence to the current term, a particular statistical measurement may be retrieved from the database by providing a query including the two terms of the desired similarity measurement. For example, the query could include the term under consideration (e.g., the current term), and a term from the next sentence (e.g., a query including “dog” and “leash” would return the similarity measurement between those terms).
  • Another way to measure the relevancy is to compare other (or all) words in the sentence of the current term with words of the next sentence. This process can include gathering the similarity measurements between any (or all) of the terms in the two sentences. Once the similarity measurement(s) are gathered, the similarity strength between the two sentences may be calculated, for example, by summing and/or taking the product of the gathered similarity measurements, and then normalizing the resulting value. In some instances, outlying (e.g., extreme) values may be discarded before performing the summation and/or product. Additionally, the similarity measurements may be normalized before performing the summation and/or product.
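  • A minimal sketch of the sentence-to-sentence comparison described above, assuming the similarity knowledge base is a Python dictionary keyed by unordered term pairs; the mean is used as the combination here, though summing or taking a product are equally valid choices, and the threshold in the usage example is hypothetical.
    def sentence_similarity(current_terms, other_terms, similarity_db, drop_outliers=False):
        # Gather pairwise similarity strengths between the two sentences from a
        # knowledge base mapping term pairs to strengths, then combine them.
        scores = [similarity_db.get(frozenset((a, b)), 0.0)
                  for a in current_terms for b in other_terms if a != b]
        if drop_outliers and len(scores) > 2:
            scores = sorted(scores)[1:-1]          # discard the extreme values
        return sum(scores) / len(scores) if scores else 0.0

    # Hypothetical knowledge base and threshold:
    kb = {frozenset(("dog", "leash")): 0.8, frozenset(("dog", "park")): 0.4}
    relevant = sentence_similarity(["dog"], ["took", "him", "off", "leash"], kb) > 0.15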
  • As another example, the relevance may be determined based on an analysis of the terms of the sentences. In one example, the relevancy may be based on a pronoun resolution determination (e.g., if the sentences are about the same noun, the sentences may be relevant). In one instance, a first sentence may include the noun "Spot" and the second sentence may include the pronoun "he" which may be resolved to be referring to "Spot." These sentences may be deemed relevant, since the pronoun in the subsequent sentence refers back to the proper noun in the prior sentence. In another example, the relevancy may be based on the occurrence of a discourse phrase that indicates a continuation or the ending of an interval (e.g., the phrase "In addition" may indicate a continuation, while the phrase "In other news" may indicate that the interval ends).
  • If the sentence is not relevant to the current term, the method can return to step 403, to determine if this sentence is the end of the transcript, and to continue with checking the next sentence in the transcript. Alternatively, if the next sentence is not deemed relevant, then the system can conclude that the term's relevance in this direction has ended, and the process can cease the downward traversal, and proceed to step 411 for the upward traversal through the transcript. Additionally and/or alternatively, the process may proceed to step 411 if a threshold number of sentences have been determined to be not relevant. In one example, the process may proceed to step 411 if a threshold number of consecutive sentences has been determined to be not relevant (e.g., if the previous sentence and the current sentence are not relevant, proceed to step 411). Otherwise, the process may proceed to step 403. In another example, a counter may be incremented for every non-relevant sentence and if the counter reaches a threshold (e.g., 4, etc.), then the process proceeds to step 411. Otherwise, the process may proceed to step 403.
  • On the other hand, if the sentence was determined to be relevant, the method can proceed to step 409, where that sentence is added to the current term's relevance interval. To add the sentence to the relevance interval, the relevance interval is modified with the appropriate information of the sentence. For example, the relevance interval for the current term may be a list of sentences, sentence labels/indexes, time periods, some combination thereof, etc. associated with that current term. In one example, if the interval for a term is stored as a list having copies of relevant sentences, then the terms of the next sentence may be added to the relevance interval list for the current term. In another example, if the relevance interval for a term is stored according to sentence labels/indexes (e.g., a relevance interval spanning sentence 1, sentence 2 and sentence 3, may be described by a start value of 1 and an end value of 3), the appropriate index value can be updated (e.g., start value may be decremented, end value may be incremented, etc.) to list the relevant sentences according to their indices. In a third example, if the relevance interval is stored as a time value (e.g., time period within a piece of content), then the appropriate start or end value of the relevance interval may be augmented by adding the time values for the next sentence (e.g., making the end time of the next sentence the end time of the interval). Table 2, found below, and FIG. 5 illustrate example relevance intervals.
  • At step 410, a determination is made whether to continue calculating the current relevance interval. In some embodiments, the method may proceed to step 403 if the next sentence was added to the relevance interval in step 409. Otherwise, if the next sentence was not added, the method may proceed to step 411. In other embodiments, the method may always proceed to step 403. In yet other embodiments, the method may proceed to step 411 after two or more consecutive sentences have not been added to the current relevance interval (allowing for short gaps in conversation topic without cutting short the relevance interval).
  • If, at step 403, it is determined that the end of the transcript has been reached, the method proceeds to step 411, where the sentence of the current term is located (similar to step 401). At step 413, it is then determined whether this location is the beginning of the transcript (since the upward traversal would end at the beginning of the transcript). If this location is the beginning of the transcript, then the method ends. Otherwise, the method proceeds to step 415, where the previous sentence in the transcript is retrieved. At step 417, upon retrieving the previous sentence, it is determined whether the previous sentence is relevant to the current term, which may be performed similarly to the determination of step 407. If the previous sentence is relevant, the previous sentence is added to the relevance interval at step 419. Otherwise, the method may proceed directly to step 413 to determine whether this previous sentence is the beginning of the transcript. At step 420, another determination is made whether to continue calculating the current relevance interval. This determination may be similar to the determination made at step 410.
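  • One possible reading of the FIG. 4 flow is sketched below in Python. The relevance test, the gap threshold, and the sentence data layout are assumptions; the step numbers in the comments refer to the flowchart described above.
    def relevance_interval(term, term_index, sentences, is_relevant, max_gap=1):
        # sentences: list of (sentence_id, start_time, end_time, tokens) tuples.
        # is_relevant(term, tokens) implements the relevance test of steps 407/417
        # (e.g., the similarity-threshold comparison described above).
        start = end = term_index                # step 401: interval begins as the term's own sentence
        gap = 0
        for i in range(term_index + 1, len(sentences)):        # downward traversal (steps 403-410)
            if is_relevant(term, sentences[i][3]):             # step 407
                end, gap = i, 0                                # step 409: extend the interval
            else:
                gap += 1
                if gap > max_gap:                              # step 410: stop after a short gap
                    break
        gap = 0
        for i in range(term_index - 1, -1, -1):                # upward traversal (steps 411-420)
            if is_relevant(term, sentences[i][3]):             # step 417
                start, gap = i, 0                              # step 419: extend the interval
            else:
                gap += 1
                if gap > max_gap:                              # step 420
                    break
        return {"term": term,
                "sentence_begin": sentences[start][0], "sentence_end": sentences[end][0],
                "time_begin": sentences[start][1], "time_end": sentences[end][2]}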
  • Accordingly, the process illustrated by FIG. 4 may be repeated until the relevance intervals for the desired terms (e.g., all terms, phrases, or canonical terms or phrases in the transcript) have been calculated. Once calculated and/or processed (e.g., post-processed, as described below), the relevance intervals and other data related to the relevance intervals may be stored in a database (e.g., relevance interval database 102 of FIG. 1).
  • Additionally, in some arrangements, the relevance intervals may be subject to additional, or post-processing. In some embodiments utilizing post-processing, the relevance intervals may be indexed for easier retrieval. For example, words may appear multiple times in a document and, thus, may also have multiple relevance intervals. To facilitate quicker retrieval of the relevance intervals, the relevance intervals for a transcript may be indexed by the term for which the relevance intervals were calculated. The relevance intervals may be searched by term to retrieve any and/or all of the relevance intervals associated with that term. In other embodiments utilizing post-processing, the relevance intervals may be merged. For example, if two relevance intervals for a term are sufficiently close (e.g., the end of the first relevance interval lies within a predetermined number of sentences from the start of the second relevance interval), the two relevance intervals may be merged.
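  • A brief sketch of the indexing and merging post-processing described above, assuming intervals are dictionaries with numeric times in seconds (the field names and the 5-second merge threshold are illustrative):
    from collections import defaultdict

    def index_by_term(relevance_intervals):
        # Index the calculated relevance intervals by term so that all intervals
        # for a term can be retrieved with a single lookup.
        index = defaultdict(list)
        for interval in relevance_intervals:
            index[interval["term"]].append(interval)
        return index

    def merge_close_intervals(intervals, max_gap_seconds=5):
        # Merge two intervals for the same term when the second begins within
        # max_gap_seconds of the first ending (the threshold is an assumption).
        merged = []
        for interval in sorted(intervals, key=lambda iv: iv["time_begin"]):
            if merged and interval["time_begin"] - merged[-1]["time_end"] <= max_gap_seconds:
                merged[-1]["time_end"] = max(merged[-1]["time_end"], interval["time_end"])
            else:
                merged.append(dict(interval))
        return merged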
  • Table 2 illustrates three example relevance intervals that may be calculated from the transcript illustrated in Table 1. As shown in Table 2, each relevance interval is listed with the term for which it was calculated, the sentence in which the term occurred, the start and end sentences of the interval (e.g., the sentence boundaries), and the start and end times of the interval within the content item (e.g., the start and end time-codes of the sentences in the content item).
  • TABLE 2
    Term      Term Sentence   Sentence Begin   Sentence End   Time Begin   Time End
    Dog       S_1             S_1              S_3            0:35         1:01
    Leash     S_2             S_1              S_3            0:35         1:01
    Exercise  S_2             S_2              S_2            0:39         0:48
  • FIG. 5 is a graphical illustration of relevance intervals. Specifically, the example of FIG. 5 is a graphical illustration of the relevance intervals included in Table 2. As seen in FIG. 5, the relevance interval for "Dog" is interval 501, which spans three sentences (S_1, S_2 and S_3) and 26 seconds of the program (from the 0:35 mark to the 1:01 mark); the relevance interval for "Leash" is interval 503; and the relevance interval for "Exercise" is interval 505. Each illustrated relevance interval spans a particular time of the content item and/or sentences of the transcript. The illustration of FIG. 5 provides indications of the sentences of the transcript 507 and the time-codes of the content item 509.
  • As discussed above with respect to step 303 of FIG. 3, the system may receive information identifying different narrative themes. In one or more embodiments, this information may be represented as a textual description. The description may include different terms and/or sentences that are meant to describe different narrative themes and/or portions of the content item. A narrative theme or portion of the description may be meant to summarize and/or otherwise encapsulate related events within the content item. For example, a narrative theme may describe a plot line, plot point, a plot device, or other narrative theme of a content item, such as a television show, song, movie, etc. (e.g., a first portion of a “60 Minutes” episode called “Congress's New Tax Proposal,” a second portion called “The Killer Who Got Away,” and a third portion called “The NFL's Mystery Man”). As another example, a narrative theme may describe an emotional theme of a content item, or other thematic characteristic of the content item. In addition, a narrative theme can relate to marketing, advertisement, educational, or other themes.
  • It is noted that what can be considered a narrative theme of a content item (e.g., the plot lines of the content item, etc.) can vary. Thus, the received description may take various forms. For example, the description of a television episode may be supplied by one or more viewers (e.g., a social networking site, or other website, may allow individual users to post their own summaries of the television episode). Because each viewer is free to describe the piece of content (e.g., an episode of a show) as they interpret it, their description may differ from the descriptions of other viewers, even when they are describing similar events of the television episode. Descriptions could also be supplied by a content supplier or provider. In one instance, a description supplied by a content provider may be a summary for inclusion into an electronic programming guide, or some other description associated with an existing service of the content provider. Just as descriptions among different viewers may describe a content item differently, descriptions supplied by a content supplier may also differ from those supplied by a viewer. Therefore, a wide range of possible descriptions could be received for a content item, and different sets of relevance intervals and theme segments can be identified using the different descriptions.
  • Table 3 illustrates a sample textual description that includes one or more narrative themes of a content item. Specifically, the example description includes narrative themes for a television show.
  • TABLE 3
    Character A changes her name. Character B searches for a job.
    Characters C and D look for a house.
  • As illustrated in Table 3, the textual description includes three sentences. In the illustrated description, each sentence describes a plot line for the television show. To separate each plot line, for example, the illustrated text could be parsed by sentence. In some arrangements, the description could be processed differently than just by sentence to identify the themes or theme sentences. For example, processing of the description could be based on other (or additional) grammatical or structural cues of the text (e.g., parsed based on punctuation, such as commas, semi-colons, etc.). Upon separating the description into the different theme sentences, each sentence may optionally be given its own unique index.
  • Table 4 illustrates an example of a textual description that has been processed into one or more themes. As seen in Table 4, each theme includes a single sentence of the descriptive text illustrated in Table 3. The first sentence is the first theme, the second sentence is the second theme, and the third sentence is the third theme. Each theme has been given its own unique index (e.g., T_1, T_2, T_3).
  • TABLE 4
    T_1 Character A changes her name.
    T_2 Character B searches for a job.
    T_3 Characters C and D look for a house.
  • With each theme identified, the system can seek to find where the terms of each theme appear in the transcript and/or identify the relevance intervals for those terms. As discussed above in connection with step 305 of FIG. 3, a total interval is determined for each theme. For example, the relevance intervals of each word in a theme description can be combined into a composite relevance interval (e.g., a total interval), which can represent an interval for the theme.
  • FIG. 6 is a flowchart illustrating an example method for determining an interval for a theme based on the relevance intervals for each word in the theme. In some embodiments, this method may be part of an iterative process that is repeated for each theme determined from the textual description (e.g., the themes identified at step 304 of FIG. 3).
  • As seen at step 601 of FIG. 6, the relevance intervals for each term in the theme are retrieved (e.g., from the relevance interval database 102 of FIG. 1). In some embodiments, only the relevance intervals with terms matching each term will be retrieved (e.g., via a query requesting all relevance intervals for the term “dog”). In other embodiments, relevance intervals having terms that are similar to one of the theme's terms may be retrieved. For example, each term in the theme can be processed through a thesaurus to identify synonyms for each term. Upon retrieving the synonyms, all relevance intervals that match the current term and the synonyms of the current term can be retrieved (e.g., via one or more queries requesting relevance intervals for the word “leash” and the synonyms of “leash” such as “cord” and/or “chain”, etc.).
  • FIG. 7A is a graphical illustration showing an example of relevance intervals that have been retrieved based on terms in a theme. In the example illustrated in FIG. 7A, the relevance intervals for each term of a theme have been retrieved from a database storing the relevance intervals for the content item. The illustrated theme includes three terms, Term A (e.g., "Character A" of theme T_1 of Table 4, which is treated by the system as a single word), Term B (e.g., "changes" of thematic element T_1 of Table 4), and Term C (e.g., "name" of thematic element T_1 of Table 4). As seen in the illustration, FIG. 7A displays relevance intervals 701 corresponding to only a portion of the content item. Specifically, as seen by line 703 of FIG. 7A, the Figure illustrates relevance intervals 701 that begin after 1:00 and end before 5:00 of the content item, while those occurring, beginning, and/or ending outside this portion of the content item are not shown. For illustration purposes, relevance intervals 701 have been illustrated in three portions. The first portion 701 a contains the relevance intervals retrieved based on Term A, the second portion 701 b contains the relevance intervals retrieved based on Term B, and the third portion 701 c contains the relevance intervals retrieved based on Term C. Thus, relevance intervals 705 a-705 i are illustrated in FIG. 7A. The top of a relevance interval represents the beginning of the relevance interval. The bottom of the relevance interval represents the end of the relevance interval. For example, relevance interval 705 a, which is a relevance interval retrieved based on Term A, begins at approximately 1:15 of the content item and ends at approximately 1:45 of the content item.
  • Referring again to FIG. 6, at optional step 603, the relevance intervals retrieved based on the terms in the theme can be filtered. The relevance intervals may be filtered using various techniques, some of which include filtering the relevance intervals based on one or more characteristics of the relevance intervals. For example, the relevance intervals may be filtered based on the size (or length) of each relevance interval (“size filtering”). In one instance, any relevance interval that has a size less than a predetermined size (e.g., a size of 5 seconds, etc.) may be removed from the retrieved set of relevance intervals (e.g., remove relevance intervals 705 b, 705 f and 705 i of FIG. 7A from relevance intervals 701). The size of a relevance interval may be the time spanned by the relevance interval (e.g., the difference between the end of the relevance interval and the beginning of the relevance interval). Alternatively, the size of a relevance interval may be the number of sentences spanned by the relevance interval.
  • As another example, the relevance intervals may be filtered based on a comparison between different relevance intervals. Comparison filtering may take the form of overlap filtering and/or adjacency filtering. Overlap filtering may include determining that multiple relevance intervals overlap with each other in the content, and removing relevance intervals that do not overlap (e.g., remove relevance interval 705 i from relevance intervals 701 of FIG. 7A, but not 705 g because 705 g overlaps with 705 c). Additionally, overlap filtering may remove any relevance interval that does not overlap with a threshold number of other relevance intervals (e.g., in an example with a threshold of 2, remove relevance intervals 705 i and 705 c from relevance intervals 701 of FIG. 7A, because 705 i does not overlap with any other relevance interval and 705 c only overlaps with 705 g but not a second relevance interval).
  • Adjacency filtering may include determining whether another interval is near a particular relevance interval, and removing the relevance interval if there are no nearby relevance intervals. For example, determining whether another interval is near may include comparing the start and end times of the relevance intervals to determine if at least one relevance interval ends or begins within a threshold time of the start and end time of the particular relevance interval (e.g., remove relevance interval 705 i from relevance intervals 701 of FIG. 7A, but not 705 f, because the start of interval 705 f is sufficiently near to the end of interval 705 e).
  • As yet another example, relevance intervals may be filtered using a hybrid technique. A hybrid technique can be a combination of different filtering techniques, such as a combination of size filtering and comparison filtering, as described above. In one instance, relevance intervals may be filtered based on a sliding scale between relevance interval size and the relevance interval comparison. In one sliding scale technique, the larger the size of the relevance interval, the less the relevance interval needs to overlap or be near another relevance interval (e.g., intervals 705 f and 705 g of FIG. 7A are candidates for removal because they only overlap with one other relevance interval, but relevance interval 705 f is removed from relevance intervals 701, while 705 g remains, because 705 g is larger). Similarly, as a relevance interval overlaps with, or is near, a greater number of relevance intervals, the smaller the threshold size may become (e.g., intervals 705 b, 705 f and 705 i of FIG. 7A are candidates for removal because their size is less than a threshold value, but intervals 705 f and 705 i are removed from relevance intervals 701, while 705 b remains, because 705 b overlaps with two other relevance intervals).
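  • A simple sketch of one possible hybrid filter combining the size, overlap, and adjacency checks described above. Intervals are (start, end) pairs in seconds; the minimum size and adjacency gap are illustrative thresholds, and the keep-if-long-enough-or-supported rule is one of many hybrids consistent with the description.
    def overlaps(a, b):
        # a and b are (start, end) times in seconds.
        return a[0] < b[1] and b[0] < a[1]

    def near(a, b, gap=10):
        # True if the two intervals begin/end within `gap` seconds of one another.
        return abs(a[0] - b[1]) <= gap or abs(b[0] - a[1]) <= gap

    def filter_retrieved_intervals(intervals, min_size=5, gap=10):
        # Keep an interval if it is long enough, or if it overlaps with or is
        # near at least one other retrieved interval.
        kept = []
        for i, interval in enumerate(intervals):
            long_enough = (interval[1] - interval[0]) >= min_size
            supported = any(overlaps(interval, other) or near(interval, other, gap)
                            for j, other in enumerate(intervals) if j != i)
            if long_enough or supported:
                kept.append(interval)
        return kept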
  • FIG. 7B is a graphical illustration showing an example of relevance intervals retrieved for each term of a theme after they have been filtered. Specifically, FIG. 7B shows the relevance intervals of FIG. 7A after filtering. The results of the filtering process are illustrated in FIG. 7B as filtered relevance intervals 707. Filtered relevance intervals 707 are illustrated in three portions: first portion 707 a, second portion 707 b, and third portion 707 c. As compared to relevance intervals 701 of FIG. 7A, only relevance interval 705 i was removed. One filtering technique that may produce the filtered relevance intervals 707 illustrated in FIG. 7B includes a hybrid technique that filters based on a combination of three techniques: size filtering, overlap filtering and adjacency filtering. Using this hybrid technique, relevance intervals 705 b, 705 f and 705 i are candidates for removal because of their size. Relevance interval 705 b remains because it overlaps with 705 e and 705 f; relevance interval 705 f remains because it is near 705 e. All other relevance intervals (e.g., relevance intervals 705 a, 705 c, 705 d, 705 e, 705 g, and 705 h) remain because they are of a sufficient size and/or overlap and/or are near a sufficient number of other relevance intervals.
  • Referring again to FIG. 6, at step 605, the retrieved (and possibly filtered) relevance intervals may be merged or otherwise combined. Relevance intervals can be merged using various techniques. For example, the relevance intervals may be merged based on overlap between relevance intervals. In one instance, a relevance interval is merged into another relevance interval if the two relevance intervals overlap (e.g., relevance intervals 705 a and 705 d of FIG. 7A may be merged into a single interval, etc.).
  • As another example, the relevance intervals may be merged based on the adjacency of the relevance intervals (e.g., relevance interval 705 f and 705 e of FIG. 7A may be merged into a single interval, etc.). A relevance interval can be merged into another relevance interval if the two relevance intervals are sufficiently near each other (e.g., relevance interval 705 e may be merged with 705 f of FIG. 7A because the end time of 705 e is within a threshold number of seconds to the start time of 705 f). In some arrangements, merging relevance intervals based on the adjacency of the matching relevance intervals may include merging the adjacent intervals such that the resulting merged interval spans from the beginning of the earliest relevance interval to the end of the later relevance interval (e.g., from the beginning of 705 e of FIG. 7A to the end of 705 f). This may cause the merged relevance interval to include the gap between the adjacent relevance intervals (e.g., the merged relevance interval includes the gap from the end of 705 e of FIG. 7A to the beginning of 705 f).
  • Additionally, the merging process may continue by merging a merged relevance interval with another relevance interval. For example, two relevance intervals may be merged (e.g., relevance interval 705 a and 705 d of FIG. 7A are merged). Then, the merged relevance interval and a third relevance interval may be merged (e.g., the merged interval of 705 a and 705 d is merged again with interval 705 h).
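  • A minimal sketch of the overlap-and-adjacency merging just described, producing a composite (total) interval that may consist of several disjoint sub-intervals. Intervals are (start, end) pairs in seconds and the 10-second adjacency gap is an assumed threshold.
    def merge_into_composite(intervals, gap=10):
        # Combine the retrieved (and possibly filtered) intervals for all of a
        # theme's terms: intervals that overlap or lie within `gap` seconds of
        # each other are merged, including any short gap between them.
        merged = []
        for start, end in sorted(intervals):
            if merged and start <= merged[-1][1] + gap:
                merged[-1][1] = max(merged[-1][1], end)     # extend the current sub-interval
            else:
                merged.append([start, end])                 # start a new sub-interval
        return [tuple(sub) for sub in merged]

    # e.g., merge_into_composite([(75, 105), (100, 130), (138, 160)], gap=10)
    # returns [(75, 160)]: the overlap and the 8-second gap are both absorbed.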
  • FIG. 7C is a graphical illustration showing an example of relevance intervals that have been merged. As illustrated in FIG. 7C, relevance intervals 701 of FIG. 7A have been merged into merged relevance intervals 709. As seen in FIG. 7C, relevance interval 711 a and relevance interval 705 i are part of merged relevance intervals 709. Compared to the intervals illustrated in FIG. 7A, relevance interval 705 i was not merged with any relevance interval. Relevance interval 711 a is a merged relevance interval. One merging technique that may produce relevance interval 711 a includes a hybrid technique that merges based on a combination of two techniques: overlap merging and adjacency merging. With respect to relevance intervals 701 of FIG. 7A, relevance intervals 705 a-705 h are merged to create relevance interval 711 a because all intervals of 705 a-705 h overlap with or are near another interval (e.g., interval 705 a, 705 d and 705 h all overlap with each other and they may be merged) and/or an intermediate merged interval (e.g., interval 705 c overlaps with the merged interval that results from merging intervals 705 e and 705 f, and they may be merged). Relevance interval 705 i is not merged with another relevance interval, because interval 705 i does not overlap with any other relevance interval (or any resulting merged relevance interval) and is not sufficiently near any other relevance interval (or any resulting merged relevance interval).
  • Referring again to FIG. 6, the relevance intervals resulting from the merging at step 605 can be treated as the total interval for the current theme. Accordingly, in some arrangements, the total interval for a theme depends on whether the relevance intervals retrieved based on the terms of a theme have been filtered and/or merged. If neither filtering nor merging was performed, the total interval may include all relevance intervals retrieved based on the theme's terms (e.g., relevance intervals 701 of FIG. 7A). If filtering was performed, the total interval for the theme may include the relevance intervals retrieved based on the theme's terms, minus some intervals that were filtered out (e.g., the filtered relevance intervals 707 of FIG. 7B). If merging was performed, the total interval for the theme may include the merged relevance intervals (e.g., the merged relevance intervals 709 of FIG. 7C). If filtering and merging were performed, the total interval of the theme may include the relevance intervals that result from filtering and merging the relevance intervals retrieved based on the terms of the theme (not shown).
  • As discussed above, the process illustrated in FIG. 6 may be repeated iteratively (e.g., for each theme identified from a textual description). FIG. 8A is a graphical illustration showing example total intervals for different themes. Specifically, FIG. 8A illustrates the total intervals for three different themes (e.g., themes T_1, T_2, T_3 of Table 4). The first total interval 801 is for the first theme (e.g., T_1 of Table 4), the second total interval 803 is for the second theme (e.g., T_2 of Table 4), and the third total interval 805 is for the third theme (e.g., T_3 of Table 4). As seen in the illustration, FIG. 8A displays the total intervals 801, 803, 805 for only a portion of a content item. Specifically, as seen by line 807 of FIG. 8A, the portion(s) of the total intervals that begin after 0:00 and end before 8:00 of the content item are illustrated, while those occurring/beginning/ending outside this portion of the content item are not shown. Thus, sub-intervals 801 a-d, 803 a-c, and 805 a-c are illustrated in FIG. 8A.
  • In some arrangements, the total intervals (e.g., intervals 801, 803, 805 of FIG. 8A) may be subject to further processing. For example, the total intervals for the themes may be filtered. The total intervals may be filtered using techniques similar to those described above with respect to filtering the relevance intervals of a single theme (e.g., size filtering, adjacency filtering, overlap filtering, hybrid filtering, etc.). Similarly, the total intervals may also be filtered using a modified technique (e.g., modified overlap filtering, etc.). In one example, modified overlap filtering may include determining intervals from the total intervals that overlap and then removing the smallest of any overlapping interval. In another example, modified overlap filtering may include determining intervals from the total intervals that overlap and then filtering based on the size of the overlap and/or the size of the relevance interval. In one such instance, the larger a relevance interval, the greater the overlap with another interval is required to be before it is filtered (e.g., a sub-interval of a total interval with a size of 30 seconds may need to overlap for greater than 20 seconds with another sub-interval before it will be removed, while a sub-interval with a size of 10 seconds may need to overlap for greater than 2 seconds with another sub-interval for it to be removed from the total intervals). With respect to the illustrated example of FIG. 8A, sub-interval 801 c may be removed based on its overlap with 805 c, while 805 b, 803 c and 801 d all remain. Such overlap filtering may be performed because themes of some content items may be unlikely to overlap.
  • Additionally or alternatively, the total intervals may be filtered based on the content item. In one example, the total interval may be filtered based on the location (e.g., time, sentence, etc.) of a sub-interval with respect to the content item ("location filtering"). In another example, the location of a sub-interval may determine how aggressively the filtering constraints are applied. Thus, in one example, relevance intervals located later in the content item may be filtered less aggressively than relevance intervals located earlier in the content item (e.g., overlapping intervals are not removed if they occur later in the content item, but are removed if they occur earlier in the content item). With respect to the illustrated example of FIG. 8A, any sub-interval occurring after 4:00 of the content item may not be removed (e.g., sub-intervals 801 b, 801 c, 801 d, 803 b, 803 c and 805 c are not removed because they occur after 4:00), while any sub-interval occurring before 4:00 will be subject to particular filtering constraints, such as overlap filtering, size filtering, etc. Such techniques may be performed because some content items may have converging themes towards the end of the content item.
  • The total intervals may also be filtered based on the themes. For example, the segments of relevance may be filtered based on the relatedness of the themes to each other (“relatedness filtering”). In one example, a first theme may be compared to a second theme to determine a relatedness score based, for example, on how closely related the terms are to one another in the respective theme descriptions (in other words, based on how different the themes are from one another—if themes are closely related, then the merging of relevance intervals to determine whether a sentence belongs to a first or second theme may require a higher relevance score between the term and neighboring terms that are deemed to be in the theme). Upon determining the relatedness score, total intervals of the themes may be filtered based on the relatedness score. In one instance, the magnitude of the relatedness value determines how aggressively the total intervals are filtered (e.g., a higher relatedness value would cause less to be filtered, a lower relatedness value would cause more to be filtered, etc.). In one example, the relatedness value between two themes may include calculating a similarity measurement (e.g., pointwise mutual information (PMI), WordNet-based similarity measures, Wikipedia Miner similarity measures, some combination thereof, etc.) that measures how inter-related the terms of the themes are to each other (these measurements may be combined into a single measurement). Additionally or alternatively, the relatedness value and/or the similarity measurements may be normalized (e.g., normalized to be within a range from zero to one), and/or bounded (e.g., set to zero if negative, set to one if greater than one, etc.). The magnitude of the similarity measurement may determine whether certain filtering techniques will be applied between the two total intervals. For example, if the similarity measurement is less than a threshold value, then any overlapping sub-intervals may be filtered based on size (e.g., the smallest of the overlapping sub-intervals will be removed, any overlapping sub-interval less than a threshold size will be removed, any sub-interval that is completely overlapped by a larger sub-interval will be removed, etc.). As another example, if the similarity measurement is greater than the threshold, then no further filtering may be performed (e.g., overlapping sub-intervals are not removed). With respect to the illustrated example of FIG. 8A, if a similarity measurement of the themes for total intervals 801 and 803 are less than a threshold, 801 b may be removed because it overlaps with 803 b. If the similarity measurement of the themes for total intervals 801 and 803 is greater than a threshold, 801 b may remain despite its overlap with 803 b. Filtering techniques similar to these may be performed because themes that are related may overlap in some content items.
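  • One possible sketch of relatedness filtering, under the assumption that the relatedness score has already been computed (e.g., a normalized PMI-style measure over the theme terms) and that the 0.5 threshold is illustrative; overlaps() is the helper from the filtering sketch shown earlier.
    def relatedness_filter(intervals_a, intervals_b, relatedness, threshold=0.5):
        # If the two themes are closely related (relatedness at or above the
        # threshold), overlapping sub-intervals are left in place; otherwise the
        # smaller member of each overlapping pair is dropped.
        if relatedness >= threshold:
            return intervals_a, intervals_b
        keep_a = [a for a in intervals_a
                  if not any(overlaps(a, b) and (a[1] - a[0]) < (b[1] - b[0]) for b in intervals_b)]
        keep_b = [b for b in intervals_b
                  if not any(overlaps(a, b) and (b[1] - b[0]) < (a[1] - a[0]) for a in intervals_a)]
        return keep_a, keep_b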
  • FIG. 8B is a graphical illustration showing example total intervals after filtering has been performed on the sub-intervals. As illustrated, the total intervals of FIG. 8A (e.g., total intervals 801, 803, 805 of FIG. 8A) have been filtered into the filtered total intervals of FIG. 8B (e.g., intervals 811, 813, 815 of FIG. 8B). One filtering technique that may produce the filtered total intervals 811, 813, 815 as illustrated in FIG. 8B includes a hybrid technique that filters based on location of the sub-intervals and uses a modified overlapping filtering process. Using this hybrid technique, sub-intervals 803 a and 805 a remain because they do not overlap any sub-interval; sub-intervals 801 a and 803 b remain because they were greater than any sub-interval that they overlapped with (e.g., 801 a overlapped with 805 b of FIG. 8A; 803 b overlapped with 801 b of FIG. 8A; sub-interval 801 a is greater than 805 b; and sub-interval 803 b is greater than 801 b); and sub-intervals 801 c, 801 d, 803 c, and 805 c remain because, even though they overlap with other sub-intervals, they all are located later in the content item (e.g., after 5:00 of the content item) and, therefore, are not removed.
  • With the total interval determined for each theme, data can be stored to identify the themes and the intervals of each theme with respect to the content item. In other words, data can be stored that identifies segments of the content item by theme. As discussed above with respect to step 306 of FIG. 3, storing data to identify the themes and the intervals of each theme may include creating and/or storing a data structure that has a record for each theme, and in each record there would be descriptive text for that theme (e.g., the words from the textual description corresponding to the theme), and data identifying the sentences/times in the content item that are in that theme (e.g., the start and end times of the sub-intervals from the total intervals corresponding to that theme). In one example, a data structure that identifies the themes and the intervals of each theme can include data representing the total intervals (and/or the sub-intervals) illustrated in FIG. 8B. With respect to a data structure based on the total intervals of FIG. 8B, the data structure could include the words of the first theme ("Character A changes her name") and data fields that include the start and/or end times (and/or start/end sentences) of each sub-interval in the corresponding total interval (e.g., total interval 811, which includes sub-intervals 801 a, 801 c and 801 d). The data structure could also include similar data for the second theme and the third theme. The data structure may also include other data such as data identifying the content item. Another example would be a data structure including data representing the total intervals (and/or sub-intervals) illustrated in FIG. 8A.
  • The stored data may be part of a searchable index (e.g., segment database 103 of FIG. 1) that allows searching for segments of content items based on certain queries (e.g., a query for segments matching, or similar to, a particular theme; a query for segments matching, or similar to, a particular word; etc.). For example, the system may receive a query for segments similar to a particular theme (e.g., "Character A changes her name"). The index may be searched and segments of content items that match or are similar to the particular theme may be identified from the stored data structure. Continuing the above example, since the data structure includes records for a theme matching "Character A changes her name", such as a record including data based on the total interval graphically illustrated in FIG. 8A by total interval 801, these records would be found by the search. Additionally, if other records exist in the data structure that match, or are similar to the theme, they would also be found by the search (e.g., segments from a different content item having the same or a similar theme, etc.). The found data records can be used to supply a user with segments of the content item. In one instance, if the content item is a video, portions of video corresponding to the start and end times may be transmitted (e.g., the video corresponding to the start and end times of sub-intervals 801 a, 801 c and 801 d of FIG. 8B). The segments may be transmitted as a continuous video stream to a requesting user (e.g., a video where all segments addressing a given topic, across multiple news television programs, are played consecutively in a single on-demand stream). In another instance, a list identifying the segments may be transmitted that allows a user to select particular portions to view and/or store (e.g., a user may view a program guide listing television programs, enter a search query for a particular topic/theme, view results responsive to the search that identify different segments of one or more television programs, and select at least one segment for viewing).
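  • A minimal sketch of one way such an index could be queried, assuming theme records are stored as dictionaries; the word-overlap matching and the threshold are simplifications, and a deployed system could instead rank candidates with the similarity measurements described above.
    def search_segments(theme_records, query, min_word_overlap=2):
        # Return the indexed theme records whose descriptive text shares at least
        # min_word_overlap words with the query.
        query_words = set(query.lower().split())
        return [record for record in theme_records
                if len(query_words & set(record["description"].lower().split())) >= min_word_overlap]

    results = search_segments(
        [{"description": "Character A changes her name", "intervals": [(75, 160)]}],
        "Character A changes her name")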
  • In some embodiments, the searchable index may be included as part of another service (e.g., a video-on-demand service, a web and/or Internet service, etc.) that utilizes a search function, allowing a user to conduct a search for requested themes (e.g., a user could indicate a desire to see the story arc involving Jin and Sun from all seasons of the television show "Lost", and be presented with a single stream containing snippets with those characters, such as segments indexed with a theme including an identification of "Jin" or "Sun"). In yet another instance, portions of content items corresponding to the theme of a content item that is currently being consumed by a user can be retrieved and supplied to the user device. In one such embodiment, the user may indicate their wish to view additional information about a portion of the current content item (e.g., a user may press an appropriate button on a remote control). The associated command may then be communicated to the content supplier which causes a theme for the current content item (e.g., current television program) to be determined (e.g., the theme of the current segment of the current television program). Upon determining the theme of the current content item, other portions of other content items may then be retrieved based on the determined theme and subsequently supplied to the user device for consumption by the user. For example, the user may be watching a television program and the current segment of the television program may be "The Building of the Eiffel Tower." One or more other segments from different programs may be retrieved and transmitted to the user having a label that is the same as, or similar to, "The Building of the Eiffel Tower" (e.g., "Eiffel Tower," "Building of the Statue of Liberty," etc.). Additionally, the retrieved portions may be transmitted with additional information, such as text describing the theme, the time-code of the segment's position in the content item, etc. Such additional information may be displayed to a user upon consumption of the segment and/or transmitted information.
  • In a second example, the searchable index may be used by an advertising system. In one instance, the advertising system may insert advertisements into content items based on the themes of a content item and/or an advertisement. In one embodiment, the advertising system may insert an advertisement into the content item near a segment (e.g., before the segment, after the segment, in a transition between the segment and another segment, etc.). In some arrangements the advertisement may be thematically similar to the theme of that segment (e.g., a mystery plotline segment may be thematically similar to an advertisement parodying a mystery plotline, etc.). For example, the system may insert advertisements near segments based on whether the segment and an advertisement have the same or similar themes (e.g., a segment having a theme related to the “Eiffel Tower” may cause the system to insert advertisements for travel-related activities, such as hotels, airlines, travel booking services, etc.).
  • In another example, the themes of the content items that have been transmitted to a user device may be monitored to determine the consumption habits of the user (e.g., to determine one or more themes that are often included in content items viewed by the user). In one instance, if the system determines that a user typically views certain themes (e.g., travel-related themes, such as by watching programs having themes related to the “Eiffel Tower” or “Japan” or “flight”), thematically similar content items may be transmitted to the user device as a recommendation to that user (e.g., content items having themes similar to “Eiffel Tower” or “Japan” or “flight” may be transmitted to the user as a recommendation for viewing). In another instance, the monitored viewing habits may be used by an advertising system (e.g., targeting advertisements to the user based on viewing habits, etc.).
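One simple way to model this kind of habit tracking, sketched here with hypothetical names (update_profile, recommend) and a plain word-frequency profile standing in for any particular recommendation algorithm:

from collections import Counter

def update_profile(profile, watched_theme):
    # Tally the theme words of each content item the user has viewed.
    profile.update(watched_theme.lower().split())

def recommend(profile, catalog, top_k=3):
    # Score catalog items by how many of the user's most frequent theme words they share.
    frequent = {word for word, _ in profile.most_common(10)}
    scored = []
    for content_id, theme in catalog.items():
        overlap = len(frequent & set(theme.lower().split()))
        if overlap:
            scored.append((overlap, content_id))
    return [content_id for _, content_id in sorted(scored, reverse=True)[:top_k]]

profile = Counter()
for theme in ["Eiffel Tower", "Japan travel", "flight to Tokyo"]:
    update_profile(profile, theme)

catalog = {
    "doc_paris": "Building the Eiffel Tower",
    "doc_trains": "Japan by rail",
    "sitcom_01": "Character A changes her name",
}
print(recommend(profile, catalog))  # travel-related items rank ahead of the unrelated one

The same profile could just as easily feed an ad-targeting step instead of (or in addition to) the recommendation step shown here.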
  • Although specific examples of carrying out the features above have been described, those skilled in the art will appreciate that there are numerous variations and permutations of the above-described systems and methods that are contained within the spirit and scope of the invention as set forth in the appended claims. For example, instead of calculating relevance intervals for an entire transcript (as seen above at step 302 of FIG. 3), alternative embodiments could calculate the relevance intervals on an as-needed basis. Some embodiments may process the textual description of a content item to identify the themes and calculate relevance intervals from the transcript as part of the process for determining the total interval for each theme. For each term in a theme, the transcript could be searched for occurrences of the term. When an occurrence is found in the transcript, the relevance interval for that occurrence could be calculated (e.g., using the method of FIG. 4). Similarly, the relevance interval for other occurrences in the transcript could be calculated. This could be repeated for each term in a theme (and then repeated again for all terms in the other themes).
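The following sketch illustrates that as-needed ordering only; it assumes a word-level timestamped transcript, uses hypothetical names (occurrences, relevance_interval, intervals_for_theme), and substitutes a fixed dialog window for the actual per-occurrence relevance-interval computation (e.g., the method of FIG. 4), which is not reproduced here:

def occurrences(transcript, term):
    # transcript: list of (timestamp_in_seconds, word) pairs from the dialog.
    return [i for i, (_, word) in enumerate(transcript) if word.lower() == term.lower()]

def relevance_interval(transcript, position, window=5):
    # Placeholder for the per-occurrence relevance-interval computation: here the
    # interval is simply a fixed window of dialog around the occurrence, returned
    # as (start_time, end_time).
    lo = max(0, position - window)
    hi = min(len(transcript) - 1, position + window)
    return transcript[lo][0], transcript[hi][0]

def intervals_for_theme(transcript, theme_terms, cache=None):
    # Compute relevance intervals on demand: a term's intervals are calculated the
    # first time the term is needed for a theme, then reused for later themes.
    cache = {} if cache is None else cache
    result = []
    for term in theme_terms:
        if term not in cache:
            cache[term] = [relevance_interval(transcript, pos)
                           for pos in occurrences(transcript, term)]
        result.extend(cache[term])
    return result

transcript = [(i * 2.0, w) for i, w in enumerate(
    "the tower was built in paris and the tower opened in 1889".split())]
print(intervals_for_theme(transcript, ["tower", "paris"]))

Because computed intervals are cached per term, a term shared by several themes is processed only once, which is the main practical benefit of deferring the computation until it is needed.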
  • Additionally, numerous other embodiments, modifications and variations within the scope and spirit of the appended claims will occur to persons of ordinary skill in the art from a review of this disclosure.

Claims (20)

1. A method comprising:
determining, by a computing device and for each term selected from a dialog of a content item, a relevance interval for the term by identifying one or more portions of the dialog that are relevant to the term based on a similarity measurement for a first term in the one or more portions, the first term being different from the term, wherein the determining results in relevance intervals for terms selected from the dialog;
determining, from a textual description that indicates one or more narrative themes of the content item, a first plurality of terms that indicate a first narrative theme of the one or more narrative themes;
determining relevance intervals for the first plurality of terms by searching, based on the first plurality of terms, the relevance intervals for the terms selected from the dialog;
determining, based on the relevance intervals for the first plurality of terms, a first composite relevance interval that indicates one or more first portions, of the content item, that are relevant to the first narrative theme;
receiving, after storing the first composite relevance interval as part of a searchable index, a query indicative of criteria for searching the searchable index;
determining, based on the criteria and the searchable index, the first composite relevance interval; and
causing, based on the query, display of at least one of the one or more first portions.
2. The method of claim 1, wherein the similarity measurement comprises a statistical measurement of term co-occurrence in corpus data, wherein a transcript of the content item comprises the dialog, and wherein the method further comprises:
receiving the transcript of the content item; and
storing the relevance intervals for the terms selected from the dialog in a database;
wherein the textual description comprises a user-provided description of the content item;
wherein the searching, based on the terms indicative of the first narrative theme, the relevance intervals for the terms selected from the dialog comprises determining that one of the terms selected from the dialog matches one of the first plurality of terms.
3. The method of claim 1, further comprising:
determining, from the textual description, a second plurality of terms that indicate a second narrative theme of the one or more narrative themes;
determining relevance intervals for the second plurality of terms by searching, based on the second plurality of terms, the relevance intervals for the terms selected from the dialog;
determining a second composite relevance interval by removing a fourth relevance interval from the relevance intervals for the second plurality of terms or by merging a fifth relevance interval from the relevance intervals for the second plurality of terms with a sixth relevance interval from the relevance intervals for the second plurality of terms; and
after determining the first composite relevance interval, filtering the first composite relevance interval by determining that a first sub-interval of the first composite relevance interval overlaps with a second sub-interval of the second composite relevance interval and by removing the first sub-interval from the first composite relevance interval.
4. The method of claim 3, further comprising:
after determining the relevance intervals for the first plurality of terms, filtering the relevance intervals for the first plurality of terms;
after determining the relevance intervals for the second plurality of terms, filtering the relevance intervals for the second plurality of terms;
after determining the relevance intervals for the first plurality of terms, merging the relevance intervals for the first plurality of terms;
after determining the relevance intervals for the second plurality of terms, merging the relevance intervals for the second plurality of terms;
after determining the second composite relevance interval, filtering the second composite relevance interval;
after determining the first composite relevance interval, merging the first composite relevance interval; and
after determining the second composite relevance interval, merging the second composite relevance interval.
5. The method of claim 1, wherein determining the first composite relevance interval is performed by removing a first relevance interval from the relevance intervals for the first plurality of terms or by merging a second relevance interval from the first plurality of terms with a third relevance interval from the relevance intervals for the first plurality of terms, and wherein the method further comprises:
preparing the searchable index to associate the first composite relevance interval with the first narrative theme; and
wherein the query indicates the first narrative theme based on the first narrative theme being associated with content currently being viewed by a user.
6. The method of claim 5, wherein determining the first composite relevance interval is performed by:
removing the first relevance interval from the relevance intervals for the first plurality of terms based on a size of the first relevance interval, an overlap of the first relevance interval with another relevance interval of the relevance intervals for the first plurality of terms, an adjacency of the first relevance interval with another relevance interval of the relevance intervals for the first plurality of terms, or a content item location of the first relevance interval.
7. The method of claim 5, wherein determining the first composite relevance interval is performed by:
merging the second relevance interval with the third relevance interval based on an overlap of the second relevance interval with the third relevance interval or an adjacency of the second relevance interval with the third relevance interval.
8. The method of claim 1, wherein each of the one or more narrative themes indicates one of a plurality of plot lines for the content item, wherein the first narrative theme indicates a first plot line of the plurality of plot lines, and wherein the criteria indicates the first plot line.
9. The method of claim 1, wherein each of the one or more narrative themes indicates one of a plurality of news segments for the content item, wherein the first narrative theme indicates a first news segment of the plurality of news segments, and wherein the criteria indicates the first news segment.
10. The method of claim 1, wherein the criteria indicates a content consumption habit of a user.
11. A method comprising:
determining, by a computing device and for each term selected from a dialog of a content item, a relevance interval for the term by identifying one or more portions of the dialog that are relevant to the term based on a similarity measurement for a first term in the one or more portions, the first term being different from the term, wherein the determining results in relevance intervals for terms selected from the dialog;
receiving a summary of plot lines for the content item;
determining, from the summary of the plot lines, a first plurality of terms that indicate a first plot line of the plot lines;
determining relevance intervals for the first plurality of terms by searching, based on the first plurality of terms, the relevance intervals for the terms selected from the dialog;
determining, based on the relevance intervals for the first plurality of terms, a first composite relevance interval that indicates one or more first portions, of the content item, that are relevant to the first plot line;
receiving, after storing the first composite relevance interval as part of a searchable index, a query indicative of criteria for searching the searchable index;
determining, based on the criteria and the searchable index, the first composite relevance interval; and
causing, based on the query, display of at least one of the one or more first portions.
12. The method of claim 11, wherein the receiving the summary of the plot lines is performed based on receiving electronic program guide (EPG) data associated with the content item, wherein the EPG data comprises the summary of the plot lines.
13. The method of claim 11, wherein the receiving the summary of the plot lines is performed based on accessing a website that indicates the summary of the plot lines.
14. The method of claim 11, wherein the receiving the summary of the plot lines is performed based on downloading the summary of the plot lines from a web server.
15. The method of claim 11, wherein the criteria indicates a content consumption habit of a user.
16. The method of claim 11, wherein determining the first composite relevance interval is performed by removing a first relevance interval from the relevance intervals for the first plurality of terms or by merging a second relevance interval from the first plurality of terms with a third relevance interval from the relevance intervals for the first plurality of terms, and wherein the method further comprises:
preparing the searchable index to associate the first composite relevance interval with the first plot line; and
wherein the criteria is based on content currently being viewed by a user.
17. The method of claim 11, further comprising:
determining, from the summary of the plot lines, a second plurality of terms that indicate a second plot line of the plot lines;
determining relevance intervals for the second plurality of terms by searching, based on the second plurality of terms, the relevance intervals for the terms selected from the dialog;
determining a second composite relevance interval by removing a fourth relevance interval from the relevance intervals for the second plurality of terms or by merging a fifth relevance interval from the relevance intervals for the second plurality of terms with a sixth relevance interval from the relevance intervals for the second plurality of terms; and
after determining the first composite relevance interval, filtering the first composite relevance interval by determining that a first sub-interval of the first composite relevance interval overlaps with a second sub-interval of the second composite relevance interval and by removing the first sub-interval from the first composite relevance interval.
18. A method comprising:
determining, by a computing device and for each term selected from a dialog of a content item, a relevance interval for the term by identifying one or more portions of the dialog that are relevant to the term based on a similarity measurement for a first term in the one or more portions, the first term being different from the term, wherein the determining results in relevance intervals for terms selected from the dialog;
receiving a textual summary of the content item;
determining, from the textual summary, a first plurality of terms that indicate a first theme of the content item;
determining relevance intervals for the first plurality of terms by searching, based on the first plurality of terms, the relevance intervals for the terms selected from the dialog;
determining, based on the relevance intervals for the first plurality of terms, a first composite relevance interval that indicates one or more first portions, of the content item, that are relevant to the first theme;
receiving, after storing the first composite relevance interval as part of a searchable index, a query indicative of criteria for searching the searchable index;
determining, based on the criteria and the searchable index, the first composite relevance interval; and
causing, based on the query, display of at least one of the one or more first portions.
19. The method of claim 18, wherein the receiving the textual summary of the content item is performed based on receiving electronic program guide (EPG) data associated with the content item, wherein the EPG data comprises the textual summary.
20. The method of claim 18, further comprising:
determining, from the textual summary, a second plurality of terms that indicate a second theme of the content item;
determining relevance intervals for the second plurality of terms by searching, based on the second plurality of terms, the relevance intervals for the terms selected from the dialog;
determining a second composite relevance interval by removing a fourth relevance interval from the relevance intervals for the second plurality of terms or by merging a fifth relevance interval from the relevance intervals for the second plurality of terms with a sixth relevance interval from the relevance intervals for the second plurality of terms; and
after determining the first composite relevance interval, filtering the first composite relevance interval by determining that a first sub-interval of the first composite relevance interval overlaps with a second sub-interval of the second composite relevance interval and by removing the first sub-interval from the first composite relevance interval.
US16/552,375 2011-08-02 2019-08-27 Segmentation Of Video According To Narrative Theme Pending US20200192935A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/552,375 US20200192935A1 (en) 2011-08-02 2019-08-27 Segmentation Of Video According To Narrative Theme

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/195,944 US10467289B2 (en) 2011-08-02 2011-08-02 Segmentation of video according to narrative theme
US16/552,375 US20200192935A1 (en) 2011-08-02 2019-08-27 Segmentation Of Video According To Narrative Theme

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/195,944 Continuation US10467289B2 (en) 2011-08-02 2011-08-02 Segmentation of video according to narrative theme

Publications (1)

Publication Number Publication Date
US20200192935A1 true US20200192935A1 (en) 2020-06-18

Family

ID=46754874

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/195,944 Active 2034-02-24 US10467289B2 (en) 2011-08-02 2011-08-02 Segmentation of video according to narrative theme
US16/552,375 Pending US20200192935A1 (en) 2011-08-02 2019-08-27 Segmentation Of Video According To Narrative Theme

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/195,944 Active 2034-02-24 US10467289B2 (en) 2011-08-02 2011-08-02 Segmentation of video according to narrative theme

Country Status (3)

Country Link
US (2) US10467289B2 (en)
EP (1) EP2555131A3 (en)
CA (1) CA2784366A1 (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9146909B2 (en) * 2011-07-27 2015-09-29 Qualcomm Incorporated Web browsing enhanced by cloud computing
JP5853653B2 (en) * 2011-12-01 2016-02-09 ソニー株式会社 Server device, information terminal, and program
JP6074933B2 (en) * 2012-07-19 2017-02-08 沖電気工業株式会社 Video distribution apparatus, video distribution program, cache control apparatus, cache control program, video distribution system, and video distribution method
US9483463B2 (en) * 2012-09-10 2016-11-01 Xerox Corporation Method and system for motif extraction in electronic documents
US8972416B1 (en) * 2012-11-29 2015-03-03 Amazon Technologies, Inc. Management of content items
US9553922B1 (en) * 2013-05-02 2017-01-24 Twitter, Inc. Media event based social networking interfaces
US10404806B2 (en) * 2015-09-01 2019-09-03 Yen4Ken, Inc. Methods and systems for segmenting multimedia content
US10546028B2 (en) 2015-11-18 2020-01-28 International Business Machines Corporation Method for personalized breaking news feed
US9934449B2 (en) 2016-02-04 2018-04-03 Videoken, Inc. Methods and systems for detecting topic transitions in a multimedia content
US10296533B2 (en) * 2016-07-07 2019-05-21 Yen4Ken, Inc. Method and system for generation of a table of content by processing multimedia content
US10116981B2 (en) * 2016-08-01 2018-10-30 Microsoft Technology Licensing, Llc Video management system for generating video segment playlist using enhanced segmented videos
US10560734B2 (en) * 2016-08-01 2020-02-11 Microsoft Technology Licensing, Llc Video segmentation and searching by segmentation dimensions
US10708635B2 (en) * 2017-03-02 2020-07-07 Ricoh Company, Ltd. Subsumption architecture for processing fragments of a video stream
US10956495B2 (en) 2017-03-02 2021-03-23 Ricoh Company, Ltd. Analysis of operator behavior focalized on machine events
US10929685B2 (en) 2017-03-02 2021-02-23 Ricoh Company, Ltd. Analysis of operator behavior focalized on machine events
US10719552B2 (en) 2017-03-02 2020-07-21 Ricoh Co., Ltd. Focalized summarizations of a video stream
US10720182B2 (en) 2017-03-02 2020-07-21 Ricoh Company, Ltd. Decomposition of a video stream into salient fragments
US10929707B2 (en) 2017-03-02 2021-02-23 Ricoh Company, Ltd. Computation of audience metrics focalized on displayed content
US10956773B2 (en) 2017-03-02 2021-03-23 Ricoh Company, Ltd. Computation of audience metrics focalized on displayed content
US10943122B2 (en) 2017-03-02 2021-03-09 Ricoh Company, Ltd. Focalized behavioral measurements in a video stream
US10949463B2 (en) 2017-03-02 2021-03-16 Ricoh Company, Ltd. Behavioral measurements in a video stream focalized on keywords
US10956494B2 (en) 2017-03-02 2021-03-23 Ricoh Company, Ltd. Behavioral measurements in a video stream focalized on keywords
US10713391B2 (en) 2017-03-02 2020-07-14 Ricoh Co., Ltd. Tamper protection and video source identification for video processing pipeline
US10949705B2 (en) 2017-03-02 2021-03-16 Ricoh Company, Ltd. Focalized behavioral measurements in a video stream
KR102348758B1 (en) * 2017-04-27 2022-01-07 삼성전자주식회사 Method for operating speech recognition service and electronic device supporting the same
US11822591B2 (en) * 2017-09-06 2023-11-21 International Business Machines Corporation Query-based granularity selection for partitioning recordings
US11252477B2 (en) 2017-12-20 2022-02-15 Videokawa, Inc. Event-driven streaming media interactivity
US11109111B2 (en) 2017-12-20 2021-08-31 Flickray, Inc. Event-driven streaming media interactivity
US10595098B2 (en) * 2018-01-09 2020-03-17 Nbcuniversal Media, Llc Derivative media content systems and methods
US10872124B2 (en) * 2018-06-27 2020-12-22 Sap Se Search engine
US11132393B2 (en) * 2018-10-30 2021-09-28 International Business Machines Corporation Identifying expressions for target concept with images

Family Cites Families (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4227177A (en) 1978-04-27 1980-10-07 Dialog Systems, Inc. Continuous speech recognition method
US5761655A (en) 1990-06-06 1998-06-02 Alphatronix, Inc. Image file storage and retrieval system
GB9100732D0 (en) 1991-01-14 1991-02-27 Xerox Corp A data access system
US5623690A (en) 1992-06-03 1997-04-22 Digital Equipment Corporation Audio/video storage and retrieval for multimedia workstations by interleaving audio and video data in data file
US5682326A (en) 1992-08-03 1997-10-28 Radius Inc. Desktop digital video processing system
US5530859A (en) 1993-05-10 1996-06-25 Taligent, Inc. System for synchronizing a midi presentation with presentations generated by other multimedia streams by means of clock objects
US5594897A (en) 1993-09-01 1997-01-14 Gwg Associates Method for retrieving high relevance, high quality objects from an overall source
US5553281A (en) 1994-03-21 1996-09-03 Visual F/X, Inc. Method for computer-assisted media processing
US5521841A (en) 1994-03-31 1996-05-28 Siemens Corporate Research, Inc. Browsing contents of a given video sequence
US5493677A (en) 1994-06-08 1996-02-20 Systems Research & Applications Corporation Generation, archiving, and retrieval of digital images with evoked suggestion-set captions and natural language interface
US5835667A (en) * 1994-10-14 1998-11-10 Carnegie Mellon University Method and apparatus for creating a searchable digital video library and a system and method of using such a library
JP3545824B2 (en) 1995-02-21 2004-07-21 富士通株式会社 Data retrieval device
US5649182A (en) 1995-03-17 1997-07-15 Reitz; Carl A. Apparatus and method for organizing timeline data
US5729741A (en) 1995-04-10 1998-03-17 Golden Enterprises, Inc. System for storage and retrieval of diverse types of information obtained from different media sources which includes video, audio, and text transcriptions
US5799315A (en) 1995-07-07 1998-08-25 Sun Microsystems, Inc. Method and apparatus for event-tagging data files automatically correlated with a time of occurence in a computer system
US5640553A (en) 1995-09-15 1997-06-17 Infonautics Corporation Relevance normalization for documents retrieved from an information retrieval system in response to a query
US5737734A (en) 1995-09-15 1998-04-07 Infonautics Corporation Query word relevance adjustment in a search of an information retrieval system
US5742816A (en) 1995-09-15 1998-04-21 Infonautics Corporation Method and apparatus for identifying textual documents and multi-mediafiles corresponding to a search topic
US5717914A (en) 1995-09-15 1998-02-10 Infonautics Corporation Method for categorizing documents into subjects using relevance normalization for documents retrieved from an information retrieval system in response to a query
US5737495A (en) 1995-09-29 1998-04-07 Intel Corporation Method and apparatus for managing multimedia data files in a computer network by streaming data files into separate streams based on file attributes
US6374260B1 (en) 1996-05-24 2002-04-16 Magnifi, Inc. Method and apparatus for uploading, indexing, analyzing, and searching media content
US6370543B2 (en) 1996-05-24 2002-04-09 Magnifi, Inc. Display of media previews
US5765150A (en) 1996-08-09 1998-06-09 Digital Equipment Corporation Method for statistically projecting the ranking of information
US5956729A (en) 1996-09-06 1999-09-21 Motorola, Inc. Multimedia file, supporting multiple instances of media types, and method for forming same
US5928330A (en) 1996-09-06 1999-07-27 Motorola, Inc. System, device, and method for streaming a multimedia file
US20030093790A1 (en) 2000-03-28 2003-05-15 Logan James D. Audio and video program recording, editing and playback systems using metadata
US5982369A (en) 1997-04-21 1999-11-09 Sony Corporation Method for displaying on a screen of a computer system images representing search results
US5924090A (en) 1997-05-01 1999-07-13 Northern Light Technology Llc Method and apparatus for searching a database of records
US5845279A (en) 1997-06-27 1998-12-01 Lucent Technologies Inc. Scheduling resources for continuous media databases
US6463444B1 (en) 1997-08-14 2002-10-08 Virage, Inc. Video cataloger system with extensibility
US6360234B2 (en) 1997-08-14 2002-03-19 Virage, Inc. Video cataloger system with synchronized encoders
US6567980B1 (en) 1997-08-14 2003-05-20 Virage, Inc. Video cataloger system with hyperlinked output
US6055543A (en) 1997-11-21 2000-04-25 Verano File wrapper containing cataloging information for content searching across multiple platforms
US6363380B1 (en) 1998-01-13 2002-03-26 U.S. Philips Corporation Multimedia computer system with story segmentation capability and operating program therefor including finite automation video parser
WO1999041684A1 (en) * 1998-02-13 1999-08-19 Fast Tv Processing and delivery of audio-video information
US6714909B1 (en) * 1998-08-13 2004-03-30 At&T Corp. System and method for automated multimedia content indexing and retrieval
US6366296B1 (en) 1998-09-11 2002-04-02 Xerox Corporation Media browser using multimodal analysis
US7134074B2 (en) 1998-12-25 2006-11-07 Matsushita Electric Industrial Co., Ltd. Data processing method and storage medium, and program for causing computer to execute the data processing method
US6345253B1 (en) 1999-04-09 2002-02-05 International Business Machines Corporation Method and apparatus for retrieving audio information using primary and supplemental indexes
US6937766B1 (en) 1999-04-15 2005-08-30 MATE—Media Access Technologies Ltd. Method of indexing and searching images of text in video
US6675174B1 (en) 2000-02-02 2004-01-06 International Business Machines Corp. System and method for measuring similarity between a set of known temporal media segments and a one or more temporal media streams
US6578040B1 (en) * 2000-06-14 2003-06-10 International Business Machines Corporation Method and apparatus for indexing of topics using foils
US6882793B1 (en) 2000-06-16 2005-04-19 Yesvideo, Inc. Video processing system
EP1405212B1 (en) 2000-07-06 2013-12-18 Streamsage, Inc. Method and system for indexing and searching timed media information based upon relevance intervals
US7490092B2 (en) * 2000-07-06 2009-02-10 Streamsage, Inc. Method and system for indexing and searching timed media information based upon relevance intervals
US20040125877A1 (en) * 2000-07-17 2004-07-01 Shin-Fu Chang Method and system for indexing and content-based adaptive streaming of digital video content
US20030163815A1 (en) 2001-04-06 2003-08-28 Lee Begeja Method and system for personalized multimedia delivery service
GB2381688B (en) 2001-11-03 2004-09-22 Dremedia Ltd Time ordered indexing of audio-visual data
US7382933B2 (en) * 2005-08-24 2008-06-03 International Business Machines Corporation System and method for semantic video segmentation based on joint audiovisual and text analysis
US9100723B2 (en) 2006-03-07 2015-08-04 Samsung Electronics Co., Ltd. Method and system for managing information on a video recording
US8510453B2 (en) 2007-03-21 2013-08-13 Samsung Electronics Co., Ltd. Framework for correlating content on a local network with information on an external network
US8209724B2 (en) 2007-04-25 2012-06-26 Samsung Electronics Co., Ltd. Method and system for providing access to information of potential interest to a user
US8115869B2 (en) 2007-02-28 2012-02-14 Samsung Electronics Co., Ltd. Method and system for extracting relevant information from content metadata
US8732154B2 (en) 2007-02-28 2014-05-20 Samsung Electronics Co., Ltd. Method and system for providing sponsored information on electronic devices
US8863221B2 (en) 2006-03-07 2014-10-14 Samsung Electronics Co., Ltd. Method and system for integrating content and services among multiple networks
US20070214123A1 (en) 2006-03-07 2007-09-13 Samsung Electronics Co., Ltd. Method and system for providing a user interface application and presenting information thereon
US8200688B2 (en) 2006-03-07 2012-06-12 Samsung Electronics Co., Ltd. Method and system for facilitating information searching on electronic devices
US20080235209A1 (en) 2007-03-20 2008-09-25 Samsung Electronics Co., Ltd. Method and apparatus for search result snippet analysis for query expansion and result filtering
US8843467B2 (en) 2007-05-15 2014-09-23 Samsung Electronics Co., Ltd. Method and system for providing relevant information to a user of a device in a local network
US8195650B2 (en) 2007-02-28 2012-06-05 Samsung Electronics Co., Ltd. Method and system for providing information using a supplementary device
US20080221989A1 (en) 2007-03-09 2008-09-11 Samsung Electronics Co., Ltd. Method and system for providing sponsored content on an electronic device
US20080250010A1 (en) 2007-04-05 2008-10-09 Samsung Electronics Co., Ltd. Method and system for determining and pre-processing potential user queries related to content in a network
US7716361B2 (en) 2006-05-04 2010-05-11 Samsung Electronics Co., Ltd. Method and system for providing persistent media across web pages using only browser plug-ins
US20080066136A1 (en) * 2006-08-24 2008-03-13 International Business Machines Corporation System and method for detecting topic shift boundaries in multimedia streams using joint audio, visual and text cues
US8935269B2 (en) 2006-12-04 2015-01-13 Samsung Electronics Co., Ltd. Method and apparatus for contextual search and query refinement on consumer electronics devices
US20080183681A1 (en) 2007-01-29 2008-07-31 Samsung Electronics Co., Ltd. Method and system for facilitating information searching on electronic devices
US8055762B2 (en) 2007-04-16 2011-11-08 Samsung Electronics Co. Ltd Method and system for location identification
EP2164558A4 (en) * 2007-06-08 2010-08-04 Valentx Inc Methods and devices for intragastric support of functional or prosthetic gastrointestinal devices
JP5343861B2 (en) * 2007-12-27 2013-11-13 日本電気株式会社 Text segmentation apparatus, text segmentation method and program
US20090292685A1 (en) 2008-05-22 2009-11-26 Microsoft Corporation Video search re-ranking via multi-graph propagation
US8423555B2 (en) * 2010-07-09 2013-04-16 Comcast Cable Communications, Llc Automatic segmentation of video
US9483557B2 (en) * 2011-03-04 2016-11-01 Microsoft Technology Licensing Llc Keyword generation for media content
US9069850B2 (en) * 2011-11-08 2015-06-30 Comcast Cable Communications, Llc Content descriptor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8528019B1 (en) * 1999-11-18 2013-09-03 Koninklijke Philips N.V. Method and apparatus for audio/data/visual information
US6507838B1 (en) * 2000-06-14 2003-01-14 International Business Machines Corporation Method for combining multi-modal queries for search of multimedia data using time overlap or co-occurrence and relevance scores
JP2008301426A (en) * 2007-06-04 2008-12-11 Nippon Hoso Kyokai <Nhk> Featured value generating device, summary video detecting device, and program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Teufel, Simone, and Marc Moens. "Argumentative classification of extracted sentences as a first step towards flexible abstracting." Advances in automatic text summarization 155 (1999): 1-171. (Year: 1999) *
Translation of JP-2008301426-A (Year: 2008) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210256051A1 (en) * 2020-02-14 2021-08-19 Beijing Baidu Netcom Science And Technology Co., Ltd. Theme classification method based on multimodality, device, and storage medium

Also Published As

Publication number Publication date
EP2555131A2 (en) 2013-02-06
US20130036124A1 (en) 2013-02-07
EP2555131A3 (en) 2015-06-24
US10467289B2 (en) 2019-11-05
CA2784366A1 (en) 2013-02-02

Similar Documents

Publication Publication Date Title
US20200192935A1 (en) Segmentation Of Video According To Narrative Theme
US9177080B2 (en) Automatic segmentation of video
US11562737B2 (en) Generating topic-specific language models
US11197036B2 (en) Multimedia stream analysis and retrieval
US11070879B2 (en) Media content recommendation through chatbots
US9008489B2 (en) Keyword-tagging of scenes of interest within video content
US20210406306A1 (en) Content Descriptor
US8527520B2 (en) Method and system for indexing and searching timed media information based upon relevant intervals
US20080177536A1 (en) A/v content editing
JP6429382B2 (en) Content recommendation device and program
KR20080080028A (en) Method and device for extracting information from content metadata
JP2005509949A (en) Method and system for retrieving, updating and presenting personal information
Messer et al. SeeNSearch: A context directed search facilitator for home entertainment devices
EP1405212B1 (en) Method and system for indexing and searching timed media information based upon relevance intervals
KR102275095B1 (en) The informatization method for youtube video metadata for personal media production
Ariyasu et al. Message analysis algorithms and their application to social tv
Law-To et al. VOXALEADNEWS: A scalable content based video search engine
Galuščáková Information retrieval and navigation in audio-visual archives
Mochizuki et al. Re-mining Topics Popular in the Recent Past from a Large-Scale Closed Caption TV Corpus

Legal Events

Date Code Title Description
AS Assignment

Owner name: COMCAST CABLE COMMUNICATIONS, LLC, PENNSYLVANIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AMBWANI, GEETU;DAVIS, ANTHONY R.;SIGNING DATES FROM 20110728 TO 20110801;REEL/FRAME:050182/0563

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED