US20080263041A1 - Method and Apparatus for Automatic Detection and Identification of Unidentified Broadcast Audio or Video Signals - Google Patents


Info

Publication number
US20080263041A1
Authority
US
United States
Prior art keywords: unregistered, piece, programming, portions, similar
Legal status: Abandoned
Application number
US12/093,453
Inventor
Kwan Cheung
Current Assignee
Mediaguide Inc
Original Assignee
Mediaguide Inc
Application filed by Mediaguide Inc
Priority to US12/093,453
Publication of US20080263041A1
Assigned to Mediaguide, Inc. (assignor: Kwan Cheung)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 20/00 Arrangements for broadcast or for distribution combined with broadcast
    • H04H 20/12 Arrangements for observation, testing or troubleshooting
    • H04H 20/14 Arrangements for observation, testing or troubleshooting for monitoring programmes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/683 Retrieval characterised by using metadata automatically derived from the content
    • G06F 16/63 Querying
    • G06F 16/632 Query formulation
    • G06F 16/634 Query by example, e.g. query by humming
    • H04H 60/00 Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H 60/56 Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H 60/58 Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 of audio
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 Classification; Matching
    • G06F 2218/16 Classification; Matching by matching signal segments

Definitions

  • The instances collected during the self-similarity detection exercise are processed to identify harvests from the instances.
  • Suppose a commercial advertisement with a duration of 30 seconds has been repeated at five different time locations, A through E, within the harvesting period TP. The detection exercise yields bait-catch segments such as A0 (a segment in A), B0 (a segment in B), A1 (another segment in A) and C0 (a segment in C). The five instances can be conveniently represented in table form.
  • The first process is called Identification.
  • The major purpose of this process is to identify the best representation from among the multiple catches of the same clip. For example, the four clips A0, A1, A2 and A3 are multiple catches containing the same content: all of them contain the first occurrence of the advertisement.
  • The Identification process selects the most representative of these four catches.
  • The second process is the Grouping exercise, where all similar clips, in this case the five occurrences of the same advertisement, are grouped into the same group.
  • Each of the above clips is either a bait or a catch. All the clips are then compared with each other to determine whether any two overlap in time.
  • A0, A1, A2, A3 → A2 is the winner.
  • The example five clips identified in the Identification process above are then examined to determine whether they can be grouped into a single family.
  • A family is a collection of clips that passed the Sufficient-Similarity test.
  • Two clips, X and Y, are said to be sufficiently similar if both satisfy the "85% Rule":
  • The duration of the similar segment is no less than 85% of the duration of either clip. This percentage is a configuration parameter; clearly, the higher the percentage, the tighter the similarity requirement.
  • The 85% figure used by the preferred embodiment can be adjusted higher or lower depending on the application of the invention. A sketch of the test follows.
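For illustration only, the following Python sketch shows how the Sufficient-Similarity test might be applied. It assumes each clip carries its start and end times and that the duration of the matched (similar) segment is supplied by the detection stage; the names `Clip` and `sufficiently_similar` are hypothetical, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    start: float   # seconds from the start of the recording
    end: float

    @property
    def duration(self) -> float:
        return self.end - self.start

def sufficiently_similar(x: Clip, y: Clip, matched_duration: float,
                         threshold: float = 0.85) -> bool:
    """85% Rule: the duration of the similar segment must be no less than
    `threshold` times the duration of *either* clip."""
    return (matched_duration >= threshold * x.duration and
            matched_duration >= threshold * y.duration)

# Example: a 30 s ad caught twice, with 29 s of matched content.
a0 = Clip("A0", 100.0, 130.0)
b0 = Clip("B0", 700.0, 731.0)
print(sufficiently_similar(a0, b0, matched_duration=29.0))   # True
```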
  • The Grouping process can be represented as a transition of the Instance Table, starting with the Instance Table produced by the self-similarity detection exercise.
  • A family has at least two members. Again, based on the belief that a longer clip is more informative than a shorter one, the family member with the longest duration is selected as the "Lead Member" of the family.
  • The duration-ratio of the overlap between the lead member and each family member is then re-measured; members that fail the 85% Rule are discarded from the family.
  • Suppose a particular advertisement is repeated a number of times, and in some of these spins the advertisement is purposely paired back-to-back with another advertisement.
  • For example, a McDonald's special combo advertisement may, from time to time, be purposely paired with a Coca-Cola advertisement. It is often desirable to separate these catches, though they are similar, into two different families.
  • The Sufficient-Similarity Test is an effective means for separating these instances.
  • Suppose A, B, C, D and E all contain the McDonald's special combo advertisement, while B and D contain that advertisement followed by a Coca-Cola advertisement.
  • Then B and D will not be combined with A, C and E into the same family. Instead, A, C and E will be combined into one family, and B and D into another.
  • The term "membership" originates from classical set theory. For example, the number π is a member of the set of all real numbers, but not a member of the set of integers.
  • The rule, referred to as the "85% Rule", is used to determine whether two clips are sufficiently similar to be grouped into the same family. Clips that pass the condition are grouped together as family members.
  • Membership at the family level is referred to as the "First Membership".
  • The "Second Membership" refers to the grouping of families: the similarity of the representative (lead) members of two families is measured, and the two families are grouped if the similarity surpasses some prescribed threshold value. The same grouping rule used for the First Membership is applied to the Second Membership.
  • The threshold value for the Second Membership is set at 50%; that is, the 50% Rule is used for this second grouping exercise.
  • The preferred embodiment uses the 50% threshold for the Second Membership test, but this value can be adjusted up or down depending on the application of the invention.
  • The purpose of the second grouping process is to provide additional information to human operators to speed up the identification of harvests, that is, having operators determine the actual identity of content that has been harvested but not yet identified with title and publisher information.
  • Family #1 containing A, C and E.
  • Family #2 containing B and D.
  • The results of the first grouping exercise do not convey the information that Family #1 and Family #2 are similar.
  • The second grouping exercise groups both families into a common group, referred to as a "Community", and assigns a community_id, conveying that the two families are similar with respect to the 50% Rule.
  • The second grouping exercise results in "trunks", where each trunk carries a number of families, which are branches connected to the same trunk. Human operators may first run a coarse analysis on a trunk to find the common message within the entire trunk (e.g., a Discovery-Channel advertisement), then pay attention to the specifics of each family (e.g., different video programs). A sketch of this grouping follows.
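As a rough sketch of the second grouping, the following groups families into communities using a pluggable similarity test (for instance, a 50% variant of the overlap test above). The `Family` structure and `group_families` are hypothetical illustrations, not the patent's implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Family:
    family_id: int
    member_ids: List[str]   # clip ids; the first entry is the lead member

def group_families(families: List[Family],
                   similar: Callable[[Family, Family], bool]) -> Dict[int, List[int]]:
    """Second grouping: families whose lead members pass the 50% Rule are
    placed into the same community (a "trunk" carrying family "branches")."""
    communities: List[List[Family]] = []
    for fam in families:
        for community in communities:
            if similar(community[0], fam):   # compare against the trunk's first family
                community.append(fam)
                break
        else:
            communities.append([fam])
    return {cid: [f.family_id for f in comm]
            for cid, comm in enumerate(communities)}

# Example: Family #1 = {A, C, E}, Family #2 = {B, D}; a 50%-overlap test
# that deems their lead members similar merges them into one community.
f1, f2 = Family(1, ["A", "C", "E"]), Family(2, ["B", "D"])
print(group_families([f1, f2], similar=lambda x, y: True))   # {0: [1, 2]}
```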
  • The basic units of the Harvesting exercise are the bait-catch instances detected in the self-similarity detection exercise.
  • The self-similarity detection algorithm can be exercised in batch mode, where the exercise works on a particular time period NT. If NT is a very long period, the detection can instead be exercised in the time-progressive mode, in which NT is divided into smaller intervals. For example, if NT is a 12-hour period from 00:00 to 12:00, one can divide it into two 6-hour periods:
  • The non-shaded entries are the similarity information on NT_1.
  • The shaded entries are the similarity information appended after the self-similarity detection exercise on NT_2.
  • The new entries are generated by running the self-similarity detection process described above on both the old and the new undetected clips.
  • The size of the SSD is limited: in this example, the SSD on NT_2 is half the size of the SSD in the batch mode. Note also that in harvesting NT_2, all the undetected clips in NT_1 and NT_2 are used as queries to the SSD. New results in each partition are appended to the Instance Table.
  • The partition size of NT can be arbitrarily fine, as determined by the application. Partitions are also not required to be uniform; a partition can even be set for each clip. After the Instance Table has been appended with new results, both the Identification and the Grouping processes can be exercised to append new members to existing families or to identify new families, as sketched below.
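For illustration, a minimal sketch of one time-progressive step, under assumed data shapes: clips are opaque ids, the Instance Table is a plain list, and `find_matches` is a hypothetical stand-in for the Phase I/II detection machinery described below.

```python
from typing import Callable, List, Tuple

def harvest_partition(all_undetected: List[str], new_clips: List[str],
                      instance_table: List[Tuple[str, str]],
                      find_matches: Callable[[str, List[str]], List[str]]):
    """One time-progressive step (e.g. partition NT_2): the SSD is rebuilt
    from the new partition's clips only, keeping it small, while every
    undetected clip seen so far (NT_1 and NT_2) is used as a bait/query
    against it; new bait-catch pairs are appended to the Instance Table."""
    ssd = list(new_clips)            # bounded SSD: holds one partition only
    all_undetected.extend(new_clips)
    for bait in all_undetected:
        # A real matcher would skip trivial self-matches of a clip with itself.
        instance_table.extend((bait, catch) for catch in find_matches(bait, ssd))
    return instance_table

# Usage with a stub matcher that never matches anything:
table = harvest_partition(["c1", "c2"], ["c3", "c4"], [], lambda b, s: [])
```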
  • The harvester exercise on a single channel can easily be applied to harvest similar clips across different channels.
  • The process is performed as follows:
  • Clips are selected for cross-channel comparison on criteria such as: clips with similar durations; clips from stations of similar formats; clips that appeared most recently, e.g., within 24 hours.
  • The similarity information is processed via the Grouping Process with the 85% Rule to identify similar families across different channels. All similar families are combined into a combined family, and a combined-family identification number is generated for it. Each combined family consists of families from different channels, combined due to the high degree of similarity among their lead members. The combined family thus contains all the information of every constituent family: channel id, family id, time locations, and the audio-quality index of every clip of every family. A lead combined-family member is selected; again, the clip with the longest duration is chosen.
  • Channels within the same market, e.g., all radio stations in the New York market, will be selected into the cross-channel harvesting exercise.
  • The Instance-Table holds similarity information across channels in the same market.
  • Each entry may contain similarity information of multiple clips.
  • Each entry may contain similarity information of multiple clips across two markets.
  • The cross-market Instance-Table is upper-triangular over markets #1 to #N; the entry at (Market #i, Market #j), i < j, holds the similarity information across markets #i and #j:

                  Market #2                 Market #3                 . . .   Market #N
    Market #1     similarity information    similarity information    . . .   similarity information
                  across markets #1 and #2  across markets #1 and #3          across markets #1 and #N
    Market #2                               similarity information    . . .   similarity information
                                            across markets #2 and #3          across markets #2 and #N
    . . .
    Market #N−1                                                               similarity information
                                                                              across markets #N−1 and #N
  • The harvest processing stage consists of a number of steps, starting from the harvests collected on all three levels, through human listening and identification, to the end, where certain clips out of the harvests are promoted to the monitoring system, that is, fully identified and registered in the monitoring system.
  • The families of all three levels are presented to human operators for identification.
  • The system automatically selects the clip with the highest audio quality within the family and presents it to the human operator.
  • The operator identifies the clip and inputs its meta-data, including, for example, the song title, publisher and record label. If the clip is a song, its identification is made by a format specialist, who generates the title and artist information.

Abstract

A system and method of detecting unidentified broadcast electronic media content using a self-similarity technique is presented. The process and system catalogue repeated instances of content that has not been positively identified but is sufficiently similar to infer repetitive broadcasts. These catalogued instances may be further processed on the basis of different broadcast channels, sources, geographic locations of broadcasts or format, to further assist their identification.

Description

  • This application claims priority to PCT/US05/04802, filed on Feb. 16, 2004, and to U.S. Provisional Application No. 60/736,348, filed on Nov. 15, 2005, both of which are incorporated herein by reference.
  • BACKGROUND AND SUMMARY OF THE INVENTION
  • The present invention relates to a method of detecting and tracking unknown broadcast content items that are periodically encountered by automatic detection and tracking systems. It is known in the art that detection of broadcast content, for example, music broadcast over radio, includes sampling the identified content to compute numerical representations of features of the content, sometimes referred to in the art as a fingerprint, or, in the related patent application PCT/US05/04802, filed on Feb. 16, 2004, which is incorporated herein by reference, a pattern vector. These known pattern vectors are stored in a database, and while broadcast signals are received, the same computation is applied to the incoming signal. The detection process then entails searching for matches between the incoming computed pattern vectors and the vast database of pre-created pattern vectors associated with the identity of known content.
  • This system runs into problems when content that has not yet been registered in the database is broadcast anyway. In the prior art, these unknown or unmatched programming items would be ignored. This invention is directed to addressing this shortcoming by determining when a likely piece of programming content has been detected, tracking such detections, and then submitting the piece for human identification so that proper publishing or other indicia of identity can be associated with the content. The system automatically determines which portions of the broadcast signal are previously unregistered content.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic of a Radio Monitoring System.
  • FIG. 2 is an Illustration of a repetition of the same program along the time-axis.
  • FIG. 3 is a schematic of how the invention is operated.
  • FIG. 4 is the workflow of exercising the first grouping and the second grouping.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS 1. Introduction
  • Shortcoming of a Radio Monitoring System. The core of a modern radio monitoring system (see FIG. 1) is a detection system consisting of a detection algorithm and a database. The database, referred to as the detection database, is populated with identification information for the programs to be detected, e.g., songs and commercial ads. The identification information includes the fingerprint of the program (in this document, the term "pattern" or "pattern vector" will be used instead of "fingerprint"). Signals received from an electronic media broadcast, for example a radio broadcast, are processed to extract patterns at regular intervals in time. Patterns from the broadcast are compared with patterns registered in the detection database. A detection is made if a program registered in the database matches that of the broadcast. Matching includes not only identical matches but also close matches, as well as a series of matches over time that are determined to be sufficiently consistent with a specific program. The shortcoming of the system is that the detection result is determined by what is stored in the database: clearly, if a particular song is not registered in the detection database, the song will never be detected.
  • A Semi-Automatic Solution. It is a well-known fact that certain programs, particularly songs and commercial ads, are broadcast repeatedly. This repetition provides the opportunity for a detection algorithm to detect programs that have been repeated, even if they are not registered in the detection database. An important principle employed in designing the detection algorithm for repetitive programs is the self-similarity principle: if a program is broadcast at time T1 and is repeated at T2, with T2>T1, the program instances located along the time-axis at T1 and T2 are said to be identical.
  • To detect whether a program located at T1 is repeated elsewhere, we can cut out the content located at T1 and run an identification exercise to identify where along the time-axis this piece of content is repeated. We will then identify the program located at T1 if it is repeated at T2. The invention is directed to exercising the self-similarity principle in order to detect unregistered programs that have been repeated. The invention is designed to bring in all programs that have at least one repetition within a pre-determined period of time. The invention can generate results autonomously while other matching and identification processes are running. However, the actual identity of the content of each of the results is not known until it has been listened to by human operators; the listening process is necessarily manual. Alternatively, the harvested results can be converted to registered content if it turns out that a piece of content is registered whose pattern vectors sufficiently match those associated with content identified only as a unique piece of content as a result of the operation of the invention. In other words, as content is registered in the database, it can be checked to see whether it has already been detected by the invention, and the database of detections updated accordingly.
  • If the invention outputs an identifiable piece of content, referred to herein as a "harvest", such as a new release of a song not actually registered in the database, the pattern vectors of this song that have been recovered by the invention can be extracted from one of the corresponding broadcast signal clips and registered into the detection database of the media monitoring system. By definition, each of the harvests has at least two copies, because at a minimum self-similarity between two instances in time is necessary to determine the repetition that indicates the presence of an individually identifiable piece of content. Which repetition should be registered into the detection database is determined by the audio quality. The invention therefore includes a step called the Audio-Selector, which selects from all the repetitions of the harvested content the one that has the best audio quality. The pattern vectors computed from this repetition are registered in the database, typically with a unique identification number that is used in lieu of an actual title, because the title is not yet known.
  • When coupling the invention with an existing media monitoring system, which has detected and identified certain programs along the timeline, the invention can be set to identify similar programs only during time periods where the monitoring system has not otherwise detected any registered programming. In this mode, the invention is said to be "harvesting on undetected time". The other mode is to harvest on "all time", including during time periods where registered content has been identified. This approach is seldom used, as it is redundant.
  • The invention can also be exercised in two other ways, one called "batch" mode and the other "time-progressive" mode. The latter is employed by the preferred embodiment due to its more economical use of computer memory and CPU processing time; the detail of the time-progressive mode is further described below. In the batch mode, the invention is exercised periodically on a large memory of stored pattern vectors computed from incoming broadcast signals. Each time it is exercised, the invention updates the harvest with new harvests generated since the last time the invention was operated. In the time-progressive mode, the invention can be exercised at any time at will. The invention is operated to continually seek self-similar repetition among unidentified periods of time in a signal. As harvests are created, the harvested content is no longer considered unidentified for these purposes the next time the content is encountered. In addition, the time-progressive mode can be used so that the harvests from every media source can be compared, to identify content that is self-similar as compared to instances from other geographic regions or distinct media broadcast sources.
  • Operational Principle of the Harvester. The starting point of the invention is to exercise self-similarity detection on individual channels of the incoming broadcast signal. An undetected program is harvested if it has been repeated at least once over a prescribed period TP, characterized by the start time TS and the end time TE. Within TP, the invention will detect programs that are similar to each other. Each harvest is a unique program that occurred at least twice in TP. Each repetition is given an index with an "instance id". Each harvest, indexed with a "family id", has a collection of all the corresponding instance ids.
  • Self-Similar Detection. The self-similar detection algorithm is used on each individual channel to detect programs that have been repeated more than once within TP. The example presented here is for audio from a radio station, but the invention would work equally well on audio from internet, satellite or any other broadcast medium, or for other media types, including audio-visual works such as television programming. Consider a piece of audio recorded from a radio station, where the recording was started at time TS and ended at time TE. A clip referred to as a "bait" is selected from the recording and used as a reference to be matched against the entire recording, or a selected portion of it. Each clip that matches the bait (subject to certain matching criteria) is referred to as a "catch". The timing information of both the bait and each of the catches will be registered as an instance. The timing location information of every (bait, catch) pair is given an instance id.
  • The self-similarity detection is exercised iteratively: the first bait clip is selected right from the start of the recording. If there is a catch, the second bait clip is selected right after the end of the previous bait clip; otherwise, the next bait clip is started with a small time offset τ>0 from the start of the previous one. The iteration runs until the end of the recording is reached. The detection is based on the similarity of pattern vectors. A sketch of this outer loop follows.
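The iterative bait selection can be sketched as follows. This is an illustration only: `detect_catches` is a hypothetical stand-in for the pattern-vector matching described next, and the bait length is fixed here, whereas in the full algorithm the bait grows as Phase II tracking extends it.

```python
def harvest_channel(recording_len_s: float, detect_catches,
                    tau: float = 0.5, bait_len_s: float = 10.0):
    """Slide a bait over one channel's recording and log bait-catch instances.

    detect_catches(bait_start, bait_len) is assumed to return a list of
    (catch_start, catch_end) times; tau is the small offset used to advance
    the bait when nothing is caught."""
    instances = []
    t = 0.0
    while t + bait_len_s <= recording_len_s:
        catches = detect_catches(t, bait_len_s)
        if catches:
            for c_start, c_end in catches:
                instances.append({"bait": (t, t + bait_len_s),
                                  "catch": (c_start, c_end)})
            t += bait_len_s      # next bait starts right after this one
        else:
            t += tau             # small offset tau > 0, per the text
    return instances
```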
  • The calculation of pattern vectors is described below, and in further detail in the sister patent application PCT/US05/04802, filed on Feb. 16, 2004, incorporated herein by reference.
      • In the preferred embodiment, the audio sampling rate is set at 8,000 Hz. These samples are organized into time frames of 16,384 samples (a frame has a duration of about 2 seconds). Below is the procedure to generate a pattern vector.
      • Given a frame of 16,384 signal samples:

  • x = [x[1] x[2] … x[16384]]
      • Take the Fast Fourier Transform and obtain 16,384 complex FFT coefficients:

  • X = [X[1] X[2] … X[16384]]
  • Partition the 16,384 FFT coefficients into 25 subbands. These 25 subbands are a subset of the 31 subbands originally used for the monitoring system described in the sister patent application PCT/US05/04802. The following are the indices of the FFT coefficients in each subband: subband #1: 130 to 182; subband #2: 183 to 255; subband #3: 256 to 357; subband #4: 358 to 501; subband #5: 502 to 702; subband #6: 703 to 984; subband #7: 985 to 1,378; subband #8: 1,379 to 1,930; subband #9: 1,931 to 2,702; subband #10: 2,703 to 3,784; subband #11: 3,785 to 5,298; subband #12: 130 to 255; subband #13: 256 to 501; subband #14: 157 to 219; subband #15: 220 to 306; subband #16: 307 to 429; subband #17: 430 to 602; subband #18: 603 to 843; subband #19: 844 to 1,181; subband #20: 1,182 to 1,654; subband #21: 1,655 to 2,316; subband #22: 2,317 to 3,243; subband #23: 3,244 to 4,541; subband #24: 157 to 306; subband #25: 307 to 602.
  • The 25 sub-bands cover the frequencies from 63 Hz to 2,587 Hz.
  • Let Nk=number of elements in the k-th subband, k=1 to 25. Also let Mk={mk[1], mk[2], . . . , mk[Nk]} be the set containing the corresponding indices for the k-th band.
  • The first-order moment, or "centroid", of each subband is then computed; each centroid lies in the open interval (0,1). Below is the formula, where |X[mk[n]]| denotes the magnitude of an FFT coefficient and the division by Nk normalizes the centroid into (0,1):

  • c_k = (1/Nk) · ( Σ_{n=1}^{Nk} n · |X[mk[n]]| ) / ( Σ_{n=1}^{Nk} |X[mk[n]]| ), k = 1 to 25
  • The pattern vector for each frame is then:

  • c = [c1 c2 … c25]
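For illustration, here is a minimal Python sketch of the per-frame pattern-vector computation. The magnitude weighting and the 1/Nk normalization follow the reconstruction above and are assumptions; the function name is illustrative.

```python
import numpy as np

# Inclusive FFT-coefficient index ranges for the 25 subbands listed above.
SUBBANDS = [(130, 182), (183, 255), (256, 357), (358, 501), (502, 702),
            (703, 984), (985, 1378), (1379, 1930), (1931, 2702), (2703, 3784),
            (3785, 5298), (130, 255), (256, 501), (157, 219), (220, 306),
            (307, 429), (430, 602), (603, 843), (844, 1181), (1182, 1654),
            (1655, 2316), (2317, 3243), (3244, 4541), (157, 306), (307, 602)]

def pattern_vector(frame: np.ndarray) -> np.ndarray:
    """25-element pattern vector for one 16,384-sample frame: the centroid
    of the FFT magnitudes in each subband, normalized so it lies in (0, 1)."""
    spectrum = np.abs(np.fft.fft(frame))
    centroids = np.empty(25)
    for k, (lo, hi) in enumerate(SUBBANDS):
        mags = spectrum[lo:hi + 1]              # the N_k coefficients of band k
        n = np.arange(1, len(mags) + 1)
        centroids[k] = (n * mags).sum() / (len(mags) * mags.sum())
    return centroids

# One 2-second frame of audio sampled at 8 kHz:
frame = np.random.randn(16384)
c = pattern_vector(frame)
assert c.shape == (25,) and np.all((0 < c) & (c < 1))
```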
  • Extracting Pattern Vectors Using 4 Times (4×) Pattern Vector Sampling
  • Let the period TP, characterized by the start time TS and end time TE of the period, contain N samples: x[1] to x[N]. A pattern is extracted from each frame of signal: the first frame starts from the first sample of TP, x[1] to x[16384].
      • Frame to frame distance is set at 4,000 samples (0.5 second).
        • The second frame is from x[4001] to x[20384].
        • The third frame is from x[8001] to x[24384].
        • And so on for the remaining frames.
  • The first frame yields the first pattern vector, denoted as:

  • C_1 = [c_{1,1} c_{1,2} … c_{1,25}]
      • The last frame yields:

  • C_M = [c_{M,1} c_{M,2} … c_{M,25}]

  • where

  • M = lower integer of (N − 16384)/4000 + 1.
  • The start time of each frame is taken as the time location of the corresponding pattern vector. Thus, the time location for C_1 is at TS; the time location for C_2 is at TS + 4000/8000 sec = TS + 0.5 sec; and so on.
  • Self-Similarity Detection Using the One-Times (1×) Pattern Vectors as the Query. Starting from C_1, take every fourth pattern vector, C_1, C_5, C_9, …, and assign them to query vectors D: D_1 = C_1, D_2 = C_5, D_3 = C_9, and so on. The result is a total of Q query vectors, D_1 to D_Q, where Q = quotient of M/4. The time location of D_1 is that of C_1, and the time location of D_Q is that of C_{4(Q−1)+1}. These Q 1× vectors will be used one by one to query the 4× pattern vectors C_1 to C_M. A sketch of the 4× extraction and 1× decimation follows.
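The 4× framing and the 1× decimation can be sketched as follows, reusing the per-frame routine above; the helper name and array shapes are illustrative.

```python
import numpy as np

FRAME_LEN, HOP = 16384, 4000    # ~2 s frames every 0.5 s at 8 kHz (4x sampling)

def extract_pattern_vectors(x: np.ndarray, pattern_vector):
    """Return the 4x pattern vectors C, the 1x queries D (every 4th row of C),
    and the frame start times in seconds."""
    m = (len(x) - FRAME_LEN) // HOP + 1         # M = floor((N - 16384)/4000) + 1
    C = np.stack([pattern_vector(x[i * HOP : i * HOP + FRAME_LEN])
                  for i in range(m)])
    times = np.arange(m) * HOP / 8000.0         # TS + 0, 0.5 s, 1.0 s, ...
    D = C[::4]                                  # Q = quotient of M/4 query vectors
    return C, D, times

# For a 60 s recording: M = floor((480000 - 16384)/4000) + 1 = 116, Q = 29.
x = np.random.randn(8000 * 60)
C, D, t = extract_pattern_vectors(x, lambda fr: np.full(25, 0.5))
print(C.shape, D.shape)   # (116, 25) (29, 25)
```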
  • Self-Similarity Detection
  • For the purposes of this disclosure, designate C and D as the sets that hold the 4× and the 1× pattern vectors, respectively:

  • C = [C_1 C_2 … C_M], D = [D_1 D_2 … D_Q]
  • Store every 4× pattern vector into a database. We will refer to this database as the self-similar-detection database, abbreviated SSD. The index for each of these pattern vectors is the time-stamp of its frame.
  • TABLE 1
    A Self-Similarity Detection Database

    Index   1st Subband   2nd Subband   3rd Subband   . . .   25th Subband
    1       C_{1,1}       C_{1,2}       C_{1,3}       . . .   C_{1,25}
    2       C_{2,1}       C_{2,2}       C_{2,3}       . . .   C_{2,25}
    .       .             .             .             . . .   .
    M       C_{M,1}       C_{M,2}       C_{M,3}       . . .   C_{M,25}

    (The SSD holds the 4× pattern vectors C_1 to C_M; the 1× vectors D_1 to D_Q serve as the queries.)
  • The method of the invention begins with Phase I Detection: Generating Catch Threads. The process is described below using pseudocode. BlockSize is a parameter which can be freely set; in the preferred embodiment it is set to 5, corresponding to a time duration of approximately 10 seconds.
      • Set BlockSize = 5.
      • Set p = 1.
      • Loop: While p <= Q − (BlockSize − 1):
        • Read pattern vectors D_p to D_{p+BlockSize−1}. Start a new bait by using these five pattern vectors (corresponding to approximately 10 seconds of audio) as the first five 1× pattern vectors of the bait.
        • For each query D_r, r = p to p + BlockSize − 1:
          • Query the SSD with D_r.
          • Check whether the returned pattern vectors satisfy the gap requirement and the error-bound requirement.
  • Practitioners of ordinary skill will recognize that matching is not necessarily an exact match but a sufficient match. Hence, matching is determined if appropriate conditions are met between the pattern vector in the query and a pattern vector in the database. The first requirement is called the Gap Requirement.
  • Gap Requirement: A pattern vector C_s ∈ C is said to satisfy the gap requirement with respect to a query D_r if the absolute pointwise error between C_s and D_r is within some prescribed bound:

  • |c_{s,q} − d_{r,q}| ≤ g_q, q = 1 to 25, where g_q is the gap set for the q-th subband.
  • Note that the gap can be set individually for each subband. In the preferred embodiment, g_q is set to about 0.1 uniformly across all subbands. However, these parameters can be adjusted to balance false-positive identifications against false negatives, processing times and the like. For the purposes of this disclosure, let RG be the set of pattern vectors in C that satisfy the gap requirement with respect to the query D_r.
  • Error-Bound Requirement: A pattern vector C_s ∈ C satisfies the error-bound requirement if the Norm-1 error between C_s and D_r is no greater than some prescribed bound B_e:

  • Σ_{n=1}^{25} |c_{s,n} − d_{r,n}| ≤ B_e
  • In the preferred embodiment, B_e is set at about 0.8.
  • Let RE be the set of pattern vectors in C that satisfy the error-bound requirement with respect to the query D_r.
  • Let RF = RG ∩ RE. Then RF is the set of all pattern vectors in C that satisfy both the gap and the error-bound requirements. Also, let E1 be the set of corresponding Norm-1 errors. The five query patterns D_p to D_{p+4} will have, respectively, RF_p to RF_{p+4} as the pattern sets that satisfy both requirements, and E1_p to E1_{p+4} as the sets holding the corresponding Norm-1 error values. A sketch of one such query follows.
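A sketch of a single query against the SSD, applying both requirements with the preferred-embodiment constants. The (M, 25) array layout is an assumption; the function name is illustrative.

```python
import numpy as np

GAP = 0.1      # g_q, uniform across all subbands in the preferred embodiment
BOUND = 0.8    # B_e, the Norm-1 error bound

def query_ssd(ssd: np.ndarray, d: np.ndarray):
    """Return (RF, E1): the indices of database vectors satisfying both the
    gap and the error-bound requirements w.r.t. query d, and their Norm-1
    errors. `ssd` is the (M, 25) array of 4x pattern vectors."""
    abs_err = np.abs(ssd - d)                  # per-subband point errors
    gap_ok = np.all(abs_err <= GAP, axis=1)    # |c_{s,q} - d_{r,q}| <= g_q for all q
    norm1 = abs_err.sum(axis=1)                # Norm-1 error over the 25 subbands
    rf = np.flatnonzero(gap_ok & (norm1 <= BOUND))   # RF = RG intersect RE
    return rf, norm1[rf]

ssd = np.random.rand(1000, 25)
rf, errs = query_ssd(ssd, ssd[42] + 0.01)      # a near-duplicate of row 42
print(rf)                                      # almost surely just [42]
```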
  • Let RF_0 to RF_4 be the five sets of pattern vectors with respect to the querying of D_p to D_{p+4}. This step builds up "qualified threads". Here, a thread Th is a sequence of five frames:

  • Th: [F0 F1 F2 F3 F4]

  • where F0 is selected from RF_p (= RF_0), F1 from RF_{p+1}, . . . , and F4 from RF_{p+4}. The offset of the indices of two subsequent frames in the thread, e.g., F0 and F1, has to satisfy the following Sequencing-Rule:
  • Sequencing-Rule: The index of F1 has an offset between 2 and 7 relative to F0. This offset is based on the 4× sampling of pattern vectors in C and the 1× sampling of pattern vectors in D: for every advancement of one frame in D, we expect a four-frame advancement in C, so the index offset between F0 and F1 should nominally equal 4. In the preferred embodiment, a range of 2 (4−2) to 7 (4+3) is allowed for this offset for robustness.
      • Steps for the Sequencing-Rule Test:
        • Set NT = the number of elements in RF_0.
        • Each of the elements of RF_0 is the first frame of one of the NT threads.
          • Let Th_{n,q} be the q-th element, q = 0 to 4, of the n-th thread, n = 1 to NT.
        • Loop: For n = 1 to NT: (n is the thread index)
        • Loop: For q = 1 to BlockSize − 1: (q indexes the elements of the n-th thread)
          • Select from the set RF_q the members that satisfy the Sequencing-Rule relative to the element Th_{n,q−1}.
          • If there exist member(s) in RF_q that satisfy the Sequencing-Rule,
          • Select the one that has the smallest error (the error values are stored in E1_q) as the new thread element Th_{n,q}.
          • Else
          • The n-th thread is disqualified from further threading.
  • Collect all qualified threads: each qualifying thread is required to pass the Sequencing-Rule between all the BlockSize frames, as described above. As a result, all threads that fail the Sequencing-Rule between any pair of subsequent frames are disqualified. If the number of qualified threads is greater than zero, the process applies a time-restraint test to the threads.
  • In particular, the process examines the time location of each qualified thread:
      • Remove threads with a time location earlier than tm (tm ≥ 0) minutes from the time location of the bait D_p. (This step restricts the time locations of all catch threads to be at least tm minutes after the time location of the bait.) In the preferred embodiment, tm is set to 5 minutes; setting tm = 5 minutes excludes all catches with time locations within 5 minutes of the bait.
      • Remove time-overlapped threads:
        • If two threads overlap in time, remove the one that has the larger accumulated error.
      • Analyze the quality of each remaining thread with the duration-ratio test:
      • Given a thread Th_p that is still qualified, its duration-ratio is calculated with the following formula:

  • duration ratio = (16384 + (Th_{p,4} − Th_{p,0}) · 4000) / (16384 + 4 · 16000)
        • If the duration ratio is outside the interval (min_ratio, max_ratio), the thread Th_p is removed.
  • In the preferred embodiment, min_ratio is set to 0.90 and max_ratio is set to 1.10, allowing +/−10% duration variance. If the number of remaining qualified threads is greater than zero, these remaining threads are subject to the next stage of the process. A sketch of these Phase I filters follows.
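The Phase I filters (Sequencing-Rule, time restraint, duration-ratio) can be sketched as follows; here a thread is a list of five 4× frame indices [F0..F4], and `frame_time_s` is an assumed helper mapping a frame index to its start time in seconds.

```python
def qualify_threads(threads, bait_time_s, frame_time_s,
                    tm_minutes=5.0, min_ratio=0.90, max_ratio=1.10):
    """Apply the Phase I filters to candidate threads (a sketch)."""
    kept = []
    for th in threads:
        # Sequencing-Rule: adjacent frame indices offset by 2..7 (nominally 4).
        if not all(2 <= b - a <= 7 for a, b in zip(th, th[1:])):
            continue
        # Time restraint: the catch must start at least tm minutes after the bait.
        if frame_time_s(th[0]) < bait_time_s + tm_minutes * 60.0:
            continue
        # Duration-ratio test against the 5-frame bait duration.
        ratio = (16384 + (th[4] - th[0]) * 4000) / (16384 + 4 * 16000)
        if min_ratio < ratio < max_ratio:
            kept.append(th)
    return kept

# A perfectly aligned catch 500 s into the recording survives all three tests:
print(qualify_threads([[1000, 1004, 1008, 1012, 1016]],
                      bait_time_s=0.0, frame_time_s=lambda i: i * 0.5))
```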
  • Phase II Detection: Tracking on Qualified Threads. The Phase II tracking step involves tracking the remaining qualified threads. This step of the analysis is presented as pseudocode below.
      • Let the number of remaining qualified threads = Nu.
      • Denote the n-th qualified thread as Th_n.
      • Set MaxStep (an integer) = 1.
      • Set q = BlockSize − 1.
      • Set Steps[n] = 0, n = 1 to Nu.
      • Set EndRegister[n] = 1, n = 1 to Nu.
      • Set MinStep = 0.
      • Loop: While MinStep <= MaxStep:
        • Set q = q + 1.
        • Read the next query: D_{p+q} is sent to the SSD to obtain the corresponding RF_q and E1_q.
        • For loop: For n = 1 to Nu:
          • Select from the set RF_q the members that satisfy the Sequencing-Rule relative to the element Th_{n,q−1}. (The Sequencing-Rule requires two adjacent frames in the n-th thread to have an offset between 2 and 7. In this loop, the rule is generalized: the offset between two adjacent frames must be between Steps[n]·4+2 and Steps[n]·4+7.)
          • If there exists at least one member in RF_q that meets the test, select the one that has the least error (the error values are stored in E1_q) as the new thread element Th_{n,q}, and set Steps[n] = 0.
          • Else
          • Set Th_{n,q} = Th_{n,q−1}.
            • Set Steps[n] = Steps[n] + 1. (In this step, the register Steps[n] is used as a "skipped" counter. A thread may have a skip in the q-th query, i.e., there exists no frame in RF_q that satisfies the Sequencing-Rule relative to Th_{n,q−1}. Every time there is a skip, Steps[n] is incremented; as long as there is no skip, Steps[n] is reset to 0.)
          • End For loop.
        • For loop: For n = 1 to Nu:
          • If Steps[n] > MaxStep,
            • Set EndRegister[n] = q − 1. (Once the n-th thread has reached the maximum number of skips, denoted by MaxStep, the location where this occurred is marked and registered in EndRegister[n].)
        • End the For loop.
        • Compute MinStep = min(Steps[n]). (The variable MinStep is the smallest number of skips across all qualified threads. The threading continues until MinStep is larger than MaxStep, which is specified to limit the maximum number of skips.)
      • End the While loop.
  • The Phase II process now proceeds to measure the duration of each thread; the duration information of the n-th thread is registered in EndRegister[n].
      • Loop: For n = 1 to Nu:
        • Enter the four parameters that characterize the n-th thread Th_n:
        • start_time = p
        • end_time = p + EndRegister[n]
        • First_frame_id = Th_{n,0}
        • Last_frame_id = Th_{n,EndRegister[n]}
      • End the For loop.
  • Pair up the bait and every catch into an instance: every catch is paired with the bait. The start-time and end-time of the bait and of the catch, as well as the channel id (such as the channel's call letters), are logged. An instance id is generated for every bait-catch pair. An instance consists of:
  • {instance id; channel id; start-time/end-time of the bait; start-time/end-time of the catch}. A sketch of this record follows.
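As a sketch, the logged instance record might be represented as follows; the field names are illustrative, not the patent's schema.

```python
from dataclasses import dataclass

@dataclass
class Instance:
    """One bait-catch pair, with the fields listed above."""
    instance_id: int
    channel_id: str       # e.g. the channel's call letters
    bait_start: float     # start/end times of the bait clip
    bait_end: float
    catch_start: float    # start/end times of the catch clip
    catch_end: float

inst = Instance(1, "WXYZ", 120.0, 150.0, 3720.0, 3750.5)
```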
  • Once this is complete, the entire iteration of the loop beginning at Phase I is complete. In pseudocode:
      • If at least one catch was logged, set p = p + q + 1. (Start the next bait right after the end of the previous bait.)
      • Else (no qualified thread survived Phase I or Phase II), set p = p + 1.
      • End the While loop initiated at the beginning of Phase I.
  • Self-Similarity Detection on Undetected Clips Only. As mentioned above, the invention can be operated in two operational modes: harvesting on "all time" and harvesting on "undetected time". In the all-time mode, all programs within TP that have been repeated more than once within TP are detected and harvested. With a radio monitoring system in place, certain spots within TP have already been detected and identified by the monitoring system, and the invention can be exercised more effectively on clips where the radio monitoring system has made no detection. In this mode, the invention is said to be harvesting just the undetected time.
  • Running the invention on undetected time requires some modification to the process presented above. The departures from the all-time mode are as follows: the Self-Similar-Detection database (SSD) is registered with clips for which no detection was made by the monitor. The audio quality of each clip is measured, with a total of three quality scores per clip. The first is the RMS power of the clip, represented here as a vector of h samples z = [z_1 z_2 … z_h]:

  • P_z = 10 · log_10( (1/h) Σ_{n=1}^{h} z_n² ) dB
  • If P_z is below −30 dB, the audio power of the clip is too low and the clip will not be considered. The second score is the mean first-order auto-correlation of the patterns of the clip. The 4× patterns of the clip are obtained, {C_k, k = 1, 2, …, r}, and from them the 1× patterns are extracted: D_n = C_{4(n−1)+1}, n = 1, 2, …, R = quotient(r/4). The mean first-order auto-correlation is then calculated as:

  • correlation = (1/(R−1)) Σ_{n=1}^{R−1} ( Σ_{m=1}^{25} D_{n,m} · D_{n+1,m} )
  • If the mean value is higher than 0.99, the clip is believed to be contaminated with too much static; if it is lower than 0.80, the clip is believed to contain merely channel noise. In both cases the clip is unusable. These threshold numbers may be adjusted up or down to trade false-positive against false-negative identification rates as required by the specific application of the invention. The third score is the vigilance of the clip: the vigilance is a measure of clarity, characterized by the Norm-1 difference between adjacent 1× pattern vectors.
$$\text{vigilance} = \frac{1}{R-1}\sum_{n=1}^{R-1}\left(\sum_{m=1}^{25} \left|D_{n+1,m}^z - D_{n,m}^z\right|\right)$$
  • The vigilance value is required to surpass a minimum score of 0.95. This threshold is used in the preferred embodiment, but may be adjusted based on the application of the invention.
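For illustration only, the three quality checks might be sketched in Python as follows; the clip representation (raw samples z and 25-subband 1× pattern vectors D) and the function names are assumptions, while the thresholds are the ones given in the text.

```python
import math

def rms_power_db(z: list[float]) -> float:
    """P_z = 10*log10((1/h) * sum of z_n^2) in dB, for h samples."""
    h = len(z)
    return 10.0 * math.log10(sum(s * s for s in z) / h)

def mean_first_order_correlation(D: list[list[float]]) -> float:
    """Mean 1st-order auto-correlation over R pattern vectors of 25 subbands."""
    R = len(D)
    return sum(sum(D[n][m] * D[n + 1][m] for m in range(25))
               for n in range(R - 1)) / (R - 1)

def vigilance(D: list[list[float]]) -> float:
    """Mean norm-1 difference between adjacent 1x pattern vectors."""
    R = len(D)
    return sum(sum(abs(D[n + 1][m] - D[n][m]) for m in range(25))
               for n in range(R - 1)) / (R - 1)

def clip_is_usable(z: list[float], D: list[list[float]]) -> bool:
    """Apply the thresholds from the text: -30 dB power floor, correlation
    strictly between 0.80 and 0.99, and vigilance above 0.95."""
    if rms_power_db(z) < -30.0:
        return False                 # audio power too low
    c = mean_first_order_correlation(D)
    if c > 0.99:
        return False                 # contaminated with too much static
    if c < 0.80:
        return False                 # merely channel noise
    return vigilance(D) > 0.95
```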
  • The SSD is structured with the additional parametric data as follows:
    TABLE 2
    A Self-Similarity Detection Database Appended with Indices on Undetected Clips

| Undetected Clips | RMS power | Mean 1st-order correlation | Vigilance | Pattern index | 1st Subband | 2nd Subband | ... | 25th Subband |
| Clip #1 | RMS power of Clip #1 | Mean 1st-order correlation of Clip #1 | Vigilance for Clip #1 | (1, 1) | D_{1,1}^1 | D_{1,2}^1 | ... | D_{1,25}^1 |
| | | | | (1, 2) | D_{2,1}^1 | D_{2,2}^1 | ... | D_{2,25}^1 |
| | | | | ... | ... | ... | ... | ... |
| | | | | (1, M_1) | D_{M_1,1}^1 | D_{M_1,2}^1 | ... | D_{M_1,25}^1 |
| Clip #2 | RMS power of Clip #2 | Mean 1st-order correlation of Clip #2 | Vigilance for Clip #2 | (2, 1) | D_{1,1}^2 | D_{1,2}^2 | ... | D_{1,25}^2 |
| | | | | (2, 2) | D_{2,1}^2 | D_{2,2}^2 | ... | D_{2,25}^2 |
| | | | | ... | ... | ... | ... | ... |
| | | | | (2, M_2) | D_{M_2,1}^2 | D_{M_2,2}^2 | ... | D_{M_2,25}^2 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| Clip #S | RMS power of Clip #S | Mean 1st-order correlation of Clip #S | Vigilance for Clip #S | (S, 1) | D_{1,1}^S | D_{1,2}^S | ... | D_{1,25}^S |
| | | | | (S, 2) | D_{2,1}^S | D_{2,2}^S | ... | D_{2,25}^S |
| | | | | ... | ... | ... | ... | ... |
| | | | | (S, M_S) | D_{M_S,1}^S | D_{M_S,2}^S | ... | D_{M_S,25}^S |
  • The clips registered into the SSD can be arranged in the order of the time location of each clip. Thus, Clip #1 precedes Clip #2, which precedes Clip #3, and so on. The self-similarity detection algorithm then takes one clip at a time as the bait, to be matched against all later clips. Clips that failed the quality requirement are excluded from the self-similarity detection exercise. The matched instances are registered in a database (a sketch of this bait loop follows Table 3):
  • TABLE 3
    Self-Similarity Detection Information between Undetected Clips is logged in a Database.

| Clip | Clip #1 | Clip #2 | Clip #3 | ... | Clip #S |
| Clip #1 | | Instance_id + similar information between #1 and #2 | Instance_id + similar information between #1 and #3 | ... | Instance_id + similar information between #1 and #S |
| Clip #2 | | | Instance_id + similar information between #2 and #3 | ... | Instance_id + similar information between #2 and #S |
| ... | | | | ... | ... |
| Clip #S − 1 | | | | | Instance_id + similar information between #S − 1 and #S |
| Clip #S | | | | | |
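For illustration only, the bait loop just described might be sketched as follows, with clips held in time order; the passes_quality flag (assumed to be precomputed from the three scores above) and the matcher callable standing in for the self-similarity detection are assumptions.

```python
from typing import Any, Callable, Optional

def harvest_undetected(clips: list[Any],
                       matcher: Callable[[Any, Any], Optional[dict]]) -> dict:
    """Take each quality-passing clip in time order as the bait and match it
    against every later quality-passing clip; `matcher` returns similarity
    information on a match, or None."""
    instances: dict[tuple[int, int], dict] = {}
    usable = [c for c in clips if c.passes_quality]   # exclude failed-quality clips
    for i, bait in enumerate(usable):
        for j in range(i + 1, len(usable)):           # only later clips
            match = matcher(bait, usable[j])
            if match is not None:
                instances[(i, j)] = match             # one entry per bait-catch pair
    return instances
```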

    Harvesting—Mining Harvests from Similar Instances
  • The instances collected during the self-similarity detection exercise will be processed to identify harvests from the instances. Consider the following scenario, where a commercial advertisement with a duration of 30 seconds has been repeated at five different time locations within the harvesting period TP.
      • Five undetected clips, namely A, B, C, D and E, where A precedes B, B precedes C, C precedes D, and D precedes E, are checked into the SSD. Each clip contains the said commercial advertisement. The time duration of each clip is arbitrary, but all are longer than 30 seconds.
      • The self-similarity detection algorithm will then use A as the bait, resulting in four instances:
        • B0/A0, C0/A1, D0/A2 and E0/A3.
  • Here, A0, a segment in A, is the clip found to be similar to B0, a segment in B. Likewise, A1, a segment in A, is the clip found to be similar to C0, a segment in C.
      • Thus, the self-similarity detection algorithm will also yield the following instances:
        With B as the bait: C1/B1, D1/B2, E1/B3,
        With C as the bait: D2/C2, E2/C3,
        With D as the bait: E3/D3.
  • The ten instances can be compactly represented in table form:
  • The Instance Table

| Clip | A | B     | C     | D     | E     |
| A    |   | B0/A0 | C0/A1 | D0/A2 | E0/A3 |
| B    |   |       | C1/B1 | D1/B2 | E1/B3 |
| C    |   |       |       | D2/C2 | E2/C3 |
| D    |   |       |       |       | E3/D3 |
| E    |   |       |       |       |       |
      • There are a total of ten instances, containing the five repetitions of the advertisement.
        • Also note that there are overlaps among them. For example, A0, A1, A2 and A3 are the audio clips that contain the first occurrence of the advertisement, except that they differ slightly in time-offsets and time-durations.
  • Two processes are used to mine harvests from all the instances. The first process is called Identification. The major purpose of this process is to identify the best representative from the multiple catches of the same clip. For example, the four clips A0, A1, A2 and A3 are multiple catches containing the same content: all of them contain the first occurrence of the advertisement. The Identification process selects the most representative of these four catches. The second process is the Grouping exercise, where all similar clips, in this case the five occurrences of the same advertisement, are grouped into the same group.
  • Identification Process.
  • There are two steps in the Identification Process:
      • The first step is to run overlap-detection exercises to identify clips that have time overlaps.
      • The second step is to select the most representative from all overlapping clips identified in the first step.
  • The same example is used to illustrate the two steps taken in the Identification Process:
      • Collect all the clips that have been detected by the self-similarity detection algorithm within the time period TP. In the same example, there are a total of twenty clips:
      • A0, A1, A2, A3,
      • B0, B1, B2, B3,
      • C0, C1, C2, C3,
      • D0, D1, D2, D3,
      • E0, E1, E2, E3.
  • Note that each of the above clips is either a bait or a catch. All the clips are then compared with each other to determine whether each pair has a time overlap.
  • Determination of the time-overlap of two clips is given below:
      • Given two clips, say X and Y:
        • If X and Y have a time overlap AND the overlap duration is within +/−6 seconds of the duration of the shorter of the two clips, then X and Y are said to overlap.
      • If X and Y are determined to overlap, the shorter of the two clips is replaced by the longer one. This is done on the notion that the longer clip can contain more information and is therefore more representative.
  • Thus, of the four overlapping clips A0, A1, A2 and A3, only one clip will survive the Identification process. The following results are obtained after exercising the Identification process on the twenty clips:
  • A0, A1, A2, A3-------->A2 is the winner.
    B0, B1, B2, B3-------->B0 is the winner.
    C0, C1, C2, C3-------->C3 is the winner.
    D0, D1, D2, D3-------->D0 is the winner.
    E0, E1, E2, E3-------->E2 is the winner.
  • Thus, only five clips are identified from the ten instances.
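For illustration only, the two Identification steps might be sketched as follows, assuming each clip is reduced to a (start, end) pair in seconds; the +/−6-second tolerance comes from the text, while everything else is an assumed representation.

```python
def overlapped(x: tuple[float, float], y: tuple[float, float],
               tol: float = 6.0) -> bool:
    """Two clips overlap if their common time span is within +/- tol seconds
    of the duration of the shorter clip."""
    common = min(x[1], y[1]) - max(x[0], y[0])
    if common <= 0:
        return False                     # no time overlap at all
    shorter = min(x[1] - x[0], y[1] - y[0])
    return abs(common - shorter) <= tol

def identify(clips: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Keep only the longest clip of each overlap group, on the notion that
    a longer clip contains more information and is more representative."""
    survivors: list[tuple[float, float]] = []
    for clip in sorted(clips, key=lambda c: c[1] - c[0], reverse=True):
        if not any(overlapped(clip, s) for s in survivors):
            survivors.append(clip)       # longest member of its overlap group
    return survivors
```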
  • Grouping Process
  • The five example clips identified in the Identification process above are examined to determine whether they can be grouped into a single family. A family is a collection of clips that passed the Sufficient-Similarity test.
  • Sufficient-Similarity Test
  • Two clips, X and Y, are said to be sufficiently similar if both satisfy the "85% Rule": the duration of the similar segment is no less than 85% of the duration of either clip. This percentage is a configuration parameter; clearly, the higher the percentage, the tighter the similarity requirement. The 85% figure used by the preferred embodiment can be adjusted higher or lower depending on the application of the invention. To determine whether X and Y are sufficiently similar (a code sketch follows this list):
      • The similarity segment across X and Y is first identified.
      • The duration of the segment on X is measured.
      • Compute the similarity ratio, R1, of this duration to the duration of X.
      • The duration of the segment on Y is measured.
      • Compute the similarity ratio, R2, of this duration to the duration of Y.
      • X and Y are said to be sufficiently similar if min(R1, R2)≧85%.
      • Otherwise, the two clips are not sufficiently similar.
      • Group all sufficiently similar clips into a single family.
        • A family id is generated.
        • The family contains all the information of all the family members, including the channel id, time location, and audio quality measures.
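For illustration only, the 85% Rule might be sketched as below, assuming the duration of the similar segment on each clip has already been measured; the function and parameter names are assumptions.

```python
def sufficiently_similar(seg_on_x: float, dur_x: float,
                         seg_on_y: float, dur_y: float,
                         threshold: float = 0.85) -> bool:
    """85% Rule: the similar segment must cover at least `threshold` of the
    duration of each of the two clips."""
    r1 = seg_on_x / dur_x   # similarity ratio R1 on clip X
    r2 = seg_on_y / dur_y   # similarity ratio R2 on clip Y
    return min(r1, r2) >= threshold
```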
  • The Grouping process can be represented by a transition of the Instance Table. Start with the Instance Table after the self-similarity detection exercise:
  • The Instance Table

| Clip | A | B     | C     | D     | E     |
| A    |   | B0/A0 | C0/A1 | D0/A2 | E0/A3 |
| B    |   |       | C1/B1 | D1/B2 | E1/B3 |
| C    |   |       |       | D2/C2 | E2/C3 |
| D    |   |       |       |       | E3/D3 |
| E    |   |       |       |       |       |
      • Replace clips with their representative clips within the Instance Table.
  • The Instance Table after the Identification Process

| Clip | A2 | B0    | C3    | D0    | E2    |
| A2   |    | B0/A2 | C3/A2 | D0/A2 | E2/A2 |
| B0   |    |       | C3/B0 | D0/B0 | E2/B0 |
| C3   |    |       |       | D0/C3 | E2/C3 |
| D0   |    |       |       |       | E2/D0 |
| E2   |    |       |       |       |       |
  • The Cross-Similarity Table

| Clip | A | B      | C      | D      | E      |
| A    |   | R1, R2 | R1, R2 | R1, R2 | R1, R2 |
| B    |   |        | R1, R2 | R1, R2 | R1, R2 |
| C    |   |        |        | R1, R2 | R1, R2 |
| D    |   |        |        |        | R1, R2 |
| E    |   |        |        |        |        |
  • Compute the similarity ratios across clips:
      • Check whether the two ratios in each entry are above 85%. If so, the clips of that entry are said to be sufficiently similar.
        • Collect sufficiently similar clips into a single family.
        • Besides identifying family members from the Cross-Similarity Table, the following transitivity rule is also used to collect family members (a union-find sketch follows this list):
        • If X and Y are sufficiently similar, and Y and Z are also sufficiently similar, then X and Z are said to be sufficiently similar.
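The transitivity rule above turns family formation into a connected-components problem. For illustration only, a minimal union-find sketch under that reading follows; the data layout (clip indices plus pairwise sufficient-similarity edges) is an assumption.

```python
def group_into_families(n_clips: int,
                        similar_pairs: list[tuple[int, int]]) -> list[list[int]]:
    """Union-find over sufficient-similarity edges: if X~Y and Y~Z, then
    X, Y and Z land in the same family."""
    parent = list(range(n_clips))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for x, y in similar_pairs:
        parent[find(x)] = find(y)           # merge the two families

    families: dict[int, list[int]] = {}
    for i in range(n_clips):
        families.setdefault(find(i), []).append(i)
    # A family has at least two members; singletons are not families.
    return [members for members in families.values() if len(members) >= 2]
```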
    Selecting Family Representative Member
  • A family has at least two members. Again, based on the belief that a longer clip is more informative than a shorter one, the family member that has the longest duration is selected as the "Lead Member" of the family.
  • Quality Control with the Family's Lead Member
  • After a lead member has been selected, the duration-ratio of the overlap between the lead member and each family member is re-measured. Those that fail the 85% Rule are discarded from the family.
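For illustration only, lead-member selection and the quality-control re-check might look like this sketch; the duration attribute and the similar_segment_duration helper (returning the duration of the segment found similar between two clips) are hypothetical.

```python
def prune_family(members: list, similar_segment_duration) -> list:
    """Select the longest member as the Lead Member, then re-measure every
    other member against the lead and drop those failing the 85% Rule."""
    lead = max(members, key=lambda c: c.duration)
    kept = [lead]
    for m in members:
        if m is lead:
            continue
        seg = similar_segment_duration(lead, m)   # hypothetical helper
        if min(seg / lead.duration, seg / m.duration) >= 0.85:
            kept.append(m)                        # passes the 85% Rule
    return kept
```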
  • Effectiveness of the Grouping Process on Back-to-Back Advertisements
  • Within the harvesting period NT, a particular advertisement may be repeated a number of times, and in some of these spins the advertisement is purposely paired up back-to-back with another advertisement. For example, a McDonald's special combo advertisement may, from time to time, be purposely paired with a Coca-Cola advertisement. It is often desirable to separate these catches, though they are similar, into two different families. The Sufficient-Similarity Test is an effective means of separating these instances.
  • Continuing the running example with five clips A, B, C, D and E: here A, C and E contain a McDonald's special combo advertisement, while B and D contain the McDonald's special combo advertisement followed by a Coca-Cola advertisement.
  • The resulting Cross-Similarity Table is expected to be similar to:
  • Cross-Similarity Table

| Clip | A | B          | C          | D          | E          |
| A    |   | 0.95, 0.48 | 0.96, 0.93 | 0.91, 0.50 | 0.98, 0.96 |
| B    |   |            | 0.47, 0.90 | 0.91, 0.93 | 0.48, 0.91 |
| C    |   |            |            | 0.93, 0.47 | 0.89, 0.92 |
| D    |   |            |            |            | 0.46, 0.88 |
| E    |   |            |            |            |            |
  • Clearly, neither B nor D will be combined with A, C and E into the same family. Instead, A, C and E will be combined into one family, and B and D will be combined into another family.
  • Second Membership in Grouping Process.
  • The term "membership" originates from classical set theory. For example, the number π is a member of the set of all real numbers, but not a member of the set of integers. Above, a rule referred to as the "85% Rule" is used to determine whether two clips are sufficiently similar to be grouped into the same family; those that pass the condition are grouped together as family members. Membership at the family level is referred to as the "First Membership". The "Second Membership" refers to the grouping of families: the similarity of the representative members of two families is measured, and the two families are grouped if the similarity surpasses some prescribed threshold value. The same grouping rule used for the First Membership is used for the Second Membership, with the threshold value set at 50%; that is, the 50% Rule is used for this second grouping exercise. The preferred embodiment uses the 50% threshold for the second membership test, but this value can be adjusted up or down depending on the application of the invention. The purpose of the second grouping process is to provide additional information to human operators to speed up the identification of harvests, that is, having operators determine the actual identity of content that has been harvested but not yet identified with title and publisher information.
  • This is illustrated in the following example:
  • Assume five clips, A, B, C, D and E, containing two similar 60-second advertisements. All five advertisements are identical in the first 40 seconds, carrying the dedicated channel message. The last 20 seconds are different, carrying the advertised product information. The clips A, C and E contain the first advertisement, and B and D contain the second advertisement. The first grouping exercise will result in two distinct families:
  • Family #1 containing A, C and E.
    Family #2 containing B and D.
  • The results of the first grouping exercise do not convey the information that Family #1 and Family #2 are similar. The second grouping exercise groups both families into a common group, referred to as a "Community", and a community_id is assigned, conveying that the two families are similar with respect to the 50% Rule. From the data-presentation point of view, the second grouping exercise results in "trunks", where each trunk carries a number of families, which are branches connected to the same trunk. Human operators may first run a coarse analysis on a trunk to find the common message within the entire trunk (e.g. a Discovery-Channel advertisement), then pay attention to the specifics of each family (e.g. different video programs).
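For illustration only, the second grouping exercise might reuse the union-find sketch above, with the 50% threshold applied to family lead members; the lead_similarity helper, returning the two ratios (R1, R2) for a pair of lead clips, is an assumption.

```python
def group_into_communities(leads: list, lead_similarity) -> list[list[int]]:
    """Second Membership: collect families whose lead members pass the
    50% Rule into the same community (trunk)."""
    edges = []
    for i in range(len(leads)):
        for j in range(i + 1, len(leads)):
            r1, r2 = lead_similarity(leads[i], leads[j])  # hypothetical helper
            if min(r1, r2) >= 0.50:                       # the 50% Rule
                edges.append((i, j))
    # Reuse the union-find sketch above to turn pairwise links into trunks.
    return group_into_families(len(leads), edges)
```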
  • Time Progressive Harvesting
  • The basic units of the Harvesting exercise are the bait-catch instances detected in the self-similarity detection exercise. The self-similarity detection algorithm can be exercised in batch mode, where the exercise works on a particular time period NT. If NT is a very long period, the self-similarity detection can instead be exercised in time-progressive mode. In this mode, the period NT is divided into smaller intervals. For example, suppose the period NT is a 12-hour period from 00:00 to 12:00. One can divide the period into two 6-hour periods:
      • NT1: 00:00 to 06:00
      • NT2: 06:00 to 12:00
  • Exercise the self-similarity detection on the first interval:
      • Build the Instance Table.
  • Example: In NT 1, there are five undetected clips: A, B, C, D, and E.
  • Thus, the Instance Table is a 5×5 table:
| Clip | A | B | C | D | E |
| A | | Instance_id + similarity information across A and B | Instance_id + similarity information across A and C | Instance_id + similarity information across A and D | Instance_id + similarity information across A and E |
| B | | | Instance_id + similarity information across B and C | Instance_id + similarity information across B and D | Instance_id + similarity information across B and E |
| C | | | | Instance_id + similarity information across C and D | Instance_id + similarity information across C and E |
| D | | | | | Instance_id + similarity information across D and E |
| E | | | | | |
  • Assume there are five undetected clips, F, G, H, I, and J, in NT 2.
  • Append the five new clips from NT 2 to the Instance Table:
  • TABLE 4
    [Image: Instance Table updating in the time-progressive mode. The non-shaded entries are the similarity information on NT 1; the shaded entries are the similarity information appended after the self-similarity detection exercise on NT 2.]
  • The new entries are generated by running the self-similarity detection process described above on both the old and the new undetected clips. In the time-progressive mode, the size of the SSD is limited; in this example, the size of the SSD on NT 2 is half that of the SSD in the batch mode. Also note that in harvesting NT 2, all the undetected clips in NT 1 and NT 2 are used as queries to the SSD. New results in each partition are appended to the Instance Table. The partition size of NT can be arbitrarily fine, as determined by the application. Also, partitions are not required to be uniform; that is, a partition can be set for each clip. After the Instance Table has been appended with new results, both the Identification and the Grouping processes can be exercised to append new members to existing families, or to identify new families.
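For illustration only, the time-progressive update might be sketched as below, with the Instance Table kept as a dictionary keyed by clip-index pairs; the matcher callable standing in for the self-similarity detection is an assumption.

```python
from typing import Any, Callable, Optional

def progressive_harvest(partitions: list[list[Any]],
                        matcher: Callable[[Any, Any], Optional[dict]]
                        ) -> dict[tuple[int, int], dict]:
    """Process NT_1, NT_2, ... in order. Every clip seen so far serves as a
    query against each newly arriving clip, and new results are appended to
    the same Instance Table."""
    table: dict[tuple[int, int], dict] = {}
    seen: list[Any] = []
    for partition in partitions:
        for new_clip in partition:
            j = len(seen)                       # index of the new clip
            for i, old_clip in enumerate(seen):
                match = matcher(old_clip, new_clip)
                if match is not None:
                    table[(i, j)] = match       # appended entry
            seen.append(new_clip)
    return table
```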
  • Cross Channel Harvesting
  • The harvester exercise on a single channel can easily be applied to harvest similar clips across different channels. In other words, it might be desirable to find self-similarity of content clips not just across time, but across other broadcast sources, where it is assumed that sufficiently similar clips on two distinct broadcast sources are likely an identifiable piece of content. The process is performed as follows:
  • First, select families from the channels to be compared. To save computational effort, certain criteria are set on which families are selected for the exercise. Below are three criteria that can be used:
  • Clips with similar durations.
    Clips from stations of similar formats.
    Clips that appear most recently, e.g. within 24 hours.
  • Recall that every family has a lead member clip. The process registers each lead clip from the selected families, of selected channels, into the SSD. Then the self-similarity detection exercise is run on the SSD. The resulting similarity information is entered into an Instance Table.
  • TABLE 5
    Instance Table in a Cross-Channel Exercise

| Clip | Lead Clip from family #1 of channel #1 | Lead Clip from family #2 of channel #1 | Lead Clip from family #1 of channel #2 | Lead Clip from family #2 of channel #2 | Lead Clip from family #1 of channel #3 |
| Lead Clip from family #1 of channel #1 | | similarity information | similarity information | similarity information | similarity information |
| Lead Clip from family #2 of channel #1 | | | similarity information | similarity information | similarity information |
| Lead Clip from family #1 of channel #2 | | | | similarity information | similarity information |
| Lead Clip from family #2 of channel #2 | | | | | similarity information |
| Lead Clip from family #1 of channel #3 | | | | | |
  • The similarity information is processed via the Grouping Process with the 85% Rule to identify similar families across different channels. All similar families are combined into a combined family, and a combined-family identification number is generated for it. Each combined family consists of families of different channels; these families are combined due to the high degree of similarity among their lead members. The combined family thus contains all the information of every member family: the channel-id, family-id, time locations and audio-quality index of every clip of every family. A lead combined-family member is selected; again, the clip that has the longest duration is selected as the lead combined-family member.
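For illustration only, the family-selection step might apply the three example criteria as in the sketch below; the attributes (lead_duration, station_format, last_seen) and the 5-second duration tolerance are assumptions.

```python
from datetime import datetime, timedelta

def select_families(families: list, now: datetime,
                    target_duration: float, duration_tol: float = 5.0,
                    fmt: str = "CHR") -> list:
    """Filter families by similar lead-clip duration, similar station
    format, and recency within 24 hours."""
    return [f for f in families
            if abs(f.lead_duration - target_duration) <= duration_tol
            and f.station_format == fmt
            and now - f.last_seen <= timedelta(hours=24)]
```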
  • Cross Channel Harvesting in the Same Market
  • Channels that are within the same market, e.g. all radio stations in the New York market, are selected into the cross-channel harvesting exercise. First, determine what channel selection criteria are to be applied, then select channels into the harvesting process based on those criteria. Harvests identified on this level are referred to as "Market-Level" harvests. The Instance Table holds similarity information across channels in the same market.
  • TABLE 6
    Instance Table in a Market-Level Harvesting Exercise. Each entry may contain similarity information of multiple clips.

| Channel | Channel #1 | Channel #2 | Channel #3 | ... | Channel #M |
| Channel #1 | | similarity information across channels #1 and #2 | similarity information across channels #1 and #3 | ... | similarity information across channels #1 and #M |
| Channel #2 | | | similarity information across channels #2 and #3 | ... | similarity information across channels #2 and #M |
| Channel #3 | | | | ... | similarity information across channels #3 and #M |
| ... | | | | | ... |
| Channel #M − 1 | | | | | similarity information across channels #M − 1 and #M |
  • Cross Channel Harvesting in Different Markets
  • Harvests from different markets obtained above are combined into the cross-channel harvesting exercise. Determine the market selection criteria; one may have a presumption that a certain combination of markets is likely to yield meaningful harvests. Harvests identified on this level are referred to as "National-Level" harvests. The Instance Table holds similarity information across channels of different markets.
  • TABLE 7
    Instance Table in a National-Level Harvesting Exercise. Each entry may contain similarity information of multiple clips of two markets.

| Market | Market #1 | Market #2 | Market #3 | ... | Market #N |
| Market #1 | | similarity information across markets #1 and #2 | similarity information across markets #1 and #3 | ... | similarity information across markets #1 and #N |
| Market #2 | | | similarity information across markets #2 and #3 | ... | similarity information across markets #2 and #N |
| Market #3 | | | | ... | similarity information across markets #3 and #N |
| ... | | | | | ... |
| Market #N − 1 | | | | | similarity information across markets #N − 1 and #N |
  • Harvest Processing
  • The Harvester exercise yields similar programs on three levels:
      • Channel level: Similar programs of the same channel.
      • Market level: Similar programs of different channels in the same market.
      • National level: Similar programs of different channels of different markets.
  • Although there may be substantial results, or harvests, of identifiable but unregistered content as determined on several levels, they still remain unknown programs until after human listening. The harvest processing stage consists of a number of steps, starting from the harvests collected on all three levels, through human listening and identification, to the end, where certain clips out of the harvests are promoted to the monitoring system, that is, fully identified and registered in the monitoring system.
  • Human Listening—Identifying Harvests
  • The families of all three levels are presented to human operators for identification. The system automatically selects the clip that has the highest audio quality within the family and presents it to the human operator. The operator identifies the clip and inputs the meta-data of the clip, including, for example, the song title, publisher, and record label. If it is a song, the identification of the clip is made by a format specialist, who generates the title and artist information.
  • Although the present invention has been described and illustrated in detail, it is to be clearly understood that the same is by way of illustration and example only, and is not to be taken by way of limitation. It is appreciated that various features of the invention which are, for clarity, described in the context of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable combination. It is appreciated that the particular embodiment described in the Appendices is intended only to provide an extremely detailed disclosure of the present invention and is not intended to be limiting. It is appreciated that any of the software components of the present invention may, if desired, be implemented in ROM (read-only memory) form. The software components may, generally, be implemented in hardware, if desired, using conventional techniques.
      • The spirit and scope of the present invention are to be limited only by the terms of the appended claims.

Claims (59)

1. A method executed by a digital signal processing system for detecting at least two substantially similar instances of the same piece of unregistered program in one or more broadcast signals comprising:
periodically detecting whether two or more unregistered portions of said one or more broadcast signals are sufficiently self-similar.
2. The method of claim 1 further comprising detecting one or more sufficiently similar sets of pattern vectors, wherein each set corresponds to one of the one or more unregistered portions.
3. The method of claim 2 further comprising assigning an identification number to one or more of the sets of pattern vectors detected.
4. The method of claim 1 further comprising determining whether the piece of unregistered program is substantially the same as a registered piece of programming and then changing an indicia of identity associated with the unregistered program to an indicia of identity associated with the registered program.
5. The method of claim 4 further comprising converting an identification number assigned to the piece of unregistered program to an identification number corresponding to the registered piece of programming.
6. The method of claim 1 further comprising:
identifying a piece of unregistered programming in which sufficient similarities have been detected between a piece of registered programming and said piece of unregistered programming; and
converting the status of said identified piece of unregistered programming from unregistered to registered.
7. The method of claim 1, wherein the two or more unregistered portions of one of said one or more broadcast signals are substantially non-overlapping in time.
8. The method of claim 1, wherein each of said unregistered portions of said one or more broadcast signals comprises one or more frames in duration.
9. The method of claim 8, wherein the duration is approximately 10 seconds.
10. The method of claim 8, wherein the duration is less than approximately 10 seconds.
11. The method of claim 2 further comprising storing in computer memory at least one of said sets of pattern vectors.
12. The method of claim 2 further comprising comparing the audio quality of said sets of pattern vectors.
13. The method of claim 12 further comprising storing in computer memory a set of pattern vectors determined to have relatively better audio quality than other sufficiently similar sets of pattern vectors.
14. The method of claim 1 further comprising storing in computer memory an indicia of identity associated with the unregistered program.
15. The method of claim 1 further comprising storing in computer memory one of: an approximate start time, an approximate end time, and an approximate duration time of said two or more of the self-similar unregistered portions.
16. The method of claim 1 further comprising storing in computer memory the broadcast source information of the one or more broadcast signals over which said two or more unregistered portions are broadcast.
17. The method of claim 1 further comprising adjusting the duration of said unregistered portions for decreasing the detection time of self-similarities in the one or more broadcast signals.
18. A method executed by a digital signal processing system for detecting self-similar repetition in a broadcast signal comprising:
detecting a thread comprising a sequence of one or more portions of a first unregistered piece of programming, where each consecutive portion of the first unregistered piece is determined to be sufficiently similar to a corresponding substantially consecutive portion of a second unregistered piece of unregistered programming.
19. The method of claim 18 further comprising:
storing in computer memory as a single piece of programming the consecutive sequence of sufficiently similar portions where for each portion, the audio quality is determined to be better than the other corresponding sufficiently similar consecutive portion.
20. The method of claim 18, wherein the step of determining to be sufficiently self-similar comprises detecting sufficiently similar sets of pattern vectors associated with each of the consecutive portions.
21. The method of claim 18 further comprising assigning a unique identification number to one or more sets of pattern vectors that comprise the portions.
22. The method of claim 18 further comprising:
determining the end of the thread by detecting one or more insufficiently similar portions of the first piece as compared to the corresponding portions of the second piece.
23. The method of claim 18 where the first piece and the second piece do not substantially overlap.
24. The method of claim 22, wherein the number of detections of insufficient similarity is a number greater than a predetermined threshold.
25. The method of claim 22 further comprising determining one of an approximate start time, approximate end time and approximate duration time corresponding to the thread.
26. The method of claim 18 further comprising:
identifying a piece of unregistered programming in which sufficient similarities have been detected between a piece of registered programming and said piece of unregistered programming; and
converting the status of said identified piece of unregistered programming from unregistered to registered.
27. The method of claim 18 further comprising determining whether the piece of unregistered program is substantially the same as a registered piece of programming and then changing an indicia of identity associated with the unregistered program to an indicia of identity associated with the registered program.
28. The method of claim 18, wherein each of said portions comprises one or more frames in duration.
29. The method of claim 18 where each of said portions is equal to or less than approximately 10 seconds.
30. A method executed by a digital signal processing system for detecting self-similar repetition in a broadcast signal comprising:
detecting insufficient self-similarities between two non-overlapping pieces of unregistered programming of a broadcast signal by determining the number of pairs of insufficiently self-similar corresponding portions from each of the two pieces of unregistered programming, wherein said pairs of insufficiently self-similar corresponding portions substantially succeed a substantial number of substantially consecutive pairs of sufficiently self-similar corresponding portions.
31. The method of claim 30 further comprising detecting insufficiently similar sets of pattern vectors associated with each of the two unregistered corresponding portions of the broadcast signal.
32. The method of claim 30 further comprising assigning a unique identification number to one or more sets of pattern vectors that comprise each of the corresponding portions.
33. The method of claim 30 further comprising:
accumulating a thread of portions determined to be sufficiently self-similar, wherein each accumulated portion is uniquely selected from each of said pairs of sufficient self-similar corresponding portions; and
accumulating a thread of said selected portions, wherein the portions are ordered sequentially in time according to the relative order of the pair from which the portion was selected.
34. The method of claim 30 further comprising determining one of an approximate start time, approximate end time and approximate duration time corresponding to the thread.
36. The method of claim 33, wherein the portion selected is selected for having at least relatively better quality.
37. The method of claim 30, wherein said sufficient number comprises a number greater than a predetermined threshold.
38. The method of claim 30 further comprising:
identifying a piece of unregistered programming in which sufficient similarities have been detected between a piece of registered programming and said piece of unregistered programming; and
converting the status of said identified piece of unregistered programming from unregistered to registered.
39. The method of claim 30 further comprising determining whether the piece of unregistered program is substantially the same as a registered piece of programming and then changing an indicia of identity associated with the unregistered program to an indicia of identity associated with the registered program.
40. The method of claim 30, wherein each of said portions comprises one or more frames in duration.
41. A digital signal processing system for detecting at least two substantially similar instances of the same piece of unregistered program in one or more broadcast signals comprising:
a detection means adapted for periodically detecting whether two or more unregistered portions of said one or more broadcast signals are sufficiently self-similar.
42. The system of claim 41, wherein said detection means is further adapted for detecting one or more sufficiently similar sets of pattern vectors, wherein each set corresponds to one of the one or more unregistered portions.
43. The system of claim 42 further comprising:
a processor adapted for assigning an identification number to one or more of the sets of pattern vectors detected.
44. The system of claim 41 further comprising:
a processor adapted for determining whether the piece of unregistered program is substantially the same as a registered piece of programming and then changing an indicia of identity associated with the unregistered program to an indicia of identity associated with the registered program.
45. The system of claim 44, wherein said processor is further adapted for converting an identification number assigned to the piece of unregistered program to an identification number corresponding to the registered piece of programming.
46. The system of claim 41 further comprising:
a processor adapted for identifying a piece of unregistered programming in which sufficient similarities have been detected between a piece of registered programming and said piece of unregistered programming; and
converting the status of said identified piece of unregistered programming from unregistered to registered.
47. The system of claim 41, wherein the two or more unregistered portions of one of said one or more broadcast signals are substantially non-overlapping in time.
48. The system of claim 41, wherein each of said unregistered portions of said one or more broadcast signals comprises one or more frames in duration.
49. The system of claim 48, wherein the duration is approximately 10 seconds.
50. The system of claim 48, wherein the duration is less than approximately 10 seconds.
51. The system of claim 42 further comprising a computer memory adapted for storing at least one of said sets of pattern vectors.
52. A digital signal processing system for detecting self-similar repetition in a broadcast signal comprising:
a detection means adapted for detecting a thread comprising a sequence of one or more portions of a first unregistered piece of programming, where each consecutive portion of the first unregistered piece is determined to be sufficiently similar to a corresponding substantially consecutive portion in a second unregistered piece of unregistered programming.
53. The system of claim 52 further comprising:
a processor adapted for identifying a piece of unregistered programming in which sufficient similarities have been detected between a piece of registered programming and said piece of unregistered programming; and
converting the status of said identified piece of unregistered programming from unregistered to registered.
54. The system of claim 52 further comprising:
a processor adapted for determining whether the piece of unregistered program is substantially the same as a registered piece of programming and then changing an indicia of identity associated with the unregistered program to an indicia of identity associated with the registered program.
55. A digital signal processing system for detecting self-similar repetition in a broadcast signal comprising:
detection means adapted for detecting insufficient self-similarities between two non-overlapping pieces of unregistered programming of a broadcast signal by determining the number of pairs of insufficiently self-similar corresponding portions from each of the two pieces of unregistered programming, wherein said pairs of insufficiently self-similar corresponding portions substantially succeed a substantial number of substantially consecutive pairs of sufficiently self-similar corresponding portions.
56. The system of claim 55, wherein the detection means is further adapted for detecting insufficiently similar sets of pattern vectors associated with each of the two unregistered portions of the broadcast signal.
57. The system of claim 55 further comprising a processor adapted for assigning a unique identification number to one or more sets of pattern vectors that comprise the portions.
58. The system of claim 55 further comprising a processor adapted for selecting a portion from each of said pairs of sufficient self-similar portions; and
accumulating a thread of said selected portions, wherein the portions are ordered sequentially in time according to the relative order of the pair from which the portion was selected.
59. The system of claim 55, wherein the processor is further adapted for determining one of an approximate start time, approximate end time and approximate duration time corresponding to the thread.
60. The method of claim 1 further comprising rendering to a human listener the one or more self-similar unregistered portions.
US12/093,453 2005-11-14 2006-11-14 Method and Apparatus for Automatic Detection and Identification of Unidentified Broadcast Audio or Video Signals Abandoned US20080263041A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/093,453 US20080263041A1 (en) 2005-11-14 2006-11-14 Method and Apparatus for Automatic Detection and Identification of Unidentified Broadcast Audio or Video Signals

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US73634805P 2005-11-14 2005-11-14
US11322706 2005-12-30
US11/322,706 US8229751B2 (en) 2004-02-26 2005-12-30 Method and apparatus for automatic detection and identification of unidentified Broadcast audio or video signals
PCT/US2006/060891 WO2007059498A2 (en) 2005-11-14 2006-11-14 Method and apparatus for automatic detection and identification of unidentified broadcast audio or video signals
US12/093,453 US20080263041A1 (en) 2005-11-14 2006-11-14 Method and Apparatus for Automatic Detection and Identification of Unidentified Broadcast Audio or Video Signals

Publications (1)

Publication Number Publication Date
US20080263041A1 true US20080263041A1 (en) 2008-10-23

Family

ID=38040386

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/322,706 Active 2028-03-13 US8229751B2 (en) 2004-02-26 2005-12-30 Method and apparatus for automatic detection and identification of unidentified Broadcast audio or video signals
US12/093,453 Abandoned US20080263041A1 (en) 2005-11-14 2006-11-14 Method and Apparatus for Automatic Detection and Identification of Unidentified Broadcast Audio or Video Signals

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/322,706 Active 2028-03-13 US8229751B2 (en) 2004-02-26 2005-12-30 Method and apparatus for automatic detection and identification of unidentified Broadcast audio or video signals

Country Status (4)

Country Link
US (2) US8229751B2 (en)
EP (1) EP1952639B1 (en)
CA (1) CA2629907C (en)
WO (1) WO2007059498A2 (en)


Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10419541B2 (en) 2008-11-26 2019-09-17 Free Stream Media Corp. Remotely control devices over a network without authentication or registration
US9386356B2 (en) 2008-11-26 2016-07-05 Free Stream Media Corp. Targeting with television audience data across multiple screens
US8180891B1 (en) 2008-11-26 2012-05-15 Free Stream Media Corp. Discovery, access control, and communication with networked services from within a security sandbox
US9026668B2 (en) 2012-05-26 2015-05-05 Free Stream Media Corp. Real-time and retargeted advertising on multiple screens of a user watching television
US10567823B2 (en) 2008-11-26 2020-02-18 Free Stream Media Corp. Relevant advertisement generation based on a user operating a client device communicatively coupled with a networked media device
US9519772B2 (en) 2008-11-26 2016-12-13 Free Stream Media Corp. Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US10631068B2 (en) 2008-11-26 2020-04-21 Free Stream Media Corp. Content exposure attribution based on renderings of related content across multiple devices
US10334324B2 (en) 2008-11-26 2019-06-25 Free Stream Media Corp. Relevant advertisement generation based on a user operating a client device communicatively coupled with a networked media device
US9986279B2 (en) 2008-11-26 2018-05-29 Free Stream Media Corp. Discovery, access control, and communication with networked services
US10880340B2 (en) 2008-11-26 2020-12-29 Free Stream Media Corp. Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US9154942B2 (en) 2008-11-26 2015-10-06 Free Stream Media Corp. Zero configuration communication between a browser and a networked media device
US10977693B2 (en) 2008-11-26 2021-04-13 Free Stream Media Corp. Association of content identifier of audio-visual data with additional data through capture infrastructure
US9961388B2 (en) 2008-11-26 2018-05-01 David Harrison Exposure of public internet protocol addresses in an advertising exchange server to improve relevancy of advertisements
US10949458B2 (en) 2009-05-29 2021-03-16 Inscape Data, Inc. System and method for improving work load management in ACR television monitoring system
US10116972B2 (en) 2009-05-29 2018-10-30 Inscape Data, Inc. Methods for identifying video segments and displaying option to view from an alternative source and/or on an alternative device
US10375451B2 (en) 2009-05-29 2019-08-06 Inscape Data, Inc. Detection of common media segments
US8595781B2 (en) * 2009-05-29 2013-11-26 Cognitive Media Networks, Inc. Methods for identifying video segments and displaying contextual targeted content on a connected television
US9094714B2 (en) 2009-05-29 2015-07-28 Cognitive Networks, Inc. Systems and methods for on-screen graphics detection
US9449090B2 (en) 2009-05-29 2016-09-20 Vizio Inscape Technologies, Llc Systems and methods for addressing a media database using distance associative hashing
EP2534585A4 (en) * 2010-02-12 2018-01-24 Google LLC Compound splitting
US10192138B2 (en) 2010-05-27 2019-01-29 Inscape Data, Inc. Systems and methods for reducing data density in large datasets
US9838753B2 (en) 2013-12-23 2017-12-05 Inscape Data, Inc. Monitoring individual viewing of television events using tracking pixels and cookies
TWI412019B (en) 2010-12-03 2013-10-11 Ind Tech Res Inst Sound event detecting module and method thereof
US9955192B2 (en) 2013-12-23 2018-04-24 Inscape Data, Inc. Monitoring individual viewing of television events using tracking pixels and cookies
MX2017009738A (en) 2015-01-30 2017-11-20 Inscape Data Inc Methods for identifying video segments and displaying option to view from an alternative source and/or on an alternative device.
CN107949849B (en) 2015-04-17 2021-10-08 构造数据有限责任公司 System and method for reducing data density in large data sets
CA2992529C (en) 2015-07-16 2022-02-15 Inscape Data, Inc. Prediction of future views of video segments to optimize system resource utilization
AU2016293601B2 (en) 2015-07-16 2020-04-09 Inscape Data, Inc. Detection of common media segments
US10080062B2 (en) 2015-07-16 2018-09-18 Inscape Data, Inc. Optimizing media fingerprint retention to improve system resource utilization
CA2992519A1 (en) 2015-07-16 2017-01-19 Inscape Data, Inc. Systems and methods for partitioning search indexes for improved efficiency in identifying media segments
KR20190134664A (en) 2017-04-06 2019-12-04 인스케이프 데이터, 인코포레이티드 System and method for using media viewing data to improve device map accuracy

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5151788A (en) * 1988-01-26 1992-09-29 Blum Dieter W Method and apparatus for identifying and eliminating specific material from video signals
US5651094A (en) * 1994-06-07 1997-07-22 Nec Corporation Acoustic category mean value calculating apparatus and adaptation apparatus
US20020099555A1 (en) * 2000-11-03 2002-07-25 International Business Machines Corporation System for monitoring broadcast audio content
US20030033347A1 (en) * 2001-05-10 2003-02-13 International Business Machines Corporation Method and apparatus for inducing classifiers for multimedia based on unified representation of features reflecting disparate modalities
US6584223B1 (en) * 1998-04-02 2003-06-24 Canon Kabushiki Kaisha Image search apparatus and method
US20030154084A1 (en) * 2002-02-14 2003-08-14 Koninklijke Philips Electronics N.V. Method and system for person identification using video-speech matching
US6675174B1 (en) * 2000-02-02 2004-01-06 International Business Machines Corp. System and method for measuring similarity between a set of known temporal media segments and a one or more temporal media streams
US20040091111A1 (en) * 2002-07-16 2004-05-13 Levy Kenneth L. Digital watermarking and fingerprinting applications
US6766523B2 (en) * 2002-05-31 2004-07-20 Microsoft Corporation System and method for identifying and segmenting repeating media objects embedded in a stream
US20040162728A1 (en) * 2003-02-18 2004-08-19 Mark Thomson Method and apparatus for providing a speaker adapted speech recognition model set
US20050125223A1 (en) * 2003-12-05 2005-06-09 Ajay Divakaran Audio-visual highlights detection using coupled hidden markov models
US20050197724A1 (en) * 2004-03-08 2005-09-08 Raja Neogi System and method to generate audio fingerprints for classification and storage of audio clips
US20060080356A1 (en) * 2004-10-13 2006-04-13 Microsoft Corporation System and method for inferring similarities between media objects
US20060149552A1 (en) * 2004-12-30 2006-07-06 Aec One Stop Group, Inc. Methods and Apparatus for Audio Recognition
US20060190450A1 (en) * 2003-09-23 2006-08-24 Predixis Corporation Audio fingerprinting system and method
US20060229878A1 (en) * 2003-05-27 2006-10-12 Eric Scheirer Waveform recognition method and apparatus
US20070055500A1 (en) * 2005-09-01 2007-03-08 Sergiy Bilobrov Extraction and matching of characteristic fingerprints from audio signals
US20070058949A1 (en) * 2005-09-15 2007-03-15 Hamzy Mark J Synching a recording time of a program to the actual program broadcast time for the program
US20080193016A1 (en) * 2004-02-06 2008-08-14 Agency For Science, Technology And Research Automatic Video Event Detection and Indexing
US7565104B1 (en) * 2004-06-16 2009-07-21 Wendell Brown Broadcast audio program guide

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5436653A (en) 1992-04-30 1995-07-25 The Arbitron Company Method and system for recognition of broadcast segments
US5918223A (en) 1996-07-22 1999-06-29 Muscle Fish Method and article of manufacture for content-based analysis, storage, retrieval, and segmentation of audio information
US6542869B1 (en) * 2000-05-11 2003-04-01 Fuji Xerox Co., Ltd. Method for automatic analysis of audio including music and speech
WO2002003179A2 (en) 2000-06-30 2002-01-10 Williams Eddie H Online digital content library
TW582022B (en) 2001-03-14 2004-04-01 Ibm A method and system for the automatic detection of similar or identical segments in audio recordings
EP1410380B1 (en) 2001-07-20 2010-04-28 Gracenote, Inc. Automatic identification of sound recordings
US20040193642A1 (en) 2003-03-26 2004-09-30 Allen Paul G. Apparatus and method for processing digital music files
WO2005081829A2 (en) 2004-02-26 2005-09-09 Mediaguide, Inc. Method and apparatus for automatic detection and identification of broadcast audio or video programming signal

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9430472B2 (en) 2004-02-26 2016-08-30 Mobile Research Labs, Ltd. Method and system for automatic detection of content
US8700641B2 (en) 2005-11-29 2014-04-15 Google Inc. Detecting repeating content in broadcast media
US7991770B2 (en) * 2005-11-29 2011-08-02 Google Inc. Detecting repeating content in broadcast media
US20070130580A1 (en) * 2005-11-29 2007-06-07 Google Inc. Social and Interactive Applications for Mass Media
US8442125B2 (en) 2005-11-29 2013-05-14 Google Inc. Determining popularity ratings using social and interactive applications for mass media
US8479225B2 (en) 2005-11-29 2013-07-02 Google Inc. Social and interactive applications for mass media
US20070124756A1 (en) * 2005-11-29 2007-05-31 Google Inc. Detecting Repeating Content in Broadcast Media
US8065248B1 (en) 2006-06-22 2011-11-22 Google Inc. Approximate hashing functions for finding similar content
US7831531B1 (en) 2006-06-22 2010-11-09 Google Inc. Approximate hashing functions for finding similar content
US8498951B1 (en) 2006-06-22 2013-07-30 Google Inc. Approximate hashing functions for finding similar content
US8504495B1 (en) 2006-06-22 2013-08-06 Google Inc. Approximate hashing functions for finding similar content
US8977067B1 (en) 2006-08-29 2015-03-10 Google Inc. Audio identification using wavelet-based signatures
US8411977B1 (en) 2006-08-29 2013-04-02 Google Inc. Audio identification using wavelet-based signatures
US8625033B1 (en) 2010-02-01 2014-01-07 Google Inc. Large-scale matching of audio and video
US20120136466A1 (en) * 2010-11-28 2012-05-31 Aron Weiss System and method for identifying a broadcast source of ambient audio
US9955234B2 (en) 2014-03-28 2018-04-24 Panasonic Intellectual Property Management Co., Ltd. Image reception apparatus, parameter setting method, and additional information displaying system including a calibration operation

Also Published As

Publication number Publication date
EP1952639A4 (en) 2008-12-31
US20070109449A1 (en) 2007-05-17
EP1952639B1 (en) 2019-01-30
WO2007059498A2 (en) 2007-05-24
CA2629907C (en) 2017-11-28
US8229751B2 (en) 2012-07-24
WO2007059498A3 (en) 2007-12-21
CA2629907A1 (en) 2007-05-24
EP1952639A2 (en) 2008-08-06

Similar Documents

Publication Publication Date Title
US8229751B2 (en) Method and apparatus for automatic detection and identification of unidentified Broadcast audio or video signals
US8468183B2 (en) Method and apparatus for automatic detection and identification of broadcast audio and video signals
US9918141B2 (en) System and method for monitoring and detecting television ads in real-time using content databases (ADEX reporter)
CN1183763C (en) Method and apparatus for recommending television programming using decision trees
US5572246A (en) Method and apparatus for producing a signature characterizing an interval of a video signal while compensating for picture edge shift
CN101669308B (en) Methods and apparatus for characterizing media
EP3693891A1 (en) Methods and apparatus for identifying media content using temporal signal characteristics
US20020015105A1 (en) Signal processing device and signal processing method
US20040167767A1 (en) Method and system for extracting sports highlights from audio signals
US10847168B2 (en) Research data gathering
US11785105B2 (en) Methods and apparatus to facilitate meter to meter matching for media identification
US20190213214A1 (en) Audio matching
CN108460633B (en) Method for establishing advertisement audio acquisition and identification system and application thereof
CN114297483A (en) Content recommendation method and device based on attention degree
CN109857859B (en) News information processing method, device, equipment and storage medium
CN111209511A (en) Method and system for pushing information based on data association relation
US20230078282A1 (en) System and method of joining research studies to extract analytical insights for enabling cross-study analysis
MX2008006241A (en) Method and apparatus for automatic detection and identification of unidentified broadcast audio or video signals
KR20160037144A (en) Music recommendation method based on user context and preference using radio signal analysis and music recommendation system using thereof
CN115495600A (en) Video and audio retrieval method based on features
WO2020197393A1 (en) A computer controlled method of operating a training tool for classifying annotated events in content of data stream
Biernacki Effective TV advertising block division into single commercials method

Legal Events

Date Code Title Description
AS Assignment

Owner name: MEDIAGUIDE, INC, PENNSYLVANIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHEUNG, KWAN;REEL/FRAME:022818/0533

Effective date: 20090605

Owner name: MEDIAGUIDE, INC,PENNSYLVANIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHEUNG, KWAN;REEL/FRAME:022818/0533

Effective date: 20090605

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION