US20180101610A1 - Method and System for Identification of Distributed Broadcast Content - Google Patents
- Publication number
- US20180101610A1 (application No. US 15/840,025)
- Authority
- US
- United States
- Prior art keywords
- content
- broadcast
- client device
- data stream
- receiving
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G06F17/30743—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/683—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/35—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
- H04H60/37—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying segments of broadcast information, e.g. scenes or extracting programme ID
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/35—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
- H04H60/38—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying broadcast time or space
- H04H60/40—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying broadcast time or space for identifying broadcast time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H2201/00—Aspects of broadcast communication
- H04H2201/30—Aspects of broadcast communication characterised by the use of a return channel, e.g. for collecting users' opinions, for returning broadcast space/time information or for requesting data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/35—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
- H04H60/38—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying broadcast time or space
- H04H60/41—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying broadcast time or space for identifying broadcast space, i.e. broadcast channels, broadcast stations or broadcast areas
- H04H60/42—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying broadcast time or space for identifying broadcast space, i.e. broadcast channels, broadcast stations or broadcast areas for identifying broadcast areas
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/56—Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
- H04H60/58—Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 of audio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/68—Systems specially adapted for using specific information, e.g. geographical or meteorological information
- H04H60/73—Systems specially adapted for using specific information, e.g. geographical or meteorological information using meta-information
- H04H60/74—Systems specially adapted for using specific information, e.g. geographical or meteorological information using meta-information using programme related information, e.g. title, composer or interpreter
Definitions
- the present invention generally relates to identifying content within broadcasts, and more particularly, to identifying information about segments or excerpts of content within a data stream.
- Content identification may be used in a service provided for a consumer device (e.g., a cell phone), which includes a broadcast receiver, to supply broadcast program metadata to a user. For example, title, artist, and album information can be provided to the user on the device for broadcast programs as the programs are being played on the device.
- Existing systems that provide content information of a broadcast signal to a user may only supply limited metadata, as with the Radio Data System (RDS).
- existing systems may not be monitoring every broadcast station in every locale, and a desired radio station mapping may not always be available.
- The computational cost of performing a recognition on one media sample may be small. However, potentially many millions of consumer devices may be active at the same time; if each were to query the server once per minute, the recognition server would have to perform millions of recognitions every minute, and the aggregate computational cost becomes significant.
- Such a system may only be able to allow a time budget of a few microseconds or less per recognition request, which is a few orders of magnitude smaller than typical processing times for media content identification.
- broadcast media is often presented as a continuous stream without segmentation markers
- a brute-force sample and query method could require fine granularity sampling intervals, thus increasing required query load even more.
- a method of identifying content within a data stream includes receiving a content identification query from a client device that requests an identity of content that was broadcast from a broadcast source. If content from the broadcast source has previously been identified and if the content identification query has been received at a time during which the content is still being broadcast from the source, the method includes sending the previous identification of the content to the client device. However, if not, the method includes (i) performing a content identification using a sample of the content broadcast from the broadcast source, and (ii) storing the content identification.
- the method includes receiving a content identification query from a client device that requests an identity of content being broadcast from a broadcast source and including information pertaining to the broadcast source of the content.
- The method also includes accessing a cache containing a listing of content identifications, each generated from a content sample. The listing pertains to content broadcast from a plurality of broadcast sources, and each item in the listing includes (i) an identity of given content, (ii) an identity of the broadcast source that broadcast the given content, and (iii) an indication of when the content identification is valid.
- the method also includes matching the broadcast source of the content to a broadcast source of one of the content samples from which any of the content identifications were generated, and if the content identification query was received during a time in which the content identification in the listing pertaining to the one of the content samples is still valid, sending the content identification in the listing pertaining to the one of the content samples to the client device in response to the content identification query.
- the method includes receiving a first content identification query from a first client device that includes a recording of a sample of content being broadcast from a first source, making a content identification using the sample of the content, determining a time during which the content will be or is being broadcast from the first source, and storing the content identification, the time, and information pertaining to the first source of the content in a cache.
- the method also includes receiving a second content identification query from a second client device that requests an identity of content being broadcast from a second source and including information pertaining to the second source of the content.
- The method further includes, if the first source and the second source are the same and the time has not expired, (i) sending the content identification made in response to the first content identification query to the second client device in response to the second content identification query; otherwise, (ii) making a second content identification using a sample of the content being broadcast from the second source and storing the second content identification in the cache.
- FIG. 1 illustrates one example of a system for identifying content within an audio stream.
- FIG. 2 is a flowchart depicting functional blocks of an example method of identifying content based on location of a user, broadcast information and/or stored content identifications.
- FIG. 3 is a block diagram illustrating an example client consumer device in communication with a sample analyzer to receive information identifying broadcast content.
- FIG. 4 illustrates a conceptual example of multiple content identification queries occurring serially in time during a song.
- FIG. 5 illustrates an example display of broadcast metadata on a mobile phone.
- FIG. 6 illustrates a conceptual block diagram of an example coverage area map for two radio stations.
- the method may be applied to any type of data content identification.
- the data is an audio data stream.
- the audio data stream may be a real-time data stream or an audio recording, for example.
- Exemplary embodiments describe methods for identifying content by identifying a source (e.g., channel, stream, or station) of the content transmission, and a location of a device requesting the content identification. For example, it may be desirable to detect from a free-field audio sample of a radio broadcast which radio station a user is listening to, as well as to what song the user is listening. Exemplary embodiments described below illustrate a method and apparatus for identifying a broadcast source of desired content, and for identifying content broadcast from the source.
- a user can utilize an audio sampling device including a microphone and optional data transmission means to identify content from a broadcast source. The user may hear an audio program being broadcast from some broadcast means, such as radio or television, and can record a sample of the audio using the audio sampling device. The sample, broadcast source information, and optionally a location of the audio sampling device are then conveyed to an analyzing means to identify the content. Content information may then be reported back to the user.
- The identity and information within a query are then stored. If a second user subsequently sends a content identification query for the same broadcast source and the query is received within a given time frame, then the stored content identity can be returned as a result to the second user.
- The query would need to be received during a time in which the same song is being broadcast by the same broadcast source, so that the second user would effectively be asking to identify the same song that was previously identified in response to the first query.
- the response to the first query (which is stored) can be returned to all subsequent queries.
- only one computational content identification is needed to be performed, because the result can be stored for later retrieval, if subsequent content queries satisfy the requirements (e.g., if subsequent content queries are considered to be for the same song).
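The store-once, serve-many behavior described above can be sketched as a small cache keyed by broadcast source, where each entry carries a validity deadline (e.g., the expected end of the identified song). All names and the `identify_fn` hook below are illustrative assumptions, not from the patent:

```python
import time

# Hypothetical in-memory cache: one entry per broadcast source, valid until
# the identified content is expected to stop being broadcast.
_cache = {}  # source_id -> (content_id, valid_until_timestamp)

def lookup_or_identify(source_id, identify_fn, now=None):
    """Return a cached identification for `source_id` if still valid;
    otherwise run the (expensive) identification once and cache the result.
    `identify_fn(source_id)` is assumed to return (content_id, seconds_remaining)."""
    now = time.time() if now is None else now
    entry = _cache.get(source_id)
    if entry is not None:
        content_id, valid_until = entry
        if now < valid_until:
            return content_id  # served from cache; no recognition performed
    # Cache miss or expired entry: perform one computational identification.
    content_id, seconds_remaining = identify_fn(source_id)
    _cache[source_id] = (content_id, now + seconds_remaining)
    return content_id
```

With this shape, any number of queries for the same source within the validity window trigger exactly one computational identification.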
- FIG. 1 illustrates one example of a system for identifying content within other data content, such as identifying a song within a radio broadcast.
- the system includes radio stations, such as radio station 102 , which may be a radio or television content provider, for example, that broadcasts audio streams and other information to a receiver 104 .
- the receiver 104 receives the broadcast radio signal using an antenna 106 and converts the signal into sound.
- the receiver 104 may be a component within any number of consumer devices, such as a portable computer or cell phone.
- the receiver 104 may also include a conventional AM/FM tuner and other amplifiers as well to enable tuning to a desired radio broadcast channel.
- the receiver 104 can record portions of the broadcast signal (e.g., audio sample) for identification.
- the receiver 104 can send over a wired or wireless link a recorded broadcast to a sample analyzer 108 that will identify information pertaining to the audio sample, such as track identities (e.g., song title, artist, or other broadcast program information).
- the sample analyzer 108 includes an audio search engine 110 and may access a database 112 containing audio sample and broadcast information, for example, to compare the received audio sample with stored information so as to identify tracks within the received audio stream. Once tracks within the audio stream have been identified, the track identities or other information may be reported back to the receiver 104 .
- the receiver 104 may receive a broadcast from the radio station 102 , and perform some initial processing on a sample of the broadcast so as to create a fingerprint of the broadcast sample. The receiver 104 could then send the fingerprint information to the sample analyzer 108 , which will identify information pertaining to the sample based on the fingerprint alone. In this manner, more computation or identification processing can be performed at the receiver 104 , rather than at the sample analyzer 108 .
- the database 112 may include many recordings and each recording has a unique identifier (e.g., sound_ID).
- the database 112 itself does not necessarily need to store the audio files for each recording, since the sound_IDs can be used to retrieve audio files from elsewhere.
- a sound database index may be very large, containing indices for millions or even billions of files, for example. New recordings can be added incrementally to the database index.
- FIG. 1 illustrates a system that has a given configuration
- the components within the system may be arranged in other manners.
- the audio search engine 110 may be separate from the sample analyzer 108 , or audio sample processing can occur at the receiver 104 or at the sample analyzer 108 .
- the configurations described herein are merely exemplary in nature, and many alternative configurations might also be used.
- the system in FIG. 1 and in particular the sample analyzer 108 , identifies content within an audio stream using samples of the audio within the audio stream.
- Various audio sample identification techniques are known in the art for performing computational content identifications of audio samples and features of audio samples using a database of audio tracks.
- the following patents and publications describe possible examples for audio recognition techniques, and each is entirely incorporated herein by reference, as if fully set forth in this description.
- identifying features of an audio recording begins by receiving the recording and sampling the recording at a plurality of sampling points to produce a plurality of signal values.
- a statistical moment of the signal can be calculated using any known formulas, such as that noted in U.S. Pat. No. 5,210,820, for example.
- the calculated statistical moment is then compared with a plurality of stored signal identifications and the recording is recognized as similar to one of the stored signal identifications.
- the calculated statistical moment can be used to create a feature vector that is quantized, and a weighted sum of the quantized feature vector is used to access a memory that stores the signal identifications.
- audio content can be identified by identifying or computing characteristics or fingerprints of an audio sample and comparing the fingerprints to previously identified fingerprints.
- the particular locations within the sample at which fingerprints are computed depend on reproducible points in the sample. Such reproducibly computable locations are referred to as “landmarks.”
- the location within the sample of the landmarks can be determined by the sample itself, i.e., is dependent upon sample qualities and is reproducible. That is, the same landmarks are computed for the same signal each time the process is repeated.
- a landmarking scheme may mark about 5-10 landmarks per second of sound recording; of course, landmarking density depends on the amount of activity within the sound recording.
- One landmarking technique known as Power Norm, is to calculate the instantaneous power at many time points in the recording and to select local maxima.
- One way of doing this is to calculate the envelope by rectifying and filtering the waveform directly.
- Another way is to calculate the Hilbert transform (quadrature) of the signal and use the sum of the magnitudes squared of the Hilbert transform and the original signal.
- Other methods for calculating landmarks may also be used.
- a fingerprint is computed at or near each landmark time point in the recording.
- the nearness of a feature to a landmark is defined by the fingerprinting method used.
- a feature is considered near a landmark if it clearly corresponds to the landmark and not to a previous or subsequent landmark.
- features correspond to multiple adjacent landmarks.
- the fingerprint is generally a value or set of values that summarizes a set of features in the recording at or near the time point.
- each fingerprint is a single numerical value that is a hashed function of multiple features.
- Other examples of fingerprints include spectral slice fingerprints, multi-slice fingerprints, LPC coefficients, cepstral coefficients, and frequency components of spectrogram peaks.
- Fingerprints can be computed by any type of digital signal processing or frequency analysis of the signal.
- a frequency analysis is performed in the neighborhood of each landmark timepoint to extract the top several spectral peaks.
- a fingerprint value may then be the single frequency value of the strongest spectral peak.
- the sample analyzer 108 will receive a recording and compute fingerprints of the recording.
- the sample analyzer 108 may compute the fingerprints by contacting additional recognition engines.
- the sample analyzer 108 can then access the database 112 to match the fingerprints of the recording with fingerprints of known audio tracks by generating correspondences between equivalent fingerprints and files in the database 112 to locate a file that has the largest number of linearly related correspondences, or whose relative locations of characteristic fingerprints most closely match the relative locations of the same fingerprints of the recording. That is, linear correspondences between the landmark pairs are identified, and sets are scored according to the number of pairs that are linearly related.
- a linear correspondence occurs when a statistically significant number of corresponding sample locations and file locations can be described with substantially the same linear equation, within an allowed tolerance.
- the file of the set with the highest statistically significant score, i.e., with the largest number of linearly related correspondences is the winning file, and is deemed the matching media file.
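The linear-correspondence scoring above amounts to histogramming time-offset differences between matched fingerprints: if many sample/file pairs agree on one common offset, the pairs are linearly related and the file is a likely match. A minimal sketch, with illustrative input shapes of (fingerprint, time) pairs:

```python
from collections import Counter

def score_match(sample_hashes, file_hashes):
    """For every fingerprint value occurring in both the sample and a file,
    histogram the offset (file time minus sample time). The height of the
    tallest histogram bin is the match score; its bin is the best offset."""
    by_value = {}
    for fp, t in file_hashes:
        by_value.setdefault(fp, []).append(t)
    offsets = Counter()
    for fp, t_sample in sample_hashes:
        for t_file in by_value.get(fp, []):
            offsets[t_file - t_sample] += 1
    if not offsets:
        return 0, None
    best_offset, score = offsets.most_common(1)[0]
    return score, best_offset
```

The winning offset doubles as the relative time offset of the sample into the matched track, which is used later in the description.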
- an audio sample can be analyzed to identify its content using a localized matching technique.
- a relationship between two audio samples can be characterized by first matching certain fingerprint objects derived from the respective samples.
- a set of fingerprint objects, each occurring at a particular location, is generated for each audio sample.
- Each location is determined depending upon the content of a respective audio sample and each fingerprint object characterizes one or more local features at or near the respective particular location.
- a relative value is next determined for each pair of matched fingerprint objects.
- a histogram of the relative values is then generated. If a statistically significant peak is found, the two audio samples can be characterized as substantially matching.
- a time stretch ratio which indicates how much an audio sample has been sped up or slowed down as compared to the original audio track can be determined.
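One way to sketch the time-stretch estimate: given matched fingerprint locations as (sample time, original time) pairs, the ratio of time differences between successive matches estimates the speed change. Taking the median here is a sketch substitution for the histogram-peak approach the publication describes; the names are illustrative:

```python
def estimate_time_stretch(matched_pairs):
    """Estimate how much a sample was sped up or slowed down relative to the
    original track, from matched (t_sample, t_original) landmark pairs.
    Returns the median ratio of original-time deltas to sample-time deltas."""
    pairs = sorted(matched_pairs)
    ratios = []
    for (s0, o0), (s1, o1) in zip(pairs, pairs[1:]):
        if s1 != s0:
            ratios.append((o1 - o0) / (s1 - s0))
    ratios.sort()
    return ratios[len(ratios) // 2] if ratios else None
```

A ratio above 1.0 indicates the sample plays faster than the original (original time advances more per unit of sample time).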
- the reader is referred to published PCT patent application WO 03/091990, to Wang and Culbert, entitled Robust and Invariant Audio Pattern Matching, the entire disclosure of which is herein incorporated by reference as if fully set forth in this description.
- systems and methods described within the publications above may return more than just the identity of an audio sample.
- Wang and Smith may return, in addition to the metadata associated with an identified audio track, the relative time offset (RTO) of an audio sample from the beginning of the identified audio track.
- The fingerprints of the audio sample can be compared with fingerprints of the original files to which they match. Each fingerprint occurs at a given time, so after matching fingerprints to identify the audio sample, the difference in time between a first fingerprint of the audio sample and the matching fingerprint of the stored original file is the relative time offset of the audio sample, e.g., 67 seconds into a song.
- a user may send from a client device a content identification query to a sample analyzer, which may use any of the techniques described herein to identify the content.
- the user's client device may only need to send information relating to a source of the content and a location of the client device to the sample analyzer to identify content to which the user is currently listening.
- The sample analyzer will perform a content identification for a song once; for future queries received within a valid time window from other client devices listening to the same broadcast and located in the geographic area that the broadcast covers, the sample analyzer can return the previous content identification.
- The sample analyzer can identify a recording without having to perform computationally intensive identifications (as described above) by referring to previous identifications made for devices in the same locality.
- the sample analyzer can return the same identification to a second user.
- an allowable time window e.g., time duration of the previously identified song.
- The sample analyzer will not have to do a computationally intensive identification, but rather can rely on the previous stored recognition. In this manner, there could be many queries to identify a song being broadcast on a radio station, and the sample analyzer may only have to perform one computationally intensive identification, store the identification, and mark the identification as being valid for a given time frame.
- FIG. 2 is a flowchart depicting functional blocks of an example method of identifying content based on location of a user, broadcast information and/or stored content identifications.
- a consumer appliance including a broadcast receiver can be used to listen to a broadcast station.
- a user can send a content identification query from the consumer appliance to a request server, providing at least a representation of a broadcast station to which the user is listening, as shown at block 202 .
- the consumer appliance may also send location information to the request server to indicate a geographic location of the consumer appliance, as shown at block 204 . If the broadcast station information is not unique, for example, if the broadcast station information is just a tuning frequency, the location information acts to disambiguate an exact broadcast station.
- the request server uses either the broadcast frequency alone, or the broadcast frequency and the geographic location information to identify a unique broadcast source, as shown at block 206 .
- the request server determines if there is currently cached metadata available for the selected broadcast station, as shown at block 208 .
- Currently cached valid metadata will be available if a broadcast program has already been identified for a previous query on the selected broadcast station within a predetermined interval of time. If there is currently cached metadata available for the broadcast station, then the request server will return an associated cached metadata result to the consumer appliance, as shown at block 210 . If no currently cached metadata is available, then the request server will request the consumer appliance to send a media sample representation to the request server, as shown at block 212 . The request server then routes the media sample to a recognition server for an identification, and sends an associated metadata result back to the consumer appliance, as shown at blocks 214 and 216 .
- the request server then caches (stores) the result as a currently cached metadata for the selected broadcast station for a predetermined length of time, during which the currently cached metadata is valid, as shown at block 218 .
- Caching the current metadata makes it possible to serve requests to many more consumer appliances than would otherwise be possible if each request included a sample recording that had to be identified individually through a recognition server.
- each broadcast program on each broadcast station would only need to be identified once independent of how many consumer devices make requests because the initial identification is shared and used for all subsequent requests pertaining to the same broadcast program (e.g., for all subsequent requests received during the valid time period).
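The FIG. 2 flow (blocks 202-218) can be sketched end to end: disambiguate the station from the tuning frequency plus location, answer from cache if a valid entry exists, and otherwise fall back to a full recognition and cache the result. The station table, function names, and the `recognize` hook are hypothetical:

```python
# Hypothetical (frequency, location) -> station table used to disambiguate a
# non-unique tuning frequency, as described for blocks 202-206.
STATIONS = {
    (101.1, "Boston"): "WXYZ-FM",
    (101.1, "Chicago"): "WABC-FM",
}

def handle_query(frequency, location, cache, recognize, now):
    """Sketch of the request server. `recognize(station)` is assumed to
    return (metadata, seconds_remaining_in_program)."""
    station = STATIONS.get((frequency, location))       # blocks 202-206
    if station is None:
        return None
    cached = cache.get(station)                         # block 208
    if cached and now < cached["valid_until"]:
        return cached["metadata"]                       # block 210
    metadata, remaining = recognize(station)            # blocks 212-216
    cache[station] = {"metadata": metadata,             # block 218
                      "valid_until": now + remaining}
    return metadata
```

Note how the same frequency resolves to different stations in different cities, which is why the location is sent along with the tuning frequency.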
- FIG. 3 is a block diagram illustrating an example client consumer device 302 in communication with a sample analyzer 304 to receive information identifying broadcast content.
- the client consumer device 302 may be a personal computer, stereo receiver, set-top box, mobile phone, MP3 player, and may be able to communicate with the sample analyzer 304 via a wired or wireless data connection.
- the wired data connection could operate over Ethernet, DSL, ISDN, or conventional POTS telephone modem network.
- the wireless data connection may operate according to a short range wireless protocol, such as the Bluetooth® protocol, WiFi or WiMax, or according to a long range wireless protocol, such as CDMA, GSM, or other wireless networks.
- The client consumer device 302 includes a broadcast receiver 306 , a broadcast station selector 308 , a media sampler 310 , a query generator 312 , a global positioning system (GPS) location device 314 , a timestamp clock 316 and a display 318 .
- The broadcast receiver 306 may be any type of general FM/AM transmitter/receiver (or XM satellite radio receiver) operable to receive broadcasts from a radio station.
- The broadcast receiver 306 may even receive an Internet streaming digital broadcast.
- The broadcast station selector 308 is coupled to the broadcast receiver 306 and is able to tune to a specific broadcast frequency (so as to pass only one radio frequency) to an amplifier and loudspeaker (not shown) to be played for a user.
- The broadcast station selector 308 may provide a text string representing a broadcast channel or an Internet address, such as a URL, that represents the broadcast channel. Alternatively, the broadcast station selector 308 may specify a number indicating a tuning frequency.
- The tuning frequency may be used by the broadcast receiver 306 to set an analog, digital, or software tuner, or to access an Internet network address to access a specific broadcast program.
- The media sampler 310 is coupled to the broadcast receiver in order to record a portion of a broadcast.
- A segment of an audio program a few seconds long may be sampled digitally into a file as a numeric array by the media sampler 310 .
- The media sample may be further processed by compression.
- The raw media sample may be processed through a feature extractor to pull out relevant features for content identification.
- One feature extractor known in the art is taught by Wang and Smith, U.S. Pat. No. 6,990,453, which is entirely incorporated by reference, in which a list of spectrogram peaks in time and frequency is extracted from an audio sample.
- Another suitable feature extraction method known in the art is disclosed by Haitsma, et al., in U.S. Patent Application Publication Number 2002/0178410, which is entirely incorporated herein by reference. Feature extraction and compression are not required, but can be used by the media sampler 310 to reduce the amount of data that is transmitted to the sample analyzer 304 , thus saving time and bandwidth costs.
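As a rough illustration of peak-based feature extraction in the spirit of the Wang and Smith reference (this sketch is not the patented algorithm; the function name and the neighborhood rule are invented for the example), one can keep only the time-frequency bins that are local maxima over their neighbors, so a short list of peak coordinates stands in for the raw audio:

```python
def spectrogram_peaks(spec):
    """Return (time, freq) coordinates of local maxima in a 2-D magnitude
    spectrogram, given as a list of rows (time frames) of bin magnitudes.
    Illustrative only; real systems add magnitude thresholds and limits
    on peak density to keep the fingerprint compact."""
    peaks = []
    for t, frame in enumerate(spec):
        for f, mag in enumerate(frame):
            neighbors = []
            for dt in (-1, 0, 1):
                for df in (-1, 0, 1):
                    if dt == 0 and df == 0:
                        continue
                    tt, ff = t + dt, f + df
                    if 0 <= tt < len(spec) and 0 <= ff < len(frame):
                        neighbors.append(spec[tt][ff])
            if neighbors and mag > max(neighbors):
                peaks.append((t, f))
    return peaks
```

Transmitting only such peak coordinates, rather than the sampled waveform, is one way the media sampler could reduce the data sent to the sample analyzer.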
- The query generator 312 may also send a geographic location of the client consumer device 302 along with the query, and may receive the geographic location from the GPS device 314 .
- The mechanism by which the GPS device 314 determines a position of the client consumer device 302 can be device-based and/or network-based.
- The GPS device 314 is a GPS receiver for receiving from a GPS satellite system an indication of the client consumer device's current position.
- The GPS device 314 may send a position determination request into a wireless network, and the network may respond to the GPS device 314 by providing the GPS device 314 with an indication of the GPS device's position.
- The network may determine the GPS device's position by querying the GPS device according to the specification “Position Determination Service Standard for Dual Mode Spread Spectrum Systems,” TIA/EIA/IS-801, published in October 1999 and fully incorporated herein by reference, which defines a set of signaling messages between a device and network components to provide a position determination service so as to determine a location of the device.
- The GPS device 314 may operate via a reverse-lookup protocol using an IP address of the client consumer device 302 to obtain an approximate location.
- The IP address of the client consumer device 302 may be assigned by a network provider, and a geographic location of the IP address can be included within registration information of the owner of the IP address. Either the IP address of the client consumer device 302 or an IP address of a gateway in the path to the server may be used.
- The GPS device 314 can provide sufficient information to indicate an approximate position by sending its IP address, and the derivation of the position may be performed at the client consumer device 302 or at the sample analyzer 304 .
- The IP address will include information from which a location can be ascertained, or may even include a reference number indicative of a physical location.
- The GPS device 314 is optional and is only used if the broadcast station selector 308 does not uniquely specify a broadcast station, for example, if the broadcast station selector 308 specifies only a tuning frequency rather than a tuning frequency together with additional information pertaining to a broadcast station (e.g., a broadcast station name). Location information disambiguates the broadcast station, since only one station in a given geographical vicinity may use the same frequency. For purposes of the present application, the accuracy of the GPS device 314 does not need to be extremely high. Other means for localization may be employed, working in conjunction with the sample analyzer 304 , such as triangulation through mobile phone data network transmission towers. For fixed-location consumer appliances such as a set-top box, the location information may be specified by a zip code or a residential address stored in a data string, for example.
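The frequency-plus-location disambiguation just described can be sketched as a lookup; the station table, flat Euclidean distance, and function name here are illustrative simplifications, not part of the disclosed system.

```python
def disambiguate_station(tuning_freq, location, stations):
    """Pick the unique station broadcasting tuning_freq near `location`.

    stations: dict name -> (freq, (center_x, center_y), radius)
    Flat Euclidean distance to a circular coverage area is an illustrative
    stand-in for a real coverage map. Returns the station name, or None if
    zero or multiple stations match (still ambiguous)."""
    matches = []
    for name, (freq, center, radius) in stations.items():
        if freq == tuning_freq:
            dx, dy = location[0] - center[0], location[1] - center[1]
            if (dx * dx + dy * dy) ** 0.5 <= radius:
                matches.append(name)
    return matches[0] if len(matches) == 1 else None
```

Two stations on the same frequency in different cities resolve to different names depending on where the query originates, which is exactly the ambiguity the location information removes.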
- A user may then use the query generator 312 to send a content identification query to the sample analyzer 304 to receive information pertaining to the identity of the content.
- The query generator 312 may also send a timestamp from the timestamp clock 316 along with the query.
- The sample analyzer 304 will return metadata to the client consumer device 302 for display on the metadata display 318 , which may be any typical display device.
- The sample analyzer 304 includes a request server 320 , a recognition server 322 , a metadata cache temporary storage 324 and a timestamp clock 326 .
- The request server 320 receives content identification queries from the client consumer device 302 and returns metadata pertaining to an identification of the content.
- The recognition server 322 operates to perform a computational identification of an audio sample, using any of the methods described herein, such as those described within Kenyon, U.S. Pat. No. 5,210,820.
- The recognition server 322 will also identify a real-time offset of the audio sample from the original recording, as described within U.S. Patent Application Publication US 2002/0083060, to Wang and Smith, to determine a time for which the identification of the audio sample is valid and may be returned in response to future queries.
- The request server 320 and/or the recognition server 322 can estimate endpoints of the broadcast program by noting a timestamp of a beginning of the media sample and subtracting off the relative time offset (RTO) to obtain a segment start time, and then further adding a length of the broadcast program (known after making the content identification) to obtain a segment end time.
- The segment start and end times can be used to calculate a time interval of validity during which the cached metadata for the identified broadcast program is valid. For example, if the RTO indicates that the sample is 50 seconds into the song, then once the content identification is made, the identity and length of the song are known, and thus the time remaining for which the song will be played can be calculated.
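The endpoint arithmetic above can be written out directly; the function and parameter names are illustrative.

```python
def validity_interval(sample_timestamp, relative_time_offset, program_length):
    """Estimate when the identified program started and will end.

    sample_timestamp:     clock time at which the media sample began
    relative_time_offset: seconds into the original recording (the RTO)
    program_length:       total length of the identified program, in seconds
    Returns (segment_start, segment_end), the interval during which a cached
    identification for this station remains valid."""
    segment_start = sample_timestamp - relative_time_offset   # subtract the RTO
    segment_end = segment_start + program_length              # add program length
    return segment_start, segment_end
```

For the worked example in the text: a sample taken 50 seconds into a 200-second song at clock time 1000 gives a segment start of 950, a segment end of 1150, and thus 150 seconds of remaining validity.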
- During that remaining time, the request server 320 would simply return the previously stored identity of the song.
- The recognition server 322 may return, in addition to the usual metadata identifying the song, both a relative time offset from the beginning of the identified broadcast program corresponding to the start of the media sample and a length of the identified broadcast program.
- The recognition algorithms by Wang and Smith or by Haitsma, et al. (references cited above) can provide such information.
- The recognition server 322 will then note the broadcast station from which the sample was recorded, and then store all the information in the metadata cache 324 , in a format as shown in Table 1 below, for example.
- The metadata cache 324 may correlate content identifications (e.g., names of songs) with a broadcast station and a time of validity.
- The time of validity indicates how long the content identification for the specified broadcast station is valid.
- The time of validity may be a remaining length of the song, so that if another user sends in a query for this broadcast station during the time of validity (e.g., during broadcast of the same song), then the content identification of the song is still valid and still correct.
- The time of validity may also be a time corresponding to a length of the song, and the request server 320 will then note the timestamp in the content identification request to determine whether the cached metadata is still valid.
- The request server 320 will receive the content identification query from the client consumer device 302 , identify a broadcast station from the query, and determine if there is a currently cached metadata result available and valid for the selected broadcast station within the metadata cache 324 . As explained, currently cached metadata will be available if the recognition server 322 has already identified the broadcast program on the selected broadcast station within a predetermined interval of time in the past.
- If there is currently cached metadata available for the selected broadcast station, then the request server 320 returns the associated cached metadata content identification result to the client consumer device 302 . Furthermore, the time interval of validity, or at least an endpoint of a song, may also be returned in the metadata to the client consumer device 302 . The client consumer device 302 can then synchronize update times indicating when to next query the request server 320 for an identity of the next song (e.g., which will start after the end of the previous time interval of validity), thus minimizing a delay in updating program metadata between broadcast programs.
- If no currently cached metadata is available, the request server 320 will request the client consumer device 302 to send a media sample representation to the request server 320 for identification.
- The request server 320 will route the media sample to the recognition server 322 , which performs a computational identification and sends an associated metadata result back to the request server 320 , which forwards the result back to the client consumer device 302 .
- The request server 320 will also cache the result as the currently cached metadata for the selected broadcast station, together with a predetermined length of time during which the currently cached metadata is valid. Caching of the current metadata enables the request server 320 to serve requests from many more consumer appliance clients than would otherwise be possible if each request had to be computationally identified individually through the recognition server 322 .
- FIG. 4 illustrates a conceptual example of multiple content identification queries occurring serially in time during a song.
- A first song is being broadcast by a radio station at a start time T m and the song has an end time of T n and thus a length of (T n −T m ).
- A first content identification query is received at time T 1 , which is after the start of the first song, and so the content identification query is performed to identify the first song.
- The identity of the first song is then stored, and sent to a device requesting the first query.
- If a second content identification query is received at time T 2 , which is before the end time T n of the first song, then the stored information pertaining to the response that was sent to the first query is also sent in response to the second query.
- No second or additional computational content identification is needed. For all content identification queries received after the first query (e.g., time T 1 ) and before the end of the song (e.g., time T n ), the result from the first computational content identification is returned.
- The client consumer device 302 can synchronize update times indicating when to next query the request server 320 for an identity of the next song (e.g., which will start after the end of the previous time interval of validity or soon thereafter) to minimize a delay in updating program metadata between broadcast programs.
- The next song begins broadcasting at a time T x , and thus during the time T n to T x no songs are broadcast.
- A broadcast station may air commercials or DJ talk during this time, for example.
- A client consumer device may be programmed to next query for content identification at least a few seconds after the end time of the previously identified song.
- A client consumer device may programmatically (or automatically) query the request server 320 to receive content identifications of every song being broadcast and received at the client consumer device so as to constantly receive updated program metadata.
- Metadata may also be automatically displayed on a client consumer device, while a broadcast receiver application is open and operating.
- FIG. 5 illustrates an example display of broadcast metadata on a mobile device.
- The display may indicate radio station information (104.5 FM), a song title, an artist name, and a time remaining for the song. Other information may also be displayed as well.
- The mobile device may continually receive new metadata with new information pertaining to a current song being played, and may update the display accordingly.
- The metadata update may be sent in response to a query by the client consumer device 302 , or alternatively may be pushed proactively by the sample analyzer 304 to the client consumer device 302 , as long as the client consumer device 302 continues to indicate that it is still tuned to the same broadcast station. In this manner, the data can be sent without a request to continue updating the metadata information.
- The client consumer device 302 sends broadcast station information to the sample analyzer 304 , and the sample analyzer 304 usually will be able to discern to which broadcast station the client consumer device 302 is listening based on that information.
- The sample analyzer may also attempt to determine a broadcast source by using external monitoring systems. For example, samples from broadcast channels may be monitored and each broadcast sample may be time-stamped in terms of a “real-time” offset from a common time base, and an estimated time offset of the broadcast sample within the “original” recording is determined (using the technique of Wang and Smith described in U.S. Patent Application Publication US 2002/0083060, the entire disclosure of which is herein incorporated by reference).
- User sample characteristics received by the sample analyzer 304 can be compared with characteristics from broadcast samples that were taken at or near the time the user sample was recorded to identify a match. If the real-time offsets are within a certain tolerance, e.g., one second, then the user audio sample is considered to be originating from the same source as the broadcast sample, since the probability that a random performance of the same audio content (such as a hit song) is synchronized to less than one second in time is low. Additional factors may also be considered when attempting to match the audio sample to a broadcast source.
- User samples can be taken over a longer period of time, e.g., longer than a typical audio program, such as over a transition between audio programs on the same channel, to verify continuity of identity over a program transition as an indicator that the correct broadcast channel is being tracked.
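The one-second-tolerance matching against monitored channels can be sketched as follows; the data shapes and names here are invented for illustration, not part of the disclosure.

```python
def match_broadcast_source(user_sample, monitored, tolerance=1.0):
    """Find which monitored broadcast channel a user sample came from.

    user_sample: (content_id, real_time_offset) for the user's recording
    monitored:   dict channel -> (content_id, real_time_offset) from the
                 external monitoring system, on a common time base
    A channel matches when it carries the same content and its real-time
    offset agrees within `tolerance` seconds (one second by default), since
    an independent playback synchronized that closely is unlikely."""
    content_id, user_rto = user_sample
    for channel, (mon_id, mon_rto) in monitored.items():
        if mon_id == content_id and abs(user_rto - mon_rto) <= tolerance:
            return channel
    return None
```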
- If the broadcast station selector 308 of the client consumer device 302 does not uniquely describe a single broadcast station, then location information from the GPS device 314 is also sent along with the query (either within the query message or as a separate message) to the request server 320 .
- The request server 320 may then access the metadata cache 324 and identify a broadcast station that broadcasts within an area of the location of the client consumer device 302 .
- The request server 320 can look to a table, such as Table 1, to verify that station “104.5” broadcasts to San Francisco, which is where the client consumer device 302 may be located, and return the metadata result describing the program playing at the time.
- If no valid cached metadata is available, the request server 320 will ask the client consumer device 302 to send a media sample representation so that the sample can be identified.
- The recognition server 322 will then computationally identify the sample and return a metadata result.
- The metadata result is then sent to the client consumer device 302 and displayed to a user.
- FIG. 6 illustrates a conceptual block diagram of a coverage area map for two radio stations.
- Radio Station 104.5 WMQD has a coverage area 602
- Radio Station 96.5 WGRD has a coverage area 604
- A second Radio Station 96.5 WGRD has a coverage area 606 .
- Mobile device 608 is within coverage area 602 and mobile device 610 is within coverage area 604 while mobile device 612 is within both coverage areas 602 and 604 .
- Mobile device 614 is within coverage area 606 .
- The mobile devices may send a content identification query through a wireless network 616 via a wireless link 618 to a server 620 , which includes functionality and/or components comprising a sample analyzer, as described above in FIG. 3 , to identify broadcast content received from the Radio Stations.
- The server 620 may have the map, as shown in FIG. 6 , of the coverage areas of the Radio Stations, and using location information received from the mobile devices, can determine to which radio station the mobile device is listening. However, for mobile devices 610 , 612 and 614 , the server 620 may also require additional information, such as the location of the mobile device, because the frequency information alone will not be enough to distinguish the radio stations.
- A self-organizing broadcast station mapping system may be derived if no map of physical broadcast stations is available. Initially, it is not known where each broadcast radio station is located; however, it is desired to determine for each broadcast station its coverage area. A coverage map may be formed from many samples taken by many client consumer appliances over a period of time. Referring back to FIG. 3 , to construct a coverage area map, each query received at the request server 320 may include a tuning frequency, a GPS location, and a media sample. Each query is initially routed to the recognition server 322 for identification of the metadata using the computational identification technique.
- The metadata is checked to see if the identified programs correspond to each other. This is performed, for example, by determining whether the metadata matches, and then a temporal correspondence is verified, for example, by determining whether the time intervals of validity match. If both media samples are determined to be the same, then the request server 320 will have two geographic locations to which the tuning frequency broadcasts (e.g., if the metadata and the intervals match, then the two users are declared to be tuned to the same unknown broadcast station).
- The two corresponding GPS locations are grouped into a set of locations belonging to the unknown broadcast station that has the same broadcast station selector (e.g., tuning frequency).
- A coverage map may be generated from the set of locations by convolving with a disc of predetermined radius, e.g., 0.5 or 1 kilometer. In other words, a locality zone of predetermined radius is drawn around each point in the set of locations.
- Each unknown broadcast station is thus associated with a corresponding coverage map, and furthermore, is associated with currently cached metadata from the most recent recognition of a media sample associated with the unknown broadcast station.
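The convolve-with-a-disc construction of a coverage map can be evaluated lazily as a point-in-union-of-discs test; the function name and planar distance are illustrative simplifications of geodesic distance.

```python
def in_coverage(point, locations, radius=1.0):
    """Test whether `point` falls inside the coverage map formed by drawing
    a locality zone (disc) of `radius` around each observed listener
    location. This is the convolution-with-a-disc description evaluated
    on demand, rather than rasterized into an explicit map."""
    px, py = point
    for (lx, ly) in locations:
        if ((px - lx) ** 2 + (py - ly) ** 2) ** 0.5 <= radius:
            return True   # within one locality zone
    return False
```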
- When a subsequent query is received, a search is performed to find a broadcast station that has the same broadcast station selector and a coverage map that overlaps the query's GPS location.
- If such a station is found and its currently cached metadata is still valid, a media identification by the recognition server is not performed and the current metadata is returned. Otherwise, a media identification is performed by the recognition server and the resulting metadata becomes the currently cached metadata for that broadcast station.
- If no such station is found, a media identification is performed. If the resulting metadata and time interval of validity match those of a known broadcast station that has the same broadcast station selector (e.g., tuning frequency), then the new GPS location can be added to that broadcast station's set of locations and the associated coverage map can be updated. If no matching broadcast station is found, then a new record for a new broadcast station is generated.
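The self-organizing bookkeeping for unknown broadcast stations can be sketched as follows; the record fields, the `identify` callable, and the function name are all invented for illustration.

```python
def update_station_records(records, query, identify):
    """Self-organizing station bookkeeping sketch.

    records:  list of dicts with keys 'selector', 'metadata', 'valid',
              'locations' (one dict per unknown broadcast station)
    query:    dict with 'selector' (e.g. tuning frequency), 'location',
              and 'sample'
    identify: callable sample -> (metadata, validity_interval)
    A new sample either extends the location set of a station whose
    selector, metadata, and validity interval all match, or it founds a
    new station record."""
    metadata, valid = identify(query["sample"])
    for rec in records:
        if (rec["selector"] == query["selector"]
                and rec["metadata"] == metadata and rec["valid"] == valid):
            rec["locations"].append(query["location"])   # grow coverage set
            return rec
    rec = {"selector": query["selector"], "metadata": metadata,
           "valid": valid, "locations": [query["location"]]}
    records.append(rec)
    return rec
```

Two concurrent queries on the same frequency that resolve to the same program merge into one station record, while a different frequency founds a second record.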
- Raw audio samples received from broadcast stations can be identified using known computational identification techniques, and the identification can be stored and returned to subsequent queries associated with the same broadcast source during a time of validity. If many users are listening to the same broadcast program and are making the same query, much time can be saved by performing one computational audio pattern recognition and returning the result to all users, rather than performing a computational identification of content for every user (when doing so would repeat many identifications).
- Any of the embodiments described above may be used together or in any combination to enhance certainty of identifying samples in the data stream.
- Many of the embodiments may be performed using a consumer device that has a broadcast stream receiving means (such as a radio receiver), and either (1) a data transmission means for communicating with a central identification server for performing the identification step, or (2) a means for carrying out the identification step built into the consumer device itself (e.g., an audio recognition database could be loaded onto the consumer device).
- The consumer device may include means for updating a database to accommodate identification of new audio tracks, such as an Ethernet or wireless data connection to a server, and means to request a database update.
- The consumer device may also further include local storage means for storing recognized segmented and labeled audio track files, and the device may have playlist selection and audio track playback means, as in a jukebox, for example.
- The mechanisms described above can be implemented in software that is used in conjunction with a general purpose or application specific processor and one or more associated memory structures. Nonetheless, other implementations utilizing additional hardware and/or firmware may alternatively be used.
- The mechanism of the present application is capable of being distributed in the form of a computer-readable medium of instructions in a variety of forms, and the present application applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution. Examples of such computer-accessible devices include computer memory (RAM or ROM), floppy disks, and CD-ROMs, as well as transmission-type media such as digital and analog communication links.
- Video files may be identified using techniques similar to those for identifying audio files, including scanning a video file to find digital markings (e.g., fingerprints) unique to the file, and checking a database of videos to identify videos that have similar markings.
- Fingerprint technology can identify audio or video by extracting specific characterization parameters of a file, which are translated into a bit string or fingerprint, and comparing the fingerprints of the file with the fingerprints of previously stored original files in a central database.
- For more information on video recognition technologies, the reader is referred to U.S. Pat. No. 6,714,594, entitled “Video content detection method and system leveraging data-compression constructs,” the contents of which are herein incorporated by reference as if fully set forth in this description.
- The apparatus and methods described herein may be implemented in hardware, software, or a combination of the two, such as a general purpose or dedicated processor running a software application through volatile or non-volatile memory.
Abstract
Description
- The present patent application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application Ser. No. 60/848,941, filed on Oct. 3, 2006, the entirety of which is herein incorporated by reference. The present patent application also claims priority to U.S. patent application Ser. No. 11/866,814, filed on Oct. 3, 2007, the entirety of which is herein incorporated by reference. The present patent application also claims priority to U.S. patent application Ser. No. 12/976,050, filed on Dec. 22, 2010, the entirety of which is herein incorporated by reference. The present patent application also claims priority to U.S. patent application Ser. No. 13/309,222, filed on Dec. 1, 2011, the entirety of which is herein incorporated by reference. The present patent application also claims priority to U.S. patent application Ser. No. 13/868,708, filed on Apr. 23, 2013, the entirety of which is herein incorporated by reference. The present patent application also claims priority to U.S. patent application Ser. No. 14/672,881, filed on Mar. 30, 2015, the entirety of which is herein incorporated by reference.
- The present invention generally relates to identifying content within broadcasts, and more particularly, to identifying information about segments or excerpts of content within a data stream.
- As industries move toward multimedia rich working environments, usage of all forms of audio and visual content representations (radio broadcast transmissions, streaming video, audio canvas, visual summarization, etc.) becomes more frequent. Whether a user, content provider, or both, everybody searches for ways to optimally utilize such content. For example, one method that has much potential for creative uses is content identification. Enabling a user to identify content that the user is listening to or watching offers a content provider new possibilities for success.
- Content identification may be used in a service provided for a consumer device (e.g., a cell phone), which includes a broadcast receiver, to supply broadcast program metadata to a user. For example, title, artist, and album information can be provided to the user on the device for broadcast programs as the programs are being played on the device. Existing systems that provide content information of a broadcast signal to a user may only provide limited metadata, as with a Radio Data System (RDS) signal. In addition, existing systems may not monitor every broadcast station in every locale, and a desired radio station map may not always be available.
- Still further, other existing systems may require the consumer device to sample/record a broadcast program and to send the sample of the broadcast program to a recognition server for direct identification. The computational cost of performing a recognition on one media sample may be small. However, potentially many millions of consumer devices may be active at the same time, and if each were to query the server once per minute, the recognition server would have to perform millions of recognitions every minute; the aggregate computational cost then becomes significant. Such a system may only be able to allow a time budget of a few microseconds or less per recognition request, which is a few orders of magnitude smaller than typical processing times for media content identification. Furthermore, since broadcast media is often presented as a continuous stream without segmentation markers, providing program metadata that is timely and synchronized with the current program by a brute-force sample-and-query method could require fine-granularity sampling intervals, thus increasing the required query load even more.
- In the field of broadcast monitoring and subsequent content identification, it is desirable to identify as much audio content as possible, within every locale, while minimizing effort expended. The present application provides techniques for doing so.
- Within embodiments disclosed herein, a method of identifying content within a data stream is provided. The method includes receiving a content identification query from a client device that requests an identity of content that was broadcast from a broadcast source. If content from the broadcast source has previously been identified and if the content identification query has been received at a time during which the content is still being broadcast from the source, the method includes sending the previous identification of the content to the client device. However, if not, the method includes (i) performing a content identification using a sample of the content broadcast from the broadcast source, and (ii) storing the content identification.
- In another embodiment, the method includes receiving a content identification query from a client device that requests an identity of content being broadcast from a broadcast source and including information pertaining to the broadcast source of the content. The method also includes accessing a cache including a listing of content identifications that were each generated using a content sample, and each listing includes information pertaining to identity of content broadcast from a plurality of broadcast sources and each item in the listing including (i) an identity of given content, (ii) an identity of a given broadcast source that broadcast the given content, and (iii) an indication of when the content identification is valid. The method also includes matching the broadcast source of the content to a broadcast source of one of the content samples from which any of the content identifications were generated, and if the content identification query was received during a time in which the content identification in the listing pertaining to the one of the content samples is still valid, sending the content identification in the listing pertaining to the one of the content samples to the client device in response to the content identification query.
- In still another embodiment, the method includes receiving a first content identification query from a first client device that includes a recording of a sample of content being broadcast from a first source, making a content identification using the sample of the content, determining a time during which the content will be or is being broadcast from the first source, and storing the content identification, the time, and information pertaining to the first source of the content in a cache. The method also includes receiving a second content identification query from a second client device that requests an identity of content being broadcast from a second source and including information pertaining to the second source of the content. The method further includes if the first source and the second source are the same and if the time has not expired, (i) sending the content identification made in response to the first content identification query to the second client device in response to the second content identification query, and if not, (ii) making a second content identification using a sample of the content being broadcast from the second source and storing the second content identification in the cache.
- These as well as other features, advantages and alternatives will become apparent to those of ordinary skill in the art by reading the following detailed description, with appropriate reference to the accompanying drawings.
-
FIG. 1 illustrates one example of a system for identifying content within an audio stream. -
FIG. 2 is a flowchart depicting functional blocks of an example method of identifying content based on location of a user, broadcast information and/or stored content identifications. -
FIG. 3 is a block diagram illustrating an example client consumer device in communication with a sample analyzer to receive information identifying broadcast content. -
FIG. 4 illustrates a conceptual example of multiple content identification queries occurring serially in time during a song. -
FIG. 5 illustrates an example display of broadcast metadata on a mobile phone. -
FIG. 6 illustrates a conceptual block diagram of an example coverage area map for two radio stations. - Within exemplary embodiments described below, a method for identifying content within data streams is provided. The method may be applied to any type of data content identification. In the following examples, the data is an audio data stream. The audio data stream may be a real-time data stream or an audio recording, for example.
- Exemplary embodiments describe methods for identifying content by identifying a source (e.g., channel, stream, or station) of the content transmission, and a location of a device requesting the content identification. For example, it may be desirable to detect from a free-field audio sample of a radio broadcast which radio station a user is listening to, as well as to what song the user is listening. Exemplary embodiments described below illustrate a method and apparatus for identifying a broadcast source of desired content, and for identifying content broadcast from the source. In one embodiment, a user can utilize an audio sampling device including a microphone and optional data transmission means to identify content from a broadcast source. The user may hear an audio program being broadcast from some broadcast means, such as radio or television, and can record a sample of the audio using the audio sampling device. The sample, broadcast source information, and optionally a location of the audio sampling device are then conveyed to an analyzing means to identify the content. Content information may then be reported back to the user.
- The identity and information within a query (broadcast source information and, optionally, location information) are then stored. If a second user subsequently sends a content identification query for the same broadcast source, and the query is received within a given time frame, then the stored content identity can be returned as a result to the second user. The query would need to be received during a time in which the same song is being broadcast by the same broadcast source, so that the second user would effectively be asking to identify the same song that was previously identified in response to the first query. In this manner, for all queries received after a first query, during the broadcast of the song to which the query pertains, and pertaining to the same broadcast source, the stored response to the first query can be returned. As a result, only one computational content identification needs to be performed, because the result can be stored for later retrieval if subsequent content queries satisfy the requirements (e.g., if subsequent content queries are considered to be for the same song).
- Referring now to the figures,
FIG. 1 illustrates one example of a system for identifying content within other data content, such as identifying a song within a radio broadcast. The system includes radio stations, such as radio station 102, which may be a radio or television content provider, for example, that broadcasts audio streams and other information to a receiver 104. The receiver 104 receives the broadcast radio signal using an antenna 106 and converts the signal into sound. The receiver 104 may be a component within any number of consumer devices, such as a portable computer or cell phone. The receiver 104 may also include a conventional AM/FM tuner and other amplifiers as well to enable tuning to a desired radio broadcast channel. - The
receiver 104 can record portions of the broadcast signal (e.g., an audio sample) for identification. The receiver 104 can send, over a wired or wireless link, a recorded broadcast to a sample analyzer 108 that will identify information pertaining to the audio sample, such as track identities (e.g., song title, artist, or other broadcast program information). The sample analyzer 108 includes an audio search engine 110 and may access a database 112 containing audio sample and broadcast information, for example, to compare the received audio sample with stored information so as to identify tracks within the received audio stream. Once tracks within the audio stream have been identified, the track identities or other information may be reported back to the receiver 104. - Alternatively, the
receiver 104 may receive a broadcast from the radio station 102, and perform some initial processing on a sample of the broadcast so as to create a fingerprint of the broadcast sample. The receiver 104 could then send the fingerprint information to the sample analyzer 108, which will identify information pertaining to the sample based on the fingerprint alone. In this manner, more computation or identification processing can be performed at the receiver 104, rather than at the sample analyzer 108. - The
database 112 may include many recordings, and each recording has a unique identifier (e.g., sound_ID). The database 112 itself does not necessarily need to store the audio files for each recording, since the sound_IDs can be used to retrieve audio files from elsewhere. A sound database index may be very large, containing indices for millions or even billions of files, for example. New recordings can be added incrementally to the database index. - The system of
FIG. 1 allows songs to be identified based on stored information. While FIG. 1 illustrates a system that has a given configuration, the components within the system may be arranged in other manners. For example, the audio search engine 110 may be separate from the sample analyzer 108, or audio sample processing can occur at the receiver 104 or at the sample analyzer 108. Thus, it should be understood that the configurations described herein are merely exemplary in nature, and many alternative configurations might also be used. - The system in
FIG. 1, and in particular the sample analyzer 108, identifies content within an audio stream using samples of the audio within the audio stream. Various audio sample identification techniques are known in the art for performing computational content identifications of audio samples and features of audio samples using a database of audio tracks. The following patents and publications describe possible examples for audio recognition techniques, and each is entirely incorporated herein by reference, as if fully set forth in this description. -
- Kenyon et al, U.S. Pat. No. 4,843,562, entitled “Broadcast Information Classification System and Method”
- Kenyon, U.S. Pat. No. 5,210,820, entitled “Signal Recognition System and Method”
- Haitsma et al, International Publication Number WO 02/065782 A1, entitled “Generating and Matching Hashes of Multimedia Content”
- Wang and Smith, International Publication Number WO 02/11123 A2, entitled “System and Methods for Recognizing Sound and Music Signals in High Noise and Distortion”
- Wang and Culbert, International Publication Number WO 03/091990 A1, entitled “Robust and Invariant Audio Pattern Matching”
- Wang, Avery, International Publication Number WO 05/079499 (also published as U.S. Pat. No. 7,986,913), entitled “Method and Apparatus for Identification of Broadcast Source”
- Briefly, identifying features of an audio recording begins by receiving the recording and sampling the recording at a plurality of sampling points to produce a plurality of signal values. A statistical moment of the signal can be calculated using any known formulas, such as that noted in U.S. Pat. No. 5,210,820, for example. The calculated statistical moment is then compared with a plurality of stored signal identifications and the recording is recognized as similar to one of the stored signal identifications. The calculated statistical moment can be used to create a feature vector that is quantized, and a weighted sum of the quantized feature vector is used to access a memory that stores the signal identifications.
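The moment-and-quantize approach above can be sketched roughly as follows. This is an illustrative reading only: the function names, the choice of the second moment, the frame count, and the quantization levels are my own assumptions, not the specific formulas of U.S. Pat. No. 5,210,820.

```python
import numpy as np

def second_moment(signal_values):
    """Second statistical moment (mean of squared sample values) of the signal."""
    x = np.asarray(signal_values, dtype=float)
    return float(np.mean(x ** 2))

def quantized_feature(signal_values, n_frames=4, levels=8):
    """Split the recording into frames, compute a moment per frame, and
    quantize each moment into one of `levels` bins to form a feature vector
    that could index a memory of stored signal identifications."""
    x = np.asarray(signal_values, dtype=float)
    frames = np.array_split(x, n_frames)
    moments = np.array([np.mean(f ** 2) for f in frames])
    # Normalize before quantizing so the bins are comparable across recordings.
    norm = moments / (moments.max() + 1e-12)
    return np.minimum((norm * levels).astype(int), levels - 1)
```

In this sketch, the quantized feature vector would serve as the memory address (or a weighted sum of its entries would), with the stored identification retrieved from that location.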
- In another example, generally, audio content can be identified by identifying or computing characteristics or fingerprints of an audio sample and comparing the fingerprints to previously identified fingerprints. The particular locations within the sample at which fingerprints are computed depend on reproducible points in the sample. Such reproducibly computable locations are referred to as “landmarks.” The location within the sample of the landmarks can be determined by the sample itself, i.e., is dependent upon sample qualities and is reproducible. That is, the same landmarks are computed for the same signal each time the process is repeated. A landmarking scheme may mark about 5-10 landmarks per second of sound recording; of course, landmarking density depends on the amount of activity within the sound recording. One landmarking technique, known as Power Norm, is to calculate the instantaneous power at many time points in the recording and to select local maxima. One way of doing this is to calculate the envelope by rectifying and filtering the waveform directly. Another way is to calculate the Hilbert transform (quadrature) of the signal and use the sum of the magnitudes squared of the Hilbert transform and the original signal. Other methods for calculating landmarks may also be used.
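The rectify-and-filter variant of Power Norm landmarking described above can be sketched as follows. The window length, minimum landmark spacing, and threshold are assumed values for illustration, not parameters given in the text.

```python
import numpy as np

def power_norm_landmarks(samples, rate, win_s=0.05, min_spacing_s=0.1):
    """Power Norm landmarking sketch: rectify the waveform, smooth it to get
    an instantaneous-power envelope, then keep local maxima (at least
    `min_spacing_s` apart, above a small threshold) as landmark times."""
    x = np.abs(np.asarray(samples, dtype=float))          # rectify
    n = int(win_s * rate)
    env = np.convolve(x, np.ones(n) / n, mode="same")     # smoothed envelope
    spacing = int(min_spacing_s * rate)
    threshold = 0.1 * env.max()
    landmarks = []
    i = 1
    while i < len(env) - 1:
        if env[i] >= threshold and env[i] >= env[i - 1] and env[i] > env[i + 1]:
            landmarks.append(i / rate)   # local maximum of the power envelope
            i += spacing                 # enforce minimum spacing
        else:
            i += 1
    return landmarks
```

The Hilbert-transform variant mentioned in the text would replace the rectify-and-filter step with the sum of the squared magnitudes of the signal and its quadrature.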
- Once the landmarks have been computed, a fingerprint is computed at or near each landmark time point in the recording. The nearness of a feature to a landmark is defined by the fingerprinting method used. In some cases, a feature is considered near a landmark if it clearly corresponds to the landmark and not to a previous or subsequent landmark. In other cases, features correspond to multiple adjacent landmarks. The fingerprint is generally a value or set of values that summarizes a set of features in the recording at or near the time point. In one embodiment, each fingerprint is a single numerical value that is a hashed function of multiple features. Other examples of fingerprints include spectral slice fingerprints, multi-slice fingerprints, LPC coefficients, cepstral coefficients, and frequency components of spectrogram peaks.
- Fingerprints can be computed by any type of digital signal processing or frequency analysis of the signal. In one example, to generate spectral slice fingerprints, a frequency analysis is performed in the neighborhood of each landmark timepoint to extract the top several spectral peaks. A fingerprint value may then be the single frequency value of the strongest spectral peak. For more information on calculating characteristics or fingerprints of audio samples, the reader is referred to U.S. Patent Application Publication US 2002/0083060, to Wang and Smith, entitled “System and Methods for Recognizing Sound and Music Signals in High Noise and Distortion,” the entire disclosure of which is herein incorporated by reference as if fully set forth in this description.
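The single-value spectral-slice fingerprint described above can be sketched as follows, assuming an FFT of a short window centred on the landmark (the window length is an assumed value):

```python
import numpy as np

def spectral_slice_fingerprint(samples, rate, landmark_s, win_s=0.064):
    """Spectral-slice fingerprint sketch: take a short window centred on the
    landmark time and return the frequency of the strongest spectral peak as
    the fingerprint value (the simplest single-number variant)."""
    n = int(win_s * rate)
    centre = int(landmark_s * rate)
    start = max(0, centre - n // 2)
    window = np.asarray(samples[start:start + n], dtype=float)
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / rate)
    return freqs[int(np.argmax(spectrum))]
```

A fuller fingerprint would keep the top several peaks or combine values from adjacent landmarks rather than a single frequency.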
- Thus, the
sample analyzer 108 will receive a recording and compute fingerprints of the recording. Thesample analyzer 108 may compute the fingerprints by contacting additional recognition engines. To identify the recording, thesample analyzer 108 can then access thedatabase 112 to match the fingerprints of the recording with fingerprints of known audio tracks by generating correspondences between equivalent fingerprints and files in thedatabase 112 to locate a file that has the largest number of linearly related correspondences, or whose relative locations of characteristic fingerprints most closely match the relative locations of the same fingerprints of the recording. That is, linear correspondences between the landmark pairs are identified, and sets are scored according to the number of pairs that are linearly related. A linear correspondence occurs when a statistically significant number of corresponding sample locations and file locations can be described with substantially the same linear equation, within an allowed tolerance. The file of the set with the highest statistically significant score, i.e., with the largest number of linearly related correspondences, is the winning file, and is deemed the matching media file. - As yet another example of a technique to identify content within the audio stream, an audio sample can be analyzed to identify its content using a localized matching technique. For example, generally, a relationship between two audio samples can be characterized by first matching certain fingerprint objects derived from the respective samples. A set of fingerprint objects, each occurring at a particular location, is generated for each audio sample. Each location is determined depending upon the content of a respective audio sample and each fingerprint object characterizes one or more local features at or near the respective particular location. A relative value is next determined for each pair of matched fingerprint objects. 
A histogram of the relative values is then generated. If a statistically significant peak is found, the two audio samples can be characterized as substantially matching. Additionally, a time stretch ratio, which indicates how much an audio sample has been sped up or slowed down as compared to the original audio track, can be determined. For a more detailed explanation of this method, the reader is referred to published PCT patent application WO 03/091990, to Wang and Culbert, entitled “Robust and Invariant Audio Pattern Matching,” the entire disclosure of which is herein incorporated by reference as if fully set forth in this description.
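One simple concrete instance of this histogram-peak idea uses the difference in fingerprint times as the relative value: for a true match, the time differences between matched fingerprint pairs pile into one histogram bin. This sketch assumes fingerprints are (value, time) pairs; the names are illustrative.

```python
from collections import Counter

def offset_match_score(sample_prints, track_prints):
    """For every fingerprint value shared by the sample and a stored track,
    record the relative value (track_time - sample_time). The count of the
    most populated histogram bin is the match score; its value is the
    sample's offset into the track."""
    diffs = Counter()
    for value, t_sample in sample_prints:
        for v, t_track in track_prints:
            if v == value:
                diffs[t_track - t_sample] += 1
    if not diffs:
        return 0, None
    offset, score = max(diffs.items(), key=lambda kv: kv[1])
    return score, offset
```

A statistically significant score (a bin far above the background of accidental matches) identifies the winning track among all candidates in the database.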
- In addition, systems and methods described within the publications above may return more than just the identity of an audio sample. For example, Wang and Smith may return, in addition to the metadata associated with an identified audio track, the relative time offset (RTO) of an audio sample from the beginning of the identified audio track. To determine a relative time offset of the audio recording, the fingerprints of the audio sample can be compared with fingerprints of the original files to which they match. Each fingerprint occurs at a given time, so after matching fingerprints to identify the audio sample, a difference in time between a first fingerprint (of the matching fingerprint in the audio sample) and a first fingerprint of the stored original file will be a time offset of the audio sample, e.g., amount of time into a song. Thus, a relative time offset (e.g., 67 seconds into a song) at which the sample was taken can be determined.
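The time-difference computation described above can be sketched directly. This assumes fingerprints are (value, time) pairs, with times measured from the start of the sample and of the stored file respectively; the helper name is my own.

```python
def relative_time_offset(sample_prints, track_prints):
    """Sketch of the RTO computation: for the first fingerprint value the
    sample shares with the matched track, the difference between its time in
    the stored file and its time in the sample is the relative time offset,
    i.e. how far into the song the sample begins."""
    # Keep the earliest time per fingerprint value in the stored file.
    track_times = {value: t for value, t in reversed(track_prints)}
    for value, t_sample in sample_prints:
        if value in track_times:
            return track_times[value] - t_sample
    return None

# E.g. a sample fingerprint occurring 2 s into the recording that matches a
# stored-file fingerprint at 69 s implies the sample was taken 67 seconds
# into the song.
```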
- Thus, a user may send from a client device a content identification query to a sample analyzer, which may use any of the techniques described herein to identify the content. Within exemplary embodiments described below, the user's client device may only need to send information relating to a source of the content and a location of the client device to the sample analyzer to identify content to which the user is currently listening.
- In an exemplary embodiment, the sample analyzer will perform a content identification for a song once, and then, for future queries received within a valid time window from other client devices listening to the same broadcast and located in a geographic area that the broadcast covers, the sample analyzer can return the previous content identification that was performed. Within a given geographic area there is a limited number of radio broadcast stations, and if the geographic location of a user is known, then using the known location, broadcast information and the time of a query, the sample analyzer can identify a recording without having to perform computationally intensive identifications (as described above), but by referring to previous identifications made for devices in the same locality.
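Because a tuning frequency alone is reused across markets, a server needs a (frequency, locality) pair to name one station. A minimal sketch, assuming a hypothetical station directory (the call signs other than WMQD, which appears in Table 1 below, are invented for illustration):

```python
# Hypothetical station directory: the same frequency appears in different
# markets, so location is needed to disambiguate.
STATION_DIRECTORY = {
    (104.5, "san_francisco"): "WMQD",
    (104.5, "chicago"): "WXYZ",   # same frequency, different market (invented)
    (99.1, "san_francisco"): "KABC",  # invented call sign
}

def resolve_broadcast_source(frequency, region=None):
    """Return a unique station, using location only when the frequency alone
    is ambiguous (i.e. appears in more than one market)."""
    matches = [(f, r) for (f, r) in STATION_DIRECTORY if f == frequency]
    if len(matches) == 1:
        return STATION_DIRECTORY[matches[0]]
    if region is not None and (frequency, region) in STATION_DIRECTORY:
        return STATION_DIRECTORY[(frequency, region)]
    return None  # cannot disambiguate without location information
```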
- As an example, if two users are trying to identify the same radio station content at about the same time, then after the sample analyzer performs an identification of the first user's recording (using a method described above), the sample analyzer can return the same identification to the second user within an allowable time window (e.g., the time duration of the previously identified song). During the time duration of the song, if another user within the same locality and listening to the same broadcast sends in a request, the sample analyzer will not have to do a computationally intensive identification; rather, the sample analyzer can rely on the previously stored recognition. In this manner, there could be many queries to identify a song being broadcast on a radio station, and the sample analyzer may only have to perform one computationally intensive identification, store the identification, and mark the identification as being valid for a given time frame.
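The one-identification-per-song scheme above can be sketched as a small cache keyed by broadcast station. Here `recognize` stands in for the computationally expensive identification and is assumed to return the metadata together with the seconds remaining in the song (which sets the validity window); the class and parameter names are illustrative.

```python
import time

class ContentIdentificationCache:
    """Sketch of the reuse scheme: the first query for a station triggers a
    full recognition; later queries for the same station inside the validity
    window are served from the stored result."""
    def __init__(self, recognize, clock=time.time):
        self.recognize = recognize
        self.clock = clock
        self.entries = {}   # broadcast station -> (metadata, valid_until)

    def identify(self, station, get_sample):
        now = self.clock()
        entry = self.entries.get(station)
        if entry and now < entry[1]:
            return entry[0]   # same song still playing: reuse stored identity
        # No valid cache entry: request a sample and identify it.
        metadata, seconds_remaining = self.recognize(get_sample())
        self.entries[station] = (metadata, now + seconds_remaining)
        return metadata
```

With this shape, any number of queries arriving during one song costs a single computational identification, matching the behaviour described for FIG. 2 and FIG. 4.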
-
FIG. 2 is a flowchart depicting functional blocks of an example method of identifying content based on the location of a user, broadcast information and/or stored content identifications. Initially, a consumer appliance including a broadcast receiver can be used to listen to a broadcast station. A user can send a content identification query from the consumer appliance to a request server, providing at least a representation of the broadcast station to which the user is listening, as shown at block 202. The consumer appliance may also send location information to the request server to indicate a geographic location of the consumer appliance, as shown at block 204. If the broadcast station information is not unique, for example, if the broadcast station information is just a tuning frequency, the location information acts to disambiguate the exact broadcast station. Many radio stations broadcast in one area, and each has a distinct broadcast frequency; however, broadcast frequencies are reused throughout multiple areas. Thus, the request server uses either the broadcast frequency alone, or the broadcast frequency and the geographic location information, to identify a unique broadcast source, as shown at block 206. - Next, the request server determines if there is currently cached metadata available for the selected broadcast station, as shown at
block 208. Currently cached valid metadata will be available if a broadcast program has already been identified for a previous query on the selected broadcast station within a predetermined interval of time. If there is currently cached metadata available for the broadcast station, then the request server will return an associated cached metadata result to the consumer appliance, as shown at block 210. If no currently cached metadata is available, then the request server will request the consumer appliance to send a media sample representation to the request server, as shown at block 212. The request server then routes the media sample to a recognition server for an identification, and sends an associated metadata result back to the consumer appliance, as shown at block 218. Caching the current metadata makes it possible to serve requests to many more consumer appliances than would otherwise be possible if each request included a sample recording that had to be identified individually through a recognition server. Using the method in FIG. 2, each broadcast program on each broadcast station would only need to be identified once, independent of how many consumer devices make requests, because the initial identification is shared and used for all subsequent requests pertaining to the same broadcast program (e.g., for all subsequent requests received during the valid time period). -
FIG. 3 is a block diagram illustrating an example client consumer device 302 in communication with a sample analyzer 304 to receive information identifying broadcast content. The client consumer device 302 may be a personal computer, stereo receiver, set-top box, mobile phone, or MP3 player, and may be able to communicate with the sample analyzer 304 via a wired or wireless data connection. The wired data connection could operate over an Ethernet, DSL, ISDN, or conventional POTS telephone modem network. The wireless data connection may operate according to a short-range wireless protocol, such as the Bluetooth® protocol, WiFi or WiMax, or according to a long-range wireless protocol, such as CDMA, GSM, or other wireless networks. - The
client consumer device 302 includes a broadcast receiver 306, a broadcast station selector 308, a media sampler 310, a query generator 312, a global positioning system (GPS) location device 314, a timestamp clock 316 and a display 318. - The
broadcast receiver 306 may be any type of general FM/AM transmitter/receiver (or XM satellite radio receiver) to receive broadcasts from a radio station. The broadcast receiver 306 may even receive an Internet streaming digital broadcast. The broadcast station selector 308 is coupled to the broadcast receiver 306 and is able to tune to a specific broadcast frequency (so as to only pass one radio frequency) to an amplifier and loudspeaker (not shown) to be played for a user. The broadcast station selector 308 may provide a text string representing a broadcast channel or an Internet address, such as a URL, that represents the broadcast channel. Alternatively, the broadcast station selector 308 may specify a number indicating a tuning frequency. The tuning frequency may be used by the broadcast receiver 306 to set an analog, digital, or software tuner, or to access an Internet network address to access a specific broadcast program. - The
media sampler 310 is coupled to the broadcast receiver in order to record a portion of a broadcast. A segment of an audio program a few seconds long may be sampled digitally into a file as a numeric array by the media sampler 310. In an optional processing step, the media sample may be further processed by compression. Alternatively, the raw media sample may be processed through a feature extractor to pull out relevant features for content identification. One feature extractor known in the art is taught by Wang and Smith, U.S. Pat. No. 6,990,453, which is entirely incorporated herein by reference, in which a list of spectrogram peaks in time and frequency is extracted from an audio sample. Another suitable feature extraction method known in the art is disclosed by Haitsma, et al, in U.S. Patent Application Publication Number 2002/0178410, which is entirely incorporated herein by reference. Feature extraction and compression are not required, but can be used by the media sampler 310 to reduce the amount of data that is transmitted to the sample analyzer 304, thus saving time and bandwidth costs. - The
query generator 312 may also send a geographic location of the client consumer device 302 along with the query, and may receive the geographic location from the GPS device 314. The mechanism by which the GPS device 314 determines a position of the client consumer device 302 can be device-based and/or network-based. In a device-based system, the GPS device 314 is a GPS receiver for receiving from a GPS satellite system an indication of the client consumer device's current position. In a network-based system, the GPS device 314 may send a position determination request into a wireless network, and the network may respond by providing the GPS device 314 with an indication of the GPS device's position. (In this regard, the network may determine the GPS device's position by querying the GPS device according to the specification “Position Determination Service Standard for Dual Mode Spread Spectrum Systems,” TIA/EIA/IS-801, published in October 1999 and fully incorporated herein by reference, which defines a set of signaling messages between a device and network components to provide a position determination service so as to determine a location of the device.) - Alternatively, in a network-based system, the
GPS device 314 may operate via a reverse-lookup protocol using an IP address of the client consumer device 302 to obtain an approximate location. The IP address of the client consumer device 302 may be assigned by a network provider, and a geographic location of the IP address can be included within registration information of the owner of the IP address. Either the IP address of the client consumer device 302 or an IP address of a gateway in the path to the server may be used. In this case, the GPS device 314 can provide sufficient information to indicate an approximate position by sending its IP address, and the derivation of the position may be performed at the client consumer device 302 or at the sample analyzer 304. The IP address will include information from which a location can be ascertained, or may even include a reference number indicative of a physical location. - The
GPS device 314 is optional and is only used if the broadcast station selector 308 does not uniquely specify a broadcast station, for example, if the broadcast station selector 308 only specifies a tuning frequency rather than a tuning frequency and additional information pertaining to a broadcast station (e.g., a broadcast station name). Location information disambiguates the broadcast station, since only one station in a geographical vicinity may use a given frequency. For purposes of the present application, the accuracy of the GPS device 314 does not need to be extremely high. Other means for localization may be employed, working in conjunction with the sample analyzer 304, such as triangulation through mobile phone data network transmission towers. For fixed-location consumer appliances, such as a set-top box, the location information may be specified by a zip code or a residential address stored in a data string, for example. - A user may then use the
query generator 312 to send a content identification query to the sample analyzer 304 to receive information pertaining to the identity of the content. The query generator 312 may also send a timestamp from the timestamp clock 316 along with the query. The sample analyzer 304 will return metadata to the client consumer device 302 for display on the metadata display 318, which may be any typical display device. - The
sample analyzer 304 includes a request server 320, a recognition server 322, a metadata cache temporary storage 324 and a timestamp clock 326. The request server 320 receives content identification queries from the client consumer device 302 and returns metadata pertaining to an identification of the content. The recognition server 322 operates to perform a computational identification of an audio sample, using any of the methods described herein, such as those described within Kenyon, U.S. Pat. No. 5,210,820. The recognition server 322 will also identify a relative time offset of the audio sample from the original recording, as described within U.S. Patent Application Publication US 2002/0083060, to Wang and Smith, to determine a time for which the identification of the audio sample is valid and may be returned in response to future queries. - The
request server 320 and/or the recognition server 322 can estimate endpoints of the broadcast program by noting a timestamp of the beginning of the media sample and subtracting off the relative time offset (RTO) to obtain a segment start time, and then adding the length of the broadcast program (known after making the content identification) to obtain a segment end time. The segment start and end times can be used to calculate a time interval of validity during which the cached metadata for the identified broadcast program is valid. For example, if the RTO indicates that the sample is 50 seconds into the song, then after making the content identification, the identity and length of the song are known, and thus the time remaining for which the song will be played can be calculated. If another user were to send in a content identification query for the same broadcast station during the remaining time for which the song will be played, then no computational identification is necessary, because it is known that the same song is still being played and the identity of the song has already been determined and stored. In this instance, the request server 320 would simply return the previously stored identity of the song. - When a computational identification is needed, the
recognition server 322 may return, in addition to the usual metadata identifying the song, both a relative time offset from the beginning of the identified broadcast program corresponding to the start of the media sample, and a length of the identified broadcast program. The recognition algorithms by Wang and Smith or by Haitsma, et al, (references cited above) can provide such information. The recognition server 322 will then note the broadcast station from which the sample was recorded, and store all of the information in the metadata cache 324, in a format as shown in Table 1 below, for example. -
TABLE 1

  Broadcast Station             Content Identification    Time of Validity
  104.5 WMQD (San Francisco)    “name of song”            Valid for the next 3:30

- As shown in Table 1, the
metadata cache 324 may correlate content identifications (e.g., names of songs) with a broadcast station and a time of validity. The time of validity indicates how long the content identification for the specified broadcast station is valid. For example, the time of validity may be the remaining length of the song, so that if another user sends in a query for this broadcast station during the time of validity (e.g., during broadcast of the same song), then the content identification of the song is still valid and still correct. The time of validity may also be a time corresponding to the length of the song, in which case the request server 320 will note the timestamp in the content identification request to determine if the cached metadata is still valid. - The
request server 320 will receive the content identification query from the client consumer device 302, identify a broadcast station from the query, and determine if there is a currently cached metadata result available and valid for the selected broadcast station within the metadata cache 324. As explained, currently cached metadata will be available if the recognition server 322 has already identified the broadcast program on the selected broadcast station within a predetermined interval of time in the past. - If there is currently cached metadata available for the selected broadcast station, then the
request server 320 returns the associated cached metadata content identification result to the client consumer device 302. Furthermore, the time interval of validity, or at least an endpoint of the song, may also be returned in the metadata to the client consumer device 302. The client consumer device 302 can then synchronize update times indicating when to next query the request server 320 for an identity of the next song (e.g., which will start after the end of the previous time interval of validity), thus minimizing a delay in updating program metadata between broadcast programs. - If no currently cached metadata is available and valid for the selected broadcast station, then request
server 320 will request the client consumer device 302 to send a media sample representation to the request server 320 for identification. The request server 320 will route the media sample to the recognition server 322, which performs a computational identification and sends an associated metadata result back to the request server 320, which forwards the result back to the client consumer device 302. The request server 320 will also cache the result as the currently cached metadata for the selected broadcast station, and store a predetermined length of time during which the currently cached metadata is valid. Caching of the current metadata enables the request server 320 to serve requests from many more consumer appliance clients than would otherwise be possible if each request had to be computationally identified individually through the recognition server 322. -
FIG. 4 illustrates a conceptual example of multiple content identification queries occurring serially in time during a song. As shown, a first song is being broadcast by a radio station at a start time Tm, and the song has an end time of Tn and thus a length of (Tn−Tm). A first content identification query is received at time T1, which is after the start of the first song, and so a content identification is performed to identify the first song. The identity of the first song is then stored, and sent to the device requesting the first query. When a second content identification query is received at time T2, which is before the end time Tn of the first song, the stored information pertaining to the response that was sent to the first query is also sent in response to the second query. No second or additional computational content identification is needed. For all content identification queries received after the first query (e.g., time T1) and before the end of the song (e.g., time Tn), the result from the first computational content identification is returned. - As mentioned above, the
client consumer device 302 can synchronize update times indicating when to next query the request server 320 for an identity of the next song (e.g., which will start after the end of the previous time interval of validity or soon thereafter) to minimize a delay in updating program metadata between broadcast programs. In the example shown in FIG. 4, the next song begins broadcasting at a time Tx, and thus during the time Tn to Tx no songs are broadcast. For example, during the time Tn to Tx, a broadcast station may air commercials or DJ talk. Thus, a client consumer device may be programmed to next query for content identification at least a few seconds after the end time of the previously identified song. - To that end, a client consumer device may programmatically (or automatically) query the
request server 320 to receive content identifications of every song being broadcast and received at the client consumer device, so as to constantly receive updated program metadata. In this manner, a user listening to a radio station will know the identity of all songs being played, and will not have to manually create or send a content identification query to the request server 320. Metadata may also be automatically displayed on a client consumer device while a broadcast receiver application is open and operating. For example, FIG. 5 illustrates an example display of broadcast metadata on a mobile device. The display may indicate radio station information (104.5 FM), a song title, an artist name, and a time remaining for the song. Other information may be displayed as well. The mobile device may continually receive new metadata with new information pertaining to a current song being played, and may update the display accordingly. The metadata update may be sent in response to a query by the client consumer device 302, or alternatively may be pushed proactively by the sample analyzer 304 to the client consumer device 302, as long as the client consumer device 302 continues to indicate that it is still tuned to the same broadcast station. In this manner, the data can be sent without a request to continue updating the metadata information. - The
client consumer device 302 sends broadcast station information to the sample analyzer 304, and the sample analyzer 304 usually will be able to discern to which broadcast station the client consumer device 302 is listening based on the information. The sample analyzer may also attempt to determine a broadcast source by using external monitoring systems. For example, samples from broadcast channels may be monitored, each broadcast sample may be time stamped in terms of a “real-time” offset from a common time base, and an estimated time offset of the broadcast sample within the “original” recording is determined (using the technique of Wang and Smith described in U.S. Patent Application Publication US 2002/0083060, the entire disclosure of which is herein incorporated by reference). Then user sample characteristics received by the sample analyzer 304 can be compared with characteristics from broadcast samples that were taken at or near the time the user sample was recorded to identify a match. If the real-time offsets are within a certain tolerance, e.g., one second, then the user audio sample is considered to be originating from the same source as the broadcast sample, since the probability that a random performance of the same audio content (such as a hit song) is synchronized to within less than one second is low. Additional factors may also be considered when attempting to match the audio sample to a broadcast source. For example, to further verify that the user is actually listening to a given broadcast channel, and that it is not just a coincidence (such as a user taking a recording from a CD player), user samples can be taken over a longer period of time, e.g., longer than a typical audio program, such as over a transition between audio programs on the same channel, to verify continuity of identity over a program transition as an indicator that the correct broadcast channel is being tracked. - However, if the
broadcast station selector 308 of the client consumer device 302 does not uniquely describe a single broadcast station, then location information from the GPS device 314 is also sent along with the query (either within the query message or as a separate message) to the request server 320. The request server 320 may then access the metadata cache 324 and identify a broadcast station that broadcasts within an area of the location of the client consumer device 302. For example, the request server 320 can look to a table, such as Table 1, to verify that station “104.5” broadcasts to San Francisco, which is where the client consumer device 302 may be located, and return the metadata result describing the program playing at the time. - In the event that the
request server 320 cannot locate a metadata result corresponding to the received broadcast station selector 308 information and the location information, the request server 320 will ask the client consumer device 302 to send a media sample representation to identify the sample. The recognition server 322 will then computationally identify the sample and return a metadata result. The metadata result is then sent to the client consumer device 302 and displayed to a user. - In the cases described above in which a terrestrial broadcast is being monitored and the
broadcast station selector 308 does not uniquely specify a broadcast station (e.g., only the tuning frequency is specified), an optional means for location may be used in conjunction with a map of known physical broadcast stations and corresponding coverage areas to ascertain to which station the client device is tuned, based on the assumption that reception is limited to a coverage area in proximity to the broadcast station. FIG. 6 illustrates a conceptual block diagram of a coverage area map for two radio stations. In the example shown in FIG. 6, Radio Station 104.5 WMQD has a coverage area 602, Radio Station 96.5 WGRD has a coverage area 604, and a second Radio Station 96.5 WGRD has a coverage area 606. Mobile device 608 is within coverage area 602 and mobile device 610 is within coverage area 604, while mobile device 612 is within both coverage areas 602 and 604. Mobile device 614 is within coverage area 606. - The mobile devices may send a content identification query through a
wireless network 616 via a wireless link 618 to a server 620, which includes functionality and/or components comprising a sample analyzer, as described above in FIG. 3, to identify broadcast content received from the Radio Stations. The server 620 may have the map, as shown in FIG. 6, of the coverage areas of the Radio Stations, and using location information received from the mobile devices, can determine to which radio station a mobile device is listening. However, for mobile devices tuned to the 96.5 frequency, the server 620 may also require additional information, such as the location of the mobile device, because the frequency information alone will not be enough to distinguish between the two 96.5 radio stations. - In another embodiment involving client consumer devices tuning to terrestrial broadcast stations, and in which a GPS receiver (or functional equivalent) is present within the devices, a self-organizing broadcast station mapping system may be derived if no map of physical broadcast stations is available. Initially, it is not known where each broadcast radio station is located; however, it is desired to determine for each broadcast station its coverage area. A coverage map may be formed from many samples taken by many client consumer appliances over a period of time. Referring back to
FIG. 3, to construct a coverage area map, each query received at the request server 320 may include a tuning frequency, a GPS location, and a media sample. Each query is initially routed to the recognition server 322 for identification of the metadata using the computational identification technique. If two queries are made using the same frequency, and the media sample from one request temporally overlaps the time interval of validity resulting from the other request, then the metadata is checked to see if the identified programs correspond to each other. This is performed, for example, by determining whether the metadata match, and then a temporal correspondence is verified, for example, by determining whether the time intervals of validity match. If both media samples are determined to be the same, then the request server 320 will have two geographic locations to which the tuning frequency broadcasts (e.g., if the metadata and the intervals match, then the two users are declared to be tuned to the same unknown broadcast station). - The two corresponding GPS locations are grouped into a set of locations belonging to the unknown broadcast station that have the same broadcast station selector (e.g., tuning frequency). A coverage map may be generated from the set of locations by convolving with a disc of predetermined radius, e.g., 0.5 or 1 kilometer. In other words, a locality zone of predetermined radius is drawn around each point in the set of locations. Each unknown broadcast station is thus associated with a corresponding coverage map, and furthermore, is associated with currently cached metadata from the most recent recognition of a media sample associated with the unknown broadcast station. When a query is made with a broadcast station selector and a new GPS location, a search is performed to find a broadcast station that has the same broadcast station selector and a coverage map that overlaps the GPS location.
If a match is found and current metadata is available for that group, then a media identification by the recognition server is not performed and the current metadata is returned. Otherwise, a media identification is performed by the recognition server, and the resulting metadata becomes the currently cached metadata for that broadcast station.
- If a new non-overlapping GPS location is encountered (e.g., the location is not within the previously generated coverage area map) and the query does not match a known broadcast station and an associated coverage map, then a media identification is performed. If the resulting metadata and time interval of validity match those of a known broadcast station that has the same broadcast station selector (e.g., tuning frequency), then the new GPS location can be added to that broadcast station's set of locations and the associated coverage map can be updated. If no matching broadcast station is found, then a new record for a new broadcast station is generated.
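The grouping and lookup steps above can be sketched as follows. All class, method, and field names are hypothetical, locations are treated as planar coordinates, and the check on overlapping time intervals of validity is simplified to a metadata comparison; this is an illustrative sketch, not the patent's implementation.

```python
import math

class CoverageMapper:
    """Sketch of a self-organizing coverage map: GPS locations whose
    identifications agree (same broadcast station selector, matching
    metadata) are grouped into one unknown station, whose coverage is the
    union of discs of a predetermined radius around each known location."""

    def __init__(self, radius=1.0):
        self.radius = radius    # locality-zone radius (e.g., 1 km)
        self.stations = []      # each: {"selector", "metadata", "locations"}

    def query(self, selector, location, identify):
        # 1) A known station with this selector whose coverage map overlaps
        #    the GPS location answers from its currently cached metadata.
        for st in self.stations:
            if st["selector"] == selector and self._covers(st, location):
                return st["metadata"]
        # 2) Otherwise perform a media identification, then either grow an
        #    existing station's set of locations or create a new record.
        metadata = identify()
        for st in self.stations:
            if st["selector"] == selector and st["metadata"] == metadata:
                st["locations"].append(location)
                return metadata
        self.stations.append({"selector": selector, "metadata": metadata,
                              "locations": [location]})
        return metadata

    def _covers(self, st, location):
        # Inside the coverage map iff within `radius` of any known location.
        return any(math.hypot(location[0] - x, location[1] - y) <= self.radius
                   for (x, y) in st["locations"])
```

A second query from a covered location with the same selector is answered without a recognition; a query from a far-away location triggers a recognition, and a matching result folds the new location into the station's coverage map.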
- Using the methods described herein, raw audio samples received from broadcast stations can be identified using known computational identification techniques, and the identification can be stored and returned for subsequent queries associated with the same broadcast source during a time of validity. If many users are listening to the same broadcast program and are making the same query, much time can be saved by performing one computational audio pattern recognition and returning the result to all users, rather than performing a computational identification of content for every user (which would repeat many identifications).
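As a sketch of this one-identification-for-many-listeners scheme, a request server might keep, per broadcast station, the most recent recognition result together with an expiry time. The class and method names are hypothetical, and the fixed time-to-live stands in for the time interval of validity described above.

```python
import time

class CachingRequestServer:
    """Sketch (hypothetical names): the first query for a station triggers a
    computational identification, and every later query that arrives inside
    the result's time of validity is answered from the cache."""

    def __init__(self, recognize, validity_seconds=180.0):
        self.recognize = recognize        # callable: media sample -> metadata
        self.validity = validity_seconds  # stand-in for the interval of validity
        self.cache = {}                   # station -> (metadata, expiry time)

    def identify(self, station, sample, now=None):
        now = time.time() if now is None else now
        cached = self.cache.get(station)
        if cached is not None and now < cached[1]:
            return cached[0]              # still valid: no new recognition
        metadata = self.recognize(sample) # computational identification
        self.cache[station] = (metadata, now + self.validity)
        return metadata
```

With many clients tuned to one station, the recognition cost is paid once per song rather than once per query, which is the throughput gain the description claims.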
- Many embodiments have been described as being performed individually or in combination with other embodiments; however, any of the embodiments described above may be used together or in any combination to enhance certainty of identifying samples in the data stream. In addition, many of the embodiments may be performed using a consumer device that has a broadcast stream receiving means (such as a radio receiver), and either (1) a data transmission means for communicating with a central identification server for performing the identification step, or (2) a means for carrying out the identification step built into the consumer device itself (e.g., an audio recognition database could be loaded onto the consumer device). Further, the consumer device may include means for updating a database to accommodate identification of new audio tracks, such as an Ethernet or wireless data connection to a server, and means to request a database update. The consumer device may also further include local storage means for storing recognized, segmented, and labeled audio track files, and the device may have playlist selection and audio track playback means, as in a jukebox, for example.
- The methods described above can be implemented in software that is used in conjunction with a general purpose or application specific processor and one or more associated memory structures. Nonetheless, other implementations utilizing additional hardware and/or firmware may alternatively be used. For example, the mechanism of the present application is capable of being distributed in the form of a computer-readable medium of instructions in a variety of forms, and the present application applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution. Examples of such computer-accessible devices include computer memory (RAM or ROM), floppy disks, and CD-ROMs, as well as transmission-type media such as digital and analog communication links.
- While examples have been described in conjunction with present embodiments of the application, persons of skill in the art will appreciate that variations may be made without departure from the scope and spirit of the application. For example, although the broadcast data streams described in the examples are often audio streams, the invention is not so limited, but rather may be applied to a wide variety of broadcast content, including video, television, internet streaming, or other multimedia content. As one example, video files may be identified using techniques similar to those for identifying audio files, including scanning a video file to find digital markings (e.g., fingerprints) unique to the file, and checking a database of videos to identify videos that have similar markings. Fingerprint technology can identify audio or video by extracting specific characterization parameters of a file, which are translated into a bit string or fingerprint, and comparing the fingerprints of the file with the fingerprints of previously stored original files in a central database. For more information on video recognition technologies, the reader is referred to U.S. Pat. No. 6,714,594, entitled “Video content detection method and system leveraging data-compression constructs,” the contents of which are herein incorporated by reference as if fully set forth in this description.
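The bit-string comparison just described can be illustrated with a Hamming-distance test between equal-length fingerprints. This is an illustrative sketch only (not the algorithm of the cited patent), and the function names and 10% threshold are arbitrary choices for the example.

```python
def hamming_distance(fp_a, fp_b):
    # Count the bit positions at which two equal-length bit strings differ.
    assert len(fp_a) == len(fp_b), "fingerprints must be the same length"
    return sum(a != b for a, b in zip(fp_a, fp_b))

def fingerprints_match(fp_a, fp_b, max_fraction=0.1):
    # Declare a match when only a small fraction of bits differ; the 10%
    # threshold is an illustrative choice, tuned in practice for the
    # tolerance of the fingerprinting scheme to noise and compression.
    return hamming_distance(fp_a, fp_b) <= max_fraction * len(fp_a)
```

A query fingerprint would be compared this way against candidate fingerprints from the central database, with the smallest distance under the threshold taken as the identification.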
- Further, the apparatus and methods described herein may be implemented in hardware, software, or a combination, such as a general purpose or dedicated processor running a software application through volatile or non-volatile memory. The true scope and spirit of the application is defined by the appended claims, which may be interpreted in light of the foregoing.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/840,025 US20180101610A1 (en) | 2006-10-03 | 2017-12-13 | Method and System for Identification of Distributed Broadcast Content |
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US84894106P | 2006-10-03 | 2006-10-03 | |
US11/866,814 US7881657B2 (en) | 2006-10-03 | 2007-10-03 | Method for high-throughput identification of distributed broadcast content |
US12/976,050 US8086171B2 (en) | 2006-10-03 | 2010-12-22 | Method and system for identification of distributed broadcast content |
US13/309,222 US8442426B2 (en) | 2006-10-03 | 2011-12-01 | Method and system for identification of distributed broadcast content |
US13/868,708 US9361370B2 (en) | 2006-10-03 | 2013-04-23 | Method and system for identification of distributed broadcast content |
US14/672,881 US9864800B2 (en) | 2006-10-03 | 2015-03-30 | Method and system for identification of distributed broadcast content |
US15/840,025 US20180101610A1 (en) | 2006-10-03 | 2017-12-13 | Method and System for Identification of Distributed Broadcast Content |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/672,881 Continuation US9864800B2 (en) | 2006-10-03 | 2015-03-30 | Method and system for identification of distributed broadcast content |
Publications (1)
Publication Number | Publication Date |
---|---|
US20180101610A1 true US20180101610A1 (en) | 2018-04-12 |
Family
ID=39092803
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/866,814 Active 2029-06-26 US7881657B2 (en) | 2006-10-03 | 2007-10-03 | Method for high-throughput identification of distributed broadcast content |
US12/976,050 Active US8086171B2 (en) | 2006-10-03 | 2010-12-22 | Method and system for identification of distributed broadcast content |
US13/309,222 Active US8442426B2 (en) | 2006-10-03 | 2011-12-01 | Method and system for identification of distributed broadcast content |
US13/868,708 Active 2029-06-29 US9361370B2 (en) | 2006-10-03 | 2013-04-23 | Method and system for identification of distributed broadcast content |
US14/672,881 Active US9864800B2 (en) | 2006-10-03 | 2015-03-30 | Method and system for identification of distributed broadcast content |
US15/840,025 Abandoned US20180101610A1 (en) | 2006-10-03 | 2017-12-13 | Method and System for Identification of Distributed Broadcast Content |
Country Status (5)
Country | Link |
---|---|
US (6) | US7881657B2 (en) |
EP (1) | EP2070231B1 (en) |
ES (1) | ES2433966T3 (en) |
HK (1) | HK1135527A1 (en) |
WO (1) | WO2008042953A1 (en) |
US9661361B2 (en) | 2012-09-19 | 2017-05-23 | Google Inc. | Systems and methods for live media content matching |
EP2712203A1 (en) * | 2012-09-25 | 2014-03-26 | Nagravision S.A. | Method and system for enhancing redistributed audio / video content |
US20140095333A1 (en) * | 2012-09-28 | 2014-04-03 | Stubhub, Inc. | System and Method for Purchasing a Playlist Linked to an Event |
GB2506897A (en) | 2012-10-11 | 2014-04-16 | Imagination Tech Ltd | Obtaining stored music track information for a music track playing on a radio broadcast signal |
US8588432B1 (en) | 2012-10-12 | 2013-11-19 | Jeffrey Franklin Simon | Apparatus and method for authorizing reproduction and controlling of program transmissions at locations distant from the program source |
EP2728773A1 (en) * | 2012-11-06 | 2014-05-07 | Alcatel Lucent | Method and device for allowing mobile communication equipments to access to multimedia streams played on multimedia screens |
US10339936B2 (en) | 2012-11-27 | 2019-07-02 | Roland Storti | Method, device and system of encoding a digital interactive response action in an analog broadcasting message |
US10366419B2 (en) | 2012-11-27 | 2019-07-30 | Roland Storti | Enhanced digital media platform with user control of application data thereon |
US8615221B1 (en) * | 2012-12-06 | 2013-12-24 | Google Inc. | System and method for selection of notification techniques in an electronic device |
US20140196070A1 (en) * | 2013-01-07 | 2014-07-10 | Smrtv, Inc. | System and method for automated broadcast media identification |
US10320502B2 (en) * | 2013-01-14 | 2019-06-11 | Comcast Cable Communications, Llc | Audio capture |
US9313544B2 (en) | 2013-02-14 | 2016-04-12 | The Nielsen Company (Us), Llc | Methods and apparatus to measure exposure to streaming media |
US9008490B1 (en) | 2013-02-25 | 2015-04-14 | Google Inc. | Melody recognition systems |
US9451048B2 (en) * | 2013-03-12 | 2016-09-20 | Shazam Investments Ltd. | Methods and systems for identifying information of a broadcast station and information of broadcasted content |
US9460201B2 (en) | 2013-05-06 | 2016-10-04 | Iheartmedia Management Services, Inc. | Unordered matching of audio fingerprints |
US20140336797A1 (en) * | 2013-05-12 | 2014-11-13 | Harry E. Emerson, III | Audio content monitoring and identification of broadcast radio stations |
US20140336799A1 (en) * | 2013-05-13 | 2014-11-13 | Harry E. Emerson, III | Discovery of music artist and title via companionship between a cellular phone and a broadcast radio receiver |
CN104183253B (en) * | 2013-05-24 | 2018-05-11 | 富泰华工业(深圳)有限公司 | music playing system, device and method |
US9711152B2 (en) | 2013-07-31 | 2017-07-18 | The Nielsen Company (Us), Llc | Systems apparatus and methods for encoding/decoding persistent universal media codes to encoded audio |
US20150039321A1 (en) | 2013-07-31 | 2015-02-05 | Arbitron Inc. | Apparatus, System and Method for Reading Codes From Digital Audio on a Processing Device |
WO2015021251A1 (en) * | 2013-08-07 | 2015-02-12 | AudioStreamTV Inc. | Systems and methods for providing synchronized content |
US10014006B1 (en) | 2013-09-10 | 2018-07-03 | Ampersand, Inc. | Method of determining whether a phone call is answered by a human or by an automated device |
US9053711B1 (en) | 2013-09-10 | 2015-06-09 | Ampersand, Inc. | Method of matching a digitized stream of audio signals to a known audio recording |
KR102095888B1 (en) * | 2013-10-07 | 2020-04-01 | 삼성전자주식회사 | User terminal apparatus and server providing broadcast viewing pattern information and method for providing broadcast viewing pattern information |
US9332035B2 (en) | 2013-10-10 | 2016-05-03 | The Nielsen Company (Us), Llc | Methods and apparatus to measure exposure to streaming media |
US9507849B2 (en) | 2013-11-28 | 2016-11-29 | Soundhound, Inc. | Method for combining a query and a communication command in a natural language computer system |
US9414129B2 (en) | 2013-12-04 | 2016-08-09 | Vizio Inc | Using client tuner devices to provide content fingerprinting in a networked system |
IN2014MU00140A (en) | 2014-01-15 | 2015-08-28 | Whats On India Media Private Ltd | |
US9292488B2 (en) | 2014-02-01 | 2016-03-22 | Soundhound, Inc. | Method for embedding voice mail in a spoken utterance using a natural language processing computer system |
US11295730B1 (en) | 2014-02-27 | 2022-04-05 | Soundhound, Inc. | Using phonetic variants in a local context to improve natural language understanding |
US9900656B2 (en) * | 2014-04-02 | 2018-02-20 | Whats On India Media Private Limited | Method and system for customer management |
EP2928094B1 (en) * | 2014-04-03 | 2018-05-30 | Alpine Electronics, Inc. | Receiving apparatus and method of providing information associated with received broadcast signals |
CN104978968A (en) * | 2014-04-11 | 2015-10-14 | 鸿富锦精密工业(深圳)有限公司 | Watermark loading apparatus and watermark loading method |
KR102137189B1 (en) * | 2014-04-15 | 2020-07-24 | 엘지전자 주식회사 | Video display device and operating method thereof |
US20150302086A1 (en) | 2014-04-22 | 2015-10-22 | Gracenote, Inc. | Audio identification during performance |
US9699499B2 (en) | 2014-04-30 | 2017-07-04 | The Nielsen Company (Us), Llc | Methods and apparatus to measure exposure to streaming media |
US9564123B1 (en) | 2014-05-12 | 2017-02-07 | Soundhound, Inc. | Method and system for building an integrated user profile |
US9832538B2 (en) | 2014-06-16 | 2017-11-28 | Cisco Technology, Inc. | Synchronizing broadcast timeline metadata |
US20170163497A1 (en) * | 2014-07-07 | 2017-06-08 | Hewlett-Packard Development Company, L.P. | Portable speaker |
US10078636B2 (en) * | 2014-07-18 | 2018-09-18 | International Business Machines Corporation | Providing a human-sense perceivable representation of an aspect of an event |
US20160132600A1 (en) * | 2014-11-07 | 2016-05-12 | Shazam Investments Limited | Methods and Systems for Performing Content Recognition for a Surge of Incoming Recognition Queries |
US20160217136A1 (en) * | 2015-01-22 | 2016-07-28 | Itagit Technologies Fz-Llc | Systems and methods for provision of content data |
CN112261446B (en) * | 2015-01-30 | 2023-07-18 | 夏普株式会社 | Method for reporting information |
US10360583B2 (en) | 2015-02-05 | 2019-07-23 | Direct Path, Llc | System and method for direct response advertising |
WO2016162723A1 (en) * | 2015-04-09 | 2016-10-13 | Airshr Pty Ltd | Systems and methods for providing information and/or content associated with broadcast segments |
CN104820678B (en) * | 2015-04-15 | 2018-10-19 | 小米科技有限责任公司 | Audio-frequency information recognition methods and device |
US9762965B2 (en) | 2015-05-29 | 2017-09-12 | The Nielsen Company (Us), Llc | Methods and apparatus to measure exposure to streaming media |
US20180115802A1 (en) * | 2015-06-23 | 2018-04-26 | Gregory Knox | Methods and systems for generating media viewing behavioral data |
US20180124458A1 (en) * | 2015-06-23 | 2018-05-03 | Gregory Knox | Methods and systems for generating media viewing experiential data |
US9913056B2 (en) | 2015-08-06 | 2018-03-06 | Dolby Laboratories Licensing Corporation | System and method to enhance speakers connected to devices with microphones |
US9900636B2 (en) | 2015-08-14 | 2018-02-20 | The Nielsen Company (Us), Llc | Reducing signature matching uncertainty in media monitoring systems |
KR20170027551A (en) * | 2015-09-02 | 2017-03-10 | 삼성전자주식회사 | Electric device and controlling method of thereof |
US11019385B2 (en) * | 2016-01-20 | 2021-05-25 | Samsung Electronics Co., Ltd. | Content selection for networked media devices |
US9992517B2 (en) | 2016-02-23 | 2018-06-05 | Comcast Cable Communications, Llc | Providing enhanced content based on user interactions |
US10063918B2 (en) | 2016-02-29 | 2018-08-28 | Gracenote, Inc. | Media channel identification with multi-match detection and disambiguation based on single-match |
US9930406B2 (en) | 2016-02-29 | 2018-03-27 | Gracenote, Inc. | Media channel identification with video multi-match detection and disambiguation based on audio fingerprint |
US9924222B2 (en) | 2016-02-29 | 2018-03-20 | Gracenote, Inc. | Media channel identification with multi-match detection and disambiguation based on location |
US10433026B2 (en) * | 2016-02-29 | 2019-10-01 | MyTeamsCalls LLC | Systems and methods for customized live-streaming commentary |
US9786298B1 (en) | 2016-04-08 | 2017-10-10 | Source Digital, Inc. | Audio fingerprinting based on audio energy characteristics |
US10545954B2 (en) * | 2017-03-15 | 2020-01-28 | Google Llc | Determining search queries for obtaining information during a user experience of an event |
US10462514B2 (en) * | 2017-03-29 | 2019-10-29 | The Nielsen Company (Us), Llc | Interactive overlays to determine viewer data |
US10277343B2 (en) * | 2017-04-10 | 2019-04-30 | Ibiquity Digital Corporation | Guide generation for music-related content in a broadcast radio environment |
US10271095B1 (en) | 2017-12-21 | 2019-04-23 | Samuel Chenillo | System and method for media segment identification |
US10867185B2 (en) | 2017-12-22 | 2020-12-15 | Samuel Chenillo | System and method for media segment identification |
WO2019002831A1 (en) | 2017-06-27 | 2019-01-03 | Cirrus Logic International Semiconductor Limited | Detection of replay attack |
GB201713697D0 (en) | 2017-06-28 | 2017-10-11 | Cirrus Logic Int Semiconductor Ltd | Magnetic detection of replay attack |
GB2563953A (en) | 2017-06-28 | 2019-01-02 | Cirrus Logic Int Semiconductor Ltd | Detection of replay attack |
US10652592B2 (en) * | 2017-07-02 | 2020-05-12 | Comigo Ltd. | Named entity disambiguation for providing TV content enrichment |
US11601715B2 (en) | 2017-07-06 | 2023-03-07 | DISH Technologies L.L.C. | System and method for dynamically adjusting content playback based on viewer emotions |
GB201801532D0 (en) | 2017-07-07 | 2018-03-14 | Cirrus Logic Int Semiconductor Ltd | Methods, apparatus and systems for audio playback |
GB201801526D0 (en) | 2017-07-07 | 2018-03-14 | Cirrus Logic Int Semiconductor Ltd | Methods, apparatus and systems for authentication |
GB201801530D0 (en) | 2017-07-07 | 2018-03-14 | Cirrus Logic Int Semiconductor Ltd | Methods, apparatus and systems for authentication |
GB201801528D0 (en) | 2017-07-07 | 2018-03-14 | Cirrus Logic Int Semiconductor Ltd | Method, apparatus and systems for biometric processes |
GB201801527D0 (en) | 2017-07-07 | 2018-03-14 | Cirrus Logic Int Semiconductor Ltd | Method, apparatus and systems for biometric processes |
US10574373B2 (en) * | 2017-08-08 | 2020-02-25 | Ibiquity Digital Corporation | ACR-based radio metadata in the cloud |
US10672015B2 (en) * | 2017-09-13 | 2020-06-02 | Bby Solutions, Inc. | Streaming events modeling for information ranking to address new information scenarios |
US10264315B2 (en) * | 2017-09-13 | 2019-04-16 | Bby Solutions, Inc. | Streaming events modeling for information ranking |
GB201803570D0 (en) | 2017-10-13 | 2018-04-18 | Cirrus Logic Int Semiconductor Ltd | Detection of replay attack |
GB201804843D0 (en) | 2017-11-14 | 2018-05-09 | Cirrus Logic Int Semiconductor Ltd | Detection of replay attack |
GB201801874D0 (en) | 2017-10-13 | 2018-03-21 | Cirrus Logic Int Semiconductor Ltd | Improving robustness of speech processing system against ultrasound and dolphin attacks |
GB2567503A (en) | 2017-10-13 | 2019-04-17 | Cirrus Logic Int Semiconductor Ltd | Analysing speech signals |
GB201801664D0 (en) | 2017-10-13 | 2018-03-21 | Cirrus Logic Int Semiconductor Ltd | Detection of liveness |
GB201801663D0 (en) | 2017-10-13 | 2018-03-21 | Cirrus Logic Int Semiconductor Ltd | Detection of liveness |
GB201801661D0 (en) | 2017-10-13 | 2018-03-21 | Cirrus Logic International Uk Ltd | Detection of liveness |
US10171877B1 (en) * | 2017-10-30 | 2019-01-01 | Dish Network L.L.C. | System and method for dynamically selecting supplemental content based on viewer emotions |
GB201801659D0 (en) | 2017-11-14 | 2018-03-21 | Cirrus Logic Int Semiconductor Ltd | Detection of loudspeaker playback |
CN108012173B (en) * | 2017-11-16 | 2021-01-22 | 百度在线网络技术(北京)有限公司 | Content identification method, device, equipment and computer storage medium |
US10276175B1 (en) | 2017-11-28 | 2019-04-30 | Google Llc | Key phrase detection with audio watermarking |
US10715855B1 (en) * | 2017-12-20 | 2020-07-14 | Groupon, Inc. | Method, system, and apparatus for programmatically generating a channel incrementality ratio |
US11048946B2 (en) | 2017-12-21 | 2021-06-29 | Samuel Chenillo | System and method for identifying cognate image sequences |
US11264037B2 (en) | 2018-01-23 | 2022-03-01 | Cirrus Logic, Inc. | Speaker identification |
US11735189B2 (en) | 2018-01-23 | 2023-08-22 | Cirrus Logic, Inc. | Speaker identification |
US11475899B2 (en) | 2018-01-23 | 2022-10-18 | Cirrus Logic, Inc. | Speaker identification |
US10848792B2 (en) * | 2018-03-05 | 2020-11-24 | Maestro Interactive, Inc. | System and method for providing audience-targeted content triggered by events during program |
EP3788500A4 (en) * | 2018-05-04 | 2022-03-30 | Ibiquity Digital Corporation | System for in-vehicle live guide generation |
US10692490B2 (en) | 2018-07-31 | 2020-06-23 | Cirrus Logic, Inc. | Detection of replay attack |
CN112352391B (en) * | 2018-08-23 | 2024-07-26 | 谷歌有限责任公司 | Radio station recommendation |
US10915614B2 (en) | 2018-08-31 | 2021-02-09 | Cirrus Logic, Inc. | Biometric authentication |
US11037574B2 (en) | 2018-09-05 | 2021-06-15 | Cirrus Logic, Inc. | Speaker recognition and speaker change detection |
EP3686609A1 (en) * | 2019-01-25 | 2020-07-29 | Rohde & Schwarz GmbH & Co. KG | Measurement system and method for recording context information of a measurement |
US11259058B2 (en) * | 2019-03-25 | 2022-02-22 | Apple Inc. | Use of rendered media to assess delays in media distribution systems |
US11026000B2 (en) * | 2019-04-19 | 2021-06-01 | Microsoft Technology Licensing, Llc | Previewing video content referenced by typed hyperlinks in comments |
US11785194B2 (en) | 2019-04-19 | 2023-10-10 | Microsoft Technology Licensing, Llc | Contextually-aware control of a user interface displaying a video and related user text |
US11678031B2 (en) | 2019-04-19 | 2023-06-13 | Microsoft Technology Licensing, Llc | Authoring comments including typed hyperlinks that reference video content |
US11025354B2 (en) | 2019-07-19 | 2021-06-01 | Ibiquity Digital Corporation | Targeted fingerprinting of radio broadcast audio |
AU2019457816A1 (en) * | 2019-07-19 | 2022-03-03 | Ibiquity Digital Corporation | Targeted fingerprinting of radio broadcast audio |
US10834466B1 (en) * | 2019-08-02 | 2020-11-10 | International Business Machines Corporation | Virtual interactivity for a broadcast content-delivery medium |
US11563517B2 (en) * | 2019-08-08 | 2023-01-24 | Qualcomm Incorporated | Managing broadcast channels based on bandwidth |
US11321904B2 (en) | 2019-08-30 | 2022-05-03 | Maxon Computer Gmbh | Methods and systems for context passing between nodes in three-dimensional modeling |
US11714928B2 (en) | 2020-02-27 | 2023-08-01 | Maxon Computer Gmbh | Systems and methods for a self-adjusting node workspace |
US11373369B2 (en) | 2020-09-02 | 2022-06-28 | Maxon Computer Gmbh | Systems and methods for extraction of mesh geometry from straight skeleton for beveled shapes |
US11284139B1 (en) * | 2020-09-10 | 2022-03-22 | Hulu, LLC | Stateless re-discovery of identity using watermarking of a video stream |
US12007512B2 (en) | 2020-11-30 | 2024-06-11 | Navico, Inc. | Sonar display features |
US11405684B1 (en) * | 2021-01-08 | 2022-08-02 | Christie Digital Systems Usa, Inc. | Distributed media player for digital cinema |
US11589100B1 (en) * | 2021-03-31 | 2023-02-21 | Amazon Technologies, Inc. | On-demand issuance private keys for encrypted video transmission |
US11496776B1 (en) * | 2021-07-19 | 2022-11-08 | Intrado Corporation | Database layer caching for video communications |
US11831943B2 (en) * | 2021-10-26 | 2023-11-28 | Apple Inc. | Synchronized playback of media content |
WO2023205559A1 (en) * | 2022-04-22 | 2023-10-26 | Whdiyo Llc | Dynamic visual watermark for streaming video |
US20230419927A1 (en) * | 2022-06-22 | 2023-12-28 | Deepen Shah | Automated remote music identification device and system |
US12124451B2 (en) * | 2023-03-20 | 2024-10-22 | Saudi Arabian Oil Company | System and method for efficient integration with a primary database to reduce unnecessary network traffic |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020120925A1 (en) * | 2000-03-28 | 2002-08-29 | Logan James D. | Audio and video program recording, editing and playback systems using metadata |
US20030021441A1 (en) * | 1995-07-27 | 2003-01-30 | Levy Kenneth L. | Connected audio and other media objects |
US20030093790A1 (en) * | 2000-03-28 | 2003-05-15 | Logan James D. | Audio and video program recording, editing and playback systems using metadata |
US20040249965A1 (en) * | 2003-05-05 | 2004-12-09 | Huggins Guy Dwayne | Node caching system for streaming media applications |
US20060067260A1 (en) * | 2004-09-30 | 2006-03-30 | Timo Tokkonen | Updating associating data in a media device |
US7110714B1 (en) * | 1999-08-27 | 2006-09-19 | Kay Matthew W | Television commerce system with program identifiers |
US20070143779A1 (en) * | 2005-12-20 | 2007-06-21 | Kari Kaarela | Location info-based automatic setup of broadcast receiver devices |
US20070143777A1 (en) * | 2004-02-19 | 2007-06-21 | Landmark Digital Services Llc | Method and apparatus for identification of broadcast source |
US20070186228A1 (en) * | 2004-02-18 | 2007-08-09 | Nielsen Media Research, Inc. | Methods and apparatus to determine audience viewing of video-on-demand programs |
US20070195987A1 (en) * | 1999-05-19 | 2007-08-23 | Rhoads Geoffrey B | Digital Media Methods |
US20080154401A1 (en) * | 2004-04-19 | 2008-06-26 | Landmark Digital Services Llc | Method and System For Content Sampling and Identification |
US7698554B2 (en) * | 2004-02-13 | 2010-04-13 | Royal Holloway And Bedford New College | Controlling transmission of broadcast content |
US7774022B2 (en) * | 1997-07-29 | 2010-08-10 | Mobilemedia Ideas Llc | Information processing apparatus and method, information processing system, and transmission medium |
US20150205865A1 (en) * | 2006-10-03 | 2015-07-23 | Shazam Entertainment Limited | Method and System for Identification of Distributed Broadcast Content |
US9826046B2 (en) * | 2004-05-05 | 2017-11-21 | Black Hills Media, Llc | Device discovery for digital entertainment network |
Family Cites Families (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4843562A (en) * | 1987-06-24 | 1989-06-27 | Broadcast Data Systems Limited Partnership | Broadcast information classification system and method |
US5210820A (en) * | 1990-05-02 | 1993-05-11 | Broadcast Data Systems Limited Partnership | Signal recognition system and method |
US6829368B2 (en) * | 2000-01-26 | 2004-12-07 | Digimarc Corporation | Establishing and interacting with on-line media collections using identifiers in media signals |
US7171018B2 (en) * | 1995-07-27 | 2007-01-30 | Digimarc Corporation | Portable devices and methods employing digital watermarking |
US6408128B1 (en) * | 1998-11-12 | 2002-06-18 | Max Abecassis | Replaying with supplementary information a segment of a video |
AU1769701A (en) * | 1999-11-23 | 2001-06-04 | Radiant Systems, Inc. | Audio request interaction system |
US6834308B1 (en) * | 2000-02-17 | 2004-12-21 | Audible Magic Corporation | Method and apparatus for identifying media content presented on a media playing device |
US6990453B2 (en) * | 2000-07-31 | 2006-01-24 | Landmark Digital Services Llc | System and methods for recognizing sound and music signals in high noise and distortion |
US7085613B2 (en) * | 2000-11-03 | 2006-08-01 | International Business Machines Corporation | System for monitoring audio content in a video broadcast |
DE60228202D1 (en) * | 2001-02-12 | 2008-09-25 | Gracenote Inc | METHOD FOR GENERATING AN IDENTIFICATION HASH FROM THE CONTENTS OF A MULTIMEDIA FILE |
US6714594B2 (en) * | 2001-05-14 | 2004-03-30 | Koninklijke Philips Electronics N.V. | Video content detection method and system leveraging data-compression constructs |
JP2004536348A (en) * | 2001-07-20 | 2004-12-02 | グレースノート インコーポレイテッド | Automatic recording identification |
MXPA04002235A (en) * | 2001-09-10 | 2004-06-29 | Thomson Licensing Sa | Method and apparatus for creating an indexed playlist in a digital audio data player. |
US20030167211A1 (en) * | 2002-03-04 | 2003-09-04 | Marco Scibora | Method and apparatus for digitally marking media content |
AU2003223748A1 (en) | 2002-04-25 | 2003-11-10 | Neuros Audio, Llc | Apparatus and method for identifying audio |
BR0309598A (en) | 2002-04-25 | 2005-02-09 | Shazam Entertainment Ltd | Method for characterizing a relationship between first and second audio samples, computer program product, and computer system |
US7231176B2 (en) | 2004-02-06 | 2007-06-12 | Jeffrey Levy | Methods and system for retrieving music information from wireless telecommunication devices |
US20050193016A1 (en) * | 2004-02-17 | 2005-09-01 | Nicholas Seet | Generation of a media content database by correlating repeating media content in media streams |
US20050197724A1 (en) * | 2004-03-08 | 2005-09-08 | Raja Neogi | System and method to generate audio fingerprints for classification and storage of audio clips |
JP2007533274A (en) * | 2004-04-19 | 2007-11-15 | ランドマーク、ディジタル、サーヴィセズ、エルエルシー | Method and system for content sampling and identification |
US20050267750A1 (en) * | 2004-05-27 | 2005-12-01 | Anonymous Media, Llc | Media usage monitoring and measurement system and method |
US7574451B2 (en) * | 2004-11-02 | 2009-08-11 | Microsoft Corporation | System and method for speeding up database lookups for multiple synchronized data streams |
US20060105702A1 (en) | 2004-11-17 | 2006-05-18 | Muth Edwin A | System and method for interactive monitoring of satellite radio use |
KR100713517B1 (en) * | 2004-11-26 | 2007-05-02 | 삼성전자주식회사 | PVR By Using MetaData and Its Recording Control Method |
CA2611070C (en) * | 2005-06-03 | 2015-10-06 | Nielsen Media Research, Inc. | Methods and apparatus to detect a time-shift event associated with the presentation of media content |
US20070143788A1 (en) * | 2005-12-21 | 2007-06-21 | Abernethy Michael N Jr | Method, apparatus, and program product for providing local information in a digital video stream |
US8059646B2 (en) * | 2006-07-11 | 2011-11-15 | Napo Enterprises, Llc | System and method for identifying music content in a P2P real time recommendation network |
US20080051029A1 (en) * | 2006-08-25 | 2008-02-28 | Bradley James Witteman | Phone-based broadcast audio identification |
US20080049704A1 (en) * | 2006-08-25 | 2008-02-28 | Skyclix, Inc. | Phone-based broadcast audio identification |
WO2008148195A1 (en) * | 2007-06-05 | 2008-12-11 | E-Lane Systems Inc. | Media exchange system |
US8959108B2 (en) * | 2008-06-18 | 2015-02-17 | Zeitera, Llc | Distributed and tiered architecture for content search and content monitoring |
US9047371B2 (en) | 2010-07-29 | 2015-06-02 | Soundhound, Inc. | System and method for matching a query against a broadcast stream |
US9703932B2 (en) * | 2012-04-30 | 2017-07-11 | Excalibur Ip, Llc | Continuous content identification of broadcast content |
US9161074B2 (en) * | 2013-04-30 | 2015-10-13 | Ensequence, Inc. | Methods and systems for distributing interactive content |
- 2007
  - 2007-10-03 ES ES07843742T patent/ES2433966T3/en active Active
  - 2007-10-03 EP EP07843742.3A patent/EP2070231B1/en active Active
  - 2007-10-03 WO PCT/US2007/080292 patent/WO2008042953A1/en active Application Filing
  - 2007-10-03 US US11/866,814 patent/US7881657B2/en active Active
- 2009
  - 2009-12-17 HK HK09111879.4A patent/HK1135527A1/en unknown
- 2010
  - 2010-12-22 US US12/976,050 patent/US8086171B2/en active Active
- 2011
  - 2011-12-01 US US13/309,222 patent/US8442426B2/en active Active
- 2013
  - 2013-04-23 US US13/868,708 patent/US9361370B2/en active Active
- 2015
  - 2015-03-30 US US14/672,881 patent/US9864800B2/en active Active
- 2017
  - 2017-12-13 US US15/840,025 patent/US20180101610A1/en not_active Abandoned
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10922720B2 (en) | 2017-01-11 | 2021-02-16 | Adobe Inc. | Managing content delivery via audio cues |
US11410196B2 (en) | 2017-01-11 | 2022-08-09 | Adobe Inc. | Managing content delivery via audio cues |
CN110267067A (en) * | 2019-06-28 | 2019-09-20 | 广州酷狗计算机科技有限公司 | Method, apparatus, equipment and the storage medium that direct broadcasting room is recommended |
Also Published As
Publication number | Publication date |
---|---|
US8442426B2 (en) | 2013-05-14 |
US9864800B2 (en) | 2018-01-09 |
HK1135527A1 (en) | 2010-06-04 |
US20110099197A1 (en) | 2011-04-28 |
EP2070231A1 (en) | 2009-06-17 |
US7881657B2 (en) | 2011-02-01 |
US9361370B2 (en) | 2016-06-07 |
US20150205865A1 (en) | 2015-07-23 |
EP2070231B1 (en) | 2013-07-03 |
US20120079515A1 (en) | 2012-03-29 |
US20080082510A1 (en) | 2008-04-03 |
WO2008042953A1 (en) | 2008-04-10 |
ES2433966T3 (en) | 2013-12-13 |
US20130247082A1 (en) | 2013-09-19 |
US8086171B2 (en) | 2011-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180101610A1 (en) | Method and System for Identification of Distributed Broadcast Content | |
US9225444B2 (en) | Method and apparatus for identification of broadcast source | |
US7739062B2 (en) | Method of characterizing the overlap of two media segments | |
US8688248B2 (en) | Method and system for content sampling and identification | |
US20140214190A1 (en) | Method and System for Content Sampling and Identification | |
US10757456B2 (en) | Methods and systems for determining a latency between a source and an alternative feed of the source |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: SHAZAM ENTERTAINMENT, LTD., UNITED KINGDOM. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: WANG, AVERY LI-CHUN; WONG, CHEE; SYMONS, JONATHAN. REEL/FRAME: 044380/0209. Effective date: 20071003 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
| AS | Assignment | Owner name: APPLE INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: SHAZAM ENTERTAINMENT LIMITED. REEL/FRAME: 053679/0069. Effective date: 20200507 |