WO2009042697A2 - Phone-based broadcast audio identification - Google Patents

Phone-based broadcast audio identification

Info

Publication number
WO2009042697A2
WO2009042697A2 (PCT/US2008/077541)
Authority
WO
WIPO (PCT)
Prior art keywords
broadcast
audio
user
message
metadata
Prior art date
Application number
PCT/US2008/077541
Other languages
English (en)
Other versions
WO2009042697A3 (fr)
Inventor
Robert Reid
Bradley James Witteman
Marco J. Thompson
Original Assignee
Skyclix, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Skyclix, Inc.
Publication of WO2009042697A2
Publication of WO2009042697A3

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04H: BROADCAST COMMUNICATION
    • H04H60/00: Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/35: Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
    • H04H60/37: Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying segments of broadcast information, e.g. scenes or extracting programme ID
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10: Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102: Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105: Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10: Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19: Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28: Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04H: BROADCAST COMMUNICATION
    • H04H60/00: Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/35: Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
    • H04H60/38: Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying broadcast time or space
    • H04H60/41: Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying broadcast time or space for identifying broadcast space, i.e. broadcast channels, broadcast stations or broadcast areas
    • H04H60/44: Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying broadcast time or space for identifying broadcast space, i.e. broadcast channels, broadcast stations or broadcast areas for identifying broadcast stations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04H: BROADCAST COMMUNICATION
    • H04H60/00: Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/56: Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H60/58: Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 of audio
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04H: BROADCAST COMMUNICATION
    • H04H2201/00: Aspects of broadcast communication
    • H04H2201/90: Aspects of broadcast communication characterised by the use of signatures
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04H: BROADCAST COMMUNICATION
    • H04H60/00: Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/29: Arrangements for monitoring broadcast services or broadcast-related services
    • H04H60/33: Arrangements for monitoring the users' behaviour or opinions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04H: BROADCAST COMMUNICATION
    • H04H60/00: Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/35: Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
    • H04H60/49: Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying locations
    • H04H60/51: Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying locations of receiving stations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04H: BROADCAST COMMUNICATION
    • H04H60/00: Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/61: Arrangements for services using the result of monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H60/63: Arrangements for services using the result of monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 for services of sales
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04H: BROADCAST COMMUNICATION
    • H04H60/00: Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/61: Arrangements for services using the result of monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H60/65: Arrangements for services using the result of monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 for using the result on users' side
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/06: Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/06: Selective distribution of broadcast services, e.g. multimedia broadcast multicast service [MBMS]; Services to user groups; One-way selective calling services

Definitions

  • the subject matter described herein relates to a phone-based system for identifying broadcast audio streams, and methods of providing such a system.
  • This specification describes various aspects relating to providing broadcast audio and broadcast source identification by a broadcast monitoring service.
  • the systems and methods described herein can, e.g., generate a temporal database of broadcast audio streams in real-time and retrieve the broadcast information (e.g., metadata, RBDS and HD Radio information) associated with the broadcast audio streams. Further, the system can, e.g., identify what station or channel and what kind of audio a user is listening to by comparing an audio sample taken of a live broadcast provided by the user through his phone (e.g., a mobile or land-line phone) with a cached stream of audio which is captured from an over-the-air or network stream or a pre-processed audio database and retrieving audio identification information from the pre-processed database.
  • the broadcast monitoring service can derive what station the user was listening to based on the corresponding audio from the pre-processed database.
  • a user calls the broadcast identification system, sends an audio sample of "Snow" by the Red Hot Chili Peppers, which is accurately identified in the pre-processed database. From this recognition event (which can include time stamp and/or cell tower info), a broadcast monitoring service can determine which station was playing "Snow" and send the user links to that station.
  • one aspect can be a method that includes obtaining a broadcast stream that includes one or more broadcast segments, and associating each broadcast segment with a broadcast timestamp.
  • the method also includes receiving an audio sample through a user-initiated telephone connection for a predetermined period of time.
  • the method further includes comparing the received audio sample with a database comprising a plurality of pre-processed audio samples, and obtaining a matching pre-processed audio sample that most closely correlates to the received audio sample.
  • Other implementations of this aspect include corresponding systems, apparatus, and computer program products.
  • the method can include associating a user audio timestamp with the received audio sample.
  • the method can also include selecting one of the associated broadcast timestamps that most closely corresponds to the user audio timestamp, and retrieving one or more broadcast segments associated with the selected broadcast timestamp.
  • the method can further include comparing the matching pre-processed audio sample with the retrieved broadcast segments, and identifying a broadcast source based on a matching broadcast segment that most closely correlates to the matching pre-processed audio sample.
  • the predetermined period of time is less than about 25 seconds. In one implementation, the predetermined period of time is about 20 seconds.
  • the method can additionally include obtaining metadata from the matching broadcast segment, and transmitting a message based on the obtained metadata.
  • the method can include obtaining, from a metadata source, metadata associated with the matching broadcast segment based, at least in part, on the identified broadcast source, and transmitting a message based on the obtained metadata.
  • the transmitted message can be a text message, an e-mail message, a multimedia message, an audio message, a wireless application protocol message, or a data feed.
  • the metadata can be provided by a radio broadcast data standard (RBDS) broadcast stream, a radio data system (RDS) broadcast stream, a high definition radio broadcast stream, a vertical blanking interval (VBI) broadcast stream, a digital audio broadcasting (DAB) broadcast stream, a MediaFLO broadcast stream, or a closed caption broadcast stream.
  • the metadata source can include a broadcast log of the identified broadcast source, a third-party service provider of broadcast media information, or the Internet.
  • In another aspect, a system includes a broadcast server configured to maintain a pre-processed audio database.
  • the system also includes means for identifying an audio sample received through a user-initiated telephone connection for a predetermined period of time.
  • the system further includes means for identifying a broadcast source that broadcast the received audio sample.
  • Such computer program products can include executable instructions that cause a computer system to conduct one or more of the method acts described herein.
  • the systems described herein can include one or more processors and a memory coupled to the one or more processors.
  • the memory can encode one or more programs that cause the one or more processors to perform one or more of the method acts described herein.
  • FIG. 1 is a conceptual diagram of a system that can analyze audio samples obtained from a live broadcast and deliver personalized, interactive messages to the user.
  • FIG. 2 illustrates a schematic diagram of a system that can identify broadcast audio streams from various broadcast sources in a geographic region.
  • FIG. 3A is a flow chart showing a method for providing broadcast audio identification.
  • FIG. 3B is a flow chart showing a method for comparing a user audio identifier (UAI) to cached broadcast stream audio identifiers (BSAIs).
  • FIG. 4 illustrates conceptually a method for generating broadcast fingerprints of a single broadcast stream.
  • FIG. 5 shows an example comparison of a user fingerprint to a broadcast fingerprint.
  • FIG. 6A shows an example of a wireless application protocol (WAP) message that can be displayed on a user's phone to allow a user to rate the audio sample and contact the broadcast source.
  • FIG. 6B shows another example of a WAP message that can be displayed on a user's phone to allow a user to purchase an identified song or buy a ringtone.
  • FIG. 6C shows yet another example of a WAP message including a coupon that can be displayed on a user's phone and used by the user in a future transaction.
  • FIG. 7 shows conceptually a method for comparing an identified pre-processed audio sample with broadcast audio samples.
  • FIG. 8 is a flow chart showing a method of providing broadcast audio identification by a broadcast monitoring service.
  • FIG. 9 is a flow chart showing another method of providing broadcast audio identification by a broadcast monitoring service.
  • FIG. 1 is a conceptual diagram of a system 100 that can analyze audio samples obtained from a live broadcast, such as broadcast stream 122, from a broadcast audio source, e.g., 110, via a user's phone, e.g., 150, and deliver via a communication link, e.g., 152, personalized, interactive messages to the user's phone, e.g., 150.
  • the system and its associated methods permit users to receive personalized broadcast information associated with broadcast streams that is both current and relevant. It is current because it reflects real-time broadcast information. It is relevant because it can provide interactive information that is of interest to the user, such as hyperlinks and coupons, based on the audio sample without requiring the user to recognize or enter detailed information about the live broadcast from which the audio sample is taken.
  • a broadcast audio stream (or broadcast stream) 122, 124 can include, e.g., an audio component (broadcast audio) and a data component (metadata), which describes the content of the audio component.
  • the broadcast stream 122, 124 can include, e.g., just the broadcast audio.
  • the metadata can be obtained from a source other than the broadcast stream, e.g., the station log (e.g., a radio playlist), a third party service provider of broadcast media information (e.g., MediaGuide, Media Monitors, Nielsen, Auditude, or ex-Verance), the Internet (e.g., the broadcaster's website), and the like.
  • broadcast sources 110, 120 each transmits a corresponding broadcast stream 122, 124 in a geographic region 125.
  • a server cluster 130, which can include multiple servers in a distributed system or a single server, is used to receive and cache the broadcast streams 122, 124 from all the broadcast sources in the geographic region 125.
  • the server cluster 130 can also be used to store pre-processed audio databases.
  • the server cluster 130 can be deployed in situ or remotely from the broadcast sources 110, 120. In the case of a remote deployment, the server cluster 130 can tune to the broadcast sources 110, 120 and cache the broadcast streams 122, 124 in real time as the broadcast streams 122, 124 are received. In the case of an in situ deployment, a server of the server cluster 130 is deployed in each of the broadcast sources 110, 120 to cache the broadcast streams 122, 124 in real time, as each broadcast stream 122, 124 is transmitted.
  • In addition to caching (i.e., temporarily storing) the broadcast streams 122, 124, the server cluster 130 also processes the cached broadcast streams into broadcast fingerprints for portions of the broadcast audio. Each portion (or segment) of the broadcast audio corresponds to a predefined duration of the broadcast audio. For example, a portion (or segment) can be predefined to be 10 seconds or 20 seconds or some other predefined time duration of the broadcast audio. These broadcast fingerprints are also cached in the server cluster 130. In certain implementations, the server cluster 130 can also assign broadcast timestamps to the cached broadcast streams and store the information in a temporal database.
  • Users, e.g., users 140, 145, who are tuned to particular broadcast channels of the broadcast sources 110, 120, may want more information on the broadcast audio stream that they are listening to or just heard.
  • user 140 may be listening to a song on broadcast stream 122 being transmitted from the broadcast source 110, which could be prerecorded or a live performance by the artist at the studio of the broadcast source 110. If the user 140 really likes the song but does not recognize it (e.g., because the song is new) and would like to obtain more information about the song, the user 140 can then use his phone 150 to connect with the server cluster 130 via a communications link 152 and obtain metadata associated with the song.
  • the communications link 152 can be a cellular network, a wireless network, a satellite network, an Internet network, some other type of communications network or combination of these.
  • the phone 150 can be a mobile phone, a traditional landline-based telephone, or an accessory device to one of these types of phones.
  • the user 140 can relay the broadcast audio via the communications link 152 to the server cluster 130.
  • a server in the server cluster 130, e.g., an audio server, samples the broadcast audio relayed to it from the phone 150 via communications link 152 for a predefined period of time, e.g., about 20 seconds in this implementation, and stores the sample (i.e., audio sample).
  • the predefined period of time can be more or less than 20 seconds depending on design constraints.
  • the predefined period of time can be 5 seconds, 10 seconds, 24 seconds, or some other period of time.
  • the server cluster 130 can then process the audio sample into a user audio fingerprint and perform an audio identification by comparing this user fingerprint with a pool of cached broadcast fingerprints.
  • the server cluster 130 can also compare the user fingerprint with a pre-processed audio database in order to identify the audio of interest to the user.
  • the predefined portion of the broadcast audio provided by the user has the same time duration as the predefined portion of the broadcast stream cached by the server cluster 130.
  • the system 100 can be configured so that a 10-second duration of the broadcast audio is used to generate broadcast fingerprints. Similarly, a 10-second duration of the audio sample is cached by the server cluster 130 and used to generate a user audio fingerprint.
  • the server cluster 130 can deliver a personalized and interactive message to the user 140 via communications link 152 based on the metadata of the identified broadcast stream.
  • This personalized message can include the song title and artist information, as well as a hyperlink to the artist's website or a hyperlink to download the song of interest.
  • the message can be a text message (e.g., SMS), an instant message, an email message, a video message, an audio message, a multimedia message (e.g., MMS), a wireless application protocol (WAP) message, a data feed (e.g., an RSS feed, XML feed, etc.), or a combination of these.
  • the user 145 may be listening to the broadcast stream 124 being transmitted by the broadcast source 120 and wants to find out more about a contest for a trip to Hawaii that is being discussed.
  • the user 145 can then use her phone 155, which can be a mobile phone, a traditional landline-based telephone, or an accessory device to one of these types of phones, to connect with the server cluster 130 via communications link 157 and obtain more information, such as metadata associated with the song, i.e., broadcast information.
  • By using the phone 155, the user 145 can relay the broadcast audio via the communications link 157 to the server cluster 130.
  • a server in the server cluster 130 samples the broadcast audio relayed to it from the phone 155 via communications link 157 for a predefined period of time, e.g., about 20 seconds in this implementation, and stores the sample (i.e., audio sample).
  • a predefined period of time can be more or less than 20 seconds depending on design constraints.
  • the predefined period of time can be about 5 seconds, 10 seconds, 14 seconds, 24 seconds, or some other period of time.
  • the personalized message can be in a form of a WAP message, which can include, e.g., a hyperlink to the broadcast source (e.g., the radio station) to obtain the rules of the contest. Additionally, the message can allow the user 145 to "scroll" back to an earlier segment of the broadcast by a predetermined amount of time, e.g., 30 seconds or some other period of time, in order to obtain information on broadcast audio that she might have missed.
  • In addition to the server cluster 130, which is associated with the geographic region 125, other server clusters can be deployed to service other geographic regions.
  • a superset of server clusters can be formed with each server cluster communicatively coupled to one another.
  • server clusters in neighboring geographic regions can be queried to perform the audio identification. Therefore, the system 100 can allow for situations where a user travels from one geographic region to another geographic region.
  • FIG. 2 illustrates a schematic diagram of a system 200 that can be used to identify broadcast streams from various broadcast sources 202, 204, and 206 in a geographic region 208.
  • the broadcast sources 202, 204, and 206 can be any type of sources capable of transmitting broadcast streams, such as radios, televisions, Internet sites, satellites, and location broadcasts (e.g., background music at a mall).
  • a broadcast monitoring service 210 which includes a capture server 215 and a broadcast server 220, can be deployed in the geographic region 208 to record broadcast streams and deliver broadcast information (e.g., metadata) to users.
  • the capture server 215 can be deployed remotely from the broadcast sources 202, 204, and 206 and the broadcast server 220, but still within the geographic region 208; on the other hand, the broadcast server 220 can be deployed outside of the geographic region 208, but communicatively coupled with the capture server 215 via a communications link 222.
  • the capture server 215 receives and caches the broadcast streams. Once the capture server 215 has cached broadcast streams for a non-persistent, selected temporary period of time, the capture server 215 starts overwriting the previously cached broadcast streams in a first-in-first-out (FIFO) fashion. In this manner, the capture server 215 is different from a database library, which stores pre-processed information and intends to store such information permanently for long periods of time. Further, the most recent broadcast streams for the selected temporary period of time will be cached in the capture server 215. In one implementation, the selected temporary period of time can be configured to be about fifteen minutes and the capture server 215 caches the latest 15-minute duration of broadcast streams in the geographic region 208. In other implementations, the selected temporary period of time can be configured to be longer or shorter than 15 minutes, e.g., five minutes, 45 minutes, 3 hours, a day, or a month.
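  • A minimal sketch of this first-in-first-out caching behavior is shown below, assuming an in-memory cache that keeps roughly the last fifteen minutes of segment records; the class and method names (TemporalBroadcastCache, add_segment, segments_near) are illustrative assumptions, not terms from the description.

        import collections
        import time

        class TemporalBroadcastCache:
            """Keep only the most recent broadcast segments, evicting older ones FIFO-style."""

            def __init__(self, retention_seconds=15 * 60):
                self.retention_seconds = retention_seconds
                self._entries = collections.deque()  # (broadcast_timestamp, segment_record) pairs

            def add_segment(self, segment_record, timestamp=None):
                ts = time.time() if timestamp is None else timestamp
                self._entries.append((ts, segment_record))
                self._evict(ts)

            def _evict(self, now):
                # Drop anything older than the retention window (about 15 minutes here).
                while self._entries and now - self._entries[0][0] > self.retention_seconds:
                    self._entries.popleft()

            def segments_near(self, target_timestamp, tolerance_seconds=5.0):
                # Return cached segment records whose timestamps are close to the target timestamp.
                return [rec for ts, rec in self._entries if abs(ts - target_timestamp) <= tolerance_seconds]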
  • the cached broadcast streams can then be processed by the broadcast server 220 to generate a series of broadcast fingerprints, which is discussed in further detail below. Each of these broadcast fingerprints is associated with a broadcast timestamp, which indicates the time that the broadcast stream was cached in the capture server 215.
  • the broadcast server 220 can also generate broadcast stream audio identifiers (BSAIs) associated with the cached broadcast streams. Each BSAI corresponds to a predetermined portion or segment (e.g., 20 seconds) of a broadcast stream.
  • the BSAI can include the broadcast fingerprint, the broadcast timestamp and metadata (broadcast information) retrieved from the broadcast stream.
  • the BSAI may not include the metadata associated with the broadcast stream.
  • the BSAIs are cached in the broadcast server 220 and can facilitate searching of an audio match generated from another source of audio.
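  • For illustration only, a BSAI and its user-side counterpart could be represented as simple records like those below, which could serve as the segment records held by a cache such as the one sketched earlier; the field names are assumptions rather than terms defined here, and the fingerprint is left as opaque bytes.

        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class BSAI:
            """Broadcast stream audio identifier for one cached broadcast segment (e.g., 20 seconds)."""
            station_id: str                  # which broadcast source produced the segment
            broadcast_timestamp: float       # when the segment was cached by the capture server
            fingerprint: bytes               # compact representation of the segment's audio
            metadata: Optional[dict] = None  # e.g., RBDS/HD Radio data; may be absent

        @dataclass
        class UAI:
            """User audio identifier built from a phone-relayed audio sample."""
            user_timestamp: float
            fingerprint: bytes
            caller_info: dict = field(default_factory=dict)  # e.g., ANI and Carrier ID from SS7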
  • a broadcast receiver 230 can be tuned by a user to one of the broadcast sources 202, 204, and 206.
  • the broadcast receiver 230 can be any device capable of receiving broadcast audio, such as a radio, a television, a stereo receiver, a cable box, a computer, a digital video recorder, or a satellite radio receiver. As an example, suppose the broadcast receiver 230 is tuned to the broadcast source 206.
  • a user listening to broadcast source 206 can then use her phone 235 to connect with the system 200, by, e.g., dialing a number (e.g., a local number, a toll free number, a vertical short code, or a short code), or clicking a link or icon on the phone's display, or issuing a voice or audio command.
  • the user via the user's phone 235, is then connected to a network carrier 240, such as a mobile phone carrier, an interexchange carrier (IXC), or some other network, through communications link 242.
  • the broadcast server 220 also generates and maintains a pre-processed audio database.
  • a user audio sample is sent to the broadcast monitoring service 210
  • the user audio sample is matched against the pre-processed database.
  • the pre-processed audio database can be used to identify the broadcast audio of interest to the user.
  • based on a user timestamp, e.g., the timestamp corresponding to receipt of the user audio sample by the broadcast server 220, the identified audio in the pre-processed database is matched against the monitored broadcast sources 202, 204, and 206.
  • the broadcast monitoring service can also identify the broadcast source that transmitted the broadcast audio of interest to the user.
  • FIG. 3A is a flow chart showing a method 300 for providing broadcast audio identification based on audio samples obtained from a broadcast stream provided by a user through a user-initiated connection, such as by dialing-in.
  • the steps of method 300 are shown in reference to a timeline 302; thus, two steps that are at the same vertical position along timeline 302 indicate that the steps can be performed at substantially the same time. In other implementations, the steps of method 300 can be performed in a different order and/or at different times.
  • a user tunes to a broadcast source to receive one or more broadcast audio streams.
  • This broadcast source can be a pre-set radio station that the user likes to listen to or it can be a television station that she just tuned in.
  • the broadcast source can be a location broadcast that provides background music in a public area, such as a store or a shopping mall.
  • the user uses a telephone (e.g., mobile phone or a landline-based phone) to connect to the server by, e.g., dialing a number, a short code, and the like.
  • the call is connected to a carrier, which can be a mobile phone carrier or an IXC carrier.
  • the carrier can then open a connection with the server; at 317, the server receives the user-initiated telephone connection.
  • the user is connected to the server and an audio sample can be relayed by the user to the server.
  • the server can be receiving broadcast streams from all the broadcast sources in a geographic region, such as a city, a town, a metropolitan area, a country, or a continent.
  • Each of the broadcast streams can be an audio channel transmitted from a particular broadcast source.
  • the geographic region can be the San Diego metropolitan area
  • the broadcast source can be radio station KMYI
  • the audio channel can be 94.1 FM.
  • the broadcast stream can include an audio signal, which is the audio component of the broadcast, and metadata, which is the data component of the broadcast.
  • the broadcast stream may not include the metadata.
  • the metadata can be obtained from a metadata source, such as the broadcast source's broadcast log (e.g., a radio playlist), a third party service provider of broadcast media information (e.g., MediaGuide, Media Monitors, Nielsen, Auditude, or ex-Verance), the Internet (e.g., the broadcaster's website), and the like.
  • when the metadata is part of the broadcast stream, it can be obtained from various broadcast formats or standards, such as a radio data system (RDS), a radio broadcast data system (RBDS), a hybrid digital (HD) radio system, a vertical blank interval (VBI) format, a closed caption format, a MediaFLO format, or a text format.
  • the received broadcast streams are cached for a selected temporary period of time, for example, about 15 minutes.
  • a broadcast fingerprint is generated for a predetermined portion of each of the cached broadcast streams.
  • the predetermined portion of a broadcast stream can be between about 5 seconds and 20 seconds.
  • the predetermined portion is configured to be a 20-second duration of a broadcast stream and a broadcast fingerprint is generated every 5 seconds for a 20-second duration of a broadcast stream. This concept is illustrated with reference to FIG. 4.
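  • The sliding-window idea can be sketched as below: slice a cached stream into 20-second windows hopped every 5 seconds and fingerprint each one. The fingerprint_audio helper is a placeholder assumption; the description leaves the actual fingerprinting technology open.

        import hashlib

        WINDOW_SECONDS = 20
        HOP_SECONDS = 5
        SAMPLE_RATE = 8000  # assumed telephone-quality sample rate, for illustration only

        def fingerprint_audio(samples):
            # Placeholder: a real system would use a robust acoustic fingerprint,
            # not a cryptographic hash of the raw sample values.
            return hashlib.sha1(bytes(bytearray(int(s) & 0xFF for s in samples))).digest()

        def broadcast_fingerprints(stream_samples, stream_start_time):
            """Yield (broadcast_timestamp, fingerprint) pairs: one 20-second window every 5 seconds."""
            window = WINDOW_SECONDS * SAMPLE_RATE
            hop = HOP_SECONDS * SAMPLE_RATE
            for offset in range(0, max(len(stream_samples) - window + 1, 0), hop):
                chunk = stream_samples[offset:offset + window]
                yield stream_start_time + offset / SAMPLE_RATE, fingerprint_audio(chunk)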
  • broadcast stream audio identifiers are generated.
  • the BSAI can include a broadcast fingerprint and its associated timestamp, as well as metadata associated with the broadcast portion (e.g., a 20-second duration) of the broadcast stream.
  • the BSAI may not include the metadata.
  • one BSAI is generated for each timestamp and a series of BSAIs can be generated for a single broadcast stream.
  • the BSAIs are cached by the server.
  • the server receives the user-initiated telephone connection and, at 320, the user relays the audio sample of interest to the server.
  • the server caches the audio sample, associates a user audio timestamp with the cached audio sample, and retrieves telephone information by, e.g., the SS7 protocol.
  • the SS7 information can include the following elements: (1) an automatic number identifier (ANI, or Caller ID); (2) a carrier identification (Carrier ID) that identifies which carrier originated the call. If this is unavailable, and the user has not identified her carrier in her user profile, a local number portability (LNP) database can be used to ascertain the home carrier of the caller for messaging purposes.
  • a lookup table can be searched and an email address can be concatenated (e.g., 1234562222@tmomail.net) together and a message can be sent to that email address.
  • ALI or BSN information can be used to identify what server cluster the user is located in and what pool of BSAI cache the UAI should be compared with.
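  • The carrier-gateway messaging idea above amounts to a lookup plus string concatenation; a sketch under stated assumptions follows. Only the tmomail.net domain appears in the text; the carrier keys, the second domain, and the lnp_lookup callback are hypothetical.

        # Hypothetical mapping from Carrier ID to an SMS-over-email gateway domain.
        CARRIER_EMAIL_GATEWAYS = {
            "carrier_a": "tmomail.net",          # example domain mentioned in the text
            "carrier_b": "example-gateway.net",  # placeholder
        }

        def message_address(ani, carrier_id, lnp_lookup=None):
            """Build an email address for messaging the caller, e.g., 1234562222@tmomail.net."""
            carrier = carrier_id
            if carrier not in CARRIER_EMAIL_GATEWAYS and lnp_lookup is not None:
                # Ask a local number portability (LNP) database for the caller's home carrier.
                carrier = lnp_lookup(ani)
            domain = CARRIER_EMAIL_GATEWAYS.get(carrier)
            return f"{ani}@{domain}" if domain else None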
  • the server assigns the user timestamp based on the time that the audio sample is cached by the server.
  • the audio sample is a portion of the broadcast stream that the user is interested in and the portion can be a predetermined period of time, for example, a 5-20 second long audio stream.
  • the duration of the audio sample can be configured so that it corresponds with the duration of the broadcast portion of the broadcast stream as shown in FIG. 4.
  • the server generates a user audio fingerprint based on the cached audio sample.
  • the user audio fingerprint can be generated similarly to that of the broadcast fingerprints.
  • the user audio fingerprint is a unique representation of the audio sample.
  • the server generates a user audio identifier (UAI) based on, e.g., the SS7 elements, the user audio fingerprint, and the user timestamp.
  • the server compares the UAI with the cached series of BSAIs to find the most highly correlated BSAI for the audio sample.
  • the server retrieves the metadata from either the BSAI having the highest correlated broadcast fingerprint or an audio content from the backup database. As discussed above, when the metadata is part of the broadcast stream, it can be retrieved from the data component of the broadcast stream.
  • the metadata can be obtained from various broadcast formats or standards, such as those discussed above.
  • the metadata can be obtained from a metadata source based on the broadcast source and the broadcast timestamp associated with the most highly correlated BSAI.
  • the metadata source can be any source that can provide metadata of the identified broadcast stream, such as the broadcast source's broadcast log (e.g., a radio playlist), a third party service provider of broadcast media information (e.g., MediaGuide, Media Monitors, Nielsen, Auditude, or ex-Verance), the Internet (e.g., the broadcaster's website), and the like.
  • the server can also generate a user data set that includes the metadata, the user timestamp, and user data from a user profile.
  • the server generates a message, which can be a text message (e.g., an SMS message), an instant message, a multimedia message (e.g., a MMS message), an email message, or a wireless application protocol (WAP) message.
  • the amount of data and the format of the message sent by the server depends on the user's phone capability. For example, if the phone is a smartphone with Internet access, then a WAP message can be sent with embedded hyperlinks to allow the user to obtain additional information, such as a link to the artist's website, a link to download the song, and the like.
  • the WAP message can offer other interactive information based on Carrier ID and user profile. For example, hyperlinks to download a ringtone of the song from the mobile carrier can be included.
  • the server may only send an audio message with audio prompts.
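  • A rough sketch of choosing the reply format from the phone's capability and the retrieved metadata is shown below; the capability flag and metadata keys are illustrative assumptions.

        def build_message(metadata, phone_has_wap=True):
            """Return a (format, body) pair describing the message to send back to the caller."""
            station = metadata.get("station", "your station")
            artist = metadata.get("artist", "Unknown artist")
            title = metadata.get("title", "Unknown title")
            if phone_has_wap:
                links = [metadata.get("artist_url"), metadata.get("purchase_url"), metadata.get("ringtone_url")]
                body = {
                    "station": station,
                    "identified": f"{artist} - {title}",
                    "links": [link for link in links if link],  # embed only the hyperlinks that exist
                }
                return "WAP", body
            # Fall back to something simpler (plain text here; an audio prompt is another option).
            return "SMS", f"{station}: {artist} - {title}"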
  • FIG. 3B is a flow chart illustrating in further detail step 370 of FIG. 3A, which compares the UAI to cached BSAIs.
  • the server obtains the user timestamp (UTS) from the UAI and then queries the cached BSAIs to select a broadcast timestamp (BTS) that most closely corresponds to the user timestamp, i.e., a corresponding broadcast timestamp or CBTS.
  • the server retrieves all the broadcast fingerprints (BFs) having the corresponding BTS.
  • the server compares the user fingerprint with each of the retrieved broadcast fingerprints to find the retrieved broadcast fingerprint that most closely corresponds to the user fingerprint.
  • FIG. 5 is discussed below.
  • the server determines whether the highest correlation from the comparison is higher than a predefined threshold value, e.g., 20%.
  • the server can be configured to always look for 5 seconds of timestamps prior to the user timestamp.
  • the process repeats at 372, with the server retrieving an earlier timestamp at 372 and retrieving another series of broadcast fingerprints associated with the earlier broadcast timestamp.
  • the server determines whether there is a backup database of audio content.
  • the backup database can be similar to the database library of fingerprinted audio content. If a backup database is not available, at 384, then a broadcast audio identification cannot be achieved. However, if there is a backup database, at 386, the user fingerprint is compared with the backup database of fingerprints in order to find a correlation. At 388, the server determines whether the correlation is greater than a predefined threshold value. If the correlation is greater than the threshold value, at 380, the metadata for the audio content having the correlated fingerprint is retrieved or obtained. On the other hand, if the correlation does not exceed the threshold value, then the broadcast audio identification cannot be achieved at 384.
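  • Reading the flow of FIG. 3B as rough Python gives the sketch below. The 20% threshold and the 5-second step back come from the description; the correlation measure, the retry limit, the cache interface (segments_near, as in the earlier sketch), and the backup_db.best_match call are assumptions.

        def correlate(fp_a, fp_b):
            """Toy similarity in [0, 1]: fraction of matching fingerprint bytes."""
            if not fp_a or not fp_b:
                return 0.0
            matches = sum(a == b for a, b in zip(fp_a, fp_b))
            return matches / max(len(fp_a), len(fp_b))

        def match_uai(uai, bsai_cache, backup_db=None, threshold=0.20, step_back=5.0, max_retries=3):
            """Find the cached BSAI that best matches the user fingerprint, else try the backup database."""
            target_ts = uai.user_timestamp
            for _ in range(max_retries):
                candidates = bsai_cache.segments_near(target_ts)  # BSAIs with the closest broadcast timestamp
                scored = [(correlate(uai.fingerprint, b.fingerprint), b) for b in candidates]
                if scored:
                    best_score, best = max(scored, key=lambda pair: pair[0])
                    if best_score > threshold:
                        return best        # metadata can then be taken from this BSAI
                target_ts -= step_back     # look 5 seconds earlier and try again
            if backup_db is not None:
                score, item = backup_db.best_match(uai.fingerprint)  # assumed backup-database interface
                if score > threshold:
                    return item
            return None                    # broadcast audio identification cannot be achieved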
  • FIG. 4 illustrates conceptually a method for generating a series of broadcast fingerprints of a single broadcast stream.
  • at time = 20 seconds, the broadcast portion 406 of the broadcast stream 402 is processed to generate a broadcast fingerprint 408.
  • the broadcast fingerprint 408 is a unique representation of the broadcast portion 406. Any commonly known audio fingerprinting technology can be used to generate the broadcast fingerprint 408.
  • the next broadcast portion 412, which is a different 20-second duration of the broadcast stream 402, is processed to generate a broadcast fingerprint 414.
  • the broadcast fingerprint 414 is uniquely different from the broadcast fingerprint 408 because the broadcast portion 412 is different from the broadcast portion 406.
  • a series of additional broadcast fingerprints can be generated for each succeeding 20-second broadcast portion of the broadcast stream 402.
  • the broadcast stream 402 and the broadcast fingerprints (408, 414, 420, 426, 432, and 438) are then cached for a selected temporary period of time, e.g., about 15 minutes.
  • other broadcast streams (not shown) can be cached simultaneously with the broadcast stream 404.
  • Each of these additional broadcast streams will have its own series of broadcast fingerprints with a successive timestamp indicating a 5-second interval.
  • at time = 20 seconds, the broadcast fingerprint 408 of the broadcast stream 402 would be retrieved.
  • FIG. 5 shows an example comparison of a user fingerprint 510 with one of the retrieved broadcast fingerprints 520.
  • a 20-second duration of the broadcast stream is used to generate the broadcast fingerprint 520.
  • the correlation between the user fingerprint 510 and the broadcast fingerprint 520 does not have to be 100%; rather, the server selects the highest correlation greater than 0%. This is because the correlation is used to identify the broadcast stream and determine what metadata to send to the user.
  • FIGs. 6A - 6C illustrate exemplary messages that a server can send to a user based on the metadata of the identified broadcast stream.
  • FIG. 6A shows an example of a WAP message 600 that allows the user to rate the audio sample and contact the broadcast source.
  • the WAP message 600 includes a message ID 602 and identifies the broadcast source as radio station KXYZ 604.
  • the WAP message 600 also identifies the artist 606 as "Coldplay" and the song title 608 as "Yellow."
  • the user can enter a rating 610 of the identified song or sign up 612 with the radio station by clicking the "Submit" button 614.
  • the user can also send an email message to the disc jockey (DJ) of the identified radio station by clicking on the hyperlink 616.
  • FIG. 6B shows an example of a WAP message 620 that allows the user to purchase the identified song or buy a ringtone directly from the phone.
  • the WAP message 620 includes a message ID 622 and identifies the broadcast source as radio station KXYZ 624.
  • the WAP message 620 also identifies the artist 626 as "Beck," the song title 628 as "Que onda Guero," and the compact disc title 630 as "Guero."
  • the user can purchase the identified song by clicking on the hyperlink 632 or purchase a ringtone from the mobile carrier by clicking on the hyperlink 634.
  • WAP message 620 includes an advertisement for "The artist of the month" depicted as a graphical object. The user can find out more information about this advertisement by clicking on the hyperlink 636.
  • FIG. 6C shows an example of a WAP message 640 that delivers a coupon to the user's phone.
  • the WAP message 640 includes a 10% discount coupon 642 for "McDonald's."
  • the audio sample provided by the user is an advertisement or a jingle by "McDonald's" and as the server identifies the advertisement by retrieving or obtaining the metadata associated with the advertisement, the server can generate a WAP message that is targeted to interested users.
  • the WAP message 640 can include a "scroll back" feature to allow the user to obtain information on a previous segment of the broadcast stream that she might have missed.
  • the WAP message 640 includes a hyperlink 644 to allow the user to scroll back to a previous segment by 10 seconds, a hyperlink 646 to allow the user to scroll back to a previous segment by 20 seconds, and a hyperlink 648 to allow the user to scroll back to a previous segment by 30 seconds.
  • Other predetermined periods of time can also be provided by the WAP message 640, as long as that segment of the broadcast stream is still cached in the server.
  • This "scroll back" feature can accommodate situations where the user just heard a couple of seconds of the broadcast stream, and by the time she dials-in or connects to the broadcast audio identification system, the broadcast info is no longer being transmitted.
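  • A minimal sketch of the "scroll back" idea, reusing the hypothetical cache interface from the earlier sketches: fetch the segment a fixed number of seconds earlier on the same station, if it is still cached.

        def scroll_back(bsai_cache, current_broadcast_timestamp, station_id, seconds_back=30):
            """Return the cached segment `seconds_back` earlier on the same station, if any."""
            earlier_ts = current_broadcast_timestamp - seconds_back
            for segment in bsai_cache.segments_near(earlier_ts):
                if segment.station_id == station_id:
                    return segment
            return None  # that part of the broadcast has already aged out of the cache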
  • FIG. 7 shows an implementation of comparing the identified pre-processed audio sample 702 with broadcast audio samples 710, 712, 714 from various broadcast sources.
  • Each of the broadcast audio samples 710, 712, 714 can be associated with a different broadcast source.
  • broadcast audio sample 710 can represent an audio sample broadcasted by a radio station, FM 94.1; while broadcast audio sample 712 can represent an audio sample broadcasted by a different radio station, FM 102.2.
  • the server can retrieve broadcast audio samples having broadcast timestamps matching the user timestamp. Additionally, the server can compare the identified pre-processed audio sample 702 with all the broadcast audio samples having the same broadcast timestamp.
  • the broadcast source can be identified based on the identified pre-processed audio sample and the timestamp. For example, suppose that the broadcast audio sample 710 (which is derived from a broadcast stream from FM 94.1) most closely matches the identified pre-processed audio sample 702. The server can then identify the broadcast source as the radio station, FM 94.1.
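  • In the spirit of FIG. 7, the source lookup can be sketched as scoring every monitored station's segment at the matching timestamp against the identified reference audio; correlate and segments_near are the hypothetical helpers from the earlier sketches, and the 20% threshold is reused as an assumption.

        def identify_broadcast_source(reference_fingerprint, bsai_cache, user_timestamp, threshold=0.20):
            """Return the station whose timestamp-matched segment best correlates with the identified audio."""
            best_station, best_score = None, 0.0
            for segment in bsai_cache.segments_near(user_timestamp):  # roughly one segment per station
                score = correlate(reference_fingerprint, segment.fingerprint)
                if score > best_score:
                    best_station, best_score = segment.station_id, score
            return best_station if best_score > threshold else None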
  • FIG. 8 is a flow chart showing a method 800 for providing broadcast audio identification based on audio samples obtained from a broadcast stream provided by a user through a user-initiated connection, such as by dialing-in.
  • the broadcast audio identification system can be implemented by a broadcast source or a broadcast monitoring service. If implemented by a broadcast source, there is one broadcast stream to be identified and the broadcast source already has information on the broadcast stream being transmitted.
  • the steps of method 800 are shown in reference to a timeline 802; thus, two steps that are at the same vertical position along timeline 802 indicate that the steps can be performed at substantially the same time. In other implementations, the steps of method 800 can be performed in a different order and/or at different times.
  • a user tunes to a broadcast source to receive a broadcast audio stream transmitted by the broadcast source.
  • This broadcast source can be a pre-set radio station that the user likes to listen to or it can be a television station that she just tuned in.
  • the broadcast source can be a location broadcast that provides background music in a public area, such as a store or a shopping mall.
  • the user uses a telephone (e.g., mobile phone or a landline-based phone) to connect to the server of the broadcast source by, e.g., dialing a number, a short code, and the like.
  • the user can dial a number assigned to the broadcast source; for example, if the broadcast source is a radio station transmitting at 94.1 FM, the user can simply dial "*941" to connect to the server.
  • the call is connected to a carrier, which can be a mobile phone carrier or an IXC carrier.
  • the carrier can then open a connection with the server; at 820, the server receives the user-initiated telephone connection.
  • the user is connected to the server and an audio sample can be relayed by the user to the server.
  • the server can be obtaining the broadcast streams to be transmitted by one or more broadcast sources.
  • Each broadcast stream can be associated with a different broadcast source.
  • each broadcast stream can include many broadcast segments, and each broadcast segment can be a predetermined portion of the broadcast stream.
  • a broadcast segment can be a 5-second duration of the broadcast stream.
  • the broadcast stream can also include an audio signal, which is the audio component of the broadcast.
  • the broadcast stream may or may not include the metadata, which is the data component of the broadcast.
  • the broadcast segments are stored in a broadcast audio database for a predetermined period of time, for example, about 15 minutes.
  • a broadcast timestamp (BTS) is associated with each of the stored broadcast segments.
  • the server caches the audio sample and associates a user timestamp (UTS) with the cached audio sample.
  • the server assigns the user timestamp based on the time that the audio sample is cached by the server.
  • the audio sample is a portion of the broadcast stream that the user is interested in and the portion can be a predetermined period of time, for example, a 5-25 second long audio stream.
  • the duration of the audio sample can be configured so that it corresponds with the duration of the broadcast segment of the broadcast stream.
  • the server identifies the user audio sample.
  • the server compares the user audio sample with a database of pre-processed audio samples (which can include, e.g., songs, ads, TV shows, and the like) and obtains the matching pre-processed audio sample that most closely correlates to the user audio sample.
  • the server compares the UTS with the stored BTSs to find the broadcast segments associated with the most highly correlated BTS. Once the highest correlated BTS is selected, all the associated broadcast segments having the BTS can be retrieved. Additionally, the matching pre-processed audio sample can be used to compare with the broadcast segments associated with the most highly correlated BTS and obtain the matching broadcast segment.
  • the server identifies the broadcast source based on the matching broadcast segment.
  • a user can call the broadcast audio recognition system, and send an audio sample of "Snow" by the Red Hot Chili Peppers.
  • the user audio sample (which is associated with a timestamp) is then accurately identified as "Snow" by the server when compared with the pre-processed database.
  • a broadcast monitoring service can determine which broadcast station was playing "Snow" at a time corresponding to the timestamp.
  • the audio identification or recognition is derived from a pre-processed database, and the broadcaster identification is derived based on the identified audio and its associated timestamp.
  • the server retrieves or obtains the metadata from the broadcast segment having the highest correlated BTS.
  • the metadata can be retrieved from the data component of the broadcast stream.
  • the metadata can be obtained from various broadcast formats or standards, such as those discussed above.
  • the metadata can be obtained from a metadata source based on the broadcast source and the broadcast timestamp associated with the most highly correlated BSAI.
  • the metadata source can be any source that can provide metadata of the identified broadcast stream, such as the broadcast source's broadcast log (e.g., a radio playlist), a third party service provider of broadcast media information (e.g., MediaGuide, Media Monitors, Nielsen, Auditude, or ex-Verance), the Internet (e.g., the broadcaster's website), and the like.
  • the server can also generate a user data set that includes the metadata, the user timestamp, and user data from a user profile.
  • the server generates a message, such as any of those discussed above. This message is transmitted to the user's phone and received by the user at 875.
  • FIG. 9 is a flow chart showing a method 900 of providing broadcast audio identification from a broadcast monitoring service.
  • the illustrated process provides broadcast audio identification by comparing a user-provided broadcast audio sample with a pre-processed audio database maintained by the broadcast monitoring service. For example, the illustrated process involves generating an audio database and identifying user audio samples. Additionally, the illustrated process involves matching a user timestamp with a broadcast timestamp and identifying the broadcaster that broadcast the user-provided audio sample. Further, the illustrated process involves retrieving metadata associated with the identified broadcast audio and interacting with the user via messages.
  • process 900 at 910, generates an audio database, which can be, e.g., a pre-processed database of songs and ads.
  • a broadcast monitoring service can also generate a temporal database by listening and capturing broadcast audio (e.g., radio and/or TV broadcasts) and storing the captured broadcast audio into the temporal database.
  • the BMSs, depending on their service, can monitor broadcasts in multiple markets (e.g., the "big players" like Arbitron and Nielsen listen to every market with >1M people - the top 100 markets, whereas the "smaller players" may only listen to the top 50 markets).
  • the BMSs can generate and maintain a pre-processed audio database of what they expect to hear from the broadcast audio.
  • the pre-processed audio database can be assembled by having their clients supply them with audio samples, such as a database of songs, ads, TV shows, and the like.
  • the BMS also generates a temporal database of broadcast audio, which is a database with timestamp information; thus, allowing a searchable field for the temporal database based on the timestamp information.
  • This temporal database is the broadcast stream which the BMS is comparing against the audio database to identify the broadcast audio of interest to a user.
  • the temporal database includes information on the broadcast channel that broadcast the audio of interest.
  • the user can dial a phone number where the BMS takes a user- provided audio sample of the broadcast audio and records the user timestamp.
  • the user can provide the audio sample using any known methods of transmitting and receiving voice or data calls.
  • the phone number that the user dials can be a local number in each market, a Toll Free Number, abbreviated dialing code, VoIP call (from buddy list), Nextel direct connect PTT connection, and the like.
  • an abbreviated dialing code of "#SKY" and a backup number of 1-87-PoundSKY are used.
  • the process 900 identifies the user-provided audio sample by, e.g., fingerprinting against the pre-processed audio database. For example, the user audio sample is compared against the audio database to learn what broadcast audio the user is interested in. This can be done, e.g., using known fingerprinting processes for identifying audio samples. Once the user-provided audio sample has been identified, at 930, process 900 matches the user timestamp with the broadcast timestamp maintained in the temporal database.
  • the audio identified in the audio database that corresponds to the user-provided audio sample can be compared against the temporal database at the broadcast timestamp corresponding to the user timestamp to learn who was broadcasting the user-provided audio.
  • process 900 identifies the broadcast source that broadcast the audio of interest to the user by, e.g., checking against broadcast playlists for the relevant market.
  • the relevant market can be found through information from the user telephone, such as the originating local access and transport area ("LATA") information.
  • process 900 can implement additional measures to accurately identify the broadcast source. For example, in one implementation, process 900 can return a set of "possible" broadcast sources who played the same song. This set can then be further narrowed down to the accurate broadcast source through, e.g., 1) receiving another user-provided audio sample on the same station; 2) telephone number (ANI) lookup; 3) information from the telephony network (e.g., GPS, cell site, LATA, and the like); and 4) another source of location information (e.g., user preference setting).
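  • One way to read the narrowing step above is as repeated filtering of a candidate set, as in the sketch below; the station objects and their market attribute are assumptions for illustration only.

        def narrow_candidates(candidates, caller_market=None, confirmed_station=None, preferred_market=None):
            """Filter a set of possible broadcast sources using whatever extra signals are available."""
            result = set(candidates)
            if confirmed_station is not None:
                # A second sample identified on the same station pins the answer down directly.
                result &= {confirmed_station}
            if caller_market is not None:
                # Keep stations in the caller's market, e.g., derived from LATA, cell site, or GPS data.
                result = {s for s in result if s.market == caller_market}
            if preferred_market is not None:
                # A user preference setting can supply a further location hint.
                result = {s for s in result if s.market == preferred_market}
            return result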
  • process 900 retrieves or obtains the metadata from the identified broadcast audio.
  • when the metadata is part of the broadcast stream, it can be retrieved from the data component of the broadcast stream.
  • the metadata can be obtained from various broadcast formats or standards, such as those discussed above.
  • when the broadcast stream does not include the metadata, the metadata can be obtained from a metadata source based on the broadcast source and the broadcast timestamp associated with the most highly correlated BSAI.
  • the metadata source can be any source that can provide metadata of the identified broadcast stream, such as the broadcast source's broadcast log (e.g., a radio playlist), a third party service provider of broadcast media information (e.g., MediaGuide, Media Monitors, Nielsen, Auditude, or ex-Verance), the Internet (e.g., the broadcast source's website), and the like.
  • Process 900 can also generate a user data set that includes the metadata, the user timestamp, and user data from a user profile. At 960, process 900 generates a message, such as any of those discussed above and transmits the message to the user's phone.
  • Various implementations of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
  • These various implementations can include implementations in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
  • These computer programs include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language.
  • the term "memory" comprises a "computer-readable medium" that includes any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, RAM, ROM, registers, cache, flash memory, and Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal, as well as a propagated machine-readable signal.
  • the term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • the metadata associated with the broadcast audio can be obtained from sources other than the broadcast stream.
  • the broadcast source can also be identified by knowing the broadcast frequency (e.g., 96.1 MHz) on which the broadcast stream is broadcast. For instance, if a broadcast stream is being received by Tuner #6 in the broadcast server, and Tuner #6 is set to a frequency of 94.9 MHz, one can easily determine that the broadcast stream associated with Tuner #6 is from a broadcast source at 94.9 MHz (a tuner-mapping sketch appears after this list).
  • the metadata for the identified broadcast audio can be obtained from the broadcast source's broadcast log (e.g., a radio playlist), a third party service provider of broadcast media information (e.g., MediaGuide, Media Monitors, Nielsen, Auditude, or ex-Verance), or the Internet (e.g., the broadcast source's website). Accordingly, other implementations are within the scope of the following claims.
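
To make the matching step above concrete, the following is a minimal Python sketch of the fingerprint-and-timestamp lookup described for process 900. It is not the patented implementation: the fingerprint is modeled as a simple set of hash values, and the names (AudioRecord, TemporalRecord, identify_broadcast_source) and the 30-second tolerance are hypothetical choices made only for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional, Set

@dataclass
class AudioRecord:
    """Pre-processed audio sample in the audio database (hypothetical schema)."""
    audio_id: str            # identifier of the recognized audio, e.g. a song
    fingerprint: Set[int]    # toy fingerprint: a set of hash values

@dataclass
class TemporalRecord:
    """Row of the temporal database: which source broadcast which audio, and when."""
    source_id: str           # broadcast source, e.g. a station identifier
    audio_id: str
    start: datetime          # broadcast timestamp (segment start)
    end: datetime            # broadcast timestamp (segment end)

def best_fingerprint_match(user_fp: Set[int],
                           audio_db: List[AudioRecord]) -> Optional[AudioRecord]:
    """Return the pre-processed sample whose fingerprint overlaps the user sample most."""
    best, best_score = None, 0
    for record in audio_db:
        score = len(user_fp & record.fingerprint)
        if score > best_score:
            best, best_score = record, score
    return best

def identify_broadcast_source(user_fp: Set[int],
                              user_ts: datetime,
                              audio_db: List[AudioRecord],
                              temporal_db: List[TemporalRecord],
                              tolerance: timedelta = timedelta(seconds=30)) -> Optional[str]:
    """Identify the audio, then match the user timestamp against the broadcast
    timestamps in the temporal database to find who was broadcasting it."""
    match = best_fingerprint_match(user_fp, audio_db)
    if match is None:
        return None
    for row in temporal_db:
        if (row.audio_id == match.audio_id
                and row.start - tolerance <= user_ts <= row.end + tolerance):
            return row.source_id
    return None
```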
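
Where the same audio was played by several stations, the narrowing step can be sketched as a filter over candidate sources. This is only an illustration under assumed inputs: the station identifiers, the station_markets mapping, and the reduction of each location hint (LATA, ANI lookup, telephony-network data, user preference) to a single market string are simplifications, not part of the described system.

```python
from typing import Dict, Iterable, List, Optional

def narrow_broadcast_sources(candidates: Iterable[str],
                             station_markets: Dict[str, str],
                             lata_market: Optional[str] = None,
                             ani_market: Optional[str] = None,
                             network_market: Optional[str] = None,
                             preferred_market: Optional[str] = None) -> List[str]:
    """Keep only candidate stations whose market agrees with at least one
    available location hint; with no hints, return the candidates unchanged."""
    hints = {m for m in (lata_market, ani_market, network_market, preferred_market) if m}
    candidates = list(candidates)
    if not hints:
        return candidates
    narrowed = [s for s in candidates if station_markets.get(s) in hints]
    return narrowed or candidates   # never narrow down to an empty set

def intersect_with_second_sample(first: Iterable[str],
                                 second: Iterable[str]) -> List[str]:
    """Narrow further using the candidate set returned for another
    user-provided sample captured on the same station."""
    first = list(first)
    second_set = set(second)
    overlap = [s for s in first if s in second_set]
    return overlap or first
```

Under these assumptions, narrow_broadcast_sources(["KXYZ-FM", "WABC-FM"], {"KXYZ-FM": "San Diego", "WABC-FM": "New York"}, lata_market="San Diego") would keep only the San Diego station; the call signs here are invented placeholders.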
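
The metadata retrieval and message steps can likewise be sketched as a lookup into a playlist log keyed by source and broadcast timestamp, followed by assembly of the user data set and message. The PlaylistEntry schema, the dictionary layout, and the message wording below are assumptions for illustration, not the format used by any particular metadata provider.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class PlaylistEntry:
    """One row of a broadcast source's playlist log (hypothetical schema)."""
    source_id: str
    start: datetime
    end: datetime
    title: str
    artist: str

def lookup_metadata(source_id: str, broadcast_ts: datetime,
                    playlist_log: List[PlaylistEntry]) -> Optional[PlaylistEntry]:
    """Find the playlist entry for the identified source covering the broadcast timestamp."""
    for entry in playlist_log:
        if entry.source_id == source_id and entry.start <= broadcast_ts < entry.end:
            return entry
    return None

def build_user_message(entry: PlaylistEntry, user_ts: datetime, user_name: str) -> dict:
    """Assemble a user data set and a simple text message for delivery to the phone."""
    user_data_set = {
        "metadata": {"title": entry.title, "artist": entry.artist, "station": entry.source_id},
        "user_timestamp": user_ts.isoformat(),
        "user": user_name,
    }
    text = (f"Hi {user_name}: '{entry.title}' by {entry.artist} "
            f"was playing on {entry.source_id}.")
    return {"user_data_set": user_data_set, "message": text}
```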
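
Finally, the tuner-based identification mentioned above amounts to two small lookups: tuner number to tuned frequency, and frequency to broadcast source. The table contents below are invented placeholders; only the mapping pattern reflects the description.

```python
# Hypothetical configuration of the broadcast server's tuners and local stations.
TUNER_FREQUENCY_MHZ = {6: 94.9, 7: 96.1}
FREQUENCY_TO_SOURCE = {94.9: "Station at 94.9 MHz", 96.1: "Station at 96.1 MHz"}

def source_for_tuner(tuner_number: int) -> str:
    """Resolve the broadcast source of a stream by the tuner that received it."""
    frequency = TUNER_FREQUENCY_MHZ[tuner_number]   # e.g. Tuner #6 is set to 94.9 MHz
    return FREQUENCY_TO_SOURCE[frequency]

print(source_for_tuner(6))   # -> "Station at 94.9 MHz"
```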

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Various aspects of the invention can be implemented to provide a phone-based system for identifying broadcast audio streams and sources. In general, one aspect can be a method that includes obtaining a broadcast stream comprising one or more broadcast segments and associating each broadcast segment with a broadcast timestamp. The method also includes receiving an audio sample over a user-initiated telephone connection for a predetermined period of time. The method further includes comparing the received audio sample against a database incorporating a plurality of pre-processed audio samples, as well as obtaining a matching pre-processed audio sample that most closely approximates the received audio sample. Other implementations of this aspect include corresponding systems, apparatus, and computer program products.
PCT/US2008/077541 2007-09-24 2008-09-24 Identification de diffusion audio basée sur téléphone WO2009042697A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US97481107P 2007-09-24 2007-09-24
US60/974,811 2007-09-24

Publications (2)

Publication Number Publication Date
WO2009042697A2 true WO2009042697A2 (fr) 2009-04-02
WO2009042697A3 WO2009042697A3 (fr) 2009-05-28

Family

ID=40512102

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/077541 WO2009042697A2 (fr) 2007-09-24 2008-09-24 Identification de diffusion audio basée sur téléphone

Country Status (1)

Country Link
WO (1) WO2009042697A2 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6574594B2 (en) * 2000-11-03 2003-06-03 International Business Machines Corporation System for monitoring broadcast audio content
US20070143777A1 * 2004-02-19 2007-06-21 Landmark Digital Services Llc Method and apparatus for identification of broadcast source
US20060184960A1 (en) * 2005-02-14 2006-08-17 Universal Music Group, Inc. Method and system for enabling commerce from broadcast content
US20070011699A1 (en) * 2005-07-08 2007-01-11 Toni Kopra Providing identification of broadcast transmission pieces

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2285026A1 (fr) * 2009-08-12 2011-02-16 BRITISH TELECOMMUNICATIONS public limited company Système de communication
WO2011018599A1 (fr) * 2009-08-12 2011-02-17 British Telecommunications Plc Système de communication
CN102474366A (zh) * 2009-08-12 2012-05-23 英国电讯有限公司 通信系统
EP2375600A3 (fr) * 2010-04-09 2012-08-01 Sony Ericsson Mobile Communications AB Procédé et appareil pour le réglage d'un canal de programme en fonction d'un échantillon de son dans un terminal de communication mobile
WO2012022831A1 (fr) * 2010-08-18 2012-02-23 Nokia Corporation Procédé et appareil d'identification et de mappage de contenu
JP2014519253A (ja) * 2011-05-10 2014-08-07 スマート ハブ ピーティーイー リミテッド 放送番組コンテンツを認識するシステム及び方法
RU2585250C2 (ru) * 2011-05-10 2016-05-27 Эйнновейшнз Холдингз Пте. Лтд. Система и способ распознавания содержимого широковещательной программы
CN103718482A (zh) * 2011-05-10 2014-04-09 斯玛特哈伯私人有限公司 用于识别广播节目内容的系统与方法
WO2012154125A1 (fr) * 2011-05-10 2012-11-15 Smart Hub Pte. Ltd. Système et procédé de reconnaissance de contenu de programme de radiodiffusion
KR20140033397A (ko) * 2011-05-10 2014-03-18 스마트 허브 피티이. 리미티드 방송프로그램 콘텐츠를 인식하기 위한 시스템 및 방법
AU2012254217B2 (en) * 2011-05-10 2014-11-27 Einnovations Holdings Pte. Ltd. System and method for recognizing broadcast program content
KR101602175B1 (ko) * 2011-05-10 2016-03-10 이이노베이션즈 홀딩즈 피티이 리미티드 방송프로그램 콘텐츠를 인식하기 위한 시스템 및 방법
WO2014164728A1 (fr) * 2013-03-12 2014-10-09 Shazam Investments Ltd. Procédés et systèmes pour identifier des informations d'une station de diffusion et des informations de contenu diffusé
US9451048B2 (en) 2013-03-12 2016-09-20 Shazam Investments Ltd. Methods and systems for identifying information of a broadcast station and information of broadcasted content
EP3082280A1 (fr) * 2015-04-15 2016-10-19 Xiaomi Inc. Procédé et appareil permettant d'identifier des informations audio
RU2634696C2 (ru) * 2015-04-15 2017-11-03 Сяоми Инк. Способ и устройство для идентификации аудиоинформации
US9904506B1 (en) 2016-11-15 2018-02-27 Spotify Ab Methods, portable electronic devices, computer servers and computer programs for identifying an audio source that is outputting audio
EP3321827A1 (fr) * 2016-11-15 2018-05-16 Spotify AB Procédés, dispositif électronique portable, serveurs informatiques et programmes informatiques d'identifier une source audio active
EP3495965A1 (fr) * 2016-11-15 2019-06-12 Spotify AB Procédés, dispositif électronique portable, serveurs informatiques et programmes informatiques d'identifier une source audio active

Also Published As

Publication number Publication date
WO2009042697A3 (fr) 2009-05-28

Similar Documents

Publication Publication Date Title
US20080051029A1 (en) Phone-based broadcast audio identification
US20080049704A1 (en) Phone-based broadcast audio identification
US20080066098A1 (en) Phone-based targeted advertisement delivery
US8571501B2 (en) Cellular handheld device with FM Radio Data System receiver
WO2009042697A2 (fr) Identification de diffusion audio basée sur téléphone
US9563699B1 (en) System and method for matching a query against a broadcast stream
US9106803B2 (en) Broadcast media information capture and communication via a wireless network
US20050176366A1 (en) Methods and system for retrieving music information from wireless telecommunication devices
CN1753502B (zh) 提供广告音乐的系统和方法
US8180277B2 (en) Smartphone for interactive radio
US20070281606A1 (en) Systems and methods for acquiring songs or products associated with radio broadcasts
US20090106397A1 (en) Method and apparatus for interactive content distribution
EP1982273A2 (fr) Système et procédé permettant de fournir des informations de contenu de diffusion commerciale à des abonnés mobiles
CN101669310B (zh) 使用便携通信设备的节目标识
JP5907632B2 (ja) 放送番組コンテンツを認識するシステム及び方法
WO2001043364A1 (fr) Systeme interactif conçu pour etre utilise avec la presse electronique et procede correspondant
WO2006003507A1 (fr) Procedes et appareils d'emission et reception de donnees numeriques dans un signal analogique
CA2836213A1 (fr) Procede et dispositif de traitement et de stockage d'une multiplicite de signaux radio
WO2006033835A2 (fr) Procede d'identification de contenu de media en meme temps que la diffusion
US20050102703A1 (en) On demand broadcast information distribution system and method
CN1448023A (zh) 访问信息的方法
WO2001089118A1 (fr) Amelioration de signaux de radiodiffusion par detection automatique de signal et prestation de services via des reseaux et des dispositifs de transmission de donnees
EP2398172A1 (fr) Appareil et procédé d'identification de contenús multimédias

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08833641

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08833641

Country of ref document: EP

Kind code of ref document: A2