EP2600545A1 - Research data measurement system and method - Google Patents


Info

Publication number
EP2600545A1
Authority
EP
European Patent Office
Prior art keywords
data
audio data
recorded
audio
mobile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP11191708.4A
Other languages
German (de)
French (fr)
Inventor
Ian Noctor
Paul O'leary
Current Assignee
Waterford Institute of Technology
Original Assignee
Waterford Institute of Technology
Priority date
Filing date
Publication date
Application filed by Waterford Institute of Technology filed Critical Waterford Institute of Technology
Priority to EP11191708.4A priority Critical patent/EP2600545A1/en
Publication of EP2600545A1 publication Critical patent/EP2600545A1/en
Withdrawn legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 60/00 Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H 60/29 Arrangements for monitoring broadcast services or broadcast-related services
    • H04H 60/31 Arrangements for monitoring the use made of the broadcast services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 60/00 Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H 60/35 Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
    • H04H 60/37 Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying segments of broadcast information, e.g. scenes or extracting programme ID
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 60/00 Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H 60/56 Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H 60/58 Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 of audio

Definitions

  • the present invention relates to a distributed method and system for measuring exposure to broadcast audio data.
  • Media broadcasters such as radio stations and television studios, have a permanent need for data representative of their audience, in order to develop broadcast program content and scheduling.
  • This input data is, however, fraught with inaccuracy for a variety of reasons. Polled persons may have imperfect or selective recall about broadcasts they have heard or watched, in terms of broadcaster, program, location and/or time of reception, and may not want to provide personal information to help qualify the audience type. Polled persons may also provide false broadcast or personal data for a variety of socio-psychological reasons. Input data may be collected in an insufficient volume to be statistically significant or, simply, may be incorrectly recorded through human error. Such methods of collecting audience input data are manpower-intensive and therefore costly, whereby data collection is habitually conducted at regular but distant time intervals, typically weekly at best, such that the method does not provide sufficient data granularity for meaningful analysis.
  • US 2011208515A1 discloses both a method of, and a system for, gathering broadcast research data in a portable monitoring device of an audience member, wherein time-domain audio data is received in the device, converted to frequency-domain data and processed for reading an ancillary code and extracting a signature therefrom.
  • the above technique requires domain shifting of the audio data between the time domain and the frequency domain, which is computationally expensive.
  • the present invention provides a distributed method and system, wherein ambient broadcast data from radio, television and network receivers is recorded by a plurality of suitably-configured mobile data communication devices, which communicate same to a remote data processing terminal for purposes of audio pattern matching in the time domain, wherein the remote data processing terminal records and stores network broadcast data as a plurality of reference patterns.
  • this system does not require any additional signal processing at the broadcaster, requires minimal signal processing at each mobile data communication device for power efficiency, and limits signal processing to the time domain.
  • a method of measuring exposure to broadcast audio data with a distributed communication system is provided, the system comprising a plurality of mobile data communication devices apt to transmit data to at least one remote data processing terminal, the method comprising the steps of: broadcasting first and second audio data from at least one broadcasting source; at each of the plurality of mobile data communication devices, recording first audio data in the time domain from a substantially adjacent audio source reproducing the broadcast from the at least one broadcasting source, associating location data with the recorded first audio data, and communicating the recorded first audio data and location data to the at least one remote data processing terminal; and, at the at least one remote data processing terminal, recording second audio data in the time domain over a network, matching respective sampling rates of the first and second recorded audio data to obtain respective comparable patterns, and matching the respective patterns to identify the recorded first audio data, by using the recorded second audio data as a reference pattern.
  • either or both of the steps of recording first audio data and recording second audio data are performed at a predetermined time interval and for a predetermined period of time.
  • either or both of the predetermined time interval and the predetermined period of time used for recording the first and second audio data correspond substantially to one another.
  • the method preferably comprises the further step of setting the predetermined time interval according to one or more variables selected from the group comprising time, broadcast schedule, data storage availability and power resources of the mobile data communication device.
  • the step of communicating the recorded first audio data comprises the further step of scheduling the communication according to one or more variables selected from the group comprising time, network availability, bandwidth availability and onboard power resources.
  • the step of recording first audio data comprises at least one further step selected from time-decimating, time-fragmenting and amplitude-reducing the first audio data.
  • the step of associating location data comprises the further step of associating data representative of at least one mobile network base station or assisted GPS ('A-GPS') data.
  • the step of recording second audio data over a network comprises the further step of recording at least one data stream broadcast over a wide area network.
  • the second audio data is recorded in stereo and the step of matching the respective patterns comprises the further step of using the right or left recorded second audio data as the reference pattern.
  • the method comprises the further step of matching portions of the respective patterns to identify the recorded first audio data, in order to mitigate the effect of intermittent erasure, attenuation, muffling and/or interference in the recorded first data.
  • a system for measuring exposure to broadcast audio data with a distributed communication system comprising at least one broadcasting source broadcasting first audio data over the airwaves and second audio data on a wide area network, at least one data processing terminal connected to the wide area network, and a plurality of mobile data communication devices connected to a mobile communication network and apt to transmit data to the at least one remote data processing terminal.
  • Each of the plurality of mobile data communication devices is configured to record the first audio data in the time domain from a substantially adjacent audio source reproducing the broadcast from the at least one broadcasting source, associate location data with the recorded first audio data, and communicate the recorded first audio data and location data to the at least one remote data processing terminal.
  • the at least one data processing terminal is configured to record the second audio data in the time domain over the wide area network, match respective sampling rates of the first and second recorded audio data to obtain respective comparable patterns, and match the respective patterns to identify the recorded first audio data, by using the recorded second audio data as a reference pattern.
  • either or both of the plurality of mobile data communication devices and the at least one data processing terminal is further configured to record audio data at a predetermined time interval and for a predetermined period of time.
  • the predetermined time interval is set according to one or more variables selected from the group comprising time, broadcast schedule, data storage availability and power resources of the mobile data communication device.
  • each of the plurality of mobile data communication devices is further configured to communicate the recorded first audio data according to one or more variables selected from the group comprising time, network availability, bandwidth availability and onboard power resources.
  • each of the plurality of mobile data communication devices is further configured to time-decimate, time-fragment and/or amplitude-reduce the first audio data.
  • the location data is data representative of at least one mobile network base station or assisted GPS ('A-GPS') data.
  • the recorded second audio data is recorded in stereo and the reference pattern is the right or left recorded second audio data.
  • the at least one data processing terminal is configured to match one or more portions of the respective patterns to identify the recorded first audio data, in order to mitigate the effect of intermittent erasure, attenuation, muffling and/or interference in the recorded first audio data.
  • any of the plurality of mobile data communication devices may be selected from the group comprising mobile telephone handsets, tablet computers, portable computers, personal digital assistants, portable media players and portable game consoles.
  • a set of instructions recorded on a data carrying medium which, when processed by a mobile data communication device connected to a network, configures the device to perform the steps of recording audio data in the time domain, from a substantially adjacent audio source reproducing a broadcast from a remote broadcasting source, associating location data with the recorded audio data, and communicating the recorded audio data and location data to at least one remote data processing terminal over the network.
  • the set of instructions may be advantageously embodied as an application package file ('APK') for use with the Android™ operating system or as an iPhone™ application archive ('IPA') for use with the iOS™ operating system.
  • a set of instructions recorded on a data carrying medium which, when processed by a data processing terminal connected to a network, configures the terminal to perform the steps of recording audio data in the time domain from a broadcast data source streamed over the network, matching the sampling rate of the recorded audio data received from at least one mobile data communication device, configured by the first-described set of instructions above, with the sampling rate of the recorded network audio data to obtain respective comparable patterns, and matching the respective patterns to identify the broadcast recorded by the at least one mobile data communication device, by using the recorded network audio data as a reference pattern.
  • a server monitors and records all radio output on all wavelengths and digital streams within a designated geographical area.
  • Respective smartphones of members of the public are configured with recording software, which monitors the ambient radio output in the environment of each smartphone.
  • the recording software turns on periodically for a short listening period, for instance 2 seconds every 15 minutes.
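As a minimal sketch of this periodic listening schedule, the capture windows over a monitoring session could be computed as follows; the function name and its arguments are illustrative assumptions, with the 2-second window and 15-minute interval taken from the figures above:

```python
def capture_windows(start, end, interval=900, window=2):
    """Return (record_start, record_end) pairs between start and end,
    capturing one short listening window every `interval` seconds
    (e.g. 2 seconds every 15 minutes); all times are in seconds."""
    windows = []
    t = start
    while t + window <= end:
        windows.append((t, t + window))
        t += interval
    return windows
```

Over a one-hour session this yields four 2-second windows, one every quarter of an hour.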
  • the observed radio content is recorded by the phone.
  • the phones' recordings are periodically uploaded to a central site via an internet connection or MMS.
  • the uploaded content is then compared to the content recorded by the server until a match is found to the content of a radio station's output.
  • the times of the radio output and the device's recordings are compared for verification.
  • the verified matches are analyzed by location, time, and personal data of the phone's owner, e.g. age, gender, occupation, marital status and so on. This process is repeated for each of the smartphones carrying the software in the geographical area.
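A minimal sketch of this analysis step, assuming verified matches are held as simple records; the field names and sample values here are purely illustrative, not part of the described system:

```python
from collections import Counter

def audience_breakdown(matches, field):
    """Tally verified matches by a single attribute, e.g. age band,
    gender or location, to build an audience profile per station."""
    return Counter(m[field] for m in matches)

# Hypothetical verified matches for one station:
matches = [
    {"station": "FM-A", "age": "18-24", "location": "Waterford"},
    {"station": "FM-A", "age": "25-34", "location": "Waterford"},
    {"station": "FM-A", "age": "18-24", "location": "Dublin"},
]
```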
  • the analyzed data is then digitally distributed to the end user, typically a broadcaster or broadcast advertiser.
  • a computer program comprising program instructions for causing a computer to carry out the above method may be embodied on a record medium, carrier signal or read-only memory.
  • in Figure 1, an example embodiment of a system according to the invention is shown within a networked environment.
  • the networked environment firstly includes a plurality of broadcasters 101, 102, wherein each broadcaster generates respective first and second audio data 101A, 101B and 102A, 102B.
  • the audio data content is respective to the broadcaster 101, 102, wherein the first broadcaster 101 may broadcast essentially music programs 101A, 101B and the second broadcaster 102 may broadcast essentially information programs 102A, 102B, the first audio data 101A, 102A being identical in content to the second audio data 101B, 102B in each case.
  • the first audio data 101A, 102A is broadcast as a frequency modulation (FM) signal within the very high frequency (VHF) part of the radio spectrum, typically between 87.5 MHz and 108 MHz, at one or more frequencies respective to the broadcaster 101, 102, by a respective emitter 101C, 102C.
  • FM broadcasts 101A, 102A are available over the airwaves to any number of receivers, typically conventional personal radio receivers in dwellings, vehicles, retail spaces and the like, and reproduced through one or more speakers integrated or connected thereto.
  • the second audio data 101B, 102B is broadcast as a digital network data stream over a Wide Area Network 104, in the example the Internet, by a respective data processing terminal 101D, 102D of the broadcaster 101, 102. Any number of data processing devices with wired and/or wireless Wide Area Network connectivity may thus receive the second audio data 101B, 102B and reproduce same through one or more speakers integrated or connected thereto.
  • the networked environment next includes a plurality of mobile data communication devices 105, each located substantially adjacent an FM receiver 111 reproducing either of the FM broadcasts 101A, 102A from its respective broadcasting source 101, 102, wherein the receiver thus acts as an audio source 111.
  • each mobile data communication device 105 is a mobile telephone handset 105 having wireless telecommunication emitting and receiving functionality over a cellular telephone network configured according to the Global System for Mobile Communication ('GSM'), General Packet Radio Service ('GPRS') or International Mobile Telecommunications-2000 (IMT-2000, 'W-CDMA' or '3G') network industry standards, and wherein telecommunication is performed as voice, alphanumeric or audio-video data using the Short Message Service ('SMS') protocol, the Wireless Application Protocol ('WAP'), the Hypertext Transfer Protocol ('HTTP') or the Secure Hypertext Transfer Protocol ('HTTPS').
  • Each mobile telephone handset 105 receives or emits voice, text, audio and/or image data encoded as a digital signal over a wireless data transmission 106, wherein the signal is relayed respectively to or from the handset by the geographically-closest communication link relay 107 of a plurality thereof.
  • the plurality of communication link relays 107 allows digital signals to be routed between each handset 105 and its destination by means of a remote gateway 108 via an MSC or base station 109.
  • Gateway 108 is for instance a communication network switch, which couples digital signal traffic between wireless telecommunication networks, such as the cellular network within which wireless data transmissions 106 take place, and the Wide Area Network 104.
  • the gateway 108 further provides protocol conversion if required, for instance where a handset 105 uses the WAP or HTTPS protocol to communicate data.
  • one or more of the plurality of mobile data communication devices 105 may have wired and/or wireless telecommunication emitting and receiving functionality over, respectively, a wired Local Area Network ('LAN') and/or a wireless local area network ('WLAN') conforming to the 802.11 standard ('Wi-Fi').
  • telecommunication is likewise performed as voice, alphanumeric and/or audio-video data using the Internet Protocol (IP), Voice data over IP ('VoIP') protocol, Hypertext Transfer Protocol ('HTTP') or Secure Hypertext Transfer Protocol ('HTTPS'), the signal being relayed respectively to or from the mobile data communication device 105 by a wired (LAN) or wireless (WLAN) router 109 interfacing the mobile data communication device 105 to the WAN communication network 104.
  • a mobile telephone handset 105 may have wireless telecommunication emitting and receiving functionality over the WLAN in addition to GSM, GPRS, W-CDMA and/or 3G.
  • each mobile data communication device 105 further comprises means to record and store the first audio data 101A, 102A reproduced by the substantially adjacent audio source.
  • a typical handset 105 for use with the system according to the invention is preferably that commonly referred to as a 'smartphone' and may for instance be an iPhone™ handset manufactured by the Apple Corporation or a Nexus One™ handset manufactured for Google, Inc. by the HTC Corporation.
  • the mobile terminal 105 may be any portable data processing device having at least wireless communication means and audio recording and storage means.
  • one or more of the mobile data communication devices 105 may instead be a portable computer commonly referred to as a 'laptop' or 'netbook', a tablet computer such as an Apple™ iPad™ or a Motorola™ XOOM™, a personal digital assistant such as a Hewlett-Packard™ iPaq™, a portable media player such as an Archos™ 43 Android™ PMP, or even a portable game console such as a Sony™ PlayStation™ Vita™.
  • the networked environment next includes at least one data processing terminal 110 which emits and receives data encoded as a digital signal over a wired data transmission conforming to the IEEE 802.3 ('Gigabit Ethernet') standard, wherein the signal is relayed respectively to or from the computing device by a wired router 109 interfacing the computing device 110 to the WAN communication network 104.
  • the at least one data processing terminal 110 may be any portable or desktop data processing device having at least networking means apt to establish a data communication with both the broadcasting data processing terminals 101E, 102E and the plurality of mobile data communication devices 105.
  • the data processing terminal 110 comprises means to record and store the second audio data 101B, 102B received from the data processing terminals 101E, 102E as digital network data streams 101D, 102D over the Wide Area Network 104.
  • the handset 105 firstly includes a data processing unit 201, for instance a general-purpose microprocessor ('CPU'), acting as the main controller of the handset 105 and which is coupled with memory means 202, comprising non-volatile random-access memory ('NVRAM').
  • the mobile telephone handset 105 further includes a modem 203 to implement the wireless communication functionality, as the modem provides the hardware interface to external communication systems, such as the GSM or GPRS cellular telephone network 107, 108, 109 shown in Figure 1 .
  • An aerial 204 coupled with the modem 203 facilitates the reception of wireless signals from nearby communication link relays 107 and, for some handsets 105, from nearby FM signal emitters 101C, 102C.
  • the modem 203 is interfaced with or includes an analogue-to-digital converter 205 ('ADC') for demodulating incoming wireless signals, for instance the first audio data 101A, 102A received via the aerial 204, into digital data, and reciprocally for outgoing data.
  • the handset 105 further includes self-locating means in the form of a GPS receiver 206, wherein the ADC 205 receives analogue positional and time data from orbiting satellites (not shown), which the data processing unit 201 or a dedicated data processing unit processes into digital positional and time data.
  • the handset 105 further includes a sound transducer 207, for converting ambient sound waves, such as the user's voice and first audio data 101A, 102A, into an analogue signal, which the ADC 205 receives for the data processing unit 201 or a dedicated data processing unit to process into digital first audio data.
  • the handset 105 may optionally further include imaging means 208 in the form of an electronic image sensor, for capturing image data which the data processing unit 201 or a dedicated data processing unit processes into digital image data.
  • the CPU 201, NVRAM 202, modem 203, GPS receiver 206, microphone 207 and optional digital camera 208 are connected by a data input/output bus 209, over which they communicate and to which further components of the handset 105 are similarly connected, in order to provide wireless communication functionality and receive user interrupts, inputs and configuration data.
  • Alphanumerical and/or image data processed by CPU 201 is output to a video display unit 210 ('VDU'), from which user interrupts may also be received if it is a touch screen display. Further user interrupts may also be received from a keypad 211 of the handset, or from an external human interface device ('HiD') connected to the handset via a Universal Serial Bus ('USB') interface 212.
  • the USB interface advantageously also allows the CPU 201 to read data from and/or write data to an external or removable storage device. Audio data processed by CPU 201 is output to a speaker unit 213.
  • Power is provided to the handset 105 by an internal battery module 214, which an electrical converter 215 charges from a mains power supply as and when required.
  • the data processing device 110 is a computer configured with a data processing unit 301, data outputting means such as video display unit (VDU) 302, data inputting means such as HiD devices, commonly a keyboard 303 and a pointing device (mouse) 304, as well as the VDU 302 itself if it is a touch screen display, and data inputting/outputting means such as the wired network connection 305 to the communication network 104 via the router 109, a magnetic data-carrying medium reader/writer 306 and an optical data-carrying medium reader/writer 307.
  • a central processing unit (CPU) 308 provides task co-ordination and data processing functionality. Sets of instructions and data for the CPU 308 are stored in memory means 309 and a hard disk storage unit 310 facilitates non-volatile storage of the instructions and the data.
  • a network interface card (NIC) 311 provides the interface to the network connection 305.
  • a universal serial bus (USB) input/output interface 312 facilitates connection to the keyboard and pointing devices 303, 304.
  • All of the above components are connected to a data input/output bus 313, to which the magnetic data-carrying medium reader/writer 306 and optical data-carrying medium reader/writer 307 are also connected.
  • a video adapter 314 receives CPU instructions over the bus 313 for outputting processed data to VDU 302. All the components of data processing unit 301 are powered by a power supply unit 315, which receives electrical power from a local mains power source and transforms same according to component ratings and requirements.
  • Figure 4 details the data processing steps of an embodiment of the method, performed in the environment of Figure 1 with the data processing devices 105, 110.
  • transmissions 101A, 102A of various radio stations 101, 102 are recorded on multiple mobile handsets 105 as respective audio samples.
  • the quality of the sample on each handset 105 is subject to amplitude variations that may be caused by the distance between the mobile handset 105 and the source 111, or the broadcast reproduction volume thereof, and to any significant additive interference.
  • the mobile handset 105 may be too distant from a source 111, making the reproduced broadcast 101A, 102A indistinguishable from ambient sounds, or the mobile handset 105 may be located immediately adjacent a source 111 or be configured as the source 111 itself, providing a particularly clear sample of the broadcast 101A, 102A.
  • the recording may occur by command from the user, automatically by command from the set of instructions loaded in the memory 202 of the handset 105, automatically by remote command from the set of instructions loaded in the memory 309 of the remote terminal 110, on an ad hoc basis and/or at predetermined time intervals, for instance every 15 minutes.
  • a specific broadcast frequency may also be included amongst the recording parameters.
  • the length of recording is a predetermined period of time and need not be extensive, and may amount to as little as two tenths of a second (0.2 s) or less.
  • each mobile handset 105 subjects the recorded sample to a filtering algorithm, for instance a time decimation, time fragmentation or amplitude reduction, in order to nullify personal or incidental audio information generated in parallel to the reproduced broadcast within the vicinity of the mobile handset transducer 207.
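The three filtering operations named above can be sketched in Python as follows; this is a hedged illustration operating on plain lists of samples, not the handset's actual implementation, and the parameter values are assumptions:

```python
def time_decimate(samples, factor):
    """Keep every `factor`-th sample, discarding the rest."""
    return samples[::factor]

def time_fragment(samples, keep, skip):
    """Keep `keep` samples, drop `skip`, repeating across the recording."""
    out = []
    i = 0
    while i < len(samples):
        out.extend(samples[i:i + keep])
        i += keep + skip
    return out

def amplitude_reduce(samples, levels=16):
    """Coarsely re-quantise amplitudes, discarding fine detail."""
    return [round(s * levels) / levels for s in samples]
```

Each operation discards information that could carry intelligible personal or incidental audio while leaving enough structure for pattern matching.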
  • each mobile handset 105 obtains data representative of the geographical location of the handset 105 at the time of the recording of step 401.
  • GPS data or, alternatively, data representative of the geographically-nearest MSC or base station(s) 109 or, alternatively still, assisted GPS data is obtained, respectively from the GPS means 206, the network or the relevant assistance server.
  • the step 403 is processed substantially as soon as the recording step 401 is completed but, for handsets 105 comprising multi-core processors 201, the step 403 may usefully be processed in parallel to steps 401 and/or 402. Steps 402 and 403 are processed for each iteration of step 401, thus for each recording.
  • each mobile handset 105 communicates the sample or samples output by step 402 and stored, along with the respective location data of step 403, to the remote data processing terminal 110, via the gateway 108.
  • the communication may occur by command from the user, automatically by command from the set of instructions loaded in the memory 202 of the handset 105, automatically by remote command from the set of instructions loaded in the memory 309 of the remote terminal 110, on an ad hoc basis and/or at scheduled dates and times.
  • the set of instructions loaded in the memory 202 of the handset 105 may inhibit the communication of step 404 when power in the battery 214 depletes below a predetermined threshold, until such time as the battery is charging or has been replenished.
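This power-saving rule can be sketched as a simple predicate; the threshold value and function name are illustrative assumptions:

```python
def may_upload(battery_level, charging, threshold=0.2):
    """Permit the step-404 upload only while the battery is above the
    predetermined threshold or the handset is on charge; defer otherwise."""
    return charging or battery_level >= threshold
```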
  • the remote data processing terminal 110 records network transmissions 101 B, 102B of various radio stations 101, 102 as respective stereo audio samples, at step 405.
  • the quality of each sample is consistently high, since the network broadcasts are substantially unaffected by amplitude variations or additive interference and, as such, these network recordings are used as reference samples.
  • the remote data processing terminal 110 receives a remote recording from a mobile handset 105 over the Wide Area Network 104 pursuant to step 404 and, at step 407, the remote data processing terminal 110 subjects the remote handset recording and the local recorded network broadcast to an interpolation and decimation filter in order to match the respective sampling rates of the remote recording and the network recording and thereby obtain comparable audio patterns.
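The sampling rate matching of step 407 might, for illustration, be approximated with a simple linear-interpolation resampler (a stand-in for the interpolation and decimation filter; a production system would use a proper polyphase filter):

```python
def resample_linear(samples, src_rate, dst_rate):
    # Rate-match a recording so that handset and network recordings
    # yield patterns of comparable sampling rate.
    if not samples:
        return []
    n_out = max(1, round(len(samples) * dst_rate / src_rate))
    out = []
    for j in range(n_out):
        # Position of output sample j in the input timeline.
        pos = j * (len(samples) - 1) / max(1, n_out - 1)
        i = int(pos)
        frac = pos - i
        nxt = samples[min(i + 1, len(samples) - 1)]
        out.append(samples[i] * (1 - frac) + nxt * frac)
    return out
```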
  • the remote data processing terminal 110 matches the respective patterns output by step 407 for identifying the broadcast recorded by the smartphone 105, using the right or left recorded network broadcast as the reference pattern.
  • the step 408 may include a further sub-filtering operation to accommodate a partial match of the remote handset recording with the reference pattern, for example to reduce the effect of intermittent recording, muffling and interference.
  • Figure 5 is a logical diagram of the contents of the memory means 202 of each mobile data communication device 105, when performing steps 401 to 404 at runtime.
  • An operating system is shown at 501 which, depending on the handset manufacturer, may be iOS 5™ developed and distributed by Apple Inc. or Android™ developed and distributed by Google Inc.
  • An application is shown at 502, which configures the mobile handset 105 to perform at least processing steps 401 to 404 as described hereinbefore, and which is interfaced with the OS 501 via one or more suitable Application Programmer Interfaces.
  • the application is either an application package file ('APK') for use with the Android™ operating system 501 or an iPhone™ application archive ('IPA') for use with the iOS™ operating system 501, and readily installed on the mobile handset 105 via, respectively, Android Market™ or the AppStore™.
  • Application data is shown at 503, which comprises local and network data.
  • Local data comprises broadcast audio data 101A or 102A recorded via the transducer 207 at step 401 in a buffer 504, filtered audio data 505 output at step 402 and location data 506 obtained at step 403.
  • Network data 507 comprises packetized filtered audio data 505 and location data 506 being sent to the remote data processing terminal 110 and, optionally, remote command data 508 received from the remote data processing terminal 110 for configuring the application 502.
  • the memory 202 may further comprise local and/or network data that is unrelated to application 502, respectively shown at 509 and 510, for instance used by or generated for another application being processed in parallel with application 502.
  • Figure 6 is a logical diagram of the contents of the memory means 309 of the data processing terminal 110, when performing steps 405 to 408 at runtime.
  • An operating system is shown at 601 which, if the terminal 110 is a desktop computer, is for instance Windows 7™ distributed by the Microsoft Corporation.
  • the OS 601 includes communication subroutines 602 to configure the terminal for bilateral network communication via the NIC 311.
  • An application is shown at 603, which configures the terminal 110 to perform at least processing steps 405 to 408 as described hereinbefore, and which is interfaced with the OS 601 and network communication subroutines 602 thereof via one or more suitable Application Programmer Interfaces.
  • the application 603 is therefore apt to buffer the incoming network broadcast streams 101B, 102B in RAM 309 and store same in HDD 310 pursuant to step 405.
  • Application data is shown at 604, which comprises local and network data.
  • Local data comprises network audio data streams 101B and 102B received via the NIC 311 and subroutines 602 in a buffer 605, sampling rate-matched pattern data 606 filtered according to step 407, pattern-matched samples 607 according to step 408, and a database 608 storing analysis data associated with the matching output of step 408, including location data 506.
  • Network data 609 comprises packetized filtered audio data 505 and location data 506 received from remote mobile handsets 105 at step 406 and, optionally, remote command data 508 sent to remote mobile handsets 105 for configuring their respective instantiation of application 502.
  • the memory 309 may further comprise local and/or network data that is unrelated to application 603, respectively shown at 610 and 611, for instance used by or generated for another application being processed in parallel with application 603.
  • Figure 7 provides an example of the contents of the database 608 stored by the terminal 110 and processed by the application 603.
  • the database 608 is relational and comprises a plurality of data structures, in the example data tables 701, wherein data is organized logically.
  • a first table 701 may store information about network audio data recorded by the terminal 110, consisting of a plurality of individual records.
  • Each record comprises a unique identifier 702 following a format including broadcast frequency, date and time; a source network address 703 for the audio data stream, in the example the address of broadcaster server 101D in the WAN 104; the broadcast frequency 704 corresponding to the broadcaster and over which the first audio data is broadcast; the recording side 705 of the stereo sample; a timestamp 706 for the recording; and the actual network recording 707.
  • a second table 701 may store information about broadcast audio data recorded by mobile handsets 105 and received by the terminal 110, consisting again of a plurality of individual records.
  • Each record comprises a unique identifier 708 following a format including handset identifier, date and time; a sample quality value 709 indicative of the amount of amplitude variations in the recording; a unique handset identifier 710 uniquely identifying the communicating handset in the system to the terminal 110; the location data 506 extracted from the communication; a handset recording timestamp 711; and the actual broadcast recording 712.
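The two data tables 701 of Figure 7 could, for illustration, be rendered as the following SQLite schema (table and column names are hypothetical and merely follow the reference numerals of the description):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE network_recordings (       -- first table 701
    identifier    TEXT PRIMARY KEY,     -- 702: frequency + date + time
    source_addr   TEXT,                 -- 703: broadcaster server address
    frequency_mhz REAL,                 -- 704: broadcast frequency
    channel_side  TEXT,                 -- 705: 'L' or 'R' of the stereo sample
    recorded_at   TEXT,                 -- 706: recording timestamp
    audio         BLOB                  -- 707: the actual network recording
);
CREATE TABLE handset_recordings (       -- second table 701
    identifier    TEXT PRIMARY KEY,     -- 708: handset id + date + time
    quality       REAL,                 -- 709: amplitude-variation indicator
    handset_id    TEXT,                 -- 710: unique handset identifier
    location      TEXT,                 -- 506: location data
    recorded_at   TEXT,                 -- 711: handset recording timestamp
    audio         BLOB                  -- 712: the actual broadcast recording
);
""")
```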
  • the present invention thus provides a method of matching audio patterns in time domain, using a time-based correlation, with a high sensitivity accommodating low signal-to-noise ratios or high distortion values.
  • Samples are time-fragmented to allow for variable levels of quality in the recorded samples, and pattern matching is assisted through A-GPS and/or base station identification.
  • the network second audio data is used as two reference templates (the left and right channels) against time-stamped smartphone samples, which are compensated for the time difference, since the FM broadcast has a variable delay compared with the network stream.
  • the method is particularly flexible as it provides time-selected recording, whilst preserving privacy since first audio data is recorded in a fragmented or distorted manner, and optionally further scrambled.
  • the method and system are adaptable to other broadcast formats, and can accommodate televised broadcasts with minimal changes.
  • the internet stream (second audio data, which is used as a reference pattern) can be delayed with respect to the wireless broadcast (recorded as the first audio data).
  • the delay is partly due to the queuing of radio stream packets at intervening routers as the stream traverses the Internet to the dedicated servers. As router queue lengths fluctuate considerably in an unpredictable fashion, the delay between the first and second audio data will also vary in an unpredictable manner.
  • the system and method of the invention provides a solution by making the second audio data record longer than the first audio data record, by a duration at least equal to the longest expected time difference between them.
  • the pattern recognition algorithm must perform multiple pattern matches for each first audio data record against each second audio data record (for example, if the second audio data record needs to be longer than the first audio data record by 100 samples, to allow for the delay, then the algorithm could require up to 100 pattern matches per reference second audio data).
  • the first audio data slides along the longer second audio data record. This means each first audio record generates multiple pattern match values, most of which will show low correlation.
  • the sliding calculation involves taking a series of second audio segments (each of the same length as the first audio record) from the beginning of the record, performing the calculation, and then sliding along one sample at a time until a match is made or it is impossible to slide any further along the second audio record.
  • a pattern match (if present) will be indicated by a high correlation when the sliding has eliminated the difference in time between the first and second audio segments.
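The sliding calculation of the preceding points can be sketched as a normalised time-domain correlation (the 0.9 match threshold is an assumption; the description does not state one):

```python
def correlation(a, b):
    # Normalised correlation of two equal-length segments, in [-1, 1].
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_match_offset(first, second, threshold=0.9):
    # Slide the short first-audio record along the longer second-audio
    # record one sample at a time; return the offset at which the
    # correlation first exceeds the threshold, or None if no match.
    for offset in range(len(second) - len(first) + 1):
        segment = second[offset:offset + len(first)]
        if correlation(first, segment) >= threshold:
            return offset
    return None
```

When a match exists, the returned offset is the number of samples by which the network stream lagged the over-the-air broadcast.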
  • the embodiments of the invention described with reference to the drawings comprise a computer apparatus and/or processes performed in a computer apparatus.
  • the invention also extends to computer programs, particularly computer programs stored on or in a carrier adapted to bring the invention into practice.
  • the program may be in the form of source code, object code, or a code intermediate source and object code, such as in partially compiled form or in any other form suitable for use in the implementation of the method according to the invention.
  • the carrier may comprise a storage medium such as ROM, e.g. CD ROM, or magnetic recording medium, e.g. a floppy disk or hard disk.
  • the carrier may be an electrical or optical signal which may be transmitted via an electrical or an optical cable or by radio or other means.

Abstract

A method of measuring exposure to broadcast audio data with a distributed communication system, and a distributed system performing the method, are disclosed, which rely upon a plurality of mobile data communication devices apt to transmit data to at least one remote data processing terminal. At least one broadcasting source broadcasts first and second audio data. Each mobile device records first audio data in the time domain, from a substantially adjacent audio source reproducing the broadcast from the at least one broadcasting source, associates location data with the recorded first audio data, and communicates the recorded first audio data and location data to the at least one terminal. The terminal records second audio data in the time domain over a network, matches respective sampling rates of the first and second recorded audio data to obtain respective comparable patterns, and matches the respective patterns to identify the recorded first audio data, by using the recorded second audio data as a reference pattern.

Description

    Field of the Invention
  • The present invention relates to a distributed method and system for measuring exposure to broadcast audio data.
  • Background of the Invention
  • Media broadcasters, such as radio stations and television studios, have a permanent need for data representative of their audience, in order to develop broadcast program content and scheduling.
  • Many systems are known for obtaining audience data and processing statistical data therefrom. The simplest systems are based on an aided recall method, wherein one or more pollsters query members of the public about their media listening or watching habits and replies are recorded in a relevant format, forming the statistical input data.
  • This input data is however fraught with inaccuracy, for a variety of reasons. Polled persons may have imperfect or selective recall about broadcasts they have heard or watched, in terms of broadcaster, program, location and/or time of reception, and may not want to provide personal information to help qualify the audience type. Polled persons may also provide false broadcast or personal data for a variety of socio-psychological reasons. Input data may be collected in an insufficient volume to be statistically significant or, simply, may be incorrectly recorded through human error. Such methods of collecting audience input data are manpower-intensive and therefore costly, whereby data collection is habitually conducted at regular but distant time intervals, typically weekly at best, such that the method does not provide sufficient data granularity for meaningful analysis.
  • Recent developments both in communication networks and mobile data processing technology have allowed improved methods and systems for obtaining audience data and processing statistical data therefrom. In particular, US 2011208515A1 discloses both a method of, and a system for, gathering broadcast research data in a portable monitoring device of an audience member, wherein time-domain audio data is received in the device, converted to frequency-domain data and processed for reading an ancillary code and extracting a signature therefrom.
  • Although the above technique certainly improves upon the substantially human-based data collection methods of the prior art, it nevertheless features a distinct disadvantage, in that it requires each broadcaster to perform additional signal processing before each program is broadcast, as the ancillary code must be encoded in the broadcast data, and the signature must be generated from the broadcast data, before broadcasting.
  • Moreover, the above technique requires domain shifting of the audio data between the time domain and the frequency domain, which is computationally expensive.
  • Furthermore, when listening habits must be audited across a wide variety of programs and/or broadcasters, those broadcasters not performing the above technique are excluded from the corpus of input statistical data, since ancillary codes and signatures cannot be obtained for their programs.
  • An improved method of measuring exposure to broadcast audio data is therefore required, and a system embodying same, which mitigates at least the above shortcomings of the prior art.
  • Summary of the Invention
  • The present invention, as set out in the appended claims, provides a distributed method and system, wherein ambient broadcast data from radio, television and network receivers is recorded by a plurality of suitably-configured mobile data communication devices, which communicate same to a remote data processing terminal for purposes of audio pattern matching in the time domain, wherein the remote data processing terminal records and stores network broadcast data as a plurality of reference patterns.
  • Advantageously, this system does not require any additional signal processing at the broadcaster, requires minimal signal processing at each mobile data communication device for power efficiency, and limits signal processing to the time domain.
  • According to an aspect of the present invention, there is therefore provided a method of measuring exposure to broadcast audio data with a distributed communication system, comprising a plurality of mobile data communication devices apt to transmit data to at least one remote data processing terminal, the method comprising the steps of broadcasting first and second audio data from at least one broadcasting source; then, at each of the plurality of mobile data communication devices, recording first audio data in the time domain, from a substantially adjacent audio source reproducing the broadcast from the at least one broadcasting source, associating location data with the recorded first audio data, and communicating the recorded first audio data and location data to the at least one remote data processing terminal; and, at the at least one remote data processing terminal, recording second audio data in the time domain over a network, matching respective sampling rates of the first and second recorded audio data to obtain respective comparable patterns, and matching the respective patterns to identify the recorded first audio data, by using the recorded second audio data as a reference pattern.
  • In an embodiment of the method according to the invention, either or both of the steps of recording first audio data and recording second audio data are performed at a predetermined time interval and for a predetermined period of time. In a variant of this embodiment, either or both of the predetermined time interval and the predetermined period of time correspond substantially to one another.
  • In an embodiment of the method according to the invention, the method preferably comprises the further step of setting the predetermined time interval according to one or more variables selected from the group comprising time, broadcast schedule, data storage availability and power resources of the mobile data communication device.
  • In an embodiment of the method according to the invention, the step of communicating the recorded first audio data comprises the further step of scheduling the communication according to one or more variables selected from the group comprising time, network availability, bandwidth availability and onboard power resources.
  • In an embodiment of the method according to the invention, the step of recording first audio data comprises at least one further step selected from time-decimating, time-fragmenting and amplitude-reducing the first audio data.
  • In an embodiment of the method according to the invention, the step of associating location data comprises the further step of associating data representative of at least one mobile network base station or assisted GPS ('A-GPS') data.
  • In an embodiment of the method according to the invention, the step of recording second audio data over a network comprises the further step of recording at least one data stream broadcast over a wide area network.
  • In an embodiment of the method according to the invention, the second audio data is recorded in stereo and the step of matching the respective patterns comprises the further step of using the right or left recorded second audio data as the reference pattern.
  • In an embodiment of the method according to the invention, the method comprises the further step of matching portions of the respective patterns to identify the recorded first audio data, in order to mitigate the effect of intermittent erasure, attenuation, muffling and/or interference in the recorded first data.
  • According to another aspect of the present invention, there is also provided a system for measuring exposure to broadcast audio data with a distributed communication system, comprising at least one broadcasting source broadcasting first audio data over the airwaves and second audio data on a wide area network, at least one data processing terminal connected to the wide area network, and a plurality of mobile data communication devices connected to a mobile communication network and apt to transmit data to the at least one remote data processing terminal. Each of the plurality of mobile data communication devices is configured to record the first audio data in the time domain from a substantially adjacent audio source reproducing the broadcast from the at least one broadcasting source, associate location data with the recorded first audio data, and communicate the recorded first audio data and location data to the at least one remote data processing terminal. The at least one data processing terminal is configured to record the second audio data in the time domain over the wide area network, match respective sampling rates of the first and second recorded audio data to obtain respective comparable patterns, and match the respective patterns to identify the recorded first audio data, by using the recorded second audio data as a reference pattern.
  • In an embodiment of the system according to the invention, either or both of the plurality of mobile data communication devices and the at least one data processing terminal is further configured to record audio data at a predetermined time interval and for a predetermined period of time. In a variant of this embodiment, the predetermined time interval is set according to one or more variables selected from the group comprising time, broadcast schedule, data storage availability and power resources of the mobile data communication device.
  • In an embodiment of the system according to the invention, each of the plurality of mobile data communication devices is further configured to communicate the recorded first audio data according to one or more variables selected from the group comprising time, network availability, bandwidth availability and onboard power resources.
  • In an embodiment of the system according to the invention, each of the plurality of mobile data communication devices is further configured to time-decimate, time-fragment and/or amplitude-reduce the first audio data.
  • In an embodiment of the system according to the invention, the location data is data representative of at least one mobile network base station or assisted GPS ('A-GPS') data.
  • In an embodiment of the system according to the invention, the recorded second audio data is recorded in stereo and the reference pattern is the right or left recorded second audio data.
  • In an embodiment of the system according to the invention, the at least one data processing terminal is configured to match one or more portions of the respective patterns to identify the recorded first audio data, in order to mitigate the effect of intermittent erasure, attenuation, muffling and/or interference in the recorded first audio data.
  • For any of the above embodiments and further variants, any of the plurality of mobile data communication devices may be selected from the group comprising mobile telephone handsets, tablet computers, portable computers, personal digital assistants, portable media players and portable game consoles.
  • According to yet another aspect of the present invention, there is also provided a set of instructions recorded on a data carrying medium which, when processed by a mobile data communication device connected to a network, configures the device to perform the steps of recording audio data in the time domain, from a substantially adjacent audio source reproducing a broadcast from a remote broadcasting source, associating location data with the recorded audio data, and communicating the recorded audio data and location data to at least one remote data processing terminal over the network.
  • The set of instructions may be advantageously embodied as an application package file ('APK') for use with the Android™ operating system or embodied as an iPhone™ application archive ('IPA') for use with the iOS™ operating system.
  • There is also provided another set of instructions recorded on a data carrying medium which, when processed by a data processing terminal connected to a network, configures the terminal to perform the steps of recording audio data in the time domain, from a broadcast data source streamed over the network, matching the sampling rate of the recorded audio data received from at least one mobile data communication device, configured by the first-described set of instructions above, with the sampling rate of the recorded network audio data for obtaining respective comparable patterns, and matching the respective patterns to identify the broadcast recorded by the at least one mobile data communication device, by using the recorded network audio data as a reference pattern.
  • In an example embodiment of the above system, a server monitors and records all radio output on all wavelengths and digital streams within a designated geographical area. Respective smartphones of members of the public are configured with recording software, which monitors the ambient radio output in the environment of each smartphone. The recording software turns on periodically for a short listening period, for instance 2 seconds every 15 minutes. The observed radio content is recorded by the phone. The phones' recordings are periodically uploaded to a central site via an internet connection or MMS. The uploaded content is then compared to the content recorded by the server until a match is found to the content of a radio station's output. The times of the radio output and the device's recordings are compared for verification. The verified matches are analyzed by location, time and personal data of the phone's owner, e.g. age, gender, occupation, marital status and so on. This process is repeated for the number of smartphones carrying the software in the geographical area. The analyzed data is then digitally distributed to the end user, typically a broadcaster or broadcast advertiser.
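The periodic listening schedule of this example embodiment (2 seconds every 15 minutes) might be sketched as follows (a simplified sleep loop; a real handset application would use operating-system alarms rather than a busy process to conserve power):

```python
import time

LISTEN_SECONDS = 2          # example value from the description
INTERVAL_SECONDS = 15 * 60  # example value from the description

def run_sampler(record_fn, clock=time.monotonic, sleep=time.sleep, cycles=None):
    # Turn the recorder on for a short listening period at a fixed
    # interval; `record_fn` stands in for the step-401 recording routine.
    done = 0
    while cycles is None or done < cycles:
        started = clock()
        record_fn(LISTEN_SECONDS)  # record ambient audio for the listening period
        done += 1
        if cycles is not None and done >= cycles:
            break
        elapsed = clock() - started
        sleep(max(0.0, INTERVAL_SECONDS - elapsed))
```

Injecting `clock` and `sleep` keeps the schedule testable without waiting out real intervals.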
  • There is also provided a computer program comprising program instructions for causing a computer to carry out the above method, which may be embodied on a record medium, carrier signal or read-only memory.
  • Brief Description of the Drawings
  • The invention will be more clearly understood from the following description of an embodiment thereof, given by way of example only, with reference to the accompanying drawings, in which:-
    • Figure 1 shows a network environment comprising a plurality of communication networks, broadcasters, mobile data communication devices and at least one data processing terminal remote from the mobile data communication devices.
    • Figure 2 is a logical diagram of a typical hardware architecture of each mobile data communication device shown in Figure 1, including memory means.
    • Figure 3 is a logical diagram of a typical hardware architecture of the remote data processing terminal shown in Figure 1, including memory means.
    • Figure 4 details the data processing steps of an embodiment of the method.
    • Figure 5 is a logical diagram of the contents of the memory means of each mobile data communication device shown in Figures 1 and 2, when performing the method of Figure 4, including a first set of instructions.
    • Figure 6 is a logical diagram of the contents of the memory means of the data processing terminal shown in Figures 1 and 3, when performing the method of Figure 4, including a second set of instructions and a database.
    • Figure 7 illustrates details of the data stored in the database shown in Figure 6.
    Detailed Description of the Embodiments
  • There will now be described by way of example a specific mode contemplated by the inventors. In the following description numerous specific details are set forth in order to provide a thorough understanding. It will be apparent however, to one skilled in the art, that the present invention may be practiced without limitation to these specific details. In other instances, well known methods and structures have not been described in detail so as not to unnecessarily obscure the description.
  • With reference to Figure 1, an example embodiment of a system according to the invention is shown within a networked environment.
  • The networked environment firstly includes a plurality of broadcasters 101, 102, wherein each broadcaster generates respective first and second audio data 101A, 101B and 102A, 102B. The audio data content is respective to the broadcaster 101, 102, wherein the first broadcaster 101 may broadcast essentially music programs 101A, 101B and the second broadcaster 102 may broadcast essentially information programs 102A, 102B, the first audio data 101A, 102A being identical in content to the second audio data 101B, 102B in each case.
  • In the example, the first audio data 101A, 102A is broadcast as a frequency modulation (FM) signal within the very high frequency (VHF) part of the radio spectrum, typically between 87.5 MHz and 108 MHz, at one or more frequencies respective to the broadcaster 101, 102, by a respective emitter 101C, 102C. Within the same or respective geographical areas 103A, 103B that are respectively delimited by the emitting power of the signal emitters 101C, 102C, FM broadcasts 101A, 102A are available over the airwaves to any number of receivers, typically conventional personal radio receivers in dwellings, vehicles, retail spaces and the like, and reproduced through one or more speakers integrated or connected thereto.
  • The second audio data 101B, 102B is broadcast as a digital network data stream over a Wide Area Network 104, in the example the Internet, by a respective data processing terminal 101D, 102D of the broadcaster 101, 102. Any number of data processing devices with wired and/or wireless Wide Area Network connectivity may thus receive the second audio data 101B, 102B and reproduce same through one or more speakers integrated or connected thereto.
  • The networked environment next includes a plurality of mobile data communication devices 105, each located substantially adjacent an FM receiver 111 reproducing either of the FM broadcasts 101A, 102A from its respective broadcasting source 101, 102, wherein the receiver thus acts as an audio source 111. In the example, each mobile data communication device 105 is a mobile telephone handset 105 having wireless telecommunication emitting and receiving functionality over a cellular telephone network configured according to the Global System for Mobile Communication ('GSM'), General Packet Radio Service ('GPRS') or International Mobile Telecommunications-2000 (IMT-2000, 'W-CDMA' or '3G') network industry standards, and wherein telecommunication is performed as voice, alphanumeric or audio-video data using the Short Message Service ('SMS') protocol, the Wireless Application Protocol ('WAP'), the Hypertext Transfer Protocol ('HTTP') or the Secure Hypertext Transfer Protocol ('HTTPS').
  • Each mobile telephone handset 105 receives or emits voice, text, audio and/or image data encoded as a digital signal over a wireless data transmission 106, wherein the signal is relayed respectively to or from the handset by the geographically-closest communication link relay 107 of a plurality thereof. The plurality of communication link relays 107 allows digital signals to be routed between each handset 105 and their destination by means of a remote gateway 108 via an MSC or base station 109. Gateway 108 is for instance a communication network switch, which couples digital signal traffic between wireless telecommunication networks, such as the cellular network within which wireless data transmissions 106 take place, and the Wide Area Network 104. The gateway 108 further provides protocol conversion if required, for instance where a handset 105 uses the WAP or HTTPS protocol to communicate data.
  • Alternatively, or additionally, one or more of the plurality of mobile data communication devices 105 may have wired and/or wireless telecommunication emitting and receiving functionality over, respectively, a wired Local Area Network ('LAN') and/or a wireless local area network ('WLAN') conforming to the 802.11 standard ('Wi-Fi'). In the LAN or WLAN, telecommunication is likewise performed as voice, alphanumeric and/or audio-video data using the Internet Protocol (IP), Voice data over IP ('VoIP') protocol, Hypertext Transfer Protocol ('HTTP') or Secure Hypertext Transfer Protocol ('HTTPS'), the signal being relayed respectively to or from the mobile data communication device 105 by a wired (LAN) or wireless (WLAN) router 109 interfacing the mobile data communication device 105 to the WAN communication network 104. A mobile telephone handset 105 may have wireless telecommunication emitting and receiving functionality over the WLAN in addition to GSM, GPRS, W-CDMA and/or 3G.
  • As will be described with reference to Figure 2 hereafter, each mobile data communication device 105 further comprises means to record and store the first audio data 101A, 102A reproduced by the substantially adjacent audio source 111.
  • A typical handset 105 for use with the system according to the invention is preferably that commonly referred to as a 'smartphone' and may for instance be an iPhone™ handset manufactured by the Apple Corporation or a Nexus One™ handset manufactured for Google, Inc. by the HTC Corporation. Generally, the mobile terminal 105 may be any portable data processing device having at least wireless communication means and audio recording and storage means. It will therefore be readily understood by the skilled person from the present disclosure that one or more of the mobile data communication devices 105 may instead be a portable computer commonly referred to as a 'laptop' or 'netbook', a tablet computer such as an Apple™ iPad™ or a Motorola™ XOOM™, a personal digital assistant such as a Hewlett-Packard™ iPaq™, a portable media player such as an Archos™ 43 Android™ PMP, or even a portable game console such as a Sony™ Playstation™ Vita™.
  • The networked environment next includes at least one data processing terminal 110 which emits and receives data encoded as a digital signal over a wired data transmission conforming to the IEEE 802.3 ('Gigabit Ethernet') standard, wherein the signal is relayed respectively to or from the computing device by a wired router 109 interfacing the computing device 110 to the WAN communication network 104. Generally, the at least one data processing terminal 110 may be any portable or desktop data processing device having at least networking means apt to establish a data communication with both the broadcasting data processing terminals 101E, 102E and the plurality of mobile data communication devices 105.
  • As will be described with reference to Figure 3 hereafter, the data processing terminal 110 comprises means to record and store the second audio data 101B, 102B received from the data processing terminals 101E, 102E as digital network data streams 101D, 102D over the Wide Area Network 104.
  • A typical hardware architecture of a mobile telephone handset 105 is shown in Figure 2 in further detail, by way of non-limitative example. The handset 105 firstly includes a data processing unit 201, for instance a general-purpose microprocessor ('CPU'), acting as the main controller of the handset 105 and which is coupled with memory means 202, comprising non-volatile random-access memory ('NVRAM').
  • The mobile telephone handset 105 further includes a modem 203 to implement the wireless communication functionality, as the modem provides the hardware interface to external communication systems, such as the GSM or GPRS cellular telephone network 107, 108, 109 shown in Figure 1. An aerial 204 coupled with the modem 203 facilitates the reception of wireless signals from nearby communication link relays 107 and, for some handsets 105, from nearby FM signal emitters 101C, 102C. The modem 203 is interfaced with or includes an analogue-to-digital converter 205 ('ADC') for demodulating received wireless signals, for instance the first audio data 101A, 102A received via the aerial 204, into digital data, and reciprocally for outgoing data.
  • The handset 105 further includes self-locating means in the form of a GPS receiver 206, wherein the ADC 205 receives analogue positional and time data from orbiting satellites (not shown), which the data processing unit 201 or a dedicated data processing unit processes into digital positional and time data.
  • The handset 105 further includes a sound transducer 207, for converting ambient sound waves, such as the user's voice and first audio data 101A, 102A, into an analogue signal, which the ADC 205 receives for the data processing unit 201 or a dedicated data processing unit to process into digital first audio data.
  • The handset 105 may optionally further include imaging means 208 in the form of an electronic image sensor, for capturing image data which the data processing unit 201 or a dedicated data processing unit processes into digital image data.
  • The CPU 201, NVRAM 202, modem 203, GPS receiver 206, microphone 207 and optional digital camera 208 are connected by a data input/output bus 209, over which they communicate and to which further components of the handset 105 are similarly connected, in order to provide wireless communication functionality and receive user interrupts, inputs and configuration data.
  • Alphanumerical and/or image data processed by CPU 201 is output to a video display unit 210 ('VDU'), from which user interrupts may also be received if it is a touch screen display. Further user interrupts may also be received from a keypad 211 of the handset, or from an external human interface device ('HiD') connected to the handset via a Universal Serial Bus ('USB') interface 212. The USB interface advantageously also allows the CPU 201 to read data from and/or write data to an external or removable storage device. Audio data processed by CPU 201 is output to a speaker unit 213.
  • Power is provided to the handset 105 by an internal battery module 214, which an electrical converter 215 charges from a mains power supply as and when required.
  • A typical hardware architecture of the data processing terminal 110 is now shown in Figure 3 in further detail, by way of non-limitative example. The data processing device 110 is a computer configured with a data processing unit 301, data outputting means such as video display unit (VDU) 302, data inputting means such as HiD devices, commonly a keyboard 303 and a pointing device (mouse) 304, as well as the VDU 302 itself if it is a touch screen display, and data inputting/outputting means such as the wired network connection 305 to the communication network 104 via the router 109, a magnetic data-carrying medium reader/writer 306 and an optical data-carrying medium reader/writer 307.
  • Within data processing unit 301, a central processing unit (CPU) 308 provides task co-ordination and data processing functionality. Sets of instructions and data for the CPU 308 are stored in memory means 309 and a hard disk storage unit 310 facilitates non-volatile storage of the instructions and the data. A network interface card (NIC) 311 provides the interface to the wired network connection 305. A universal serial bus (USB) input/output interface 312 facilitates connection to the keyboard and pointing devices 303, 304.
  • All of the above components are connected to a data input/output bus 313, to which the magnetic data-carrying medium reader/writer 306 and optical data-carrying medium reader/writer 307 are also connected. A video adapter 314 receives CPU instructions over the bus 313 for outputting processed data to VDU 302. All the components of data processing unit 301 are powered by a power supply unit 315, which receives electrical power from a local mains power source and transforms same according to component ratings and requirements.
  • Figure 4 details the data processing steps of an embodiment of the method, performed in the environment of Figure 1 with the data processing devices 105, 110.
  • At step 401, transmissions 101A, 102A of various radio stations 101, 102 are recorded on multiple mobile handsets 105 as respective audio samples. The quality of the sample on each handset 105 varies with amplitude variations that may be caused by the distance between the mobile handset 105 and the source 111, or the broadcast reproduction volume thereof, and with any significant additive interference. For instance, the mobile handset 105 may be too distant from a source 111, making the reproduced broadcast 101A, 102A indistinguishable from ambient sounds, or the mobile handset 105 may be located immediately adjacent a source 111 or be configured as the source 111 itself, providing a particularly clear sample of the broadcast 101A, 102A.
  • The recording may occur by command from the user, automatically by command from the set of instructions loaded in the memory 202 of the handset 105, automatically by remote command from the set of instructions loaded in the memory 309 of the remote terminal 110, on an ad hoc basis and/or at predetermined time intervals, for instance every 15 minutes. When the recording is generated automatically, a specific broadcast frequency may also be included amongst the recording parameters. The length of recording is a predetermined period of time and need not be extensive, and may amount to as little as two tenths of a second (0.2 s) or less.
  • At step 402, each mobile handset 105 subjects the recorded sample to a filtering algorithm, for instance a time decimation, time fragmentation or amplitude reduction, in order to nullify personal or incidental audio information generated in parallel with the reproduced broadcast within the vicinity of the mobile handset transducer 207. This step advantageously also reduces the storage requirements within the memory 202 for the recorded audio data.
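The filtering of step 402 is left open by the disclosure; as a minimal sketch, the time decimation and time fragmentation it mentions could be combined as follows (Python; the function name and parameter values are illustrative assumptions, not part of the disclosure):

```python
def fragment_and_decimate(samples, keep_every=4, chunk=200):
    """Privacy/storage filter sketch for step 402.

    Time decimation: keep only every `keep_every`-th amplitude value.
    Time fragmentation: keep alternating chunks and discard the rest,
    so continuous speech picked up near the handset cannot be
    reconstructed, while enough of the broadcast's amplitude envelope
    survives for server-side pattern matching.
    """
    decimated = samples[::keep_every]
    kept = []
    for i in range(0, len(decimated), 2 * chunk):
        kept.extend(decimated[i:i + chunk])
    return kept
```

The output is both shorter (reducing memory 202 usage) and gapped (nullifying incidental speech), consistent with the two advantages stated above.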
  • At step 403, each mobile handset 105 obtains data representative of the geographical location of the handset 105 at the time of the recording of step 401. Thus, depending on the data processing capacity of the handset 105, bandwidth available on the network 107, 108 and capacity of self-positioning means 206, GPS data or, alternatively, data representative of the geographically-nearest MSC or base station(s) 109 or, alternatively still, assisted GPS data is obtained, respectively from the GPS means 206, the network or the relevant assistance server. Step 403 is processed substantially as soon as the recording step 401 is completed but, for handsets 105 comprising multi-core processors 201, the step 403 may usefully be processed in parallel to steps 401 and/or 402. Steps 402 and 403 are processed for each iteration of step 401, thus for each recording.
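The selection among the location sources of step 403 can be sketched as a simple fallback (the function and argument names are hypothetical; the actual ordering depends on handset capacity and network bandwidth, as stated above):

```python
def obtain_location(gps_fix, agps_fix, base_stations):
    """Return the best available location datum for step 403.

    Prefers an onboard GPS fix, then an assisted-GPS fix obtained from
    an assistance server, and finally falls back to identifiers of the
    geographically-nearest MSC or base station(s) 109.
    """
    if gps_fix is not None:
        return ("gps", gps_fix)
    if agps_fix is not None:
        return ("agps", agps_fix)
    return ("cell", base_stations)
```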
  • At step 404, each mobile handset 105 communicates the sample or each sample output by step 402 and stored, along with its respective location data of step 403, to the remote data processing terminal 110, via the gateway 108. The communication may occur by command from the user, automatically by command from the set of instructions loaded in the memory 202 of the handset 105, automatically by remote command from the set of instructions loaded in the memory 309 of the remote terminal 110, on an ad hoc basis and/or at scheduled dates and times. The set of instructions loaded in the memory 202 of the handset 105 may inhibit the communication of step 404 when power in the battery 214 depletes below a predetermined threshold, until such time as the battery is recharged or replenished.
  • In parallel with steps 401 to 404, the remote data processing terminal 110 records network transmissions 101B, 102B of various radio stations 101, 102 as respective stereo audio samples, at step 405. The quality of each sample is substantially optimal, since the network broadcasts are substantially unaffected by amplitude variations or additive interference and, as such, these network recordings are used as reference samples.
  • At step 406, the remote data processing terminal 110 receives a remote recording from a mobile handset 105 over the Wide Area Network 104 pursuant to step 404 and, at step 407, the remote data processing terminal 110 subjects the remote handset recording and the locally recorded network broadcast to an interpolation and decimation filter in order to match the respective sampling rates of the remote recording and the network recording and thereby obtain comparable audio patterns.
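The sampling-rate matching of step 407 can be sketched with a simple linear-interpolation resampler standing in for the interpolation and decimation filter (a minimal illustration only; the function name and rate values are assumptions):

```python
def resample(samples, src_rate, dst_rate):
    """Resample a recording by linear interpolation so that a handset
    recording and the reference network recording share a common
    sampling rate before pattern comparison (step 407)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate  # fractional index into input
        j = int(pos)
        frac = pos - j
        if j + 1 < len(samples):
            out.append(samples[j] * (1 - frac) + samples[j + 1] * frac)
        else:
            out.append(samples[j])  # clamp at the final sample
    return out
```

In practice a proper polyphase interpolation/decimation filter would include anti-aliasing; the sketch only shows how the two recordings are brought to comparable patterns.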
  • At step 408, the remote data processing terminal 110 matches the respective patterns output by step 407 for identifying the broadcast recorded by the smartphone 105, using the right or left recorded network broadcast as the reference pattern. The step 408 may include a further sub-filtering operation to accommodate a partial match of the remote handset recording with the reference pattern, for example to reduce the effect of intermittent recording, muffling and interference.
  • Figure 5 is a logical diagram of the contents of the memory means 202 of each mobile data communication device 105, when performing steps 401 to 404 at runtime. An operating system is shown at 501 which, depending on the handset manufacturer, may be iOS 5™ developed and distributed by Apple Inc. or Android™ developed and distributed by Google Inc.
  • An application is shown at 502, which configures the mobile handset 105 to perform at least processing steps 401 to 404 as described hereinbefore, and which is interfaced with the OS 501 via one or more suitable Application Programmer Interfaces. The application is either an application package file ('APK') for use with the Android™ operating system 501 or an iPhone™ application archive ('IPA') for use with the iOS™ operating system 501, and readily installed on the mobile handset 105 via, respectively, Android Market™ or the AppStore™.
  • Application data is shown at 503, which comprises local and network data. Local data comprises broadcast audio data 101A or 102A recorded via the transducer 207 at step 401 in a buffer 504, filtered audio data 505 output at step 402 and location data 506 obtained at step 403.
  • Network data 507 comprises packeted filtered audio data 505 and location data 506 being sent to the remote data processing terminal 110 and, optionally, remote command data 508 received from the remote data processing terminal 110 for configuring the application 502.
  • The memory 202 may further comprise local and/or network data that is unrelated to application 502, respectively shown at 509 and 510, for instance used by or generated for another application being processed in parallel with application 502.
  • Figure 6 is a logical diagram of the contents of the memory means 309 of the data processing terminal 110, when performing steps 405 to 408 at runtime. An operating system is shown at 601 which, if the terminal 110 is a desktop computer, is for instance Windows 7™ distributed by the Microsoft Corporation. The OS 601 includes communication subroutines 602 to configure the terminal for bilateral network communication via the NIC 311.
  • An application is shown at 603, which configures the terminal 110 to perform at least processing steps 405 to 408 as described hereinbefore, and which is interfaced with the OS 601 and network communication subroutines 602 thereof via one or more suitable Application Programmer Interfaces. The application 603 is therefore apt to buffer the incoming network broadcast streams 101B, 102B in RAM 309 and store same in HDD 310 pursuant to step 405.
  • Application data is shown at 604, which comprises local and network data. Local data comprises network audio data streams 101B and 102B received via the NIC 311 and subroutines 602 in a buffer 605, sampling rate-matched pattern data 606 filtered according to step 407, pattern-matched samples 607 according to step 408, and a database 608 storing analysis data associated with the matching output of step 408, including location data 506.
  • Network data 609 comprises packeted filtered audio data 505 and location data 506 received from remote mobile handsets 105 at step 406 and, optionally, remote command data 508 sent to remote mobile handsets 105 for configuring their respective instantiation of application 502.
  • The memory 309 may further comprise local and/or network data that is unrelated to application 603, respectively shown at 610 and 611, for instance used by or generated for another application being processed in parallel with application 603.
  • Figure 7 provides an example of the contents of the database 608 stored by the terminal 110 and processed by the application 603. The database 608 is relational and comprises a plurality of data structures, in the example data tables 701, wherein data is organized logically.
  • Thus, a first table 701 may store information about network audio data recorded by the terminal 110, consisting of a plurality of individual records. Each record comprises a unique identifier 702 following a format including broadcast frequency, date and time; a source network address 703 for the audio data stream, in the example the address of the broadcaster server 101E in the WAN 104; the broadcast frequency 704 corresponding to the broadcaster and over which the first audio data is broadcast; the recording side 705 of the stereo sample; a timestamp 706 for the recording; and the actual network recording 707.
  • A second table 701 may store information about broadcast audio data recorded by mobile handsets 105 and received by the terminal 110, consisting again of a plurality of individual records. Each record comprises a unique identifier 708 following a format including handset identifier, date and time; a sample quality value 709 indicative of the amount of amplitude variations in the recording; a unique handset identifier 710 uniquely identifying the communicating handset in the system to the terminal 110; the location data 506 extracted from the communication; a handset recording timestamp 711; and the actual broadcast recording 712.
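By way of illustration only, the two tables 701 described above could be realised as the following relational schema (SQLite via Python's standard library; every table and column name is an assumption mapping to the numbered record fields, not part of the disclosure):

```python
import sqlite3

# In-memory database standing in for database 608.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE network_recordings (      -- first table 701
    id TEXT PRIMARY KEY,               -- 702: frequency + date + time
    source_address TEXT,               -- 703: broadcaster stream address
    broadcast_frequency REAL,          -- 704: broadcast frequency (MHz)
    stereo_side TEXT,                  -- 705: 'L' or 'R'
    recorded_at TEXT,                  -- 706: recording timestamp
    audio BLOB                         -- 707: the network recording
);
CREATE TABLE handset_recordings (      -- second table 701
    id TEXT PRIMARY KEY,               -- 708: handset id + date + time
    sample_quality REAL,               -- 709: amplitude-variation metric
    handset_id TEXT,                   -- 710: unique handset identifier
    location TEXT,                     -- 506: GPS / base-station data
    recorded_at TEXT,                  -- 711: handset recording timestamp
    audio BLOB                         -- 712: the broadcast recording
);
""")
```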
  • The present invention thus provides a method of matching audio patterns in the time domain, using a time-based correlation, with a high sensitivity accommodating low signal-to-noise ratios or high distortion values. Samples are time-fragmented to allow for variable levels of quality in the recorded samples, and pattern matching is assisted through A-GPS and/or base station identification. The network second audio data is used as two reference templates (right and left channels) against time-stamped smartphone samples, which are compensated for time difference, since the FM broadcast has a variable delay compared with the network stream. The method is particularly flexible as it provides time-selected recording, whilst preserving privacy, since the first audio data is recorded in a fragmented or distorted manner, and optionally further scrambled. The method and system are adaptable to other broadcast formats, and can accommodate televised broadcasts with minimal changes.
  • It will be appreciated that in one embodiment the internet stream (second audio data, which is used as a reference pattern) can be delayed with respect to the wireless broadcast (recorded as the first audio data). The delay is partly due to the queuing of radio stream packets at intervening routers as the stream traverses the Internet to the dedicated servers. As router queue lengths fluctuate considerably in an unpredictable fashion, the delay between the first and second audio data will also vary in an unpredictable manner.
  • The system and method of the invention provide a solution by making the second audio data record longer than the first audio data record, by a duration at least equal to the longest expected time difference between them. The pattern recognition algorithm must perform multiple pattern matches for each first audio data record against each second audio data record (for example, if the second audio data record needs to be longer than the first audio data record by 100 samples, to allow for the delay, then the algorithm could require up to 100 pattern matches per reference second audio data record). In essence, the first audio data slides along the longer second audio data record. This means each first audio record generates multiple pattern match values, most of which will show low correlation. The sliding calculation involves taking a series of second audio segments (each of the same length as the first audio record) from the beginning of the record, performing the calculation, and then sliding along one sample at a time until a match is made or it is impossible to slide any further along the second audio record.
  • A pattern match (if present) will be indicated by a high correlation when the sliding has eliminated the difference in time between the first and second audio segments.
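The sliding calculation described above can be sketched as a normalised cross-correlation in which the short first-audio record slides one sample at a time along the longer second-audio (reference) record (the function name and threshold value are illustrative assumptions):

```python
def sliding_match(first, second, threshold=0.95):
    """Slide `first` along the longer `second` reference record.

    At each offset, compute a normalised correlation between `first`
    and the equal-length segment of `second`; a high peak indicates a
    pattern match, and its offset measures the FM-versus-stream delay.
    """
    n = len(first)
    best_offset, best_corr = -1, 0.0
    for offset in range(len(second) - n + 1):
        window = second[offset:offset + n]
        num = sum(a * b for a, b in zip(first, window))
        den = (sum(a * a for a in first) * sum(b * b for b in window)) ** 0.5
        corr = num / den if den else 0.0
        if corr > best_corr:
            best_corr, best_offset = corr, offset
        if corr >= threshold:
            break  # confident match found: stop sliding early
    return best_offset, best_corr
```

As the description notes, most offsets yield low correlation; the correlation approaches 1.0 only once the sliding has eliminated the time difference between the two segments.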
  • The embodiments in the invention described with reference to the drawings comprise a computer apparatus and/or processes performed in a computer apparatus. However, the invention also extends to computer programs, particularly computer programs stored on or in a carrier adapted to bring the invention into practice. The program may be in the form of source code, object code, or a code intermediate source and object code, such as in partially compiled form or in any other form suitable for use in the implementation of the method according to the invention. The carrier may comprise a storage medium such as ROM, e.g. CD ROM, or magnetic recording medium, e.g. a floppy disk or hard disk. The carrier may be an electrical or optical signal which may be transmitted via an electrical or an optical cable or by radio or other means.
  • In the specification the terms "comprise, comprises, comprised and comprising" or any variation thereof and the terms "include, includes, included and including" or any variation thereof are considered to be totally interchangeable and they should all be afforded the widest possible interpretation and vice versa.
  • The invention is not limited to the embodiments hereinbefore described but may be varied in both construction and detail.

Claims (15)

  1. A method of measuring exposure to broadcast audio data with a distributed communication system, comprising a plurality of mobile data communication devices apt to transmit data to at least one remote data processing terminal, the method comprising the steps of
    broadcasting first and second audio data from at least one broadcasting source,
    at each of the plurality of mobile data communication devices,
    recording first audio data in the time domain, from a substantially adjacent audio source reproducing the broadcast from the at least one broadcasting source,
    associating location data with the recorded first audio data, and communicating the recorded first audio data and location data to the at least one remote data processing terminal,
    and at the at least one remote data processing terminal,
    recording second audio data in the time domain over a network,
    matching respective sampling rates of the first and second recorded audio data to obtain respective comparable patterns, and
    matching the respective patterns to identify the recorded first audio data, by using the recorded second audio data as a reference pattern.
  2. A method according to claim 1, wherein either or both of the steps of recording first audio data and recording second audio data are performed at a predetermined time interval and for a predetermined period of time.
  3. A method according to claim 2, wherein either or both of the predetermined time interval and the predetermined period of time correspond substantially to one another.
  4. A method according to claim 2 or 3, comprising the further step of setting the predetermined time interval according to one or more variables selected from the group comprising time, broadcast schedule, data storage availability and power resources of the mobile data communication device.
  5. A method according to any of claims 2 to 4, wherein the step of communicating the recorded first audio data comprises the further step of scheduling the communication according to one or more variables selected from the group comprising time, network availability, bandwidth availability and onboard power resources.
  6. A method according to any of claims 1 to 5, wherein the step of recording first audio data comprises at least one further step selected from time-decimating, time-fragmenting and amplitude-reducing the first audio data.
  7. A method according to any of claims 1 to 6, wherein the step of associating location data comprises the further step of associating data representative of at least one mobile network base station or assisted GPS ('A-GPS') data.
  8. A method according to any of claims 1 to 7, wherein the step of recording second audio data over a network comprises the further step of recording at least one data stream broadcast over a wide area network.
  9. A method according to any of claims 1 to 8, wherein the second audio data is recorded in stereo and the step of matching the respective patterns comprises the further step of using the right or left recorded second audio data as the reference pattern.
  10. A method according to any of claims 1 to 9, comprising the further step of matching portions of the respective patterns to identify the recorded first audio data, in order to mitigate the effect of intermittent erasure, attenuation, muffling and/or interference in the recorded first audio data.
  11. A system for measuring exposure to broadcast audio data with a distributed communication system, comprising
    at least one broadcasting source broadcasting first audio data over the airwaves and second audio data on a wide area network,
    at least one data processing terminal connected to the wide area network,
    and
    a plurality of mobile data communication devices connected to a mobile communication network and apt to transmit data to the at least one remote data processing terminal,
    wherein each of the plurality of mobile data communication devices is configured to record the first audio data in the time domain from a substantially adjacent audio source reproducing the broadcast from the at least one broadcasting source, associate location data with the recorded first audio data, and communicate the recorded first audio data and location data to the at least one remote data processing terminal, and
    wherein the at least one data processing terminal is configured to record the second audio data in the time domain over the wide area network, match respective sampling rates of the first and second recorded audio data to obtain respective comparable patterns, and match the respective patterns to identify the recorded first audio data, by using the recorded second audio data as a reference pattern.
  12. A system according to claim 11, wherein either or both of the plurality of mobile data communication devices and the at least one data processing terminal is further configured to record audio data at a predetermined time interval and for a predetermined period of time.
  13. A system according to claim 12, wherein the predetermined time interval is set according to one or more variables selected from the group comprising time, broadcast schedule, data storage availability and power resources of the mobile data communication device.
  14. A system according to any of claims 11 to 13, wherein each of the plurality of mobile data communication devices is further configured to communicate the recorded first audio data according to one or more variables selected from the group comprising time, network availability, bandwidth availability and onboard power resources, and/or, wherein each of the plurality of mobile data communication devices is further configured to time-decimate, time-fragment and/or amplitude-reduce the first audio data, and/or, wherein the location data is data representative of at least one mobile network base station or assisted GPS ('A-GPS') data, and/or, wherein the recorded second audio data is recorded in stereo and the reference pattern is the right or left recorded second audio data.
  15. A set of instructions recorded on a data carrying medium which, when processed by a mobile data communication device connected to a network, configures the device to perform the steps of
    recording audio data in the time domain, from a substantially adjacent audio source reproducing a broadcast from a remote broadcasting source,
    associating location data with the recorded audio data, and
    communicating the recorded audio data and location data to at least one remote data processing terminal over the network.
EP11191708.4A 2011-12-02 2011-12-02 Research data measurement system and method Withdrawn EP2600545A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP11191708.4A EP2600545A1 (en) 2011-12-02 2011-12-02 Research data measurement system and method

Publications (1)

Publication Number Publication Date
EP2600545A1 true EP2600545A1 (en) 2013-06-05

Family

ID=45044457

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11191708.4A Withdrawn EP2600545A1 (en) 2011-12-02 2011-12-02 Research data measurement system and method

Country Status (1)

Country Link
EP (1) EP2600545A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0210609A2 (en) * 1985-07-29 1987-02-04 A.C. Nielsen Company Broadcast program identification method and apparatus
WO2006026736A2 (en) * 2004-08-31 2006-03-09 Integrated Media Measurement, Inc. Detecting and measuring exposure to media content items
US20080126420A1 (en) * 2006-03-27 2008-05-29 Wright David H Methods and systems to meter media content presented on a wireless communication device
US20110071838A1 (en) * 2000-07-31 2011-03-24 Avery Li-Chun Wang System and methods for recognizing sound and music signals in high noise and distortion
US20110208515A1 (en) 2002-09-27 2011-08-25 Arbitron, Inc. Systems and methods for gathering research data


Similar Documents

Publication Publication Date Title
US10021457B2 (en) System and method for engaging a person in the presence of ambient audio
JP2012502596A (en) Method and system for monitoring sound over a network
US20140073236A1 (en) Radio audience measurement
CN110265052B (en) Signal-to-noise ratio determining method and device for radio equipment, storage medium and electronic device
KR101602175B1 (en) System and method for recognizing broadcast program content
EP3111672B1 (en) Hearing aid with assisted noise suppression
CN104010226A (en) Multi-terminal interactive playing method and system based on voice frequency
CN103108229A (en) Method for identifying video contents in cross-screen mode through audio frequency
CN105657479A (en) Video processing method and device
CN112312167A (en) Broadcast content monitoring method and device, storage medium and electronic equipment
CN108024120A (en) Audio generation, broadcasting, answering method and device and audio transmission system
CN104486645A (en) Method for determining program audience rating, playback equipment, server and device
Kouwen et al. Digital forensic investigation of two-way radio communication equipment and services
CN102833595A (en) Method and apparatus for transferring information
CN109194998A (en) Data transmission method, device, electronic equipment and computer-readable medium
US8571863B1 (en) Apparatus and methods for identifying a media object from an audio play out
EP2600545A1 (en) Research data measurement system and method
JP2006211250A (en) Radio device identification method and apparatus
US20110066700A1 (en) Behavior monitoring system
US20150271309A1 (en) Radio communications device for attachment to a mobile device
CN105611314A (en) Method, device and system for acquiring related information of television program
EP2720389A1 (en) Method and system for obtaining music track information
CN102497242A (en) Radio equipment program list obtaining method and system
CN112311491A (en) Multimedia data acquisition method and device, storage medium and electronic equipment
WO2020024508A1 (en) Voice information obtaining method and apparatus

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20131206