GB2477940A - Music usage information gathering - Google Patents


Info

Publication number
GB2477940A
GB2477940A GB1002765A GB201002765A
Authority
GB
United Kingdom
Prior art keywords
data
track
server
audio
client apparatus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1002765A
Other versions
GB201002765D0 (en)
Inventor
Leo Yu-Leung Tong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB1002765A priority Critical patent/GB2477940A/en
Publication of GB201002765D0 publication Critical patent/GB201002765D0/en
Publication of GB2477940A publication Critical patent/GB2477940A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • G06F16/632Query formulation
    • G06F16/634Query by example, e.g. query by humming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/36Monitoring, i.e. supervising the progress of recording or reproducing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/09Arrangements for device control with a direct linkage to broadcast information or to broadcast space-time; Arrangements for control of broadcast-related services
    • H04H60/14Arrangements for conditional access to broadcast information or to broadcast-related services
    • H04H60/21Billing for the use of broadcast information or broadcast-related information
    • H04H60/22Billing for the use of broadcast information or broadcast-related information per use
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/29Arrangements for monitoring broadcast services or broadcast-related services
    • H04H60/31Arrangements for monitoring the use made of the broadcast services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/35Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
    • H04H60/37Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying segments of broadcast information, e.g. scenes or extracting programme ID
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/61Arrangements for services using the result of monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H60/66Arrangements for services using the result of monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 for using the result on distributors' side
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/76Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet
    • H04H60/81Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself
    • H04H60/82Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself the transmission system being the Internet
    • H04H60/87Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself the transmission system being the Internet accessed over computer networks
    • H04H60/88Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself the transmission system being the Internet accessed over computer networks which are wireless networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/56Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H60/58Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 of audio

Abstract

A system for gathering music usage information has client apparatus 6 and a server 10. The client apparatus 6 has a transmitter 9 that during a communication session with the server transmits audio data relating to sequentially played music tracks. The server has a receiver 13 that receives the audio data and a processor 15 that processes the audio data to generate identification data relating to the music tracks. The audio data may comprise a plurality of sequentially generated audio data packets which are identified by generating an acoustic fingerprint which is compared to an acoustic fingerprint database. The client apparatus may receive track data from the server and display the track data. A user interface may allow a user to modify the received track data if the track is not identified or is incorrectly identified. The system can be used for gathering information from public performances of copyright material at live events to determine royalties.

Description

System, server, client device and method for gathering music information.
Description
This invention relates to gathering music information. In particular, but not exclusively, it relates to a system for collecting music usage information so that copyright royalties may be accurately calculated.
If a business intends to use copyrighted sound recordings in public, such as at places of business or live events, or for broadcast, online or mobile usage, it must first obtain performing rights clearance. Copyright law requires users of this content to pay performing rights holders.
The payment for performing rights is usually covered by various tariffs and licenses set by copyright collection societies around the world, which are authorised to collectively represent rights holders and give clearance for usage.
In most cases, collection societies aim to allocate license revenue to rights holders according to the actual usage of each sound recording, or where this is not available, comparable usage information.
Certain licensees, such as larger TV and radio broadcasters, must report detailed, actual usage of content. For smaller radio stations, collection societies may apply usage information received from larger stations that play similar music.
However, current methods of gathering this information from 'public performance' licensees, such as businesses and live events, depend on music researchers who manually collect details of music use from a continually changing selection of premises. From this sample data they calculate what proportion of the money to pay each rights holder. Otherwise, further comparable chart or broadcast data may be used.
Public performances are seen to be an important area of revenue growth for the music industry, but rights holders continue to face hurdles in getting fair payment for the use of their content due to these inaccurate methods of calculating payments.
The present invention provides an alternative approach to collecting music usage information.
The present invention provides a server having a receiver configured to receive, during a communication session with a client apparatus, audio data relating to a plurality of sequentially played music tracks and a processor configured to process the audio data to generate identification data relating to the music tracks.
In this way, identification data relating to a plurality of music tracks can be automatically and accurately generated. Thus, copyright royalties can be accurately determined.
The identification data is generated from the audio data, rather than from tags, headers or other metadata associated with the audio data.
Preferably, the audio data comprises a plurality of sequentially received audio data packets. Optionally, however, the audio data may be in the form of an audio data stream.
Preferably, the identification data comprises a plurality of sequentially generated identification data units, each identification data unit relating to music played during one of a plurality of predetermined time intervals.
Each identification data unit may be associated with time data relating to the corresponding predetermined time interval. Multiple identification data units are preferably generated for each music track. The processor may be configured to process the ID data units and corresponding time data so as to estimate the duration and/or start/end time of each music track. In this way, track durations can be estimated, even where a DJ makes live edits to the speed of play or finishes a track early, for example.
The music tracks may comprise multiple layers. Each identification data unit may comprise one or more identifier(s) generated from one of the audio data packets.
The processor may be configured to generate track data and to generate tracklist data by combining the generated track data.
The processor may be configured to generate track data when a predetermined number of identification data unit(s) are sequentially generated, each with the same set of identifier(s), if the shared set of identifier(s) differs from the set of identifier(s) associated with the most recently generated track data. In this way, track data can be accurately generated, even if a DJ makes live edits to the music tracks while they are played. The number is predetermined in the sense that it is determined before the corresponding track data is generated.
However, the predetermined number may be varied from generation of one track data to the next. Optionally, however, the predetermined number may be a fixed constant and may for example be a number between three and ten.
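By way of illustration, the change-detection rule above may be sketched in Python as follows. This is a hypothetical sketch only; the function and variable names are illustrative and do not appear in the specification, and a fixed threshold of three consecutive matching units is assumed.

```python
def detect_track_changes(id_units, threshold=3):
    """Emit a new track identifier only after `threshold` consecutive
    ID data units agree on an identifier that differs from the
    current track. Returns the list of confirmed track identifiers."""
    confirmed = []   # identifiers for which track data was generated
    current = None   # identifier of the most recently generated track data
    candidate = None # identifier currently being counted
    run = 0          # length of the current run of matching units
    for ident in id_units:
        if ident == candidate:
            run += 1
        else:
            candidate, run = ident, 1
        if run >= threshold and candidate != current:
            current = candidate
            confirmed.append(candidate)
    return confirmed
```

A brief transient (for example a sample mixed in by the DJ for a moment) never reaches the threshold, so no spurious track data is generated for it.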
Preferably, the processor is configured to generate tracklist data by combining the generated track data. The server may be configured to make the tracklist data available to users of the server.
Preferably, the identification data identifies the music tracks. However, the identification data may comprise indication(s) that one or more of the music tracks have not been identified.
Preferably the identification data units are generated from the audio data using an audio identification process, preferably an identification process which uses acoustic fingerprinting. Audio identification processes using acoustic fingerprinting are known. Known identification processes identify individual audio tracks upon user request. In a known audio identification process, a server identifies audio data relating to a single music track. The audio data relating to the music track is received from a client device during a communication session with the server.
The present invention also provides a client apparatus, comprising a transmitter configured to transmit, during a communication session with a server, audio data relating to a plurality of sequentially played music tracks, the server having a processor configured to process the audio data to generate identification data relating to the music tracks.
Preferably, the client apparatus is portable. The client apparatus may have a size and weight which allows it to be conveniently carried by hand.
The invention also provides a device for connection with audio/visual equipment, the device being configured to receive audio/video content from a remote content source and to transmit received audio/video content to the audio/visual equipment.
Preferably, said audio/visual equipment comprises DJ equipment.
So that the invention may be more fully understood, embodiments thereof will now be described with reference to the accompanying drawings, in which:
Figure 1 shows a system for gathering music information.
Figure 2 shows a perspective view of a client device.
Figure 3 shows an end view of the client device.
Figure 4 illustrates identification data units.
Figure 5 shows a method for gathering music information.
Figure 6 illustrates tracklist data.
Figure 7(a), (b) and (c) and Figure 8 show identification data units generated during different communication sessions.
Figure 1 shows a disc jockey (DJ) mixing console 1 having several inputs 2 for receiving audio signals from different audio sources 3, which may include one or several CD players or DJ turntables which a DJ can use to output selected recorded audio from his repertoire to the mixing console 1. Mixing console 1 is used by the DJ for selectively altering the volume, frequency spectrum and/or other characteristics of the received signals and for selecting and combining the signals, thereby to play music for an audience, for instance at a nightclub or via radio broadcast. Other characteristics, for example the pitch and speed of play, may be varied using specific sources 3, for example turntables.
During a particular session, the DJ plays a sequence of music tracks. Each track may comprise a single recording. The tracks may be played one immediately after the other, or alternatively may be separated by intervals during which the DJ may speak to the audience via a microphone connected to an input channel of the mixer. The DJ may use the mixing console 1 to effect a seamless transition between tracks by fading out a first recording while simultaneously fading in a second recording; this is known as cross-fading.
Mixing console 1 may also be used to make other live changes to the music, for example by mixing in samples of other recordings at selected times during the track. This is often done by nightclub DJs in particular. A club DJ may also play a track in which two recordings are played at the same time and with a synchronised rhythm; this technique is called beat-mixing. Beat-mixing is also used to achieve transition effects from one track to the next. Many other DJ techniques are known in the common general knowledge.
As shown in Figure 1, console 1 has a master audio output 4 and a further auxiliary audio output 5 in connection with a client device 6. Device 6 has an input 7 such as a stereo RCA audio line input, which receives the audio signal from the console 1. The device 6 further comprises a processor 8 configured to process the received audio signal by compressing and/or converting the signal, so as to generate audio data for transmission. As shown, the device 6 has a wireless communication unit 9 configured to transmit the audio data to a remote server 10, where it is used to identify the music tracks played by the DJ and to generate a tracklist comprising the identified music tracks.
Turning now to Figure 2, which shows a perspective view, device 6 further comprises a user interface in the form of a touchscreen 11. Touchscreen technology is well known per se and will not be described here. When power is supplied to the device, the touchscreen 11 displays a message requesting that the DJ enter login details. The DJ uses the touchscreen 11 to enter the requested details.
Referring again to Figure 1, the device 6 then initiates a communication session with the remote server 10 over a network. The network includes a node 12a, which may for example be a wireless router or a 3G receiver to which device 6 is wirelessly connected. As shown, node 12a and server 10 are connected via the internet 12.
The communication session may comprise an initial authentication process during which the login details entered by the DJ are transmitted to server 10 for authentication against a list of authorised user information. Such authentication processes are well known per se and will not be described here. Once authentication is complete, the touchscreen displays a message to the DJ to indicate that a communication session has been established with the server.
The DJ may enter further details using the touchscreen 11, before or during the communication session, for example event details, location details or audio source details. This information is transmitted to the server via the internet 12.
The DJ then proceeds to play tracks in sequence using the mixing console 1.
The processor 8 generates audio data from the audio signal received from the mixing console 1 and transmits the audio data to the server 10. The audio data is transmitted in the form of a sequence of audio data packets which are transmitted at intervals. Each audio data packet is generated by compressing and/or converting the audio signal received by the device 6 during the period preceding transmission. Compressing the signal may for example comprise downsampling the audio signal in order to reduce data content, although other lossy or lossless compression processes known in the art may alternatively or in addition be used. Each audio data packet is transmitted together with time data relating to the time of play of the audio stored in each audio data packet.
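The packetising step may be sketched as follows. This is a minimal sketch: the fixed five-second interval and the post-compression sample rate are assumptions for illustration (the specification leaves both to the implementation), and real downsampling would apply a low-pass filter rather than the naive slicing shown here.

```python
SAMPLE_RATE = 8000   # assumed post-compression sample rate, in Hz
INTERVAL_S = 5       # assumed fixed transmission interval, in seconds

def packetize(samples, start_time, rate=SAMPLE_RATE, interval=INTERVAL_S):
    """Split a run of audio samples into sequential packets, each
    tagged with time data giving the time of play (in seconds) of
    the audio it contains."""
    step = rate * interval
    packets = []
    for i in range(0, len(samples), step):
        packets.append({
            "time": start_time + (i // rate),  # offset in whole seconds
            "audio": samples[i:i + step],
        })
    return packets
```

Each packet carries its own time data, so the server can later order the corresponding ID data units and estimate track start and end times.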
The audio data packet transmission interval may be determined dynamically by the server 10 in dependence on the bandwidth of the connection between server and device 6. Optionally, however, the transmission interval may be a fixed period, for example five seconds.
As shown in Figure 1, the server 10 has a communication unit 13 in connection with the internet. The communication unit receives the audio data packets sent from the client device 6 via the network. The server further comprises a memory 14, and a processor 15 configured to process the received audio data packets so as to generate identification data (ID data) comprising a plurality of identification data units (ID data units), one for each received audio data packet.
The ID data units are generated using an audio identification process. Audio identification technology is known per se and will not be described in detail here.
Briefly, in an audio identification process an acoustic fingerprint is generated from an audio data packet, and this fingerprint is compared against an audio fingerprint database for a match. Acoustic fingerprinting identification processes are described, for example, in WO 02/27600 A2 (appendix 1) and US 6,453,252 B1. The processor 15 is configured to generate an ID data unit for each received audio data packet using an audio identification process. Where the audio identification process identifies that a particular audio data packet relates to a certain track, the processor 15 generates an ID data unit comprising a unique identifier for that track and stores it in the memory 14. The identifier may comprise a code identifying the track, for example in the form of a text string. If identification fails for a particular audio data packet, the server 10 stores an ID data unit having a null identifier which indicates that the audio data packet has not been identified.
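The identification step may be sketched as follows. The `fingerprint` function here is a toy hash stand-in for a real acoustic fingerprint (a hash is not robust to re-encoding or live edits, unlike the perceptual fingerprints of the processes cited above), and all names are illustrative:

```python
import hashlib

def fingerprint(audio_bytes):
    """Toy fingerprint: a short digest of the raw audio. A real
    system would derive robust perceptual features instead."""
    return hashlib.sha256(audio_bytes).hexdigest()[:16]

def identify(packet_audio, fingerprint_db):
    """Generate an ID data unit for one audio data packet: the
    track identifier on a database match, or None (the null
    identifier) when identification fails."""
    return fingerprint_db.get(fingerprint(packet_audio))
```

Returning `None` for an unmatched packet mirrors the null identifier stored when identification fails.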
In this way, an ID data unit is stored in the memory 14 for each audio data packet received. Each ID data unit is associated with the time data received from the client device 6 with its corresponding audio data packet. Thus, a time ordered set of ID units is stored in the memory 14 of server 10.
Figure 4 shows an example of successive ID data units B1, B2, B3 generated by processor 15. As shown, each ID data unit B1, B2, B3 is associated with time data T1, T2, T3. Each ID data unit comprises an identifier X. The processor 15 is configured to generate tracklist data including the start time, end time and track name of each track, using the ID data. The tracklist generation process comprises processing the ID data units one at a time in time order as they are generated at the server 10.
Processing an individual ID data unit comprises determining if the ID data unit relates to a new track. If the ID data unit is the first ID data unit generated during the communication session, then the ID data unit is assumed to relate to a new track. The processor 15 then generates track data for the new track, the track data having a "name" field, a "start-time" field, an "end-time" field and a "track ID" field. The time data associated with the ID data unit is included in the "start-time" field. The processor 15 stores the identifier comprised in the ID data unit in the "track ID" field of the track data. The processor 15 also searches a track database pre-stored in the memory 14 for data corresponding to the identifier. In particular, the processor is configured to look up the track name associated with the identifier in the track database. The track name is included in the "name" field of the track data.
If the ID data unit indicates that identification has failed, the "name" field is set as "unknown", for example by leaving the field empty, and the "track ID" field is set to a null value. The "end-time" field is left empty, as it is updated later, as described below.
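The four-field track data record described above may be sketched as follows. The field names follow the specification; the function name and the shape of the track database (a simple identifier-to-name mapping) are assumptions:

```python
def new_track_data(identifier, time_data, track_db):
    """Build track data for a new track. A null identifier yields an
    empty "name" field and a null track ID; the "end-time" field is
    left empty until the next track begins."""
    if identifier is None:
        name = ""                         # "unknown" track
    else:
        name = track_db.get(identifier, "")
    return {"name": name, "start_time": time_data,
            "end_time": None, "track_id": identifier}
```
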
The processor 15 may also include further data from the track database in the track data. This further data may also be pre-stored in the database, and associated with the identifier for each track. When new track data is generated which includes this identifier, the further data is included in the track data. This further data may include, for example, one or more of the following: product artist(s), product display artist(s), product title, product version / type / format, product barcode / Universal Product Code (UPC) / International Standard Recording Code (ISRC), product catalogue number, product release date, product publishing rights holder(s), product copying rights holder(s), product artwork, track artist(s), track display artist(s), track title, track mix / version, track publishing rights holder(s), track copying rights holder(s), track ISRC, track recording producer(s), track recording mixer(s), track composition composer(s), track composition lyricist(s), track composition publisher(s), track genre, track length, track key, track bpm / pitch, and/or website link(s) to MySpace/Facebook/Twitter page(s) or other page(s) associated with the producer or music label or other person associated with the track, and/or a website link to a site where the track may be purchased or streamed.
When the processor 15 generates track data for a new track, the processor stores the track data in a tracklist data file stored in the memory 14. The server 10 also transmits the track data to the client device 6 via the network 12, so that the track name and other details can be displayed to the DJ.
The processor 15 then processes the next ID data unit. If this ID data unit has the same identifier as stored in the "track ID" field of the most recently generated track data, then the ID data unit is assumed to relate to the same track.
In this case, the processor 15 does not generate new track data and instead processes the next ID data unit. An ID data unit is also assumed to relate to the same track if it includes a null identifier and the most recent track data was generated with a null identifier in the "track ID" field.
The processor 15 continues to skip ID data units in this way until it identifies an ID data unit which has a new identifier. Such an ID data unit is assumed to relate to a new track and so the processor 15 generates new track data for the track. The processor 15 looks up the name and other details associated with the identifier in the track database and includes the relevant details for the track in corresponding fields of the track data. The processor 15 also includes the identifier and the time data from the ID data unit in the new track data. The processor 15 stores the track data in the tracklist data file, together with the other track data. The server 10 also transmits the track data to the client device 6, so that the new track identification information can be displayed to the DJ.
The processor also updates the "end time" field of the track data of the previous track at this time. The "end time" field of the previous track is set equal to the time data associated with the current ID data unit.
The processor 15 is configured to continuously update the tracklist data file in this way as new track data is generated. Thus, the server 10 automatically maintains a "live" tracklist of the tracks played by the DJ.
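The tracklist generation loop described in the preceding paragraphs may be sketched as follows. This is a simplified sketch: the change-confirmation threshold is omitted for brevity, each ID data unit is modelled as a (time, identifier) pair, and successive null identifiers are treated as one "unknown" track, as in the description above.

```python
def build_tracklist(id_units):
    """Process (time_data, identifier) pairs in time order, generating
    new track data on each identifier change and setting the previous
    track's end time from the current unit's time data."""
    tracklist = []
    for time_data, ident in id_units:
        if tracklist and tracklist[-1]["track_id"] == ident:
            continue                       # same track: skip this unit
        if tracklist:
            tracklist[-1]["end_time"] = time_data   # close previous track
        tracklist.append({"track_id": ident,
                          "start_time": time_data,
                          "end_time": None})
    return tracklist
```

Because the last entry's "end-time" field stays empty until a new identifier arrives, the tracklist can be maintained "live" while the current track is still playing.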
The client device 6 is configured to receive track data transmitted by the server when new track data is generated, for display to the DJ. The processor 8 of the client device 6 is configured to display the information stored in the "name" field of the received track data on the touchscreen 11. Where the track data relates to a second or subsequent track played during the communication session, the track name is appended to a list of tracks displayed on the touchscreen 11. If the client device 6 receives track data in which the "name" field is empty or set to a null value, then the message "unknown track" is added to the list displayed on the touchscreen 11.
In this way, the client device 6 displays an ordered tracklist comprising the track which is playing and also the tracks already played.
The DJ may notice from the displayed message that the present track has not been identified or has been incorrectly identified. The touchscreen 11 is configured to allow the DJ to enter the correct track name. The processor 8 is configured to modify the "name" field of the received track data to include the track name entered by the DJ. Other fields of the received track data may be modified in a similar way. The client device 6 then transmits the modified track data to the server 10. In response to receiving the modified track data, the server 10 is configured to modify the tracklist data stored in the memory 14 by replacing the track data for the present track with the received modified track data. In this way, the DJ may use the touchscreen 11 of the device 6 to manually modify the tracklist data stored at the remote server 10 so as to identify tracks which have not been identified, or which have been incorrectly identified.
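The server-side replacement step may be sketched as follows, assuming (hypothetically) that the "start-time" field is used to locate the stored record corresponding to the modified track data:

```python
def apply_correction(tracklist_data, corrected):
    """Replace the stored track data whose start time matches the
    modified track data received from the client device. Returns
    True on success, False if no matching record is found."""
    for i, track in enumerate(tracklist_data):
        if track["start_time"] == corrected["start_time"]:
            tracklist_data[i] = corrected
            return True
    return False
```
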
Where a modification is made in this way, the server may record the corresponding audio data packets in case there should be any later dispute over incorrect entries, or for other purposes.
Optionally, the server 10 may be configured to update its track database based on the modified track data received from the client device 6.
Although the server 10 is described above as communicating with only a single device 6, the server 10 may communicate with many different devices 6 at the same or different times. For each such communication session, the server generates tracklist data from the audio data packets received from the device 6.
The tracklist data is stored at the server 10 and associated with other details received from the device 6 during the communication session, which may include for example the username or other information relating to the DJ, event information, location details and/or audio source identification details. The server 10 may also calculate the duration of each track from the "start time" and "end time" fields of each track data, and store the duration information with each track data.
The processor 15 may be configured to analyse tracklist data stored at the server in order to generate analytics. For example, the processor 15 may generate statistical information regarding the popularity of particular tracks with particular DJs.
The server 10 can also be configured to periodically combine tracklist data stored at the server 10 to generate a report for use by a copyright collection society.
The report may include details of the number of times each track is played in total and also the duration of play. The report may also include the name of the DJ or other person or organisation who played each track, the location where the track was played and/or the start time, end time or duration of each track. This report can be periodically sent to the copyright collection society.
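By way of illustration, the periodic report generation described above can be sketched as a simple aggregation over stored tracklist data. The field names and ISO-format timestamps below are assumptions made for this sketch, not the actual storage format of server 10:

```python
from collections import defaultdict
from datetime import datetime

def build_report(tracklists):
    """Aggregate per-track play counts and total play durations across
    many tracklists, as a copyright collection report might require.

    `tracklists` is a list of tracklists; each tracklist is a list of
    track dicts with illustrative "name", "start_time" and "end_time"
    fields holding ISO-format timestamps.
    """
    report = defaultdict(lambda: {"plays": 0, "total_seconds": 0.0})
    for tracklist in tracklists:
        for track in tracklist:
            start = datetime.fromisoformat(track["start_time"])
            end = datetime.fromisoformat(track["end_time"])
            entry = report[track["name"]]
            entry["plays"] += 1
            entry["total_seconds"] += (end - start).total_seconds()
    return dict(report)
```

A real report would also carry the DJ, location and per-play times described above; those are per-tracklist details and would simply be copied through alongside the aggregated figures.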
The server 10 also comprises a web service for distributing tracklist data via the internet. The web service has a list of subscribers who are authorised to view or receive one or more of the stored tracklists. Subscribers may include fans, DJs, track producers, music publishers, music producers, record labels, PR agencies, or music promotion services.
Subscribers may login to the web service via the internet to view or download a selected tracklist. Alternatively, or in addition, the web service may employ "push" technology, ie: the web service may be configured to transmit the tracklist to subscribers, for example via email or other software (e.g. Facebook, Twitter, last.fm), or via SMS, even in the absence of a specific subscriber request. The analytics and copyright collection report described above may also be distributed via the web service. Since web services to distribute data to subscribers either automatically or on request are well known per se, they will not be described in any detail here.
The web service may be further configured to allow selected authorised users to modify tracklist data stored on the server 10. Thus a DJ may use the web service to correct any tracks which the server has failed to identify, or has incorrectly identified. In this way, a DJ using client device 6 may use a computer, mobile phone or other device to modify tracklist data during his set, instead of using a touchscreen interface 11, for example if the DJ did not have time or did not wish to make corrections in real time. Optionally, the touchscreen may not be present in the client device. The web service may also be configured to allow playback of the recorded audio stream to help verification.
As shown in Figures 2 and 3, the device 6 has several further inputs/outputs 16, 17, 18, 19, 20 and may also have a SIM card slot for 3G access (not shown).
Input 16 is a power input for electricity. Alternatively or in addition however, rechargeable battery power may be used. Output 17 is a stereo RCA line output, which outputs the same signal which is input to input 7 so that the device 6 can be included in a daisy chain of audio devices. Outputs 18 and 19 are different types of USB port. The device 6 also has a power switch 21.
An example will now be given with reference to Figures 5 and 6 of one way in which a tracklist may be generated and used. In this example, a device 6 is used at a music festival at which "DJ Alice" is performing. Before she begins her set, DJ Alice supplies power to the device 6 and connects a cable between the auxiliary output of her mixing console 1 and the RCA line input of device 6. She uses the touchscreen interface 11 to enter her username and password, which she previously registered online using the web service. The username and password are transmitted to remote server 10 for authentication. As shown in Figure 5, step A1, the device 6 and server 10 then initiate a communication session, which comprises an initial authentication process in which the server 10 identifies from the username and password that DJ Alice is an authorised user of server 10. DJ Alice then enters event information, including the festival name and location using the touchscreen 11. This information is transmitted to the server 10. If this information has previously been entered or determined by the device 6, the device 6 may be configured to request that DJ Alice confirm the details.
Referring to Figure 5, step A2, DJ Alice then plays a sequence of tracks called "alpha", "beta", "gamma", using mixing console 1. DJ Alice has played "alpha" and "beta" often before at previous music events, but this is the first time that DJ Alice has played track "gamma" in public.
DJ Alice starts her set at 12:00:00am by playing her first track, "alpha". At 12:05:00am, DJ Alice stops playing track "alpha" and begins playing track "beta".
At 12:10:00am, DJ Alice stops playing track "beta" and begins playing track "gamma".
As shown in Figure 5, block A3, client device 6 receives an analogue audio signal via the line input of the device 6 as the tracks are being played. Processor 8 is configured to generate audio data from the audio signal, as shown in Figure 5, block A4. As described above, the audio data comprises a plurality of sequentially generated audio data packets. In this example, each audio data packet is generated by digitising and subsequently downsampling and digitally compressing the audio signal received during each five second interval of play.
Referring to Figure 5, blocks A5 and A6, client device 6 is configured to sequentially transmit the audio data packets to the server 10 as they are generated at device 6. Each audio data packet is transmitted together with time data relating to the time, and optionally date, of play of the audio stored in each audio data packet. For example, the first audio data packet is transmitted together with a timestamp "12:00:00am", ie: the start time of the audio signal on which the first audio data packet is based. Similarly, the second audio data packet is transmitted together with a timestamp "12:00:05am", ie: the start time of the audio signal on which the second audio data packet is based.
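The client-side packetisation just described can be sketched as follows. This is a simplification: the downsampling and compression steps are omitted, timestamps are plain second offsets rather than clock times, and all names are illustrative rather than taken from the actual device firmware:

```python
def make_packets(samples, sample_rate, start_time, packet_seconds=5):
    """Split a mono sample stream into fixed-length audio data packets,
    each tagged with the start time of the audio it contains.

    `samples` is a flat list of audio samples, `sample_rate` is in
    samples per second, and `start_time` is the time (in seconds) at
    which the first sample was captured.
    """
    step = sample_rate * packet_seconds
    packets = []
    for i in range(0, len(samples), step):
        packets.append({
            # Offset of this packet's first sample, in whole seconds.
            "timestamp": start_time + (i // sample_rate),
            "audio": samples[i:i + step],
        })
    return packets
```

Each packet dict corresponds to one audio data packet plus its associated time data; in the described system the packet would then be compressed and transmitted to server 10 as it is produced.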
Referring to block A7, server 10 generates identification data from the received audio data. As each audio data packet is received at the server 10, the processor generates a corresponding identification data unit using an "audio fingerprint" identification process, as described above. In this example, fingerprints and corresponding identifiers for tracks "alpha" and "beta" are pre-stored in the fingerprint database comprised in the memory 14 of server 10. However, the fingerprint database does not include a fingerprint for the new track "gamma".
When the first audio data packet is received at the server 10, the audio identification process identifies that the segment relates to track "alpha", since a fingerprint for track "alpha" is stored in the fingerprint database. Thus, the processor generates an ID data unit comprising a unique identifier corresponding to track "alpha". The processor 15 identifies that this is the first ID data unit generated during the communication session and therefore generates track data for the first track. The processor 15 includes the name "alpha" in the "name" field of the track data, and populates the "start-time" field with the timestamp "12:00:00am" associated with the first audio data packet.
The processor 15 also includes the unique identifier for track "alpha" in the "track ID" field.
The processor 15 also transmits the track data to the device 6, which displays the track name "alpha" on the touchscreen 11. DJ Alice notices that her first track has been correctly identified.
The server 10 receives audio data packets relating to track "alpha" every five seconds until DJ Alice starts playing track "beta" at 12:05. Each such audio data packet is identified as relating to track "alpha" and a corresponding ID data unit is generated. The processor 15 skips these ID data units without generating new track data, because they all have the same identifier as was stored in the "track ID" field of the most recently generated track data, ie: the unique identifier for track "alpha".
However, at 12:05:00am the server 10 receives the first audio data packet relating to track "beta". An ID data unit comprising a unique identifier for track "beta" is thus generated by the processor 15. Since this ID data unit has a different identifier compared to the "track ID" field of the most recently generated track data, the processor generates new track data including the identifier for track "beta" and a start-time "12:05:00am". The new track data is stored together with the track data for track "alpha" in a tracklist data file in the memory 14. At this time, the processor updates the track data for track "alpha" by including an "end-time" of 12:05:00am, ie: the time data associated with the current ID data unit. The processor 15 also transmits the track data for track "beta" to the device 6, which adds the name "beta" to the tracklist displayed on the touchscreen 11. DJ Alice notices that her second track "beta" has also been correctly identified.
At 12:10:00am, DJ Alice starts to play track "gamma". The client device 6 generates a first audio data packet for track "gamma" and transmits it to the server 10. Server 10 fails to identify the audio data packet, because no fingerprint for track "gamma" is stored in the memory 14. Thus, server 10 generates an ID data unit indicating that the track has not been identified. Since this ID data unit has a different identifier to the identifier stored in the "track ID" field of the most recently generated track data, the server generates new track data, which it includes in the tracklist data. The server sets the "name" field of the new track data to "unknown" and sets the "track ID" field to a null value. The "start-time" field is set equal to 12:10:00am. The "end-time" field of the track data for track "beta" is also set equal to 12:10:00am. The track data is transmitted to the client device 6 for display to the DJ. Since the ID data unit indicates that track "gamma" has not been identified, the client device 6 adds the message "unknown track" to the list displayed on the touchscreen 11.
DJ Alice notices that track "gamma" has not been identified. She therefore manually identifies the track by entering the name "gamma" using the touchscreen 11. The processor 8 modifies the name field of the received track data to include the identifier "gamma" and transmits the track data to the server 10. The server modifies the tracklist data by replacing the track data for the present track with the received track data unit.
The ID data units generated for the next and subsequent audio data packets relating to track "gamma" also have a null identifier. The processor 15 skips these ID data units without generating new track data, because the track data for track "gamma" was generated with a null identifier in the "track ID" field.
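The per-packet logic walked through above (generate new track data when the identifier changes, otherwise skip the ID data unit) can be sketched as a small state machine. The `(identifier, timestamp)` unit format and the field names are assumptions made for illustration; a `None` identifier stands for an unidentified packet, matching the null "track ID" described in the text:

```python
def generate_tracklist(id_units):
    """Build tracklist data from sequential ID data units.

    Each unit is an (identifier, timestamp) pair. A new track entry is
    created whenever the identifier differs from the most recent
    entry's; units repeating the current identifier are skipped, and
    each new entry closes the previous one with an end time.
    """
    tracklist = []
    for ident, ts in id_units:
        if tracklist and tracklist[-1]["track_id"] == ident:
            continue  # same track (or same null ID) as before: skip
        if tracklist:
            tracklist[-1]["end_time"] = ts  # close the previous track
        tracklist.append({"track_id": ident, "start_time": ts,
                          "end_time": None})
    return tracklist
```

Run against the DJ Alice example, units for "alpha", "beta" and the unidentified "gamma" would yield three entries, the last with a null identifier, exactly as the walkthrough describes.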
Thus, a tracklist is generated on the server 10 while DJ Alice plays through her set. Figure 6 illustrates the tracklist data TD1. As shown, tracklist data TD1 comprises track data T1, T2 and T3 for tracks alpha, beta and gamma respectively. Each track data T1, T2, T3 has a "name" field F1, a "start-time" field F2, an "end-time" field F3 and a "track ID" field (not shown), which are populated with the values described above. As described above, other data may also be included in each track data.
There is no requirement for DJ Alice to actively interact with the device 6, although she may choose to do so when a track cannot be identified or is wrongly identified. Otherwise, she simply uses the mixer 1 to play through her set. Thus, the tracklist generation process is at least partly automatic and convenient to use, and does not unnecessarily distract DJ Alice from her performance.
When DJ Alice finishes her set, the client device 6 no longer receives an input signal and consequently no longer transmits audio data to the server 10. The server 10 is configured to end the communication session with the client device 6 if it does not receive any audio data from the device 6 for the duration of a predetermined period, for example for one minute.
Bob is an attendee at the music festival and has subscribed to the web service running on server 10. Bob arrives at the venue at which DJ Alice is playing at 12:07:00am. At this time DJ Alice is playing track "beta". However, Bob does not recognise this track. Bob uses an internet browser or other internet application on his mobile phone to login to the web service running on server 10 and thus views the tracklist for DJ Alice's set. Bob thus sees that the current track is called "beta". Bob also sees from the tracklist that DJ Alice has previously played track "alpha", and views the duration of track "alpha". At 12:10, DJ Alice starts playing track "gamma". Bob does not recognise this track either. Indeed this is the first time that this track has been played in public.
However, Bob uses his mobile phone to view the present tracklist for DJ Alice's set and sees that the track is called "gamma".
Thus, Bob has access to information regarding the track being played and also the previously played tracks via the web application running on server 10. Bob may also have access to links to purchase tracks or to mark the track as a favourite. Optionally, the server 10 may be configured to allow Bob to give a personal rating to a particular track, which is stored in the memory 14. In particular, Bob can access the track name and start time of each track, and also the end time of each previously played track.
Charlie is another subscriber to the web service running on server 10, but is not an attendee of the music festival. However, Charlie is interested in DJ Alice's music, and has requested that the web service send a copy of her tracklist to his email account. As a result, once DJ Alice's set is finished, the web service sends the generated tracklist to Charlie via email. Thus, Charlie is automatically provided with information regarding the tracks which DJ Alice has played at the festival, including the name and also the start-time, end-time and hence duration of each track.

The server 10 can be used by many other DJs, each using a client device 6. In one example, radio DJ Dave uses a client device 6 in connection with his mixer 1 at a radio studio. During each of DJ Dave's radio broadcasts, server 10 generates a tracklist of the tracks which DJ Dave plays. Subscribers to the web service running on the server may have access to this tracklist, as well as to other tracklists stored on the server 10. Thus, a subscriber hearing a particular song on the radio may identify the song by logging in to the web service. The subscriber can also see the previous tracks played during DJ Dave's radio programme.
Although the device 6 is described above as being separate from the mixing console 1, the device may alternatively be integrated with a mixing console 1. Further alternatively, a device 6 may be connected to devices other than a mixing console 1. For example, a client device 6 may be connected to an output of a sound system at a place of business such as a shop, bar, pub, hotel, restaurant, or other location where music is played to the public.
A disadvantage of the particular tracklist generation process described above is that if a first track finishes and a second track begins during the five second period in which the audio data packet is generated, then the generated audio data packet will include audio for two different tracks. This audio data packet will not be correctly identified by the audio identification process. If an audio data packet relating to a transition between two tracks cannot be identified, the processor 15 will generate spurious track data including a null or empty name field, which will be included in the final tracklist between tracks. Further, if the DJ pauses between tracks, for example to briefly speak to the audience, then spurious track data may be generated. Moreover, if the DJ makes live changes to the music, for example by mixing in samples of other recordings, the audio identification process may fail for some audio data packets, resulting in the generation of spurious track data.
Spurious track data is inconvenient for subscribers of the web service. Although users or administrators of the server 10 may correct the tracklist by manual correction, this is inefficient.
This problem may be addressed by modifying the track data generation process so that when the server 10 generates an ID data unit having a different identifier to that stored in the most recently generated track data, processor 15 does not generate new track data immediately. Instead, in the modified process the processor 15 is configured to wait until a predetermined number of ID data units relating to the new track have been sequentially generated. The predetermined number may for example be a fixed number between three and ten. In this way, the server 10 is configured to not generate new track data until it has determined that the new track is longer than a predetermined duration. For example, if the predetermined number is five, the server 10 is configured not to generate new track data until it has determined that the new track is longer than twenty five seconds.
This is illustrated in the example of Figure 7(a), which shows a set of sequentially generated ID data units I1-I11, earlier data units being further to the left. In this example and the examples which follow, a predetermined number of five will be used - of course this is not intended to be limiting. In Figure 7(a), I1 is the first ID data unit generated in the communication session and contains the identifier "A". Since the next four ID data units I2, I3, I4 and I5 which are generated also contain the identifier "A", the processor 15 generates new track data. The processor 15 then processes ID data unit I6. The processor 15 skips this ID data unit, since it contains the same identifier "A". The processor 15 then processes the next ID data unit I7. Since I7 contains a new identifier "B" and since the next four ID data units I8, I9, I10 and I11 also contain identifier "B", the processor 15 generates new track data for track "B".
When the processor 15 generates new track data in this way, the start time of the new track data is set equal to the time data associated with the first ID data unit generated for the new track. Where the ID data unit is not the first generated in the communication session, the end time of the previous track is set equal to the time data associated with the ID unit which immediately follows the last ID data unit associated with the previous track. Thus, in the example of Figure 7(a), when the track data for track "B" is generated, the start time for the new track is set equal to the time data associated with ID data unit I7. At this time, the end time for track "A" is also set equal to the time data associated with ID data unit I7.
In the case that five sequential ID data units containing a new identifier are not found, new track data is not generated. In the case that five sequential ID data units are then subsequently generated which each have the same identifier, and this identifier is the same as the identifier associated with the most recently generated track, then new track data is also not generated. However, in this case, processor 15 modifies the most recently generated track data to include an indication that the track contains an interruption. The processor 15 may also include the ID data units relating to the interruption in the track data.
This is illustrated in the example of Figure 7(b). In Figure 7(b), I12 is the first ID data unit generated in the communication session. Since the next four ID data units I13, I14, I15 and I16 which are generated also contain the identifier "A", the processor 15 generates track data for track "A". Although ID data units I17, I18 and I19 contain a new identifier "B", five sequential ID data units containing the new identifier are not found, and therefore new track data is not generated. However, as shown, five sequential data units I20, I21, I22, I23 and I24 are subsequently generated which each have the same identifier "A", which is the same as the identifier "A" for the most recently generated track data.
Therefore, new track data is not generated. However, the processor 15 modifies the track data for track "A" to include an indication that the track contains an interruption. The processor 15 also includes the ID data units I17, I18 and I19 in the track data, since these ID data units relate to the interruption. In this way, the information regarding the interruption is logged in the relevant track data.
Figure 7(c) shows another example. In Figure 7(c), I25 is the first ID data unit generated in the communication session. Since the next four ID data units I26, I27, I28 and I29 which are generated also contain the identifier "A", the processor 15 generates track data for track "A". Although ID data units I30, I31 and I32 contain a new identifier "B", five sequential ID data units containing the new identifier are not found, and therefore new track data is not generated.
However, as shown, five sequential data units I33, I34, I35, I36 and I37 are subsequently generated which each have the same identifier "C", and this is different to the identifier for the most recently generated track (ie: different to identifier "A"). Therefore, the processor 15 generates new track data for track "C". The start time of the track data for track "C" is set equal to the time data associated with the first ID data unit generated for the track "C", ie: the time data associated with ID data unit I33. The end time of the previously generated track data (ie: the track data for track "A") is set equal to the time data associated with the ID unit which immediately follows the last ID data unit associated with the previous track, ie: the time data associated with ID data unit I30.
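The modified process of Figures 7(a) to 7(c) - committing a new track only after a predetermined number of matching ID data units, and logging shorter foreign runs as interruptions - can be sketched as follows. The data shapes are illustrative, not the server's actual representation:

```python
def generate_tracklist_debounced(id_units, n=5):
    """Debounced tracklist generation.

    Each unit is an (identifier, timestamp) pair. A new track is
    committed only after `n` consecutive units share the same new
    identifier; a shorter run followed by a return to the current
    identifier is logged as an interruption of the current track.
    The previous track's end time is the time of the first unit after
    its last matching unit (cf. I30 in Figure 7(c)).
    """
    tracklist = []
    pending = []        # consecutive units sharing one candidate identifier
    boundary = None     # time of first unit after the current track's last unit
    current = object()  # sentinel: no current identifier yet
    for ident, ts in id_units:
        if ident == current:
            if pending:  # short foreign run ended: log as interruption
                tracklist[-1]["interruptions"].append(list(pending))
            pending, boundary = [], None
            continue
        if boundary is None:
            boundary = ts
        if pending and pending[-1][0] != ident:
            pending = []  # candidate run broken by another identifier
        pending.append((ident, ts))
        if len(pending) == n:  # run long enough: commit the new track
            if tracklist:
                tracklist[-1]["end_time"] = boundary
            tracklist.append({"track_id": ident,
                              "start_time": pending[0][1],
                              "end_time": None, "interruptions": []})
            current, pending, boundary = ident, [], None
    return tracklist
```

With n = 5 this reproduces all three figures: 7(a) yields two tracks, 7(b) yields one track carrying an interruption, and 7(c) yields tracks "A" and "C" with A's end time taken from the first "B" unit.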
This modified tracklist data generation process allows tracklist data to be reliably generated even when the DJ makes live changes to the music. For example, the process allows the start and end time of each track to be determined, even if there are interruptions or live changes between or during tracks, for example if the DJ mixes samples of other recordings into the track, changes the speed of play, or uses beat-matching, cross-fading or other techniques. Furthermore, the generation of spurious track data is avoided. Instead, interruptions to a track are logged with the relevant track data.
A disadvantage with the track generation process described above is that the tracklist may be generated with errors if the DJ plays two tracks simultaneously (ie: a mix). This is because although the audio identification process may be capable of identifying the tracks individually, it may not be able to identify the mix track.

Audio identification processes are known which can identify each of multiple sound recordings mixed together in a single stream, for example in a beat-matched stream of two simultaneously played recordings. An example of such a process is described in detail in WO02/11123A2. Thus, where an audio sample comprises multiple recordings played at the same time (ie: multiple layers), the identification process of WO02/11123A2 can identify each recording individually.
The track generation process can be modified to allow for the possibility that a track is a mix. In the modified process, the processor 15 is configured to generate an ID data unit for each received audio data packet using an audio identification process such as the process described in WO02/11123A2, or any other identification process which can identify multiple layers. Where the audio identification process identifies that a particular audio data packet relates to one or more tracks, the processor 15 stores an ID data unit comprising an identifier for each identified track. When the processor generates track data, the identifier(s) are included in an appropriate field or fields of the track data, along with details of the identified track(s).
In this modified process, the processor 15 generates new track data if the set of identifier(s) for a generated ID data unit is different to the set of identifiers for the most recently generated track data. In a similar manner to the modification described above with reference to Figures 7(a), (b) and (c), the processor 15 may wait until a predetermined number of such ID data units are generated before new track data is generated. That is, the processor 15 may only generate new track data when a predetermined number of ID data units are generated, each including the same new set of identifiers.
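The multi-layer variant can be sketched by treating each ID data unit as carrying a set of identifiers and comparing sets rather than single identifiers. For brevity this sketch simply omits runs shorter than the predetermined number rather than logging them as interruptions; the data shapes are again illustrative:

```python
def group_mix_units(id_units, n=5):
    """Group sequential multi-layer ID data units into tracks.

    Each unit is (identifiers, timestamp), where `identifiers` is the
    set of track IDs recognised in the packet (one per mixed layer).
    New track data is generated only when `n` consecutive units carry
    the same new set of identifiers.
    """
    tracks = []
    pending = []    # consecutive units sharing one candidate set
    current = None  # identifier set of the current track, if any
    for ids, ts in id_units:
        ids = frozenset(ids)
        if ids == current:
            pending = []  # short foreign run ended; current track continues
            continue
        if pending and pending[-1][0] != ids:
            pending = []  # candidate run broken by a different set
        pending.append((ids, ts))
        if len(pending) == n:  # commit the new (possibly mixed) track
            if tracks:
                tracks[-1]["end_time"] = pending[0][1]
            tracks.append({"track_ids": sorted(ids),
                           "start_time": pending[0][1],
                           "end_time": None})
            current, pending = ids, []
    return tracks
```

Applied to the DJ Eddie example below, short runs of {"Delta", "Epsilon"} units leave the "Delta" track in place, whereas a sustained run of {"Delta", "Zeta"} units produces new track data for the mix.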
In an example, club DJ Eddie uses a device 6 at a club night. DJ Eddie uses the device 6 in connection with his mixer 1, which he uses to play a sequence of tracks. DJ Eddie spends a few seconds speaking to the audience through a microphone connected to mixer 1 before he begins his set. The first track played is called "Delta". DJ Eddie introduces samples from track "Epsilon" at various times during track "Delta". DJ Eddie's second track is a mix of track "Delta" and another track, "Zeta". As his third track, DJ Eddie plays track "Zeta" on its own. As his fourth track, DJ Eddie plays track "Eta". DJ Eddie uses a cross-fade transition between the end of track "Zeta" and the beginning of track "Eta". Fingerprints for tracks "Delta", "Epsilon", "Zeta" and "Eta" are stored at server 10.
Figure 8 illustrates ID data units generated at server 10 during DJ Eddie's set.
ID data units 22 are generated while DJ Eddie speaks to the audience via the microphone, and are not identified by the audio identification process. Thus, each ID data unit 22 comprises a null identifier. Since only three ID data units 22 are generated, processor 15 does not generate track data.
ID data units 23 are generated while DJ Eddie plays track "Delta". These packets are identified and therefore each ID data unit contains one identifier, which is a unique identifier for track "Delta". Since five successive ID data units 23 are generated, track data is generated for track "Delta".
ID data units 24 are generated each time DJ Eddie introduces a sample from track "Epsilon". The samples are played together with track "Delta" and therefore each ID data unit 24 contains two identifiers, one for "Delta" and one for "Epsilon". Although these ID data units contain a different set of identifiers to the identifiers associated with the previous track data, the processor 15 does not generate new track data, since five successive ID data units 24 are not received. However, each interruption is logged in the track data for track "Delta" in the manner described above.
ID data units 25 are generated while DJ Eddie plays his second track. The audio identification process identifies that each of the audio data packets for the second track relates to tracks "Delta" and "Zeta". Therefore, each ID data unit is generated with two identifiers, one for "Delta" and one for "Zeta". Since the ID data units 25 contain a different set of identifiers to the identifiers associated with the most recently generated track data, new track data is generated for the second track. Identifiers for "Delta" and "Zeta" are stored with the track data.
ID data units 26 are generated while DJ Eddie plays his third track. The audio identification process identifies that this track relates to "Zeta" and therefore each ID data unit 26 is generated with an identifier for "Zeta". Since the ID data units 26 contain a different set of identifiers to the identifiers associated with the track data for the second track, new track data is generated for the third track.

ID data units 27 are generated while DJ Eddie implements a cross-fade between tracks "Zeta" and "Eta". The audio identification process identifies that each of the audio data packets during this period relates to tracks "Zeta" and "Eta".
Although the ID data units 27 contain a different set of identifiers to the identifiers associated with the track data for track "Zeta", the processor 15 does not generate new track data, since five successive ID data units 27 are not received. However, subsequently five ID data units 28 are generated containing the identifier "Eta". Therefore, the processor generates track data for track "Eta".

In this way, when the DJ mixes a new "layer" into a track, the processor 15 determines whether the new layer stops. If it does not stop, new track data is generated for the new mix track. However, if the new layer does stop, and the track subsequently continues as before without the extra layer, then the layer is identified as a sample/live edit, and logged as an interruption. On the other hand, if the new layer stops, but is followed by a new track, then new track data is generated for the new track.
In a further modification, the device 6 may have a video output to output video information, for example information displayed on the touchscreen such as information regarding the current track. Further, although the device 6 may receive an analogue audio signal, the client device 6 may alternatively be configured to receive a digital signal.
The device 6 and server 10 may be configured to allow users to mark a particular track as secret. An indication that a track is secret may be stored in the track data for the track. The server 10 may be configured not to include track data which is marked secret in any copyright collection report.
In another modification, device 6 is further configured for connection with known DJ equipment, for example the Pioneer CDJ-2000 turntable, via one of the USB ports 18, 19. The wireless receiver 13 is configured to receive audio/video content from a remote content source (not shown), via the Internet.
The DJ may login to the remote content source and select content for download using the touchscreen 11. Alternatively, the device 6 may be configured so that following login using the touchscreen 11, the DJ can select tracks for download using the CDJ-2000 turntable. The device 6 is configured to allow the turntable to access the downloaded content via the connected USB ports 18, 19. In this way, downloaded content can be streamed or copied from the device 6 to the turntable on-demand.
In a further modification, device 6 may output the received content via an analogue output, for example for connection with a stereo.
Further, although the system, device 6 and server 10 have been described as collecting music usage information for both consumer use and for copyright collection, alternatively music usage information may be gathered exclusively for copyright collection, or alternatively exclusively for consumer use.
Further, although the device 6 is described above as transmitting audio data packets to the server 10 at regular intervals, alternatively the audio data may be transferred as a continuous stream, or at varying frequency. The device 6 may choose an appropriate transmission mode depending on the quality and bandwidth of the internet connection.
In a further modification, device 6 may be configured to send the audio data in the form of a high quality audio data stream. The server 10 may be configured to record the audio stream and to make the audio stream available to users of the web service.
Furthermore, the device 6 may have a built-in buffer memory (not shown) to temporarily store generated audio data in the event that it cannot be uploaded to the server due to a temporary problem or interruption of the connection with the server 10. The buffer memory may comprise a hard disk or solid state drive.
The processor 8 may be configured to resume the connection at the earliest opportunity by transferring the audio data packets stored in the buffer to the server 10. If the generated audio data cannot be uploaded to the server 10 due to a sustained problem or interruption of the connection to the server 10, the processor 8 is configured to display a message on the touchscreen 11 to notify the user of this.
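The buffering behaviour can be sketched as a queue that retains packets while the connection is down and flushes them, oldest first, once transmission succeeds again. The `send` callback is a stand-in for whatever transport the device actually uses and is assumed to return True on success:

```python
import collections

class BufferedUploader:
    """Sketch of the client-side upload buffer.

    Packets that cannot be sent are queued in arrival order; each new
    upload attempt first tries to flush the backlog, so the server
    receives packets in their original sequence after an outage.
    """
    def __init__(self, send):
        self.send = send                  # callable(packet) -> bool
        self.buffer = collections.deque()

    def upload(self, packet):
        self.buffer.append(packet)
        return self.flush()

    def flush(self):
        while self.buffer:
            if not self.send(self.buffer[0]):
                return False  # connection still down; keep buffering
            self.buffer.popleft()
        return True
```

A real implementation would back the deque with the hard disk or solid state drive mentioned above, and would surface the sustained-failure case to the touchscreen rather than merely returning False.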
Furthermore, although the device 6 is described above as receiving an audio signal input via input 7, alternatively, the audio signal may be received via a microphone comprised in device 6. Alternatively, device 6 may have an input in the form of a 3.5 mm TRS jack (not shown), for connection with an external microphone.
Still further, the device may be able to expand its functionality through the installation of various applications/software plugins designed specifically for the device.
A device 6 may be used in connection with a radio, for example at a place of business. In this case, the signal received from the radio by the device 6 may include radio identification information generated by a Radio Data System (RDS). The processor 8 can be configured to transmit the radio identification information together with each audio data segment transmitted by device 6.
Processor 15 of server 10 can be configured to associate the radio identification information with each identification data unit. When processor 15 generates new track data, it may include radio identification information associated with an ID data unit in the new track data. In this way, server 10 is configured to log a track as deriving from a radio source. This information can be important for determining copyright royalties in particular cases. In particular, where a track is logged as deriving from a radio source, it may not be necessary to submit it to a collection society, as this data is already required from the radio station.
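The tagging step above could be sketched as follows. The record schema, field names and the notion of a "unit" are illustrative assumptions; the specification defines only that track data may carry the radio identification information associated with its ID data units.

```python
def build_track_data(id_units, radio_info=None):
    """Combine consecutive identification data units sharing one identifier
    set into a single track record, tagging it as radio-derived when RDS
    radio identification information accompanied the audio segments.
    """
    if not id_units:
        return None
    identifiers = id_units[0]["identifiers"]
    # Track data is only generated from units that agree on the identifiers.
    assert all(u["identifiers"] == identifiers for u in id_units)
    track = {"identifiers": identifiers, "units": len(id_units)}
    if radio_info is not None:
        track["source"] = "radio"          # logged as deriving from radio
        track["radio_id"] = radio_info     # e.g. an RDS programme identifier
    return track
```

Downstream, the royalty-reporting step would then skip any track record whose `source` field marks it as radio-derived.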
Furthermore, the device 6 may comprise a Global Positioning System (GPS) configured to provide positioning information relating to the location of the device. This information may be transferred to the server 10 for association with the tracklist data generated during each communication session.
As described above, the invention may be useful for example to DJs, fans, businesses and copyright collection agencies. Many other beneficiaries of the invention are envisaged, for example online download stores, clubs, promoters, music publishers, live musicians, record labels and songwriters.
Many other modifications and variations will be evident to those skilled in the art.

Claims (29)

1. A server for gathering music information comprising: a receiver configured to receive, during a communication session with a client apparatus, audio data relating to a plurality of sequentially played music tracks; and a processor configured to process the audio data to generate identification data relating to the music tracks.
2. A server as claimed in claim 1, wherein the identification data comprises a plurality of identification data units, each identification data unit relating to music played during one of a plurality of predetermined time intervals.
3. A server as claimed in claim 2, wherein multiple identification data units are generated for each track.
4. A server as claimed in claim 2 or claim 3, wherein the audio data comprises a plurality of sequentially received audio data packets.
5. A server as claimed in claim 4, wherein each identification data unit comprises one or more identifier(s) generated from one of said audio data packets using an audio identification process.
6. A server as claimed in claim 5, wherein the audio identification process comprises generating an acoustic fingerprint from an audio data packet and comparing the generated fingerprint against an acoustic fingerprint database.
7. A server as claimed in claim 6 further comprising a memory, wherein the processor is configured to generate tracklist data from the identification data and to store the tracklist data in the memory.
8. A server as claimed in claim 7, wherein the processor is configured to generate track data, and to generate the tracklist data by combining the generated track data.
9. A server as claimed in claim 8, wherein the processor is configured to generate track data when a predetermined number of identification data unit(s) are sequentially generated, each with the same set of identifier(s), if the shared set of identifier(s) differs from the set of identifier(s) associated with the most recently generated track data.
10. A server as claimed in claim 8 or claim 9, wherein the receiver is further configured to receive time information associated with the audio data from the client apparatus and wherein the processor is further configured to process the time information to generate time data for each track data.
11. A server as claimed in any of claims 7 to 10, wherein the music tracks are played at a music event and the receiver is further configured to receive event data relating to the music event and to associate the event data with the tracklist data.
12. A server as claimed in any of claims 7 to 11, further comprising an internet service configured to make the tracklist data available via the internet to users of the internet service.
13. Client apparatus for gathering music information comprising: a transmitter configured to transmit, during a communication session with a server, audio data relating to a plurality of sequentially played music tracks, the server having a processor configured to process the audio data to generate identification data relating to the music tracks.
14. Client apparatus as claimed in claim 13, wherein the transmitter is configured to transmit the audio data in the form of a plurality of sequential audio data packets.
15. Client apparatus as claimed in claim 13 or claim 14, having a transceiver comprising the transmitter, the transceiver being further configured to receive track data from the server, further comprising a display configured to display identification information relating to the track data.
16. Client apparatus as claimed in claim 15, further comprising a user interface configured to allow a user to modify the received track data if the displayed identification information indicates that a music track has not been identified or has been incorrectly identified, wherein the transceiver is configured to transmit the modified track data to the server.
17. Client apparatus as claimed in claim 16, comprising a device having components including the user interface, transceiver and display, wherein the user interface comprises a touchscreen.
18. Client apparatus as claimed in claim 16, comprising a first device including the transceiver and a second device including the user interface.
19. Client apparatus as claimed in any of claims 13 to 18, further comprising an input for receiving a signal relating to the plurality of music tracks, and a processor configured to process the received signal to generate said audio data.
20. Client apparatus as claimed in any of claims 13 to 19, further comprising a system clock configured to generate time information associated with the audio data, wherein the transmitter is further configured to transmit the time information to the server, the server being configured to process the time information to generate time data for each track data.
21. Client apparatus as claimed in any of claims 13 to 20, configured to receive audio/video content from a remote content source and to transmit the received content.
22. Client apparatus as claimed in claim 21, having an output for connection with audio/visual equipment, and configured to transmit the received audio and/or video content to the audio/visual equipment.
23. Client apparatus as claimed in claim 22, wherein the audio/visual equipment comprises disc jockey equipment.
24. Client apparatus as claimed in any of claims 13 to 23, wherein the client apparatus is portable.
25. Client apparatus as claimed in any of claims 13 to 24, having a wireless communication unit which comprises said transmitter/receiver.
26. An audio mixer comprising a client apparatus as claimed in any of claims 13 to 25.
27. A system for gathering music information comprising a client apparatus and a server, the client apparatus having a transmitter configured to transmit, during a communication session with the server, audio data relating to a plurality of sequentially played music tracks, the server having a receiver configured to receive the audio data and a processor configured to process the audio data to generate identification data relating to the music tracks.
28. A method for gathering music information comprising: transmitting, during a communication session, audio data relating to a plurality of sequentially played music tracks; receiving the audio data; and processing the audio data to generate identification data relating to the music tracks.
29. Client apparatus substantially as herein described with reference to Figure 2.
GB1002765A 2010-02-18 2010-02-18 Music usage information gathering Withdrawn GB2477940A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1002765A GB2477940A (en) 2010-02-18 2010-02-18 Music usage information gathering


Publications (2)

Publication Number Publication Date
GB201002765D0 GB201002765D0 (en) 2010-04-07
GB2477940A true GB2477940A (en) 2011-08-24

Family

ID=42114015

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1002765A Withdrawn GB2477940A (en) 2010-02-18 2010-02-18 Music usage information gathering

Country Status (1)

Country Link
GB (1) GB2477940A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9336360B1 (en) 2013-03-14 2016-05-10 Kobalt Music Group Limited Analysis and display of a precis of global licensing activities
FR3028367A1 (en) * 2014-11-06 2016-05-13 Vicente Miguel Salsinha DEVICE FOR IDENTIFYING A TELEVISION CHAIN
USD773491S1 (en) 2013-03-15 2016-12-06 Kobalt Music Group Limited Display screen with a graphical user interface
USD773492S1 (en) 2013-03-15 2016-12-06 Kobalt Music Group Limited Display screen with a graphical user interface
USD773490S1 (en) 2013-03-15 2016-12-06 Kobalt Music Group Limited Display screen with a graphical user interface
ES2703606A1 (en) * 2018-06-21 2019-03-11 Glove Systems S L AUDIO SIGNAL CODING DEVICE (Machine-translation by Google Translate, not legally binding)
US10319040B1 (en) 2013-03-14 2019-06-11 Ktech Services Limited Control of the generation and display of royalty administration and rights management data based on the user's rights of access

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001062004A2 (en) * 2000-02-17 2001-08-23 Audible Magic Corporation A method and apparatus for identifying media content presented on a media playing device
WO2003067467A1 (en) * 2002-02-06 2003-08-14 Koninklijke Philips Electronics N.V. Fast hash-based multimedia object metadata retrieval
WO2005081829A2 (en) * 2004-02-26 2005-09-09 Mediaguide, Inc. Method and apparatus for automatic detection and identification of broadcast audio or video programming signal
WO2005101998A2 (en) * 2004-04-19 2005-11-03 Landmark Digital Services Llc Content sampling and identification
WO2007048124A2 (en) * 2005-10-21 2007-04-26 Nielsen Media Research, Inc. Methods and apparatus for metering portable media players
US20080154401A1 (en) * 2004-04-19 2008-06-26 Landmark Digital Services Llc Method and System For Content Sampling and Identification




Similar Documents

Publication Publication Date Title
US7349663B1 (en) Internet radio station and disc jockey system
TWI559778B (en) Digital jukebox device with karaoke and/or photo booth features, and associated methods
US20090178003A1 (en) Method for internet distribution of music and other streaming content
US9071662B2 (en) Method and system for populating a content repository for an internet radio service based on a recommendation network
GB2477940A (en) Music usage information gathering
US8239327B2 (en) System and method for user logging of audio and video broadcast content
US20060156343A1 (en) Method and system for media and similar downloading
JP2001042866A (en) Contents provision method via network and system therefor
US20140031960A1 (en) System and method for presenting advertisements in association with media streams
US11496780B2 (en) System and method for production, distribution and archival of content
GB2416887A (en) A method of storing and playing back digital media content
KR101645288B1 (en) System and method for receiving and synchronizing content on a communication device
JP2009266083A (en) Trial listening content distribution system and terminal device
US20050111662A1 (en) Method for internet distribution of music and other streaming media
WO2001020493A1 (en) Audio information distributing/collecting device and method
US20120158769A1 (en) Music distribution and identification systems and methods
KR20050093777A (en) Mobile device that uses removable medium for playback of content
US20160217136A1 (en) Systems and methods for provision of content data
JP4238160B2 (en) Distribution system, server, and information distribution method
JP5817713B2 (en) Content reproduction system and center apparatus
US20070219906A1 (en) Methods and apparatus for selling music using sca information
KR101113431B1 (en) Method and System for the shop specialty broadcasting using download and play
US20220093070A1 (en) System and method for syncing music
JP4743259B2 (en) Distribution system, audio device, server, information distribution method, and related information display method
KR20080083075A (en) The real time download system and method of music file on the air

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)