CA2478697C - System, computer program and method for quantifying and analyzing musical intellectual property - Google Patents


Info

Publication number: CA2478697C
Application number: CA2478697A
Authority: CA (Canada)
Prior art keywords: performance, framework, elements, song, track
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: French (fr)
Other versions: CA2478697A1 (en)
Inventor: David Joseph Beckford
Current Assignee: SONIC SECURITIES Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: SONIC SECURITIES Ltd
Application filed by SONIC SECURITIES Ltd

Landscapes

  • Auxiliary Devices For Music (AREA)

Abstract

A method is provided for converting one or more electronic music files into an electronic musical representation. A song framework is provided that includes a plurality of rules and associated processing steps for converting an electronic music file into a song framework output. The song framework output defines one or more framework elements; one or more performance elements; and a performance element collective. The rules and processing steps are applied to each instrument track included in one or more electronic music files, thereby: detecting the one or more performance elements; classifying the performance elements; and mapping the performance elements to the corresponding framework elements. A related method is also provided for preparing the electronic music files before applying the rules and associated processing steps of the song framework. The output of the method of the present invention is a song framework output file. A computer system and computer program is also provided for processing electronic music files in accordance with the method of the invention. One aspect of the computer system is an electronic music registry which includes a database where a plurality of song framework output files are stored. The computer program provides a comparison facility that is operable to compare the electronic musical representations of at least two different electronic music files and establish whether one electronic music file includes original elements of another electronic music file. The computer program also provides a reporting facility that is operable to generate originality reports in regard to one or more electronic music files selected by a user.

Description

System, Computer Program and Method for Quantifying and Analyzing Musical Intellectual Property

Field of Invention

This invention relates generally to a methodology for representing a multi-track audio recording for analysis thereof. This invention further relates to a system and computer program for creating a digital representation of a multi-track audio recording in accordance with the methodology provided. This invention further relates to a system, computer program and method for quantifying musical intellectual property. This invention still further relates to a system, computer program and method for enabling analysis of musical intellectual property.
Background to Invention

The worldwide music industry generated $33.1 billion in revenue in 2001 according to the RIAA. The American music industry alone generated approximately $14 billion in 2001 (RIAA). Over 250,000 new songs are registered with ASCAP each year in the United States. According to Studiofinder.com, approximately 10,000 recording studios are active in the domestic US market. In reference to publisher back catalogs, EMI Music Publishing, for example, has over one million songs in its back catalog.
The revenue of the music industry depends on the protection of musical intellectual property. Digital music files, however, are relatively easy to copy or plagiarize. This represents a well- publicized threat to the ability of the music industry to generate revenue from the sale of music.
Various methods for representing music are known. The most common methods are "standard notation", MIDI data, and digital waveform visualizations.
Standard musical notation originated in the 11th century, and was optimized for the symphony orchestra approximately 200 years ago. The discrete events of standard notation are individual notes.
N:\corp\adefazek\Beckford, Dave\Can. Patent Appln\Appln as Filed-Aug04.doc
Another method is known as "MIDI", which stands for Musical Instrument Digital Interface. MIDI is the communication standard by which electronic musical instruments reproduce musical performances. MIDI, developed in 1983, is well known to people who are skilled in the art. The applications that are able to visualize MIDI data consist of known software utilities such as MIDI sequencing programs, notation programs, and digital audio workstation software.
The discrete events of MIDI are MIDI events. Digital waveforms are a visual representation of digital audio data. CD audio data can be represented at accuracy ratios of up to 1/44100 of a second. The discrete events of digital waveforms are individual samples.
Compositional infringement of music occurs when the compositional intent of a song is plagiarized (melody or accompanying parts) from another composition.
The scope of infringement may be as small as one measure of music, or may consist of the complete copying of the entire piece. Mechanical infringement occurs when a portion of another recorded song is incorporated into a new song without permission.
The technology required for mechanical infringement, such as samplers or computer audio workstations, is widespread because of legitimate uses. Depending on the length of the recording, the infringing party may also be liable for compositional infringement.
Intellectual property protection in regard to musical works and performances exists by virtue of the creation thereof in most jurisdictions. Registration of copyright or rights in a sound recording represents means for improving the ability of rights holders to enforce their rights in regard to their musical intellectual property.
It is also common to mail a musical work to oneself via registered mail as a means to prove date of authorship and fixation of a particular musical work.
Also, many songwriter associations offer a registration and mailing service for musical works. However, proving that infringement of musical intellectual property has occurred is a relatively complicated and expensive process, as outlined below.
This represents a significant barrier to enforcement of musical intellectual property, which in turn means that violation of musical intellectual property rights is relatively common.
In musical infringement, it is first generally determined whether the plaintiff owns a valid copyright or performance right in the material allegedly infringed. This is generally established by reference to the two layers of music/lyrics of a musical work or a sound recording. If the plaintiff owns a valid copyright or performance right, the next step is generally to establish whether the defendant has infringed the work or performance. This is usually decided on the basis of "substantial similarity".
Figure 1 shows a comparative analysis of two scored melodies by an expert witness musicologist.
In the United States, it is generally a jury who decides the issue of mechanical substantial similarity. The jury listens to the sample and the alleged source material and determines if the sample is substantially similar to the source.
Many shortfalls in individual music representations exist, such as the lack of representation in the analysis layer of music (motif, phrase, and sentence).
There is generally no standardized method for a song to communicate its elements.
Standard notation generally cannot communicate all elements of electronic and recorded music accurately. The following table illustrates a few of the shortfalls of standard notation vs. electronic / recorded music.
Musical Expression                          Standard Notation                Electronic / Recorded Music
Rhythm: positional divisions per beat       64 divisions / beat              1000 divisions / beat
Rhythm: durational quantize                 4 divisions / beat               1000 divisions / beat
Pitch: coarse pitch range                   12 semitones / octave            12 semitones / octave
Pitch: discrete tunings between semitones   None                             100
Pitch: variance within a note event         1 pitch per note                 pitch variance can be communicated 1000 times / beat
Articulation                                legato, staccato, accent         articulation envelopes can be modulated in real time
Dynamics                                    ppppp - ffffff, 12 (subjective) divisions   127 discrete points
Stereo panning                              None                             64 points left, 64 points right
Instrument-specific control                 None                             electronic instruments support performance automation of any parameter

In a MIDI file, mechanical data and compositional data are indistinguishable from each other. Metric context is not inherently associated with the stream of events, as MIDI timing is communicated as delta ticks between MIDI events.
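The delta-tick timing just described can be illustrated with a short sketch: metric context (bar and beat) is not carried by the MIDI stream itself and must be reconstructed from the accumulated deltas. This is not code from the patent; the tick resolution and the fixed 4/4 meter are assumptions for illustration only.

```python
# MIDI carries only delta ticks between events; bar/beat context must be
# reconstructed externally. Assumed constants: 480 ticks per quarter note,
# fixed 4/4 meter.

TICKS_PER_BEAT = 480
BEATS_PER_BAR = 4

def to_metric_positions(delta_ticks):
    """Convert a stream of delta ticks into (bar, beat, tick) positions."""
    positions = []
    absolute = 0
    for delta in delta_ticks:
        absolute += delta                      # accumulate deltas into absolute time
        bar, rem = divmod(absolute, TICKS_PER_BEAT * BEATS_PER_BAR)
        beat, tick = divmod(rem, TICKS_PER_BEAT)
        positions.append((bar + 1, beat + 1, tick))  # 1-based bars and beats
    return positions

# Five quarter-note onsets land on beats 1-4 of bar 1, then beat 1 of bar 2.
print(to_metric_positions([0, 480, 480, 480, 480]))
```

Note that the same delta stream under a different assumed meter would yield different bar/beat positions, which is precisely why metric context is not inherent in the MIDI data.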
The digital waveform display lacks musical significance. Musical data (such as pitch, meter, polyphony) is undetectable to the human eye in a waveform display.
Prior art representations of music therefore pose a number of shortfalls. One such shortfall arises from the linearity of music, since all musical representations are based on a stream of data. There is nothing to identify one point in musical time from another. Prior art music environments are generally optimized for the linear recording and playback of a musician's performance, not for the analysis of discrete musical elements.
Another shortfall arises from absolute pitch. Absolute pitch is somewhat ineffective for the visual and auditory comparison of music in disparate keys.
Western music has twelve tonal centers or keys. In order for a melody to be performed by a person or a musical device, the melody must be resolved to one of the twelve keys.
The difficulty that this poses in a comparison exercise is that a single relative melody can have any of twelve visualizations, in standard notation, or twelve numeric offsets in MIDI note numbers. In order for melodies to be effectively compared (a necessary exercise in determining copyright infringement), the melodies need to be rendered to the same tonal center. Figure 2 shows a single melody expressed in a variety of keys.
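The twelve-key comparison problem above can be sketched in a few lines: the same melody transposed to a different key produces different absolute MIDI note numbers but an identical sequence of relative intervals. This is an illustrative example, not the patent's method; the note sequences are hypothetical.

```python
# Reducing absolute pitch to relative intervals makes transposed copies of a
# melody directly comparable, as the text argues.

def intervals(midi_notes):
    """Reduce an absolute-pitch melody to its relative interval sequence."""
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

melody_in_c = [60, 62, 64, 65, 67]       # C D E F G
melody_in_eflat = [63, 65, 67, 68, 70]   # same melody, up three semitones

assert melody_in_c != melody_in_eflat                        # absolute pitches differ
assert intervals(melody_in_c) == intervals(melody_in_eflat)  # relative melody matches
```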
A number of limitations to current musical representations arise from their use in the context of enforcement of musical intellectual property. Few universally recognized standards exist for testing substantial similarity, or fair use in the music industry. There is also usually no standardized basis of establishing remuneration for sampled content. The test for infringement is generally auditory; the original content owner must have auditory access to an infringing song, and be able to recognize the infringed content in the new recording. Finally, the U.S. Copyright office, for example, does not compare deposited works for similarities, advise on possible copyright infringement, or consult on prosecution of copyright violations.
There is a need therefore for a musical representation system that relies on a relative pitch system rather than an absolute pitch system, in order to assist in the comparison of melodies. There is also a need for a musical representation system that enables the capture and comparison of most mechanical nuances of a recorded or electronic performance, as required for determining mechanical infringement.
There is a further need for a musical representation system that is capable of separating the compositional (theoretical) layer from the mechanical (performed) layer in order to determine compositional and/or mechanical infringement.
This representation would need to identify what characteristics of the musical unit change from instance to instance, and what characteristics are shared across instances.
Communicating tick accuracy and context within the entire meter would be useful to outline the metric framework of a song.
Preparation of Multi-track Audio for Analysis

Prior art technology allows for the effective conversion of an audio signal into various control signals that can be converted into an intermediate file. There are a number of third-party applications that can provide this functionality.
MIDI (referred to earlier) is best understood as a protocol designed for recording and playing back music on digital synthesizers that is supported by many makes of personal computer sound cards. Originally intended to control one keyboard from another, it was quickly adopted for use on a personal computer. Rather than representing musical sound directly, it transmits information about how music is produced. The command set includes note-ons, note-offs, key velocity, pitch bend and other methods of controlling a synthesizer. (From WHATIS.COM) The following inputs and preparation are required to perform a correct audio-to-MIDI conversion. The process begins with the digital audio multi-track.
Figure 3 illustrates a collection of instrument multi-track audio files (2). Each instrument track is digitized to a single continuous wave file of consistent length, with an audio marker at bar 0. Figure 4 shows a representation of a click track multi-track audio file (4) aligned with the instrument multi-track audio files (2). The audio click track file usually is required to be of the same length as the instrument tracks. It also requires the audio marker be positioned at bar 0. Then, a compressed audio format (e.g. mp3) of the two-track master is required for verification.
As a next step, a compressed audio format (e.g. mp3) of all of the samples used in the multi-track recording must then be disclosed. The source and time index of the sampled material are also required (see Figure 5).
Song environment data must be compiled to continue the analysis. The following environment data is generally required:
• Track sheet to indicate the naming of the instrument tracks;
• Total number of bars in song;
• Song structure with bar lengths. Every bar of the song must be included in a single song structure section (Verse - 16 bars, Chorus - 16 bars, etc.);
• Type and location of time signature changes within the song;
• Type and location of tempo changes within a song; and
• Type and location of key changes within a song.
Before audio tracks can be analyzed, the environment track must be defined. The environment track consists of the following: tempo, Microform family (time signature), key, and song structure. The method of verifying the tempo will be to measure the "click" track supplied with the multi-track. Tempo values will carry over to subsequent bars if a new tempo value is not assigned. If tempo is out of alignment with the click track, the tempo usually can be manually compensated. Figure 6 illustrates bar indicators (6) being aligned to a click track multi-track audio file (4). Current state-of-the-art digital audio workstations, such as Digidesign's Pro Tools, include tempo marker alignment as a standard feature.
Time signature changes are customarily supplied by the artist, and are manually entered for every bar where a change in time signature has occurred.
All time signatures are rotated to express the number of 8th notes in a bar. For example, 4/4 will be represented as 8/8. Time signature values will carry over to subsequent bars if a new time signature value is not assigned.
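The rotation rule above can be sketched as follows. This is a minimal illustration, not the patent's implementation; it assumes denominators that divide evenly into 8.

```python
# Re-express any time signature as a count of 8th notes per bar, so that
# e.g. 4/4 becomes 8/8, as described in the text.

def rotate_to_eighths(numerator, denominator):
    """Rotate a time signature to an equivalent x/8 representation."""
    if 8 % denominator != 0:
        raise ValueError("denominator must divide evenly into 8")
    return (numerator * (8 // denominator), 8)

assert rotate_to_eighths(4, 4) == (8, 8)   # the example given in the text
assert rotate_to_eighths(3, 4) == (6, 8)
assert rotate_to_eighths(6, 8) == (6, 8)   # already in eighths; unchanged
```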
Key changes are supplied by the artist, and are manually entered for every bar where a change in key has occurred. In case there is a lack of tonal data to measure the key by, the default key shall be C. Key values will carry over to subsequent bars if a new key value is not assigned.
Song structure tags define both section name and section length. Song structure markers are supplied by the artist and are manually entered at every bar where a structure change has occurred. Structure markers carry over for the number of bars assigned in the section length. All musical bars of a song must belong to a song structure section.
At the end of the environment track definition, every environment bar will indicate tempo, key, time signature and, ultimately, belong to a song structure. Figure 7 shows the final result of a song section as defined in the Environment Track.
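The carry-over rules for the environment track can be sketched as a fill-forward pass over the bars of a song. The data shapes and the default tempo here are assumptions for illustration; only the default key of C comes from the text.

```python
# Fill-forward sketch of the environment track: tempo, time signature, and
# key values carry over to subsequent bars until a new value is assigned,
# so every bar ends up fully specified.

def build_environment_track(total_bars, changes):
    """changes: {bar_number: {"tempo": ..., "time_sig": ..., "key": ...}}"""
    # Defaults are assumptions, except the default key of C stated in the text.
    current = {"tempo": 120, "time_sig": (8, 8), "key": "C"}
    track = {}
    for bar in range(1, total_bars + 1):
        current = {**current, **changes.get(bar, {})}  # apply any change at this bar
        track[bar] = dict(current)                     # snapshot the carried values
    return track

env = build_environment_track(4, {1: {"tempo": 96, "key": "G"}, 3: {"key": "D"}})
assert env[2]["key"] == "G"     # carried over from bar 1
assert env[4]["key"] == "D"     # carried over from bar 3
assert env[4]["tempo"] == 96    # tempo carries over unchanged
```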
After the environment track is defined, each track must be classified to determine the proper analysis process. The instrument tracks can be classified as follows:
A. Monophonic (pitched), which includes single-voice instruments, such as a trumpet.
B. Monophonic (pitched vocal), which includes vocals, such as a solo vocal.
C. Polyphonic (pitched), which includes multi-voice instruments, such as a piano, guitar, or chords.
D. Polyphonic (pitched vocal), which includes multiple vocals singing different harmonic lines.
E. Non-pitched (percussion), such as "simple" drum loops, where no pitch information is available, and individual percussion instruments.
F. Complex, such as full program loops and sound effects.
Figure 8 illustrates the process to generate (7) MIDI data (8) from an audio file (2), resulting in MIDI note data (10), and MIDI controller data (12).
The classifications A) through F) listed above are discussed in the following section, and are visualized in Figures 9-14.
The following data can be extracted from Audio-to-Control Signal Conversion: coarse pitch, duration, pitch bend data, volume, brightness, and note position.
Analysis Results for Various Track Classifications

                  Monophonic   Polyphonic   Percussion   Complex Wave
                  Analysis     Analysis     Analysis     Analysis
Coarse Pitch      x
Pitch bend data   x
Note Position     x            x            x            x
Volume            x            x            x            x
Brightness        x            x            x            x
Duration          x            x            x            x

Monophonic Audio-to-MIDI Analysis includes:
• pitch bend data, duration, volume, brightness, coarse pitch, and note position.
Polyphonic Audio-to-MIDI Analysis includes:
• volume, duration, brightness, and note position.
Percussion-to-MIDI Analysis includes:
• volume, duration, brightness, and note position.
Complex Audio-to-MIDI Analysis includes:
• volume, duration, brightness, and note position.
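The analysis results above can be summarized as a simple lookup from track classification to extractable data. The feature names are hypothetical labels for the data listed in the text, not identifiers from the patent.

```python
# Which data each track classification yields from audio-to-MIDI analysis,
# per the table above. Only monophonic analysis recovers pitch directly;
# for polyphonic tracks the user supplies coarse pitch as input metadata.

ANALYSIS_FEATURES = {
    "monophonic": {"coarse_pitch", "pitch_bend", "note_position",
                   "volume", "brightness", "duration"},
    "polyphonic": {"note_position", "volume", "brightness", "duration"},
    "percussion": {"note_position", "volume", "brightness", "duration"},
    "complex":    {"note_position", "volume", "brightness", "duration"},
}

assert "coarse_pitch" in ANALYSIS_FEATURES["monophonic"]
assert "coarse_pitch" not in ANALYSIS_FEATURES["polyphonic"]
# The four common features are shared by every classification.
common = set.intersection(*ANALYSIS_FEATURES.values())
assert common == {"note_position", "volume", "brightness", "duration"}
```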
Generated events and user input data are combined in various track classifications.
A. Monophonic - Pitched

Figure 9 illustrates the process to generate (7) MIDI data (8) from an audio file (2).
The user enters input metadata (12) that is specific to the Monophonic Pitched track classification.
Generated Events: Monophonic Audio-to-MIDI Analysis Data
User input events: Timbre - significant timbral changes can be noted with a MIDI text event

B. Monophonic - Pitched Vocal

Figure 10 illustrates the process to generate (7) MIDI data from an audio file (2), resulting in generated MIDI data (8). The user enters input metadata (12) that is specific to the Monophonic Pitched Vocal track classification.
Generated Events: Monophonic Audio-to-MIDI Analysis Data
User input events: Lyric - lyric syllables can be attached to Note events with a MIDI text event

C. Polyphonic - Pitched

Figure 11 illustrates the process to generate (7) MIDI data (8) from an audio file (2). The user enters input metadata (12) that is specific to the Polyphonic Pitched track classification.
Generated Events: Polyphonic Audio-to-MIDI Analysis Data
User input events: Coarse Pitch - user enters coarse pitch for simultaneous notes; Timbre - significant timbral changes can be noted with a MIDI text event

D. Polyphonic Pitched - Vocal

Figure 12 illustrates the process to generate (7) MIDI data (8) from an audio file (2). The user enters input metadata (12) that is specific to the Polyphonic Pitched Vocal track classification.
Generated Events: Polyphonic Audio-to-MIDI Analysis Data
User input events: Coarse Pitch - user enters coarse pitch for simultaneous notes; Lyric - lyric syllables can be attached to Note events with a MIDI text event

E. Non-Pitched, Percussion

Figure 13 illustrates the process to generate (7) MIDI data (8) from an audio file (2).
The user enters input metadata (12) that is specific to the Non-Pitched Percussion track classification.
Generated Events: Percussion, Non-Pitched Audio-to-MIDI Analysis
User input events: Timbre - user assigns timbres per note-on; generic percussion timbres can be mapped to reserved MIDI note-on ranges

F. Complex Wave

Figure 14 illustrates the process to generate (7) MIDI data (8) from an audio file (2).
The user enters input metadata (12) that is specific to the Complex Wave track classification.
Generated Events: Complex Audio-to-MIDI Analysis
User input events: Sample ID - reference to source and time index can be noted with a text event

There are generally two audio conversion workflows. The first is the local processing workflow. The second is the remote processing workflow.
Figure 15 illustrates the local processing workflow. The local processing workflow consists of multi-track audio (2) loaded (21) into a conversion workstation (20) by an upload technician (18). The conversion workstation is generally a known computer device including a microprocessor, such as for example a personal computer. Next, MIDI performance data (8) is generated (7) from the multi-track audio files (2). After the content owner (16) has entered (23) the input metadata (14) for all of the multi-track audio files (2), the input metadata (14) is combined (25) with the generated MIDI data (8) to form a resulting MIDI file (26).
Figure 16 illustrates the remote processing workflow. The remote processing workflow consists of multi-track audio (2) loaded (21) into the conversion workstation (20) by the upload technician (18). The upload technician (18) then generally forwards (27) a particular multi-track audio file (2) to an analysis specialist (24). Next, MIDI performance data (8) is generated (7) from the multi-track audio file (2) on the remote conversion workstation (20). At this point, the analysis specialist (24) enters (23) the input metadata (14) into the user input facility of the remote conversion workstation (20). After the analysis specialist (24) has entered (23) the input metadata (14) for the multi-track audio file (2), the input metadata (14) is combined (25) with the generated MIDI data (8) to form a resulting partial MIDI file (28). The partial MIDI file (28) is then combined (29) with the original MIDI file (26) from the local processing workflow.
In order to MIDI encode the environment track, tempo, key, and time signature are all encoded with their respective MIDI Meta Events. Song structure markers will be encoded as a MIDI Marker Event. MIDI encoding for track name and classification is encoded as MIDI Text events. MIDI encoding for control streams and user data from tracks is illustrated in the following table.
Table of MIDI Translations

Coarse Pitch       MIDI Note Number
Pitch Bend         Pitch Wheel Control
Volume             Volume Control 7
Brightness         Sound Brightness Control
Duration           Note On + Note Off timing
Lyric and Timbre   MIDI Text

Figure 17 illustrates the package that is delivered to the server (in a particular implementation of this type of prior art system where the conversion workstation (20) is linked to a server) for analysis. The analysis package consists of the following:
• formatted MIDI file;
• mp3 of 2-track master;
• mp3 of isolated sample files, with sources and time indexes;
• Artist particulars: song title, creation date, etc.; and
• Upload studio particulars and ID from machine used in upload.
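The meta-event encoding described above (tempo, time signature, and song structure markers) can be sketched with the raw Standard MIDI File meta-event byte layout, FF followed by a type byte, a length, and data. This is a generic SMF illustration, not the patent's encoder.

```python
# Raw Standard MIDI File meta-event encoders for the environment track data
# named in the text: tempo, time signature, and song structure markers.

def set_tempo_event(bpm):
    # FF 51 03 tttttt - tempo as microseconds per quarter note
    usec = round(60_000_000 / bpm)
    return bytes([0xFF, 0x51, 0x03]) + usec.to_bytes(3, "big")

def time_signature_event(numerator, denominator):
    # FF 58 04 nn dd cc bb - dd is stored as the exponent of 2 (8 -> 3);
    # cc (MIDI clocks per metronome click) and bb (32nds per quarter) use
    # the conventional defaults 24 and 8.
    dd = denominator.bit_length() - 1
    return bytes([0xFF, 0x58, 0x04, numerator, dd, 24, 8])

def marker_event(text):
    # FF 06 <len> <text> - used here for song structure markers
    data = text.encode("ascii")
    return bytes([0xFF, 0x06, len(data)]) + data

assert set_tempo_event(120) == bytes([0xFF, 0x51, 0x03, 0x07, 0xA1, 0x20])
assert time_signature_event(8, 8)[4] == 3      # denominator 8 stored as 2^3
assert marker_event("Verse")[3:] == b"Verse"
```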
Summary of Invention

One aspect of the present invention is a methodology for representing music in a way that is optimized for analysis thereof.
Another aspect of the present invention is a method for converting music files to a song framework. The song framework comprises a collection of rules and associated processing steps that convert a music file, such as a prepared MIDI file, into a song framework output. The song framework output constitutes an improved musical representation. The song framework enables the creation of a song framework output that generally consists of a plurality of framework elements and performance elements. Framework elements are constructed from environmental parameters in a prepared music file, such as a MIDI file, including parameters such as time signature, tempo, key, and song structure. For every instrument track in the prepared MIDI file, the performance elements are detected, classified, and mapped to the appropriate framework element.
Yet another aspect of the present invention is a song framework repository.
The song framework repository takes a framework output for a music file under analysis and normalizes its performance elements against a universal performance element collective, provided in accordance with the invention. The song framework repository also re-maps and inserts the framework elements of the music file under analysis into a master framework output store.
In accordance with another aspect of the present invention, a music representation system and computer program product is provided to enable the creation of a song framework output based on a music file.
Yet another aspect of the present invention is a reporting facility that enables generation of a plurality of reports to provide a detailed comparison of song framework outputs in the song framework repository.
A still other aspect of the present invention is a music registry system that utilizes the music representation system of the present invention.
Another aspect of the present invention is a music analysis engine that utilizes the music representation system of the present invention.
The proprietary musical representation of the current invention is capable of performing an analysis on a multi-track audio recording of a musical composition. The purpose of this process is to identify all of the unique discrete musical elements that constitute the composition, and the usage of those elements within the structure of the song.
The musical representation of the current invention has a hierarchical metric addressing system that communicates tick accuracy, as well as context within the entire metric hierarchy of a song. The musical representation of the current invention also determines the relative strength of positions within a metric structure.
The musical representation of the current invention relies on a relative pitch system rather than absolute pitch. The musical representation of the current invention captures all of the nuances of a recorded performance and separates this data into discrete compositional (theoretical) and mechanical (performed) layers.
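The separation into compositional and mechanical layers can be illustrated with a toy quantization sketch. This is an assumption-laden illustration, not the patent's actual analysis: it simply treats the nearest grid position as the compositional (theoretical) layer and the residual tick offset as the mechanical (performed) layer.

```python
# Toy separation of a performed note position into a compositional layer
# (nearest theoretical grid position) and a mechanical layer (the performed
# deviation from that grid). Grid resolution is an assumed constant.

TICKS_PER_16TH = 120  # assumed: 480 ticks per quarter, 16th-note grid

def split_layers(performed_tick):
    """Split a performed tick position into compositional and mechanical parts."""
    grid = round(performed_tick / TICKS_PER_16TH) * TICKS_PER_16TH
    return {"compositional": grid, "mechanical": performed_tick - grid}

# A note played 7 ticks late relative to the grid position at tick 480:
assert split_layers(487) == {"compositional": 480, "mechanical": 7}
# A note played 5 ticks early ("pushed") relative to the same grid position:
assert split_layers(475) == {"compositional": 480, "mechanical": -5}
```

The sum of the two layers always reconstructs the performed position, which mirrors the idea that the two layers together capture the full nuance of the recording.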
Brief Description of Drawings

Reference will now be made, by way of example, to the accompanying drawings, which show preferred aspects of the present invention, and in which:
Figure 1 illustrates a comparison of notated melodies.
Figure 2 illustrates a single melody in various keys.
Figure 3 is a diagram of multitrack Audio Files.
Figure 4 is a diagram of audio Files with Click Track.
Figure 5 illustrates an example of sample file, index and source.
Figure 6 illustrates tempo alignment to click track.
Figure 7 illustrates the song Section of an environment track.
Figure 8 illustrates the audio to Control Signal Conversion process.
Figure 9 illustrates the Monophonic Pitched classification inputs.
Figure 10 illustrates the Monophonic Pitched Vocal classification.
Figure 11 illustrates the Polyphonic Pitched classification.
Figure 12 illustrates the Polyphonic Pitched Vocal classification.
Figure 13 illustrates the Non-Pitched Percussion classification.
Figure 14 illustrates the Complex wave classification.
Figure 15 is a diagram of a local audio to MIDI processing workflow.
Figure 16 is a diagram of local and remote audio to MIDI Processing workflows.
Figure 17 illustrates an example of an upload page.
Figure 18 illustrates the time, tonality, expression, and timbre relationship.
Figure 19 illustrates carrier and modulator concepts related to standard notation.
Figure 20 illustrates a Note Event, which is a Carrier Modulator transaction.
Figure 21 illustrates the harmonic series applied to timbre, harmony, and meter.
Figure 22 illustrates a spectrum comparison between light and the harmonic series.
Figure 23 illustrates the harmonic series.
Figure 24 is a diagram of various sound wave views.
Figure 25 illustrates compression and rarefaction at various harmonics.
Figure 26 illustrates the 4=2+2 metric hierarchy.
Figure 27 illustrates a wave to meter comparison.
Figure 28 illustrates a metric hierarchy to harmonics comparison.
Figure 29 illustrates compression and rarefaction mapping to binary.
Figure 30 illustrates compression and rarefaction mapping to ternary, problem 1.
Figure 31 illustrates compression and rarefaction mapping to ternary, problem 2.
Figure 32 illustrates the compression and rarefaction mapping to ternary solution.
Figure 33 visualizes harmonic state notation.
Figure 34 illustrates the metric element hierarchy at the metric element.
Figure 35 illustrates the metric element hierarchy at the metric element group.
Figure 36 illustrates the metric element hierarchy at the metric element supergroup.
Figure 37 illustrates the metric element hierarchy at the metric element ultra group.
Figure 38 illustrates the harmonic layers of the 7 Ttbb Carrier structure.
Figure 39 illustrates the linear and salient ordering of two Carrier Structures.
Figure 40 illustrates the western meter hierarchy.
Figure 41 illustrates the Carrier hierarchy.
Figure 42 illustrates the Note Event concept.
Figure 43 illustrates the tick offset of a "coarse" position.
Figure 44 is a diagram of modulators on carrier nodes.
Figure 45 illustrates the compositional and Mechanical Layers in Music.
Figure 46 is a diagram of a compositional and mechanical Note Variant.
Figure 47 is a diagram of a compositional note event.
Figure 48 is a diagram of a mechanical note event.
Figure 49 is a diagram of a compositional Performance Element.
Figure 50 is a diagram of a mechanical Performance Element.
Figure 51 illustrates the western music hierarchy.
Figure 52 illustrates the musical hierarchy of the music representation of the current system.
Figure 53 is a diagram of a Microform Carrier.
Figure 54 is a diagram of a Microform Carrier, Nanoform Carrier Signatures with Nanoform Carriers.
Figure 55 is a diagram of Note Events bound to Nanoform nodes.
Figure 56 is a diagram of a Performance Element Modulator.
Figure 57 is a diagram of a Performance Element from a Carrier focus.
Figure 58 is a diagram of a Performance Element from a Modulator focus.
Figure 59 illustrates the 4 Bbb Carrier with linear, salient and metric element views.
Figure 60 illustrates the 8 B+BbbBbb Carrier with linear, salient and metric element views.
Figure 61 illustrates the 12 T+BbbBbbBbb Carrier with linear, salient and metric element views.
Figure 62 illustrates the 16 B++B+BbbBbbB+BbbBbb Carrier with linear, salient and metric element views.
Figure 63 illustrates the 5 Bbt Carrier with linear, salient and metric element views.
Figure 64 illustrates the 5 Btb Carrier with linear, salient and metric element views.
Figure 65 illustrates the 6 Btt Carrier with linear, salient and metric element views.
Figure 66 illustrates the 6 Tbbb Carrier with linear, salient and metric element views.

Figure 67 illustrates the 7 Tbbt Carrier with linear, salient and metric element views.
Figure 68 illustrates the 7 Tbtb Carrier with linear, salient and metric element views.
Figure 69 illustrates the 7 Ttbb Carrier with linear, salient and metric element views.
Figure 70 illustrates the 8 Tttb Carrier with linear, salient and metric element views.
Figure 71 illustrates the 8 Ttbt Carrier with linear, salient and metric element views.
Figure 72 illustrates the 8 Tbtt Carrier with linear, salient and metric element views.
Figure 73 illustrates the 9 Tttt Carrier with linear, salient and metric element views.
Figure 74 illustrates the 9 B+BbtBbb Carrier with linear, salient and metric element views.
Figure 75 illustrates the 9 B+BtbBbb Carrier with linear, salient and metric element views.
Figure 76 illustrates the 9 B+BbbBbt Carrier with linear, salient and metric element views.
Figure 77 illustrates the 9 B+BbbBtb Carrier with linear, salient and metric element views.
Figure 78 illustrates the 10 B+TbbbBbb Carrier with linear, salient and metric element views.
Figure 79 illustrates the 10 B+BbbTbbb Carrier with linear, salient and metric element views.
Figure 80 illustrates the 10 B+BbbBtt Carrier with linear, salient and metric element views.
Figure 81 illustrates the 10 B+BttBbb Carrier with linear, salient and metric element views.
Figure 82 illustrates the 10 B+BbtBbt Carrier with linear, salient and metric element views.
Figure 83 illustrates the 10 B+BbtBtb Carrier with linear, salient and metric element views.
Figure 84 illustrates the 10 B+BtbBbt Carrier with linear, salient and metric element views.
Figure 85 illustrates the 10 B+BtbBtb Carrier with linear, salient and metric element views.
Figure 86 illustrates the 11 B+BbtBtt Carrier with linear, salient and metric element views.
Figure 87 illustrates the 11 B+BbtTbbb Carrier with linear, salient and metric element views.
Figure 88 illustrates the 11 B+BbtBtt Carrier with linear, salient and metric element views.
Figure 89 illustrates the 11 B+BtbBtt Carrier with linear, salient and metric element views.
Figure 90 illustrates the 11 B+BtbTbbb Carrier with linear, salient and metric element views.
Figure 91 illustrates the 11 B+BttBbt Carrier with linear, salient and metric element views.
Figure 92 illustrates the 11 B+BttBtb Carrier with linear, salient and metric element views.
Figure 93 illustrates the 11 B+TbbbBbt Carrier with linear, salient and metric element views.
Figure 94 illustrates the 11 B+TbbbBtb Carrier with linear, salient and metric element views.
Figure 95 illustrates the 12 B+BttBtt Carrier with linear, salient and metric element views.
Figure 96 illustrates the 12 B+TbbbTbbb Carrier with linear, salient and metric element views.
Figure 97 illustrates the 12 B+BttTbbb Carrier with linear, salient and metric element views.
Figure 98 illustrates the 12 B+TbbbBtt Carrier with linear, salient and metric element views.
Figure 99 illustrates the Thru Nanoform Carrier with linear, salient and metric element views.
Figure 100 illustrates the 2 b Nanoform Carrier with linear, salient and metric element views.
Figure 101 illustrates the 3 t Nanoform Carrier with linear, salient and metric element views.
Figure 102 illustrates the 4 Bbb Nanoform Carrier with linear, salient and metric element views.
Figure 103 illustrates the 6 Btt Nanoform Carrier with linear, salient and metric element views.
Figure 104 illustrates the 5 Bbt Nanoform Carrier with linear, salient and metric element views.
Figure 105 illustrates the 5 Btb Nanoform Carrier with linear, salient and metric element views.
Figure 106 illustrates the 8 B+BbbBbb Nanoform Carrier with linear, salient and metric element views.
Figure 107 is a diagram of a Performance Element Collective.
Figure 108 is a diagram of a Macroform.
Figure 109 is a diagram of a Macroform with Microform class and Performance Events.
Figure 110 is a diagram of a Musical Structure Framework Modulator.
Figure 111 is a diagram of an Environment Track.
Figure 112 is a diagram of an Instrument Performance Track with mapped Performance Element.
Figure 113 is a diagram of a Musical Structure Framework from a Carrier Focus.
Figure 114 is a diagram of a Musical Structure Framework from a Modulator Focus.
Figure 115 is a diagram of the Song Module Anatomy.
Figure 116 is a diagram of the top level MIDI to Song Module translation process.
Figure 117 is a diagram of the Audio to MIDI conversion application facilities.
Figure 118 is a diagram of the Translation Engine facilities.
Figure 119 is a diagram of a Framework sequence created by song structure markers.
Figure 120 illustrates the creation of a Macroform and Microform Class from MIDI data.
Figure 121 illustrates the creation of an Environment Track and Instrument Performance from MIDI data.
Figure 122 is a diagram of the Performance Element creation process.
Figure 123 illustrates the Microform class setting capture range on MIDI data.
Figure 124 illustrates the capture detection algorithm.
Figure 125 illustrates the capture range to Nanoform allocation table.
Figure 126 illustrates Candidate Nanoforms compared in Carrier construction.
Figure 127 illustrates the salient weight of active capture addresses in each Nanoform.
Figure 128 illustrates the salient weight of nodes in various Microform carriers.
Figure 129 illustrates Microform salience ambiguity examples.
Figure 130 illustrates the note-on detection algorithm.
Figure 131 illustrates the control stream detection algorithm.
Figure 132 illustrates control streams association with note events.
Figure 133 illustrates Modulator construction from detected note-ons and controller events.
Figure 134 illustrates Carrier detection result, Modulator detection result, and association for a Performance Element.
Figure 135 is a diagram of the Performance Element Collective equivalence tests.
Figure 136 illustrates the context summary comparison flowchart.
Figure 137 illustrates the compositional partial comparison flowchart.
Figure 138 illustrates the temporal partial comparison flowchart.
Figure 139 illustrates the event expression stream comparison flowchart.
Figure 140 illustrates Performance Element indexes mapped to Instrument Performance Track.
Figure 141 is a diagram of the Song Module Repository normalization and insertion process.
Figure 142 is a diagram of the Song Module Repository facilities.
Figure 143 illustrates the re-classification of local Performance Elements.
Figure 144 illustrates Instrument Performance Track re-mapping.
Figure 145 illustrates Song Module insertion and referencing.
Figure 146 is a diagram of the system reporting facilities.
Figure 147 illustrates an originality Report.
Figure 148 illustrates the Similarity reporting process.
Figure 149 illustrates compositionally similar Performance Elements in Performance Element Collectives.
Figure 150 illustrates the comparison of mechanical Performance Elements.
Figure 151 illustrates a full Musical Structure Framework comparison.
Figure 152 illustrates a distribution of compositionally similar Performance Elements in the Musical Structure Frameworks.
Figure 153 illustrates a distribution of mechanically similar Performance Elements in the Musical Structure Frameworks.
Figure 154 illustrates a standalone computer deployment of the system components.
Figure 155 illustrates a client / server deployment of the system components.
Figure 156 illustrates a client / server deployment of satellite Song Module Repositories and a Master Song Module Repository.
Figure 157 is a diagram of the small-scale registry process.
Figure 158 is a diagram of the enterprise registry process.
Figure 159 illustrates a comparison of Standard Notation vs. the Musical representation of the current system.
Figure 160 illustrates the automated potential infringement notification process.
Figure 161 illustrates the similarity reporting process.
Figure 162 illustrates the Content Verification Process.
Detailed Description
The detailed description sets out one or more embodiments of some of the aspects of the present invention.
The detailed description is divided into the following headings and sub-headings:
(1) "Theoretical Concepts" - which describes generally the theoretical concepts that comprise the music representation method of the present invention.
"Theoretical Concepts" consists of "Carrier Theory" and "Modulator Theory" sections.
(2) "Theoretical Implementation" - which describes generally the implementation of the music representation method of the present invention. "Theoretical Implementation" consists of "Performance Element", "Performance Element Collective" and "Framework Element" sections.
(3) "Song Framework Functionality" - which describes the operation of the song framework functionality of the present invention whereby performance data from a MIDI file is translated into the music representation method of the present invention. "Song Framework Functionality" consists of "Process to create Framework Elements and Instrument Performance Tracks from MIDI file data", "Process to create a Performance Element from a bar of MIDI data", and "Classification and mapping of Performance Elements" sections.
(4) "Framework Repository Functionality" - which describes generally the database implementation of the present invention.
(5) "Applications" - which describes generally a plurality of system and computer product implementations of the present invention.
Theoretical Concepts
The music representation methodology of the present invention is best understood by reference to base theoretical concepts for analyzing music.
The American Heritage Dictionary defines "music" as follows: "vocal or instrumental sounds possessing a degree of melody, harmony, or rhythm."
Western Music is, essentially, a collocation of tonal and expressive parameters within a metric framework. This information is passed to an instrument, either manually or electronically, and a "musical sound wave" is produced. Figure 18 shows the relationship between time, tonality, expression, timbre and a sound waveform.
Music representation focuses on the relationship between tonality, expression, and meter. A fundamental concept of the musical representation of the current invention is to view this as a carrier / modulator relationship. Meter is a carrier wave that is modulated by tonality and expression. Figure 19 illustrates the carrier / modulator relationship and shows how the concepts can be expressed in terms of standard notation.
The musical representation of the current invention defines a "note event" as a transaction between a specific carrier point and a modulator. Figure 20 illustrates this concept.
The carrier concept is further discussed in the "Carrier Theory" section (below), and the modulator concept is further discussed in the "Modulator Theory"
section (also below).
Carrier Theory
Carrier wave = "a ...wave that can be modulated... to transmit a signal."
This section presents the background justification for carrier theory, an introduction to carrier theory notation, carrier salience, and finally carrier hierarchy.
In order to communicate the carrier concepts adequately, supporting theory must first be reviewed. The background theory for carrier concepts involves a discussion of harmonic series, sound waves, and western meter structures.
Figure 21 compares the spectrum of light to a "spectrum" of harmonic series.
Just as light ranges from infrared to ultraviolet, incarnations of the harmonic series range from meter at the slow end of the spectrum to timbre at the fast end of the spectrum.
Timbre, Harmony and Meter can all be expressed in terms of a harmonic series. Figure 22 illustrates the various spectrums of the harmonic series. In the "timbral" spectrum of the harmonic series, the fundamental tone defines the base pitch of a sound, and harmonic overtones combine at different amplitudes to produce the quality of a sound. In the "harmonic" spectrum of the harmonic series, the fundamental defines the root of a key, and the harmonics define the intervallic relationships that appear in a chord or melody. Finally, in the "meter/hypermeter" spectrum of the harmonic series, the fundamental defines the "whole" under consideration, and the harmonics define metrical divisions of that "whole".
The following are some key terms and quotes from various sources that support the spectrum of harmonic series concept:
Harmonic
A tone [or wavelength] whose frequency is an integral multiple of the fundamental frequency.
Harmonic Series
The harmonic series is an infinite series of numbers constructed by the addition of numbers in a harmonic progression. The harmonic series is also a series of overtones or partials above a given pitch (see Figure 23).
Meter
Zuckerkandl views meter as a series of "waves," of continuous cyclical motions, away from one downbeat and towards the next. As such, meter is an active force: a tone acquires its special rhythmic quality from its place in the cycle of the wave, from "the direction of its kinetic impulse."
University of Indiana - Rhythm and Meter in Tonal Music
Hypermeter
Hypermeter is meter at levels above the notated measures. That is, the sense that measures or groups of measures organize into hypermeasures, analogous to the way that beats organize into measures. William Rothstein defines hypermeter as the combination of measures according to a metrical scheme, including both the recurrence of equal sized measure groups and a definite pattern of alternation between strong and weak measures.
University of Indiana - Rhythm and Meter in Tonal Music
Timbre
Joseph L. Monzo - Harmonic Series, Definition of Tuning Terms Harmony "Because Euro-centric ( Western ) harmonic practice has tended to emphasize or follow the types of intervaIlic structures embedded in the lower parts of the harmonic series, it has often been assumed as a paradigm or template fo>"
harmony."
Joseph L. Monzo - Harmonic Series, Definition of Tuning Terms Interconnection between Harmony and Meter "Harmony and Rhythm are really the same thing, happening at 2 different speeds. By slowing harmony down to the point where pitches become pulses, I
have observed that only the most consonant harmonic intervals become regularly repeating rhythms, and the more consonant the interval, the more repeating the rhythm. Looking at rhythm the opposite way, by speeding it up, reveals identical physical processes involved in the creation of both. Harmony is very fast rhythm."
Steven Jay - The Theory of Harmonic Rhythm Sound waves are longitudinal, alternating between compression and rarefaction. Also, sound waves can be reduced to show compression /
rarefaction happening at different harmonic levels.

Figure 24 shows a longitudinal and graphic view of sound pressure oscillating to make a sound wave. Figure 25 shows compression / rarefaction occurring at various harmonics within a complex sound wave.
Everything in western meter is reduced to a grouping of 2 or 3.
Binary: Strong - weak
Ternary: Strong - weak - weak
These binary and ternary groupings assemble sequentially and hierarchically to form meter in western music.
4 = 2 + 2 Binary grouping of binary elements
6 = 2 + 2 + 2 Ternary grouping of binary elements
7 = 2 + 2 + 3 Ternary grouping of binary and ternary elements
Figure 26 visualizes the 4=2+2 metric hierarchy.

The following are some key terms and quotes from various sources that support the metric hierarchy concept:
Architectonic
Rhythm is organized hierarchically and is thus "an organic process in which smaller rhythmic motives also function as integral parts of the larger rhythmic organization".
University of Indiana - Rhythm and Meter in Tonal Music
Metrical Structure
Metrical structure is the psychological extrapolation of evenly spaced beats at a number of hierarchical levels. Fundamental to the idea of meter is the notion of periodic alternation between strong and weak beats. For beats to be strong or weak there must exist a metrical hierarchy. If a beat is felt to be strong at a particular level, it is also a beat at the next larger level.
Lerdahl & Jackendoff
Conceptually, the wave states of compression / rarefaction can map to the meter states of strong / weak. Figure 27 illustrates the comparison. Hierarchical metrical layers can also map conceptually to harmonic layers, as illustrated by Figure 28.
The mapping of compression / rarefaction states to the binary form is self-evident, as Figure 29 indicates.
The mapping of compression / rarefaction states to the ternary form is not as straightforward because of the differing number of states. This is illustrated in Figure 30.
The compression state maps to the first form state, and the rarefaction state maps to the last form state. Figure 31 illustrates that the middle form state is a point of ambiguity. The proposed solution, illustrated by Figure 32, is to assign compression to the first element only, and make the rarefaction compound, spread over the 2nd and 3rd elements.
The Carrier theory notation discussion involves harmonic state notation, Carrier Signature formats, and the metric element hierarchy used to construct carrier structures.
A decimal-based notation system is proposed to notate the various states of binary and ternary meter. Specifically:
0 Compression (common for binary & ternary)
5 Binary rarefaction
3 Ternary initial rarefaction
6 Ternary final rarefaction
Figure 33 shows the harmonic state allocation for binary and ternary meter.
The harmonic state "vocabulary", therefore, as stated above is: 0, 3, 5, and 6.
These harmonic states are also grouped into metric elements.

binary metric element: 2 carrier nodes
ternary metric element: 3 carrier nodes
The following table illustrates the concept of a Carrier Signature and its component elements:
Carrier Signature elements
Symbol  Name                               Definition
#       -                                  total number of nodes in the carrier
b       Binary metric element              a structure consisting of 2 carrier nodes
t       Ternary metric element             a structure consisting of 3 carrier nodes
B       Binary metric element group        a container consisting of 2 metric elements
T       Ternary metric element group       a container consisting of 3 metric elements
B+      Binary metric element supergroup   a container consisting of 2 metric element groups
T+      Ternary metric element supergroup  a container consisting of 3 metric element groups
B++     Binary metric element ultragroup   a container consisting of 2 metric element supergroups
Harmonic state locations are notated big endian.
The following table illustrates the hierarchical arrangement of metric elements:
Metric element: metric elements form sequences of metric units.
Figure 34 visualizes binary and ternary metric elements.
Metric element group: metric element groups contain metric elements. A metric element group can contain any combination of metric elements. Figure 35 visualizes a metric element group.
Metric element supergroup: metric element supergroups contain binary or ternary metric element groups inclusively. Figure 36 visualizes a metric element supergroup.
Metric element ultragroup: metric element ultragroups contain metric element supergroups inclusively. Figure 37 visualizes a metric element ultragroup.
The following table illustrates a metric element group carrier (see Figure 35 for visualization).
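The Carrier Signature conventions above lend themselves to a quick consistency check: only the lowercase elements b and t carry nodes (2 and 3 respectively), while B, T, B+, T+ and B++ are grouping containers. A minimal sketch, assuming a signature string of the form "count name" (the function name is our own, not the patent's):

```python
# Hypothetical sketch: verify that a Carrier Signature's leading node count
# matches the metric elements it names. Lowercase "b" contributes 2 carrier
# nodes and "t" contributes 3; uppercase containers contribute none.
def node_count(signature: str) -> int:
    total, name = signature.split(" ", 1)
    nodes = sum(2 if ch == "b" else 3 for ch in name if ch in "bt")
    if nodes != int(total):
        raise ValueError(f"{signature!r}: counted {nodes} nodes, expected {total}")
    return nodes

print(node_count("8 B+BbbBbb"))              # 8
print(node_count("7 Ttbb"))                  # 7
print(node_count("16 B++B+BbbBbbB+BbbBbb"))  # 16
```

This check reproduces the node totals of the signatures listed in the figure descriptions, e.g. 5 Bbt (2 + 3) and 6 Btt (3 + 3).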
Carrier Signature 5 Bbt
meter pos  metric element  harmonic state notation
1          b               00
2          b               05
3          t               50
4          t               53
5          t               56
The following table illustrates a metric element supergroup carrier (see Figure 36 for visualization).
Carrier Signature 8 B+BbbBbb
meter pos  harmonic state notation
1          000
2          005
3          050
4          055
5          500
6          505
7          550
8          555

The following table illustrates a metric element ultragroup carrier (see Figure 37 for visualization).
Carrier Signature 16 B++B+BbbBbbB+BbbBbb
meter pos  harmonic state notation
1          0000
2          0005
3          0050
4          0055
5          0500
6          0505
7          0550
8          0555
9          5000
10         5005
11         5050
12         5055
13         5500
14         5505
15         5550
16         5555

The carrier salience discussion involves introducing the concept of carrier salience, the process to determine the salient ordering of carrier nodes, and the method of weighting the salient order.
The following term, relevant to the carrier salience discussion, is defined as follows:
Salience:
perceptual importance; the probability that an event or pattern will be noticed.
Every carrier position participates in a harmonic state at multiple levels.
Since the "cross section" of states is unique for each position, a salient ordering of the positional elements can be determined by comparing these harmonic "cross sections".
Figure 38 shows the multiple harmonic states for the Carrier 7 Ttbb.
The process to determine the salient order of carrier nodes is as follows:
1) Convert from big endian to little endian representation
Position  big endian     little endian
1         00          -> 00
2         03          -> 30
3         06          -> 60
4         30          -> 03
5         35          -> 53
6         60          -> 06
7         65          -> 56
2) Assign a lexicographic weighting to the harmonic states based on a ternary system
Harmonic state  ternary weighting
0               2
3               1
5               0
6               0
The weighting is based on the potential energy of the harmonic state within a metric element.
The lexicographic weighting is derived from the little endian harmonic states.
Position  big endian     little endian     lexicographic weighting
1         00          -> 00             -> 22
2         03          -> 30             -> 12
3         06          -> 60             -> 02
4         30          -> 03             -> 21
5         35          -> 53             -> 01
6         60          -> 06             -> 20
7         65          -> 56             -> 00
3) Perform a descending order lexicographic sort of the ternary values
Position  big endian     little endian     lexicographic weighting
1         00          -> 00             -> 22
4         30          -> 03             -> 21
6         60          -> 06             -> 20
2         03          -> 30             -> 12
3         06          -> 60             -> 02
5         35          -> 53             -> 01
7         65          -> 56             -> 00
The salient ordering process yields the following results for this metrical structure.
Salient order (most to least salient position): 1, 4, 6, 2, 3, 5, 7
Once a salient ordering for a metric structure is determined, it is possible to provide a weighting from the most to the least salient elements.
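The three-step salient-ordering procedure above can be sketched in code. This is a hypothetical illustration (names are our own); the state-to-weight table encodes the ternary potential-energy weighting given in step 2:

```python
# Sketch of the salient-ordering procedure: reverse each big-endian harmonic
# state (step 1), map its digits to ternary potential-energy weights (step 2),
# then sort positions by weight, descending lexicographically (step 3).
STATE_WEIGHT = {"0": 2, "3": 1, "5": 0, "6": 0}

def salient_order(big_endian_states):
    """Return carrier positions (1-based) from most to least salient."""
    def weight_key(item):
        _, state = item
        little_endian = state[::-1]                            # step 1
        return tuple(STATE_WEIGHT[d] for d in little_endian)   # step 2
    indexed = list(enumerate(big_endian_states, start=1))
    indexed.sort(key=weight_key, reverse=True)                 # step 3
    return [pos for pos, _ in indexed]

# Harmonic states of the 7 Ttbb carrier, one per position (big endian):
print(salient_order(["00", "03", "06", "30", "35", "60", "65"]))
# -> [1, 4, 6, 2, 3, 5, 7]
```

Run against the 7 Ttbb states, this reproduces the salient order derived in the tables above.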
Salient weighting is based on a geometric series where:
- r = 2
- n = # metric elements
- S_n = r^0 + r^1 + r^2 + r^3 + r^4 ... + r^(n-1)
- salient weight of the metric position with salient rank k = r^(n-k)
- total salient weight of a metric structure = (r^n - r^0) / (r - r^0)
Figure 39 shows linear and salient ordering of two carrier forms.
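Assuming the rank convention that the most salient of n elements (rank 1) receives r^(n-1) and the least salient receives r^0, the geometric weighting can be sketched as follows (a minimal illustration, not code from the patent):

```python
# Geometric salient weighting with r = 2: each step down in salience halves
# the weight, and the total equals the geometric series sum (r^n - 1)/(r - 1).
def salient_weight(rank: int, n: int, r: int = 2) -> int:
    """Weight of the element at salient rank `rank` (1 = most salient)."""
    return r ** (n - rank)

def total_salient_weight(n: int, r: int = 2) -> int:
    """(r^n - r^0) / (r - r^0), the sum r^0 + r^1 + ... + r^(n-1)."""
    return (r ** n - 1) // (r - 1)

weights = [salient_weight(k, 7) for k in range(1, 8)]
print(weights)                  # [64, 32, 16, 8, 4, 2, 1]
print(total_salient_weight(7))  # 127, equal to sum(weights)
```

With this scheme, the most salient node of a 7-node structure outweighs all less salient nodes combined, which is the property a geometric series with r = 2 guarantees.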
The carrier hierarchy discussion involves the presentation of the existing western meter hierarchy, the introduction of the metric hierarchy of the musical representation of the current invention, and the combination of the metric levels of the musical representation of the current system.
Figure 40 shows the western meter hierarchy as it exists currently. A sentence is composed of multiple phrases, phrases are composed of multiple bars, and finally bars are composed of a number of beats.
The concept of a time signature is relevant to the carrier hierarchy discussion and is defined as follows:
- The top number indicates the number of beats in a bar
- The bottom number indicates the type of beat
For the example "4/4", there are 4 beats in the bar and the beat is a quarter note.
Therefore the carrier hierarchy of the musical representation methodology of the current invention is illustrated in the following tables:
Macroform Carrier (0000.000.000)
Scope: approximates the period / phrase level of western meter
Structure: Macroform elements are not of uniform size. Actual structure is determined by the Microforms that are mapped to the Macroform node.

Microform Carrier (0000.000.000)
Scope: bar level of western meter
Structure: Microform elements are of uniform size. Microforms have a universal /8 time base. All /4 time signatures are restated in /8, i.e.) 3/4 -> 6/8, 4/4 -> 8/8.
Nanoform Carrier (0000.000.000)
Scope: contained within the beat level of western meter
Structure: Nanoform positional elements can alter in size, but all event combinations must add up to a constant length of a beat.
2 - 3 Note event positions within beat ( l6tn /24th note equivalent ) II
4 - 6 Note event positions within beat ( 32°d /48tn note equivalent ) III
8 divisions of a beat ( 64tn note equivalent ) * not used for analysis application It is important to understand that the combinations of these carrier waves define an "address" for every possible point in musical time. The Macroform is extended by the Microform, and the Nanoform extends the Microform. Every point in 5 time can be measured in power/potential against any other point in time. The following examples illustrate the harmonic state notation of the carrier hierarchy of the musical representation of the current invention.
Macroform.Microform.Nanoform (0000.000.000)
Carrier Signatures: [8 B+BbbBbb].[7/8 Tbbt].[2 b]
Harmonic state notation: 000.05.5
- 1st of 8 element Macroform
- 2nd of 7 element Microform
- 2nd of 2 element Nanoform
Carrier Signatures: [7 Ttbb].[6/8 Btt].[3 t]
Harmonic state notation: 65.50.3
- 7th of 7 element Macroform
- 4th of 6 element Microform
- 2nd of 3 element Nanoform
Figure 41 visualizes the carrier hierarchy for the musical representation of the current invention.
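The harmonic state notation in these examples can be derived mechanically from a Carrier Signature name. The sketch below is our own illustration (function names are hypothetical, not the patent's): each container prefixes its children with binary (0/5) or ternary (0/3/6) states, and the elements b and t supply the final digit, yielding the big-endian state of every node.

```python
# Derive big-endian harmonic state notation for each node of a carrier from
# its signature name (e.g. "Ttbb"). Containers B/T/B+/T+/B++ prefix their
# children's states; elements b and t contribute the last digit.
def tokenize(name):
    tokens, i = [], 0
    while i < len(name):
        j = i + 1
        while j < len(name) and name[j] == "+":   # absorb "+" suffixes (B+, B++, T+)
            j += 1
        tokens.append(name[i:j])
        i = j
    return tokens

def harmonic_states(name):
    tokens = tokenize(name)

    def build(pos):
        tok = tokens[pos]
        pos += 1
        if tok == "b":
            return ["0", "5"], pos
        if tok == "t":
            return ["0", "3", "6"], pos
        prefixes = ["0", "5"] if tok.startswith("B") else ["0", "3", "6"]
        states = []
        for prefix in prefixes:
            child, pos = build(pos)
            states.extend(prefix + s for s in child)
        return states, pos

    return build(0)[0]

print(harmonic_states("Ttbb"))  # ['00', '03', '06', '30', '35', '60', '65']
print(harmonic_states("Bbt"))   # ['00', '05', '50', '53', '56']
```

For "Ttbb" this reproduces the seven states used in the salience example, and for "B+BbbBbb" the first node's state is "000", matching the first example above.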
Modulator Theory
Within a single note event there are multiple parameters that can be modulated at the start or over the duration of the note event to produce a musical effect. Figure 42 illustrates this concept.
The following performance metadata parameters must be defined for a note to sound: pitch, duration, volume, position, and instrument specific data.
Pitch
- what is the coarse "pitch" of a note (what note was played)?
- what is the fine "pitch" or tuning of a note?
- does that tuning change over the duration of the note?
Duration
- what is the coarse duration of a note? (quarter note, eighth note, etc.)
- what is the "fine" duration offset of a note?
Volume
- what is the initial volume of the note?
- does the volume change over the duration of the note?
Position
- a note is considered to occur at a specific position if it falls within a tick offset range of the coarse position
- what is the fine position of the note? see Figure
Instrument specific
- instrument specific parameters can also be modulated over the duration of a note event to produce a musical effect, i.e.) stereo panning, effect level, etc.
The following terms are relevant to the Modulator Theory disclosure:
modulator: a device that can be used to modulate a wave
vector: a one dimensional array
The term "vector" is used to describe performance metadata parameters because they are of a finite range, and most of them are ordered.
The musical representation methodology of the current invention aggregates the multiple vectors that affect a note event into a note variant. Figure 44 illustrates Note Variants (62) that participate in a Note Event (64) "transaction" that modulates the metric position or carrier node (66) that they are attached to.
A feature of the modulator theory is that it addresses the concept of compositional and mechanical "layers" in music - the two aspects of music that are protected under copyright law.
The compositional layer represents a sequence of musical events and accompanying lyrics, which can be communicated by a musical score. An example of the compositional layer would be a musical score of the Beatles song "Yesterday".
The second layer in music is the mechanical layer. The mechanical layer represents a concrete performance of a composition. An example of the mechanical layer would be a specific performance of the "Yesterday" score. Figure 45 illustrates that a piece of music can be rendered in various performances that are compositionally equivalent but mechanically unique.
The compositional layer in the musical representation of the current system defines general parameters that can be communicated through multiple performance instances. The mechanical layer in the musical representation of the current system defines parameters that are localized to a specific performance of a score.
Parameter definitions at the "mechanical" layer differentiate one performance from another.
The following modulator concepts illustrate various implementations of compositional and mechanical layers in the musical representation of the current invention:
The Note Variant contains a compositional and mechanical layer. Figure 46 illustrates the compositional and mechanical layers of a Note Variant (62).
The vectors in the compositional partial (68) (pitch, coarse duration, lyrics) do not change across multiple performances of the note variant. The vectors in the temporal partial (70) (fine position offset, fine duration offset) are localized to a particular Note Variant (62).
The Note Event connects carrier nodes to Note Variants. Multiple Note Variants can map to a single Note Event (this creates polyphony). Figure 47 illustrates a compositional Note Event (64). Compositional Note Events (64) can contain multiple Note Variants (62) that have a compositional partial (68) only.
Figure 48 illustrates a Mechanical Note Event (64). Mechanical Note Events (64) can contain multiple Note Variants (62) that have both compositional (68) and temporal partials (70). Mechanical Note Events (64) also have an associated event expression stream (72). The event expression stream (72) contains all of the vectors (volume, brightness, and fine-tuning) whose values can vary over the duration of the Note Event (64). The event expression stream (72) is shared by all of the Note Variants (62) that participate in the Note Event (64).
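The Note Variant and Note Event relationships just described can be sketched as a small data model. This is an illustrative reconstruction, not the patent's actual implementation; all class and field names here are invented for clarity.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CompositionalPartial:
    """Vectors that stay stable across performances of the note variant."""
    pitch: int
    coarse_duration: int
    lyric: Optional[str] = None

@dataclass
class TemporalPartial:
    """Vectors localized to one particular performance."""
    fine_position_offset: int = 0
    fine_duration_offset: int = 0

@dataclass
class NoteVariant:
    compositional: CompositionalPartial
    temporal: Optional[TemporalPartial] = None  # None for a purely compositional variant

@dataclass
class NoteEvent:
    carrier_node: int  # the metric position (carrier node) this event modulates
    variants: list = field(default_factory=list)
    # Shared by every variant in the event (volume, brightness, fine-tuning).
    expression_stream: dict = field(default_factory=dict)

    def is_polyphonic(self) -> bool:
        return len(self.variants) > 1
```

Attaching more than one Note Variant to a Note Event models polyphony, and because the expression stream lives on the event rather than on the variants, every variant shares the same volume, brightness, and fine-tuning curves.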
The Performance Element is a sequence of note events that is mapped to a Microform Carrier. It equates to a single bar of music in Standard Notation. The Performance Element can be compositional or mechanical. Figure 49 illustrates a Compositional Performance Element (74). The Compositional Performance Element (74) maps compositional Note Events (64) to carrier nodes (66). It is also used for abstract grouping purposes. The Compositional Performance Element (74) is similar to the "class" concept in "Object Oriented Programming". Figure 50 illustrates a Mechanical Performance Element (74). The Mechanical Performance Element (74) maps mechanical Note Events (64) to carrier nodes (66). The Mechanical Performance Element (74) is similar to the "object instance" concept in Object Oriented Programming, in that an object is an individual realization of a class.
Theoretical Implementation

The hierarchy of western music is composed of motives, phrases, and periods.
A motif is a short melodic (rhythmic) fragment used as a constructional element. The motif can be as short as two notes, and it is rarely longer than six or seven notes. A phrase is a grouping of motives into a complete musical thought. The phrase is the shortest passage of music which, having reached a point of relative repose, has expressed a more or less complete musical thought. There is no infallible guide by which every phrase can be recognized with certainty. A period is a grouping structure consisting of phrases. The period is a musical statement, made up of two or more phrases, and a cadence. Figure 51 illustrates the western music hierarchy.
Figure 52 illustrates the hierarchy of the musical representation of the current system. The Performance Element (74) is an intersection of Carrier and Modulator data required to represent a bar of music. The Performance Element Collective (34) is a container of Performance Elements (74) that are utilized within the Song Framework Output. How the Performance Element Collective (34) is derived is explained further below.
The Framework Element (32) defines the metric and tonal context for a musical section within a song. The Framework Element is composed of a Macroform Carrier structure together with an Environment Track (80) and Instrument Performance Tracks (82).
The Environment Track (80) is a Master "Track" that supplies tempo and tonality information for all of the Macroform Nodes. Every Performance Element (74) that is mapped to a Macroform Node "inherits" the tempo and tonality properties defined for that Macroform Node. All Macroforms in the Framework Element (32) will generally have a complete Environment Track (80) before Instrument Performance Tracks (82) can be defined. The Instrument Performance Track (82) is an "interface track" that connects Performance Elements (74) from a single Performance Element Collective (34) to the Framework Element (32).
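The inheritance rule above can be sketched as follows. This is an illustrative sketch, not the patent's code; the environment track is modelled as a simple mapping from macroform node to its tempo and tonality values.

```python
# Hypothetical environment track: macroform node -> environment properties.
environment_track = {
    0: {"tempo": 120, "key": "B"},
    1: {"tempo": 120, "key": "B"},
    2: {"tempo": 118, "key": "E"},
}

def bind_performance_element(element, node, env_track):
    """Return a copy of `element` mapped to `node`, inheriting the tempo
    and tonality properties the environment track defines at that node."""
    env = env_track[node]
    bound = dict(element)  # avoid mutating the caller's element
    bound.update(node=node, tempo=env["tempo"], key=env["key"])
    return bound
```

Because the properties are copied at binding time, every Performance Element mapped to the same Macroform Node ends up with the same tempo and key, which is the inheritance behaviour the text describes.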
Continuing up the hierarchy, the Framework Sequence (84) is a user-defined, abstract, top-level form used to outline the basic song structure. An example Framework Sequence would be:
Intro ~ Verse 1 ~ Chorus 1 ~ Verse 2 ~ Bridge ~ Chorus 3 ~ Chorus 4

Each Framework Sequence node is a placeholder for a full Framework Element (32). The Framework Elements (32) are sequenced end to end to form the entire linear structure for a song. Finally, the Song Framework Output (30) is the top-level container in the hierarchy of the musical representation of the current system.

Performance Element

The first structure to be discussed in this "Theory Implementation" section is the "Performance Element". The Performance Element has a Carrier implementation and a Modulator implementation.
The Performance Element Carrier is composed of a Microform, Nanoform Carrier Signatures, and Nanoforms. Microform nodes do not participate directly with note events; rather, a Nanoform Carrier Signature is selected, and Note Events are mapped to the Nanoform nodes. Figure 53 illustrates a Microform Carrier;
Figure 54 illustrates Microform Carrier (88) with Nanoform Carrier Signatures (90) and Nanoform Carrier nodes (92), and Figure 55 shows Note Events (64) bound to Nanoform Carrier nodes (92).
The following is an ordered index of Microform Carrier structures that can be used in Performance Element construction:
8 B+BbbBbb
12 B+BttBtt
4 Bbb
6 Btt
6 Tbbb
12 B+TbbbTbbb
9 Tttt
12 T+BbbBbbBbb
12 B+BttTbbb
12 B+TbbbBtt
10 B+BbbTbbb
10 B+TbbbBbb
9 B+BbbBbt
9 B+BbbBtb
9 B+BbtBbb
9 B+BtbBbb
11 B+BttBtb
11 B+BttBbt
11 B+TbbbBbt
11 B+TbbbBtb
11 B+BbtBtt
11 B+BtbBtt
11 B+BbtTbbb
11 B+BtbTbbb
10 B+BbbBtt
10 B+BttBbb
10 B+BbtBbt
10 B+BtbBtb
10 B+BbtBtb
10 B+BtbBbt
5 Bbt
5 Btb
7 Tbbt
7 Tbtb
7 Ttbb
8 Tttb
8 Ttbt
8 Tbtt

The following is an index of Nanoform Carrier structures at various quantize levels that are used in Performance Element construction:
N0, 8th note equivalent (Microform node thru): Null
N-1, 16th/24th note equivalent: 2 b, 3 t
N-2, 32nd/48th note equivalent: 4 Bbb, 6 Btt, 5 Bbt, 5 Btb
N-3, 64th note equivalent: 8 B+BbbBbb

Figure 56 illustrates a complete Performance Element Modulator. The Performance Element Modulator is composed of compositional partials (68) and temporal partials (70) grouped into Note Variants (62) and an event expression stream (72). Multiple Note Variants attached to a single Note Event denotes polyphony.
The compositional partial contains coarse pitch and coarse duration vectors, along with optional lyric, timbre, and sample ID data. The temporal partial contains pico position offset and pico duration offset vectors. The event expression stream is shared across all Note Variants that participate in a Note Event. The event expression stream contains volume, pico tuning, and brightness vectors.
The following are the ranges of the Modulator vectors that can be used in a Performance Element construction:
Coarse Pitch: Reinforces Key / Neutral / Pulls away from Key

Coarse Duration Denominations: 8th, 16th, 32nd
Pico Duration Offset: -60 ticks <-> +60 ticks

Pico Position Offset: -40 ticks <-> +40 ticks

Expression Controllers (Volume, Pico Tuning, Brightness): all Controller Vectors have a range of 0 - 127 with an optional extra precision controller.
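The ranges above can be expressed as a simple validation table. This is an illustrative sketch; note that the -60 lower bound on the pico duration offset is an assumption made by symmetry with the printed +60 upper bound, and the vector names are invented for clarity.

```python
# Modulator vector ranges from the text; the -60 lower bound is assumed.
VECTOR_RANGES = {
    "pico_duration_offset": (-60, 60),
    "pico_position_offset": (-40, 40),
    "volume": (0, 127),
    "pico_tuning": (0, 127),
    "brightness": (0, 127),
}

def validate_vector(name, value):
    """Raise ValueError if `value` falls outside the named vector's range."""
    low, high = VECTOR_RANGES[name]
    if not low <= value <= high:
        raise ValueError(f"{name}={value} outside [{low}, {high}]")
    return value
```

A range table like this makes it cheap to reject out-of-range modulator data before it is packaged into a Performance Element.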
Figure 57 visualizes a complete Performance Element from a Carrier Focus.
Figure 58 partially visualizes a Performance Element from a Modulator Focus.
For both Figure 57 and Figure 58, the Carrier consists of a Microform (88), Nanoform Carrier Signatures (90), and Nanoform carrier nodes (92). Note events connect the carrier and modulator components of the Performance Element. The Modulator consists of an event expression stream (72) and Note Variants (62) that contain compositional partials (68) and mechanical partials (70).
The Carrier focus view of the Performance Element highlights the Carrier portion of the Performance Element, and reduces the event expression stream to a symbolic representation. The Modulator focus highlights the full details of the event expression stream, while reducing the Carrier component down to harmonic state notation.
Figures 59-106 illustrate the carrier structure, linear order and salient ordering corresponding to the various Carrier Structures. More particularly:
Figure 59 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 4 Bbb.
Figure 60 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 8 B+BbbBbb.
Figure 61 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 12 T+BbbBbbBbb.
Figure 62 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 16 B++B+BbbBbbB+BbbBbb.
Figure 63 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 5 Bbt.
Figure 64 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 5 Btb.
Figure 65 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 6 Btt.
Figure 66 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 6 Tbbb.
Figure 67 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 7 Tbbt.
Figure 68 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 7 Tbtb.
Figure 69 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 7 Ttbb.
Figure 70 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 8 Tttb.
Figure 71 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 8 Ttbt.
Figure 72 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 8 Tbtt.
Figure 73 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 9 Tttt.
Figure 74 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 9 B+BbtBbb.
Figure 75 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 9 B+BtbBbb.
Figure 76 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 9 B+BbbBbt.
Figure 77 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 9 B+BbbBtb.
Figure 78 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 10 B+TbbbBbb.
Figure 79 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 10 B+BbbTbbb.
Figure 80 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 10 B+BbbBtt.
Figure 81 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 10 B+BttBbb.
Figure 82 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 10 B+BbtBbt.
Figure 83 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 10 B+BbtBtb.
Figure 84 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 10 B+BtbBbt.
Figure 85 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 10 B+BtbBtb.
Figure 86 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 11 B+BbtBtt.
Figure 87 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 11 B+BbtTbbb.
Figure 88 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 11 B+BbtBtt.
Figure 89 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 11 B+BtbBtt.
Figure 90 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 11 B+BtbTbbb.
Figure 91 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 11 B+BttBbt.
Figure 92 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 11 B+BttBtb.
Figure 93 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 11 B+TbbbBbt.
Figure 94 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 11 B+TbbbBtb.
Figure 95 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 12 B+BttBtt.
Figure 96 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 12 B+TbbbTbbb.
Figure 97 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 12 B+BttTbbb.
Figure 98 shows the Visualization, Linear Ordering, and Salient Ordering of Carrier Structure 12 B+TbbbBtt.
Figure 99 shows the Visualization of Nanoform Carrier Structure thru.
Figure 100 shows the Visualization, Linear Ordering, and Salient Ordering of Nanoform Carrier Structure 2 b.
Figure 101 shows the Visualization, Linear Ordering, and Salient Ordering of Nanoform Carrier Structure 3 t.
Figure 102 shows the Visualization, Linear Ordering, and Salient Ordering of Nanoform Carrier Structure 4 Bbb.
Figure 103 shows the Visualization, Linear Ordering, and Salient Ordering of Nanoform Carrier Structure 6 Btt.
Figure 104 shows the Visualization, Linear Ordering, and Salient Ordering of Nanoform Carrier Structure 5 Bbt.
Figure 105 shows the Visualization, Linear Ordering, and Salient Ordering of Nanoform Carrier Structure 5 Btb.
Figure 106 shows the Visualization, Linear Ordering, and Salient Ordering of Nanoform Carrier Structure 8 B+BbbBbb.
Performance Element Collective

The second structure to be discussed in this "Theory Implementation" section is the Performance Element Collective. A Performance Element Collective contains all of the unique Performance Elements that occur within the Song Framework Output. The allocation of Performance Elements to a particular Performance Element Collective is explored in the "Classification and mapping of Performance Elements"
section. The Performance Element Collective initially associates internal Performance Elements by Microform Family compatibility. For example, all of the 8 family of Microforms are compatible. Within the Microform family association, the Performance Element Collective also provides a hierarchical group of such Performance Elements according to compositional equivalence. Figure 107 visualizes a Performance Element Collective (34), which associates compositional Performance Elements (94) by metric equivalence. Compositional Performance Elements (94) act as grouping structures for mechanical Performance Elements (96) in the Performance Element Collective (34).
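The two-level grouping just described, first by Microform family and then by compositional equivalence, can be sketched as follows. The dictionary keys used here are hypothetical stand-ins for the real classification data.

```python
from collections import defaultdict

def build_collective(performance_elements):
    """Group performance elements by Microform family (e.g. the '8' family),
    then by a compositional fingerprint, so that mechanically distinct
    performances of the same composition share one compositional group."""
    collective = defaultdict(lambda: defaultdict(list))
    for pe in performance_elements:
        family = pe["microform"].split()[0]  # '8 B+BbbBbb' -> family '8'
        comp_key = tuple(pe["comp"])         # compositional equivalence key
        collective[family][comp_key].append(pe)
    return collective
```

Two takes of the same bar land in the same compositional group even when their microforms differ within the family, which mirrors how compositional Performance Elements act as grouping structures for mechanical ones.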
Framework Element

The third structure to be discussed in this section is the Framework Element. The Framework Element has a Carrier implementation and a Modulator implementation. The Framework Element's Carrier is composed of a Macroform and Microform Carrier class assignments. The Macroform provides the structural framework for a section of music (i.e. Chorus). Figure 108 shows a Macroform Structure (100).
Another aspect of the Framework Element Carrier is the Microform Carrier family. The Microform Carrier family restricts Performance Event participation only to those Performance Elements that have Microforms within the Microform Carrier class.
I.e., the 8 Microform Carrier class contains:

8 B+BbbBbb
8 Tttb
8 Ttbt
8 Tbtt

A Microform Carrier class must be assigned to every Macroform Node.
Figure 109 shows a Macroform (100) with Microform Carrier classes (102) and Performance Events (104).
A Performance Event is added to every Macroform node (measure) within the Framework Element. The Performance Event brokers the carrier extension of the Framework Element by Performance Elements for a particular Macroform node within the Framework Element. Only Performance Elements that conform to the Microform Family specified at the Performance Event's Macroform node can participate in the Performance Event. Performance Elements that participate in the Performance Event also inherit the defined key and tempo values in the Framework Element Modulator at the Performance Event's Macroform node.
The following is an index of Macroform Carrier Structures that can be used in defining a Song Framework:
4 Bbb
8 B+BbbBbb
12 T+BbbBbbBbb
16 B++B+BbbBbbB+BbbBbb
5 Bbt
5 Btb
6 Btt
6 Tbbb
7 Tbbt
7 Tbtb
7 Ttbb
8 Tttb
8 Ttbt
8 Tbtt
9 Tttt
9 B+BbtBbb
9 B+BtbBbb
9 B+BbbBbt
9 B+BbbBtb
10 B+TbbbBbb
10 B+BbbTbbb
10 B+BbbBtt
10 B+BttBbb
10 B+BbtBbt
10 B+BbtBtb
10 B+BtbBbt
10 B+BtbBtb
11 B+BbtBtt
11 B+BbtTbbb
11 B+BtbBtt
11 B+BtbTbbb
11 B+BttBbt
11 B+BttBtb
11 B+TbbbBbt
11 B+TbbbBtb
12 B+BttBtt
12 B+TbbbTbbb
12 B+BttTbbb
12 B+TbbbBtt

Figure 110 visualizes a Framework Element Modulator (76). The Framework Element Modulator (76) is composed of the environment partial (106) and Performance Elements (74). The Framework Element Modulator is intersected by multiple Instrument Performance Tracks (82). The Performance Element (74) participates in both the environment partial (106) and an Instrument Performance Track (82). The Framework Element Modulator is attached to the Performance Event (104).
Figure 111 visualizes an environment track (80). The environment track (80) is a sequence of all environment partials (106) mapped across the Performance Events (104) for a particular Framework Element. These environment partials (106) are generally part of a MIDI or other music file, or otherwise are compiled in a manner known to those skilled in the art. The environment partial defines the contextual data for a particular Performance Event. This data is applied to every Performance Element that participates in the Performance Event. The environment partial contains tempo and key vectors.
Figure 112 visualizes an Instrument Performance Track (82). The Instrument Performance Track (82) is an instrument-specific modulation space that spans across all of the Performance Events (104) for a particular Framework Element.
Performance Elements (74) are mapped to the Instrument Performance Track (82) from the Performance Element Collective (34).
The associated instrument defines the Instrument Performance Track's timbral qualities. Currently, the instrument contains octave and instrument family vectors.
Performance Elements mapped to a particular Instrument Performance Track inherit the Instrument Performance Track's timbral qualities.
The following tables define the ranges of the modulator vectors that are used in the environment partial:
Tempo BPM <-> 240 BPM
Key: Gb, Db, Ab, Eb, Bb, F, C, G, D, A, E, B, F#
Signature: 6b, 5b, 4b, 3b, 2b, 1b, 0, 1#, 2#, 3#, 4#, 5#, 6#
The following tables define the ranges of the modulator vectors that are used for each instrument:

Octave (Fundamental Frequency): 32Hz, 64Hz, 128Hz, 256Hz, 512Hz, 1024Hz, 2048Hz, 4096Hz

Instrument Family: organ, bass, keys, bow, plucked, wind, voice, brass, bells, synth, periodic

Figure 113 represents a complete Framework Element from a Carrier Focus.
Figure 114 partially visualizes a Framework Element from a Modulator Focus.
In both Figure 113 and Figure 114, the Carrier section consists of a Macroform (100), Microform Carrier classes (102), and Macroform Carrier Nodes (108). Performance Events (104) connect the Carrier and Modulator components of the Framework Element. The Modulator section consists of an environment track (80) containing environment partials (106) and Instrument Performance Tracks (82) that contain and route Performance Elements (74) to specific Instruments.
The Carrier focus view of the Framework Element highlights the Carrier portion of the Framework Element, and reduces Modulator detail. The Modulator focus highlights additional Modulator detail, while reducing the Carrier component down to harmonic state notation.
Figure 115 summarizes the Song Framework Output (30) anatomy, and thereby explains the operation of the Song Framework of the present invention. A Framework Sequence (84) outlines the top-level song structure (Intro, verse 1, chorus 1 etc...). Framework Elements (32) are mapped (85) to nodes of the Framework Sequence (84), in order to define the detailed content for every song section. Framework Elements (32) define the metric structure, environment parameters, and participating instruments for a particular song structure section. Instrument Performance Tracks (82) within the Framework Element (32) are mapped (35) with Performance Elements (74) from the Performance Element Collective (34).
Instrument Performance Tracks (82) across multiple Framework Elements (32) can share the same Performance Element Collective (34). For example, all of the "bass guitar" Instrument Performance Tracks, will be mapped by Performance Elements from the "bass guitar" Performance Element Collective. Figure 115 is best understood by referring also to the description of the "Song Framework Functionality" set out below.
Song Framework Functionality

Figure 116 illustrates the high-level functionality of the Song Framework.
The purpose of the Song Framework is to analyze a music file such as a prepared MIDI file (26) ("preparation" explained in the background above) and convert its constituent elements into a Song Framework Output file (30), in accordance with the method described. This in turn enables the Reporting Functionality of the Song Framework Output (30) in accordance with the processes described below.
In order to translate a prepared MIDI file into a Song Framework Output file, the Song Framework must employ the following main functionalities.
The first top-level function of the Song Framework (22) is to construct (113) a Framework Sequence (84) and a plurality of Framework Elements (32) as required.
The second top-level function of the Song Framework (22) is the definition (115) of Instrument Performance Tracks (82) for all of the Framework Elements (32) (as explained below). The third top-level function of the Song Framework (22) is a performance analysis (119). The performance analysis (119) constructs (111) Performance Elements (74) from an instrument MIDI track, and maps (117) Performance Element indexes (74) onto Instrument Performance Tracks (82).
This process consists generally of mapping the various elements of a MIDI file defining a song so as to establish a series of Framework Elements (32), in accordance with the method described. In accordance with the invention, the Framework Elements (32) are based on a common musical content structure defined above.
The creation of the Framework Elements (32) consists of translating the data included in the MIDI file to a format corresponding with this common musical content structure.
This in turn permits the analysis of the various Framework Elements (32) to enable the various processes described below.
The Framework Sequence is used to define the main sections of the song at an abstract level, for example "verse", "chorus", "bridge". Next, Framework Elements are created to define the structural and environmental features of each song section.
The Framework Element's Macroform Container and Macroform combinations define the length and "phrasing" of each of the song sections. Also, the Framework Element's environment track identifies the environmental parameters (such as key, tempo and time signature) for every structural node in the newly created Framework Element. Framework Element creation is further discussed in the "Process to create Framework Elements and Instrument Performance Tracks from MIDI file data"
section.
For each recorded instrument, a corresponding Instrument Performance Track is created within each Framework Element. The Instrument Performance Track is populated using the performance analysis process (described below). Instrument Performance Track creation is further discussed in the "Process to create Framework Elements and Instrument Performance Tracks from MIDI file data" section.
In order to populate the Instrument Performance Track, the performance analysis process examines an instrument's MIDI track on a bar-by-bar basis to determine the identity of Performance Elements at a specific location. The resulting compositional and mechanical Performance Element index values are then mapped to the current analysis location on the Framework Element. Performance Element index mapping is further discussed in the "Classification and mapping of Performance Elements" section below.
In the performance analysis process described, at least one Performance Element is identified based on the analysis of the MIDI data; a Performance Element Collective classification is also derived from the MIDI data. The Performance Element Collective classification identifies the compositional and mechanical uniqueness of the newly detected Performance Element. Performance analysis is further discussed in the "Process to create a Performance Element from a bar of MIDI
data" section. Performance Element Collective classification is further discussed in the "Classification and mapping of Performance Elements" section.
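The bar-by-bar analysis loop described above can be sketched as follows. Here `classify_bar` is a hypothetical stand-in for the classification process detailed in the later sections; only the loop structure and the compositional/mechanical index pair come from the text.

```python
def populate_instrument_track(midi_bars, classify_bar):
    """Walk an instrument's MIDI track one bar at a time, derive the
    compositional and mechanical Performance Element indexes for each bar,
    and map them to the matching location on the instrument track."""
    track = {}
    for location, bar in enumerate(midi_bars):
        comp_index, mech_index = classify_bar(bar)
        track[location] = {"comp": comp_index, "mech": mech_index}
    return track
```

A trivial classifier is enough to show the shape of the output: one (comp, mech) index pair per bar location.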
"Song Framework Functionality" utilizes the functionality of the audio to MIDI conversion application to prepare the MIDI file according to the process outlined in "Preparation of Multi-track Audio for Analysis", and the Translation Engine to convert the prepared MIDI file into a Song Framework Output file.
One aspect of the computer program product of the present invention is a conversion or translation computer program that is provided in a manner that is known and includes a Translation Engine. In one aspect thereof, the Translation Engine enables audio to MIDI conversion. The conversion computer program (54) of Figure 117, in one particular embodiment thereof, consists of a Graphical User Interface (GUI) application used to extract Music Instrument Digital Interface (MIDI) data from multi-track audio files. It is also used to collect the various song metadata associated with a MIDI file that is described below. This metadata is pertinent for analysis of the final outputted MIDI file.
The conversion computer program (54) of Figure 117 uses the following inputs, in one embodiment thereof: audio files (of standard length with a common synchronization point) and Song Metadata (such as tempo, key, and respective time signatures). Song Metadata is used to create the musical structure framework for the musical composition. Additionally, Performance metadata may be required to supplement the analyzed data of individual instrument tracks.
The Audio to MIDI conversion application output is a Type 1 MIDI file that is specifically formatted for the Translation Engine (56).
Figure 117 visualizes the component elements that constitute the Audio to MIDI conversion application (54). The following processing steps illustrate the operation of the Audio to MIDI conversion application (54). First, a multi-track audio file (2) is played through (121) an audio to MIDI conversion facility (122) to create (7) system-generated data (8). Second, a user supplements the system-generated data with additional performance metadata (14) as required, by entering values into the graphic user interface facility (124). The user-generated data (14) is converted into MIDI data and merged (25) with the existing system-generated data into a MIDI
file (26). The Audio to MIDI conversion application functionality is further illustrated in the Background.
The Translation Engine, in another aspect thereof, is a known file processor that takes a prepared MIDI File and creates a proprietary Song Framework Output XML file.
Figure 118 shows a representation of the Translation Engine (56). First, a MIDI Parsing facility (126) parses a prepared MIDI file (26) to identify MIDI
events in various tracks and their respective timings. Next, an Analysis facility (128) translates the MIDI data into the data format of the musical representation of the current invention. Finally, an XML Construction facility (130) packages the translated data into a Song Framework Output XML file (132).
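The three facilities form a simple pipeline, which can be sketched as below. The function bodies are hypothetical stand-ins; only the parse, analyze, construct staging comes from the text.

```python
import xml.etree.ElementTree as ET

def parse_midi(prepared):
    """Stand-in for the MIDI Parsing facility (126): the real facility
    identifies MIDI events in each track and their timings."""
    return prepared["sections"]

def analyze(sections):
    """Stand-in for the Analysis facility (128): translate parsed data
    into the data format of the musical representation."""
    return [{"name": s, "id": f"ens{i}"} for i, s in enumerate(sections, 1)]

def build_xml(analysis):
    """Stand-in for the XML Construction facility (130)."""
    root = ET.Element("Nucleus")
    seq = ET.SubElement(root, "ENS_SEQ")
    for item in analysis:
        ET.SubElement(seq, "Ensemble_Chromatin", name=item["name"], id=item["id"])
    return ET.tostring(root, encoding="unicode")

def translate(prepared):
    """Run the three facilities in sequence, as Figure 118 depicts."""
    return build_xml(analyze(parse_midi(prepared)))
```

The element names mirror the XML fragment shown later in this section, but the pipeline itself is only a sketch of the staged design.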
Process to create Framework Elements and Instrument Performance Tracks from MIDI file data

The first function of the Song Framework, as seen in (113) of Figure 116, is to define the Framework Sequence and Framework Elements as required. Figure 119 illustrates that the Framework Sequence (84) is defined from song structure marker events (134) in MIDI Track 0. Figure 120 illustrates the Carrier construction of a Framework Element. The Macroform Carrier (100) is defined by song structure marker events (134) defined in Track 0, and the Microform Carrier classes (102) are defined by time signature events (136) in Track 0. Figure 121 illustrates the Modulator construction of a Framework Element. The Environment Track (80) is populated (139) by the key events and tempo events (138) in MIDI Track 0.
The second function of the Song Framework, as seen in (115) of Figure 116, is to create empty Instrument Performance Tracks on each of the required Framework Elements. Figure 121 also illustrates that Instrument Performance Tracks (82) are created from header data (140) in MIDI Tracks 1 - n.
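The Track 0 split described above can be sketched as follows. The event records here are hypothetical; a real MIDI parser would supply equivalent data for marker, time signature, tempo, and key signature meta events.

```python
def split_track0(events):
    """Split MIDI Track 0 meta events into the three streams the text
    describes: structure markers (Framework Sequence / Macroform Carrier),
    time signatures (Microform Carrier classes), and tempo plus key
    signature events (Environment Track)."""
    markers, time_sigs, environment = [], [], []
    for ev in events:
        if ev["type"] == "marker":
            markers.append((ev["tick"], ev["text"]))
        elif ev["type"] == "time_signature":
            time_sigs.append((ev["tick"], ev["value"]))
        elif ev["type"] in ("set_tempo", "key_signature"):
            environment.append((ev["tick"], ev["type"], ev["value"]))
    return markers, time_sigs, environment
```

Each returned stream then feeds one construction step: markers drive the Framework Sequence and Macroform, time signatures drive the Microform classes, and the remaining events populate the Environment Track.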
The following code fragment shows an XML result of the Framework Element creation.
<Nucleus>
  <ENS_SEQ>
    <Ensemble_Chromatin name='Verse1' id='ens1' />
    <Ensemble_Chromatin name='Chorus1' id='ens2' />
    <Ensemble_Chromatin name='Verse2' id='ens3' />
    <Ensemble_Chromatin name='Chorus2' id='ens4' />
  </ENS_SEQ>
  <ENS_CONTENT>
    <Ensemble_Chromatin_Content id='ens1'>
      <Macroform carrier='8B+BbbBbb'>
        <Macroform_Node hsn='000' microform_class='8'>
          <Environment_Partial tempo='120' key='B'/>
          <Channel_Partial inst_id='inst1' comp='' mech=''/>
          <Channel_Partial inst_id='inst2' comp='' mech=''/>
          <Channel_Partial inst_id='inst3' comp='' mech=''/>
          <Channel_Partial inst_id='inst4' comp='' mech=''/>
        </Macroform_Node>
        <Macroform_Node hsn='005' microform_class='8'>
          <Environment_Partial tempo='120' key='B'/>
          <Channel_Partial inst_id='inst1' comp='' mech=''/>
          <Channel_Partial inst_id='inst2' comp='' mech=''/>
          <Channel_Partial inst_id='inst3' comp='' mech=''/>
          <Channel_Partial inst_id='inst4' comp='' mech=''/>
        </Macroform_Node>
        <Macroform_Node hsn='050' microform_class='8'>
          <Environment_Partial tempo='120' key='B'/>
          <Channel_Partial inst_id='inst1' comp='' mech=''/>
          <Channel_Partial inst_id='inst2' comp='' mech=''/>
          <Channel_Partial inst_id='inst3' comp='' mech=''/>
          <Channel_Partial inst_id='inst4' comp='' mech=''/>
        </Macroform_Node>
      </Macroform>
    </Ensemble_Chromatin_Content>
    ...
  </ENS_CONTENT>
  <Instruments>
    <Instrument name='Bass' inst_id='inst1' />
    <Instrument name='Guitar' inst_id='inst2' />
    <Instrument name='Piano' inst_id='inst3' />
    <Instrument name='Trumpet' inst_id='inst4' />
  </Instruments>
</Nucleus>

Process to create a Performance Element from a bar of MIDI data

The third function of the Song Framework is the population of the Instrument Performance Tracks through a performance analysis, as seen in (119) of Figure 116.
One processing step in the performance analysis is the Performance Element creation Process, as seen in (111) of Figure 116.
Figure 122 illustrates in greater particularity the Performance Element creation process. The Performance Element creation from MIDI data, in one embodiment thereof, can be described as a three-step procedure. First, Carrier construction (143) is achieved by identifying "capture addresses" within a beat (145), determining the most salient Nanoform Carrier structure to represent each beat (147), and then determining the most salient Microform Carrier to represent the metric structure of the Performance Element (149). Second, Modulator construction (151) is achieved by the detection of note-on data (153), the detection and allocation of controller events (155), and a subsequent translation into Modulator data (157). Finally, Modulators are associated with their respective Micro/Nanoform Carriers (159) through Note Event IDs. This completes the definition of a Performance Element.
The Carrier construction process, as seen in (143) of Figure 122, has three steps: capture address detection, Nanoform Carrier identification, and Microform Carrier identification.
The first step in capture address detection, as seen in (145) of Figure 122, is to determine the number of beats in the detected bar. The Microform Carrier Class is used to determine the number of eighths to capture for bar analysis. Figure 123 visualizes the bar capture range (and the eighth capture ranges) set by Microform Carrier class. For example, an eighth contains 480 ticks. In practice this number can increase depending on system capability.
Every eighth note has twelve capture addresses of 40 ticks each (at the 480 ticks / eighth quantize). Capture ranges for an eighth note would be, in one particular embodiment of the present invention, as follows:
Tick Offset   Capture Address
0 - 39        1
40 - 79       2
80 - 119      3
120 - 159     4
160 - 199     5
200 - 239     6
240 - 279     7
280 - 319     8
320 - 359     9
360 - 399     10
400 - 439     11
440 - 479     12

Each eighth is examined on a tick-by-tick basis to identify note-on activity in the capture addresses. Figure 124 illustrates one particular aspect of the Translation Engine, namely the capture address detection algorithm. If MIDI note-on events are detected at capture addresses 1, 3, 5, 7, 9, 11, then the adjacent capture addresses are bypassed for MIDI note-on event detection. Subsequent MIDI note-on events in the adjacent capture range will be interpreted as polyphony within a Note Event, which is 80 ticks in length. Note-on polyphony detection is introduced in the Modulator construction process, as seen in (153) of Figure 122. If MIDI note-ons are detected in capture addresses 2, 4, 6, 8, 10, 12, then adjacent capture addresses are not skipped, because MIDI note-on activity in the adjacent capture ranges can be associated with a separate Note Event. The following representative code fragment illustrates a particular XML rendering of the capture address analysis:
<Detect_Bar eighths=8>
  <eighth>
    <capture_address_1 active='' />
    <capture_address_2 active='' />
    <capture_address_3 active='' />
    <capture_address_4 active='true' />
    <capture_address_5 active='' />
    <capture_address_6 active='' />
    <capture_address_7 active='' />
    <capture_address_8 active='' />
    <capture_address_9 active='true' />
    <capture_address_10 active='' />
    <capture_address_11 active='' />
    <capture_address_12 active='true' />
  </eighth>
</Detect_Bar>
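The capture-address logic described above can be sketched in Python. This is a minimal illustrative sketch, not the patented implementation; the function name and data shapes are our own assumptions.

```python
TICKS_PER_EIGHTH = 480
ADDRESS_WIDTH = 40  # twelve capture addresses of 40 ticks each

def detect_capture_addresses(note_on_ticks):
    """Map note-on ticks (0-479, relative to one eighth) to active capture
    addresses (1-12). Hits on odd addresses cause the adjacent addresses to
    be bypassed, so later note-ons there fold into the same Note Event."""
    active, skipped = set(), set()
    for tick in sorted(note_on_ticks):
        addr = tick // ADDRESS_WIDTH + 1  # capture addresses are 1-based
        if addr in skipped:
            continue  # interpreted as polyphony within the previous Note Event
        active.add(addr)
        if addr % 2 == 1:  # odd addresses: bypass the adjacent addresses
            skipped.update({addr - 1, addr + 1})
    return sorted(active)
```

For example, note-ons at ticks 0 and 60 collapse to the single capture address 1, because address 2 is bypassed after the hit on address 1.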
A second step in Carrier construction is Nanoform identification, as seen in (147) of Figure 122. Nanoform structures can be identified based on the most effective representation (highest salient value) of active capture addresses in the eighth. If none of the capture ranges are active, the Nanoform Carrier structure is null.
Figure 125 shows the mapping of capture addresses to Nanoform structures.
The following text outlines the Nanoform identification process through example.
In this example, capture ranges 2, 5, and 10 are flagged as active. Figure 126 shows the Nanoforms that can accommodate all of the active capture ranges, and Figure 127 shows the salient weight of the active capture addresses in each candidate Nanoform.

The Nanoform with the highest salient weight is the most efficient representation of the active capture addresses. Harmonic state notation is assigned and Note Event IDs are mapped to the Nanoform nodes. The following code fragment shows a representative XML rendering of the Nanoform identification process:
<Microform_Node index=1 hsn='' node_salience='' nanoform_carrier='3t' nano_salience_mult=1.0>
  <Nanoform_Node hsn=0 neid=1 />
  <Nanoform_Node hsn=3 neid=2 />
  <Nanoform_Node hsn=6 neid=3 />
</Microform_Node>
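The salience-based selection can be sketched as follows. The candidate carriers and their per-address weights below are invented placeholders; the real candidate set and weights come from Figures 126 and 127, which are not reproduced in the text.

```python
# Hypothetical salience tables: each candidate Nanoform Carrier maps the
# capture addresses it can represent to an illustrative salient weight.
CANDIDATE_NANOFORMS = {
    "3t": {2: 16, 5: 16, 10: 8},
    "4s": {2: 8, 5: 8, 10: 8, 12: 4},
}

def pick_nanoform(active_addresses, candidates=CANDIDATE_NANOFORMS):
    """Return the Nanoform Carrier that covers every active capture address
    with the highest accumulated salient weight, or None (the null carrier)
    when nothing is active or no candidate covers the addresses."""
    if not active_addresses:
        return None  # no activity: the Nanoform Carrier structure is null
    best, best_weight = None, -1.0
    for name, weights in candidates.items():
        if all(addr in weights for addr in active_addresses):
            total = sum(weights[addr] for addr in active_addresses)
            if total > best_weight:
                best, best_weight = name, total
    return best
```

With these placeholder weights, active addresses {2, 5, 10} select "3t" because its accumulated weight (40) beats the alternative (24).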
The final step in Carrier construction is Microform identification, as seen in (149) of Figure 122. After Nanoform nodes and Nanoform Carrier structures are defined for each Microform node, it is possible to calculate the most efficient Microform Carrier (based on the highest salient value).
The following text outlines the Microform identification process through example. In this 8 Microform class example the following nodes are active; salient multipliers are also included:

Node   Salient Multiplier
1      1.0
4      1.0
7      1.0
8      0.33

Figure 128 shows the salient weight of the nodes in the Microforms of the 8 Microform Class.
The Microform with the highest salient weight is the most efficient representation of the active nodes. The end results of Microform identification are that the Microform Carrier is identified, the harmonic state notation is provided for the Microform nodes, and total salience is calculated for the Microform Carrier structure.
The following code fragment illustrates a particular aspect of the present invention, namely an XML Carrier representation before the Microform is identified:
<Carrier microform_carrier='' total_salience=''>
  <Microform_Node index=1 hsn='' node_salience='' nanoform_carrier='thru' nano_salience_mult=1.0>
    <Nanoform_Node hsn=0 neid=1 />
  </Microform_Node>
  <Microform_Node index=2 hsn='' node_salience='' nanoform_carrier='null' nano_salience_mult=0>
  </Microform_Node>
  <Microform_Node index=3 hsn='' node_salience='' nanoform_carrier='null' nano_salience_mult=0>
  </Microform_Node>
  <Microform_Node index=4 hsn='' node_salience='' nanoform_carrier='thru' nano_salience_mult=1.0>
    <Nanoform_Node hsn=0 neid=2 />
  </Microform_Node>
  <Microform_Node index=5 hsn='' node_salience='' nanoform_carrier='null' nano_salience_mult=0>
  </Microform_Node>
  <Microform_Node index=6 hsn='' node_salience='' nanoform_carrier='null' nano_salience_mult=0>
  </Microform_Node>
  <Microform_Node index=7 hsn='' node_salience='' nanoform_carrier='thru' nano_salience_mult=1.0>
    <Nanoform_Node hsn=0 neid=3 />
  </Microform_Node>
  <Microform_Node index=8 hsn='' node_salience='' nanoform_carrier='thru' nano_salience_mult=0.33>
    <Nanoform_Node hsn=0 neid=4 />
  </Microform_Node>
</Carrier>
The following code fragment shows an illustrative XML Carrier representation after the Microform Carrier is identified:
<Carrier microform_carrier='8 Tttb' total_salience=224.33>
  <Microform_Node index=1 hsn='00' node_salience=128 nanoform_carrier='thru' nano_salience_mult=1.0>
    <Nanoform_Node hsn=0 neid=1 />
  </Microform_Node>
  <Microform_Node index=2 hsn='03' node_salience=0 nanoform_carrier='null' nano_salience_mult=0>
  </Microform_Node>
  <Microform_Node index=3 hsn='06' node_salience=0 nanoform_carrier='null' nano_salience_mult=0>
  </Microform_Node>
  <Microform_Node index=4 hsn='30' node_salience=64 nanoform_carrier='thru' nano_salience_mult=1.0>
    <Nanoform_Node hsn=0 neid=2 />
  </Microform_Node>
  <Microform_Node index=5 hsn='33' node_salience=0 nanoform_carrier='null' nano_salience_mult=0>
  </Microform_Node>
  <Microform_Node index=6 hsn='36' node_salience=0 nanoform_carrier='null' nano_salience_mult=0>
  </Microform_Node>
  <Microform_Node index=7 hsn='60' node_salience=32 nanoform_carrier='thru' nano_salience_mult=1.0>
    <Nanoform_Node hsn=0 neid=3 />
  </Microform_Node>
  <Microform_Node index=8 hsn='65' node_salience=0.33 nanoform_carrier='thru' nano_salience_mult=0.33>
    <Nanoform_Node hsn=5 neid=4 />
  </Microform_Node>
</Carrier>
Exceptions in the salient ordering of Microforms exist. Figure 129 shows cases where the salient weighting for active nodes will result in the same weighting.
In order to resolve this ambiguity, the highest ordered Microform from the Microform index is used. The following table illustrates selection of the highest order Microform within a Microform class.
8 Microform Class: 8 B+BbbBbb > 8 Tttb > 8 Ttbt > 8 Tbtt

The Modulator construction process, as seen in (151) of Figure 122, generally has three steps: note-on detection, controller stream detection, and translation of MIDI data into modulator data.
The first step in Modulator construction is note-on detection, as seen in (153) of Figure 122. Figure 130 shows a note-on detection algorithm that detects monophonic and polyphonic Note Events.

The second step in Modulator construction is controller stream detection, as seen in (155) of Figure 122. Controllers such as volume, brightness, and pitchbend produce a stream of values that are defined on a tick-by-tick basis. Figure 131 illustrates a particular aspect of the Translation Engine of the present invention, namely a MIDI control stream detection algorithm. Figure 132 illustrates the control stream association logic. MIDI control streams (160) are associated with a Note Event (64) for the duration of the Note Event (64), or until a new Note Event (64) is detected.

The final stage in Modulator construction is translation of detected MIDI data into Modulator data, as seen in (157) of Figure 122. The following code fragment illustrates a particular processing method for arriving at the resulting data from note-on and control stream detection in a Modulator construction:
neid1
  midi_note, start_tick, duration (text events)
  [0, vol_val, pb_val, bright_val]
  [1, vol_val, pb_val, bright_val]
  [2, vol_val, pb_val, bright_val]
  [3, vol_val, pb_val, bright_val]
  [4, vol_val, pb_val, bright_val]
  [5, vol_val, pb_val, bright_val]
  [6, vol_val, pb_val, bright_val]

neid2
  midi_note, start_tick, duration
  midi_note, start_tick, duration (text events)
  [0, vol_val, pb_val, bright_val]
  [1, vol_val, pb_val, bright_val]
  [2, vol_val, pb_val, bright_val]
  [3, vol_val, pb_val, bright_val]
  [4, vol_val, pb_val, bright_val]
...
Figure 133 illustrates Modulator translation from detected note and control stream data.
The compositional partial (68) is assembled in the following manner: relative pitch (162) and delta octave (164) are populated by passing the MIDI note number and environment key to a relative pitch function (165). The detected Note Event tick duration (167) is passed to a greedy function which populates the eighth, sixteenth and 32nd coarse duration values (168). The greedy function is similar to a mechanism that calculates the change due in a sale. Finally, lyric (170) and timbre (172) information are populated by MIDI text events (173).
The temporal partial (70) is assembled in the following manner: Pico position offset (174) is populated by start tick minus 40 (175). Pico duration offset (176) is populated by the tick remainder minus 40 (177) of the greedyDuration function.
The event expression stream (72) is populated (179) by the MIDI controller array associated with the Note Event.
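The greedy duration calculation can be sketched as follows, assuming the 480-ticks-per-eighth resolution stated earlier (so a sixteenth is 240 ticks and a thirty-second is 120). The function name greedy_duration stands in for the text's greedyDuration; it is an illustrative sketch, not the patented routine.

```python
# Assumed tick values at 480 ticks per eighth: the coarse "denominations"
# are tried largest first, exactly like counting out change in a sale.
DENOMINATIONS = (("dur8", 480), ("dur16", 240), ("dur32", 120))

def greedy_duration(tick_duration):
    """Break a detected Note Event tick duration into coarse note-value
    counts, and return those counts plus the leftover tick remainder
    (which feeds the pico duration offset)."""
    counts, remaining = {}, tick_duration
    for name, ticks in DENOMINATIONS:
        counts[name], remaining = divmod(remaining, ticks)
    return counts, remaining
```

For example, a 740-tick note yields dur8=1, dur16=1, dur32=0 with 20 ticks left over, matching the dur8=1 dur16=1 dur32=0 pattern in the XML fragments below.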
In the Final Output, Note Variants are ordered by ascending MIDI note number, in one particular implementation. Temporal partials are replicated for each Note Variant (based on current technology). The following code fragment illustrates the modulator structure in an XML format:
<Modulator_Content>
  <Compositional_Content>
    <Compositional_Event neid=1>
      <Comp_Partial id=1 octave=mid rel_pitch=tnc dur8=1 dur16=1 dur32=0 lyric='' timbre='' />
    </Compositional_Event>
    <Compositional_Event neid=2>
      <Comp_Partial id=1 octave=mid rel_pitch=+3 dur8=1 dur16=1 dur32=0 lyric='' timbre='' />
    </Compositional_Event>
    <Compositional_Event neid=3>
      <Comp_Partial id=1 octave=mid rel_pitch=p4 dur8=1 dur16=2 dur32=0 lyric='' timbre='' />
    </Compositional_Event>
    <Compositional_Event neid=4>
      <Comp_Partial id=1 octave=mid rel_pitch=+2 dur8=1 dur16=1 dur32=0 lyric='' timbre='' />
    </Compositional_Event>
  </Compositional_Content>
  <Mechanical_Content>
    <Mechanical_Event neid=1>
      <Temp_Partial id=1 pico_position=-5 pico_duration=-10 />
      <Expression_Stream>
        <tick=0 vol=64 pitch_bend=45 bright=20>
        <tick=1 vol=70 pitch_bend=42 bright=22>
        <tick=2 vol=73 pitch_bend=48 bright=24>
      </Expression_Stream>
    </Mechanical_Event>
    <Mechanical_Event neid=2>
      <Temp_Partial id=1 pico_position=-7 pico_duration=+2 />
      <Expression_Stream>
        <tick=0 vol=64 pitch_bend=45 bright=20>
        <tick=1 vol=70 pitch_bend=42 bright=22>
      </Expression_Stream>
    </Mechanical_Event>
    <Mechanical_Event neid=3>
      <Temp_Partial id=1 pico_position=+10 pico_duration=-15 />
      <Expression_Stream>
        <tick=0 vol=64 pitch_bend=45 bright=20>
        <tick=1 vol=70 pitch_bend=42 bright=22>
        <tick=2 vol=73 pitch_bend=48 bright=24>
      </Expression_Stream>
    </Mechanical_Event>
    <Mechanical_Event neid=4>
      <Temp_Partial id=1 pico_position=+3 pico_duration=+7 />
      <Expression_Stream>
        <tick=0 vol=64 pitch_bend=45 bright=20>
        <tick=1 vol=70 pitch_bend=42 bright=22>
        <tick=2 vol=73 pitch_bend=48 bright=24>
      </Expression_Stream>
    </Mechanical_Event>
  </Mechanical_Content>
</Modulator_Content>
The final stage of Performance Element creation is Carrier / Modulator integration, as seen in (159) of Figure 122. Figure 134 illustrates Carrier / Modulator integration. The Carrier structure is detected and identified. Modulators are detected and constructed. Modulators are then associated to Carrier nodes through Note Event IDs. The following code fragment illustrates the complete XML structure for a detected Performance Element:
<Detected_Performance_Gene>
  <Carrier microform_carrier='8 Tttb' total_salience=224.33>
    <Microform_Node index=1 hsn='00' node_salience=128 nanoform_carrier='thru' nano_salience_mult=1.0>
      <Nanoform_Node hsn=0 neid=1 />
    </Microform_Node>
    <Microform_Node index=2 hsn='03' node_salience=0 nanoform_carrier='null' nano_salience_mult=0>
    </Microform_Node>
    <Microform_Node index=3 hsn='06' node_salience=0 nanoform_carrier='null' nano_salience_mult=0>
    </Microform_Node>
    <Microform_Node index=4 hsn='30' node_salience=64 nanoform_carrier='thru' nano_salience_mult=1.0>
      <Nanoform_Node hsn=0 neid=2 />
    </Microform_Node>
    <Microform_Node index=5 hsn='33' node_salience=0 nanoform_carrier='null' nano_salience_mult=0>
    </Microform_Node>
    <Microform_Node index=6 hsn='36' node_salience=0 nanoform_carrier='null' nano_salience_mult=0>
    </Microform_Node>
    <Microform_Node index=7 hsn='60' node_salience=32 nanoform_carrier='thru' nano_salience_mult=1.0>
      <Nanoform_Node hsn=0 neid=3 />
    </Microform_Node>
    <Microform_Node index=8 hsn='65' node_salience=0.33 nanoform_carrier='thru' nano_salience_mult=0.33>
      <Nanoform_Node hsn=5 neid=4 />
    </Microform_Node>
  </Carrier>
  <Modulator_Content>
    <Compositional_Content>
      <Compositional_Event neid=1>
        <Comp_Partial id=1 octave=mid rel_pitch=tnc dur8=1 dur16=1 dur32=0 lyric='' timbre='' />
      </Compositional_Event>
      <Compositional_Event neid=2>
        <Comp_Partial id=1 octave=mid rel_pitch=+3 dur8=1 dur16=1 dur32=0 lyric='' timbre='' />
      </Compositional_Event>
      <Compositional_Event neid=3>
        <Comp_Partial id=1 octave=mid rel_pitch=p4 dur8=1 dur16=2 dur32=0 lyric='' timbre='' />
      </Compositional_Event>
      <Compositional_Event neid=4>
        <Comp_Partial id=1 octave=mid rel_pitch=+2 dur8=1 dur16=1 dur32=0 lyric='' timbre='' />
      </Compositional_Event>
    </Compositional_Content>
    <Mechanical_Content>
      <Mechanical_Event neid=1>
        <Temp_Partial id=1 pico_position=-5 pico_duration=-10 />
        <Expression_Stream>
          <tick=0 vol=64 pitch_bend=45 bright=20>
          <tick=1 vol=70 pitch_bend=42 bright=22>
          <tick=2 vol=73 pitch_bend=48 bright=24>
        </Expression_Stream>
      </Mechanical_Event>
      <Mechanical_Event neid=2>
        <Temp_Partial id=1 pico_position=-7 pico_duration=+2 />
        <Expression_Stream>
          <tick=0 vol=64 pitch_bend=45 bright=20>
          <tick=1 vol=70 pitch_bend=42 bright=22>
        </Expression_Stream>
      </Mechanical_Event>
      <Mechanical_Event neid=3>
        <Temp_Partial id=1 pico_position=+10 pico_duration=-15 />
        <Expression_Stream>
          <tick=0 vol=64 pitch_bend=45 bright=20>
          <tick=1 vol=70 pitch_bend=42 bright=22>
          <tick=2 vol=73 pitch_bend=48 bright=24>
        </Expression_Stream>
      </Mechanical_Event>
      <Mechanical_Event neid=4>
        <Temp_Partial id=1 pico_position=+3 pico_duration=+7 />
        <Expression_Stream>
          <tick=0 vol=64 pitch_bend=45 bright=20>
          <tick=1 vol=70 pitch_bend=42 bright=22>
          <tick=2 vol=73 pitch_bend=48 bright=24>
        </Expression_Stream>
      </Mechanical_Event>
    </Mechanical_Content>
  </Modulator_Content>
</Detected_Performance_Gene>
Classification and mapping of Performance Elements

The third function of the Song Framework is the population of the Instrument Performance Tracks through a performance analysis, as seen in (119) of Figure 116. The second process within the performance analysis is the classification and mapping of Performance Elements, as seen in (117) of Figure 116.
Figure 135 illustrates the classification of Performance Elements. The newly detected Performance Element (74) is introduced to the Performance Element Collective (34) for a particular Instrument Performance Track. The Performance Element Collective (34) compares the candidate Performance Element (74) against the existing Performance Elements in the Collective by subjecting it to a series of equivalence tests. The compositional equivalence tests consist of a context summary comparison (195) and a compositional partial comparison (197). The mechanical equivalence tests consist of a temporal partial comparison (199) and an event expression stream comparison (201).
The first equivalence test is the context summary comparison, as seen in (195) of Figure 135. The context summary comparison looks for a match in the Microform Carrier signature and the total salience value. Figure 136 illustrates the context summary comparison flowchart.
The second equivalence test is the compositional partial comparison, as seen in ( 197) of Figure 135. The compositional partial comparison looks for a match in the delta octave and relative pitch parameters of the compositional partial.
Figure 137 illustrates the compositional partial comparison.
If the candidate Performance Element returns positive results for the first two equivalence tests, then the candidate Performance Element is compositionally equivalent to a pre-existing Performance Element in the Performance Element Collective. If the candidate Performance Element returns a negative result to either of the first two equivalence tests, then the candidate Performance Element is compositionally unique in the Performance Element Collective.
If the candidate Performance Element is compositionally unique, then the mechanical data of the newly detected Performance Element is used to create a new mechanical index within the newly created compositional group.
However, if the candidate Performance Element is determined to be compositionally equivalent to a pre-existing Performance Element in the Performance Element Collective, then the following tests are performed to determine if the mechanical data is equivalent to any of the pre-existing mechanical indexes in the compositional group.
The third equivalence test is the temporal partial comparison, as seen in (199) of Figure 135. The temporal partial comparison accumulates a total variance between the pico position offsets in the candidate Performance Element, and pico position offsets in a pre-existing Performance Element in the compositional group.
Figure 138 illustrates the temporal partial comparison.
The fourth equivalence test is the event expression stream comparison, as seen in (201) of Figure 135. The event expression stream comparison accumulates a total variance between the pico tuning, volume, and brightness in the candidate Performance Element and the pico tuning, volume, and brightness in a pre-existing Performance Element in the compositional group. Figure 139 illustrates the event expression stream comparison.
If the candidate Performance Element returns a total variance within the accepted threshold for the third and fourth equivalence tests, then the candidate Performance Element is mechanically equivalent to a pre-existing mechanical index within the compositional group.
If the candidate Performance Element returns a total variance that exceeds the accepted threshold for either of the third or fourth equivalence tests, then the candidate Performance Element is mechanically unique within the compositional group.
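The variance-based mechanical tests can be sketched as follows. The two threshold constants are placeholders of our own; the source states only that an accepted threshold exists for each test.

```python
def total_variance(candidate, existing):
    """Accumulated absolute difference between paired values, e.g. the pico
    position offsets of two Performance Elements, or their per-tick
    volume / brightness / pico tuning streams."""
    return sum(abs(a - b) for a, b in zip(candidate, existing))

# Placeholder thresholds - the patent does not give numeric values.
PICO_THRESHOLD = 20
STREAM_THRESHOLD = 50

def mechanically_equivalent(cand_pico, ex_pico, cand_stream, ex_stream):
    """Third and fourth equivalence tests combined: the candidate is
    mechanically equivalent only if both accumulated variances fall
    within their accepted thresholds."""
    return (total_variance(cand_pico, ex_pico) <= PICO_THRESHOLD and
            total_variance(cand_stream, ex_stream) <= STREAM_THRESHOLD)
```

A candidate whose pico offsets drift far from every stored mechanical index fails the third test and is therefore recorded as mechanically unique.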
Figure 140 visualizes the population and mapping of the classification result to the current analysis location. If the candidate Performance Element is found to be compositionally equivalent (180) or mechanically equivalent (182) to a pre-existing entry in the Performance Element Collective (34), the indexes of the matching Performance Elements are identified (183) and the classification result is populated (185) with the matching Performance Element indexes. If the candidate Performance Element is determined to be compositionally unique (186) or mechanically unique (188), new indexes are created (189) in the Performance Element Collective (34), and the classification result is populated (185) with the newly created Performance Element indexes. The index results of the classification process are mapped (191) to the current analysis location in the Instrument Performance Track (82), and the analysis location is incremented (193) to the next bar.
Song Framework Repository Functionality

Figure 141 illustrates an overview of the Song Framework Repository functionality. The Song Framework Repository is best understood as a known database and associated utilities, such as database management utilities, for storing and retrieving music file representations of the present invention, and further, for analyzing such music file representations.
The first function of the Song Framework Repository is the insertion and normalization (37) of Performance Elements (74) within the local Song Framework Output to the universal compositional Performance Element Collective (202) and the mechanical Performance Element Collective (204). The second function of the Song Framework Repository is the re-mapping (41) of Framework Elements (32) of the local Song Framework Output (30) with newly acquired universal Performance Element indices. The third function of the Song Framework Repository is the insertion (39) of the re-mapped Song Framework Output (30) into the Framework Output Store (206).
"Song Framework Repository functionality" utilizes the functionality of the Song Framework Repository database. The Song Framework Repository database accepts a Song Framework Output XML file as input.
Figure 142 visualizes the components of the Song Framework Repository database, in one particular implementation thereof. A comparison facility (208) analyzes the new Song Framework Output XML file (132) in order to normalize and re-map its components against the pre-existing Song Framework Outputs in the Song Framework Repository database (38). The database management facility (60) then allocates the components of the new Song Framework Output XML file (132) into the appropriate database tables within the Song Framework Output Repository database (38).
Figure 143 illustrates the insertion and normalization of local Performance Elements, as seen in (37) of Figure 141. Upon introduction to the Song Framework Repository database (38), all of the compositional Performance Elements (94) and mechanical Performance Elements (96) of the local Song Framework Output are re-classified (37) by the universal compositional Performance Element Collective (202) and the mechanical Performance Element Collective (204). The re-classification process is generally the same process employed by the Song Framework, as seen in Figure 135, for the initial classification of the Performance Elements in the local Performance Element Collectives.
Figure 144 illustrates the re-mapping of Framework Elements within the local Song Framework Output, as seen in (41) of Figure 141. All of the local Instrument Performance Tracks (82) in all of the Framework Elements of the local Song Framework Output are re-mapped (41) with the newly acquired universal Performance Element indexes.
Figure 145 illustrates insertion of the re-mapped Song Framework Output into the Framework Output store, as seen in (39) of Figure 141. The re-mapped Song Framework Output (30) is inserted (39) into the Framework Output store (206) and the Song Framework Output (30) reference is then added (209) to all the newly classified Performance Elements in the universal Performance Element Collectives.
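The re-mapping step can be sketched as a simple index substitution, assuming a local-to-universal index table produced by the re-classification. The data shapes here are illustrative assumptions; the repository schema is not spelled out in the text.

```python
def remap_framework_elements(instrument_tracks, local_to_universal):
    """Rewrite each Instrument Performance Track so that every bar refers
    to a universal Performance Element index instead of a local one.
    instrument_tracks: {track_name: [local index per bar]}
    local_to_universal: {local index: universal index}"""
    return {
        track: [local_to_universal[local_idx] for local_idx in bars]
        for track, bars in instrument_tracks.items()
    }
```

For example, a bass track whose bars reference local elements [1, 2, 1] would, after normalization assigned those elements universal indexes 501 and 502, become [501, 502, 501].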
Reporting

The Reporting Facility of the current invention is generally understood as a known facility for accessing the Song Framework Repository database and generating a series of reports based on analysis of the data therein.
The Reporting Facility, in a particular implementation thereof, generates three types of reports. First, the Song Framework Output checksum report generates a unique identifier for every Song Framework Output inserted into the Song Framework Repository. Second, the originality report indicates common usage of Performance Elements in the Song Framework Repository. Third, the similarity report produces a detailed content and contextual comparison of two Song Framework Outputs.
Figure 146 illustrates the reporting functionality of the current invention.
Currently the report facility (58) queries (211) the Song Framework Repository database (38), and translates (213) the Song Framework Output XML data (132) into Scalable Vector Graphics (SVG) and HTML pages (214). The reporting facility will be expanded in the future to generate various output formats.
A Song Framework Output checksum is generated from the following data: the total number of bars multiplied by the total number of Instrument Performance Tracks; the total number of compositional Performance Elements in the Song Framework Output; the total number of mechanical Performance Elements in the Song Framework Output; and the accumulated total salient value for all compositional Performance Elements in the Song Framework Output. A representative Song Framework Output checksum would be: 340.60.180.5427.
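A sketch of the checksum assembly, assuming the four fields are joined with dots in the order listed above. The field order is inferred from the worked example 340.60.180.5427, and the 17-bar, 20-track split used in the usage example is our own invention (any product of 340 would do).

```python
def framework_checksum(bars, tracks, comp_elements, mech_elements, salience):
    """Dotted checksum: (bars x tracks).comp.mech.salience, with the
    accumulated salient value rounded to a whole number."""
    return f"{bars * tracks}.{comp_elements}.{mech_elements}.{round(salience)}"
```

For instance, 17 bars across 20 Instrument Performance Tracks with 60 compositional elements, 180 mechanical elements, and a total salience of 5427 reproduces the example string.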
An originality report is generated for every Song Framework Output inserted into the Song Framework Repository. Figure 147 shows the elements of the originality report. A histogram is created for each compositional and mechanical Performance Element in the Song Framework Output. The histogram indicates the complete usage of the Performance Element in the entire Song Framework Repository database. The number of Song Framework Outputs that share a variable number of Performance Elements with the current Song Framework Output is also indicated.
The originality report will grow in accuracy as more Song Framework Output files are entered into the Song Framework Repository database. The comparisons in the originality report will form the basis of an automated infringement detection process, as detailed in the "Content Infringement Detection" application below.
Similarity reporting is performed to compare the content and contextual similarities of two specific Song Framework Outputs. Figure 148 illustrates the three content comparison reports and three context comparison reports. The content comparison reports consist of a total similarity comparison (215), a compositional content distribution comparison (217), and a mechanical content comparison (219).
The context comparison reports consist of a full Framework Element comparison (221), a compositional context comparison (223), and a mechanical context comparison (225).
The total compositional similarity report, as seen in (215) of Figure 148, indicates the following: the number of compositionally similar (shared) Performance Elements between the two Song Framework Outputs is determined, the total number of Performance Elements for both Song Framework Outputs is determined, and the content percentage of the common material is determined for each Song Framework Output.
The following table illustrates this comparison:
Common Performance Elements   Total Performance Elements   Common Percentage of Total
5                             42                           11.9
5                             33                           15.1

Figure 149 illustrates the compositional content distribution report, as seen in (217) of Figure 148. The compositional content distribution report indicates the distribution of similar compositional Performance Elements (94) in the Performance Element Collectives (34).
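The percentage column of the comparison table can be reproduced with a small helper. Truncating (rather than rounding) to one decimal place is an assumption that matches the table's figures, and the shared count of 5 applying to both outputs is likewise assumed, since a shared count is symmetric between the two outputs.

```python
import math

def common_percentage(shared_elements, total_elements):
    """Content percentage of common material, truncated to one decimal
    place: 5 shared of 33 total yields 15.1, and 5 of 42 yields 11.9."""
    return math.floor(1000.0 * shared_elements / total_elements) / 10
```
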
Figure 150 illustrates the mechanical similarity report as seen in (219) of Figure 148. For each compositionally similar Performance Element (94), an ordered comparison of the mechanical Performance Elements (96) is performed. The number of mechanical comparisons will be limited to the smallest number of Mechanical Performance Elements (96). The degree of mechanical similarity will be colour coded according to total variance.
Figure 151 illustrates the full Framework Element comparison report as seen in (221) of Figure 148. Framework Elements (32) of both Song Framework Outputs (30) are compared sequentially.
Figure 152 illustrates the compositional context distribution report as seen in (223) of Figure 148. The compositional context distribution report indicates the isolated distribution of similar compositional Performance Elements (94) in Framework Elements (32).
Figure 153 illustrates the mechanical context distribution report as seen in (225) of Figure 148. The mechanical context distribution report indicates the isolated N:\corp\adefazek\Bcckford, Dave\C'an. Patent AppInlAppln as Filed-Aug04.doc distribution of similar mechanical Performance Elements (96) in Framework Elements (32).
Applications of the system of the current invention

Figure 154 illustrates one particular embodiment of the system of the present invention. A known computer is illustrated. The computer program of the present invention is linked to the computer. It should be understood that the present invention contemplates the use of a server computer, personal computer, web server, distributed network computer, or any other form of computer capable of computing the processing steps described herein.
The computer (226), in one particular embodiment of the system, will generally link to audio services to support the Audio to MIDI conversion application (54) functionality. The Translation Engine (56) of the present invention, in this embodiment, is implemented as a CGI-like program that would process a local MIDI file. The Song Module Repository database (38) stores the Song Framework Output XML files, and a Web Browser (228) or other application that enables viewing is used to view the reports generated by the Reporting Application (58).
Figure 155 illustrates a representative client/server deployment of the system of the current invention. The system of the current invention can also be deployed in a client/server environment. The Audio Conversion application (54) would be distributed on multiple workstations (230) with audio services in a secure LAN/WAN
environment. The Translation Engine (56) would be implemented on a server (232), and would be accessed by the workstations (230), for example, through a secure logon process. The Translation Engine (56) would upload the XML files described above to the Song Framework Repository database (38) through a secure connection. A
server (232) would host the Song Framework Repository database (38) and the Reporting application (58) to generate SVG and HTML pages. A Web Browser (228) would access the reporting functionality through a secure logon process. The Translation Engine (56), Song Framework Repository (38), and Reporting application (58) could alternatively share a single server (232), depending on the scale of the deployment.
Otherwise, a distributed server architecture could be used, in a manner that is known.
Figure 156 illustrates a client / server hierarchy between satellite Song Framework Repository servers (234) and a Master Song Framework Repository server (236). The satellite Song Framework Repository servers (234) would incrementally upload their database contents to a Master Song Framework Repository server (236) through a secure connection. The Master Song Framework Repository (236) would normalize the data from all of the satellite Song Framework Repositories (234).
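The normalization step at the Master Song Framework Repository may be sketched, for illustration only, as a de-duplicating merge. The record format and the identity key used below are assumptions; the invention specifies only that satellite contents are uploaded and normalized, not a schema.

```python
# Hedged sketch: record format and identity key are assumed, not disclosed.
def normalize(master: dict, satellite_records: list) -> dict:
    """Merge satellite performance elements into the master collective,
    keeping one copy of each distinct element and re-mapping satellite
    element ids onto master ids."""
    id_map = {}
    for record in satellite_records:
        key = record["identity"]  # e.g. a metric + pitch-sequence signature
        if key not in master:
            master[key] = {"id": len(master), "identity": key}
        id_map[record["id"]] = master[key]["id"]
    return id_map
```

Under this sketch, duplicate elements uploaded from different satellites collapse to a single master entry, and each satellite receives a mapping from its local identifiers to the master identifiers.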
The Reporting functionality of the system of the current invention can be accessed through the Internet via a secure logon to a Song Framework Repository Server. An Electronic Billing/Account System would be implemented to track and charge client activity.
A number of different implementations of the system of the present invention are contemplated, for example: (1) a musical content registration system; (2) a musical content infringement detection system; and (3) a musical content verification system.
Other applications or implementations are also possible, using the musical content Translation Engine of the present invention.
There are two principal aspects to the musical content registration system of the present invention. The first is a relatively small-scale Song Framework Output Registry service that is made available to independent recording artists. The second is an Enterprise-scale Song Framework Output Registry implementation made available to large clients, such as music publishers or record companies. The details of implementation of these aspects, including hardware/software implementations, database implementations, and integration with other systems, including billing systems, can all be provided by a person skilled in the art.
Figure 157 illustrates the small-scale Song Framework Output Registry process. The small-scale content registration involves generally the following steps:
First, the upload technician (18) uploads (21) multi-track audio files (2) to the Audio to MIDI conversion workstation (20) in order to perform an environment setup, as described in the "Preparation of multi-track audio for analysis" section.
Following the environment setup, the content owner (16) supplements (23) the required user data (14) for the appropriate tracks. Alternatively, the satellite technician sends (27) audio tracks (2) to an analysis specialist (24) through a secure network. The specialist supplies user data (14) for the requested audio tracks (2). Once the audio analysis process is complete, a client package (238) is prepared (239) for upload to a central processing station (240). At the central processing station (240), the client package (238) is reviewed (241) for quality assurance purposes, and the intermediate MIDI file (26) is then uploaded (31) to the Translation Engine (56) to create a Song Framework Output XML file (132). The Song Framework Output XML file (132) is then inserted (39) into the Song Framework Registry database (38), and the appropriate reports (242) are generated. Finally, the reports (242) are sent back (243) to the content owner (16).
Figure 158 illustrates the Enterprise-scale Song Framework Output Registry process. The Enterprise-scale content registration process involves the following steps. First, multi-track audio files (2) are prepared to initial specification and uploaded to an Audio to MIDI conversion workstation (20). Next, an upload technician (18) performs an environment setup, as described in the "Preparation of multi-track audio for analysis" section. At this point, analysis specialists (24) examine the audio tracks (2) and supplement all of the required user data (14).
Once the audio analysis is complete, an intermediate MIDI file (26) is uploaded (31) to a local Translation Engine (56) to create a Song Framework Output XML file (132). The Song Framework Output XML file (132) is inserted (39) into a local Song Framework Repository (234). Finally, the local Song Framework Repository (234) updates its contents (245) to a master Song Framework Repository (236), through a secure batch communication process.
The content registration services would be implemented using the client/server deployment strategy.
A second application of the system of the current invention is a content infringement detection system.
The process for engaging in compositional analysis of music to identify copyright infringement is currently as follows. Initially, a musicologist may transcribe the infringing sections of each musical composition to standard notation.
The transcription is provided as a visual aid for an auditory comparison.
Subsequently, the musicologist will then perform the isolated infringing sections (usually on a piano) in order to provide the court with an auditory comparison of the two compositions. The performed sections of music are rendered at the same key and tempo to ease comparison of the two songs.
In order to test for a mechanical copyright infringement, audio segments of the infringing recordings are played for a jury. Waveform displays may also be used as a visual aid for the auditory comparison.
In both types of infringement, the initial test is auditory. The plaintiff has to be exposed to the infringing material, and then be able to recognize their copyrighted material in the defendant's work.
The system of the current invention provides two additional inputs to content infringement detection. The infringement notification service would automatically notify copyright holders of a potential infringement between two songs (particularized below). Also, the similarity reporting service described above would provide a fully detailed audio-visual comparison report of two songs, to be used as evidence in an infringement case.

Figure 159 shows a comparison of Standard Notation vs. the Musical representation of the current invention.
Figure 160 shows the automated infringement detection notification process. The infringement notification service is triggered automatically whenever a new Song Framework Output XML file (132) is entered (39) into the Song Framework Repository database (38). If the new Song Framework Output XML file (132) has exceeded a threshold of similarity with an existing Song Framework Output in the Song Framework Repository database (38), content owners (16) and legal advisors (246) are notified (247). The infringement notification (247) serves only as a warning of a potential infringement.
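For illustration only, the trigger logic of the notification service may be sketched as follows. The similarity function, its score range, and the threshold value are assumptions; the invention states that a threshold of similarity is used but does not fix one.

```python
# Sketch of the trigger logic only; the similarity measure, its [0, 1]
# range, and the threshold value are assumptions.
SIMILARITY_THRESHOLD = 0.8  # assumed value

def on_insert(new_output, repository, similarity, notify):
    """Called whenever a new Song Framework Output enters the repository.
    `similarity` scores two outputs in [0, 1]; `notify` warns the content
    owner and legal advisors of a potential (not proven) infringement."""
    for existing in repository:
        score = similarity(new_output, existing)
        if score >= SIMILARITY_THRESHOLD:
            notify(new_output, existing, score)
    repository.append(new_output)
```

The notification is emitted before the new output is stored, so every prior entry in the repository is checked exactly once per insertion.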
Figure 161 shows the similarity reporting process. The similarity reporting service provides an extensive comparison of two Song Framework Output XML
files (132) in the case of an alleged infringement. The content owners (16) upload (39) their Song Framework Output XML files (132) into the Song Framework Repository database (38). The generated similarity report (248) not only indicates content similarity of compositional and mechanical Performance Elements, but also indicates the usage context of the similar elements within both Song Framework Outputs.
The content infringement detection services could be implemented using the standalone deployment strategy. Alternatively, this set of services could be implemented using the client/server deployment strategy.
A third application of the system of the current invention is content verification. Before content verification is discussed, a brief review of existing content verification methods is useful for comparison.
The IRMA anti-piracy program requires that a replication rights form be filled out by a recording artist who wants to manufacture a CD. This form requires the artist to certify their ownership, or to disclose all of the copyright information for songs that will be manufactured. Currently there is no existing recourse for the CD
manufacturer to verify the replication rights form against the master recording.
Figure 162 visualizes the content verification process. The system of the current invention's content verification process is as follows: First, the content owner (16) presents the Song Framework Output reports (242) of analyzed songs to the music distributor, such as a CD manufacturer (250). Next, the CD manufacturer (250) loads the Song Framework Output report (242) into a reporting workstation (252) and the song status is queried (251) using checksum values through a secure Internet connection. In response to the CD manufacturer query (251), the Master Song Framework Registry (236) returns (253) a status report (254) to the CD
manufacturer (250). The status report (254) verifies song content (samples and sources), authorship, creation date, and the litigation status of the song. Upon confirmation of the master recording (256) content, the CD manufacturer (250) can accept the master recording (256) at a lower risk of copyright infringement.
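The checksum-based status query may be sketched, for illustration only, as a lookup against the master registry. The choice of checksum (SHA-256 here) and the fields of the status record are assumptions; the invention states only that checksum values are used over a secure connection.

```python
# Sketch under assumptions: the checksum algorithm (SHA-256) and the
# status-record fields are illustrative, not disclosed.
import hashlib

def report_checksum(report_bytes: bytes) -> str:
    """Derive a checksum key from a Song Framework Output report."""
    return hashlib.sha256(report_bytes).hexdigest()

def query_status(registry: dict, report_bytes: bytes) -> dict:
    """Look up a report in the master registry by checksum and return its
    status record, covering content sources, authorship, creation date,
    and litigation status."""
    key = report_checksum(report_bytes)
    return registry.get(key, {"status": "unregistered"})
```

Because the lookup key is a checksum of the report itself, any alteration of the report after registration yields a different key, and the query falls through to an "unregistered" status.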
The content verification process can also be used by large-scale content licensors / owners to verify a new submission for use. Music publishers, advertising companies that license music, and record companies would also be able to benefit from the content verification process. The content verification services could be implemented using the remote reporting deployment strategy.
It should be understood that the various processes and functions described above can be implemented using a number of different utilities, or a lesser number of utilities than is described, in a manner that is known. The particular hardware or software implementations are for illustration purposes only. A
person skilled in the art can adapt the invention described to alternate hardware or software implementations. For example, the present invention can also be implemented on a wireless network.
It should also be understood that the present invention involves an overall method, and within the overall method a series of sub-sets of steps. The ordering of the steps, unless specifically stated, is not essential. One or more of the steps can be incorporated into a lesser number of steps than is described, or one or more of the steps can be broken into a greater number of steps, without departing from the invention. Also, it should be understood that other steps can be added to the method described without diverging from the essence of the invention.
Numerous extensions of the present invention are possible.
For example, the Song Framework Registry could be used to identify common patterns within a set of Song Framework Outputs. This practice would be employed to establish a set of metrics that could identify a set of "best practices" or "design patterns" of the most popular songs within a certain genre. The information can be tied to the appeal of specific design patterns to specific demographics. This content could be used as input, for example, to a music creation tool to improve the mass appeal of music, including to specific demographics.
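A minimal sketch of this pattern-identification idea follows, for illustration only. Representing each Song Framework Output as a flat list of Performance Element identities is an assumption; the invention does not prescribe how the metrics would be computed.

```python
# Illustrative sketch: the flat identity-list representation of a Song
# Framework Output is assumed.
from collections import Counter

def common_patterns(outputs: list, min_count: int = 2) -> list:
    """Return (identity, count) pairs for Performance Element identities
    shared across Song Framework Outputs, most frequent first."""
    counts = Counter()
    for output in outputs:
        counts.update(set(output))   # count each identity once per song
    return [(k, c) for k, c in counts.most_common() if c >= min_count]
```

Identities that recur across many outputs in a genre would then serve as candidate "design patterns" for the metrics described above.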
As a further example, Performance Elements can be stored in the Song Framework Repository with software synthesizer data and effects processor data allocated to the Instrument. The synthesizer data and effects processor data would allow a consistent playback experience to be transported along with the performance data. This practice would be useful for archival purposes, in providing a compact file format and a consistent playback experience.
As a still further example, the system of the current invention can be used to construct new Performance Elements, or create a new song out of existing Performance Elements within the Song Framework Registry. Alternatively, the system of the current invention would be used to synthesize new Performance Elements, based on seeding a generator with existing Performance Elements.
This practice would be useful in developing a set of "rapid application development" tools for song construction.
The system of the current invention could be extended to use a standardized notation export format, such as MusicXML as an intermediate translation file.
This would be useful in extending the input capabilities of the Translation Engine.
For the sake of clarity, it should also be understood that the present invention contemplates application of the invention to one or more music libraries to generate the output described.

Claims (21)

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A computer implemented method of converting one or more electronic music files into an electronic musical representation, the computer implemented method comprising the steps of:
(a) providing a song framework including a plurality of song framework rules and associated processing steps for converting a one or more track electronic music file into a song framework output, the song framework output defining:
(i) one or more framework elements;
(ii) one or more performance elements, being bar-level representations of one track of the one or more track electronic music file comprising a structure incorporating at least one of the following:
(A) metric information;
(B) non-metric information being one or more compositional elements and one or more mechanical elements; and (C) expressive information being one or more mechanical performance elements; and (iii) a performance element collective, that functions to perform the further steps of:
(A) maintaining a collection of performance elements within a song framework output, such performance elements being non-matching to other performance elements in the collection;
(B) identifying performance elements according to at least metric information and pitch sequence equivalence; and (C) grouping mechanical performance elements within a grouping structure consisting of performance elements that have common compositional performance elements; and (b) applying the plurality of song framework rules to each track of the one or more track electronic music file, thereby:
(i) detecting the one or more performance elements of each track of the one or more track electronic music file;
(ii) classifying each of the detected one or more performance elements as matching or non-matching, involving a comparison of the metric information, mechanical performance elements, and compositional performance elements of the performance elements to other performance elements stored in the performance element collective; and (iii) mapping each of the one or more performance elements to the corresponding framework elements.
2. The computer implemented method claimed in claim 1, whereby each of the one or more framework elements is comprised of:
(a) a phrase-level metric hierarchy;
(b) an environment track, comprising one or more environment partials mapped across the one or more performance events for the one or more framework elements, said environment partials comprising tempo and key vectors; and (c) a plurality of instrument performance tracks.
3. The computer implemented method claimed in claim 2, whereby:
(a) the function of the environment track is to provide tempo and tonality parameters for each of the one or more performance elements mapped to a particular framework element; and (b) the function of each instrument performance track is to associate a sequence of performance elements with a particular instrument, over the course of the framework element.
4. The computer implemented method claimed in claim 3, whereby the function of each of the one or more performance elements is to map a sequence of note events to a bar-level metric hierarchy, in order to create a bar of music.
5. The computer implemented method claimed in claim 1, whereby the framework elements are derived from one or more environmental parameters defined by the one or more electronic music files, including one or more of: time signature;
tempo; key; or song structure.
6. The computer implemented method claimed in claim 1, whereby the one or more electronic music files consist of one or more MIDI files.
7. The computer implemented method claimed in claim 1, comprising the further step of preparing the one or more electronic music files by performing the following functions:
(a) digitizing each track of the one or more electronic music files, being one or more track electronic music files, to define a single continuous wave file having a substantially consistent length, each wave file being referenced to a common audio marker;
(b) determining audio parameters of the electronic music file, the audio parameters including one or more of the following:
(i) an audio format; and (ii) a source and time index;
for sampled material included in the one or more electronic music files;
(c) determining song environment data for the one or more electronic music files;
(d) defining parameters for an environment track linked to the one or more electronic music files, said environment track comprising one or more environment partials mapped across the one or more performance events for the one or more framework elements, said environment partials comprising tempo and key vectors;
(e) analyzing each track of the one or more electronic files to determine the musical parameters of said track, said musical parameters including whether the track is: pitched; pitched vocal; percussion or complex;
includes a single voice instrument; solo vocal; or multiple vocals;
(f) based on (e) analyzing each track of the one or more electronic files to determine which one of a plurality of operations for converting audio data to MIDI data should be applied to a particular track of the one or more electronic files; and (g) applying the corresponding conversion operation to each track of the one or more electronic files, based on (f).
8. The computer implemented method claimed in claim 1, comprising the further step of applying the plurality of song framework rules in sequence so as to:
(a) construct a framework sequence defining the main sections of a song, and further construct the one or more framework elements;
(b) define instrument performance tracks for the one or more framework elements; and (c) apply a performance analysis operable to:
(i) generate performance elements from the one or more electronic music files; and (ii) map indexes corresponding to the performance elements to the corresponding instrument performance tracks, thereby populating the instrument performance tracks.
9. The computer implemented method claimed in claim 8, whereby the one or more performance elements are generated by:
(a) construction of a carrier structure by:
(i) identifying capture addresses within each beat;

(ii) determining a most salient nanoform carrier structure to represent each beat, being configured to represent metric patterns and variations within note event windows of each measure; and (iii) determining a most salient microform carrier to represent a metric structure of the particular performance element, being configured to represent containing structures for sequences of one or more of the nanoform carrier structures across a bar-length even window of an electronic music stream of the one or more track electronic music file;
(b) construction of a performance element modulator by:
(i) detecting note-on data;
(ii) detecting and allocating controller events; and (iii) translation of (i) and (ii) into performance element modulator data;
and (c) associating performance element modulator data with corresponding microform carriers or nanoform carrier structures through note event identifiers, thereby defining the particular performance element.
10. The computer implemented method of claim 8, whereby the application of the performance analysis consists of the further steps of:
(a) introducing the generated performance elements into the performance element collective for a particular instrument performance track; and (b) comparing the performance elements against existing performance elements in the performance element collective based on a series of equivalence tests.
11. The computer implemented method claimed in claim 1, comprising the further step of applying the plurality of song framework rules to a MIDI file by operation of a translation engine, and thereby creating a song framework output file.
12. The computer implemented method of claim 1, comprising the further step of configuring a computer to apply the plurality of rules and associated processing steps to the one or more electronic music files.
13. The computer implemented method claimed in claim 1, comprising the further step of defining the one or more performance elements by way of a performance element modulator operable to determine for each bar of a track of the one or more track electronic music file:
(a) one or more compositional performance elements, representing a theoretical layer and being capable of capturing an abstracted or quantized container level representation of pitch, duration and meter information;
and (b) one or more mechanical performance elements, representing a performance reproductive layer and serving to capture expressive information contained in continuous controller data of the one or more track electronic music file as input media and the event variance in pitch, duration and meter information from the quantized compositional performance element.
14. The computer implemented method claimed in claim 1, comprising the further step of generating one or more reports by way of a reporting facility that is operable to generate originality reports in regard to one or more electronic music files selected by a user, said originality reports detailing whether performance elements of one or more of the electronic files are matching the one or more performance elements stored in the performance element collective.
15. A computer system for converting one or more electronic music files into an electronic musical representation comprising:
(a) a computer; and (b) a computer application including computer instructions for configuring one or more computer processors to apply or facilitate the application of the computer instructions defining a music analysis engine on the computer, the music analysis engine being operable to define a song framework that includes a plurality of song framework rules and associated processing steps for converting the one or more electronic music files into a song framework output, the song framework output defining:
(i) one or more framework elements;
(ii) one or more performance elements, being bar-level representations of one track of a one or more track electronic music file comprising a structure incorporating at least one of the following:
(A) metric information;
(B) non-metric information being one or more compositional elements and one or more mechanical elements; and (C) expressive information being one or more mechanical performance elements; and (iii) a performance element collective that functions to:
(A) maintain a collection of performance elements within a song framework output, such performance elements being non-matching to other performance elements in the collection;
(B) identify performance elements according to at least metric information and pitch sequence equivalence; and (C) group mechanical performance elements within a grouping structure consisting of performance elements that have common compositional performance elements; and wherein the music analysis engine is operable to apply the plurality of rules of the song framework to each track of the one or more track electronic music file, thereby being operable to:
(iv) detect the one or more performance elements of each track of the one or more track electronic music file;
(v) classify each of the detected one or more performance elements as matching or non-matching, involving a comparison of the metric information, mechanical performance elements, and compositional performance elements of the performance elements to other performance elements stored in the performance element collective; and (vi) map each of the one or more performance elements to the corresponding framework elements.
16. The computer system claimed in claim 15, wherein the computer application includes computer instructions for further defining on the computer a conversion engine that is operable to convert audio files into MIDI files.
17. The computer system claimed in claim 15, wherein the computer system is linked to a database and a database management utility, wherein the computer system is operable to store a plurality of electronic musical representations, including at least performance elements, to the database, the computer system thereby defining an electronic music registration system.
18. The computer system claimed in claim 15, wherein the computer system is linked to a database and a database management utility, wherein the computer system is operable to:
(a) store a plurality of electronic musical representations to a database;
(b) normalize a song framework output's performance element collective against a universal performance element collective stored to the database;
and (c) re-map and insert the electronic musical representations of the electronic music file into a master framework output store;
wherein the computer system defines a song framework repository.
19. A computer program product for converting one or more electronic music files into an electronic musical representation, for use on a computer, the computer program product comprising:

(a) a computer useable medium; and (b) computer instructions stored on the computer useable medium, said instructions for configuring one or more computer processors to apply, or facilitate the application of the computer instructions defining a computer application on the computer, and for configuring one or more processors to perform the computer application defining a music analysis engine, the music analysis engine defining a song framework that includes a plurality of rules and associated processing steps for converting the one or more electronic music files into a song framework output, the song framework output defining:
(i) one or more framework elements;
(ii) one or more performance elements, being bar-level representations of one track of the one or more electronic music files comprising a structure incorporating at least one of the following:
(A) metric information;
(B) non-metric information being one or more compositional elements and one or more mechanical elements; and (C) expressive information being one or more mechanical performance elements; and (iii) a performance element collective that functions to:
(A) maintain a collection of performance elements within a song framework output, such performance elements being non-matching to other performance elements in the collection;
(B) identify performance elements according to at least metric information and pitch sequence equivalence; and (C) group mechanical performance elements within a grouping structure consisting of performance elements that have common compositional performance elements; and wherein the music analysis engine is operable to apply the plurality of rules of the song framework to each track included in one or more electronic music files, thereby being operable to:
(iv) detect the one or more performance elements of each track of the electronic music files;
(v) classify each of the detected one or more performance elements as matching or non-matching, involving a comparison of the metric information, mechanical performance elements, and compositional performance elements stored in the performance element collective; and (vi) map each of the one or more performance elements to the corresponding framework elements.
20. The computer program product claimed in claim 19, wherein the computer application further defines on the computer a comparison facility that is operable to compare the electronic musical representations of at least two of the one or more electronic music files, by way of a series of equivalence tests, and establish whether any of the one or more electronic music files includes original performance elements of another of the one or more electronic music files.
21. The computer program product claimed in claim 19, wherein the computer application further defines a reporting facility that is operable to generate originality reports, reporting matching performance elements, in regard to one or more electronic music files selected by a user.
CA2478697A 2004-08-20 2004-08-20 System, computer program and method for quantifying and analyzing musical intellectual property Expired - Fee Related CA2478697C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US49640104P 2004-08-20 2004-08-20
US60/496,401 2004-08-20

Publications (2)

Publication Number Publication Date
CA2478697A1 CA2478697A1 (en) 2006-02-20
CA2478697C true CA2478697C (en) 2013-10-15

Family

ID=35874778

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2478697A Expired - Fee Related CA2478697C (en) 2004-08-20 2004-08-20 System, computer program and method for quantifying and analyzing musical intellectual property

Country Status (1)

Country Link
CA (1) CA2478697C (en)


Also Published As

Publication number Publication date
CA2478697A1 (en) 2006-02-20


Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed

Effective date: 20170821