WO2002082420A1 - Storing multipart audio performance with interactive playback - Google Patents

Storing multipart audio performance with interactive playback

Info

Publication number
WO2002082420A1
WO2002082420A1 PCT/US2002/010976 US0210976W WO02082420A1 WO 2002082420 A1 WO2002082420 A1 WO 2002082420A1 US 0210976 W US0210976 W US 0210976W WO 02082420 A1 WO02082420 A1 WO 02082420A1
Authority
WO
WIPO (PCT)
Prior art keywords
performance
interactive
audio
virtual
pool
Prior art date
Application number
PCT/US2002/010976
Other languages
French (fr)
Inventor
Bradley J. Naples
Kevin D. Morgan
Original Assignee
Musicplayground, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/900,289 external-priority patent/US20020144588A1/en
Priority claimed from US09/900,287 external-priority patent/US20020144587A1/en
Application filed by Musicplayground, Inc. filed Critical Musicplayground, Inc.
Priority to JP2002580306A priority Critical patent/JP4267925B2/en
Publication of WO2002082420A1 publication Critical patent/WO2002082420A1/en


Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/36Accompaniment arrangements
    • G10H1/361Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10H1/365Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems the accompaniment information being stored on a host computer and transmitted to a reproducing terminal by means of a network, e.g. public telephone lines
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/005Non-interactive screen display of musical or status data
    • G10H2220/011Lyrics displays, e.g. for karaoke applications
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/011Files or data streams containing coded musical information, e.g. for transmission
    • G10H2240/031File merging MIDI, i.e. merging or mixing a MIDI-like file or stream with a non-MIDI file or stream, e.g. audio or video
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/011Files or data streams containing coded musical information, e.g. for transmission
    • G10H2240/046File format, i.e. specific or non-standard musical file format used in or adapted for electrophonic musical instruments, e.g. in wavetables
    • G10H2240/056MIDI or other note-oriented file format
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/011Files or data streams containing coded musical information, e.g. for transmission
    • G10H2240/046File format, i.e. specific or non-standard musical file format used in or adapted for electrophonic musical instruments, e.g. in wavetables
    • G10H2240/061MP3, i.e. MPEG-1 or MPEG-2 Audio Layer III, lossy audio compression
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/171Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H2240/281Protocol or standard connector for transmission of analog or digital data to or from an electrophonic musical instrument
    • G10H2240/295Packet switched network, e.g. token ring
    • G10H2240/305Internet or TCP/IP protocol use for any electrophonic musical instrument data or musical parameter transmission purposes
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/171Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H2240/281Protocol or standard connector for transmission of analog or digital data to or from an electrophonic musical instrument
    • G10H2240/311MIDI transmission

Definitions

  • Patent Application Serial No. 09/900,287, entitled "A Virtual Music System", filed on July 6, 2001; U.S. Provisional Application Serial No. 60/282,420, entitled "A Multimedia Data File", filed April 9, 2001; U.S. Provisional Application Serial No. 60/282,549, entitled "A Virtual Music System", filed April 9, 2001; U.S. Provisional Application Serial No. 60/288,876, entitled "A Multimedia Data File", filed May 4, 2001; and U.S. Provisional Application Serial No. 60/288,730, entitled "An Interactive Karaoke System", filed May 4, 2001.
  • TECHNICAL FIELD This invention relates to multipart data files.
  • MPEG Moving Picture Experts Group
  • MIDI Musical Instrument Digital Interface
  • MIDI was designed for the recording and playback of digital audio content on synthesizers.
  • MIDI streams do not represent audio content directly but provide information about how the content is to be synthesized.
  • MIDI streams are multi-track, where each track can be mapped to a discrete profile such as a musical instrument. Each track of the MIDI stream includes the discrete notes to be played by that instrument. Since a MIDI file is the computer equivalent of traditional sheet music for a particular song (figuratively speaking, as opposed to the sound recording for the song itself), these files tend to be small and compact when compared to files which record the audio content directly and continuously.
  • MIDI streams typically require some form of wave table or FM synthesizer chip to generate their sounds. Additionally, MIDI files tend to lack the richness and robustness of actual sound recordings of the same content.
  • MP3 streams, unlike MIDI streams, contain actual sound recordings of audio content.
  • MP3 streams are single track files and do not include information concerning the specific musical notes or the instruments utilized in the recording.
  • MIDI files typically require additional hardware in order to be played back.
  • MP3 files can quite often be played back on a modern multimedia personal computer with a minimal amount of specialized hardware.
  • the invention features a computer-readable medium having a data structure encoding an audio performance for interactive playback stored thereon.
  • the data structure includes a virtual instrument pool that encodes an interactive part of the audio performance. Audio content of the interactive part is encoded at least in a sequence of synthesizer control data. Each datum in the synthesizer control data specifies a digital sample of the audio content to be played back.
  • the data structure also includes a global accompaniment pool, which encodes non-interactive portions of the audio performance.
  • the global accompaniment pool includes timing information to synchronize the playback of the audio performance.
  • the synthesizer control data is MIDI data.
  • the digital sample is an MP3 clip.
  • the global accompaniment pool encodes a non-interactive part of the audio content of the audio performance.
  • the global accompaniment pool includes a collection of sound fonts, in which each sound font provides parameters for synthesizing the playback of an interactive part.
  • the invention features a computer-readable medium that stores a data structure which encodes an audio performance for interactive playback.
  • the data structure includes a global accompaniment pool, which encodes a non-interactive part of the audio performance.
  • a portion of the non-interactive part is encoded as synthesizer control data, while another portion of the non-interactive part is encoded as digital samples of the audio performance.
  • the data structure also includes a virtual instrument pool, which encodes an interactive part of the audio performance.
  • the interactive part has audio content encoded at least in synthesizer control data. Each datum in the synthesizer control data specifies musical notes to be synthesized, or specifies a digital sample of the audio content to be played back.
  • the synthesizer control data is MIDI data.
  • the digital samples are MP3 clips.
  • the virtual instrument pool includes cue data that specifies prompts coordinated with the audio content of the interactive part.
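  • As a rough illustration of the two pools described above, the following Python sketch models an encoded performance. All class and field names (SynthControlEvent, clip_index, and so on) are illustrative assumptions, not names taken from the specification.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class SynthControlEvent:
    """One datum of synthesizer control data (e.g., a MIDI event)."""
    time_ms: int
    note: Optional[int] = None        # musical note to synthesize, if any
    clip_index: Optional[int] = None  # digital sample (e.g., MP3 clip) to play, if any


@dataclass
class VirtualInstrumentPool:
    """Encodes an interactive part of the audio performance."""
    control_events: List[SynthControlEvent] = field(default_factory=list)
    cues: List[Tuple[int, int]] = field(default_factory=list)  # (start_ms, duration_ms) prompts


@dataclass
class GlobalAccompanimentPool:
    """Encodes the non-interactive portions plus master timing information."""
    tempo_bpm: float = 120.0
    length_ms: int = 0
    control_events: List[SynthControlEvent] = field(default_factory=list)
    clips: List[bytes] = field(default_factory=list)        # digital samples (e.g., MP3 data)
    sound_fonts: List[bytes] = field(default_factory=list)  # per-part synthesis parameters


@dataclass
class PerformanceFile:
    interactive_parts: List[VirtualInstrumentPool]
    accompaniment: GlobalAccompanimentPool
```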
  • the invention features code stored on a computer readable medium.
  • the code is for a computer in an entertainment system that includes an audio output subsystem, an input device, and a memory storing a musical performance data structure having an interactive portion of a musical performance and an accompanying, non-interactive portion of the musical performance.
  • the code includes a virtual manager object which causes the computer to read the musical performance data structure stored in the memory and generate a virtual object representing a virtual instrument identified in the performance data structure.
  • the virtual object causes the computer to map user input from the input device to the interactive portion of the musical performance and play the mapped interactive portion of the musical performance through the audio output subsystem.
  • the code also includes a global accompaniment object which causes the computer to play the accompanying non-interactive portion of the musical performance through the audio output system.
  • the stored musical performance data structure identifies a plurality of different virtual instruments, each representing a different musical instrument.
  • the virtual manager object causes the computer to generate a plurality of virtual objects, each of which represents a different corresponding one of the identified plurality of instruments.
  • Each of the virtual objects causes the computer to map user input from input devices to a corresponding part of the interactive portion of the musical performance and play the mapped corresponding part of the interactive portion of the musical performance through the audio output subsystem.
  • the global accompaniment object also includes logic which when executed on the computer causes it to provide a master timing signal for the virtual object.
  • the virtual object also includes logic which causes the computer to display a visual representation of the timing cues through the video display system to aid the user in playing the virtual instrument.
  • if the stored musical performance data structure includes a plurality of digital clips, each representing a different part of the non-interactive portion of the musical performance, and a sequence of trigger points, each of which presents timing information and identifies which one of the digital clips is to be played at times identified in the timing information, then the global accompaniment object includes logic which causes the entertainment system to play, through the audio output subsystem, the identified one of the plurality of digital clips at the appropriate time as identified by the stored sequence of trigger points.
  • the accompaniment object further includes logic that causes the computer to retrieve the sound fonts from the stored musical performance data structure and load them into the synthesizer to control the character of the audio output subsystem.
  • FIG. 1 A is a block diagram of an interactive karaoke system.
  • FIG. 1B is a flowchart of a part encoding process.
  • FIG. 2 is a block diagram of a multipart data file.
  • FIG. 3A is a block diagram of a chunk.
  • FIG. 3B is a block diagram of a part chunk.
  • FIG. 4 is a block diagram of a client device and connected devices.
  • FIG. 5 is a block diagram of software layers.
  • FIG. 6A is a block diagram of object classes and interfaces.
  • FIG. 6B is a flowchart of system behavior.
  • FIG. 6C is a flowchart of system initialization.
  • FIG. 7A is a block diagram of a performance object.
  • FIG. 7B is a flowchart of a live interactive playback process.
  • FIG. 8A is a diagram of an application window.
  • FIG. 8B is a block diagram of a peripheral manager object.
  • FIG. 9A is a block diagram of a virtual instrument manager.
  • FIG. 9B is a flowchart of a VI manager load process.
  • FIG. 10A is a flowchart of a file selection process.
  • FIG. 10B is a flowchart of a part selection process.
  • FIG. 11A is a block diagram of a virtual instrument object.
  • FIG. 11B is a diagram of virtual instrument inheritance.
  • FIG. 12A is a first diagram of a user area.
  • FIG. 12B is a second diagram of a user area.
  • FIG. 13A is a block diagram of a global accompaniment.
  • FIG. 13B is a flowchart of a global accompaniment load process.
  • FIG. 14A is a diagram of a performance timer interface.
  • FIG. 14B is a diagram of a transport interface.
  • FIG. 14C is a diagram of a performance pool interface.
  • FIG. 15A is a flowchart of a mapping process.
  • FIG. 15B is a flowchart of a real-time mapping process.
  • FIG. 16 is a flowchart of a MIDI mapping playback process.
  • a data file contains a standardized performance of music or sound, digitally encoded, typically at a high quality (for instance, comparable to FM radio or better).
  • Methods for digitally encoding the sound include digital recordings or samples in a format such as MP3, as well as synthesizer parameters in a format such as MIDI.
  • the standardized performance is encoded in one or more parts that can be played back synchronously by an interactive karaoke system. For instance, the standardized performance can be a song or musical performance, with various parts allocated to musicians and their vocals or instruments.
  • the data file contains additional content such as timing cues, lyrics, and other features, as will be explained.
  • the additional content is time-correlated to the audio content for synchronous playback.
  • One or more human users can use the interactive karaoke system.
  • Each user has an input device and a part to "play", i.e., to interact with in real time via the input device.
  • the interactive karaoke system presents a user interface via a display device to the users.
  • the interactive karaoke system manages synchronous playback of the audio content. During playback, the karaoke system visually prompts each user to interact with the karaoke system.
  • the interactive karaoke system correlates user inputs at the input device to the user's part.
  • the interactive karaoke system then plays audio content from the part to simulate the user playing the part.
  • if the audio content represents a musical performance, for instance, the interactive karaoke system can recreate a version of that musical performance as apparently played by the one or more users.
  • a virtual instrument uses a part, an input device, and a sound font. Virtual instruments are encoded as software objects generated and maintained by the karaoke system.
  • a live performance is the karaoke system's rendering of the standardized performance after adjusting for real-time user inputs and for user preferences.
  • the live performance usually deviates from the standardized performance as a result of these adjustments. For example, if a user's inputs stray too far from the timing information encoded in the part, then the karaoke system will suppress all or part of the audio content.
  • the karaoke system plays samples from the standardized performance according to the timing of the real-time user input. If the user deviates too far from the timing of the standardized performance, therefore, the live performance will deviate as well. Still other deviations can be due to system settings that the user chooses. For instance, a user can choose to have the karaoke system omit one or more parts of the standardized performance. The variations between live performances and the standardized performance contribute to the entertainment value of the karaoke system. Interactive aspects of the system and the content of the multipart file are suitable for musical instruction, as well. Still another use of the multipart file applies to deejay software.
  • an interactive karaoke system 10 plays multipart data files 14.
  • Each multipart data file 14 corresponds to a standardized performance 15 such as a song 15a or audio content 15b.
  • Each standardized performance 15 contains one or more parts 15c, which typically are audio content of standardized performance 15 assigned to a particular instrument or human performer.
  • Data file 14 includes either a part chunk 42 or a tracks chunk 38a for each part 15c of standardized performance 15, as will be explained.
  • Multipart data file 14 contains sufficient information for system 10 to reproduce standardized performance 15 and parts 15c.
  • Karaoke system 10 includes interactive and audio-visual features. For instance, a user 16 interacts with system 10 via an input device 28, which can be a musical input device 28". User 16 views a visual display device 26, through which system 10 displays information to user 16. Audio output subsystem 27 produces sound audible to user 16, including the live performance.
  • System logic 18 includes procedures encoded as instructions that can be carried out by a processing device, as will be explained. In other words, system logic 18 is software. System logic 18 includes a player application 20 and an engine library 22, explained later.
  • system 10 distinguishes between “interactive” or “non-interactive” parts 15c of a standardized performance 15.
  • System 10 makes interactive parts 15c available to be played by user 16 during a live performance.
  • System 10 can render interactive parts 15c either automatically (in a demonstration or guide mode) or interactively (subject to user input stimuli), as will be explained.
  • system 10 renders non-interactive parts 15c automatically during a live performance.
  • Non-interactive parts 15c are background or accompaniment to interactive parts 15c.
  • interactive and non-interactive parts 15c are encoded in data file 14.
  • interactive parts 15c correspond to part chunks 42 in VI pool 40 (shown in FIG. 2), while non-interactive parts 15c correspond to tracks chunk 38a in accompaniment pool 38.
  • Part encoding process 19 maps parts 15c to portions of a data file 14, broadly speaking.
  • Part encoding process 19 receives a standardized performance 15 with each part 15c designated interactive or non-interactive (process 19a). For example, a human administrator could provide such designations.
  • Part encoding process 19 selects a part 15c from a standardized performance 15 to be encoded in a data file 14 (process 19b). Part encoding process 19 tests whether part 15c is interactive (process 19c). If the test is affirmative, part encoding process 19 encodes part 15c as a virtual instrument (process 19d). For instance, the part 15c is mapped to a part chunk 42 in VI pool 40 in data file 14. If the test is not affirmative, part encoding process 19 encodes part 15c as a portion of the global accompaniment (process 19e). For instance, the part 15c is mapped to a tracks chunk 38a in accompaniment pool 38 in data file 14.
  • Part encoding process 19 returns to process 19b for each part 15c in the input (process 19f).
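  • A minimal sketch of part encoding process 19, assuming hypothetical Part and DataFile types: interactive parts are routed to the VI pool, non-interactive parts to the global accompaniment.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Part:
    name: str
    interactive: bool   # designation supplied with the standardized performance (process 19a)
    content: bytes      # encoded audio content for this part


@dataclass
class DataFile:
    vi_pool: List[bytes] = field(default_factory=list)               # part chunks 42
    accompaniment_tracks: List[bytes] = field(default_factory=list)  # tracks chunk 38a


def encode_parts(performance: List[Part]) -> DataFile:
    """Route each designated part either to the VI pool (interactive)
    or to the global accompaniment (non-interactive)."""
    out = DataFile()
    for part in performance:                              # processes 19b / 19f: iterate
        if part.interactive:                              # process 19c: test designation
            out.vi_pool.append(part.content)              # process 19d
        else:
            out.accompaniment_tracks.append(part.content)  # process 19e
    return out
```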
  • a multipart data file 14 includes a header 32 and a body 34.
  • the header 32 typically precedes the body 34 in file 14.
  • the header 32 contains an encryption flag 32a that indicates whether body 34 is encrypted, and a song identifier 32b.
  • Song identifier 32b is a value that uniquely identifies song 15a relative to other songs 15a. For example, song identifier 32b can act as a product number in a publisher's catalog of songs 15a.
  • Body 34 includes song information 36, an accompaniment pool 38, and a virtual instrument (or "VI") pool 40.
  • Song information 36 specifies the standardized performance 15 associated with multipart data file 14.
  • Song information 36 includes fields such as title 36a, artist 36b, description 36c, length 36d, genre 36e, subgenre 36f, publisher 36g, copyright 36h, writers 36i, version 36k, format 36m, and difficulty rating 36n.
  • Title 36a is a name that identifies the standardized performance 15 to user 16.
  • Description 36c, genre 36e, and subgenre 36f further explain the standardized performance 15 to user 16.
  • Artist 36b indicates one or more artists represented in the standardized performance 15.
  • Length 36d indicates the duration in time of the standardized performance 15.
  • Publisher 36g, copyright 36h, and writers 36i identify intellectual property rights in the standardized performance 15, while version 36k and format 36m are metadata that assist different versions of system 10 (for instance, future revisions) in recognizing the rubrics in place at the time that data file 14 was encoded.
  • Difficulty rating 36n is a measure of the overall difficulty of the parts 15c in the standardized performance 15.
  • Accompaniment pool 38 and VI pool 40 include data formatted as chunks 50. Moreover, accompaniment pool 38 and VI pool 40 themselves use the chunk 50 format. Chunks 50 are described with reference to FIG. 3A.
  • accompaniment pool 38 contains information that interactive karaoke system 10 interprets in order to manage a live performance and to render non-interactive parts 15c. Furthermore, accompaniment pool 38 provides sound fonts 39 specific to the standardized performance 15, as will be explained. Accompaniment pool 38 contains a tracks chunk 38a, a soundbank chunk 38b, a DA (for "digital audio") trigger chunk 38c, and a DA chunk 38d.
  • the tracks chunk 38a encodes global accompaniment content.
  • the tracks chunk 38a includes timing to define the tempo and length at which system 10 will render the corresponding standardized performance 15.
  • the tracks chunk 38a usually (but not always) also encodes actual audio content.
  • the tracks chunk 38a could be part of a standardized performance 15 that contains an unaccompanied part 15c, for instance a solo vocal performance.
  • the standardized performance 15 is still encoded with a global accompaniment track 38a, at least to provide a master timing signal.
  • the soundbank chunk 38b provides sound fonts 39 specific to the standardized performance 15 corresponding to file 14.
  • a sound font 39 includes samples and acoustical characteristics for a virtual instrument.
  • Acoustical characteristics include the envelope, or volume of a sample as it moves over time.
  • the envelope typically includes an attack (initial volume rising rapidly over time), an initial decay from the attack, a sustain (held for as long as the note needs to be held), and a release (what happens to the sound when the instrument is done playing the note).
  • for a sound font representing an overdriven guitar, for example, the sample will be an actual recording of an overdriven guitar playing a defined note or frequency.
  • if user 16 provides an input stimulus that, according to performance track 48a (shown in FIG. 3B), corresponds to a note having the same frequency as the sample, the sample will be played without modification. However, if that input stimulus corresponds to a note at a different frequency than the frequency of the sample, interactive karaoke system 10 will shift the frequency of the sample to that of the required note.
  • Synthesizer 66a (shown in FIG. 5) can perform frequency shifts.
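  • The frequency shift described above can be illustrated with a short sketch. The function below assumes equal-tempered tuning and MIDI note numbering; it is an illustration, not code from the specification.

```python
def playback_rate_for_note(sample_note: int, required_note: int) -> float:
    """Ratio by which to resample a recorded sample (e.g., an overdriven-guitar
    recording of MIDI note sample_note) so that it sounds at required_note.
    Assumes equal temperament: one semitone is a frequency ratio of 2**(1/12)."""
    return 2.0 ** ((required_note - sample_note) / 12.0)


# If the sound font's sample was recorded at A4 (MIDI 69) and the performance
# track calls for C5 (MIDI 72), the sample is played back about 19% faster:
print(playback_rate_for_note(69, 72))  # ~1.189
```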
  • sound fonts 39 are compatible with technologies and products from Creative Labs, Inc.
  • DA trigger chunk 38c gives a set of control messages that allow playing digital audio clips such as MP3 samples. The clips themselves are stored in DA chunk 38d.
  • DA trigger chunk 38c indexes the clips and includes information that maps MIDI note event values to MP3 samples, for example in a table of pairs that associate note event values with clips.
  • the DA guide track 48g associated with a part 15c can use these indexes as a space-efficient shorthand when referencing the clips.
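  • A sketch of the kind of mapping table the DA trigger chunk 38c describes; the note values and clip names here are made up for illustration.

```python
from typing import Dict, Optional

# Hypothetical contents of a DA trigger chunk: MIDI note event values paired
# with the digital audio clips (stored in the DA chunk 38d) they stand for.
da_trigger_table: Dict[int, str] = {
    60: "clip_00_verse_vocal.mp3",
    62: "clip_01_chorus_vocal.mp3",
    64: "clip_02_outro_vocal.mp3",
}


def clip_for_note(note_value: int) -> Optional[str]:
    """Resolve a MIDI note event value to its digital audio clip, if one is mapped."""
    return da_trigger_table.get(note_value)
```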
  • VI pool 40 includes a collection of part chunks 42.
  • Multipart data file 14 includes a part chunk 42 for each virtual instrument playable in the corresponding standardized performance 15.
  • Part chunk 42 formats are explained with reference to FIG. 3B.
  • a part chunk 42 holds the data that encodes an interactive part 15c.
  • the VI Manager looks for the VI pool 40 during startup and generates a virtual instrument object 80 for each part chunk 42.
  • a chunk 50 is a format for storage of digital information.
  • the chunk 50 format can store a wide range of data.
  • Chunk 50 includes a metadata portion 52 and a data portion 54.
  • Metadata fields describe the nature of data stored in the data portion 54.
  • Metadata 52 includes name 52a, type 52b, size 52c, an encryption indicator 52d, and a compression indicator 52e.
  • Encryption indicator 52d indicates whether data portion 54 is encrypted.
  • Compression indicator 52e describes a compression scheme used in data portion 54.
  • metadata 52 is stored as plaintext, while data portion 54 is stored with encryption and compression.
  • Examples of data stored in data portion 54 include digital audio recordings, MIDI data, and text.
  • Data portion 54 can also store additional chunks 50; that is, the structure of chunk 50 is recursive. Size 52c indicates when a given chunk 50 ends.
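  • A sketch of the recursive chunk 50 layout in Python. The field names follow the metadata fields listed above, but the in-memory representation (and the note that size would be computed during serialization) is an assumption.

```python
from dataclasses import dataclass
from typing import List, Union


@dataclass
class Chunk:
    """Metadata 52 (name, type, size, encryption, compression) describing a
    data portion 54. The data portion can hold raw bytes (audio, MIDI, text)
    or nested chunks, so the structure is recursive; size tells a reader
    where the chunk ends (here it would be filled in during serialization)."""
    name: str
    type: str
    size: int
    encrypted: bool
    compression: str  # e.g., "none" or the name of a compression scheme
    data: Union[bytes, List["Chunk"]] = b""


# A pool (such as the VI pool 40) is itself a chunk whose data portion is a
# list of further chunks:
part_chunk = Chunk(name="Lead Guitar", type="part", size=0,
                   encrypted=True, compression="none", data=b"...")
vi_pool = Chunk(name="VI pool", type="pool", size=0,
                encrypted=False, compression="none", data=[part_chunk])
```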
  • a part chunk 42 includes an information chunk 44 and a data chunk 44.
  • Information chunk 44 includes a name 42a, a type 42b, a difficulty rating 42c, and a description 44d.
  • the name 42a for the part 15c identifies it to user 16.
  • Difficulty rating 42c and a description 44d further explain the part 15c to user 16.
  • Type 42b allows part 15c to be matched to appropriate virtual instruments: for instance, drum parts 15c to drum instruments.
  • the data chunk 44 contains MIDI data.
  • the MIDI data is formatted into MIDI tracks.
  • Track types include guide track 48b, performance track 48a, cue track 48c, score track 48d, local accompaniment track 48e, video track 48f, and DA guide track 48g.
  • Guide track 48b is a non-interactive complement to an interactive part 15c.
  • Guide track 48b encodes the portion of a standardized performance 15 corresponding to a part 15c.
  • User 16 can toggle the playback of guide track 48b on and off manually.
  • the system can play guide track 48b automatically.
  • User 16 can configure system 10 such that a live performance has no user assigned to a given interactive part. When the audio content of that part is needed for the live performance, system 10 renders the audio content of the guide track 48b non-interactively - for instance, in lieu of an interactive rendering of performance track 48a.
  • Guide track 48b can be stored in several formats.
  • Guide track 48b can include a synthesizer control stream, such as a MIDI stream, or a sound recording file 94, such as an MP3 file.
  • one or more guide tracks 48b can be selectively played to provide guide information to user 16.
  • This guide information provides insight to the user concerning the pitch, rhythm, and timbre of the performance of that particular virtual instrument. For example, if user 16 is singing an unfamiliar song 15a, guide track 48b can be played.
  • Performance track 48a encodes audio content that is the basis for the live performance of a part 15c when user 16 provides acceptable input.
  • Performance track 48a includes a MIDI stream.
  • the note event values of the MIDI stream encode synthesizer inputs.
  • Virtual instruments need not have a performance track 48a. A part for a string input device 28 or a percussion input device 28 typically does have a performance track 48a. For such parts, interactive karaoke system 10 must generate a note having the appropriate pitch (as specified by performance track 48a) for each input stimulus received. User input for vocal parts, however, does not require system 10 to generate a note. Instead, user 16 provides vocal part inputs via a microphone 28b (shown in FIG. 5).
  • cue track 48c indicates how and when system 10 should prompt user 16 for input during the live performance.
  • the prompts do not have to correspond to the performance track 48a on a one-to-one basis. Instead, typically, the prompts summarize the performance track 48a. This summarizing helps system 10 simplify parts so that user 16 does not have to play every note in performance track 48a. Cues in cue track 48c can collect several notes of performance track 48a into a single prompt.
  • Cue track 48c specifies timing intervals during which the user is prompted for input stimuli. In general, cue intervals do not overlap.
  • the timing (both the start and duration) of a cue interval has several functions. It shows when a prompt should be displayed to the user. The interval also indicates sections of the performance track 48a that will be played if acceptable user input occurs during that window.
  • Score track 48d encodes musical notations that are synchronized with the performance track 48a for display during a live performance.
  • the notations can take several forms. One form is textual descriptions of chords, such as "F#5" or "C5". Notations can also describe conventional musical notations, for instance staff or tablature.
  • Local accompaniment track 48e within a virtual instrument part 15c is distinct from the global accompaniment.
  • Local accompaniment track 48e provides additional audio "fill" for the virtual instrument part as needed.
  • system 10 can create the audio illusion that the user is playing an entire instrument part, when in fact the input stimuli only correspond to a portion of the standardized performance 15 of the part.
  • the standardized performance 15 can be a combination of the performance track 48a and the local accompaniment track 48e.
  • Consider a drum kit, for example. As a physical device, a drum kit can be fairly complex.
  • Local accompaniment track 48e allows user 16 to play a subset or an approximation of the total notes in the part and to have the rest of the notes provided anyway. For instance, in the drum example, one option is for the user 16 to just play the snare-drum part, while an accompaniment track within the VI track provides kick drum, tom-tom, high hat, and so forth.
  • system 10 does not render the audio content of local accompaniment track 48e.
  • Video track 48f provides interactive visuals synchronized to the live performance.
  • Video track 48f includes a time-encoded series of visual frames for system 10 to present to user 16 in response to user interaction. For instance, automated music training can benefit from video response. Video track 48f can include a stock series of pictures or movies, coordinated to certain points in standardized performance 15. For instance, the video track 48f can depict a turntable for a deejay application.
  • the video track 48f can offer a different, customized version of a turntable.
  • the DA guide track 48g is similar to the guide track 48b but operates specifically with digital audio clips.
  • DA guide track 48g uses MIDI control messages to point to digital audio clips, indexed in the DA trigger chunk 38c and stored in the DA chunk 38d.
  • DA guide track 48g includes a time-encoded series of trigger intervals. The trigger intervals indicate when a given clip should be played.
  • the note number indicates which clip to play, the note placement in time indicates when to play it, and the note duration indicates for how long to play it.
  • DA guide track 48g is useful at least when the standardized performance 15 includes audio content that cannot be synthesized satisfactorily, such as with a particular vocal performance or, in general, any performance with unusual or distinctive sonic qualities.
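  • Putting the DA trigger chunk and the DA guide track together, the following sketch shows how trigger intervals encoded as note events might be turned into a clip playback schedule. The types and the trigger_table argument are illustrative assumptions.

```python
from typing import Dict, List, NamedTuple


class NoteEvent(NamedTuple):
    note: int         # which clip, via the DA trigger chunk index
    start_ms: int     # note placement: when to start the clip
    duration_ms: int  # note duration: how long to let it play


class ClipTrigger(NamedTuple):
    clip: str
    start_ms: int
    duration_ms: int


def schedule_clips(guide_track: List[NoteEvent],
                   trigger_table: Dict[int, str]) -> List[ClipTrigger]:
    """Turn DA guide track note events into a digital-audio playback schedule:
    the note number selects the clip, the placement gives the start time, and
    the duration bounds how long the clip plays."""
    return [ClipTrigger(trigger_table[e.note], e.start_ms, e.duration_ms)
            for e in guide_track if e.note in trigger_table]
```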
  • a client device 12 executes system logic 18 of karaoke system 10.
  • client device 12 is a personal computer.
  • Client device 12 includes main memory 12a, storage 12b, and a processor 12c, interconnected by a bus 12d.
  • Storage 12b is a non- volatile storage medium such as a disk drive.
  • Processor 12c executes machine-readable instructions stored in main memory 12a or in storage 12b, or both, according to operating system 18a.
  • Bus 12d carries communications between components of the client device 12.
  • operating system 18a is a Microsoft Windows operating system such as Windows 98, Windows 98SE, Windows ME, Windows 2000, Windows XP, or other compatible operating systems.
  • Audio output subsystem 27 includes components for the reproduction of sound under the control of processor 12c.
  • on client device 12, this typically includes a sound card, a loudspeaker or headphones, and an amplifier, together with software drivers for operating the sound card with the operating system 18a.
  • Client device 12 optionally includes a network interface 12e, which enables communication by client device 12 across a network 58 via a link 58 a.
  • Example network interfaces 12e include an Ethernet transceiver or a modem.
  • Network interface 12e is typically present, at least so that client device 12 can communicate with server 30, which is a computing device distinct from client device 12 and which uses a link 58b to communicate via network 58.
  • Client device 12 can download files 14 from server 30.
  • Client device 12 also includes a visual display device 26 and one or more input devices 28.
  • Visual display device 26 is a computer screen.
  • input devices 28 include common personal computer peripheral input devices 28', such as a QWERTY keyboard 28e, mouse 28f, or touch-sensitive screen (not shown).
  • Other types of input device 28 include musical input devices 28", such as string input device 28a (e.g., an electronic guitar pick for a virtual guitar or for a virtual bass guitar), microphone input device 28b, percussion input device 28d (e.g., an electronic drum pad for a virtual drum), or MIDI-enabled instrument input device 28c (e.g. an electronic piano, guitar, etc.).
  • musical and non-musical devices can be used as input devices 28 to system 10.
  • a user 16 can provide input stimuli to a part by tapping on the space bar of a QWERTY keyboard 28e.
  • Client device 12 includes input ports (not shown) for various virtual instrument input devices 28.
  • These virtual instrument devices are the subject of U.S. Patent No. 5,393,926, entitled “Virtual Music System”, filed June 7, 1993, issued February 28, 1995, and herein incorporated by reference. Further, these virtual instrument input devices 28 and virtual instruments are the subject of U.S. Patent No. 5,670,729, entitled “A Virtual Music Instrument with a Novel Input Device”, filed May 11, 1995, issued September 23, 1997, and incorporated herein by reference.
  • the virtual pick devices 28a are USB devices.
  • Server layer 60d reflects a client/server division of services.
  • Server layer 60d includes services of server 30 that are remote relative to client device 12, such as shared storage 30a.
  • System 10 communicates with components of server layer 60d across network 58.
  • Layers local to client device 12 include an executable layer 60a, a libraries layer 60b, and an operating system (or "OS") services layer 60c.
  • Executable layer 60a includes player 20 and a song editor 20a.
  • player 20 is a ".EXE" file.
  • player 20 is an application executable by operating system 18a.
  • Player 20 is the primary executable involved in playing back files 14.
  • the libraries layer 60b includes an engine library 22.
  • engine library 22 is a dynamically linked library, or "DLL”.
  • Engine library 22 contains instructions and data that supplement the computing instructions of player 20. Player 20 loads engine library 22 automatically.
  • the libraries layer 60b also includes auxiliary files such as instrument bank 24.
  • Instrument bank 24 contains sound fonts 39, independent of sound fonts 39 stored in data file 14.
  • instrument bank 24 can act as a library of available sound fonts 39 that is pre-installed along with player 20.
  • Instrument bank 24 is a data file or document, used by system logic 18.
  • the layered architecture of system logic 18 reflects standard practices for the operating system 18a and active software (i.e., instructions that are executable).
  • OS services layer 60c includes services that can be used or shared by • applications running on the operating system 18a, including services that are part of operating system 18a.
  • OS services layer 60c includes OS services 62 and third- party services 66.
  • OS services 62 are part of operating system 18a (shown in FIG. 4).
  • OS services 62 include device drivers 62a, a graphics applications programming interface (API) 62b, an audio mixer API 62c, and a file system 62d.
  • the graphics API 62b, for instance, enables system 10 to use visual display device 26.
  • Audio mixer API 62c enables system 10 to use audio output subsystem 27.
  • File system 62d enables system 10 to use storage 12b.
  • Device drivers 62a handle low-level communications with input devices 28, shielding components of system 10 from having to manage such low-level communications while acting as a gateway for high-level communications.
  • Third-party services 66 include an audio synthesizer 66a. Audio synthesizer 66a can read a MIDI stream and render it as audio via audio output subsystem 27.
  • system logic 18 includes classes that define software objects.
  • System logic 18 also includes interfaces that are implemented in the classes.
  • the classes specify behaviors and properties.
  • a class definition provides enough information for an object-oriented runtime process, such as system logic 18, to generate an object.
  • the generated object is an instance of the class and is said to belong to that class.
  • An object that belongs to a class implements the behaviors and properties of that class.
  • An interface specifies a collection of behaviors.
  • a class defines an implementation of an interface. Typically, both classes and objects from such classes are said to implement an interface.
  • the purpose of an interface is to standardize a common set of behaviors. Different types of objects can each implement the same interface. This simplifies manipulations of such disparate objects, as the common interface imposes consistency.
  • in object-oriented languages such as Java, an object that implements an interface can be referenced via its interface implementation, as distinct from a reference to the object as a whole.
  • System logic 18 includes top-level objects 70a, dynamic objects 70b, and interfaces 70c.
  • Top-level objects 70a include performance object 72, VI manager object 74, global accompaniment object 76, performance pool object 78, and peripheral manager object 79.
  • Dynamic objects 70b include virtual instrument object 80.
  • Interfaces 70c include performance timer interface 84 and transport interface 86.
  • system logic 18 includes system behavior 90.
  • system behavior 90 includes procedures for selecting a multipart file 14 and playing back the associated live performance, in response to user input.
  • System behavior 90 initializes objects and settings of system 10 (process 92). Once user 16 chooses a standardized performance (process 90a), system behavior 90 selects a corresponding multipart data file 14 and prepares related objects (process 94), as will be explained. Once user 16 chooses parts to interact with (process 90b), system behavior 90 configures corresponding virtual instrument objects 80 (process 96). Next, user initiates playback (process 90c) and system behavior 90 begins live interactive playback process 98.
  • system initialization 92 includes starting the player 20 (process 92a), for example when operating system 18a loads player 20 for execution by processor 12c.
  • player 20 can start when user 16 uses a mouse input device 28 and a graphical user interface (GUI) shown in the visual display device 26 to double-click on an icon for the player 20.
  • system initialization 92 creates a performance object 72 (process 92b).
  • performance object 72 generates and initializes other top-level objects 70a, except that the VI manager object 74 creates a peripheral manager object 79 to help coordinate the creation and operation of virtual instrument objects 80.
  • a performance object 72 represents a live performance of a standardized performance 15 and includes properties and behaviors to manage the live performance.
  • Performance object 72 is the first top-level object 70a to be instantiated.
  • Performance object 72 launches other top-level objects 70a.
  • performance object 72 includes a process for child object creation 72c.
  • Performance object 72 also includes properties such as a song reference 72g, which specifies the standardized performance 15 to perform.
  • Child object creation 72c is invoked when performance object 72 is created.
  • Child object creation 72c includes processes such as VI Manager launch 72d, accompaniment launch 72e, and performance pool launch 72f.
  • VI Manager launch 72d creates a VI manager object 74.
  • Accompaniment launch 72e creates a global accompaniment object 76.
  • Performance pool launch 72f creates a performance pool object 78.
  • Each of these objects (VI manager object 74, global accompaniment object 76, and performance pool object 78) created by the performance object 72 is singular to that performance object 72.
  • Performance object 72 also implements a transport interface 86, described with reference to FIG. 14B.
  • player 20 has an application window 100 in the GUI managed by operating system 18a.
  • Application window 100 includes a control area 100a.
  • the user 16 interacts with the control area 100a to select a standardized performance 15 from a list 100d of standardized performances 15 performable on system 10.
  • List 100d displays, for each available standardized performance 15, information stored in the song information 36 (shown in FIG. 2) of corresponding data file 14.
  • User 16 accesses and navigates list 100d via the GUI.
  • List 100d can show those standardized performances 15 for data files 14 already downloaded from remote music server 30. Additionally, list 100d can include standardized performances 15 for data files 14 available from remote music server 30.
  • Application window 100 also includes a song info display 100b and a user area region 100c.
  • Song info display 100b displays information stored in the song information 36 of a currently selected standardized performance 15.
  • User area region 100c includes one or more user areas 102, each of which corresponds to a part playable by a user 16.
  • each such user 16 receives visual feedback appropriate to his or her part in a user area 102 dedicated to that user 16.
  • peripheral manager object 79 includes processes such as device discovery 79a, device catalog service 79b, and driver management 79e. Peripheral manager object 79 also includes properties such as input device catalog 79c, which contains input device descriptions 79d.
  • Device discovery 79a is invoked at runtime to discover input devices 28 attached to client device 12.
  • Device discovery 79a stores information about such input devices 28 in input device descriptions 79d.
  • Device catalog service 79b makes the contents of input device catalog 79c available to other objects such as virtual instrument objects 80.
  • Driver management 79e interacts with device drivers 62a (shown in FIG. 5) to communicate with input devices 28.
  • a VI manager object 74 manages a collection of virtual instrument objects 80.
  • each such virtual instrument object 80 represents a different part of the audio content of standardized performance 15.
  • a VI manager object 74 includes processes such as virtual instrument creation 74a, child object creation 74b, and load process 104.
  • VI manager object 74 also includes properties such as a virtual instrument object collection 74d, which contains a reference 74e for each virtual instrument object 80 created by VI manager object 74.
  • VI manager object 74 is instantiated during system initialization 92. Automatically upon being instantiated, VI manager object 74 performs child object creation 74b. Child object creation 74b instantiates a peripheral manager object 79 (process 74c).
  • Load process 104 occurs when user 16 selects a song 15a, as part of file selection 94, as will be explained. Referring now to FIG. 9B, load process 104 looks in file 14 for a VI pool 40 (process 104a).
  • load process 104 looks in VI pool 40 for part chunks 42 (process 104b).
  • Load process 104 examines multipart data file 14 to determine which virtual instruments need to be generated. In particular, load process 104 scans the information chunk 44 (shown in FIG. 3B) of each part chunk 42 (process 104c). Load process 104 finds a reference that specifies the current part chunk 42 (process 104d) and passes that reference when it instantiates a virtual instrument object 80 to correspond to that part chunk 42 (process 74a). Load process 104 also adds (to collection 74d) a reference 74e to the new virtual instrument object 80 (process 104e). Load process 104 loops for each part chunk 42 in VI pool 40 (process 104b) and exits afterward.
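  • A sketch of load process 104 with stand-in classes; the real objects live in engine library 22, and the names used here are assumptions made for illustration.

```python
class VirtualInstrumentObject:
    """Stand-in for virtual instrument object 80."""

    def __init__(self, part_chunk_ref):
        self.part_chunk_ref = part_chunk_ref  # reference to the part chunk it renders


class VIManager:
    """Scan the VI pool for part chunks and instantiate one virtual
    instrument object per chunk, keeping a reference to each."""

    def __init__(self):
        self.instruments = []                        # collection 74d

    def load(self, vi_pool):
        for index, part_chunk in enumerate(vi_pool):  # processes 104b / 104c
            ref = index                               # reference to the current chunk (104d)
            vi = VirtualInstrumentObject(ref)         # instantiation (process 74a)
            self.instruments.append(vi)               # keep reference 74e (process 104e)
```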
  • File selection 94 locates the corresponding data file 14 (procedure 94a).
  • File selection 94 passes a file reference that specifies the data file 14 to performance object 72 (procedure 94b).
  • the file reference can be a file name within file system 62d (shown in FIG. 5).
  • the performance object 72 causes the performance pool object 78 to load the data file 14 (procedure 94c).
  • the performance object 72 uses load process 104 to instruct its child objects to load (procedure 94d).
  • interactive karaoke system 10 downloads the appropriate multipart data file 14 from server 30.
  • Part selection 96 responds to user interactions with that list and related GUI controls in application window 100.
  • part selection 96 allows zero or more users 16 to select parts to play. If no users 16 are paired with parts, system 10 can use guide tracks 48b to render the standardized performance 15. If multiple users 16 are paired with parts, a virtual band is created. If a user indicates he wants to play a part (process 96a), part selection 96 makes the corresponding virtual instrument object 80 interactive (process 96b). Part selection 96 then uses the GUI to prompt the user 16 to choose an input device 28 (process 96c) and a sound font 39 (process 96d).
  • processes 96c and 96d are optional, as the part chunk 42 has a default input device 28 and sound font 39 that can be deduced from type 42b.
  • Process 96d allows user 16 to override the default sound font 39.
  • An example of process 96c is the user 16 choosing a guitar pick 28a to play a drum part.
  • if the user does not choose to play a part, part selection 96 makes the corresponding virtual instrument object 80 non-interactive (process 96e). Part selection 96 repeats these choices (process 96f) for as many users 16 as choose to play parts, subject to the number of available input devices 28.
  • Live interactive playback process 98 instructs performance object 72 to begin playback processing 72a (process 98a). Playback processing 72a then instructs virtual instrument objects 80 each to begin user input processing 80a (process 98b).
  • Playback processing 72a also instructs global accompaniment object 76 to begin non-interactive playback 76a (process 98c).
  • Virtual instrument objects 80 and global accompaniment Object 76 operate separately during live performance (process 98d) until the standardized performance 15 is complete or is interrupted by user 16.
  • a virtual instrument object 80 includes processes such as user input processing 80a, part player 80b, and cue display 82.
  • Virtual instrument object 80 also includes properties such as a matching tag 80f, a peripheral manager reference 80g, a performance pool reference 80h, and a performance pool offset 80i.
  • Virtual instrument object 80 has a reference to a performance timer interface 84 on global accompaniment object 76. Virtual instrument object 80 also implements a transport interface 86, described with reference to FIG. 14A and FIG. 14B, respectively.
  • Virtual instrument object 80 is interactive, i.e., responds to user input stimuli during a live performance. User input processing 80a handles these interactions, correlating these stimuli to prompting data encoded in cue track 48c.
  • Peripheral manager reference 80g specifies peripheral manager object 79, which enables communication with an input device 28.
  • Virtual instrument object 80 presents visual feedback to user 16 via cue display 82.
  • Matching tag 80f specifies types of musical input devices 28" that are recommended for use with virtual instrument object 80.
  • Input devices 28 are represented in input device catalog 79c (shown in FIG. 8B).
  • Virtual instrument object 80 reads performance track 48a (shown in FIG. 3B) and other tracks via the performance pool object 78.
  • Performance pool reference 80h and performance pool offset 80i specify the location of the relevant performance track 48a.
  • Part player 80b includes an interactive playback process 80c and a fill process 80d.
  • Interactive playback process 80c renders audio content of the performance track 48a and (when such content is present) renders the local accompaniment track 48e and video track 48f.
  • Fill process 80d renders guide track 48b and DA guide track 48g.
  • interactive karaoke system 10 can render a live performance which does not have any un-played parts 15c, as fill process 80d fills in any missing performances.
  • user 16 provides input stimuli to one or more of these virtual instrument input devices 28. These input stimuli generate one or more input signals, each of which corresponds to one of the virtual instrument input devices 28.
  • the form of input stimulus provided by user 16 varies with the type of input device 28 and virtual instrument that user 16 is playing. For parts that utilize an electronic guitar pick 28a (shown in FIG. 4), user 16 typically provides an input stimulus by swiping the virtual guitar pick 28a on a hard surface. For percussion parts that use an electronic drum pad 28d, user 16 typically strikes the drum pad with a hard object. For vocal parts, user 16 sings into a microphone 28b.
  • Part player 80b maps the input signal received by a particular virtual instrument object 80 to notes for audio output in accordance with audio content encoded in performance track 48a. However, user 16 might provide these input stimuli early or late in time, relative to timing indicia. Or, user 16 might provide a different number of input stimuli than the audio content specifies. Accordingly, for each pitch control indicium, part player 80b determines a time window during which any input stimulus received from the corresponding virtual instrument is mapped to audio content of performance track 48a for that time period.
  • if user 16 provides, for example, three input stimuli within a single time window, part player 80b would render three samples of the corresponding audio content, even if the audio content specifies continuous, sustained sound during that time. This allows user 16 to improvise and customize their performance.
  • part player 80b sets the acoustical characteristics of each virtual instrument in accordance with the sound font 39 for that particular virtual instrument.
  • for non-vocal virtual instrument objects 80 (e.g., ones representing guitars, basses, or drums), performance track 48a provides the information required to map each one of these input stimuli to a particular note or set of notes.
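  • The window-based mapping described above can be sketched as follows; the Cue type and the millisecond timing are assumptions made for illustration.

```python
from typing import List, NamedTuple, Tuple


class Cue(NamedTuple):
    start_ms: int
    end_ms: int
    notes: List[int]   # performance-track notes mapped to this time window


def map_stimuli(cues: List[Cue], stimuli_ms: List[int]) -> List[Tuple[int, List[int]]]:
    """Each input stimulus that falls inside a cue's time window triggers the
    performance-track content mapped to that window, at the moment the stimulus
    actually arrived. Stimuli outside every window are suppressed."""
    played = []
    for t in stimuli_ms:
        for cue in cues:
            if cue.start_ms <= t < cue.end_ms:
                played.append((t, cue.notes))   # render these notes now
                break
    return played


# Three stimuli inside one window yield three renderings of that window's
# content, even if the standardized performance held a single sustained note:
window = Cue(1000, 1500, notes=[64])
print(map_stimuli([window], [1010, 1200, 1400]))
```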
  • virtual instrument object 80 supports object inheritance.
  • General characteristics of a virtual instrument, as expressed in the class 110 for virtual instrument object 80, can be inherited by subclasses that refine or customize these characteristics to their needs, as well as adding characteristics that do not apply to other subclasses of virtual instrument.
  • a VIVocal class 111 can include a microphone interface process 111a.
  • a VIDrummer object 112 includes a stick interface process 112a.
  • a VIStrummer object 114 includes a pick interface process 114a.
  • Each of these interface processes 111a, 112a, and 114a is unique to its class.
  • Subclasses of virtual instrument class 110 can have their own subclasses.
  • VIBass 116 and VIGuitar 118 each inherit from the VIStrummer class.
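  • The inheritance relationships above translate directly into a class hierarchy; the method names in this sketch are illustrative, not taken from the specification.

```python
class VirtualInstrument:                # class 110: shared characteristics
    def handle_input(self, stimulus):   # behavior common to all virtual instruments
        raise NotImplementedError


class VIVocal(VirtualInstrument):       # class 111
    def microphone_interface(self):     # process 111a
        pass


class VIDrummer(VirtualInstrument):     # class 112
    def stick_interface(self):          # process 112a
        pass


class VIStrummer(VirtualInstrument):    # class 114
    def pick_interface(self):           # process 114a
        pass


class VIBass(VIStrummer):               # 116: inherits the pick interface
    pass


class VIGuitar(VIStrummer):             # 118: inherits the pick interface
    pass
```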
  • cue display 82 prompts user 16 for input stimuli during a live performance.
  • Cue display 82 renders the prompts in a user area 102 according to timing indicia in cue track 48c. These timing indicia vary in form depending on the type of virtual instrument input device 28 and virtual instrument being played. If virtual instrument input device 28 is a string input device 28 or a percussion input device 28, for instance, timing indicia are rendered as spikes 122. Each spike 122 graphically displays the point in time at which user 16 is to provide an input stimulus to the virtual instrument input device 28. The time is visually represented by the position of the spike 122 within a cueing region, along an axis 102c.
  • This cue track 48c is the subject of U.S. Patent No. 6,175,070 B1.
  • cue display 82 can display information concerning the pitch of the notes being played, in the form of a staff (not shown) or note-based musical annotation, as provided by score track 48d. For instance, cue display 82 can render chord notation 102e, or (shown in FIG. 12B) tablatures 102f or 102g.
  • Cue display 82 can render spikes 122 as double spikes 122a on both of the sides of cueing region 102b that are aligned with time axis 102c. Alternatively, cue display 82 can render spikes 122 as single spikes 122b on one side of cueing region 102b.
  • Another alternative is two groups of single spikes 122b, on opposing sides of cueing region 102b.
  • a first group of single spikes 122b provides cues
  • the other group of single spikes 122b illustrates the timing of the actual input stimuli provided by user 16 during the live performance.
  • the relative positions of the cueing spikes 122b and the stimuli spikes 122b provide graphic feedback regarding the accuracy of the user input, relative to the timing of the cues.
  • spikes 122 are in a fixed position on cueing region 102b while a sweeper 102h repeatedly sweeps from left to right across the cueing region 102b.
  • cueing region 102b and its contents can scroll to the left.
  • the timing of each prompt is indicated by the corresponding spike 122 passing under a fixed timing indicator 102i.
  • cue display 82 can prompt the user 16 with lyrics.
  • the timing indicia provided by cue track 48c includes such lyrics, together with timing information indicating the specific point in time that each word or phrase is to be sung.
  • Cue display 82 can sequentially render each word or phrase as highlighted lyrics 102k at the specific point in time that each word is to be sung, in coordination with sweeper 102h or timing indicator 102i.
  • Cue display 82 renders a name 102a in cueing region 102b.
  • Name 102a typically contains text describing the part, corresponding to information provided in information chunk 44 (shown in FIG. 3B).
  • a live performance requires at least one track of musical instructions from the global accompaniment. Even if all parts are interactive, i.e. not audibly accompanied, a performance needs a master timing control.
  • global accompaniment object 76 includes processes such as an accompaniment load process 120 and a non-interactive playback process 76a.
  • Global accompaniment object 76 also includes properties such as accompaniment pool reference 76b, which locates the accompaniment pool 38 in data file 14 via performance pool object 78, and a matching tag 76c, which specifies sound fonts 39, similar to the matching tag 80f of virtual instrument object 80.
  • the matching tag 80f of virtual instrument object 80 specifies compatible input devices 28, while matching tag 76c does not.
  • Non-interactive playback process 76a renders the audio content of tracks chunk 38a and provides a master timing pulse for a live performance.
  • Global accompaniment object 76 implements a performance timer interface 84 and a transport interface 86, described with reference to FIG. 14A and FIG. 14B, respectively.
  • accompaniment load process 120 loads musical content from tracks chunk 38a (process 120a).
  • accompaniment load process 120 interacts with software synthesizer 66a to prepare it with sound fonts 39 (process 120b).
  • accompaniment load process 120 reads at least the first portion of DA trigger chunk 38c (process 120c).
  • Accompaniment load process 120 then primes audio buffers of audio output subsystem 27 with initial samples of MP3 files from DA chunk 38d, if any exist (process 120d). The priming occurs in advance of the signal from user 16 to begin the live performance.
  • a performance timer interface 84 allows the exchange of timing signals.
  • performance timer interface 84 allows the dissemination of a clock pulse between objects that implement the performance timer interface 84.
  • Performance timer interface 84 includes a pulse dissemination process 84a and a pulse reception process 84b.
  • Pulse reception process 84b lets a compliant object receive notice of timed events in synchronicity with a master timer.
  • the global accompaniment object 76 acts as the master timer. It originates the clock pulse, based on timing information in the tracks chunk 38a, and uses the pulse dissemination process 84a to signal other objects that use the master timing signal, including performance object 72 and virtual instrument object 80.
  • Events that are timed and disseminated by the pulse dissemination process 84a include both the pulse and musical events, such as starts and stops of a live performance, boundaries of musical measures, and beats.
  • Transport interface 86 describes processes for controlling the rate of playback of multipart data file 14.
  • Transport interface 86 includes processes for play 86a, stop 86b, forward 86c, and rewind 86d.
  • Transport interface 86 allows objects to coordinate synchronous playback of parts.
  • performance object 72 and global accompaniment object 76 can control the rate of synchronous playback by virtual instrument object 80.
  • performance pool object 78 includes processes such as decryption 78a, decompression 78b, and directory services 78c.
  • Directory services 78c includes a discovery process 78d, a navigation process 78e, and an inspection process 78f.
  • Performance pool object 78 also includes properties such as a directory structure 78g and an abstract access point 78h.
  • Performance pool object 78 provides directory services 78c into data file 14. In other words, performance pool mediates between objects of system logic 18 and the data file 14 in storage 12b or on server 30.
  • Performance pool object 78 provides an abstract access point 78h to data, thus shielding virtual instrument objects 80, for example, from having to inspect the file structure of data file 14, or to know the location of data file 14.
  • Performance pool object 78 can provide a different abstract access point 78h to different client objects.
  • directory services 78c are processes that are exposed for other objects to use.
  • Discovery process 78d discovers recursive data structures 78g such as chunks 50.
  • Navigation process 78e allows objects to navigate between such data structures 78g.
  • Inspection process 78f allows objects to view data structures 78g and access their contents.
  • Decryption 78a and decompression 78b translate storage formats of data file 14 into formats available for use in system logic 18.
  • performance pool object 78 shields other objects from information about encryption, the delivery mechanism of data file 14, the location of data file 14, and the internal file structure of data file 14.
  • the MIDI protocol defines a time-encoded stream that can deliver note event data, along with other features such as a control stream.
  • the note data assumes integer values from a range between 0 and 127 inclusive. Traditionally, each note in this range represents a distinct musical note in the Western musical scale, approximately encompassing the range of a traditional piano keyboard and most musical performances. According to this custom, the values of data in the note event stream represent notes for rendering by a synthesizer 66a. Also according to this custom, note event value 1 is a higher pitch than note event value 0, value 2 is higher than 1, and so forth throughout the range.
  • non-note information such as lyrics or control information, can be passed via MIDI in the control stream.
  • the architecture of DA trigger chunk 38c uses MIDI more generally, as a time-coded communication protocol.
  • the values in the note event stream are semantically mapped to non-note meanings.
  • the DA trigger architecture uses MIDI note event values to pass non-note data.
  • the values in the note event stream are indexes to digital audio clips.
  • the customary ordering of note event values i.e., the notion that ascending note event values correspond to ascending pitch
  • the values in this alternative use of the MIDI note event stream can be chosen such that the index indicates the order in which the corresponding digital audio clip appears in the DA chunk 38d of file 14. Other orderings are also possible, or the note event values can be used without assigning any significance to their relative order.
  • a mapping process 130 maps nominal MIDI note event values to non-note values, such as digital audio clips.
  • This description attaches the word "nominal" to the term "MIDI note event value", since that is a conventional term for this portion of the MIDI stream.
  • A note event value in this context should therefore be understood as not necessarily conveying musical note information.
  • mapping process 130 is not restricted by the customary interpretations of MIDI note event values as musical notes.
  • Mapping process 130 receives a mapping of nominal note event values to audio clips, for use with a MIDI stream (process 130a). Each nominal note event value in the mapping corresponds to a different audio clip.
  • Mapping process 130 reads a nominal note event value from the MIDI stream (process 130b).
  • Mapping process 130 maps the value to a non-note value, such as the index of an audio clip according to DA trigger chunk 38c (process 130c).
  • Mapping process 130 returns to read subsequent values from the stream until the end of the stream (process 130d).
  • Mapping process 130 then outputs the MIDI stream with nominal MIDI note event values replaced by corresponding clip references (process 130e).
  • a real-time mapping process 132 is similar to mapping process 130, above, except for the timing of the output.
  • Real-time mapping process 132 omits the output stage (process 130e) of mapping process 130. After mapping the read value to an audio clip reference, and before repeating the next read, real-time mapping process 132 outputs the MIDI data with the current nominal MIDI note event value replaced by a corresponding current clip reference (process 132a).
  • a MIDI mapping playback process 134 incorporates a MIDI mapping process to play back audio clips referenced in a stream of MIDI nominal note event values.
  • MIDI mapping playback process 134 receives a MIDI stream and a mapping of note values to audio clips (process 134a).
  • DA trigger chunk 38c provides a suitable mapping of nominal note event values to audio clips.
  • MIDI mapping playback process 134 uses real-time mapping process 132 on the MIDI stream, yielding a stream of references to audio clips (process 134b).
  • MIDI mapping playback process 134 then renders the audio clips specified by the references (process 134c). A brief illustrative sketch of these mapping and playback processes follows this list.
  • Multipart data file 14 has been described as being transferred in a unitary fashion; this is for illustrative purposes only.
  • Each multipart data file 14 is simply a collection of various components (e.g., interactive virtual instrument object 80 and global accompaniment object 76), each of which includes various subcomponents and tracks.
  • these components and/or subcomponents can also be transferred individually or in various groups.
  • data file 14 is a file on a storage medium 12b or shared storage 30a.
  • the format of data file 14 applies to any digital medium.
  • the format of data file 14 organizes digital information in a stream, such as in a network communication flow, or digital information in main memory of client device 12 or a server 30.
  • Part encoding process 19 receives a standardized performance 15 with each part 15c designated interactive or non-interactive (process 19a). For example, a human administrator could provide such designations.
  • operating system 18a is a Microsoft Windows operating system such as Windows 95, Windows NT 4.0, or other compatible operating systems.
  • Engine library 22 has been described as a DLL, but engine library 22 could be a software component according to another standard. Moreover, engine library 22 need not be separate from player 20 but could be integrated.
  • System logic 18 has been described as residing on client device 12, which executes system logic 18. Alternatively, system logic 18 could be distributed across multiple devices 12.
  • data file 14 contains one standardized performance 15.
  • data file 14 can contain more than one standardized performance 15.
  • data file 14 can contain fractional portions of a standardized performance 15. For example, a first file 14 could contain a song 15a while a second file 14 could contain supplemental or alternate parts 15c.
  • data file 14 has a format that uses chunks 50, including a body 34 that includes accompaniment pool 38 and VI pool 40, which in turn contain additional chunks 50.
  • data file 14 could have the same logical entities in a different format.
  • client device 12 is a personal computer. Other devices
  • client device 12 includes storage 12b.
  • storage 12b could be remote relative to client device 12.
  • Visual display device 26 could be a projector or other display.
  • the user chooses the part, and the system then automatically selects the sound fonts and an input device.
  • the user can choose among types of sounds for the part.
  • synthesizer control data is MIDI nominal note event values, which can adopt any of 128 distinct integer values in the range 0 to 127.
  • the synthesizer control data could be non-MIDI data.
  • the synthesizer control data could be MIDI values other than nominal note event values, or could adopt values from other ranges.
  • the synthesizer control data could be capable of adopting more (or less) than 128 distinct values.
  • digital audio clips are always played from the beginning.
  • system 10 could have random-access playback of digital audio clips.
  • mapping process 130 and real-time mapping process 132 map nominal note event values to audio clips.
  • mapping process 130 and real-time mapping process 132 translate nominal note event values to any non-note data, when provided with an appropriate map.
  • mapping process 130 and real-time mapping process 132 each enable MIDI to be used as a general-purpose time-coded communication protocol. The map replaces the traditional musical meanings of MIDI nominal note event values with non-note meanings.
  • MIDI mapping playback process 134 uses real-time mapping process 132 on the MIDI stream. In alternate embodiments, MIDI mapping playback process 134 could use mapping process 130 instead of real-time mapping process 132.
  • the described embodiment makes use of objects in the architecture of system logic 18.
  • the data and processes of the described objects could be included in code or logic that does not use objects per se but that performs comparable processing of comparable data.
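The mapping and playback processes summarized above (mapping process 130, real-time mapping process 132, and MIDI mapping playback process 134) amount to treating MIDI note event values as indexes into a table of digital audio clips rather than as pitches. The following Python sketch is only an illustration of that idea under assumed data shapes: the (time, value) event tuples, the clip_table dictionary, and the play_clip callback are inventions of this example, not identifiers from the described system.

```python
# Illustrative sketch of the note-event-to-clip mapping idea (cf. processes 130/132/134).
# Assumptions: a MIDI stream is modeled as (time, note_event_value) tuples, and the
# DA trigger information is modeled as a dict from note event value to a clip reference.

def map_stream(midi_events, clip_table):
    """Batch mapping (cf. process 130): replace every nominal note event value
    with its corresponding clip reference, then return the whole stream."""
    return [(time, clip_table[value]) for time, value in midi_events]

def map_stream_realtime(midi_events, clip_table):
    """Real-time mapping (cf. process 132): emit each mapped event as soon as it
    is read, instead of waiting for the end of the stream."""
    for time, value in midi_events:
        yield (time, clip_table[value])

def playback(midi_events, clip_table, play_clip):
    """Playback (cf. process 134): render the clip referenced by each mapped event."""
    for time, clip_ref in map_stream_realtime(midi_events, clip_table):
        play_clip(clip_ref, at_time=time)

# Example: note event values 0, 1, 2 index three clips stored in the DA chunk.
clip_table = {0: "chorus.mp3", 1: "verse.mp3", 2: "bridge.mp3"}
events = [(0.0, 0), (15.0, 1), (30.0, 0)]
playback(events, clip_table, lambda ref, at_time: print(f"{at_time:>5}s -> {ref}"))
```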

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

A computer-readable medium stores a data structure that encodes an audio performance for interactive playback. The data structure includes a virtual instrument pool (40), which encodes an interactive part (15c) of the audio performance. Audio content (15b) of the interactive part is encoded at least in a sequence of synthesizer control data. Each datum in the synthesizer control data specifies a digital sample of the audio content to be played back. The data structure also includes a global accompaniment pool (38), which encodes non-interactive portions (15c) of the audio performance. The global accompaniment pool includes timing information to synchronize the playback of the audio performance.

Description

STORING MULTIPART AUDIO PERFORMANCE WITH INTERACTIVE PLAYBACK
RELATED APPLICATIONS
This application claims the priority of: United States Patent Application Serial No. 09/900,289, entitled "A Multimedia Data File" and filed on July 6, 2001; United States Patent Application Serial No. 09/900,287, entitled "A Virtual Music System", filed on July 6, 2001; U.S. Provisional Application Serial No. 60/282,420, entitled "A Multimedia Data File", and filed April 9, 2001; U.S. Provisional Application Serial No. 60/282,549, entitled "A Virtual Music System", and filed April 9, 2001; U.S. Provisional Application Serial No. 60/288,876, entitled "A Multimedia Data File", and filed May 4, 2001; and U.S. Provisional Application Serial No. 60/288,730, entitled "An Interactive Karaoke System", and filed May 4, 2001.
This application herein incorporates by reference: U.S. Patent No. 5,393,926, entitled "Virtual Music System", filed June 7, 1993, and issued February 28, 1995; U.S. Patent No. 5,670,729, entitled "A Virtual Music Instrument with a Novel Input Device", filed May 11, 1995, and issued September 23, 1997; and U.S. Patent No. 6,175,070 B1, entitled "System and Method for Variable Music Annotation", filed February 17, 2000, and issued January 16, 2001.
TECHNICAL FIELD This invention relates to multipart data files.
BACKGROUND
Moving Picture Experts Group (MPEG or MP3) and Musical Instrument Digital Interface (MIDI) are protocols for digital audio storage and transmission.
MIDI was designed for the recording and playback of digital audio content on synthesizers. MIDI streams do not represent audio content directly but provide information about how the content is to be synthesized. MIDI streams are multi-track, where each track can be mapped to a discrete profile such as a musical instrument. Each track of the MIDI stream includes the discrete notes to be played by that instrument. Since a MIDI file is the computer equivalent of traditional sheet music for a particular song (figuratively speaking, as opposed to the sound recording for the song itself), these files tend to be small and compact when compared to files which record the audio content directly and continuously. However, MIDI streams typically require some form of wave table or FM synthesizer chip to generate their sounds. Additionally, MIDI files tend to lack the richness and robustness of actual sound recordings of the same content.
MP3 streams, unlike MIDI streams, contain actual sound recordings of audio content. Typically, MP3 streams are single track files and do not include information concerning the specific musical notes or the instruments utilized in the recording. However, while MIDI files typically require additional hardware in order to be played back, MP3 files can quite often be played back on a modern multimedia personal computer with a minimal amount of specialized hardware.
SUMMARY
In general, in one aspect, the invention features a computer-readable medium having a data structure encoding an audio performance for interactive playback stored thereon. The data structure includes a virtual instrument pool that encodes an interactive part of the audio performance. Audio content of the interactive part is encoded at least in a sequence of synthesizer control data. Each datum in the synthesizer control data specifies a digital sample of the audio content to be played back. The data structure also includes a global accompaniment pool, which encodes non-interactive portions of the audio performance. The global accompaniment pool includes timing information to synchronize the playback of the audio performance.
Preferred embodiments include one or more of the following features. The synthesizer control data is MIDI data. The digital sample is an MP3 clip. The global accompaniment pool encodes a non-interactive part of the audio content of the audio performance. The global accompaniment pool includes a collection of sound fonts, in which each sound font provides parameters for synthesizing the playback of an interactive part.
In general, in another aspect, the invention features a computer-readable medium that stores a data structure which encodes an audio performance for interactive playback. The data structure includes a global accompaniment pool, which encodes a non-interactive part of the audio performance. A portion of the non-interactive part is encoded as synthesizer control data, while another portion of the non-interactive part is encoded as digital samples of the audio performance. The data structure also includes a virtual instrument pool, which encodes an interactive part of the audio performance. The interactive part has audio content encoded at least in synthesizer control data. Each datum in the synthesizer control data specifies musical notes to be synthesized, or specifies a digital sample of the audio content to be played back.
Preferred embodiments include one or more of the following features. The synthesizer control data is MIDI data. The digital samples are MP3 clips. The virtual instrument pool includes cue data that specifies prompts coordinated with the audio content of the interactive part.
In general, in still another aspect, the invention features code stored on a computer readable medium. The code is for a computer in an entertainment system that includes an audio output subsystem, an input device, and a memory storing a musical performance data structure having an interactive portion of a musical performance and an accompanying, non-interactive portion of the musical performance. The code includes a virtual manager object which causes the computer to read the musical performance data structure stored in the memory and generate a virtual object representing a virtual instrument identified in the performance data structure. The virtual object causes the computer to map user input from the input device to the interactive portion of the musical performance and play the mapped interactive portion of the musical performance through the audio output subsystem. The code also includes a global accompaniment object which causes the computer to play the accompanying non-interactive portion of the musical performance through the audio output subsystem. Preferred embodiments include one or more of the following features. The stored musical performance data structure identifies a plurality of different virtual instruments, each representing a different musical instrument. The virtual manager object causes the computer to generate a plurality of virtual objects, each of which represents a different corresponding one of the identified plurality of instruments. Each of the virtual objects causes the computer to map user input from input devices to a corresponding part of the interactive portion of the musical performance and play the mapped corresponding part of the interactive portion of the musical performance through the audio output subsystem.
The global accompaniment object also includes logic which when executed on the computer causes it to provide a master timing signal for the virtual object.
Assuming that the entertainment system includes a video display subsystem and the stored musical performance data structure includes a stored sequence of timing cues associated with the interactive portion of the musical performance, the virtual object also includes logic which causes the computer to display a visual representation of the timing cues through the video display system to aid the user in playing the virtual instrument. Also assuming that the stored musical performance data structure includes a plurality of digital clips each representing a different part of the non-interactive portion of the musical performance and a sequence of trigger points, each of which presents timing information and identifies which one of the digital clips is to be played at times identified in the timing information, then in that case the global accompaniment object includes logic which causes the entertainment system to play through the audio output subsystem the identified one of the plurality of digital clips at the appropriate time as identified by the stored sequence of trigger points.

Assuming that the audio output subsystem includes a synthesizer and the stored musical performance data structure includes sound fonts, the accompaniment object further includes logic that causes the computer to retrieve the sound fonts from the stored musical performance data structure and load them into the synthesizer to control the character of the audio output subsystem.
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
DESCRIPTION OF DRAWINGS
FIG. 1A is a block diagram of an interactive karaoke system.
FIG. 1B is a flowchart of a part encoding process.
FIG. 2 is a block diagram of a multipart data file.
FIG. 3A is a block diagram of a chunk.
FIG. 3B is a block diagram of a part chunk.
FIG. 4 is a block diagram of a client device and connected devices.
FIG. 5 is a block diagram of software layers.
FIG. 6A is a block diagram of object classes and interfaces.
FIG. 6B is a flowchart of system behavior.
FIG. 6C is a flowchart of system initialization.
FIG. 7A is a block diagram of a performance object.
FIG. 7B is a flowchart of a live interactive playback process.
FIG. 8A is a diagram of an application window.
FIG. 8B is a block diagram of a peripheral manager object.
FIG. 9A is a block diagram of a virtual instrument manager.
FIG. 9B is a flowchart of a VI manager load process.
FIG. 10A is a flowchart of a file selection process.
FIG. 10B is a flowchart of a part selection process.
FIG. 11A is a block diagram of a virtual instrument object.
FIG. 11B is a diagram of virtual instrument inheritance.
FIG. 12A is a first diagram of a user area.
FIG. 12B is a second diagram of a user area.
FIG. 13A is a block diagram of a global accompaniment.
FIG. 13B is a flowchart of a global accompaniment load process.
FIG. 14A is a diagram of a performance timer interface.
FIG. 14B is a diagram of a transport interface.
FIG. 14C is a diagram of a performance pool interface.
FIG. 15A is a flowchart of a mapping process.
FIG. 15B is a flowchart of a real-time mapping process.
FIG. 16 is a flowchart of a MIDI mapping playback process.
Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION
In one embodiment, a data file contains a standardized performance of music or sound digitally encoded, typically at a high quality, for instance comparable to FM radio or better. Methods for digitally encoding the sound include digital recordings or samples in a format such as MP3, as well as synthesizer parameters in a format such as MIDI. The standardized performance is encoded in one or more parts that can be played back synchronously by an interactive karaoke system. For instance, the standardized performance
can be a song or musical performance, with various parts allocated to musicians and their vocals or instruments.
The data file contains additional content such as timing cues, lyrics, and other features, as will be explained. The additional content is time-correlated to the audio content for synchronous playback.
One or more human users can use the interactive karaoke system. Each user has an input device and a part to "play", i.e., to interact with in real time via the input device. The interactive karaoke system presents a user interface via a display device to the users. The interactive karaoke system manages synchronous playback of the audio content. During playback, the karaoke system visually prompts each user to interact with the karaoke system
according to timing information encoded in the part. The interactive karaoke system correlates user inputs at the input device to the user's part. The interactive karaoke system then plays audio content from the part to simulate the user playing the part. When the audio content represents a musical performance, for instance, the interactive karaoke system can recreate a version of that musical performance as apparently played by the one or more users.
To play a part, the user chooses the part and an input device. The system automatically selects the sound profiles (or "sound fonts", as will be explained) for that part. A virtual instrument uses a part, an input device, and a sound font. Virtual instruments are encoded as software objects generated and maintained by the karaoke system.
In general, this description distinguishes live performances from the standardized
performance encoded in the data file. A live performance is the karaoke system's rendering of the standardized performance after adjusting for real-time user inputs and for user preferences. The live performance usually deviates from the standardized performance as a result of these adjustments. For example, if a user's inputs stray too far from the timing information encoded in the part, then the karaoke system will suppress all or part of the audio
output for that part. Other deviations can be due to timing. The karaoke system plays samples from the standardized performance according to the timing of the real-time user input. If the user deviates too far from the timing of the standardized performance, therefore, the live performance will deviate as well. Still other deviations can be due to system settings that the user chooses. For instance, a user can choose to have the karaoke system omit one or more parts of the standardized performance. The variations between live performances and the standardized performance contribute to the entertainment value of the karaoke system. Interactive aspects of the system and the content of the multipart file are suitable for musical instruction, as well. Still another use of the multipart file applies to deejay software.
SYSTEM
Referring now to FIG. 1A, an interactive karaoke system 10 plays multipart data files
14, each of which corresponds to a standardized performance 15 such as a song 15a or audio content 15b. Each standardized performance 15 contains one or more parts 15c, which typically are audio content of standardized performance 15 assigned to a particular instrument or human performer. Data file 14 includes either a part chunk 42 or a tracks chunk 38a for each part 15c of standardized performance 15, as will be explained. Multipart data file 14 contains sufficient information for system 10 to reproduce standardized performance 15 and parts 15c.
Karaoke system 10 includes interactive and audio-visual features. For instance, a user 16 interacts with system 10 via an input device 28, which can be a musical input device 28". User 16 views a visual display device 26, through which system 10 displays information to user 16. Audio output subsystem 27 produces sound audible to user 16, including the live performance.
System logic 18 includes procedures encoded as instructions that can be carried out by a processing device, as will be explained. In other words, system logic 18 is software. System logic 18 includes a player application 20 and an engine library 22, explained later.
PART ENCODING PROCESS
In general, system 10 distinguishes between "interactive" and "non-interactive" parts 15c of a standardized performance 15. System 10 makes interactive parts 15c available to be played by user 16 during a live performance. System 10 can render interactive parts 15c either automatically (in a demonstration or guide mode) or interactively (subject to user input stimuli, as will be explained). In contrast, system 10 renders non-interactive parts 15c automatically during a live performance. Non-interactive parts 15c are background or accompaniment to interactive parts 15c.
The distinction between interactive and non-interactive parts 15c is encoded in data file 14. In general, interactive parts 15c correspond to part chunks 42 in VI pool 40 (shown in FIG. 2), while non-interactive parts 15c correspond to tracks chunk 38a in accompaniment pool 38.
Referring now to FIG. 1B, a part encoding process 19 maps parts 15c to portions of a data file 14, broadly speaking. Part encoding process 19 receives a standardized performance 15 with each part 15c designated interactive or non-interactive (process 19a). For example, a human administrator could provide such designations.
Part encoding process 19 selects a part 15c from a standardized performance 15 to be encoded in a data file 14 (process 19b). Part encoding process 19 tests whether part 15c is interactive (process 19c). If the test is affirmative, part encoding process 19 encodes part 15c as a virtual instrument (process 19d). For instance, the part 15c is mapped to a part chunk 42 in VI pool 40 in data file 14. If the test is not affirmative, part encoding process 19 encodes part 15c as a portion of the global accompaniment (process 19e). For instance, the part 15c is mapped to a tracks chunk 38a in accompaniment pool 38 in data file 14.
Part encoding process 19 returns to process 19b for each part 15c in the input (process 19f).
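As a rough illustration of part encoding process 19, the sketch below routes each designated part either into the VI pool (as interactive material) or into the accompaniment pool (as non-interactive material). The Part and DataFile containers are assumptions made only for this example and do not reflect the actual chunk 50 encoding.

```python
# Minimal sketch of part encoding process 19, using assumed in-memory containers.
from dataclasses import dataclass, field

@dataclass
class Part:                 # stand-in for a part 15c
    name: str
    interactive: bool       # designation received in process 19a
    content: bytes = b""

@dataclass
class DataFile:             # stand-in for data file 14
    vi_pool: list = field(default_factory=list)             # part chunks 42
    accompaniment_pool: list = field(default_factory=list)  # tracks chunk 38a material

def encode_parts(parts, data_file):
    for part in parts:                      # processes 19b / 19f: iterate over parts
        if part.interactive:                # process 19c: test the designation
            data_file.vi_pool.append(part)              # process 19d: virtual instrument
        else:
            data_file.accompaniment_pool.append(part)   # process 19e: global accompaniment
    return data_file

song = [Part("lead vocal", True), Part("strings", False)]
print(encode_parts(song, DataFile()))
```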
FILE STRUCTURE
Referring now to FIG. 2, a multipart data file 14 includes a header 32 and a body 34. The header 32 typically precedes the body 34 in file 14. The header 32 contains an encryption flag 32a that indicates whether body 34 is encrypted, and a song identifier 32b. Song identifier 32b is a value that uniquely identifies song 15a relative to other songs 15a. For example, song identifier 32b can act as a product number in a publisher's catalog of songs 15a.
Body 34 includes song information 36, an accompaniment pool 38, and a virtual instrument (or "VI") pool 40. Song information 36 specifies the standardized performance 15 associated with multipart data file 14. Song information 36 includes fields such as title 36a, artist 36b, description 36c, length 36d, genre 36e, subgenre 36f, publisher 36g, copyright 36h, writers 36i, version 36k, format 36m, and difficulty rating 36n. Title 36a is a name that identifies the standardized performance 15 to user 16. Description 36c, genre 36e, and subgenre 36f further explain the standardized performance 15 to user 16. Artist 36b indicates one or more artists represented in the standardized performance 15. Length 36d indicates the duration in time of the standardized performance 15. Publisher 36g, copyright 36h, and writers 36i identify intellectual property rights in the standardized performance 15, while version 36k and format 36m are metadata that assist different versions of system 10 (for instance, future revisions) in recognizing the rubrics in place at the time that data file 14 was encoded. Difficulty rating 36n is a measure of the overall difficulty of the parts 15c in the standardized performance 15.
Accompaniment pool 38 and VI pool 40 include data formatted as chunks 50. Moreover, accompaniment pool 38 and VI pool 40 themselves use the chunk 50 format. Chunks 50 are described with reference to FIG. 3A.
ACCOMPANIMENT POOL
In general, accompaniment pool 38 contains information that interactive karaoke system 10 interprets in order to manage a live performance and to render non-interactive parts 15c. Furthermore, accompaniment pool 38 provides sound fonts 39 specific to the standardized performance 15, as will be explained. Accompaniment pool 38 contains a tracks chunk 38a, a soundbank chunk 38b, a DA (for "digital audio") trigger chunk 38c, and a DA chunk 38d.
The tracks chunk 38a encodes global accompaniment content. The tracks chunk 38a includes timing to define the tempo and length at which system 10 will render the corresponding standardized performance 15. The tracks chunk 38a usually (but not always) also encodes actual audio content. For instance, the tracks chunk 38a could be part of a standardized performance 15 that contains an unaccompanied part 15c, for instance a solo vocal performance. In this case, the standardized performance 15 is still encoded with a global accompaniment track 38a, at least to provide a master timing signal.
SOUNDBANK AND SOUND FONTS
The soundbank chunk 38b provides sound fonts 39 specific to the standardized performance 15 corresponding to file 14.
A sound font 39 includes samples and acoustical characteristics for a virtual instrument. Acoustical characteristics include the envelope, or volume of a sample as it moves over time. The envelope typically includes an attack (initial volume rising rapidly over time), an initial decay from attack, sustain (held for as long as note needs to be held), and release (what happens to the sound when the instrument is done playing the note).
For example, if the sound font 39 is for an overdriven guitar, the sample will be an actual recording of an overdriven guitar playing a defined note or frequency. If user 16 provides an input stimulus that, according to performance track 48a (shown in FIG. 3B), corresponds to a note having the same frequency as the sample, the sample will be played without modification. However, if that input stimulus corresponds to a note at a different frequency than the frequency of the sample, interactive karaoke system 10 will shift the frequency of the sample to that of the required note. Synthesizer 66a (shown in FIG. 5) can perform frequency shifts.
In the described embodiment, sound fonts 39 are compatible with technologies and products from Creative Labs, Inc.
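The frequency shift described above is, in effect, a repitching of the stored sample so that it sounds at the required note. The patent does not spell out the synthesizer's internal math; the sketch below simply illustrates the standard equal-tempered relationship such a shift would follow, expressed as a playback-rate ratio, and is not taken from the described implementation.

```python
# Illustrative only: equal-tempered pitch shift from a recorded sample's note to a
# required note, expressed as a playback-rate ratio. MIDI note 69 = A4 = 440 Hz.

def note_to_hz(midi_note: int) -> float:
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def shift_ratio(sample_note: int, target_note: int) -> float:
    """Factor by which the sample must be sped up (or slowed down)."""
    return note_to_hz(target_note) / note_to_hz(sample_note)

# A sample recorded at E2 (MIDI 40) rendered as G2 (MIDI 43): three semitones up.
print(round(shift_ratio(40, 43), 4))   # ~1.1892, i.e. 2 ** (3/12)
```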
DA TRIGGER AND DA CHUNK
DA trigger chunk 38c gives a set of control messages that allow playing digital audio clips such as MP3 samples. The clips themselves are stored in DA chunk 38d.
DA trigger chunk 38c indexes the clips and includes information that maps MIDI note event values to MP3 samples, for example in a table of pairs that associate note event values with clips. The DA guide track 48g associated with a part 15c can use these indexes as a space-efficient shorthand when referencing the clips.
VI POOL
VI pool 40 includes a collection of part chunks 42. Multipart data file 14 includes a part chunk 42 for each virtual instrument playable in the corresponding standardized performance 15. Part chunk 42 formats are explained with reference to FIG. 3B. Broadly, a part chunk 42 holds the data that encodes an interactive part 15c. As will be explained, the VI Manager looks for the VI pool 40 during startup and generates a virtual instrument object 80 for each part chunk 42.
CHUNKS
Referring now to FIG. 3A, a chunk 50 is a format for storage of digital information. The chunk 50 format can store a wide range of data. Chunk 50 includes a metadata portion 52 and a data portion 54. Metadata fields describe the nature of data stored in the data portion 54. Metadata 52 includes name 52a, type 52b, size 52c, an encryption indicator 52d, and a compression indicator 52e. Encryption indicator 52d indicates whether data portion 54 is encrypted. Compression indicator 52e describes a compression scheme used in data portion 54. Typically, metadata 52 is stored as plaintext, while data portion 54 is stored with encryption and compression.
Examples of data stored in data portion 54 include digital audio recordings, MIDI data, and text. The data portion 54 can also store additional chunks 50 - that is, the structure of chunk 50 is recursive. Size 52c indicates when a given chunk 50 ends.
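Because the data portion 54 can itself hold further chunks 50, the format is naturally modeled as a recursive structure. The sketch below mirrors the metadata fields named above and walks a nested chunk tree; the field types and the walk helper are illustrative assumptions made for this example, not the on-disk encoding.

```python
# Sketch of the recursive chunk 50 format: metadata 52 plus a data portion 54 that
# may hold raw bytes or nested chunks. Field types here are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Chunk:
    name: str                                  # 52a
    type: str                                  # 52b
    size: int                                  # 52c, indicates where the chunk ends
    encrypted: bool                            # 52d
    compression: str                           # 52e, e.g. "none" or a scheme identifier
    data: Union[bytes, List["Chunk"]] = b""    # 54: payload bytes or nested chunks

def walk(chunk: Chunk, depth: int = 0):
    """Print the chunk tree, descending into nested chunks."""
    print("  " * depth + f"{chunk.name} ({chunk.type}, {chunk.size} bytes)")
    if isinstance(chunk.data, list):
        for child in chunk.data:
            walk(child, depth + 1)

body = Chunk("VI pool", "pool", 2048, False, "none",
             [Chunk("part", "vocal", 1024, True, "zlib")])
walk(body)
```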
PART CHUNKS
Referring now to FIG. 3B, a part chunk 42 includes an information chunk 44 and a data chunk 44. Information chunk 44 includes a name 42a, a type 42b, a difficulty rating
42c, and a description 42d. The name 42a for the part 15c identifies it to user 16. Difficulty rating 42c and description 42d further explain the standardized performance 15 to user 16. Type 42b allows part 15c to be matched to appropriate virtual instruments: for instance, drum parts 15c to drum instruments. The data chunk 44 contains MIDI data. The MIDI data is formatted into MIDI tracks.
Track types include guide track 48b, performance track 48a, cue track 48c, score track 48d, local accompaniment track 48e, video track 48f, and DA guide track 48g.
GUIDE TRACK
Guide track 48b is a non-interactive complement to an interactive part 15c. Guide track 48b encodes the portion of a standardized performance 15 corresponding to a part 15c. User 16 can toggle the playback of guide track 48b on and off manually. In addition, the system can play guide track 48b automatically.
User 16 can configure system 10 such that a live performance has no user assigned to a given interactive part. When the audio content of that part is needed for the live
performance, system 10 renders the audio content of the guide track 48b non-interactively - for instance, in lieu of an interactive rendering of performance track 48a.
Guide track 48b can be stored in several formats. Guide track 48b can include a synthesizer control stream, such as a MIDI stream, or a sound recording file 94, such as an MP3 file.
In addition to providing audio "fill" in the event that a user chooses not to play a virtual instrument, one or more guide tracks 48b can be selectively played to provide guide information to user 16. This guide information provides insight to the user concerning the pitch, rhythm, and timbre of the performance of that particular virtual instrument. For example, if user 16 is singing an unfamiliar song 15a, guide track 48b can be played in
addition to the performance sung by user 16. User 16 would typically play this guide track 48b at a volume level lower than that of the vocals. (Alternatively, user 16 can listen to guide track 48b through headphones.) This guide track 48b, which is played softly behind the vocal performance rendered by user 16, assists the user in providing an accurate performance for that vocal virtual instrument. Guide track 48b can be used to provide guide information
for non-vocal virtual instruments, as well.
PERFORMANCE TRACK
Performance track 48a encodes audio content that is the basis for the live performance of a part 15c when the user provides acceptable input. Performance track 48a includes a MIDI stream. The note event values of the MIDI stream encode synthesizer inputs.
Virtual instruments need not have a performance track 48a. A part for a string input device 28 or a percussion input device 28 typically does have a performance track 48a. For such parts, interactive karaoke system 10 must generate a note having the appropriate pitch (as specified by performance track 48a) for each input stimulus received. User input for vocal parts, however, does not require system 10 to generate a note. Instead, user 16 provides vocal part inputs via a microphone 28b (shown in FIG. 5).
CUE TRACK
Broadly, cue track 48c indicates how and when system 10 should prompt user 16 for input during the live performance. The prompts do not have to correspond to the performance track 48a on a one-to-one basis. Instead, typically, the prompts summarize the performance track 48a. This summarizing helps system 10 simplify parts so that user 16 does not have to play every note in performance track 48a. Cues in cue track 48c can collect
multiple notes or phrases from the performance track 48a. The mapping of individual stimuli to multiple notes is one way in which system 10 can create the illusion of a fuller performance than the stimuli strictly describe.
Cue track 48c specifies timing intervals during which the user is prompted for input stimuli. In general, cue intervals do not overlap.
The timing (both the start and duration) of a cue interval has several functions. It shows when a prompt should be displayed to the user. The interval also indicates sections of the performance track 48a that will be played if acceptable user input occurs during that window.
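One way to picture the cue interval mechanics is as a window test: an input stimulus is acceptable for the cue whose interval contains it, and only then are the corresponding notes of the performance track rendered. The sketch below is a simplified assumption of that logic, not the system's actual scoring code; the cue dictionaries and the find_cue helper exist only for this illustration.

```python
# Simplified illustration of cue-interval matching: each cue has a start time and a
# duration; a user stimulus is "acceptable" if it lands inside that window.

def find_cue(cues, stimulus_time):
    """Return the cue whose [start, start + duration) window contains the stimulus,
    or None if the stimulus falls outside every cue interval."""
    for cue in cues:
        if cue["start"] <= stimulus_time < cue["start"] + cue["duration"]:
            return cue
    return None

cues = [{"start": 0.0, "duration": 0.5, "notes": ["C5"]},
        {"start": 1.0, "duration": 0.5, "notes": ["F#5", "A5"]}]  # one cue, several notes

for t in (0.2, 0.7, 1.1):
    hit = find_cue(cues, t)
    print(t, "->", hit["notes"] if hit else "no sound for this stimulus")
```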
SCORE TRACK
Score track 48d encodes musical notations that are synchronized with the performance track 48a for display during a live performance. The notations can take several forms. One form is textual descriptions of chords, such as "F#5" or "C5". Notations can also describe conventional musical notations, for instance staff or tablature.
Examples of displayed notations are discussed with regard to FIG. 12A and FIG. 12B.
LOCAL ACCOMPANIMENT TRACK
Local accompaniment track 48e within a virtual instrument part 15c is distinct from the global accompaniment. Local accompaniment track 48e provides additional audio "fill" for the virtual instrument part as needed. Using local accompaniment track 48e, system 10 can create the audio illusion that the user is playing an entire instrument part, when in fact the input stimuli only correspond to a portion of the standardized performance 15 of the part. The standardized performance 15 can be a combination of the performance track 48a and the local accompaniment track 48e.
As an example, consider a drum kit. As a physical device, a drum kit can be fairly
complex, involving several percussion instruments. Some skilled drummers can play with two hands and two feet separately and simultaneously. The input device 28 that the user of system 10 manipulates can be much simpler, even to the extent that the simpler input device 28 makes it difficult or impossible for the user to recreate exactly through the single device 28 the many interactions that a professional drummer might make with a full drum kit in real
time. Local accompaniment track 48e allows user 16 to play a subset or an approximation of the total notes in the part and to have the rest of the notes provided anyway. For instance, in the drum example, one option is for the user 16 to just play the snare-drum part, while an accompaniment track within the VI track provides kick drum, tom-tom, high hat, and so forth.
In performance, as with performance track 48a, during periods when the user is not providing acceptable input, system 10 does not render the audio content of local accompaniment track 48e.
VIDEO TRACK
Video track 48f provides interactive visuals synchronized to the live performance.
Video track 48f includes a time-encoded series of visual frames for system 10 to present to user 16 in response to user interaction. For instance, automated music training can benefit from video response. Video track 48f can include a stock series of pictures or movies, coordinated to certain points in standardized performance 15. For instance, the video track
48f can depict a turntable for a deejay application. In this case, for a given standardized performance 15, the video track 48f can offer a different, customized version of a turntable.
DA GUIDE TRACK
Conceptually, the DA guide track 48g is similar to the guide track 48b but operates specifically with digital audio clips. DA guide track 48g uses MIDI control messages to point to digital audio clips, indexed in the DA trigger chunk 38c and stored in the DA chunk 38d. DA guide track 48g includes a time-encoded series of trigger intervals. The trigger intervals indicate when a given clip should be played. The note number indicates which clip to play, the note placement in time indicates when to play it, and the note duration indicates for how long to play it. DA guide track 48g is useful at least when the standardized performance 15 includes audio content that cannot be synthesized satisfactorily, such as with a particular vocal performance or, in general, any performance with unusual or distinctive sonic qualities.
One efficient use of sound recordings, or digital audio clips, exploits the fact that many standardized performances 15 include redundancy. For example, background tracks often contain repeated musical passages, or large portions of silence, or both. Therefore, these background tracks can be broken into discrete clips, each of which represents a first instance of each repeated portion, making subsequent repeated instances obsolete. Thus, storage space and bandwidth are not wasted saving redundant passages. During playback, these clips can be rendered repeatedly by referencing each appropriate clip at an appropriate time. For example, if a standardized performance 15 has five identical fifteen second background choruses and these five choruses are each separated by forty-five seconds of silence, this background track recorded in its entirety would be four minutes and fifteen seconds long. However, there are only fifteen seconds of unique data in this track, in that this chunk of data is repeated five times. Accordingly, by recording only the unique portions of data, a four minute and fifteen second background track can be reduced to only fifteen seconds, resulting in a 94% file size reduction. By utilizing a MIDI trigger file to initiate the timed and repeated playback of this fifteen second data track (once per minute for five minutes), a background track can be created which has the space saving characteristics of a MIDI file yet the robust sound characteristics of an MPEG file.
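The arithmetic behind this example can be checked directly: five 15-second choruses separated by 45 seconds of silence span 4 minutes 15 seconds, of which only 15 seconds is unique audio. The short calculation below merely verifies the quoted figures.

```python
# Verifying the worked example: five identical 15 s choruses with 45 s of silence between them.
chorus, gap, repeats = 15, 45, 5
full_length = repeats * chorus + (repeats - 1) * gap   # 255 s = 4 min 15 s
unique_audio = chorus                                   # only one chorus needs storing
reduction = 1 - unique_audio / full_length
print(full_length, f"{reduction:.0%}")                  # 255  94%
```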
DEVICES
Referring now to FIG. 4, a client device 12 executes system logic 18 of karaoke system 10. In this embodiment, client device 12 is a personal computer. Client device 12 includes main memory 12a, storage 12b, and a processor 12c, interconnected by a bus 12d. Storage 12b is a non-volatile storage medium such as a disk drive. Processor 12c executes machine-readable instructions stored in main memory 12a or in storage 12b, or both, according to operating system 18a. Bus 12d carries communications between components of the client device 12.
In this embodiment, operating system 18a is a Microsoft Windows operating system such as Windows 98, Windows 98SE, Windows ME, Windows 2000, Windows XP, or other compatible operating systems.
Audio output subsystem 27 includes components for the reproduction of sound under the control of processor 12c. In client device 12 this typically includes a sound card, a loudspeaker or headphones, and an amplifier, together with software drivers for operating the sound card with the operating system 18a. Client device 12 optionally includes a network interface 12e, which enables communication by client device 12 across a network 58 via a link 58a. Example network interfaces 12e include an Ethernet transceiver or a modem. Network interface 12e is typically present, at least so that client device 12 can communicate with server 30, which is a computing device distinct from client device 12 and which uses a link 58b to communicate via network 58. Client device 12 can download files 14 from server 30.
Client device 12 also includes a visual display device 26 and one or more input devices 28. Visual display device 26 is a computer screen. There can be several input devices 28 (shown in FIG. 1 A), including common personal computer peripheral input devices 28', such as a QWERTY keyboard 28e, mouse 28f, or touch-sensitive screen (not shown). Other types of input device 28 include musical input devices 28", such as string input device 28a (e.g., an electronic guitar pick for a virtual guitar or for a virtual bass guitar), microphone input device 28b, percussion input device 28d (e.g., an electronic drum pad for a virtual drum), or MIDI-enabled instrument input device 28c (e.g. an electronic piano, guitar, etc.). Both musical and non-musical devices can be used as input devices 28 to system 10. For example, a user 16 can provide input stimuli to a part by tapping on the space bar of a QWERTY keyboard 28e.
Client device 12 includes input ports (not shown) for various virtual instrument input devices 28. These virtual instrument devices are the subject of U.S. Patent No. 5,393,926, entitled "Virtual Music System", filed June 7, 1993, issued February 28, 1995, and herein incorporated by reference. Further, these virtual instrument input devices 28 and virtual instruments are the subject of U.S. Patent No. 5,670,729, entitled "A Virtual Music Instrument with a Novel Input Device", filed May 11, 1995, issued September 23, 1997, and incorporated herein by reference.
In the present embodiment, the virtual pick devices 28a are USB devices.
SOFTWARE ARCHITECTURE
Referring now to FIG. 5, software components of system 10 have a layered architecture. In general, the layers collect software components according to function. Server layer 60d is due to a client/server division of services. Server layer 60d includes services of server 30 that are remote relative to client device 12, such as shared storage 30a. System 10 communicates with components of server layer 60d across network 58.
Layers local to client device 12 include an executable layer 60a, a libraries layer 60b, and an operating system (or "OS") services layer 60c. Executable layer 60a includes player 20 and a song editor 20a. In this embodiment, which uses a Microsoft Windows operating system 18a, player 20 is a ".EXE" file. In other words, player 20 is an application executable by operating system 18a. Player 20 is the primary executable involved in playing back files 14.
The libraries layer 60b includes an engine library 22. In this embodiment, which uses a Microsoft Windows operating system 18a, engine library 22 is a dynamically linked library, or "DLL". Engine library 22 contains instructions and data that supplement the computing instructions of player 20. Player 20 loads engine library 22 automatically.
The libraries layer 60b also includes auxiliary files such as instrument bank 24. Instrument bank 24 contains sound fonts 39, independent of sound fonts 39 stored in data file 14. For example, instrument bank 24 can act as a library of available sound fonts 39 that is pre-installed along with player 20.
Though both the engine library 22 and the instrument bank 24 are referred to as "libraries", they are conceptually different at least in that engine library 22 contains executable instructions and instrument bank 24 does not. Instrument bank 24 is a data file or document, used by system logic 18. In general, the layered architecture of system logic 18 reflects standard practices for the operating system 18a and active software (i.e., instructions that are executable). Broadly, OS services layer 60c includes services that can be used or shared by applications running on the operating system 18a, including services that are part of operating system 18a. In particular, OS services layer 60c includes OS services 62 and third-party services 66. OS services 62 are part of operating system 18a (shown in FIG. 4). OS services 62 include device drivers 62a, a graphics applications programming interface (API) 62b, an audio mixer API 62c, and a file system 62d. The graphics API 62b, for instance, enables system 10 to use visual display device 26. Audio mixer API 62c enables system 10 to use audio output subsystem 27. File system 62d enables system 10 to use storage 12b. Device drivers 62a handle low-level communications with input devices 28, typically shielding components of system 10 from having to manage such low-level communications, while device drivers 62a act as a gateway for communications at a high level.
Third-party services 66 include an audio synthesizer 66a. Audio synthesizer 66a can read a MIDI stream and render it as audio via audio output subsystem 27.
CLASSES AND INTERFACES
Referring now to FIG. 6, system logic 18 includes classes that define software objects. System logic 18 also includes interfaces that are implemented in the classes. In general, the classes specify behaviors and properties. A class definition provides enough information for an object-oriented runtime process, such as system logic 18, to generate an object. The generated object is an instance of the class and is said to belong to that class. An object that belongs to a class implements the behaviors and properties of that class. An interface specifies a collection of behaviors. A class defines an implementation of an interface. Typically, both classes and objects from such classes are said to implement an interface.
One use of an interface is to standardize a common set of behaviors. Different types of objects can each implement the same interface. This simplifies manipulations of such disparate objects, as the common interface imposes consistency. In addition, in some object oriented languages such as Java, an object that implements an interface can be referenced via its interface implementation, as distinct from a reference to the object as a whole.
This description and these figures focus on objects. The class definitions of such objects are understood to be available to system logic 18. System logic 18 includes top-level objects 70a, dynamic objects 70b, and interfaces 70c. Top-level objects 70a include performance object 72, VI manager object 74, global accompaniment object 76, performance pool object 78, and peripheral manager object 79. In general, top-level objects 70a define objects that are generated when system 10 is initialized. Dynamic objects 70b include virtual instrument object 80. Interfaces 70c include performance timer interface 84 and transport interface 86.
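In an object-oriented language without a native interface construct, interfaces 70c can be approximated with abstract base classes. The sketch below is only an analogy: the method names echo the transport interface 86 described elsewhere (play, stop, and so on), but the classes and method bodies are hypothetical and are not taken from system logic 18.

```python
# Analogy for interfaces 70c: a common set of behaviors declared once and implemented
# by different classes, so callers can treat disparate objects uniformly.
from abc import ABC, abstractmethod

class Transport(ABC):                    # loosely modeled on transport interface 86
    @abstractmethod
    def play(self): ...
    @abstractmethod
    def stop(self): ...

class GlobalAccompaniment(Transport):    # one implementer
    def play(self): print("accompaniment: start master timing and playback")
    def stop(self): print("accompaniment: stop")

class VirtualInstrument(Transport):      # a very different object, same interface
    def play(self): print("virtual instrument: begin rendering its part")
    def stop(self): print("virtual instrument: stop")

for obj in (GlobalAccompaniment(), VirtualInstrument()):   # uniform manipulation
    obj.play()
```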
SYSTEM BEHAVIOR
Referring now to FIG. 6B, system logic 18 includes system behavior 90. In general, system behavior 90 includes procedures for selecting a multipart file 14 and playing back the associated live performance, in response to user input.
System behavior 90 initializes objects and settings of system 10 (process 92). Once user 16 chooses a standardized performance (process 90a), system behavior 90 selects a corresponding multipart data file 14 and prepares related objects (process 94), as will be explained. Once user 16 chooses parts to interact with (process 90b), system behavior 90 configures corresponding virtual instrument objects 80 (process 96). Next, user 16 initiates playback (process 90c) and system behavior 90 begins live interactive playback process 98.
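Read as a control flow, system behavior 90 is a short sequence of steps driven by user choices. The outline below is a hypothetical rendering of that sequence; the placeholder values and print statements merely mirror the numbered processes and are not identifiers from the actual system.

```python
# Hypothetical outline of system behavior 90; names and values are illustrative only.
def run_system_behavior():
    print("process 92: initialize objects and settings of system 10")
    performance = "example song"            # process 90a: user 16 chooses a performance
    print(f"process 94: select data file 14 for {performance!r} and prepare objects")
    parts = ["vocals", "guitar"]            # process 90b: user 16 chooses parts
    print(f"process 96: configure virtual instrument objects 80 for {parts}")
    print("process 90c: user 16 initiates playback")
    print("process 98: begin live interactive playback")

run_system_behavior()
```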
SYSTEM INITIALIZATION
Referring now to FIG. 6C, system initialization 92 includes starting the player 20
(process 92a), for example when operating system 18a loads player 20 for execution by processor 12c. For instance, player 20 can start when user 16 uses a mouse input device 28 and a graphical user interface (GUI) shown in the visual display device 26 to double-click on an icon for the player 20. Next, system initialization 92 creates a performance object 72 (process 92b). As will be explained, performance object 72 generates and initializes other top-level objects 70a, except that the VI manager object 74 creates a peripheral manager object 79 to help coordinate the creation and operation of virtual instrument objects 80.
System initialization 92 launches an application window 100 (process 92c).
PERFORMANCE
In general, a performance object 72 represents a live performance of a standardized performance 15 and includes properties and behaviors to manage the live performance. Performance object 72 is the first top-level object 70a to be instantiated. Performance object 72 launches other top-level objects 70a.
Referring now to FIG. 7A, performance object 72 includes a process for child object creation 72c. Performance object 72 also includes properties such as a song reference 72g, which specifies the standardized performance 15 to perform.
Child object creation 72c is invoked when performance object 72 is created. Child object creation 72c includes processes such as VI Manager launch 72d, accompaniment launch 72e, and performance pool launch 72f. VI Manager launch 72d creates a VI manager object 74. Accompaniment launch 72e creates a global accompaniment object 76. Performance pool launch 72f creates a performance pool object 78. Each of these objects (VI manager object 74, global accompaniment object 76, and performance pool object 78) created by the performance object 72 is singular to that performance object 72.
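As a minimal sketch of child object creation 72c, assuming hypothetical Java class names that merely mirror the objects named above, the performance object's constructor could create its three child objects directly.

// Sketch of child object creation 72c: one VI manager, one global
// accompaniment, and one performance pool, each private to this performance.
public class Performance {
    private final String songReference;               // corresponds to song reference 72g
    private final VIManager viManager;                 // VI Manager launch 72d
    private final GlobalAccompaniment accompaniment;   // accompaniment launch 72e
    private final PerformancePool pool;                // performance pool launch 72f

    public Performance(String songReference) {
        this.songReference = songReference;
        this.viManager = new VIManager();
        this.accompaniment = new GlobalAccompaniment();
        this.pool = new PerformancePool();
    }
}

class VIManager {
    // Mirrors process 74c: the VI manager creates the peripheral manager.
    private final PeripheralManager peripherals = new PeripheralManager();
}
class GlobalAccompaniment { }
class PerformancePool { }
class PeripheralManager { }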
Performance object 72 also implements a transport interface 86, described with reference to FIG. 14B.
APPLICATION WINDOW
Referring now to FIG. 8A, player 20 has an application window 100 in the GUI managed by operating system 18a. Application window 100 includes a control area 100a. The user 16 interacts with the control area 100a to select a standardized performance 15 from a list 100d of standardized performances 15 performable on system 10. List 100d displays, for each available standardized performance 15, information stored in the song information 36 (shown in FIG. 2) of the corresponding data file 14. User 16 accesses and navigates list 100d via the GUI. List 100d can show those standardized performances 15 for data files 14 already downloaded from remote music server 30. Additionally, list 100d can include standardized performances 15 for data files 14 available from remote music server 30.
Application window 100 also includes a song info display 100b and a user area region 100c. Song info display 100b displays information stored in the song information 36 of a currently selected standardized performance 15. User area region 100c includes one or more user areas 102, each of which corresponds to a part playable by a user 16. During a live performance, when each user interacting with karaoke system 10 is paired to a part 15c, each such user 16 receives visual feedback appropriate to his or her part in a user area 102 dedicated to that user 16.
PERIPHERAL MANAGER
Referring to FIG. 8B, peripheral manager object 79 includes processes such as device discovery 79a, device catalog service 79b, and driver management 79e. Peripheral manager object 79 also includes properties such as input device catalog 79c, which contains input device descriptions 79d.
Device discovery 79a is invoked at runtime to discover input devices 28 attached to client device 12. Device discovery 79a stores information about such input devices 28 in input device descriptions 79d. Device catalog service 79b makes the contents of input device catalog 79c available to other objects such as virtual instrument objects 80. Driver management 79e interacts with device drivers 62a (shown in FIG. 5) to communicate with input devices 28.
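One possible shape of this discovery-and-catalog behavior is sketched below in Java; the device names, types, and method names are assumptions made for illustration, and real device enumeration (which the description delegates to device drivers 62a) is stubbed out.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of peripheral manager behavior: discover devices once at startup,
// then expose a read-only catalog to other objects. Names are illustrative.
public class PeripheralManagerSketch {
    // Stands in for an input device description 79d.
    public record DeviceDescription(String name, String type) { }

    private final List<DeviceDescription> catalog = new ArrayList<>();  // input device catalog 79c

    // Device discovery 79a: a real system would query the device drivers;
    // here discovery is stubbed with plausible example devices.
    public void discoverDevices() {
        catalog.add(new DeviceDescription("guitar pick", "strummer"));
        catalog.add(new DeviceDescription("drum pad", "percussion"));
        catalog.add(new DeviceDescription("microphone", "vocal"));
    }

    // Device catalog service 79b: other objects read, but cannot modify, the catalog.
    public List<DeviceDescription> deviceCatalog() {
        return Collections.unmodifiableList(catalog);
    }

    public static void main(String[] args) {
        PeripheralManagerSketch pm = new PeripheralManagerSketch();
        pm.discoverDevices();
        pm.deviceCatalog().forEach(d -> System.out.println(d.name() + " (" + d.type() + ")"));
    }
}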
VI MANAGER OBJECT
In general, a VI manager object 74 manages a collection of virtual instrument objects 80. Typically, each such virtual instrument object 80 represents a different part of the audio content of standardized performance 15.
Referring now to FIG. 9A, a VI manager object 74 includes processes such as virtual instrument creation 74a, child object creation 74b, and load process 104. VI manager object 74 also includes properties such as a virtual instrument object collection 74d, which contains a reference 74e for each virtual instrument object 80 created by VI manager object 74.
VI manager object 74 is instantiated during system initialization 92. Automatically upon being instantiated, VI manager object 74 performs child object creation 74b. Child object creation 74b instantiates a peripheral manager object 79 (process 74c). Load process 104 occurs when user 16 selects a song 15a, as part of file selection 94, as will be explained.
Referring now to FIG. 9B, load process 104 looks in file 14 for a VI pool 40 (process 104a). Next, load process 104 looks in VI pool 40 for part chunks 42 (process 104b). Load process 104 examines multipart data file 14 to determine which virtual instruments need to be generated. In particular, load process 104 scans the information chunk 44 (shown in FIG. 3) of each part chunk 42 (process 104c). Load process 104 finds a reference that specifies the current part chunk 42 (process 104d) and passes that reference when it instantiates a virtual instrument object 80 to correspond to that part chunk 42 (process 74a). Load process 104 also adds (to collection 74d) a reference 74e to the new virtual instrument object 80 (process 104e). Load process 104 loops for each part chunk 42 in VI pool 40 (process 104b) and exits afterward.
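A rough Java sketch of this loop follows; the chunk and collection types are hypothetical stand-ins for the structures named above, not the actual file-reading code.

import java.util.ArrayList;
import java.util.List;

// Sketch of load process 104: scan the VI pool for part chunks and create one
// virtual instrument object per part chunk. All names are illustrative.
public class LoadProcessSketch {
    static class PartChunk {
        final String name;      // from the part's information chunk
        PartChunk(String name) { this.name = name; }
    }

    static class VirtualInstrument {
        final PartChunk part;   // reference to the part chunk this instrument renders
        VirtualInstrument(PartChunk part) { this.part = part; }
    }

    // Collection 74d of references 74e to created virtual instruments.
    private final List<VirtualInstrument> instruments = new ArrayList<>();

    // Processes 104a-104e, simplified: iterate over the part chunks found in the VI pool.
    public void load(List<PartChunk> viPool) {
        for (PartChunk chunk : viPool) {                          // 104b: next part chunk
            VirtualInstrument vi = new VirtualInstrument(chunk);  // 104d/74a: pass the reference
            instruments.add(vi);                                  // 104e: remember the new object
        }
    }

    public static void main(String[] args) {
        LoadProcessSketch mgr = new LoadProcessSketch();
        mgr.load(List.of(new PartChunk("lead guitar"), new PartChunk("drums")));
        System.out.println(mgr.instruments.size() + " virtual instruments created");
    }
}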
FILE SELECTION
Referring now to FIG. 10A, user 16 selects a standardized performance 15 (process 90a, shown in FIG. 6B). File selection 94 locates the corresponding data file 14 (procedure 94a). File selection 94 passes a file reference that specifies the data file 14 to performance object 72 (procedure 94b). For instance, the file reference can be a file name within file system 62d (shown in FIG. 5). Using the file reference, the performance object 72 causes the performance pool object 78 to load the data file 14 (procedure 94c). The performance object 72 uses load process 104 to instruct its child objects to load (procedure 94d).
When user 16 wishes to perform a standardized performance 15 available in the database of remote music server 30, or when an administrator wishes to add a standardized performance 15 to list 100d, interactive karaoke system 10 downloads the appropriate multipart data file 14 from server 30.
PART SELECTION
Referring now to FIG. 10B, available virtual instruments are presented to user 16 in the form of a list displayed in application window 100. Part selection 96 responds to user interactions with that list and related GUI controls in application window 100. In general, part selection 96 allows zero or more users 16 to select parts to play. If no users 16 are paired with parts, system 10 can use guide tracks 48b to render the standardized performance 15. If multiple users 16 are paired with parts, a virtual band is created. If a user indicates that he or she wants to play a part (process 96a), part selection 96 makes the corresponding virtual instrument object 80 interactive (process 96b). Part selection 96 then uses the GUI to prompt the user 16 to choose an input device 28 (process 96c) and a sound font 39 (process 96d). Note that processes 96c and 96d are optional, as the part chunk 42 has a default input device 28 and sound font 39 that can be deduced from type 44b. Process 96d allows user 16 to override the default sound font 39. An example of process 96c is the user 16 choosing a guitar pick 28a to play a drum part.
If a user indicates that he or she does not want to play a part (process 96a), part selection 96 makes the corresponding virtual instrument object 80 non-interactive (process 96e). Part selection 96 repeats these choices (process 96f) for as many users 16 as choose to play parts, subject to the number of available input devices 28.
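The selection logic can be sketched as follows in Java; the flag and field names are illustrative assumptions, and the GUI prompting of processes 96c and 96d is reduced to plain method parameters.

import java.util.List;

// Sketch of part selection 96: each virtual instrument is marked interactive or
// non-interactive according to the user's choice. Names are illustrative.
public class PartSelectionSketch {
    static class VirtualInstrument {
        final String partName;
        boolean interactive;              // process 96b / 96e toggles this flag
        String inputDevice = "default";   // optional override, process 96c
        String soundFont = "default";     // optional override, process 96d
        VirtualInstrument(String partName) { this.partName = partName; }
    }

    static void selectPart(VirtualInstrument vi, boolean userWantsToPlay,
                           String device, String font) {
        vi.interactive = userWantsToPlay;
        if (userWantsToPlay) {
            if (device != null) vi.inputDevice = device;  // user override of the default device
            if (font != null) vi.soundFont = font;        // user override of the default sound font
        }
    }

    public static void main(String[] args) {
        List<VirtualInstrument> parts =
                List.of(new VirtualInstrument("drums"), new VirtualInstrument("bass"));
        // First user plays drums with a guitar pick; nobody plays bass.
        selectPart(parts.get(0), true, "guitar pick", null);
        selectPart(parts.get(1), false, null, null);
        parts.forEach(p -> System.out.println(p.partName + " interactive=" + p.interactive));
    }
}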
PLAYBACK
Referring now to FIG. 7B, user 16 instructs system 10 to begin a live interactive playback process 98 (process 90c). Live interactive playback process 98 instructs performance object 72 to begin playback processing 72a (process 98a). Playback processing 72a then instructs each virtual instrument object 80 to begin user input processing 80a (process 98b). Playback processing 72a also instructs global accompaniment object 76 to begin non-interactive playback 76a (process 98c). Virtual instrument objects 80 and global accompaniment object 76 operate separately during the live performance (process 98d) until the standardized performance 15 is complete or is interrupted by user 16.
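A minimal Java sketch of this start-up sequence, with hypothetical names and without any threading or audio detail, might look like this:

import java.util.List;

// Sketch of live interactive playback startup (processes 98a-98d):
// the performance tells each child object to begin its own processing.
public class PlaybackStartSketch {
    interface Part { void begin(); }

    static class VirtualInstrumentPart implements Part {
        public void begin() { System.out.println("user input processing started"); }     // 80a
    }

    static class AccompanimentPart implements Part {
        public void begin() { System.out.println("non-interactive playback started"); }  // 76a
    }

    static class Performance {
        private final List<Part> parts;
        Performance(List<Part> parts) { this.parts = parts; }
        void play() {                    // playback processing 72a
            parts.forEach(Part::begin);  // 98b and 98c: start every part
        }
    }

    public static void main(String[] args) {
        new Performance(List.of(new VirtualInstrumentPart(), new AccompanimentPart())).play();
    }
}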
VIRTUAL INSTRUMENT OBJECT
Referring now to FIG. 11A, a virtual instrument object 80 includes processes such as user input processing 80a, part player 80b, and cue display 82. Virtual instrument object 80 also includes properties such as a matching tag 80f, a peripheral manager reference 80g, a performance pool reference 80h, and a performance pool offset 80i.
Virtual instrument object 80 has a reference to a performance timer interface 84 on global accompaniment object 76. Virtual instrument object 80 also implements a transport interface 86, described with reference to FIG. 14A and FIG. 14B, respectively.
Virtual instrument object 80 is interactive, i.e., it responds to user input stimuli during a live performance. User input processing 80a handles these interactions, correlating these stimuli to prompting data encoded in cue track 48c. Peripheral manager reference 80g specifies peripheral manager object 79, which enables communication with an input device 28.
Virtual instrument object 80 presents visual feedback to user 16 via cue display 82. Matching tag 80f specifies types of musical input devices 28 that are recommended for use with virtual instrument object 80. Input devices 28 are represented in input device catalog 79c (shown in FIG. 8B).
Virtual instrument object 80 reads performance track 48a (shown in FIG. 15C) and other tracks via the performance pool object 78. Performance pool reference 80h and performance pool offset 80i specify the location of the relevant performance track 48a. Part player 80b includes an interactive playback process 80c and a fill process 80d.
Interactive playback process 80c renders audio content of the performance track 48a and (when such content is present) renders the local accompaniment track 48e and video track 48f. Fill process 80d renders guide track 48b and DA guide track 48g. Regardless of the parts 15c that user 16 chooses to play, interactive karaoke system 10 can render a live performance that does not have any unplayed parts 15c, as fill process 80d fills in any missing performances.
During a live performance, user 16 provides input stimuli to one or more of these virtual instrument input devices 28. These input stimuli generate one or more input signals, each of which corresponds to one of the virtual instrument input devices 28. The form of input stimulus provided by user 16 varies with the type of input device 28 and virtual instrument that user 16 is playing. For parts that utilize an electronic guitar pick 28a (shown in FIG. 4), user 16 typically provides an input stimulus by swiping the virtual guitar pick 28a on a hard surface. For percussion parts that use an electronic drum pad 28d, user 16 typically strikes the drum pad with a hard object. For vocal parts, user 16 sings into a microphone 28b.
Part player 80b maps the input signal received by a particular virtual instrument object 80 to notes for audio output in accordance with audio content encoded in performance track 48a. However, user 16 might provide these input stimuli early or late in time, relative to the timing indicia. Or, user 16 might provide a different number of input stimuli than the audio content specifies. Accordingly, for each pitch control indicium, part player 80b determines a time window during which any input stimulus received from the corresponding virtual instrument is mapped to the audio content of performance track 48a for that time period. For example, if user 16 strums a virtual guitar pick 28a three times in the time window (each strum being a stimulus), part player 80b renders three samples of the corresponding audio content, even if the audio content specifies continuous, sustained sound during that time. This allows user 16 to improvise and customize his or her performance.
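The time-window behavior can be sketched as follows in Java; the window boundaries, timestamps, and data layout are assumptions chosen only to illustrate that every stimulus falling inside a window triggers one rendering of that window's audio content.

import java.util.List;

// Sketch of the time-window mapping described above: each cued note owns a
// window, and every stimulus falling inside that window triggers the note's
// audio content once. Window width and data layout are assumptions.
public class TimeWindowSketch {
    record CuedNote(long startMs, long endMs, int sampleId) { }

    // Returns how many times the note's sample should be rendered: one
    // rendering per stimulus received inside the note's window.
    static int renderCount(CuedNote note, List<Long> stimulusTimesMs) {
        int count = 0;
        for (long t : stimulusTimesMs) {
            if (t >= note.startMs() && t < note.endMs()) {
                count++;   // early, late, or repeated strums all map to this note
            }
        }
        return count;
    }

    public static void main(String[] args) {
        CuedNote chord = new CuedNote(1000, 1500, 42);
        // Three strums within the window: the sample is rendered three times,
        // even if the underlying content specifies one sustained sound.
        List<Long> strums = List.of(1010L, 1200L, 1480L);
        System.out.println("renderings: " + renderCount(chord, strums));
    }
}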
In addition to controlling the pitch of the specific notes played by a user, part player 80b sets the acoustical characteristics of each virtual instrument in accordance with the sound font 39 for that particular virtual instrument.
While vocals do not require any processing and are simply replayed by interactive karaoke system 10, input stimuli provided to non-vocal virtual instrument objects 80 (e.g., ones representing guitars, basses, or drums) are processed so that one or more notes, each having a specific pitch, timing, and timbre, can be played for each of these input stimuli. Performance track 48a provides the information required to map each one of these input stimuli to a particular note or set of notes.
VI TREE
Referring now to FIG. 11B, virtual instrument object 80 supports object inheritance. General characteristics of a virtual instrument, as expressed in the class 110 for virtual instrument object 80, can be inherited by subclasses that refine or customize these characteristics to their needs, as well as add characteristics that do not apply to other subclasses of virtual instrument. For example, a VIVocal class 111 can include a microphone interface process 111a, while a VIDrummer class 112 includes a stick interface process 112a, and a VIStrummer class 114 includes a pick interface process 114a. Each of these interface processes 111a, 112a, and 114a is unique to its class. Subclasses of virtual instrument class 110 can have their own subclasses. For example, VIBass 116 and VIGuitar 118 each inherit from the VIStrummer class 114.
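The inheritance structure described above maps naturally onto a class hierarchy; the following Java sketch mirrors the named classes, with placeholder method bodies that are not part of the described embodiment.

// Sketch of the virtual instrument class tree described above. Method bodies
// are placeholders; only the inheritance structure mirrors the description.
abstract class VirtualInstrument {               // class 110
    abstract void handleStimulus();              // common behavior, refined by subclasses
}

class VIVocal extends VirtualInstrument {        // class 111
    void handleStimulus() { microphoneInterface(); }
    void microphoneInterface() { /* 111a: read audio from microphone 28b */ }
}

class VIDrummer extends VirtualInstrument {      // class 112
    void handleStimulus() { stickInterface(); }
    void stickInterface() { /* 112a: read strikes from drum pad 28d */ }
}

class VIStrummer extends VirtualInstrument {     // class 114
    void handleStimulus() { pickInterface(); }
    void pickInterface() { /* 114a: read swipes from guitar pick 28a */ }
}

class VIBass extends VIStrummer { }              // class 116: inherits pick handling
class VIGuitar extends VIStrummer { }            // class 118: inherits pick handling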
CUE DISPLAY
Referring now to FIG. 12A, cue display 82 prompts user 16 for input stimuli during a live performance. Cue display 82 renders the prompts in a user area 102 according to timing indicia in cue track 48c. These timing indicia vary in form depending on the type of virtual instrument input device 28 and virtual instrument being played. If virtual instrument input device 28 is a string input device 28 or a percussion input device 28, for instance, timing indicia are rendered as spikes 122. Each spike 122 graphically displays the point in time at which user 16 is to provide an input stimulus to the virtual instrument input device 28. The time is visually represented by the position of the spike 122 within a cueing region, along an axis 102c. This cue track 48c is the subject of U.S. Patent No. 6,175,070 B1, entitled
"System and Method for Variable Music Annotation", filed February 17, 2000, issued
January 16, 2001, and incorporated herein by reference.
In addition to or instead of spikes 122, which only show the point in time at which the user 16 is to provide an input stimulus, cue display 82 can display information concerning the pitch of the notes being played, in the form of a staff (not shown) or note-based musical annotation, as provided by score track 48d. For instance, cue display 82 can render chord notation 102e, or (shown in FIG. 12B) tablatures 102f or 102g.
Cue display 82 can render spikes 122 as double spikes 122a on both of the sides of cueing region 102b that are aligned with time axis 102c. Alternatively, cue display 82 can render spikes 122 as single spikes 122b on one side of cueing region 102b.
Another alternative is two groups of single spikes 122b, on opposing sides of cueing region 102b. In this case, a first group of single spikes 122b provides cues, while the other group of single spikes 122b illustrates the timing of the actual input stimuli provided by user 16 during the live performance. Thus, the relative positions of the cueing spikes 122b and the stimulus spikes 122b provide graphic feedback regarding the accuracy of the user input, relative to the timing of the cues.
Referring now to FIG. 12B, spikes 122 are in a fixed position on cueing region 102b while a sweeper 102h repeatedly sweeps from left to right across the cueing region 102b. Alternatively, referring now to FIG. 12A, cueing region 102b and its contents can scroll to the left. In this latter scheme, the timing of each prompt is indicated by the corresponding spike 122 passing under a fixed timing indicator 102i.
For a live performance of a vocal part, cue display 82 can prompt the user 16 with lyrics. For a vocal part, the timing indicia provided by cue track 48c include such lyrics, together with timing information indicating the specific point in time that each word or phrase is to be sung. Cue display 82 can sequentially render each word or phrase as highlighted lyrics 102k at the specific point in time that each word is to be sung, in coordination with sweeper 102h or timing indicator 102i.
Cue display 82 renders a name 102a in cueing region 102b. Name 102a typically contains text describing the part, corresponding to information provided in information chunk 44 (shown in FIG. 3B).
GLOBAL ACCOMPANIMENT
A live performance requires at least one track of musical instructions from the global accompaniment. Even if all parts are interactive, i.e. not audibly accompanied, a performance needs a master timing control.
Referring now to FIG. 13A, global accompaniment object 76 includes processes such as an accompaniment load process 120 and a non-interactive playback process 76a. Global accompaniment object 76 also includes properties such as accompaniment pool reference 76b, which locates the accompaniment pool 38 in data file 14 via performance pool object 78, and a matching tag 76c, which specifies sound fonts 39, similar to the matching tag 80f of virtual instrument object 80. However, the matching tag 80f of virtual instrument object 80 specifies compatible input devices 28, while matching tag 76c does not. (Global accompaniment object 76 does not require information on input devices 28, since global accompaniment object 76 plays non-interactive parts.) Non-interactive playback process 76a renders the audio content of tracks chunk 38a and provides a master timing pulse for a live performance.
Global accompaniment object 76 implements a performance timer interface 84 and a transport interface 86, described with reference to FIG. 14A and FIG. 14B, respectively.
Referring now to FIG. 13B, accompaniment load process 120 loads musical content from tracks chunk 38a (process 120a). Next, accompaniment load process 120 interacts with software synthesizer 66a to prepare it with sound fonts 39 (process 120b). Next, accompaniment load process 120 reads at least the first portion of DA trigger chunk 38c (process 120c). Accompaniment load process 120 then primes audio buffers of audio output subsystem 27 with initial samples of MP3 files from DA chunk 38d, if any exist (process 120d). The priming is in advance of the signal from user 16 to begin the live performance. Priming the buffers improves responsiveness when that signal does occur.
PERFORMANCE TIMER AND TRANSPORT INTERFACES
In general, synchronous playback of the multiple parts of multipart data file 14 requires a coordinated notion of timing. Referring now to FIG. 14A, a performance timer interface 84 allows the exchange of timing signals. In particular, performance timer interface 84 allows the dissemination of a clock pulse between objects that implement the performance timer interface 84.
Performance timer interface 84 includes a pulse dissemination process 84a and a pulse reception process 84b. Pulse reception process 84b lets a compliant object receive notice of timed events in synchronicity with a master timer. The global accompaniment object 76 acts as the master timer. It originates the clock pulse, based on timing information in the tracks chunk 38a, and uses the pulse dissemination process 84a to signal other objects that use the master timing signal, including performance object 72 and virtual instrument object 80. Events that are timed and disseminated by the pulse dissemination process 84a include both the pulse and musical events, such as starts and stops of a live performance, boundaries of musical measures, and beats.
Referring now to FIG. 14B, a transport interface 86 describes processes for controlling the rate of playback of multipart data file 14. Transport interface 86 includes processes for play 86a, stop 86b, forward 86c, and rewind 86d. Transport interface 86 allows objects to coordinate synchronous playback of parts. In particular, performance object 72 and global accompaniment object 76 can control the rate of synchronous playback by virtual instrument object 80.
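For illustration, the two interfaces might be expressed in Java roughly as follows; the method names and the listener mechanism are assumptions, and only the division of roles (master timer versus transport control) follows the description above.

import java.util.ArrayList;
import java.util.List;

// Sketch of the two interfaces: the accompaniment acts as the master timer and
// disseminates pulses; any part can be driven through the transport interface.
public class TimingSketch {
    interface PerformanceTimerListener {
        void onPulse(long beat);                 // pulse reception 84b
    }

    interface Transport {                        // transport interface 86
        void play();
        void stop();
        void forward();
        void rewind();
    }

    static class GlobalAccompaniment implements Transport {
        private final List<PerformanceTimerListener> listeners = new ArrayList<>();
        void addListener(PerformanceTimerListener l) { listeners.add(l); }

        // Pulse dissemination 84a: the master timer notifies every listener.
        void disseminatePulse(long beat) {
            for (PerformanceTimerListener l : listeners) l.onPulse(beat);
        }

        public void play()    { System.out.println("accompaniment playing"); }
        public void stop()    { System.out.println("accompaniment stopped"); }
        public void forward() { System.out.println("seek forward"); }
        public void rewind()  { System.out.println("seek backward"); }
    }

    public static void main(String[] args) {
        GlobalAccompaniment master = new GlobalAccompaniment();
        master.addListener(beat -> System.out.println("virtual instrument saw beat " + beat));
        master.play();
        for (long beat = 0; beat < 4; beat++) master.disseminatePulse(beat);
        master.stop();
    }
}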
PERFORMANCE POOL
Referring now to FIG. 14C, performance pool object 78 includes processes such as decryption 78a, decompression 78b, and directory services 78c. Directory services 78c includes a discovery process 78d, a navigation process 78e, and an inspection process 78f. Performance pool object 78 also includes properties such as a directory structure 78g and an abstract access point 78h. Performance pool object 78 provides directory services 78c into data file 14. In other words, performance pool object 78 mediates between objects of system logic 18 and the data file 14 in storage 12b or on server 30. Performance pool object 78 provides an abstract access point 78h to data, thus shielding virtual instrument objects 80, for example, from having to inspect the file structure of data file 14, or to know the location of data file 14. Performance pool object 78 can provide a different abstract access point 78h to different client objects.
In general, directory services 78c are processes that are exposed for other objects to use. Discovery process 78d discovers recursive data structures 78g such as chunks 50. Navigation process 78e allows objects to navigate between such data structures 78g. Inspection process 78f allows objects to view data structures 78g and access their contents.
Decryption 78a and decompression 78b translate storage formats of data file 14 into formats available for use in system logic 18. In general, performance pool object 78 shields other objects from information about encryption, the delivery mechanism of data file 14, the location of data file 14, and the internal file structure of data file 14.
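The shielding role of the abstract access point can be sketched as follows in Java; the path strings, byte-array payloads, and method names are hypothetical, and decryption and decompression are omitted.

import java.util.Map;
import java.util.Optional;

// Sketch of the performance pool's shielding role: client objects ask for a
// named chunk and receive plain bytes, without knowing where the file lives
// or how it is stored. Storage, decryption, and decompression are stubbed out.
public class PerformancePoolSketch {
    private final Map<String, byte[]> chunksByPath;   // stands in for directory structure 78g

    public PerformancePoolSketch(Map<String, byte[]> chunksByPath) {
        this.chunksByPath = chunksByPath;
    }

    // Abstract access point 78h: decryption and decompression would happen
    // here, invisibly to the caller; this sketch just returns the stored bytes.
    public Optional<byte[]> open(String chunkPath) {
        return Optional.ofNullable(chunksByPath.get(chunkPath));
    }

    public static void main(String[] args) {
        PerformancePoolSketch pool = new PerformancePoolSketch(
                Map.of("viPool/part1/performanceTrack", new byte[] {1, 2, 3}));
        pool.open("viPool/part1/performanceTrack")
            .ifPresent(data -> System.out.println("read " + data.length + " bytes"));
    }
}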
ALTERNATE MIDI MAPPINGS
The MIDI protocol defines a time-encoded stream that can deliver note event data, along with other features such as a control stream. The note data assumes integer values from a range between 0 and 127 inclusive. Traditionally, each note in this range represents a distinct musical note in the Western musical scale, approximately encompassing the range of a traditional piano keyboard and most musical performances. According to this custom, the values of data in the note event stream represent notes for rendering by a synthesizer 66a. Also according to this custom, note event value 1 is a higher pitch than note event value 0, value 2 is higher than 1, and so forth throughout the range. A further custom is that non-note information, such as lyrics or control information, can be passed via MIDI in the control stream.
The architecture of DA trigger chunk 38c uses MIDI more generally, as a time-coded communication protocol. The values in the note event stream are semantically mapped to non-note meanings. In other words, the DA trigger architecture uses MIDI note event values to pass non-note data. In particular, the values in the note event stream are indexes to digital audio clips. The customary ordering of note event values (i.e., the notion that ascending note event values correspond to ascending pitch) is optional under this approach. For instance, the values in this alternative use of the MIDI note event stream can be chosen such that the index indicates the order in which the corresponding digital audio clip appears in the DA chunk 38d of file 14. Other orderings are also possible, or the note event values can be used without assigning any significance to their relative order.
Referring now to FIG. 15A, a mapping process 130 maps nominal MIDI note event values to non-note values, such as digital audio clips. For clarity, this description will use the term "MIDI note event value", since that is a conventional term for this portion of the MIDI stream. However, the term "note event value" in this context should be understood as not necessarily conveying musical note information. This description attaches the word "nominal" to emphasize that the MIDI note event value is referred to in name only. Indeed, one benefit of mapping process 130 is that it is not restricted by the customary interpretations of MIDI note event values as musical notes.
Mapping process 130 receives a mapping of nominal note event values to audio clips, for use with a MIDI stream (process 130a). Each nominal note event value in the mapping corresponds to a different audio clip. Mapping process 130 reads a nominal note event value from the MIDI stream (process 130b). Mapping process 130 maps the value to a non-note value, such as the index of an audio clip according to DA trigger chunk 38c (process 130c). Mapping process 130 returns to read subsequent values from the stream until the end of the stream (process 130d). Mapping process 130 then outputs the MIDI stream with nominal MIDI note event values replaced by corresponding clip references (process 130e).
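A compact Java sketch of mapping process 130 follows; the MIDI stream is simplified to a list of integers and the clip table to a map of hypothetical file names, so no real MIDI parsing is involved.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of mapping process 130: nominal MIDI note event values (0-127) are
// treated as indexes into a table of digital audio clips rather than as
// pitches. The MIDI stream is simplified to a list of integers.
public class NoteToClipMappingSketch {
    // Process 130a: the mapping of nominal note values to clip identifiers,
    // as a DA trigger table might supply it. File names are hypothetical.
    static final Map<Integer, String> CLIP_TABLE = Map.of(
            0, "intro.mp3",
            1, "chorus_hit.mp3",
            2, "outro.mp3");

    // Processes 130b-130e: read each nominal note value and replace it with
    // the corresponding clip reference; unknown values pass through as null.
    static List<String> mapStream(List<Integer> nominalNoteValues) {
        List<String> clipRefs = new ArrayList<>();
        for (int value : nominalNoteValues) {
            clipRefs.add(CLIP_TABLE.get(value));
        }
        return clipRefs;
    }

    public static void main(String[] args) {
        // The real-time variant (process 132) would emit each reference as soon
        // as it is mapped instead of collecting the whole list first.
        System.out.println(mapStream(List.of(1, 0, 2)));
    }
}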
Referring now to FIG. 15B, a real-time mapping process 132 is similar to mapping process 130, above, except for the timing of the output. Real-time mapping process 132 omits the output stage (process 130e) of mapping process 130. After mapping the read value to an audio clip reference, and before repeating the next read, real-time mapping process 132 outputs the MIDI data with the current nominal MIDI note event value replaced by a corresponding current clip reference (process 132a).
Referring now to FIG. 16, a MIDI mapping playback process 134 incorporates a MIDI mapping process to play back audio clips referenced in a stream of MIDI nominal note event values. MIDI mapping playback process 134 receives a MIDI stream and a mapping of note values to audio clips (process 134a). In the described embodiment, DA trigger chunk 38c provides a suitable mapping of nominal note event values to audio clips. MIDI mapping playback process 134 then uses real-time mapping process 132 on the MIDI stream, yielding a stream of references to audio clips (process 134b). MIDI mapping playback process 134 then renders the audio clips specified by the references (process 134c).
ALTERNATE EMBODIMENTS
While multipart data file 14 has been described as being transferred in a unitary fashion, this is for illustrative purposes only. Each multipart data file 14 is simply a collection of various components (e.g., interactive virtual instrument object 80 and global accompaniment object 76), each of which includes various subcomponents and tracks.
Accordingly, in addition to the unitary fashion described above, these components and/or subcomponents can also be transferred individually or in various groups.
Moreover, in the described embodiment, data file 14 is a file on a storage medium 12b or shared storage 30a. However, the format of data file 14 applies to any digital medium. In alternate embodiments, the format of data file 14 organizes digital information in a stream, such as in a network communication flow, or digital information in main memory of client device 12 or a server 30.
Part encoding process 19 receives a standardized performance 15 with each part 15c designated interactive or non-interactive (process 19a). For example, a human administrator could provide such designations.
In this embodiment, operating system 18a is a Microsoft Windows operating system such as Windows 95, Windows NT 4.0, or other compatible operating systems.
Engine library 22 has been described as a DLL, but engine library 22 could be a software component according to another standard. Moreover, engine library 22 need not be separate from player 20 but could be integrated with it.
System logic 18 has been described as residing on client device 12, which executes system logic 18. Alternatively, system logic 18 could be distributed across multiple devices 12.
The header 32 has been described as preceding the body 34 in data file 14. Other permutations of the orderings of the components of data file 14, either at a physical level or a logical level or both, are possible. In the described embodiment, data file 14 contains one standardized performance 15. Alternatively, data file 14 can contain more than one standardized performance 15. As another alternative, data file 14 can contain fractional portions of a standardized performance 15. For example, a first file 14 could contain a song 15a while a second file 14 could contain supplemental or alternate parts 15c.
In the described embodiment, data file 14 has a format that uses chunks 50, including a body 34 that includes accompaniment pool 38 and VI pool 40, which in turn contain additional chunks 50. In alternate embodiments, data file 14 could have the same logical entities in a different format. In the described embodiment, client device 12 is a personal computer. Other devices
12 are possible.
In the described embodiment, client device 12 includes storage 12b. Alternatively, storage 12b could be remote relative to client device 12.
Visual display device 26 could be a projector or other display. In the described embodiment, to play a part, the user chooses the part, then the system automatically selects the sound fonts and an input device. In an alternate embodiment, the user can choose among types of sounds for the part.
In the described embodiment, synthesizer control data is MIDI nominal note event values, which can adopt any of 128 distinct integer values in the range 0 to 127. In alternate embodiments, the synthesizer control data could be non-MIDI data. In other alternate embodiments, the synthesizer control data could be MIDI values other than nominal note event values, or could adopt values from other ranges. In general, the synthesizer control data could be capable of adopting more (or fewer) than 128 distinct values.
In the described embodiment, digital audio clips are always played from the beginning. In alternate embodiments, system 10 could have random-access playback of digital audio clips.
In the described embodiment, mapping process 130 and real-time mapping process 132 map nominal note event values to audio clips. However, in general, mapping process 130 and real-time mapping process 132 translate nominal note event values to any non-note data, when provided with an appropriate map. In other words, mapping process 130 and real-time mapping process 132 each enable MIDI to be used as a general-purpose time-coded communication protocol. The map replaces the traditional musical meanings of MIDI nominal note event values with non-note meanings.
In the described embodiment, MIDI mapping playback process 134 uses real-time mapping process 132 on the MIDI stream. In alternate embodiments, MIDI mapping playback process 134 could use mapping process 130 instead of real-time mapping process 132.
The described embodiment makes use of objects in the architecture of system logic 18. However, in alternate embodiments, the data and processes of the described objects could be included in code or logic that does not use objects per se but that performs comparable processing of comparable data.
A number of embodiments of the invention have been described. Nevertheless, it will be understood that various modifications can be made without departing from the spirit and scope of the invention. Accordingly, other embodiments are within the scope of the following claims.

Claims

WHAT IS CLAIMED IS:
1. A computer-readable medium having a data structure encoding an audio performance for interactive playback stored thereon, the data structure comprising: a virtual instrument pool that encodes an interactive part of the audio performance, wherein audio content of the interactive part is encoded at least in a sequence of synthesizer control data, each datum in the synthesizer control data specifying a digital sample of the audio content to be played back; and a global accompaniment pool that encodes non-interactive portions of the audio performance, including timing information to synchronize the playback of the audio performance.
2. The medium of claim 1, wherein the synthesizer control data is MIDI data.
3. The medium of claim 1, wherein the digital sample is an MP3 clip.
4. The medium of claim 1, wherein the global accompaniment pool encodes a non- interactive part of the audio content of the audio performance.
5. The medium of claim 1, wherein the global accompaniment pool includes a collection of sound fonts, each such sound font providing parameters for synthesizing the playback of an interactive part.
6. A computer-readable medium having a data structure encoding an audio performance for interactive playback stored thereon, the data structure comprising: a global accompaniment pool that encodes a non-interactive part of the audio performance, wherein a portion of the non-interactive part is encoded as synthesizer control data, and another portion of the non-interactive part is encoded as digital samples of the audio performance; and a virtual instrument pool that encodes an interactive part of the audio performance, the interactive part having audio content encoded at least in synthesizer control data, each datum in the synthesizer control data specifying one or more musical notes to be synthesized or specifying a digital sample of the audio content to be played back.
7. The medium of claim 6, wherein the synthesizer control data is MIDI data.
8. The medium of claim 6, wherein the digital samples are MP3 clips.
9. The medium of claim 6, wherein the virtual instrument pool includes cue data that specifies prompts coordinated with the audio content of the interactive part.
10. Code stored on a computer readable medium, said code for running on a computer in an entertainment system that includes an audio output subsystem, an input device, and a memory storing a musical performance data structure having an interactive portion of a musical performance and an accompanying, non-interactive portion of the musical performance, said code comprising: a virtual manager object which causes the computer to read the musical performance data structure stored in the memory and generate a virtual object representing a virtual instrument identified in said performance data structure, wherein said virtual object causes said computer to map user input from the input device to the interactive portion of the musical performance and play the mapped interactive portion of the musical performance through the audio output subsystem; and a global accompaniment object which causes the computer to play the accompanying non-interactive portion of the musical performance through the audio output system.
11. The code of claim 10 wherein the global accompaniment object also comprises logic which when executed on the computer causes said computer to provide a master timing signal for the virtual object.
12. The code of claim 10 wherein the entertainment system includes a plurality of input devices one of which is the first-mentioned input device, wherein the stored musical performance data structure identifies a plurality of different virtual instruments each representing a different musical instrument and wherein the virtual manager object causes the computer to generate a plurality of virtual objects, each of which represents a different corresponding one of the identified plurality of instruments, said plurality of virtual objects including the first-mentioned virtual object, wherein each of said plurality of virtual objects causes said computer to map user input from a corresponding one of the input devices to a corresponding part of the interactive portion of the musical performance and play the mapped corresponding part of the interactive portion of the musical performance through the audio output subsystem.
13. The code of claim 10 wherein the entertainment system includes a video display subsystem and the stored musical performance data structure includes a stored sequence of timing cues associated with the interactive portion of the musical performance and wherein said virtual object also comprises logic which causes the computer to display a visual representation of the timing cues through the video display system to aid the user in playing the virtual instrument.
14. The code of claim 10 wherein the stored musical performance data structure includes a plurality of digital clips each representing a different part of the non-interactive portion of the musical performance and a sequence of trigger points, each of said trigger points presenting timing information and identifying which one of said digital clips is to be played at times identified in the timing information, wherein the global accompaniment object comprises logic which causes the entertainment system to play through the audio output subsystem the identified one of the plurality of digital clips at the appropriate time as identified by the stored sequence of trigger points.
15. The code of claim 10 wherein the audio output subsystem includes a synthesizer and the stored musical performance data structure includes sound fonts and wherein the accompaniment object further comprises logic that causes the computer to retrieve the sound fonts from the stored musical performance data structure and load them into the synthesizer to control the character of the audio output subsystem.
PCT/US2002/010976 2001-04-09 2002-04-09 Storing multipart audio performance with interactive playback WO2002082420A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2002580306A JP4267925B2 (en) 2001-04-09 2002-04-09 Medium for storing multipart audio performances by interactive playback

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US60/282,549 2001-04-09
US60/282,420 2001-04-09
US60/288,730 2001-05-04
US60/288,876 2001-05-04
US09/900,287 2001-07-06
US09/900,289 US20020144588A1 (en) 2001-04-09 2001-07-06 Multimedia data file
US09/900,289 2001-07-06
US09/900,287 US20020144587A1 (en) 2001-04-09 2001-07-06 Virtual music system

Publications (1)

Publication Number Publication Date
WO2002082420A1 true WO2002082420A1 (en) 2002-10-17

Family

ID=27129259

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/010976 WO2002082420A1 (en) 2001-04-09 2002-04-09 Storing multipart audio performance with interactive playback

Country Status (1)

Country Link
WO (1) WO2002082420A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5908997A (en) * 1996-06-24 1999-06-01 Van Koevering Company Electronic music instrument system with musical keyboard
US5952599A (en) * 1996-12-19 1999-09-14 Interval Research Corporation Interactive music generation system making use of global feature control by non-musicians
US6018118A (en) * 1998-04-07 2000-01-25 Interval Research Corporation System and method for controlling a music synthesizer
US6093880A (en) * 1998-05-26 2000-07-25 Oz Interactive, Inc. System for prioritizing audio for a virtual environment

Similar Documents

Publication Publication Date Title
US6924425B2 (en) Method and apparatus for storing a multipart audio performance with interactive playback
CA2239684C (en) Method and apparatus for interactively creating new arrangements for musical compositions
US10056062B2 (en) Systems and methods for the creation and playback of animated, interpretive, musical notation and audio synchronized with the recorded performance of an original artist
US6093880A (en) System for prioritizing audio for a virtual environment
Goto Development of the RWC music database
US6970822B2 (en) Accessing audio processing components in an audio generation system
EP1116214B1 (en) Method and system for composing electronic music and generating graphical information
US6975995B2 (en) Network based music playing/song accompanying service system and method
US7305273B2 (en) Audio generation system manager
US20100216549A1 (en) System and method for network communication of music data
US20020144587A1 (en) Virtual music system
US20020144588A1 (en) Multimedia data file
WO2002082420A1 (en) Storing multipart audio performance with interactive playback
Oliver In dub conference: empathy, groove and technology in Jamaican popular music
Tomczak On the development of an interface framework in chipmusic: theoretical context, case studies and creative outcomes
JP3956504B2 (en) Karaoke equipment
Gibson The AudioPro home recording course: a comprehensive multimedia audio recording text
Huber Midi
Dvorin Apple Pro Training Series: Logic Pro 8: Beyond the Basics
Michael Hawley The Personal Orchestra, or Audio Data Compression
KR20020027683A (en) Method for Providing Various Music Information for One Song on Internet

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2002580306

Country of ref document: JP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase