WO1999038152A1 - Phrase and rhythm engines for music generation - Google Patents

Phrase and rhythm engines for music generation

Info

Publication number: WO1999038152A1
Authority: WIPO (PCT)
Prior art keywords: note, musical, rhythm, phrase, signals
Application number: PCT/US1999/000569
Other languages: French (fr)
Inventor: Jimmy C. Hotz
Original Assignee: The Hotz Corporation
Application filed by The Hotz Corporation

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/0033 Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H1/36 Accompaniment arrangements
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/341 Rhythm pattern selection, synthesis or composition
    • G10H2210/361 Selection among a set of pre-established rhythm patterns
    • G10H2240/00 Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/171 Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H2240/281 Protocol or standard connector for transmission of analog or digital data to or from an electrophonic musical instrument
    • G10H2240/315 Firewire, i.e. transmission according to IEEE 1394

Abstract

A rhythm engine (18) for an electronic musical instrument provides a plurality of rhythm structure tables (30) selectable by a user through a rhythm table selector (34). Each rhythm table (30) corresponds to a particular rhythmic beat or pattern and defines a series of trigger events in time and magnitude (velocity). A gesture controller (12) generates a series of musical note signals, each of which includes a note-on signal and a note-off signal. These musical note signals are then input to the rhythm engine (18), processed along with the selected rhythm structure table (30), and output as processed musical note trigger signals at timing intervals dictated by the selected rhythm structure table (30) and with corresponding velocities also dictated by the rhythm structure table (30). The rhythm structure table selection may be changed as a user plays, either by the user or automatically as dictated by a prerecorded musical piece being played along with by the user. Processed musical note trigger signals may then be applied to other conventional components of a digital music system. A phrase engine (16) is also provided; it supplies a plurality of phrase structure tables (40) and operates similarly to the rhythm engine (18) except that phrases differ from rhythms in that phrases contain note values as well as duration and velocity information and may contain polyphonic information.

Description

SPECIFICATION
TITLE OF THE INVENTION
PHRASE AND RHYTHM ENGINES FOR MUSIC GENERATION
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to electronic musical instruments and music and musical information generators. More particularly, the present invention relates to a versatile user-programmable musical instrument with the capability of programmably manipulating the timing of the execution of musical event instructions in real time.
2. The Background Art
Electronic keyboard and other electronic musical instruments are known in the prior art. Also known are electronic musical keyboard instruments which generate tone and velocity information compatible with the MIDI (Musical Instrument Digital Interface) standard which has come into wide usage in recent years.
Electronic musical instruments which provide for an automatic accompaniment to be generated by the instrument in response to a performer playing the instrument are also known in the art. Examples of such instruments are found in Hall et al. U.S. Pat. Nos. 4,433,601, 4,508,002, and 4,682,526. Typically, electronic musical instruments include some sort of gesture interface which allows them to be "played". The gesture controller takes movement or some other kind of change, such as the pressing of a key of a piano keyboard, and generates "NOTE-ON" and "NOTE-OFF" signals. In an electronic implementation of a piano keyboard, NOTE-ON is generated when a key is depressed and NOTE-OFF is generated when the key is released. The data stream from this gesture controller would then include NOTE-ON, NOTE-ID, some passage of time, and then NOTE-OFF, NOTE-ID, where NOTE-ID is an identification of which note has been acted on, such as middle-C or (for example) MIDI note 60, or some other representation, depending upon the system used.
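By way of illustration only (this sketch is an editorial addition, not part of the original disclosure, and its field names are hypothetical), the gesture-controller data stream described above can be modeled as a time-stamped event sequence:

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """One element of a gesture-controller data stream (hypothetical model)."""
    kind: str     # "NOTE-ON" or "NOTE-OFF"
    note_id: int  # which note was acted on, e.g. MIDI note 60 for middle C
    time: float   # seconds since the performance began

# Pressing middle C and releasing it half a second later yields:
stream = [
    NoteEvent("NOTE-ON", 60, 0.00),
    NoteEvent("NOTE-OFF", 60, 0.50),
]
```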
Recently, electronic musical instruments have become available (as have computer programs for causing computers to generate signals for controlling music generating hardware — such systems are included in the term "electronic musical instruments" as used herein) which allow a number of input devices associated with such an instrument (such as keys) to be played along with a recording of a musical piece (such as on a compact disk (CD) or CD-ROM in a CD-ROM drive, a ".WAV" file in a computer, a MIDI file, and similar devices). A digital file "played" along with the recording has stored in it chord and scale information which causes the input devices to in turn cause the generation of musical signals which are within the chord and scale defined by the digital file. In this way, even an unskilled musician can only play notes which are in harmony with the recording being played. Such instruments are disclosed at, for example, U.S. Patent Nos. 5,099,738, 5,502,274 and 5,619,003. Software for accomplishing the same effects on a personal computer is presently commercially available on the World Wide Web at URL http://www.hotz.com/ and from The Hotz Corporation of Agoura Hills, California. Such systems are referred to as harmonic translators.
For example, even when using a harmonic translator, performing certain rhythmic manipulations requires a certain amount of physical skill and dexterity that would have to be acquired by the user. In some cases, desired rhythmic manipulations might be outside the skill set available even to highly skilled musicians due to the speed and/or complexity of the desired manipulation.
Even users of harmonic translators, while playing along with prerecorded musical tracks (as, for example, on compact disks and the like), may not have a completely fulfilling experience due to the fact that such users may lack the basic rhythmic skills to successfully play along at the level of complexity that such users desire.
More advanced users may desire to create rhythmic patterns well outside the envelope of human experience and utilize programmable electronic systems to provide the speed and/or dexterity that they lack.
Still other users may desire to duplicate or replicate a rhythmic or phrasing performance by themselves, or by another performer, so as to have repeatable access to an expert performance at the mere touch of a few buttons.
While harmonic translators tremendously enhance the capabilities of both skilled and unskilled musicians to obtain harmonious sounds from their instruments, both in real time performances and in play-along situations, additional assistance is desirable to use the capabilities of the computer to provide assistance to the performer in timing or rhythm as well as in executing complex phrasing.
SUMMARY OF THE INVENTION
In a first aspect of the invention, a rhythm engine provides a plurality of rhythm structure tables which are selectable by a user through a rhythm table selector. Each rhythm table corresponds to a particular rhythmic beat or pattern and defines a series of trigger events in time and magnitude (velocity) which may be output to control a downstream instrument. A gesture controller played by the user generates a series of musical note signals for input to the rhythm generator, each of which includes a note-on signal and a note-off signal. These musical note signals are then input to the rhythm engine, processed along with the selected rhythm structure table, and output as processed musical note trigger signals for downstream use at timing intervals dictated by the selected rhythm structure table and with the corresponding velocities also dictated by the rhythm structure table. The rhythm structure table selection may be changed as a user plays, either automatically as dictated by a prerecorded musical piece being played along with by the user, or at the choice of the user. Processed musical note trigger signals may then be applied to other conventional components of a digital music system, such as to a harmonic translator optionally operating in conjunction with prerecorded music and prerecorded musical information, to a conventional sequencer, and to conventional sound generation equipment.
In a second aspect of the invention, a phrase engine provides a plurality of phrase structure tables and operates similarly to the rhythm engine above except that phrases differ from rhythms in that phrases contain note values as well as duration and velocity information and may contain polyphonic information, e.g., a complex performance. In accordance with the present invention, a phrase may be substituted for a selected musical note signal (or signals) received from the gesture controller. Phrase engines and rhythm engines may also be cascaded by providing outputs of the phrase engine as inputs to a rhythm engine for cascaded processing. In this way the output of the phrase engine will conform to the rhythmic pattern imposed by the rhythm engine.
OBJECTS AND ADVANTAGES OF THE INVENTION
Accordingly, it is an object and advantage of the present invention to provide a phrase engine for use with electronic musical equipment.
It is a further object and advantage of the present invention to provide a means for representing complex phrasing and gesturing with a relatively less complicated "shorthand" gesture, the shorthand gesture used to activate a phrase engine, and the phrase engine then playing the complex phrasing and gesturing from memory.
It is a further object and advantage of the present invention to provide a rhythm engine for use with electronic musical equipment.
Yet a further object and advantage of the present invention is to enable extensive rhythmic manipulation of musical instruments such as electronic musical instruments.
Another object and advantage of the present invention is to enable phrase and rhythm information to be stored and recalled for playback of such phrase and rhythm information in conjunction with otherwise selected musical note information on demand.
Another object and advantage of the present invention is to enable expert-type phrase and rhythmic manipulations of an electronic instrument based upon pre-recorded information selected by a user.
These and many other objects and advantages of the present invention will become apparent to those of ordinary skill in the art from a consideration of the drawings and ensuing description of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of an electronic musical sound generation system in accordance with a presently preferred embodiment of the present invention.
FIG. 2 is a block diagram of the rhythm engine processing portion of an electronic musical instrument in accordance with a presently preferred embodiment of the present invention.
FIG. 3 is a block diagram of the phrase engine processing portion of an electronic musical instrument in accordance with a presently preferred embodiment of the present invention.
FIGS. 4, 5, 6, 7 and 8 are diagrams of rhythm engine configuration screens in accordance with a presently preferred embodiment of the present invention.
FIGS. 9 and 10 are diagrams of a phrase engine configuration screen in accordance with a presently preferred embodiment of the present invention.
FIGS. 11, 12, 13 and 14 illustrate detail of the signalling used in accordance with a presently preferred embodiment of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Those of ordinary skill in the art will realize that the following description of the present invention is illustrative only and is not intended to be in any way limiting. Other embodiments of the invention will readily suggest themselves to such skilled persons from an examination of the within disclosure.
Turning now to FIG. 1, a system block diagram shows a complete electronic musical instrument system 10 in accordance with a presently preferred embodiment of the present invention.
In accordance with the present invention, a gesture controller 12 provides a manipulative interface between the system 10 and the user. Gesture controller 12 could be a conventional electronic instrument keyboard, a computer keyboard, an electronic guitar, or other conventional musical interface device capable of converting human (or non-human) manipulation into electronic musical instrument note information, such as MIDI information. Such note information typically includes a note identification signal, a note-on signal, a note-off signal, an initial note velocity signal representative of the initial striking velocity of the note (where appropriate), and optionally similar information well known to those of ordinary skill in the art. The invention is not to be limited to just the MIDI (Musical Instrument Digital Interface) standard currently in effect in the electronic musical industry, but can be used equally well with future standards such as IEEE 1394 and subsequent iterations thereof as well as with other standards not yet identified.
The gesture controller 12, therefore, generates musical note trigger signals which include note-on signals and note-off signals which are not processed as to time and appear on line 14. In the prior art, phrase engine 16 and rhythm engine 18 would not be present and these initial musical note trigger signals would pass directly to, for example, a MIDI synthesizer or a harmonic translator 20 for processing in accordance with the description included in U.S. Patent Nos. 5,099,738, 5,502,274, and 5,619,003, the full text of each of which is hereby incorporated herein by reference as if set forth fully herein. In such systems, the user-supplied initial musical note trigger signals are used to trigger an optional sequencer 22 and ultimately to trigger sound generation equipment 24 in a conventional manner.

Play-along capability is optionally provided through a prerecorded musical information source 26 which may include one or more actual storage systems providing a source of a sound recording as well as corresponding synchronized chord and scale information to cause the notes ultimately sounded by the sound generation equipment 24 to be within the defined chord and scale for the portion of the sound recording being played back at that time. Such chord and scale information is transmitted to the harmonic translator 20 so as to cause it to translate the notes so that the system 10 outputs only notes within the selected chord and scale. This procedure is discussed in detail in U.S. Patents 5,099,738, 5,502,274 and 5,619,003.
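The harmonic translation itself is defined in the incorporated patents; the following is only a loose editorial sketch of the general idea of constraining output to a chord and scale, not the patented method:

```python
# Editorial sketch only: snap each requested note into the currently
# selected scale, so that whatever the player presses stays in harmony.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the active scale

def constrain_to_scale(note_id: int, scale=C_MAJOR) -> int:
    """Return the nearest in-scale note at or below the requested note."""
    octave, pitch_class = divmod(note_id, 12)
    allowed = [pc for pc in scale if pc <= pitch_class]
    if allowed:
        return octave * 12 + max(allowed)
    return (octave - 1) * 12 + max(scale)  # fall to the octave below

print(constrain_to_scale(61))  # C#4 (61) maps to C4 (60) in C major
```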
In accordance with a presently preferred embodiment of the present invention, a phrase engine 16 and/or a rhythm engine 18 are provided to, in essence, process the initial musical note trigger signals coming from the gesture controller 12 on line 14 into rhythm-processed and/or phrase-processed musical note trigger signals which can in turn be applied to conventional sequencers 22 and/or sound generation equipment 24 (e.g., synthesizers, etc.) so that the initial musical note trigger signals become time-constrained to the selected rhythm pattern applied by the rhythm engine 18 and/or the selected phrase is inserted by phrase engine 16.
Turning now to FIG. 2, rhythm engine 18 operates as follows: Rhythm engine 18 can be optionally programmed to act only on certain channels and/or notes as in block 26. While the invention will be discussed with respect to the MIDI standard interface, those of ordinary skill in the art will recognize that it can be used to equal advantage with other interface standards to be developed in the future. MIDI data input from the gesture controller 12 (or from phrase engine 16) is applied to rhythm engine 18 and rhythm-processed musical note trigger signals are output on line 28 for further processing or application to sequencers 22 and/or sound generation equipment 24 as discussed above. A rhythm structure table memory 30 stores rhythm templates as exemplified in FIGS. 4, 5, 6, 7 and 8.
A rhythm table selector 34 selects the one of the rhythm tables stored in rhythm structure table memory 30 to be used at any given time. It is also preferably possible to select no rhythm table (i.e., the null set), in which case no rhythm template would be applied to the signal and no rhythm processing would occur. The rhythm table selector 34 can thus choose among no rhythm table selection and any one of the stored rhythm tables. The rhythm table selector can be a data track stored in synchronicity with a pre-recorded sound recording, a data track stored on a compact disk, data embedded in digital signals stored on a compact disk which may be extracted to provide a phrase and/or rhythm template selection signal for application in choosing the phrase and/or rhythm table to apply at any given time, or any other suitable mechanism, such as a knob, switch, computer program, or other input device responsive to real time human (or non-human) control.

Turning now to FIGS. 4 - 8, sample rhythm templates or tables are shown. FIG. 4 shows a snapshot of a software control window which would control a software program running on a computer used for purposes of implementing the rhythm engine. This particular example is implemented on the Windows 95 operating system operating on an Intel-based microprocessor in a conventional manner; however, virtually any system and computer could be used for the implementation of this system, including ASICs and the like, as would be known to those of ordinary skill in the art.
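As an editorial illustration of what such a stored template might look like in software (the encoding below is an assumption, not taken from the patent), a rhythm structure table can be held as a list of trigger events, and the selector simply returns one table or none:

```python
# Hypothetical encoding: each trigger event is (onset_in_beats,
# duration_in_beats, velocity 0-127).
SIXTEENTHS = [(i * 0.25, 0.25, 96) for i in range(16)]  # a FIG. 4-style 1/16 grid at velocity 96
STAIR_CLIMB = [(i / 128, 1 / 128, min(127, 8 * (i + 1))) for i in range(16)]  # a FIG. 6-style ramp

rhythm_tables = {"sixteenths": SIXTEENTHS, "stair_climb": STAIR_CLIMB}

def select_rhythm_table(name):
    """Return the selected rhythm table, or None for 'no rhythm processing'."""
    return rhythm_tables.get(name)
```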
In the rhythm template of FIG. 4, a rhythm having a 1/16 note beat and a constant velocity factor of 96 out of a total possible of 127 is applied. The FIG. 4 embodiment shows a total of 4 beats as mapped toward the bottom of the figure and labelled 1, 2, 3, 4. Thus any initial musical note trigger signals received would be forced to conform to this beat. That is, if a note-on signal for a particular note is received, the rhythm template will cause the note to be sounded in conformance with the rhythm table — if the note is not released (i.e., no corresponding note-off signal is received) before the next beat would be sounded, then the note is retriggered in accordance with the rhythm table until a note-off signal is received from the gesture controller. If no signal was received between a note-off and the next beat, no beat would be sounded, i.e., no rhythm-processed musical note control signal would issue if no corresponding input was received by the time that particular element of the rhythm pattern was to be played.
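A minimal editorial sketch of the retrigger behavior just described, using the hypothetical table encoding above: every grid element that falls while the input note is held produces an output trigger, and releasing the note stops further triggers.

```python
def apply_rhythm(table, note_id, note_on_beat, note_off_beat, bar_len=4.0):
    """Retrigger a held note on the rhythm grid between note-on and note-off.

    Returns (start_beat, duration, velocity, note_id) for each table element
    whose onset falls while the note is held; once the note-off arrives,
    no further elements sound.
    """
    out = []
    bar = 0
    while bar * bar_len < note_off_beat:
        for onset, duration, velocity in table:
            t = bar * bar_len + onset
            if note_on_beat <= t < note_off_beat:
                out.append((t, duration, velocity, note_id))
        bar += 1
    return out

# A note held from beat 0.1 to beat 1.2 against the 1/16 grid is
# retriggered on beats 0.25, 0.5, 0.75 and 1.0:
print(apply_rhythm(SIXTEENTHS, 60, 0.1, 1.2))
```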
In the rhythm template of FIG. 5, a more complex rhythm pattern is shown and again, 4 beats are shown, each different.
In the rhythm template of FIG. 6, a single beat having a number of 1/128 beat elements effects a "stair climb" rhythm which is probably beyond the consistent ability of most if not all artists to create without the aid of an electronic or computer-based device. This example shows one of the types of rhythmic enhancements that the invention is capable of bringing to music as a real time performance tool.
In the rhythm template of FIG. 7, an extremely complex rhythmic pattern extending over 16 beats is shown. The rhythm engine can accurately and consistently reproduce this rhythmic pattern over and over again where no human operator could achieve the timing accuracy without the assistance of electronic and/or computer-based equipment. In the rhythm template of FIG. 8, another complex rhythmic pattern extending over 8 beats is shown.
As can be seen, the rhythmic pattern enabled by this technology can be made to extend over any length of repetition from 1 beat to as many as one wishes. The creation and editing of such rhythm templates is enhanced with a visual Windows-based program running on a computer which can display graphically the velocity of each rhythmic element along with the timing as shown in FIGS. 4 - 8. Those of ordinary skill in the art are well aware of how to implement such computer programs to provide this editing capability to users of the rhythm engine.
The rhythm engine 18 is designed to come after the phrase engine 16 in the system diagram of FIG. 1 (if such a phrase engine is present) because the rhythm is supposed to override the output of the phrase engine as described in more detail below.
A duration override controller 32 allows any other input controller, such as, preferably, a pitch bend controller with a center detent, to be used to adjust in real time the duration values from the rhythm structure table. In this way, a positive deflection of the pitch bend controller would result in an increase in the duration of the notes played by the rhythm structure table and a negative deflection would cause a decrease in the duration of those notes. This feature allows a user additional creative input over the parameters of the rhythm structure table elements. A velocity override controller 31 would behave in a similar fashion to duration override controller 32 in allowing the velocity values of the notes stored in the rhythm structure table to be increased or decreased at will by the user in real time. Similarly, a note start override controller 33 would perform the function of allowing the timing of the note start to be adjusted positively or negatively with respect to the timing of the note start defined by the rhythm structure table. In this way, the notes could be delayed or advanced at the will of the user by simple operation of the override controller. While a pitch bend controller with a center detent has been suggested here as a preferred embodiment for these override controllers, any of a number of standard input devices could serve the same function, as would be known to those of ordinary skill in the art. Similarly, those of ordinary skill in the art would realize that such override controllers could be scaled either to absolute values or to relative values or percentage changes in values, so that the changes in value could be over any chosen range selected by the user. Note that override controllers 31, 32 and 33 may also be used with phrase engine 16 as well as rhythm engine 18, as shown in FIG. 3.

The purpose of the phrase engine 16 is to take unprocessed initial musical note trigger signals and use them to trigger one-shot or repetitious application of a pre-programmed phrase (comprising notes, note durations and note velocities) of any duration. As with the rhythm engine, provision is made at block 36 to select which channels and/or notes to activate the phrase engine in response to. A phrase table selector 38, similar in concept to the rhythm table selector 34 discussed above, is provided to enable selection of one of a number of phrase tables (each storing a phrase definition) from a phrase structure table memory 40; thus, phrase table selector 38 selects the one of the phrase tables stored in phrase structure table memory 40 to be used at any given time. It is also possible to select no phrase table (i.e., the null set), in which case no phrase template would be applied to the signal and no phrase processing would occur. The phrase table selector 38 can thus choose among no phrase table selection and any one of the stored phrase tables. The phrase table selector can be a data track stored in synchronicity with a pre-recorded sound recording, a data track stored on a compact disk, data embedded in digital signals stored on a compact disk which may be extracted to provide a phrase and/or rhythm template selection signal for application in choosing the phrase and/or rhythm table to apply at any given time, or any other suitable mechanism, such as a knob, switch, computer program, or other input device responsive to real time human (or non-human) control.
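For the override controllers 31, 32 and 33 described above, one plausible realization (an editorial assumption; the patent does not specify a mapping) converts a center-detented pitch bend value into a relative scale factor applied to the table's duration, velocity, or note-start values:

```python
PITCH_BEND_CENTER = 8192  # 14-bit MIDI pitch bend; the center detent value

def override_factor(bend_value: int, max_change: float = 0.5) -> float:
    """Map a center-detented controller to a relative scale factor.

    At the detent the factor is 1.0 (no change); full positive deflection
    scales the table value up by max_change, full negative scales it down.
    """
    deflection = (bend_value - PITCH_BEND_CENTER) / PITCH_BEND_CENTER
    return 1.0 + max_change * deflection

# Positive deflection lengthens the table's note durations...
duration = 0.25 * override_factor(12288)                 # -> 0.3125 beats
# ...and negative deflection softens its velocities.
velocity = min(127, round(96 * override_factor(4096)))   # -> 72
```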
Turning now to FIG. 9, an example of a phrase template or phrase table which might be stored in and selectable from phrase structure table memory 40 is shown as it might be displayed in a similar Windows-based table editing program as discussed above with respect to rhythm table editors. The phrase template of FIG. 9 shows a complex phrase comprised of note identifications, note velocity parameters and note duration information. FIG. 9 is a Windows-type editing window for a phrase template 42. Area 44 includes identification information. Below area 44 is area 48, which is a note identification, note duration and note timing array in which notes are represented by horizontal bars such as bar 49 (identified by the "keyboard" 50 to the left, which shows note 49 to be C#4). Area 48 also shows when notes are to be played, when the playing of the notes is to begin (e.g., start time 52), the duration of the notes (e.g., length 54), and the note stop times (e.g., note stop time 56). Below area 48 is a velocity representation area 58 in which initial velocities of notes are shown. For example, the initial velocity value of note 62 is shown by vertical bar 60 which can be read on velocity scale 64 to the left.
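Continuing the editorial sketch, a phrase structure table differs from a rhythm table in carrying note identifications as well, so overlapping entries make the phrase polyphonic (the specific notes below are invented for illustration, not read from FIG. 9):

```python
# Hypothetical phrase structure table: (note_id, start_beat,
# duration_beats, velocity). Unlike a rhythm table, each entry names a
# note, and entries may overlap in time, i.e. the phrase may be polyphonic.
PHRASE_EXAMPLE = [
    (61, 0.0, 0.5, 100),  # C#4, as with bar 49 in FIG. 9
    (64, 0.5, 0.5, 90),   # E4
    (68, 1.0, 1.0, 110),  # G#4
    (61, 1.0, 1.0, 80),   # C#4 again, sounding with the G#4: polyphony
]
```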
Turning now to FIG. 10, the material of FIG. 9 is shown augmented by area 46 disposed between areas 44 and 48. Area 46 shows a rhythm template imposed over the phrase template defined by areas 48 and 58. The rhythm template of area 46 would cause notes being sustained during transitions in rhythm template 46 to be re-triggered with velocities defined by rhythm template 46 at times defined by rhythm template 46. For example, note 49 is a C#4 which begins at time 66 with an initial velocity 67 and ends at time 68. Referring to the portion of area 46 displayed vertically above note 49, this note will be retriggered approximately 16 times at various velocity levels between time 66 and time 68 as rhythm template 46 is imposed over the phrase template.
Turning now to FIGS. 11, 12, 13 and 14, the signal processing of the present invention is now described in more detail. FIG. 11 shows the basic flow of signals from gesture controller 12 to phrase engine 16 to rhythm engine 18 and finally to such downstream equipment 70 as may be employed. Initial musical note events 72 are passed from the gesture controller 12 to phrase engine 16 (if present) over line 14. The nature of initial musical note events 72 is shown in more detail in FIG. 12. Here it can be seen that initial musical note events 72 include note-on signals 74, note identification signals 76, velocity information 77, note-off signals 80 and note duration information 78 determined by the time difference between note-on signals 74 and corresponding note-off signals 80. Substituted phrases 82 are passed from phrase engine 16 (if present and active) to rhythm engine 18. As discussed before, substituted phrases output by the phrase engine 16 include pre-programmed phrases triggered by the occurrence of selected inputs received from the gesture controller. Phrases are diagrammed in FIG. 13 and include note-on signals 84, note identification signals 86, initial note velocity signals 88, note-off signals 92 and implicit note durations 90 determined as described above. Finally, rhythm-processed output signals 94 are output by the rhythm engine 18 for use by downstream equipment. These signals are diagrammed in FIG. 14. They include note-on signals 96, note identification signals 98, initial note velocity signals 100, note-off signals 104 and implicit note durations 102 determined as described above.
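Tying the editorial sketches together, the cascade of FIG. 11 can be expressed as a small pipeline (reusing the hypothetical apply_rhythm, PHRASE_EXAMPLE and SIXTEENTHS defined above): a single gesture event is expanded by the phrase engine, and the rhythm engine then re-times every substituted note.

```python
def perform(event, phrase_table, rhythm_table):
    """Cascade sketch: gesture event -> phrase engine -> rhythm engine."""
    note_id, on_beat, off_beat = event
    # Phrase substitution: shift the stored phrase to start at the note-on.
    if phrase_table:
        notes = [(nid, on_beat + start, dur, vel)
                 for nid, start, dur, vel in phrase_table]
    else:  # no phrase table selected: pass the gesture note through
        notes = [(note_id, on_beat, off_beat - on_beat, 96)]
    # Rhythm imposition: retrigger each note on the selected grid.
    output = []
    for nid, start, dur, vel in notes:
        if rhythm_table:
            output.extend(apply_rhythm(rhythm_table, nid, start, start + dur))
        else:
            output.append((start, dur, vel, nid))
    return sorted(output)

# One key press at beat 0, held for two beats, triggers the whole phrase,
# each phrase note quantized and retriggered by the 1/16 rhythm grid:
print(perform((60, 0.0, 2.0), PHRASE_EXAMPLE, SIXTEENTHS))
```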
Alternative Embodiments
Although illustrative presently preferred embodiments and applications of this invention are shown and described herein, many variations and modifications are possible which remain within the concept, scope, and spirit of the invention, and these variations would become clear to those of skill in the art after perusal of this application. The invention, therefore, is not to be limited except in the spirit of the appended claims. For example, a system could implement multiple instances of the phrase and/or rhythm engine running simultaneously so that independent gestures (or gestures coming in on different input channels or from different sources) may be processed through different phrase and/or rhythm engines simultaneously. In another modification of the invention, additional algorithmic control of the phrase and/or rhythm engines is provided by "algorithm controllers" 106, 108 (FIGS. 2 and 3) which permit real time modification of the phrase and/or rhythm engine functionality. For example, one algorithm could allow the original "attack" or note-on time, note value and note velocity to be added directly to the manipulated data output in order to allow certain nuances of the human performance to pass through the system un-processed, while another algorithm could allow only the manipulated data to be output with no provision for nuance pass-through. A vast number of such possible "algorithms" permitting real time modification of the operation of phrase and/or rhythm engines could be imagined and easily implemented by those of ordinary skill in the art.

Claims

CLAIMS
What is claimed is:
1. A musical timing manipulation system, comprising:
a gesture controller for providing a series of initial musical note event signals, each of said initial musical note event signals including an initial note identification signal;
a memory;
a plurality of phrase structure tables stored in said memory, each of said phrase structure tables defining at least one substitute phrase to be substituted for a selected initial musical note event signal, each substitute phrase including note-on signals, note identification signals, initial note velocity signals, and note-off signals displaced forward in time from said corresponding note-on signals;
a phrase selector for selecting one or none of said phrase structure tables at any given time; and
a processor for outputting said substitute phrase in response to receipt of said selected initial musical note event signal.
2. A musical timing manipulation system according to claim 1, further comprising:
a musical sound generator responsive to said processor.
3. A musical timing manipulation system, comprising:
a gesture controller for providing a series of initial musical note event signals, each of said initial musical note event signals including an initial note identification, a note-on signal and a note-off signal displaced forward in time from said note-on signal;
a memory;
a plurality of rhythm structure tables stored in said memory, each of said rhythm structure tables defining a rhythmic pattern including at least one note-on signal, at least one corresponding initial note velocity signal, and at least one corresponding note-off signal;
a rhythm selector for selecting one or none of said rhythm structure tables at any given time; and
a processor for outputting a series of signals responsive to said gesture controller, said signals having note-on components synchronized with said selected rhythm structure table and having initial note velocities defined by said selected rhythm structure table.
4. A musical timing manipulation system according to claim 3, further comprising:
a musical sound generator responsive to said processor.
5. A musical timing manipulation system comprising:
a gesture controller for generating initial musical note event signals;
a first memory;
a second memory;
a plurality of phrase structure tables stored in said first memory;
a plurality of rhythm structure tables stored in said second memory;
a phrase engine including a phrase selector for selecting one of said phrase structure tables at any given time; and
a rhythm engine including a rhythm selector for selecting one of said rhythm structure tables at any given time.
6. A musical timing manipulation system according to claim 5, further comprising:
a musical sound generator responsive to signals output from said rhythm engine.
7. A method for processing a series of initial musical note events received from a gesture controller, each of said initial musical events including a note-on signal, a note identification signal and a note-off signal, said method comprising the steps of:
obtaining the initial musical note events from the gesture controller; and
using a rhythm structure table having a rhythmic pattern, said rhythmic pattern defining output timing and initial velocities of notes, to generate output note identification signals, output note-on signals and output note velocity signals on an output line in accordance with said rhythmic pattern in response to said initial musical note events and during an interval defined as beginning at each said note-on signal and ending at each corresponding said note-off signal of said initial musical note events.
8. A method according to claim 7, further comprising the step of selecting said rhythm structure table from a memory containing a plurality of different rhythm structure tables.
9. A method for processing a series of initial musical note events received from a gesture controller, each of said initial musical events including a note-on signal, a note identification signal and a note-off signal, said method comprising the steps of: obtaining the initial musical note events from the gesture controller; and repeatedly playing on an output line a phrase defined in a phrase structure table in response to a selected initial note-on signal and note identification signal during an interval defined as beginning at the receipt of said selected initial note-on signal and note identification signal and continuing until receipt of a corresponding note-off signal from said gesture controller, said phrase including output note identification signals, output note-on signals, output note-off signals and output note velocity signals.
10. A method according to claim 9, further comprising the step of selecting said phrase structure table from a memory containing a plurality of different phrase structure tables.
11. A musical timing manipulation system, comprising: a gesture controller for providing a series of initial musical note event signals, each of said initial musical note event signals including an initial note identification signal, a note-on signal and a note-off signal displaced forward in time from said note-on signal; a first memory; a plurality of phrase structure tables stored in said first memory, each of said phrase structure tables defining at least one substitute phrase to be substituted for a selected initial musical note event signal, each substitute phrase including note-on signals, note identification signals, initial note velocity signals and note-off signals displaced forward in time from said corresponding note-on signals; a phrase selector for selecting one or none of said phrase structure tables at any given time; a processor for outputting said substitute phrase in response to receipt of said selected initial musical note event signal; a second memory; a plurality of rhythm structure tables stored in said second memory, each of said rhythm structure tables defining a rhythmic pattern including at least one note-on signal, at least one corresponding initial note velocity signal, and at least one corresponding note-off signal; a rhythm selector for selecting one or none of said rhythm structure tables at any given time; and a processor for outputting a series of signals responsive to said gesture controller, said signals having note-on components synchronized with said selected rhythm structure table and having initial note velocities defined by said selected rhythm structure table.
12. A musical timing manipulation system according to claim 1, further comprising a duration override controller operatively connected to said processor.
13. A musical timing manipulation system according to claim 1, further comprising a note-start override controller operatively connected to said processor.
14. A musical timing manipulation system according to claim 1, further comprising a velocity override controller operatively connected to said processor.
15. A musical timing manipulation system according to claim 1, further comprising an algorithm controller operatively connected to said processor.
16. A musical timing manipulation system according to claim 3, further comprising a duration override controller operatively connected to said processor.
17. A musical timing manipulation system according to claim 3, further comprising a note-start override controller operatively connected to said processor.
18. A musical timing manipulation system according to claim 3, further comprising a velocity override controller operatively connected to said processor.
19. A musical timing manipulation system according to claim 3, further comprising an algorithm controller operatively connected to said processor.
20. A musical timing manipulation system according to claim 5, further comprising a duration override controller operatively connected to at least one of said phrase engine and said rhythm engine.
21. A musical timing manipulation system according to claim 5, further comprising a note-start override controller operatively connected to at least one of said phrase engine and said rhythm engine.
22. A musical timing manipulation system according to claim 5, further comprising a velocity override controller operatively connected to at least one of said phrase engine and said rhythm engine.
23. A musical timing manipulation system according to claim 5, further comprising an algorithm controller operatively connected to at least one of said phrase engine and said rhythm engine.
24. A musical timing manipulation system according to claim 11, further comprising a duration override controller operatively connected to said processor.
25. A musical timing manipulation system according to claim 11, further comprising a note-start override controller operatively connected to said processor.
26. A musical timing manipulation system according to claim 11, further comprising a velocity override controller operatively connected to said processor.
27. A musical timing manipulation system according to claim 11, further comprising an algorithm controller operatively connected to said processor.
28. A method according to claim 7, further comprising the step of: using a duration override controller to modify said rhythmic pattern in real time.
29. A method according to claim 7, further comprising the step of: using a note-start override controller to modify said rhythmic pattern in real time.
30. A method according to claim 7, further comprising the step of: using a velocity override controller to modify said rhythmic pattern in real time.
31. A method according to claim 7, further comprising the step of: using an algorithm controller to modify said rhythmic pattern in real time.
32. A method according to claim 9, further comprising the step of: using a duration override controller to modify said phrase in real time.
33. A method according to claim 9, further comprising the step of: using a note-start override controller to modify said phrase in real time.
34. A method according to claim 9, further comprising the step of: using a velocity override controller to modify said phrase in real time.
35. A method according to claim 9, further comprising the step of: using an algorithm controller to modify said phrase in real time.
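
The independent phrase-engine claims (1 and 9) describe a table-driven substitution: a stored phrase of time-displaced note events replaces a held trigger note and loops until that trigger's note-off arrives. The Python sketch below illustrates one way such a mechanism could behave; the PhraseEngine class, the PHRASE_TABLES layout, and the send_note_on/send_note_off callbacks are hypothetical names invented for this example, not the claimed implementation.

```python
# Illustrative phrase-engine sketch for claims 1 and 9. The table
# layout and every name here are assumptions made for this example.
import threading
import time

# A phrase structure table maps a trigger note to a substitute phrase:
# one (start_offset_sec, note, velocity, duration_sec) entry per output
# note, each note-off displaced forward in time by duration_sec.
PHRASE_TABLES = {
    "major_arpeggio": {
        60: [(0.00, 60, 100, 0.20),
             (0.25, 64, 90, 0.20),
             (0.50, 67, 80, 0.20)],
    },
}

class PhraseEngine:
    def __init__(self, send_note_on, send_note_off):
        self.send_note_on = send_note_on    # callable(note, velocity)
        self.send_note_off = send_note_off  # callable(note)
        self.table = None                   # phrase selector: one table or none
        self.held = set()                   # trigger notes currently held

    def select(self, name):
        """Phrase selector: pick one of the stored tables, or none."""
        self.table = PHRASE_TABLES.get(name)

    def note_on(self, note, velocity):
        self.held.add(note)
        if self.table and note in self.table:
            # Substitute the stored phrase for the initial note event.
            threading.Thread(target=self._loop, args=(note,), daemon=True).start()
        else:
            self.send_note_on(note, velocity)  # no phrase: pass through

    def note_off(self, note):
        self.held.discard(note)
        if not (self.table and note in self.table):
            self.send_note_off(note)

    def _loop(self, trigger):
        # Claim 9: repeat the phrase until the trigger's note-off arrives.
        phrase = (self.table or {}).get(trigger, [])
        while phrase and trigger in self.held:
            start = time.monotonic()
            for offset, note, velocity, duration in phrase:
                time.sleep(max(0.0, start + offset - time.monotonic()))
                if trigger not in self.held:
                    return
                self.send_note_on(note, velocity)
                time.sleep(duration)  # note-off displaced forward in time
                self.send_note_off(note)
```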
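
The rhythm-engine claims (3 and 7) invert the division of labor: the gesture controller supplies which notes sound, while the selected rhythm structure table supplies when each note starts, how hard it strikes, and when it stops. A minimal sketch under the same assumptions, with an invented (start_offset, velocity, duration) pattern format:

```python
# Illustrative rhythm-engine sketch for claims 3 and 7; the pattern
# format and function names are assumptions, not the claimed design.
import time

# A rhythm structure table: one (start_offset_sec, velocity,
# duration_sec) triple per step of the rhythmic pattern.
RHYTHM_TABLES = {
    "straight_eighths": [(0.00, 110, 0.10), (0.25, 70, 0.10),
                         (0.50, 90, 0.10), (0.75, 70, 0.10)],
}

def play_pattern(pattern, held_notes, send_note_on, send_note_off):
    """Play one pass of the pattern: every currently held note sounds at
    each step, with note-on timing and initial velocity taken from the
    table and note identity taken from the gesture controller."""
    start = time.monotonic()
    for offset, velocity, duration in pattern:
        time.sleep(max(0.0, start + offset - time.monotonic()))
        sounding = list(held_notes)       # snapshot of the held keys
        for note in sounding:
            send_note_on(note, velocity)  # velocity defined by the table
        time.sleep(duration)              # note-off displaced forward in time
        for note in sounding:
            send_note_off(note)
```

Per claim 7, a caller would run play_pattern for as long as any initial note-on remains without its corresponding note-off.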
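
Claims 5 and 6 chain the two engines in series, with a musical sound generator responsive to the rhythm engine's output. Wiring the two sketches above together might look like the following; this is purely illustrative and leans on the hypothetical names defined earlier.

```python
# Hypothetical series wiring per claims 5 and 6, reusing the sketches
# above: gesture -> phrase engine -> rhythm engine -> sound generator.
import threading
import time

held = set()  # the note set the rhythm engine treats as "held"

def sound_on(note, velocity):
    print(f"note-on  {note} vel {velocity}")  # stand-in sound generator

def sound_off(note):
    print(f"note-off {note}")

# The phrase engine writes into the rhythm engine's input set; the
# incoming velocity is dropped because the rhythm table will supply it.
engine = PhraseEngine(lambda n, v: held.add(n), lambda n: held.discard(n))
engine.select("major_arpeggio")

def rhythm_loop():
    while True:  # rhythm engine scans the set and drives the generator
        play_pattern(RHYTHM_TABLES["straight_eighths"], held, sound_on, sound_off)

threading.Thread(target=rhythm_loop, daemon=True).start()
engine.note_on(60, 100)  # one gesture yields a looping, rhythmized phrase
time.sleep(2.0)          # let a couple of passes sound before exiting
```

Note that the lambda deliberately discards the incoming velocity: per claim 3, the initial note velocities of the output are defined by the rhythm structure table, not by the gesture.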
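
The dependent claims (12 through 35) layer real-time override controllers onto either engine. One plausible reading, sketched with invented names, is a set of live controls that shift or rescale each stored event just before it is output; an algorithm controller would swap the transformation itself.

```python
# Hypothetical override-controller sketch for claims 12-35: live
# controls that modify stored pattern values before each event fires.
def apply_overrides(offset, velocity, duration,
                    start_shift=0.0, velocity_scale=1.0, duration_scale=1.0):
    """Apply note-start, velocity, and duration overrides to one stored
    event; either engine would call this just before output."""
    return (offset + start_shift,                                # note-start
            max(1, min(127, round(velocity * velocity_scale))),  # velocity
            duration * duration_scale)                           # duration
```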
PCT/US1999/000569 1998-01-26 1999-01-11 Phrase and rhythm engines for music generation WO1999038152A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US1335398A 1998-01-26 1998-01-26
US09/013,353 1998-01-26

Publications (1)

Publication Number Publication Date
WO1999038152A1 (en) 1999-07-29

Family

ID=21759524

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/000569 WO1999038152A1 (en) 1998-01-26 1999-01-11 Phrase and rhythm engines for music generation

Country Status (1)

Country Link
WO (1) WO1999038152A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4742748A (en) * 1985-12-31 1988-05-10 Casio Computer Co., Ltd. Electronic musical instrument adapted for sounding rhythm tones and melody-tones according to rhythm and melody play patterns stored in a timed relation to each other
US5182414A (en) * 1989-12-28 1993-01-26 Kabushiki Kaisha Kawai Gakki Seisakusho Motif playing apparatus
US5262584A (en) * 1991-08-09 1993-11-16 Kabushiki Kaisha Kawai Gakki Seisakusho Electronic musical instrument with record/playback of phrase tones assigned to specific keys
US5369218A (en) * 1991-10-14 1994-11-29 Kabushiki Kaisha Kawai Gakki Seisakusho External device phrase data input/output apparatus for an electronic musical instrument
EP0715295A1 (en) * 1994-11-29 1996-06-05 Yamaha Corporation Automatic playing apparatus substituting available pattern for absent pattern

Similar Documents

Publication Publication Date Title
Wessel et al. Problems and prospects for intimate musical control of computers
Winkler Composing interactive music: techniques and ideas using Max
US6011212A (en) Real-time music creation
US5763804A (en) Real-time music creation
USRE37654E1 (en) Gesture synthesizer for electronic sound device
Wanderley et al. Escher-modeling and performing composed instruments in real-time
Winkler Composing interactive music
CN112955948A (en) Musical instrument and method for real-time music generation
JPH11167341A (en) Musicplay training device, play training method and recording medium
Jehan Perceptual synthesis engine: an audio-driven timbre generator
Rubine et al. The videoharp: an optical scanning MIDI controller
JP3829780B2 (en) Performance method determining device and program
JPH09325773A (en) Tone color selecting device and tone color adjusting device
Simon et al. Audio analogies: Creating new music from an existing performance by concatenative synthesis
Dixon et al. The "Air Worm": An Interface for Real-Time Manipulation of Expressive Music Performance.
Wright et al. An improvisation environment for generating rhythmic structures based on North Indian "Tal" patterns
Rigopulos Growing music from seeds: parametric generation and control of seed-based music for interactive composition and performance
Jaffe et al. The computer-extended ensemble
WO1999038152A1 (en) Phrase and rhythm engines for music generation
Didkovsky Recent compositions and performance instruments realized in Java Music Specification Language
Dahlstedt Mapping strategies and sound engine design for an augmented hybrid piano.
Todoroff Control of digital audio effects
Menzies New performance instruments for electroacoustic music
Wright Problems and prospects for intimate and satisfying sensor-based control of computer sound
Salmi Using sample-based virtual instruments to produce orchestral strings in film music

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase