GB2209425A - Music sequencer - Google Patents


Publication number
GB2209425A
Authority
GB
United Kingdom
Prior art keywords
harmonic
musical
directive
data
sequence
Prior art date
Legal status
Withdrawn
Application number
GB8820543A
Other versions
GB8820543D0 (en)
Inventor
Paul Schlusser
Michael Carlos
Quentin Goldfinch
Current Assignee
Fairlight Instruments Pty Ltd
Original Assignee
Fairlight Instruments Pty Ltd
Priority date
Application filed by Fairlight Instruments Pty Ltd filed Critical Fairlight Instruments Pty Ltd
Publication of GB8820543D0 (en)
Publication of GB2209425A (en)

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments
    • G10H1/0008 - Associated control or indicating means
    • G10H1/0025 - Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
    • G10H1/0033 - Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H1/0041 - Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
    • G10H1/36 - Accompaniment arrangements
    • G10H1/38 - Chord
    • G10H2210/00 - Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/101 - Music composition or musical creation; tools or processes therefor
    • G10H2210/105 - Composing aid, e.g. for supporting creation, edition or modification of a piece of music
    • G10H2210/145 - Composing rules, e.g. harmonic or musical rules, for use in automatic composition; rule generation algorithms therefor
    • G10H2210/151 - Music composition or musical creation using templates, i.e. incomplete musical sections, as a basis for composing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

A series of data packets representing the pitches and temporal relationships of a series of musical events is input into a computer, resulting in a musical sequence file. One or more directives are inserted into the musical sequence file by an operator. On command from the operator, the computer reads the musical sequence file and sends control information to music generating means, either internal or external to the music sequencer, to produce a performance of the input music. The data packets are interpreted according to rules selected by the directives so as to cause the characteristics of the series of data packets originally input to be modified, or to cause certain packets to be omitted, or to cause additional packets to be generated. One application is to assist a composer in the musical composition and arrangement process. The directives may be of two types: harmonic directives and rhythmic directives. The harmonic directives may be used e.g. for chord generation and pitch transformation. The rhythmic directives may be used e.g. to produce timing perturbations (shuffle) of musical events.

Description

MUSIC SEQUENCER

The present invention relates to methods of, and electronic devices for, composing and performing musical works.
Music sequencers are electronic devices used for the recording, editing and replaying of sequences of musical events. Musical events are typically musical notes corresponding to keystrokes of a music keyboard, plucking of strings, striking of a drum, or other actions which result in a predictable musical note or sound being produced. For convenience, a piano-like keyboard fitted with suitable sensors is generally used as an input device to sequencers.
Sequencers typically incorporate computing hardware equipped with suitable software arranged so that the activation of keys on the keyboard can be recorded by a memory means. As well as data identifying which keys are depressed, the time interval between depressions is also recorded. At the conclusion of recording, the user can instruct the sequencer to recall the data, and using the timing information previously stored along with keystroke data, electrical signals analogous to the signals generated by the keyboard at the time of recording are generated as output. If this output is fed to suitable sound-generating means, for example a music synthesiser, an automatic performance of the music previously played on the keyboard can be achieved.
This arrangement is similar in principle to the well-known player piano.
A further object of the music sequencer is to allow the user to edit the recorded performance, for example to correct wrongly-played notes. This is very difficult using mechanical means such as the player piano, but can readily be achieved using a computer-based electronic sequencer. As well as allowing editing, such arrangements make it possible to input notes at the editing stage, that is, to create additional notes which were not part of the performance as recorded in realtime. It is further possible to enter complete musical phrases or indeed entire compositions in this non-realtime manner, so that even a musician lacking the physical skills required to play the keyboard can enter compositions, which the sequencer can then replay in realtime, simulating a realtime or "live" performance.
Many sequencers of varying capability are readily available. Those incorporating microprocessors and display means are capable of fast and accurate entry and editing of musical data and are therefore the most useful for the musician.
Whereas all prior art sequencers are capable only of recording, editing, and replaying musical events, the creation of music, from composition through to performance, involves many other processes which traditionally rely upon the performing musician's skills, such as the ability to elaborate upon simple structures in the written composition according to musical rules and the performer's personal interpretation. For example, a composer may start by writing down a melody line, and then add a series of chord symbols from which a skilled musician can derive the accompanying parts and add them to the score. Prior art music sequencers are deficient in that they do not substantially assist in these creative processes. In many ways, they are little more than a highly advanced form of tape recorder which records and replays musical events rather than actual sound.
A further deficiency of prior art music sequencers is that if a mistake is made while inputting a performance, for example, a minor chord is entered where the composition has been assigned a major key signature, prior art sequencers have no ability to bring such a mistake to the user's attention, nor to automatically correct it. Similarly, if the composer changes his or her mind about certain characteristics of the composition after entering and/or editing it, for example by deciding that a single melodic line would be more pleasing if accompanied by an eight-part harmony, the composer must perform a laborious and error-prone editing procedure to change the music using a prior art music sequencer.
A third deficiency of prior art music sequencers is their limited ability to imbue the composition with what is known to musicians as "feel". Although feel is a somewhat subjective concept, there are certain aspects of feel which are quite objectively defined. In particular, feel is determined by the way in which a performer deviates from the strict rhythm defined by the written notes. It is usual, and generally desirable, that a musician performing a particular work introduce rhythmic perturbations according to the desired feel. Several common feels, such as those known as shuffle or swing, are well known to musicians, who can apply these and other feels to a given composition on request. Attempts have been made to incorporate facilities for introducing rhythmic perturbations into some prior art music synthesisers. Such features include quantization, whereby the timings of notes are rounded to the nearest quantum, as defined by the user, and shuffle, which causes notes to be displaced from their theoretical temporal position according to an algorithm. Such schemes have hitherto failed to predictably emulate the behaviour of a human player, being particularly deficient in that it has generally not been possible to achieve a desired feel, and in those cases where it is possible, it is only after extensive experimentation with a number of independently variable controls. In many respects, prior art schemes for achieving feel have been directed to the experimenter who wishes to compose by trial-and-error, rather than the competent musician who wishes to describe as unambiguously as possible a given feel and have the sequencer perform accordingly, with predictable results.
The present invention is directed to providing an improved system whereby several desirable musical processes, in addition to the record, edit, and play functions of prior art systems, are possible. For example, for any given musical idiom a set of rules may be adopted by the composer, and more or less strictly adhered to with respect to the melodic, rhythmic, and harmonic structures of the composition. The present invention is capable of detecting any departure from such rules and optionally ignoring, correcting or reinforcing the anomaly. This process is analogous to a music copyist noticing an error in a composition and correcting it without needing to ask the composer for precise instructions as to the correction required. Additionally, the present invention is preferably capable of embellishing a composition if so instructed by the user, for example by addition of further musical accompaniments according to traditional musical rules.
The invention is also directed to provision of accurate and predictable control of musical feel.
According to the present invention there is provided a method of sequencing music by use of a computing device comprising a central processing unit and associated memory means, said method comprising the steps of inputting a musical performance into said computing device, storing data representing the notes and timings of said performance, entering into said computing device directives to be associated with predetermined sections of said performance, processing said performance so as to produce additional musical data or modify the existing musical data according to said directives, and outputting the processed performance. The term "directive" as used herein refers to data which serve as instructions to higher-level processes which can in turn result in note data being generated or modified. Directives can be inserted by the user into the stored sequence data, or stored separately and linked by reference to certain ranges of the sequence data.
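The claimed method can be pictured as a small data model: a list of note events, plus directives that each name a range and a rule applied to that range. The sketch below is a hypothetical Python illustration; all names and structures are assumptions, and the patent's own implementation is described later as C and assembler on the Fairlight Series III.

```python
def process_sequence(events, directives):
    """Apply each directive's rule to the events in its range, returning a
    new event list; directives themselves add no notes to the output."""
    out = list(events)
    for d in directives:
        out = d["rule"](out, d["start"], d["end"])
    return out

def transpose_rule(events, start, end):
    """Example rule: shift notes whose beat lies in [start, end) up two
    semitones (a plain transposition, for illustration only)."""
    return [dict(ev, pitch=ev["pitch"] + 2) if start <= ev["beat"] < end else ev
            for ev in events]

events = [{"beat": 0, "pitch": 60}, {"beat": 4, "pitch": 62}]
directives = [{"start": 4, "end": 8, "rule": transpose_rule}]
processed = process_sequence(events, directives)
# Only the second note, which falls inside the directive's range, is shifted.
```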
Directives differ fundamentally from the musical data component of the stored sequence, in that whereas musical data results in musical notes being sounded when the sequence is replayed, directives do not themselves correspond directly to notes to be played.
By allowing the user to direct that more than one directive be associated with a sequence, each directive being associated with a certain range of sequence data defined by the user, it is possible, using this invention, to cause differing musical interpretations to be applied to different sections of a sequence. This allows the user to achieve a performance resembling that of a trained musician much more quickly and simply than has been possible with prior art sequencers.
Another useful and novel feature of the invention is a method of performing pitch transformations, comprising the steps of inputting a musical note event from a sequence, adding or subtracting a transformation factor requested by the user, measuring the fit of the resulting note according to the harmonic context at that point of the sequence, and, depending on the result of the measurement, further adjusting the resulting note. The harmonic context is defined as the set of factors that determine which pitch is the most musically correct result, for the purpose of harmonisation, of a given nominal pitch transformation at a given point in the sequence, given that harmonisation is often subjective within a certain algorithmic framework. The factors influencing harmonic context include, amongst others, the history of the melody, the key signature, the mode and the chord applicable at that point in the sequence. This pitch transformation method is hereinafter referred to as harmonic pitch transformation and is described fully in the detailed description below.
Yet another novel and useful feature of the invention is a method of introducing musical feel to a sequence, comprising the steps of accepting from the sequence file sequence data containing nominal timing information, quantizing the timing information according to a quantization factor selected by the user, feeding the quantized timings to a shuffling algorithm which distorts the time-base of music data events over a specified time interval (the shuffle interval, defined in beats or fractions of beats), delaying events from their nominal positions by an amount which increases to a maximum at the middle of the time interval and returns to zero at the end of the interval, the degree of such shuffling being also selected by the user. This method of introducing feel is also described fully in the detailed description below.
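The feel method above can be sketched directly: quantize, then delay each event by an amount that is zero at the start of each shuffle interval, maximal at its middle, and zero again at its end. This is a hedged illustration; the linear (triangular) delay profile is an assumption, since the text specifies only the zero-maximum-zero shape, and all names are illustrative.

```python
def shuffle_delay(t, interval=1.0, depth=0.1):
    """Delay, in beats, for an event at nominal time t: zero at the start
    of each shuffle interval, rising to `depth` at its middle and falling
    back to zero at its end (triangular profile assumed)."""
    phase = (t % interval) / interval          # position within interval, 0..1
    return depth * (1.0 - abs(2.0 * phase - 1.0))

def apply_feel(times, quantum=0.25, interval=1.0, depth=0.1):
    """Quantize nominal timings to the nearest `quantum`, then shuffle."""
    quantized = [round(t / quantum) * quantum for t in times]
    return [t + shuffle_delay(t, interval, depth) for t in quantized]

# An off-beat event at 0.26 beats is first quantized to 0.25, then
# delayed by half the shuffle depth, landing at about 0.30 beats.
```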
In accordance with another aspect of the present invention there is also provided apparatus for use in carrying out the above described methods.
An embodiment of the present invention will now be described, by way of example only, with reference to the drawings, in which:
Fig. 1 is a schematic block diagram of an embodiment of a music sequencer according to the invention;
Fig. 2 is a schematic representation of the sequence file data structures stored within the music sequencer;
Fig. 3 shows the representation of directives on the display screen layout;
Fig. 4 is a schematic representation of the data structures of a harmonic directive;
Fig. 5 is a schematic representation of the data structures of a rhythmic directive; and
Fig. 6 shows the feel menu display.
Referring now to Fig. 1, music data input 1 is a digital signal which includes at least pitch information, and is formatted so as to convey information specifying which notes are being played by the musician at any given moment. This data typically originates from a piano-like keyboard fitted with suitable electronic sensors and data transmission means, although other input means, such as a trumpet fitted with pitch detection means, can be used. In this embodiment, music data input 1 is equipped to receive data conforming to the well-known standard MIDI (Musical Instrument Digital Interface).
Music data input 1 is received by computer 2, which preferably incorporates a microprocessor, said data being formatted in a suitable way and stored in memory means 4.
The data stored in memory means 4 is hereinafter referred to as the "sequence file". In cases where a live performance is to be recorded, suitable timing information is derived from the realtime chronology of the performance and stored along with the music data in the sequence file. In this embodiment of the invention computer 2, as well as the other components of Fig. 1, comprise part of a Fairlight Computer Musical Instrument Series III, which includes 14 megabytes of random access memory, disk storage, and four microprocessors.
This embodiment of the present invention comprises this hardware in combination with certain software routines common to such instruments, as well as software routines unique to this invention. The software can be written in any language, although for convenience in this embodiment it is written in a combination of the language C and assembler language. Details of implementation of the software are only given in the following description in cases where such implementation would not be obvious to a skilled programmer.
Display means 6 is used to make visible symbolic representation of the sequence file, so that the user can inspect and optionally edit it. Display means 6 in this embodiment is a cathode-ray device displaying a bit-mapped raster, under control of computer 2. Other suitable display devices can be used, including light-emitting-diode or liquid crystal display panels, or other means suitable for display of alphabetical, numeric, or graphic data. User input means 5 is used among other things to instruct the computer which portion of the sequence file is to be displayed and what editing functions are to be performed. In this embodiment, an alpha-numeric keyboard and mouse is used for this purpose.
Other suitable input means include a push-button switch array, graphics tablet, touch screen, direct connection from another sequencer or from a musical instrument, or other means suitable for data entry. As well as changing data input to the sequence file via music data input 1, the editing functions include the ability to input a part or the whole of a sequence file, including timing information, from user input means 5. User input means 5 is also used to insert directives into the sequence file.
Music data output 3 serves to convey sequence data to a music synthesiser by means of which the resulting musical performance is performed. To ensure compatibility with a wide range of synthesisers, in this embodiment the MIDI standard is supported by this output.
Computer 2 of the preferred embodiment of Fig 1 is equipped with suitable software to allow the recording of incoming musical data and the storing of timing information along with this data as a sequence file in memory means 4.
Software is also provided to allow the sequence file to be recalled from memory and despatched to a music synthesiser via music data output 3, the timing of despatch of music data being determined by the timing information stored in the sequence file.
The embodiment so far described is similar to many well-known music sequencers capable of recording, editing and replaying music data. As well as these components, this embodiment of the invention comprises software which, in combination with the hardware as described above, implements the novel directive functions which are central to the invention.
Software is provided so that, having entered a sequence, the user can instruct that directives be added to the sequence file, at positions determined by the user. For the construction of a suitable sequence file, a number of data formatting schemes can be employed with good results, the format used in this embodiment being shown in schematic form in Fig. 2.
Referring now to Fig. 2, the sequence file can be seen to be a series of data packets, linked together logically so that their chronological order can be recalled for replaying or editing purposes. The sequence file comprises at least two types of data packets, musical events and directives. The directives are positioned logically within the file so as to occur at points defined by the user within the music. When the sequence file is replayed, musical event packets result directly in data being transmitted from music data output 3, that is, there is a simple correspondence between the contents of the packet and the data generated at play time. Directive packets, however, do not themselves correspond to musical data and accordingly are not transmitted to the output at play time.
Musical event packets comprise a number of bytes of data, for example as shown in expanded form in Fig. 2 as musical event packet 2. The first byte of the musical event is a status byte, which defines the type of musical event, for example key depression, key release, or control change. A number of optional bytes, dependent on the event type, follow the status byte. These are the data bytes which convey information such as key number, key velocity or instrument number. The data structure of musical event packets is similar to that of the MIDI standard, except that timing information is added.
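The packet layout just described might be sketched as follows. The field names and the dictionary representation are illustrative assumptions; only the status-byte/data-byte split and the added timing information come from the text.

```python
NOTE_ON, NOTE_OFF = 0x90, 0x80   # MIDI status bytes for key press/release

def make_note_event(status, key, velocity, time):
    """Build an event packet: a status byte, event-type-dependent data
    bytes, plus the timestamp the sequence file adds to the MIDI-like
    layout (names are assumptions)."""
    return {"status": status, "data": [key, velocity], "time": time}

def to_midi_bytes(event):
    """At play time only the status and data bytes are transmitted; the
    sequencer-internal timestamp is stripped."""
    return bytes([event["status"]] + event["data"])

ev = make_note_event(NOTE_ON, 60, 100, 0)   # middle C, velocity 100, time 0
# to_midi_bytes(ev) yields the three bytes 0x90 0x3C 0x64
```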
Directive packets, as shown in expanded form in Fig. 2 as directive 3, contain data to be used as instructions to algorithms used for automatic processing of the sequence file by further software of this embodiment.
In many cases, it is desirable to apply a given directive to only part of a sequence. For example, the user may wish to apply a "swing" feel to only a small group of measures within a larger work. For this purpose, in this embodiment the sequence is logically segmented into a number of subsets of sequence data, each corresponding, in musical terms, to one measure of the work. Software is provided to allow the user to associate directives with any range of measures. For convenience, this is implemented using the screen layout shown in Fig. 3. If desired, provision could also be made to specify ranges to a resolution finer than a measure.
Referring to Fig. 3, a sequence is displayed on the screen of this embodiment as a series of rectangular symbols, each representing one measure, and arranged in sequence starting with the first measure in the top left of the display, and progressing across and down the screen. The measures are labelled M1, M2 and so on. Spaces are left between measures, into which directives can be inserted. Directives are shown on this display as smaller, vertically-oriented symbols. Different types of directives are characterised by a letter prefix, H being used for Harmonic Directives, and R for Rhythmic Directives.
A numeric suffix describes the type of directive within these groupings. At the bottom of the display is a series of control symbols, labelled INSERT H, INSERT R and DELETE. Directives are inserted into the sequence by first selecting INSERT H or INSERT R (for Harmonic or Rhythmic directives respectively) using the mouse. A window then appears, requesting further detail as required by the directive type. The point at which insertion is desired is then selected, and a suitably-identified symbol is inserted at the appropriate point in the display. The DELETE function is used to remove a directive from the sequence and display.
If desired, it is also possible to insert multiple directives at the same point in a sequence. For example, Fig. 3 shows a harmonic directive and a rhythmic directive being inserted between measures 7 and 8.
Two major types of directives are defined here as components of this embodiment, and each can be stored in a number of formats with good results. A number of minor types of directives are used also, as lower-level controls within these two major groups. Common to the format of directives used in this embodiment is a status byte indicating the directive type, followed by a variable number of data bytes. The structure and effect of the directives will now be described, as well as examples of applications for which they are useful.
HARMONIC DIRECTIVES

The first directive type is termed a Harmonic Directive, and is used to perform a range of operations related to the harmonic and melodic content of the sequence. Harmonic directives serve a similar function to chord symbols in conventional music notation, that is, in conjunction with the key signature as well as other factors, they establish a harmonic context from which a skilled musician can deduce what notes will produce harmonious results when played at a given point in a musical work.
Harmonic directives as used in this embodiment comprise two numbers as shown in Fig. 4. The first number defines the root note of the chord to be applied from the point in the sequence at which said directive appears, the numbers 0 to 127 uniquely identifying the full range of chromatic notes, in this embodiment corresponding to key numbers as defined by the MIDI standard. The second number is a 24-bit binary word which is used as a bit mask, defining the notes to be included in the desired chord within a two-octave range relative to the root note. Numbering the bits in said mask from 0 through 23, the bit number indicates the interval between the root note and the note of the chord indicated by that bit. A binary one in any bit position indicates that the corresponding note is present in the chord, and a zero in any bit position indicates the absence of the corresponding note from the desired chord. The interval indicated by each bit position is as follows:

Bit number  Interval
0           unison
1           minor second
2           second
3           minor third
4           third
5           fourth
6           diminished fifth
7           fifth
8           minor sixth
9           major sixth
10          diminished seventh
11          major seventh
12          octave
13          minor ninth
14          ninth
15          diminished tenth
16          tenth
17          eleventh
18          diminished twelfth
19          twelfth
20          minor thirteenth
21          thirteenth
22          diminished fourteenth
23          major fourteenth

Having entered harmonic directives, the user can invoke a number of software routines to process the sequence file, based on these directives. The functions of such routines are either to automatically generate new notes to become part of the sequence, or to modify the notes which are already included in the sequence. The descriptions of the harmonic directive processing routines of this embodiment follow.
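Decoding such a directive is straightforward if one observes that, in the table of intervals, bit n corresponds to an interval of n semitones above the root (bit 4, the major third, is 4 semitones; bit 7, the fifth, is 7 semitones; and so on). The sketch below assumes exactly that correspondence; function names are illustrative.

```python
def chord_notes(root, mask):
    """Return the MIDI key numbers of the chord a harmonic directive
    defines: one note per set bit, `bit` semitones above the root."""
    return [root + bit for bit in range(24) if mask & (1 << bit)]

# C major triad on middle C: unison, major third, fifth -> bits 0, 4, 7.
C_MAJOR_MASK = (1 << 0) | (1 << 4) | (1 << 7)
# chord_notes(60, C_MAJOR_MASK) -> [60, 64, 67]
```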
Transcription

The simplest routine utilizing harmonic directives implements a function termed "transcription".
When the user enables the transcription function, having previously entered one or more harmonic directives at the desired points in a sequence, appropriate software causes chords to be generated according to the current harmonic directive. For each bit set in the bit-mask of the harmonic directive one note of the chord is generated, by adding the intervals indicated by the bit position to the root note of the directive. The resulting chord is issued at the time dictated by the beginning of the measure following the harmonic directive, and continues until the next harmonic directive is encountered, at which point a new chord is issued, or until the end of the range over which the transcription has been requested is reached.
This type of transcription generates long, sustained chords and is known as padding.
A more sophisticated type of transcription, known in relation to this embodiment of the invention as template transcription, allows more complex rhythmic patterns of chords to be generated. In this case, chords are derived from the harmonic directives as described above, but instead of simply applying single, long-duration chords for the entire range selected, a rhythmic pattern is imposed on the chords, the pattern being defined by what is called a template. The template is a data structure indicating only the timing values of a sequence of notes, that is, no pitch information is included.
When used with a template, transcribed chords are caused to conform to the rhythmic structure of the template.
The user can apply a rhythmic pattern to a template in a number of ways. For example, the template can be defined using a simple notation, similar to a conventionally-notated drum part, or by playing a pattern in real-time using the music keyboard connected to the invention, from which only the timing information is used, or by selecting a given range of measures of the sequence file from which the rhythmic information is extracted.
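Template transcription can be pictured as a cross product of the chord in force and the template's timings: the chord supplies the pitches, the template supplies when and for how long they sound. The data shapes below are illustrative assumptions.

```python
def template_transcribe(chord, template):
    """Sound the whole chord once per template entry, at the template's
    start time and duration; the template carries no pitch information."""
    events = []
    for start, duration in template:
        for note in chord:
            events.append({"key": note, "start": start, "duration": duration})
    return events

# Two eighth-note hits on a C major triad (timings in beats):
hits = template_transcribe([60, 64, 67], [(0.0, 0.5), (0.5, 0.5)])
# Six events in total: each of the three chord notes at 0.0 and at 0.5.
```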
When performing transcription, further control is provided over the voicing of the chords produced. Voicing, in traditional musical terms, refers to the use of inversions of chords and also the degree of "openness" of the chord (whether the notes chosen are clustered closely around the root, or spread widely across a number of octaves). When the user of this embodiment of the invention invokes the transcription function, optional arguments can be included in the command to determine the openness, expressed as an integer from 0 to 7, and the degree of inversion, expressed as an integer from -24 to +24. If the inversion degree is a positive number n greater than 1, inversion is performed by taking the lowest n-1 notes of the chord and moving them up an octave at a time until they are the highest notes of the chord. If n is negative and less than -1, the n-1 highest notes of the chord are moved down by octaves to become the lowest notes in the chord. If n is greater than the number of notes in the chord, the notes "leapfrog" over each other until the requested number of inversions has been performed.
If an openness of 0 is specified, the notes of the chord are left unchanged (except for any inversion or transformation requested). As the openness increases, the notes are spread in octave intervals above and below the notes defined in the harmonic directive, the size of spread being proportional to the openness requested.
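The two voicing controls can be sketched as follows. The inversion rule follows the positive-n description in the text; negative n (moving top notes down) is omitted from the sketch, and the exact mapping from openness (0 to 7) to octave displacement is an assumption, since the text says only that the spread is proportional.

```python
def invert(chord, n):
    """For inversion degree n > 1: raise the lowest n-1 notes by octaves,
    one at a time, until each sits above the rest of the chord."""
    notes = sorted(chord)
    for _ in range(max(0, n - 1)):
        low = notes.pop(0)
        while low < notes[-1]:
            low += 12
        notes.append(low)
    return notes

def spread(chord, openness):
    """Openness 0 leaves the chord unchanged; higher values displace
    alternate notes by whole octaves above and below (assumed mapping)."""
    if openness == 0:
        return sorted(chord)
    octaves = (openness + 1) // 2
    return sorted(note + 12 * octaves * (1 if i % 2 == 0 else -1)
                  for i, note in enumerate(sorted(chord)))

# invert([60, 64, 67], 2) gives the first inversion [64, 67, 72].
```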
Harmonic Pitch Transformation
One useful function of this invention is the function known as Harmonic Pitch Transformation (HPT). This forms the basis for many useful musical functions which require selected pitch values to be transformed (adapted to a different harmonic context).
The simplest application of HPT is analogous to what is called "transposition" in usual musical terminology. Some sequencers have previously offered transposition in the strict musical sense, which is nothing more than simply moving the pitch of a given note or group of notes up or down by a specified number of semitones. Transposition as such is of limited musical value, and to a musician, a much more complex process which takes into account the key signature, mode, chord, and other factors is often desirable during the composition process. Only in the simplest case, for example when a composition needs to be shifted up or down in pitch to accommodate the range limitations of a singer, does a simple transposition yield musically desirable results.
In this embodiment of the invention, the more musically useful function HPT is performed by an algorithm which accepts as input a pitch value and a nominal offset (desired transformation factor), the key signature, the mode, the harmonic directive, and a short history of preceding input values. The HPT algorithm yields as an output a transformed pitch value. The operation of this algorithm will now be described in detail.
To determine the transformed pitch, the HPT algorithm of this embodiment employs a system of "voting" strategies.
An array is maintained for accumulating weighted votes.
Each element of the array corresponds to a potential output pitch. The array has 2*n+1 elements, where n is the maximum deviation from the nominal note offset. In this embodiment maximum deviation is plus or minus a minor third (three semitones) and the array therefore has seven elements.
The voting strategies used are absolute, historical, modal and chordal. In the default case these strategies have different weights (priorities), absolute being lowest and chordal being highest. The user can, however, assign alternative priorities if desired. These strategies will now be described.
The absolute strategy consists simply of adding the nominal offset (transformation factor) to the input pitch. The sum defines the pitch represented by the centre element (n) of the array. This element is then initialised with a single vote (multiplied by the priority of the absolute strategy, which defaults to one but may be altered by the user). All other elements are set to zero. Subsequent strategies cast votes for the seven possible pitches by adding their own vote to the value stored in the corresponding element of the array.
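As a sketch (in Python, with names invented for illustration), the initialisation performed by the absolute strategy amounts to:

```python
MAX_DEV = 3            # maximum deviation: a minor third, as stated above
ABSOLUTE_PRIORITY = 1  # default weight; the user may alter it

def absolute_votes(input_pitch, nominal_offset):
    """Build the 2*n+1 element vote array (seven elements for n = 3).
    The centre element represents input_pitch + nominal_offset and
    receives the single weighted vote; all other elements start at zero,
    ready for the remaining strategies to add their own votes."""
    votes = [0] * (2 * MAX_DEV + 1)
    votes[MAX_DEV] = 1 * ABSOLUTE_PRIORITY
    centre_pitch = input_pitch + nominal_offset
    return votes, centre_pitch
```

Element i of the array thus corresponds to candidate pitch `centre_pitch + (i - MAX_DEV)`, a convention the later strategies rely on when casting their votes.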
The historical strategy propagates any short term pattern detected in previous notes, such as ascending, descending, or monotonic series. In this embodiment the historical strategy serves primarily to give additional preference to notes which preserve the melodic motion of the input, that is to favour output selections which increase or decrease in pitch by similar amounts to the input notes over the same time period, even if this implies deviation from the nominal offset. The previous input and output pitches are stored. The difference of the current and previous input pitches is added to the previous output pitch and a weighted vote is cast for the resulting note.
A secondary historical strategy, having lower priority, can be invoked to detect patterns of longer duration, allocating a number of votes determined by the duration of the pattern.
For example, a consecutive chromatic series detected causes votes to be cast for the note which continues the series, the number of votes being proportional to the length of the series.
The modal strategy casts votes for pitches which are included in the set defined by the key signature and mode, including for example the traditional ecclesiastic modes, and the more popular major and minor modes of contemporary music. For this purpose, the user must specify the key signature and also which mode is intended, for example whether it is a C major or an A minor key signature. A secondary modal strategy includes historical considerations; for example, in the case of the minor mode, the modal strategy considers the fact that the permissible set of notes depends on whether the sequence is ascending or descending. Applying the modal strategy in the case of changing from the key of C major to C minor, a transformation factor of zero is used, and the set of notes for which the modal strategy votes comprises the tonic, second, minor third, fourth and fifth, as well as the major sixth and major seventh in the case of ascension, or the minor sixth and minor seventh in the case of descension.
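The minor-mode note sets described above can be written down directly as semitone offsets from the tonic. A minimal membership test, sketched in Python (the names are illustrative):

```python
# Semitone offsets from the tonic for the minor mode, as described above:
MINOR_ASCENDING  = {0, 2, 3, 5, 7, 9, 11}   # major sixth and major seventh
MINOR_DESCENDING = {0, 2, 3, 5, 7, 8, 10}   # minor sixth and minor seventh

def modal_member(pitch, tonic, ascending):
    """Test whether a pitch belongs to the permissible minor-mode set,
    which depends on whether the sequence is ascending or descending."""
    degree = (pitch - tonic) % 12
    scale = MINOR_ASCENDING if ascending else MINOR_DESCENDING
    return degree in scale
```

In C minor (tonic 60), A natural (69) is permissible only ascending, while A flat (68) is permissible only descending, exactly the distinction the secondary modal strategy exploits.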
The number of votes for each pitch is inversely proportional to its distance from the central or nominal pitch. Although this exerts a powerful influence on the tonality of the output, it has a lower priority than the chordal strategy, which can effectively impose accidentals, or modal anomalies upon the harmonic context.
The chordal strategy functions identically to the modal strategy, except that membership in the set of pitches defined by the currently applicable harmonic directive is the criterion for voting.
A further factor which can optionally be invoked by the user is the ability to prevent the chordal and modal strategies from voting for HPT output choices which conform to their strategies when the input to the HPT does not itself conform. This function is useful in cases where a deliberately dissonant melody has been entered, that is, the notes played do not conform to the harmonic context, and it is not desired that the HPT destroy this dissonance. In such cases other strategies assume higher or absolute priority.
After these strategies have voted each element of the array contains the accumulated votes for the note it represents. The note with the highest vote is then chosen for the output pitch.
In the case of a draw, the user is prompted for a deciding (manual) vote, in the absence of which the note nearest to the nominal offset is used.
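After all strategies have voted, the selection step can be sketched as follows (Python; the function name is illustrative, and the automatic fallback shown is the "nearest to the nominal offset" rule used in the absence of a manual deciding vote):

```python
def choose_pitch(votes, centre_pitch, n=3):
    """Pick the winning pitch from the 2*n+1 element vote array. The
    element with the most accumulated votes wins; a draw falls back to
    the tied candidate nearest the nominal (centre) pitch."""
    best = max(votes)
    tied = [i for i, v in enumerate(votes) if v == best]
    index = min(tied, key=lambda i: abs(i - n))  # nearest to the centre
    return centre_pitch + (index - n)

# Index 5 holds the most votes, two semitones above the nominal pitch 64:
print(choose_pitch([0, 0, 0, 1, 0, 3, 0], 64))
```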
By redefining the priorities of each strategy, the user can bias the results in favour of personal preferences for the various strategies.
HPT is used by a number of the directives supported by this embodiment of the invention. These will now be described.
Parallel Harmony
One of the many useful applications of HPT is to add a parallel harmony to a melody. The user, having inserted harmonic directives into the sequence where desired, selects a range of the sequence over which parallel harmony is desired, selects the transformation factor desired, as well as a selection of either the overwrite or merge function. The overwrite function causes the original notes to be replaced with the results of the HPT, resulting in a transformed melody which bears a similar harmonic relationship to the harmonic context as the original. The merge function causes the original notes to remain, the results of the HPT being added to them to create a harmony. After the parallel harmony command has been issued, the result is similar to what a singer or guitarist would choose for a parallel musical line, when instructed, for example, to "take the harmony a third above".
Expand
The expand function is used to build chords, comprising as many notes as desired, from a single note. After inserting the desired harmonic directives at the desired points in the sequence, the user specifies the range over which the expansion is to be performed. For each additional note of the chord to be added, a nominal pitch transformation factor is also specified, in this embodiment by inserting a special harmonic directive, which contains no root note data byte, or by adding suitable arguments to the EXPAND command.
When the EXPAND command is issued, the HPT function is applied to each note in turn, over the range specified, and using the first transformation factor specified. For each event of the selected sequence range, the resulting note is merged with the original note. The process is then performed again, using the next specified transformation factor as the input for the HPT algorithm. This process is repeated until each of the requested transformations has been performed, resulting in chords which follow the harmonic context and rhythmic pattern of the original sequence.
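The EXPAND iteration can be sketched as follows. This Python illustration uses a naive stand-in for the HPT (a plain pitch offset) purely to show the control flow; the real algorithm applies the full voting process described earlier:

```python
def expand(notes, factors, hpt):
    """Sketch of EXPAND: for each requested transformation factor in
    turn, every note in the range is passed through the HPT and the
    result is merged with the original, so chords build up while keeping
    the original rhythm."""
    chords = [[n] for n in notes]
    for factor in factors:                 # each transformation in turn
        for chord, note in zip(chords, notes):
            chord.append(hpt(note, factor))
    return chords

# With a naive stand-in HPT, expanding a line by a third and a fifth:
naive_hpt = lambda pitch, factor: pitch + factor
print(expand([60, 62], [4, 7], naive_hpt))
```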
Voicing control, as described above in the context of transcription, can also be applied to the resulting expansion.
A further feature of this function is the ability to enable only a subset of notes of the resulting chords. For example, the user can specify that the second lowest note of each chord be omitted, or that all additional notes be omitted at the beginning of the range, and that progressively fewer notes be omitted as the sequence continues, so that the chords build in density. Complex arpeggiations can also be introduced by cyclic suppression of notes.
Transform/copy
According to this function, a musical phrase (the source phrase) is processed according to the HPT function, the result being placed at another point in the sequence (the destination). The process for achieving this is similar to the parallel harmony function described above, except that the output of the HPT is directed to a different point in the sequence from the point where the source phrase occurs. The user selects the start and end points of the destination, the start point of the source phrase, and the transformation factor required. When the transform/copy command is issued, the HPT accepts as input the first note of the source phrase, transforms it, and outputs it to the destination. The user can also optionally instruct that the length of the input phrase is to be less than the length of the destination. In the case where the length of the input phrase has not been set by the user, the process continues until the end of the destination is reached. If a length has been set, the input to the HPT algorithm is reset to the beginning of the source phrase when the specified source length has been processed. This function also continues until the end of the destination is reached. The purpose of the limited-input-range function is to enable a small input phrase to be extended by repeating it, in transformed form, to fill a larger range of the sequence.
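The source-wrapping behaviour can be sketched in Python (again with a stand-in HPT; names and the note-list representation are assumptions):

```python
def transform_copy(source, dest_len, factor, hpt, source_len=None):
    """Sketch of transform/copy: source notes are transformed by the HPT
    and written to the destination range. If a source length shorter
    than the destination has been set, reading restarts from the
    beginning of the source phrase, extending a small phrase in
    transformed form until the destination is filled."""
    span = source_len if source_len is not None else dest_len
    return [hpt(source[i % span], factor) for i in range(dest_len)]

# A three-note phrase, transposed up an octave, fills a six-note destination:
print(transform_copy([60, 62, 64], 6, 12, lambda p, f: p + f, source_len=3))
```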
Key change
The key signature for a composition can be changed at any arbitrary point, by inserting key change directives as required. A key change directive in this embodiment comprises both a key signature and a mode indicator. This is of great benefit, since both factors are used in performing musically pleasing transformations. For example, the key signature for C major is identical to that of A minor, and by using the mode indicator this embodiment performs musically desirable functions that cannot be deduced from the key signature alone.
Having inserted key change directives where desired, the invention can perform a number of very useful transformations on the sequence by applying the HPT function. The resulting transformation is influenced by the priorities assigned to each of the HPT strategies by the user.
In the simplest case, strict transposition can be achieved by assigning zero priority to all but the absolute strategy.
If a change of key is desired from, say, a major key to its relative minor, over a given range of the sequence, the user inserts the appropriate directives at the desired points, and issues a KEY command, to be performed with the modal strategy having a high priority.
Analyse
The HPT can be applied to a selected range of the sequence without generating notes as output, providing instead an indication of the "fit" of the sequence with the harmonic context.
This is a valuable function for music education purposes, or for locating the source of an undesired dissonance.
RHYTHMIC DIRECTIVE
The second type of directive is termed the Rhythmic Directive. Rhythmic directives of this embodiment control perturbation of the timing of note events; that is, small variations are made to the nominal time values associated with the musical events stored in the sequence file, according to the data of the rhythmic directives.
Timing perturbations are controlled by two variables, stored as data bytes of the rhythmic directive. The variables are called quantize and shuffle. Sequence data is fed to two time-perturbing routines in turn. First the data is processed by a quantizing algorithm, which adds or subtracts time values so that each note event falls on the nearest timing boundary (defined in beats or fractions of beats) as determined by the quantization value in the appropriate data byte of the rhythmic directive. The quantized data is then processed by the shuffle algorithm, which performs a function analogous to the well-known musical feel of the same name.
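The quantizing step is a round-to-nearest-boundary operation. A minimal integer-tick sketch in Python (the tick resolution and names are assumptions):

```python
def quantize(time, grid):
    """Move an event time to the nearest timing boundary; 'grid' is the
    boundary spacing in ticks, derived from the quantization data byte
    of the rhythmic directive. Rounds half-way cases upward."""
    return ((time + grid // 2) // grid) * grid

# With a sixteenth-note grid of 96 ticks, a slightly late note snaps back:
print(quantize(190, 96))
```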
Shuffle, sometimes called "swing", is a term used by musicians to describe a variable degree of "threeness" imposed on an n/4 time signature. The jazz term swing refers to shuffling at eighth-note level by interpreting each group of two eighth-notes as quarter-note triplet and eighth-note triplet pairs. The rock term shuffle implies a similar perturbation of time at a sixteenth-note or thirty-second-note level.
Specifically, shuffle distorts the time-base of music data events over a specified time interval (the shuffle interval, defined in beats or fractions of beats), delaying events from their nominal positions by an amount which increases to a maximum at the middle of the time interval and returns to zero at the end of the interval. In this embodiment of the invention, the shuffle function is implemented as follows.
Shuffle is controlled by two user controls, shuffle level and shuffle degree. Shuffle level defines the repeated time unit over which the shuffle is performed. Shuffle degree defines the amount of perturbation introduced. In this embodiment, shuffle degrees range from zero to 43, zero representing no shuffle and 43 representing full shuffle, the number 43 being selected purely as a matter of convenience as the arithmetic involved, when using eight-bit binary numbers, is simplified by this choice. At degree 0, no perturbation is introduced. At degree 43, note events at the middle of the time unit are delayed by 1/3 of the time from the beginning of the time unit.
Three levels of shuffle are provided, namely eighth-note, sixteenth-note, and thirty-second-note. The algorithm operates on time units of twice the shuffle level duration; for example, at eighth-note level the time unit of a quarter-note is used. Each note event within each time unit is tested to see whether it falls in the first half of the time unit. If it does, the time value for the event is modified according to the formula:

t2 = t1 + t1*s/128

where t2 is the new time of the event (measured from the start of the time unit), t1 is the original time of the event, and s is the shuffle degree (in the range 0-43). If the note event falls within the second half of the time unit, the time value for the event is modified according to the formula:

t2 = t1 + (u - t1)*(s/128)

where t2 is the new time of the event (measured from the start of the time unit), t1 is the original time of the event, s is the shuffle degree (in the range 0-43), and u is the total time of the time unit. These formulae have been selected so that, for example, at the maximum shuffle factor, a note that was originally in the middle of a one-beat time unit, at the position of the second eighth-note of the beat, will be moved to the position of the third eighth-note triplet in that beat. Notes at the beginning and end of the time unit are not affected.
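The two formulae above can be implemented directly with integer tick arithmetic, which is presumably why 128 (and hence the maximum degree 43, with 43/128 approximating 1/3) was chosen. A sketch in Python, assuming a one-beat time unit of 384 ticks:

```python
def shuffle_time(t1, s, u):
    """Integer-tick implementation of the two shuffle formulae: events
    in the first half of the time unit are delayed by t1*s/128, events
    in the second half by (u - t1)*s/128, so the delay peaks at the
    middle of the unit and vanishes at both ends (s in the range 0-43)."""
    if t1 <= u // 2:
        return t1 + (t1 * s) // 128
    return t1 + ((u - t1) * s) // 128

# At full shuffle (43) in a 384-tick beat, the second eighth-note
# (tick 192) lands on the third eighth-note triplet (tick 256):
print(shuffle_time(192, 43, 384))
```

This reproduces the worked example in the text: the delay is zero at both ends of the unit and maximal, one third of the half-unit, at its centre.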
In this embodiment the duration of each note event remains unchanged by the shuffle algorithm, only the starting positions being affected. It is a simple matter, if desired, to provide a further facility whereby the duration of note events is reduced in proportion to the delay of their starting times.
The output of the shuffle algorithm is used to replace the timing values of the selected sequence data.
According to this invention, the musical term "feel" is interpreted to mean a combination of both quantization and shuffle. Furthermore, this embodiment provides a simple and convenient way of selecting any of a large number of musically valid feels from the large number of possible combinations of quantization and shuffle degrees. These features are achieved using what will be referred to as a "feel menu". The feel menu of this embodiment is illustrated in Fig. 6. This menu is displayed on the screen as an array of 16 cells on the horizontal axis representing quantization, ranging from Off, then quarter-note, to 64th septuplet, and 4 cells on the vertical axis representing shuffle level, ranging from Off, then eighth-note, to 32nd-note. The cells are marked by a symbol representing the quantization and shuffle level of each cell, corresponding to the combination of the effects of each axis.
By selecting the appropriate cell, any of the possible quantize/shuffle modes permitted are selectable.
Certain cells of the matrix are disabled, the choices being deliberately limited to those which result in musically useful feels. For example, with quarter-note quantization, eighth-note shuffle is not selectable, since the quarter-note quantization will ensure that no events occur at eighth-note intervals in the sequence file being processed by the shuffle routine. Similarly, with eighth-note quantization, sixteenth-note shuffle is not offered. Such cells of the menu are disabled because no shuffle would result from their selection.
Other cells of the menu, which would result in an effect but have been found by experiment to produce musically undesirable effects, are also disabled, as seen in Fig. 6. This embodiment of the invention includes a function by which the user can request that all cells of the feel menu be accessible if so desired.
The result is that every cell in the matrix will provide a recognized musical feel. The coupling of quantization and shuffle in this manner greatly reduces the complexity of operation, compared to prior-art systems. The feel menu also presents choices in a way which is easily understood and recognised by musicians, with the result that the musical outcome of a given selection can be predicted much more readily than in prior-art schemes.
As well as the matrix displayed on the feel menu of Fig. 6, a control is provided for selection of the shuffle degree. As seen in Fig. 6, this takes the form of a horizontal bar display, which the user operates by sliding left or right using the mouse.
This control is calibrated from minimum, which yields the value 0 to the shuffle algorithm, to maximum, which yields a value of 43.
The format for the data bytes associated with rhythmic directives of this embodiment is shown in Fig. 5.
Referring to Fig. 5, the rhythmic directive can be seen to comprise four bytes. The first byte is the status byte, indicating that the directive is a rhythmic directive, the second byte conveys the quantization factor, corresponding to the sixteen levels of quantization represented by the horizontal axis of the feel menu, the third byte conveys the shuffle level represented by the vertical axis of the feel menu, and the fourth byte conveys the shuffle degree (in this embodiment 0 - 43). These values are used as input to the quantize and shuffle algorithms described above.
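The four-byte layout of Fig. 5 can be sketched as a simple packing routine. Note that the status value used here is hypothetical; the patent does not state the actual byte value, only that the status byte identifies the directive as rhythmic:

```python
RHYTHMIC_STATUS = 0xF2  # hypothetical status value; Fig. 5 defines the real one

def pack_rhythmic_directive(quantization, shuffle_level, shuffle_degree):
    """Pack the four bytes of a rhythmic directive: status byte,
    quantization factor (one of the sixteen menu levels), shuffle level
    (vertical menu axis), and shuffle degree (0-43)."""
    assert 0 <= quantization < 16 and 0 <= shuffle_degree <= 43
    return bytes([RHYTHMIC_STATUS, quantization, shuffle_level, shuffle_degree])
```

These packed values correspond directly to the inputs of the quantize and shuffle algorithms described above.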
To determine the feel of a particular section of the sequence, the user inserts a Rhythmic Directive at the point at which the feel is to take effect. When the "INSERT R" (Fig. 3) function is selected, the feel menu is displayed, and the user selects the feel to be introduced from that point in the sequence. A user option is also provided in this embodiment to enable a function called "feel merge". When this function is enabled, transition from the feel defined by one rhythmic directive to that of another rhythmic directive is performed gradually, by interpolation. When this function is not enabled, the feel changes suddenly when the new directive is encountered.
For convenience, processing of sequence files according to directives can be performed by the computer in realtime, that is, as the sequence is being played. In some cases, however, it may be desired to know in advance of performance what effect application of given directives will have on the musical work. This need is met by providing the option to process the sequence file according to the specified directives and store the resulting musical data in the sequence file. This allows the user to inspect and modify the result before playing. This facility is also useful in cases where the complexity of the process defined by the directives is so great that the computer cannot proceed in realtime.
The foregoing describes only one embodiment of the present invention, and modifications, obvious to those skilled in the art, can be made thereto without departing from its scope. For example, the invention can be implemented as part of a music synthesiser or sequencer, or as a separate device.
Furthermore, while for convenience the present invention can be realised as a combination of computer hardware and software, it is possible to implement the same functions in the form of suitable dedicated hardware.
Furthermore, a person skilled in the art would realise that the choice of data structures used within the various directives is an arbitrary choice with regard to an implementation of the present invention and as such is not a limitation; other configurations are able to be used. For example, whereas the harmonic directive is described as comprising two words of data, this is a matter of convenience only, and other ways of expressing a chord structure can be used without departing from the scope of the invention. In some cases it is desirable to provide extra data within the directives, for example to indicate the location of the next directive. Similarly, the representation of directives on the display can be achieved in a wide variety of ways. For example, harmonic directives can be displayed as chord symbols, as seen commonly on sheet music for guitars.
As will be apparent to those skilled in the art, a wide range of other musical processes can be implemented by encoding suitable algorithms into the software of the present invention, these algorithms using for their input the contents of the sequence file including the directive packets.
Although two types of directives are described in relation to the preferred embodiment herein disclosed, the invention can be realized with any number of directive types.
Whereas the harmonic pitch transformation process described in relation to the embodiment described above involves four particular strategies, these are given by way of example only, and it is anticipated that different or additional strategies can be beneficially used. Indeed it is anticipated that some embodiments of the invention will allow the user to define further strategies as desired.

Claims (26)

1. A musical performance generating method, comprising the steps of: inputting a plurality of data packets, each data packet being representative of at least the pitch of a musical event; associating timing data with each data packet, said timing data being representative of the temporal relationship between said each data packet and other associated musical events; arranging said data packets to form a sequence file, within which subsets of data packets can be identified; storing a number of directives which represent instructions for further processing of said sequence file; associating each directive with at least one selected subset of data packets; reading data from said sequence file according to the said timing data of the stored musical events associated therewith; and processing the recalled data according to the instructions represented by directives associated with the data packets being recalled.
2. A musical performance generating method as claimed in claim 1 wherein at least one of said directives is a harmonic directive comprising data representative of a root note and related musical intervals of a chord.
3. A musical performance generating method as claimed in claim 1 wherein at least one directive is a rhythmic directive comprising data representative of a degree of quantization and shuffle to be applied to musical events.
4. A pitch transformation method, comprising the steps of: inputting values of a sequence of musical notes; adding to or subtracting from one of said values a transformation factor requested by the user to generate a resulting pitch value; measuring a fit of said resulting pitch value, according to a harmonic context at that point of said sequence; and depending on the result of the measurement, further adjusting the pitch of the output pitch value for a better fit.
5. A pitch transformation method as claimed in claim 4, wherein the step of measuring the fit of the resulting note according to the harmonic context at that point of the sequence comprises the sub-steps of analyzing at least one characteristic from the group consisting of the history of the melody, the key signature, the mode and the chord applicable at that point in the sequence.
6. A pitch transformation method as claimed in claim 5 wherein the chord applicable at a given point in the sequence is defined by a harmonic directive comprising data representative of the root note and related musical intervals of a chord.
7. A method of introducing musical feel to a sequence comprising the steps of: accepting from a sequence file sequence data including nominal timing information; quantizing the timing data to coincide with predetermined timing points, according to a quantization factor selected by the user; shuffling the quantized timings by delaying events occurring within a prescribed time interval by a delay amount; and increasing said delay amount from the beginning of the interval to the middle of the interval and reducing from the middle of the interval to the end of the interval, a degree of such shuffling being controlled by the user.
8. A method of introducing musical feel to a sequence as claimed in claim 7 wherein the feel is controlled by the user selecting the degree of quantization and the degree of shuffle from a displayed menu, each menu selection defining a predetermined degree of quantization and shuffle.
9. A music transcription method comprising the steps of: inputting at least one harmonic directive, each comprising data representing, at least, a root note and a presence or absence of at least one related musical interval of a chord; storing said at least one harmonic directive; establishing a chronological relationship between said harmonic directives; reading each said harmonic directive in chronological order; for each said harmonic directive, calculating a number of output notes by adding the corresponding interval indicated by the harmonic directive to a value of the root note indicated by the harmonic directive for each interval indicated as present by the harmonic directive; and outputting for each said harmonic directive a resulting group of notes as a chord.
10. A music transcription method as claimed in claim 9 including the step of combining said chord with a rhythmic pattern stored as timing data within memory means.
11. A music sequencer comprising: means for receiving input of a plurality of data packets, each said data packet being representative of at least a pitch of a musical event; memory means connected to said receiving means for storing said data packets and data representative of a temporal relationship between said musical events as a sequence file; processing means for: a) identifying subsets of data packets within said sequence file; b) storing within said memory means a number of directives, which represent further instructions for said sequence file; c) associating each said directive with selected ones of said subsets of data packets; d) recalling the file data according to said temporal data of the stored musical events; and e) applying musical processing routines to the recalled data according to the directives associated with the data packets being recalled.
12. A music sequencer as claimed in claim 11 wherein at least one of said directives is a harmonic directive comprising data representative of the root note and related musical intervals of a chord.
13. A music sequencer as claimed in claim 11 wherein at least one directive is a rhythmic directive comprising data representative of the degree of quantization and shuffle to be applied to musical events.
14. A music sequencer as claimed in claim 11 further comprising means for displaying said identified subsets of sequence data as a series of symbols of a first shape and displaying said directives as symbols of a second shape.
15. A pitch transformer, comprising: means for receiving input of a sequence of musical note pitch values; and processing means for: a) generating a resulting pitch value by adding to or subtracting from one of said values a transformation factor requested by the user; b) measuring a fit of the resulting pitch value according to a harmonic context at that point of the sequence of pitch values; and c) depending on the result of the measurement, further adjusting the output pitch value.
16. A pitch transformer as claimed in claim 15, wherein the means for measuring the fit of the resulting note according to the harmonic context at that point of the sequence comprises means for analyzing at least one characteristic from the group consisting of the history of the melody, the key signature, the mode and the chord applicable at that point in the sequence.
17. A pitch transformer as claimed in claim 16 wherein the chord applicable at a given point in the sequence is defined by a harmonic directive comprising data representative of the root note and related musical intervals of a chord.
18. A musical sequence feel generator comprising: means for accepting from a sequence file sequence data including nominal timing information; processing means for: a) quantizing the timing data to coincide with predetermined timing points, according to a quantization factor selected by the user; b) shuffling the quantized timing data by delaying events occurring within a prescribed time interval by a delay amount; and c) increasing the delay amount from the beginning of the interval towards the middle of the interval and reducing the delay amount from the middle of the interval towards the end of the interval; and user control means for controlling a degree of such shuffling.
19. A musical sequence feel generator as claimed in claim 18 wherein the feel is controlled by the user selecting the degree of quantization and the degree of shuffle from a displayed menu, each menu selection defining a predetermined degree of quantization and shuffle.
20. A music transcriber comprising: means for receiving input of at least one harmonic directive, each comprising data representing, at least, a root note and a presence or absence of at least one related musical interval of a chord; memory means connected to said receiving means for storing said at least one harmonic directive; processing means for: a) establishing a chronological relationship between said harmonic directives; b) reading each said harmonic directive in chronological order; c) for each said harmonic directive, calculating a number of output notes by adding the corresponding interval indicated by the harmonic directive to a value of the root note indicated by the harmonic directive for each interval indicated as present by the harmonic directive; and d) outputting for each said harmonic directive a resulting group of notes as a chord.
21. A music transcriber as claimed in claim 20 including means for combining said chord with a rhythmic pattern stored as timing data within memory means.
22. A music transcriber substantially as herein described with reference to the accompanying drawings.
23. A pitch transformer substantially as herein described with reference to the accompanying drawings.
24. A musical feel generator substantially as herein described with reference to the accompanying drawings.
25. A music sequencer substantially as herein described with reference to the accompanying drawings.
26. Any novel integer or step, or combination of integers or steps, hereinbefore described, irrespective of whether the present claim is within the scope of or relates to the same, or a different, invention from that of the preceding claims.
GB8820543A 1987-09-02 1988-08-31 Music sequencer Withdrawn GB2209425A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AUPI410687 1987-09-02

Publications (2)

Publication Number Publication Date
GB8820543D0 GB8820543D0 (en) 1988-09-28
GB2209425A true GB2209425A (en) 1989-05-10

Family

ID=3772426

Family Applications (1)

Application Number Title Priority Date Filing Date
GB8820543A Withdrawn GB2209425A (en) 1987-09-02 1988-08-31 Music sequencer

Country Status (1)

Country Link
GB (1) GB2209425A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4417494A (en) * 1980-09-19 1983-11-29 Nippon Gakki Seizo Kabushiki Kaisha Automatic performing apparatus of electronic musical instrument
GB2133198A (en) * 1982-12-24 1984-07-18 Casio Computer Co Ltd Automatic music playing apparatus
US4466324A (en) * 1980-12-04 1984-08-21 Nippon Gakki Seizo Kabushiki Kaisha Automatic performing apparatus of electronic musical instrument

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0542313A2 (en) * 1991-11-15 1993-05-19 Gold Star Co. Ltd Adaptive chord generating apparatus and the method thereof
EP0542313A3 (en) * 1991-11-15 1994-02-02 Gold Star Co
US5455379A (en) * 1991-11-15 1995-10-03 Gold Star Co., Ltd. Adaptive chord generating apparatus and the method thereof
FR2687244A1 (en) * 1992-02-07 1993-08-13 Castello Francois Sound reproduction method and apparatus for implementing this method
DE4302045A1 (en) * 1992-03-06 1993-09-16 Kawai Musical Instr Mfg Co Sequencer for use with musical instrument - has musical instrument digital interface for receiving and transmitting data with user keyboard providing selection and search control.
WO2000014719A1 (en) * 1998-09-04 2000-03-16 Lego A/S Method and system for composing electronic music and generating graphical information
US6353170B1 (en) 1998-09-04 2002-03-05 Interlego Ag Method and system for composing electronic music and generating graphical information
GB2364161B (en) * 1999-12-06 2002-02-27 Yamaha Corp Automatic play apparatus and function expansion device
GB2364161A (en) * 1999-12-06 2002-01-16 Yamaha Corp Automatic play apparatus and function expansion device.
GB2359657B (en) * 1999-12-06 2002-02-27 Yamaha Corp Automatic play apparatus and function expansion device
GB2359657A (en) * 1999-12-06 2001-08-29 Yamaha Corp Automatic Play Apparatus and Function Expansion Device
US6620993B2 (en) 1999-12-06 2003-09-16 Yamaha Corporation Automatic play apparatus and function expansion device
US6660924B2 (en) 1999-12-06 2003-12-09 Yamaha Corporation Automatic play apparatus and function expansion device
US9536508B2 (en) 2011-03-25 2017-01-03 Yamaha Corporation Accompaniment data generating apparatus
EP2690620B1 (en) * 2011-03-25 2017-05-10 YAMAHA Corporation Accompaniment data generation device
US20210241730A1 (en) * 2020-01-31 2021-08-05 Spotify Ab Systems and methods for generating audio content in a digital audio workstation
US11798523B2 (en) * 2020-01-31 2023-10-24 Soundtrap Ab Systems and methods for generating audio content in a digital audio workstation

Also Published As

Publication number Publication date
GB8820543D0 (en) 1988-09-28

Similar Documents

Publication Publication Date Title
US6703549B1 (en) Performance data generating apparatus and method and storage medium
JP3582359B2 (en) Music score allocating apparatus and computer readable recording medium recording music score allocating program
US5939654A (en) Harmony generating apparatus and method of use for karaoke
EP1638077B1 (en) Automatic rendition style determining apparatus, method and computer program
US8324493B2 (en) Electronic musical instrument and recording medium
US6175072B1 (en) Automatic music composing apparatus and method
US8314320B2 (en) Automatic accompanying apparatus and computer readable storing medium
US7186910B2 (en) Musical tone generating apparatus and musical tone generating computer program
EP0945850B1 (en) Electronic music-performing apparatus
JP2002268636A (en) Device for automatic musical symbol determination based upon music data, device for musical score display control based upon music data, and program for automatic musical symbol determination based upon music data
GB2209425A (en) Music sequencer
JP3489503B2 (en) Sound signal analyzer, sound signal analysis method, and storage medium
WO2023056004A1 (en) Method and system for automatic music transcription and simplification
US4418601A (en) String snub effect simulation for an electronic musical instrument
JP2745215B2 (en) Electronic string instrument
JP3353777B2 (en) Arpeggio sounding device and medium recording a program for controlling arpeggio sounding
JP2518056B2 (en) Music data processor
JP2660462B2 (en) Automatic performance device
JP3620396B2 (en) Information correction apparatus and medium storing information correction program
JP3661963B2 (en) Electronic musical instruments
JP3282675B2 (en) Electronic musical instrument
JP3832147B2 (en) Song data processing method
JPH04257895A (en) Apparatus and method for code-step recording and automatic accompaniment system
JP2679308B2 (en) Sound source determination device and electronic musical instrument using the same
JPH0553577A (en) Automatic playing device

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)