EP1265221A1 - Method and device for automatic music improvisation - Google Patents

Method and device for automatic music improvisation

Info

Publication number
EP1265221A1
Authority
EP
European Patent Office
Prior art keywords
music
data
improvisation
source
musical instrument
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01401485A
Other languages
English (en)
French (fr)
Inventor
Francois Pachet
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony France SA
Original Assignee
Sony France SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony France SA
Priority to EP01401485A (EP1265221A1)
Priority to EP02290851A (EP1274069B1)
Priority to US10/165,538 (US7034217B2)
Publication of EP1265221A1
Legal status: Withdrawn (current)

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/0008 Associated control or indicating means
    • G10H1/0025 Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/061 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for extraction of musical phrases, isolation of musically relevant segments, e.g. musical thumbnail generation, or for temporal structure analysis of a musical piece, e.g. determination of the movement sequence of a musical work
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2250/00 Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H2250/005 Algorithms for electrophonic musical instruments or musical processing, e.g. for automatic composition or resource allocation
    • G10H2250/015 Markov chains, e.g. hidden Markov models [HMM], for musical processing, e.g. musical analysis or musical composition

Definitions

  • the invention relates to a device and process for automatically improvising music such that it follows on seamlessly and in real time from music produced from an external source, e.g. a musical instrument being played live.
  • It can serve to simulate an improvising performing musician, capable for instance of completing a musical phrase started by a human musician, taking up instantly with an improvisation that takes into account the immediate musical context, style and other characteristics.
  • the invention contrasts with prior computerised music composing systems, which can be classed into two types:
  • the invention can overcome this hurdle by creating meta instruments which address this issue explicitly: providing fast, efficient and enhanced means of generating interesting improvisation, in a real-world, real-time context.
  • a first object of the invention is to provide a method of automatically generating music that constitutes an improvisation, characterised in that it comprises the steps of:
  • the music patterns can be musical phrases extracted from a stream of music data, each phrase constituting a respective music pattern.
  • the continuation is determined from potential root nodes of a tree structure expressing music patterns.
  • the embodiment can comprise the steps of:
  • the tree structure can be constructed in accordance with the Lempel-Ziv algorithm, using a data compression scheme.
  • the generating step may comprise the steps of:
  • the above procedure can preferably also include the step of applying a weighting to nodes of the tree.
  • the invention can provide an improvisation control mode, comprising the steps of:
  • the music control data can comprise a sequence of n notes, where n is an arbitrarily chosen number.
  • the continuation can be determined from potential root nodes of a tree structure expressing music patterns, wherein the selecting step comprises the step of attributing, to each of one or more nodes of the tree structure, a weight as a function of how a sequence associated to a given node matches with the control data.
  • the weight attributing step may comprise the steps of:
  • the overall weighting function for a possible node X can be defined as: Weight(X) = (1 - S)*LZ_prob(X) + S*Harmo_prob(X), where LZ_prob(X) is the probability of X in the Tree, Harmo_prob(X) is a harmonic matching weight, and S is a tuning parameter.
  • the method can further comprise the step of providing a jumping procedure comprising the steps of:
  • the step of establishing a data base can comprise the step of discriminating between chords and non-chordal note successions, chords being identified as notes separated from each other by a time interval shorter than a predetermined threshold.
  • the improvisation generating step is preferably halted upon detecting current music data. Conversely, the improvisation generating step is preferably started upon detecting an interruption in the current music data.
  • the music patterns forming the data base can originate from a source, e.g. music files, different from the source producing the current music data, e.g. a musical instrument.
  • the music patterns can also be extracted from the source producing the current music data, e.g. a musical instrument.
  • the music control data can be produced from a source, e.g. a musical instrument, different from the source, e.g. another musical instrument, producing the current music data.
  • the invention provides a device for automatically generating music that constitutes an improvisation, characterised in that it comprises:
  • the above device may further comprise means for extracting musical phrases from a stream of music data, each phrase constituting a respective music pattern.
  • the improvisation generating means may comprise selection means for selecting music patterns from potential root nodes of a tree structure expressing music patterns.
  • the device further comprises:
  • the generating means may comprise:
  • the above device may further comprise means for applying a weighting to nodes of the tree.
  • the device may further be equipped with an improvisation control mode, comprising:
  • the continuation may be determined from potential root nodes of a tree structure expressing music patterns, the selecting means preferably comprising means for attributing, to each of one or more nodes of the tree structure, a weight as a function of how a sequence associated to a given node matches with the control data (Ctrl).
  • the device further comprises at least one of:
  • the invention provides a music improvisation system, characterised in that it comprises:
  • the first source of audio data can be one of:
  • the first source and second source of audio data can equally be the same musical instrument or respective musical instruments.
  • the invention provides a music improvisation system, characterised in that it comprises:
  • the improvisation control source can be a musical instrument different from a musical instrument forming the second source of audio data.
  • the invention provides a system comprising:
  • the invention provides a software package comprising functional units for implementing the method according to the first object with a computer.
  • a music improvisation system 1 is based on a combination of two modules: a learning module 2 and a generator/continuation module 4, both working in real time.
  • the input 6 and the output 8 of the system are streams of Midi information.
  • the system 1 is able to analyse and produce pitch, amplitude and rhythm information (onsets and duration).
  • the system accommodates several playing modes; it can adopt an arbitrary role and cooperate with any number of musicians.
  • the system 1 is used by one musician, whose musical instrument, e.g. an electric guitar 10, has a Midi-compatible output connected to a Midi input interface 12 of the learning module 2 via a Midi connector box 14.
  • the output 8 of the system 1 is taken from a Midi output interface 16 to a Midi synthesiser 18 and then to a sound reproduction system 20.
  • the latter plays through loudspeakers 22 either the audio output of the system 1 or the direct output from the instrument 10, depending on whether the system or the instrument is playing.
  • the learning module 2 and the generator/continuation module 4 are under the overall control of a central management and software interface unit 24 for the system.
  • This unit is functionally integrated with a personal computer (PC) comprising a main processing unit (base station) 26 equipped with a motherboard, memory, support boards, CD-ROM and/or DVD-ROM drive 28, a diskette drive 30, as well as a hard disk, drivers and interfaces.
  • the software interface 24 is user accessible via the PC's monitor 32, keyboard 34 and mouse 36.
  • further control inputs to the system 1 can be accessed from pedal switches and control buttons on the Midi connector box 14, or Midi gloves.
  • the system acts as a "sequence continuator": the note stream of the musician's instrument 10 is systematically segmented into phrases by a phrase extractor 38, using a temporal threshold (200 milliseconds). Each phrase resulting from that segmentation is sent asynchronously from the phrase extractor 38 to a phrase analyser 40, which builds up a model of recurring patterns. In reaction to the played phrase, the system also generates a new phrase, which is built as a continuation of the input phrase, and not as an "answer" as in the case of the Biles reference supra.
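By way of illustration, this segmentation rule can be sketched as follows. This is a minimal sketch only: the `NoteEvent` record, the class name and the onset-to-onset gap test are illustrative assumptions, not taken from the patent.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the phrase extractor's segmentation rule: a gap longer than
// 200 ms between successive note onsets closes the current phrase.
public class PhraseExtractor {
    static final long PHRASE_GAP_MS = 200;

    // A note event reduced to what this sketch needs.
    record NoteEvent(int pitch, long onsetMs) {}

    public static List<List<NoteEvent>> segment(List<NoteEvent> stream) {
        List<List<NoteEvent>> phrases = new ArrayList<>();
        List<NoteEvent> current = new ArrayList<>();
        NoteEvent previous = null;
        for (NoteEvent e : stream) {
            if (previous != null && e.onsetMs() - previous.onsetMs() > PHRASE_GAP_MS) {
                phrases.add(current);        // gap exceeded: close the phrase
                current = new ArrayList<>();
            }
            current.add(e);
            previous = e;
        }
        if (!current.isEmpty()) phrases.add(current);
        return phrases;
    }
}
```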
  • the learning module 2 systematically learns all melodic phrases played by the musician.
  • the technique consists in building progressively a database of recurring patterns 42 detected in the input sequences produced by the phrase analyser 40.
  • the learning module 2 uses a data compression scheme (implemented by module 44) adapted from the scheme described by Lempel and Ziv in the paper "Compression of individual sequences via variable-rate coding", IEEE Transactions on Information Theory, vol. 24, no. 5, 1978, pp. 530-536.
  • This scheme, referred to hereafter as the LZ scheme, has been shown to be well suited for capturing melodic patterns efficiently, as explained by Assayag et al. in the paper "Guessing the composer's mind: applying universal prediction to musical style", Proc. ICMC 99, Beijing, China, I.C.M.A., San Francisco, USA.
  • the technique consists in building a prefix Tree by a simple, linear analysis of each input sequence. Each time a sequence is input to the system 1, it is parsed from left to right and new prefixes encountered are systematically added to the Tree. The principle of the LZ Tree is described in detail in the Assayag et al paper.
  • the LZ scheme is itself well known in data compression, the algorithm dating back to the late 1970s.
  • the idea of applying LZ to musical modelling has been proposed by various authors (Assayag, Dubnov, Delerue and others), but only in the context of off-line musical composition, or musical style classification.
  • the present invention adapts this scheme to the context of real-time music improvisation.
  • the embodiment provides an efficient data structure based on 1) a standard Lempel-Ziv Tree structure and 2) a hash table (dictionary) 46 that gives direct access to the nodes in the structure holding a given datum. This ensures that learning can be done in real time (less than a few milliseconds). Only information related to pitch and velocity (Midi) is recorded in the Tree. Rhythmic information is discarded in the present embodiment, although it can also be accommodated if required.
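A minimal sketch of this arrangement is given below. The class and field names (`LzTree`, `Node`, `index`) are assumptions, and the parsing loop is the standard LZ78 incremental scheme rather than the patent's exact code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the learning structure: a Lempel-Ziv prefix Tree whose nodes
// are also indexed by their Midi pitch in a hash table (the "dictionary"),
// so that every node carrying a given pitch is reachable in one lookup.
public class LzTree {
    static class Node {
        final int pitch;                           // Midi datum stored at this node
        final Map<Integer, Node> children = new HashMap<>();
        int count = 1;                             // occurrences of this prefix
        Node(int pitch) { this.pitch = pitch; }
    }

    final Node root = new Node(-1);                // sentinel, holds no pitch
    // Hash table 46: Midi pitch -> every node of the Tree holding that pitch.
    final Map<Integer, List<Node>> index = new HashMap<>();

    // LZ78-style incremental parse: follow known prefixes from the root;
    // the first unseen symbol creates a new node and restarts the walk.
    public void learn(int[] pitches) {
        Node current = root;
        for (int pitch : pitches) {
            Node child = current.children.get(pitch);
            if (child == null) {
                child = new Node(pitch);
                current.children.put(pitch, child);
                index.computeIfAbsent(pitch, k -> new ArrayList<>()).add(child);
                current = root;                    // prefix complete: restart
            } else {
                child.count++;
                current = child;
            }
        }
    }
}
```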
  • Learning can be done on the fly with the user playing from scratch. It can also be done from a library of predefined musical styles (for instance in the style of well known jazz artists such as Pat Martino, John McLaughlin, John Coltrane, Charlie Parker, etc.).
  • LZ Tree. Because most usage of LZ for music is for batch processing (e.g. composition or classification), the present application calls for an efficient representation of the LZ Tree, both for Tree updating (learning) and traversal (generation).
  • the Assayag et al. paper proposes a dual representation of the LZ Tree which is supposed to speed up the traversal, but which takes up much more space, basically doubling the size of the Tree.
  • instead, the embodiment uses a Hash table 46 mapping each datum (here, a Midi pitch) to the set of Tree nodes containing this datum.
  • the Hash table 46 is updated each time a new node is created.
  • the Hash table directly yields the list of potential root nodes to start from.
  • the generator/continuation module 4 of the system 1 is the real time continuation mechanism, which generates the music in reaction to an input sequence.
  • the generation is performed using a traversal of the Tree, through a traversal Tree module 48, to obtain the set of all possible continuations of the input sequence. The following item is then chosen by a random draw, weighted by the probabilities of each possible continuation. This function is provided by a random draw and weighting module 50.
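The weighted random draw of module 50 can be sketched as follows (illustrative names; the weights would typically be the sub-Tree sizes described further below):

```java
import java.util.Map;
import java.util.Random;

// Sketch of the random-draw-and-weighting step: given candidate
// continuations and their weights, pick one with probability
// proportional to its weight.
public class WeightedDraw {
    static final Random RNG = new Random();

    public static int draw(Map<Integer, Integer> weightedContinuations) {
        int total = weightedContinuations.values().stream()
                .mapToInt(Integer::intValue).sum();
        if (total <= 0) throw new IllegalStateException("no continuations");
        int ticket = RNG.nextInt(total);           // uniform in [0, total)
        for (Map.Entry<Integer, Integer> e : weightedContinuations.entrySet()) {
            ticket -= e.getValue();
            if (ticket < 0) return e.getKey();     // landed in this candidate's band
        }
        throw new AssertionError("unreachable");
    }
}
```

For example, `WeightedDraw.draw(Map.of(62, 2, 64, 1))` returns pitch 62 about twice as often as pitch 64.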
  • Music input to the generator/continuation module 4 is taken from the real time Midi source input interface 12 via an internal connection L1. In this way, the module 4 receives, at least, pitch and velocity information from the instrument 10.
  • the system 1 automatically detects musical phrases, by the means of a given time threshold applied to the phrase extractor 38.
  • when a musical phrase is detected, it is sent to the phrase analyser 40 of the learning module 2 and, through internal connection L1, to the generator/continuation module 4.
  • the latter computes a possible continuation for the input phrase, of a given length (parameter), and outputs it to the sound reproduction system 20 (e.g. through a Midi scheduler).
  • in the standard LZ generation scheme, prefixes are only looked up from the root node.
  • the embodiment introduces a modification in the access method of the Tree. Since a given prefix can be located arbitrarily in the Tree structure, a check is made for all its possible occurrences in any part of the Tree. The selection of the "next" node to take is based on the union of all possible continuations thus found, associated with the corresponding weights.
  • an LZ Tree usually contains only partial information about patterns present in the input sequence, and is not complete (as opposed to classical Markov models). Consequently, a given pattern can be present at several locations in the Tree. For instance, in the Tree shown in figure 2, the pattern ABC is present twice (at the nodes marked with an asterisk):
  • the usual Tree traversal mechanism is augmented slightly by computing all possible continuations for a given input sequence.
  • the hash table 46 yields three nodes having data "A", so three "standard" traversals are performed, to eventually obtain only two sets of possible continuations (here, D, D and E). These continuations are aggregated and the next item is drawn according to the respective weights of each node.
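A sketch of this augmented traversal, reusing the `LzTree` sketch given earlier (method names are illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the augmented traversal: the hash table yields every node
// matching the first item of the input prefix; each occurrence is walked
// to its end, and the continuations found there are merged.
public class MultiTraversal {
    // Follows one candidate start node along the rest of the prefix;
    // returns null if the prefix breaks off at some point.
    static LzTree.Node follow(LzTree.Node start, int[] prefix) {
        LzTree.Node node = start;
        for (int i = 1; i < prefix.length && node != null; i++) {
            node = node.children.get(prefix[i]);
        }
        return node;
    }

    // Union of all continuations of the prefix, weighted by node counts.
    public static Map<Integer, Integer> continuations(LzTree tree, int[] prefix) {
        Map<Integer, Integer> merged = new HashMap<>();
        List<LzTree.Node> starts = tree.index.getOrDefault(prefix[0], List.of());
        for (LzTree.Node start : starts) {
            LzTree.Node end = follow(start, prefix);
            if (end == null) continue;
            for (LzTree.Node child : end.children.values()) {
                merged.merge(child.pitch, child.count, Integer::sum);
            }
        }
        return merged;                             // feed to the weighted draw
    }
}
```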
  • the system can handle over 35,000 LZ Tree nodes in real time (i.e. with a response time of less than 200 milliseconds), with a Java implementation running on a personal computer (PC) with a Pentium III microprocessor and using "MidiShare" (Pentium III is a registered trademark of Intel Inc.).
  • the preferred embodiment makes use of the LZ (Lempel-Ziv) algorithm in its capacity to build a Tree of recurring patterns in an efficient way, but not its compressive capacity, as in usual implementations. More particularly, there is provided a more efficient use of this method, so that a real-time learning can be performed.
  • LZ: Lempel-Ziv.
  • in classical use, the above-mentioned Tree is built and eventually used to produce a compressed representation of a given sequence.
  • the embodiment does not utilise this compression, but merely uses the Tree of patterns for generating other sequences "in the style of" the previously inputted sequences.
  • the real-time learning can be performed by any method, as long as it can be done quickly (as in the case of Lempel-Ziv algorithm, which involves only one traversal of the input sequence), and as long as it can quickly produce a structure useable to complete the input sequence.
  • Lempel-Ziv parsing consists in building a Tree by scanning the sequence from its beginning (the root is on the left) to its end, adding each newly encountered prefix to the Tree.
  • the sequence A B C A B C D A B C C D E is thus parsed into the successive prefixes A, B, C, AB, CD, ABC, CDE, each prefix being indicated between commas (,).
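This parse can be reproduced in a few lines; the demonstration below is a self-contained LZ78-style incremental parse, not the patent's code:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Demonstration of the incremental (LZ78-style) parse of the example
// sequence: each phrase is the shortest prefix not yet in the dictionary.
public class LzParseDemo {
    public static void main(String[] args) {
        String[] sequence = "A B C A B C D A B C C D E".split(" ");
        Set<String> dictionary = new HashSet<>();
        List<String> phrases = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (String symbol : sequence) {
            current.append(symbol);
            if (!dictionary.contains(current.toString())) {
                dictionary.add(current.toString());  // new prefix: becomes a phrase
                phrases.add(current.toString());
                current.setLength(0);                // restart from the root
            }
        }
        System.out.println(String.join(", ", phrases));
        // prints: A, B, C, AB, CD, ABC, CDE
    }
}
```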
  • the Tree does not contain all the patterns in the sequence. However, it does converge, for infinite sequences, to the entropy of the sequence, i.e. it has good properties in the long term.
  • the Tree can be used for generating a sequence by choosing each time a node among possible continuations, according to its "weight", cf. unit 50.
  • the weight of a node is in fact the size of its sub-Tree, and corresponds, by construction, to its probability of occurrence.
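In terms of the `LzTree` sketch above, such a weight could be computed with the following illustrative helper (a method to be added to that sketch, not the patent's code):

```java
// The weight of a node is the size of its sub-Tree, computed recursively.
static int subtreeSize(LzTree.Node node) {
    int size = 1;                                  // count the node itself
    for (LzTree.Node child : node.children.values()) {
        size += subtreeSize(child);
    }
    return size;
}
```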
  • chords are abstract entities: a Midi stream is by definition a stream of single notes, so chords as such do not exist in reality for a Midi stream. What shall be termed a chord in the present context is any stream of n notes whose total duration is less than a given threshold, e.g. 40 milliseconds. When such a stream is detected, a single chord object aggregating all the notes in the stream is created, instead of n single note objects.
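A sketch of this aggregation rule follows (names are illustrative; in this simplified version a group of one note stands for a plain note object):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of chord aggregation: notes whose onsets all fall within 40 ms of
// the first note of a group are folded into a single chord object.
public class ChordDetector {
    static final long CHORD_WINDOW_MS = 40;

    record Note(int pitch, long onsetMs) {}
    record Chord(List<Note> notes) {}              // a group of one is a plain note

    public static List<Chord> aggregate(List<Note> stream) {
        List<Chord> result = new ArrayList<>();
        List<Note> group = new ArrayList<>();
        for (Note n : stream) {
            if (!group.isEmpty()
                    && n.onsetMs() - group.get(0).onsetMs() > CHORD_WINDOW_MS) {
                result.add(new Chord(group));      // window exceeded: emit group
                group = new ArrayList<>();
            }
            group.add(n);
        }
        if (!group.isEmpty()) result.add(new Chord(group));
        return result;
    }
}
```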
  • each chord has a canonical representation that takes into account basic facts about tonal harmony:
  • Harmony is a fundamental notion in most forms of music, jazz being a particularly good example in this respect. Chord changes play an important role in deciding whether notes are "right" or not. It is important to note that while harmony detection is extremely simple to perform for a normally trained musician, it is extremely difficult for a system to express and represent harmony information explicitly, especially in real time. Accordingly, a design choice of the system according to the present embodiment is not to manage harmony explicitly. This choice is justified by three considerations:
  • External information may be sent as additional input to the system via the harmonic mode control input 54.
  • This information can be typically the last n notes (pitches) played by any external source (e.g. a piano with Midi interface 56) in a piano-guitar ensemble, where the guitar 10 is connected to Midi input interface 12, for instance.
  • the value of n can be set arbitrarily to 8, to provide steering on the basis of the last eight notes.
  • External input 54 is thus used to influence the generation process as follows.
  • the random draw and weighting module 50 is set to weight the nodes according to how they match the notes presented at the external input 54. For instance, it can be decided to give preference to nodes whose pitch is included in the set of external pitches, to favour branches of the Tree having notes in common with the piano comping.
  • the harmonic information is provided in real time by one of the musicians (in this case the pianist), without intervention of the user, and without having to explicitly enter the harmonic grid in the system. The system then effectively matches its improvisation to the thus-entered steering notes.
  • This matching is achieved by a harmonic weighting function designated "Harmo_prob", defined as follows.
  • Harmo_prob(x) belongs to [0,1], and is maximal (1) when all the notes of X are in the set of external notes.
  • the weight function is therefore defined as follows, where X is a possible node: Weight(X) = (1 - S)*LZ_prob(X) + S*Harmo_prob(X).
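A sketch of these two functions is given below. The patent states only that Harmo_prob(X) lies in [0, 1] and reaches 1 when all the notes of X are among the external notes; the linear fraction used here for intermediate cases, and plain pitch-set membership, are assumptions.

```java
import java.util.Set;

// Sketch of the harmonic weighting: Harmo_prob(X) is the fraction of X's
// pitches found among the external steering pitches (1 when all match),
// blended with the LZ probability by the tuning parameter S.
public class HarmonicWeight {
    public static double harmoProb(int[] nodePitches, Set<Integer> externalPitches) {
        int matches = 0;
        for (int p : nodePitches) {
            if (externalPitches.contains(p)) matches++;
        }
        return (double) matches / nodePitches.length;  // in [0, 1]
    }

    // Weight(X) = (1 - S) * LZ_prob(X) + S * Harmo_prob(X)
    public static double weight(double lzProb, double harmoProb, double s) {
        return (1 - s) * lzProb + s * harmoProb;
    }
}
```

With S = 0 the draw reduces to the pure LZ probabilities; with S = 1 only the harmonic match counts.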
  • the system 1 introduces a "jumping procedure", which avoids a drawback of the general approach. Indeed, it may be the case that for a given input sub-sequence seq, none of the possible continuations has a non-zero Harmo_prob value. In such a case, the system 1 introduces the possibility to "jump" back to the root of the LZ Tree, to allow the generated sequence to be closer to the external input. Of course, this jump should not be made too often, because the stylistic consistency represented by the LZ Tree would otherwise be broken. The system 1 therefore performs this jump by making a random draw weighted by S, as follows:
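The enumerated steps of this draw are not reproduced here; the sketch below shows one plausible reading, in which the jump is taken only when no continuation matches the external harmony, and then only with probability S:

```java
import java.util.Map;
import java.util.Random;

// Sketch of one plausible reading of the jumping procedure: jump back to
// the root only when no candidate continuation matches the external
// harmony, and then only with probability S, so that jumps stay rare.
public class JumpDecision {
    static final Random RNG = new Random();

    public static boolean shouldJump(Map<Integer, Double> harmoProbByCandidate,
                                     double s) {
        boolean noHarmonicMatch = harmoProbByCandidate.values().stream()
                .allMatch(p -> p == 0.0);
        return noHarmonicMatch && RNG.nextDouble() < s;
    }
}
```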
  • the system 1 neither learns nor produces rhythmic information. In the context of jazz improvisation, for instance, it was found that letting the continuation system produce rhythm by combining patterns is very awkward, and in fact limits its usability. Instead, the preferred embodiment generates streams of eighth notes. Velocity information is enough to produce phrases that sound human to the point of being indistinguishable. The system 1 can actually also generate non-linear streams of notes by simply using the rhythmic pattern of the input sequence and mapping it to the output sequence. In any case, no rhythmic information is learned.
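The two rhythmic options (a uniform eighth-note stream, or reuse of the input phrase's onset pattern) can be sketched as follows; the names and the deliberately crude cycling rule are assumptions:

```java
// Sketch of the two rhythmic options for the generated pitches: a uniform
// stream of eighth notes, or reuse of the input phrase's onset pattern
// (here cycled crudely if the output is longer than the input).
public class RhythmMapper {
    // Uniform eighth notes: onset of note i is i * eighthMs.
    public static long[] uniformOnsets(int noteCount, long eighthMs) {
        long[] onsets = new long[noteCount];
        for (int i = 0; i < noteCount; i++) onsets[i] = i * eighthMs;
        return onsets;
    }

    // Copy the input onsets onto the output, wrapping around as needed.
    public static long[] mappedOnsets(long[] inputOnsets, int noteCount) {
        long[] onsets = new long[noteCount];
        long span = inputOnsets[inputOnsets.length - 1] + 1;
        for (int i = 0; i < noteCount; i++) {
            onsets[i] = (long) (i / inputOnsets.length) * span
                      + inputOnsets[i % inputOnsets.length];
        }
        return onsets;
    }
}
```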
  • Midi files of known improvisers, which are long enough to let the system actually capture the recurring patterns, are used as input sequences.
  • the learning scheme can be carried out several times to accelerate the learning process.
  • the embodiment described is limited, intentionally, in its musical knowledge.
  • the limitations are of three kinds: 1) rhythmic, 2) harmonic, and 3) polyphonic. Interestingly, it has turned out that these limitations are actually strengths, because of the control mode. In some way, these limitations are compensated by control.
  • the embodiment implements a set of basic controllers that are easy to trigger in real time. These are accessible through the software interface 24 in the form of on-screen pushbuttons and pull-down menus on the PC monitor 32, which can be activated through the computer keyboard 34 and/or mouse 36.
  • An example of a typical screen page of the computer interface displayed on the monitor 32 is shown in figure 3.
  • the last control is particularly useful.
  • the system stops playing when the user starts to play or resumes, to avoid superposition of improvisations. With a little bit of training, this mode can be used to produce a unified stream of notes, thereby producing an impression of seamlessness.
  • the system 1 takes over with its improvisation immediately from the point where the musician (guitar 10) stops playing, and ceases instantly when the musician starts to play again.
  • These controls are implemented with a foot controller of the Midi connector box 14 when enabled by the basic controls on screen (tick boxes). They can also be implemented with "Midi" gloves.
  • An internal link L2 is active in this case to also send the music output of the instrument from the Midi input interface 12 to the Midi output interface 16, so as to allow the instrument to be heard through the Midi synthesiser 18, sound reproduction system 20 and speakers 22.
  • the software interface allows a set of parameters to be adjusted from the screen 32, such as:
  • FIG 4 shows an example of a set-up for the sharing mode in the case of a guitar and piano duo (of course, other instruments outside this sharing mode can be present in the music ensemble).
  • each instrument in the sharing mode is non-acoustic and composed of two functional parts: the played portion and a respective synthesiser.
  • For the guitar, these portions are respectively the main guitar body 10 with its Midi output and a guitar synthesiser 18b.
  • For the piano, they are respectively the main keyboard unit with its Midi output 56 and a piano synthesiser 18a.
  • One of the improvisation systems 1a has its Midi input interface 12a connected to the Midi output of the main guitar body 10 and its Midi output interface 16a connected to the input of the piano synthesiser 18a. The latter thus plays the improvisation of system 1a, through the sound reproduction system 20a and speakers 22a, based on the phrases taken from the guitar input.
  • the other improvisation system 1b has its Midi input interface 12b connected to the Midi output of the main keyboard unit 56 and its Midi output interface 16b connected to the Midi input of the guitar synthesiser 18b. The latter thus plays the improvisation of system 1b, through the sound reproduction system 20b and speakers 22b, based on the phrases taken from the piano input.
  • This inversion of synthesisers 18a and 18b is operative for as long as the improvisation is active.
  • when a musician starts playing again, the improvisation is automatically interrupted so that his/her instrument 10 or 56 takes over through its normally attributed synthesiser 18b or 18a respectively.
  • This taking over is accomplished by adapting link L2 mentioned supra so that a first link L2a is established between Midi input interface 12a and Midi output interface 16b when the guitar 10 starts to play, and a second link L2b is established between Midi interface 12b and Midi output interface 16a when the piano 56 starts playing.
  • the embodiment makes it possible to generate musical melodies automatically in a given style. It automatically learns the style from various sorts of musical inputs (real time, Midi files).
  • the invention makes it possible to generate musical melodies in a reactive way, according to a musical stimulus coming, e.g., from real-time Midi signals.
  • modes of control are proposed to make the invention actually usable in a real live performance context.
  • the invention can be embodied in a wide variety of forms with a large range of optional features.
  • the implementation described is based largely on existing hardware elements (computer, Midi interfaces, etc.), with the main aspects contained in software based modules. These can be integrated in a complete or partial software package in the form of a suitable data carrier, such as DVD or CD disks, or diskettes that can be loaded through the appropriate drives 28, 30 of the PC.
  • the invention can be implemented as a complete stand-alone unit integrating all the necessary hardware and software to implement a complete system connectable to one or several instruments and having its own audio outputs, interfaces, controls etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)
EP01401485A 2001-06-08 2001-06-08 Method and device for automatic music improvisation Withdrawn EP1265221A1 (de)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP01401485A EP1265221A1 (de) 2001-06-08 2001-06-08 Method and device for automatic music improvisation
EP02290851A EP1274069B1 (de) 2001-06-08 2002-04-05 Method and device for automatic music continuation
US10/165,538 US7034217B2 (en) 2001-06-08 2002-06-07 Automatic music continuation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP01401485A EP1265221A1 (de) 2001-06-08 2001-06-08 Method and device for automatic music improvisation

Publications (1)

Publication Number Publication Date
EP1265221A1 (de) 2002-12-11

Family

Family ID: 8182762

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01401485A Withdrawn EP1265221A1 (de) Method and device for automatic music improvisation

Country Status (1)

Country Link
EP (1) EP1265221A1 (de)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5736666A (en) * 1996-03-20 1998-04-07 California Institute Of Technology Music composition
WO1999046758A1 (en) * 1998-03-13 1999-09-16 Adriaans Adza Beheer B.V. Method for automatically controlling electronic musical devices by means of real-time construction and search of a multi-level data structure
US5990407A (en) * 1996-07-11 1999-11-23 Pg Music, Inc. Automatic improvisation system and method
WO2001009874A1 (en) * 1999-07-30 2001-02-08 Mester Sandor Jr Method and apparatus for producing improvised music

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9110817B2 (en) 2011-03-24 2015-08-18 Sony Corporation Method for creating a markov process that generates sequences
US8847054B2 (en) * 2013-01-31 2014-09-30 Dhroova Aiylam Generating a synthesized melody
US20210210057A1 (en) * 2018-03-15 2021-07-08 Score Music Productions Limited Method and system for generating an audio or midi output file using a harmonic chord map
US11837207B2 (en) * 2018-03-15 2023-12-05 Xhail Iph Limited Method and system for generating an audio or MIDI output file using a harmonic chord map
CN109637509A (zh) * 2018-11-12 2019-04-16 Ping An Technology (Shenzhen) Co., Ltd. Automatic music generation method and device, and computer-readable storage medium
CN109637509B (zh) * 2018-11-12 2023-10-03 Ping An Technology (Shenzhen) Co., Ltd. Automatic music generation method and device, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
US7034217B2 (en) Automatic music continuation method and device
JP3812328B2 (ja) Automatic accompaniment pattern generating apparatus and method
US20040055444A1 (en) Synchronous playback system for reproducing music in good ensemble and recorder and player for the ensemble
CN111602193B (zh) Information processing method and apparatus for processing a performance of a musical piece
JP4834821B2 (ja) Method for generating musical parts from an electronic music file
JPH11167341A (ja) Performance practice apparatus, performance practice method, and recording medium
JP2003514259A (ja) Method and apparatus for compressed chaotic music synthesis
Pachet Interacting with a musical learning system: The continuator
JP7327497B2 (ja) Performance analysis method, performance analysis apparatus, and program
JPH04330495A (ja) Automatic accompaniment device
Weinberg et al. A real-time genetic algorithm in human-robot musical improvisation
Thörn et al. Human-robot artistic co-creation: a study in improvised robot dance
Hsu Strategies for managing timbre and interaction in automatic improvisation systems
EP1265221A1 (de) Method and device for automatic music improvisation
JP5394401B2 (ja) System and method for improved similarity of output volume between audio players
CN112912951B (zh) Information processing apparatus for data representing motion
JP3812510B2 (ja) Performance data processing method and musical tone signal synthesis method
Kobayashi et al. New ensemble system based on mutual entrainment
Rigopulos Growing music from seeds: parametric generation and control of seed-based music for interactive composition and performance
JP2629418B2 (ja) Musical tone synthesis apparatus
Pinch Emulating sound: what synthesizers can and can't do: explorations in the social construction of sound
US11715447B2 (en) Spontaneous audio tone inducing system and method of use
WO2024082389A1 (zh) Haptic feedback method, system and related device for matching vibration to music stems
JP3812509B2 (ja) Performance data processing method and musical tone signal synthesis method
JP3627675B2 (ja) Performance data editing apparatus and method, and program

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

18W Application withdrawn

Withdrawal date: 20021126