WO2001086625A2 - Automated generation of sound sequences - Google Patents

Automated generation of sound sequences

Info

Publication number
WO2001086625A2
WO2001086625A2 (PCT/GB2001/001971)
Authority
WO
WIPO (PCT)
Prior art keywords
generative
audio system
controlling
items
engine
Prior art date
Application number
PCT/GB2001/001971
Other languages
English (en)
Other versions
WO2001086625A3 (fr)
Inventor
John Tim Cole
Murray Peter Cole
Original Assignee
Sseyo Limited
Priority date
Filing date
Publication date
Priority claimed from GB0010969A external-priority patent/GB0010969D0/en
Priority claimed from GB0010967A external-priority patent/GB0010967D0/en
Priority claimed from GB0011178A external-priority patent/GB0011178D0/en
Priority claimed from GB0022164A external-priority patent/GB0022164D0/en
Priority claimed from GB0030834A external-priority patent/GB0030834D0/en
Application filed by Sseyo Limited filed Critical Sseyo Limited
Priority to AU58529/01A priority Critical patent/AU5852901A/en
Publication of WO2001086625A2 publication Critical patent/WO2001086625A2/fr
Publication of WO2001086625A3 publication Critical patent/WO2001086625A3/fr


Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/0033Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H1/0041Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
    • G10H1/0058Transmission between separate instruments or between individual components of a musical system
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/0008Associated control or indicating means
    • G10H1/0025Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/18Selecting circuits
    • G10H1/26Selecting circuits for automatically producing a series of tones
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/021Background music, e.g. for video sequences, elevator music
    • G10H2210/026Background music, e.g. for video sequences, elevator music for games, e.g. videogames
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/101Music Composition or musical creation; Tools or processes therefor
    • G10H2210/145Composing rules, e.g. harmonic or musical rules, for use in automatic composition; Rule generation algorithms therefor
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/135Musical aspects of games or videogames; Musical instrument-shaped game input interfaces
    • G10H2220/145Multiplayer musical games, e.g. karaoke-like multiplayer videogames
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/155User input interfaces for electrophonic musical instruments
    • G10H2220/321Garment sensors, i.e. musical control means with trigger surfaces or joint angle sensors, worn as a garment by the player, e.g. bracelet, intelligent clothing
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/155User input interfaces for electrophonic musical instruments
    • G10H2220/351Environmental parameters, e.g. temperature, ambient light, atmospheric pressure, humidity, used as input for musical purposes
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2230/00General physical, ergonomic or hardware implementation of electrophonic musical tools or instruments, e.g. shape or architecture
    • G10H2230/005Device type or category
    • G10H2230/015PDA [personal digital assistant] or palmtop computing devices used for musical purposes, e.g. portable music players, tablet computers, e-readers or smart phones in which mobile telephony functions need not be used
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/011Files or data streams containing coded musical information, e.g. for transmission
    • G10H2240/046File format, i.e. specific or non-standard musical file format used in or adapted for electrophonic musical instruments, e.g. in wavetables
    • G10H2240/056MIDI or other note-oriented file format
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/011Files or data streams containing coded musical information, e.g. for transmission
    • G10H2240/046File format, i.e. specific or non-standard musical file format used in or adapted for electrophonic musical instruments, e.g. in wavetables
    • G10H2240/061MP3, i.e. MPEG-1 or MPEG-2 Audio Layer III, lossy audio compression
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/171Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H2240/175Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments for jam sessions or musical collaboration through a network, e.g. for composition, ensemble playing or repeating; Compensation of network or internet delays therefor
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/171Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H2240/201Physical layer or hardware aspects of transmission to or from an electrophonic musical instrument, e.g. voltage levels, bit streams, code words or symbols over a physical link connecting network nodes or instruments
    • G10H2240/241Telephone transmission, i.e. using twisted pair telephone lines or any type of telephone network
    • G10H2240/251Mobile telephone transmission, i.e. transmitting, accessing or controlling music data wirelessly via a wireless or mobile telephone receiver, analog or digital, e.g. DECT GSM, UMTS
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/171Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H2240/281Protocol or standard connector for transmission of analog or digital data to or from an electrophonic musical instrument
    • G10H2240/295Packet switched network, e.g. token ring
    • G10H2240/305Internet or TCP/IP protocol use for any electrophonic musical instrument data or musical parameter transmission purposes
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/171Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H2240/281Protocol or standard connector for transmission of analog or digital data to or from an electrophonic musical instrument
    • G10H2240/311MIDI transmission

Definitions

  • This invention relates to methods and systems for automated generation of sound sequences, and especially (though not exclusively) of sound sequences in the form of music.
  • Non-generative systems include deterministic systems which will produce the same sequences every time, along with systems that simply replay (perhaps in a random or other order) pre-composed sections of music.
  • The vast majority of current systems which produce musical output make use of this type of approach, for example by selecting and playing a particular predefined sequence of notes at random when a key is pressed or a mouse button clicked.
  • Generative Music Systems may be considerably more complex. Such systems generate musical content, typically note by note, on the basis of a higher-level of musical knowledge. Such systems either explicitly or implicitly are aware of a variety of musical rules which are used to control or influence the generation of the music.
  • the rules may operate purely on the individual notes being generated, without imposing any form of higher order musical structure on the output; in such systems, any musical order that arises will be of an emergent nature. More sophisticated systems may include higher-level rules which can influence the overall musical structure.
  • Generative Music Systems will normally create musical content "on the fly", in other words the musical sequences are built up note by note and phrase by phrase, starting at the beginning and finishing at the end. This means that - in contrast with some of the non-generative systems - the musical content can be generated and played in real time: there is no need for example for the whole of the phrase to be generated before the first few notes of the phrase can be played.
  • For our present purposes, the essential features of a generative music system are that it generates musical content in a non-deterministic way, based upon a plurality of musical rules (which may either be implicit within the software or which may be explicitly specified by either the program writer or the user of the program).
  • a generative sound system produces non-deterministic sound sequences based upon sound-generation rules.
  • a generative audio system including a generative audio engine, the engine being controlled or influenced by messages received from a plurality of controlling items.
  • the messages may be transmitted and received via a wireless or a physical link, with any suitable protocol (such as SMS) being used.
  • controlling items are fully networked, so that bi-directional message-passing capabilities are provided. In that way, complex interactions may occur between the various elements, as the music or other sounds are being generated.
  • the audio engine may be controlled or influenced by the content of the messages being received, the type of messages, the number of messages, the timing of messages, and/or the presence or absence of messages of a particular type.
  • a message sent by an individual controlling unit may identify the type of unit sending the message, along with its absolute or relative position and/or orientation. Where the sending unit has an audio engine of its own, the message may include information representative of the set-up and/or state of that audio engine (for example by means of appropriate parameters).
  • the method and system of the invention are applicable to the use of generative music (or other sound) systems within a stand-alone or networked digital device, and in this respect have many aspects by which the audio output of such digital devices may be controlled and the exchange of musical (or other sound) information can be effected.
  • the invention may be applied, for example, in the context of mobile telephones and other network communications facilities, in the field of electronic toys, and in respect of other audio-capable digital electronic devices.
  • Short data messages may be used to transfer control information between devices for an audio or music rendering system, generative or otherwise.
  • the control information may be in the form of MIDI instructions or in a form which can operate or control a generative music system. This allows very small messages with low bandwidth requirement to facilitate rich and complex audio-musical behaviour of these devices.
  • the messages may trigger sound effects or musical/audio sequences, potentially integrating them within an audio output that is continuously being generated by the generative music system, or even within the context of the audio interpretation of the system.
  • the triggering may be related to such events as changes in value of a stock portfolio, incoming news events, weather announcements, or networked musical performances.
  • the method and system of the invention may be used in the context of communication networks.
  • the devices of the network may each include an individual generative music system and messages transmitted between them may be used to co-ordinate their musical behaviour.
  • a device incorporating a generative music system of the invention may receive messages communicated, for example by wireless, relating to such matters as the musical activity, position or orientation of other devices.
  • These other devices may be, for example, in the form of tags or tokens and the response of the generative music system to the messages may be such as to indicate the relationship, in musical terms and/or positionally, of the tags or tokens to one another and/or the receiving device.
  • the invention extends to a game, and to a puzzle, incorporating a generative audio system as previously described.
  • the individual controlling items are in one embodiment collectable items of some sort, such as cards, building blocks, small toys, items of jewellery or the like.
  • the individual controlling items comprise toys, figures or models which, taken together, make up a "band" or "orchestra". These may automatically interact, by means of message passing, so as co-operatively to generate and play a musical composition, with each member of the "band" or "orchestra" performing a different role or being representative of a particular noise or instrument.
  • the figures may, together, form a peer-to-peer network or, alternatively, a controlling unit may be provided which controls the individual players. Wireless communication is preferred in this embodiment, although physical connections are not excluded.
  • sounds or music could more generally be controlled or influenced by the number, type, relative locations, orientation, combination, proximity and composition of the various controlling elements, and the ambient environment(s) in which they are located (e.g. light levels, humidity, temperature etc).
  • Figure 1 is a schematic representation of the preferred system of the invention;
  • Figure 2 is illustrative of objects that are involved in a component of the system of Figure 1;
  • Figure 3 is a flow-chart showing process steps involved in control sequencing within the method and system of the invention;
  • Figure 4 is illustrative of operation of the method and system of the invention in relation to scale and harmony rules;
  • Figure 5 illustrates operation of the method and system of the invention in relation to the triggering of note sequences and their integration into a musical work as currently being composed and played;
  • Figure 6 shows in schematic form devices that each utilise the method and system of the present invention and are in wireless communication with one another and/or a base station;
  • Figure 7 is illustrative of an arrangement which includes a device that utilises the method and system of the present invention for providing sound output dependent on the position and/or orientation of other items;
  • Figure 8 shows an embodiment in which sounds are generated in dependence upon the position of cards on a base unit;
  • Figure 9 shows schematically the cards used in the embodiment of Figure 8;
  • Figure 10 shows another embodiment in which sounds are generated in dependence upon the position of stacking blocks on a base unit;
  • Figure 11 shows another embodiment in which cards can be slotted into a base unit;
  • Figure 12 shows another embodiment in which sounds are generated in dependence upon the position of objects within a room;
  • Figure 13 shows a further embodiment in which the generation of music within a portable music player is influenced by jewellery worn by the user; and
  • Figure 14 shows yet a further embodiment consisting of a number of toys or figures making up an "orchestra".
  • the method and system to be described are for the automated generation of sound sequences, and for integrating data presented or interpreted in a musical context so as to generate an output reflecting this integration. Operation is within the context of the generation of musical works, audio, sounds and sound environments in real time. More especially, the method and system function in the manner of a 'generative music system' operating in real time, enabling user interaction to be incorporated into the composition on the fly.
  • the overall construction of the system is shown in Figure 1 and will now be described.
  • the system involves four high-level layers, namely, an applications layer I comprising software components 1 to 5, a layer II formed by an application programmer's interface (API) 6 for interfacing with a music engine SKME that is manifest in objects or components 7 to 14 of a layer III, and a hardware device layer IV comprising hardware components 15 to 19 that interact with the music engine SKME of layer III.
  • Information flow between the software and hardware components of layers I to IV is represented in Figure 1 by arrow-heads on dotted-line interconnections, whereas arrow-heads on solid lines indicate an act of creation; for example, information in the composed-notes buffer 11 is used by the conductor 12 which is created by the soundscape 8.
  • the applications layer I determines the look, feel and physical instantiation of the music engine SKME.
  • Users can interact with the music engine SKME through web applications 1, or through desktop computer applications 2 such as those marketed by the Applicants under their Registered Trade Mark KOAN as KOAN PRO and KOAN X; the music engine SKME may itself be such as marketed by the Applicants under the Registered Trade Mark KOAN.
  • Interaction with the engine SKME may also be through applications on other diverse platforms 3 such as, for example, mobile telephones or electronic toys. All applications 1 to 3 ultimately communicate with the music engine SKME via the API 6, which protects the internals of the music engine SKME from the outside world and controls the way in which the applications can interact with it.
  • the instructions sent to the API 6 from the applications 1 to 3 consist of commands that instruct the music engine SKME to carry out certain tasks, for example starting the composition and playback, and changing the settings of certain parameters (which may affect the way in which the music is composed/played).
  • communication with the API 6 may be direct or via an intermediate API.
  • communication to the API 6 is direct from the desktop computer applications 2, whereas it is via an intermediate browser plug-in API 4 and Java API 5 from applications 1 and 3 respectively.
  • the music engine SKME which is held in memory within the system, comprises eight main components 7 to 14.
  • The SSFIO 7, which is for file input/output, holds a description of the parameters, rules and their settings used by algorithms within the engine to compose.
  • a soundscape 8 is created in memory and this is responsible for creating a composer 10, conductor 12 and all the individual compositional objects 9 relating to the description of the piece as recorded in the SSFIO 7.
  • the compositional objects are referred to by the composer 10 to decide what notes to compose next.
  • the composed notes are stored in a number of buffers 11 along with a time-stamp which specifies when they should be played.
  • the conductor 12 keeps time, by receiving accurate time information from a timer device 19 of level IV.
  • the relevant notes are removed from the buffers 11 and the information they contain (such as concerning pitch, amplitude, play time, the instrument to be used, etc.) is passed to the appropriate rendering objects 13.
  • the rendering objects 13 determine how to play this information, in particular whether via a MIDI output device 17, or as an audio sample via an audio-out device 18, or via a synthesiser engine 14 which generates complex wave-forms for audio output directly, adding effects as needed.
  • the hardware devices layer IV includes in addition to the devices 17 to 19, a file system 15 that stores complete descriptions of rules and parameters used for individual compose/playback sessions in the system; each of these descriptions is stored as an 'SSfile', and many of these files may be stored by the file system 15.
  • a MIDI in device 16 is included in layer IV to allow note and other musical-event information triggered by an external hardware object (such as a musical keyboard) to be passed into the music engine SKME and influence the composition in progress.
  • the system can be described as having essentially two operative states, one, a 'dynamic' state, in which it is composing, and the other, a 'static' state, in which it is not composing.
  • In the static state the system allows modification of the rules that are used by the algorithms to later compose and play music, and keeps a record, encapsulated in the SSFIO component 7, of various objects that are pertinent to the description of how the system may compose musical works.
  • the system is also operative in the dynamic state to keep records of extra objects which hold information pertinent to the real-time composition and generation of these works. Many of these objects (the compositional objects 9, for example) are actual instantiations in memory of the descriptions contained in the SSFIO 7. Modification of the descriptions in the SSFIO 7 via the API layer II during the dynamic state results in those modifications being passed down to the compositional objects 9 so that the real-time composition changes accordingly.
  • Figure 2 shows a breakdown of the SSFIO component 7 into its constituent component objects which exist when the system is in its static and dynamic states; the system creates real-time versions of these objects when composing and playing.
  • SSfiles 20 stored each provide information as to 'SSObject(s)' 21 representing the different types of object that can be present in the description of a work; these objects may, for example, relate to piece, voice, scale rule, harmony rule, rhythm rule.
  • Each of these objects has a list of 'SSFparameters' 22 that describe it; for example, they may relate to tempo, instrument and scale root.
  • the API 6 allows a number of functions to be effected such as 'start composing and playing', 'change the rules used in the composition', 'change the parameters that control how the piece is played' including the configuration of effects etc.
  • One of the important advantages of the described method and system is the ability to trigger generative pattern sequences in response to external events.
  • the triggering of a generative pattern sequence has a range of possible outcomes that are defined by the pattern sequence itself. In the event that a generative pattern sequence is already in operation when another trigger event is received, the currently operational sequence is ended and the new one is scheduled to start at the earliest available opportunity.
  • Generative pattern sequences allow a variety of musical seed phrases of any length to be used in a piece, around which the music engine SKME can compose in real time as illustrated in Figure 3. More particularly, the generative pattern sequence contains a collection of one or more note-control sub-patterns, with or without one or more additional sequence-control sub-patterns.
  • Three types of note-control sub-pattern can be created, namely: a 'rhythm' note-control sub-pattern containing note duration information, but not assigning specific frequencies to use for each note; a 'frequency and rhythm' note-control sub-pattern containing both note duration and some guidance to the generative music engine SKME as to the frequency to use for each note; and a 'forced frequency' note-control sub-pattern containing note duration, temporal positioning and explicit frequency information to use for each note.
  • Sequence-control sub-patterns can be used to specify the sequence in which the note-control sub-patterns are played, and each note-control sub-pattern may also specify ranges of velocities and other musical information to be used in playing each note.
  • the music engine SKME allows the use of multiple sub-patterns in any generative pattern sequence.
  • Step 30 of triggering the generative pattern sequence acts through step 31 to determine whether there are any sequence-control sub-patterns operative. If not, a note-control sub-pattern is chosen at random in step 32 from a defined set; each note-control sub-pattern of this set may be assigned a value that determines its relative probability of being chosen. Once it is determined in step 33 that the selected note-control sub-pattern is finished, another (or the same) note-control sub-pattern is selected similarly from the set. The generative pattern sequence continues to play in this manner until instructed otherwise.
  • If the result of step 31 indicates that there is one or more sequence-control sub-patterns operative, then a sequence-control sub-pattern is chosen at random in step 34 from the defined set; each sequence-control sub-pattern may be assigned a value that determines its relative probability of being chosen. Once a sequence-control sub-pattern has been selected in step 34, it is consulted to determine in step 35 a sequence of one or more note-control sub-patterns to play. As each note-control sub-pattern comes to an end, step 36 prompts a decision in step 37 as to whether each and every specified note-control sub-pattern of the operative sequence has played for the appropriate number of times.
  • If the answer is NO, then the next note-control sub-pattern is brought into operation through step 35, whereas if the answer is YES another, or the same, sequence-control sub-pattern is selected through repetition of step 34. As before, the generative pattern sequence continues to play in this manner until instructed otherwise.
  • Each sequence-control sub-pattern defines the note-control sub-pattern(s) to be selected in an ordered list, where each entry in the list is given a combination of: (a) a specific note-control sub-pattern to play, or a range of note-control sub-patterns from which the one to play is chosen according to a relative probability weighting; and (b) a value which defines the number of times to repeat the selected note-control sub-pattern, before the next sequence-control sub-pattern is selected.
  • the number of repetitions may be defined as a fixed value (e.g. 1), as a range of values (e.g. repeat between 2 and 5 times), or as a special value indicating that the specified note-control sub-pattern should be repeated continuously.
  • various rules internal to the music engine SKME may be used to determine the exact pitch, duration and temporal position of the notes to be played. For example, if a 'rhythm' note-control sub-pattern is in operation at a particular point in the generative pattern sequence, then the scale rule, harmony rule and next-note rule within the music engine SKME for that 'triggered voice' will be consulted to obtain the exact notes. Alternatively, if the 'forced frequency' note-control sub-pattern is operational, no internal rules need be consulted since all the note information is already specified. Furthermore, for the case of 'frequency and rhythm', the music engine SKME combines the given frequency offset information with its rules and other critical information such as the root of the current scale and the range of available pitch values for the voice in question.
  • rules and other parameters affecting composition (e.g. tempo) within the music engine SKME are defined in memory, specifically within the SSFIO 7, and its real-time instantiation of the compositional objects 9.
  • Use of rules and parameters within the music engine SKME form part of the continual compositional process for other voice objects within the system.
  • Figure 4 illustrates this more general process based on examples of scale and harmony rules shown at (1) and (2) respectively.
  • the scale rule is illustrated at (1) with shaded blocks indicating a non-zero probability of choosing that interval offset from a designated scale root note.
  • the octave Ove, major third M3 and fifth 5 are the most likely choices, followed by M2, 4, M6 and M7; the rest will never be chosen. Sequences that may be generated by the system from this are shown below the blocks, and in this respect the octave has been chosen most often followed by the major third and the fifth.
  • the resulting sequence of notes output from the system in this example is C, E, C, D, G, A, E, D, C, G, E, B, C, F, as illustrated at (1) of Figure 4.
  • the harmony rule defines how the system may choose the pitches of notes when other notes are playing, that is to say, how those pitches should harmonise together.
  • only the octave and major second are indicated (by shading) to be selected. This means that when the pitch for a voice is chosen, it must be either the same pitch as, or a major second from, all other notes currently being played.
  • rhythm rules applicable to the voice objects V1-V3 in this example give rise to a generated sequence of notes as follows: voice V1 starts playing a note, then voice V2 starts playing a note, then voice V3 starts playing a note, and then after all notes have ended, voice V2 starts playing another note, followed by voice V1 and then voice V3.
  • In the illustrated example, voice V1 is playing the fifth; the pitch for voice V2 must therefore be either the same as (Ove) or a major second above (M2) the fifth. In the case illustrated, it is chosen to be the same, and so the fifth is chosen for voice V2 too.
  • When voice V3 starts playing it must harmonise with both voices V1 and V2, so the pitch chosen must be the same as, or a major second above, that of voices V1 and V2. As illustrated, the system chooses voice V3 to be a major second above, therefore giving pitch offset M6 from the scale root.
  • the actual pitches and harmonisation of that sequence are determined by the composer 10 using several items of information, namely: (a) the note-control sub-pattern operational at that moment; (b) the scale, rhythm, harmony and next-note rules, depending upon the type of the note-control sub-sequence; and (c) any piece-level rules which take into account the behaviour of other voices within the piece.
  • When the music engine SKME is in dynamic (i.e. composing and playing) mode, it typically contains a number of voice compositional objects 9.
  • the composer 10 composes a sequence of notes for each of these and makes sure they obey the various rules. The process involved is illustrated in the flow diagram of Figure 5.
  • the music engine SKME responds to an external trigger applied at step 51, and the API 6 through step 52 instructs a voice 1 in step 53 to register that it must start a sequence.
  • Voice 1 and the voices 2 to N in step 54 have their own rules, and the composer 10 ensures that the relevant rules are obeyed when utilising any of the voices 1 to N. More particularly, the composer 10 responds in step 55 to the instruction of step 53 for voice 1 to start a sequence, by starting the generative pattern sequence sub-system of Figure 3. This sends note-control sub-sequences to the trigger voice (voice 1 in this example), but the composer 10 makes sure the resulting notes harmonise with the other voices in the piece. The outcome via the conductor 12 in step 56 is played in step 57.
  • the generative pattern sequence triggered will play forever, or until the system is instructed otherwise. If a sequence-control sub-pattern is used to define a generative pattern sequence such that the final note-control sub-pattern is one which plays silence (rest notes) in an infinite loop, then when this pattern sequence is selected the voice will become effectively 'inactive' until another trigger is detected. Further triggering events for the same generative pattern sequence may sound different, since the process is generative and since the rules in use by the piece, or the scale, harmony or next-note rules of the trigger voice, may have changed (either via interaction through the API 6 or via internal music engine SKME changes).
  • the sounds used to 'render' each note, whether from triggered sequences or generative voices may be played either through the MIDI sounds or the samples of the rendering objects 13, or via software of the synthesiser engine 14 which may add digital signal processing effects such as, for example, filter sweeps, reverberation and chorus.
  • the entire process can be used to generate musical event information that is then fed into, and may thus control, other processing units within the system such as synthesiser related units allowing the triggering of generative sound effects.
  • Voices can also be added which make use of the software synthesiser engine 14 to generate non note-based effects such as sound washes and ambient environmental sounds, such as chimes, wind and other organic sounds.
  • the method and system of the invention are applicable with advantage in a networked system. In particular, they may be used for the purpose of networked musical "jamming" (joint composition).
  • Figure 6 illustrates an example of their application in this context.
  • two wireless networked devices 60 each include a generative music system (not shown in full detail) of the form described above, the devices 60 being in this respect typical of a multiplicity of digital devices linked together for wireless communication in the relevant network.
  • Each device 60 is in wireless communication with an (optional) base unit 65.
  • Each device 60 also includes an integrator 61 that receives information via wireless messages transmitted to it from the base unit 65 (and from other network devices), and also internally from within its own generative music system.
  • the musical sounds generated by the devices 60 may also be controlled or influenced by the relative positions of the devices 60 themselves and/or the central unit 65.
  • the information on relative positions could be achieved by message passing either directly between the devices 60 or between each device 60 and the central unit 65.
  • Each device 60 receives information from the SSFIO 62 and also from the composer 64 that is linked to it through the compositional objects 63, of its generative music system.
  • the information from the SSFIO 62 describes its current musical 'behaviour', whereas that from the composer 64 describes the current state of the musical output from the device 60.
  • the information from all three sources is used within the device 60 to make changes to the SSFIO 62 so as to affect the future musical behaviour of the device 60.
  • any wireless messages passing between the devices 60, or between a device 60 and the central unit 65, require only a very small bandwidth.
  • Such messages may be effectively small files which can: (a) define explicitly the compositional rules or other elements to be used to compose/generate audio in the relevant receiving device 60; (b) describe instructions which are effective to modify the compositional rules in an integrative fashion within the receiving device 60; and (c) effect the changes required in near real-time in the receiving device 60.
  • the messages may moreover contain small audio sample files or descriptions of sound processing units or effects (e.g. synthesiser unit descriptors).
  • the method and system of the invention may be utilised to facilitate audio signification of events relating to or associated with the context within which they are operative.
  • the method and system may be implemented within a telephone or personal computer to provide built-in generative ring or other tones in response to one or more monitored events.
  • sound effects or other audio elements may then be generated or modified to reflect the change in status or detail of the event that is being monitored.
  • Generative effects signifying, for example, incoming news events or weather announcements, may also be incorporated. These effects can aid in the signification and appreciation of time-sensitive or time-series information.
  • Figure 7 shows an embodiment involving the use of units that communicate positional and/or orientational information to a generative music system, so that the output of the system is dependent upon that information.
  • a plurality of tokens or units 71, each containing a positioning system, e.g. a Global Positioning System (GPS) 72, transmit wireless messages at intervals to a device 73 that corresponds to the device 60 of the arrangement of Figure 6. More particularly, the device 73 incorporates an integrator 74 that operates in conjunction with the generative music system of the device 73 to monitor the incoming messages from the individual units or tokens 71; the units 71 may be interrogated in turn to prompt the transmission of the messages.
  • the messages include positional and/or orientational information concerning the individually identified units 71 and the integrator 74 passes appropriately modified messages to the SSFIO 75 of the generative music system. The result is that the musical output of the device 73 is dependent upon the positions and/or orientations of the various tokens or units 71, in such a way that arranging the units 71 in different geometric combinations and alignments achieves different audio effects or compositions.
  • the Positioning System feature of the units 71 may be omitted, and each unit may instead be arranged to determine its absolute or relative position and/or orientation in some other way.
  • positional/inclination sensors may be provided on each of the units 71, these being used to supply the information that is needed to control or influence sound/music generation within the device 73. It would also be possible for the units 71 to determine their own relative positions and inclinations - for example by means of proximity sensors - with the resultant information being reported back by means of a message or messages sent to the device 73.
  • In the embodiment of Figure 8, the music-generating device 73 is contained within a flat base unit 80 which is arranged to be connected to an external loudspeaker 82 by a lead 84.
  • the speaker could be built into the base unit 80 itself.
  • each card 86 includes, preferably on its rear surface, a card locator 90 which, in association with suitable electronics (not shown) within the base unit, enables the system to determine exactly where on the surface each individual card has been located.
  • the generative musical sound engine within the base unit 80 is controlled or influenced by the number of cards on the surface, the card types, type combinations, and/or the cards' absolute or relative positions and/or orientations. This is achieved by message-passing between the cards and/or between the cards and the base unit.
  • Each type of card may control or influence the musical sound being generated in its own individual way.
  • One card might, for example, control the bass voice, another the guitar, another the drums and so on.
  • Other types of card may influence the musical sound generation in other ways, for example controlling the volume of one or more voices, pitch, timbre, rhythm and so on.
  • Individual cards may also contain pre-defined musical or sound "templates" to produce a basic or skeleton composition which may then be influenced or augmented by other cards.
  • One card type 98 may be a master card, defining the overall rule set. This may need to be placed in a specific region of the base unit's upper surface, for example the left-hand corner, as shown in Figure 8.
  • the master card 98 could also define or control the contribution to the composition of the other cards. It might, for example, instruct the system that cards of type 1 should be treated as the bass line, cards of type 2 as the guitar line and so on. In that way, the use of different master cards may fundamentally affect the overall composition.
  • the cards 86 and/or 98 may be user-programmable, provided of course that the card location element 90 includes some type of non-volatile memory. Users could for example download template definitions, master card definitions, sample sound or music descriptions and musical rules, individual parameter values and so on from a central server (e.g. via the Internet).
  • each card may include a printed design and/or text which differs according to card type.
  • the cards may then be collected and/or swapped. It will be understood of course that the embodiment of Figure 8 may be extended to other toy or collecting scenarios, in which the cards 86,98 may be replaced with books, toys or other collectable units.
  • In Figure 10 there is shown an alternative embodiment in which the cards 86, 98 are replaced with building blocks 102.
  • the blocks may be connectable in some way, for example in the manner of Lego™ blocks or Sticklebricks™, to enable towers 104 to be built.
  • the blocks are placed on, or may be securable to, a base unit 100 which operates in a manner similar to that of the base unit 80 of Figure 8.
  • the blocks pass messages between themselves and/or between themselves and the base unit.
  • the musical sounds generated by the system may in this embodiment depend on the position of bricks in the third dimension, as well as the positioning on the surface of the base unit 100.
  • the height of the towers 104, their locations, rotational positions, angles of attachment and the type of blocks within them may all affect the sound or music generation.
  • the relative positions of the blocks/units may determine the flow of the music - for example each block may represent a single bar of music with the bars being played according to a left-to-right or top-to-bottom progression.
  • the device shown in Figure 10 may be configured so that the audio output reflects how close the user is to a desired configuration (for example in solving a puzzle associated with the blocks).
  • the output may depend upon how close the user has got to laying out the cards in a predefined configuration, initially unknown to the user.
  • the positioning of blocks may be an interesting way to control the temporal evolution of the piece of music. In such a way the user can mix and improvise by repositioning blocks at particular points in time.
  • the individual objects could take the form of balls, preferably within an enclosed container such as a sphere.
  • In the embodiment of Figure 11, a base unit 110 has a plurality of parallel slots 112 for receiving individual cards such as the cards 86, 98 of Figure 8.
  • the base unit is connected to an external loudspeaker 111; alternatively, the loudspeaker may be integrated within the unit.
  • a collector of the cards 86,98 places them, as desired, into the various slots 112 in order to control or influence the musical sounds being played by the speaker 111.
  • the system detects how many cards have been placed into slots, the card types, combinations and the card locations in order to control the music or sound generation.
  • In the embodiment of Figure 12, a generative sound or music system is contained within a speaker enclosure 120 having a speaker 112.
  • the enclosure has a detector 124 enabling it to receive messages from movable articles of furniture, crockery, ornaments and the like.
  • the detector receives wireless signals 126 from a vase 128, a chair 130 and coffee cups 132.
  • the sound or music generator may be controlled or influenced by the absolute or relative positioning within the room of the various movable items, as notified to the detector 124.
  • the sound or music may be controlled or influenced in some other way by the messages being passed (i.e. without reference to absolute or relative positions).
  • Each unit may, for example, pass a unique identifying message, and the sound or music may be controlled according to the presence or absence of certain messages, or combinations thereof.
  • the enclosure 120 may be static within the room, and may for example form part of a hi-fi system. The system could then generate and play background music and/or sounds in dependence upon the configuration of the furniture and/or other items within the room.
  • the enclosure 120 could be portable, and could be carried on the owner's person. Then, as the owner walks around the room, the music and/or sounds generated by the system will automatically vary.
  • the enclosure 120 could be dispensed with and one or more speakers built into the individual objects themselves.
  • Figure 13 shows yet a further embodiment in which the generative engine is contained within a small belt-mounted unit 130 which is connected by means of a lead 132 to a pair of earphones 134.
  • a detector 136 on the unit 130 receives messages from jewellery/clothing or other items being worn by the user, for example a watch 138 and a bracelet 140.
  • Each piece of jewellery includes a transponder 142, 144 permitting communication with the detector 136, as indicated by the dotted line 146.
  • message-passing between the individual items may occur, as indicated by the dotted line 148, as well as between the items and the sensor 136.
  • Messages transmitted to the sensor could control the generative engine based upon, for example, the number, type and sequence of individual beads or other elements on the bracelet 144.
  • Figure 14 illustrates the way in which the above principles can be extended to co-ordinating musical output through automated message passing.
  • This embodiment consists of a number of toys, figures or other units 142, each of which contains within it a generative music engine and a loudspeaker. Each unit generates and plays its own music while at the same time transmitting and receiving messages 144,146 from other similar units and/or an optional master unit 148. This results in an automated self- organising orchestra of networked devices.
  • While each device is composing and rendering its own audio output, it is simultaneously listening out to ensure that this output harmonises appropriately with everything else that is happening. Instructions and data are passed between the devices in the manner described above with reference to Figure 6, that is, by automated message passing, but at least some of the devices may also receive at least some information by listening out for the audio output of other devices. Additional means of supplying input to the devices, whether automated or human, could also be provided.
  • One or more of the devices might assume control of different areas of the group composition. For example, a "drummer" or a "conductor" device might dictate the rhythm and tempo. Adding new devices to the group enhances and augments the audio experience.
  • the toys or other items 142 may be representative of specific individuals within the "orchestra", and may be provided with "instruments" appropriate to their role. Each device may have a mechanical action appropriate to that role, as well: for example, a "drummer" may appear to play the drums and a "saxophonist" the saxophone.
  • the music generated by the "orchestra" may be controlled or influenced by the number, type, relative locations, orientation, combination and proximity of the individual devices 142, and/or the ambient environment(s) in which they are located (e.g. light levels, humidity, temperature etc).
  • Each device may have a sensor (not shown) for sensing one or more details of the ambient environment, and generating a message based on the sensed values.
  • a master device 148 may control certain fundamentals of the composition, for example key, rhythm, tempo and so on.
  • the master device 148 may also define or control the individual voices of the players 142 so that, for instance, it might modify the sound of one of the instruments from that of a saxophone to that of a trumpet.
  • In an alternative arrangement, the individual units 142 do not generate their own individual sounds. Instead, composition and rendering of the audio output is carried out entirely within the unit 148, and played by a suitable speaker on that unit.

Abstract

The invention relates to a generative sound system comprising a generative audio engine controlled or influenced by messages received from a plurality of individual units or items (71). In various embodiments these items may comprise collectable cards (86), building blocks (102), items of furniture, ornaments and the like (128, 130, 132), portable electronic devices such as mobile telephones (60), and toys, models or figures (142). The individual units or items may exchange messages with one another and/or with a base unit.
PCT/GB2001/001971 2000-05-05 2001-05-04 Production automatisee de sequences sonores WO2001086625A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU58529/01A AU5852901A (en) 2000-05-05 2001-05-04 Automated generation of sound sequences

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
GB0010969A GB0010969D0 (en) 2000-05-05 2000-05-05 Automated generation of sound sequences
GB0010967.8 2000-05-05
GB0010969.4 2000-05-05
GB0010967A GB0010967D0 (en) 2000-05-05 2000-05-05 Automated generation of sound sequences
GB0011178.1 2000-05-09
GB0011178A GB0011178D0 (en) 2000-05-09 2000-05-09 Automated generation of sound sequences
GB0022164A GB0022164D0 (en) 2000-09-11 2000-09-11 Automated generation of sound sequences
GB0022164.8 2000-09-11
GB0030834A GB0030834D0 (en) 2000-05-05 2000-12-18 Automated generation of sound sequences
GB0030834.6 2000-12-18

Publications (2)

Publication Number Publication Date
WO2001086625A2 true WO2001086625A2 (fr) 2001-11-15
WO2001086625A3 WO2001086625A3 (fr) 2002-04-18

Family

ID=27515939

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2001/001971 WO2001086625A2 (fr) 2000-05-05 2001-05-04 Production automatisee de sequences sonores

Country Status (2)

Country Link
AU (1) AU5852901A (fr)
WO (1) WO2001086625A2 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6815600B2 (en) 2002-11-12 2004-11-09 Alain Georges Systems and methods for creating, modifying, interacting with and playing musical compositions
US6972363B2 (en) 2002-01-04 2005-12-06 Medialab Solutions Llc Systems and methods for creating, modifying, interacting with and playing musical compositions
US7076035B2 (en) 2002-01-04 2006-07-11 Medialab Solutions Llc Methods for providing on-hold music using auto-composition
US7169996B2 (en) 2002-11-12 2007-01-30 Medialab Solutions Llc Systems and methods for generating music using data/music data file transmitted/received via a network
WO2007124469A3 (fr) * 2006-04-21 2008-07-31 Vergence Entertainment Llc Dispositifs d'interaction musicale
US8134061B2 (en) 2006-04-21 2012-03-13 Vergence Entertainment Llc System for musically interacting avatars
US8257157B2 (en) 2008-02-04 2012-09-04 Polchin George C Physical data building blocks system for video game interaction
US9818386B2 (en) 1999-10-19 2017-11-14 Medialab Solutions Corp. Interactive digital music recorder and player
US20220114993A1 (en) * 2018-09-25 2022-04-14 Gestrument Ab Instrument and method for real-time music generation
US11842710B2 (en) 2021-03-31 2023-12-12 DAACI Limited Generative composition using form atom heuristics

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006043929A1 (fr) 2004-10-12 2006-04-27 Madwaves (Uk) Limited Systemes et procedes de remixage de musique

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5633985A (en) * 1990-09-26 1997-05-27 Severson; Frederick E. Method of generating continuous non-looped sound effects
US5166463A (en) * 1991-10-21 1992-11-24 Steven Weber Motion orchestration system
US5920024A (en) * 1996-01-02 1999-07-06 Moore; Steven Jerome Apparatus and method for coupling sound to motion
US5908996A (en) * 1997-10-24 1999-06-01 Timewarp Technologies Ltd Device for controlling a musical performance
WO2000077770A1 (fr) * 1999-06-09 2000-12-21 Innoplay Aps Dispositif de composition et d'arrangement musical
US6198034B1 (en) * 1999-12-08 2001-03-06 Ronald O. Beach Electronic tone generation system and method

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9818386B2 (en) 1999-10-19 2017-11-14 Medialab Solutions Corp. Interactive digital music recorder and player
US6972363B2 (en) 2002-01-04 2005-12-06 Medialab Solutions Llc Systems and methods for creating, modifying, interacting with and playing musical compositions
US7102069B2 (en) 2002-01-04 2006-09-05 Alain Georges Systems and methods for creating, modifying, interacting with and playing musical compositions
US7076035B2 (en) 2002-01-04 2006-07-11 Medialab Solutions Llc Methods for providing on-hold music using auto-composition
US6977335B2 (en) 2002-11-12 2005-12-20 Medialab Solutions Llc Systems and methods for creating, modifying, interacting with and playing musical compositions
US7169996B2 (en) 2002-11-12 2007-01-30 Medialab Solutions Llc Systems and methods for generating music using data/music data file transmitted/received via a network
US6815600B2 (en) 2002-11-12 2004-11-09 Alain Georges Systems and methods for creating, modifying, interacting with and playing musical compositions
US6979767B2 (en) 2002-11-12 2005-12-27 Medialab Solutions Llc Systems and methods for creating, modifying, interacting with and playing musical compositions
US7015389B2 (en) 2002-11-12 2006-03-21 Medialab Solutions Llc Systems and methods for creating, modifying, interacting with and playing musical compositions
US7022906B2 (en) 2002-11-12 2006-04-04 Media Lab Solutions Llc Systems and methods for creating, modifying, interacting with and playing musical compositions
US7026534B2 (en) 2002-11-12 2006-04-11 Medialab Solutions Llc Systems and methods for creating, modifying, interacting with and playing musical compositions
US6958441B2 (en) 2002-11-12 2005-10-25 Alain Georges Systems and methods for creating, modifying, interacting with and playing musical compositions
US6916978B2 (en) 2002-11-12 2005-07-12 Alain Georges Systems and methods for creating, modifying, interacting with and playing musical compositions
US6960714B2 (en) 2002-11-12 2005-11-01 Media Lab Solutions Llc Systems and methods for creating, modifying, interacting with and playing musical compositions
US6897368B2 (en) 2002-11-12 2005-05-24 Alain Georges Systems and methods for creating, modifying, interacting with and playing musical compositions
JP2009534711A (ja) * 2006-04-21 2009-09-24 ヴェルジェンス エンターテインメント エルエルシー 音楽について相互作用するデバイス
US8134061B2 (en) 2006-04-21 2012-03-13 Vergence Entertainment Llc System for musically interacting avatars
US8324492B2 (en) 2006-04-21 2012-12-04 Vergence Entertainment Llc Musically interacting devices
WO2007124469A3 (fr) * 2006-04-21 2008-07-31 Vergence Entertainment Llc Dispositifs d'interaction musicale
US8257157B2 (en) 2008-02-04 2012-09-04 Polchin George C Physical data building blocks system for video game interaction
US20220114993A1 (en) * 2018-09-25 2022-04-14 Gestrument Ab Instrument and method for real-time music generation
EP3857539A4 (fr) * 2018-09-25 2022-06-29 Reactional Music Group AB Instrument et procédé pour la production de musique en temps réel
US11842710B2 (en) 2021-03-31 2023-12-12 DAACI Limited Generative composition using form atom heuristics
US11887568B2 (en) 2021-03-31 2024-01-30 DAACI Limited Generative composition with defined form atom heuristics

Also Published As

Publication number Publication date
AU5852901A (en) 2001-11-20
WO2001086625A3 (fr) 2002-04-18

Similar Documents

Publication Publication Date Title
US6093880A (en) System for prioritizing audio for a virtual environment
US6975995B2 (en) Network based music playing/song accompanying service system and method
JP3659149B2 (ja) 演奏情報変換方法、演奏情報変換装置、記録媒体および音源装置
US20030045274A1 (en) Mobile communication terminal, sensor unit, musical tone generating system, musical tone generating apparatus, musical tone information providing method, and program
CN103021390B (zh) 通过独立于音乐再现装置的信息处理装置显示音乐再现的内容
US7498504B2 (en) Cellular automata music generator
WO2002077585A1 (fr) Systeme et procede de creation et d'arrangement musicaux
Weinberg The aesthetics, history and future challenges of interconnected music networks
WO2001086625A2 (fr) Production automatisee de sequences sonores
WO2001086628A2 (fr) Production informatisee de sequences sonores
WO2001086627A2 (fr) Generation automatisee de sequences sonores
JP2001331175A (ja) 副旋律生成装置及び方法並びに記憶媒体
JP5967564B2 (ja) 電子オルゴール
JP3654143B2 (ja) 時系列データの読出制御装置、演奏制御装置、映像再生制御装置、および、時系列データの読出制御方法、演奏制御方法、映像再生制御方法
JP4700351B2 (ja) マルチユーザ環境の制御
WO2001086630A2 (fr) Generation automatisee de sequences de sons
WO2001086626A2 (fr) Generation automatique de sequences de sons
WO2001086629A2 (fr) Generation automatisee de sequences sonores
CN107943279A (zh) 智能穿戴设备及工作方法、具有存储功能的装置
JP2007156280A (ja) 音響再生装置、音響再生方法および音響再生プログラム
JP4983012B2 (ja) 楽曲再生において立体音響効果を付加する装置およびプログラム
JP7285175B2 (ja) 楽音処理装置、及び楽音処理方法
Weinberg et al. ZooZBeat: a Gesture-based Mobile Music Studio.
JP2007258885A (ja) 情報提供システム、情報提供サーバ及び携帯端末等
JP2003108130A (ja) 楽曲再生方法及び携帯電話装置

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP