WO2001086625A2 - Automated generation of sound sequences - Google Patents

Automated generation of sound sequences

Info

Publication number: WO2001086625A2
Authority: WO
Grant status: Application
Prior art keywords: system, generative, music, audio, note
Application number: PCT/GB2001/001971
Other languages: French (fr)
Other versions: WO2001086625A3 (en)
Inventors: John Tim Cole, Murray Peter Cole
Original Assignee: Sseyo Limited

Classifications

All of the following classifications fall within G10H (electrophonic musical instruments):

    • G10H1/0058 Transmission between separate instruments or between individual components of a musical system
    • G10H1/0025 Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
    • G10H1/26 Selecting circuits for automatically producing a series of tones
    • G10H2210/026 Background music for games, e.g. videogames
    • G10H2210/145 Composing rules, e.g. harmonic or musical rules, for use in automatic composition; rule generation algorithms therefor
    • G10H2220/145 Multiplayer musical games, e.g. karaoke-like multiplayer videogames
    • G10H2220/321 Garment sensors, i.e. musical control means with trigger surfaces or joint angle sensors, worn as a garment by the player, e.g. bracelet, intelligent clothing
    • G10H2220/351 Environmental parameters, e.g. temperature, ambient light, atmospheric pressure, humidity, used as input for musical purposes
    • G10H2230/015 PDA [personal digital assistant] or palmtop computing devices used for musical purposes, e.g. portable music players, tablet computers, e-readers or smart phones in which mobile telephony functions need not be used
    • G10H2240/056 MIDI or other note-oriented file format
    • G10H2240/061 MP3, i.e. MPEG-1 or MPEG-2 Audio Layer III, lossy audio compression
    • G10H2240/175 Transmission of music data for jam sessions or musical collaboration through a network, e.g. for composition, ensemble playing or repeating; compensation of network or internet delays therefor
    • G10H2240/251 Mobile telephone transmission, i.e. transmitting, accessing or controlling music data wirelessly via a wireless or mobile telephone receiver, analog or digital, e.g. DECT, GSM, UMTS
    • G10H2240/305 Internet or TCP/IP protocol use for any electrophonic musical instrument data or musical parameter transmission purposes
    • G10H2240/311 MIDI transmission

Abstract

A generative sound system has a generative audio engine which is controlled or influenced by means of messages received from a plurality of individual articles or units (71). The articles may, in a variety of embodiments, include collectable cards (86), building blocks (102), articles of furniture, ornaments and so on (128, 130, 132), portable electronic devices such as mobile phones (60), and toys, models or figures (142). The individual articles or units may exchange messages with each other and/or with a base unit.

Description

Automated Generation of Sound Sequences

This invention relates to methods and systems for automated generation of sound sequences, and especially (though not exclusively) of sound sequences in the form of music.

The automated creation of music has a long history, going back at least as far as Mozart's use of musical dice. One of the first musical works generated by a computer was Lejaren Hiller's Illiac Suite. Since that time, of course, the sophistication of computer-generated music, or more generally of audio sequences, has increased substantially.

Systems for creating musical sequences by computer may conveniently be divided into two areas, which have been called "non-generative" and "generative". Non-generative systems include deterministic systems which will produce the same sequences every time, along with systems that simply replay (perhaps in a random or other order) pre-composed sections of music. The vast majority of current systems which produce musical output make use of this type of approach, for example by selecting and playing a particular predefined sequence of notes at random when a key is pressed or a mouse button clicked. Generative Music Systems, on the other hand, may be considerably more complex. Such systems generate musical content, typically note by note, on the basis of a higher level of musical knowledge. Such systems are, either explicitly or implicitly, aware of a variety of musical rules which are used to control or influence the generation of the music. In some systems, the rules may operate purely on the individual notes being generated, without imposing any form of higher-order musical structure on the output; in such systems, any musical order that arises will be of an emergent nature. More sophisticated systems may include higher-level rules which can influence the overall musical structure. Generative Music Systems will normally create musical content "on the fly"; in other words, the musical sequences are built up note by note and phrase by phrase, starting at the beginning and finishing at the end. This means that, in contrast with some of the non-generative systems, the musical content can be generated and played in real time: there is no need, for example, for the whole of a phrase to be generated before its first few notes can be played.
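By way of illustration only, the note-by-note character of such a system can be sketched as follows; the scale and the "nearby step" rule are invented for this sketch and are not taken from the invention described below:

```python
import random

# Hypothetical illustration of "on the fly" generation: each note is chosen
# only as it is needed, so playback can begin before the phrase is complete.
SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale from the root

def next_note(previous):
    """A toy rule: prefer a scale degree near the previous one rather than
    jumping anywhere at random (an invented rule, purely for illustration)."""
    if previous is None:
        return random.choice(SCALE)
    candidates = [n for n in SCALE if abs(n - previous) <= 4]
    return random.choice(candidates)

def generate(length):
    note = None
    phrase = []
    for _ in range(length):
        note = next_note(note)  # each note could be rendered immediately
        phrase.append(note)
    return phrase

phrase = generate(8)
assert len(phrase) == 8 and all(n in SCALE for n in phrase)
```

Because `next_note` needs only the previous note, the sequence never has to exist in full before playback starts, which is the contrast with replay-based non-generative systems drawn above.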

For our present purposes, the essential features of a generative music system are that it generates musical content in a non-deterministic way, based upon a plurality of musical rules (which may either be implicit within the software or which may be explicitly specified by either the program writer or the user of the program). By analogy, a generative sound system produces non-deterministic sound sequences based upon sound-generation rules.

According to the present invention there is provided a generative audio system including a generative audio engine, the engine being controlled or influenced by messages received from a plurality of controlling items.

The messages may be transmitted and received via a wireless or a physical link, with any suitable protocol (such as SMS) being used.

Preferably, the controlling items are fully networked, so that bi-directional message-passing capabilities are provided. In that way, complex interactions may occur between the various elements, as the music or other sounds are being generated.

The audio engine may be controlled or influenced by the content of the messages being received, the type of messages, the number of messages, the timing of messages, and/or the presence or absence of messages of a particular type. A message sent by an individual controlling unit may identify the type of unit sending the message, along with its absolute or relative position and/or orientation. Where the sending unit has an audio engine of its own, the message may include information representative of the set-up and/or state of that audio engine (for example by means of appropriate parameters).
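A minimal sketch of such a message, using hypothetical field and class names (the patent does not prescribe a concrete message format), might look like this:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ControlMessage:
    # Hypothetical fields mirroring the information a controlling item might
    # send: its type, its position/orientation, and (optionally) the set-up
    # of its own audio engine, expressed as parameters.
    unit_type: str
    position: Optional[tuple] = None      # absolute or relative (x, y)
    orientation: Optional[float] = None   # e.g. degrees
    engine_params: dict = field(default_factory=dict)

class AudioEngineStub:
    """Records incoming messages; a real engine would map observations such
    as message count, type and timing onto compositional parameters."""
    def __init__(self):
        self.received = []

    def on_message(self, msg: ControlMessage):
        self.received.append(msg)
        # e.g. the *number* of messages could drive musical density and the
        # *type* of the sending unit could select a voice (an assumption).
        return {"density": len(self.received), "voice": msg.unit_type}

engine = AudioEngineStub()
state = engine.on_message(ControlMessage("card", position=(2, 3)))
assert state == {"density": 1, "voice": "card"}
```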

The method and system of the invention are applicable to the use of generative music (or other sound) systems within a stand-alone or networked digital device, and in this respect have many aspects by which the audio output of such digital devices may be controlled and the exchange of musical (or other sound) information can be effected. In this regard the invention may be applied, for example, in the context of mobile telephones and other network communications facilities, in the field of electronic toys, and in respect of other audio-capable digital electronic devices.

Short data messages may be used to transfer control information between devices for an audio or music rendering system, generative or otherwise. The control information may be in the form of MIDI instructions or in a form which can operate or control a generative music system. This allows very small messages with low bandwidth requirements to facilitate rich and complex audio-musical behaviour of these devices. More specifically, the messages may trigger sound effects or musical/audio sequences, potentially integrating them within an audio output that is continuously being generated by the generative music system, or even within the context of the audio interpretation of the system. The triggering may be related to such events as changes in value of a stock portfolio, incoming news events, weather announcements, or networked musical performances.

The method and system of the invention may be used in the context of communication networks. In this context the devices of the network may each include an individual generative music system and messages transmitted between them may be used to co-ordinate their musical behaviour.

A device incorporating a generative music system of the invention may receive messages communicated, for example by wireless, relating to such matters as the musical activity, position or orientation of other devices. These other devices may be, for example, in the form of tags or tokens and the response of the generative music system to the messages may be such as to indicate the relationship, in musical terms and/or positionally, of the tags or tokens to one another and/or the receiving device.

The invention extends to a game, and to a puzzle, incorporating a generative audio system as previously described. The individual controlling items are in one embodiment collectable items of some sort, such as cards, building blocks, small toys, items of jewellery or the like.

In another embodiment, the individual controlling items comprise toys, figures or models which, taken together, make up a "band" or "orchestra". These may automatically interact, by means of message passing, so as co-operatively to generate and play a musical composition, with each member of the "band" or "orchestra" performing a different role or being representative of a particular noise or instrument. The figures may, together, form a peer-to-peer network or, alternatively, a controlling unit may be provided which controls the individual players. Wireless communication is preferred in this embodiment, although physical connections are not excluded.

It will be understood of course that the sounds or music could more generally be controlled or influenced by the number, type, relative locations, orientation, combination, proximity and composition of the various controlling elements, and the ambient environment(s) in which they are located (e.g. light levels, humidity, temperature etc).

A method and system for automated generation of sound sequences, and applications of such method and system, according to the preferred embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:

Figure 1 is a schematic representation of the preferred system of the invention;

Figure 2 is illustrative of objects that are involved in a component of the system of Figure 1;

Figure 3 is a flow-chart showing process steps involved in control sequencing within the method and system of the invention;

Figure 4 is illustrative of operation of the method and system of the invention in relation to scale and harmony rules;

Figure 5 illustrates operation of the method and system of the invention in relation to the triggering of note sequences and their integration into a musical work as currently being composed and played;

Figure 6 shows in schematic form devices that each utilise the method and system of the present invention and are in wireless communication with one another and/or a base station;

Figure 7 is illustrative of an arrangement which includes a device that utilises the method and system of the present invention for providing sound output dependent on the position and/or orientation of other items;

Figure 8 shows an embodiment in which sounds are generated in dependence upon the position of cards on a base unit;

Figure 9 shows schematically the cards used in the embodiment of Figure 8;

Figure 10 shows another embodiment in which sounds are generated in dependence upon the position of stacking blocks on a base unit;

Figure 11 shows another embodiment in which cards can be slotted into a base unit;

Figure 12 shows another embodiment in which sounds are generated in dependence upon the position of objects within a room;

Figure 13 shows a further embodiment in which the generation of music within a portable music player is influenced by jewellery worn by the user; and

Figure 14 shows yet a further embodiment consisting of a number of toys or figures making up an "orchestra".

The method and system to be described are for automated generation of sound sequences and for integrating data presented or interpreted in a musical context so as to generate an output reflecting this integration. Operation is within the context of generation of musical works, audio, sounds and sound environments in real-time. More especially, the method and system function in the manner of a 'generative music system' operating in real-time to enable user-interaction to be incorporated into the composition on-the-fly. The overall construction of the system is shown in Figure 1 and will now be described.

Referring to Figure 1, the system involves four high-level layers, namely, an applications layer I comprising software components 1 to 5, a layer II formed by an application programmer's interface (API) 6 for interfacing with a music engine SKME that is manifest in objects or components 7 to 14 of a layer III, and a hardware device layer IV comprising hardware components 15 to 19 that interact with the music engine SKME of layer III. Information flow between the software and hardware components of layers I to IV is represented in Figure 1 by arrow-heads on dotted-line interconnections, whereas arrow-heads on solid lines indicate an act of creation; for example, information in the composed-notes buffer 11 is used by the conductor 12, which is created by the soundscape 8.

The applications layer I determines the look, feel and physical instantiation of the music engine SKME. Users can interact with the music engine SKME through web applications 1, or through desktop computer applications 2 such as those marketed by the Applicants under their Registered Trade Mark KOAN as KOAN PRO and KOAN X; the music engine SKME may itself be such as marketed by the Applicants under the Registered Trade Mark KOAN. Interaction with the engine SKME may also be through applications on other diverse platforms 3 such as, for example, mobile telephones or electronic toys. All applications 1 to 3 ultimately communicate with the music engine SKME via the API 6, which protects the internals of the music engine SKME from the outside world and controls the way in which the applications can interact with it. Typically, the instructions sent to the API 6 from the applications 1 to 3 consist of commands that instruct the music engine SKME to carry out certain tasks, for example starting the composition and playback, and changing the settings of certain parameters (which may affect the way in which the music is composed/played). Depending on the needs of the individual applications, communication with the API 6 may be direct or via an intermediate API. In the present case communication to the API 6 is direct from the desktop computer applications 2, whereas it is via an intermediate browser plug-in API 4 and Java API 5 from applications 1 and 3 respectively.

The music engine SKME, which is held in memory within the system, comprises eight main components 7 to 14. Of these, SSFIO 7, which is for file input/output, holds a description of the parameters, rules and their settings used by the algorithms within the engine to compose. When the engine SKME is instructed via the API 6 to start composition/playback, a soundscape 8 is created in memory and this is responsible for creating a composer 10, conductor 12 and all the individual compositional objects 9 relating to the description of the piece as recorded in the SSFIO 7. The compositional objects are referred to by the composer 10 to decide what notes to compose next. The composed notes are stored in a number of buffers 11 along with a time-stamp which specifies when they should be played. The conductor 12 keeps time by receiving accurate time information from a timer device 19 of layer IV. When the current time exceeds the time-stamp of notes in the buffers 11, the relevant notes are removed from the buffers 11 and the information they contain (such as concerning pitch, amplitude, play time, the instrument to be used, etc.) is passed to the appropriate rendering objects 13. The rendering objects 13 determine how to play this information, in particular whether via a MIDI output device 17, or as an audio sample via an audio-out device 18, or via a synthesiser engine 14 which generates complex wave-forms for audio output directly, adding effects as needed.
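The relationship between the buffers 11 and the conductor 12 described above can be sketched as follows; the class and field names are illustrative only, not taken from the engine itself:

```python
import heapq
import itertools

class NoteBuffer:
    """Sketch of a composed-notes buffer: the composer pushes time-stamped
    notes, and the conductor removes any note whose time-stamp has been
    reached, forwarding it for rendering."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker for equal time-stamps

    def push(self, timestamp, note):
        # Notes may be composed well ahead of their play time.
        heapq.heappush(self._heap, (timestamp, next(self._seq), note))

    def pop_due(self, now):
        # The conductor calls this as time advances; only notes whose
        # time-stamp has passed are released.
        due = []
        while self._heap and self._heap[0][0] <= now:
            due.append(heapq.heappop(self._heap)[2])
        return due

buf = NoteBuffer()
buf.push(1.0, {"pitch": 60, "amp": 90})
buf.push(2.0, {"pitch": 64, "amp": 80})
assert buf.pop_due(1.5) == [{"pitch": 60, "amp": 90}]  # only the first is due
assert buf.pop_due(2.5) == [{"pitch": 64, "amp": 80}]
```

Buffering composed notes against a clock in this way is what lets composition run slightly ahead of playback while output still occurs in real time.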

The hardware devices layer IV includes in addition to the devices 17 to 19, a file system 15 that stores complete descriptions of rules and parameters used for individual compose/playback sessions in the system; each of these descriptions is stored as an 'SSfile', and many of these files may be stored by the file system 15. In addition, a MIDI in device 16 is included in layer IV to allow note and other musical-event information triggered by an external hardware object (such as a musical keyboard) to be passed into the music engine SKME and influence the composition in progress.

The system can be described as having essentially two operative states: a 'dynamic' state, in which it is composing, and a 'static' state, in which it is not composing. In the static state the system allows modification of the rules that are used by the algorithms to later compose and play music, and keeps a record, encapsulated in the SSFIO component 7, of various objects that are pertinent to the description of how the system may compose musical works. The system is also operative in the dynamic state to keep records of extra objects which hold information pertinent to the real-time composition and generation of these works. Many of these objects (the compositional objects 9, for example) are actual instantiations in memory of the descriptions contained in the SSFIO 7. Modification of the descriptions in the SSFIO 7 via the API layer II during the dynamic state results in those modifications being passed down to the compositional objects 9 so that the real-time composition changes accordingly.

Figure 2 shows a breakdown of the SSFIO component 7 into its constituent component objects, which exist when the system is in its static and dynamic states; the system creates real-time versions of these objects when composing and playing. In this respect, the stored SSfiles 20 each provide information as to 'SSObject(s)' 21 representing the different types of object that can be present in the description of a work; these objects may, for example, relate to piece, voice, scale rule, harmony rule or rhythm rule. Each of these objects has a list of 'SSFParameters' 22 that describe it; for example, they may relate to tempo, instrument and scale root. When an SSfile 20 is loaded into the music engine SKME, actual instances of these objects 21 and their parameters 22 are created, giving rise to 'SSFObjectInstance' 23 and 'SSFParameterInstance' 24 as illustrated in Figure 2.

Referring again to Figure 1, the user interacts with the system through applications 1 to 3 utilising the services of the API 6. The API 6 allows a number of functions to be effected such as 'start composing and playing', 'change the rules used in the composition', and 'change the parameters that control how the piece is played', including the configuration of effects etc. One of the important advantages of the described method and system is the ability to trigger generative pattern sequences in response to external events. The triggering of a generative pattern sequence has a range of possible outcomes that are defined by the pattern sequence itself. In the event that a generative pattern sequence is already in operation when another trigger event is received, the currently operational sequence is ended and the new one scheduled to start at the earliest opportunity.

Generative pattern sequences allow a variety of musical seed phrases of any length to be used in a piece, around which the music engine SKME can compose in real time as illustrated in Figure 3. More particularly, the generative pattern sequence contains a collection of one or more note-control sub-patterns, with or without one or more additional sequence-control sub-patterns. Three types of note-control sub-pattern can be created, namely: a 'rhythm' note-control sub-pattern containing note duration information, but not assigning specific frequencies to use for each note; a 'frequency and rhythm' note-control sub-pattern containing both note duration and some guidance to the generative music engine SKME as to the frequency to use for each note; and a 'forced frequency' note-control sub-pattern containing note duration, temporal positioning and explicit frequency information to use for each note. Sequence-control sub-patterns, on the other hand, can be used to specify the sequence in which the note-control sub-patterns are played, and each note-control sub-pattern may also specify ranges of velocities and other musical information to be used in playing each note. The music engine SKME allows the use of multiple sub-patterns in any generative pattern sequence.
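The three note-control sub-pattern types might be modelled as follows; this is a sketch, and the identifiers are not taken from the engine itself:

```python
from enum import Enum

class NotePattern(Enum):
    """The three note-control sub-pattern types described above."""
    RHYTHM = "rhythm"                      # durations only; pitch left to rules
    FREQ_AND_RHYTHM = "frequency_rhythm"   # durations plus pitch guidance
    FORCED_FREQUENCY = "forced_frequency"  # durations, positions, exact pitches

def needs_pitch_rules(kind: NotePattern) -> bool:
    """Only the 'forced frequency' type carries complete pitch information,
    so the other two must consult the engine's internal rules (scale,
    harmony, next-note) to obtain exact notes."""
    return kind is not NotePattern.FORCED_FREQUENCY

assert needs_pitch_rules(NotePattern.RHYTHM)
assert not needs_pitch_rules(NotePattern.FORCED_FREQUENCY)
```

The distinction drawn by `needs_pitch_rules` is the one the description returns to below: which internal rules, if any, are consulted depends on the operative sub-pattern type.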

Referring to Figure 3, the step 30 of triggering the generative pattern sequence acts through step 31 to determine whether there are any other sequence-control sub-patterns operative. If not, a note-control sub-pattern is chosen at random in step 32 from a defined set; each note-control sub-pattern of this set may be assigned a value that determines its relative probability of being chosen. Once it is determined in step 33 that the selected note-control sub-pattern is finished, another (or the same) note-control sub-pattern is selected similarly from the set. The generative pattern sequence continues to play in this manner until instructed otherwise.

If the result of step 31 indicates that there are one or more sequence-control sub-patterns operative, then a sequence-control sub-pattern is chosen at random in step 34 from the defined set; each sequence-control sub-pattern may be assigned a value that determines its relative probability of being chosen. Once a sequence-control sub-pattern has been selected in step 34, it is consulted to determine in step 35 a sequence of one or more note-control sub-patterns to play. As each note-control sub-pattern comes to an end, step 36 prompts a decision in step 37 as to whether each and every specified note-control sub-pattern of the operative sequence has played for the appropriate number of times. If the answer is NO, then the next note-control sub-pattern is brought into operation through step 35, whereas if the answer is YES another, or the same, sequence-control sub-pattern is selected through repetition of step 34. As before, the generative pattern sequence continues to play in this manner until instructed otherwise.

Each sequence-control sub-pattern defines the note-control sub-pattern(s) to be selected in an ordered list, where each entry in the list is given a combination of: (a) a specific note-control sub-pattern to play, or a range of note-control sub-patterns from which the one to play is chosen according to a relative probability weighting; and (b) a value which defines the number of times to repeat the selected note-control sub-pattern before the next sequence-control sub-pattern is selected. The number of repetitions may be defined as a fixed value (e.g. 1), as a range of values (e.g. repeat between 2 and 5 times), or as a special value indicating that the specified note-control sub-pattern should be repeated continuously.
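A hypothetical encoding of one such list entry, with weighted selection and a repeat range, might look like this; all names and the particular weights are invented for illustration:

```python
import random
from dataclasses import dataclass

REPEAT_FOREVER = -1  # a possible sentinel for the 'repeat continuously' value

@dataclass
class SequenceEntry:
    """One entry in a sequence-control sub-pattern: candidate note-control
    sub-patterns with relative probability weights, and a repeat count
    expressed as an inclusive (min, max) range; (1, 1) means exactly once."""
    candidates: list  # e.g. [("A", 3), ("B", 1)] as (name, weight) pairs
    repeats: tuple

    def choose(self):
        names = [n for n, _ in self.candidates]
        weights = [w for _, w in self.candidates]
        pattern = random.choices(names, weights=weights, k=1)[0]
        count = random.randint(*self.repeats)
        return pattern, count

entry = SequenceEntry(candidates=[("A", 3), ("B", 1)], repeats=(2, 5))
pattern, count = entry.choose()
assert pattern in ("A", "B") and 2 <= count <= 5
```

A full sequence-control sub-pattern would then simply be an ordered list of such entries, consumed one at a time as described above.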

Depending upon the note-control sub-pattern operational at any moment after a generative pattern sequence is triggered, various rules internal to the music engine SKME may be used to determine the exact pitch, duration and temporal position of the notes to be played. For example, if a 'rhythm' note-control sub-pattern is in operation at a particular point in the generative pattern sequence, then the scale rule, harmony rule and next-note rule within the music engine SKME for that 'triggered voice' will be consulted to obtain the exact notes. Alternatively, if the 'forced frequency' note-control sub-pattern is operational, no internal rules need be consulted since all the note information is already specified. Furthermore, for the case of 'frequency and rhythm', the music engine SKME combines the given frequency offset information with its rules and other critical information such as the root of the current scale and the range of available pitch values for the voice in question.

The rules and other parameters affecting composition (e.g. tempo) within the music engine SKME are defined in memory, specifically within the SSFIO 7 and its real-time instantiation of the compositional objects 9. The use of rules and parameters within the music engine SKME forms part of the continual compositional process for other voice objects within the system. Figure 4 illustrates this more general process based on examples of scale and harmony rules shown at (1) and (2) respectively.

Referring to Figure 4, the scale rule is illustrated at (1) with shaded blocks indicating a non-zero probability of choosing that interval offset from a designated scale root note. The larger the shaded block, the greater the probability of the system choosing that offset. Thus, for this example, the octave Ove, major third M3 and fifth 5 are the most likely choices, followed by M2, 4, M6 and M7; the rest will never be chosen. Sequences that may be generated by the system from this are shown below the blocks, and in this respect the octave has been chosen most often, followed by the major third and the fifth. With the scale root set in the system as C, the resulting sequence of notes output from the system in this example is C,E,C,D,G,A,E,D,C,G,E,B,C,F, as illustrated at (1) of Figure 4. The harmony rule defines how the system may choose the pitches of notes when other notes are playing, that is to say, how those pitches should harmonise together. In the example illustrated at (2) of Figure 4, only the octave and major second are indicated (by shading) as selectable. This means that when the pitch for a voice is chosen, it must be either the same pitch as, or a major second from, all other notes currently being played.
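A minimal sketch of these two rules follows; the specific weights and the semitone mapping are assumptions chosen only to match the example at (1) and (2) of Figure 4.

```python
import random

# Scale rule: non-zero weights mark the interval offsets that may be
# chosen from the scale root; larger weight = more likely (cf. (1) above).
SCALE_RULE = {"Ove": 4, "M2": 2, "M3": 4, "4": 2, "5": 4, "M6": 2, "M7": 2}

# Harmony rule: intervals a new note may form above every sounding note
# (cf. (2) above: unison/octave or a major second only).
HARMONY_RULE = {"Ove", "M2"}

SEMITONES = {"Ove": 0, "M2": 2, "M3": 4, "4": 5, "5": 7, "M6": 9, "M7": 11}

def choose_offset(scale_rule):
    """Pick an interval offset with probability proportional to its weight."""
    offsets, weights = zip(*scale_rule.items())
    return random.choices(offsets, weights=weights)[0]

def harmonises(candidate, sounding):
    """True if `candidate` forms an allowed interval (measured upwards,
    as in the worked example at (3)) with every sounding note."""
    allowed = {SEMITONES[h] for h in HARMONY_RULE}
    return all((SEMITONES[candidate] - SEMITONES[s]) % 12 in allowed
               for s in sounding)
```

Under these rules, with a fifth ('5') already sounding, only '5' itself or 'M6' (a major second above) passes `harmonises`, which is exactly the situation worked through at (3) of Figure 4 below.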

For the purpose of further explanation, consideration will be given to the example represented at (3) of Figure 4 involving three voice objects V1-V3. The rhythm rules applicable to the voice objects V1-V3 in this example give rise to a generated sequence of notes as follows: voice V1 starts playing a note, then voice V2 starts playing a note, then voice V3 starts playing a note, and then after all notes have ended, voice V2 starts playing another note, followed by voice V1 and then voice V3. With this scenario, the note from voice V2 must harmonise with that of voice V1 and the voice V3 note must harmonise with that of voice V2. If in these circumstances the voice V1 is, as illustrated by bold hatching, chosen with a pitch offset of a fifth from the scale root, the pitch for voice V2 must either be the same as (Ove) or a major second above (M2) the fifth. In the case illustrated, it is chosen to be the same, and so the fifth is chosen too. When voice V3 starts playing it must harmonise with both voices V1 and V2, so the pitch chosen must be the same as, or a major second above, that of voices V1 and V2. As illustrated, the system chooses voice V3 to be a major second above, therefore giving pitch offset M6 from the scale root.

After voice V3 all notes end, and the next note begins, as illustrated at (4) of Figure 4, with voice V2. This next note by voice V2 is governed by the next-note rule used by voice V2, and the last note played by voice V2. According to this rule, the system chooses pitch offset M2 for voice V2, and then harmonises voices V3 and V1 with it by choice of a major second for both of them. With the scale root set in the system to C, the entire generated sequence accordingly follows that indicated at (5) of Figure 4, where 'S' denotes a note starting and 'E' a note ending.

Thus, when sequences are generated in response to an external trigger, the actual pitches and harmonisation of that sequence are determined by the composer 10 using several items of information, namely: (a) the note-control sub-pattern operational at that moment; (b) the scale, rhythm, harmony and next-note rules, depending upon the type of the note-control sub-pattern; and (c) any piece-level rules which take into account the behaviour of other voices within the piece.

When the music engine SKME is in dynamic (i.e. composing and playing) mode, it typically contains a number of voice compositional objects 9. The composer 10 composes a sequence of notes for each of these and makes sure they obey the various rules. The process involved is illustrated in the flow diagram of Figure 5.

Referring to Figure 5, the music engine SKME responds to an external trigger applied at step 51, and the API 6 through step 52 instructs a voice 1 in step 53 to register that it must start a sequence. Voice 1 and the voices 2 to N in step 54, have their own rules, and the composer 10 ensures that the relevant rules are obeyed when utilising any of the voices 1 to N. More particularly, the composer 10 responds in step 55 to the instruction of step 53 for voice 1 to start a sequence, by starting the generative pattern sequence sub-system of Figure 3. This sends note-control sub-sequences to the trigger voice (voice 1 in this example), but the composer 10 makes sure the resulting notes harmonise with the other voices in the piece. The outcome via the conductor 12 in step 56 is played in step 57.

The generative pattern sequence triggered will play forever, or until the system is instructed otherwise. If a sequence-control sub-pattern is used to define a generative pattern sequence such that the final note-control sub-pattern is one which plays silence (rest notes) in an infinite loop, then when this pattern sequence is selected, the voice will become effectively 'inactive' until another trigger is detected. Further triggering events for the same generative pattern sequence may sound different because the process is generative, or because the rules in use by the piece, or the scale, harmony or next-note rules of the trigger voice, may have changed (either via interaction through the API 6 or via internal music engine SKME changes).

The sounds used to 'render' each note, whether from triggered sequences or generative voices, may be played either through the MIDI sounds or the samples of the rendering objects 13, or via software of the synthesiser engine 14, which may add digital signal processing effects such as, for example, filter sweeps, reverberation and chorus. The entire process can be used to generate musical event information that is then fed into, and may thus control, other processing units within the system, such as synthesiser-related units allowing the triggering of generative sound effects. Voices can also be added which make use of the software synthesiser engine 14 to generate non note-based effects such as sound washes and ambient environmental sounds, for example chimes, wind and other organic sounds.

The method and system of the invention are applicable with advantage in a networked system. In particular, they may be used for the purpose of networked musical "jamming" (joint composition). Figure 6 illustrates an example of their application in this context.

Referring to Figure 6, two wireless networked devices 60 (for example mobile phones, portable music systems, gaming consoles etc) each include a generative music system (not shown in full detail) of the form described above, the devices 60 being in this respect typical of a multiplicity of digital devices linked together for wireless communication in the relevant network. Each device 60 is in wireless communication with an (optional) base unit 65. Each device 60 also includes an integrator 61 that receives information by reception of messages transmitted to it by wireless from the base unit 65 (and other network devices), and also internally by information from within its own generative music system.

The musical sounds generated by the devices 60 may also be controlled or influenced by the relative positions of the devices 60 themselves and/or the central unit 65. The information on relative positions could be achieved by message passing either directly between the devices 60 or between each device 60 and the central unit 65.

Each device 60 receives information from the SSFIO 62 and also from the composer 64 that is linked to it through the compositional objects 63, of its generative music system. The information from the SSFIO 62 describes its current musical 'behaviour', whereas that from the composer 64 describes the current state of the musical output from the device 60. The information from all three sources is used within the device 60 to make changes to the SSFIO 62 so as to affect the future musical behaviour of the device 60.

Since each device 60 has its own generative music engine, any wireless messages passing between the devices 60, or between a device 60 and the central unit 65, require only a very small bandwidth. Such messages may be effectively small files which can: (a) define explicitly the compositional rules or other elements to be used to compose/generate audio in the relevant receiving device 60; (b) describe instructions which are effective to modify the compositional rules in an integrative fashion within the receiving device 60; and (c) effect the changes required in near real-time in the receiving device 60. The messages may moreover contain small audio sample files or descriptions of sound processing units or effects (e.g. synthesiser unit descriptors).
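Purely by way of illustration, such a small message file might look like the following; the JSON field names and the `apply_message` helper are hypothetical, introduced only to show roles (a) to (c).

```python
import json

# (a) a message that explicitly defines compositional elements, and
# (b) one that modifies existing rules integratively (field names assumed).
define_msg = {"type": "define", "tempo": 120,
              "scale_rule": {"Ove": 4, "M3": 4, "5": 4}}
modify_msg = {"type": "modify", "adjust": {"tempo": 8}}

def apply_message(state, msg):
    """(c) fold a received message into the local engine state."""
    if msg["type"] == "define":
        state.update({k: v for k, v in msg.items() if k != "type"})
    elif msg["type"] == "modify":
        for key, delta in msg["adjust"].items():
            state[key] = state.get(key, 0) + delta
    return state

# the wire form is only a few tens of bytes - far smaller than audio data
wire = json.dumps(modify_msg)
state = apply_message({"tempo": 120}, json.loads(wire))
```

Because each device composes locally, only such rule descriptions need cross the network, never the rendered audio itself.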

Furthermore, the method and system of the invention may be utilised to facilitate audio signification of events relating to or associated with the context within which they are operative. For example, the method and system may be implemented within a telephone or personal computer to provide built-in generative ring or other tones responsive to one or more monitored events. As events occur (e.g. changes in stock portfolio and/or share price) and the relevant information is communicated to the telephone or computer, sound effects or other audio elements may then be generated or modified to reflect the change in status or detail of the event that is being monitored. Generative effects, signifying, for example, incoming news events or weather announcements, may also be incorporated. These effects can aid in the signification and appreciation of time-sensitive or time-series information.

Turning now to Figure 7, this shows an embodiment involving the use of units that communicate positional and/or orientational information to a generative music system, so that the output of the system is dependent upon that information.

Referring to Figure 7, a plurality of tokens or units 71 each containing a positioning system e.g. a Global Positioning System (GPS) 72 transmit wireless messages at intervals to a device 73 that corresponds to the device 60 of the arrangement of Figure 6. More particularly, the device 73 incorporates an integrator 74 that operates in conjunction with the generative music system of the device 73 to monitor the incoming messages from the individual units or tokens 71; the units 71 may be interrogated in turn to prompt the transmission of the messages. The messages include positional and/or orientational information concerning the individually identified units 71 and the integrator 74 passes appropriately modified messages to the SSFIO 75 of the generative music system. The result is that the musical output of the device 73 is dependent upon the positions and/or orientations of the various tokens or units 71, in such a way that arranging the units 71 in different geometric combinations and alignments achieves different audio effects or compositions.
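One hypothetical way of turning the reported geometry into a compositional parameter (the mapping is not specified in the patent) is to derive a scalar from the token layout, for example the spread of the tokens about their centroid, and feed that to the engine:

```python
import math

def spread(positions):
    """Mean distance of the tokens from their centroid - a simple scalar
    summarising how spread out the layout is (positions are (x, y) pairs)."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    return sum(math.hypot(x - cx, y - cy) for x, y in positions) / len(positions)

def tempo_from_layout(positions, base=120.0, gain=4.0):
    """Map the layout scalar onto a tempo; `base` and `gain` are arbitrary."""
    return base + gain * spread(positions)
```

Different geometric arrangements of the units 71 then yield different parameter values, and hence different audio effects or compositions.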

Alternatively, the Positioning System feature of the units 71 may be omitted, and each unit may instead be arranged to determine its absolute or relative position and/or orientation in some other way. For example, positional/inclination sensors may be provided on each of the units 71, these being used to supply the information that is needed to control or influence sound/music generation within the device 73. It would also be possible for the units 71 to determine their own relative positions and inclinations - for example by means of proximity sensors - with the resultant information being reported back by means of a message or messages sent to the device 73.

In a local environment, the approach described generally above with reference to Figure 7 may be used in a number of specific applications, some of which will be described in more detail below with reference to Figures 8 to 14.

In Figure 8, the music-generating device 73 is contained within a flat base unit 80 which is arranged to be connected to an external loudspeaker 82 by a lead 84. Alternatively, the speaker could be built into the base unit 80 itself.

The flat upper part of the base unit is arranged to receive cards or tokens 86, either positioned anywhere the user desires on the surface or placed into pre-formed bays or slots (not shown). As shown in Figure 9, each card 86 includes, preferably on its rear surface, a card locator 90 which, in association with suitable electronics (not shown) within the base unit, enables the system to determine exactly where on the surface each individual card has been located.

The generative musical sound engine within the base unit 80 is controlled or influenced by the number of cards on the surface, the card types, type combinations, and/or the cards' absolute or relative positions and/or orientations. This is achieved by message-passing between the cards and/or between the cards and the base unit.

Each type of card may control or influence the musical sound being generated in its own individual way. One card might, for example, control the bass voice, another the guitar, another the drums and so on. Other types of card may influence the musical sound generation in other ways, for example controlling the volume of one or more voices, pitch, timbre, rhythm and so on. Individual cards may also contain pre-defined musical or sound "templates" to produce a basic or skeleton composition which may then be influenced or augmented by other cards.

One card type 98 may be a master card, defining the overall rule set. This may need to be placed in a specific region of the base unit's upper surface, for example the left-hand corner, as shown in Figure 8. In addition to including templates, where appropriate, the master card 98 could also define or control the contribution to the composition of the other cards. It might, for example, instruct the system that cards of type 1 should be treated as the bass line, cards of type 2 as the guitar line and so on. In that way, the use of different master cards may fundamentally affect the overall composition.
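A sketch of that idea, with hypothetical role names and card-type numbers:

```python
# The master card's rule set maps card types to musical roles (assumed data).
MASTER_CARD_RULES = {1: "bass", 2: "guitar", 3: "drums"}

def assign_voices(cards, master_rules):
    """Assign each detected card (id, type) the voice its type plays
    under the current master card; unknown types get no voice."""
    return [(card_id, master_rules.get(card_type, "unassigned"))
            for card_id, card_type in cards]

# cards detected on the base unit as (identifier, type) pairs
layout = [("c1", 1), ("c2", 2), ("c3", 9)]
voices = assign_voices(layout, MASTER_CARD_RULES)
```

Swapping in a different master card, i.e. a different rule mapping, changes every card's contribution at once, which is how different master cards can fundamentally affect the composition.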

The cards 86 and/or 98 may be user-programmable, provided of course that the card location element 90 includes some type of non-volatile memory. Users could for example download template definitions, master card definitions, sample sound or music descriptions and musical rules, individual parameter values and so on from a central server (e.g. via the Internet).

The front surface of each card may include a printed design and/or text which differs according to card type. The cards may then be collected and/or swapped. It will be understood of course that the embodiment of Figure 8 may be extended to other toy or collecting scenarios, in which the cards 86,98 may be replaced with books, toys or other collectable units.

Some further embodiments will now be described with reference to Figures 10 to 14. It is to be understood that each individual movable element within those embodiments may have any of the features, characteristics or functionality of the cards 86,98 of Figure 8 or the units 71 of Figure 7.

Turning first to Figure 10, there is shown an alternative embodiment in which the cards 86,98 are replaced with building blocks 102. The blocks may be connectable in some way - for example in the manner of Lego™ blocks or Sticklebricks™ - to enable towers 104 to be built. The blocks are placed on, or may be securable to, a base unit 100 which operates in a manner similar to that of the base unit 80 of Figure 8. The blocks pass messages between themselves and/or between themselves and the base unit.

The musical sounds generated by the system may in this embodiment depend on the position of bricks in the third dimension, as well as the positioning on the surface of the base unit 100. The height of the towers 104, their locations, rotational positions, angles of attachment and the type of blocks within them may all affect the sound or music generation.

In this and other embodiments, the relative positions of the blocks/units may determine the flow of the music - for example, each block may represent a single bar of music, with the bars being played according to a left-to-right or top-to-bottom progression. The device shown in Figure 10 may be configured so that the audio output reflects how close the user is to a desired configuration (for example in solving a puzzle associated with the blocks). A similar concept may be used in conjunction with the embodiment of Figure 8: for example, the output may depend upon how close the user has come to laying out the cards in a predefined configuration, initially unknown to the user.

The positioning of blocks may be an interesting way to control the temporal evolution of the piece of music. In such a way the user can mix and improvise by repositioning blocks at particular points in time.
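As an illustrative sketch (the coordinates and bar identifiers are assumptions), the one-bar-per-block reading order described above could be implemented as:

```python
def bar_order(blocks):
    """blocks: (bar_id, row, col) triples; play order is top-to-bottom,
    then left-to-right within a row."""
    return [bar for bar, _, _ in sorted(blocks, key=lambda b: (b[1], b[2]))]

layout = [("bar_C", 0, 1), ("bar_A", 0, 0), ("bar_D", 1, 0)]
# moving a block to a new position between plays re-orders the bars,
# letting the user mix and improvise in real time
```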

In an alternative embodiment (not shown) the individual objects could take the form of balls, preferably within an enclosed container such as a sphere. By jiggling the balls about within the container, the user could create a kaleidoscope of sound as the positions of the balls continually vary.

Turning next to Figure 11, a base unit 110 has a plurality of parallel slots 112 for receiving individual cards such as the cards 86,98 of Figure 8. The base unit is connected to an external loudspeaker 111; alternatively, the loudspeaker may be integrated within the unit.

A collector of the cards 86,98 places them, as desired, into the various slots 112 in order to control or influence the musical sounds being played by the speaker 111. The system detects how many cards have been placed into slots, the card types, combinations and the card locations in order to control the music or sound generation.

In another embodiment, shown in Figure 12, a generative sound or music system is contained within a speaker enclosure 120 having a speaker 122. The enclosure has a detector 124 enabling it to receive messages from movable articles of furniture, crockery, ornaments and the like. In the example shown, the detector receives wireless signals 126 from a vase 128, a chair 130 and coffee cups 132. As in the previous embodiments, the sound or music generator may be controlled or influenced by the absolute or relative positioning within the room of the various movable items, as notified to the detector 124.

Alternatively, the sound or music may be controlled or influenced in some other way by the messages being passed (i.e. without reference to absolute or relative positions). Each unit may, for example, pass a unique identifying message, and the sound or music may be controlled according to the presence or absence of certain messages, or combinations thereof.

Provision may be made for the individual items to pass messages between themselves, for example as indicated by the dotted line 134, enabling more complex interactions to take place. If, for example, the coffee cups on the table can between them determine their relative positions and/or locations quite precisely, that information may then be passed back to the sound or music generator to influence the overall composition, without any need for the sensor 124 to distinguish, on its own, between the locations of the two cups.

The enclosure 120 may be static within the room, and may for example form part of a hi-fi system. The system could then generate and play background music and/or sounds in dependence upon the configuration of the furniture and/or other items within the room.

Alternatively, the enclosure 120 could be portable, and could be carried on the owner's person. Then, as the owner walks around the room, the music and/or sounds generated by the system will automatically vary.

In another alternative, the enclosure 120 could be dispensed with and one or more speakers built into the individual objects themselves.

Figure 13 shows yet a further embodiment in which the generative engine is contained within a small belt-mounted unit 130 which is connected by means of a lead 132 to a pair of earphones 134. A detector 136 on the unit 130 receives messages from jewellery/clothing or other items being worn by the user, for example a watch 138 and a bracelet 140. Each piece of jewellery includes a transponder 142, 144 permitting communication with the detector 136, as indicated by the dotted line 146.

For more complex effects, message-passing between the individual items may occur, as indicated by the dotted line 148, as well as between the items and the detector 136. Messages transmitted to the detector could control the generative engine based upon, for example, the number, type and sequence of individual beads or other elements on the bracelet 140.

The embodiment shown in Figure 14 illustrates the way in which the above principles can be extended to co-ordinating musical output through automated message passing. This embodiment consists of a number of toys, figures or other units 142, each of which contains within it a generative music engine and a loudspeaker. Each unit generates and plays its own music while at the same time transmitting and receiving messages 144, 146 to and from other similar units and/or an optional master unit 148. This results in an automated self-organising orchestra of networked devices.

While each device is composing and rendering its own audio output, it is simultaneously listening-out to ensure that this output harmonises appropriately with everything else that is happening. Instructions and data are passed between the devices in the manner described above with reference to Figure 6 - that is, by automated message passing - but at least some of the devices may also receive at least some information by listening-out for the audio output of other devices. Additional means of supplying input to the devices, whether automated or human, could also be provided.

One or more of the devices might assume control of different areas of the group composition. For example, a "drummer" or a "conductor" device might dictate the rhythm and tempo. Adding new devices to the group enhances and augments the audio experience.

The toys or other items 142 may be representative of specific individuals within the "orchestra", and may be provided with "instruments" appropriate to their role. Each device may have a mechanical action appropriate to that role, as well: for example, a "drummer" may appear to play the drums and a "saxophonist" the saxophone.

The music generated by the "orchestra" may be controlled or influenced by the number, type, relative locations, orientation, combination and proximity of the individual devices 142, and/or the ambient environment(s) in which they are located (e.g. light levels, humidity, temperature etc). Each device may have a sensor (not shown) for sensing one or more details of the ambient environment, and generating a message based on the sensed values.

Where a master device 148 is provided, that may control certain fundamentals of the composition, for example key, rhythm, tempo and so on. The master device 148 may also define or control the individual voices of the players 142 so that, for instance, it might modify the sound of one of the instruments from that of a saxophone to that of a trumpet.

In a variation of the embodiment of Figure 14, the individual units 142 do not generate their own individual sounds. Instead, composition and rendering of the audio output is carried out entirely within the unit 148, and played by a suitable speaker on that unit.

CLAIMS:
1. A generative audio system including a generative audio engine, the engine being controlled or influenced by messages received from a plurality of controlling items.
2. A generative audio system as claimed in claim 1 in which the messages are transmitted from the controlling items via a wireless link.
3. A generative audio system as claimed in claim 1 or claim 2 in which the audio engine is contained within a base unit.
4. A generative audio system as claimed in claim 1 or claim 2 in which each controlling item includes its own respective audio engine.
5. A generative audio system as claimed in claim 4 in which each controlling item includes means for rendering and playing audio generated by its respective audio engine.
6. A generative audio system as claimed in claim 5 in which each controlling item includes means for listening-out for the audio played by other items, and controlling its respective audio engine accordingly.
7. A generative audio system as claimed in any one of the preceding claims in which the messages contain information on the absolute or relative positionings of the controlling items.
8. A generative audio system as claimed in any one of the preceding claims in which the messages contain information on the absolute or relative orientations of the controlling units.
9. A generative audio system as claimed in any one of the preceding claims in which each controlling item includes means for transmitting messages to, and receiving messages from, other controlling items.
10. A generative audio system as claimed in any one of claims 1 to 9 in which the controlling items are cards.
11. A generative audio system as claimed in any one of claims 1 to 9 in which the controlling items are blocks or balls.
12. A generative audio system as claimed in any one of claims 1 to 9 in which the controlling items are mobile phones.
13. A generative audio system as claimed in any one of claims 1 to 9 in which the controlling items are pieces of jewellery.
14. A generative audio system as claimed in any one of claims 1 to 9 in which the controlling items are items of clothing.
15. A generative audio system as claimed in any one of claims 1 to 9 in which the controlling items are items of furniture, ornaments or other household items.
16. A generative audio system as claimed in any one of claims 1 to 9 in which the controlling items are personal, portable music systems.
17. A generative audio system as claimed in any one of claims 1 to 9 in which the controlling items are toys, figures or models.
18. A generative audio system as claimed in claim 17 in which the toys, figures or models are each representative of an individual member of a band or orchestra.
19. A generative audio system as claimed in claim 18 in which each toy, figure or model includes a mechanical action appropriate to its function in the band or orchestra.
20. A generative audio system as claimed in any one of the preceding claims in which the controlling items are of a plurality of different types, each type controlling or influencing the engine in a different way.
21. A generative audio system as claimed in claim 20 including a master controlling item which controls how the engine responds to controlling items of a given type.
22. A game incorporating a generative audio system as claimed in any one of the preceding claims.
23. A puzzle incorporating a generative audio system as claimed in any one of the preceding claims.
24. A generative audio system as claimed in any one of claims 1 to 20 in which the messages contain information relating to the ambient environment of the controlling items.
25. A generative audio system as claimed in any one of claims 1 to 21 in which the messages contain information derived from sensors associated with each of the controlling items.
PCT/GB2001/001971 2000-05-05 2001-05-04 Automated generation of sound sequences WO2001086625A3 (en)

Priority Applications (10)

Application Number Priority Date Filing Date Title
GB0010969.4 2000-05-05
GB0010969A GB0010969D0 (en) 2000-05-05 2000-05-05 Automated generation of sound sequences
GB0010967.8 2000-05-05
GB0010967A GB0010967D0 (en) 2000-05-05 2000-05-05 Automated generation of sound sequences
GB0011178.1 2000-05-09
GB0011178A GB0011178D0 (en) 2000-05-09 2000-05-09 Automated generation of sound sequences
GB0022164.8 2000-09-11
GB0022164A GB0022164D0 (en) 2000-09-11 2000-09-11 Automated generation of sound sequences
GB0030834.6 2000-12-18
GB0030834A GB0030834D0 (en) 2000-05-05 2000-12-18 Automated generation of sound sequences

Publications (2)

Publication Number Publication Date
WO2001086625A2 (en) 2001-11-15
WO2001086625A3 (en) 2002-04-18

Family

ID=27515939

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2001/001971 WO2001086625A3 (en) 2000-05-05 2001-05-04 Automated generation of sound sequences

Country Status (1)

Country Link
WO (1) WO2001086625A3 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6815600B2 (en) 2002-11-12 2004-11-09 Alain Georges Systems and methods for creating, modifying, interacting with and playing musical compositions
US6972363B2 (en) 2002-01-04 2005-12-06 Medialab Solutions Llc Systems and methods for creating, modifying, interacting with and playing musical compositions
US7076035B2 (en) 2002-01-04 2006-07-11 Medialab Solutions Llc Methods for providing on-hold music using auto-composition
US7169996B2 (en) 2002-11-12 2007-01-30 Medialab Solutions Llc Systems and methods for generating music using data/music data file transmitted/received via a network
WO2007124469A3 (en) * 2006-04-21 2008-07-31 Brent W Barkley Musically interacting devices
US8134061B2 (en) 2006-04-21 2012-03-13 Vergence Entertainment Llc System for musically interacting avatars
US8257157B2 (en) 2008-02-04 2012-09-04 Polchin George C Physical data building blocks system for video game interaction
US9818386B2 (en) 1999-10-19 2017-11-14 Medialab Solutions Corp. Interactive digital music recorder and player

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006043929A1 (en) 2004-10-12 2006-04-27 Madwaves (Uk) Limited Systems and methods for music remixing

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5166463A (en) * 1991-10-21 1992-11-24 Steven Weber Motion orchestration system
US5633985A (en) * 1990-09-26 1997-05-27 Severson; Frederick E. Method of generating continuous non-looped sound effects
US5908996A (en) * 1997-10-24 1999-06-01 Timewarp Technologies Ltd Device for controlling a musical performance
US5920024A (en) * 1996-01-02 1999-07-06 Moore; Steven Jerome Apparatus and method for coupling sound to motion
WO2000077770A1 (en) * 1999-06-09 2000-12-21 Innoplay Aps A device for composing and arranging music
US6198034B1 (en) * 1999-12-08 2001-03-06 Ronald O. Beach Electronic tone generation system and method


US7026534B2 (en) 2002-11-12 2006-04-11 Medialab Solutions Llc Systems and methods for creating, modifying, interacting with and playing musical compositions
US6815600B2 (en) 2002-11-12 2004-11-09 Alain Georges Systems and methods for creating, modifying, interacting with and playing musical compositions
WO2007124469A3 (en) * 2006-04-21 2008-07-31 Brent W Barkley Musically interacting devices
JP2009534711A (en) * 2006-04-21 2009-09-24 Vergence Entertainment LLC Musically interacting devices
US8134061B2 (en) 2006-04-21 2012-03-13 Vergence Entertainment Llc System for musically interacting avatars
US8324492B2 (en) 2006-04-21 2012-12-04 Vergence Entertainment Llc Musically interacting devices
US8257157B2 (en) 2008-02-04 2012-09-04 Polchin George C Physical data building blocks system for video game interaction

Also Published As

Publication number Publication date Type
WO2001086625A3 (en) 2002-04-18 application

Similar Documents

Publication Publication Date Title
Blaine et al. Contexts of collaborative musical experiences
US6353170B1 (en) Method and system for composing electronic music and generating graphical information
Moore Authenticity as authentication
Kelly Cracked media: the sound of malfunction
US20060179160A1 (en) Orchestral rendering of data content based on synchronization of multiple communications devices
US20140140536A1 (en) System and method for enhancing audio
US20130025437A1 (en) System and Method for Producing a More Harmonious Musical Accompaniment
US7504577B2 (en) Music instrument system and methods
Chadabe Interactive composing: An overview
US20030167904A1 (en) Player information-providing method, server, program for controlling the server, and storage medium storing the program
US6653545B2 (en) Method and apparatus for remote real time collaborative music performance
Gresham-Lancaster The aesthetics and history of the hub: The effects of changing technology on network computer music
US20060060065A1 (en) Information processing apparatus and method, recording medium, program, and information processing system
US20140053711A1 (en) System and method creating harmonizing tracks for an audio input
US20140053710A1 (en) System and method for conforming an audio input to a musical key
US8111241B2 (en) Gestural generation, sequencing and recording of music on mobile devices
US20080257133A1 (en) Apparatus and method for automatically creating music piece data
US20100307320A1 (en) flexible music composition engine
US6093880A (en) System for prioritizing audio for a virtual environment
US20040064380A1 (en) Contents supplying system
US20030140769A1 (en) Method and system for creating and performing music electronically via a communications network
US20030045274A1 (en) Mobile communication terminal, sensor unit, musical tone generating system, musical tone generating apparatus, musical tone information providing method, and program
US20130220102A1 (en) Method for Generating a Musical Compilation Track from Multiple Takes
US20060079213A1 (en) System and method of music generation
US20080066609A1 (en) Cellular Automata Music Generator

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP