GB2607556A - Method and system for providing a spatial component to musical data - Google Patents


Info

Publication number
GB2607556A
GB2607556A (application GB2103458.2A; also published as GB202103458A)
Authority
GB
United Kingdom
Prior art keywords
sound
path
layout
spatial
bounded space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2103458.2A
Other versions
GB202103458D0 (en)
Inventor
Thibaut Daniel-Junior
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Daniel Junior Thibaut
Original Assignee
Daniel Junior Thibaut
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daniel Junior Thibaut filed Critical Daniel Junior Thibaut
Priority to GB2103458.2A priority Critical patent/GB2607556A/en
Publication of GB202103458D0 publication Critical patent/GB202103458D0/en
Publication of GB2607556A publication Critical patent/GB2607556A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 Details of electrophonic musical instruments
    • G10H 1/0008 Associated control or indicating means
    • G10H 1/0033 Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H 2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H 2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H 2210/155 Musical effects
    • G10H 2210/265 Acoustic effect simulation, i.e. volume, spatial, resonance or reverberation effects added to a musical sound, usually by appropriate filtering or delays
    • G10H 2210/295 Spatial effects, musical uses of multiple audio channels, e.g. stereo
    • G10H 2210/305 Source positioning in a soundscape, e.g. instrument positioning on a virtual soundstage, stereo panning or related delay or reverberation changes; Changing the stereo width of a musical source
    • G10H 2220/00 Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H 2220/021 Indicator, i.e. non-screen output user interfacing, e.g. visual or tactile instrument status or guidance information using lights, LEDs, seven-segment displays
    • G10H 2220/026 Indicator associated with a key or other user input device, e.g. key indicator lights
    • G10H 2220/056 Hand or finger indicator, e.g. for indicating which hand or which specific finger should be used
    • G10H 2220/091 Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith
    • G10H 2220/101 GUI for graphical creation, edition or control of musical data or parameters
    • G10H 2220/106 GUI using icons, e.g. selecting, moving or linking icons, on-screen symbols, screen regions or segments representing musical elements or parameters
    • G10H 2220/111 GUI for graphical orchestra or soundstage control, e.g. on-screen selection or positioning of instruments in a virtual orchestra, using movable or selectable musical instrument icons
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S 3/008 Systems employing more than two channels in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S 2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/15 Aspects of sound capture and related signal processing for recording or reproduction

Abstract

A system for providing a spatial component to data relating to music, the system having: means for defining a bounded space representing the space in which the sound will be emitted; means for defining a path in the bounded space, wherein the path indicates a desired movement of the sound through the bounded space; means for generating a layout within the bounded space, wherein the layout comprises a centre and two or more loci which encompass the centre, the two or more loci having different sizes and being concentric about the centre; and means for mapping the path onto the layout. Locations on the two or more loci, representing locations at which a plurality of sound sources are to be positioned, are selected, and musical data relating to a part for an instrument (the data having no spatial component) is received by the system. A temporal component associated with the path is determined, the temporal component determining the timing of the sound moving along the path. The musical data relating to a part is divided into subparts, using the path and the temporal component to determine how the part is divided, and each subpart is allocated to a location.

Description

METHOD AND SYSTEM FOR PROVIDING A SPATIAL COMPONENT TO MUSICAL DATA
TECHNICAL FIELD
The invention relates to a system and method of processing data relating to music to provide a spatial component. The invention also relates to a method of recording music having a spatial component.
BACKGROUND
There has been a surge in demand for surround sound and home cinema systems in recent years. However, there is a lack of audio content suitable for use in such systems.
One method of providing spatialised audio content relies on using software to up-mix an audio file comprising a multichannel recording or streams and then adding spatial components, where each channel is fed into a different speaker of the sound system. The process of spatial up-mixing combines multiple audio signals in a manner that optimises the playback of sound, such as a piece of music, on a particular sound system. This requires familiarity with spatial audio up-mixing and does not give a composer or artist the freedom to add spatial components prior to composing or recording music.
Another method of providing spatialised content relies on using software to split an audio signal (or audio signals from multichannel recordings) into multiple audio signals for feeding into the different speakers. The processing of the audio file can depend on the number and type (e.g., woofer, sub-woofer) of speakers in the sound system. For example, in a two-speaker system (e.g., headphones) the spatial up-mixing process encodes the audio file into binaural audio format for output from two sound sources.
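The channel-splitting step described above can be illustrated with a constant-power pan law, a standard audio technique for dividing a mono signal between two sound sources. The sketch below is illustrative only; the function name and values are not taken from the patent:

```python
import math

def constant_power_pan(samples, pan):
    """Split a mono signal into left/right channels using a
    constant-power (sin/cos) pan law. pan ranges from -1.0
    (hard left) to +1.0 (hard right)."""
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    left_gain = math.cos(angle)
    right_gain = math.sin(angle)
    left = [s * left_gain for s in samples]
    right = [s * right_gain for s in samples]
    return left, right

# A centred source (pan = 0) feeds both speakers equally at -3 dB.
left, right = constant_power_pan([1.0, 0.5, -0.5], 0.0)
```

Constant-power panning keeps the total emitted energy constant as the source moves, which is why it is preferred over simple linear gain splits.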
Quality loss is an issue with these up-mixing methods. There is a lack of spatialised content having a precise and accurate representation of the location of a sound in space.
There is also a lack of content for the performance of spatialised music. Furthermore, conventional methods of providing spatialised audio content do not provide data suitable for the live performance of spatialised music.
It is an object of the invention to overcome the disadvantages of the prior art.
SUMMARY OF THE INVENTION
An aspect of the present invention provides a system for providing a spatial component to data relating to music, the system comprising: means for defining a bounded space representing space in which the sound will be emitted; means for defining a path in the bounded space, wherein the path indicates a desired movement of the sound through the bounded space; means for generating a layout within the bounded space, wherein the layout comprises a centre and two or more loci which encompass the centre, the two or more loci having different sizes and being concentric about the centre; means for mapping the path onto the layout; means for selecting locations on the two or more loci, representing locations at which a plurality of sound sources are to be positioned; means for receiving musical data relating to a part for an instrument; means for determining a temporal component associated with the path, the temporal component determining the timing of the sound moving along the path; means for dividing the musical data relating to a part into subparts, using the path and the temporal component to determine how the part is divided; and means for allocating each subpart to a location.
The system enables the planning of "spatial musical performance". The term "spatial musical performance" relates to the property that each music note has a spatial component (i.e., the spatial coordinate in 2D or 3D at which it is to be played) in addition to its musical properties (i.e., pitch, duration, etc.).
By providing a spatial component to each music note, an audience listening to the spatial performance will have an immersive experience of the music moving relative to them. If the audience is positioned at the centre of the sound sources, at a position corresponding to the centre of the layout, the audience will experience the music moving around them.
The system enables a composer to compose a piece of music with a spatial component. In addition, the system enables a spatial component to be added to an existing piece of music.
A musical part refers to a single strand of melody or harmony of music within a larger ensemble or a polyphonic musical composition. For example, in a string quartet a part refers to the melody or harmony played by a cello, whereas in an orchestra, a part may refer to a group of violins playing the same melody. The separation of a musical part into subparts may comprise the division of a series of consecutive musical notes into several sets of smaller series of notes, which may be consecutive. The musical data may comprise first and second parts, wherein the first part is divided into first subparts and the second part is divided into second subparts, and wherein the first subparts are allocated to selected locations on a first locus and the second subparts are allocated to selected locations on a second locus. For example, in a string quartet, the first part may refer to melody played by a violin and the second part may refer to a harmony played by a cello.
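The division of a part into subparts described above can be sketched as distributing a series of consecutive notes among the selected locations. The round-robin scheme below is one plausible division; the patent leaves the exact rule to the path and temporal component:

```python
def divide_part(notes, n_locations):
    """Divide a part (an ordered list of notes) into subparts,
    one per selected location, cycling through the locations in
    path order. Each note keeps its musical properties and gains
    a location index as its spatial component."""
    subparts = [[] for _ in range(n_locations)]
    for i, note in enumerate(notes):
        subparts[i % n_locations].append(note)
    return subparts

# Eight consecutive notes distributed over four locations:
melody = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
subparts = divide_part(melody, 4)
```

With this rule, as the subparts are played in sequence the melody appears to travel from location to location, which is the effect the temporal component controls.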
The musical data may be sourced from direct composition or from stored data.
The temporal component may comprise data relating to the speed, order, and direction of the sound path. For example, the order may relate to the order in which the sound path passes through the locations, and the direction may refer to clockwise, anticlockwise, up, and down.
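The temporal component described above can be represented as a small record holding speed, order, and direction. The field names below are illustrative assumptions, not terms defined by the patent:

```python
from dataclasses import dataclass, field

@dataclass
class TemporalComponent:
    """Timing data for the sound path: how fast the sound moves,
    the order in which the path passes through the selected
    locations, and the direction of travel."""
    speed_bpm: float                 # rate of movement along the path
    order: list = field(default_factory=list)  # location indices, in visiting order
    direction: str = "clockwise"     # e.g. 'clockwise', 'anticlockwise', 'up', 'down'

tc = TemporalComponent(speed_bpm=120.0, order=[0, 1, 2, 3], direction="clockwise")
```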
The system may comprise means for outputting and/or storing the musical data for each subpart separately. The system may comprise means for determining spatial coordinates for each location. The system may comprise storage means for storing each subpart together with spatial parameters associated with the location at which the sound for the subpart is to be emitted. The spatial parameters may comprise spatial coordinates. Alternatively, the spatial parameters may comprise a code indicating the spatial position of the location.
Optimal enjoyment of the spatial performance will depend on the human audience member's ability to identify the location of an emitted sound, known as sound localisation. Sound localisation relies on inter-aural timing (i.e., the difference in arrival time of a sound between two ears) and visual cues. The spatial performance can be enhanced by the provision of visual cues which match the location and movement of the sounds emitted from the sound sources.
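The inter-aural timing cue mentioned above can be estimated with the Woodworth far-field model, a standard result in acoustics. The head radius and speed of sound below are typical textbook values assumed for illustration; they do not appear in the patent:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed)
HEAD_RADIUS = 0.0875    # m, approximate average adult head (assumed)

def interaural_time_difference(azimuth_deg):
    """Woodworth far-field estimate of the difference in arrival
    time of a sound between the two ears, for a source at the
    given azimuth (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS * (theta + math.sin(theta)) / SPEED_OF_SOUND

# A source directly to one side arrives roughly 0.66 ms earlier
# at the nearer ear; a source straight ahead gives no difference.
itd = interaural_time_difference(90)
```

Differences of only a few hundred microseconds are enough for listeners to localise a source, which is why precise placement of sound sources (and matching visual cues) matters for the spatial performance.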
The present invention has the advantage that data relating to subparts and their corresponding locations may be used for the programming of the visual cues. In one embodiment, data relating to subparts and their corresponding locations is output to a visualiser system. The visualiser system may comprise a plurality of visual indicators, which in use are positioned in the space in which the sound will be emitted. The visualiser system allows the visual tracking of the sound and its motion in space in a precise and accurate manner.
The system may comprise means for receiving visual data relating to a series of visual signals, means for defining a visual path in the bounded space, wherein the visual path indicates a desired movement of visual signals through the bounded space, means for mapping the visual path onto the layout, means for selecting visual signal locations on the layout at which a plurality of visual outputs are to be positioned, and means for allocating a visual signal to each visual signal location, based on the visual path and the temporal component.
The visual path may be the same as the sound path and/or the means for mapping the visual path onto the layout may be the same as the means for mapping the sound path onto the layout.
The system may comprise means for calculating a simulated path in the bounded space according to the selected locations and the temporal component, wherein the simulated path indicates a predicted movement of sound through the bounded space.
The system may comprise means for optimising the simulated path.
The means for calculating and/or optimising a simulated path may be configured to account for factors affecting sound perception. Examples of these factors include room presence, sound source brilliance, running reverberance and listener envelopment.
The sound arriving at a listener's ears following a note played by a performer is made of three parts: direct sound, early reflections, and reverberant sound. The direct sound is heard directly from the performer's instrument, the early reflections is sound reflected from the walls, ceiling etc, and the reverberant sound is caused by numerous reflections, building up and then decaying. Listener envelopment is the degree to which the reverberant sound seems to surround the listener. All these components can be measured for a specific venue.
The perception of acoustics can be divided into two basic constituents: source presence and room presence. Source presence relates to the direct sound and the early part of a venue's acoustic response, whilst room presence relates to the latter part of the venue's acoustic response. Brilliance refers to a quality of the sound source, typically having a frequency between around 8 kHz and around 16 kHz for the 10th octave. It can typically be described as a bright, clear ringing sound in which the treble frequencies are prominent, with a rather slow decay. Running reverberance refers to the perceived reverberation and includes the interplay between the source and the venue. As before, all these components are measurable.
The system may comprise means for simulating sound emitted, according to the simulated path, from a sound source positioned at each of the respective selected locations. The means for simulating sound emission may comprise outputting a visual display illustrating the sound emitted from the sound sources. The simulation allows the experience to be tested and optimised if necessary.
The means for defining the bounded space may be configured to select the bounded space from a database of bounded spaces. In one embodiment, the bounded space is a geodesic dome. Other suitable bounded spaces include 3D spaces, such as a sphere or cube or 2D spaces, such as a circle, rectangle, square etc. The bounded space may exist in a virtual reality environment or an augmented reality environment or a mixed reality environment.
The bounded space is only considered to be 'bounded' for the purpose of carrying out the method described herein. It is intended that the bounded space relates to either an indoor space or outdoor space which may or may not be physically defined by a boundary.
The two or more loci may have the same shape but different size. Each locus may comprise a geometric shape. The geometric shape may comprise a 2D shape, for example a circle, square etc. Alternatively, the geometric shape may comprise a 3D shape, for example a sphere, cube etc. The locations may be positioned on the two or more loci of the layout. The selected locations may be distributed around each locus.
In one embodiment, the loci are circular, and the selected locations are distributed on the circular shapes such that they form two or more arcs extending from the centre, the arcs optionally having rotational symmetry about the centre.
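The circular-loci embodiment above can be sketched as placing locations on concentric circles about a common centre, with successive rings rotated so the locations line up into arcs with rotational symmetry. The function and parameter names are illustrative assumptions:

```python
import math

def layout_locations(radii, points_per_locus, arc_offset_deg=0.0):
    """Place sound-source locations on concentric circular loci
    centred at (0, 0). Rotating each successive locus by
    arc_offset_deg lines the locations up into arcs (e.g. spiral
    arms) with rotational symmetry about the centre.
    Returns (locus_index, x, y) tuples."""
    locations = []
    for ring, radius in enumerate(radii):
        offset = math.radians(arc_offset_deg) * ring
        for k in range(points_per_locus):
            angle = offset + 2.0 * math.pi * k / points_per_locus
            locations.append((ring, radius * math.cos(angle),
                              radius * math.sin(angle)))
    return locations

# Two concentric loci (radii 3 m and 6 m), four locations each,
# with the outer ring rotated 45 degrees to form spiral arms:
locs = layout_locations([3.0, 6.0], 4, arc_offset_deg=45.0)
```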
Each sound source may be selected from a musical instrument (including a human voice) and an electronic speaker.
The output means may be configured to output the layout and/or selected locations within the bounded space as a sound source map. The sound source map is also known as a spatialisation plan, as it includes spatial data relating to the sound sources. The sound source map may comprise the spatial parameters, such as spatial coordinates for each location. Alternatively, the spatial parameters may comprise a code referring to the spatial coordinates. The sound source map may be used to plan the layout of sound sources (such as musicians or electronic speakers) for a performance. In a performance, each musician or electronic speaker acts as a sound source and is positioned at a selected location, each selected location being associated with a subpart.
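A sound source map as described above can be serialised as a simple structure pairing each selected location with its subpart. The JSON layout below is a hypothetical sketch, not a format prescribed by the patent:

```python
import json

def sound_source_map(locations, subpart_ids):
    """Build a spatialisation plan pairing each selected location
    (locus index plus x/y coordinates) with the subpart to be
    played there. The serialised plan can be handed to performers
    or used to position electronic speakers."""
    plan = []
    for (locus, x, y), subpart in zip(locations, subpart_ids):
        plan.append({"locus": locus, "x": x, "y": y, "subpart": subpart})
    return json.dumps(plan, indent=2)

plan_json = sound_source_map(
    [(0, 3.0, 0.0), (0, 0.0, 3.0)],
    ["violin-1a", "violin-1b"],
)
```

Storing a code per location (rather than raw coordinates) would simply replace the `x`/`y` fields with a lookup key, matching the alternative the text mentions.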
An aspect of the invention provides a spatialisation plan or sound source map created by the method.
The system may comprise display means configured to display the part in musical notation, wherein each subpart is displayed separately from other subparts. In one embodiment, the display comprises multiple staves, with each subpart being allocated its own stave.
An aspect of the present invention provides a method of providing a spatial component to data relating to music, the method comprising the following steps: defining a bounded space representing space in which sound relating to the part will be emitted; defining a path in the bounded space, wherein the path indicates a desired movement of the sound through the bounded space; generating a layout within the bounded space, wherein the layout comprises a centre and two or more loci which encompass the centre, the two or more loci having different sizes and being concentric about the centre; mapping the path onto the layout; selecting locations on the two or more loci, representing locations at which a plurality of sound sources are to be positioned; receiving musical data relating to a part for an instrument; determining a temporal component associated with the path, the temporal component determining the timing of the sound moving along the path; dividing the musical data relating to a part into subparts, using the path and the temporal component to determine how the part is divided; and allocating each subpart to a location.
The method steps may be carried out in any suitable order.
The method may comprise the step of outputting and/or storing the musical data for each subpart separately.
The method may comprise the step of determining spatial coordinates for each location. The method may comprise the step of storing each subpart together with the spatial coordinates associated with the location at which the sound for the subpart is to be emitted.
The method may comprise the step of defining a visual path in the bounded space, wherein the visual path indicates a desired movement of visual signals through the bounded space, mapping the visual path onto the layout, selecting visual signal locations on the layout at which a plurality of visual outputs are to be positioned, and allocating a visual signal to each visual signal location, based on the visual path and the temporal component. The visual path may correspond to the sound path.
The method may comprise the step of calculating a simulated path in the bounded space according to the selected locations and the temporal component, wherein the simulated path indicates a predicted movement of sound through the bounded space. The method may comprise the step of simulating sound emitted, according to the simulated path, from a sound source positioned at each of the respective selected locations. The simulated sound may be visually displayed.
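The simulated-path calculation described above can be sketched as interpolating between the selected locations in the order given by the temporal component. Linear interpolation is an assumption here; the patent does not specify the interpolation scheme:

```python
def simulated_path(selected_locations, order, steps_per_segment=10):
    """Predict the movement of sound through the bounded space by
    linearly interpolating between the selected locations, visited
    in the order given by the temporal component. Returns (x, y)
    points approximating the sound's trajectory."""
    points = []
    for a, b in zip(order, order[1:]):
        (x0, y0), (x1, y1) = selected_locations[a], selected_locations[b]
        for s in range(steps_per_segment):
            t = s / steps_per_segment
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    points.append(selected_locations[order[-1]])
    return points

# Sound travelling through three locations in path order:
path = simulated_path([(0.0, 0.0), (4.0, 0.0), (4.0, 4.0)], [0, 1, 2])
```

Such a predicted trajectory can then be compared against the originally drawn path, which is the basis for the optimisation step the text goes on to describe.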
The temporal component may take into account the layout of the selected locations, since these selected locations may be slightly different to the path indicating a desired movement of the sound through the bounded space, as shown in Figure 7.
The step of defining the bounded space may comprise selecting the bounded space from a database of bounded spaces, optionally wherein the bounded space is a geodesic dome or sphere. The bounded space may exist in a virtual reality environment or an augmented reality environment or a mixed reality environment.
The two or more loci may have the same shape but different size. The locations may be positioned on the two or more loci of the layout and the selected locations may be distributed around each locus. In one embodiment, the loci are circular, and the selected locations are distributed on the circular shapes such that they form two or more arcs extending from the centre, the arcs optionally having rotational symmetry about the centre, such as a spiral.
The method may comprise the step of outputting the layout and/or selected locations within the bounded space as a spatialisation plan or sound source map.
Each sound source may be selected from a musical instrument (including a human voice) and an electronic speaker.
The method may comprise the step of displaying each subpart in musical notation, wherein each subpart is displayed on a separate stave.
An aspect of the invention provides a display of the part in musical notation produced according to the method.
An aspect of the invention provides a method of recording music with a spatial component, wherein the method comprises the following steps: providing a spatial component to data relating to music according to the claimed method; providing a space corresponding to the bounded space; providing sound sources at positions corresponding to locations on the layout; providing a sound recording means configured to record sound emitted from each sound source; and emitting sounds at the sound sources, the sounds corresponding to the subparts, path, and temporal component, and recording the sound at the sound recording means.
The sound recording means may be located in the region of each sound source.
An aspect of the invention provides a method of recording music with a spatial component, wherein the method comprises the following steps: providing a space in which sound will be emitted; providing musical data relating to a part for an instrument, the musical data being divided into subparts, with each of the subparts having an associated spatial coordinate; providing a plurality of sound sources at locations corresponding to the spatial coordinate; providing a sound recording means configured to record sound from each sound source; emitting sounds corresponding to the musical data at the plurality of sound sources and recording the sound at each sound recording means.
The sound recording means may be located in the region of each sound source.
The sound recording means may comprise digital recording means. The sound recording means may comprise multiple microphones. In one embodiment, a microphone is located in the region of each sound source. In another embodiment, several microphones are located at a single location, with each microphone recording sound from a different direction.
The musical data may be sourced from a pre-recorded soundtrack. The musical data may further comprise a live vocal line, or a melodic line for an instrument, added to the part. The digital format may be a .MIDI format.
A separate audio signal and/or soundtrack may be obtained from each sound recording means.
The method includes the step of creating a multitrack data file, the multitrack data file comprising a track corresponding to each sound recording means. The multitrack data file may include a spatial component, wherein spatial information is included with each track.
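The multitrack data file with per-track spatial information can be sketched as a simple structure in which every track (one per sound recording means) carries its own coordinates. The dict layout and names below are illustrative assumptions; the patent does not prescribe a file format:

```python
def multitrack_file(tracks):
    """Assemble a multitrack structure in which each track, taken
    from one sound recording means, carries its own spatial
    component alongside its audio samples."""
    return {
        "format": "multitrack-with-spatial-component",
        "tracks": [
            {"name": name, "coords": coords, "audio": audio}
            for name, coords, audio in tracks
        ],
    }

# Two microphones, each located in the region of a sound source:
mt = multitrack_file([
    ("mic-01", (3.0, 0.0), [0.0, 0.1, 0.2]),
    ("mic-02", (0.0, 3.0), [0.0, -0.1, -0.2]),
])
```

Keeping the spatial component with each track lets a playback system re-render the recording for any speaker arrangement, which supports the "universal audio content" claim that follows.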
The methods of processing sound for spatial sound applications described herein recreate sound as we hear it naturally. The methods described provide universal audio content of higher quality that is suitable for reproduction in any sound system.
An aspect of the present invention provides a system for providing a spatial component to data relating to sound, the system comprising: an electronic processor having an electrical input for receiving data relating to the sound; an electronic memory device electronically coupled to the electronic processor and having instructions stored therein; wherein the electronic processor is configured to access the memory device and execute the instructions stored there such that it is operable to: define a bounded space representing space in which sound relating to the part will be emitted; define a path in the bounded space, wherein the path indicates a desired movement of the sound through the bounded space; generate a layout within the bounded space, wherein the layout comprises a centre and two or more loci which encompass the centre, the two or more loci having different sizes and being concentric about the centre; map the path onto the layout; select locations on the two or more loci at which a plurality of sound sources are to be positioned; receive musical data relating to a part for an instrument; determine a temporal component associated with the path, the temporal component determining the timing of the sound moving along the path; divide the musical data relating to a part into subparts, using the path and the temporal component to determine how the part is divided; and allocate each subpart to a sound source.
An aspect of the invention provides a computer program for providing a spatial component to data relating to music according to the method described herein.
An aspect of the invention provides a computer-readable storage medium comprising instructions that, when executed by a processor, cause it to perform the method of providing a spatial component to data relating to music described herein.
When implemented on a computer the method described can be automated and a music score, layout and spatialisation plan can be provided in digital format.
The method of processing sound as described herein may be implemented using computer processes operating in processing systems or processors. These methods may be extended to computer programs, particularly computer programs on or in a carrier, adapted for putting the aspects into practice. The program may be in the form of non-transitory source code, object code, a code intermediate source and object code such as in partially compiled form, or in any other non-transitory form suitable for use in the implementation of processes described herein. The carrier may be any entity or device capable of carrying the program. For example, the carrier may comprise a storage medium, such as a solid-state drive (SSD) or other semiconductor-based RAM; a ROM, for example a CD ROM or a semiconductor ROM; a magnetic recording medium, for example a floppy disk or hard disk; optical memory devices in general; etc. In accordance with an embodiment, there is provided a non-transitory computer-readable storage medium comprising a set of computer-readable instructions stored thereon, which, when executed by a processing system, cause the processing system to perform a method according to the invention. The described examples may be implemented at least in part by computer software stored in (non-transitory) memory and executable by the processor, or by hardware, or by a combination of tangibly stored software and hardware (and tangibly stored firmware).
An aspect of the invention provides a music score for reproducing data relating to music having a spatial component according to the system or method described herein, the music score comprising a plurality of staves, wherein each stave corresponds to each respective selected location on the layout, and musical notes corresponding to each subpart respectively allocated to each location and represented on each respective stave, thereby to provide an immersive spatial sound in the bounded space when sound is emitted, according to the musical notes, from a sound source at each location.
The music score allows the composition or piece of music to be played in a spatialised format in a live performance.
Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of the words, for example "comprising" and "comprises", mean "including but not limited to", and do not exclude other components, integers or steps.
Moreover, the singular encompasses the plural unless the context otherwise requires: in particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
Preferred features of each aspect of the invention may be as described in connection with any of the other aspects. Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a flow chart of a method of providing a spatial component to data relating to music according to an example;
Figures 2A-2C are schematics of a bounded space according to examples;
Figures 3A and 3B are schematics of a path through a bounded space according to examples;
Figures 3C and 3D are schematics of a path through a bounded space according to examples;
Figure 4A is a schematic of a layout according to an example;
Figure 4B is a schematic of a layout with locations according to an example;
Figure 5A is a schematic of a layout with labelled locations according to an example;
Figure 5B is a schematic of a layout with locations in an offset arrangement according to an example;
Figure 6A is a schematic of a path mapped onto a layout according to an example;
Figure 6B is a schematic of a layout with selected locations according to an example;
Figure 7 is a schematic of a layout with a simulated path according to an example;
Figure 8 is a schematic of a system for providing a spatial component to data relating to music according to an example;
Figure 9 is a schematic of a music score according to an example;
Figure 10A is a schematic of a recording layout according to an example;
Figure 10B is a schematic of a recording layout according to an example;
Figure 11A is a schematic showing a conventional seating arrangement for a string ensemble according to an example;
Figure 11B is a schematic showing a conventional music score for a string ensemble according to an example; and
Figure 12 is a schematic showing a spatialised music score according to an example.
DETAILED DESCRIPTION
A method of processing musical data to provide a spatial component is described with reference to Figs 1 to 8. This method is typically carried out on a sound processing system having an electronic processor and memory.
A piece of music generally has a number of different parts, with each part being played by a single instrument or a group of instruments. For example, a piece of classical music performed by a string quartet will have four parts: one each for the first violin, second violin, viola and cello. When the part is processed to have a spatial component, each part is divided into subparts, with a spatial location and a musician allocated to each subpart. The part is performed by positioning the different musicians at their respective spatial locations, each playing their subpart. Thus, the cello's part may be divided into five subparts, with a different musician playing each subpart. An audience listening to a performance of the cello's part will experience the movement of the music from one musician's spatial location to the next as the part is played.
Referring to Figure 1, the method includes the following steps: defining a bounded space 102 representing space in which sound relating to the part will be emitted, defining a path in the bounded space indicating a desired movement of the sound through the bounded space 104, generating a layout within the bounded space 106, mapping the path onto the layout 108, selecting locations on the layout at which a plurality of sound sources are to be positioned 110, receiving musical data relating to a part for an instrument 112, determining a temporal component associated with the path 114, the temporal component determining the timing of the sound moving along the path, such as the speed that the sound moves along the path, dividing the musical data relating to the part into subparts, using the path and the temporal component to determine how the part is divided 116, and allocating each subpart to a location 118. Each of these steps will now be described in more detail.
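By way of illustration only, the data handled by steps 102 to 118 may be sketched as follows. The class and function names below are assumptions made for this sketch and do not form part of the method itself.

```python
from dataclasses import dataclass, field

@dataclass
class BoundedSpace:
    """Step 102: the space in which sound will be emitted (illustrative)."""
    width: float          # extent along x (e.g. metres)
    depth: float          # extent along y
    height: float = 0.0   # 0.0 for a two-dimensional area

@dataclass
class SpatialPlan:
    """Path, selected locations and subpart allocation (illustrative)."""
    path: list                                    # ordered path co-ordinates (step 104)
    locations: list                               # selected location labels (step 110)
    subparts: dict = field(default_factory=dict)  # location label -> musical data (step 118)

def allocate_subparts(plan: SpatialPlan, subparts: list) -> SpatialPlan:
    """Step 118: allocate one subpart to each selected location, in path order."""
    if len(subparts) != len(plan.locations):
        raise ValueError("one subpart is required per selected location")
    plan.subparts = dict(zip(plan.locations, subparts))
    return plan
```

A spatialisation plan for two locations could then be built by calling `allocate_subparts` with the two subparts produced by the dividing step.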
Bounded space
In step 102 of Fig 1, a bounded space is defined, the bounded space representing the environment through which the sound will travel. This involves defining the bounded space by way of a co-ordinate system. The bounded space may be a two-dimensional area or a three-dimensional volume enclosed by a boundary, for example as shown in Figs 2A and 2B respectively. The bounded space can take any shape of area or form of volume. In an example the bounded space may represent a real space such as a geodesic dome as shown in Fig 2C. In other examples the bounded space may represent a space in a virtual reality, mixed reality, or augmented reality environment.
The bounded space is defined by a user. In an example, the user manually enters dimensions of the area or volume of the environment into the sound processing system. In another example, the user can select the bounded space from a database of bounded spaces representing various environments having predefined dimensions. One such predefined environment could be the Madison Square Garden (MSG) Sphere in London and/or Las Vegas.
The origin of the co-ordinate system used to describe the bounded space is placed at a central location within the bounded space, such that the environment extends from the origin of the co-ordinate system. Suitable co-ordinate systems include a Cartesian co-ordinate system and a polar co-ordinate system.
Sound path
In step 104 of Fig 1, a path is defined, which indicates a desired movement of the sound through the bounded space. Figs 3A and 3B show examples of paths 302, 304. A user can define the path by entering the path into the co-ordinate system describing the bounded space. Alternatively, the user can select a path from a database. The path is therefore represented by a series of co-ordinates within the bounded space. In the example shown in Figs 3A and 3B, the path 302, 304 represents the desired movement of sound around the centre 306 of the bounded space in an anti-clockwise direction. In the example shown in Fig 3A the path 302 extends in 2D. In the example shown in Fig 3B, the path 304 extends in 3D, additionally representing the desired movement of sound rising up past the centre 306 in an 'upwards' direction.
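By way of illustration, a path such as path 302 or 304 may be generated as a series of co-ordinates. This sketch assumes an anti-clockwise sweep at constant angular speed; the function and parameter names are assumptions and are not prescribed by the method.

```python
import math

def circular_path(radius: float, n_points: int, rise: float = 0.0):
    """Anti-clockwise path around the centre as a series of co-ordinates.

    With rise == 0 the path is planar, as in Fig 3A; a positive rise spreads
    the same motion upwards past the centre, as in Fig 3B.  Illustrative
    sketch only.
    """
    points = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points   # anti-clockwise sweep
        z = rise * i / n_points              # gradual upward movement
        points.append((radius * math.cos(theta),
                       radius * math.sin(theta),
                       z))
    return points
```

A planar path of four co-ordinates at unit radius would be `circular_path(1.0, 4)`, while `circular_path(1.0, 4, rise=2.0)` adds the 'upwards' component of Fig 3B.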
Layout
In step 106, a layout is generated within the bounded space. The layout is used to define locations in the bounded space representing positions at which sound will be emitted, for example by instruments or electronic speakers. The layout is made up of a number of loci positioned around a common centre. Each locus takes the form of a geometrical shape, with the two or more loci having different sizes. An example layout is shown in Fig 4A.
The user determines the number of different parts for the piece of music being processed for providing a spatial component to the musical data. Taking a piece of classical music for a string quartet as an example, the user can determine that the classical music has four parts, so the layout will be generated with four loci (A, B, C, D). The user enters a numerical value into the system to define the number of loci in the layout. Each locus represents potential positions for each instrument to be positioned on the layout. For example, a string quartet would require a layout having four loci, and a suitable arrangement might be the first violins on a first locus, the second violins on a second locus, the violas on a third locus, and cellos on a fourth locus.
Each locus may comprise a geometric shape having a degree of symmetry. Each of the geometrical shapes may be of the same type but of different dimensions. As shown in Fig 4A, locus A has the smallest diameter, with the diameter of each locus increasing with distance from the common centre out to locus D, which has the largest diameter. The loci may extend in two dimensions or three dimensions. Suitable geometrical shapes for a locus include a circle and a sphere, or a square and a cube, amongst other shapes.
Locations representing potential positions for sound sources are positioned on each locus.
The user enters a numerical value to define the number of locations for the layout (i.e., representing the number of sound sources). The number of locations is equally divided between the number of loci in the layout. The locations are equally spaced on each locus. The number of locations is determined by the maximum number of sound sources, i.e., the number of musicians or electronic speakers which could be used in a live performance.
In the example shown in Fig 4B there are twenty-four sound sources and four parts in the piece of music, resulting in four concentric loci (A, B, C, D). The twenty-four sound sources are equally divided between the four loci to give six locations 402 on each of the four geometric shapes.
As shown in Fig 5A the locations are each labelled with a co-ordinate or point of reference (e.g., A0, A+1, A+2, A00, A-2, A-1). The locations on the right-hand side of the layout are assigned a positive (+) co-ordinate and the locations on the left-hand side of the layout are assigned a negative (-) co-ordinate. The locations in the middle of the layout are assigned a zero (0 or 00) co-ordinate. These aid tracking of the sound movement through the environment. As the locations are equally spaced around each locus, the circumferential distance between the locations increases for each geometric shape moving out from the common centre. In the example where the geometrical shape is a circle, the sound sources for each locus are equally spaced apart on the circumference of the circle. As shown, A0, A+1, A+2, A00, A-2, A-1 are circumferentially closer together on the innermost locus compared to D0, D+1, D+2, D00, D-2, D-1 on the outermost locus.
The locations on each locus may be circumferentially (or rotationally) offset from the locations on each other locus. This arrangement results in each location having an uninterrupted path to the common centre. The angle between each adjacent location may be the same. Whilst Fig 5A is shown in two dimensions, when extended into three dimensions (above or below) the same layout may be used, albeit in a layer or layers above one another, as shown in Fig 3C, so as to still provide an uninterrupted path to the common centre.
In the example of Fig 5A and 5B, the layout comprises twenty-four locations 502 arranged over four concentric circular loci. The circular layout is divided into twenty-four equally shaped sectors, each extending from the centre to the circumference. A location is placed on each line 504 extending from the centre to the perimeter. The resulting distribution of locations is in the shape of several spiral arms radiating from the common centre. In the example shown there are six spiral arms radiating from the common centre.
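By way of illustration, a layout of concentric circular loci with equally spaced, rotationally offset locations may be generated as follows. A simple sequential index (A0, A1, ...) is used in place of the +/-/00 labels of Fig 5A, and the constant-offset rule producing the spiral arms is an assumption made for this sketch.

```python
import math

def generate_layout(n_loci: int, n_locations: int, base_radius: float = 1.0):
    """Concentric circular loci with equally spaced locations (Figs 4A-5B).

    Locations are divided equally between the loci, and each locus is rotated
    by a constant angular offset relative to the locus inside it, so that
    every location has an uninterrupted line to the common centre and the
    layout forms spiral arms.  Returns a dict mapping labels to (x, y).
    """
    if n_locations % n_loci:
        raise ValueError("locations must divide equally between the loci")
    per_locus = n_locations // n_loci
    step = 2 * math.pi / per_locus   # angular spacing on one locus
    offset = step / n_loci           # rotational offset between adjacent loci
    layout = {}
    for k in range(n_loci):          # locus A is innermost (k == 0)
        radius = base_radius * (k + 1)
        label = chr(ord("A") + k)
        for j in range(per_locus):
            theta = j * step + k * offset
            layout[f"{label}{j}"] = (radius * math.cos(theta),
                                     radius * math.sin(theta))
    return layout
```

For the example of Fig 4B, `generate_layout(4, 24)` gives six locations on each of four loci, twenty-four in all.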
The user may adjust the layout that is generated to re-position the locations. This adjustment will modify the positions at which the sound sources are to be positioned within the bounded space. The user may validate the layout and/or the positions of the locations.
This layout may be used as a basis for different pieces of music. However, different layouts may be generated for different pieces of music if required.
Mapping
In step 108 of Fig 1, the path is mapped onto the layout. The path is mapped or overlaid onto the layout as shown in Fig 6A. The mapping is achieved by overlaying co-ordinates of the path onto the same co-ordinates of the layout.
Spatialisation plan
In step 110 of Fig 1, locations on the layout are selected. In an example illustrated in Fig 6B, the co-ordinates of the locations that are closest to the co-ordinates of the path are selected 602. The mapping assists the selection of locations on the layout at which sound is to be emitted. The locations are selected on two or more concentric loci of the layout.
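By way of illustration, the selection of the locations closest to the mapped path (Figs 6A and 6B) may be sketched as a nearest-neighbour search. The function name and the data shapes are assumptions made for this sketch.

```python
def select_locations(path, layout):
    """Step 110: for each path co-ordinate, select the nearest layout location.

    `path` is a list of (x, y) points and `layout` maps location labels to
    (x, y) positions, as in the mapping of Fig 6A.  Duplicates are removed
    while keeping path order, giving the selected locations of Fig 6B.
    """
    selected = []
    for (px, py) in path:
        nearest = min(layout,
                      key=lambda k: (layout[k][0] - px) ** 2 +
                                    (layout[k][1] - py) ** 2)
        if nearest not in selected:
            selected.append(nearest)
    return selected
```

The result is an ordered subset of location labels, which can then serve as the selected locations of the spatialisation plan.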
The number of locations selected is linked to the number of sound sources to be positioned within the bounded space. In one example, a full set of sound sources is used for a piece of music, such that a sound source is positioned at each available location. In another example, a subset of locations is chosen for sound sources. Thus, the same layout can be used for different pieces of music, by selecting different subsets of locations for the sound sources.
The layout may be displayed as a spatialisation plan. Fig 5A shows a spatialisation plan which represents the locations within the bounded space from which sound can be emitted.
Each location in the spatialisation plan represents the position of a sound source and may include spatial coordinates (or a code identifying the spatial position) and/or an identification of each sub-part to be played for that location.
The selection of locations can be represented in the output of the spatialisation plan or sound source map for the piece of music.
Receiving musical data
In step 112 of Fig 1, musical data relating to a part for an instrument is received into the sound processing system. The musical data could be in the form of a stored file, for example digital data containing musical notation. The musical data could be entered by a composer as he/she composes the part, for example via a keyboard or touch screen. The musical data may contain multiple parts, with each part being processed in a similar manner.
Temporal component
In step 114, a temporal component associated with the path is determined. The temporal component determines temporal parameters of the movement, such as speed, direction (for example clockwise or anticlockwise, up or down) and time duration.
Where the path moves through a three-dimensional volume, the volume of the bounded space may be divided into sub-volumes or layers, as shown in Figs 3C and 3D respectively. In an example, a first layer corresponds to sound moving in a plane below the centre 306, a second layer corresponds to sound moving in the same plane as the centre 306, and a third layer corresponds to sound moving in a plane above the centre 306.
Division of parts into sub-parts
The piece of music being processed has one or more parts, with each part representing the music played by an individual instrument. Each part generally refers to a single strand of melody or harmony within a larger ensemble or a polyphonic musical composition. A solo piece of music would contain only one part.
At step 116 of Fig 1, each part is divided into sub-parts, with each sub-part being associated with a location on the spatialisation plan. The sound path and temporal component are used to determine how the part is divided into sub-parts. At step 118 each sub-part is allocated to a location on the spatialisation plan.
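The division and allocation of steps 116 and 118 may be sketched as follows. A constant-speed temporal component is assumed, so the sound dwells for an equal share of the total duration at each selected location; the note representation and names are assumptions made for this sketch.

```python
def divide_part(notes, selected_locations, total_duration):
    """Steps 116-118: divide a part into sub-parts along the path.

    Each note is a (start_time, pitch) pair.  With a constant-speed temporal
    component, the sound dwells at each selected location for an equal share
    of `total_duration`, so a note is allocated to the location the sound has
    reached at the note's start time.
    """
    dwell = total_duration / len(selected_locations)
    subparts = {loc: [] for loc in selected_locations}
    for start, pitch in notes:
        index = min(int(start // dwell), len(selected_locations) - 1)
        subparts[selected_locations[index]].append((start, pitch))
    return subparts
```

A non-constant temporal component would simply replace the fixed dwell time with a per-location duration.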
An example division of parts into sub-parts will now be described with reference to Figs 11A & 11B and Fig 12.
Fig 11A shows a conventional seating arrangement for a string ensemble having four instrument sections: violin, viola, cello and contrabass. The audience is positioned in front of the seating arrangement. Fig 11B shows a conventional music score for the string ensemble. The music score does not have notation that describes a movement or motion of the piece of music but rather relates to a static piece of music that does not contain spatialisation.
Fig 12 shows a music score for the same piece of music that has been spatialised according to the methods described herein. The music score represents the same piece of music to be played by the ensemble (violin, viola, cello and contrabass), with the difference that it has notation that describes the movement or motion of the piece of music. Whilst conventionally each instrument is positioned in a particular section (Fig 11A), according to the methods described herein the musical notes in the spatialised music score (Fig 12) are allocated to a position, and an instrument is subsequently assigned to each position according to the spatialisation plan (Fig 5A). In the music score of Fig 12 the musical notes to be played by the violins are assigned to the innermost locus A (A0, A+1, A+2, A00, A-2, A-1), the violas are assigned to locus B (B0, B+1, B+2, B00, B-2, B-1), and so on (the scores for the violoncello/cello and contrabass have not been shown for brevity). In the example shown in Fig 12 the musical notes played by the instruments on locus A provide a clockwise motion to the piece of music and the musical notes played by the instruments on locus B provide an anti-clockwise motion. When the piece of music is played according to the spatialised music score, the audience at the common centre experience the piece of music in an immersive manner, such as the music moving or flowing around them.
Music score
A piece of music which has been processed to provide a spatial component may be displayed on a special music score.
The term "music score" typically refers to the musical notation representing a piece of music; this can relate to a solo composition or an ensemble composition. In a conventional music score, each part (i.e., relating to a particular instrument or group of instruments playing the same melody or harmony) is represented on its own stave.
In a piece of music with a spatial component, each part has been divided into sub-parts. A special music score for music with a spatial component allows the sub-part for each location to be separately displayed. The music score has multiple staves, each stave representing the sub-part corresponding to a location within the bounded space. Each stave may be provided with a code relating to the spatial position at which the sub-part will be played.
An example music score is shown in Fig 9 and Fig 12. Musical notes on staves designated with an 'A' location (A0, A+1, A+2, A00, A-2, A-1) may be played by first violins and musical notes on staves designated with a 'B' location (B0, B+1, B+2, B00, B-2, B-1) may be played by second violins, and so on. The grouping of staves into locations (A, B, C, D) divides the piece of music between the different musical parts. Each grouping corresponds to a sound associated with a different instrument.
Simulation
After the spatial components to the music have been planned, a simulation process allows the simulation of the resulting movement of sound through the environment.
As shown in Fig 7 a simulated path 702 in the bounded space is calculated according to the selected locations, their associated sub-parts and the temporal component. The simulated path indicates a predicted movement of sound through the bounded space over the duration of time.
In an example the sound emitted from each sound source is simulated according to the simulated path. The simulation of the sound movement through the bounded space may be achieved using visual means. The simulated path may be displayed on a monitor, for example using colour.
In one embodiment, the calculation of the simulated path includes parameters relating to interactions between sound emitted by the sound sources and the boundary of the bounded space and/or interactions between sounds emitted by multiple sound sources and/or psychoacoustic data. These parameters may be obtained from a database or algorithm.
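A minimal sketch of the simulated path of Fig 7, assuming a constant speed and straight-line interpolation between the selected locations; the parameters relating to boundary interactions and psychoacoustics mentioned above are omitted, and the names are assumptions.

```python
def simulate_path(locations_xy, temporal_steps):
    """Predicted movement of sound (Fig 7): linear interpolation between the
    selected locations, sampled at `temporal_steps` points (>= 2) over the
    total duration given by the temporal component.
    """
    n_segments = len(locations_xy) - 1
    points = []
    for i in range(temporal_steps):
        t = i / (temporal_steps - 1) * n_segments  # position along the segments
        seg = min(int(t), n_segments - 1)          # which segment we are on
        frac = t - seg                             # fraction along that segment
        (x0, y0), (x1, y1) = locations_xy[seg], locations_xy[seg + 1]
        points.append((x0 + frac * (x1 - x0), y0 + frac * (y1 - y0)))
    return points
```

The returned series of co-ordinates can then be displayed on a monitor as the simulated path.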
The sound path, the layout, the locations, the selected locations, and simulated path can be defined using co-ordinates of the co-ordinate system.
Visualisation
During a performance of music with a spatial component, the immersive experience is improved by the provision of visual means which give a visual representation of the movement of sound. A plurality of visual means, such as light sources, monitors and projection screens, are provided.
In one embodiment, each visual means is associated with a sound source. For example, a visual means may be positioned adjacent each sound source or at the same angular position from the common centre as the sound source. In this arrangement, the spatial coordinates of the sound sources and the temporal component give sufficient information to coordinate the visual signals with the sound emitted from each sound source. The spatial coordinates may be derived from the spatialisation plan.
In another embodiment, the arrangement of the visual means is independent of the sound sources. In this case, a visual path needs to be planned and mapped onto the layout to determine spatial coordinates of the visual means. In this case, the spatial coordinates of the visual means, the spatial coordinates of the sound sources and the temporal component are required to coordinate the visual signals with the sound emitted from each sound source.
Sound processing system
A sound processing system suitable for carrying out the method is shown in Fig 8. The sound processing system comprises a processor 802, memory 804, a user interface 806 such as an input device, database 808, audio output 810 and visual means 812. A user can input data to the processor via the user interface. For example, the data inputted may relate to the number of sound sources. The visual means may comprise a display screen.
The database may comprise data in relation to predefined bounded spaces, predetermined paths or pieces of music to be spatialised. The audio output, for example an electronic signal to an electronic speaker, is used to output sound data. The memory may comprise a computer program for processing sound according to the methods described herein.
Performance
The spatialisation plan may be used to plan a performance of music having a spatial component. A sound source, for example a musician with a musical instrument or an electronic speaker, is positioned at each location on the spatialisation plan and an audience is positioned at the centre. As described above, each location has an associated sub-part. The piece of music is performed by each sound source emitting music according to its sub-part. For a live performance, each musician may use the music score (e.g., Fig 12) showing their sub-part.
The audience at the centre will have an immersive experience with the impression of the music moving around them.
Recording
As illustrated in Fig 10A, the sound emitted from a sound source may be recorded at each selected location. The recording at each location provides an individual audio file for a subpart performed at that location. The individual audio files from the recordings at each location may be combined to provide a multitrack file. A suitable recording device at each location is a single microphone directed towards each sound source, for example a microphone directed towards each musician or attached directly to each instrument. A suitable microphone is a Core 4099 Instrument Microphone manufactured by DPA™, for example positioned on a violin. Another suitable microphone is an ST4006A standard instrument recording microphone manufactured by DPA™, directed towards and close to the sound source. The recordings obtained through the recording devices provide multiple recorded tracks that can be up-mixed without loss of sound quality.
Alternatively, or in addition, the sound may be recorded at the origin of the bounded space or common centre, as shown in Fig 10B. In this case, the sound recording means may comprise multiple directional microphones. A suitable recording device used at the common centre is an Omni Pro binaural microphone manufactured by 3Dio™. This contains eight discrete microphones, providing omnidirectional recording ability. This recording device provides eight separate tracks which may be up-mixed manually, using the separate tracks, or automatically. Each microphone is provided with an ear shape, which assists in the capture of the spatial performance as it would naturally be heard at each audio input.
Another suitable recording device is the ORTF-3D eight-channel, eight-microphone device manufactured by SCHOEPS™, which allows for binaural up-mixing to provide both an accurate simulation and recording of a spatial performance.
Multitrack master file
The multitrack file may be used to reproduce the music via a multi-channel sound system.
The multitrack file may be compiled into a standard ambisonic format or any other suitable file format or codec. The multitrack file may include within its code the MIDI of each subpart, such that the file can be read by any compatible digital music sheet reader for playing back the sub-parts.
The multitrack file may be used to generate an audio-visual representation of sound. This is possible because the spatial coordinates associated with each track and the temporal component stored on the multitrack file may be used for positioning and timing of the visual representation of the sound. This is particularly effective in an integrated system using virtual reality, where the audio is provided by speakers and the visuals are provided in a VR headset. In an example, the visualisation of the movement of sound can be combined with a head-tracking system in virtual, augmented, or mixed reality for an even more immersive experience, where the listener can simulate head movement to see the visual representation shown in the direction of the sound. This provides the audience with a 360-degree directional immersive experience, and this experience is further enhanced when a pre-recorded track is combined with virtual reality.
The systems and methods described herein provide a unique way to obtain a precise and accurate representation of the location of a sound in space, as the relationship is determined during the composition process. In other words, the spatial component is considered before the composition and/or recording stage.

Claims (26)

  1. CLAIMS1. A system for providing a spatial component to data relating to music, the system comprising: means for defining a bounded space representing space in which the sound will be emitted; means for defining a path in the bounded space, wherein the path indicates a desired movement of the sound through the bounded space; means for generating a layout within the bounded space, wherein the layout comprises a centre and two or more loci which encompass the centre, the two or more loci having different sizes and being concentric about the centre; means for mapping the path onto the layout; means for selecting locations on the two or more loci, representing locations at which a plurality of sound sources are to be positioned; means for receiving musical data relating to a part for an instrument; means for determining a temporal component associated with the path, the temporal component determining the timing of the sound moving along the path; means for dividing the musical data relating to a part into subparts, using the path and the temporal component to determine how the part is divided; and means for allocating each subpart to a location.
  2. 2. A system according to claim 1, comprising means for outputting and/or storing the musical data for each subpart separately.
  3. 3. A system according to any preceding claim, comprising means for determining spatial parameters for each location, optionally wherein the spatial parameters comprise spatial coordinates.
  4. 4. A system according to any claim 3, comprising storage means for storing each subpart together with the spatial parameters associated with the location at which the sound for the sub-part is to be emitted.
  5. 5. A system according to any preceding claim, comprising means for receiving visual data relating to a series of visual signals, means for defining a visual path in the bounded space, wherein the visual path indicates a desired movement of visual signals through the bounded space; means for mapping the visual path onto the layout; means for selecting visual signal locations on the layout at which a plurality of visual outputs are to be positioned; and means for allocating a visual signal to each visual signal location, based on the visual path and the temporal component.
  6. 6. A system according to claim 5 wherein the visual path corresponds to the sound path.
  7. 7. A system according to any preceding claim, comprising means for calculating a simulated path in the bounded space according to the selected locations and the temporal component, wherein the simulated path indicates a predicted movement of sound through the bounded space.
  8. 8. A system according to any preceding claim, comprising means for visually displaying the simulated path.
  9. 9. A system according to any preceding claim, wherein the locations are positioned on the two or more loci of the layout and wherein the selected locations are distributed around each locus.
  10. 10. A system according to any preceding claim, wherein each of the selected locations has a different angular position about the centre.
  11. A system according to any preceding claim, wherein the two or more loci are circular and wherein the selected locations are distributed on the circular loci such that they form two or more arcs extending outwards from the centre.
  12. A system according to any preceding claim, comprising output means configured to output the layout and/or selected locations to a sound source map.
  13. A system according to any preceding claim, wherein the musical data comprises first and second parts, wherein the first part is divided into first subparts and the second part is divided into second subparts, and wherein the first subparts are allocated to selected locations on a first locus and the second subparts are allocated to selected locations on a second locus.
  14. A system according to any preceding claim, comprising display means configured to display the part in musical notation comprising multiple staves, and wherein each sub-part is displayed on a separate stave.
  15. A method of providing a spatial component to data relating to music, the method comprising the following steps: defining a bounded space representing space in which sound relating to the part will be emitted; defining a path in the bounded space, wherein the path indicates a desired movement of the sound through the bounded space; generating a layout within the bounded space, wherein the layout comprises a centre and two or more loci which encompass the centre, the two or more loci having different sizes and being concentric about the centre; mapping the path onto the layout; selecting locations on the layout at which a plurality of sound sources are to be positioned; providing musical data relating to a part for an instrument; determining a temporal component associated with the path, the temporal component determining the timing of the movement of sound along the path; dividing the musical data relating to a part into subparts, using the path and the temporal component to determine how the part is divided; and allocating each subpart to a location.
  16. A method according to claim 15, comprising outputting and/or storing the musical data for each subpart separately.
  17. A method according to any of claims 15-16, comprising determining spatial parameters for each location, optionally wherein the spatial parameters comprise spatial coordinates.
  18. A method according to claim 17, comprising storing each sub-part together with the spatial parameters associated with the location at which the sound for the sub-part is to be emitted.
  19. A method according to any of claims 15-18, comprising calculating a simulated path in the bounded space according to the selected locations and the temporal component, wherein the simulated path indicates a predicted movement of sound through the bounded space.
  20. A method according to any of claims 15-19, comprising outputting the layout and/or selected locations within the bounded space as a sound source map.
  21. A method according to any of claims 15-20, comprising displaying each sub-part in musical notation comprising multiple staves, wherein each sub-part is displayed on a separate stave.
  22. A display of the subparts in musical notation produced according to the method of claim 21.
  23. A method of recording music with a spatial component, wherein the method comprises the following steps: providing a spatial component to data relating to music according to any of claims 15-22; providing a space corresponding to the bounded space; providing sound sources at positions corresponding to locations on the layout; positioning a sound recording means in the region of each sound source; and emitting sounds corresponding to the subparts, path and temporal component at the plurality of sound sources and recording the sound at each sound recording means.
  24. A method of recording music with a spatial component, wherein the method comprises the following steps: providing a space in which sound will be emitted; providing musical data relating to a part for an instrument, the musical data being divided into sub-parts, with each sub-part having an associated spatial parameter; providing sound sources at locations corresponding to the spatial parameters; providing a sound recording means in the region of each sound source; and emitting sounds corresponding to the musical data at the plurality of sound sources and recording the sound at each sound recording means.
  25. A method according to claim 23 or claim 24, comprising creating a multitrack audio file from the recording, the multitrack audio file comprising a track corresponding to each sound recording means.
  26. A music score for reproducing data relating to music having a spatial component provided by the system of any of claims 1 to 14 or the method of any of claims 15 to 25, the music score comprising a plurality of staves, wherein each stave corresponds to each respective selected location on the layout, and musical notes corresponding to each subpart respectively allocated to each location and represented on each respective stave, thereby to provide an immersive spatial sound in the bounded space when sound is emitted, according to the musical notes, from a sound source at each location.
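The method of claim 15 — generating a layout of concentric circular loci, sweeping a timed path over it, and dividing a part into subparts allocated to locations — can be illustrated with a minimal sketch. All names here (`generate_layout`, `allocate_subparts`), the linear sweep used as the path, and the linear temporal component are illustrative assumptions for this example, not part of the claimed system:

```python
import math

def generate_layout(centre=(0.0, 0.0), radii=(2.0, 4.0), arms=6):
    """Concentric circular loci about a centre; locations are placed at the
    same angles on each locus so they line up as radial arcs extending
    outwards from the centre (cf. claims 4 and 11). Illustrative only."""
    cx, cy = centre
    locations = []
    for r in radii:
        for k in range(arms):
            theta = 2 * math.pi * k / arms
            locations.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return locations

def allocate_subparts(notes, locations, duration):
    """Divide a part (here a list of (onset_seconds, pitch) tuples) into
    subparts: each note is assigned to the location the path has reached at
    the note's onset. The path is assumed to be a simple circular sweep and
    the temporal component a linear mapping of onset time to path progress;
    both are placeholders for the path and timing defined in claim 15."""
    n = len(locations)
    subparts = {i: [] for i in range(n)}
    for onset, pitch in notes:
        progress = (onset % duration) / duration   # temporal component (linear)
        idx = int(progress * n) % n                # path position -> location index
        subparts[idx].append((onset, pitch))
    return subparts

layout = generate_layout()                         # 2 loci x 6 arms = 12 locations
notes = [(t * 0.5, 60 + t) for t in range(16)]     # toy ascending line, one part
subparts = allocate_subparts(notes, layout, duration=8.0)
```

Each entry of `subparts` then corresponds to one selected location (and, per claims 14 and 26, one stave of the resulting score), so playing the subparts back from loudspeakers at those coordinates moves the line through the bounded space along the chosen path.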
GB2103458.2A 2021-03-12 2021-03-12 Method and system for providing a spatial component to musical data Pending GB2607556A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2103458.2A GB2607556A (en) 2021-03-12 2021-03-12 Method and system for providing a spatial component to musical data

Publications (2)

Publication Number Publication Date
GB202103458D0 GB202103458D0 (en) 2021-04-28
GB2607556A true GB2607556A (en) 2022-12-14

Family

ID=75623143

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130157759A1 (en) * 2011-12-14 2013-06-20 Bally Gaming, Inc. Gaming machine having a simulated musical interface
US20140133682A1 (en) * 2011-07-01 2014-05-15 Dolby Laboratories Licensing Corporation Upmixing object based audio

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JEFFREY RIEDMILLER ET AL: "Immersive & Personalized Audio: A Practical System for Enabling Interchange, Distribution & Delivery of Next Generation Audio Experiences", ANNUAL TECHNICAL CONFERENCE & EXHIBITION, SMPTE 2014, vol. 124, no. 5, 26 October 2015 (2015-10-26), Hollywood, CA, USA, pages 1 - 23, XP055611936, ISBN: 978-1-61482-954-6, DOI: 10.5594/j18578 *
JÉRÉMIE GARCIA ET AL: "Trajectoires", HUMAN FACTORS IN COMPUTING SYSTEMS, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 7 May 2016 (2016-05-07), pages 3671 - 3674, XP058258109, ISBN: 978-1-4503-4082-3, DOI: 10.1145/2851581.2890246 *
