WO2004060509A2 - Free-space (non-touch) human interface for interactive music, integral musical instrument, immersive media control - Google Patents

Free-space (non-touch) human interface for interactive music, integral musical instrument, immersive media control Download PDF

Info

Publication number
WO2004060509A2
WO2004060509A2 PCT/US2003/041798 US0341798W
Authority
WO
WIPO (PCT)
Prior art keywords
type
sensor
midi
space
free
Prior art date
Application number
PCT/US2003/041798
Other languages
English (en)
Other versions
WO2004060509A3 (fr)
Inventor
David Clark
John Gibbon
Original Assignee
David Clark
John Gibbon
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by David Clark, John Gibbon filed Critical David Clark
Priority to EP03808641A priority Critical patent/EP1584087A4/fr
Priority to AU2003303523A priority patent/AU2003303523A1/en
Publication of WO2004060509A2 publication Critical patent/WO2004060509A2/fr
Publication of WO2004060509A3 publication Critical patent/WO2004060509A3/fr

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/0008 Associated control or indicating means
    • G10H1/0033 Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H1/0041 Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
    • G10H1/0058 Transmission between separate instruments or between individual components of a musical system
    • G10H1/0066 Transmission between separate instruments or between individual components of a musical system using a MIDI interface
    • G10H1/02 Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos
    • G10H1/04 Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation
    • G10H1/053 Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only
    • G10H1/055 Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only by switches with variable impedance elements
    • G10H1/0553 Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only by switches with variable impedance elements using optical or light-responsive means
    • G10H2220/00 Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/155 User input interfaces for electrophonic musical instruments
    • G10H2220/341 Floor sensors, e.g. platform or groundsheet with sensors to detect foot position, balance or pressure, steps, stepping rhythm, dancing movements or jumping
    • G10H2220/401 3D sensing, i.e. three-dimensional (x, y, z) position or movement sensing
    • G10H2220/405 Beam sensing or control, i.e. input interfaces involving substantially immaterial beams, radiation, or fields of any nature, used, e.g. as a switch as in a light barrier, or as a control device, e.g. using the theremin electric field sensing principle
    • G10H2220/411 Light beams
    • G10H2220/415 Infrared beams

Definitions

  • Symmetry-Enhancing Media Feedback: Even when given arbitrary inputs, symmetry-enhancing transfer functions maintain or increase the aesthetic quality of music outputs, including rhythmic tempo/meter/pattern alignment, timbre, and harmonics of chord-scale note alignment. Effortless play with a pleasing result is spontaneous for unpracticed players and for those without musical training. This ease for beginners does not, however, detract from the large scope of subtle, complex and varied creative musical expression achievable by practiced and virtuoso players.
  • the Free-Space Interface is embodied in two forms, a floor Platform (for full body play) and a floor-stand-mounted Console (for upper body play).
  • the invention employs the following sets of opto-mechanical design features, human-factors ergonomic processes, and operational features. This section summarizes the scope of the invention in broad conceptual terms, including usages of certain special terminology employed where necessary, and without specific references to the Drawings.
  • Sensors are arranged within the surface of the Interface radially (circularly), within certain preferred angular and radial spacing constraints.
  • Narrow-field optical, passive, through-beam (line-of-sight), shadow-transition detecting Type I sensors are employed.
  • An overhead optical source fixture assembly provides an invisible infrared (IR) flood to generate the player IR shadows which affect Type I sensor shadow transitions, or "triggers.”
  • IR infrared
  • Type I sensors with associated electronics and software in preferred embodiments also exhibit Speed detection, in the form of detecting the lateral translation speed of any shadowing or unshadowing object across the line-of-sight of a Type I sensor.
  • Type II Sensors are also employed.
  • Type I and Type II sensors are employed together, in practice with strategically cross-multiplied data spaces.
  • Software logic synthesizes the two data types into an integral 6-degrees-of-freedom, real-time non-contact body sensing system.
  • Multiple forms of active visual feedback are spatially co-registered on-axis with (surrounding) the passive (through-beam) sensor trigger regions, including planar LED-illuminated light pipes, and projecting microbeams preferably used with fogging materials. Active feedback forms a player-surrounding cone shape as a frame of reference.
  • a visible player shadow is employed as an ergonomic feedback.
  • the visible shadow is obtained by means of the overhead fixture assembly which combines the invisible infra-red (IR) flood source with a low-intensity but visible flood source for this purpose.
  • the resulting visible player shadow is precisely spatially co-registered and aligned with the array of Type I sensors and with the surface light pipes and immersive active microbeams.
  • Intentional regions of spatial ambiguity and spatial displacements of visual feedback are employed within specific design constraints. These involve the spatial configuration of the Type I sensor in relationship to its surrounding concentric planar light pipes, features of the active immersive beams, and also the player's visible shadow.
  • the passive aspect of the visible microbeams indicates player position before affecting trigger events (e.g. player position relative to the potential but not actualized trigger of Type I sensors).
  • Class A: fixed color, no microbeams
  • Class B: variable RGB color, no microbeams
  • Class C: fixed color, with microbeams
  • Class D: variable RGB color, with microbeams
  • Our invention constitutes a transparent human interface that is self-evident, easy, clear, precise and creatively expressive.
  • Our invention is fully content-programmable. It provides simultaneously effortless and precise play across the full range of popular, ethnic, classical, and other musical styles and genres, in seamless aesthetic integration across all musical parameters with pre-authored accompaniment, including prerecorded titles configured for free-space interactive music "play-along".
  • Transparent, trigger-event-by-event rhythmic time-quantizing processes operate in terms of individual notes. These temporal adjustment processes maintain a spatially- and
  • GUI Graphic User Interface
  • MIDI Control Change types Modulation, Breath Control, Portamento, Pan, Expression, Tremolo Depth, Vibrato Depth, Chorus Depth, etc.;
  • CGI Computer Graphic Images
  • a MIDI protocol is employed which is designed specifically for free-space content: the CZB Command Protocol. This protocol enables convenient and flexible content-title authoring and control of the vast realm of disclosed transfer functions, including storage and recall utilizing conventional MIDI sequencer tracks.
  • Two additional free-space MIDI protocols are also disclosed, used for intercommunication between the major functional modules of the complete free-space interactive music media system. These are the Free-Space Event Protocol and the Visuals & Sensor Mode Protocol. (An illustrative encoding sketch follows below.)
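The byte-level layouts of these free-space protocols are not given in this excerpt. Purely as an illustrative sketch of the general approach the text describes (protocol commands stored and recalled on conventional MIDI sequencer tracks), the following Python fragment encodes a hypothetical 14-bit CZB Setup parameter as a standard Control Change MSB/LSB pair; the channel, controller numbers, and parameter layout are assumptions, not the patent's actual assignments.

```python
# Hypothetical sketch only: the CZB Command Protocol's real message layout
# is not disclosed in this excerpt. This shows one conventional way a 14-bit
# setup value could ride on a MIDI sequencer track as a CC MSB/LSB pair.

def cc(channel: int, control: int, value: int) -> bytes:
    """Raw MIDI Control Change message (status byte 0xB0 | channel)."""
    assert 0 <= channel < 16 and 0 <= control < 128 and 0 <= value < 128
    return bytes([0xB0 | channel, control, value])

def encode_czb_setup(channel: int, param_id: int, value14: int) -> list[bytes]:
    """Encode one assumed CZB Setup parameter (0..16383) as MSB then LSB.
    Controllers 0-31 pair with 32-63 for LSB in standard MIDI practice."""
    msb, lsb = (value14 >> 7) & 0x7F, value14 & 0x7F
    return [cc(channel, param_id, msb), cc(channel, param_id + 32, lsb)]

# Example: capture a hypothetical "Quantize = 480 ticks" setup on channel 16.
for msg in encode_czb_setup(channel=15, param_id=16, value14=480):
    print(msg.hex())
```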
  • each Creative Zone Behavior Control Type is applied in a Creative Zone Behavior together with specific employed transfer function Control Parameters.
  • MIDI Notes and Local Visuals: these include values such as LSB/MSB (least significant byte / most significant byte), % Anchor, Map Type, Map Group, Custom Map #, Groove #, Groove Bank, Mode flags, # Values (depth), Low Value, High Value, etc.
  • the invention employs multiple transparent transfer functions (551, 552, 553) mapping from a 6-dimensional (563) input feature space (546) of the player's sensor-detected full-body "free-space" state: radial extension or "Reach" (578), angular rotation or "Position" (579), Height (580), Speed (581), Precision (582) and Event timing (583). (An illustrative sketch of this feature space follows below.)
  • Methods of active visual feedback employ coordinated and programmable (color) changes in "intensity" or Lightness (587), Hue (584), Hue Variation (585) and Saturation (588).
  • Such visual changes are "polyphonic" (e.g. occurring at multiple locations, overlapping, and in sync with corresponding polyphonic musical note responses).
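As a concrete reading of the six-dimensional feature space (546, 563) and its mapping into media responses, the sketch below names the six features and shows one possible transfer function into MIDI note Velocity; the normalized ranges and weightings are assumptions for illustration, not the disclosed transfer functions (551, 552, 553).

```python
from dataclasses import dataclass

@dataclass
class FreeSpaceState:
    """The six kinesthetic input features named in the text; the numeric
    encodings below are assumed for illustration."""
    reach: float      # radial extension (578), 0.0 center .. 1.0 outermost
    position: float   # angular rotation (579), degrees 0..360
    height: float     # Type II height (580), normalized 0..1
    speed: float      # shadow-transition speed (581), normalized 0..1
    precision: float  # spatial precision (582), normalized 0..1
    timing: float     # event timing (583), beat offset from the tempo grid

def velocity_transfer(s: FreeSpaceState) -> int:
    """One hypothetical Velocity (572) transfer function: faster and
    higher gestures yield louder notes (MIDI velocity 1..127)."""
    v = 0.6 * s.speed + 0.4 * s.height
    return max(1, min(127, int(1 + v * 126)))
```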
  • Player actions include intercepting an array of photonic sensor trigger regions which are nested within the conical visual frame of reference, and which are inputs to the scope of transfer functions (551, 552, 553) resulting in media outputs.
  • Two Types (I & II) of sensors are employed: Type I detecting the player's shadowing (23) and unshadowing (24) of the array of optical sensors (e.g., intercepting an overhead visible and infrared (IR) dual-source Flood (831) within its lines-of-sight through to the sensors), and Type II detecting the player's height by means of reflective ranging techniques.
  • the "LightDancerTM” or Platform [Series A, B] is mounted at floor level and requires a relatively large footprint of contact with the venue floor (2.5 m) 2 .
  • the "SpaceHarpTM” or Console [Series C] is stand-mounted above floor level and requires a relatively compact footprint of stand contact with the venue floor (1.0 m) 2 although it extends above floor level over a relatively large area (2.0 m) x (1.0 m).
  • the Platform embodiment encourages unrestricted and arbitrary full-body motions (except for torso translation within approximately 2.0 m) and senses the player's (17) full torso, head, arms and legs.
  • the Console embodiment encourages unrestricted upper- torso motion and primarily senses the upper torso including head and arms.
  • the Platform venue also ideally includes an additional zone of surrounding unobstructed space (approximately 0.5 m around its periphery), while the Console venue only requires unobstructed space along its "inside" or the side of player (147) access, (1.0 m) +/- (0.5 m).
  • Kinesthetic Spatial Sync: Multiple correlated passive and active visual (548) and musical (547) responses, in the context of the specified preferred opto-mechanical constraints, entrain the player's perceptual-motor perception into identifying input actions (23, 24) as unified with the synchronous active (output) responses (306), and contextualize the player's actually (most of the time) asynchronous sensor trigger (input) actions in terms of spatio-temporal Proximity (305) to the synchronous events.
  • Kinesthetic Spatial Sync is, in the strict classical sense, a biofeedback entrainment effect.
  • the Kinesthetic Spatial Sync feedback paradigm furthermore entrains players to perceive their body's input actions (23, 24) to be exactly spatially synchronized and transparently tempo-aligned with multisensory immersive media output responses, even while such responses (510, 511, 512) are clock-slaved (477) to an arbitrary internal or external source of variable (tempo) Clock Master (472), such as a CD audio track (513), MIDI sequence (497), or digital audio track (525).
  • the invention may be employed as an optimal ergonomic human interface for interactive music, a virtuoso full-body musical performance instrument, an immersive visual media performance instrument, a 6-degree-of-freedom full-body spatial input controller, a full-body Augmented Reality (AR) interface, a limited motion-capture system, and a choreography pattern recognition and classification system.
  • the MIDI interface or MIDI input device may be utilized in solo (unaccompanied) venues as well as accompanied with MIDI sequences and/or audio pre-recordings.
  • the invention also includes provision for deployment of (n) multiple such interfaces in precision synchronization of all aesthetic parameters of media response.
  • Multiple free-space interfaces may be used simultaneously and conjunct within a shared (common / adjacent) physical media space or within a shared logical media space spanning physically remote locations via data networks such as LAN, WAN and the Internet.
  • Such free-space interfaces may also be used with aesthetic result in various mixed ensembles such as together with traditional acoustic musical instruments, other electronic MIDI controllers and voice.
  • the invention is also suitable as a six-degrees-of-freedom interactive human interface to control 3D robotic lighting, lasers, 3D computer graphics, 3D animation, and 3D virtual-reality systems having outputs of either pseudo-3D (planar displays) and/or immersive-3D (stereoscopic or holographic displays).
  • Timbre: Today's electronic keyboards employing sound generators and synthesizers provide, with the nearly effortless touch of a key, transparent access to aesthetic timbres from large libraries of audio output sounds (using techniques such as FM synthesis, wavetable, DLS data, samples, etc.). This results in a significant reduction of performance-skill requirements (as compared to instruments such as brass, woodwind or unfretted strings).
  • Rhythm is integral to inter-subjective perception of ongoing aesthetic character in musical expression, such that if rhythm is absent or irregular (with the exception of some solo contexts), more often than not the temporally chaotic character of events "outweighs" the degree of musicality in other elements of the performance.
  • Without an enhanced interactive musical system or instrument employing a rhythmic transfer function, the non-musician or non-rhythmic "casual" player at times faces a steep mental and physical obstacle, requiring focused concentration, coordination and effort to overcome this barrier and express an intersubjectively aesthetic performance.
  • Players must in this case exert sufficient perceptual-motor control to adjust their body behaviors precisely in relation to tempo and meter, this being critical even if timbre, effects and/or pitch are transparently being adjusted by other available methods or equipment.
  • This Free-space Instrument implements Transparent Rhythmic Processing.
  • the invention advances the evolution of both rhythmic transfer function transparency and symmetry. It introduces new constraints of body-motion (gesture) mappings into musical responses, while skillfully exploiting those constraints to yield new freedoms of creative expression. Specifically, for example, it employs certain techniques of real-time Quantization (574) and auto-Sustain (573) adjustments to player input actions, thus applying symmetry in the time domain (sketched below).
  • Critical to achieving transparency in these temporal transfer functions are the specifically disclosed combined methods of entrainment (306) whereby strategic delays are made in practice "invisible" or re-contextualized [Sheets D2,
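The following is a minimal sketch of per-event time Quantization (574) with auto-Sustain (573), assuming times are measured in MIDI clock ticks; the forward-only quantization shown here (delaying a trigger to the next grid boundary) is one simple reading of the "strategic delays" mentioned above, not the patent's full entrainment logic.

```python
# Minimal sketch, assuming 480 ticks per quarter note and forward-only
# quantization; the disclosed methods are richer than this.

def quantize_forward(event_tick: int, grid: int) -> int:
    """Delay a trigger event to the next grid boundary (never early, so
    the strategic delay can be masked by the entrainment methods)."""
    remainder = event_tick % grid
    return event_tick if remainder == 0 else event_tick + (grid - remainder)

def schedule_note(event_tick: int, grid: int, sustain_ticks: int) -> tuple[int, int]:
    on = quantize_forward(event_tick, grid)
    off = on + sustain_ticks   # auto-Sustain: Finish occurs without
    return on, off             # requiring a separate unshadow action

# 16th-note grid (120 ticks): a trigger at tick 1013 sounds at 1080,
# and an auto-Sustain of half a beat (240 ticks) ends it at 1320.
print(schedule_note(1013, grid=120, sustain_ticks=240))  # (1080, 1320)
```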
  • a single compact illumination fixture (19, 125) is employed above the free-space interface floor Platform (1) or Console (130), containing optically superposed (111) IR (infrared) and visible optical flood sources (831, 832).
  • the IR flood component (831) is utilized with the primary or Type I sensor (16, 43) array to sense IR shadows (18, 148) produced by objects such as players (17, 147) or their clothing or optional props intercepting Type I sensor "trigger regions" (20, 21, 22, 144, 145).
  • the overhead source assembly (19, 125) produces dual and co-aligned output components: (a) a near-IR (invisible) component between 800 nm and 1000 nm wavelength (831), amplitude-pulsed or intensity square-wave cycled by a self-clocked circuit (105) at a frequency of 2.0 to 10.0 kHz, as source for Type I sensors; together with (b) a continuous visible component (832) at a wavelength between 400 nm and 700 nm.
  • Both sources are optically and mechanically configured (103, 111, 112) to illuminate or flood the entire interface surface (1, 130) situated beneath, including in particular all the Type I sensors (16, 73, 95, 99, 143, 233) comprising the interface's Type I array.
  • the source fixture's (19) height is adjustable (833) to (3.0 m) +/- (1.0 m) above the center "hex" segment (2) of the floor Platform.
  • the source fixture's (125) position (889, 890, 891) is fixed at (1.0 m) +/- (0.3 m) in height above the top of the interface (130), and is positioned by means of supports (126) off-center to the "outside" or convex side of the Console enclosure (130), as compared to the typical player's (147) "inside" position on the concave side.
  • the IR and visible (107) sources are physically separate sources optically combined, so that the IR may employ its clock pulse circuit (105) while the visible remains continuous, thus avoiding a flickering visible shadow (892, 893).
  • a beam combiner (111) is employed such that the dual frequencies exit the fixture's baffle aperture (112) superposed.
  • the visible source (107) exit aperture (839) is wider, at 30.0 mm +/- 10.0 mm, being thereby a slightly spatially extended source by means of an appropriate extended filament or equivalent in lamp (107), thus resulting in visible shadow (892, 893) blurred edges (894) (for the ergonomic reasons disclosed).
  • Optical filter (109) may also include a diffuser function in the relevant visible wavelengths to achieve this result.
  • Fig. A6-b: Large Acceptable Margin-of-Error in Fixture Alignment over Platform.
  • Fig. A6-b: The combination of: (a) a single IR flood source (79) for all Type I sensors; (b) the Type I sensor-processing AGC (automatic gain control) logic of software (427) residing in memory (468, 469); and the further measures employed to suppress optical crosstalk, including (c) the IR source clock (105); (d) band-pass filters (191); and (e) the mirrored sensor well (189, 204),
  • the invention employs a primary (Type I) optical sensor array comprised of a plurality of (n) separate
  • Such software (427) may employ polling of such registers or memory, and in the preferred embodiment the sensor I/O circuit (416) further employs a processor-interrupt scheme.
  • Software (427) interprets the value(s) of sensor I/O data and determines whether or not a "valid" shadow-transition event (23, 24) has occurred. If deemed valid, this warrants reporting the valid trigger and its Speed (581) value by means of an employed MIDI protocol (444) to the CZB (Creative Zone Behavior) Processing Module software (461) on the host computer (87) for further contextual processing to affect media responses (547, 548). (A validation sketch follows below.)
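A minimal validation sketch follows, assuming each Type I sensor delivers a normalized light level per A/D sample; the threshold and debounce values are assumptions (the debounce is chosen to be consistent with the ~3.0 msec duty cycle stated later), and the real logic additionally applies AGC and Speed extraction.

```python
# Illustrative sketch only: actual thresholds and the Free-Space Event
# Protocol (445) message layout are not specified in this excerpt.

DEBOUNCE_MS = 3.0  # assumed, consistent with the ~3.0 msec event duty cycle

class TriggerValidator:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold   # normalized shadow depth, assumed scale
        self.shadowed = False
        self.last_change_ms = float("-inf")

    def sample(self, level: float, now_ms: float):
        """Return "shadow" (23), "unshadow" (24), or None for one sample.
        'level' is the received IR intensity, normalized 0..1."""
        state = level < self.threshold   # a shadow drops the received level
        if state != self.shadowed and now_ms - self.last_change_ms >= DEBOUNCE_MS:
            self.shadowed, self.last_change_ms = state, now_ms
            return "shadow" if state else "unshadow"
        return None   # no valid transition event this sample
```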
  • Type I sensors (16, 73, 95, 99) are mounted within a "thin" (30.0 mm) +/- (5.0 mm) Platform mounted at floor level [Figs. A1-a, A1-b].
  • the Type I sensor is housed in a "well" assembly (189, 204) beneath a scratch-resistant transparent window (190), the top surface of which is flush with the surrounding opaque Platform (1) surface [Sheets D4 through D7].
  • Type I sensors are mounted in modules equivalent to (128) except inside a "thick" Platform. (See Section 4.4, Description of Sheet D9.)
  • the on-axis module configuration accepts an arbitrarily bright source for the Beam-1, including even non-LED sources such as (RGB dichroic-filtered) halogen or incandescents, because the Type I sensor is better shielded from internal reflections from the Beam-1 LEDs (259) as compared to the folded "thin" elliptical design [Sheets D6, D7].
  • Type I primary Sensor Data. [Series G, H].
  • the Type I sensor array is considered "primary" in that its use in practice defines both player ergonomics and media responses according to shadow (23) and un-shadow (24) actions, which actions are furthermore contextualized by programmable system transfer functions (550, 551, 552, 553) into three distinct Event (583) types.
  • Attack (25) is the result of shadowing after auto-sustain (573) finish.
  • Finish (27) is the entrained generalized result of unshadowing action.
  • Re-attack (26) is the result of re-shadowing before auto-sustain (573) finish.
  • Type II (Secondary) Sensor Data [Series G, H].
  • the Type II Height (286) sensor (113) array is "secondary." Height data does not itself generate Events (583), but instead may be used in software (429, 461) to define the system transfer functions (551) of Events for Notes Behaviors (430, 565) including Velocity (572), Sustain (573), Quantize (574), Range (575), Channels (576), and Aftertouch (577). Applying Height data in the form of live kinesthetic parameters (593) for Type I-generated Events (25,
  • Type I Sensor Transition Events [Figs. A2-A7]: As a player's (17, 147) moving limbs (455, 456), torso or props at typical velocities (2.0 m/sec +/- 1.5 m/sec) intercept the overhead IR source flood (831) and thus create IR shadow (18, 148) edges passing over Type I sensors, the resultant photonic intensity transition events generate easily detected changes in the output current of the photoconductive sensors (16, 73, 95, 99, 143, 233). An IR source (108) is employed having an intensity level such that shadow-edge transitions are of sufficient magnitude to obtain a robust signal-to-noise ratio into the A/D electronics (416).
  • Type I Sensor Transition Speed: Type I sensors may be employed in a context of detecting "binary" shadow actions (23) and un-shadow actions (24) only, e.g. without speed detection.
  • Type I sensors combined with appropriately high-resolution A/D electronics and signal processing (416, 427) may deconvolve the IR source clock (105) induced square-wave aspect from the detecting sensor's current output waveform, thus revealing just the transition current's ramp or slope (sketched below).
  • the preferred embodiment may thus detect a dynamic range as to Speed (581) (e.g. transition current slope values), and do so independently for both shadow actions and un-shadow actions over a single Type I sensor.
  • Detecting varied speeds, even with a dynamic range as limited as four, may yet be employed with great advantage as one (581) of the 6 degrees of freedom of Kinesthetic control (563) in embodiments incorporating both Type I and Type II arrays, or as one of 5 degrees of freedom in embodiments having exclusively Type I arrays (e.g. without height sensing).
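The sketch below illustrates one way such slope extraction could work, assuming the A/D samples are taken synchronously with the IR source clock (105) so that averaging over one carrier period removes the square-wave component; the sampling scheme and full-scale normalization are assumptions.

```python
# Sketch of Speed (581) extraction under stated assumptions; not the
# patent's actual signal-processing chain.

def carrier_envelope(samples: list[float], samples_per_cycle: int) -> list[float]:
    """Average each full IR-clock cycle, deconvolving the square-wave
    carrier and leaving only the shadow-edge ramp."""
    n = len(samples) // samples_per_cycle
    return [sum(samples[i * samples_per_cycle:(i + 1) * samples_per_cycle])
            / samples_per_cycle for i in range(n)]

def transition_speed(envelope: list[float], dt_s: float, levels: int = 4) -> int:
    """Quantize the steepest ramp slope into a small dynamic range
    (the text notes even a range of four is useful)."""
    slopes = [abs(b - a) / dt_s for a, b in zip(envelope, envelope[1:])]
    peak = max(slopes, default=0.0)
    full_scale = 1.0 / dt_s  # assumed: full 0..1 swing within one period
    return min(levels - 1, int(levels * peak / full_scale))
```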
  • Type I Sensor Narrow Trigger Regions [Figs. A2, A3, A6, A7, B2, B3, C3].
  • the line-of-sight from each Type I sensor up to the IR source aperture (459) comprises its "sensor trigger region" (20, 21, 22, 120, 144, 145) and
  • the trigger regions are ideally each approximately 3.0 mm in diameter (181) and should not exceed a maximum of 8.0 mm in diameter in order to maintain the ergonomically desired ratios (182).
  • Type I sensor positions are arranged into concentric groups situated from their mutual center at two or more distinct radial distances: in the case of two, (5, 6) for Platform and (842, 843) for Console.
  • the innermost group has the highest angular frequency or narrower inter-sensor spacing, and outer zone(s) employ a lower angular frequency, or wider inter-sensor spacing.
  • sensors are spaced equidistantly: inner sensors approximately 30° apart (7) for Platform and 18° apart (138) for Console, and outer sensors approximately 60° apart (8) for Platform and 36° apart (137) for Console.
  • Outer groups are typically spaced at twice the angular interval (e.g. half the number of sensors per interface circumference) in order to optimize polyphonic event-structure variety and musical-response interest (see Section 4.7, Musical Response; a layout sketch follows below).
  • Concentric "groups” disclosed here should not be confused with the arrangements of "Zones” which may or may not be equivalent in geometry [Fig. H6].
  • Type I Sensor Trigger Regions [Series A, B].
  • the array of Type I sensors taken together have an outermost diameter at Platform level of 1.7 to 2.7 meters, with a preferred embodiment (6) shown at 2.3 meters in diameter (115.0 cm radius).
  • a Platform scale is preferred (for a setup suitable for either adult or child) since it yields reasonable heights (833) of approximately 3.5 meters for the overhead fixture (19) without "crowding" the player (17, 147) from too "tight" a shadow projection angle (834, 844), which would produce (unintentional) over-triggering from the player's shoulders, head, and torso [Figs. A6-a,b].
  • a Platform designed for use exclusively by younger (smaller) children may be less than 2.0 meters in diameter without detriment.
  • Type I Sensor Zone Configurations: Type I sensors in use are functionally allocated into variously configured 1, 2, 3, 4, 5 or even 6 "Zones" of sensors, as shown [Fig. H6-a] in the GUI Command Interface "Zone Maps Menu" (656).
  • In Figs. A1-A8, A10 there are typically three zones, comprised of two inner zones (630, 631) of five sensors each plus one outer zone (629) of six sensors [Fig. H3-a].
  • Zone configurations are denoted numerically (66).
  • Zone allocations are one of the primary 6 degrees of freedom (563) for Kinesthetic inputs, as far as organizing system transfer functions (430-432) to media response outputs (547, 548). Given their predominantly inner/outer character this feature may be characterized in the kinesthetic feature space (546) approximately in terms of "Reach" (578), although Zones may also be "split" in bilateral (left/right) fashion as well.
  • LED-illuminated sources are essentially continuous, and have no embedded carrier frequency to speak of except to consider their maximum possible transition duty cycles between event Responses (74, 75, 76) during player performance; and that is typically two to three orders of magnitude less than the IR source clock rate (even with the time-quantization function disabled and a 1-tick auto-sustain duration). For example, successive 32nd-note attacks at a rapid tempo of 200 BPM (at or above the humanly achievable performance limit) still result in only approximately 26 attacks/second ((200 / 60) beats per second x 8 thirty-second notes per beat ≈ 26.7). Furthermore, the IR frequency component from even the high-power LEDs (218, 253) is relatively negligible; LEDs run "cool" compared to other types of sources such as incandescent, halogen, etc.
  • Type I sensors in all module configurations [Figs. D4-b through D9-b] are optically band-pass "notch" filtered (191) to receive IR light only within a narrow band of frequencies centered around their peak IR-sensitivity wavelength and complementary to the IR source (108, 110) frequency, so as to further suppress the potential for spurious crosstalk and maximize signal-to-noise ratio in the A/D circuits (416). While shown as separate filters (191), in practice these are often integral to the sensors (16, 73, 95, 99, 143, 233) themselves in the form of optical coatings.
  • Platform sensors (16, 73, 95, 99) are positioned at the bottom of mirrored "wells" (189, 204) such that even if IR flood light (831) from the source fixture (19) does not directly fall upon the sensor, as will be the case for some height adjustment settings (833) or from manufacturing module orientation errors or from Platform positioning (840), then secondary internal reflections inside the mirrored tube will reach it indirectly and sufficiently [Figs. D4 through D7].
  • the wells furthermore greatly reduce if not eliminate the potential for crosstalk from ambient IR sources, even those unlikely ones having clocked components at peak frequency sensitivities, due to the narrow directional selectivity for IR source positions forced by the deep wells.
  • AGC Automatic Gain Control
  • AGC also performs a baseline floating differential, polling the unshadowed level periodically at relatively long intervals (approximately 500 msec) to detect any slow drift in intensity, such as from intervening fogging materials.
  • AGC utilizes whatever received IR levels (whether direct or indirect) are available from the un-shadowed sensor state, even though these may vary greatly, both from sensor to sensor and over time for each sensor. (A baseline-tracking sketch follows below.)
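A baseline-tracking sketch follows, assuming a raw normalized intensity reading per sensor; the smoothing constant is an assumption. The point is that the floating baseline absorbs slow drift (fog, fixture height, indirect illumination) while shadow depth is judged relative to each sensor's own unshadowed level.

```python
# Minimal AGC sketch under stated assumptions; not the disclosed circuit.

class BaselineAGC:
    def __init__(self, alpha: float = 0.1):
        self.baseline = None   # per-sensor unshadowed reference level
        self.alpha = alpha     # smoothing factor, assumed value

    def update_baseline(self, unshadowed_level: float) -> None:
        """Called at the long ~500 msec polling interval."""
        if self.baseline is None:
            self.baseline = unshadowed_level
        else:  # exponential moving average tracks slow intensity drift
            self.baseline += self.alpha * (unshadowed_level - self.baseline)

    def shadow_depth(self, level: float) -> float:
        """Normalized 0..1 shadow depth, independent of whether this
        sensor receives direct or indirect (well-reflected) IR."""
        if not self.baseline:
            return 0.0
        return max(0.0, min(1.0, 1.0 - level / self.baseline))
```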
  • Type II Sensors [Series B].
  • the invention in preferred embodiments (875-877, 880-885) employs a secondary Type II array of (n) separate proximity (height) detecting optical or ultrasonic sensor systems (113), each independently comprised of a transmitter/emitter (115) combined with a receiver/sensor (114) configured for reflective echo-ranging.
  • Type II sensors typically may detect proximity or height (distance to torso or limb) within a broader spatial region of sensitivity, including throughout various planar, spherical, or ellipsoidal shaped regions (121, 122, 146), and still serve the intended ergonomics of the invention.
  • Type II regions of proximity detection typically have much greater aggregate volume than those of Type I sensors, and overlap them in space [Sheets B2, B3, C4].
  • The number of Type II sensors employed may range from a maximum of one corresponding to each and every Type I sensor module in a given free-space interface, to a minimum of one per entire interface. A reasonable compromise between adequate sensing resolution vs. implementation cost and software complexity/overhead would be six, as shown [Sheets B2, B3, F7] for the example "Remote Platform #1" (543), which illustrates an example of Platform embodiment Variation 6 (876).
  • Type II sensors (113) may be positioned: (i) all within the Console (130) [Figs. C2-a, C2-d], or (ii) all within the Platform [Fig. B2-a], or (iii) all within an alternate overhead fixture assembly (not illustrated), or (iv) mounted in a combination of above and below locations (123) as in the arrangement shown for the alternate
  • Type II Sensor Array: In the Platform cases (875, 877), Type II sensor modules may be mounted in a circular distribution with approximately equal angular distribution (116), in the case of six at 60°, and at a radius in between the radii of the inner (5) and outer (6) Type I sensor groups.
  • Type II sensors are ideally mounted within Platform-flush plug-in modules (117) together with replacement bevels (118) and safety lamp (119), or in Console instances (880-885) integrated into the main Console enclosure (130).
  • Type II sensors may alternatively be contained within external accessory modules, either positioned adjacent to the main Platform on the floor, attached to the Console enclosure (130) or its floor stand (131), or separately mounted above and/or around the player, provided suitable software (428) adjustments are made for these alternative locations. (The cabling and ergonomic aspects of such an external Type II module configuration, however, are less desirable.)
  • Type II sensors may be arrayed to have partially mutually overlapping (121, 146) detection spatial regions [Sheets B2, B3, C4] in order to obtain a best spatial "fit" in also overlapping adjacent corresponding Type I trigger regions (120). This also serves to maximize Type II data's signal-to-noise ratios over all employed spatial regions of detection, by averaging or interpolation in software (428).
  • the spatial Type II detection regions, individually or taken together, may comprise a cylindrical, hemispherical, ellipsoidal, or other shape.
  • Type II sensors (113) are incorporated which have a limited range of distance sensing (approximately 60% of the distance to the IR aperture (459) of fixture (19)).
  • two Type II groups may be employed.
  • One group has three spaced 120° apart (124) in the Platform aimed upwards, and the other group has three spaced 120° apart aimed downwards and housed in an alternate overhead fixture (123) [Fig. B3-c].
  • the relative angular position of the two groups may be 60° shifted, so the combined array of two groups has a combined angular spacing of 60° between Type II modules thus covering 360°, and alternating between upward and downward directions.
  • In the Console, Type II modules may all be mounted either within the Console (130) as shown [Figs. C1, C2, C4] or within the flood fixture's (125) enclosure.
  • Type II Sensor Dynamic Range: Type II sensors (113), together with their associated electronics (415), may employ various dynamic ranges for proximity (height) detection response within their sensitivity regions (121, 146). These dynamic ranges may also extend across complex 3D shapes such as nested ellipsoidal layers. Dynamic ranges of as little as 4 and as much as 128 may be effectively employed, with a higher dynamic range generally exhibiting an increased advantage in the scope of available ergonomic features of the invention. Notably, such dynamic ranges may include representation of relative "lateral" positions orthogonal to an on-axis projection from the Type II module (113), in addition to or combined with reporting "proximity" or linear distance (height) from the module.
  • Type II sensor data processing (428) takes this into account, to weight or interpret Type II data primarily in terms of on-axis distance or height, since Type I sensors detect lateral motions already (such motions being the most common form of shadow/unshadow actions).
  • Type I sensors (16, 73, 95, 99, 143, 233), together with their associated MUX and A/D electronics (416) and processing software logic (427), may in practice exhibit duty cycles of detecting valid shadow/un-shadow events of as little as 3.0 msec.
  • Type II sensors (113), with their associated electronics (415) and logic (428), are configured to report proximity range values at substantially slower duty cycles, on the order of 45.0 msec +/- 15.0 msec.
  • Such slower Type II data reporting rates are desirable and acceptable since their data is employed by system logic (461) to generate parameters (593) used with the much faster Type I trigger events (25, 26, 27) in the creation of ultimate media results (MIDI note ON/OFF messages with their parameters). This is why Type II sensors may even employ relatively "slow" ultrasonic technologies (vs. much faster optical techniques) with no significant disadvantage as to the ergonomics or musical response times of the invention.
  • Type II values are, via software (428, 429), averaged (706, 707), or the most recent detected height (705) over a given (triggered) Type I zone is applied [Sheets F1, F2, F3, I3]. (A sketch of both options follows below.)
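A sketch of those two options, per zone, is below; the window size is an assumption, and readings are assumed to arrive at the ~45 msec Type II duty cycle.

```python
from collections import deque

# Sketch of applying Height (580) to a triggered Type I zone: either an
# average of recent readings (706, 707) or the most recent value (705).

class ZoneHeight:
    def __init__(self, window: int = 8):       # window size is an assumption
        self.readings = deque(maxlen=window)

    def report(self, height: float) -> None:   # ~45 msec Type II duty cycle
        self.readings.append(height)

    def averaged(self) -> float:                # option (706, 707)
        return sum(self.readings) / len(self.readings) if self.readings else 0.0

    def most_recent(self) -> float:             # option (705)
        return self.readings[-1] if self.readings else 0.0
```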
  • Suppression of Type I and Type II Crosstalk.
  • a substantial differential is employed between the Type I IR source (108) carrier frequency from clock circuit (105) vs. the modulation frequencies used in encoding of IR from Type II optical transmitters (115).
  • Non-optical Type II sensors may alternatively be used, such as ultrasonic, in which case these crosstalk issues become moot.
  • Type I Sensor/LED Assemblies [Series D]: Type I sensors are mounted within an optomechanical assembly (or "module") also housing active LED-illuminated light-pipe indicators at or near the free-space interface's surface (1, 130). In between the innermost sensor and Light Pipe 2, beam-forming optics (244) project (fogged) active visible microbeams (60, 129). The array of (n) such microbeams forms a conical array around the player.
  • Each Type I sensor (16, 73, 95, 99, 143, 233) is surrounded by two concentric LED-illuminated display surfaces: the outer Light Pipe 1 (or LP-1) (13, 70, 93, 97, 140, 230) and the inner Light Pipe 2 (or LP-2) (14, 71, 94, 98, 141, 231).
  • In the Platform embodiments both Light Pipes are visible through a clear, scratch-resistant cover (197), which cover also protects the Beam-1 optics and Type I sensors from damage by player impacts.
  • the Light Pipes have a 3-D shape (40, 41, 230, 231) extending above the interface enclosure (130) in a module enclosure (235, 249).
  • Beam-Forming Optics [Sheets D6, D7, D9]: Centered within Light Pipe 2 is the projected microbeam's exit aperture, Beam-1 (15, 72, 142). The superposition of the Type I sensor trigger-region line-of-sight input at the center of the Beam-1 output is achieved either by a perforated elliptical mirror (205) or a modified Schmidt-Cassegrain arrangement (244-247, 248, 261).
  • Class B (96), Class C (10, 11, 12), or Class D (68) differentiate Platform embodiment Variations 1 through 4 (871-874).
  • The distinctions between these four sensor/LED module Classes include: (i) their use of fixed color vs. dynamic RGB; and (ii) their use of surface light pipes (LP-1 and LP-2) only vs. use of both surface light pipes and active projecting microbeams (Beam-1).
  • When Type II sensors are employed in the Platform, Class B or Class D modules are always used, as these include the full RGB color modulation functionality which is essential to providing sufficient degrees of freedom (584, 585, 586, 587) of feedback for the Type II Height (580) data.
  • the Console embodiment Variations 1 through 8 (878-885) all use one of two circularly symmetric, on-axis types of Sensor/LED modules [Sheets D8, D9]. Both module types have RGB processing, as the Console is intended to employ floating zones [Fig. H6] since its Light Pipes 1 and 2 are uniformly circular. The difference between the two Console modules disclosed is whether or not projecting microbeam optics are included.
  • the "thick" Platform Variation ( 87 ) also uses the on-axis, D-Class module type of [Sheet D9].
  • the Type I sensor (16, 73, 95, 99, 143, 233) direction of invisible sensing input vs. the active visible output of Beam-1 (56, 58, 59, 129) are optically opposed, in that their respective light sources are opposed.
  • the overhead IR (831) and visible (832) source floods are aimed "downwards," while the active microbeam-forming optical assemblies are aimed "upwards." This reduces the potential for crosstalk. Aiming the active visible microbeams upwards furthermore eliminates the occurrence of false/multiple player shadows (confusing the kinesthetic ergonomics), which could be the case if Beams-1 were aimed downwards.
  • Demarcation of zones is accomplished by operational logic (656) for LED (198, 199, 216, 217, 218, 237, 238, 251, 252) control (for 'floating zones'), and may
  • Light Pipes 1 and 2 employ geometric shapes distinct to each Zone, for example the circle (11, 91), hexagon (12, 92), and octagon (10, 90).
  • Fixed-zone interfaces may further reinforce the ergonomic distinction between Zones by employing Hue assignments (e.g., different Hues for each respective Zone), these being constructed with various suitable fixed-color LEDs (193, 194, 207, 208).
  • the minimal ratio of Type I sensor trigger-region diameter (181) to outermost Light Pipe 1 diameter (179) equals at least 1:12, for example 72.0 mm diameter light pipes to 6.0 mm diameter sensor.
  • a minimal diameter for the Light Pipe 2 is also recommended, such that even if (for example) the sensor diameter is less than 1.0 mm, the Light Pipe 2 outermost diameter should still be at least 60.0 mm.
  • the (fogged) Beam-1 (60) diameter (186) has a minimum ratio (considered in planar cross-section) to Type I trigger-region diameter (181) of at least 1:6, for example 36.0 mm at exit aperture (15, 72) to 6.0 mm diameter sensor.
  • a slight Beam-1 divergence (e.g. lack of exact collimation) expands at maximum distance (overhead fixture height) (833) to as much as a 1:24 ratio for a 150.0 mm diameter visual beam (887).
  • the beam-forming optics (214, 215, 205, 206) and exit aperture (186) for the active Beam-1 are configured so as to result in this extent of beam divergence.
  • the beam-forming optics are also configured so as to result in blurred beam edges (26, 888), preferably of Gaussian or similar beam-intensity profile.
  • Sharper apparent beam edges are disadvantageous, as they would diminish or even eliminate a desired "envelope of spatio-temporal ambiguity" by making the moment of traversal into the immersive beam edge
  • While the free-space instrument is a physical device located in space (on the floor, or mounted on stand (131)), the point of human interaction is not at the interface surface, but in fact in the empty space above it.
  • the immersive Beams-1 (56, 58, 59, 129) are superposed with the sensor trigger regions (20, 21, 22, 120, 144, 145).
  • the surface Light Pipes 1&2 ( 13, 1 4 , 70, 71, 93, 94, 97, 98) an d p
  • the net perceived effect is not so much that the passive and active visual elements represent the instrument, but rather that they comprise a single, coherent frame of reference in space (a full-cone shape for the Platform and a partial-cone shape for the Console) for the player's body, which is the instrument.
  • the active visual media responses may be experienced as "collision detection indicators" of the body intersecting through the frame of reference conical shape [Fig. A8-a].
  • the active responses highlight the spatial frame of reference in changing Light Pipes 1&2 and Beam-1 Hue, Hue Variation, Saturation and/or Lightness (which of the latter parameters are changeable depends upon the embodiment Variation and the sensor/LED module Class). Active visuals thus are experienced as a result of play rather than as means of play.
  • the overhead fixture (19, 125) includes a surrounding optical stop baffle (112) confining the radius of the visible flood at the interface surface to a maximum of 0.5 m beyond its circumference, reducing the potential for multi-shadow confusion between two or more adjacent interfaces in a given venue.
  • the visible overhead source component is optically configured via a slightly extended optical aperture (839) so that the edges (894) of player shadows generated from play at most-frequent heights (1.5 m) +/- (0.5 m) are slightly blurred, preferably exhibiting a Gaussian intensity gradient.
  • Such blurred edges may range between 20.0 mm and 30.0 mm in width, and ideally not less than 10.0 mm, for a 0% to 100% intensity transition.
  • the edges are blurred enough to maintain sufficient ambiguity for masking asynchronicity, yet are sufficiently clear to indicate body position with respect to sensor regions especially before and after active responses.
  • the position of the player's shadow may serve to indicate spatial proximity to sensor trigger regions, this being somewhat analogous to a piano player resting fingers on keys without yet pressing down to sound the notes. Without such visible player-shadow feedback, it would be difficult to determine (at most-frequent heights of play and typical body positions) the lateral proximity (e.g. the potential) to causing a trigger, without actually triggering the sensor.
  • Familiar Shadow Paradigm: A player's body shadow is a familiar perception in everyday experience.
  • the simple 2-D planar shadow projection is further reinforced by corroboration of feedback from surface Light Pipes 1&2 and Beam-1 responses which are spatially co-registered with the shadow.
  • These in combination support rapid learning of the 3D perceptual-motor skills of intercepting (shadowing/unshadowing) Type I sensor trigger zones at all heights and all relevant X-Y-Z positions in 3D-space.
  • Rapid learning here means: proficiency achieved during the first 30-60 seconds of play, even for first-time casual players.
  • the overhead visible flood source is balanced in Intensity and Hue (with respect to Light Pipes 1&2 and Beam-1) in such a
  • the visual response paradigm employs multiple forms of visual feedback to provide maximum possible synesthesia [Series G] under varying ambient lighting conditions.
  • the LED-illuminated Light Pipes 1&2 and Beams-1 provide feedback in passive form as a spatial frame of reference when in the Finish Response State, and an in active form when when changing to Attack or Re-Attack Response States. These together with the passive player-projected visible shadow provides multiple correlated and synesthetic visual feedback sufficient for clear, easy and precision performance under varied ambient lighting conditions,
  • Unconstrained Method: A player is unconstrained in that he or she may move about in a great variety of body positions and movements to affect shadow/un-shadow actions, from both the inside and the outside of the conical shape of the IR Type I trigger regions, using any combination of torso, head, arms, hands, legs, feet and even hair.
  • Player body actions may range from gentle reaches or swings (455, 456), to any dance-like motions, to acrobatics, flips, head stands, tai chi, martial arts, and also from various seated (including wheelchair) or even lying-down positions.
  • rhythmic synchronization (474) between live note events (510, 511) and accompaniment pre-recordings (487, 513, 525).
  • Any shadow-creating body (47) or prop intercepting the overhead IR Flood (831), at any height along a given Type I sensor's line-of-sight ray (20, 21, 22, 120, 144, 145) (source-to-sensor), will result in the identical State Change Vector as per the State Changes Table [Sheets D1, D1-b]. This promotes the player's freedom of expression and variety of body motion simultaneously with repeatable, precise responses for each sensor. For example, a shadow formed at a 20.0 mm height above a Type I sensor will result in logically the same State Change as a shadow formed at a 2.0 meter height.
  • a centrally standing player (17, 147), with horizontally (or slightly lower than horizontal) outstretched arms (or legs), can easily shadow sensors only within the inner concentric region (20, 22) at radius (5, 842), and do so without significantly reaching (leaning) or moving (stepping) off-center.
  • a centrally positioned, upright, standing player may easily intercept multiple sensors across both concentric radii (5, 6, 842, 843) by reaching outstretched arm(s) at heights above horizontal level, thus intercepting the overall cone (834, 844) where its diameter is less, and thus generating shadows (18, 148) of larger scale where such shadows fall at Platform level.
  • Radial sweeps of limbs can play various sensors within multiple radius zones simultaneously, provided appropriate lean and/or reach (torso angle and/or limb height) is applied.
  • Player(s) also may optionally employ any shadow-creating props such as paddles, wands, feathers, clothing, hats, capes and scarves.
  • Two or more players may simultaneously position and move themselves above and around the Platform so as to generate shadow/unshadow actions as input into the system.
  • Event-by-Event Rhythmic Processing: the invention favors player event-by-event (23, 24) musical transfer functions (551, 552, 553) [Figs. D1, D1-b, Series E], as contrasted with the alternative approach of single-trigger activation of multi-event responses such as subsequences or recording playbacks.
  • the preferred approach maximizes clear feedback and player ownership of creative acts, contributes to optimal ergonomics, and also enables the maximum degree of variation in forms of polyphonic musical structures.
  • the disclosed systems incorporate a slight variation in the degree of achievable polyphony relative to varied heights of play. Positioned at a low height near the surface of the interface, with minimal motions a given IR-intercepting limb passing over a sensor can trigger individual responses from that sensor only. Positioned at the opposite extreme of height (i.e. player raising one or both hands up close to the IR/visible flood fixture (19, 125)), a single limb can with little motion trigger responses from all (n) Type I sensors in all sensor zones at once.
  • a similar result can alternatively be achieved by means external to the Free-Space logic (461), such as by employing MIDI Program Change and Bank Select Control Change messages in sequencer (439, 440) tracks (497), or by various Channel-mapping functions available in Other MIDI Software (439) and controlled by its track (498).
  • Channel assignments will always be the same for Attack Event (25) and Re-Attack Event (26) generated Note messages.
  • Only the internal free-space Channels (576) function via software (461) allows differentiation of Channel assignments between the Attack (25) and Re-Attack (26) Events. This can be a very useful and musically rich application of the free-space Re-Attack.
  • the internal Channel configuration provides for the uniquely free-space behaviors dynamically controlled by players according to the additional live kinesthetic parameters (593), including Height (286), Speed (287), and Precision (288), illustrated for the case of Precision in example #1 (295) on [Sheets I5, J5].
  • Zones in practice are typically operated independently with respect to each other as regards their response modes and parameters (565, 566), including Channel (576), as
  • Zone Behaviors may be made to aesthetically correspond with instruments and the compositional aesthetics of the song.
  • a Zone set to a pizzicato string voice could employ a shorter Quantize (574) and/or a shorter Sustain (573), while in contrast a legato flute could employ longer values for Quantize and/or Sustain.
  • As instrument voicing is re-assigned dynamically for a Zone, so also may the other CZB Behaviors be adjusted for that Zone to aesthetically match the instrument change (see the configuration sketch below).
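Illustrating the pizzicato-vs-legato example above, a per-Zone CZB Setup might look like the following; the field names and tick values are assumptions (General MIDI program numbers shown zero-based), since the actual CZB Setup data format is defined elsewhere in the patent.

```python
# Hypothetical per-Zone CZB Setup data matching the example in the text.

CZB_SETUPS = {
    "inner_left": {                   # zone (630)
        "program": 45,                # GM Pizzicato Strings (zero-based)
        "quantize_ticks": 120,        # short Quantize (574): 16th-note grid
        "sustain_ticks": 60,          # short auto-Sustain (573)
    },
    "outer": {                        # zone (629)
        "program": 73,                # GM Flute (zero-based)
        "quantize_ticks": 240,        # longer Quantize for legato phrasing
        "sustain_ticks": 480,         # longer auto-Sustain
    },
}
```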
  • the system may employ the "Pan" parameter (stereo balance of relative audio channel levels) as part of Controller ( 566) Creative Zone Behaviors (43 ) or the Voices panels (611 « 633 . 63 4 ) t or this may be done by means of Audio Mixer ( 48 1) or Sound Module(s) ( 48 °- 866) . This can be used to match the general physical positions of the Type I sensor Zones on the Free-Space Interface to audio spatialization.
  • the inner left zone (630) of (5) sensors may have its audio output set at a more "left Pan" position,
  • the inner right zone (631) of (5) sensors may use a "right Pan" position, and
  • the outer zone (629) of (6) sensors may use a "center Pan" position, for example.
  • This further reinforces the (sound-light-body) Synesthesia (560) effect, and amplifies the sense of Kinesthetic Spatial Sync (306) engendered.
  • responses from triggers of outer-radius sensors (6, 843) vs. inner-radius sensors (5, 842) may also employ differing levels of Reverb and other effects (566) to generate a spatial feel of "nearer" vs. "further." This further reinforces the (sound-light-body) Synesthesia (560) effect, and amplifies the overall subjective sense of Kinesthetic Spatial Sync (306) engendered.
  • the invention employs a distinct method of Re-attack (26) response resulting from player shadow action during the auto-Sustain duration (see State Changes Table [Sheet D1b]).
  • Most MIDI sound modules, however, will have no audible result from receiving additional Note-ON messages (having non-zero Velocity) for an already-sounding note ("non-zero velocity" since some modules will interpret a velocity-zero Note-ON as a Note-OFF).
  • Such modules ignore a Note-ON message received after a previous Note-ON message with no intervening Note-OFF message received for the same note number.
  • this state of affairs is seldom an issue, although at times polyphonic aftertouch is employed — however that only affects velocity level.
  • the invention implements the Re-Attack as a full-fledged ergonomic feature of music media expression which may be uniquely and variously applied to all transfer functions of Creative Zone Behaviors (430, 431, 432, 433), not only relative Velocity.
  • Re-Attack processing is disclosed in the State Changes Table [Fig. D1-b] and examples detailed in [Sheets E6, E7].
  • Re-Attack generates a truncation of the current Note-ON: first a Note-OFF message is generated and sent out immediately. (A minimal sketch follows below.)
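A minimal sketch of that truncate-and-rearticulate sequence, using raw MIDI status bytes, is below; channel handling via the Channels (576) behavior is only hinted at in a comment, and the velocity choice is an assumption.

```python
# Re-Attack (26) sketch: send an immediate Note-OFF to truncate the
# sounding note, then a fresh Note-ON, so modules that ignore repeated
# Note-ONs still re-articulate. Status bytes follow the MIDI spec.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    return bytes([0x80 | channel, note, 0])

def re_attack(channel: int, note: int, velocity: int) -> list[bytes]:
    """Messages emitted when re-shadowing occurs during auto-Sustain.
    The Channels (576) behavior could route the new Note-ON to a
    different channel than the original Attack."""
    return [note_off(channel, note), note_on(channel, note, velocity)]
```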
  • GUI Display Command Interface
  • MIDI Command Interface
  • Reductions to practice include the use of specific MIDI protocols (444, 445, 502, 510, 512) and a user interface or GUI via such as an LCD or CRT display (442) and input devices such as mouse, touch-surface or trackball (443).
• the display may be either embedded into the Interface surface, as in Console embodiments (880-885), or remote from the Interface surface as in Platform embodiments (871-877).
• MIDI Protocol Uses [Series F]. MIDI message types including System Exclusive, System Realtime including Beat Clock, Note On/Off and Control Changes are used in three protocols specifically designed for free-space. These are the CZB Command Protocol (502), the Free-Space Event Protocol (445) and the Visuals and Sensor Mode Protocol (444). These free-space MIDI protocols and their uses, along with novel uses of conventional, third-party manufacturer compatible protocols, are disclosed in depth in the Section 4.6 Description of the Drawings for Series F.
  • a CRT or LCD graphic display and relevant input device(s) are employed primarily for the definition, selection and control of Creative Zone Behaviors and their defining CZB Setups data during studio authoring of interactive content titles.
• the process of authoring content consists primarily of using the display to control the capturing of desired CZB Command sequences which are later used to recall or reconstruct the corresponding CZB Setups.
  • the graphic display also may be used for the selection of content titles by any free-space players just before initiating a session of play.
• the display and associated input device are rarely to be used by players during free-space music performance itself, although this is appropriate for practiced and virtuoso players and for authoring venues, in particular using the Integrated Console embodiments (882-885).
• Use of Speech Recognition: use of the CZB Command Interface during performance, especially for all Platform embodiments (but also for Console embodiments (878-881)), may optionally be made more practical (and distraction from the free-space paradigm minimized) by providing the player with a wireless microphone as input into a suitable voice recognition system on the host PC computer (487) which translates a predefined set of speaker-independent speech commands into equivalent input device commands.
• the overhead IR/Visible flood fixture (9) position is adjustable in height (833) ranging between a
• An alternative idealized "thick" Platform embodiment, Variation 7, may include embedded servo-mechanisms or similar means to swivel a modified Class D type of on-axis LED/beam/sensor module [Fig. D9] into the correct angular position.
  • manual "click-stop" mechanisms at each module may be employed to adjust the modules angle.
  • visible Beam-1 orientations may be made to match various overhead source fixture heights.
• Such coordinated fixture and beam-forming module height adjustments may either be continuous, or in the form of a step function over a range of discrete height settings.
• Adjustment for varied height of Console players (147) is achieved by utilizing such as a variable-height stool or bench, or ideally for the standing player a mechanically adjustable floor section, to change player height position. Alternatively this may be accomplished by adjusting the Console's floor stand or base (131) to change the Console's height. In either case, the relative positioning (889, 890, 891) of the Console to its IR/Visible flood fixture (125) remains constant, since the fixture is mounted upon extension arms (126) affixed to the Console's base (131).
  • the "thin" Platform embodiments (871-876) feature a plurality of Platform subsections (for example seven hexagons) 0.2) which may at times be disassembled and stacked for transport or storage, and at other times easily reassembled by placing the appropriate sections adjacent to each other and sliding together, thus interlocking and forming a single flat, firmly integrated, and flush obstruction-free Platform surface.
• Type II sensor modules (113) may be housed in add-on modules (117) which flush-connect and interlock with the primary Type I Platform sections.
• the Console embodiments in particular Variations 1-4 (878-881) may incorporate the ability to fold, collapse and/or telescope into a much more compact form, and the ability to easily reverse this process (manually or with servo-mechanism assistance) so as to be made ready for performance use.
• the Integrated Console embodiments (882-885) incorporating integral LCD touch-display, PC computer, removable media drives, and MIDI and audio modules, would be relatively less collapsible, although still tending to become progressively more so over time as relevant technologies continue to miniaturize.
• the assembled Platform incorporates outer edges with sloping bevels (3, 118) and also includes a continuously illuminated fiber-optic safety light (4, 119) for unmistakable edge visibility.
  • the Platform is typically textured on top and provides a secure, non-slip surface.
  • the Series A drawings disclose: (a) the overall optomechanics for Platform embodiments of the invention, (b) example free-space biometrics and corresponding visual feedback for player interception of Type I sensor trigger regions, and (c) details of the overhead infrared (IR) and visible flood fixture.
  • IR infrared
• FIGS. A10, A11, A1 and A9 illustrate Platform embodiments each incorporating one of the four alternate types of Type I Sensor/LED Modules, respectively Class A, Class B, Class C, and Class D (for module details refer to [Sheets D4, D5, D6 and D7] respectively).
• FIGs. A2-a and A3-a illustrate example player body positions for Type I sensor line-of-sight trigger zone interceptions (Shadow and Un-shadow actions). Each interception example shown represents one case of the seven possible resulting sensor/LED module visual Response States.
  • the seven possible Response States to shadow/unshadow player actions over one Type I sensor are: [Figs. A2-d and A4-d] Near Attack, [Figs. A2-b and A4-b] Attack-Hold, [Figs. A2-c and A4-c] Attack Auto-Sustain, [Figs. A2-e, A3-e, A4-e and A5-e] Finish, [Figs. A3-d and A5-d] Near Re-Attack, [Figs. A3-b and A5-b] Re-Attack-Hold, and [Figs. A3-c and A5-c] Re-Attack Auto-Sustain.
• Each of these seven states is in turn comprised of a certain combination of three possible ("trinary") visual feedback conditions (Attack, Re-Attack or Finish).
• FIGs. A4-a and A5-a repeat the player Motions of [Figs. A2-a and A3-a] respectively, however instead showing the Microbeams in their spatial configuration as visible in a fogged environment, and symbolically indicating their Response States for the two differently timed Motion examples.
  • FIG. A1-a shows an overhead view of the Platform embodiment, with typical use of distinct geometric shapes (octagon, hexagon, circle) for each Zone (5-inner left, 5-inner right, 6- outer) of Class C Type I Sensor/LED modules.
  • the preferred thin Platform form-factor for transportable systems is shown in [Fig. A1-a].
  • Data I/O edge panel connectors are detailed in [Fig. A1-d].
• FIG. A2-a shows Motion Case One of player arm-swing timing, in relation to line-of-sight Type I trigger regions. Player's left arm has shadowed a Type I sensor module previously in Finish state, thus generating the [Fig. A2-b] Attack-Hold shown (comprising LP-1, LP-2 and Beam-1 all in Attack feedback), after previously passing over (shadowing/un-shadowing) an adjacent Type I sensor whose LED module Response State changed from Attack-Hold to the [Fig. A2-c] Attack-Auto-Sustain shown (comprising only LP-2 and Beam-1 in Attack feedback).
  • FIG. A3-a shows Motion Case Two of player arm-swing timing, in relation to line-of-sight Type I trigger regions. Player's left arm has re-shadowed a Type I sensor module previously in Attack-Auto Sustain thus generating the [Fig. A3-d] Near Re-Attack shown (comprising only LP-1 in Re-Attack feedback), after previously passing over (shadowing/un-shadowing) an adjacent Type I sensor whose LED module Response State has returned from an Attack Auto-Sustain (or a Re-Attack Auto-Sustain) to the Finish [Fig. A3-e] shown (comprising LP-1, LP-2 and Beam-1 all in Finish feedback).
• Motion Case One is shown exactly as in [Sheet A2], except illustrated in relation to visible fogged microbeams on-axis superposing/surrounding the invisible Type-1 line-of-sight trigger regions.
• Motion Case Two is shown exactly as in [Sheet A3], except illustrated in relation to visible fogged microbeams on-axis superposing/surrounding the invisible Type-1 line-of-sight trigger regions.
• [Fig. A6-a] illustrates (for Motion Case One) the formation of invisible infrared (IR) shadow over one or more Type I sensor/LED modules by means of player's intercepting (blocking) the fixture-mounted overhead invisible IR source flood, and the formation of the superposed visible shadow by means of player's intercepting (blocking) the fixture-mounted overhead visible source flood.
  • IR infrared
  • FIG. A6-b illustrates how sufficiently scaled IR- and visible-shadow projections are formed for various player heights by means of corresponding adjustment to the overhead fixture height relative to the Platform position.
  • "Sufficient” here means in biometric terms the capability of a centrally positioned (standing) player to effect 16-sensor polyphonic operation by means of fully horizontally outstretched arms with little or moderate bending of the torso (reaching), noting that such sufficiency is a relative biometric frame of reference only and not intended to constrain players to any particular positions or motions.
• FIG. A9-b illustrates a Platform with the preferred Class D sensor/LED modules, all having one geometry of LED Light-Pipes. This embodiment is contrasted to the three fixed sensor-zones (5 inner-left, 5 inner-right, and 6 outer) shown in [Figs. A1-A8], that being a fixed zone assignment, whereas the full-RGB Class D modules permit "floating" Zone Maps [Sheet H6].
  • [Sheet A10] illustrates a Platform with the simplest visual feedback configuration, having Class A [Sheet D4] fixed hue LEDs illuminating surface Light-Pipes 1 and 2 only, and with no microbeams. This is suitable for use where fogging materials are not used, and/or for achieving greatest hardware economy. Even when applying groups of like-hued LEDs into functional zones, the additional use of geometric shape differentials is recommended to further aid in player's zone recognition (and for benefit of those players who are color perception challenged.)
  • [Sheet A11] illustrates a Platform with Class B [Sheet D5] sensor/LED modules, having no microbeams, however with full RGB LEDs allowing "floating" Zone Maps as described in the summary for [Sheet A9].
  • FIG. A12-a illustrates an overhead fixture showing the internal optomechanics and (summary of) electronics for beam-combined continuous visible flood and supe ⁇ osed clock-pulsed IR flood.
• External housing form-factor, microbeam stop baffle configuration, and floods' exit beam angle shown are suitable for over-Platform use, whereas all other fixture components are equivalent for both over-Platform and over-Console use.
  • FIGs. B1-a and B2-a illustrate the most-preferred embodiment of the invention, referred to in Series F, H, i, and J as "Platform #1.”
  • FIGs. B2-b and B3-c illustrate the difference in overhead fixture for 0-of-6 vs. 3-of-6 Type II sensors fixture-mounted respectively.
• FIGs. B2-a and B3-a show an example spatial distribution of Type II sensors and their respective trigger (height detection) regions and how these typically superpose or overlap the Type I trigger regions.
  • the Type I sensor/LED modules of Class D are employed in a system configuration where Type II sensors are also employed, as shown in [Sheets B1 , B2 and B3]. This is because variable RGB color output for surface Light Pipes as well as microbeams provides the dynamic range for subtle and varied visual feedback options reflecting Type II sensor data attributes.
• the seven interlocking hexagonal Platform segments for the Type-I-only Platform embodiments are supplemented, as illustrated in [Fig. B1-a], by six additional triangular Platform segments each containing one Type II sensor module.
• An outer bevel surrounds all 13 segments forming a circular outer edge, and also includes an embedded fiber-light within the bevel slope for safety purposes.
• FIG. B2-a illustrates Type II sensors all mounted in-Platform, angularly spaced at even 60° intervals.
  • FIG. B3-a shows an alternate instance having 3 of 6 Type II sensors in-Platform and the remaining 3 of 6 in-fixture mounted. Thus only three of the additional triangular Platform segments have Type II sensor modules, and three do not.
• FIG. B3-b shows the 120° angular spacing preferred for the 3 of 6 in-Platform Type II sensors, as a group 60° angularly rotated with respect to the 3 of 6 in-fixture Type II sensors, also 120° angularly spaced.
  • the Series C drawings disclose the Free-space Console or floor-stand-mounted embodiment of the invention, exhibiting the partially constrained biometrics of upper torso motions vs. full body completely unconstrained biometrics in the Platform case.
  • the Console embodiment favors 1 player per each unit, vs. the Platform's 1 , 2 or n players.
• the Console system contains an accessible space near the IR/visible flood fixture, where all of the Type I trigger regions are scaled together near the apex of the cone [Figs. C3, C4]. This facilitates, more conveniently for the Console vs. the Platform embodiment, rapid finger and hand gesture detection and a more harp-like feel to the spatial interface.
• the Console requires one-eighth the installation volume (a (2 m)³ space) and one-fourth the floor space ((2 m)²) of the Platform's (4 m)³ volume and (4 m)² floor space. While a Platform may reside on as little as a (2.7 m)² footprint, (4 m)² is recommended for perimeter safety considerations, to allow unconstrained play from either inside or from around the outside of the Platform, and to allow multiple players (if playing) sufficient space. Thus a cluster of four Consoles (if packed together) can require as little as the floor space recommended for one Platform.
• the Series C drawings show a Console incorporating both Type I and Type II sensors, and exclusively utilizing Class D sensor/LED modules, in a form factor suitable for the Console embodiment (detailed in [Fig. D9]).
• the Console LED modules detailed in [Figs. C2-c, D8 and D9] include more complex 3D LP-1 and LP-2 Light Pipe shapes compared to the flush-constrained Platform's LP-1 and LP-2 planar equivalents. These provide enhanced ergonomics for wide-angle viewing perspectives, and a more dramatic appearance (increased cm² of light pipe optical surface area per module).
• a Console without microbeams is not illustrated in the drawing Series C but may be easily inferred and implemented, having such as the Class B sensor/LED modules [Fig. D8] for use in un-fogged environments.
• the Console as illustrated in [Figs. C1, C2 and C4] also includes an integrated touch-screen interface for content title selection and/or control of Creative Zone Behaviors.
  • the Console includes integrated PC computer system(s) and may include removable magnetic and optical storage media [Fig. C1].
  • a Console system without integral LCD interface may be organized, in its internal electronic hardware and software, identically to the firmware-based Remote Platform [Sheet F3] and connect via its MIDI I/O panel [Fig. C1-b] to a Remote Platform Server computer system [Sheet F2]. Or, as shown in [Sheet F1] an Integrated Console enclosure may also include internally the functions of the Remote Platform Server [Sheet F2], and in this case via its MIDI I/O panel connect to associated Other MIDI Software and Sequencer modules running on an external host computer. Or, the equivalent to the Remote Platform plus the Remote Platform Server modules together, plus also the Other MIDI Software and Sequencer modules [Sheets F4, F5 and F6] may all be included within the Console enclosure.
  • MIDI I/O panel [Fig. C1-b] may be optionally used for connecting to such as supplemental immersive Robotic Lighting systems, MIDI-controlled Computer Graphics systems (typically large-format projected), and/or link to Other Free-space systems.
• FIG. C1-a illustrates the system oriented as facing a player, and shows the microbeams as spatially arrayed in a fogged environment.
• FIG. C1 illustrates how in the Console case, the angular separation between adjacent Type I sensor trigger regions (at sensor/LED module height) compacts to only 18° for inner sensors and 36° for outer sensors, compared to the Platform's 30° and 60° respectively.
  • the array of sensors as a whole is compressed into a 180° hemisphere.
• the off-center translation of the IR/visible source fixture position makes this necessary, since were the array to extend further than 180° around, the player's body would unavoidably and inadvertently trigger sensors behind them.
  • the Type II modules are compacted to
  • FIG. C2-d illustrates an example Type II sensor module with separate optical or ultrasonic active transmitter and receiver.
• FIG. C3 illustrates a side view of the Console, showing how the Type I sensor/LED modules each tilt variously to retain an on-axis line-of-sight to the IR/visible flood fixture, not only for the sensor well but for the LED Light Pipes also.
• the tilt of the line-of-sight-orthogonal modules aids the player in perceiving the in-space orientations of the Type I sensor trigger regions.
  • the overall slanted angle of the top surface of the enclosure parallels the baseline biometric reference swing for the Console: moving between arm(s) out and forward horizontally and arms hanging vertically down at the sides.
  • This is contrasted to the equivalent biometric reference swing for the Platform: moving with arm(s) outstretched horizontally and either spinning the entire body in place or just twisting the torso or hips back and forth.
• the advantage of these baseline swings in each case is in maximizing ergonomic/biometric simplicity and ease of playing the most common musical situations such as arpeggios and melodic scale phrases.
• the Console Type I array's trigger region geometry makes a slight sacrifice in terms of lesser simplicity, being non-symmetric (slanted) and a 180° half-cone vs. the Platform's symmetric full 360° cone.
  • the Console does however yield in positive trade-off the benefits of (a) its reduced installation space, (b) an increased accessibility of the compact "tight play” trigger region near the fixture, and (c) the option for an additional type of conventional 2D (touchscreen) interface situated within, and not interfering with, the 3D free-space media environment.
• FIG. C4-a Console top view illustrates (a) its overlapping Type II and Type I sensor trigger regions, (b) example player position and (c) generated visible shadow.
• the player's shadow is a less prominent visual feedback than for the Platform case, given (a) the small upper surface area of the Console, (b) the off-center, forward-translated fixture position relative to typical player position, and (c) the asymmetric position of the shadow falling mostly behind the player.
  • the preferred Console embodiment includes the use of Class D modules with fogged microbeams [Sheet D9] and for the un-fogged case also inco ⁇ orates the more dramatic LED [Sheets D8 and D9] Light Pipe modules.
• the Series D drawings disclose: (a) the Type I sensor/LED module's visual and MIDI Notes Response State Changes map, as it applies universally to both Platform and Console embodiments and to all classes of modules; (b) the ergonomic regions of Spatial Displacement of Feedback between a Type I sensor trigger region and its local LED-illuminated visual feedback elements; and (c) the internal optomechanical apparatus of each of the Class A, Class B, Class C, and Class D sensor/LED modules for the Platform, as well as alternative Class B and Class D modules designed for the Console and for a "thick" form-factor Platform.
  • All four module Classes A, B, C, D [Sheets D4 through D9] are designed with certain critical ergonomic form-factor constraints in common, so that players changing between (or upgrading to) different free-space systems employing the various Class modules will experience the same essential aspects of ergonomic look-and-feel, and without confusion.
  • These common constraints include the ratios of diameter between LP-1 and LP-2, and the Spatial Displacements of Feedback between active visible responses with their greater diameters surrounding on-axis the substantially lesser diameter invisible Type I sensor trigger region [Figs. D2-a, D3-a].
• the difference is how the visual parameters for those three states respectively are defined as stored in Local Visuals CZB Setups Data [Sheets F1, F2] for the given zone and module, and as may be adjusted by: (a) virtuoso player or content composer via the touch interface with Creative Zone Behavior (CZB) Local Visuals Command Panel [Sheets K2, K3, K4], or (b) by content CZB Local Visuals control tracks [Sheets F4, F5, F6 and G1].
  • CZB Creative Zone Behavior
• the whole-module Response States for the three Attack cases are exactly equivalent to the three for Re-Attack (Near Re-Attack, Re-Attack Hold, and Re-Attack Auto-Sustain) except having visual feedback elements LP1, LP2 and Beam-1 in [Finish or Attack] vs. [Finish or Re-Attack] states respectively.
• the Response State change vectors and their conditions amongst the three Attack cases vs. amongst the three Re-Attack cases are very similar, the differences arising in interplay (change vectors) between Attacks and Re-Attacks. Out of the eighteen possible State Change vectors, seven occur most commonly, while the remaining eleven State Change vectors occur only sometimes or rarely because their conditions to initiate are more restricted.
  • Sheet D1b refers to the same information illustrated graphically in the State Change Map [Sheet D1], except presented in a table format, and including MIDI Note message output and details on the exact timing conditions which together with player actions (shadow vs. un-shadow) define each change vector.
  • FIG. D2-a illustrates the critical ergonomic form-factor considerations for the Class A and Class B Type I sensor/LED modules (having no microbeam) in achieving a specific transparent entrainment effect.
  • Type I sensor trigger regions are typically shadowed and un-shadowed by lateral body motion across a module.
• the differentials in radius (measured from sensor axis) of the visual elements in the module are designed to entrain the player's perception of events as follows.
• the initial Shadow action is interpreted as only moving into a "proximity" or Near-Attack before a subsequent (delay time-quantized) and precise "real" Attack action is made (whether in the form of Attack-Hold or Attack Auto-Sustain).
  • LP-1 is an outer concentric ring (circular, hexagonal or octagonal) so that the effect is identical for lateral motions coming from any direction over the module.
• This effect is a transparent biofeedback entrainment; refer to the Series E drawings [Figs. E1-d through E10-d] for 28 specific examples of this entrainment effect in the context of 14 of the 18 total State Change vectors employed [Sheets D1, D1b].
  • FIG. D3-a illustrates how the Class C and Class D modules also achieve the effect disclosed in the Summary for [Fig. D2] above, with the addition of the microbeams.
• Sheet D4 Platform Type I Sensor / LED Module "Class A"
  • FIG. D4-a illustrates the external top view
  • FIG. D4-b illustrates the corresponding cross section of internal optomechanics for Class A, the simplest Type I sensor/LED module.
  • This Class has the advantages of lowest implementation cost, as well as potentially extremely thin Platform thickness (25.0 mm +/- 5.0 mm), due to the simplicity and compactness of the optics.
• Sheet D5 Platform Type I Sensor / LED Module "Class B"
• FIG. D5-a illustrates the external top view
  • FIG. D5-b illustrates the corresponding cross section of internal optomechanics for Class B sensor/LED module.
• This Class also may be implemented in very thin Platforms similarly to the Class A case, and has the additional feature of RGB LED responses for illuminating each of LP-1 and LP-2 independently, thus allowing fully "floating" sensor zones [Sheet H6]. This is the preferred module for un-fogged thin-Platform use.
  • Sheet D6 Platform Type I Sensor / LED Module "Class C"
  • FIG. D6-a illustrates the external top view
  • FIG. D6-b illustrates the corresponding cross section of internal optomechanics for the Class C sensor/LED module.
• This Class implements a microbeam output on-axis, both within the surrounding outer LP-1 and itself surrounding the Type I sensor (and its trigger region).
  • the considerably more complex optics includes a perforated elliptical mirror and a microbeam-forming optics housing. These microbeam-related optics require a slightly thicker Platform enclosure than the Class A or B cases, on the order of (50.0 mm +/- 10.0 mm).
• Sheet D7 Platform Type I Sensor / LED Module "Class D"
  • FIG. D7-a illustrates the external top view
  • FIG. D7-b illustrates the corresponding cross section of internal optomechanics for the Class D sensor/LED module.
  • This is the preferred module embodiment for (transportable) Platforms, providing fully independent RGB response for both surface LP-1 and LP-2 as well as microbeam.
• This module is essentially identical to Class C [Fig. D6-a] with the addition of the RGB LEDs vs. the single-hue LEDs of Class C.
• Sheet D8 Console Type I Sensor / LED Module "Class B"
  • FIG. D8-a illustrates the external top view
  • FIG. D8-c illustrates the external side view
  • FIG. D8-b illustrates the corresponding cross section of internal optomechanics for the Class B sensor/LED module as configured for the Console embodiment.
  • This is the preferred embodiment for a Console module not used with fog and thus without microbeams.
• since the Console is typically implemented with both Type I and Type II sensors, only the RGB implementations are shown, as these provide the additional degrees of freedom desirable to adequately reflect the Type II data in visual feedback.
  • FIG. D9-a illustrates the external top view
  • FIG. D9-c illustrates the external side view
  • FIG. D9-b illustrates the corresponding cross section of internal optomechanics for a Class D sensor/LED module configured for the Console.
  • This is the preferred embodiment for a Console used with fog and thus having microbeams.
  • the internal (modified Schmidt- Cassegrain) Class D module optomechanics differ substantially from the perforated elliptical mirror type of Class D module.
  • the Series E drawings illustrate ten specific examples in practice of player actions and system responses for a single Type I sensor/LED module (identical for either Platform or Console). Cases of one pair, two pairs, and three pairs of player's [Shadow plus Un-Shadow] actions (being equivalent to one, two or three musical Notes respectively) are shown in the various examples. The examples taken together represent a collection of player "gestures" over a single sensor with corresponding system responses. Any and all forms of polyphonic (multiple sensor) responses for any zone may be directly inferred from these monophonic examples, as being comprised of combinations of the monophonic behaviors shown.
• FIGs. E1-a through E10-a each illustrate one of six different Creative Zone Behavior (CZB) Setups, in terms of the CZB Command Panel for Notes [Sheet H2] and its graphical user interface (GUI) icons [defined on Sheet H1].
  • CZB Creative Zone Behavior
  • GUI graphical user interface
  • the "a" drawing CZB Setup for the zone's Time Quantization is also shown in the form of an adjacent "b" drawing 'TQ slot” pulse waveform (each TQ point or "slot” being exactly one tick wide but shown exaggerated for clarity).
  • the "a” drawing CZB Setup for the zone's Sustain is also shown in terms of an adjacent "b” drawing showing the equivalent musical notes defining the Setup's default sustain durations at each TQ slot.
  • the time axis for the "b" sheets is shown in terms of the MIDI standard of 480 ticks per quarter note.
  • Ticks are the tempo-invariant time metric, thus all examples hold true for any tempo, including for a tempo varying during the gestures.
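A short worked conversion in Python, assuming the stated 480 ticks per quarter note; it shows why tick counts are tempo-invariant while their wall-clock duration is not.

```python
# At 480 ticks per quarter note, wall-clock duration depends on tempo,
# while the tick count itself does not.

TICKS_PER_QUARTER = 480

def ticks_to_seconds(ticks: int, bpm: float) -> float:
    """Convert a tick count to seconds at a given tempo (quarter notes/min)."""
    return ticks / TICKS_PER_QUARTER * (60.0 / bpm)

# The same 480-tick (quarter-note) sustain lasts different wall-clock times:
print(ticks_to_seconds(480, 120))   # 0.5 s at 120 BPM
print(ticks_to_seconds(480, 60))    # 1.0 s at 60 BPM
```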
• FIGs. E1-b through E10-b illustrate a specific case of player actions over a Type I sensor trigger region in terms of a "binary" input timing waveform (Shadow vs. Un-Shadow), since those are the only two player actions available as regards the Type I aspect of the system. However, those two actions occur within a time context (as detailed on [Sheets D1 and D1b]). From the player's perspective the distinction between generating an Attack vs. a Re-Attack is simple: (1) shadowing a sensor while it is in Finish state yields an Attack, and (2) shadowing a sensor while it is already in Attack Auto-Sustain (or Re-Attack Auto-Sustain) state yields a Re-Attack.
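A minimal Python sketch of this Attack-vs-Re-Attack rule; the state names abbreviate the Response States above, and treating all unlisted states as producing no primary response is an assumption for brevity.

```python
# A Shadow over a sensor in Finish state yields an Attack; a Shadow while the
# sensor is in an Auto-Sustain state yields a Re-Attack (per the Sheet D1 map).

FINISH, ATTACK_AUTO_SUSTAIN, RE_ATTACK_AUTO_SUSTAIN = range(3)

def classify_shadow(state: int) -> str:
    """Classify a Shadow action by the sensor's current Response State."""
    if state == FINISH:
        return "Attack"
    if state in (ATTACK_AUTO_SUSTAIN, RE_ATTACK_AUTO_SUSTAIN):
        return "Re-Attack"
    return "no primary response"    # e.g. already held in a -Hold state

print(classify_shadow(FINISH))                   # Attack
print(classify_shadow(ATTACK_AUTO_SUSTAIN))      # Re-Attack
```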
  • the module's system response is shown in ergonomic terms as the "ternary" output timing waveform, comprised of three Primary Response Events which players are entrained to identify with, namely: Attack, Re-Attack, and Finish.
• State Change Vectors [V2, V4, V5, V7, V8, V9, V10, V12, V14, V15, V16, V17, and V18] generate perception of transition to Primary Response Events, and are distinguished from the Secondary State Change Vectors [V1, V3, V6, V11, and V13] by being those state changes where both: (a) the MIDI Note ON or OFF messages are sent, typically generating an audio result, and (b) the module's inner concentric visual elements LP-2 (and Beam-1 when employed) transition from Finish to either Attack or Re-Attack conditions or back to Finish.
  • fast state change vectors are conservatively excluded from being identified as perceptual-motor Entrainment instances, although subjective reports from further experiments with players may reveal otherwise, such as identification of an "entrainment threshold" where the fast action occurs close enough in time to the TQ response to still entrain the Kinesthetic Spatial Sync effect.
  • [Sheet E2] illustrates a common variation of the case shown in [Sheet E1], that is when the player holds the Shadow state beyond the end of the next Time Quantize point in the active Grid or Groove and Un-Shadows before the end of the current Auto-Sustain duration value [Figs. E2-a and E2-b].
  • the note Extends by Auto-Sustain and Finish comes at end of the Auto-Sustain duration value.
  • the subtleties of this behavior vs. the following behaviors shown in [Sheet E3] are highly dependent upon the relationship of the particular Quantize and Sustain CZB settings [Figs. E2-a and E3-a] together with the timing of player actions.
  • [Sheet E3] illustrates another common variation of the case shown in [Sheet E1], that is when the player holds the Shadow state beyond the end of the current Auto-Sustain duration, and the Un-Shadow comes before the next Time Quantization point for the applicable Grid or Groove [Figs. E3-a and E3-b].
  • the Un-Shadow action Truncates the note, that is, the Finish response is simultaneous with Un-Shadow action, since there is no currently active Auto-Sustain value by which to extend the note.
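A simplified Python sketch of the Extend (Sheet E2) vs. Truncate (Sheet E3) outcomes just described, assuming the Attack sounded at a TQ slot with a known Auto-Sustain duration; all times are in ticks.

```python
# If the Un-Shadow arrives while the Auto-Sustain is still running, the note
# Extends to the Auto-Sustain end (E2); if the Shadow was held past that end,
# the Un-Shadow Truncates the note immediately (E3).

def finish_tick(attack: int, sustain: int, unshadow: int) -> int:
    """Tick at which the Finish response occurs for one Shadow/Un-Shadow pair."""
    auto_sustain_end = attack + sustain
    if unshadow <= auto_sustain_end:
        return auto_sustain_end       # E2: note Extends to the Auto-Sustain end
    return unshadow                   # E3: Un-Shadow Truncates the held note

print(finish_tick(attack=480, sustain=240, unshadow=600))   # 720 (extend)
print(finish_tick(attack=480, sustain=240, unshadow=900))   # 900 (truncate)
```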
• [Sheet E4] illustrates the identical player gesture as [Sheet E1] "Performance Example #1," however with a different value for the Sustain Anchor CZB Setup parameter, e.g. 85% vs. 100% [Fig. E4-a and Fig. E4-b side bar].
  • Sustain Anchor generates a unique degree of random variation to each Auto-Sustain value thus providing a "humanized" quality to the Sustain aspect of the performance.
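A hedged Python sketch of such randomized Auto-Sustain variation; the uniform draw between the anchor percentage and 100% is an assumed distribution, since the text does not fix one.

```python
# Each Auto-Sustain value is randomly varied around the configured duration,
# "humanizing" the Sustain aspect of the performance.

import random

def anchored_sustain(base_ticks: int, anchor_pct: float) -> int:
    """Randomly vary a sustain duration between anchor_pct and 100% of base."""
    factor = random.uniform(anchor_pct / 100.0, 1.0)
    return round(base_ticks * factor)

random.seed(0)                          # deterministic for the example
print([anchored_sustain(480, 85) for _ in range(4)])   # slight variations
```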
• Quantize Anchor: the generation of the "random" aspect of Quantize Anchor is accomplished somewhat differently than for the Sustain Anchor case, which uses an artificially generated random number.
• for Quantize Anchor, the player's natural variation in gap duration between Shadow action and the next Time Quantize slot (according to the applicable CZB Quantize Setup [Fig. E5-a]) is exploited as the 100% frame of reference, to which lesser-percentage Quantize Anchor values are applied [Fig. E5-b side bar]. This also allows the musical feel of "playing ahead", since values less than 100% translate into a relative shift forward in time of the TQ Attack, a feature useful for
  • [Sheet E8] illustrates an example of employing Speed (detection of lateral motion rate over a Type I sensor) as a parameter which affects the definition of a Sustain duration uniquely for each Attack Response.
  • An "Inverse Map” is shown whereby a faster Shadow action Speed results in a shorter Attack Sustain duration [Fig. E8-b and E8-b side bar]. While many other maps [Sheet i4] may be employed applying Speed to Sustain, this example is particularly "natural" in feel.
• FIG. E8-f shows detail of the Speed Control Panel settings [Sheet i4] for this example, indicating the frame of reference Grid to which (or "OVER") the Speed is applied as a percentage to calculate the resulting sustain value, as well as the minimum ("LO") and maximum ("HI") values, and the resolution or number of mapped-to values ("# VAL"). It is also possible to map Speed to a range of MIDI values, or even to ticks directly [Sheet i4], depending on which CZB Notes behavior it is applied to [Fig. H1-c] and the effect desired.
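A hedged Python sketch of such an Inverse Map, assuming Speed is normalized to 0..1 and that LO/HI/# VAL quantize a percentage applied OVER a reference Grid value; the normalization and rounding are assumptions.

```python
# Faster Shadow Speed maps to a shorter sustain: the Speed selects one of
# "# VAL" quantized percentage steps between LO and HI, applied to the Grid.

def speed_to_sustain(speed: float, grid_ticks: int,
                     lo_pct: float, hi_pct: float, n_val: int) -> int:
    """Inverse-map a normalized Speed (0..1) to a quantized sustain in ticks."""
    speed = min(max(speed, 0.0), 1.0)
    step = round((1.0 - speed) * (n_val - 1))        # inverse: fast -> low step
    pct = lo_pct + step * (hi_pct - lo_pct) / (n_val - 1)
    return round(grid_ticks * pct / 100.0)

# Over a quarter-note Grid (480 ticks), LO=25%, HI=100%, 8 mapped-to values:
for s in (0.0, 0.5, 1.0):
    print(s, speed_to_sustain(s, 480, 25, 100, 8))
```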
• FIG. E10-f shows detail of the Height Control Panel settings, a variation of those shown on [Sheet i3], indicating the frame of reference is direct mapping to MIDI ticks.
• Sustain must be defined either by a Grid or by a "bridged to" [Fig. H2-d] Height control for Sustain.
• Group 1 [Sheets F1 and F1b] illustrate in summary overview fashion how the two primary functional control modules of the invention - the Free-Space Interface (Firmware and Hardware) Module (470, 507) and the Creative Zone Behaviors (CZB) Processing (Software) Module (461) - may either co-reside within a single Integrated Console enclosure (130, 131) or reside in a Free-Space Interface (543) enclosure distinct from a system enclosure such as a 19" rack mount for a Host Computer (487) with Audio systems (480, 481, 482). These two modules intercommunicate via MIDI messages.
  • Group 2 [Sheets F2 and F3] illustrate the internal details within the CZB Processing Module and Free-Space Interface Module, respectively.
• Group 3 [Sheets F4, F5 and F6] illustrate three variations on Clock Master (472) and Global Sync Architecture, and details the data flows between the CZB Processing Module software and other ancillary software and equipment. This group also illustrates the distinctions in data flow pathways used only for interactive content authoring (491, 496, 500) versus those used for both live interactive play and authoring (504, 510, 511, 512).
  • CZB Command Protocol 502
• Group 4 [Sheet F7] illustrates the modular internal electronics for a Platform embodiment, although the Embedded Free-Space Microcontroller (530) circuit board detailed in [Fig. F7-b] may be used for all Platform [Series A and B] and all Console [Series C] free-space interface configurations.
• the CZB Processing Module (461) communicates with suitable companion software including MIDI sequencer (440) and Other MIDI Processing (439) software co-residing on a multi-tasking PC-type computer (487), as well as with other MIDI-compatible media equipment including computer graphic systems (438) with large-format displays, and intelligent robotic lighting systems (437).
  • MIDI messages are employed in most of these cases (e.g.
• MIDI Protocols The Series F drawings illustrate the contexts of three distinct and novel uses (444, 445, 502) of the MIDI protocol, designed specifically for the free-space interactive system, as well as additional uses (496, 503, 510, 512) of "pre-existing" MIDI message usage (e.g. being compliant with manufacturers' MIDI implementations) however in the free-space context. All of these MIDI protocol uses, in their functional assignments and specific MIDI messages employed, are identical whether used over original MIDI serial, RS-485, RS-232C, internal shared memory, or via other high speed communications standards such as FireWire or USB.
• Type I events are reported (83, 466) using either the protocol's (445) Note ON / Note OFF or Control Change messages.
• the MIDI Channel value in these messages indicates CZB Zone assignment, Note Number or Controller Number indicates sensor physical position in the interface, and Note Velocity or Control Change Data value indicates the player's Speed (581) parameter (speed of lateral motion across the Type I trigger region).
• Type II events (669) (height detection data) are reported using Control Change messages.
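A minimal Python sketch of this event encoding, assuming the Note-ON/OFF form for Type I and a Control Change for Type II; the specific controller number used for height is a placeholder, not an assignment made by the disclosure.

```python
# MIDI Channel carries the Zone, Note Number (or Controller Number) carries
# the sensor's physical position, and Velocity (or Control data) the Speed.

def type1_event(zone_channel: int, sensor_id: int, speed: int,
                shadow: bool) -> bytes:
    """Encode a Type I Shadow/Un-Shadow event as Note ON/OFF."""
    status = (0x90 if shadow else 0x80) | (zone_channel & 0x0F)
    return bytes([status, sensor_id & 0x7F, speed & 0x7F])

def type2_event(zone_channel: int, controller: int, height: int) -> bytes:
    """Encode a Type II height reading as a Control Change."""
    return bytes([0xB0 | (zone_channel & 0x0F), controller & 0x7F, height & 0x7F])

print(type1_event(zone_channel=0, sensor_id=5, speed=90, shadow=True).hex(" "))
print(type2_event(zone_channel=0, controller=20, height=64).hex(" "))
```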
• (i) The Visuals Protocol is comprised of two functional groups of messages.
• LED Configuration Commands setup firmware-accessed RGB color lookup tables in memory (469), and also set MIDI message assignments.
  • LED Control Commands change the active LED states pursuant to the logic in software (429) as per [Sheets D1, D1b].
  • LED Configuration Commands include both System Exclusive and Control Change messages.
• LED Control commands employ either Control Change or Note ON / Note OFF messages, determined by previous LED Configuration Commands or factory defaults.
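A purely illustrative Python sketch of an LED Control command in its Control Change form; the controller number selecting a module element and the data value indexing a configured RGB lookup table are assumptions, since those assignments are set by LED Configuration Commands or factory defaults.

```python
# Hypothetical encoding: one Control Change sets one LED element to a color
# drawn from a previously configured RGB lookup table.

def led_control_cc(channel: int, element_ctrl: int, color_index: int) -> bytes:
    """Build a Control Change that sets one LED element to a lookup-table color."""
    return bytes([0xB0 | (channel & 0x0F), element_ctrl & 0x7F, color_index & 0x7F])

# e.g. assumed controller 21 = LP-1 of module 1; color index 3 = an "Attack" hue
print(led_control_cc(channel=0, element_ctrl=21, color_index=3).hex(" "))
```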
• (ii) The Sensor Mode Protocol uses System Exclusive messages to configure the characteristics of Type I and Type II messages subsequently sent via the Free-Space Event Protocol (445).
• Type I configuration options include MIDI message assignment, AGC (Automatic Gain Control) modes and parameters, sensor-to-Zone assignments, and dynamic range of Speed (581) reporting.
• Type II configuration options include MIDI message assignment, multiple sensor interpolation and spatial averaging modes, sensor-to-Zone assignments, time averaging and reporting modes, and dynamic range of Height (580) reporting.
• (C) Creative Zone Behavior (CZB) Command Protocol.
• Sheets F4, F5 and F6 illustrate the contexts of use (491, 501) for the CZB Command Protocol (502). This protocol both indexes to, and encodes within MIDI messages (491, 501) external to the CZB Processing Module (461), the four types of CZB Setups Data residing within the CZB Processing Module [Sheet F2], namely for Notes (430), MIDI Controllers (431), Local Visuals (432) and External Visuals (433).
  • the CZB Setups Data stores the control and parameter values for ergonomic response behaviors of the free-space system (e.g.
• the CZB Setups Data serve this role in software (429) identically whether the source of their configuration data originated from either: (a) an author/composer's (or expert player's) use of the GUI (Graphic User Interface) Command Panels [Series H, i, J and K drawings], or (b) via an input MIDI stream of CZB Command Protocol messages including from CZB Command Tracks (492, 493, 494, 495) stored within a sequencer (440 or 499) MIDI song file (filename.mid) as shown in [Sheets F4, F5 and F6].
• the CZB Processing Module (461) software includes in its pre-stored CZB Setups Data (write-protected) library of "factory defaults" various pre-configured Zone Map (656) assignments [Sheet H6] and Creative Zone Behaviors for Notes (430) such as shown in the detailed examples [Sheets H4 and H5].
  • the most "compact" use of the CZB Command Protocol e.g. efficient in terms of minimizing MIDI communications overhead) is to simply select from the "factory default” CZB Setups Data configurations, or from previously "user defined” and previously stored CZB Setups.
• the simplest CZB Command Protocol (502) context consists of two aspects. First, a MIDI System Exclusive Master Zone Allocation message (i) assigns a Zone Map [Sheet H6] or sensor allocation map to physical Free-Space Interface Module(s) (507), and (ii) assigns one CZB Command Receive Channel (626, 627, 628) to each Zone for all free-space interfaces connected to the CZB Processing Module's (461) host computer (487). These CZB Command Receive Channel assignments also determine which incoming Free-Space Event Protocol (445) Type I and Type II sensor messages are processed according to which Zone's (629, 630, 631) CZB Setups (295, 296, 297).
• CZB Banks The number of CZB Banks is memory (of 487) dependent. Available memory is allocated to (n) read-only Banks for "factory default" pre-stored (write-protected) CZB Setups, plus another (n) Banks for "user" CZB Setups which may be freely designed and configured, typically by initially copying the "factory" setups into "user memory" and then modifying them.
• the CZB Command Panels (599, 600, 601) GUI are used to configure the CZB Setups for "Notes" shown in [Series H, i and J], "Nuance" (free-space continuous Controller modes) not disclosed but suggested in [Sheet G2 (566)], "Local Visuals" (LEDs response) shown in [Series K], and "External Visuals" not disclosed but suggested in [Sheet G2].
• GUI Command Panels generate [Sheets F4, F5, F6] corresponding MIDI "authoring" output (491) of the CZB Command Protocol messages which are recorded into tracks (492, 493, 494, 495) on the host-resident sequencer (440 or 499).
  • This "Sysex" message also includes assignment of its CZB Setup data to a "user" Setup or memory index number, so that subsequent to the first instance of use, the more compact Control Change messages for CZB Bank and CZB Setup may be employed which simply index into the user CZB Setups data memory previously loaded by the CZB Zone Data Dump message, to make it active.
  • Control Change messages are used to affect any and all of the large number of individual CZB Setup Control Types [Fig. H1-b] with their parameters detailed in [Series i]. These Control Change messages utilize the extended scope of device-specific data via the MIDI protocol's Non Registered Parameter Numbers (NRPN) with LSB (least significant byte) and MSB (most significant byte), and may be used at any time to adjust any characteristics of response during play.
  • NRPN Non Registered Parameter Numbers
  • MSB most significant byte
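A minimal Python sketch of one NRPN write as standard MIDI defines it (CC#99/#98 select the parameter address, CC#6/#38 carry the 14-bit data); the parameter number shown is a placeholder, not an actual CZB Setup Control Type assignment.

```python
# One NRPN write is a 4-message Control Change sequence: NRPN MSB, NRPN LSB,
# Data Entry MSB, Data Entry LSB. This extends the scope of device-specific
# data well beyond the 128 standard controller numbers.

def nrpn(channel: int, param: int, value: int) -> bytes:
    """Build the 4-message Control Change sequence for one NRPN write."""
    ch = 0xB0 | (channel & 0x0F)
    return bytes([ch, 99, (param >> 7) & 0x7F,    # CC#99: NRPN MSB
                  ch, 98, param & 0x7F,           # CC#98: NRPN LSB
                  ch, 6,  (value >> 7) & 0x7F,    # CC#6:  Data Entry MSB
                  ch, 38, value & 0x7F])          # CC#38: Data Entry LSB

# Hypothetical parameter 0x0123 set to 5000 on channel 1 (0-indexed 0):
print(nrpn(channel=0, param=0x0123, value=5000).hex(" "))
```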
• these CZB Command Protocol Control Change messages include the equivalents to all GUI actions, including for example: changing the application of player's Type II Height data (580) from Attack Velocity (267) to Attack Range (270), changing the Lock to Groove (84) for Attack Quantize (269) from one Groove to a different Groove (697), changing the Attack Channels (271) from pre-assigned values to being determined by the player's Precision (288) parameter, and the vast number of other permutations of ergonomic control illustrated in [Series H, i and J]. [Fig. H1-c] details the 71 possible (valid) CZB Behaviors for Notes.
  • [Series i] details the Control Types and their parameters available for assignment to Notes behaviors
• [Series J] illustrates specific examples of useful applications in practice; all of these are individually configurable by use of the CZB Command Protocol (502).
• MIDI controllers For authoring of audio accompaniment to be used as part of free-space interactive content titles, conventional MIDI controllers (486) may be used (500) to capture accompaniment tracks (497), including common uses of Note ON/OFF messages with velocity, Continuous Controllers for such as portamento, breath control, and modulation Control Change messages, and/or a pitch bend device for generating Pitch Bend Change messages.
• External Visuals accompaniment (e.g. non-interactive aspects of a total immersive media environment)
• this may similarly use the conventional MIDI controller (486) or other devices such as memory lighting controllers, and store such "lighting cues" also into tracks (497) for playback during interactive play sessions.
• the CZB Processing Module (461) outputs "conventional" Note ON / Note OFF messages (510) to Other MIDI Processing Software (439). These messages reflect the player's Type I sensor shadow/unshadow actions (sometimes combined together with influence of Type II sensor data if employed), however these messages are temporally adjusted or scheduled (434) by logic (429) to be in Kinesthetic Spatial Sync alignment [Figs. E1-c,d&e through E10-c,d&e].
• the function of the Other MIDI Software (439) is typically and primarily (but not exclusively) to adjust or translate the note number (byte two) according to various schemes of chord/scale adjustment under control of its own Other MIDI Processing Command Tracks (498), and to then send (511) these adjusted Note ON/OFF messages (still within the Kinesthetic Spatial Sync timing, e.g. passed through without other time processing) on to sound modules and effects units (480).
• Type II sensor data may alternatively be passed (510) to Other MIDI Software (439) directly in the form of Control Change messages which may affect a
• Free-space Internal (CZB Processing Module) software (461) acting as Clock Master (506) is shown in [Sheet F4].
  • Enhanced CD (CD+), CD-ROM, and DVD content may similarly serve as Master Clock sources; although these are not separately shown in the drawings, they may be derived from the other examples illustrated.
• the free-space architecture brings all these diverse media elements into the precise Kinesthetic Spatial Sync ergonomic alignment, in sync with whichever MIDI Master Clock, while many media components in the environment need not actually receive in their MIDI streams (510, 511, 512) the clock data (MIDI System Realtime byte $F8 hex).
  • This avoids a communications overhead which is very significant since many types of MIDI devices and software commonly exhibit substantial delays, dropped messages, or can even fail (lock-up) altogether when the very dense System Realtime MIDI beat clock is inter-mixed with much other (non-System-Realtime) MIDI data.
• This is in practice equivalent to a kind of "pseudo-clock-master" (474), e.g. without needing the $F8 System Realtime clock data stream.
• This includes the free-space software (461) in some cases simultaneously functioning in the capacity of a bona-fide MIDI Clock Slave (518) and a (pseudo-) MIDI Clock Master (474).
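A minimal Python sketch of the clock-filtering this pseudo-clock-master idea implies: since $F8 can never appear as a MIDI data byte, stripping it from a stream destined for clock-indifferent devices is a simple byte filter.

```python
# Devices that stay aligned by message timing alone need not receive the
# dense System Realtime beat clock, avoiding the overhead described above.

def strip_beat_clock(stream: bytes) -> bytes:
    """Remove MIDI System Realtime Timing Clock bytes (0xF8) from a stream."""
    return bytes(b for b in stream if b != 0xF8)

mixed = bytes([0xF8, 0x90, 0x3C, 0x64, 0xF8, 0xF8, 0x80, 0x3C, 0x00, 0xF8])
print(strip_beat_clock(mixed).hex(" "))   # note messages pass; clock removed
```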
• FIG. F1-a illustrates the Integrated Console hardware/software architecture which includes together the functions illustrated in [Sheets F2, F3, and F6] within a single physical enclosure (130, 131).
• the Integrated Console enclosure includes the Embedded Free-Space Microcontroller (530) with its Free-Space Interface Firmware (470) for Type I Sensor/LED (128) and Type II Sensor (113) processing, and Multitasking PC computer (487) with integral touch-display (127).
• In addition to the CZB Processing Module (461) software, the enclosure-internal PC computer also runs the coresident MIDI Sequencer (440) and Other MIDI software (439).
• the integral touch-display and data storage subsystems are shared (488) via operating system BIOS and OS resources.
• FIG. F1-a shows partitioning for MIDI synthesizer(s) and digital audio (D.A.) hardware in its most compact form, within one or more circuit cards (480*) residing within the PC's expansion bus slot(s). This overcomes limitations such as 31.25k baud external MIDI speeds and allows for optimal timing performance and integration with the software modules (439, 440, 461) running on the PC.
• Internal audio amplifier (482) and speakers (484, 485) are included, although external MIDI sound and effects modules (480), mixers (481) and external audio systems may also be used as shown in [Fig. F6-a]. While not shown in [F1-a], when external audio systems are used (such as in Pro performance venues) the internal amp and speakers may serve as local "monitors" for the performer, and the internal MIDI synth may be disabled. In professional stage or themed venues, or for visuals content authoring, the MIDI I/O panel (135) connects to external MIDI-controlled graphics (438), Robotic lighting (437), and/or other Free-Space Hosts (441) via inter-host extensions to the CZB Command Protocol (502).
• FIG. F1-b illustrates the Interface-and-Host Architecture which partitions the free-space interactive media system into multiple enclosures, primarily an Interface enclosure (543) and a Host PC (487).
  • This "split" architecture is preferred for three configurations: professional stage Platform, consumer Platform, and consumer Console.
  • the "split enclosures" architecture is suitable for a basic (economical) consumer-type or "home” Console embodiment lacking an integral PC and touch-display.
• This type of Console, a free-space interactive PC-peripheral MIDI interface, is connected by conventional MIDI, RS-232C or RS-485 serial cable to a separate home PC computer running the co-resident software modules (461, 439, 440, 488).
• the MIDI data streams exchanged over this cable link are identical in nature to those used within an Integrated Console [Sheet F1].
• the audio is typically handled by means of a PC-integrated sound card (480*), however it may alternatively, in the pro-sumer case, be in the form of separates (480, 481, 482).
• PC-integrated sound card (480*)
• the enclosure-internal electronics for such a home Console, with embedded firmware (530, 470) and Type II sensor modules (113), are identical to the Platform case.
  • the internal cabling and interconnects however are Console-specific, and the Console style of Type I sensor/LED modules [Sheets D8, D9] are used.
  • [Sheet F2] illustrates the CZB Processing Module software internal architecture and data flow.
  • This software is "host-resident", residing within a PC-type computer (487 ).
• the CZB Processing Module (461) software always complements one or more Embedded Free-Space Microcontroller module(s) (530) illustrated in [Sheets F3 and F7-b].
• the CZB Processing Module functions as logic processor, scheduler and mediator between the Free-Space Interface (507) data streams (444, 445) and the other host-resident MIDI software modules (439, 440), MIDI audio (480) and (when employed) computer graphics (438) and robotic lighting equipment (437).
• the CZB Processing Module further manages, with Display Device (442) and its control software (422) together with Input Device (443) and its software (421), a GUI interface logic implementing the functions shown in the [Series H, i, J and K] drawings, using low-level I/O via OS/BIOS display and input device resources (488) shared with other (439, 440) host coresident software [Sheets F1, F1b, F4, F5 and F6].
• the MIDI IN and OUT (446, 448) are shown as one item each, although in practice they represent a more complex mix of both internal software data flows and external communications ports as further detailed in [Sheets F4, F5 and F6].
• the function of the MIDI IN Parser (a) (420) is to filter out any data errors, and then to split the incoming valid MIDI (446) and RS-485 (450) Data In into three data streams and route them to the appropriate internal software modules.
• Incoming MIDI clock data ($F8 messages) from external source (509 or 516) is converted into a beat-clock-synced metronome format (424) and distributed to both the Free-Space Event Processor (429) and the Scheduler (434).
• Incoming Free-Space Event Protocol (445) messages are routed to the Remote Performance Pre-Processor (426), where they, along with any equivalent GUI commands detected by (421) for simulated performance [Figs. K4-a, K4-d], are converted into an internal uniform format of event messages for the Free-Space Event Processor (429).
• Incoming Creative Zone Behavior (CZB) Command Protocol (502) messages (501) originating in external sequencer (440 or 499) are routed to the CZB Command Processor (423).
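A simplified Python sketch of this three-way routing; the classification rules below are assumptions for illustration, since real routing also depends on configured message assignments.

```python
# Clock bytes go to the metronome (424), Free-Space Event Protocol messages
# to the Remote Performance Pre-Processor (426), and CZB Command Protocol
# messages to the CZB Command Processor (423).

def route(msg: bytes) -> str:
    """Classify one incoming MIDI message for internal routing."""
    status = msg[0]
    if status == 0xF8:                       # System Realtime Timing Clock
        return "metronome (424)"
    if status == 0xF0:                       # System Exclusive
        return "CZB Command Processor (423)"
    kind = status & 0xF0
    if kind in (0x80, 0x90):                 # Note OFF / Note ON
        return "Remote Performance Pre-Processor (426)"
    if kind == 0xB0:                         # Control Change: either protocol
        return "per configured assignment (426 or 423)"
    return "discard (unparsed)"

for m in (bytes([0xF8]), bytes([0x90, 5, 90]), bytes([0xF0, 0x7D, 0xF7])):
    print(m.hex(" "), "->", route(m))
```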
• the CZB Command Processor (423) receives and parses CZB Command Protocol (502) messages and, when these are deemed valid, makes the relevant modifications to the active CZB Setups Data.
• the CZB Command Processor (423) also interprets user GUI actions via the Input Device (443) and software (421), and if MIDI output is enabled for content authoring (491), or inter-host protocol extension is enabled for link to Other Free-Space Hosts (441), it then structures CZB Command Protocol (502) messages and sends them to the MIDI OUT Message Assembler (435) for MIDI output (448).
• the Free-Space Event Processor (429) implements the core realtime functional logic of the Creative Zone Behaviors paradigm.
• the combined output of these tests is the determination of which one of the 18 possible State Change Vectors (V1 through V18) should follow from the Shadow ("S") or Un-Shadow ("US") or ΔT-only input event instance.
• Free-Space Event Processor logic (429) outputs to the Scheduler (434) (always with a time stamp delay value of zero) the appropriate LED Control Command in protocol (444), namely one of the 7 cases of Module Elements Feedback States for LP1 (93, 97, 13, 70), LP2 (94, 98, 14, 71) and B1 (15, 72) shown in [Sheets D1 and D1b].
• the resulting RGB output values of the Module Elements for these 7 cases are dependent upon software (470) previous receipt of LED Configuration Commands in protocol (444) for each Zone (see Section 5.6.4, MIDI Protocols), and their consequential RGB lookup table settings in the memory (468) of Free-Space Microcontroller (530).
• Type I Sensor Processing software employs a floating differential type of AGC or Automatic Gain Control on the digitized Type I data, in order to: (a) allow for variance in IR source flood (831) intensity due to varied relative positioning (5, 6, 7, 8) of each sensor on the free-space interface surface; (b) allow for variations in source flood intensity due to such as intermittent fogging materials introduced in the intervening air; and (c) allow for variations in the flood fixture's height (833) [Fig. A6-a].
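A hedged Python sketch of a floating-differential AGC of this kind: each sensor tracks a slowly drifting baseline of its own IR level and flags a Shadow when a sample drops well below it. The smoothing constant and threshold fraction are assumed values, not parameters fixed by the disclosure.

```python
# The baseline absorbs slow variation (sensor position, fog, fixture height)
# while a sharp drop below it is detected as a player's IR shadow.

class FloatingAGC:
    def __init__(self, alpha: float = 0.01, drop: float = 0.5):
        self.alpha, self.drop, self.baseline = alpha, drop, None

    def shadowed(self, sample: float) -> bool:
        """Update the floating baseline and report Shadow/Un-Shadow."""
        if self.baseline is None:
            self.baseline = sample
        is_shadow = sample < self.baseline * self.drop
        if not is_shadow:               # track drift only when unobstructed
            self.baseline += self.alpha * (sample - self.baseline)
        return is_shadow

agc = FloatingAGC()
for s in [100, 101, 99, 100, 30, 28, 100]:    # the dip is a player's shadow
    print(s, agc.shadowed(s))
```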
• when software (427) qualifies a Type I sensor event (23, 24) as valid, it creates an internal sensor event message including sensor position ID and speed parameter.
• the MIDI OUT Message Assembler (435) interprets this internal sensor event message and creates the assigned type of MIDI message (either Note ON/Note OFF or Control Change) and sends it out MIDI (83, 466) and/or RS-485 (81, 467).
• This output Free-Space Event Protocol (445) message has values of appropriate Note Number or Control Number (for sensor ID), Channel (for Zone ID), and Velocity or Control Data (for speed parameter) according to previous MIDI message format configurations set by protocol (444).
• Type II Sensors (113) are pre-processed by local electronics and software on their PCB modules (415) and sent via cable (417) to Type II Sensor Serial I/O (538).
• Type II sensor modules (415) are self-contained microprocessor subsystems which create a serial output stream of Type II data which is sent and forwarded down cable (417) in a cascading scheme, resulting in one Type II status packet delivered to serial port (538).
  • Type II sensor Processing software (428) polls port (538) at fixed intervals for this periodic packet of combined Type II data representing state of all Type II modules in the interface, regardless of timing and nature of player actions.
• although Type II data is generated at much higher rates at each Type II module (415), the collection into one periodic "global" (all Type II sensors) Type II packet constitutes an efficient data reduction scheme in the time domain.
  • the polling rate for such a serial scheme need not be too high (for example 30 msec or even longer), as time-averaging or "last value" of data is typically used by remote host (487)
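A simplified Python sketch of such fixed-interval polling with time-averaging; the packet layout (one height byte per Type II sensor) and the four-sample running average are assumptions for illustration.

```python
# The host polls the serial port for the periodic "global" Type II packet at
# a fixed interval (e.g. 30 msec) and keeps a short running average per sensor.

from collections import deque
import time

HISTORY = 4                                   # samples in the running average

def poll_type2(read_packet, n_sensors: int, interval_s: float = 0.03, polls: int = 3):
    history = [deque(maxlen=HISTORY) for _ in range(n_sensors)]
    for _ in range(polls):
        packet = read_packet()                # one height byte per Type II sensor
        for i, height in enumerate(packet[:n_sensors]):
            history[i].append(height)
        averages = [sum(h) / len(h) for h in history]
        print(averages)
        time.sleep(interval_s)

# Stand-in for the serial port: returns a fixed 6-sensor packet.
poll_type2(lambda: bytes([10, 20, 30, 40, 50, 60]), n_sensors=6)
```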
• an alternative cabling scheme may be used between modules (415) and the EFM PCB (530) which, rather than cascading into a single "global" packet reporting for all modules, instead reports individual Type II module data packets to port (538) and thus to software (428).
• the host co-resident software architecture [Fig. F4-a] shows the CZB Processing Module (461) acting as MIDI Clock Master (506) to the third-party Other MIDI Processing (439) coresident application with its embedded Sequencer Module (499).
  • Sheet F5 Global Sync Architecture "CD-Audio/Other MIDI" Clock Master
  • FIG. F5-a shows the embedded Sequencer Module (499) of the Other MIDI Processing (439) co-resident application acting as MIDI Clock Master (506) to the CZB Processing Module (461), which thus acts as Clock Slave (508).
  • the origination of the conventional MIDI Clock stream ($F8 bytes) from sequencer (499) is itself internally synced to another clock source process.
  • the third-party Other MIDI Software (439) includes the capability of playback of Redbook audio CD tracks (513) on the PC (487) CD-ROM drive, with low-level timing synchronization provided to the embedded sequencer (499).
  • playback of the CD-audio track is used by an author, working with devices (443 or 486), to manually create a tempo Beat-Alignment Track (515) within the sequencer song file.
  • this low-level timing logic in Other MIDI Software then automatically synchronizes the Beat-Alignment Track (515) to the CD-audio track (513), thus effectively making the CD-audio a "meta-clock" master M1 (514) in turn controlling the tempo of the conventional clock master M2 (516) output; a clock-byte sketch follows below.
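To make the master/slave clock relationship of these architectures concrete, here is a sketch of conventional MIDI beat clock in Python: the master emits the $F8 System Real-Time byte 24 times per quarter note (this rate is standard MIDI), and a slave recovers tempo by timing those bytes. The send_byte transport and the smoothing-free tempo estimate are assumptions.

```python
import time

PPQN = 24  # MIDI beat clock resolution: 24 $F8 bytes per quarter note (standard)

def run_clock_master(send_byte, bpm: float) -> None:
    """Emit $F8 at the rate implied by bpm; in the F5 architecture the
    Beat-Alignment Track would retime bpm against the CD-audio 'meta-clock'."""
    interval = 60.0 / (bpm * PPQN)   # seconds between clock bytes
    while True:                      # sketch only; a real master would correct drift
        send_byte(0xF8)
        time.sleep(interval)

class ClockSlave:
    """Recover tempo by timing incoming $F8 bytes (e.g. the CZB module as slave)."""
    def __init__(self):
        self.last = None
        self.bpm = None

    def on_byte(self, byte: int) -> None:
        if byte != 0xF8:
            return
        now = time.monotonic()
        if self.last is not None:
            self.bpm = 60.0 / ((now - self.last) * PPQN)
        self.last = now
```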
  • Sheet F6 Global Sync Architecture "Sequencer" Clock Master
  • FIG. F6-a illustrates a more complex host (487) co-resident software architecture, where the functions of Other MIDI Software (439) are reduced primarily to its note-number translation functions (as described in Section 5.6.4 part (D), Other Third Party MIDI Protocol Uses and Conventions), and its embedded Sequencer Module (499) functions are replaced by those of another third-party Sequencer Application (440).
  • the Other MIDI software (439) Command Tracks (498) are stored in the song file on sequencer (440), but otherwise function the same as in the cases shown on [Figs. F4 and F5].
  • the advantage of this configuration includes the use of much more fully-featured (and more varied) sequencers (440) than the embedded sequencer (499), while still retaining the unique features of software (439) in the total host MIDI software architecture. Additional features of such sequencers (440) include sophisticated internal management of Digital Audio tracks (525) for seamlessly integrated MIDI and digital audio processing, composing and editing. During authoring sessions (denoted by symbol "3"), audio (524) is captured using such devices as microphones or pickups (523) and recorded into tracks (525). For both recording and playback, audio (529) feeds to mixer (481) and may route also into samplers and/or effects units (480).
  • FIG. F7 illustrates the modular hardware for the preferred embodiment of Free-Space Interactive "Platform #1" (543), although many of the drawing elements may be applied as well to internal electronics for Console embodiments. Many of the elements of the hardware shown in [Fig. F7-a] are discussed above, in Description of Drawings for Sheet F3: Free-Space Interface Module, since the hardware operates intimately with the software (470) discussed therein. All elements are also noted in the Legend to [Sheet F7]. Type I Sensor/LED and light pipe modules, detailed in [Sheets D4, D5, D6 and D7], all interface to a printed circuit board (531) shown in this [Fig. F7-a], which includes a connector to cable of type (532) to the centrally located Embedded Free-Space Microcontroller board (530) via connector of type (541).
  • the center hex enclosure (2) of the Platform has a removable cover (1) allowing access to the central electronics within, and the PCB (530) includes a hole (542) allowing use of a steel support post to the cover, to protect the electronics from the repeated and continuous player impacts in typical use.
  • Type I sensor/LED light pipe module printed circuit boards (243, 262) [Sheets D8, D9] interface to an identical EFM card (530) centrally located within the Console enclosure (30), also using cables of type (532), differing only in length and orientation suitable for the Console case.
  • Sheet H1 (H1-a: see text on pages 96, 115, 140, 145; H1-b: 147; H1-c: 96, 146, 147, 156).
  • Sheet J7 Set Value ON Applied to Notes Re-Attack Velocity Behavior
  • Sheet J9 Set Value Aftertouch Applied to Notes Re-Attack Aftertouch Behavior
  • Simultaneity and Synesthesia: simultaneity is critical to the perception of "Synesthesia" (560), which is that type of perception where multiple sensory stimuli (546, 547, 548) are perceived coherently as aspects or features of a single event or stimulus.
  • a key enabler to reaching the threshold of a synesthetic event is in fact simply that perceptions are being experienced at the same time.
  • Non-simultaneity reinforces perception of multiple (distinct) events across the sensory modalities, thus directly negating Synesthesia, which by definition must be a unified perception amongst those sensory modalities.
  • Non-simultaneity precludes, or at least greatly suppresses, the chance for Synesthesia.
  • Entrainment Phase 1: the invention stimulates players into an evoked Gestalt of "My body is the instrument," which may also be expressed in terms of:
  • the Kinesthetic Spatial Sync experience continuously provides a visceral (physical) body kinesthetic perception of the otherwise rarely juxtaposed properties of:
  • Free-space media systems employing the invention's Creative Zone Behaviors biofeedback paradigm for interactive music are uniquely able to provide transparent transfer functions (551, 552, 553) for all feature spaces (546, 547, 548), thus comprising an Omni-Synesthetic Manifold (571) of experience.
  • the invention co-registers all of these synesthetic transparencies within a unified, clear kinesthetic and visceral perceptual-motor ergonomic paradigm. In so doing, in free-space, rhythm is the "last" (most recent in the evolution of musical instruments) musical transfer function to be made simultaneously transparent and symmetric.
  • rhythmic processing is a critical enabler when employed simultaneously with the other transparent transfer functions previously available (for timbre and pitch).
  • What is enabled by the Kinesthetic Spatial Sync effect is the evoking of a perceptual-motor gestalt of Creative Unity, and the unconditional subjective "ownership" of effortless virtuoso precision in aesthetic creative expression.
  • Disclosed Human Factors Reflect a "Process".
  • the implementation and fabrication methods including sensor electronic hardware, sensor control software, system enclosures, mechanical packaging, sensor array spatial configuration, LED indicators, external visual response systems, and musical response systems
  • Kinesthetic Spatial Sync feedback paradigm, namely the operational process of the Creative Zone Behaviors.
  • One skilled in the relevant arts could execute a variety of implementations employing varied control means, alternative optical and electronic materials and technologies, all the while exhibiting the disclosed ergonomic, optical, cybernetic, algorithmic, and human factors design constraints.
  • Test Player Reports: utilizing developmental prototype reductions to practice, hundreds of trial players encompassing a broad player demographic (including those with no prior musical skill or training) have reported various experiences which we loosely categorize into the following common results:
  • the invention provides the experience that body motion (input) is spatially superposed and simultaneous to aesthetic media creation (output).
  • a more psychological perspective might describe this in terms such as "creative physical expression becomes inescapably synonymous with sharable beauty and harmony in perception".
  • This powerful positive feedback encourages continued creative expression and exploration through continued body motion.
  • the combination of unrestricted free- space interface and aesthetic musical and visual responses thus collectively entrain continuous player body motion.
  • Continuous body motion in turn further amplifies and sustains the desired ergonomic effect of "effortlessly creating aesthetic experience.”
  • the continuously positive and synesthetic feedback to full-body creativity appears to spontaneously evoke the "Creative Wellness Response", which further empowers creativity, thus forming a self-reinforcing biofeedback process.
  • Evoked Euphoria: a subjective "euphoric" nature of the disclosed free-space-interactive experience was reported by many trial players, a condition nonetheless simultaneous with increased alertness, self-awareness and enhancement of perceptual-motor performance.
  • the psycho-motor "group-body" metaphor may both express and further evoke unforeseen and spontaneously emergent group mental and psychological skills including, for example, some form of functional "group mind" phenomena. This may be akin to flock behaviors of birds, or to schools of fish, or be entirely different and distinctly human in characteristic. Such skills if engendered may furthermore have broad practical applications in telepresence, telerobotics, and control and cybernetic systems for distributed propulsive, biomechanical, and/or navigational applications.
  • the invention allows the creation of intersubjectively aesthetic music performances even by the deaf (utilizing the multiple visual feedback), as well as the creation of intersubjectively aesthetic visual responses even by the blind (utilizing the musical feedback). Sufficient practice may yield even virtuoso levels of performance in both of these extreme cases.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention concerns a method and system that give interactive media players a sustained experience of "Kinesthetic Spatial Sync", defined as perceived simultaneity and spatial superposition between a full-body, non-tactile input control process and immersive multi-sensory feedback. Asynchronous player input actions and clock-synchronous media feedback events render a seamless synesthesia, or fusing of multi-sensory events into a whole-event perception, the phenomenon occurring between musical sound (hearing), visual responses (sight), and body kinesthesia (radial extension, angular position, height, speed, timing and precision). This non-tactile interfacing and multi-sensory feedback ("see and feel") process is designed as an optimal ergonomic human interface for interactive music, and as a fully body-interactive immersive media controller with six degrees of freedom. The invention further provides a broad palette of fully reconfigurable transfer functions between kinesthetic input features and media responses ("Creative Zone Behaviors"), managed by means of a MIDI protocol and/or display interface commands.
PCT/US2003/041798 2002-12-30 2003-12-30 Interface humaine en espace libre (non tactile) pour musique interactive, instrument de musique integrale, commande de media immersive WO2004060509A2 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP03808641A EP1584087A4 (fr) 2002-12-30 2003-12-30 Interface humaine en espace libre (non tactile) pour musique interactive, instrument de musique integrale, commande de média immersive
AU2003303523A AU2003303523A1 (en) 2002-12-30 2003-12-30 Free-space (non-tactile) human interface for interactive music, full-body musical instrument, and immersive media controller

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US43729302P 2002-12-30 2002-12-30
US60/437,293 2002-12-30

Publications (2)

Publication Number Publication Date
WO2004060509A2 true WO2004060509A2 (fr) 2004-07-22
WO2004060509A3 WO2004060509A3 (fr) 2005-01-27

Family

ID=32713163

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/041798 WO2004060509A2 (fr) 2002-12-30 2003-12-30 Interface humaine en espace libre (non tactile) pour musique interactive, instrument de musique integrale, commande de media immersive

Country Status (3)

Country Link
EP (1) EP1584087A4 (fr)
AU (1) AU2003303523A1 (fr)
WO (1) WO2004060509A2 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8099459B2 (en) 2006-06-23 2012-01-17 Microsoft Corporation Content feedback for authors of web syndications
US8198526B2 (en) 2009-04-13 2012-06-12 745 Llc Methods and apparatus for input devices for instruments and/or game controllers

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3749810A (en) * 1972-02-23 1973-07-31 A Dow Choreographic musical and/or luminescent appliance
US5045687A (en) * 1988-05-11 1991-09-03 Asaf Gurner Optical instrument with tone signal generating means
US5414256A (en) * 1991-10-15 1995-05-09 Interactive Light, Inc. Apparatus for and method of controlling a device by sensing radiation having an emission space and a sensing space
US6142849A (en) * 1996-06-05 2000-11-07 Hasbro, Inc. Musical toy
US6492775B2 (en) * 1998-09-23 2002-12-10 Moshe Klotz Pre-fabricated stage incorporating light-actuated triggering means

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2502823A1 (fr) * 1981-03-27 1982-10-01 Szajner Bernard Controle de synthetiseur musical par laser
DE3436703A1 (de) * 1984-10-06 1986-04-17 Franz Dipl.-Ing. 6209 Heidenrod Ertl Betaetigungseinrichtung zum ausloesen elektronisch erzeugter musikalischer vorgaenge
GB2183889B (en) * 1985-10-07 1989-09-13 Hagai Sigalov Optical control means
US5081896A (en) * 1986-11-06 1992-01-21 Yamaha Corporation Musical tone generating apparatus
US5998727A (en) * 1997-12-11 1999-12-07 Roland Kabushiki Kaisha Musical apparatus using multiple light beams to control musical tone signals

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3749810A (en) * 1972-02-23 1973-07-31 A Dow Choreographic musical and/or luminescent appliance
US5045687A (en) * 1988-05-11 1991-09-03 Asaf Gurner Optical instrument with tone signal generating means
US5414256A (en) * 1991-10-15 1995-05-09 Interactive Light, Inc. Apparatus for and method of controlling a device by sensing radiation having an emission space and a sensing space
US6142849A (en) * 1996-06-05 2000-11-07 Hasbro, Inc. Musical toy
US6492775B2 (en) * 1998-09-23 2002-12-10 Moshe Klotz Pre-fabricated stage incorporating light-actuated triggering means

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1584087A2 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8099459B2 (en) 2006-06-23 2012-01-17 Microsoft Corporation Content feedback for authors of web syndications
US8198526B2 (en) 2009-04-13 2012-06-12 745 Llc Methods and apparatus for input devices for instruments and/or game controllers

Also Published As

Publication number Publication date
EP1584087A2 (fr) 2005-10-12
AU2003303523A8 (en) 2004-07-29
AU2003303523A1 (en) 2004-07-29
WO2004060509A3 (fr) 2005-01-27
EP1584087A4 (fr) 2010-08-04

Similar Documents

Publication Publication Date Title
US7402743B2 (en) Free-space human interface for interactive music, full-body musical instrument, and immersive media controller
Blaine et al. Contexts of collaborative musical experiences
Blaine et al. Collaborative musical experiences for novices
Lyons et al. 2003: Designing, Playing, and Performing with a Vision-Based Mouth Interface
Paradiso The brain opera technology: New instruments and gestural sensors for musical interaction and performance
US5952599A (en) Interactive music generation system making use of global feature control by non-musicians
US9646588B1 (en) Cyber reality musical instrument and device
US7212213B2 (en) Color display instrument and method for use thereof
Fels Designing for intimacy: Creating new interfaces for musical expression
US20150103019A1 (en) Methods and Devices and Systems for Positioning Input Devices and Creating Control
JP2005526264A (ja) 楽器装置及び方法
Ng Music via motion: transdomain mapping of motion and sound for interactive performances
WO2006078597A9 (fr) Procede et appareil pour generer des images visuelles sur la base de compositions musicales
WO2012158227A1 (fr) Dispositif multimédia permettant à un utilisateur de reproduire un contenu audio en association avec une vidéo affichée
Ward et al. Music technology and alternate controllers for clients with complex needs
Pressing Some perspectives on performed sound and music in virtual environments
Fels et al. Musikalscope: A graphical musical instrument
Siegel Dancing the music: Interactive dance and music
Brent The Gesturally Extended Piano.
EP1584087A2 (fr) Interface humaine en espace libre (non tactile) pour musique interactive, instrument de musique integrale, commande de média immersive
Fabiani et al. Systems for interactive control of computer generated music performance
Refsum Jensenius et al. Performing the electric violin in a sonic space
Beilharz et al. Hyper-shaku (Border-crossing): Towards the Multi-modal Gesture-controlled Hyper-Instrument.
Baldassarri et al. Immertable: a configurable and customizable tangible tabletop for audiovisual and musical control
Zadel Graphical performance software in contexts: explorations with different strokes

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2003808641

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2003808641

Country of ref document: EP

NENP Non-entry into the national phase in:

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP