WO2007059614A1 - Mouth-operated input device - Google Patents

Mouth-operated input device

Info

Publication number
WO2007059614A1
WO2007059614A1, PCT/CA2006/001909, CA2006001909W
Authority
WO
WIPO (PCT)
Prior art keywords
lip
oral cavity
radiation
input device
user
Application number
PCT/CA2006/001909
Other languages
English (en)
Inventor
Christopher Graham
Original Assignee
Photon Wind Research Ltd.
Application filed by Photon Wind Research Ltd. filed Critical Photon Wind Research Ltd.
Publication of WO2007059614A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05GCONTROL DEVICES OR SYSTEMS INSOFAR AS CHARACTERISED BY MECHANICAL FEATURES ONLY
    • G05G1/00Controlling members, e.g. knobs or handles; Assemblies or arrangements thereof; Indicating position of controlling members
    • G05G1/52Controlling members specially adapted for actuation by other parts of the human body than hand or foot
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05GCONTROL DEVICES OR SYSTEMS INSOFAR AS CHARACTERISED BY MECHANICAL FEATURES ONLY
    • G05G9/00Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously
    • G05G9/02Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only
    • G05G9/04Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only in which movement in two or more ways can occur simultaneously
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/0008Associated control or indicating means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05GCONTROL DEVICES OR SYSTEMS INSOFAR AS CHARACTERISED BY MECHANICAL FEATURES ONLY
    • G05G9/00Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously
    • G05G9/02Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only
    • G05G9/04Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only in which movement in two or more ways can occur simultaneously
    • G05G9/047Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only in which movement in two or more ways can occur simultaneously the controlling member being movable by hand about orthogonal axes, e.g. joysticks
    • G05G2009/0474Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only in which movement in two or more ways can occur simultaneously the controlling member being movable by hand about orthogonal axes, e.g. joysticks characterised by means converting mechanical movement into electric signals
    • G05G2009/04759Light-sensitive detector, e.g. photoelectric
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05GCONTROL DEVICES OR SYSTEMS INSOFAR AS CHARACTERISED BY MECHANICAL FEATURES ONLY
    • G05G9/00Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously
    • G05G9/02Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only
    • G05G9/04Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only in which movement in two or more ways can occur simultaneously
    • G05G9/047Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only in which movement in two or more ways can occur simultaneously the controlling member being movable by hand about orthogonal axes, e.g. joysticks
    • G05G2009/0474Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only in which movement in two or more ways can occur simultaneously the controlling member being movable by hand about orthogonal axes, e.g. joysticks characterised by means converting mechanical movement into electric signals
    • G05G2009/04762Force transducer, e.g. strain gauge
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/155User input interfaces for electrophonic musical instruments
    • G10H2220/361Mouth control in general, i.e. breath, mouth, teeth, tongue or lip-controlled input devices or sensors detecting, e.g. lip position, lip vibration, air pressure, air velocity, air flow or air jet angle
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/155User input interfaces for electrophonic musical instruments
    • G10H2220/371Vital parameter control, i.e. musical instrument control based on body signals, e.g. brainwaves, pulsation, temperature, perspiration; biometric information
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/155User input interfaces for electrophonic musical instruments
    • G10H2220/405Beam sensing or control, i.e. input interfaces involving substantially immaterial beams, radiation, or fields of any nature, used, e.g. as a switch as in a light barrier, or as a control device, e.g. using the theremin electric field sensing principle
    • G10H2220/411Light beams

Definitions

  • This application relates to mouth-operated input devices which may be used as input and/or control devices for a wide variety of systems.
  • Particular embodiments of the invention provide mouth-operated input devices for electronic instrument simulators and mouth-operated input devices for operation and/or control of various components and/or systems.
  • Non-limiting examples of such systems include: electronic wind instrument simulators; systems for the assistance of disabled individuals who may be without the use of their hands or who may not have hands; systems which allow people to interact with various devices when their hands are otherwise encumbered or occupied (i.e. "hands-free" operation); and systems which allow mouth-operated input in addition to more traditional input means to interact with various devices.
  • Electronic instrument simulators, such as keyboard-activated synthesizers which use hand-operated inputs to emulate a conventional piano and other instruments, are relatively popular.
  • Electronic instrument simulators and apparatus for electronically controlling conventional instruments have been disclosed in US patent Nos. 4,085,646; 5,543,580; 5,459,280; 5,149,904; and 7,049,503, for example.
  • However, electronic wind instrument simulators have not achieved the same level of popularity as their conventional hand-operated counterparts.
  • One reason for the lack of popularity of electronic wind instrument simulators is the limited ability of prior art simulators to emulate the response of the instrument to the musician's "embouchure".
  • Embouchure may generally be described as the way that the musician's mouth (including, without limitation, lips, tongue, cheeks, throat, jaw and teeth) interacts with the instrument mouthpiece.
  • The sound of the instrument is heavily influenced by embouchure.
  • Embouchure may have many different aspects which may relate to one or more physical gestures.
  • Non-limiting examples of various aspects of embouchure include the force with which the lips are pressed on the mouthpiece, the position of the lips and tongue relative to the mouthpiece, the tilting and/or curvature of the lips and tongue and the spatial characteristics of the oral cavity.
  • Musicians often learn to control various aspects of their embouchure subconsciously.
  • Embouchure typically influences the characteristics of the resonant system formed by the musician's mouth, the instrument mouthpiece and the instrument body. It has proven to be difficult to sense and/or simulate embouchure.
  • Prior art wind instrument simulators have not been able to provide the sensitivity, dynamic range or multiple degrees of freedom necessary to simulate embouchure in a manner that is satisfactory to musicians or in a manner that allows the wind instrument simulators to faithfully reproduce the variety of sounds generated by their conventional counterparts.
  • Mouth-operated input devices may have a wide variety of applications other than for the control of instrument simulators. Mouth-operated input devices may enable hands free operation of various components and/or systems. Mouth-operated input devices may also be used as additional input and/or control devices for systems incorporating more conventional (e.g. hand-operated) input devices. By way of non-limiting example, mouth-operated input devices may be used by disabled individuals with limited capability of using conventional hand-operated input devices and by underwater divers or astronauts whose protective suits limit their ability to use their hands to operate input devices. Such mouth-operated input devices can be used as input and/or control devices for a wide variety of systems.
  • Such user-interaction characteristics may include, for example, the force with which the lips and/or tongue are pressed on the input device, the position of the lips and tongue relative to the input device, the tilting and/or curvature of the lips and tongue and the spatial characteristics of the oral cavity.
  • Examples of prior art mouth-operated input devices include:
  • the electricjoy™ joystick device disclosed at http://www.genesisone.net/electricjoy.htm;
  • A first aspect of the invention provides a mouth-operated input device.
  • The device comprises a lip receiving element for receiving a user's lip.
  • The lip receiving element comprises at least one convexity such that application of lip-force against the lip receiving element causes a first portion of the lip to deform around the convexity into a region of space adjacent the convexity.
  • The device also comprises an optical lip-force sensor.
  • The optical lip-force sensor comprises an emitter for directing radiation into the region and a detector for detecting radiation emanating from the region and for generating a lip-force output signal based on an amount of detected radiation.
  • The lip-force output signal varies in response to changes in the lip-force.
  • Another aspect of the invention provides a method for generating control signals using a user's mouth.
  • The method comprises: providing a lip receiving element for receiving a user's lip, the lip receiving element comprising at least one convexity such that application of lip-force against the lip receiving element causes a first portion of the lip to deform around the convexity into a region of space adjacent the convexity; directing radiation into the region; detecting radiation emanating from the region; and generating a lip-force output signal based on an amount of detected radiation.
  • The lip-force output signal varies in response to changes in the lip-force.
  • The device comprises: a plurality of cavity sensors, each cavity sensor comprising an emitter for directing radiation into a user's oral cavity and a detector for detecting corresponding oral cavity radiation reflected from one or more surfaces of the oral cavity and for generating a corresponding oral cavity sensor output signal based on a corresponding amount of detected oral cavity radiation; and electronics connected to receive the oral cavity sensor output signals and configured to generate therefrom one or more processed oral cavity signals, each of the processed oral cavity signals varying in response to changes in a different physical characteristic of the oral cavity.
  • The emitter of at least one first oral cavity sensor is configured to emit a beam of radiation having a center that is oriented in a first direction at an angle in a range of 0°-45° above a longitudinal axis of the device and the emitter of at least one second oral cavity sensor is configured to emit a beam of radiation having a center that is oriented in a second direction at an angle in a range of 0°-45° below the longitudinal axis of the device, the longitudinal axis oriented generally parallel to a roof of a user's mouth.
  • A mouth-operated input device comprising: a plurality of cavity sensors, each cavity sensor comprising an emitter for directing radiation into a user's oral cavity and a detector for detecting corresponding oral cavity radiation reflected from one or more surfaces of the oral cavity and for generating a corresponding oral cavity sensor output signal based on a corresponding amount of detected oral cavity radiation; and electronics connected to receive the oral cavity sensor output signals and configured to generate therefrom one or more processed oral cavity signals, each of the processed oral cavity signals varying in response to changes in a different physical characteristic of the oral cavity.
  • The emitter of at least one first optical oral cavity sensor is configured to emit a beam of radiation having a center that is oriented in a first direction at an angle in a range of 0°-45° on one transverse side of a longitudinal axis of the device and the emitter of at least one second optical oral cavity sensor is configured to emit a beam of radiation having a center that is oriented in a second direction at an angle in a range of 0°-45° on an opposing transverse side of the longitudinal axis of the device, the longitudinal axis oriented generally parallel to the user's cheeks.
  • Figure 1 is a schematic diagram of a wind instrument simulator incorporating a mouth-operated input device according to a particular embodiment of the invention;
  • Figures 2A, 2B, 2C, 2D, 2E and 2F schematically illustrate an oral cavity sensing system suitable for use with the Figure 1 mouth-operated input device;
  • Figures 3A and 3B are different isometric views of a mouth-operated input device according to another embodiment of the invention;
  • Figure 4 is a schematic block diagram of the oral cavity sensing system of the Figure 3 mouth-operated input device;
  • Figures 5A, 5B and 5C (collectively, Figure 5) schematically illustrate a lip sensing system according to a particular embodiment of the invention, which is suitable for use with the Figure 1 mouth-operated input device;
  • Figure 6 is a schematic block diagram of a lip sensing system of the Figure 3 mouth-operated input device;
  • Figures 7A, 7B and 7C (collectively, Figure 7) schematically depict the detection of lip curvature by the lip sensing system of Figure 6;
  • Figures 8A, 8B and 8C (collectively, Figure 8) schematically depict the detection of lip angle by the lip sensing system of Figure 6;
  • Figure 9 is a schematic diagram depicting the breath-pressure-sensor of the Figure 1 mouth-operated input device;
  • Figure 10 is a schematic diagram depicting the breath-pressure sensing system of the Figure 3 mouth-operated input device;
  • Figure 11 is a schematic block diagram of an instrument according to a stand-alone embodiment of the invention;
  • Figure 12 is a schematic block diagram of an instrument according to a computer-hosted embodiment of the invention;
  • Figures 13A, 13B and 13C schematically illustrate examples of lip sensing systems for mouth-operated input devices according to other embodiments of the invention;
  • Figure 14 is a schematic depiction of a normalization circuit that may be used to normalize oral cavity sensor output signal(s) in accordance with a particular embodiment of the invention;
  • Figure 15 is a partial isometric view of a mouth-operated input device according to another embodiment of the invention;
  • Figure 16 is a schematic block diagram of the lip sensing system of the Figure 15 mouth-operated input device;
  • Figure 17A is an exploded isometric view of a mouth-operated joystick device according to another embodiment of the invention;
  • Figure 17B is an exploded isometric view of the force/joystick sensor system of the Figure 17A device; and
  • Figure 18 is a schematic depiction of the Figure 17A device.
  • Mouth-operated input devices are provided for interaction with various systems.
  • In some embodiments, mouth-operated input devices are used to control wind instrument simulators.
  • Mouth-operated input devices according to the invention sense various characteristics of the user's mouth and output corresponding sensor output signals which may be used as inputs for a variety of different systems.
  • A mouth-operated input device comprises a convexity around which a user's lip may deform and an optical sensor for emitting radiation into a region of space on a first side of the convexity.
  • This region of space adjacent the first side of the convexity may comprise a concavity.
  • The optical sensor may be configured to detect radiation reflected by the user's lip as it deforms around the convexity and into the region of space on the first side of the convexity.
  • The optical sensor may additionally or alternatively be configured to detect radiation transmitted through the region as the user's lip deforms around the convexity.
  • A first portion of the user's lip may be located on the first side of the convexity and a second portion of the user's lip may be received on an opposing side of the convexity.
  • The application of force to the lip may cause the first portion of the user's lip to deform into the region of space adjacent the first side of the convexity.
  • An amount of reflected or transmitted radiation detected by the sensor may be related to an amount of deformation of the user's lip into this region of space.
  • The application of different levels of force to the lip causes different amounts of lip deformation and different amounts of extension of the first portion of the lip into the region, which in turn causes different amounts of reflected or transmitted radiation detected by the sensor.
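
As an editorial illustration (not part of the patent), the sketch below shows one way the monotonic force-to-radiation relationship just described could be turned into a normalized lip-force output signal. The calibration endpoints are assumed values, and a reflection-type sensor (reading grows with deformation) is assumed.

```python
# Illustrative sketch only: map the amount of radiation detected by the
# optical lip-force sensor to a normalized lip-force output signal.
REST_READING = 0.15   # detector reading with the lip at rest (assumption)
FULL_READING = 0.85   # detector reading at maximum deformation (assumption)

def lip_force_signal(detected: float) -> float:
    """Convert a raw detector reading into a lip-force output in [0.0, 1.0]."""
    force = (detected - REST_READING) / (FULL_READING - REST_READING)
    return max(0.0, min(1.0, force))  # clamp readings outside the calibrated range
```
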
  • The convexity may be formed by a step profile between two spaced-apart surfaces.
  • The input device may also comprise an optical oral cavity sensor for emitting radiation into the user's oral cavity and detecting reflected radiation from one or more surfaces of the user's oral cavity.
  • The oral cavity sensor may be insertable into the user's oral cavity.
  • The radiation emitted by the sensor may be oriented such that the reflected radiation detected by the sensor is correlated with a distance between the sensor and a forward-facing surface of the user's tongue.
  • The radiation emitted by the sensor may be oriented such that the reflected radiation detected by the sensor is correlated with a distance between an apex of the user's tongue and a roof of the user's mouth.
  • The mouth-operated input device may also comprise a force sensor which senses the force that the user's mouth is applying (in one or more directions) to the device itself.
  • The mouth-operated input device maps this detected force to a position.
  • Figure 1 schematically depicts a wind instrument simulator 10 incorporating a mouth-operated input device 12 according to a particular embodiment of the invention.
  • Instrument simulator 10 may comprise a plurality of additional inputs 14 which may perform a variety of additional functions.
  • Inputs 14 may comprise finger operated keys for note selection, timbre manipulation or other control functions.
  • Inputs 14 may additionally or alternatively be provided on mouth- operated input device 12.
  • Instrument 10 may also comprise instrument electronics 16 for generating an output signal 18 based on inputs 14 and/or data received from mouth-operated input device 12.
  • Instrument electronics 16 comprise a digital processor for generating output signal 18, and output signal 18 may control the audio output of synthesizer 13.
  • Output signal 18 may be a control signal which conforms to a known audio control protocol or to some other communication and/or control protocol.
  • Audio protocols may include: MIDI (Musical Instrument Digital Interface), MIDI over USB (Universal Serial Bus), OSC (Open Sound Control), and SKINI (Synthesis tool Kit Instrument Network Interface).
  • Output signal 18 may alternatively be used as an input signal or as input parameter(s) to an analog audio synthesizer or a digital audio synthesis algorithm.
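
By way of illustration only (not part of the patent), the following is a minimal sketch of packing a processed sensor value into a raw MIDI Control Change message of the kind output signal 18 might conform to. Controller 2 ("breath controller") and channel 0 are assumed values for the example.

```python
# Illustrative sketch: pack a normalized sensor value into a raw 3-byte MIDI
# Control Change message. Controller and channel numbers are assumptions.

def midi_control_change(value: float, controller: int = 2, channel: int = 0) -> bytes:
    cc_value = max(0, min(127, int(round(value * 127))))  # MIDI data bytes are 7-bit
    status = 0xB0 | (channel & 0x0F)                      # 0xBn = Control Change on channel n
    return bytes([status, controller & 0x7F, cc_value])

# Example: a breath-pressure reading at 60% of full scale
print(midi_control_change(0.6).hex())  # -> 'b0024c'
```
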
  • Mouth-operated input device 12 comprises a housing 20 having a mouth- engaging end 22 with a proximal surface 24, an upper surface 30 and a lower surface 36.
  • A user interacts with mouth-engaging end 22 of housing 20 by placing their upper lip 26 (and possibly one or more of their upper teeth 28) on upper surface 30 and their lower lip 32 (and possibly one or more of their lower teeth 34) on lower surface 36.
  • Lower surface 36 and/or upper surface 30 may comprise one or more convexities 37.
  • Lower surface 36 has a stepped profile comprising a first portion (upper portion 36B) and a second portion (lower portion 36A) located on either side of convexity 37.
  • Lower surface 36 also comprises a concavity 39 between lower portion 36A and upper portion 36B.
  • Lower surface 36 may comprise more than one convexity 37 and/or more than one concavity 39.
  • Convexity 37 provides a relatively sharp corner.
  • A radius of curvature of convexity 37 is in a range of 0.1 mm - 3 mm.
  • In currently preferred embodiments, the radius of curvature of convexity 37 is in a range of 0.25 mm - 1 mm.
  • Convexity 37 subtends an angle in a range of 30°-150°. In currently preferred embodiments, convexity 37 subtends an angle in a range of 75°-120°.
  • Lower portion 36A and upper portion 36B of lower surface 36 are connected by intermediate surface 40, which extends therebetween.
  • Intermediate surface 40 is shown in the illustrated embodiment as being approximately orthogonal to lower portion 36A and upper portion 36B, but this is not necessary.
  • The user's lower lip 32 preferably interacts with the portion of lower surface 36 at or near convexity 37 (i.e. in a region of the transition between lower portion 36A and upper portion 36B).
  • Housing 20 may contain a number of sensors for sensing various aspects of the interaction between the user's mouth and input device 12.
  • Input device 12 comprises a breath-pressure-sensor 42, which is connected to the mouth-engaging end 22 by a suitable conduit 44.
  • Input device 12 also comprises an oral cavity sensor 46 and a lower lip sensor 48.
  • Oral cavity sensor 46 comprises one or more optical reflectivity sensors.
  • Lip sensor 48 comprises one or more optical reflectivity sensors and/or one or more optical transmission sensors.
  • Housing 20 and/or instrument 10 may also contain signal conditioning electronics 50, which condition the output signals from sensors 42, 46, 48 before providing them to instrument electronics 16.
  • Instrument electronics 16 may also be located in housing 20 and/or in instrument 10.
  • In some embodiments, sensors 42, 46, 48 share signal conditioning electronics 50.
  • In other embodiments, device 12 and/or instrument 10 comprise independent signal conditioning electronics 50 for each sensor 42, 46, 48.
  • Signal conditioning electronics 50 may work cooperatively with instrument electronics 16 to receive signals (not shown in Figure 1) from various sensors 42, 46, 48 and to use these sensor signals to generate output signal 18, which may in turn control the audio output of synthesizer 13.
  • Where device 12 is used with other systems or components, instrument electronics 16 may be replaced by other similar electronics suited to such systems or components.
  • Mouth-operated input device 12 is capable of sensing a number of user-interaction characteristics related to the way in which a user interacts with device 12. Such user-interaction characteristics may be indicative of various aspects of the user's embouchure.
  • Figure 2 schematically depicts how device 12 is capable of sensing spatial characteristics of the user's oral cavity 52 including the user's tongue 54. For clarity, some details of device 12 not related to sensing tongue and/or oral cavity characteristics have been removed from Figure 2.
  • Oral cavity sensor 46 is an optical reflection-type sensor which emits radiation and senses the intensity of reflected radiation.
  • Oral cavity sensor 46 comprises a radiation emitter.
  • The radiation emitter is an LED emitter 60A.
  • LED emitter 60A may emit radiation in the visible, infrared or near-infrared spectrum. The approximately 400-660 nanometer wavelength common to a wide variety of LEDs is a preferred choice, as saliva is relatively transparent at such wavelengths.
  • Oral cavity sensor 46 also comprises a radiation detector for sensing reflected radiation.
  • The radiation detector is a phototransistor 60B.
  • Optical reflection-type sensors of this type are known in the art. Examples of such sensors include the OPB745 manufactured and sold by Optek Technology, Inc. of Carrollton, Texas.
  • Sensor 46 comprises discrete components.
  • LED emitter 60A may be implemented by the FA1105W-TR LED from Stanley Electric Sales of America, Inc. of Irvine, California and phototransistor 60B may be implemented by the PT-100MC-OMP from Lumex, Inc. of Palatine, Illinois.
  • Oral cavity sensor 46 comprises, or is optically coupled to, a light pipe 58, such that some component(s) of oral cavity sensor 46 (e.g. LED 60A and/or phototransistor 60B) may be located remotely (i.e. away from mouth-engaging end 22 of device 12).
  • In other embodiments, light pipe 58 is not required and LED 60A and phototransistor 60B (or other radiation emitting and radiation detecting components of sensor 46) may be located adjacent to proximal surface 24 of input device 12.
  • Radiation emitted from LED 60A (or other radiation emitting device) may travel through a different light pipe than radiation that is incident on phototransistor 60B (or other radiation detection device).
  • Additional optics 62A may be provided for shaping and/or directing the sensor output radiation and additional optics 62B may be provided for coupling remotely located LED 60A and phototransistor 60B (or other radiation emitting and radiation detecting components of sensor 46) to light pipe 58.
  • Additional beam shaping and/or directing optics 62 may comprise, without limitation, lenses, mirrors, prisms, gratings, beam splitters, polarizers and other beam-shaping and/or coupling optical elements.
  • Such additional optics 62 may be static or moveable.
  • Optics 62 may comprise, or have their outer surfaces coated with, hydrophilic material, such that saliva does not bead on their outer surfaces.
  • Optics 62 may comprise polarizer(s) and/or polarizing filter(s). Polarization of the sensor output beam using a polarizing filter may reduce the effect of specular reflection caused by saliva. If reflected radiation is passed through a polarizing filter orthogonal to that of the sensor output beam, then specularly reflected radiation will tend to be blocked by the orthogonal filter, because specularly reflected radiation retains its polarization whereas diffusely reflected radiation does not.
  • Sensor 46 may optionally be controlled by signal 23 from instrument electronics 16 and/or by signal 25 from signal conditioning electronics 50.
  • Instrument electronics 16 and signal conditioning electronics 50 may comprise one or more controllers suitably configured for this purpose.
  • Signals 23, 25 may comprise one-way or two-way control signals. For clarity, signals 23, 25 are not shown in Figures 2B-2F.
  • Radiation emitted from sensor 46 may be reflected from one or more surfaces of the user's mouth, including, for example, lips 26, 32, teeth 28, 34, tongue 54, roof 53, cheeks (not shown) and throat 56.
  • One characteristic of the user's oral cavity 52 is the position of the user's tongue 54 relative to the front of oral cavity 52 and/or relative to device 12 (i.e. the forward-backward position of tongue 54).
  • Oral cavity sensor 46 is capable of detecting radiation reflected from a "forward-facing surface" 51 of tongue 54.
  • "Forward-facing surface" 51 may be defined as any surface of tongue 54 capable of directly receiving radiation emitted from sensor 46. Radiation reflected from forward-facing surface 51 of tongue 54 may be sensed by oral cavity sensor 46 and may be used to obtain an oral cavity sensor output signal 64 which is related to the position of the user's tongue 54 relative to the front of oral cavity 52.
  • Oral cavity sensor 46 is designed such that oral cavity sensor output signal 64 changes monotonically as forward-facing surface 51 of tongue 54 moves forward or backward within oral cavity 52.
  • Monotonic variation of oral cavity sensor output signal 64 with the position of forward-facing surface 51 of tongue 54 may be achieved by designing and/or selecting various parameters of oral cavity sensor 46 including, by way of non-limiting example: angular beam shape or profile, beam width, beam orientation, uniformity of illumination across the beam and polarization of the beam.
  • These parameters are also designed and/or selected to provide oral cavity sensor output signal 64 with a suitably high dynamic range.
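
As an illustrative sketch only: because the sensor output is designed to vary monotonically with tongue position, a measured calibration curve can be inverted by interpolation. The calibration values below are placeholders, not figures from the patent.

```python
import numpy as np

# Illustrative calibration table: detector readings at known tongue distances.
# The reading decreases monotonically as the forward-facing surface recedes.
cal_distance_mm = np.array([5.0, 10.0, 20.0, 30.0, 40.0])
cal_reading = np.array([0.90, 0.62, 0.35, 0.20, 0.12])

def tongue_distance(reading: float) -> float:
    """Invert the monotonic calibration curve by linear interpolation."""
    # np.interp requires ascending x values, so flip the decreasing curve.
    return float(np.interp(reading, cal_reading[::-1], cal_distance_mm[::-1]))

print(tongue_distance(0.5))  # ~14.4 mm, between the 10 mm and 20 mm points
```
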
  • Figures 2A-2D illustrate various possible positions of the user's tongue 54.
  • The user has configured their tongue 54 such that forward-facing surface 51 is positioned relatively close to proximal surface 24 of device 12.
  • The user's tongue 54 is configured such that its forward-facing surface 51 is further from proximal surface 24 of device 12.
  • The user has curved their tongue 54 such that tongue 54 is pointed upwardly and forward-facing surface 51 is actually the surface typically referred to as the undersurface of tongue 54.
  • Forward-facing surface 51 of tongue 54 is positioned relatively close to proximal surface 24 of device 12.
  • Forward-facing surface 51 of tongue 54 is positioned relatively far from proximal surface 24 of device 12.
  • Another characteristic of the user's oral cavity 52 is the height of the apex 55 of the user's tongue 54 relative to roof 53 of oral cavity 52 and to device 12. This oral cavity characteristic may be correlated to a cross-sectional area of oral cavity 52 in the region of apex 55.
  • The user has configured their tongue 54 and/or their jaw (not shown), such that the apex 55 of tongue 54 is relatively far from the roof 53 of their mouth and the cross-sectional area of the user's oral cavity 52 in the region of apex 55 is relatively large.
  • The user has configured their tongue 54 and/or their jaw, such that the apex 55 of tongue 54 is relatively close to the roof 53 of their mouth and the cross-sectional area of the user's oral cavity 52 in the region of apex 55 is relatively small.
  • Apex 55 of tongue 54 may also be adjusted relative to device 12 when tongue 54 is pointed upwardly as shown in Figures 2C and 2D. In such a configuration, apex 55 is at or near the tip of tongue 54.
  • In Figure 2C, the user has adjusted their tongue 54 and/or their jaw such that apex 55 is relatively far from the roof 53 of oral cavity 52 and in Figure 2D, the user has adjusted their tongue 54 and/or their jaw such that apex 55 is relatively close to the roof 53 of oral cavity 52.
  • A musician may change the shape and/or size of the resonant cavity in their mouth and may thereby change the characteristics of the resonant cavity formed by the combination of the instrument and the musician's mouth. These changes in the resonant cavity may cause corresponding changes in the frequency, pitch, timbre or other characteristics of the sound created by the instrument.
  • The spatial characteristics of the musician's oral cavity represent an aspect of the musician's embouchure.
  • Device 12 models this aspect of a conventional instrument.
  • Oral cavity sensor 46 senses the spatial characteristics of the user's oral cavity 52 (including the user's tongue 54) and creates an oral cavity sensor output signal 64 representative of these spatial characteristics.
  • Signal conditioning electronics 50 and/or instrument electronics 16 may use oral cavity sensor output signal 64 to generate one or more processed oral cavity signals 17.
  • Instrument electronics 16 may further process processed oral cavity signals 17 to generate output signal 18.
  • Output signal 18 may in turn control the audio output of synthesizer 13 ( Figure 1).
  • Oral cavity sensor output signal 64 may be processed by signal conditioning electronics 50 and/or instrument electronics 16.
  • The processing performed on oral cavity sensor output signal 64 by signal conditioning electronics 50 and/or instrument electronics 16 may take place in the analog and/or digital domain and may generally involve any of a variety of processing techniques, including, without limitation: digitizing, amplifying, inverting, filtering, scaling, offsetting, linearizing, differentiating, integrating, averaging, normalizing, performing linear and/or nonlinear mathematical transformations (e.g. Fourier transforms and logarithmic functions) and the like.
  • The result of the processing by signal conditioning electronics 50 and/or instrument electronics 16 is one or more processed oral cavity signal(s) 17, which may generally comprise any function ƒ of oral cavity sensor output signal 64.
  • The function ƒ used to generate processed oral cavity signal(s) 17 may include any of the analog or digital processing techniques discussed above.
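
A minimal sketch of one possible digital form of the function ƒ, combining a few of the processing techniques listed above (offsetting, scaling and averaging); all constants are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch of a digital function f: offset, scale, then smooth
# (a simple exponential moving average standing in for "averaging").

class ProcessedOralCavitySignal:
    def __init__(self, offset: float = 0.1, scale: float = 1.25, alpha: float = 0.2):
        self.offset, self.scale, self.alpha = offset, scale, alpha
        self._smoothed = 0.0

    def update(self, raw_signal_64: float) -> float:
        """Produce the next sample of a processed oral cavity signal 17."""
        x = (raw_signal_64 - self.offset) * self.scale        # offsetting and scaling
        self._smoothed += self.alpha * (x - self._smoothed)   # averaging (low-pass)
        return self._smoothed
```
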
  • Figure 14 schematically illustrates a normalization circuit 61 according to a particular embodiment of the invention, which represents one technique for processing oral cavity sensor output signal 64 to generate a normalized oral cavity signal 64'.
  • Normalization circuit 61 may be implemented as a part of signal conditioning electronics 50 and/or instrument electronics 16. Normalization circuit 61 normalizes oral cavity sensor output signal 64 to account for variations in the amount of ambient light in the environment in which device 12 is being used. Such ambient light may penetrate the cheeks of a user and be detected by optical sensor 46, for example. Normalization circuit 61 or components of normalization circuit 61 may be controlled by a controller (not shown).
  • Normalization circuit 61 takes two temporally spaced-apart samples of oral cavity sensor output signal 64.
  • A first sample 64A is taken when LED 60A (see Figure 2) is ON and a second sample 64B is taken a short time (Δt) later when LED 60A is OFF.
  • Sample 64B represents the ambient light level detected by oral cavity sensor 46.
  • The time (Δt) between the first and second samples may be less than about 1 ms. In currently preferred embodiments, the time (Δt) is less than about 100 µs.
  • Oral cavity sensor output signal samples 64A, 64B are amplified by corresponding linear amplifiers to provide amplified oral cavity sensor output signal samples 64A', 64B'.
  • Circuit 61 comprises a sample and hold circuit 71 which is used to temporarily retain one of the amplified oral cavity sensor output signal samples (sample 64A' in the illustrated embodiment).
  • Amplified oral cavity sensor samples 64A', 64B' are then provided to difference amplifier 59, which amplifies a difference between amplified oral cavity sensor samples 64A', 64B' to provide difference signal 19.
  • Difference signal 19 is representative of the normalized oral cavity output signal (i.e. an oral cavity output signal that is attributable directly to the signal from LED 60A and not to ambient radiation).
  • Circuit 61 also comprises optional signal conditioning circuitry 73A and an A/D converter 73B, which digitizes difference signal 19 to output normalized oral cavity signal 64'.
  • Optional signal conditioning circuitry 73A may comprise a low pass filter and additional amplification, for example.
  • Normalization circuit 61 represents one particular circuit for processing oral cavity sensor output signal 64 to generate a normalized oral cavity output signal 64'. Normalized oral cavity sensor output signal 64' may be further processed using other signal processing circuits and/or processing algorithms to produce one or more different processed oral cavity output signal(s) 17. In some embodiments, normalization circuit 61 is not required. Those skilled in the art will appreciate that other normalization circuits may be envisaged to reduce the impact of ambient light on processed oral cavity output signal(s) 17. In particular, the difference between the ambient lip-force signal (i.e. when the emitter is inactive) and the lip-force signal when the emitter is active may be obtained in the digital (rather than analog) domain.
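
A sketch of the digital-domain variant of this normalization suggested in the last sentence above: sample the detector with the emitter on, sample again with it off, and subtract. The hardware-access functions set_led() and read_detector() are hypothetical placeholders, not names from the patent.

```python
import time

# Illustrative digital-domain normalization; real firmware would use a
# hardware timer rather than time.sleep() for microsecond-scale delays.

def normalized_sample(set_led, read_detector, settle_s: float = 50e-6) -> float:
    set_led(True)                  # analogous to sample 64A: LED 60A ON
    time.sleep(settle_s)
    on_reading = read_detector()
    set_led(False)                 # analogous to sample 64B: LED 60A OFF
    time.sleep(settle_s)
    ambient = read_detector()      # ambient light only
    return on_reading - ambient    # the part of the reading attributable to the LED
```
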
  • Processed oral cavity signal(s) 17 may be used by instrument electronics 16 to generate output signal 18 which may in turn control the audio output of synthesizer 13.
  • Processed oral cavity signal(s) 17 are related to the sensed spatial characteristics of the user's oral cavity 52. Accordingly, using mouth-operated input device 12, the audio output of synthesizer 13 may be controlled by the spatial characteristics of the user's oral cavity 52. Such spatial characteristics may include: the volume of oral cavity 52, the distance of the apex 55 of tongue 54 from the proximal surface 24 of device 12, the distance between the apex 55 of tongue 54 and the roof 53 of the user's mouth, and/or the cross-sectional area of oral cavity 52 in the region of apex 55, for example.
  • Processed oral cavity signal(s) 17 may additionally or alternatively be correlated to the rate of change of any of these sensed spatial characteristics of the user's oral cavity 52.
  • Signal conditioning electronics 50 and instrument electronics 16 are separate components.
  • Signal conditioning electronics 50 may additionally or alternatively be a part of oral cavity sensor 46, instrument electronics 16 and/or synthesizer 13.
  • Instrument electronics 16 may also be a part of signal conditioning electronics 50 and/or synthesizer 13.
  • Oral cavity output signal 64 may have many forms.
  • Signal conditioning electronics 50 cooperate with instrument electronics 16 to generate audio output signal 18 based at least in part on oral cavity sensor output signal 64.
  • The radiation emitted by oral cavity sensor 46 has a conical shape with a divergence profile in a range of 20°-100°.
  • Such a divergence profile provides oral cavity sensor 46 (and oral cavity sensor signal 64) with a suitably high dynamic range which can accurately detect small changes in the spatial characteristics of oral cavity 52.
  • The radiation emitted by oral cavity sensor 46 is substantially uniform throughout the conical profile.
  • The radiation emitted from oral cavity sensor 46 may be oriented slightly downwardly (i.e. toward throat 56), so that sensor 46 may accurately sense reflection from tongue 54.
  • The radiation emitted from sensor 46 may be oriented at a downward angle in a range of 0°-80° with respect to a longitudinal axis 21 (see Figure 1) of input device 12 (i.e. an axis of input device 12 which extends into the user's mouth and which may be generally parallel with a roof 53 of the user's mouth).
  • In some embodiments, the radiation emitted from sensor 46 is oriented at a downward angle in a range of 0°-45° with respect to the longitudinal axis 21 of input device 12. In currently preferred embodiments, this angular range is 5°-30°.
  • The radiation emitted from oral cavity sensor 46 may alternatively be oriented slightly upwardly.
  • In some embodiments, the radiation emitted from sensor 46 is oriented at an upward angle in a range of 0°-45° with respect to the longitudinal axis 21 of input device 12. In currently preferred embodiments, this angular range is 5°-30°.
  • The orientation with which the user holds instrument 10 (Figure 1) and input device 12 relative to their mouth will impact the angle of radiation emitted from oral cavity sensor 46 relative to throat 56. That is, the user may be capable of altering the orientation of the longitudinal axis 21 of input device 12 and, in so doing, may also be capable of altering the angle of radiation emitted from oral cavity sensor 46.
  • The radiation emission angle of oral cavity sensor 46 may be made adjustable, for example by one or more suitable mechanical micro-manipulators.
  • A suitable micro-manipulator may be an adjustment screw similar to the type used in corrective eyewear, for example.
  • A user may also purposefully adjust the orientation of instrument 10 and input device 12 to influence the oral cavity reflectivity response characteristics.
  • Figure 3 depicts a mouth-operated input device 212 according to another embodiment of the invention.
  • Mouth-operated input device 212 of Figure 3 is similar to mouth-operated input device 12 of Figure 1.
  • Features of device 212 which are similar to features of device 12 are referred to using similar reference numerals preceded by the digit "2".
  • One aspect of device 212 that differs from device 12 is that device 212 has an oral cavity sensing system 209 which comprises two oral cavity sensors 246', 246".
  • Oral cavity sensors 246', 246" are transversely spaced apart from one another, although they may be spaced apart in some other direction.
  • Figure 4 is a schematic block diagram of oral cavity sensing system 209 of the Figure 3 input device 212 according to a particular embodiment of the invention.
  • Each of oral cavity sensors 246', 246" may be implemented in a manner similar to, and have characteristics similar to, oral cavity sensor 46 of device 12 described above.
  • The radiation emitted from oral cavity sensors 246', 246" may be oriented in different directions. In the illustrated embodiment, radiation emitted from sensor 246' is oriented slightly downwardly (i.e. toward tongue 54 and throat 56) and radiation emitted from sensor 246" is oriented slightly upwardly (i.e. toward roof 53 of the user's mouth).
  • The radiation emitted from sensor 246' may be oriented at a downward angle in a range of 0°-80° with respect to the longitudinal axis 221 of input device 212 and the radiation emitted from sensor 246" may be oriented at an upward angle in a range of 0°-80° with respect to the longitudinal axis 221 of input device 212.
  • In some embodiments, the radiation emitted from sensor 246' is oriented at a downward angle in a range of 0°-45° and the radiation emitted from sensor 246" is oriented at an upward angle in a range of 0°-45°.
  • In currently preferred embodiments, these angular ranges are 5°-30°. In this manner, mouth-operated input device 212 and its oral cavity sensors 246', 246" sense different spatial characteristics of the user's oral cavity 52.
  • Oral cavity sensing system 209 may comprise an optional, additional or alternative pair of oral cavity sensors 246A', 246A".
  • Additional or alternative oral cavity sensors 246A', 246A" may be oriented such that sensor 246A' is oriented at a first transverse (i.e. sideways) angle with respect to the longitudinal axis 221 of device 212 and sensor 246A" is oriented at an opposing transverse angle with respect to the longitudinal axis 221 of device 212.
  • The opposing transverse angles of sensors 246A', 246A" may be in a range of 0°-90° with respect to the longitudinal axis 221 of device 212. In currently preferred embodiments, this angular range is 10°-45°.
  • Oral cavity sensors 246', 246" sense spatial characteristics of the user's oral cavity 52 and generate corresponding oral cavity sensor output signals 264', 264" representative of these spatial characteristics.
  • Oral cavity sensor output signals 264', 264" may be related to: the volume of oral cavity 52, the distance of the forward-facing surface 51 of tongue 54 from the proximal surface 24 of device 12, the distance between the apex 55 of tongue 54 and the roof 53 of the user's mouth (which may be dependent on tongue shape and jaw angle) and/or the cross-sectional area of oral cavity 52 in the region of apex 55 (which may also be dependent on tongue shape and jaw angle).
  • Oral cavity sensor output signals 264', 264" may be normalized using a circuit similar to circuit 61 (Figure 14) or some other normalization process to subtract out the effect of ambient light.
  • Oral cavity sensor output signals 264', 264" may be used to generate one or more processed oral cavity signals 217.
  • Instrument electronics 216 use processed oral cavity signals 217 to generate output signal 218.
  • Oral cavity sensor output signals 264', 264" are used to generate two processed oral cavity signals 217A, 217B.
  • Instrument electronics 216 may make use of one or both of processed oral cavity signals 217A, 217B to generate output signal 218.
  • Output signal 218 may in turn control the audio output of synthesizer 13 ( Figure 1).
  • Oral cavity sensor output signals 264', 264" may alternatively be used to generate a different number of processed oral cavity signal(s) 217.
  • Oral cavity sensor output signals 264', 264" may be processed in the analog and/or digital domain by their corresponding signal conditioning electronics 250', 250" and/or by instrument electronics 216.
  • Signal conditioning electronics 250', 250" and instrument electronics 216 may have features similar to signal conditioning electronics 50 and instrument electronics 16 described above.
  • Processed oral cavity signals 217A, 217B may be any function ƒ of one or both of oral cavity sensor output signals 264', 264".
  • The function ƒ used to generate processed oral cavity signals 217A, 217B may include any of the analog or digital processing techniques discussed above.
  • Oral cavity sensor output signals 264', 264" may be normalized using a circuit similar to circuit 61 (Figure 14) as a part of any processing performed by signal conditioning electronics 250', 250" and/or instrument electronics 216. For the discussion that follows, it is assumed that oral cavity sensor output signals 264', 264" are normalized.
  • Normalized oral cavity output signals 264', 264" may be further processed.
  • Processed oral cavity signal 217A may be a difference function between normalized oral cavity sensor output signals 264" and 264', and processed oral cavity signal 217B may be a scaled and possibly offset function of normalized oral cavity sensor output signal 264'.
  • Oral cavity sensor output signal 264' is related to the distance of the forward-facing surface 51 of tongue 54 from the proximal surface 24 of device 12.
  • Processed oral cavity signals 217A, 217B may generally comprise any function ƒ of one or both of normalized oral cavity sensor output signals 264', 264".
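
A minimal sketch of the two processed signals just described, assuming sig_down and sig_up hold the normalized outputs 264' and 264"; the scale and offset constants are illustrative assumptions.

```python
# Illustrative sketch of processed oral cavity signals 217A and 217B.

def processed_signals(sig_down: float, sig_up: float,
                      scale: float = 2.0, offset: float = -0.05):
    signal_217a = sig_up - sig_down           # difference function (apex height)
    signal_217b = scale * sig_down + offset   # scaled, offset 264' (tongue distance)
    return signal_217a, signal_217b
```
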
  • One of the resonant frequencies of a user's oral cavity 52 is related to the volume of the cavity between the forward-facing surface 51 of the tongue 54 and the user's lips 26, 32.
  • A processed oral cavity signal 217A, 217B representative of the location of forward-facing surface 51 relative to the proximal surface 24 of device 12 may be used as a basis for approximating this resonance.
  • Another one of the resonant frequencies of a user's oral cavity is related to the cross-sectional area of the constriction between the apex 55 of the user's tongue 54 and the roof 53 of the user's mouth.
  • A processed oral cavity signal 217A, 217B representative of the height of apex 55 relative to roof 53 may be used as a basis for approximating this resonance.
  • Figure 4 depicts optional, additional or alternative oral cavity sensors 246A', 246A".
  • These oral cavity sensors 246A', 246A" may generate oral cavity sensor output signals 264A', 264A", which may be processed by signal conditioning electronics 250A', 250A" and by instrument electronics 216.
  • The processing of oral cavity sensor output signals 264A', 264A" may be similar in many respects to that discussed above for oral cavity sensor output signals 264', 264".
  • Oral cavity sensor output signals 264A', 264A" may be normalized using a circuit similar to circuit 61 (Figure 14) as a part of any processing performed by signal conditioning electronics 250A', 250A" and/or instrument electronics 216.
  • Oral cavity sensor output signals 264A', 264A" may be used by instrument electronics 216 to help generate processed oral cavity signals 217A, 217B.
  • Processed oral cavity signals 217A, 217B may generally comprise any function ƒ of oral cavity sensor output signals 264', 264" and/or oral cavity sensor output signals 264A', 264A".
  • Oral cavity sensor output signals 264A', 264A" may also be used by instrument electronics 216 to generate additional or alternative processed oral cavity signals 217 (not shown in Figure 4).
  • Such additional or alternative processed oral cavity signals 217 may generally be any function ƒ of oral cavity sensor output signals 264', 264" and/or oral cavity sensor output signals 264A', 264A".
  • Oral cavity sensor output signals 264A', 264A" may be related to the side-to-side position of tongue 54 in oral cavity 52.
  • The difference between oral cavity sensor output signals 264A' and 264A" may be related to the side-to-side position of tongue 54.
  • Another example of generating a processed oral cavity signal 217 involves applying a high-pass filter to one of oral cavity sensor output signals 264', 264", 264A', 264A" and/or some combination of oral cavity sensor output signals 264', 264", 264A', 264A".
  • A processed oral cavity signal 217 which has been high-pass filtered may be related to the gesture of "flutter tongue".
  • Flutter tongue is a gesture used by conventional wind instrument musicians, where the musician vibrates their tongue at a relatively high frequency in a manner similar to that involved in rolling an 'R' sound in speech.
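
As an illustration (the sample rate and cutoff are assumed values, not figures from the patent), a one-pole digital high-pass filter of the kind that could expose a flutter-tongue oscillation:

```python
import math

# Illustrative one-pole high-pass filter: y[n] = a * (y[n-1] + x[n] - x[n-1]).
# It passes the relatively rapid oscillation of flutter tongue while
# rejecting slower tongue motion.

class HighPass:
    def __init__(self, cutoff_hz: float = 8.0, sample_rate_hz: float = 1000.0):
        rc = 1.0 / (2.0 * math.pi * cutoff_hz)
        dt = 1.0 / sample_rate_hz
        self.a = rc / (rc + dt)
        self._prev_x = 0.0
        self._prev_y = 0.0

    def update(self, x: float) -> float:
        y = self.a * (self._prev_y + x - self._prev_x)
        self._prev_x, self._prev_y = x, y
        return y
```
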
  • Another example of generating a processed oral cavity signal 217 involves applying a threshold filter to one of oral cavity sensor output signals 264', 264", 264A', 264A" and/or some combination of oral cavity sensor output signals 264', 264", 264A', 264A".
  • A processed oral cavity signal 217 which is above a certain threshold may indicate that the user's tongue is touching proximal surface 24, which may be related to the gesture of "tonguing".
  • Tonguing is a gesture which may be used by conventional wind instrument musicians to articulate spaces between notes by influencing the flow of air through the instrument.
  • Another form of tonguing is a gesture which may be used by players of conventional reed instruments to stop the vibration of the reed with the surface of their tongue.
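
A sketch of a threshold filter for the tonguing gesture described above. The threshold values are assumptions, and the use of two thresholds (hysteresis) is an addition not described in the patent, included to avoid rapid on/off chatter when the signal hovers near a single threshold.

```python
# Illustrative threshold filter with hysteresis for detecting "tonguing".

class TonguingDetector:
    def __init__(self, on_threshold: float = 0.8, off_threshold: float = 0.6):
        self.on_threshold = on_threshold
        self.off_threshold = off_threshold
        self.touching = False

    def update(self, signal_217: float) -> bool:
        if not self.touching and signal_217 > self.on_threshold:
            self.touching = True       # tongue has reached the proximal surface
        elif self.touching and signal_217 < self.off_threshold:
            self.touching = False      # tongue has pulled away
        return self.touching
```
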
  • Oral cavity sensing system 209 may incorporate an additional sensor (not shown) for detecting a signal representative of the short range proximity of the end of the user's tongue 54.
  • Such a sensor may also be an optical reflection-type sensor and the sensor and/or its signal conditioning electronics may be specifically designed and/or selected for short range proximity detection or threshold proximity detection.
  • Such a sensor may be oriented sharply downwardly from proximal surface 224 of device 212 and can be used to detect when tongue 54 touches device 212 at or near proximal surface 224. This sensor may also be capable of providing a sensor output signal related to the concept of "tonguing".
  • Instrument electronics 216 may make use of processed oral cavity signals 217A, 217B (and any additional processed oral cavity signals 217 which are not shown in Figure 4) to generate output signal 218 which may in turn control the audio output of synthesizer 13 (Figure 1).
  • Processed oral cavity signals 217A, 217B are related to a pair of sensed spatial characteristics of the user's oral cavity 52 (e.g. the distance between the apex 55 of tongue 54 and the roof 53 of the user's mouth and the distance of the forward-facing surface 51 of tongue 54 from the proximal surface 24 of device 12).
  • Mouth-operated input device 212 may be used to independently control the audio output of synthesizer 13 by way of two different spatial characteristics of the user's oral cavity 52.
  • Oral cavity sensor output signals 264', 264", 264A', 264A" may be processed to generate any number of processed oral cavity signal(s) 217, each of which may (alone or in combination with other oral cavity sensor output signals 264', 264", 264A', 264A") be related to a different spatial characteristic of the user's oral cavity.
  • Processed oral cavity signal(s) 217 may additionally or alternatively be correlated to the rate of change of any of these sensed spatial characteristics of the user's oral cavity 52.
• Figure 5 schematically depicts how mouth-operated input device 12 (Figure 1) is capable of sensing the force applied to the mouth-engaging end 22 of device 12 by lower lip 32 according to a particular embodiment of the invention. For clarity, details of device 12 which are not related to sensing the force applied by lower lip 32 are not shown in Figure 5. As discussed above, in the illustrated embodiment the user's lower lip 32 preferably contacts mouth-engaging end 22 of device 12 at or near convexity 37.
• Lower lip 32 comprises an inward portion 32A (located on an inward side 37A of convexity 37) and an outward portion 32B (located on an outward side 37B of convexity 37). Outward portion 32B of lip 32 may be received on lower portion 36A of lower surface 36.
  • inward portion 32A of lower lip 32 is not substantially deformed.
• inward portion 32A and outward portion 32B of lip 32 both extend upwardly approximately to the level of lower portion 36A of lower surface 36.
  • the user is applying an intermediate force with inward portion 32A of lower lip 32, such that inward portion 32A of lip 32 deforms around convexity 37 and extends upwardly above lower portion 36A and into a region of space 33 on inward side 37A of convexity 37 and adjacent intermediate surface 40.
  • Region 33 may be located between convexity 37 and concavity 39.
• Region 33 may be located between a plane substantially parallel with a portion of lower portion 36A immediately on the outward side 37B of convexity 37 and a plane substantially parallel with a portion of upper portion 36B on the inward side 37A of convexity 37.
• outward portion 32B of lower lip 32 need not change significantly as the user applies more lip-force to inward portion 32A of lower lip 32.
• Outward portion 32B of lower lip 32 may continue to rest on lower portion 36A of lower surface 36 and may be independently controlled by the user for a different purpose as discussed in more detail below.
• the aperture between the outward portions of upper lip 26 and lower lip 32 is maintained at a relatively constant size, which is approximately equal to the circumference of device 12.
• the user is applying a relatively large force with inward portion 32A of lower lip 32, such that inward portion 32A of lip 32 deforms around convexity 37 and extends a significant distance above lower portion 36A and into region 33. Inward portion 32A of lip 32 may also deform such that it is closer to intermediate surface 40.
  • inward portion 32A of lip 32 may reach upper portion 36B of lower surface 36 and may become flattened against upper portion 36B.
• inward portion 32A of lip 32 may reach intermediate surface 40 and may become flattened against intermediate surface 40.
  • mouth-operated input device 12 comprises a lip sensor 48.
• Lip sensor 48 may be an optical reflection-type sensor which emits radiation and senses the intensity of reflected radiation of the same general type as oral cavity sensor 46 discussed above.
  • Lip sensor 48 is preferably oriented to direct radiation into region 33 on inward side 37A of convexity 37.
  • lip sensor 48 is positioned to direct radiation out of intermediate surface 40 and into region 33.
• lip sensor 48 is positioned to direct radiation out of upper portion 36B of lower surface 36 and into region 33.
• lip sensor 48 may comprise, or may be optically coupled to, a light pipe 66, such that some components of lip sensor 48 (e.g. radiation emitter 70A and radiation detector 70B) may be located remotely. In other embodiments, light pipe 66 is not required and radiation emitter 70A and radiation detector 70B may be located and/or oriented to direct radiation directly into region 33.
• Lip sensor 48 and/or light pipe 66 may comprise additional optics 68A for shaping and/or directing the sensor output radiation and/or additional optics 68B for coupling remotely located radiation emitter 70A and detector 70B to light pipe 66. Optics 68 may be similar to optics 62 discussed above.
• Radiation emitted from lip sensor 48 into region 33 is reflected from inward portion 32A of the user's lower lip 32.
• This reflection is made possible by the shape of the mouth-engaging end 22 of device 12 (i.e. the provision of region 33 on inward side 37A of convexity 37).
• region 33 is created by the stepped profile of lower surface 36 (i.e. spaced-apart lower and upper portions 36A, 36B), which permits lower lip 32 to deform around convexity 37 such that inward portion 32A of lip 32 extends into region 33.
• Figure 5A shows that when the force applied by the user to the inward portion 32A of lower lip 32 is relatively small, lower portion 36A and convexity 37 maintain the inward portion 32A of lower lip 32 at a location that is, for the most part, at or below the level of lower portion 36A (i.e. not extending substantially into region 33).
• inward portion 32A of lower lip 32 does not extend substantially into region 33 under these conditions.
  • lip sensor 48 detects relatively little reflected radiation, because a significant portion of radiation 72 emitted by sensor 48 either does not impinge on lower lip 32 or reflects from lower lip 32, but is not received by sensor 48.
• Lip sensor 48 detects a larger amount of reflected radiation when the force applied to inward portion 32A of lower lip 32 is at the intermediate level of Figure 5B, because inward portion 32A of lower lip 32 deforms around convexity 37, extends above lower portion 36A and into region 33 on inward side 37A of convexity 37. When inward portion 32A of lip 32 extends into region 33, it reflects more radiation back toward sensor 48.
  • Figure 5C shows that when the force applied to inward portion 32A of lower lip 32 is at a relatively high level, inward portion 32A of lower lip 32 deforms around convexity 37, extends significantly above lower portion 36A and into region 33.
• As inward portion 32A of lip 32 is forced harder, it may deform further by flattening against upper portion 36B and/or against intermediate surface 40.
  • lip sensor 48 detects even more reflected radiation, because a larger percentage of the radiation emitted by lip sensor 48 is reflected by inward portion 32A of lip 32 and detected by sensor 48.
• When playing a conventional wind instrument, a musician frequently (and often subconsciously) changes the force applied by their lip(s) on the mouthpiece of the instrument to modulate the sound emanating from the instrument. For example, by applying different levels of force, a musician may alter the frequency, pitch, timbre or other characteristics of the instrument and thereby change the sound created by the instrument.
  • the force applied by the musician's lip(s) represents an aspect of the musician's embouchure.
  • Device 12 models this aspect of a conventional instrument.
  • Lip sensor 48 senses the force applied by inward portion 32A of the user's lip 32 and creates a lip sensor output signal 76 representative of this force.
  • Lip sensor output signal 76 may be used by signal conditioning electronics 50 and/or instrument electronics 16 to generate one or more processed lip signals 77.
  • Instrument electronics 16 may then use processed lip signals 77 to generate output signal 18.
  • Output signal 18 may in turn be used to control the audio output of synthesizer 13 ( Figure 1).
• The size of region 33 (i.e. the distance between upper portion 36B and lower portion 36A) and other parameters of lip sensor 48 may be designed so that lip sensor output signal 76 is a monotonically increasing function of the force applied by inward portion 32A of lip 32 to mouth-engaging end 22 of device 12.
  • the spacing between upper and lower portions 36A, 36B is in a range of 2-15 mm.
  • Other aspects of device 12 which may be used to provide a monotonic relationship between lip sensor output signal 76 and the force applied by inward portion 32A of lip 32 include, without limitation: angular beam shape or profile, beam width, beam orientation, uniformity of illumination across the beam, polarization of the beam, the radius of curvature of convexity 37 and reflectivity of mouthpiece 12.
• these parameters can also be designed to provide lip sensor output signal 76 with a suitably high dynamic range which varies from about zero applied force to the maximum lip-force that a normal user may apply.
  • lip sensor output signal 76 is processed by signal conditioning electronics 50 and/or instrument electronics 16 to output a single processed lip signal 77.
  • lip sensor output signal 76 may be processed by signal conditioning electronics 50 and/or instrument electronics 16 to output a plurality of processed lip signal(s) 77.
  • Signal conditioning electronics 50 and instrument electronics 16 may function in a manner similar to, and have characteristics similar to, those discussed above with respect to oral cavity sensor output signal 64.
• signal conditioning electronics 50 and/or instrument electronics 16 may comprise a normalization circuit that operates to provide normalized lip sensor output signals 76. Such a normalization circuit may operate in a manner substantially similar to normalization circuit 61 (Figure 14) discussed above for oral cavity sensor output signal 64. For the discussion that follows, it is assumed that lip sensor output signal 76 is normalized.
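• In software, such normalization might resemble the following running min/max rescaler (a software analogue offered for illustration; the specification's normalization circuit 61 is a hardware design and may differ):

    class SensorNormalizer:
        """Track the running minimum and maximum of a raw sensor signal
        and rescale each reading into the range [0, 1]."""

        def __init__(self):
            self.lo = float("inf")
            self.hi = float("-inf")

        def normalize(self, raw):
            self.lo = min(self.lo, raw)
            self.hi = max(self.hi, raw)
            span = self.hi - self.lo
            return 0.0 if span == 0 else (raw - self.lo) / span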
• Processed lip signal 77 may generally be any function ℱ of lip sensor output signal 76. Instrument electronics 16 may use processed lip signal 77 to generate output signal 18 which in turn controls the audio output of synthesizer 13. Processed lip signal 77 is related to the sensed lip-force applied by the user to inward portion 32A of lower lip 32. Accordingly, mouth-operated input device 12 may be used to modulate the audio output of synthesizer 13 based on the lip-force applied by a user. Processed lip signal(s) 77 may additionally or alternatively be correlated to the rate of change of the sensed lip-force applied by the user.
  • Convexity 37 and/or the spaced apart lower and upper portions 36A, 36B of lower surface 36 may also function to provide a user with tactile feedback. The user will be able to feel their lip 32 deforming around convexity 37 and into region 33 as they apply greater lip-force. Convexity 37 and/or the spaced apart lower and upper portions 36A, 36B of lower surface 36 also provide the user with an identifiable "home position" around which the user can control their lip-force and lip-position with high precision.
  • the user may change the position of their lip 32 with respect to region 33 by moving their lip 32 or by moving instrument 10 to reflect differing amounts of radiation into lip sensor 48 and to thereby influence lip sensor output signal 76, processed lip signal 77, output signal 18 and the sound created by instrument 10 ( Figure 1).
• a user may also move their lower lip 32 and/or instrument 10 to change the relative size of lower lip portions 32A, 32B (i.e. the amount of lip 32 that is on either side of convexity 37). If lower lip 32 is positioned relative to convexity 37, such that inward portion 32A is relatively small and outward portion 32B is relatively large, then a relatively large amount of force will be required to cause inward portion 32A to deform a certain distance into region 33 and to thereby effect a certain lip sensor output signal 76.
  • Mouth-operated input device 212 of the Figure 3 embodiment incorporates a lip sensing system 211 which comprises three lip sensors 248L, 248C and 248R.
  • Lip sensors 248L, 248C, 248R are oriented to direct radiation into region 233 on inward side 237A of convexity 237.
  • lip sensors 248L, 248C, 248R are provided at transversely spaced apart locations and are configured to direct radiation out of intermediate surface 240.
  • Lip sensors 248L, 248C, 248R may be used to sense the force applied to inward portion 32A of lip 32 in a manner similar to lip sensor 48 of device 12.
  • Lip sensors 248L, 248C, 248R may be also able to sense other characteristics of a user's lower lip 32 as discussed in more detail below.
  • FIG. 6 is a schematic block diagram of lip sensing system 211 of the Figure 3 input device 212 according to a particular embodiment of the invention.
  • Each of lip sensors 248L, 248C, 248R may be implemented in a manner similar to, and have characteristics similar to, lip sensor 48 of device 12 described above.
  • Lip sensors 248L, 248C, 248R may optionally be controlled by signals 207L, 207C, 207R from instrument electronics 216 and/or by signals 215L, 215C, 215R from signal conditioning electronics 250L, 250C, 250R.
  • Instrument electronics 216 and signal conditioning electronics 250L, 250C, 250R may comprise one or more controllers suitably configured for this purpose.
  • Signals 207L, 207C, 207R, 215L, 215C, 215R may comprise one way or two way control signals.
  • lip sensors 248L, 248C, 248R sense characteristics of the user's lower lip 32 and create corresponding lip sensor output signals 276L, 276C, 276R representative of these characteristics.
  • Lip sensor output signals 276L, 276C, 276R may be correlated to the force that the user applies to their lower lip 32, the position of lower lip 32 with respect to device 212, the shape of lower lip 32 (e.g. the curvature of lip 32 or the tilt angle of lip 32) and other characteristics of lower lip 32.
  • Lip sensor output signals 276L, 276C, 276R may be processed by their respective signal conditioning circuits 250L, 250C, 250R and/or by instrument electronics 216 to generate one or more processed lip signals 277.
• Instrument electronics 216 may use one or more processed lip signals 277 to generate output signal 218 on the basis of these characteristics.
• lip sensor output signals 276L, 276C, 276R are used to generate three processed lip signals 277A, 277B, 277C, each of which may be used independently by instrument electronics 216 in the generation of output audio signal 218.
  • lip sensor output signals 276L, 276C, 276R are used to generate a different number of processed lip signal(s) 277.
  • Output signal 218 in turn controls the audio output of synthesizer 13 ( Figure 1).
  • lip sensor output signals 276L, 276C, 276R may be processed in the analog and/or digital domain by their corresponding signal conditioning electronics 250L, 250C, 250R and/or instrument electronics 216.
  • Signal conditioning electronics 250L, 250C, 250R and instrument electronics 216 may have features similar to signal conditioning electronics 50 and instrument electronics 16 described above.
  • the processing performed on lip sensor output signals 276L, 276C, 276R by signal conditioning electronics 250L, 250C, 250R and/or instrument electronics 216 may involve any of a variety of analog and digital signal processing techniques similar to those described above for oral cavity sensor output signals 264', 264".
• lip sensor output signals 276L, 276C, 276R may be normalized using a circuit similar to circuit 61 (Figure 14) as a part of any processing performed by signal conditioning electronics 250L, 250C, 250R and/or instrument electronics 216.
• For the discussion that follows, it is assumed that lip sensor output signals 276L, 276C, 276R are normalized.
• processed lip signals 277A, 277B, 277C may be any function ℱ of one or more of lip sensor output signals 276L, 276C, 276R.
• An example of a processed lip signal 277A, 277B, 277C is a signal representative of overall force applied to inward portion 32A of lip 32. The sensing of lip-force for a mouth-operated input device 12 incorporating a single lip sensor 48 was discussed above (Figure 5). Increasing the force applied to inward portion 32A of lower lip 32 causes increasing reflection detected by lip sensor 48 and corresponding increases in lip sensor output signal 76.
  • increasing the force applied to inward portion 32A of lower lip 32 causes increasing reflection detected by each of lip sensors 248L, 248C, 248R and corresponding increases in lip sensor output signals 276L, 276C, 276R.
  • a processed lip signal 277 representative of overall force applied to inward portion 32A of lip 32 may be an additive combination of lip sensor output signals 276L, 276C, 276R.
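• The explicit form of this additive combination appears to have been dropped in extraction; one plausible reconstruction (an assumption, chosen to be consistent with the coefficients a, b, c, d discussed in the next item) is
      ℱ(L, C, R) = aL + bC + cR + d
  where L, C and R denote the (normalized) lip sensor output signals 276L, 276C and 276R respectively.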
• the coefficients a, b, c, d may be selected to provide function ℱ with a suitable dynamic range. The coefficients may also be selected to minimize zero offset.
• the signals L, C, R may be filtered, linearized (e.g. using a look up table), scaled and/or offset prior to using them in a force function of this nature.
• lip sensor output signals 276L, 276C, 276R could also be used in a similar manner to generate a plurality of processed lip signals 277A, 277B, 277C representative of localized lip-force.
  • a processed lip signal 277 is a signal representative of the curvature of inward portion 32A of lip 32.
• the force applied to, and position of, inward portion 32A of lip 32 may be measured by a plurality of sensors 248 which are configured to measure certain parts of the lip and to generate corresponding lip sensor output signals 276, and the lip sensor output signals 276 may be combined to provide a processed lip signal 277 which varies with the curvature of inward portion 32A of lip 32.
• Figures 7A, 7B and 7C are schematic drawings which respectively represent flat curvature, negative curvature and positive curvature of inward portion 32A of lip 32.
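• The curvature function itself appears to have been dropped in extraction; one plausible reconstruction (an assumption, chosen to be consistent with the coefficients a, b, c, d and the conditions stated in the following items) is
      ℱ(L, C, R) = (aL + cR)/2 - bC + d
  With equal weights (a = b = c) and zero offset (d = 0), ℱ < 0 then corresponds to the negative curvature of Figure 7B and ℱ > 0 to the positive curvature of Figure 7C.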
• the coefficients a, b, c, d may be selected to provide function ℱ with a suitable dynamic range. The coefficients may also be selected to minimize zero offset.
• the condition (L + R)/2 < C can be defined to represent the negative curvature of Figure 7B
• the condition (L + R)/2 > C can be defined to represent the positive curvature of Figure 7C.
  • the signals L, R, C may be filtered, linearized (e.g. using a look up table), scaled and/or offset prior to using them in a curvature function of this nature.
• the output of the function ℱ may be filtered, linearized, scaled and/or offset.
  • a processed lip signal 277 is a signal representative of the angle of inward portion 32A of lip 32.
  • Figures 8A, 8B and 8C are schematic drawings which respectively represent flat lip angle, negative lip angle and positive lip angle.
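• The lip angle function itself appears to have been dropped in extraction; one plausible reconstruction (an assumption, chosen to be consistent with the coefficients and conditions stated in the following items) is
      ℱ(L, R) = aL - cR + d
  With equal weights (a = c) and zero offset (d = 0), ℱ ≈ 0 then corresponds to the flat angle of Figure 8A, ℱ > 0 (i.e. L > R) to the negative angle of Figure 8B and ℱ < 0 (i.e. L < R) to the positive angle of Figure 8C.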
• the coefficients a, b, c, d may be selected to provide function ℱ with a suitable dynamic range. The coefficients may also be selected to minimize zero offset.
• If lip sensor output signals 276L and 276R are given equal weight and the offset parameter d is zero, then the condition L ≈ R can be defined to represent the flat angle of inward portion 32A of lip 32 (Figure 8A), the condition L > R can be defined to represent the negative angle of inward portion 32A of lip 32 (Figure 8B) and the condition L < R can be defined to represent the positive angle of inward portion 32A of lip 32 (Figure 8C).
  • the signals L and R may be filtered, linearized (e.g. using a look up table), scaled and/or offset prior to using them in a lip angle function of this nature.
• the output of the function ℱ may be filtered, linearized, scaled and/or offset.
• subsets of lip sensor output signals 276L, 276C, 276R could also be used in a similar manner to generate one or more processed lip-position signals 277A, 277B, 277C representative of localized lip angle.
• Processed lip signals 277 may generally comprise any function ℱ of one or more of lip sensor output signals 276L, 276C, 276R.
  • Individual processed lip signals 277A, 277B, 277C may be defined such that they can be substantially independently controlled by a skilled user.
• one of processed lip signals 277A, 277B, 277C relates to the force applied by inward portion 32A of lower lip 32 and another one of processed lip signals 277A, 277B, 277C relates to the angle of inward portion 32A of lower lip 32
• a user who becomes proficient at using input device 212 may be able to substantially independently control the force that they apply to inward portion 32A of their lower lip 32 and the angle of inward portion 32A of lower lip 32, so as to use these characteristics to independently control processed lip signals 277A, 277B, 277C and to thereby independently influence the generation of output signal 218.
  • Instrument electronics 216 may use processed lip signals 277A, 277B, 277C to generate output signal 218 which in turn controls the audio output of synthesizer 13.
  • lip sensor output signals 276L, 276C, 276R may be processed to generate a different number of processed lip signals 277, each of which may (alone or in combination with other processed lip signals 277) be related to sensed spatial characteristics of the user's lower lip 32.
  • the provision of multiple lip sensors 248L, 248C, 248R allows for a model of the user's lip that has multiple degrees of freedom (i.e. multiple processed lip signals 277 that may relate to a corresponding plurality of physical gestures/characteristics).
• Processed lip signals 277 may also be related to the rate of change of such sensed spatial characteristics of the user's lower lip 32.
  • Figure 15 is a partial isometric view of a mouth-operated input device 312 according to another embodiment of the invention.
  • mouth-operated input device 312 of Figure 15 is similar to mouth-operated input device 12 ( Figure 1) and mouth-operated input device 212 ( Figure 3).
  • Features of device 312 which are similar to features of device 12 and device 212 are referred to using similar reference numerals that are preceded by the digit "3".
  • oral cavity sensing system 309 of mouth-operated input device 312 is substantially similar to oral cavity sensing system 209 of mouth-operated input device 212.
• device 312 incorporates a transmissive optical sensor 348 for detecting characteristic(s) of inward portion 32A of a user's lower lip 32.
• Transmissive optical lip sensor 348 is depicted schematically in Figure 16. For clarity, some details of device 312 not related to sensing characteristic(s) of inward portion 32A of a user's lower lip 32 have been removed from Figure 16.
  • Lip sensor 348 is an optical transmission-type sensor which emits radiation and senses the intensity of transmitted radiation detected at a remote location.
  • Lip sensor 348 comprises a radiation emitter 370A which may be substantially similar to the radiation emitters described above.
  • Lip sensor 348 also comprises a radiation detector 370B for sensing transmitted radiation which may be substantially similar to the radiation detectors described above.
  • Detector 370B is spaced apart from emitter 370A.
  • emitter 370A is preferably located on a first transverse side of region 333 (i.e. between convexity 337 and concavity 339 and between lower portion 336A and upper portion 336B of lower surface 336).
  • detector 370B is preferably located on a transversely opposing second side of region 333 (i.e. so as to detect radiation that is transmitted through region 333 by emitter 370A).
• emitter 370A is located on lobe 341A on one transverse side of region 333 and detector 370B is located on lobe 341B on the opposing transverse side of region 333, although other configurations are possible.
  • emitter 370A comprises, or is optically coupled to, a light pipe 366A and detector 370B comprises, or is optically coupled to, a light pipe 366B, such that emitter 370A and/or detector 370B may be located remotely (i.e. away from mouth-engaging end 322 of device 312).
  • additional optics 368A may be provided for shaping and/or directing the radiation emitted from emitter 370A and/or light pipe 366A and additional optics 368B may be provided for receiving radiation at remotely located detector 370B and/or light pipe 366B.
  • Such optics may be similar to the coupling optics described above.
  • Lip sensor 348 (including emitter 370A and/or detector 370B) may optionally be controlled by signal 307 from instrument electronics 316 and/or by signals 315 from signal conditioning electronics 350.
• Instrument electronics 316 and signal conditioning electronics 350 may comprise one or more controllers suitably configured for this purpose.
  • Signals 307, 315 may comprise one way or two way control signals.
  • Lip sensor 348 detects the force that a user is applying to the inward portion 32A of lip 32.
• When a user applies a relatively small amount of force to the inward portion 32A of their lower lip 32, the deformation of inward portion 32A around convexity 337 and into region 333 will be relatively low (see Figure 5A which schematically depicts this situation). Consequently, a relatively large amount of the radiation emitted by emitter 370A will be detected by detector 370B on the transversely opposing side of region 333 and lip sensor output signal 376 will be relatively high.
• As the user applies an intermediate level of force to the inward portion 32A of their lower lip 32, the deformation of inward portion 32A around convexity 337 and into region 333 will be moderate (see Figure 5B which schematically depicts this situation). Under these conditions, inward portion 32A of lip 32 will intercept some of the radiation from emitter 370A before such radiation crosses region 333 to reach detector 370B. As a result, the amount of radiation detected by detector 370B will be at an intermediate level and lip sensor output signal 376 will be at an intermediate level (i.e. less than the low force situation described above).
• When the user applies a relatively large amount of force, inward portion 32A of lip 32 will intercept a significant amount of the radiation from emitter 370A before such radiation crosses region 333 to reach detector 370B.
  • the amount of radiation detected by detector 370B will be at a relatively low level and lip sensor output signal 376 will be at a relatively low level (i.e. less than the low force and intermediate force situations described above).
• lip sensor 348 creates a lip sensor output signal 376 that is inversely representative of the force applied to inward portion 32A of lip 32. As shown best in Figure 15, lobes 341A, 341B, upper portion 336B of lower surface 336 and intermediate surface 340 enclose region 333 and thereby improve the dynamic range of transmissive sensor 348. In a manner similar to that described above, lip sensor output signal 376 may be used by signal conditioning electronics 350 and/or instrument electronics 316 to generate one or more processed lip signals 377. Instrument electronics 316 may then use processed lip signals 377 to generate output signal 318. Output signal 318 may in turn be used to control the audio output of synthesizer 13 (Figure 1).
  • lip sensing system 311 of device 312 may be similar to, or may be varied in manners similar to, the lip sensing systems 11, 211 discussed above.
• The size of region 333 (i.e. the distance between upper portion 336B and lower portion 336A) and other parameters of lip sensor 348 may be designed so that lip sensor output signal 376 is a monotonically changing function of the force applied by inward portion 32A of lip 32 to mouth-engaging end 322 of device 312.
• lip sensor output signal 376 may be processed by signal conditioning electronics 350 and/or instrument electronics 316 to output a plurality of processed lip signals 377 and such processed lip signal(s) may generally be any function ℱ of lip sensor output signal 376.
• mouth-operated input device 312 may make use of a plurality of transmissive sensors for detecting various characteristics related to the inward portion 32A of a user's lip 32.
  • a plurality of transmissive sensors may be used in a manner similar to the three lip sensors 248L, 248C, 248R of device 212.
  • Mouth-operated input device 12 of Figure 1 also incorporates a sensor 42 for detecting the user's breath-pressure.
  • Breath-pressure-sensor 42 is depicted in more detail in Figure 9.
  • details of device 12 which are not related to sensing the user's breath-pressure are not shown in Figure 9.
  • the user's lips 26, 32 preferably contact mouth-engaging end 22 of device 12, such that the user may blow into mouth-engaging end 22.
  • proximal surface 24 of mouth-engaging end 22 has an aperture 80 which leads through conduit 44 to breath-pressure-sensor 42.
• Breath-pressure-sensors are known in the art. Any suitable breath-pressure-sensor may be used with device 12 of the illustrated embodiment.
  • a suitable breath-pressure-sensor 42 is a variable resistance pressure sensor of the type described in US patent No. 5,543,580 which is incorporated herein by reference.
  • Other examples of sensors suitable for use as breath-pressure-sensor 42 include the MPXC2011 from Freescale Semiconductor, Inc. of Austin, Texas and the 26PC05SMT from Honeywell International, Inc. of Morris Township, New Jersey.
• When playing a conventional wind instrument, a musician frequently (and often subconsciously) changes their breath-pressure to modulate the sound emanating from the instrument. For example, by applying different levels of breath-pressure, a musician may alter the frequency, pitch, timbre or other characteristics of the instrument and thereby change the sound created by the instrument.
  • Mouth-operated input device 12 models this aspect of a conventional instrument.
  • Breath-pressure-sensor 42 senses the user's breath-pressure and creates a breath-pressure-sensor output signal 82 representative of the user's breath-pressure.
  • Breath-pressure-sensor output signal 82 may be used to generate one or more processed breath-pressure signals 84.
  • Instrument electronics 16 may make use of processed breath-pressure signal(s) 84 in generating output signal 18.
  • Output signal 18 in turn controls the audio output of synthesizer 13 ( Figure 1).
• conduit 44 and breath-pressure-sensor 42 are sealed, such that there is substantially no air flow between the user's oral cavity 52 and sensor 42.
• the total volume of conduit 44 may be less than 500 mm³. In some embodiments, the volume of conduit 44 is less than 200 mm³.
  • breath-pressure-sensor output signal 82 may be processed by signal conditioning electronics 50 and/or instrument electronics 16 to output processed breath-pressure signal 84 in a manner similar to that of the other sensor output signals discussed above.
  • breath-pressure-sensor output signal 82 may be processed by signal conditioning electronics 50 and/or instrument electronics 16 to output a plurality of processed breath-pressure signals 84.
• breath-pressure signals 84 may include one signal related to the measured breath-pressure and another signal related to the rate of change of the measured breath-pressure.
  • Signal conditioning electronics 50 and instrument electronics 16 may function in a manner similar to, and have characteristics similar to, those discussed above with respect to oral cavity sensor output signal 64 and lip sensor output signal 76.
• processed breath-pressure signal 84 may be any function ℱ of breath-pressure-sensor signal 82.
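• The explicit form of this function appears to have been dropped in extraction; one plausible reconstruction (an assumption, consistent with the coefficients a and b discussed in the next items) is
      ℱ(P) = aP + b
  where P denotes breath-pressure-sensor signal 82, a sets the dynamic range and b is an offset.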
  • the coefficients a and b may be selected to provide processed breath-pressure signal 84 with a suitably high dynamic range. These coefficients may also be selected to minimize zero offset.
  • the offset coefficient b may be selected such that processed breath-pressure signal 84 is a positive signal when the user is blowing into device 12 and a negative signal when the user is sucking on device 12.
  • Instrument electronics 16 may make use of processed breath-pressure signal 84 to generate output signal 18 which in turn controls the audio output of synthesizer 13.
  • Processed breath-pressure signal 84 is related to the sensed breath-pressure applied by the user. Accordingly, using mouth-operated input device 12, the audio output of synthesizer 13 may be modulated by the user's breath-pressure.
  • Mouth-operated input device 212 of the Figure 3 embodiment incorporates a pair of apertures 280', 280" in proximal surface 224 for admitting the user's breath.
  • Apertures 280', 280" are depicted schematically in Figure 10.
  • Aperture 280' leads through a conduit 244 to a breath-pressure-sensor 242.
  • Breath-pressure-sensor 242 may be substantially similar to breath-pressure-sensor 42 described above.
• Breath-pressure-sensor 242 may generate a breath-pressure-sensor signal 282, which may be processed in the analog and/or digital domain by signal conditioning electronics 250 and instrument electronics 216 to generate one or more processed breath-pressure signal(s) 284.
• the processing of breath-pressure-sensor signal 282 to generate processed breath-pressure signal(s) 284 may be similar to the processing of breath-pressure signal 82 described above.
  • Aperture 280" leads through conduit 286 to a flow regulator 288.
  • Flow regulator 288 may be used to give instrument 10 a realistic feel by allowing the user's breath to flow through the instrument.
  • flow regulator 288 is user adjustable.
  • flow regulator 288 comprises a screw (not shown) which may be adjusted to various positions which restrict the flow of air through conduit 286.
  • the screw may be adjusted inwardly to a position where it occupies a large volume of space in conduit 286 and substantially restricts the flow through conduit 286 and the screw may be withdrawn to a position where it occupies only a small volume (or none) of conduit 286, such that the user's breath can flow through conduit 286.
  • Conduit 286 preferably exhausts from device 212 at an exhaust port 290 external to the user's mouth. Exhaust port 290 (which is not shown in Figure 3) may be located on the side of device 212.
  • Mouth-operated input device 312 also comprises a pair of apertures 380', 380" which may be substantially similar to apertures 280', 280" of device 212.
  • the breath pressure sensor and flow control regulator (not explicitly shown) of device 312 may be substantially similar to those of device 212.
  • Mouth-operated input device 212 of the Figure 3 embodiment also comprises an optional fin 251 which projects downwardly from upper portion 236B of lower surface 236 between proximal surface 224 and intermediate surface 240.
  • Fin 251 may serve to reduce the amount of radiation emitted from lip sensors 248L, 248C, 248R and reflected back to lip sensors 248L, 248C, 248R from the user's oral cavity 52 (i.e. as opposed to lip 32).
• radiation emitted from lip sensors 248L, 248C, 248R may be reflected back from tongue 54 or other surfaces of oral cavity 52 and provide spurious results for lip sensors 248L, 248C, 248R.
• fin 251 may extend downwardly by an amount substantially similar to the distance between upper portion 236B and lower portion 236A of lower surface 236.
  • fin 251 is in the output path of radiation emitted from lip sensors 248L, 248C, 248R.
  • fin 251 is shaped, such that most of the radiation incident on fin 251 from lip sensors 248L, 248C, 248R is reflected away from (and not spuriously detected by) lip sensors 248L, 248C, 248R.
  • fin 251 is shaped such that its surface 257 facing intermediate surface 240 is skewed by greater than 30° with respect to intermediate surface 240 (or forms an angle greater than 120° with respect to the longitudinal axis 221 of input device 212). In other embodiments, fin 251 is shaped such that surface 257 forms an angle greater than 120° with the plane of incidence of radiation from lip sensors 248L, 248C, 248R. In some embodiments, fin 251 comprises (or is coated, at least on surface 257, with) a material that has a low reflectivity at the wavelength of the radiation used by lip sensors 248L, 248C, 248R. In one particular embodiment, surface 257 of fin 251 has a reflectivity of less than 30% at the wavelength used by lip sensors 248L, 248C, 248R.
  • Fin 251 may be designed such that at any lip-force, the radiation emitted from lip sensors 248L, 248C, 248R and reflected from fin 251 is significantly less than the radiation reflected from lip 32.
• this reflectivity difference may be effected by using a sensor with a suitable wavelength, by varying the shape or orientation of fin 251 and/or by varying the material on surface 257 of fin 251.
  • Mouth-operated input devices 12, 212, 312 may also incorporate one or more outputs 92 (Figure 1).
  • An output 92 may comprise any suitable output device for providing information to a user, such as, without limitation, an LED, an alphanumeric display, a graphical display, an audio output device and a tactile output device.
  • Outputs 92 may indicate when a particular tone is flat (too low), sharp (too high) or on pitch.
  • the determination of whether a note is flat, sharp or on pitch may be made by instrument electronics 16, 216, 316 which may compare the output of synthesizer 13 (as measured by an acoustic sensor (not shown)) with a predetermined value that is defined as being "on-pitch".
  • the predetermined on-pitch value may be determined during calibration of device 12, 212, 312 and/or the acoustic sensor, for example.
  • Outputs 92 may also be used to indicate the direction to a nearest harmonic.
  • Outputs 92 may also be used to train musicians. For example, to train a musician to apply a particular gesture, an ideal gesture may be graphically displayed on an output 92 alongside a graphical representation of the user's current gesture (as measured by the various sensors of device 12, 212, 312). By comparing the graphical gestures, the user can refine their gesture to be closer to that of the ideal.
  • Mouth-operated input devices 12, 212, 312 may also comprise one or more hand-operated inputs 14.
• hand-operated inputs 14 comprise buttons, but hand-operated inputs 14 may generally comprise any suitable types of input mechanisms, such as multi-position switches, alphanumeric keys, sliders, knobs, joysticks or the like.
• Hand-operated inputs 14 may be used to provide ON/OFF functions for device 12, 212, 312 or for various applications, such as pitch-sensing and the like. Hand-operated inputs 14 may also be used for note selection or to otherwise alter the tone of signal 18.
  • FIG. 11 depicts a schematic block diagram of an instrument 10 according to a particular embodiment of the invention.
  • Instrument 10 incorporates a mouth- operated input device 12, which may generally comprise any of the mouth-operated input devices described herein.
  • mouth-operated input device 12 incorporates a number of sensors, including breath-pressure-sensor 42, oral cavity sensor 46 and lip sensor 48, together with their associated signal conditioning electronics 50.
  • the conditioned signals from sensors 42, 46, 48 are provided to instrument electronics 16.
  • instrument electronics 16 comprise an embedded processor 29A.
  • Processor 29A is preferably a programmable microprocessor which has access to a memory 29B for storage of program instructions and other useful data.
• Instrument electronics 16 also receive input signals from inputs 14. Inputs 14 originate from mouth-operated input device 12 and/or instrument 10. In addition, as shown in Figure 11, instrument 10 may receive inputs 14 from external sources. In the Figure 11 embodiment, instrument 10 comprises a single output 92, but, in general, instrument 10 may comprise a plurality of outputs 92. As with inputs 14, output(s) 92 may be provided as a part of mouth-operated input device 12, instrument 10 and/or external output device(s).
  • Instrument electronics 16 output an output signal 18 for controlling synthesizer 13. As discussed above, output signal 18 may conform to a known audio protocol. In the Figure 11 embodiment, instrument electronics 16 also receive control signal(s) 69 from synthesizer 13. Control signal(s) 69 may generally be used for any purpose. By way of non-limiting example, control signal(s) 69 could be used to configure instrument electronics 16. The configuration parameters of instrument electronics 16 may be different when instrument 10 is used to simulate a brass instrument as compared to when instrument 10 is used to simulate a woodwind instrument, for example. Those skilled in the art will appreciate that there are a wide variety of other uses for control signal(s) 69, such as tonal adjustment, sensor sampling rate adjustment and the like.
  • FIG. 12 is a schematic block diagram of an instrument 10' according to another embodiment of the invention.
  • Instrument 10' incorporates a mouth-operated input device 12, which may generally comprise any of the mouth-operated input devices described herein.
  • a portion 16A of instrument electronics 16 is embedded in instrument 10' and another portion 16B of instrument electronics 16 is hosted on a computer 27.
  • portion 16A of instrument electronics 16 is embedded in mouth-operated input device 12.
  • a wired or wireless communication link 67 is provided between portions 16A, 16B of instrument electronics 16.
  • Communication link 67 may be implemented using any of a variety of suitable protocols known to those skilled in the art.
  • communication link 67 may be implemented in accordance with the MIDI, USB or OSC protocols.
• computer 27 (which comprises portion 16B of instrument electronics 16, a microprocessor 29A and memory 29B) controls output device 92 and communicates with synthesizer 13 via output signal 18 and control signal 69. While not shown explicitly in Figure 12, portion 16A of instrument electronics 16 may also comprise a suitable processor and/or suitable memory.
  • output 92 is an external output device which is controlled by portion 16B of instrument electronics 16.
  • instrument 10' may generally comprise any suitable number of outputs 92 which may be provided as a part of mouth-operated input device 12, instrument 10', computer 27 and/or external output device(s). Such output(s) may additionally or alternatively be controlled by portion 16A of instrument electronics 16.
  • Figure 12 instrument 10' is similar to the Figure 11 instrument 10.
• Mouth-operated input devices 12, 212, 312 are described above as having application to an instrument, where devices 12, 212, 312 sense characteristics of the user's mouth (e.g. characteristics associated with oral cavity, tongue position, tongue shape, lip-position and breath-pressure) and generate corresponding sensor output signals which are processed and then ultimately used to control a synthesizer 13. Mouth-operated input devices 12, 212, 312 described herein may be used for other applications, such as to enable disabled individuals to control various systems with their mouths or to enable people whose hands are otherwise encumbered (e.g. people in protective suits, such as space suits or underwater suits) to control various systems. Different characteristics of the way in which a user interacts with the mouth-operated input device (e.g. lip-force, tongue position or breath-pressure) may be sensed, and the output signals of mouth-operated input devices 12, 212, 312 may be processed to generate one or more output signals 18 which ultimately control different system parameters.
• a battery-operated wheel chair may be controlled using lip-force and tongue position. If the lip-force is greater, the mouth-operated input device may output a first output signal 18 which causes the chair's motor to move faster and if the lip-force is lower, the mouth-operated input device may change the level of the first output signal 18 such that the chair's motor may move more slowly. If the tongue position is more leftward, then the mouth-operated input device may output a second output signal 18 which directs the chair's motor to turn the chair in a leftward direction and if the tongue position is more rightward, then the second output signal 18 may direct the chair's motor to turn the chair in a rightward direction.
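• A minimal control-mapping sketch of this wheelchair example follows (illustrative only; the signal names, ranges and scaling are assumptions):

    def wheelchair_outputs(lip_force, tongue_lateral):
        """Map mouth gestures onto two wheelchair control signals.

        lip_force      -- normalized lip-force in [0, 1]
        tongue_lateral -- tongue position in [-1 (left), +1 (right)]
        Returns (speed, steering): speed grows with lip-force; negative
        steering turns the chair leftward, positive turns it rightward.
        """
        speed = lip_force          # first output signal 18: motor speed
        steering = tongue_lateral  # second output signal 18: turn direction
        return speed, steering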
  • a user's tongue could be used to control the X-Y position of a mouse pointer or a joystick in a computer user interface and a puff of breath or a squeeze of a lip may represent a click of the mouse/joystick button.
  • These signals may also be used to control the velocity (rather than the position) of the mouse pointer, as in an isometric mouse.
  • the tilt angle and force of a user's lip may control the X-Y position of a mouse pointer or a joystick in a computer user interface and a puff of breath or a squeeze of a lip may represent a click of the mouse/joystick button.
  • the angle of the user's lip and the force applied to the lip may be used as X-Y control to highlight a particular object within a two dimensional array of objects shown on a computer screen.
  • Such a display may show an alphanumeric keyboard for example.
  • a puff of breath could select the currently highlighted object on the display.
  • a puff of breath could select a letter on a displayed alphanumeric keyboard.
• Other applications of the mouth-operated input devices of the invention include speech synthesis or silent communication. Many of the sounds made by human vocal cords depend on motion of the tongue, jaw and lips as well as breath-pressure.
  • the mouth-operated input devices of the invention may be configured to output control signals which relate to the gestures associated with speech and which could be provided to a speech synthesizer which emulates the sounds of speech based on the control signals.
  • the control signals relating to the gestures associated with speech could be encoded by the device and later decoded as sounds. This would enable a person to "talk" without making sound.
• the mouth-operated input devices described herein may be provided with an infrared emitter conforming to the IrDA standard which is commonly used to control consumer devices, such as remote controls and PDA devices.
  • a mapping may be provided such that combinations of the types of gestures described herein can control consumer devices such as entertainment systems.
• Another application of the mouth-operated input devices described herein is to select among a one- or two-dimensional array of locations which may be predefined on the roof of the user's mouth.
  • the user may select any of these locations with the tip of their tongue.
• Such locations may be discriminated using oral cavity/tongue sensors of the general type described above together with the breath pressure sensor.
  • a user may emit a puff of breath similar to a "t" sound with their tongue starting at any of these locations.
• the position of the user's tongue may be detected by the oral cavity/tongue sensors and the puff of breath may be detected by the breath pressure sensor. This puff may cause the signal processing system to emit a signal related to a virtual button defined for that starting location.
• the tongue may be positioned in one of these locations and a puff of air similar to the "c" in "cat" or the "h" in "hat" may be emitted, causing the signal processing system to emit a signal that is associated with the current location of the tongue.
• the mouth-operated input device may be configured to distinguish between a "t", a "c" or an "h".
• An output 92 may be used to display (to the user) the action or symbol that will result if the "t", "c" or "h" breath puff is emitted with the tongue at the current location. This capability may be used for example for "typing" (e.g. entering text on a displayed alphanumeric keyboard).
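• A sketch of how such "typing" might be wired up in software is shown below; the palate locations, puff classes and key bindings are all hypothetical:

    # Hypothetical bindings from (palate location, puff type) to symbols.
    KEYMAP = {
        ("front-left", "t"): "a",
        ("front-right", "t"): "b",
        ("front-left", "c"): "A",
        ("front-left", "h"): "<backspace>",
    }

    def select_symbol(tongue_location, puff_type):
        """Return the symbol bound to the tongue's starting location and
        the detected articulation ('t', 'c' or 'h'), or None if the
        combination is unbound."""
        return KEYMAP.get((tongue_location, puff_type))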
  • FIG. 17A is an exploded isometric view of a mouth-operated input device 410 according to yet another embodiment of the invention.
  • Mouth-operated input device 410 functions as a joystick type input device for controlling some external system (not shown) which may be an instrument, but which may alternatively be any other suitably configured system, such as a personal computer, a video game console, a wheelchair control system or the like.
• mouth-operated joystick device 410 incorporates a mouth-operated input device 312 that is substantially similar to mouth-operated input device 312 of Figure 15. Mouth-operated joystick device 410 also comprises a force/joystick sensor system 412 capable of detecting the direction and/or the amount of force applied to mouth-operated input device 312 by a user's mouth (e.g. by the user's teeth, lips, tongue) or, if desired, by some other part of the user's body.
• Force sensing system 412, which is shown in more detail in Figure 17B, includes: an upper housing element 422 and a lower housing element 417 which are moveable relative to one another, and a substrate 414 which houses a plurality (e.g. four) of optical sensors 416A, 416B, 416C, 416D (collectively, sensors 416).
  • sensors 416 are reflective optical sensors which are configured to emit radiation in the general direction of arrow 418.
  • Sensors 416 may be substantially similar to reflective optical sensors 46, 48 discussed above. In other embodiments, sensors 416 could be implemented as transmissive type optical sensors.
  • Force sensing system 412 also includes a reflective element 420 which, in the illustrated embodiment, is coupled to upper housing element 422 by fastener components 423 A, 423B in such a manner that reflective element 420 is deformable or otherwise moveable relative to substrate 414.
• fastener component 423B projects loosely through aperture 425 in substrate 414 and is snugly received in a correspondingly shaped aperture 427 in reflective element 420 such that movement of upper housing element 422 causes corresponding movement of reflective element 420 relative to substrate 414.
• the range of movement of reflective element 420 relative to substrate 414 is less than about 1 mm.
  • element 420 is reflective. Suitable materials for reflective element 420 include reflective spring steel, for example.
• element 420 comprises an opaque element and may be fabricated from an opaque material.
  • reflective/opaque element 420 is formed from a first material which is coated (at least partially) with a second material having the desired optical characteristics.
  • Reflective element 420 is shown in detail in Figure 17B.
  • reflective element 420 comprises a plurality (e.g. four) of tabs 426A, 426B, 426C, 426D (collectively, tabs 426) which extend generally radially from a central region 428.
• tabs 426 are generally located below corresponding sensors 416 at generally equal distances from sensors 416. Accordingly, when device 410 is in an ambient state, the reflected radiation detected by each of sensors 416 (and the corresponding sensor output signals) is approximately equal. Due to variations in the characteristics of individual optical sensors 416, the output signals from sensors 416 may require calibration (e.g. adjustment by scaling, offset or other suitable technique) to achieve this equality.
  • Upper housing element 422 may be physically coupled to one or more sidewalls 430 of device 410 or may otherwise be physically coupled to device 410, such that force applied by the user to mouth-operated input device 312 results in corresponding force applied via fastener component 423B and to reflective element 420. Such force can move reflective element 420 relative to substrate 414 and can move one or more of tabs 426 toward or away from their corresponding sensors 416, resulting in changes to the amount of radiation detected by individual sensors 416.
  • tabs 426B, 426C may move closer to their corresponding sensors 416B, 416C and tabs 426A, 426D may move further from their corresponding sensors 416A, 416D.
  • Such movement may result in sensors 416B, 416C detecting an increased amount of radiation and sensors 416A, 416D detecting a decreased amount of radiation.
  • this same effect can be achieved by physically coupling substrate 414 to upper housing element 422 such that force applied by a user to mouth-operated input device 312 results in movement of substrate 414 relative to reflective element 420.
  • FIG. 18 represents a schematic depiction of device 410 according to a particular embodiment of the invention.
• device 410 comprises a mouth-operated input device 312 which incorporates lip sensor 348, a pair of oral cavity sensors 346', 346" and breath sensor 380'. These sensors produce lip sensor output signal 376, oral cavity sensor output signals 364', 364" and breath sensor signal 382, which are provided to device electronics 416.
  • Mouth-operated input device 312 may also comprise one or more optional inputs 314 which provide their own signals to device electronics 416.
• Device 410 also comprises a plurality of joystick sensors 416 which provide a corresponding plurality of joystick signals 452 to device electronics 416. Although not shown explicitly in Figure 18, signal conditioning electronics may also be provided for joystick sensors 416. In the illustrated embodiment, device 410 comprises four joystick sensors 416 (see Figure 17B) which provide four corresponding joystick signals 452 to device electronics 416.
  • One or more optional additional inputs 414 may provide their own signals to device electronics 416. Such optional additional inputs may be located on device 410 or may be external to device 410.
• device electronics 416 comprises a microprocessor 429A and accessible memory 429B.
  • Device electronics 416 processes the signals received from the various sensors and inputs and generates a plurality of corresponding output signals 418 which are provided to I/O port 444.
  • I/O port 444 may generally be connected to any system (not shown) so that device 410 can be used to provide input to the system and thereby allow the user to control the system using their mouth.
  • Output signals 418 may comprise one or more signals corresponding to each of lip sensors 348, oral cavity sensors 346', 346", breath sensors 380', joystick sensors 416, optional inputs 314, 414 and/or various combinations of these sensors and/or inputs. This can provide a wide range of control options.
  • Device 410 may also be capable of receiving one or more signals 469 from the external system via port 444. Such signals 469 can be provided to device electronics 416 and can be used by device electronics 416 to control device 410 and/or to help process the various other signals received at device electronics 416. Device 410 may also comprise one or more of its own optional outputs 492 which may be similar to outputs 92 described above.
• device electronics 416 need not comprise a microprocessor 429A and/or memory 429B. In a manner similar to the difference between the embodiments of Figures 11 and 12, such components may be provided in the external system.
• Device 410 can function as a two-axis joystick which is controllable by the user's mouth. Force applied by the user's mouth to mouth-operated input device 312 along a first axis could cause variation in the positions of opposing tabs 426B, 426D and corresponding variation in the sensor signals 452 from sensors 416B, 416D. Similarly, force applied by the user's mouth along an orthogonal axis could cause variation in the positions of opposing tabs 426A, 426C and corresponding variation in the sensor signals 452 from sensors 416A, 416C. Forces in other directions will result in various linear combinations of the movements along one of these orthogonal axes. Such forces may be applied to mouth-operated input device 312 using the user's lips and/or the user's teeth, for example.
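• A sketch of the axis computation implied by this arrangement follows (an illustration, not the specification's method; it presumes the four joystick signals 452 have been calibrated to be equal in the ambient state):

    def joystick_axes(s_a, s_b, s_c, s_d):
        """Derive two joystick axes from the four calibrated optical
        sensor readings (sensors 416A-416D). Opposing tabs 426B/426D
        move along one axis and 426A/426C along the orthogonal axis."""
        x = s_b - s_d  # force along the first axis
        y = s_a - s_c  # force along the orthogonal axis
        return x, y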
• a user can control the joystick action independently of any of the other sensors (i.e. the breath sensor(s), oral cavity sensor(s) and/or lip sensor(s)).
  • a user can control the joystick action with their teeth while independently interacting with the other sensors.
  • a user can independently control this joystick action with an outer portion 32B of their lower lip 32 while using the inward portion 32A of their lower lip 32 to interact with lip sensor 348. This independent control may be facilitated in part by convexity 337 (see Figure 15).
  • Device 410 may be used to provide a wide variety of functionalities and to provide input to a wide variety of external systems.
• functionalities such as, but not limited to, a keyboard, a mouse, a joystick, etc.
  • Device 410 is usable as a joystick which provides input to an external system (e.g. a computer system, a video gaming system, a wheel chair operating system or the like).
• the relative position of reflective element 420 maps to a joystick direction and velocity and various tongue, lip, oral cavity or breath actions can be used to provide other joystick functionalities (e.g. button clicks).
  • Device 410 is usable as a pointing device (i.e. in a manner similar to a conventional computer mouse) for an external system such as a computer or the like.
  • the relative position of reflective element 420 (as detected by sensors 416) can be used to map a position of the pointing device, and various tongue, lip, oral cavity or breath actions can be used to provide other pointing device functionalities (e.g. left click, right click, scroll wheel etc.).
  • a certain force range on the inward portion 32A of lip 32 may be used to activate/deactivate the pointing device in a manner which would provide a functionality similar to picking up and repositioning a mouse.
  • these force ranges may be subdivided into multiple levels to provide a variety of functionalities; for example (sketched in code after this list):
  • a low level of lip force may correspond to turning "OFF" the operation of the pointing device
  • a moderate level of lip force may correspond to normal pointing device activity
  • a high level of lip force may correspond to "dragging" a virtual object with the pointing device (i.e. similar to "clicking and dragging" with a conventional mouse).
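  • The three force levels above amount to simple thresholding of the processed lip signal. A minimal sketch, with threshold values chosen purely for illustration:

    # Sketch of the three-level scheme above; the thresholds are
    # illustrative assumptions, and force is a processed lip signal
    # normalized to [0, 1].

    OFF_BELOW = 0.15    # low force: pointing device turned "OFF"
    DRAG_ABOVE = 0.75   # high force: "dragging" a virtual object

    def pointer_mode(force):
        if force < OFF_BELOW:
            return "off"    # similar to lifting a mouse off the desk
        if force > DRAG_ABOVE:
            return "drag"   # similar to click-and-drag
        return "point"      # moderate force: normal pointing activity

    for f in (0.05, 0.40, 0.90):
        print(f, "->", pointer_mode(f))  # off, point, drag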
  • Device 410 could be used as a stylus-type input device for an external system such as a handheld computer or the like.
  • the relative position of reflective element 420 (as detected by sensors 416) can be used to map a position of the stylus and various tongue, lip, oral cavity or breath actions can be used to provide other functionalities (e.g. activating/deactivating the stylus etc.).
  • Device 410 can be used as a simulator for a musical instrument such as a trombone, harmonica or Theremin, where the relative position of reflective element 420 (as detected by sensors 416) can be used to map a pitch (i.e. like the stepped side-to-side pitch control of a harmonica or the continuous front-to-back pitch control of a trombone).
  • breath pressure may be used to map volume and the lip, tongue and/or oral cavity sensors can be used to control timbre or other characteristics.
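  • As a rough illustration of the simulator mapping, position can be converted to a continuous pitch and breath pressure to a volume level; the MIDI-style ranges and helper names below are assumptions, not part of this disclosure.

    # Illustrative mapping: front-to-back position of reflective element 420
    # -> continuous pitch (trombone-like), breath pressure -> volume.

    def position_to_pitch(pos, low_note=40.0, high_note=76.0):
        # pos in [0, 1] along the sensed axis -> continuous MIDI note number
        return low_note + pos * (high_note - low_note)

    def breath_to_volume(pressure, threshold=0.02):
        # pressure in [0, 1]; below the threshold the note is silent
        if pressure < threshold:
            return 0
        return min(int(127 * pressure), 127)

    print(position_to_pitch(0.5), breath_to_volume(0.6))  # -> 58.0 76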
  • each optical sensor comprises a radiation emitter and a radiation sensor for detecting reflected radiation.
  • Radiation from a single emitter may be detected by multiple radiation detectors and radiation from multiple emitters may be detected by a single radiation detector.
  • lip sensors and oral cavity sensors may be pulsed at different times from one another, or individual lip sensors may be pulsed at different times from one another.
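  • Time-multiplexed pulsing can be sketched as follows: each emitter is turned on in its own time slot so that a detector reading is attributable to exactly one emitter. The scheduling scheme and the hardware stubs are assumptions for the sketch.

    # Sketch of time-multiplexing: pulse each emitter in turn and attribute
    # the detector reading to the emitter that was on at that moment.

    def pulse_cycle(emitters, pulse_emitter, read_detector):
        """Pulse each emitter in its own time slot; return {emitter: reading}."""
        readings = {}
        for name in emitters:
            pulse_emitter(name, True)    # only this emitter radiates now
            readings[name] = read_detector()
            pulse_emitter(name, False)
        return readings

    # Dummy hardware: the detector sees 0.8 while the lip emitter is on
    # and 0.3 while the oral-cavity emitter is on.
    state = {"on": None}
    levels = {"lip_emitter": 0.8, "cavity_emitter": 0.3}

    def pulse(name, on):
        state["on"] = name if on else None

    def read():
        return levels.get(state["on"], 0.0)

    print(pulse_cycle(["lip_emitter", "cavity_emitter"], pulse, read))
    # -> {'lip_emitter': 0.8, 'cavity_emitter': 0.3}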
  • mouth-operated input device 312 of Figure 15 makes use of transmissive-type radiation sensors to implement lip sensor 348 (i.e. to detect one or more characteristics of inward portion 32A of a user's lip 32).
  • radiation is transmitted from an emitter to a radiation detector that is located remotely, but in the optical path of the transmitted beam, such that the amount of radiation received at the remotely located detector is reduced by interference from various parts of the user's mouth.
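  • One way (a sketch; the calibration levels are assumptions) to turn a transmissive reading into a usable signal is to normalize it between an unobstructed reference reading and a fully obstructed reference reading:

    # With a transmissive sensor, the detector reading falls as more of the
    # user's mouth interrupts the beam; normalize between two calibration
    # references to obtain an occlusion fraction.

    def occlusion(reading, open_level=1.0, blocked_level=0.05):
        span = open_level - blocked_level
        frac = (open_level - reading) / span
        return min(max(frac, 0.0), 1.0)  # 0 = beam clear, 1 = fully blocked

    print(occlusion(0.50))  # roughly half the beam interrupted -> ~0.53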
  • Transmissive-type optical sensors may have other configurations.
  • mouth-operated input device 212 may be modified to provide one or more transmissive-type optical sensors for detecting the force applied to the inward portion 32A of lip 32.
  • Radiation emitters may be located inside oral cavity 52 (e.g.
  • Similar transmissive lip sensors may be designed such that both their radiation emitters and radiation detectors are located at or near intermediate surface 40 and fin 251 is configured to reflect emitted radiation back toward the radiation detectors, or such that both their radiation emitters and radiation detectors are located in fin 251 and intermediate surface 40 is configured to reflect emitted radiation back toward the radiation detectors.
  • the instrument electronics and/or signal conditioning electronics described above may comprise one or more digital processor(s) capable of performing the operations discussed above.
  • processor(s) may comprise, without limitation, a microprocessor, a programmable logic array, a computer-on-a-chip, the CPU of a computer or any other suitably programmable controller.
  • the lip sensors described above generate lip sensor output signals (and, ultimately, processed lip signal(s)) which represent the force applied by the user on inward portion 32A of lower lip 32. These signals may also represent the position of inward portion 32A of lower lip 32 relative to an aspect of the mouth-operated input device (e.g. relative to the convexity of the mouth-operated input device and/or relative to the upper portion of the lower surface of the mouth-operated input device).
  • the mouth-operated input devices described above have lower surfaces (e.g. lower surface 36) having lip-receiving convexities (e.g. convexity 37) provided by the stepped profile of spaced apart lower and upper portions (e.g. lower portion 36A and upper portion 36B).
  • lip-receiving convexities provide distinct advantages when measuring characteristics of inward portion 32A of lower lip 32 as discussed above.
  • the provision of a lip sensing system on one side of a mouth-operated input device (i.e. on the lower surface of the device) is not limiting; similar lip-receiving convexities may additionally or alternatively be provided on the upper surfaces of the mouth-operated input devices described above, for providing similar advantages relating to the measurement of characteristics of upper lip 26.
  • Similar lip sensors may also be provided on one or both transverse sides of the mouth-operated input devices described above, for detecting the position (or other characteristics) of the user's lips relative to a transverse side of the device.
  • Figures 13A-13C schematically depict mouth-operated input devices 12 according to a number of other embodiments having various lip-receiving profiles.
  • device 12 comprises at least one lip-receiving convexity 37 and a region of space 33 located adjacent convexity 37 such that a portion of lip 32 can deform around convexity 37 and into region 33.
  • Region 33 may be located between first and second spaced apart surfaces 36A, 36B.
  • At least one of surfaces 36A, 36B comprises lip-receiving convexity 37.
  • At least one optical sensor 48 is configured to emit radiation into region 33 and to detect radiation transmitted through region 33 and/or reflected from lip 32 when lip 32 is located in region 33.
  • Optical sensor 48 is capable of detecting a signal related to how much of lip 32 is located in region 33, which in turn is related to the force applied to the user's lip 32.
  • as the applied force increases, a portion of lip 32 deforms around the lip-receiving convexity and further into region 33. Variation of the deformation of lip 32 into region 33 causes corresponding variation in the amount of transmitted and/or reflected radiation and in the output level of sensor 48.
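  • Because the sensor output varies monotonically with deformation, an applied-force estimate can be recovered by inverting a calibration table, e.g. by linear interpolation. A sketch follows; the table values are illustrative assumptions.

    # Sketch: invert a monotonic (sensor_output, applied_force) calibration
    # table, measured at calibration time and sorted by sensor output.

    CAL = [(0.10, 0.0), (0.35, 0.2), (0.60, 0.5), (0.85, 1.0)]

    def force_from_sensor(out):
        if out <= CAL[0][0]:
            return CAL[0][1]
        for (x0, f0), (x1, f1) in zip(CAL, CAL[1:]):
            if out <= x1:
                t = (out - x0) / (x1 - x0)
                return f0 + t * (f1 - f0)
        return CAL[-1][1]

    print(force_from_sensor(0.50))  # -> ~0.38, between the calibration points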
  • the outputs of all of the sensors are used by the instrument electronics to generate a single output signal. This is not necessary.
  • the instrument electronics may make use of one or more sensors of the types described above to generate a plurality of independent output signals.
  • joystick/force sensing system 412 of device 410 represents one particular embodiment of a force sensing system.
  • other types of joystick/force sensing systems could be used in place of system 412.
  • Mouth-operated input devices incorporating such alternative joystick/force sensing systems could still take advantage of the functionalities afforded by the lip pressure sensing systems and oral cavity sensing systems described herein.
  • the joystick/force sensing system could provide position mapping, and various tongue, lip, oral cavity or breath actions could be used to provide other pointing device functionalities (e.g. left click, right click, scroll wheel etc.).
  • Device 410 described above incorporates mouth-operated input device 312.
  • device 410 may comprise any of the other mouth-operated input devices described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention concerns mouth-operated input devices which comprise various combinations of sensors for detecting various characteristics of a user's mouth. The characteristics of the user's mouth may then be used to control an external system. Optical lip-force sensors may be used to detect various characteristics of a user's lips; optical oral cavity sensors may be used to detect various characteristics of a user's tongue and oral cavity; breath pressure sensors may be used to detect breath pressure; and optical joystick sensors may be used to detect movement of the device.
PCT/CA2006/001909 2005-11-23 2006-11-23 Dispositif d'entrée actionné par la bouche WO2007059614A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US73896605P 2005-11-23 2005-11-23
US60/738,966 2005-11-23

Publications (1)

Publication Number Publication Date
WO2007059614A1 true WO2007059614A1 (fr) 2007-05-31

Family

ID=38066871

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2006/001909 WO2007059614A1 (fr) 2005-11-23 2006-11-23 Dispositif d'entrée actionné par la bouche

Country Status (1)

Country Link
WO (1) WO2007059614A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6222524B1 (en) * 1997-08-25 2001-04-24 International Business Machines Corporation Mouth operated input device for an electronically responsive device
US20040164881A1 (en) * 2003-02-21 2004-08-26 Kelvin Kwong Loun Mok Mouth activated input device for an electronically responsive device
WO2004093046A1 (fr) * 2003-04-07 2004-10-28 University Of Delaware Unite d'entree a commande linguale pour le controle de systemes electroniques
JP2005215818A (ja) * 2004-01-28 2005-08-11 Takuya Shinkawa 入力装置および入力処理装置

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008141459A1 (fr) * 2007-05-24 2008-11-27 Photon Wind Research Ltd. Dispositif de saisie à commande buccale
CN101329862B (zh) * 2007-06-20 2012-02-01 雅马哈株式会社 电子管乐器
US7781667B2 (en) 2007-06-20 2010-08-24 Yamaha Corporation Electronic wind instrument
EP2006834A1 (fr) * 2007-06-20 2008-12-24 Yamaha Corporation Instrument à vent électronique
EP2215444A4 (fr) * 2007-11-28 2015-09-02 My Music Machines Inc Système adaptatif de contrôleur à vent midi
US8934088B2 (en) 2008-07-21 2015-01-13 Eigenlabs Limited Sound-creation interface
WO2010010338A3 (fr) * 2008-07-21 2010-04-01 Eigenlabs Limited Interface de création de son
WO2011039713A3 (fr) * 2009-09-29 2011-05-26 Edigma.Com Sa Procédé et dispositif de détection multipoint à sensibilité élevée et son utilisation à des fins d'interaction à travers des masses d'air, de vapeur ou d'air soufflé
EP2410513A1 (fr) * 2010-07-23 2012-01-25 Yamaha Corporation Appareil de contrôle de génération de sons
US8309837B2 (en) 2010-07-23 2012-11-13 Yamaha Corporation Tone generation control apparatus
JP2017058461A (ja) * 2015-09-15 2017-03-23 カシオ計算機株式会社 電子管楽器、楽音発生指示方法およびプログラム
US10170091B1 (en) * 2017-06-29 2019-01-01 Casio Computer Co., Ltd. Electronic wind instrument, method of controlling the electronic wind instrument, and computer readable recording medium with a program for controlling the electronic wind instrument
US20210201872A1 (en) * 2018-05-25 2021-07-01 Roland Corporation Electronic wind instrument (electronic musical instrument) and manufacturing method thereof
US11682371B2 (en) * 2018-05-25 2023-06-20 Roland Corporation Electronic wind instrument (electronic musical instrument) and manufacturing method thereof
WO2023247811A1 (fr) * 2022-06-20 2023-12-28 Xpnd Technologies, Sl Dispositif de commande d'actionnement buccal

Similar Documents

Publication Publication Date Title
WO2007059614A1 (fr) Dispositif d'entrée actionné par la bouche
WO2008141459A1 (fr) Dispositif de saisie à commande buccale
KR100946680B1 (ko) 개선된 무선제어장치
US10490174B2 (en) Electronic musical instrument and control method
US9857892B2 (en) Optical sensing mechanisms for input devices
US20020061217A1 (en) Electronic input device
US20120183156A1 (en) Microphone system with a hand-held microphone
US8841537B2 (en) Systems and methods for a digital stringed instrument
US7897866B2 (en) Systems and methods for a digital stringed instrument
JP5622572B2 (ja) 触覚感知のための圧力センサーアレイ装置及び方法
US8994648B2 (en) Processor interface
US20100083808A1 (en) Systems and methods for a digital stringed instrument
US10489669B2 (en) Sensor utilising overlapping signals and method thereof
US11460293B2 (en) Surface quality sensing using self-mixing interferometry
JP2012503244A (ja) 指に装着される装置および相互作用方法および通信方法
WO2023025889A1 (fr) Dispositif de commande de synthétiseur audio basé sur un geste
JP4266370B2 (ja) 姿勢角度検出装置を用いた電子楽器およびその制御方法
WO2010042508A2 (fr) Systèmes et procédés pour un instrument à cordes numérique
Challis et al. Applications for proximity sensors in music and sound performance
JPH05249973A (ja) 楽音コントロール用操作体

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06804764

Country of ref document: EP

Kind code of ref document: A1