CN117222967A - Method for excluding a speech muscle movement control system - Google Patents


Info

Publication number: CN117222967A
Application number: CN202280029744.9A
Authority: CN (China)
Prior art keywords: wearer, sensor, input signal, processor, signal
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: J·B·罗斯 (J. B. Ross)
Current and original assignee: Olglass Medical Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Olglass Medical Co., Ltd.
Priority claimed from PCT/US2022/025324 (published as WO2022225912A1)


Abstract

Systems and methods for operating a controlled device via a wearable activation accessory that includes a sensor configured to detect the relaxed and active states of muscles associated with clenching, movement, and/or lateral displacement of the wearer's mandible, thereby allowing the wearer to generate control signals for a controlled element. The sensor is coupled to a controller having an output coupled to a control signal interface. The controller is programmed to receive input signals from the sensor and evaluate them to determine whether they represent a command for the controlled device, by assessing whether the signal pattern of the input signals is indicative of a plurality of voluntary muscle movements of the wearer of the wearable activation accessory. If/when the processor determines that the input signal represents a valid command, the processor decodes the command and transmits an associated control signal to the controlled device via the control signal interface.

Description

Method for excluding a speech muscle movement control system
Related application
The present application claims priority from U.S. provisional application Ser. No. 63/201,280, filed April 21, 2021; U.S. provisional application Ser. No. 63/232,084, filed in August 2021; and U.S. provisional application Ser. No. 63/261,052, filed in September 2021.
Technical Field
The present invention relates to systems and methods for operating controlled devices in a hands-free manner through voluntary muscle movements of a wearer, including systems and methods that improve the accuracy and overall performance of hands-free actuation and control of head-mounted devices by means of masseter clenches and other maxillofacial movements.
Background
Simple head-worn devices (e.g., surgical headlights and recreational headlamps) and more advanced systems (e.g., audio headsets and virtual reality headsets) have used tactile buttons and switches, touch-activated control surfaces, and gesture techniques for input, and may also rely on head tracking and eye tracking as means of controlling device operation. All of these input means require the user to use his/her hands to provide input to the device. Improvements to the hands-free operation of such devices have mainly been limited to speech recognition techniques, which have limitations when used in noisy or sound-sensitive environments, and eye tracking techniques, which require the user to gaze at a specific object for a "dwell time" before detection, thus increasing input latency.
Drawings
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
fig. 1 illustrates an example of an activation accessory for a controlled device configured in accordance with an embodiment of the present invention.
Fig. 2A-2F illustrate examples of devices that operate under the control of an activation accessory configured in accordance with embodiments of the present invention.
Fig. 3 illustrates an example of an activation accessory secured in a mount of a headset configured in accordance with an embodiment of the present invention.
Fig. 4 shows an example of an activation accessory secured in a mask according to an embodiment of the invention.
Fig. 5 shows an example of an activation accessory having an adhesive film on one surface for attachment to a wearer.
Fig. 6 shows an example of the activation accessory shown in fig. 5 secured to the wearer's face by an adhesive.
Fig. 7 illustrates an example of an activation accessory for a controlled device configured with multiple sensors, according to an embodiment of the present invention.
Fig. 8A-8D illustrate examples of arrangements for securing a wearable module of an activation accessory into a headset case according to embodiments of the invention.
Fig. 9 shows an example of an input signal received by a processor of a wearable module from a sensor of the wearable module, according to an embodiment of the invention.
Fig. 10 illustrates a method of operating a controlled device in a hands-free manner through voluntary muscle actions of a wearer, according to an embodiment of the invention.
Fig. 11A-14B illustrate various embodiments of a head-mounted vision device having a wearable module configured in accordance with embodiments of the present invention.
Fig. 15 illustrates a method of distinguishing a wearer's voluntary muscle actions from the wearer's speech, according to an embodiment of the invention.
Detailed Description
Systems and methods for hands-free operation of controlled devices such as lighting systems, push-to-talk (PTT) systems, and other devices are described herein. These systems and methods are characterized in part by employing a switch or switching element (e.g., a Hall effect sensor or other sensor) positioned on or near the user's face, for example over an area of the user's mandible, chin, or masseter muscle, or another area of the face or head such as the temple, so that the switch or switching element may be activated or deactivated by clenching, movement, and/or lateral displacement of the user's mandible, whether side-to-side or otherwise, such as by manipulation of the masseter and/or pterygoid muscles (or a combination thereof), and/or by movement of other maxillofacial regions. In one embodiment, the switch or switching element is used in conjunction with a headset, glasses, mask, or face piece, etc., adapted to be worn in a variety of environments, including military, law enforcement, health care, and other (e.g., consumer) environments. The headset, glasses, mask, or face piece, etc. either places the switch or switching element over the wearer's mandible (e.g., over the masseter muscle or a portion thereof, the medial or lateral pterygoid muscle, or the mandible, chin, or another area of the wearer's face or head such as the temple) so that clenching, movement, and/or lateral displacement (or a combination thereof) of the wearer's mandible activates the switch or switching element; or places the switch or switching element near the wearer's face so that clenching, movement, and/or lateral displacement (or a combination thereof) of the wearer's mandible can be detected, for example by an optical or proximity sensor, in either or both cases allowing hands-free operation of the controlled device. Other embodiments of the present invention utilize a switch or switching element as part of other head-mounted lighting, imaging, and/or communication systems. In some cases, the switch or switching element may be positioned so that it can be activated/deactivated by manipulating muscles associated with the wearer's eyebrows, temples, etc. It is noted that activation or deactivation of the switch or switching element is not due to any speech or other sounds associated with clenching, movement, and/or lateral displacement of the wearer's mandible, but rather to the movement of the mandible (or other portion of the wearer's face) itself.
References herein to a switch or switching element refer to one or more components of a switch, which may be a mechanical switch, an electrical switch, a virtual switch (e.g., a switch implemented in software running on a controller or other form of programmable device), etc. Where the singular is used to refer to a switch or switching element, it should be understood as referring to either or both. In various embodiments, activation of the switch or switching element may be accomplished by movement of one of its components relative to one or more of its other components, for example by one or more voluntary muscle movements.
One embodiment of the present invention provides systems and methods for improving the accuracy and overall performance of hands-free actuation and control of devices by means of masseter clenches and/or other maxillofacial movements. These systems and methods feature, in part, the use of a switch or switching element located on or near the user (e.g., overlaying one or more muscles of the wearer). Particular embodiments of the present invention relate to a switch or switching element located on or near the wearer's mandible, chin, or another region of the wearer's face or head (e.g., the temple) such that clenching, movement, and/or lateral displacement of the wearer's mandible (e.g., by manipulation of the masseter, medial pterygoid, and/or lateral pterygoid muscles, or a combination thereof) activates or deactivates the switch or switching element. However, the present invention also relates more broadly to positioning the switch/switching element such that it may be activated, deactivated, or otherwise controlled by clenching, releasing, activating, deactivating, or otherwise moving one or more muscles in an area of the wearer's body proximate to, adjacent to, below, or covered by the location of the switch or switching element. Thus, although much of the discussion herein concerns a switch or switching element located near the wearer's mandible, the reader should regard this description as an example adopted for convenience and not as a limitation of the invention. In other embodiments, the switch or switching element may be placed on or near the arms, legs, torso, chest, hands, fingers, feet, toes, temples, or other areas of the wearer's body.
As used herein, a reference to an area of the wearer's face covering the mandible or the masseter muscle, such as a switch or switching element covering such an area, means that the wearable module or wearable device (e.g., a wearable electronic controller) has one or more control surfaces (e.g., Hall effect sensors, electromyography (EMG) sensors, piezoelectric switches, tape switches, fabric switches, etc.) positioned in contact with the right and/or left side of the wearer's face, for example in the area from below the ear canal to the bottom of the mandible, extending anteriorly to below the zygomatic arch, which is formed between the zygomatic process of the temporal bone and the temporal process of the zygomatic bone. The control surface may be a switch and/or switching element, or another control surface configured to detect the relaxed and active/displaced states of one or more muscles (e.g., the wearer's masseter muscle) (see, e.g., fig. 9), so that the wearer can generate input signals via muscle manipulation to control electronic system components. Alternatively or additionally, the control surface may be part of a wearable module, wearable electronic controller, or other wearable element positioned in contact with the right and/or left side of the wearer's face at or near the temple or above the eyebrow. Further, the control surface may be part of a wearable module, wearable electronic controller, or other wearable element positioned in contact with the wearer's chin or another part of the wearer's body (e.g., an arm, leg, torso, chest, hand, or foot). In either case, the control surface is configured to detect the relaxed and active states of one or more muscles (e.g., muscles associated with clenching, movement, and/or lateral displacement of the wearer's mandible, or with displacement of the mandible itself) (see, e.g., fig. 9), thereby enabling the wearer to generate input signals for controlling electronic system components through such voluntary muscle manipulation. The wearable module, wearable electronic controller, or other wearable element may be adjustable in its positioning so that one or more active control surfaces lie within a desired area covering a portion of the wearer's face or head, and means may be provided for adjusting the contact pressure of the active control surfaces against the wearer's face or head. The wearable module, wearable electronic controller, or other wearable element may also house one or more electronic system components (e.g., lights, cameras, displays, laser pointers, haptic engines in the form of vibration motors, etc.) that are controlled by muscle manipulation.
In still further embodiments, the wearable module, wearable electronic controller, or other wearable element may be positioned without the active control surface being in physical contact with the wearer's body. For example, a non-contact sensor (e.g., an optical sensor employing visible or infrared light, or a proximity sensor) may be positioned about the user's body (e.g., mounted on eyeglasses, ear covers, ear plugs, head-mounted devices, head straps, masks, face masks, nose pads, etc.) and oriented to detect movements (e.g., clenching, motion, and/or lateral displacement) of the user's mandible or other muscles, such as those produced by voluntary manipulation of the masseter, medial pterygoid, and/or lateral pterygoid muscles (or a combination thereof). The detection of such movement may be used to activate or deactivate a switch, which in turn activates or deactivates a controlled element; or the non-contact sensor itself may be regarded as a switching element and used to send an input to a programmed controller, which in turn sends a signal to activate or deactivate a controlled element.
In much of the discussion herein, examples are used in which the switch is located on or near the chin or other areas of the wearer's face or head, but this is for convenience only. The invention applies equally to a switch or switching element covering an area of the wearer's body such that clenching, movement, and/or lateral displacement (or a combination thereof) of muscles on or near that area activates, deactivates, or otherwise operates the switch, and to a switch located near the wearer such that clenching, movement, and/or lateral displacement (or a combination thereof) of the wearer's muscles can be detected, for example by an optical or proximity sensor, in either or both cases allowing hands-free operation of the controlled device.
The use of "snap interactions" is considered a viable control technique. For example, U.S. patent application publication No. U.S. PGPUB 2020/0097084, xu et al, "Clench Interaction: novel Biting Input Techniques," Proc.2019CHI Conference on Human Factors in Computing Systems (CHI 2019), may 4-9,2019,Glasgow,Scotland UK, and Koshnam, E.K. et al, "handles-Free EEG-Based Control of a Computer Interface based on Online Detection of Clenching of Jaw," in: rojas I.,F. (eds) Bioinformatics and Biomedical Engineering, IWBBIO 2017, pp.497-507 (April 26-28,2017) all provide examples of such techniques. In the study of Xu et al, the use of an bite force interface can provide some advantages in some applications, however, the present application employs different approaches as it relies on a sensor placed outside the user's mouth. Such sensors are more suitable for applications where placement of the sensor in a person's mouth may be uncomfortable or impractical. In the Koshnam et al study, EEG sensors are located outside the mouth, placed at temporal locations T7 and T8 of the wearer's head, but do not provide a mechanism to alert the wearer when a command signal initiated by a mandibular bite motion is recognized. Thus, the system is considered to have an excessively long lag time in identifying and performing the bite action, which adversely affects its use as a control element for the remote device.
Referring to fig. 1, an example of an activation accessory 10 for a controlled device is shown. The activation accessory 10 includes an optional vibration motor 12, a wearable module 14 that includes a sensor 16 (e.g., a Hall effect sensor), and a controller 18. In some embodiments, when a vibration motor 12 is present, it may also be included in the wearable module 14. The sensor 16 is communicatively coupled to the controller 18 through an analog-to-digital (A/D) converter 20, which converts the analog output of the sensor 16 into a digital signal provided as an input to a processor 22 of the controller 18. In some cases the A/D converter 20 is not required, for example where the output of the sensor 16 is already digitized, or the A/D converter 20 may be incorporated within the controller 18. The processor 22 also has outputs coupled to the control signal interface 24 and the vibration motor 12.
The Hall effect sensor is merely one example of a sensor 16 that may be used with the activation accessory 10; in other embodiments one or more such sensors may be used, and none of them need be a Hall effect sensor. In general, a contact sensor 16 is useful where the activation accessory 10 is in physical contact with the wearer (e.g., at or near the wearer's face), as is the case in the various embodiments of the invention discussed herein. In other embodiments, the sensor 16 may be any of an ultrasonic motion sensor, a camera or LIDAR unit, a motion sensor (e.g., one employing one or more light-emitting diodes to detect motion), a laser sensor (e.g., a vertical-cavity surface-emitting laser (VCSEL) sensor), or, more generally, a time-of-flight sensor that detects motion by optical and/or acoustic means. Alternatively, the sensor 16 may be another form of proximity or motion sensor. In such cases, the sensor 16 need not be in physical contact with the wearer's face, but may be positioned so as to detect movements (e.g., clenching, motion, and/or lateral displacement) of the user's mandible, produced for example by voluntary manipulation of the masseter, medial pterygoid, and/or lateral pterygoid muscles (or a combination thereof), or of the temple, eyebrow, chin, or other aspects of the user's head or face. In these cases, the vibration motor 12 (which, when present, provides haptic feedback to the user to indicate successful recognition of a command input) is likely not contained in the same wearable module as the sensor 16, but may be contained in a separate module worn apart from the sensor 16. Likewise, the sensor 16 itself may be worn separately from the controller 18 and the other components of the activation accessory 10. Thus, the wearable module 14 depicted in dashed outline should be understood to be optional and, in some cases, to represent several different wearable modules that may be worn at different locations on the user. For ease of reference, a single wearable module 14 is described herein, but it should be remembered that this is for illustrative purposes only.
The processor 22 of the controller 18 is also coupled to a memory 26 storing processor-executable instructions that, when executed by the processor 22, cause the processor 22 to receive and evaluate input signals from the sensor 16. The controller 18 (i.e., the processor 22) evaluates the input signals to determine whether they represent a command for the controlled device by evaluating whether the signal pattern of the input signals is indicative of a plurality of voluntary mandibular movements or other maxillofacial movements or actions by the wearer of the wearable module 14. If/when the processor 22 determines that the input signal from the sensor 16 represents a command for the controlled device, the processor 22 decodes the command and transmits an associated control signal to the controlled device (not shown in this view) via the control signal interface 24, along with an activation signal to the vibration motor 12 (if present), as discussed more fully below. On the other hand, if the processor 22 determines that the input signal from the sensor 16 does not represent a command for the controlled device, no control signal or activation signal is transmitted, and the processor 22 continues to evaluate further/new input signals from the sensor 16 in the same manner as the original input signal. In one embodiment, the activation signal for the vibration motor 12 is a pulse-width-modulated signal. The haptic feedback provided by the vibration motor 12 in response to the wearer's jaw clench/motion or other maxillofacial action may also be activated by another user (e.g., through communication with the wearer of the wearable module 14) to provide a means of silent communication.
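The patent describes this control flow in prose only; the following is a minimal sketch, in Python, of such a receive/evaluate/dispatch loop, assuming a digitized Hall effect count stream in which low counts correspond to the clenched state (see fig. 9). All names (read_count, decode, send_control, pulse_motor) and the threshold and sampling rate are illustrative assumptions, not details taken from the patent.
```python
import time

CLENCH_THRESHOLD = 512   # assumed A/D count separating relaxed from clenched

def control_loop(read_count, decode, send_control, pulse_motor):
    """Poll the sensor, detect state transitions, and dispatch decoded commands."""
    events = []                # list of ("clench"/"relax", timestamp) transitions
    clenched = False
    while True:
        now = time.monotonic()
        state = read_count() < CLENCH_THRESHOLD   # low counts = clenched (fig. 9)
        if state != clenched:                     # edge detection
            clenched = state
            events.append(("clench" if state else "relax", now))
        command = decode(events)                  # None until a valid pattern is seen
        if command is not None:
            send_control(command)   # to the controlled device via interface 24
            pulse_motor()           # haptic confirmation via vibration motor 12
            events.clear()
        time.sleep(0.01)            # assumed 100 Hz sampling
```
A decode function compatible with this loop is sketched later, alongside the discussion of fig. 9.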
To improve the accuracy and overall performance of the activation accessory 10, means are also provided for excluding (blanking) or ignoring input signals generated by the maxillofacial movements involved in forming speech. Such movements can inadvertently reproduce a command input pattern, causing the activation accessory 10 to produce a false input command. By audibly detecting such speech, using decibel sensing, tone analysis, or other means applied to an integrated or remote/accessory microphone, and ignoring all or some degree/range of maxillofacial movement while such audible speech is detected, inadvertent input commands generated by speech can be reduced or eliminated.
Accordingly, a microphone 200 and an A/D converter 202 are provided as components of the activation accessory 10. Although the microphone 200 is shown integrated into the wearable module 14, a remote microphone and associated remote A/D converter may be used instead. The microphone 200 detects audible sounds, such as speech by the wearer of the activation accessory 10, and generates an analog output in response. The analog output is digitized by the A/D converter 202 and provided as an input to the processor 22. For example, the processor 22 may periodically sample the output of the A/D converter 202 and process the resulting signal to determine whether the microphone 200 has detected any speech by the wearer. Suitable filters may be employed to distinguish the wearer's voice from the voices of others and/or from ambient noise. For example, a threshold filter may be used to distinguish the wearer's voice from the voices of others, since the wearer's voice can be expected to be louder than the voices of others nearby. In addition, speech and noise may be distinguished on the basis of spectral content and/or other parameters.
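As a rough illustration of the threshold and spectral filtering described above, the following Python sketch flags sample windows in which the wearer appears to be speaking. The sample rate, RMS threshold, and speech-band heuristic are illustrative assumptions; the patent does not specify filter parameters.
```python
import numpy as np

RATE = 16000                 # assumed microphone sample rate (Hz)
LEVEL_THRESHOLD = 0.05       # assumed RMS level; the wearer's own voice is loudest
SPEECH_BAND = (85.0, 255.0)  # approximate fundamental range of adult speech (Hz)

def wearer_is_speaking(samples: np.ndarray) -> bool:
    """Return True if a window of mono float samples looks like the wearer's speech."""
    rms = np.sqrt(np.mean(samples ** 2))
    if rms < LEVEL_THRESHOLD:            # too quiet to be the wearer (threshold filter)
        return False
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / RATE)
    band = (freqs >= SPEECH_BAND[0]) & (freqs <= SPEECH_BAND[1])
    # spectral-content check: require meaningful energy in the speech band
    return spectrum[band].sum() > 0.2 * spectrum.sum()
```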
Fig. 15 illustrates a method 300 of distinguishing between the wearer's speech and voluntary mandibular movements, according to an embodiment of the present invention. At 302, the controller 18 receives input signals from the microphone 200 in the wearable module 14 through the A/D converter 202, along with mandibular motion inputs from the Hall effect sensor or other sensors. At 304, the processor 22 of the controller 18 evaluates the microphone input signal, by executing the processor-executable instructions stored in the memory 26, to determine whether the input signal represents the wearer's speech. As described above, this evaluation is made by the processor evaluating 306 the signal pattern, amplitude, and/or other indications that the input signal represents the wearer's speech or other utterances. If the processor 22 determines that the input signal does not indicate the wearer's speech, the processor 22 proceeds, at step 308, with other decoding operations to determine whether a command input 310 is present. Such decoding operations are discussed further below. Otherwise, the processor 22 treats the mandibular motion signal as associated with the wearer's speech and does not process it further, instead continuing to evaluate the next input signals 312 from the microphone and the Hall effect sensor in the same manner as the first input signals.
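A compressed sketch of method 300's gating logic follows, reusing wearer_is_speaking from the previous sketch; decode_command stands in for the decoding operations of steps 308/310 and is hypothetical.
```python
def method_300(mic_samples, jaw_signal, decode_command):
    """Blank jaw-motion input while the wearer is speaking (steps 304/306/312)."""
    if wearer_is_speaking(mic_samples):
        return None                       # discard jaw motion; re-evaluate next inputs
    return decode_command(jaw_signal)     # steps 308/310: normal command decoding
```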
Additionally or alternatively, the wearer's vocal cord activation while forming speech may be detected by a tuned vibration sensor mounted in the head-mounted device associated with the activation accessory 10 and/or by a remote vibration sensor applied to the wearer's neck, lower jaw, or another location via a wired or wireless connection. For the inputs provided by such sensors, a blanking or false-signal suppression method similar to that shown in fig. 15 and described above may be employed to reduce or eliminate false input commands caused by the associated maxillofacial movements. Furthermore, EMG sensors and/or integrated or remote cameras may be employed in addition to or in place of a microphone to detect whether the wearer's mouth is open or closed, and how the mouth or tongue moves, when speaking, yawning, or sneezing. In addition, other sensors, such as high-sensitivity pressure sensors, may be used to detect air exhaled from the mouth. The input signals from the mouth-position sensor and the exhalation sensor may be used in a manner similar to that described above to eliminate erroneous jaw-clench input commands, since neither signal is present during a jaw clench.
In addition to speech detection, embodiments of the present invention may provide speech recognition and/or voice recognition. Speech recognition generally involves the recognition and translation of spoken words. Various methods of speech recognition are known in the art, and many modern techniques employ hidden Markov models (HMMs): statistical models that output a sequence of symbols or quantities based on an input speech signal. For example, the speech signal provided by the microphone 200 may be sampled by the controller and the samples applied as inputs to a hidden Markov model process running on the controller to produce a sequence of output vectors. These vectors are then used to identify the relevant phonemes, and the phonemes are used to identify the most likely spoken words. Such a system can thus be used to interpret spoken commands for a controlled device, e.g., commands independent of non-speech-related voluntary mandibular movements. Alternatively or additionally, the controller may be configured to enhance the speech recognition process by decoding the mandibular motion associated with speech. This may be achieved by analyzing the signals generated by the sensor 16 as the user speaks and correlating those signals with the output of the speech recognition process, thereby increasing the likelihood of correctly recognizing the spoken words.
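By way of illustration only, the following sketch shows one conventional way such HMM-based word recognition could be assembled in Python with the hmmlearn package. The feature extraction (e.g., MFCCs), vocabulary, and model sizes are assumptions, and this is not presented as the patent's implementation.
```python
import numpy as np
from hmmlearn import hmm

def train_word_model(feature_seqs, n_states=5):
    """Fit one Gaussian HMM per vocabulary word from example feature sequences,
    each sequence being an array of shape (n_frames, n_features)."""
    X = np.vstack(feature_seqs)
    lengths = [len(seq) for seq in feature_seqs]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
    model.fit(X, lengths)
    return model

def recognize(features, word_models):
    """Return the vocabulary word whose HMM gives the utterance the highest
    log-likelihood score."""
    scores = {word: m.score(features) for word, m in word_models.items()}
    return max(scores, key=scores.get)
```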
Voice recognition, on the other hand, generally involves speaker recognition and may or may not include recognition of the words actually spoken. In embodiments of the present invention, voice recognition techniques may be employed to verify the user's identity before accepting and/or executing instructions made by the user, whether by voluntary mandibular movement or otherwise. Voice recognition may likewise involve the controller sampling the speech signal from the microphone 200 and then using one or more pattern-matching and/or other techniques to identify the speaker with a particular probability. If the speaker is identified with sufficiently high likelihood as an authorized user of the activation accessory, the controller may allow commands entered by that user to be executed.
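A minimal sketch of such a speaker-verification gate follows, assuming some speaker-embedding front end (not shown) reduces an utterance to a fixed-length vector; the embedding approach and threshold are assumptions.
```python
import numpy as np

MATCH_THRESHOLD = 0.8   # assumed cosine-similarity acceptance threshold

def is_authorized_wearer(utterance_vec: np.ndarray,
                         enrolled_vec: np.ndarray) -> bool:
    """Compare an utterance's embedding against the enrolled wearer's template."""
    cos = np.dot(utterance_vec, enrolled_vec) / (
        np.linalg.norm(utterance_vec) * np.linalg.norm(enrolled_vec))
    return cos >= MATCH_THRESHOLD   # execute commands only above this likelihood
```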
Beyond wearable technology devices, these sound/speech exclusion methods can also improve input signal processing for assistive PC control and navigation, and for other system/device control, for disabled users who currently rely on EMG sensors placed on the head or face. For such users, signals detected while speaking can interfere with the intentional inputs, generated by distinct maxillofacial movements, that produce command signals.
Regardless of the sensing technique and detection method used, embodiments of the present invention allow more accurate activation of a hands-free system by ignoring maxillofacial motion as an input pattern while the wearer is speaking, and then immediately resuming the detection and processing of maxillofacial motion as system input instructions once the wearer is no longer speaking.
In addition to sound/speech exclusion, the processor 22 may be programmed to apply one or more rules when the wearer's speech is detected and decoded. For example, while the above procedure is used to distinguish speech from actual jaw clench/motion command inputs, in the event that speech is detected the processor 22 may be further configured to execute a speech recognition routine to decode commands relayed by the speech input, and thereafter issue signals to execute those commands. Likewise, the processor 22 may be configured to execute a voice recognition routine to ensure that only authorized wearers, as determined by voice recognition, are able to issue voluntary jaw clench/motion commands.
Referring now to fig. 2A-2F, various examples of controlled devices and arrangements for communicatively coupling the controlled devices to wearable module 14 are shown. In fig. 2A, the controlled device is a lighting element 30 comprised of one or more LEDs 32. As described above, the processor of the controller 18 is coupled to the control signal interface 24 and is adapted to transmit control signals to the controlled device (in this case the lighting element 30) via the control signal interface 24. Drivers and other interface elements not shown in the figures may amplify and/or otherwise condition the control signals so that they are suitable for use with the lighting element 30.
Fig. 2B shows an example in which the wearable module 14 is coupled to a transmitter 34 through the control signal interface 24. The transmitter 34 may be a low-power/short-range transmitter, e.g., Bluetooth™, Bluetooth Low Energy (BLE), Zigbee, infrared, WiFi HaLow (IEEE 802.11ah) or other WiFi, Z-Wave, Thread, Sigfox, DASH7, or another transmitter. The transmitter 34 may itself be the controlled device; alternatively, as shown in fig. 2D, the transmitter 34 may be a component of a wireless communication system that includes a receiver 36 communicatively coupled to a controlled device (e.g., a two-way radio 38). In this arrangement, the transmitter 34 is adapted for radio frequency communication with the receiver 36 at the controlled device. Control signals issued by the processor 22 of the controller 18 are thus coupled to the control signal interface 24 and transmitted by the transmitter 34 to the controlled device via radio frequency signals.
Fig. 2C shows another alternative, in which the wearable module 14 is directly coupled to a two-way radio 36. In this example, the control signal interface 24 may be coupled to the two-way radio 36 by a cable having a plug configured to mate with a jack on the two-way radio 36 (or, more generally, the controlled device). In this way, the wearable module 14 may function as a push-to-talk (PTT) unit for the two-way radio 36 (or, more generally, as an activation switch for a controlled device). Alternatively, as shown in figs. 2E and 2F, the wearable module 14 may serve as an auxiliary PTT element for a PTT adapter 40 of the two-way radio 36 (or, more generally, the controlled device). As shown in fig. 2E, the connection between the wearable module 14 (control signal interface 24) and the PTT adapter 40 may be wired, for example using a cable with a plug configured to mate with a jack on the PTT adapter, or wireless, using a transmitter/receiver pair 34, 36. Of course, other arrangements for conveying the control signals generated by the processor 22 (or, more generally, the controller 18) of the activation accessory 10 to the controlled device may be used.
In addition to the examples described above, the processor 22 may also communicate with and control other peripheral devices (e.g., heads-up displays, audio input/output units, non-head-mounted units, etc.). The processor 22 is a hardware-implemented module and may be a general-purpose processor, dedicated circuitry or logic such as a field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC), or another form of processing unit. The memory 26 may be readable/writable memory, such as an electrically erasable programmable read-only memory, or another storage device.
Referring now to fig. 3, in various embodiments the wearable module 14 may be supported on a mount 42 of a headset 44, or in other arrangements. For example, such a mount 42 may be movable relative to a frame 46 of the headgear or a component of the headgear (e.g., earmuff 48) to allow the wearable module 14 to be positioned at different locations on the wearer. More generally, such a mount 42 may be configured to position the wearable module 14 so as to cover the wearer's mandibular region or another portion of the wearer's head or face.
In some cases, as shown in fig. 4, the wearable module 14 may be supported in a mask 52 (e.g., a mask used by a firefighter, diver, crew member, or other wearer), where the mask 52 is configured to position the wearable module 14 to cover the wearer's chin area or another portion of the wearer's head or face. Alternatively, as shown in fig. 5, the wearable module 14 may have an adhesive applied to one surface to enable it to be worn on the wearer's face or head (see, e.g., fig. 6). In one instance, such adhesive may be in the form of a removable film 54 adhered to the surface of the wearable module 14.
The wearable module 14 may include more than one sensor 16, with multiple sensors arranged relative to one another so as to allow the sensors to be activated individually and/or in groups by the wearer's associated voluntary mandibular movements. For example, fig. 7 shows a wearable module 14' that includes two sensors 16-1, 16-2. Each sensor 16-1, 16-2 may be a Hall effect sensor or another sensor and is associated with a respective paddle switch 56-1, 56-2 that may be depressed by the wearer's voluntary mandibular movement. Pressing a paddle switch activates the sensor associated with it.
Further, as shown in figs. 1, 3, 4, and 6, a visual activation indicator 50 may be present in addition to the vibration motor 12. Such a visual activation indicator (e.g., one or more LEDs) may be coupled to receive a visual activation indication signal from the controller 18 (processor 22), and the processor-executable instructions stored in the memory 26, when executed by the processor 22, may further cause the processor 22 to transmit the visual activation indication signal to the visual activation indicator 50, illuminating the one or more LEDs for a short period of time if/when the processor 22 determines that the input signal from the sensor 16 represents a command for the controlled device. As shown in the various illustrations, the visual activation indicator 50 may be located on the headset 44, on a helmet 60 or an associated indicator panel 62, or attached to or integrated in a pair of eyeglasses 64 or safety glasses, for example on a temple piece 66 of the eyeglasses. This type of activation indicator is particularly useful when the wearable module 14 is used to control devices such as PTT controllers/adapters associated with tactical radios, or the radios themselves. When microphone actuation is provided with such a radio, a "microphone state LED" may be included in the visual activation indicator 50 to provide a visual indication of microphone state. This LED emits light inside the glasses 64 (visible only to the wearer), providing effective light discipline in tactical situations. The light is visible when the microphone is in use (i.e., on) and extinguished when the microphone is not in use (i.e., off).
As described above, in various embodiments the wearable module 14 is positioned flush (or nearly flush) with the wearer's face over the chin, masseter muscle, or another area of the wearer's face, such that clenching/motion or other displacement of the jaw activates the sensor 16 to transmit a signal to the processor 22. The power supply and control electronics for the wearable module 14 may be incorporated within the module itself and/or within a garment, frame, or mask supporting the wearable module 14, or elsewhere. In the arrangement shown in fig. 3, the wearable module 14 is mounted to the earphone cover 48 of the headset 44 by a frame 46 fitting around the earphone cover. In alternative arrangements such as those shown in figs. 8A-8D, the frame may be mounted to the earphone cover 48 by a friction-fit frame 46-1, or by frames 46-2, 46-3, 46-4 attached with screws 68, rivets or pins 70, or other connection means. In some embodiments, the wearable module 14 may be attached to or integrated into a movable portion of the mount 42 that can rotate about a rivet, pin, or other joint or hinge, and that may also be flexible so as to move closer to or farther from the wearer's face; this helps prevent unwanted actuation of the sensor 16. Such a movable portion of the mount 42 may be hingedly attached to the frame 46 by a spring-loaded hinge that holds the wearable module 14 against the wearer's face even as the wearer moves his or her head, unless it is moved away from the face far enough to engage a stop that prevents the wearable module 14 from returning to a position adjacent the wearer's face until manually adjusted by the wearer. Such a hinge arrangement may employ any type of spring-loaded hinge, such as a spring-loaded piano hinge, butt hinge, barrel hinge, butterfly hinge, pivot hinge, or other device.
Returning to fig. 6, the illumination element 50 may be attached to the inside of the temple piece 66, or slid over the temple piece, so as to contact the wearer's temple area when the glasses are worn. This also provides a suitable location for the vibration motor 12. In this position, when the processor of the wearable module 14 detects a voluntary facial movement of the wearer, such as a clench of the wearer's masseter muscle, or motion or other displacement of the wearer's lower jaw, eyebrow, temple, etc., and converts it into a command signal for activating, deactivating, or controlling the controlled device (e.g., changing the volume of audio communications or music, turning on an integrated lighting module, or answering a phone call), the vibration motor 12 may be activated to provide feedback indicating successful recognition of the input command. As described below, a unique "language" (sometimes referred to hereinafter as a "clench language") may be programmed to control certain functions of the controlled device using particular sequences or patterns of masseter clenches (or other mandibular actions by the wearer). The vibration motor 12 may also provide haptic feedback to the user as notification of microphone status or other system states. For example, a slight vibration of the vibration motor 12 in a particular pattern may alert the wearer that the microphone is on, preventing an "open microphone" condition that would block others from communicating on the common channel.
Furthermore, since the temple region has proven to be a useful location for sensing certain vital signs, additional sensors, for example for monitoring the wearer's vital signs, may also be integrated into the temple piece 66 to provide remote biomonitoring of the wearer. Such sensors may be integrated into the temple piece 66 (permanently or as an attached accessory) or attached to its interior using tape, glue, magnets, hook-and-loop fasteners, screws, or a tongue-and-groove or dovetail connection mechanism. The sensor signals may be transmitted over a power/data cable or through a wireless connection (e.g., Bluetooth or near-field magnetic induction).
As is apparent from the discussion above, use of the activation accessory does not require wearing a headset or mask; the activation accessory may instead be worn separately, for example by means of an adhesive. Incorporating an activation accessory in a headset is common practice for aircraft flight or handling crews, but headsets such as those shown in the above figures are not limited to use by flight/aircraft crews; ground forces, navy/coast guard personnel, and civilians may also use them. For example, workers may use head-mounted devices as described herein at and around construction sites, sports fields, movie and television production sites, amusement parks, and many other sites. By using a headset equipped with an activation accessory as described herein, the wearer can conveniently activate/deactivate/operate lighting, imaging, gaming, and/or communication systems in a hands-free manner. Note that while fig. 3 shows a headset with left and right ear covers, this is for example purposes only; the present system may be used with a headset having only a single ear cover, or one or two earpieces. Indeed, the present system may even be used with headgear that includes no headphones or earphones at all, attached, for example, to a strap worn on the head or neck, or to a boom of a helmet or other headgear.
When evaluating the input signals from the sensor 16 for a signal pattern indicative of a plurality of the wearer's voluntary mandibular movements, the processor 22 may evaluate the input signals against a stored library of command signal representations, where each representation characterizes an associated command for the controlled device. Alternatively or additionally, the input signal may be evaluated in terms of its power spectral density over a given period of time. Alternatively, particularly in the case of a Hall effect sensor, the input signal may be evaluated on the basis of the count values received from the Hall effect sensor over a particular period of time. Still further, the input signals may be evaluated against a trained model of command signal representations, where each representation characterizes an associated command for the controlled device.
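As one illustration of the power-spectral-density option, the following Python/SciPy sketch matches an input window against stored PSD templates. The sample rate, window parameters, and acceptance threshold are assumptions, and the stored templates are assumed to have been computed with identical parameters so that the vectors are comparable.
```python
import numpy as np
from scipy.signal import welch

RATE = 100.0        # assumed sensor sampling rate (Hz)
MAX_DISTANCE = 1.0  # assumed acceptance threshold for a template match

def match_by_psd(signal: np.ndarray, templates: dict):
    """Return the command whose stored PSD template is nearest the input's PSD,
    or None if no template is close enough."""
    _, psd = welch(signal, fs=RATE, nperseg=min(64, len(signal)))
    best, best_dist = None, np.inf
    for command, template_psd in templates.items():
        dist = np.linalg.norm(psd - template_psd)   # Euclidean distance between PSDs
        if dist < best_dist:
            best, best_dist = command, dist
    return best if best_dist <= MAX_DISTANCE else None
```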
Fig. 9 shows an example of an input signal received by the processor 22 from the sensor 16. Trace 72 depicts the "count" of the Hall effect sensor 16 received by the processor 22 over time. In this case, the "count" represents the externally applied magnetic field detected by the Hall effect sensor 16, which varies as the wearer clenches and moves his or her mandible. Other output parameters of the Hall effect sensor or of other sensors may be measured to provide similar results, including sensor voltage and/or current outputs. More generally, in embodiments of the present invention, the wearable module of the activation accessory 10 includes one or more switching elements (e.g., Hall effect sensors or the other sensors discussed herein) that are sensitive to movement of the wearer's mandible or other muscles and are communicatively coupled to a controller 18 having a processor 22 and, coupled thereto, a memory 26 storing processor-executable instructions. The processor 22 is also coupled to provide an output signal to an indicator (e.g., the lighting element 50 and/or the vibration motor 12). The wearable module 14 may be mounted to a body-worn article (e.g., a headset, a mask, or glasses/goggles or other associated elements) by an elongated member so as to be positionable to bring one or more control surfaces associated with the one or more switching elements into contact with the wearer at a desired location, or to observe/detect movement of the wearer's muscles. The stored instructions, when executed by the processor 22, cause the processor to receive input signals from the one or more sensors and, for example by level or edge detection of those input signals, to detect the relaxed (high signal level in fig. 9) and clenched/active (low signal level) or other states (e.g., forward, backward, or lateral displacement) of the wearer's muscles. Based on these input signals, the processor 22 decodes the relaxed and clenched/active states into commands (74, 76, 78, etc.) for controlling electronic system components communicatively coupled to the controller, and alerts the wearer to successful command decoding by providing an output signal to the indicator.
As shown in fig. 9, trace 72 exhibits distinct changes in count value corresponding to the periods during which the wearer, while wearing the wearable module 14, relaxes (high signal level) and clenches (low signal level) his or her jaw. The processor 22 may detect such actions in an edge-sensitive or level-sensitive manner. Further, as described above, the sensor signals may be decoded according to a language so as to distinguish activation, deactivation, and operation commands for the controlled device. The example shown in fig. 9 depicts decoded signals representing commands for a lighting unit. Signal groups 74 and 78, each a short clench followed by a long clench, represent activation ("on") and deactivation ("off") commands; that is, when the processor 22 recognizes such a set of input signals, the lighting module is commanded to change operating state from its current state (on or off) to the opposite state. Signal group 76 represents a command to change an output characteristic (e.g., brightness) and corresponds to two short clenches followed by a long clench. The two short clench signals indicate an output change, and the long clench signal indicates that the brightness of the lighting unit should change (e.g., from low to high) for the duration of the clench. Of course, other clench languages for various controlled devices and sensor-muscle arrangements may be implemented. For example, in addition to a double-clench/motion input signaling a subsequent command input, a triple-clench/motion input may be recognized as a valid command input distinct from the command associated with the double-clench input. Multiple-clench and/or clench-and-hold inputs may likewise be recognized as representing different commands. Such multiple-clench/motion inputs are useful for rejecting inadvertent actuation of the sensor 16, as may be caused by involuntary muscle movements or by the wearer chewing food or gum, or clenching/moving the jaw during other activities. In general, an intended command may be identified by decoding the detected relaxed and clenched/active states of the wearer's muscles according to a language that defines commands in terms of a plurality of clench/motion actions detected over a period of time, for example several short actions and long (e.g., clench-and-hold) actions recognized within an interval. Active forms of clench/motion input may be used to turn the light-emitting element and/or individual LEDs within it on or off, to adjust the intensity of one or more of the LEDs, or to signal other desired operations. In general, the timing, repetition, and duration of a clench/motion actuation sequence may each be used, alone and/or in combination, to designate different command inputs for one or more controlled devices.
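A minimal sketch of a decoder for this "clench language" follows, compatible with the control loop sketched earlier: clench durations are classified as short or long, and the resulting tuple is looked up in a pattern table. The 0.5 s short/long boundary and the command names are assumptions chosen to mirror the text (short-then-long toggles power, per signal groups 74 and 78; short-short-long changes brightness, per signal group 76).
```python
LONG_CLENCH_S = 0.5   # assumed boundary between a short and a long clench

PATTERNS = {
    ("short", "long"): "toggle_on_off",               # signal groups 74 and 78
    ("short", "short", "long"): "change_brightness",  # signal group 76
}

def decode(events):
    """events: list of ('clench'|'relax', timestamp) transitions, oldest first."""
    clenches, start = [], None
    for state, t in events:
        if state == "clench":
            start = t                                 # falling edge: clench begins
        elif start is not None:                       # rising edge closes a clench
            clenches.append("long" if t - start >= LONG_CLENCH_S else "short")
            start = None
    if start is not None:
        return None          # a clench is still in progress; keep waiting
    return PATTERNS.get(tuple(clenches))
```
A production decoder would presumably also require an inter-command pause before matching, so that a short-long prefix does not fire before a short-short-long pattern has a chance to complete.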
Fig. 10 illustrates a method 80 of operating a controlled device according to an embodiment of the invention. At 82, the controller 18 receives a first input signal from the sensor 16 in the wearable module 14, which is communicatively coupled to the controller. At 84, the processor 22 of the controller 18 evaluates the first input signal, by executing the processor-executable instructions stored in the memory 26, to determine whether it represents a command for the controlled device. As described above, this evaluation 84 is performed by the processor evaluating 86 whether the signal pattern of the first input signal is indicative of a plurality of voluntary mandibular movements by the wearer of the wearable module 14. If the processor 22 determines at step 88 that the first input signal represents a command, the processor 22 decodes the command 90, for example by recognizing the input signal as one of the patterns of a language as described above, transmits 92 an associated control signal to the controlled device through a communication element communicatively coupled to the processor and, optionally, transmits 94 an activation signal to the vibration motor of the wearable module. As described above, the communication element may be a cable having a plug configured to mate with a jack on the controlled device, a transmitter adapted for radio frequency communication with a receiver at the controlled device, or another element. Decoding the command signal may include determining the number of short clench/motion actions preceding a long clench/motion action in order to determine the nature of the subsequent long and/or short clench/motion action(s), and may also depend on the current operating state of the controlled device. Otherwise, at step 96, the processor 22 transmits no control or activation signals and instead continues to evaluate a second/next input signal 96 from the sensor in the same manner as the first input signal.
In general, the sensor 16 requires little or no mechanical displacement of a control element in order to signal or effect a change (or desired change) in the state of the controlled system. Hall effect sensors are one example of such devices. Other examples include EMG sensors and piezoelectric switches, such as the piezoelectric proximity sensors manufactured by Communicate AT Pty Ltd. of Dee Why, Australia, as well as tape switches, fabric switches, and other switches requiring little or no mechanical displacement of a control element. A piezoelectric switch typically has an on/off output state responsive to electrical pulses generated by a piezoelectric element. The electrical pulse is generated when the piezoelectric element is placed under pressure, for example pressure applied to the switch by the wearer clenching his or her jaw. Although pulses are generated only while pressure is present (e.g., while the wearer's mandible or other muscles are in motion), additional circuitry may be provided so that the output state of the switch is latched "on" or "off" until a second actuation of the switch occurs. For example, a flip-flop may be used to hold the switch output logic high or logic low, with state changes driven by sequential input pulses from the piezoelectric element. One advantage of such a piezoelectric switch is that it has no moving parts (other than the front plate, which must deform by a few microns each time the wearer clenches his or her jaw), and the entire switch can be sealed against the environment, making it particularly suitable for marine and/or outdoor applications.
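The latch behavior described above can be modeled in a few lines; the following sketch simply toggles a held output state on each pulse from the piezoelectric element, as a flip-flop would.
```python
class ToggleFlipFlop:
    """Model of the latch circuit: each input pulse toggles the output state."""

    def __init__(self):
        self.output = False          # latched switch state ("off")

    def pulse(self):
        """Called on each electrical pulse (i.e., each jaw clench)."""
        self.output = not self.output
        return self.output

# usage: two sequential clenches return the switch to its original state
ff = ToggleFlipFlop()
assert ff.pulse() is True and ff.pulse() is False
```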
Another example is a miniature tactile switch. Although tactile switches use mechanical elements subject to wear, they may be preferable to Hall effect sensors or piezoelectric switches in some applications because they provide mechanical feedback to the user (the haptic feedback provided by the vibration motor 12 also provides an acceptable level of feedback and may therefore be adequate in most cases). This feedback can confirm that the switch has been activated or deactivated. Momentary-contact tactile switches may also be used, but because they require a continuous applied force (e.g., a sustained jaw clench against the switch), they are best suited to applications requiring only momentary or brief engagement of the element under the switch's control, such as signal-light flashes, burst transmissions, or other short-duration operations, or, as described above, to driving a flip-flop that maintains the output state until a subsequent input is received. Other forms of switches include ribbon switches (e.g., as manufactured by Tapeswitch Corporation of Farmingdale, New York) and conductive printed circuit board surface elements activated by carbon pills on an overlay keypad.
Other embodiments of the sensor 16 include those that require no physical contact with the wearer's mandible or the like. For example, the sensor 16 may be any of an ultrasonic motion sensor, a camera or LIDAR unit, a motion sensor (e.g., one employing one or more light-emitting diodes to detect motion), a laser sensor (e.g., a vertical-cavity surface-emitting laser (VCSEL) sensor), or, more generally, a time-of-flight sensor that detects motion by optical and/or acoustic means. Alternatively, the sensor 16 may be another form of proximity or motion sensor. In such cases, the sensor 16 need only be positionable so as to detect movements, such as clenching, motion, and/or lateral displacement, of the user's mandible, temple, eyebrow, chin, or other aspects of the user's head or face. For example, the sensor 16 may be located on a headset, glasses, or another element worn by the user, and oriented so that it can detect such movement.
Further, in various embodiments, the controlled device may include one or more LEDs that emit light at one or more wavelengths, and/or one or more digital still and/or video cameras. In some cases, a lighting element may be worn on one side of the headgear and an imaging system on the opposite side, each controlled by a separate activation accessory mounted on the corresponding side of the headgear; or, if the lighting system and the imaging system respond to different command signals, both may be controlled by a single activation accessory that, in a manner similar to a computer cursor control device (e.g., a touchpad or mouse), responds differently to single, double, triple, or multiple activations. Indeed, the activation accessory itself may be used to control a cursor as part of a user-computer interface. For example, any or all of cursor type, cursor movement, and cursor selection may be controlled by the wearable module 14. Applications for such use include computer game interfaces, which today commonly include head-mounted communication devices. One or more wearable modules 14 configured in accordance with embodiments of the present invention may be mounted to such a headset (at manufacture or as an after-market addition) to provide cursor control capabilities. The connection to a console, personal computer, tablet, cell phone, or other device serving as the game or other host may be provided by conventional wired or wireless communication means. Such a human-machine interface offers users who cannot, or find it inconvenient to, use their hands a practical way to interact with particular applications and with a personal computer, tablet, cell phone, or similar device.
Further, the controlled device may include one or more microphones. Such a microphone may be mounted or integrated into the head-mounted device and transmit audio signals using bone conduction transducers. Alternatively or additionally, wearable module 14 may be used to adjust the presence, absence, and/or volume of audio played through one or more headphones or other earpieces. Furthermore, the wearable module 14 may be used to control non-earphone devices, for example, through a wireless transmitter.
One or more of the above-described embodiments may allow for the generation of a signal via a control surface that can be activated by direct or indirect force, an articulating paddle, a touch-sensitive surface, or another tactile actuation device. Devices configured according to these embodiments may employ a movable structure (e.g., a paddle) that houses a sensor to detect changes in an electromagnetic field as a corresponding magnet moves into proximity with the sensor. Such devices may take the form of accessories for remote (e.g., handheld) devices, or may be fully integrated into wearable form factors (e.g., eyeglasses and head-mounted devices). Other sensors, as discussed herein, may also be used.
By providing left and right activation devices (or any number of activation devices) configured to allow various types of inputs (e.g., different numbers of activations, similar to single, double, or other mouse clicks), a user may issue different commands to an associated device. For example, different command activation sequences may be used to zoom a camera, pan a view in a virtual/visual environment, or issue many other commands for controlling the camera, audio transmission (volume up or down), and so on. Further, when associated with cursor control operations of a computer system or similar device (including but not limited to cell phones, tablet computers, etc.), command sequences may include swiping between views/screens, advancing or repeating tracks (music, video, or audio/video), activating on-screen windows, and the like. In addition to the foregoing, using a gyroscope and/or accelerometer while a clench is engaged and held may allow objects to be selected and moved in a virtual field. This is similar to clicking and holding and then moving a cursor with a mouse or joystick, in that it allows a user to move objects (e.g., icons) around a virtual desktop, open menus, select commands, etc., by clenching and moving the head. The gyroscope and/or accelerometer may be incorporated in the wearable module 14 or elsewhere (e.g., in a frame supporting the wearable module).
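The clench-and-hold drag described above amounts to a simple integration step; the following sketch is illustrative, and the gain, axis mapping, and update interval are assumptions rather than parameters from this disclosure:

```python
def drag_step(clench_held, gyro_rate_dps, cursor_xy, dt_s=0.01, gain=4.0):
    """One update of a head-motion drag while a clench is held.

    While clench_held is True, yaw/pitch rates (degrees per second)
    from a head-worn gyroscope are integrated into cursor motion,
    analogous to moving a mouse with its button held down.
    """
    if clench_held:
        yaw_rate, pitch_rate = gyro_rate_dps
        cursor_xy[0] += gain * yaw_rate * dt_s
        cursor_xy[1] += gain * pitch_rate * dt_s
    return cursor_xy
```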
In addition to or as an alternative to gyroscopes and/or accelerometers, embodiments of the present invention may be employed in head-mounted devices, augmented reality/virtual reality head-mounted devices, or other devices that conventionally rely on tactile buttons and switches, touch-activated control surfaces, gesture techniques, head-tracking techniques, and eye-tracking techniques as means of controlling their operation. In such devices, the activation accessory allows the wearer to operate the controlled device in a hands-free manner through volitional muscle movements, the activation accessory comprising a sensor configured to detect the relaxed and flexed states of one or more muscles associated with biting, movement, and/or lateral displacement of a body part of the wearer, or with displacement of that body part itself. For example, eye-tracking techniques may use the user's gaze at a particular object as a signal for moving a cursor or other controlled item to the location indicated by the gaze; such techniques may be used in conjunction with an activation accessory so that the activation accessory performs selection, click-and-hold, and/or other control operations on screen elements at the screen location indicated by the user's gaze.
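One way to combine the two modalities, sketched below with hypothetical interface methods, is to let gaze supply the "where" and the clench supply the "when", avoiding the dwell-time delay of gaze-only selection:

```python
def on_clench(gaze_xy, ui):
    """Select whatever the wearer is looking at when a clench fires.

    gaze_xy: current gaze point from the eye tracker (pixels).
    ui: hypothetical host-interface object; element_at() and click()
        stand in for whatever the real windowing toolkit provides.
    """
    element = ui.element_at(gaze_xy)
    if element is not None:
        ui.click(element)  # gaze chose the target, the clench fired it
```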
Referring now to Figs. 11A-14B, various embodiments of a head-mounted vision device having a wearable module 14 are shown. Such head-mounted vision devices are suitable for use in a variety of settings, including military, law enforcement, health care, field maintenance, and consumer settings. Unlike handheld and/or manual vision devices that typically require a user to operate a control unit or console with his or her hands, vision devices configured in accordance with embodiments of the invention may be operated in a hands-free manner, with or without helmets or other headwear, communication devices, and the like. In addition to the vision device itself, the frame carrying the vision device provides a platform for audio/video capture and/or communication. For example, one or more speakers, earpieces, and/or microphones may be integral with or attached to the frame. Hands-free operation of the vision device is facilitated by the wearable module 14, which includes a bite switch as described above that may be activated when a user bites down or otherwise displaces his or her mandible, temple, etc.
Figs. 11A-11B and 12A-12B illustrate embodiments of a vision device in the form of head-mounted virtual reality glasses 100 having an integrated wearable module 14 (Figs. 11A-11B) and an attachable wearable module 14 (Figs. 12A-12B) configured in accordance with the present invention. Figs. 13A-13B and 14A-14B illustrate embodiments of a vision device in the form of head-mounted augmented reality glasses 102 having an integrated wearable module 14 (Figs. 13A-13B) and an attachable wearable module 14 (Figs. 14A-14B) configured in accordance with the present invention. As shown, each of the various vision devices includes a frame 104 that is worn over the ears.
In some cases, the vision devices 100, 102 may be personalized for the wearer by creating a physical or digital model of the wearer's head and face and manufacturing the vision devices 100, 102 (or simply the frames 104) to the dimensions provided by the model. Modern additive manufacturing processes (commonly referred to as 3D printing) make such customization economically viable, even for consumer applications, and the vision devices 100, 102 (or just the frames 104) can readily be generated from images of the wearer's head and face captured with computer-based cameras and transmitted to a remote server hosting a web-based service for purchasing the vision devices (or frames). For example, a user may capture multiple still images and/or short videos of his or her head and face according to instructions provided by the web-based service. By including an object of known dimensions (e.g., a ruler, a credit card, etc.) within the camera's field of view at the approximate location of the user's head when capturing the images, a 3D model of the user's head and face can be created at the server. The user may then be given an opportunity to customize the vision device 100, 102 (or frame 104) to fit the dimensions of the model, selecting, for example, the color, the materials, the position on the ears at which the vision device 100, 102 will be worn, and so on. Once the customization is specified and payment is collected, the vision device specifications may be sent to a manufacturing facility where the vision device is produced.
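For example, the known-size reference object lets the server convert pixel measurements into physical dimensions. A minimal sketch of that scale computation follows; the pixel counts are made-up numbers for illustration:

```python
CREDIT_CARD_WIDTH_MM = 85.6  # standard ID-1 card width

def mm_per_pixel(ref_width_mm, ref_width_px):
    """Image scale from a reference object of known physical size."""
    return ref_width_mm / ref_width_px

# A card spanning 214 pixels gives 0.4 mm/pixel, so a facial feature
# measuring 380 pixels corresponds to roughly 152 mm on the face.
scale = mm_per_pixel(CREDIT_CARD_WIDTH_MM, 214)
feature_mm = 380 * scale
```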
The vision devices 100, 102 may also support one or more communication earpieces (not shown) and/or one or more microphones (not shown) that allow for communication with the wearer. The earpiece and microphone may be communicatively connected to a transceiver carried elsewhere on the wearer using a wired or wireless connection. In other embodiments, the earpiece and/or microphone may be eliminated and audio communication facilitated by bone conduction elements, since portions of the vision devices 100, 102 are in contact with the wearer's head. Rather than an earpiece, a bone conduction headphone decodes and converts signals from a receiver into vibrations that can be transmitted directly to the wearer's cochlea. The receiver and bone conduction headphones may be embedded directly in the vision devices 100, 102, or in some cases the receiver may be located external to them. One or more bone conduction headphones may be provided. For example, the headphones may resemble the bone conduction speakers used by divers and may consist of a piezoelectric flexing disc encapsulated in a molded portion of the vision device 100, 102 that contacts the wearer's head behind one or both ears. Similarly, a bone conduction microphone may be provided.
Although not shown in the various views, a power supply for the electronics is provided and may be housed within the vision devices 100, 102 or located externally (e.g., worn on a vest or in a belt pack). In some cases, a primary power source may be external to the vision devices 100, 102 while a secondary power source is integrated into them. This allows the primary power source to be disconnected from the vision device 100, 102, with the vision device 100, 102 then running, at least temporarily, on the secondary power source (e.g., a small battery). To facilitate this operation, the vision devices 100, 102 may be provided with one or more ports allowing connection of different forms of power sources. Further, status indicators (e.g., LEDs or other indicators) may be provided to give information regarding the imaging elements, communication elements, available power, and the like. In some embodiments, haptic feedback may be used for various indications, such as a low battery.
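The primary/secondary switchover amounts to a simple source-selection rule; the following sketch is illustrative only, and the reserve threshold is an arbitrary assumed value:

```python
def select_power_source(primary_connected, secondary_charge_fraction):
    """Choose the active supply, falling back to the onboard battery.

    Returns which source should power the device. The 5% reserve
    threshold before shutdown is an illustrative value.
    """
    if primary_connected:
        return "primary"
    if secondary_charge_fraction > 0.05:
        return "secondary"  # e.g., small integrated battery
    return "shutdown"
```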
The frames 104 of the various vision devices 100, 102 may be made of a variety of materials, including but not limited to plastics (e.g., celluloid, cellulose acetate, nylon), metals and/or metal alloys, carbon fiber, wood, natural horn and/or bone, leather, epoxy resins, and combinations thereof. Manufacturing processes include, but are not limited to, injection molding, sintering, milling, and die cutting. Alternatively or additionally, one or more additive manufacturing processes, such as extrusion, photopolymerization, powder bed fusion, material jetting, or directed energy deposition, may be used to form the vision devices and/or components thereof.
The activation/deactivation and/or other operation of the imaging elements and/or audio communication elements of the vision devices 100, 102 may be accomplished through use of the integrated wearable module 14 or the attachable wearable module 14, as applicable. Each sensor may be of any of the types described above. The sensor is responsive to minimal displacement of the wearer's mandible, temple, or other facial element on or near which the sensor sits (or which it can otherwise observe) when the associated vision device 100, 102 is worn; for example, the bite switch overlies the user's mandible region when the vision device 100, 102 is worn, so that biting, movement, or other displacement of the wearer's mandible causes the sensor to signal such movement. The use of such a sensor allows hands-free operation of the imaging elements (and, optionally, other elements) of the device.
In the vision devices 100, 102, the integrated wearable module 14 is included at or near the end of a frame element 106 that is a molded component of the original frame 104; when the vision device 100, 102 is worn, the frame element 106 positions the wearable module 14 flush (or nearly flush) with the wearer's face over the wearer's mandible, so that biting, movement, or other displacement of the wearer's mandible causes the sensor in the wearable module to signal such movement. For vision devices 100, 102 that do not include an integrated wearable module 14, an attachable wearable module 14 may be provided. The attachable wearable module 14 includes a clip 108 that slides over a temple piece of the frame 104 and an elongated member 110 that extends downward from the temple piece toward the wearer's face near the mandible, so that a sensor included in the wearable module 14 at or near the end of the elongated member 110 is positioned over the wearer's mandible. Accordingly, the attachable wearable module 14 may be provided as an after-market accessory for vision devices 100, 102 not originally equipped for hands-free operation. The position of the wearable module 14, whether part of an attachable module or an integrated one, may be adjustable, for example by making the elongated member 110 or frame element 106 (as applicable) telescoping. In this way, the sensor may be positioned at different distances from the temple piece of the frame 104 to accommodate wearers' faces of different sizes.
As is readily apparent from these illustrations, the use of the wearable module 14 allows activation/deactivation of the imaging elements, communication elements, and/or other elements of the vision devices 100, 102 in a hands-free manner. In some cases, elements of the vision devices 100, 102 may be controlled using wearable modules 14 positioned on different sides of the wearer's face, or by a single wearable module 14 whose sensor is actuated in response to multiple bites (or other displacements), as described above. In some embodiments, the wearable module 14 may be hingedly attached to the frame 104. This helps prevent unwanted actuation of the sensor, since the module can be moved away from or toward the wearer's face as desired. Such a hinged arrangement may comprise a spring-loaded hinge that holds the switch against the wearer's face even as the wearer moves his or her head, unless the module is moved away from the face far enough to engage a stop that prevents its return to the position adjacent the face until manually adjusted by the wearer. The hinged arrangement of the wearable module 14 may employ any type of spring-loaded hinge, such as a spring-loaded piano hinge, butt hinge, barrel hinge, butterfly hinge, pivot hinge, or other arrangement.
Accordingly, systems and methods have been described for operating a controlled device in a hands-free manner through volitional mandibular biting actions and/or other muscle movements of a wearer, in particular using an activation accessory for the controlled device that includes a sensor configured to detect the relaxed state and active state of one or more of the wearer's muscles, such as the muscles associated with biting, movement, and/or lateral displacement of the wearer's mandible.
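The voice-blanking logic recited in the claims that follow can be summarized as a short control loop. The Python sketch below is illustrative only; the caller-supplied is_wearer_speech and match_command_pattern functions are hypothetical stand-ins for the threshold/spectral-content speech tests and the signal-pattern evaluation described above:

```python
def process_frame(sensor_signal, mic_signal, controller,
                  is_wearer_speech, match_command_pattern):
    """One evaluation cycle of the voice-blanking control logic.

    If the microphone signal appears to be the wearer's own speech,
    the concurrent muscle-sensor signal is ignored, so jaw motion
    made while talking cannot fire a command. Otherwise the sensor
    signal is matched against stored patterns of deliberate
    voluntary movements.
    """
    if is_wearer_speech(mic_signal):
        return  # blank the muscle input while the wearer is speaking
    command = match_command_pattern(sensor_signal)
    if command is not None:
        controller.send_control_signal(command)  # to the controlled device
        controller.pulse_vibration_motor()       # haptic confirmation
```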

Claims (14)

1. An activation accessory for a controlled device, comprising:
a wearable module comprising a sensor communicatively coupled to a controller having a first output coupled to a control signal interface and an input capable of receiving a signal generated by a microphone, the controller comprising a processor and a memory coupled to the processor and storing instructions executable by the processor, the instructions when executed by the processor causing the processor to perform steps comprising:
receiving a first input signal from the sensor and a second input signal from the microphone,
evaluating the first input signal to determine whether it represents a command for the controlled device by evaluating the second input signal to determine whether the second input signal represents speech of the wearer and, if so, ignoring the first input signal, and, if not, evaluating the first input signal for a signal pattern indicative of a plurality of voluntary actions of the wearer of the wearable module, and
if the processor determines that the first input signal represents a command, decoding the command and (i) transmitting an associated control signal to the control signal interface and (ii) transmitting an activation signal to a vibration motor, and otherwise transmitting neither the control signal nor the activation signal and continuing to evaluate subsequent input signals from the sensor in a manner similar to the first input signal; and
a communication element coupled to the control signal interface and adapted to transmit control signals from the processor to the controlled device.
2. The activation accessory of claim 1, wherein the sensor is one of a hall effect sensor, an Electromyogram (EMG) sensor, a piezoelectric switch, a tape switch, a fabric switch, an optical sensor, or a proximity sensor.
3. The activation accessory of claim 1, wherein the sensor is configured to detect movement of the wearer's mandible by physical, optical, or proximity measurement.
4. The activation accessory of claim 1, wherein the sensor is configured to detect movement of a muscle of the wearer by physical, optical, or proximity measurement.
5. The activation accessory of claim 1, wherein the voluntary actions of the wearer comprise maxillofacial movements.
6. The activation accessory of claim 1, wherein the instructions, when executed by the processor, cause the processor to evaluate the second input signal to determine whether it represents speech of the wearer by one or more of: employing a threshold filter to distinguish the wearer's speech, evaluating spectral content of the second input signal, and detecting vocal cord activation of the wearer, via a second sensor associated with the activation accessory, while the wearer's speech is being formed.
7. The activation accessory of claim 1, wherein the instructions, when executed by the processor, cause the processor to execute one or more rules upon detection of the wearer's voice.
8. The activation accessory of claim 7, wherein the one or more rules include decoding a command relayed by voice input and then signaling execution of the command.
9. The activation accessory of claim 7, wherein the one or more rules include executing a voice recognition routine to ensure that only an authorized wearer, as determined by voice recognition, can issue commands via voluntary movements.
10. A wearable module configured to detect muscle movement and control an electronic device, the wearable module comprising: a switch configured to contact a wearer such that the switch provides a first input signal to a controller when the wearer performs a voluntary movement of a region of the wearer's body beneath the switch; a microphone coupled to provide a second input signal to the controller; and the controller, coupled to receive the first input signal and the second input signal, wherein the controller is configured to distinguish commands for the controlled device from speech of the wearer based on the first input signal and the second input signal, and, in response to a determination that the voluntary movement is associated with a command for the controlled device, to decode the command and transmit an associated control signal for the controlled device.
11. The wearable module of claim 10, wherein the switch is configured to be worn near an exterior of a wearer's face and to be activated/deactivated by movement of the wearer's mandible.
12. The wearable module of claim 10, wherein the switch is mounted on a headset.
13. The wearable module of claim 10, wherein the controller is configured to control operation of the controlled device in response to different actuations of the switch.
14. The wearable module of claim 10, wherein the controller is configured to distinguish commands for the controlled device from the wearer's speech by one or more of: evaluating the second input signal using a threshold filter to distinguish the wearer's speech, evaluating spectral content of the second input signal, and detecting, from a second sensor, a signal indicative of vocal cord activation of the wearer while the wearer's speech is being formed.
CN202280029744.9A 2021-04-21 2022-04-19 Method for excluding a speech muscle movement control system Pending CN117222967A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US63/201,280 2021-04-21
US63/232,084 2021-08-11
US202163261052P 2021-09-09 2021-09-09
US63/261,052 2021-09-09
PCT/US2022/025324 WO2022225912A1 (en) 2021-04-21 2022-04-19 Methods for voice blanking muscle movement controlled systems

Publications (1)

Publication Number Publication Date
CN117222967A 2023-12-12

Family

ID=89039405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280029744.9A Pending CN117222967A (en) 2021-04-21 2022-04-19 Method for excluding a speech muscle movement control system

Country Status (1)

Country Link
CN (1) CN117222967A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination