WO2023239350A1 - Dental sensor - Google Patents

Dental sensor Download PDF

Info

Publication number
WO2023239350A1
Authority
WO
WIPO (PCT)
Prior art keywords
disposed
patient
information
sensor
user
Prior art date
Application number
PCT/US2022/032410
Other languages
French (fr)
Inventor
Scott W. Lewis
Original Assignee
Percept Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Percept Technologies, Inc. filed Critical Percept Technologies, Inc.
Publication of WO2023239350A1 publication Critical patent/WO2023239350A1/en

Links

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6813Specially adapted to be attached to a specific body part
    • A61B5/6814Head
    • A61B5/682Mouth, e.g., oral cavity; tongue; Lips; Teeth
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0015Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B5/0024Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system for multiple sensor units attached to the patient, e.g. using a body or personal area network
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/45For evaluating or diagnosing the musculoskeletal system or teeth
    • A61B5/4538Evaluating a particular part of the musculoskeletal system or a particular medical condition
    • A61B5/4542Evaluating the mouth, e.g. the jaw
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0015Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B5/0022Monitoring a patient using a global network, e.g. telephone networks, internet

Definitions

  • the mouth (and more generally, oral structures) provides access to a rich source of information about the user’s medical conditions or information.
  • the mouth is also a rich source of information provided by the user to other persons and devices (such as by mouth gestures and by voice).
  • One problem in the known art is that access to the mouth is often overloaded by multiple functions; this can have the effect that access to one set of information prevents access to other sets of information.
  • One method of access to information from the mouth is to dispose equipment in or near a user’s mouth, so as to transmit that information from the mouth to other persons or devices, while concurrently allowing access to other information from the mouth. While this method can provide access to more than one set of information from the mouth, it has the drawback that it can overload the user’s mouth with equipment, thus inconveniencing the user and interfering with concurrent use of the mouth for more than one function.
  • This Application describes a system, and methods of use, capable of sensing information about a user and providing one or more processes with respect to that information.
  • the processes can include communication with an external device, or one or more processes with respect to a medical condition or information, one or more techniques for communication at the user’s behest, one or more techniques for altering sensory information with respect to the user, or otherwise.
  • a system can include a sensor, coupled to a user, disposed to couple information between the user and an external device disposed to review information from the sensor and perform the one or more processes described herein.
  • the sensor can be disposed within the user’s tooth or other dental structure and can be disposed to detect medical information with respect to the user, whether actively or passively.
  • the sensor can be coupled to a blood vessel and can be disposed to passively detect information such as blood pressure or pulse rate.
  • the sensor can be coupled to a sonic emitter and can be disposed to actively emit sonic signals and detect responses, so as to determine cavities or other anomalies in the user’s dental structures.
  • the system can also include a power source disposed to power the sensor and any devices coupled thereto.
  • the power source can include a battery or another energy storage element, which can be disposed to be recharged, such as by an external inductor disposed to receive externally stored energy, an energy harvester disposed to receive energy from local electromagnetic fields, an internal inductor disposed to receive energy from the user’s mouth or head movements, or another noninvasive energy recharging technique.
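The recharging arrangement just described implies a simple power budget: the energy the storage element must hold, divided by the power the inductor or harvester can deliver. The sketch below is a minimal illustration of that arithmetic; the capacity, voltage, power, and efficiency figures are assumptions for illustration, not values from this Application.

```python
# Minimal sketch (illustrative assumptions, not specifications from this
# Application): estimating recharge time for the sensor's energy storage
# element from a small harvested or inductively coupled power budget.

def recharge_time_hours(capacity_mah: float, voltage_v: float,
                        harvested_mw: float, efficiency: float = 0.7) -> float:
    """Hours to recharge a battery of `capacity_mah` at `voltage_v`
    given `harvested_mw` of incoming power and a charging efficiency."""
    energy_mwh = capacity_mah * voltage_v   # stored energy, milliwatt-hours
    usable_mw = harvested_mw * efficiency   # power actually delivered to storage
    return energy_mwh / usable_mw

# Example: a 1 mAh, 3 V micro-battery charged from 2 mW of coupled power
# at 70% efficiency.
print(f"{recharge_time_hours(1.0, 3.0, 2.0):.1f} h")   # ~2.1 h
```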
  • the sensor can be coupled to a computing device and/or a communication device.
  • the computing device can be disposed to encode/decode information in exchange with the sensor, communicate with the communication device or with one or more other devices (whether internal or external to the user), or to perform processes with respect to one or more medical conditions or information (as described herein).
  • the communication device can include a transceiver disposed to couple the sensor and/or the computing device to the one or more other devices.
  • an external device can include one or more of: an augmented reality or virtual reality system, a lens or eyewear such as described in the Incorporated Disclosures, a smartphone or other mobile device, an audio/video presentation device, an electromagnetic or ultrasonic transceiver/transmitter, one or more vehicle controls, a remote computing system, a remote database or other storage system, or a remote presentation system.
  • FIG. 1 shows a conceptual drawing of an example system, including a sensor disposed in a dental position.
  • FIG. 2 shows a conceptual drawing of an example method of using a system.
  • This Application describes a system, and methods of use, capable of sensing information about a user and providing one or more processes with respect to that information.
  • the processes can include communication with an external device, or one or more processes with respect to a medical condition or information, or one or more techniques for communication at the user’s behest.
  • a system can include a sensor disposed to couple information between the user and either a computing device or a communication device.
  • the computing device can be disposed to encode/decode information in exchange with the sensor, communicate with the external device, or perform processes with respect to one or more medical conditions or information.
  • the communication device can include a transceiver disposed to couple the sensor or the computing device to one or more devices internal or external to the user.
  • the system can also include a power source disposed to power the sensor, the computing device, or the communication device.
  • the power source can include a battery or another energy storage element, which can be disposed to be recharged, such as by an external inductor disposed to receive externally stored energy, an energy harvester disposed to receive energy from local electromagnetic fields, an internal inductor disposed to receive energy from the user’s mouth or head movements, or another noninvasive energy recharging technique.
  • the sensor can be disposed within a user’s tooth or other dental structure and disposed to detect medical information with respect to the user, whether actively or passively.
  • a passive sensor within a user’s tooth can be coupled to a blood vessel and be disposed to detect information obtainable from the blood vessel.
  • an active sensor within a user’s dental structure can be disposed to emit energy (such as electromagnetic or ultrasonic signals, or otherwise as described herein) and, possibly in collaboration with another device, determine the presence of any user medical conditions or other information. This can have the effect of allowing the sensor to report information about user medical conditions or other information in real time and without the need for the presence of medical personnel or special equipment.
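As one illustration of the active-sensing idea above, the following minimal sketch compares a pulse-echo return against a stored per-patient baseline and flags a large deviation. The normalization, threshold, and `echo_anomaly` interface are assumptions for illustration, not the algorithm of this Application.

```python
# Minimal sketch (assumption, not this Application's algorithm): flagging a
# possible cavity or other anomaly by comparing a pulse-echo return against
# a previously recorded baseline echo for the same dental structure.
import numpy as np

def echo_anomaly(baseline: np.ndarray, echo: np.ndarray,
                 threshold: float = 0.3) -> bool:
    """Return True when the normalized difference between the measured echo
    and the stored baseline echo exceeds `threshold`."""
    baseline = baseline / (np.linalg.norm(baseline) + 1e-12)
    echo = echo / (np.linalg.norm(echo) + 1e-12)
    return float(np.linalg.norm(echo - baseline)) > threshold
```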
  • a sensor is disposed to exchange information with the user in response to the sensor’s oral location, such as when the user moves their head, performs an oral gesture, or makes a speaking motion (such as with a cheek, the jaw, one or more teeth, the tongue, or by breathing).
  • the user’s oral gesture can be measured, and information exchanged with an external device in response thereto.
  • Information from an external device can be provided to the user at or near an oral location (such as to an ear canal, an oropharyngeal canal, a sinus cavity or other void defined by the user’s head or neck, or by stimulation of bony structures in the user’s head or neck).
  • a sensor is disposed to exchange information with medical devices coupled to other user anatomy, such as one or more audio/video devices (such as eyewear as described in the Incorporated Disclosures), one or more hearing aids, a heart monitor, a pacemaker, a separate blood glucose monitor, a separate blood pressure monitor, a separate blood oxygenation monitor, an ultrasonic or x-ray device disposed to determine one or more features of the user’s anatomy, one or more other devices disposed to provide an augmented reality or virtual reality presentation to the user, or one or more other devices disposed to otherwise stimulate one or more elements of the user’s anatomy.
  • the sensor can be disposed to determine information with respect to an impulse response, a phonic modification, or another set of phonic features of the user’s dental or oral elements.
  • the computing device can be disposed to receive that information from the sensor and to adjust an oral output from the user in response thereto, so as to improve an audio acuity of the user’s voice at an intended receiver. For example, when the user reduces the volume of their voice so as to avoid being overheard, the computing device can determine an intended message and can amplify that message at a focal point at the intended receiver’s location.
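One conventional way to "amplify a message at a focal point", as described above, is delay-and-sum focusing: each emitter is delayed so that all wavefronts arrive at the receiver's location at the same time. The sketch below illustrates only the delay computation; the 2-D geometry, emitter list, and function names are assumptions for illustration.

```python
# Minimal sketch (assumption): per-emitter delays for focusing audio energy at
# a receiver location, as in delay-and-sum beamforming. Geometry is illustrative.
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def focusing_delays(emitters: list[tuple[float, float]],
                    target: tuple[float, float]) -> list[float]:
    """Delays (seconds) so all emissions arrive at `target` simultaneously."""
    dists = [math.dist(e, target) for e in emitters]
    farthest = max(dists)
    # Nearer emitters wait longer, so every wavefront coincides at the target.
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]
```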
  • eyewear generally refers to any device coupled to a wearer’s (or other user’s) input senses, including without limitation: glasses (such as those including lens frames and lenses), contact lenses (such as so-called “hard” and “soft” contact lenses applied to the surface of the eye, as well as lenses implanted in the eye), retinal image displays (RID), laser and other external lighting images, “heads-up” displays (HUD), holographic displays, electro-optical stimulation, artificial vision induced using other senses, transfer of brain signals or other neural signals, headphones and other auditory stimulation, bone conductive stimulation, wearable and implantable devices, and other devices disposed to influence (or be influenced by) the wearer.
  • the digital eyewear can be wearable by the user, either directly as eyeglasses or as part of one or more clothing items, or implantable in the user, either above or below the skin, in or on the eyes (such as contact lenses), or otherwise.
  • the digital eyewear can include one or more devices operating in concert, or otherwise operating with other devices that are themselves not part of the digital eyewear.
  • the phrases “highlight”, “de-highlight”, and variants thereof, generally refer to emphasizing or de-emphasizing a view of an object in a field of view.
  • a “message” generally refers to any signal, or representation thereof, including or representing information received or sent, or to be received or sent.
  • a message can include a string of words, whether presented on a screen or transmitted using electromagnetic signals.
  • a message component can include any portion, in whole or in part, of a message.
  • a mobile device generally refers to any relatively portable device disposed to receive inputs from, and provide outputs to, one or more users.
  • a mobile device can include a smartphone, an MP3 player, a laptop or notebook computer, a computing tablet or phablet, or any other relatively portable device disposed to be capable as further described herein.
  • the mobile device can include input elements such as a capacitive touchscreen; a keyboard; an audio input; an accelerometer or haptic input device; an input coupleable to an electromagnetic signal, to an SMS or MMS signal or a variant thereof, to an NFC or RFID signal or a variant thereof, to a signal disposed using TCP/IP or another internet protocol or a variant thereof, to a signal using a telephone protocol or a variant thereof; another type of input device; or otherwise.
  • the term “periodic”, the phrase “from time to time”, and variants thereof, generally refers to any timing process or technique other than continuously.
  • a periodic operation can include one that occurs at regular intervals, at random intervals, at irregular intervals that are not random or are subject to internal or external factors, or otherwise.
  • the term “periodic” generally refers to an operation that occurs at regular intervals, or nearly so; the phrase “from time to time” generally refers to an operation that occurs at other intervals, such as when triggered by a condition or event.
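The two timing styles just distinguished can be illustrated with a minimal sketch; the function names, iteration counts, and the polling interval below are assumptions for illustration, not anything specified by this Application.

```python
# Minimal sketch (assumption, not from this Application): a strictly periodic
# operation versus a "from time to time" operation gated by a trigger condition.
import time

def run_periodic(step, interval_s: float, iterations: int) -> None:
    """Perform `step` at regular intervals."""
    for _ in range(iterations):
        step()
        time.sleep(interval_s)

def run_from_time_to_time(step, trigger, iterations: int) -> None:
    """Perform `step` only when `trigger()` reports a condition or event."""
    for _ in range(iterations):
        if trigger():
            step()
        time.sleep(0.01)   # polling granularity, an illustrative choice
```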
  • “real time” generally refers to timing, particularly with respect to sensory input or adjustment thereto, operating substantially in synchrony with real world activity, such as when a user is performing an action with respect to real world sensory input.
  • “real time” operation of digital eyewear with respect to sensory input generally includes user receipt of sensory input and activity substantially promptly in response to that sensory input, rather than user receipt of sensory input in preparation for later activity with respect to other sensory input.
  • a remote device generally refers to any device disposed to be accessed, and not already integrated into the accessing device, such as disposed to be accessed by digital eyewear.
  • a remote device can include a database or a server, or another device or otherwise, coupled to a communication network, accessible using a communication protocol.
  • a remote device can include one or more mobile devices other than a user’s digital eyewear, accessible using a telephone protocol, a messaging protocol such as SMS or MMS or a variant thereof, an electromagnetic signal such as NFC or RFID or a variant thereof, an internet protocol such as TCP/IP or a variant thereof, or otherwise.
  • “sensory input” generally refers to any input detectable by a human or animal user.
  • sensory inputs include audio stimuli such as in response to sound; haptic stimuli such as in response to touch, vibration, or electricity; visual stimuli such as in response to light of any detectable frequency; nasal or oral stimuli such as in response to aroma, odor, scent, taste, or otherwise; other stimuli such as balance; or otherwise.
  • “shade”, and variants thereof, generally refer to any technique for altering a sensory input, including but not limited to:
  • altering a selected set of frequencies associated with an auditory signal, such as to provide a “false frequency” image of an auditory signal not originally audible by the human ear, such as to provide an auditory signal in response to a sound outside the range of human hearing or other information ordinarily not available to human senses;
  • altering a sensory input other than auditory sensory inputs such as reducing/increasing an intensity of a haptic input or of another sense.
  • “signal input” generally refers to any input detectable by digital eyewear or other devices.
  • signal inputs can include
  • electromagnetic signals not detectable by human senses, such as signals disposed in a telephone protocol, a messaging protocol such as SMS or MMS or a variant thereof, an electromagnetic signal such as NFC or RFID or a variant thereof, an internet protocol such as TCP/IP or a variant thereof, or similar elements;
  • user input generally refers to information received from the user, such as in response to audio/video conditions, requests by other persons, requests by the digital eyewear, or otherwise.
  • user input can be received by the digital eyewear in response to an input device (whether real or virtual), a gesture (whether by the user’s eyes, hands, or otherwise), using a smartphone or controlling device, or otherwise.
  • a sensor can be disposed in a dental location, such as: in a tooth or under a portion thereof, in a cap or crown or under a portion thereof, between two teeth, cemented or otherwise coupled to a tooth, affixed to or implanted in or near a gum (or a cheek or the tongue) or other oral bone or tissue (such as a gum, jaw, or tongue), or affixed to or implanted in or near a dental device or other dental equipment (such as a cap, a denture, a filling, any orthodontia equipment, or any other oral medical device).
  • the sensor can be coupled to a blood vessel, coupled to a gap or void in dental matter or dental tissue, or otherwise disposed to measure a medical condition of the user.
  • the external device can include one or more of: an Amazon™ Echo™ or other active communication device, an augmented reality or virtual reality system, a Fitbit™ or other fitness or activity monitor, a lens or eyewear such as described in the Incorporated Disclosures, a smartphone or other mobile device, a speaker, an electromagnetic or ultrasonic transceiver/transmitter, one or more vehicle controls, a remote computing system, a remote database or other storage system, or a remote presentation system.
  • the processes with respect to the one or more medical conditions or information can include one or more of: predicting/detecting, monitoring/recording, treating/ameliorating, or encouraging self-care or exchanging information with medical personnel for care, with respect to the one or more medical conditions, such as described in the Incorporated Disclosures.
  • the medical conditions can include one or more of: blood glucose, blood oxygenation, blood pressure, pulse rate, or other measures possibly available from a sensor located in a dental position or near a blood vessel.
  • Fig. 1: Sensor disposed within a dental structure
  • FIG. 1 shows a conceptual drawing of an example system, including a sensor disposed in a dental position.
  • the body-facing portion 110 can be disposed primarily or substantially internally to a target mechanism 111b in a body or a device.
  • the body-facing portion 110 can be disposed inside, adjacent to, or affixed to, a mechanical structure suitable to be monitored or repaired, such as a hydraulic line or a turbine blade, or other structural elements of the target mechanism 111b.
  • the body-facing portion 110 can also be disposed to monitor and/or repair those structural conditions, where possible, such as by replacing elements related to those structural conditions with alternative elements.
  • the system 100 can also include an interface portion 120, intended to be coupled to the body-facing portion 110, and disposed to couple the body-facing portion to a user-facing portion 130.
  • the user-facing portion 130 can be disposed primarily or substantially external to the patient 111a or the target mechanism 111b.
  • the interface portion 120 can be disposed to exchange information between the body-facing portion 110 and the user-facing portion 130, such as by receiving information from the body-facing portion and sending that information to the user-facing portion, or vice versa.
  • a system 100 can also include the user-facing portion 130, intended to be disposed primarily or substantially externally to the user (as described herein).
  • the system 100 can be disposed to present information to the user, and to receive commands/controls or other information from the user.
  • the system 100 can use the commands/controls or other information from the user to adjust or operate the body-facing portion 110, or any portion thereof.
  • the user-facing portion 130 can interface with a user 140 (not part of the system 100), such as a human being.
  • the user 140 can be the patient 111a described herein: the patient 111a is disposed to be monitored or treated and includes internal medical structures, such as described with respect to the body-facing portion 110.
  • the user 140 can include medical personnel or emergency responders other than the patient 111a: the medical personnel or emergency responders can monitor or treat the patient and/or the patient’s medical structures.
  • the user 140 would include medical personnel or emergency responders other than the patient 111a.
  • when the patient 111b is a non-living device, the user 140 would include engineering or scientific personnel, or a mechanic or other operational or repair personnel capable of examining and adjusting/repairing the patient 111b.
  • This can have the effect that the user 140 can exchange information with the system 100, such as by receiving information from the patient’s medical structures (or other structures) and/or sending commands/controls or other information to the system 100.
  • the user 140 can (A) receive audio/video or other information from the system 100, (B) send commands/controls or other information to the system, or (C) act otherwise consistent with elements and method steps described with this Application.
  • the body-facing portion 110 can be disposed primarily or substantially inside the patient 111a, such as a person, and can be disposed inside, adjacent to, or affixed to, a medical structure, such as a dental structure 112a, such as a blood vessel within a tooth.
  • the dental structure 112a can include a tooth 112a and a crown 112b coupled to the tooth, arranged so as to provide a location 112c in which one or more portions of the body-facing portion 110 can be disposed.
  • the patient can include a target mechanism 111b such as an electrical or mechanical device, such as a computing device, a heating/cooling system, an engine or motor, a vehicle (or a control portion thereof), or otherwise.
  • the target mechanism 111b might be subject to diagnosis/examination and adjustment/repair by engineering or scientific personnel, or by a mechanic or other operational or repair personnel.
  • the body-facing portion 110 can include a sensor 113 disposed in the described location 112c, the sensor 113 being disposed so as to exchange information with the dental structure 112, other medical structure, or mechanical structure (not shown).
  • While the sensor 113 is disposed in a location 112c defined in a dental structure 112a, such as between a tooth and a crown, there is no particular requirement for any such limitation.
  • the sensor 113 can be disposed in any alternative dental or alternative medical location, such as one or more of:
  • a dental device such as a denture, a filling, any type of orthodontia equipment, any other oral medical device, or any other dental medical equipment
  • the other medical structure can include a different medical structure 112a, such as possibly a lip, the tongue, the throat, the uvula, or a vocal cord.
  • the other medical structure 112a can include another part of the body, such as any blood vessel or other structure, any bony structure, any ligament or muscle, or in the case of a mechanical device, any structure therein (for example, in the case of an airplane, an electrical or mechanical structure 112b disposed in or near an engine, cabin, fuselage, weapons system, wing, or other part thereof).
  • the sensor 113 can be disposed to receive information from a selected portion of the medical structure 112, such as from a blood vessel 114.
  • the blood vessel 114 can provide information indicating blood glucose levels or other blood serum levels (including toxins), blood oxygenation, blood pressure, blood volume, pulse rate, white blood cell or platelet count, any other blood measure, or as otherwise consistent with the description in this Application.
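As a minimal illustration of deriving one such measure, the sketch below estimates pulse rate from a sampled pulsatile waveform by counting rising threshold crossings. The sampling rate, threshold, and signal model are assumptions for illustration, not the method of this Application.

```python
# Minimal sketch (assumption): estimating pulse rate from a waveform sampled
# at the blood vessel, via simple threshold-crossing peak detection.
import numpy as np

def pulse_rate_bpm(signal: np.ndarray, fs_hz: float) -> float:
    """Beats per minute from rising threshold crossings of a pulsatile signal."""
    centered = signal - np.mean(signal)
    thresh = 0.5 * np.max(np.abs(centered))
    above = centered > thresh
    beats = np.count_nonzero(above[1:] & ~above[:-1])  # rising edges only
    duration_min = len(signal) / fs_hz / 60.0
    return beats / duration_min

# Example: a clean 1.2 Hz (72 bpm) pulse sampled at 100 Hz for 10 s.
t = np.arange(0, 10, 0.01)
print(round(pulse_rate_bpm(np.sin(2 * np.pi * 1.2 * t), 100.0)))  # ~72
```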
  • While the sensor 113 can be disposed to receive information from a blood vessel 114, there is no particular requirement for any such limitation.
  • the sensor 113 can be disposed to receive information from one or more of:
  • a substance capable of indicating the presence of a toxin, or a structure subject to extreme treatment (such as an overheated or tired muscle);
  • a non-living device such as an avionics engine, or a non-living portion of a body such as a hip replacement, a mechanical structure suitable for monitoring or repair; or
  • the sensor 113 can include a receiver disposed to determine an audio image presented by the effect of the patient’s medical structures 112 and other structures, such as the patient’s mouth.
  • the receiver can operate by absorption, reflection, refraction, sonic interference, or otherwise consistent with description herein, with respect to the audio energy emitted by the active emitter 115 and returned to the sensor 113. This can have the effect that the sensor 113 can determine the presence or absence of a cavity or other void in the patient’s medical structures 112 (such as dental structures 112).
  • the cavity or other void might be due to tooth decay, orthodontic misalignment, misalignment of a filling with a cavity or other void, misconstruction of a material used in a filling, or another medical source of a dental cavity or void.
  • the sensor 113 and the active emitter 115 can determine the accidental presence of dental equipment, or other foreign objects, in the patient’s mouth.
  • the patient’s dental structures 112 might present, to the sensor 113, one or more unusual spots or other features in an audio or electromagnetic image. This can possibly indicate an effect of excessive reflection of the audio or electromagnetic emission back to the sensor 113, such as due to a metallic object in the patient’s mouth or dental structures 112.
  • the sensor 113 and the active emitter 115 can use other types of energy, such as audio energy above a human hearing frequency (ultrasound) or below a human hearing frequency (infrasound); electromagnetic energy in a microwave, infrared, ultraviolet, or x-ray frequency; or combinations or conjunctions thereof.
  • the sensor 113 and the active emitter 115 can impose a known signal on the emitted energy, such as using amplitude or frequency modulation, pulse-code or pulse-width modulation, code-division multiplexing, or another type of signal adapted to detecting selected medical conditions or information with respect to the patient’s medical structures 112 (such as dental structures 112).
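A matched filter is one standard way to recover such a known imposed signal from a noisy return. The sketch below illustrates the idea with a random binary code and cross-correlation; the code length, noise level, and delay are illustrative assumptions, not parameters from this Application.

```python
# Minimal sketch (assumption): imposing a known code on the emission and
# recovering it from the return with a matched filter (cross-correlation),
# so the sensor can distinguish its own signal from ambient energy.
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=64)          # known emitted code

delay, noise = 37, rng.normal(0.0, 0.5, 256)     # illustrative channel
received = noise.copy()
received[delay:delay + 64] += code               # echo buried in noise

correlation = np.correlate(received, code, mode="valid")
print(int(np.argmax(correlation)))               # recovers the delay: 37
```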
  • the sensor 113 and the active emitter 115 can be disposed to determine whether the patient’s medical structures include one or more of:
  • a non-living device such as an avionics engine, or a non-living portion of a body such as a hip replacement, a mechanical structure suitable for monitoring or repair; or
  • the sensor 113 can be disposed to exchange information with the user 140, or with an external device, in response to information deliberately generated by the patient 111a for communication.
  • the sensor 113 can be disposed to exchange information with the user 140 (as described herein) in response to the sensor’s physical location.
  • the patient 111a might move their head, perform an oral gesture, or make a speaking motion (such as with a cheek, the jaw, one or more teeth, the tongue, or by breathing).
  • the sensor 113 can be responsive to any change in location or velocity, any change in audio information (such as speech by the patient 111a), or any other change in the patient’s disposition, and can be disposed to exchange information with the user 140, or an external device, in response thereto.
  • the sensor 113 can adjust its behavior in response to commands or information from the user-facing portion 130.
  • the commands or information from the user-facing portion 130 can include information with respect to an ambient environment near the patient 111a.
  • one aspect of the ambient environment might include a loudness of sound near the patient 111a. This can have the effect that the computing device 121 and/or the communication device 122 can adjust their behavior, such as to provide noise cancellation or echo cancellation, in response to the patient’s voice and in response to the ambient environment.
  • the active emitter 115 can be disposed to provide information from an external device (not shown) to the patient 111a.
  • the external device can include a microphone or camera, a sighting scope or telescope, or another device disposed to receive external information.
  • the external device can be disposed to provide that information in an audio/video form, haptic form, or other detectable form, to the patient 111a, such as at or near an oral location (such as to an ear canal, an oropharyngeal canal, a sinus cavity or other void defined by the user’s head or neck, by stimulation of bony structures in the user’s head or neck), one or more other devices disposed to provide an augmented reality or virtual reality presentation to the user, or one or more other devices disposed to otherwise stimulate one or more elements of the user’s anatomy.
  • While the sensor 113 and/or active emitter 115 can operate without other devices to obtain information with respect to the patient 111a, the sensor 113 can also be disposed to exchange information with medical devices coupled to other user anatomy, such as one or more of the following:
  • audio/video devices such as eyewear as described in the Incorporated Disclosures
  • heart monitors or pacemakers separate from the body-facing portion 110;
  • ultrasonic or x-ray devices disposed to determine one or more features of the user’s anatomy, separate from the body-facing portion 110.
  • the treatment device 116 can include an insulin pump or another device disposed to inject the patient 111a with, or otherwise infuse into the patient 111a, a medical treatment.
  • the sensor 113 and/or active emitter 115 can operate in a feedback loop with the treatment device 116, under control of the computing device 121 or under control of the user-facing portion 130. This can have the effect that the sensor 113 and/or active emitter 115 can provide nearly immediate information with respect to a medical status of the patient 111a, allowing the treatment device 116 to provide a nearly immediate medical response to that medical status.
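A minimal sketch of one pass of such a feedback loop appears below, assuming hypothetical `read_glucose_mg_dl` and `infuse_units` interfaces standing in for the sensor 113 and the treatment device 116; the target and gain are illustrative only and not a medical protocol.

```python
# Minimal sketch, an assumption only and not a medical protocol: one pass of
# the feedback loop, with a sensor reading driving a treatment-device command.
# `read_glucose_mg_dl` and `infuse_units` are hypothetical interfaces.

def feedback_step(read_glucose_mg_dl, infuse_units,
                  target: float = 110.0, gain: float = 0.01) -> float:
    """Measure, compare against the target, and command the device."""
    level = read_glucose_mg_dl()       # reading from the sensor
    error = level - target
    dose = max(0.0, gain * error)      # never command a negative dose
    infuse_units(dose)                 # command to the treatment device
    return dose
```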
  • the sensor 113 (and possibly other elements of the system 100) can be coupled to a power source 117a.
  • the power source 117a can include a battery or capacitor.
  • the power source 117a can also be disposed to receive energy so as to charge itself, or at least to decrease a rate at which it discharges, in response to energy from a charger 117b.
  • the charger 117b can be disposed to receive energy from external to the power source 117a.
  • the charger 117b can include a battery, capacitor, or inductor disposed to generate an electromagnetic field from which the power source 117a can receive and store energy for later use by the sensor 113 or other devices.
  • the charger 117b can include an electromagnetic receiver (such as an antenna disposed to charge a battery or capacitor, in response to an ambient electromagnetic field, or in response to an electromagnetic field generated at least in part by the patient’s body).
  • the electromagnetic receiver can be disposed to receive electromagnetic energy from ambient electromagnetic fields generated by one or more of:
  • the electromagnetic receiver can be disposed to receive electromagnetic energy using an antenna, such as one tuned to a frequency associated with a power source associated with the electrical appliance (such as a frequency of about 60 Hz).
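Tuning a pickup coil or antenna to about 60 Hz is conventionally a matter of choosing an LC pair whose resonant frequency f = 1/(2π√(LC)) matches the line frequency. The sketch below is a minimal illustration of that calculation; the 10 H inductance is an assumed value, not one from this Application.

```python
# Minimal sketch (assumption): choosing an LC pair resonant near the 60 Hz
# power-line frequency mentioned above, using f = 1 / (2*pi*sqrt(L*C)).
import math

def resonant_capacitance_f(frequency_hz: float, inductance_h: float) -> float:
    """Capacitance (farads) that tunes `inductance_h` to `frequency_hz`."""
    return 1.0 / ((2.0 * math.pi * frequency_hz) ** 2 * inductance_h)

# Example: a 10 H pickup coil tuned to 60 Hz needs roughly 0.70 uF.
print(f"{resonant_capacitance_f(60.0, 10.0) * 1e6:.2f} uF")
```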
  • the charger 117b can include a device disposed to capture chemical energy, such as in response to a chemical reaction generated at least in part by the patient’s body.
  • the interface portion 120 can be coupled to the body-facing portion 110 and disposed to interface between the body-facing portion and the user-facing portion 130.
  • the interface portion 120 can include a computing device 121 coupled to the body-facing portion 110, such as to the sensor 113 and/or the active emitter 115.
  • the computing device 121 can be coupled to: (A) both the sensor 113 and the active emitter 115, (B) only the sensor 113, which is coupled to the active emitter 115, or (C) only the active emitter 115, which is coupled to the sensor 113. In each such case, the computing device 121 can exchange messages with both the sensor 113 and the active emitter 115.
  • the computing device 121 can be disposed to encode information into a first format used by the sensor 113 or the active emitter 115, or to decode information from a second format used by the sensor 113 or the active emitter 115.
  • the first format and the second format will be the same or similar, although in at least some cases they need not be.
  • the computing device 121 can be disposed to decode commands/controls from the user 140 (or a device being used by the user 140) and to encode those commands/controls for use by the sensor 113 or the active emitter 115; similarly, the computing device 121 can be disposed to decode information from the sensor 113 or the active emitter 115 and to encode that information for the user 140 (or a device being used by the user 140).
  • the encoded or decoded commands/controls or information, or both, can include error-detection/correction coding, so as to provide relatively robust communication between the computing device 121 and either the sensor 113 or the active emitter 115.
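As one minimal illustration of such error-detection coding, the sketch below appends a CRC-8 check byte to each frame; the polynomial and framing are assumptions for illustration, not a format specified by this Application.

```python
# Minimal sketch (assumption): a CRC-8 check byte appended to each message
# between the computing device 121 and the sensor/emitter, so corrupted
# frames can be detected and re-requested. Polynomial 0x07 is illustrative.

def crc8(data: bytes, poly: int = 0x07) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def frame(payload: bytes) -> bytes:
    """Append the check byte to a payload."""
    return payload + bytes([crc8(payload)])

def check(framed: bytes) -> bool:
    """Verify a received frame against its check byte."""
    return crc8(framed[:-1]) == framed[-1]

print(check(frame(b"\x12\x34")))   # True; flipping any bit makes this False
```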
  • the computing device 121 can be disposed to perform one or more processes with respect to information from the sensor 113 (whether or not the information from the sensor 113 was provided in response to the active emitter 115).
  • the computing device 121 can be disposed to perform one or more processes with respect to information from the sensor 113 or the active emitter 115, including the same processes described herein with respect to other medical conditions. Other and further details with respect to operation of the computing device 121 to perform such processes are shown and described in the Incorporated Disclosures.
  • migraine onset and migraine events (collectively, “migraines” or “migraine activity”), photophobia, or neuro-ophthalmic disorders;
  • the commands or information from the user-facing portion 130 can also include information with respect to an ambient environment near the patient 111a.
  • the ambient environment might include a loudness of sound near the patient 111a.
  • the sensor 113 can determine information with respect to the ambient environment; in such cases, the sensor can use that information to determine its own set point, or a baseline for a “normal” patient’s response thereto.
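One simple way for a sensor to maintain such a set point is an exponential moving average of recent readings, flagging values that stray too far from it. The sketch below illustrates this under assumed smoothing and threshold parameters; it is not the set-point method of this Application.

```python
# Minimal sketch (assumption): tracking an ambient baseline with an
# exponential moving average and flagging readings far from that set point.

class BaselineTracker:
    def __init__(self, alpha: float = 0.05, deviation_limit: float = 3.0):
        self.alpha = alpha                      # smoothing factor
        self.deviation_limit = deviation_limit  # anomaly threshold
        self.baseline = None

    def update(self, reading: float) -> bool:
        """Fold `reading` into the baseline; return True if it is anomalous."""
        if self.baseline is None:
            self.baseline = reading
            return False
        anomalous = abs(reading - self.baseline) > self.deviation_limit
        self.baseline += self.alpha * (reading - self.baseline)
        return anomalous
```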
  • the sensor 113 can be disposed to send that information to the user-facing portion 130, such as for presentation to the user 140.
  • the user-facing portion 130 can be disposed substantially externally to the patient 111a or substantially externally to the target mechanism 111b.
  • the computing device 121 and the communication device 122 can be disposed to send commands and information from the user-facing portion 130 to the body-facing portion 110 (or a selected part thereof), and to receive responses therefrom. This can have the effect that the user-facing portion 130 can be disposed as a controller and monitor for the body-facing portion 110 (or a selected part thereof).
  • the app 132 can be disposed to maintain information with respect to the patient 111a in the database 134, storing that information from time to time (periodically or in response to one or more triggering events) and retrieving that information when needed.
  • the app 132 can be disposed to send commands to, and receive responses from, the server 133, so as to use one or more capabilities thereof.
  • the server 133 can be disposed to provide computing power, specialized hardware or software, one or more virtual machines, or otherwise as described herein.
  • the specialized hardware or software, or the virtual machines, can provide an artificial intelligence or machine learning system disposed to interpret and/or control the body-facing portion 110.
  • the user 140 can include a mechanical engineer or another mechanic, operators associated with the target mechanism, or other persons knowledgeable with respect to the target mechanism.
  • the user 140 can be (or include) a computing device 142a or a computer-controlled device 142b.
  • the computing device 142a can include an artificial intelligence or machine learning system disposed to control the body-facing portion 110, such as to predict or detect, monitor or record, prevent or treat, or encourage self-care or call for other care, with respect to one or more medical conditions (such as migraines or photophobia).
  • the computer-controlled device 142b can include a device capable of one or more of: communication with a server or other logically remote device, such as an Amazon™ Echo™ or other active communication device;
  • One or more portions of the method 200 are sometimes described as being performed by particular elements of the system 100, as described with respect to FIG. 1, or sometimes by “the method” itself.
  • When a flow point or method step is described as being performed by “the method,” it can be performed by one or more of those elements, by one or more portions of those elements, by an element not described with respect to the figure, by a combination or conjunction thereof, or otherwise.
  • This Application primarily describes one possible implementation, in which one or more elements of the system 100 include programmable computing devices interacting with other elements.
  • there is no particular requirement for any of these limitations; for example, one or more electronic circuits, one or more specialized processing devices operating using specialized control elements, or one or more remote computing devices directing operation of one or more of the sensing and operating devices described, or otherwise, can achieve the same or similar effects as described herein.
  • a flow point 200A indicates a beginning of the method.
  • the patient 111a or a user 140 can direct the system 100 to begin the method 200.
  • the system 100 and its elements can set/reset their states and can prepare to conduct their respective portions of the method 200.
  • the method 200 can operate continuously. Alternatively, the method 200 can operate from time to time (periodically or aperiodically), waking up to perform its steps and going back to sleep until its next wake-up. Alternatively, the method 200 can operate when triggered, such as by the patient 111a or a user 140, or such as by the sensor 113, or such as by an input from the user-facing portion 130 receiving a message from an external device.
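The continuous, periodic, and triggered modes just described can be folded into one duty-cycled loop, as in the minimal sketch below; the `sample` and `triggered` callables and the polling granularity are assumptions for illustration.

```python
# Minimal sketch (assumption): the three operating modes above (continuous,
# periodic wake/sleep, and externally triggered) as one duty-cycled loop.
import time

def run(sample, interval_s: float = 1.0, triggered=None) -> None:
    """`sample` performs one pass of method 200; `triggered`, if given,
    is polled so an external event can wake the method early."""
    while True:
        sample()                               # perform the method's steps
        if triggered is not None:
            deadline = time.monotonic() + interval_s
            while time.monotonic() < deadline and not triggered():
                time.sleep(0.01)               # light sleep, wake on trigger
        else:
            time.sleep(interval_s)             # plain periodic wake-up
```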
  • these techniques for triggering the method 200 are exemplary only; there is no particular requirement for any such limitation.

Operation of body-facing portion
  • a flow point 210 indicates that the body-facing portion 110 is ready to operate.
  • the elements of the body-facing portion 110 are disposed so as to perform their described functions.
  • the sensor 113 and the active emitter 115 are disposed in the dental structure 112.
  • the sensor 113 and the active emitter 115 are disposed in an electrical or mechanical structure 112.
  • the treatment device 116 is disposed in the dental structure 112.
  • the treatment device 116 is disposed in the electrical or mechanical structure 112.
  • the power source 117 is coupled to the body-facing portion 110. As described herein, the power source 117 can also be coupled to the interface portion 120. In one embodiment, the power source 117 can be disposed in the dental or other body structure 112, or in the electrical or mechanical structure 112. However, there is no particular requirement for any such limitation.
  • the sensor 113 receives medical information from the blood vessel 114 or other body structure.
  • the sensor 113 receives technical information from the electrical or mechanical structure 112.
  • the medical information can include one or more elements of data with respect to the patient 111a or the target mechanism 111b.
  • the active emitter 115 sends an energetic signal into the dental structure or other body structure 112.
  • the active emitter 115 sends an energetic signal into the electrical or mechanical structure 112.
  • the energetic signal can include an electromagnetic signal, an ultrasonic signal, or another signal, possibly encoded so as to maximize return of useful information from the patient 111a or the target mechanism 111b.
  • the sensor 113 and the active emitter 115 send their information to the computing device 121.
  • the sensor 113 and the active emitter 115 can each use a selected format for the information they send.
  • one or more of the sensor 113 or the active emitter 115 can send their information directly to the communication device 122.
  • the treatment device 116 is disposed and ready to perform treatment of a selected medical condition.
  • the treatment device 116 receives a message from the interface portion 120 directing it to perform treatment of the patient 111a. Alternatively, the treatment device 116 receives a message from the interface portion 120 directing it to perform treatment of the target mechanism 111b.
  • the elements of the body-facing portion 110 are disposed so as to draw energy from the power source 117.
  • the sensor 113, the active emitter 115, and the treatment device 116 draw energy from the power source 117.
  • the interface portion 120 can also draw energy from power source 117.
  • a flow point 220 indicates that the interface portion 120 is ready to operate.
  • the computing device 121 is disposed so as to perform functions with respect to selected medical conditions.
  • the computing device 121 is disposed so as to perform functions with respect to selected electrical or mechanical conditions. As described herein, these functions can also be performed by an external device, such as in the user-facing portion 130.
  • the computing device 121 is ready to receive information from the sensor 113 or the active emitter 115.
  • the computing device 121 performs one or more operations on the information it receives from the sensor 113 or the active emitter 115.
  • the computing device 121 can be disposed to determine, with respect to the patient 111a, one or more of: prediction/determination of whether the patient 111a is subject to a selected medical condition, monitoring/recording information with respect to the selected medical condition, determining how to prevent/treat the selected medical condition, or determining how to encourage the patient 111a to perform self-care or to exchange information with other personnel to perform care with respect to the selected medical condition.
  • the computing device 121 can be disposed to determine, with respect to the target mechanism 111b, one or more of: prediction/determination of whether the target mechanism 111b is subject to a selected electrical or mechanical condition, monitoring/recording information with respect to the selected electrical or mechanical condition, determining how to prevent/repair the selected electrical or mechanical condition, or determining how to encourage personnel associated with the target mechanism 111b (such as a pilot) to perform prevention/repair or to exchange information with other personnel to perform prevention/repair with respect to the selected electrical or mechanical condition.
  • the computing device 121 sends information with respect to prevention/treatment of the selected medical condition to the treatment device 116.
  • the treatment device 116 receives this information and performs treatment of the selected medical condition in response thereto.
  • the treatment device 116 receives this information and performs repairs of the selected electrical or mechanical condition in response thereto.
  • the computing device 121 decodes information from the particular formats associated with the sensor 113 and the active emitter 115.
  • At a sub-step 222b, the computing device 121 encodes information into the particular formats associated with the communication device 122.
  • the computing device 121 sends the encoded information to the communication device 122.
  • the computing device 121 can send the encoded information directly to the user-facing portion 130.
  • the communication device 122 is disposed so as to transmit information from the computing device 121 to the user-facing portion 130.
  • the communication device 122 receives information from the computing device 121 for delivery to the user-facing portion 130.
  • the communication device 122 can receive information directly from one or more of the sensor 113 or the active emitter 115 that is for delivery to the user-facing portion 130.
  • the communication device 122 sends the information it received from the computing device 121 to the user-facing portion 130.
  • the communication device 122 can send the information it received directly from one or more of the sensor 113 or the active emitter 115 to the user-facing portion 130.
  • the elements of the interface portion 120 are disposed so as to draw energy from the power source 117 (in the body-facing portion 110).
  • the computing device 121 and the communication device 122 draw energy from the power source 117.
  • the power source 117 recharges itself.
  • the method 200 can proceed with the flow point 200B.
  • a flow point 230 indicates that the user-facing portion 130 is ready to operate.
  • the elements of the user-facing portion 130 are disposed so as to perform their described functions.
  • the mobile device 131 is coupled to the interface portion 120 and disposed and ready to receive information therefrom, and disposed and ready to interact with the user 140.
  • the information includes medical information.
  • the information includes technical information from the electrical or mechanical structure 112.
  • the mobile device 131, under control of the app 132, receives information from the interface portion 120.
  • the mobile device 131, under control of the app 132, is disposed so as to perform functions with respect to selected medical conditions.
  • the mobile device 131, under control of the app 132, is disposed so as to perform functions with respect to selected electrical or mechanical conditions. As described herein, these functions can also be performed by one or more devices called upon by the mobile device 131, such as the server 133.
  • the mobile device 131 receives information from the interface portion 120.
  • the mobile device 131 can receive information directly from the body-facing portion.
  • the mobile device 131 performs one or more operations on the information it receives from the interface portion 120 (and possibly from the body-facing portion 110), such as operations described with respect to the sub-step 221c. As part of this sub-step, the mobile device 131 can call upon other devices, such as the server 133 (and possibly another external device).
  • the mobile device 131 provides information with respect to prediction/determination, monitoring/recording, and other information possibly useful to the user 140 at an output.
  • the mobile device 131 can call upon other devices, such as the server 133 (and possibly another external device).
  • the mobile device 131 can send information with respect to monitoring/recording to the database 134 (and possibly another external device).
  • the mobile device 131 provides information with respect to prevention/treatment of the selected medical condition to the interface portion 120 for delivery to the treatment device 116, as described with respect to the sub-step 221d. As part of this sub-step, the mobile device 131 can call upon other devices, such as the server 133.
  • the mobile device 131 is disposed so as to receive commands or information from the user 140 for delivery to the body-facing portion 110.
  • the mobile device 131 receives information from the user 140 for delivery to the body-facing portion 110.
  • the mobile device 131 sends the information it received from the user 140 to the interface portion 120 for delivery to the body-facing portion 110.
  • the mobile device 131 receives responses from the interface portion 120 for delivery to the user 140.
  • the mobile device 131 provides the responses it received from the interface portion 120 to the user 140.
  • the method 200 can proceed with the flow point 200B.
  • a flow point 200B indicates an end of the method.
  • the patient 111a or a user 140 can direct the system 100 to end the method 200.
  • the system 100 and its elements can set/reset their states and can prepare to restart their respective portions of the method 200.
  • the method can be restarted at the flow point 200A.

Abstract

A system includes a sensor disposed in a dental location, coupled to a blood vessel or to dental matter or a void therein, measuring medical conditions or information, responsive to a user gesture or speaking motion. A computing device coupled to the sensor can encode/decode information in exchange with the sensor, communicate with an external device, or perform processes regarding the medical conditions or information. A communication device coupled to the sensor or the computing device can exchange information with devices internal/external to the user. An internal device can include: a medical device or sensor, or another device disposed to stimulate the user's anatomy. An external device can include: an augmented reality or virtual reality system, an activity monitor, digital eyewear, a smartphone, vehicle controls, or remote computing devices. A power source includes energy storage, rechargeable using noninvasive techniques.

Description

Dental sensor
Incorporated Disclosures
[1] This Application describes technologies that can be used with inventions, and other technologies, described in one or more of the following documents.
[2] This Application can be used with inventions, and other technologies, described in one or more of the following documents:
— Application 14/660,565, filed Mar. 17, 2015, naming the same inventor, titled “Enhanced Optical and Perceptual Digital Eyewear”, Attorney Docket No. 5266 C13, currently pending; which is a continuation of
— Application 14/589,817, filed Jan. 5, 2015, naming the same inventor, and having the same title, Attorney Docket No. 5266 C1C2, currently pending; which is a continuation of
— Application 14/288,189, filed May 27, 2014, naming the same inventor, and having the same title, Attorney Docket No. 5266 C1C, currently pending; which is a continuation of
— Application 13/965,050, filed Aug. 12, 2013, naming the same inventor, and having the same title, Attorney Docket No. 5266 C1, currently pending; which is a continuation of
— Application 13/841,141, filed Mar. 15, 2013, naming the same inventor, and having the same title, Attorney Docket No. 5266P, now issued as US 8,696,113 on Apr. 15, 2014.
[3] This Application can also be used with inventions, and other technologies, described in one or more of the following documents:
— Application 16/684,534, filed Nov. 14, 2019, naming the same inventor, titled “Dynamic visual optimization”, Attorney Docket No. 6601, currently pending; which is a continuation-in-part of
— Application 16/684,479, filed Nov. 14, 2019, naming the same inventor, titled “Dynamic visual optimization”, Attorney Docket No. 6501, currently pending; which is a continuation-in-part of
— Application 16/264,553, filed Jan. 31, 2019, naming the same inventor, titled “Digital eyewear integrated with medical and other services”, Attorney Docket No. 6401, currently pending; which is a continuation-in-part of
— Application 16/138,941, filed Sept. 21, 2018, naming the same inventor, titled “Digital eyewear procedures related to dry eyes”, Attorney Docket No. 6301, currently pending; which is a continuation-in-part of
— Application 15/942,951, filed Apr. 2, 2018, naming the same inventor, titled “Digital Eyewear System and Method for the Treatment and Prevention of Migraines and Photophobia”, Attorney Docket No. 6201, currently pending; which is a continuation-in-part of
— Application 15/460,197, filed March 15, 2017, naming the same inventor, titled “Digital Eyewear Augmenting Wearer’s Interaction with their Environment”, unpublished, Attorney Docket No. 6101, currently pending; which is a continuation-in-part of
— Application 14/660,565, filed Mar. 17, 2015, naming the same inventor, and having the same title, Attorney Docket No. 5266 C3, currently pending; which is a continuation of
— Application 13/841,550, filed Mar. 15, 2013, naming the same inventor, titled “Enhanced Optical and Perceptual Digital Eyewear”, Attorney Docket No. 5087 P, currently pending.
[4] Each of these documents is hereby incorporated by reference as if fully set forth herein. Techniques described in this Application can be elaborated with detail found therein. These documents are sometimes referred to herein as the “Incorporated Disclosures,” or variants thereof.
Copyright Notice
[5] A portion of the disclosure of this patent document contains material subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Background
[6] This background is provided as a convenience to the reader and does not admit to any prior art or restrict the scope of the disclosure or the invention. This background is intended as an introduction to the general nature of technology to which the disclosure or the invention can be applied.
[7] Field of the disclosure. This Application generally relates to a dental sensor disposed to be coupled to a user’s medical conditions or information, mouth gestures, or vocalizations.
[8] Related art. The mouth (and more generally, oral structures) provides access to a rich source of information about the user’s medical conditions or information. The mouth is also a rich source of information provided by the user to other persons and devices (such as by mouth gestures and by voice). One problem in the known art is that access to the mouth is often overloaded by multiple functions; this can have the effect that access to one set of information prevents access to other sets of information.
[9] One method of access to information from the mouth is to dispose equipment in or near a user’s mouth, so as to transmit that information from the mouth to other persons or devices, while concurrently allowing access to other information from the mouth. While this method can provide access to more than one set of information from the mouth, it has the drawback that it can overload the user’s mouth with equipment, thus inconveniencing the user and interfering with concurrent use of the mouth for more than one function.
[10] Each of these issues, as well as other possible considerations, might cause difficulty in aspects of obtaining concurrent multiple sets of information from a user’s mouth.
Summary of the Disclosure
[11] This summary of the disclosure is provided as a convenience to the reader and does not limit or restrict the scope of the disclosure or the invention. This summary is intended as an introduction to more detailed description found in this Application, and as an overview of techniques explained in this Application. The described techniques have applicability in other fields and beyond the embodiments specifically reviewed in detail.
[12] This Application describes a system, and methods of use, capable of sensing information about a user and providing one or more processes with respect to that information. The processes can include communication with an external device, or one or more processes with respect to a medical condition or information, one or more techniques for communication at the user’s behest, one or more techniques for altering sensory information with respect to the user, or otherwise.
[13] A system can include a sensor, coupled to a user, disposed to exchange information with an external device disposed to review information from the sensor and perform the one or more processes described herein. The sensor can be disposed within the user’s tooth or other dental structure and can be disposed to detect medical information with respect to the user, whether actively or passively. For one example, the sensor can be coupled to a blood vessel and can be disposed to passively detect information such as blood pressure or pulse rate. For another example, the sensor can be coupled to a sonic emitter and can be disposed to actively emit sonic signals and detect responses, so as to determine cavities or other anomalies in the user’s dental structures.
[14] The system can also include a power source disposed to power the sensor and any devices coupled thereto. For example, the power source can include a battery or another energy storage element, which can be disposed to be recharged, such as by an external inductor disposed to receive externally stored energy, an energy harvester disposed to receive energy from local electromagnetic fields, an internal inductor disposed to receive energy from the user’s mouth or head movements, or another noninvasive energy recharging technique.
[15] The sensor can be coupled to a computing device and/or a communication device. The computing device can be disposed to encode/decode information in exchange with the sensor, communicate with the communication device or with one or more other devices (whether internal or external to the user), or perform processes with respect to one or more medical conditions or information (as described herein). The communication device can include a transceiver disposed to couple the sensor and/or the computing device to the one or more other devices.
[16] The devices external to the user can include one or more devices
(A) suitable for communication with a server or other logically remote device, such as an Amazon™ Echo™ or other active communication device;
(B) suitable for performing specific processes with respect to medical information about the user, such as by an app operating on a smartphone and/or a remote server;
(C) suitable for recording or processing medical information with respect to the user, such as a Fitbit™ or other fitness or activity monitor;
(D) suitable for presenting information to the user or allowing the user to control one or more other devices.
For example, an external device can include one or more of: an augmented reality or virtual reality system, a lens or eyewear such as described in the Incorporated Disclosures, a smartphone or other mobile device, an audio/video presentation device, an electromagnetic or ultrasonic transceiver/transmitter, one or more vehicle controls, a remote computing system, a remote database or other storage system, or a remote presentation system.
[17] After reading this Application, those skilled in the art will recognize that while the disclosure includes descriptions of particular embodiments, there is no particular requirement for limitation to any one or more of them. Other and further embodiments within the scope and spirit of the disclosure would be workable, without new invention or undue experimentation.
Brief Description of the Figures
[18] In the figures, like references generally indicate similar elements, although this is not strictly required.
[19] Fig. 1 shows a conceptual drawing of an example system, including a sensor disposed in a dental position.
[20] Fig. 2 shows a conceptual drawing of an example method of using a system.
[21] After reading this Application, those skilled in the art would recognize that the figures are not necessarily drawn to scale for construction, nor do they necessarily specify any particular location or order of construction.
Detailed Description
GENERAL DISCUSSION
[22] This Application describes a system, and methods of use, capable of sensing information about a user and providing one or more processes with respect to that information. The processes can include communication with an external device, or one or more processes with respect to a medical condition or information, or one or more techniques for communication at the user’s behest.
[23] A system can include a sensor disposed to couple information between the user and either a computing device or a communication device. The computing device can be disposed to encode/decode information in exchange with the sensor, communicate with the external device, or perform processes with respect to one or more medical conditions or information. The communication device can include a transceiver disposed to couple the sensor or the computing device to one or more devices internal or external to the user.
[24] The system can also include a power source disposed to power the sensor, the computing device, or the communication device. For example, the power source can include a battery or another energy storage element, which can be disposed to be recharged, such as by an external inductor disposed to receive externally stored energy, an energy harvester disposed to receive energy from local electromagnetic fields, an internal inductor disposed to receive energy from the user’s mouth or head movements, or another noninvasive energy recharging technique.
[25] In one embodiment, a sensor can be disposed in a dental location, such as: in a tooth or under a portion thereof, in a cap or crown or under a portion thereof, between two teeth, cemented or otherwise coupled to a tooth, affixed to or implanted in or near a gum (or a cheek or the tongue) or other oral bone or tissue (such as a gum, jaw, or tongue), or affixed to or implanted in or near a dental device or other dental equipment (such as a cap, a denture, a filling, any orthodontia equipment, or any other oral medical device). The sensor can be coupled to a blood vessel, coupled to a gap or void in dental matter or dental tissue, or otherwise disposed to measure a medical condition of the user.
[26] For example, the sensor can be disposed within a user’s tooth or other dental structure and disposed to detect medical information with respect to the user, whether actively or passively. In one case, a passive sensor within a user’s tooth can be coupled to a blood vessel and be disposed to detect information obtainable from the blood vessel. In another case, an active sensor within a user’s dental structure can be disposed to emit energy (such as electromagnetic or ultrasonic signals, or otherwise as described herein) and, possibly in collaboration with another device, determine the presence of any user medical conditions or other information. This can have the effect of allowing the sensor to report information about user medical conditions or other information in real time and without the need for the presence of medical personnel or special equipment.
[27] In one embodiment, a sensor is disposed to exchange information with the user in response to the sensor’s oral location, such as when the user moves their head, performs an oral gesture, makes a speaking motion (such as with a cheek, the jaw, one or more teeth, the tongue, or by breathing). The user’s oral gesture can be measured, and information exchanged with an external device in response thereto. Information from an external device (either the same or another such device) can be provided to the user at or near an oral location (such as to an ear canal, an oropharyngeal canal, a sinus cavity or other void defined by the user’s head or neck, or by stimulation of bony structures in the user’s head or neck).
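As a rough illustration of how such an oral gesture might be mapped to a command for an external device, the following sketch classifies a short window of acceleration samples. The threshold, window layout, and command names are assumptions invented for the example, not part of the disclosure.

```python
# Illustrative sketch only; threshold and command names are assumptions.

def detect_gesture(accel: list, tap_threshold: float = 2.5):
    """Classify a window of acceleration magnitudes (in g):
    two separated spikes -> 'double_tap'; one spike -> 'tap'; else None."""
    spikes = [i for i, a in enumerate(accel) if a > tap_threshold]
    # Collapse adjacent samples into distinct spike events.
    events = [i for n, i in enumerate(spikes)
              if n == 0 or i - spikes[n - 1] > 3]
    if len(events) >= 2:
        return "double_tap"
    return "tap" if events else None

COMMANDS = {"tap": "answer_call", "double_tap": "dismiss_notification"}

window = [1.0, 1.1, 3.2, 1.0, 1.0, 1.0, 1.0, 3.4, 1.1, 1.0]
gesture = detect_gesture(window)
if gesture:
    print(COMMANDS[gesture])   # would be forwarded to an external device
```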
[28] For example, the external device can include one or more of: an Amazon™ Echo™ or other active communication device, an augmented reality or virtual reality system, a Fitbit™ or other fitness or activity monitor, a lens or eyewear such as described in the Incorporated Disclosures, a smartphone or other mobile device, a speaker, an electromagnetic or ultrasonic transceiver/transmitter, one or more vehicle controls, a remote computing system, a remote database or other storage system, or a remote presentation system.
[29] In one embodiment, a sensor is disposed to exchange information with medical devices coupled to other user anatomy, such as one or more audio/video devices (such as eyewear as described in the Incorporated Disclosures), one or more hearing aids, a heart monitor, a pacemaker, a separate blood glucose monitor, a separate blood pressure monitor, a separate blood oxygenation monitor, an ultrasonic or x-ray device disposed to determine one or more features of the user’s anatomy, one or more other devices disposed to provide an augmented reality or virtual reality presentation to the user, or one or more other devices disposed to otherwise stimulate one or more elements of the user’s anatomy.
[30] The processes with respect to the one or more medical conditions or information can include one or more of: predicting/detecting, monitoring/recording, treating/ameliorating, or encouraging self-care or exchanging information with medical personnel for care, with respect to the one or more medical conditions, such as described in the Incorporated Disclosures. For example, the medical conditions can include one or more of: blood glucose, blood oxygenation, blood pressure, pulse rate, or other measures possibly available from a sensor located in a dental position or near a blood vessel.
[31] In one embodiment, the sensor can be disposed to determine information with respect to an impulse response, a phonic modification, or another set of phonic features of the user’s dental or oral elements. The computing device can be disposed to receive that information from the sensor and to adjust an oral output from the user in response thereto, so as to improve an audio acuity of the user’s voice at an intended receiver. For example, when the user reduces the volume of their voice so as to avoid being overheard, the computing device can determine an intended message and can amplify that message at a focal point at the intended receiver’s location.
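The paragraph above describes amplifying a message at a focal point at the intended receiver. One standard way to steer acoustic energy toward a point is delay-and-sum beamforming; the sketch below, offered only as a plausible illustration (the array geometry and target are invented), computes per-emitter delays so that the emissions arrive at the receiver's location in phase.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air; an assumption for the example

def focal_delays(emitters, target):
    """Per-emitter firing delays (seconds) so that all emissions arrive
    at the target in phase: the farthest emitter fires first."""
    dists = [math.dist(e, target) for e in emitters]
    t_max = max(dists) / SPEED_OF_SOUND
    return [t_max - d / SPEED_OF_SOUND for d in dists]

emitters = [(0.00, 0.0), (0.05, 0.0), (0.10, 0.0)]   # small array, metres
delays = focal_delays(emitters, (2.0, 1.0))          # intended receiver
print([round(t * 1e6, 1) for t in delays])           # delays in microseconds
```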
TERMS AND PHRASES
[32] The following terms and phrases are exemplary only, and not limiting.
[33] The phrases “this application”, “this description”, and variants thereof, generally refer to any material shown or suggested by any portions of this Application, individually or collectively, and including all inferences that might be drawn by anyone skilled in the art after reviewing this Application, even if that material would not have been apparent without reviewing this Application at the time it was filed.
[2] The term “e-sun reader”, and variants thereof, generally refers to any device disposed to use a shading/inverse-shading effect to provide a readable portion of the wearer’s field of view in bright light, such as in bright sunlight. For example, and without limitation, an e-sun reader can include eyewear disposed to shade/inverse-shade one or more lenses so as to adjust brightness on a smartphone, tablet/phablet, or computer screen or other screen. This can have the effect that the wearer of the eyewear can read the screen even in sunlight (or other bright light) that would otherwise wash out the display on the screen and make it difficult to read.
[34] The terms “encoded” or “decoded”, and variants thereof, generally refer to any process or technique having a result that changes the format or meaning of information. For example, as used herein, encoding can include a process or function which changes a communication protocol, such as a hardware or software level of abstraction, so as to allow a first device to use a communication link at a first coupling location, and so as to allow a second device to use that communication link at a second coupling location.
[35] The terms “encrypted” or “obfuscated”, and variants thereof, generally refer to any process or technique having a result substantially inaccessible to others without a key, shared secret, or technique for entry. For example, as used herein, encryption can include any process or function which cannot readily be reversed by observation of the results, without access to a key used to perform that process or function. For another example, as used herein, obfuscation can include any process or function which cannot readily be reversed by observation of the results, but from which some information can be retrieved by application of sufficient processing power.
[36] The term “eyewear”, and variants thereof, generally refers to any device coupled to a wearer’s (or other user’s) input senses, including without limitation: glasses (such as those including lens frames and lenses), contact lenses (such as so-called “hard” and “soft” contact lenses applied to the surface of the eye, as well as lenses implanted in the eye), retinal image displays (RID), laser and other external lighting images, “heads-up” displays (HUD), holographic displays, electro-optical stimulation, artificial vision induced using other senses, transfer of brain signals or other neural signals, headphones and other auditory stimulation, bone conductive stimulation, wearable and implantable devices, and other devices disposed to influence (or be influenced by) the wearer. For example, the digital eyewear can be wearable by the user, either directly as eyeglasses or as part of one or more clothing items, or implantable in the user, either above or below the skin, in or on the eyes (such as contact lenses), or otherwise. The digital eyewear can include one or more devices operating in concert, or otherwise operating with other devices that are themselves not part of the digital eyewear.
[37] The phrases “highlight”, “de-highlight”, and variants thereof, generally refer to emphasizing or de-emphasizing a view of an object in a field of view. For example, to highlight an object, a mobile device can brighten the object relative to its environment, either by increasing a luminance of the object or by decreasing a luminance of a region near the object (such as a region surrounding the object). Similarly, to de-highlight an object, a mobile device can perform one or more inverses of highlighting functions.
[38] The term “message”, the phrase “message component”, and variants thereof, generally refer to any signal, or representation thereof, including or representing information received or sent, or to be received or sent. For example, a message can include a string of words, whether presented on a screen or transmitted using electromagnetic signals. A message component can include any portion, in whole or in part, of a message.
[39] The phrase “mobile device”, and variants thereof, generally refers to any relatively portable device disposed to receive inputs from, and provide outputs to, one or more users. For example, a mobile device can include a smartphone, an MP3 player, a laptop or notebook computer, a computing tablet or phablet, or any other relatively portable device disposed to be capable as further described herein. The mobile device can include input elements such as a capacitive touchscreen; a keyboard; an audio input; an accelerometer or haptic input device; an input coupleable to an electromagnetic signal, to an SMS or MMS signal or a variant thereof, to an NFC or RFID signal or a variant thereof, to a signal disposed using TCP/IP or another internet protocol or a variant thereof, to a signal using a telephone protocol or a variant thereof; another type of input device; or otherwise.
[40] The term “periodic”, the phrase “from time to time”, and variants thereof, generally refers to any timing process or technique other than continuously. For example, as used herein, a periodic operation can include one that occurs at regular intervals, at random intervals, at irregular intervals that are not random or are subject to internal or external factors, or otherwise. Where context indicates, the term “periodic” generally refers to an operation that occurs at regular intervals, or nearly so; the phrase “from time to time” generally refers to an operation that occurs at other intervals, such as when triggered by a condition or event.
[41] The term “random”, and variants thereof, generally refers to any process or technique having a substantially nonpredictable result. For example, as used herein, the random processes include quantum-mechanical processes, pseudo-random processes and functions, and include processes and functions that are subject to internal or external factors not readily determinable by observation of those processes or functions.
[42] The phrase “real time”, and variants thereof, generally refers to timing, particularly with respect to sensory input or adjustment thereto, operating substantially in synchrony with real world activity, such as when a user is performing an action with respect to real world sensory input. For example, “real time” operation of digital eyewear with respect to sensory input generally includes user receipt of sensory input and activity substantially promptly in response to that sensory input, rather than user receipt of sensory input in preparation for later activity with respect to other sensory input.
[43] The phrase “remote device”, and variants thereof, generally refers to any device disposed to be accessed, and not already integrated into the accessing device, such as disposed to be accessed by digital eyewear. For example, a remote device can include a database or a server, or another device or otherwise, coupled to a communication network, accessible using a communication protocol. For another example, a remote device can include one or more mobile devices other than a user’s digital eyewear, accessible using a telephone protocol, a messaging protocol such as SMS or MMS or a variant thereof, an electromagnetic signal such as NFC or RFID or a variant thereof, an internet protocol such as TCP/IP or a variant thereof, or otherwise.
[44] The phrases “sensory input”, “external sensory input”, and variants thereof, generally refer to any input detectable by a human or animal user. For example, sensory inputs include audio stimuli such as in response to sound; haptic stimuli such as in response to touch, vibration, or electricity; visual stimuli such as in response to light of any detectable frequency; nasal or oral stimuli such as in response to aroma, odor, scent, taste, or otherwise; other stimuli such as balance; or otherwise.
[45] The phrases “shading”, “shading/inverse-shading”, “inverse-shading”, and variants thereof, generally refer to any technique for altering a sensory input, including but not limited to:
— altering a loudness associated with an auditory signal, such as by reducing loudness at substantially each portion of the auditory signal;
— altering a loudness associated with a portion of an auditory signal, such as by reducing loudness at a selected set of times or frequencies in that auditory signal;
— altering a loudness associated with a portion of an auditory signal, such as by increasing loudness at a selected set of times or frequencies in that auditory signal, to improve listening to that portion of the signal, or otherwise;
— altering a selected set of frequencies associated with an auditory signal, such as to change a first frequency into a second frequency, for the entire auditory signal, for a portion of the auditory signal, or otherwise;
— altering a selected set of frequencies associated with an auditory signal, such as to provide a “false frequency” rendition of an auditory signal not originally audible to the human ear, such as to provide an auditory signal in response to a sound outside the range of human hearing or other information ordinarily not available to human senses;
— altering a sensory input other than auditory sensory inputs, such as reducing/increasing an intensity of a haptic input or of another sense.
[46] The phrases “signal input”, “external signal input”, and variants thereof, generally refer to any input detectable by digital eyewear or other devices. For example, in addition to or in lieu of sensory inputs and external sensory inputs, signal inputs can include
— information available to digital eyewear in response to electromagnetic signals other than human senses, such as signals disposed in a telephone protocol, a messaging protocol such as SMS or MMS or a variant thereof, an electromagnetic signal such as NFC or RFID or a variant thereof, an internet protocol such as TCP/IP or a variant thereof, or similar elements;
— information available to digital eyewear in response to an accelerometer, a gyroscope, a GPS signal receiver, a location device, an ultrasonic device, or similar elements;
— information available to digital eyewear in response to a magnetometer, a medical imaging device, an MRI device, a tomography device, or similar elements; or otherwise as described herein.
[47] The phrase “user input”, and variants thereof, generally refers to information received from the user, such as in response to audio/video conditions, requests by other persons, requests by the digital eyewear, or otherwise. For example, user input can be received by the digital eyewear in response to an input device (whether real or virtual), a gesture (whether by the user’s eyes, hands, or otherwise), using a smartphone or controlling device, or otherwise.
[48] The phrase “user parameters”, and variants thereof, generally refers to information with respect to the user as determined by digital eyewear, user input, or other examination about the user. For example, user parameters can include measures of whether the user is able to distinguish objects from audio/video background signals, whether the user is currently undergoing an overload of audio/video signals (such as from excessive luminance or sound), a measure of confidence or probability thereof, a measure of severity or duration thereof, other information with respect to such events, or otherwise.
[49] After reviewing this Application, those skilled in the art would recognize that these terms and phrases should be interpreted in light of their context in the specification.
FIGURES AND TEXT
[50] A system can include a sensor disposed to couple information between the user and either a computing device or a communication device. The computing device can be disposed to encode/decode information in exchange with the sensor, communicate with the external device, or perform processes with respect to one or more medical conditions or information. The communication device can include a transceiver disposed to couple the sensor or the computing device to one or more devices internal or external to the user.
[51] The system can also include a power source disposed to power the sensor, the computing device, or the communication device. For example, the power source can include a battery or another energy storage element, which can be disposed to be recharged, such as by an external inductor disposed to receive externally stored energy, an energy harvester disposed to receive energy from local electromagnetic fields, an internal inductor disposed to receive energy from the user’s mouth or head movements, or another noninvasive energy recharging technique.
[52] In one embodiment, a sensor can be disposed in a dental location, such as: in a tooth or under a portion thereof, in a cap or crown or under a portion thereof, between two teeth, cemented or otherwise coupled to a tooth, affixed to or implanted in or near a gum (or a cheek or the tongue) or other oral bone or tissue (such as a gum, jaw, or tongue), or affixed to or implanted in or near a dental device or other dental equipment (such as a cap, a denture, a filling, any orthodontia equipment, or any other oral medical device). The sensor can be coupled to a blood vessel, coupled to a gap or void in dental matter or dental tissue, or otherwise disposed to measure a medical condition of the user.
[53] For example, the sensor can be disposed within a user’s tooth or other dental structure and disposed to detect medical information with respect to the user, whether actively or passively. In one case, a passive sensor within a user’s tooth can be coupled to a blood vessel and be disposed to detect information obtainable from the blood vessel. In another case, an active sensor within a user’s dental structure can be disposed to emit energy (such as electromagnetic or ultrasonic signals, or otherwise as described herein) and, possibly in collaboration with another device, determine the presence of any user medical conditions or other information. This can have the effect of allowing the sensor to report information about user medical conditions or other information in real time and without the need for the presence of medical personnel or special equipment.
[54] In one embodiment, a sensor is disposed to exchange information with the user in response to the sensor’s oral location, such as when the user moves their head, performs an oral gesture, makes a speaking motion (such as with a cheek, the jaw, one or more teeth, the tongue, or by breathing). The user’s oral gesture can be measured, and information exchanged with an external device in response thereto. Information from an external device (either the same or another such device) can be provided to the user at or near an oral location (such as to an ear canal, an oropharyngeal canal, a sinus cavity or other void defined by the user’s head or neck, or by stimulation of bony structures in the user’s head or neck).
[55] For example, the external device can include one or more of: an Amazon™ Echo™ or other active communication device, an augmented reality or virtual reality system, a Fitbit™ or other fitness or activity monitor, a lens or eyewear such as described in the Incorporated Disclosures, a smartphone or other mobile device, a speaker, an electromagnetic or ultrasonic transceiver/transmitter, one or more vehicle controls, a remote computing system, a remote database or other storage system, or a remote presentation system.
[56] In one embodiment, a sensor is disposed to exchange information with medical devices coupled to other user anatomy, such as one or more audio/video devices (such as eyewear as described in the Incorporated Disclosures), one or more hearing aids, a heart monitor, a pacemaker, a separate blood glucose monitor, a separate blood pressure monitor, a separate blood oxygenation monitor, an ultrasonic or x-ray device disposed to determine one or more features of the user’s anatomy, one or more other devices disposed to provide an augmented reality or virtual reality presentation to the user, or one or more other devices disposed to otherwise stimulate one or more elements of the user’s anatomy.
[57] The processes with respect to the one or more medical conditions or information can include one or more of: predicting/detecting, monitoring/recording, treating/ameliorating, or encouraging self-care or exchanging information with medical personnel for care, with respect to the one or more medical conditions, such as described in the Incorporated Disclosures. For example, the medical conditions can include one or more of: blood glucose, blood oxygenation, blood pressure, pulse rate, or other measures possibly available from a sensor located in a dental position or near a blood vessel.
[58] In one embodiment, the sensor can be disposed to determine information with respect to an impulse response, a phonic modification, or another set of phonic features of the user’s dental or oral elements. The computing device can be disposed to receive that information from the sensor and to adjust an oral output from the user in response thereto, so as to improve an audio acuity of the user’s voice at an intended receiver. For example, when the user reduces the volume of their voice so as to avoid being overheard, the computing device can determine an intended message and can amplify that message at a focal point at the intended receiver’s location.
Fig. 1 — Sensor disposed with a dental structure
[59] Fig. 1 shows a conceptual drawing of an example system, including a sensor disposed in a dental position.
[60] In one embodiment, a system 100 can include a body-facing portion 110, intended to be disposed primarily or substantially internally to a patient 111a. For example, the body-facing portion 110 can be disposed inside, adjacent to, or affixed to, a medical structure, such as a dental structure 112a (for example, a blood vessel within a tooth). In such cases, the body-facing portion 110 can be disposed to sense one or more medical conditions or other information with respect to the patient 111a, whether passively by receiving information or actively by emitting energy or signals and determining any response thereto, as otherwise and further described herein. The body-facing portion 110 can also be disposed to monitor and/or treat those medical conditions.
[61] Alternatively, the body-facing portion 110 can be disposed primarily or substantially internally to a target mechanism 111b in a body or a device. For example, the body-facing portion 110 can be disposed inside, adjacent to, or affixed to, a mechanical structure suitable to be monitored or repaired, such as a hydraulic line or a turbine blade, or other structural elements of the target mechanism 111b. The body-facing portion 110 can also be disposed to monitor and/or repair those structural conditions, where possible, such as by replacing elements related to those structural conditions with alternative elements.
[62] In one embodiment, the system 100 can also include an interface portion 120, intended to be coupled to the body-facing portion 110, and disposed to couple the body-facing portion to a user-facing portion 130. In such cases, the user-facing portion 130 can be disposed primarily or substantially external to the patient 111a or the target mechanism 111b. The interface portion 120 can be disposed to exchange information between the body-facing portion 110 and the user-facing portion 130, such as by receiving information from the body-facing portion and sending that information to the user-facing portion, or vice versa.
[63] In one embodiment, a system 100 can also include the user-facing portion 130, intended to be disposed primarily or substantially externally to the user (as described herein). In such cases, the system 100 can be disposed to present information to the user, and to receive commands/controls or other information from the user. For example, the system 100 can use the commands/controls or other information from the user to adjust or operate the body-facing portion 110, or any portion thereof.
[64] In one embodiment, the user-facing portion 130 can interface with a user 140 (not part of the system 100), such as a human being. The user 140 can be the patient 111a described herein: the patient 111a is disposed to be monitored or treated and includes internal medical structures, such as described with respect to the body-facing portion 110. Alternatively, the user 140 can include medical personnel or emergency responders other than the patient 111a: the medical personnel or emergency responders can monitor or treat the patient and/or the patient’s medical structures. For example, when the patient 111a is an animal or a human being incapable of operating the system, the user 140 would include medical personnel or emergency responders other than the patient 111a. Similarly, when the patient 111b is a non-living device, the user 140 would include engineering or scientific personnel, or a mechanic or other operational or repair personnel capable of examining and adjusting/repairing the patient 111b.
[65] This can have the effect that the user 140 can exchange information with the system 100, such as by receiving information from the patient’s medical structures (or other structures) and/or sending commands/controls or other information to the system 100. The user 140 can (A) receive audio/video or other information from the system 100, (B) send commands/controls or other information to the system, or (C) act otherwise consistent with elements and method steps described in this Application.
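To make the three-portion architecture concrete, the following sketch wires a body-facing portion, an interface portion, and a user-facing portion together as plain Python classes. The class and method names, and the placeholder reading, are invented for the illustration and do not appear in the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class BodyFacingPortion:            # stands in for element 110
    readings: list = field(default_factory=list)

    def sense(self) -> dict:
        reading = {"pulse_rate": 72}        # placeholder measurement
        self.readings.append(reading)
        return reading

@dataclass
class InterfacePortion:             # stands in for element 120
    body: BodyFacingPortion

    def fetch(self) -> dict:
        # Relays information from the body-facing portion outward.
        return self.body.sense()

@dataclass
class UserFacingPortion:            # stands in for element 130
    interface: InterfacePortion

    def show(self) -> str:
        return f"latest reading: {self.interface.fetch()}"

system = UserFacingPortion(InterfacePortion(BodyFacingPortion()))
print(system.show())
```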
Body-facing portion
[66] In one embodiment, the body-facing portion 110 can be disposed primarily or substantially inside the patient 111a, such as a person, and can be disposed inside, adjacent to, or affixed to, a medical structure, such as a dental structure 112a (for example, a blood vessel within a tooth). For example, the dental structure 112a can include a tooth 112a and a crown 112b coupled to the tooth, arranged so as to provide a location 112c in which one or more portions of the body-facing portion 110 can be disposed.
Alternative patients
[67] While this Application primarily describes a human patient 111a, there is no particular requirement for any such limitation. For example, the patient 111a can include an animal, such as might be subject to prevention/diagnosis of diseases or other difficulties, or subject to operation/treatment by veterinary personnel. Similarly, the patient 111a might include a plant, such as might be subject to prevention/diagnosis of diseases or other difficulties, or subject to operation/treatment by one or more arborists or related personnel. Similarly, the patient 111a might include an experimental animal, such as might be subject to experiment or treatment by scientific personnel.
[68] While this Application primarily describes a live patient 111a, there is no particular requirement for any such limitation. For example, the patient can include a target mechanism 111b such as an electrical or mechanical device, such as a computing device, a heating/cooling system, an engine or motor, a vehicle (or a control portion thereof), or otherwise. In such cases, the target mechanism 111b might be subject to diagnosis/examination and adjustment/repair by engineering or scientific personnel, or by a mechanic or other operational or repair personnel.
[69] For example, the body-facing portion 110 can be disposed primarily or substantially inside the target mechanism 111b, such as an aircraft, and can be disposed inside, adjacent to, or affixed to, a mechanical structure suitable to be monitored or repaired, such as a hydraulic line or a turbine blade, or other structural elements of the target mechanism 111b. For example, the mechanical structure can include a turbine blade (not shown) disposed within a jet engine (not shown), arranged so as to provide a location (not shown) in which one or more portions of the body-facing portion 110 can be disposed.
[70] The patient 111a and the target mechanism 111b are sometimes collectively referred to herein as the “patient” 111.
Locations and sensors
[71] In one embodiment, the body-facing portion 110 can include a sensor 113 disposed in the described location 112c, the sensor 113 being disposed so as to exchange information with the dental structure 112, other medical structure, or mechanical structure (not shown).
[72] While this Application primarily describes the sensor 113 as being disposed in a location 112c defined in a dental structure 112a, such as between a tooth and a crown, there is no particular requirement for any such limitation. The sensor 113 can be disposed in any alternative dental or alternative medical location, such as one or more of:
— within or affixed to (such as cemented or otherwise coupled) one or more teeth, or fillings, or under a portion thereof;
— within or affixed to the crown (or any other type of cap), or under a portion thereof;
— between two teeth, such as affixed or wedged in a gap between those teeth;
— cemented or otherwise coupled to a tooth;
— coupled or affixed to, or implanted in, other oral bone or tissue (such as a cheek or gum, or the jaw or tongue), or any other oral structure, or any other anatomical structure in or near the mouth;
— coupled or affixed to, or implanted in, a gap or void in dental matter or dental tissue;
— coupled to or implanted in a dental device (such as a denture, a filling, any type of orthodontia equipment, any other oral medical device, or any other dental medical equipment); or
— as otherwise consistent with the description in this Application.
[73] While this Application is primarily described with respect to an individual sensor 113, there is no particular requirement for any such limitation. For example, a plurality of sensors 113, either (A) each operating to provide a separate set of information, or (B) operating collectively to provide a collective set of information, can be disposed in lieu of the one sensor 113 primarily described herein. For example, as described herein, whenever a sensor 113 can be coupled to a blood vessel 114, so as to determine information with respect to the patient’s blood serum levels or other circulatory information, a plurality of sensors 113 can alternatively be coupled to a plurality of blood vessels 114, so as to determine a plurality of sets of information with respect to the patient’s blood serum levels and other circulatory information. In such cases, the plurality of sets of information can be reported independently or can be combined into a collective report.
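A minimal sketch of combining a plurality of sensors into a collective report, as described above, might take the median of each sensor's latest value to resist a single outlying reading. The sensor names and values below are invented for the illustration.

```python
from statistics import median

def combine_readings(per_sensor: dict) -> dict:
    """Collapse per-sensor series into one collective report: take each
    sensor's latest value, then the median across sensors."""
    latest = [series[-1] for series in per_sensor.values() if series]
    return {"pulse_rate": median(latest), "sensor_count": len(latest)}

readings = {
    "vessel_a": [71.0, 72.0],
    "vessel_b": [70.0, 74.0],
    "vessel_c": [69.0, 73.0],
}
print(combine_readings(readings))   # independent series -> collective report
```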
[74] While this Application is primarily described with the sensor 113 exchanging information with a dental structure 112a, there is no particular requirement for any such limitation. For example, the other medical structure can include a different medical structure 112a, such as a lip, the tongue, the throat, the uvula, or a vocal cord. For another example, the other medical structure 112a can include another part of the body, such as any blood vessel or other structure, any bony structure, any ligament or muscle, or, in the case of a mechanical device, any structure therein (for example, in the case of an airplane, an electrical or mechanical structure 112b disposed in or near an engine, cabin, fuselage, weapons system, wing, or other part thereof).
[75] While this Application is primarily described with the sensor 113 exchanging information with a medical structure 112a, there is no particular requirement for any such limitation. For example, the electrical or mechanical structure (shown as 112b) can include one or more of: a computing device, a heating/cooling system, an engine or motor, a vehicle, or otherwise. Similarly, the electrical or mechanical structure 112b can include a portion of one or more such structures, such as a control structure (such as an aileron or an elevator), an operating surface (such as a turbine compressor), a transmission structure (such as an electrical circuit, a hydraulic line, a similar structure, or otherwise), or otherwise. In such cases, the sensor 113 can be disposed to provide a measure of whether the electrical or mechanical structure 112 is operating at its maximum capacity, is presenting errors/jitter while operating, is subject to problems that might lead to failure, or otherwise.
[76] The dental or other medical structure 112 and the electrical or mechanical structure 112 are sometimes collectively referred to herein as the “medical structures” 112.
Blood vessels and alternatives
[77] In one embodiment, the sensor 113 can be disposed to receive information from a selected portion of the medical structure 112, such as from a blood vessel 114. For example, the blood vessel 114 can provide information indicating blood glucose levels or other blood serum levels (including toxins), blood oxygenation, blood pressure, blood volume, pulse rate, white blood cell or platelet count, any other blood measure, or as otherwise consistent with the description in this Application.
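As one hedged illustration of extracting such a measure, the sketch below estimates pulse rate from a sampled waveform by counting upward crossings of the signal's midline. The synthetic waveform and sampling rate are assumptions invented for the example.

```python
import math

def pulse_rate_bpm(samples, sample_rate_hz):
    """Estimate pulse rate by counting upward crossings of the midline."""
    mid = (max(samples) + min(samples)) / 2
    beats = sum(1 for a, b in zip(samples, samples[1:]) if a < mid <= b)
    return beats * 60.0 * sample_rate_hz / len(samples)

# Synthetic 1.2 Hz (72 bpm) waveform, sampled at 50 Hz for 10 seconds:
wave = [math.sin(2 * math.pi * 1.2 * n / 50) for n in range(500)]
print(round(pulse_rate_bpm(wave, 50)))   # prints approximately 72
```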
[78] While this Application primarily describes a system 100 in which the sensor 113 can be disposed to receive information from a blood vessel 114, there is no particular requirement for any such limitation. For example, the sensor 113 can be disposed to receive information from one or more of:
— a cavity or other void defined by one or more teeth;
— a cyst or tumor;
— a gland or lymph node;
— an infection or wound;
— a ligament or muscle;
— a lung or other natural void;
— a substance capable of indicating presence of a toxin or a structure subject to extreme treatment (such as an overheated or tired muscle);
— any other body structure suitable for monitoring or treatment;
— in a non-living device such as an avionics engine, or a non-living portion of a body such as a hip replacement, a mechanical structure suitable for monitoring or repair; or
— as otherwise consistent with this Application.
Active emitter
[79] In one embodiment, the sensor 113 can be coupled to an active emitter 115, such as one or more of: a sonic emitter, an infrared or ultraviolet emitter, a current or voltage source, a microwave emitter, any other device disposed to interact with body structures, or as otherwise consistent with this Application.
[80] For example, the active emitter 115 can include a sonic emitter disposed to scan the patient’s medical structures 112, either by broadcast emission, or by emission in one or more selected directions (in either case, from one or more point-sources), so as to provide a defined acoustic wave emitted into the patient’s mouth and dental structures (or other medical structures or mechanical structures) 112.
[81] In such cases, the sensor 113 can include a receiver disposed to determine an audio image presented by the effect of the patient’s medical structures 112 and other structures, such as the patient’s mouth. For example, the receiver can operate by absorption, reflection, refraction, sonic interference, or otherwise consistent with description herein, with respect to the audio energy emitted by the active emitter 115 and returned to the sensor 113. This can have the effect that the sensor 113 can determine the presence or absence of a cavity or other void in the patient’s medical structures 112 (such as dental structures 112). In such cases, the cavity or other void might be due to tooth decay, orthodontic misalignment, misalignment of a filling with a cavity or other void, misconstruction of a material used in a filling, or another medical source of a dental cavity or void.
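A toy pulse-echo calculation illustrates how such a receiver might localize a void: an echo that returns earlier than the known back boundary of the structure implies a reflecting gap inside it. The propagation speed and geometry below are rough assumptions for the sketch, not values from the disclosure.

```python
SPEED_IN_TOOTH = 3800.0   # m/s; a rough assumed propagation speed

def reflector_depth_mm(round_trip_s: float) -> float:
    """Depth of a reflecting boundary from a pulse's round-trip time."""
    return round_trip_s * SPEED_IN_TOOTH / 2 * 1000

expected_back_wall_mm = 8.0        # assumed depth of the intact boundary
depth = reflector_depth_mm(2.6e-6) # echo observed after 2.6 microseconds
verdict = "possible void" if depth < expected_back_wall_mm - 0.5 else "intact"
print(f"reflector at {depth:.1f} mm: {verdict}")
```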
[82] Alternatively, for another example, the sensor 113 and the active emitter 115 can determine the accidental presence of dental equipment, or other foreign objects, in the patient’s mouth. In such cases, the patient’s dental structures 112 might present, to the sensor 113, one or more unusual spots or other features in an audio or electromagnetic image. This can possibly indicate an effect of excessive reflection of the audio or electromagnetic emission back to the sensor 113, such as due to a metallic object in the patient’s mouth or dental structures 112.
[83] Alternatively, the sensor 113 and the active emitter 115 can use other types of energy, such as audio energy above a human hearing frequency (ultrasound) or below a human hearing frequency (infrasound); electromagnetic energy in a microwave, infrared, ultraviolet, or x-ray frequency; or combinations or conjunctions thereof. The sensor 113 and the active emitter 115 can impose a known signal on the emitted energy, such as using amplitude or frequency modulation, pulse-code or pulse-width modulation, code-division multiplexing, or another type of signal adapted to detecting selected medical conditions or information with respect to the patient’s medical structures 112 (such as dental structures 112); a minimal sketch of detecting such an imposed signal follows the list below.
[84] Alternatively, the sensor 113 and the active emitter 115 can be disposed to determine whether the patient’s medical structures include one or more of:
— a cyst or tumor, or a different cavity or other void defined by a body structure;
— an infection or wound;
— an inflamed body structure, such as a broken bone, a gland or lymph node, a herniated body element; or a torn ligament or muscle;
— a punctured lung or other natural void;
— in a non-living device such as an avionics engine, or a non-living portion of a body such as a hip replacement, a mechanical structure suitable for monitoring or repair; or
— as otherwise consistent with this Application.
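The code-based detection mentioned in paragraph [83] can be illustrated by correlating a received window against the known emitted sequence: a high score indicates the return carries the imposed signal rather than ambient noise. The probe sequence and sample values below are invented for the sketch.

```python
def correlate(received, code):
    """Inner product of the received window with the known emitted code."""
    return sum(r * c for r, c in zip(received, code))

code = [1, -1, 1, 1, -1, 1, -1, -1]        # known binary probe sequence
clean_echo = [0.9 * c for c in code]        # return dominated by the probe
noise = [0.3, -0.1, -0.4, 0.2, 0.5, -0.2, 0.1, 0.3]

print(correlate(clean_echo, code))   # high score: probe detected (7.2)
print(correlate(noise, code))        # low score: ambient noise (-0.9)
```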
[85] While this Application primarily describes a system 100 in which the sensor 113 is directly coupled to the active emitter 115, so as to emit energy or signals into bodily structures and receive energy or signals in response thereto, there is no particular requirement for any such limitation. For example, the active emitter 115 can operate independently of the sensor 113 or can be controlled by another element of the system 100, so as to cause the patient’s medical structures to otherwise generate or reflect energy or other signals that can be detected by the sensor 113.
Communication by patient
[86] While this Application primarily describes a system 100 in which the sensor 113 is coupled to medical information, there is no particular requirement for any such limitation. For example, the sensor 113 can be disposed to exchange information with the user 140, or with an external device, in response to information deliberately generated by the patient 111a for communication.
[87] In one embodiment, the sensor 113 can be disposed to exchange information with the user 140 (as described herein) in response to the sensor’s physical location. For example, the patient 111a might move their head, perform an oral gesture, make a speaking motion (such as with a cheek, the jaw, one or more teeth, the tongue, or by breathing). In such cases, the sensor 113 can be responsive to any change in location or velocity, any change in audio information (such as speech by the patient 111a), or any other change in the patient’s disposition, and can be disposed to exchange information with the user 140, or an external device, in response thereto.
[88] When the patient 111a is attempting to communicate with an external recipient (such as another person, a telephone, a voice-response system such as an Amazon™ Echo™ system, or otherwise as described herein), the sensor 113 can adjust its behavior in response to commands or information from the user-facing portion 130. For example, the commands or information from the user-facing portion 130 can include information with respect to an ambient environment near the patient 111a. In such cases, one aspect of the ambient environment might include a loudness of sound near the patient 111a. This can have the effect that the computing device 121 and/or the communication device 122 can adjust their behavior, such as by providing noise cancellation or echo cancellation, in response to the patient’s voice and to the ambient environment.
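One simple stand-in for the noise cancellation described above is a single-coefficient subtraction: fit the ambient reference to the picked-up mixture by least squares and subtract the scaled reference. A deployed system would use a full adaptive filter; the signals here are invented for the illustration.

```python
def cancel_ambient(mixture, ambient_ref):
    """Subtract a least-squares-scaled copy of the ambient reference from
    the picked-up signal; a one-tap stand-in for an adaptive filter."""
    num = sum(m * r for m, r in zip(mixture, ambient_ref))
    den = sum(r * r for r in ambient_ref) or 1.0
    gain = num / den
    return [m - gain * r for m, r in zip(mixture, ambient_ref)]

noise = [0.5, -0.5, 0.5, -0.5]
voice = [0.1, 0.2, 0.1, 0.2]
mixed = [v + 0.8 * n for v, n in zip(voice, noise)]
print([round(x, 2) for x in cancel_ambient(mixed, noise)])  # noise reduced
```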
[89] In one embodiment, the active emitter 115 can be disposed to provide information from an external device (not shown) to the patient 111a. For example, the external device can include a microphone or camera, a sighting scope or telescope, or another device disposed to receive external information. The external device can be disposed to provide that information in an audio/video form, haptic form, or other detectable form, to the patient 111a, such as at or near an oral location (such as to an ear canal, an oropharyngeal canal, a sinus cavity or other void defined by the user’s head or neck, by stimulation of bony structures in the user’s head or neck), one or more other devices disposed to provide an augmented reality or virtual reality presentation to the user, or one or more other devices disposed to otherwise stimulate one or more elements of the user’s anatomy.
Other patient sensors
[90] While this Application primarily describes a system 100 in which the sensor 113 and/or active emitter 115 operate without other devices to obtain information with respect to the patient 111a, there is no particular requirement for any such limitation. For example, the sensor 113 can be disposed to exchange information with medical devices coupled to other user anatomy, such as one or more of the following:
— audio/video devices (such as eyewear as described in the Incorporated Disclosures);
— hearing aids;
— heart monitors or pacemakers separate from the body-facing portion 110;
— external blood glucose, blood pressure, blood oxygenation, or other monitors, separate from the body-facing portion 110;
— ultrasonic or x-ray devices disposed to determine one or more features of the user’s anatomy, separate from the body-facing portion 110.
Treatment device
[91] In one embodiment, a treatment device 116 can be disposed within the patient 111a or within the target mechanism 111b. The treatment device 116 can be disposed to interface with the patient’s body or with one or more portions of the target mechanism, such as (with respect to a live patient 111a) with the blood vessel 114 or (with respect to a target mechanism 111b) with an electrical or mechanical structure 112.
[92] For example, with respect to a live patient 111a, the treatment device 116 can include an insulin pump or another device disposed to inject the patient 111a with, or otherwise infuse into the patient 111a, a medical treatment. In such cases, the sensor 113 and/or active emitter 115 can operate in a feedback loop with the treatment device 116, under control of the computing device 121 or under control of the user-facing portion 130. This can have the effect that the sensor 113 and/or active emitter 115 can provide nearly immediate information with respect to a medical status of the patient 111a, allowing the treatment device 116 to provide a nearly immediate medical response to that medical status.
[93] For another example, with respect to a target mechanism 111b, the treatment device 116 can include a pump or another device disposed to lubricate a selected mechanical part in the target mechanism. Similarly, in such cases, the sensor 113 and/or active emitter 115 can operate in a feedback loop with the treatment device 116, again under control of the computing device 121 or under control of the user-facing portion 130. Also similarly, this can have the effect that the sensor 113 and/or active emitter 115 can provide nearly immediate information with respect to an electrical or mechanical status of the target mechanism 111b, allowing the treatment device 116 to provide a nearly immediate electrical or mechanical response to that status.
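The feedback loop above can be caricatured as a proportional controller: dose in proportion to the excess of the measurement over a setpoint, observe the effect, repeat. This is a toy numerical loop under invented constants, not a clinical dosing algorithm.

```python
def control_step(measured: float, setpoint: float, gain: float) -> float:
    """Proportional control: dose scales with the excess over the
    setpoint and is never negative."""
    return max(0.0, gain * (measured - setpoint))

glucose, setpoint = 180.0, 110.0
for _ in range(5):                      # sensor -> controller -> treatment
    dose = control_step(glucose, setpoint, gain=0.02)
    glucose -= dose * 25.0              # toy model of the dose's effect
    print(f"glucose {glucose:6.1f}  dose {dose:.2f}")
```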
Power source and charger
[94] In one embodiment, the sensor 113 (and possibly other elements of the system 100) can be coupled to a power source 117a. For example, the power source 117a can include a battery or capacitor. The power source 117a can also be disposed to receive energy so as to charge itself, or at least to decrease a rate at which it discharges, in response to energy from a charger 117b.
[95] In one embodiment, the charger 117b can be disposed to receive energy from external to the power source 117a. For example, the charger 117b can include a battery, capacitor, or inductor disposed to generate an electromagnetic field from which the power source 117a can receive and store energy for later use by the sensor 113 or other devices.
[96] For another example, the charger 117b can include an electromagnetic receiver (such as an antenna disposed to charge a battery or capacitor, in response to an ambient electromagnetic field, or in response to an electromagnetic field generated at least in part by the patient’s body). In such cases, the electromagnetic receiver can be disposed to receive electromagnetic energy from ambient electromagnetic fields generated by one or more of:
— nearby cell phones or other mobile devices, nearby Wi-Fi routers or other communication devices, or nearby RFID devices such as transponders and transponder beacons. In such cases, the electromagnetic receiver can be disposed to receive electromagnetic energy using an antenna, such as one tuned to a frequency associated with the ambient electromagnetic field.
— nearby electrical appliances such as microwave ovens or refrigerators. In such cases, the electromagnetic receiver can be disposed to receive electromagnetic energy using an antenna, such as one tuned to a frequency associated with a power source associated with the electrical appliance (such as a frequency of about 60 Hz). A worked tuning example follows this list.
— electromagnetic fields generated by the body such as by movement of the patient’s eyes (and consequent movement of the eyes’ dipole field) or by the patient’s muscles. In such cases, the electromagnetic receiver can be disposed to receive electromagnetic energy generated by movement of the patient’s head in an ambient electromagnetic field.
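To make the “tuned antenna” notion in the list above concrete: an inductance L resonates with a tuning capacitance C at f = 1/(2π√(LC)), so the capacitance needed for a given ambient frequency is C = 1/(L(2πf)²). The sketch below works this out for a few of the frequencies mentioned, with an assumed coil inductance; none of the component values come from this Application.

```python
import math

def tuning_capacitance(inductance_h: float, frequency_hz: float) -> float:
    """Capacitance C (farads) that resonates inductance L at frequency f,
    from f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (inductance_h * (2.0 * math.pi * frequency_hz) ** 2)

coil_l = 10e-6  # assumed 10 microhenry harvesting coil
for f in (60.0, 13.56e6, 2.45e9):  # mains hum, an RFID band, the Wi-Fi/microwave band
    print(f"f = {f:.3g} Hz -> C = {tuning_capacitance(coil_l, f):.3g} F")
```

The 60 Hz case yields a capacitance near a farad for so small a coil, which suggests why low-frequency harvesting would more plausibly use direct inductive coupling than a resonant antenna.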
[97] For another example, the charger 117b can include a device disposed to capture chemical energy, such as in response to a chemical reaction generated at least in part by the patient’s body.
[98] For another example, the charger 117b can include a device disposed to capture mechanical energy from movement of the patient’s jaw, or otherwise generated at least in part by the patient’s body. In such cases, the charger 117b can include a mechanical device (such as a ratchet, a self-winding watch structure, a strain gauge, or a similar device), coupled to an electromagnetic transceiver disposed to electrically couple to the power source 117a.
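A rough feasibility estimate for the mechanical-harvesting example: the available electrical power is on the order of force times stroke times cycle rate times conversion efficiency. All numeric values below are assumptions for illustration, not measurements from this Application.

```python
def harvested_power_w(force_n: float, stroke_m: float,
                      cycles_hz: float, efficiency: float) -> float:
    """Average electrical power from a reciprocating harvester:
    mechanical work per cycle, times cycle rate, times conversion efficiency."""
    return force_n * stroke_m * cycles_hz * efficiency

# Assumed example: 5 N acting through 1 mm at one chew per second, 10% efficiency.
print(harvested_power_w(5.0, 1e-3, 1.0, 0.10))  # 0.0005 W, i.e. about half a milliwatt
```

Half a milliwatt is small, but under the assumptions stated it could trickle-charge a capacitor between sensor duty cycles.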
[99] While this Application primarily describes the power source 117a and the charger 117b as using electromagnetic energy, there is no requirement for any such limitation. For example, the power source 117a and the charger 117b can be disposed to use chemical, mechanical, or other non-electromagnetic energy, and can be coupled to the sensor 113 or other devices so as to power those devices using that non-electromagnetic energy.
Interface portion
[100] In one embodiment, the interface portion 120 can be coupled to the body-facing portion 110 and disposed to interface between the body-facing portion and the user-facing portion 130. The interface portion 120 can include a computing device 121 coupled to the body-facing portion 110, such as to the sensor 113 and/or the active emitter 115. For example, the computing device 121 can be coupled to: (A) both the sensor 113 and the active emitter 115, (B) only the sensor 113, which is coupled to the active emitter 115, or (C) only the active emitter 115, which is coupled to the sensor 113. In each such case, the computing device 121 can exchange messages with both the sensor 113 and the active emitter 115.
Encoding and decoding
[101] In one embodiment, the computing device 121 can be disposed to encode information into a first format used by the sensor 113 or the active emitter 115, or to decode information from a second format used by the sensor 113 or the active emitter 115. In some cases, the first format and the second format will be the same or similar, although in at least some cases they need not be.
[102] For example, the computing device 121 can be disposed to decode commands/controls from the user 140 (or a device being used by the user 140) and to encode those commands/controls for use by the sensor 113 or the active emitter 115; similarly, the computing device 121 can be disposed to decode information from the sensor 113 or the active emitter 115 and to encode that information for the user 140 (or a device being used by the user 140). In such cases, the encoded or decoded commands/controls or information, or both, can include error-detection/correction coding, so as to provide relatively robust communication between the computing device 121 and either the sensor 113 or the active emitter 115.
[103] For another example, the computing device 121 can be disposed to receive information from the sensor 113 in the first format and to send commands/controls to the active emitter 115 in the second format. For another example, the computing device 121 can be disposed to receive information from, and to send commands/controls to, the sensor 113 in the first format, and to receive information from, and to send commands/controls to, the active emitter 115 in the second format. In such cases, either the first format, the second format, or both, can include error-detection/correction coding, so as to provide relatively robust communication between the computing device 121 and either the sensor 113, the active emitter 115, or both.
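The error-detection/correction coding mentioned in the two paragraphs above could take many forms; this Application does not fix one. As a minimal sketch, the following frames each message with a CRC-8 checksum so that a corrupted exchange between the computing device 121 and the sensor 113 or active emitter 115 can be detected and rejected; the polynomial and framing are assumed choices for illustration (detection only, no correction).

```python
CRC8_POLY = 0x07  # the CRC-8/ATM polynomial x^8 + x^2 + x + 1 (an assumed choice)

def crc8(data: bytes) -> int:
    """Compute a CRC-8 checksum over the message bytes, MSB first."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ CRC8_POLY) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def encode_frame(payload: bytes) -> bytes:
    """Append the checksum so the receiver can detect in-transit corruption."""
    return payload + bytes([crc8(payload)])

def decode_frame(frame: bytes) -> bytes:
    """Strip and verify the checksum; raise if the frame was corrupted."""
    payload, received = frame[:-1], frame[-1]
    if crc8(payload) != received:
        raise ValueError("checksum mismatch: frame corrupted in transit")
    return payload

# Round trip over an assumed three-byte reading:
message = b"\x01\x78\x50"
assert decode_frame(encode_frame(message)) == message
```

A format that must also correct (not merely detect) errors would instead use a code such as Hamming or Reed-Solomon; the framing idea is the same.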
Computing device
[104] In one embodiment, the computing device 121 can be disposed to perform one or more processes with respect to information from the sensor 113 (whether or not the information from the sensor 113 was provided in response to the active emitter 115).
[105] For example, the computing device 121 can be disposed to perform, in response to information from the sensor 113 or the active emitter 115, one or more of:
— determining whether the patient 111 is about to undergo, or is currently undergoing, a medical condition, or predicting whether the patient is likely to be about to undergo that condition;
— detecting a degree of severity for the patient 111 of the medical condition;
— preventing the patient 111 from undergoing the medical condition, or from undergoing the medical condition at a selected amount of severity;
— treating the patient 111 when they are undergoing the medical condition, or when they are undergoing the medical condition at a selected amount of severity;
— training the patient 111 to conduct self-care, or to call for others to conduct care, with respect to the medical condition;
— related processing of the patient 111 with respect to the medical condition; or
— as otherwise consistent with this Application.
[106] In one embodiment, the computing device 121 can be disposed to perform one or more processes with respect to information from the sensor 113 or the active emitter 115, and to perform the same processes with respect to other medical conditions, such as those described herein. Other and further details with respect to operation of the computing device 121 to perform such processes are shown and described in the Incorporated Disclosures.
[107] This Application provides techniques for monitoring, detecting, and predicting medical conditions, for preventing those medical conditions, for treating those medical conditions (such as in real time), and for training patients to conduct self-care with respect to those medical conditions. The medical conditions can include one or more of:
— dental damage, dental decay, dental and gum infections (collectively “dental conditions”), jaw grinding, or orthodontic maladjustments;
— food allergies or food poisoning, gastroenterological disorders, GERD (gastroenterological reflux disorders), or stomach ulcers;
— gunshot wounds, piercing or stab wounds, or other penetration of the outer membranes of the body;
— ingestion, inhalation, or progress, of allergens, foreign objects, or poisons;
— inhalation or insertion of choking or drowning hazards, or other objects possibly blocking the patient’s breathing;
— low blood oxygenation, arrhythmia, bradycardia, tachycardia, coronary disease, coronary infarction, inefficient pulmonary transfer of oxygen from the lungs, lung disease or blockage (including by fluids), or other cardio-pulmonary disorders;
— migraine onset and migraine events (collectively “migraines” or “migraine activity”), photophobia, or neuro-ophthalmic disorders;
— ocular ulcers, foreign object or adverse chemical presence in the eye, retinal detachment, or other eye damage;
— seizures or strokes, brain bleeding, concussions, or traumatic brain injuries;
— related medical conditions, or combinations of two or more such medical conditions; or
— as otherwise consistent with this Application.
[108] The computing device 121 can maintain information about the progress of the medical conditions, and of combinations of more than one such medical condition, for each patient 111 individually or for a set of patients collectively. The computing device 121 can use that information to determine, with or without assistance from the patient 111, whether those medical conditions (or combinations thereof) are occurring or are likely to occur near-term. The system 100 (such as the digital eyewear described in the Incorporated Disclosures) can conduct actions that are predicted to ameliorate or treat those medical conditions (or combinations thereof). The system can also train the patient (such as using a reward procedure) to conduct self-care (such as those patient actions beyond the scope of the system itself) that can ameliorate or treat those medical conditions and combinations thereof.
[109] For example, the computing device 121 can be disposed to determine, in response to information from the sensor 113, information with respect to a patient’s medical condition. As described herein, the information with respect to the medical condition can include one or more of:
— determining whether the patient 111 is likely to be about to undergo, or is currently undergoing, the medical condition, thus predicting/detecting the medical condition;
— determining a degree of severity of the medical condition for the patient 111, and maintaining a record thereof, thus monitoring/recording the medical condition;
— attempting to treat the patient 111 with respect to the medical condition, or to ameliorate the effects of the medical condition, thus treating/ameliorating the medical condition;
— encouraging the patient 111 to conduct self-care, or exchanging information with others (such as emergency responders or medical personnel) to conduct care, with respect to the medical condition.
[110] Other and further details with respect to operation of the computing device 121 to make such a determination are shown and described in the Incorporated Disclosures.
Communication device
[111] In one embodiment, the interface portion 120 can include a communication device 122 coupled to the body-facing portion 110, such as using the computing device 121. In such cases, the communication device 122 can be disposed to send messages to the computing device 121, which can encode them for the sensor 113 and/or the active emitter 115. Similarly, in such cases, the computing device 121 can be disposed to receive messages from the sensor 113 and/or the active emitter 115 and decode those messages to be delivered to the communication device 122.
[112] Alternatively, the communication device 122 can be coupled directly to the sensor 113, directly to the active emitter 115, or directly to both.
[113] The communication device 122 can be disposed to couple information between, first, the sensor 113 and/or the active emitter 115 and, second, a user-facing portion 130. This can have the effect that the sensor 113 and/or the active emitter 115 can exchange commands and/or information with the user-facing portion 130. The user-facing portion 130 can be disposed to issue commands to, and/or provide information to, the sensor 113 and/or the active emitter 115. Similarly, the user-facing portion 130 can be disposed to receive responses from the sensor 113 and/or the active emitter 115.
[114] The communication device 122 can be disposed to couple data from the sensor 113 and/or the active emitter 115 to the user-facing portion 130. For example, the data can include data from the sensor 113, such as a measure of blood oxygenation, blood pressure, serum toxicity with respect to a selected toxin, or otherwise as described herein.
[115] The communication device 122 can be disposed to couple commands or information from the user-facing portion 130 to the sensor 113 and/or the active emitter 115. For example, the user 140 can send commands or information to the body-facing portion 110 so as to direct the sensor 113 to increase or decrease its baseline or sensitivity. This can have the effect of adjusting the sensor’s relative response to raw inputs. One example might include when the sensor 113 is responding to the user’s blood pressure; in such cases, the user might direct the sensor to respond at a zero point of 120/80 (systolic/diastolic pressure) and with a sensitivity of 0.1 units of pressure.
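One way to read the baseline-and-sensitivity adjustment just described is as a simple affine rescaling of the raw reading. A minimal sketch follows, using the blood-pressure numbers from the example above; the class and field names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class SensorScaling:
    """Zero point and sensitivity, as set by commands from the user-facing portion 130."""
    baseline: float     # raw value reported as zero, e.g. 120.0 for systolic pressure
    sensitivity: float  # raw units per reported unit, e.g. 0.1 units of pressure

    def scale(self, raw: float) -> float:
        """Convert a raw reading into units relative to the configured zero point."""
        return (raw - self.baseline) / self.sensitivity

# Per the example in the text: zero point 120/80, sensitivity 0.1 units.
systolic = SensorScaling(baseline=120.0, sensitivity=0.1)
diastolic = SensorScaling(baseline=80.0, sensitivity=0.1)
print(systolic.scale(121.5))   # 15.0 units above the zero point
print(diastolic.scale(79.0))   # -10.0 units below the zero point
```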
[116] The commands or information from the user-facing portion 130 can also include information with respect to an ambient environment near the patient 111a. For example, one aspect of the ambient environment might include a loudness of sound near the patient 111a. Alternatively, the sensor 113 can determine information with respect to the ambient environment; in such cases, the sensor can use that information to determine its own set point, or a baseline for a “normal” patient’s response thereto. The sensor 113 can be disposed to send that information to the user-facing portion 130, such as for presentation to the user 140.
[117] As otherwise and further described herein, the user-facing portion 130 can be disposed primarily or substantially external to the patient 111a or the target mechanism 111b. This can have the effect that the sensor 113 and/or the active emitter 115 can still be controlled and/or monitored even when disposed in a location in a live patient 111a that is infeasible or medically unwise to access, such as within a tooth (or underneath a cap for a tooth), within an internal organ or other internal body structure, within a wound subject to monitoring, or otherwise as described herein. Similarly, this can have the effect that the sensor 113 and/or the active emitter 115 can still be controlled and/or monitored even when disposed in a location in a target mechanism 111b that is infeasible or unsafe to access, such as within an electrical or hydraulic line, within an internal part, within a rapidly moving or sensitively calibrated part (such as an aircraft propeller or an engine turbine blade), or otherwise as described herein.
[118] In one embodiment, the sensor 113 can be disposed to determine information with respect to an impulse response, a phonic modification, or another set of phonic features of the patient’s dental or oral elements. The computing device 121 can be disposed to receive that information from the sensor 113 and to adjust an oral output from the patient 111a in response thereto, so as to improve an audio acuity of the patient’s voice at an intended external receiver. For example, when the patient 111a reduces the volume of their voice so as to avoid being overheard, the computing device 121 can determine an intended message and can amplify that message at a focal point external to the patient 111a and at the intended receiver’s location.
User-facing portion
[119] In one embodiment, the user-facing portion 130 can be disposed substantially externally to the patient 111a or substantially externally to the target mechanism 111b. As described herein, the computing device 121 and the communication device 122 can be disposed to send commands and information from the user-facing portion 130 to the body-facing portion 110 (or a selected part thereof), and to receive responses in return. This can have the effect that the user-facing portion 130 can be disposed as a controller and monitor for the body-facing portion 110 (or a selected part thereof).
[120] For example, the user-facing portion 130 can be disposed to operate, or to adjust the operation of, the sensor 113 and/or the active emitter 115. In such cases, with respect to a live patient 111a, the user-facing portion 130 can be disposed to present medical information to a user 140 (not part of the system 100). Similarly, the user-facing portion 130 can be disposed to receive commands or information from the user 140 so as to provide medical care for the patient 111a.
[121] In one embodiment, the user-facing portion 130 can include a smartphone or another mobile device 131, such as an Apple™ iPhone™ disposed to operate under the control of an application 132 (“app”). The app 132 can be disposed to control the mobile device 131 so as to present information from the sensor 113 and/or the active emitter 115, or from the computing device 121, to the user 140. The app 132 can also be disposed to control the mobile device 131 so as to send commands and/or information to the body-facing portion 110 or to the computing device 121. This can have the effect that the user 140 can tailor operation of the body-facing portion 110 so as to provide medical monitoring or treatment for the patient 111a.
[122] In one embodiment, the user-facing portion 130 can include a server 133, a database 134, or another external device 135, disposed to be coupled to the mobile device 131. In such cases, the app 132 can be disposed to couple to one or more of the server 133, the database 134, or the other external device 135, such as using a communication system. The communication system can include one or more of: wireless communication, the Internet, text or voice messaging, social media, Bluetooth™ communication, a local mesh network, or other communication techniques.
[123] For example, the app 132 can be disposed to maintain information with respect to the patient 111a in the database 134, storing that information from time to time (periodically or in response to one or more triggering events) and retrieving that information when needed.
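The “from time to time (periodically or in response to one or more triggering events)” storage policy can be expressed compactly. The sketch below is one possible shape, with an assumed `store` callable standing in for whatever interface the database 134 actually exposes.

```python
import time

class RecordingPolicy:
    """Persist patient information periodically, or immediately on a triggering event."""

    def __init__(self, store, period_s: float = 300.0):
        self.store = store          # assumed callable that writes a record to the database 134
        self.period_s = period_s    # assumed interval between routine writes
        self.last_write = float("-inf")

    def on_reading(self, reading: dict, triggered: bool = False) -> None:
        """Write when the period has elapsed, or unconditionally when triggered."""
        now = time.monotonic()
        if triggered or (now - self.last_write) >= self.period_s:
            self.store(reading)
            self.last_write = now

# Usage: policy.on_reading({"spo2": 0.97})                  # routine reading
#        policy.on_reading({"spo2": 0.82}, triggered=True)  # alarm condition, stored at once
```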
[124] For another example, the app 132 can be disposed to send commands to, and receive responses from, the server 133, so as to use one or more capabilities thereof. The server 133 can be disposed to provide computing power, specialized hardware or software, one or more virtual machines, or otherwise as described herein. The specialized hardware or software, or the virtual machines, can provide an artificial intelligence or machine learning system disposed to interpret and/or control the body-facing portion 110.
[125] For another example, the app 132 can be disposed to provide an augmented reality or virtual reality presentation to the user 140, so as to allow the user to more easily visualize or interact with the body-facing portion 110.
User
[126] In one embodiment, the user-facing portion 130 can interface with a user 140 (not part of the system 100), such as a human being 141. The user 140 can be the patient 111a themselves: the patient is disposed to be monitored or treated and includes the internal medical structures 112, such as described with respect to the body-facing portion 110. Alternatively, the user 140 can include other persons such as emergency responders, medical personnel, relatives or caregivers for the patient, or volunteers or other “good Samaritans” who are available to assist the patient. The other persons can monitor or treat the patient 111a or the patient’s medical structures 112.
[127] Similarly, in one embodiment, with respect to a target mechanism 111b, the user 140 can include a mechanical engineer or another mechanic, operators associated with the target mechanism, or other persons knowledgeable with respect to the target mechanism.
[128] This can have the effect that the user 140 can exchange information with the system 100, such as by receiving information from the patient’s medical structures 112 and/or sending commands/controls or other information to the body-facing portion 110 and/or the computing device 121. The user 140 can (A) receive audio/video or other information from the system 100, (B) send commands/controls or other information to the system, or (C) act otherwise consistent with the elements and method steps described in this Application.
[129] In one embodiment, the user 140 can be (or include) a computing device 142a or a computer-controlled device 142b. For example, the computing device 142a can include an artificial intelligence or machine learning system disposed to control the body-facing portion 110, such as to predict or detect, monitor or record, prevent or treat, or encourage self-care or call for other care, with respect to one or more medical conditions (such as migraines or photophobia).
[130] For another example, the computer-controlled device 142b can include a device capable of one or more of:
— communication with a server or other logically remote device, such as an Amazon™ Echo™ or other active communication device;
— performing specific processes with respect to medical information about the user, such as by an app operating on a smartphone and/or a remote server;
— recording or processing medical information with respect to the user, such as a Fitbit™ or other fitness or activity monitor;
— presenting information to the user or allowing the user to control one or more other devices.
Fig. 2 — Example method of operation
[131] Fig. 2 shows a conceptual drawing of an example method of using a system.
[132] A method 200 can include flow points and method steps, such as shown in the figure and described herein.
[133] By the nature of the written word, these flow points and method steps are described in a particular order; however, this description does not limit the method to this particular order or to any other particular order. These flow points or method steps can be performed in a different order; concurrently or partially concurrently; in a parallel, pipelined, or quasi-parallel manner; or in another manner. These flow points or method steps can be performed in part, paused, and returned to for completion. These flow points or method steps can be performed as co-routines or otherwise. In the context of the invention, there is no particular reason for any of these limitations.
[134] One or more portions of the method 200 are sometimes described as being performed by particular elements of the system 100, as described with respect to fig. 1, or sometimes by “the method” itself. When a flow point or method step is described as being performed by “the method,” it can be performed by one or more of those elements, by one or more portions of those elements, by an element not described with respect to the figure, by a combination or conjunction thereof, or otherwise.
[135] This Application primarily describes one possible implementation, in which one or more elements of the system 100 include programmable computing devices interacting with other elements. However, there is no particular requirement for any of these limitations. For example, the same or similar effects can be achieved using one or more electronic circuits, one or more specialized processing devices operating using specialized control elements, or one or more remote computing devices directing operation of one or more of the sensing and operating devices described herein.
Beginning of method
[136] A flow point 200A indicates a beginning of the method. When appropriate, the patient 111a or a user 140 can direct the system 100 to begin the method 200. The system 100 and its elements can set/reset their states and can prepare to conduct their respective portions of the method 200.
[137] In one embodiment, the method 200 can operate continuously. Alternatively, the method 200 can operate from time to time (periodically or aperiodically), waking up to perform its steps and going back to sleep until its next wake-up. Alternatively, the method 200 can operate when triggered, such as by the patient 111a or a user 140, by the sensor 113, or by the user-facing portion 130 receiving a message from an external device. However, these techniques for triggering the method 200 are exemplary only; there is no particular requirement for any such limitation.
Operation of body-facing portion
[138] A flow point 210 indicates that the body-facing portion 110 is ready to operate.
[139] At a step 211, the elements of the body-facing portion 110 are disposed so as to perform their described functions.
[140] At a sub-step 211a, with respect to the patient 111a, the sensor 113 and the active emitter 115 are disposed in the dental structure 112. Alternatively, with respect to the target mechanism 111b, the sensor 113 and the active emitter 115 are disposed in an electrical or mechanical structure 112.
[141] At a sub-step 211b, the treatment device 116 is disposed in the dental structure 112. Alternatively, the treatment device 116 is disposed in the electrical or mechanical structure 112.
[142] At a sub-step 211c, the power source 117 is coupled to the body-facing portion 110. As described herein, the power source 117 can also be coupled to the interface portion 120. In one embodiment, the power source 117 can be disposed in the dental or other body structure 112, or in the electrical or mechanical structure 112. However, there is no particular requirement for any such limitation.
[143] At a step 212, the elements of the body-facing portion 110 are disposed so as to receive information with respect to their described functions.
[144] At a sub-step 212a, the sensor 113 and the active emitter 115 are disposed and ready to receive medical information from the blood vessel 114 or other body structure. Alternatively, the sensor 113 and the active emitter 115 are ready to receive technical information from the electrical or mechanical structure 112.
[145] At a sub-step 212b, the sensor 113 receives medical information from the blood vessel 114 or other body structure. Alternatively, the sensor 113 receives technical information from the electrical or mechanical structure 112. As described herein, the medical information can include one or more elements of data with respect to the patient 111a or the target mechanism 111b.
[146] At a sub-step 212c, the active emitter 115 sends an energetic signal into the dental structure or other body structure 112. Alternatively, the active emitter 115 sends an energetic signal into the electrical or mechanical structure 112. As described herein, the energetic signal can include an electromagnetic signal, an ultrasonic signal, or another signal, possibly encoded so as to maximize return of useful information from the patient 111a or the target mechanism 111b.
[147] At a sub-step 212d, the active emitter 115 receives a response to the energetic signal from the dental structure or other body structure 112. Alternatively, the active emitter 115 receives a response to the energetic signal from the electrical or mechanical structure 112. As described herein, for example, the energetic signal can include an ultrasonic signal and the response can include an echo from the body structure 112 of the patient 111a or from the electrical or mechanical structure 112 of the target mechanism 111b.
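In the ultrasonic case, the emit-and-echo exchange of sub-steps 212c and 212d amounts to a time-of-flight measurement: the distance to the reflecting structure is the propagation speed times the round-trip delay, divided by two. A minimal sketch, assuming a typical speed of sound in soft tissue:

```python
SPEED_IN_TISSUE_M_S = 1540.0  # typical speed of sound in soft tissue (assumed constant)

def echo_distance_m(round_trip_s: float,
                    speed_m_s: float = SPEED_IN_TISSUE_M_S) -> float:
    """Distance to the echoing structure: the signal travels out and back,
    so halve the round-trip time."""
    return speed_m_s * round_trip_s / 2.0

# Example: a 13 microsecond round trip corresponds to roughly 1 cm of tissue.
print(echo_distance_m(13e-6))  # ~0.010 m
```

For the target-mechanism case, the same arithmetic applies with the propagation speed of the structure’s material substituted.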
[148] At a step 213, the elements of the body-facing portion 110 are disposed so as to send information with respect to their described functions.
[149] At a sub-step 213a, the sensor 113 and the active emitter 115 send their information to the computing device 121. As described herein, the sensor 113 and the active emitter 115 can each use a selected format for the information they send. Alternatively, as described herein, one or more of the sensor 113 or the active emitter 115 can send their information directly to the communication device 122.
[150] At a step 214, the treatment device 116 is disposed and ready to perform treatment of a selected medical condition.
[151] At a sub-step 214a, the treatment device 116 receives a message from the interface portion 120 directing it to perform treatment of the patient 111a. Alternatively, the treatment device 116 receives a message from the interface portion 120 directing it to perform treatment of the target mechanism 111b.
[152] At a step 215, the elements of the body-facing portion 110 are disposed so as to draw energy from the power source 117.
[153] At a sub-step 215a, the sensor 113, the active emitter 115, and the treatment device 116 draw energy from the power source 117. As described herein, the interface portion 120 can also draw energy from the power source 117.
[154] At a sub-step 215b, the power source 117 recharges itself.
[155] The method 200 can proceed with the flow point 220.
Operation of interface portion
[156] A flow point 220 indicates that the interface portion 120 is ready to operate.
[157] At a step 221, with respect to the patient 111a, the computing device 121 is disposed so as to perform functions with respect to selected medical conditions. Alternatively, with respect to the target mechanism 111b, the computing device 121 is disposed so as to perform functions with respect to selected electrical or mechanical conditions. As described herein, these functions can also be performed by an external device, such as in the user-facing portion 130.
[158] At a sub-step 221a, the computing device 121 is ready to receive information from the sensor 113 or the active emitter 115.
[159] At a sub-step 221b, the computing device 121 receives information from one or more of the sensor 113 or the active emitter 115.
[160] At a sub-step 221c, the computing device 121 performs one or more operations on the information it receives from the sensor 113 or the active emitter 115. For example, as described herein, with respect to the patient 111a, the computing device 121 can be disposed to determine one or more of: prediction/determination of whether the patient 111a is subject to a selected medical condition, monitoring/recording information with respect to the selected medical condition, determining how to prevent/treat the selected medical condition, or determining how to encourage the patient 111a to perform self-care or exchanging information with other personnel to perform care with respect to the selected medical condition. Alternatively, as described herein, with respect to the target mechanism 111b, the computing device 121 can be disposed to determine one or more of: prediction/determination of whether the target mechanism 111b is subject to a selected electrical or mechanical condition, monitoring/recording information with respect to the selected electrical or mechanical condition, determining how to prevent/repair the selected electrical or mechanical condition, or determining how to encourage personnel associated with the target mechanism 111b (such as a pilot) to perform prevention/repair or exchanging information with other personnel to perform prevention/repair with respect to the selected electrical or mechanical condition.
[161] At a sub-step 221d, the computing device 121 sends information with respect to prevention/treatment of the selected medical condition to the treatment device 116. As described herein, with respect to the patient 111a, the treatment device 116 receives this information and performs treatment of the selected medical condition in response thereto. Alternatively, with respect to the target mechanism 111b, the treatment device 116 receives this information and performs repairs of the selected electrical or mechanical condition in response thereto.
[162] At a step 222, the computing device 121 is disposed so as to send information to the user-facing portion 130.
[163] At a sub-step 222a, the computing device 121 decodes information from the particular formats associated with the sensor 113 and the active emitter 115.
[164] At a sub-step 222b, the computing device 121 encodes information into the particular formats associated with the communication device 122.
[165] At a sub-step 222c, the computing device 121 sends the encoded information to the communication device 122. Alternatively, the computing device 121 can send the encoded information directly to the user-facing portion 130.
[166] At a step 223, the communication device 122 is disposed so as to transmit information from the computing device 121 to the user-facing portion 130.
[167] At a sub-step 223a, the communication device 122 receives information from the computing device 121 for delivery to the user-facing portion 130. Alternatively, the communication device 122 can receive information directly from one or more of the sensor 113 or the active emitter 115 that is for delivery to the user-facing portion 130.
[168] At a sub-step 223b, the communication device 122 sends the information it received from the computing device 121 to the user-facing portion 130. Alternatively, the communication device 122 can send the information it received directly from one or more of the sensor 113 or the active emitter 115 to the user-facing portion 130.
[169] At a step 224, the elements of the interface portion 120 are disposed so as to draw energy from the power source 117 (in the body-facing portion 110).
[170] At a sub-step 224a, as described herein, the computing device 121 and the communication device 122 draw energy from the power source 117.
[171] At a sub-step 224b, as described herein, the power source 117 recharges itself.
[172] The method 200 can proceed with the flow point 230.
Operation of user-facing portion
[173] A flow point 230 indicates that the user-facing portion 130 is ready to operate.
[174] At a step 231, the elements of the user-facing portion 130 are disposed so as to perform their described functions.
[175] At a sub-step 231a, the mobile device 131 is coupled to the interface portion 120 and disposed and ready to receive information therefrom, and disposed and ready to interact with the user 140. With respect to the patient 111a, the information includes medical information. Alternatively, with respect to the target mechanism 111b, the information includes technical information from the electrical or mechanical structure 112.
[176] At a sub-step 231b, the mobile device 131, under control of the app 132, receives information from the interface portion 120.
[177] At a step 232, with respect to the patient 111a, the mobile device 131, under control of the app 132, is disposed so as to perform functions with respect to selected medical conditions. Alternatively, with respect to the target mechanism 111b, the mobile device 131, under control of the app 132, is disposed so as to perform functions with respect to selected electrical or mechanical conditions. As described herein, these functions can also be performed by one or more devices called upon by the mobile device 131, such as the server 133.
[178] At a sub-step 232a, the mobile device 131 is ready to receive information from the interface portion 120 (and possibly directly from the body-facing portion).
[179] At a sub-step 232b, the mobile device 131 receives information from the interface portion 120. Alternatively, the mobile device 131 can receive information directly from the body-facing portion.
[180] At a sub-step 232c, the mobile device 131 performs one or more operations on the information it receives from the interface portion 120 (and possibly from the body-facing portion 110), such as operations described with respect to the sub-step 221c. As part of this sub-step, the mobile device 131 can call upon other devices, such as the server 133 (and possibly another external device).
[181] At a sub-step 232d, the mobile device 131 provides information with respect to prediction/determination, monitoring/recording, and other information possibly useful to the user 140 at an output. As part of this sub-step, the mobile device 131 can call upon other devices, such as the server 133 (and possibly another external device). Also as part of this sub-step, the mobile device 131 can send information with respect to monitoring/recording to the database 134 (and possibly another external device).
[182] At a sub-step 232e, with respect to the patient 111a, the mobile device 131 provides information with respect to prevention/treatment of the selected medical condition to the interface portion 120 for delivery to the treatment device 116, as described with respect to the sub-step 221d. Alternatively, with respect to the target mechanism 111b, the mobile device 131 provides information with respect to prevention/repair of the selected electrical or mechanical condition to the interface portion 120 for delivery to the treatment device 116, as described with respect to the sub-step 221d. As part of this sub-step, the mobile device 131 can call upon other devices, such as the server 133.
[183] At a step 233, the mobile device 131 is disposed so as to receive commands or information from the user 140 for delivery to the body-facing portion 110.
[184] At a sub-step 233a, the mobile device 131 receives information from the user 140 for delivery to the body-facing portion 110.
[185] At a sub-step 233b, the mobile device 131 sends the information it received from the user 140 to the interface portion 120 for delivery to the body-facing portion 110.
[186] At a step 234, the mobile device 131 is disposed so as to receive responses from the interface portion 120 for delivery to the user 140.
[187] At a sub-step 234a, the mobile device 131 receives responses from the interface portion 120 for delivery to the user 140.
[188] At a sub-step 234b, the mobile device 131 provides the responses it received from the interface portion 120 to the user 140.
[189] The method 200 can proceed with the flow point 200B.
End of method
[190] A flow point 200B indicates an end of the method. When appropriate, the patient 111a or a user 140 can direct the system 100 to end the method 200. The system 100 and its elements can set/reset their states and can prepare to restart their respective portions of the method 200.
[191] As part of the flow point 200B, the method can be restarted at the flow point 200A.
ALTERNATIVE EMBODIMENTS
[192] While this Application primarily describes a system, and techniques for use therewith, related to living patients and their dental structures, sensors and active emitters with respect to blood vessels, and related matters; those skilled in the art would recognize that techniques described herein can also be used with respect to other medical structures, with respect to electrical or mechanical devices and their structures, and otherwise as described herein.
[193] After reading this Application, those skilled in the art will recognize that the techniques described herein are applicable to a wide variety of different types of patients and their medical conditions, and a wide variety of different types of target mechanisms and their electrical or mechanical conditions; to a wide variety of information about medical conditions and electrical or mechanical conditions; to a wide variety of different techniques for prediction/determination, monitoring/recording, prevention/treatment or prevention/repair, and encouragement of self-care or repair or exchanging information for care or repair; or otherwise as described herein.
[194] This Application describes a preferred embodiment with preferred process steps and, where applicable, preferred data structures. After reading this Application, those skilled in the art would recognize that, where any calculation or computation is appropriate, embodiments of the description can be implemented using general purpose computing devices or switching processors, special purpose computing devices or switching processors, other circuits adapted to particular process steps and data structures described herein, or combinations or conjunctions thereof, and that implementation of the process steps and data structures described herein would not require undue experimentation or further invention.
[195] The claims are incorporated into the specification as if fully set forth herein.

Claims

1. Apparatus including a sensor disposed to couple information between a patient and one or more of: a communication device or a computing device; wherein the communication device is disposed to couple one or more of: the sensor or the computing device, to one or more of: a first device internal to the patient at the behest of an external device, or a second device external to the patient; the computing device is disposed to perform one or more of: encoding/decoding information in exchange with the sensor, communicating with an external device, or performing a process with respect to one or more medical conditions or information relating to the patient.
2. Apparatus as in claim 1, including a power source disposed to power one or more of: the sensor, the communication device, or the computing device; wherein the power source includes rechargeable energy storage; wherein the energy storage is rechargeable by one or more of: an external inductor disposed to receive externally stored energy, an energy harvester disposed to receive energy from local electromagnetic fields, an internal inductor disposed to receive energy from the patient’s mouth or head movements.
3. Apparatus as in claim 1, wherein the sensor is disposed in a dental location.
4. Apparatus as in claim 3, wherein the dental location is selected from one or more of: in a tooth or under a portion thereof; in a cap or crown or under a portion thereof; between two teeth; cemented or otherwise coupled to a tooth; affixed to or implanted in or near a gum, a cheek, the jaw, or the tongue; or affixed to or implanted in or near a dental device or other dental equipment.
5. Apparatus as in claim 4, wherein the dental device or other dental equipment includes a cap, a denture, a filling, any orthodontia equipment, or any oral medical device.
6. Apparatus as in claim 3, wherein the sensor is coupled to a blood vessel, coupled to a gap or void in dental matter or dental tissue, or otherwise disposed to measure a medical condition of the patient.
7. Apparatus as in claim 1, wherein the sensor is disposed to exchange information with the patient or with an external device, in response to the sensor’s oral location.
8. Apparatus as in claim 7, wherein the external device can include one or more of: a voice-activated communication device, an augmented reality or virtual reality system, a fitness or activity monitor, a lens or eyewear, a smartphone or other mobile device, a speaker, an electromagnetic or ultrasonic transceiver/transmitter, one or more vehicle controls, a remote computing system, a remote database or other storage system, or a remote presentation system.
9. Apparatus as in claim 7, wherein the sensor is disposed to exchange information with medical devices coupled to other anatomy of the patient.
10. Apparatus as in claim 9, wherein the medical devices include one or more of: one or more audio/video devices, one or more hearing aids, a heart monitor, a pacemaker, a separate blood glucose monitor, a separate blood pressure monitor, a separate blood oxygenation monitor, an ultrasonic or x-ray device disposed to determine one or more features of the user’s anatomy, one or more other devices disposed to provide an augmented reality or virtual reality presentation to the user, or one or more other devices disposed to otherwise stimulate one or more elements of the patient’s anatomy.
11. Apparatus as in claim 1, wherein the sensor is disposed to exchange information in response to one or more of: when the patient moves their head, when the patient performs an oral gesture, or when the patient makes a speaking motion.
12. Apparatus as in claim 11, wherein the speaking motion is defined in response to a cheek, the jaw, one or more teeth, the tongue, or by breathing.
13. Apparatus as in claim 11, wherein the information is disposed to be exchanged in response to a measurement of the oral gesture.
14. Apparatus as in claim 1, wherein the communication device is disposed to couple information from an external device to the user at or near an oral location.
15. Apparatus as in claim 14, wherein the oral location includes one or more of: an ear canal, an oropharyngeal canal, a sinus cavity or other void defined by the user’s head or neck, or bony structures in the user’s head or neck.
16. Apparatus as in claim 1, wherein the processes with respect to the one or more medical conditions or information include one or more of: predicting/detecting, monitoring/recording, treating/ameliorating, or encouraging self-care or exchanging information with medical personnel for care, with respect to the one or more medical conditions.
17. Apparatus as in claim 1, wherein the one or more medical conditions include one or more of: a measurement of blood glucose, a measurement of blood oxygenation, a measurement of blood pressure, a measurement of pulse rate, or another measurement from a sensor located in a dental position or near a blood vessel.
18. Apparatus as in claim 1, wherein the sensor is disposed to determine information with respect to one or more of: an impulse response, a phonic modification, or another set of phonic features of the user’s dental or oral elements.
19. Apparatus as in claim 18, wherein the computing device is disposed to receive that information from the sensor and to adjust an oral output from the patient in response thereto, so as to improve an audio acuity of the user’s voice at an intended receiver.
20. Apparatus as in claim 19, wherein the computing device is disposed so that when the patient reduces a volume of their voice, the computing device receives that information from the sensor and adjusts an oral output from the user in response thereto; wherein an audio acuity of the patient’s voice at an intended receiver is improved.
21. Apparatus as in claim 19, wherein the computing device is disposed so that when the patient reduces a volume of their voice, the computing device receives that information from the sensor and adjusts an oral output from the patient in response thereto; wherein an audio message encoded by the patient’s voice is amplified at an intended receiver.
22. Apparatus as in claim 1, including an active emitter, the active emitter disposed to: receive a command from the computing device; emit a signal within a body structure of the patient; receive a response to the signal; provide a measurement with respect to the patient in response to the response to the signal.
23. Apparatus including a crown coupled to a tooth; a circuit underneath the crown and coupled to a blood vessel, the circuit disposed to measure a medical condition; a computing device coupled to the circuit, the computing device being disposed to perform a process with respect to information from the circuit; wherein the process includes one or more of: predicting/detecting, monitoring/recording, treating/ameliorating, or encouraging self-care or exchanging information with caregivers, with respect to a medical condition of a patient.
24. Apparatus as in claim 23, wherein the medical condition includes one or more of: blood oxygen, blood glucose, blood pressure, pulse rate, or serum level.
25. Apparatus as in claim 23, wherein the medical condition includes one or more of: dental damage, dental decay, dental and gum infections, jaw or tooth grinding, or orthodontic maladjustments.
26. Apparatus as in claim 23, wherein the medical condition includes one or more of: migraine, photophobia, or neuro-ophthalmic disorder.
27. Apparatus as in claim 23, wherein the medical condition includes one or more of: food allergies or food poisoning, gastroenterological disorders, gastroenterological reflux disorders, or stomach ulcers.
28. Apparatus as in claim 23, wherein the medical condition includes one or more of: gunshot wounds, piercing or stab wounds, or other penetration of the outer membranes of the body.
29. Apparatus as in claim 23, wherein the medical condition includes one or more of: ingestion, inhalation, or progress, of one or more of: allergens, foreign objects, or poisons.
30. Apparatus as in claim 23, wherein the medical condition includes one or more of: inhalation or insertion of choking or drowning hazards, or other objects blocking the patient’s breathing.
31. Apparatus as in claim 23, wherein the medical condition includes one or more of: low blood oxygenation, arrhythmia, bradycardia, tachycardia, coronary disease, coronary infarction, inefficient pulmonary transfer of oxygen from the lungs, lung disease or blockage, or other cardio-pulmonary disorders.
32. Apparatus as in claim 23, wherein the medical condition includes one or more of: ocular ulcers, foreign object or adverse chemical presence in the eye, retinal detachment, or other eye damage.
33. Apparatus as in claim 23, wherein the medical condition includes one or more of: seizures or strokes, brain bleeding, concussions, or traumatic brain injuries.
34. Apparatus as in claim 23, wherein the medical condition includes combinations of two or more of: a measurement of blood glucose, a measurement of blood oxygenation, a measurement of blood pressure, a measurement of pulse rate, or another measurement from a sensor located in a dental position or near a blood vessel; migraine, photophobia, or neuro-ophthalmic disorder; dental damage, dental decay, dental and gum infections, jaw grinding, or orthodontic maladjustments; food allergies or food poisoning, gastroenterological disorders, gastroenterological reflux disorders, or stomach ulcers; gunshot wounds, piercing or stab wounds, or other penetration of the outer membranes of the body; ingestion, inhalation, or progress, of one or more of: allergens, foreign objects, or poisons; inhalation or insertion of choking or drowning hazards, or other objects blocking the patient’s breathing; low blood oxygenation, arrhythmia, bradycardia, tachycardia, coronary disease, coronary infarction, inefficient pulmonary transfer of oxygen from the lungs, lung disease or blockage, or other cardio-pulmonary disorders; ocular ulcers, foreign object or adverse chemical presence in the eye, retinal detachment, or other eye damage; or seizures or strokes, brain bleeding, concussions, or traumatic brain injuries.
35. Apparatus as in claim 23, wherein the coupling between the crown and the tooth is disposed to be removable.
36. Apparatus as in claim 23, including a transmitter coupled to the circuit, the transmitter being disposed to send information to an external device.
37. Apparatus as in claim 36, wherein the external device can include one or more of: a voice-activated communication device or other active communication device, an augmented reality or virtual reality system, a fitness or activity monitor, a lens or eyewear, a smartphone or other mobile device, a speaker, an electromagnetic or ultrasonic transceiver/transmitter, one or more vehicle controls, a remote computing system, a remote database or other storage system, or a remote presentation system.
38. Apparatus as in claim 36, wherein the circuit is disposed to exchange information using the transmitter with medical devices coupled to patient anatomy other than the dental structure.
39. Apparatus as in claim 38, wherein the medical devices include one or more of: one or more audio/video devices, one or more hearing aids, a heart monitor, a pacemaker, a separate blood glucose monitor, a separate blood pressure monitor, a separate blood oxygenation monitor, an ultrasonic or x-ray device disposed to determine one or more features of the user’s anatomy, one or more other devices disposed to provide an augmented reality or virtual reality presentation to the user, or one or more other devices disposed to otherwise stimulate one or more elements of the patient’s anatomy.
40. Apparatus as in claim 23, including a power source disposed to power the circuit or the computing device; wherein the power source includes rechargeable energy storage; wherein the energy storage is rechargeable by one or more of: an external inductor disposed to receive externally stored energy, an energy harvester disposed to receive energy from local electromagnetic fields, an internal inductor disposed to receive energy from the patient’s mouth or head movements.
41. Apparatus as in claim 23, wherein the circuit is disposed to determine information with respect to one or more of: an impulse response, a phonic modification, or another set of phonic features of the user’s dental or oral elements.
42. Apparatus as in claim 41, wherein the computing device is disposed to receive that information from the sensor and to adjust an oral output from the patient in response thereto, so as to improve an audio acuity of the user’s voice at an intended receiver.
43. Apparatus as in claim 42, wherein the computing device is disposed so that when the patient reduces a volume of their voice, the computing device receives that information from the sensor and adjusts an oral output from the user in response thereto; wherein an audio acuity of the patient’s voice at an intended receiver is improved.
44. Apparatus as in claim 42, wherein the computing device is disposed so that when the patient reduces a volume of their voice, the computing device receives that information from the sensor and adjusts an oral output from the patient in response thereto; wherein an audio message encoded by the patient’s voice is amplified at an intended receiver.
45. Apparatus as in claim 23, wherein the communication device is disposed to couple information between the sensor and a mobile device.
46. Apparatus as in claim 45, wherein the mobile device is disposed to call upon a server to perform a function with respect to information from the sensor, the function including one or more of: predicting/detecting, monitoring/recording, treating/ameliorating, or encouraging self-care or exchanging information with medical personnel for care, with respect to one or more patient medical conditions.
47. Apparatus as in claim 45, wherein the mobile device is disposed to make an augmented reality presentation to a caregiver, the augmented reality presentation including information with respect to a medical condition of the patient.
48. Apparatus as in claim 45, wherein the mobile device is disposed to maintain a record of information from the sensor in a remote database.
49. Apparatus as in claim 23, including an active emitter, the active emitter disposed to: receive a command from the computing device; emit a signal within a body structure of the patient; receive a response to the signal; provide a measurement with respect to the patient in response to the response to the signal.
50. Apparatus including a sensor disposed to couple information between a patient and a computing device, the sensor being embedded in a dental structure of the patient; a computing device coupled to the sensor and disposed to encode/decode information in exchange with the sensor; a communication device coupled to the computing device and disposed to exchange information between the computing device and an external device; wherein the external device is disposed to exchange information between the sensor and a caregiver.
51. Apparatus as in claim 50, including a power source disposed to power one or more of: the sensor, the communication device, or the computing device; wherein the power source includes rechargeable energy storage; wherein the energy storage is rechargeable by one or more of: an external inductor disposed to receive externally stored energy, an energy harvester disposed to receive energy from local electromagnetic fields, an internal inductor disposed to receive energy from the patient’s mouth or head movements.
52. Apparatus as in claim 50, wherein the dental structure is selected from one or more of: in a tooth or under a portion thereof; in a cap or crown or under a portion thereof; between two teeth; cemented or otherwise coupled to a tooth; affixed to or implanted in or near a gum, a cheek, the jaw, or the tongue; or affixed to or implanted in or near a dental device or other dental equipment.
53. Apparatus as in claim 52, wherein the dental device or other dental equipment includes a cap, a denture, a filling, any orthodontia equipment, or any oral medical device.
54. Apparatus as in claim 52, wherein the sensor is coupled to a blood vessel, coupled to a gap or void in dental matter or dental tissue, or otherwise disposed to measure a medical condition of the patient.
55. Apparatus as in claim 50, wherein the external device can include one or more of: a voice-activated communication device or other active communication device, an augmented reality or virtual reality system, a fitness or activity monitor, a lens or eyewear, a smartphone or other mobile device, a speaker, an electromagnetic or ultrasonic transceiver/transmitter, one or more vehicle controls, a remote computing system, a remote database or other storage system, or a remote presentation system.
56. Apparatus as in claim 55, wherein the sensor is disposed to exchange information with medical devices coupled to patient anatomy other than the dental structure.
57. Apparatus as in claim 56, wherein the medical devices include one or more of: one or more audio/video devices, one or more hearing aids, a heart monitor, a pacemaker, a separate blood glucose monitor, a separate blood pressure monitor, a separate blood oxygenation monitor, an ultrasonic or x-ray device disposed to determine one or more features of the patient’s anatomy, one or more other devices disposed to provide an augmented reality or virtual reality presentation to the patient, or one or more other devices disposed to otherwise stimulate one or more elements of the patient’s anatomy.
58. Apparatus as in claim 50, wherein the sensor is disposed to exchange information in response to one or more of: when the patient moves their head, when the patient performs an oral gesture, or when the patient makes a speaking motion.
59. Apparatus as in claim 58, wherein the speaking motion is defined in response to motion of a cheek, the jaw, one or more teeth, or the tongue, or in response to breathing.
60. Apparatus as in claim 58, wherein the information is disposed to be exchanged in response to a measurement of the oral gesture.
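An illustrative event handler for the exchange triggers of claims 58-60; the event shape and the confidence threshold are assumptions:

```python
GESTURE_THRESHOLD = 0.6  # assumed confidence required before transmitting

def on_sensor_event(event: dict, transmit) -> None:
    """Exchange information only on the triggers recited in claims 58-60:
    head movement, an oral gesture, or a speaking motion."""
    kind = event.get("kind")
    if kind == "head_movement":
        transmit(event)
    elif kind in ("oral_gesture", "speaking_motion"):
        # Claim 60: the exchange is gated on a measurement of the gesture.
        if event.get("confidence", 0.0) >= GESTURE_THRESHOLD:
            transmit(event)

# Example: forward qualifying events to the communication device.
on_sensor_event({"kind": "oral_gesture", "confidence": 0.8}, print)
```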
61. Apparatus as in claim 50, wherein the communication device is disposed to couple information from an external device to the patient at or near an oral location.
62. Apparatus as in claim 61, wherein the oral location includes one or more of: an ear canal, an oropharyngeal canal, a sinus cavity or other void defined by the patient’s head or neck, or bony structures in the patient’s head or neck.
63. Apparatus as in claim 50, wherein the computing device is disposed to perform one or more processes with respect to one or more medical conditions in response to information from the sensor; wherein the processes with respect to the one or more medical conditions include one or more of: predicting/detecting, monitoring/recording, treating/ameliorating, or encouraging self-care or exchanging information with medical personnel for care, with respect to the one or more medical conditions.
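The per-condition processing of claim 63 could be organized as a simple dispatch table; the handler names and reading format below are assumptions, not the claimed method:

```python
from typing import Callable, Dict, List

Handlers = Dict[str, Callable[[dict], None]]

def process_reading(reading: dict, conditions: List[str], handlers: Handlers) -> None:
    """Run each configured process (predict/detect, monitor/record,
    treat/ameliorate, encourage self-care or notify medical personnel)
    against a sensor reading for each suspected medical condition."""
    for condition in conditions:
        for process in ("predict", "monitor", "treat", "self_care"):
            handler = handlers.get(process)
            if handler is not None:
                handler({"condition": condition, **reading})

# Example: a record-only configuration that just prints monitored readings.
process_reading(
    {"blood_glucose_mg_dl": 104},
    conditions=["hypoglycemia"],
    handlers={"monitor": print},
)
```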
64. Apparatus as in claim 50, wherein the medical conditions include one or more of: a measurement of blood glucose, a measurement of blood oxygenation, a measurement of blood pressure, a measurement of pulse rate, or another measurement from a sensor located in a dental position or near a blood vessel.
65. Apparatus as in claim 50, wherein the medical conditions include one or more of: migraine, photophobia, or neuro-ophthalmic disorder.
66. Apparatus as in claim 50, wherein the medical conditions include one or more of: dental damage, dental decay, dental and gum infections, jaw grinding, or orthodontic maladjustments.
67. Apparatus as in claim 50, wherein the medical conditions include one or more of: food allergies or food poisoning, gastroenterological disorders, gastroenterological reflux disorders, or stomach ulcers.
68. Apparatus as in claim 50, wherein the medical conditions include one or more of: gunshot wounds, piercing or stab wounds, or other penetration of the outer membranes of the body.
69. Apparatus as in claim 50, wherein the medical conditions include one or more of: ingestion, inhalation, or progress of one or more of: allergens, foreign objects, or poisons.
70. Apparatus as in claim 50, wherein the medical conditions include one or more of: inhalation or insertion of choking or drowning hazards, or other objects blocking the patient’s breathing.
71. Apparatus as in claim 50, wherein the medical conditions include one or more of: low blood oxygenation, arrhythmia, bradycardia, tachycardia, coronary disease, coronary infarction, inefficient pulmonary transfer of oxygen from the lungs, lung disease or blockage, or other cardio-pulmonary disorders.
72. Apparatus as in claim 50, wherein the medical conditions include one or more of: ocular ulcers, foreign object or adverse chemical presence in the eye, retinal detachment, or other eye damage.
73. Apparatus as in claim 50, wherein the medical conditions include one or more of: seizures or strokes, brain bleeding, concussions, or traumatic brain injuries.
74. Apparatus as in claim 50, wherein the medical conditions include combinations of two or more of: a measurement of blood glucose, a measurement of blood oxygenation, a measurement of blood pressure, a measurement of pulse rate, or another measurement from a sensor located in a dental position or near a blood vessel; migraine, photophobia, or neuro-ophthalmic disorder; dental damage, dental decay, dental and gum infections, jaw grinding, or orthodontic maladjustments; food allergies or food poisoning, gastroenterological disorders, gastroenterological reflux disorders, or stomach ulcers; gunshot wounds, piercing or stab wounds, or other penetration of the outer membranes of the body; ingestion, inhalation, or progress of one or more of: allergens, foreign objects, or poisons; inhalation or insertion of choking or drowning hazards, or other objects blocking the patient’s breathing; low blood oxygenation, arrhythmia, bradycardia, tachycardia, coronary disease, coronary infarction, inefficient pulmonary transfer of oxygen from the lungs, lung disease or blockage, or other cardio-pulmonary disorders; ocular ulcers, foreign object or adverse chemical presence in the eye, retinal detachment, or other eye damage; or seizures or strokes, brain bleeding, concussions, or traumatic brain injuries.
75. Apparatus as in claim 50, wherein the sensor is disposed to determine information with respect to one or more of: an impulse response, a phonic modification, or another set of phonic features of the patient’s dental or oral elements.
76. Apparatus as in claim 75, wherein the computing device is disposed to receive that information from the sensor and to adjust an oral output from the patient in response thereto, so as to improve an audio acuity of the patient’s voice at an intended receiver.
77. Apparatus as in claim 76, wherein the computing device is disposed so that when the patient reduces a volume of their voice, the computing device receives that information from the sensor and adjusts an oral output from the patient in response thereto; wherein an audio acuity of the patient’s voice at an intended receiver is improved.
78. Apparatus as in claim 76, wherein the computing device is disposed so that when the patient reduces a volume of their voice, the computing device receives that information from the sensor and adjusts an oral output from the patient in response thereto; wherein an audio message encoded by the patient’s voice is amplified at an intended receiver.
79. Apparatus as in claim 50, wherein the communication device is disposed to couple information between the sensor and a mobile device.
80. Apparatus as in claim 79, wherein the mobile device is disposed to call upon a server to perform a function with respect to information from the sensor, the function including one or more of predicting/detecting, monitoring/recording, treating/ameliorating, or encouraging self-care or exchanging information with medical personnel for care, with respect to one or more patient medical conditions.
81. Apparatus as in claim 79, wherein the mobile device is disposed to make an augmented reality presentation to a caregiver, the augmented reality presentation including information with respect to a medical condition of the patient.
82. Apparatus as in claim 79, wherein the mobile device is disposed to maintain a record of information from the sensor in a remote database.
PCT/US2022/032410 2022-06-06 2022-06-06 Dental sensor WO2023239350A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202217833283A 2022-06-06 2022-06-06
US202217833301A 2022-06-06 2022-06-06
US17/833,301 2022-06-06
US17/833,283 2022-06-06

Publications (1)

Publication Number Publication Date
WO2023239350A1 true WO2023239350A1 (en) 2023-12-14

Family

ID=83191837

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/032410 WO2023239350A1 (en) 2022-06-06 2022-06-06 Dental sensor

Country Status (1)

Country Link
WO (1) WO2023239350A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8696113B2 (en) 2005-10-07 2014-04-15 Percept Technologies Inc. Enhanced optical and perceptual digital eyewear
US20160367188A1 (en) * 2015-06-17 2016-12-22 Bela Malik Oral sensor alerting and communication system and developers' tool kit
US20190223770A1 (en) * 2018-01-24 2019-07-25 Bela Malik Biological sample extraction and detection system

Similar Documents

Publication Publication Date Title
US20230359910A1 (en) Artificial intelligence and/or virtual reality for activity optimization/personalization
US20210369144A1 (en) Device Administered Tests and Adaptive Interactions
US20190046794A1 (en) Multi-factor control of ear stimulation
US6461297B1 (en) Apparatus and methods for measuring physical disorders
US10092236B2 (en) Emergency medical services smart watch
US10430557B2 (en) Monitoring treatment compliance using patient activity patterns
CN111163693A (en) Customization of health and disease diagnostics
US11134844B2 (en) Systems and methods for modulating physiological state
Hong et al. Usefulness of the mobile virtual reality self-training for overcoming a fear of heights
US20160140986A1 (en) Monitoring treatment compliance using combined performance indicators
US20200155840A1 System for aiding early detection and management of breathing disorders
US20160135738A1 (en) Monitoring treatment compliance using passively captured task performance patterns
CN109757926A Method and device for adjusting the hardness of an intelligent mattress
US20190279182A1 (en) Participative health kiosk
JP2023527717A (en) Systems and methods for monitoring user activity
WO2023239350A1 (en) Dental sensor
US20160140317A1 (en) Determining treatment compliance using passively captured activity performance patterns
US20220225949A1 (en) Wearable device network system
KR102080591B1 (en) Method for curing tinnitus based on virtual reality
US11291405B2 (en) Determining and conveying sleep factors
US20180113991A1 (en) Interactive Apparatus and Devices for Personal Symptom Management and Therapeutic Treatment Systems
JP2021515620A High frequency chest wall oscillator
US20230343434A1 (en) Systems and methods for using ai/ml and for cardiac and pulmonary treatment via an electromechanical machine related to urologic disorders and antecedents and sequelae of certain urologic surgeries
US20230395261A1 (en) Method and system for automatically determining a quantifiable score
US20230107691A1 (en) Closed Loop System Using In-ear Infrasonic Hemodynography and Method Therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22764902

Country of ref document: EP

Kind code of ref document: A1