WO2017120288A1 - Augmented reality optical head-mounted display for medical monitoring, diagnosis and treatment - Google Patents


Info

Publication number
WO2017120288A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
video
processor
patient
person
Prior art date
Application number
PCT/US2017/012266
Other languages
English (en)
Inventor
Peter Killcommons
Timothy King
Jerod VENEMA
Original Assignee
Nexsys Electronics, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nexsys Electronics, Inc.
Publication of WO2017120288A1

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2562/00 Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B2562/02 Details of sensors specially adapted for in-vivo measurements
    • A61B2562/0219 Inertial sensors, e.g. accelerometers, gyroscopes, tilt switches
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/145 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/1455 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6802 Sensor mounted on worn items
    • A61B5/6803 Head-worn items, e.g. helmets, masks, headphones or goggles
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/742 Details of notification to user or communication with user or patient; user input means using visual displays
    • A61B5/744 Displaying an avatar, e.g. an animated cartoon character
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/90 Identification means for patients or instruments, e.g. tags
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/014 Head-up displays characterised by optical features comprising information/image processing systems
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/0141 Head-up displays characterised by optical features characterised by the informative content of the display

Definitions

  • the inventive arrangements relate to wearable computers and displays. More particularly, the inventive arrangements concern augmented reality optical head-mounted displays for integrating electronic data with real-world environments.
  • optical head-mounted displays have been identified as having potential use in such healthcare applications.
  • Optical head-mounted displays which can be worn like a pair of eyeglasses are commercially available. These types of displays are sometimes referred to as smartglasses, digital eye glass or personal imaging systems.
  • Such displays may be based on technologies such as liquid crystal displays (LCD), liquid crystal on silicon (LCOS), digital micro-mirrors (DMD), or organic light-emitting diodes (OLED).
  • Embodiments of the invention concern an optical head mounted display system.
  • the system is comprised of a headset which positions a transparent screen in alignment with the line of sight of at least one eye of a wearer of the headset. Data and images can be presented on the transparent display screen.
  • a video camera is mounted to the headset and is positioned to capture video images of a scene coincident with the line of sight of the wearer of the headset.
  • a computer processor is mounted in or on the headset which is configured to perform various actions.
  • These actions include receiving video image data obtained from the video camera; processing the video image data to obtain a pulse of a person in the scene observable by the wearer in accordance with the line of sight; receiving wirelessly transmitted patient data from a patient module connected to one or more patient sensors which are directly connected or attached to the person; correlating the transmitted patient data with the person in the scene by comparing the pulse derived from the video image, with a second pulse specified in the transmitted patient data; and displaying at least one data element from the transmitted patient data on the transparent screen based on the correlating step.
  • the process applied to the video image data to obtain the pulse comprises the Eulerian Video Magnification (EVM) algorithm.
  • the transmitted patient data is selected from the group consisting of heart rate or pulse, respiration rate, blood pressure, blood oxygen levels, blood type and body temperature.
  • the transmitted patient data can also include data (other than data acquired by the patient sensors) which has been previously inputted or stored in the patient module.
  • Such additional transmitted patient data can include one or more of patient name, age, room number, insurance information, bed location, guardian contact information, diagnosis, injury, medications received, time when most recent medication was received, and known drug allergies, without limitation.
  • the system described herein can include a plurality of video cameras.
  • the resulting video images that are captured can comprise at least one of two-dimensional image data, three-dimensional image data, or four-dimensional image data.
  • Such video cameras can be configured to generate video imagery from light in the visible wavelength range and in the non-visible wavelength range.
  • the at least one processor included in the optical head mounted display advantageously is configured to perform certain video analytic operations on the captured video image data.
  • a laser source included with the system can be capable of illuminating at least a portion of the person.
  • An optical detector can similarly be provided to capture scattered light reflected from the skin of the person.
  • the at least one processor can be configured to determine one or more physiological parameters based on the scattered light.
  • the one or more physiological parameters can be selected from the group consisting of oxygen saturation and CO2 saturation.
  • the at least one processor can be configured to process the video data to identify a type of medication appearing in the video data comprising a tablet or capsule.
  • the processor can further identify based on the video data a dose of the type of medication which has been identified. This information can be used by the health care practitioner to verify that the correct medication and dose is being administered to a patient.
  • the processor is configured to receive one or more of imagery data, position data and tracking data from a medical scanning device disposed adjacent to the person.
  • the processor in the optical head mounted display system can cause the received imagery data from the medical scanning device to be displayed on the transparent screen of the display in alignment with the line of sight of at least one eye of a wearer of the headset.
  • the imagery data can be advantageously presented on a portion of said transparent screen in a location which is dynamically varied in accordance with a position of the medical scanning device.
  • the at least one processor is configured to receive stored scan imagery obtained from a remote database concurrent with the operation of the scanning device on the person.
  • the processor can be configured to display the stored scan imagery on the transparent screen concurrent with the imagery obtained from the scanning device for comparison purposes.
  • the processor can analyze the video image data to determine a position and orientation of the person within a field of view of the at least one video camera.
  • the processor can cause to be displayed on the transparent screen stored scan imagery from a remote database in a correct anatomical orientation relative to the position and orientation of the person.
  • the processor can be configured to display on the transparent screen a correct location and angle of instruments which are to be inserted into the person for at least one of arthroscopic or endoscopic surgery.
  • the processor can be configured to display on the transparent screen a correct position and orientation of a prosthetic device.
  • FIG. 1 is a block diagram that is useful for understanding an optical head-mounted display computer system.
  • FIG. 2 is a conceptual diagram that is useful for understanding the operation of the optical head-mounted display computing system.
  • FIG. 3 is a drawing that is useful for understanding the information that is presented to a wearer of the optical head-mounted display computing systems.
  • FIG. 4 is a flowchart that is useful for understanding the invention.
  • the OHMD 100 can include various components. These may include a power source (not shown), a computer processor (e.g. a microprocessor) 112, and a data storage unit (memory) 106.
  • the OHMD 100 can also include a transparent optical display unit (TODU) (i.e. a heads-up display) 102, and one or more short range data transceivers 114, 116 which are capable of communicating in accordance with a short range wireless data communication standard.
  • such data transceivers can be compatible with the well-known Bluetooth communication standard and/or IEEE 802.11b/g (WiFi) standard.
  • the computer processor 112 in an exemplary optical head-mounted display as described herein may be under the control of an operating system 113 such as the ubiquitous Android operating system.
  • a data bus 122 is provided to facilitate communications of data among the various components.
  • the data storage unit 106 comprises a computer readable medium 110 on which certain instructions 108 are stored for execution by the computer processor 112.
  • a user input device 104 can comprise one or more switches or controls which are responsive to user touch activation. The touch switches are used to facilitate control of certain actions performed by the computer processor 112.
  • the computer-readable storage medium 110 should be understood to include a single medium or multiple media that store the one or more sets of instructions.
  • the term "computer-readable storage medium" shall also be understood to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure.
  • the term "computer-readable medium" shall accordingly be taken to include, but not be limited to, one or more read-only (non-volatile) memories, random access memories, or other re-writable (volatile) memories. Accordingly, the disclosure is considered to include any one or more of a computer-readable medium as listed herein, and to include recognized equivalents and successor media, in which the software implementations herein are stored.
  • An OHMD 100 as described herein can collect information from sensors which are internal or external to the device.
  • Exemplary sensor components which can be included into the OHMD include one or more video camera(s) 118 and infrared sensor(s) 120.
  • the one or more video cameras 118 can be a conventional video camera or a stereoscopic video camera. If more than one video camera 118 is provided, image data from the video camera or camera array can facilitate generation of conventional two-dimensional (2D) imagery, three-dimensional (3D) imagery, and/or 4D imagery in which time, temperature or some other parameter is the fourth dimension.
  • the one or more video cameras 118 can be comprised of sensors which are capable of generating images based on visible light, non-visible light, or both.
  • one or more of the video cameras 118 can be comprised of image sensors that are capable of capturing images in the near infrared, far infrared and/or ultraviolet spectrum.
  • the image data that is collected by the video camera or camera array can facilitate basic functions such as patient detection (e.g. facial recognition), measurement of patient vital signs (e.g. patient heartbeat, blood pressure), and patient identification (e.g., using facial recognition methods). Further, the image data can facilitate detection of physiological changes and the presence of foreign substances.
  • sensors which can be provided in the OHMD 100 include gyroscopes 103, accelerometers 105, magnetometers (compass), ambient light sensors, and audio transducers 123. The purpose of these components will become more apparent as the discussion progresses.
  • FIG. 1 is one possible example of an OHMD computer system.
  • the invention is not limited in this regard and any other suitable computer system architecture can also be used without limitation.
  • Dedicated hardware implementations including, but not limited to, application-specific integrated circuits, programmable logic arrays, and other hardware devices can likewise be constructed to implement the methods described herein.
  • Applications that can include the apparatus and systems of various embodiments broadly include a variety of electronic and computer systems. Some embodiments may implement functions in two or more specific interconnected hardware modules or devices with related control and data signals communicated between and through the modules, or as portions of an application-specific integrated circuit.
  • the exemplary system is applicable to software, firmware, and hardware implementations.
  • the methods described herein are stored as software programs in a computer-readable storage medium 110 and are configured for running on the computer processor 112.
  • software implementations can include, but are not limited to, distributed processing, component/object distributed processing, parallel processing, and virtual machine processing, any of which can also be constructed to implement the methods described herein.
  • data transceivers 114, 116 can facilitate data communications with a network environment in accordance with instructions 108 to facilitate such processing.
  • the computer processor 112 continuously analyzes video image data acquired by the video camera 118.
  • the video analysis can be used to facilitate various functionalities.
  • the video analysis involves recognition of the presence of a human face.
  • the computer processor can be configured with facial recognition software which is capable of identifying the presence of human faces in a scene that is captured by the video camera. Facial image analytics and facial recognition algorithms are well known in the art and therefore will not be described here in detail.
  • the computer processor 112 can determine the presence of a person in a video image frame based on such analysis.
  • the process can further involve comparing the facial image to a database of stored facial image data which can be used to identify the person associated with the face. The purpose of such identification is described below in further detail.
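As a rough illustration, the facial-image comparison described above can be reduced to matching feature vectors ("embeddings") produced by a face-recognition model against a database of stored patient records. The Python sketch below uses cosine similarity with a match threshold; the embedding values, patient IDs and threshold are invented for illustration, and the patent does not specify any particular matching algorithm.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_face(query_embedding, database, threshold=0.8):
    """Return the patient ID whose stored embedding best matches the
    query, or None if no candidate exceeds the similarity threshold."""
    best_id, best_score = None, threshold
    for patient_id, stored in database.items():
        score = cosine_similarity(query_embedding, stored)
        if score > best_score:
            best_id, best_score = patient_id, score
    return best_id

# Hypothetical embeddings; a real system would use a face-recognition model
db = {"patient-17": [0.9, 0.1, 0.4], "patient-42": [0.1, 0.8, 0.2]}
print(identify_face([0.88, 0.12, 0.41], db))  # closest to patient-17
```

A production system would of course derive the embeddings from a trained model rather than hand-coded vectors; the threshold trades false matches against missed identifications.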
  • Other video analytics functions performed by the computer processor 112 can involve identification of certain objects which may be pertinent to the medical practitioner.
  • the video analytics can automatically identify various medications when presented in pill form.
  • the term "pill” as used herein can include without limitation tablets, capsules and variants thereof, including hard-shelled capsules and soft-shelled capsules.
  • medications of this kind can have a unique appearance to avoid confusion among different medication types.
  • the pills can have a shape, a size, a color, a color pattern, and markings (including alphanumeric markings) which can be used to differentiate various medications.
  • These visual cues can be detected by the one or more imaging devices (e.g., video cameras 118) and analyzed with the video analytics software to identify the medication type.
  • the video analytics operations can differentiate between a Valium pill and a Viagra pill.
  • the video analytics operations described herein can also be configured to differentiate between different dosage concentrations contained in various pills if such information is derivable from the shape, size, color, color pattern and/or markings on the pill.
  • the video imagery captured for use in connection with these identification operations need not be limited to the visible light spectrum. Instead (or in addition to such visible light imagery), the video cameras 118 can capture images in the non-visible light spectrum to help facilitate and/or verify such medication identification.
  • an audio transducer 123 can be used to audibly annunciate such information.
  • an audio transducer can be used to audibly annunciate a medication type which has been detected based on the video analysis. If more than one pill type is present, this information can be similarly communicated to a medical practitioner. Such pill identification can help prevent errors with respect to medication type and dosing.
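The pill-identification step above amounts to a lookup from extracted visual cues (shape, color, imprint markings) to a medication and dose. A minimal sketch, assuming the video analytics have already extracted those cues; the reference table, drug names, doses and imprint codes below are entirely hypothetical, not real pill imprints:

```python
# Hypothetical reference table; a real system would use a curated pill database.
PILL_DB = {
    ("round", "blue", "V10"): ("Diazepam", "10 mg"),
    ("diamond", "blue", "VGR50"): ("Sildenafil", "50 mg"),
    ("oval", "white", "APAP500"): ("Acetaminophen", "500 mg"),
}

def identify_pill(shape, color, imprint):
    """Match visual cues extracted by video analytics to a medication
    and dose; return None if the combination is not recognized."""
    return PILL_DB.get((shape.lower(), color.lower(), imprint.upper()))

print(identify_pill("round", "blue", "v10"))
```

An unrecognized combination returning None is what would trigger the audible warning to the practitioner described in the surrounding text.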
  • the OHMD 100 also uses video data acquired by video camera 118 to determine the pulse of an individual or patient 202 who is being observed by the wearer through the transparent display screen 206 of the OHMD 100. This information is then displayed to the wearer of the OHMD in the TODU 102 in real time.
  • the pulse information is advantageously derived by computer processor 112 by using video analytics which measure temporal changes in color and/or other variations occurring at the patient's skin to accurately estimate heart rate.
  • the foregoing technique is known as the Eulerian Video Magnification (EVM) algorithm and consists of two sequential processing steps.
  • First, spatial filtering is applied to the video frames to suppress high-frequency video noise.
  • Second, each pixel is analyzed over a period of time, and subtle movements and/or color variations in the skin are evaluated using temporal filtering techniques.
  • the algorithm facilitates accurate determination of a person's pulse rate without direct contact and based only on video analysis.
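The core of the temporal step can be sketched as follows: average the skin pixels of each frame to a single intensity value, restrict attention to the plausible heart-rate band (roughly 0.75-3 Hz, i.e. 45-180 bpm), and take the dominant spectral peak. This is only a minimal illustration of the idea, not the full EVM algorithm (it omits the spatial pyramid and magnification stages), and the synthetic trace below stands in for real video data:

```python
import numpy as np

def estimate_pulse_bpm(mean_intensity, fps, low=0.75, high=3.0):
    """Estimate pulse rate from a per-frame mean skin-pixel intensity
    trace by picking the dominant spectral peak in the heart-rate band."""
    x = np.asarray(mean_intensity, dtype=float)
    x = x - x.mean()                          # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    band = (freqs >= low) & (freqs <= high)   # plausible heart-rate band
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0                     # Hz -> beats per minute

# Synthetic 10 s trace at 30 fps: a 1.2 Hz (72 bpm) pulse component plus noise
rng = np.random.default_rng(0)
fps = 30
t = np.arange(fps * 10) / fps
trace = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * rng.standard_normal(t.size)
print(round(estimate_pulse_bpm(trace, fps)))  # -> 72
```

The band limits correspond to the physiologically plausible pulse range; narrowing them improves robustness against motion artifacts at the cost of missing extreme heart rates.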
  • This patient data or information is then displayed to the individual wearing the OHMD 100 via a transparent screen 206 of the TODU 102 that overlays the field of vision of the wearer.
  • the displayed patient data 208 is conceptually shown in FIGs. 2 and 3.
  • the pulse rate data determined using the Eulerian algorithm can be further used to derive blood pressure information for the observed patient. This blood pressure information can also be displayed on the transparent screen 206.
  • the OHMDS 100 can include a combination of different optical emitters (e.g. lasers) and optical sensors to facilitate measurements pertaining to the patient's body.
  • one or more optical emitters 119 can output one or more laser beams. These laser beams can have optical wavelengths in the near infrared and/or far infrared range which are used for certain measurement purposes.
  • the optical emitters 119 can also include a laser beam having an optical wavelength in the visible spectrum. As explained below, such a laser beam in the visible spectrum can facilitate aiming the near and far infrared optical emitters (which are in the non-visible range) to portions of the patient body which provide reliable optical measurement results.
  • the optical sensors can then detect and analyze the scattered or reflected light from the patient to measure various patient conditions. For example, principles of spectro-photometry and/or plethysmography can be applied to measure the percentages of oxyhemoglobin and deoxyhemoglobin in the blood. As is known, light emitted at 660 nm is better absorbed by reduced (deoxygenated) hemoglobin, while light emitted at 940 nm is better absorbed by saturated (oxygenated) hemoglobin. So the detected scattered or reflected light from the laser(s) can be used to determine blood oxygen saturation.
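The two-wavelength measurement just described is the basis of the classic pulse-oximetry "ratio of ratios" calculation. A simplified sketch is shown below; the linear calibration SpO2 ≈ 110 − 25·R is a commonly cited first-order approximation, not the patent's method, and real oximeters use empirically calibrated curves:

```python
def spo2_from_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Simplified pulse-oximetry 'ratio of ratios' estimate.

    ac/dc are the pulsatile and steady components of the detected
    scattered light at ~660 nm (red) and ~940 nm (infrared)."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    # First-order linear calibration; clamp to the valid percentage range.
    return max(0.0, min(100.0, 110.0 - 25.0 * r))

# R near 0.4 corresponds to roughly 100% saturation under this model
print(spo2_from_ratios(0.02, 1.0, 0.05, 1.0))
```

Normalizing each wavelength's pulsatile (AC) component by its steady (DC) component cancels out skin tone, sensor gain and ambient-light differences, which is why the ratio of ratios is used rather than raw intensities.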
  • optical measuring techniques can be combined with principles of augmented reality (AR) to automatically identify a standardized or preferred location on the patient's body that will reliably provide consistent readings for parameters such as temperature, blood pressure, and oxygen saturation or CO2 saturation.
  • a video marker or telestration in combination with a visible laser can be used to facilitate aiming of the optical laser(s) and optical sensors described herein.
  • Video analytics can be used to identify a preferred location on the patient's body where an optical measurement is known to provide reliable results. Once such location has been identified, the video marker or telestration can be displayed on the transparent optical display unit 102, so that it appears overlaid on the particular portion of the patient's body which is predetermined to provide such reliable readings.
  • the medical practitioner who is wearing the OHMDS 100 can then adjust their head position so that a visible laser beam emitted from the OHMDS 100 impinges upon the patient's body at the marked location.
  • the pulse rate data (and optionally the blood pressure information) derived by the Eulerian algorithm is used to correlate or identify certain additional or secondary information pertaining to the patient who is being visually observed.
  • one or more wearable sensors 210 can be disposed on or attached to the patient 202 to collect health-related data.
  • health related data can include pulse rate and a variety of other secondary health-related vital information data derived from the wearable sensors 210.
  • This data is collected at a patient module 204 and then transmitted using a personal area network such as Bluetooth or WiFi that is compatible with at least one of the wireless data transceivers 114, 116 provided in the OHMD.
  • the patient module can broadcast this information directly from the module itself using an included wireless transceiver (not shown) which is contained therein.
  • the information can be broadcast periodically.
  • the information can be broadcast in response to certain conditions.
  • the information can be broadcast only when the patient module 204 detects the presence of the OHMD 100.
  • the presence of the OHMD can be detected based on RF signal emission from the OHMD 100 (e.g., Bluetooth or WiFi signals) or by any other suitable means.
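One way to picture the patient module's behavior is as a payload builder plus a broadcast-gating policy. The sketch below is a simplification under stated assumptions: the JSON field names are illustrative rather than from the patent, and the gating function is just one possible policy combining the "periodic", "conditional" and "OHMD detected" broadcast modes listed above:

```python
import json
import time

def build_tpd_packet(patient):
    """Serialize sensor readings and stored patient fields into a
    broadcast payload (field names are illustrative only)."""
    packet = dict(patient)
    packet["timestamp"] = time.time()
    return json.dumps(packet)

def should_broadcast(ohmd_detected, periodic_due, alarm_condition):
    """One possible gating policy: broadcast only while an OHMD is
    detected nearby (e.g. via its RF emissions), either on the
    periodic schedule or when an alarm condition arises."""
    return ohmd_detected and (periodic_due or alarm_condition)

patient = {"name": "Jane Doe", "room": "12B", "pulse_bpm": 68,
           "spo2_pct": 98, "allergies": ["penicillin"]}
if should_broadcast(True, True, False):
    print(build_tpd_packet(patient))
```

Gating on OHMD presence reduces both radio traffic and the window during which patient data is on the air, which matters in a ward with many patient modules.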
  • the transmitted patient data (TPD) is received by the OHMD 100 and is analyzed to determine whether the received data pertains to the patient who is being observed by the wearer of the OHMD through the transparent display lenses of that device.
  • an OHMD in a healthcare environment may concurrently be receiving TPD from a multiplicity of different patient modules. Accordingly, it is important to ensure that the correct TPD information is displayed for the OHMD wearer when a particular patient is observed.
  • the correlation function is performed by comparing the pulse rate data derived by the Eulerian algorithm to the pulse rate data contained in the TPD. Once a particular TPD set has been correlated to the patient who is being observed, secondary information contained in the TPD can be displayed on the transparent display screen 206, as shown in FIG. 3.
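The correlation step can be sketched as a nearest-match search over the candidate TPD sets, with a tolerance to allow for measurement error in the video-derived pulse. The tolerance value and the TPD field names below are illustrative assumptions, not taken from the patent:

```python
def correlate_tpd(video_pulse_bpm, tpd_sets, tolerance_bpm=5.0):
    """Pick the transmitted-patient-data (TPD) set whose reported pulse
    best matches the pulse derived from the video image, within a
    tolerance; return None when no candidate is close enough."""
    best, best_diff = None, tolerance_bpm
    for tpd in tpd_sets:
        diff = abs(tpd["pulse_bpm"] - video_pulse_bpm)
        if diff <= best_diff:
            best, best_diff = tpd, diff
    return best

tpd_sets = [{"name": "A", "pulse_bpm": 61}, {"name": "B", "pulse_bpm": 84}]
match = correlate_tpd(62.0, tpd_sets)
print(match["name"] if match else "no match")
```

Returning None when no candidate falls within tolerance is deliberate: displaying the wrong patient's vitals is worse than displaying nothing, and secondary cues (remote temperature, facial recognition) can then break the ambiguity, as described below.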
  • the TPD is caused to be correlated with the observed patient.
  • the TPD data provides a more complete overview of the patient information and vitals.
  • the process also allows a particular patient to be located relative to other patients in the same room.
  • the information collected at the OHMD is easily associated with the individual by using visual telestration on the lens to indicate which identified individual is the source of the health information presented.
  • the TPD can include a wide variety of patient information, including identifying information, secondary vital signs and/or health related information.
  • the transmitted patient data is selected from the group consisting of heart rate or pulse, respiration rate, blood pressure, blood oxygen levels, blood type and body temperature.
  • the transmitted patient data can also include data (other than data acquired by the patient sensors 210) which has been previously inputted or stored in the patient module.
  • Such additional transmitted patient data can include patient name, age, room number, insurance information, bed location, guardian contact information, diagnosis, injury, medications received, time when most recent medication was received, and known drug allergies, without limitation.
  • the correlation process described herein can be enhanced by using a long-range infrared sensor 120 to determine a patient's temperature.
  • Long range infrared thermal sensors are well known in the art and therefore will not be described here in detail.
  • the long range infrared thermal sensor can be used to remotely determine a temperature of a person.
  • the temperature information can be captured at the OHMD and then used together with the pulse rate data derived by the Eulerian algorithm to correlate the data measured at the OHMD 100 with the TPD.
  • the facial image recognition process described above can be used as a further means to verify the correlation process.
  • An OHMD as described herein can also use other sensors to detect additional vital parameters about the patient being observed, including surface temperature, calculated core temperature, size and shape of lesions, induration, inflammation, and other parameters helpful to the health practitioner.
  • the glasses can also incorporate additional sensors to detect chemicals, odors, etc. useful for detecting explosives, or dangerous gases.
  • AI and video analytics can be used to constantly monitor the image data acquired by the OHMD 100 and associated with each patient 202.
  • the AI and video analytics are advantageously configured to perform health diagnostic functions which are designed to identify the presence of disease conditions.
  • video analytics can be used to identify the presence of certain types of skin cancers (e.g. melanoma).
  • chemical sensors could be used to detect the occurrence of ketoacidosis which is associated with patients who are suffering from diabetes.
  • the AI can automatically alert the wearer of the OHMD 100 using telestration, audio prompts or other indicia to identify the presence of such a disease condition.
  • the OHMD 100 also incorporates the ability for a remote collaborator to see the camera output of the glasses, so that the remote collaborator can see exactly what the wearer sees.
  • the remote collaborator can then use a finger or stylus to write on the live view.
  • This causes a telestration to appear as an overlay on the transparent display screen 206, so that the remote observer is able to guide, point out, or provide "Over the shoulder" advice, using the telestration to point out specific objects of interest in the field of view.
  • context sensitive differential diagnosis and treatment suggestions can be delivered to the OHMDS from artificial intelligence tools, such as IBM Watson or similar systems to assist in rapid diagnosis and suggested medication from existing formulary.
  • Collaborative medical telestration takes this concept further by incorporating analytics and stereoscopic measurements using sensors that are mounted on the glasses. These sensors give a back-end analytics computer the ability to measure, document, and suggest suitable components during surgery or other health procedures. This capability derives from the combination of telestration, which shows the analytics engine the area of interest, and the engine then sharing relevant clinical information about that area.
  • the OHMD 100 can make use of remote analytics to select relevant imaging or data from the patient's medical chart and display that data on the transparent display screen 206.
  • This information is accessed from a remote server by means of data transceiver 114 and/or data transceiver 116.
  • the data can then be displayed, navigated, or dictated using integrated voice recognition and head gestures.
  • the ability to display relevant X-rays or ultrasound data is achieved via one head movement to launch the viewer, and a second head movement to "stow" the viewer.
  • the head movements can be detected by means of one or more gyroscopes 103, accelerometers 105 or any other suitable means.
  • the data viewer may also be visualized using only voice commands which are detected by audio transducer 123.
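A head-gesture trigger of this kind can be as simple as requiring the gyroscope's pitch rate to exceed a threshold for several consecutive samples, which filters ordinary head jitter from a deliberate nod. The sketch below makes that concrete; the threshold and sample count are illustrative values that would be tuned on real hardware, not figures from the patent:

```python
def detect_nod(pitch_rates, rate_threshold=1.5, min_samples=3):
    """Detect a deliberate downward head nod from gyroscope pitch-rate
    samples (rad/s): the rate must exceed the threshold for several
    consecutive samples before the gesture is accepted."""
    run = 0
    for rate in pitch_rates:
        run = run + 1 if rate > rate_threshold else 0
        if run >= min_samples:
            return True
    return False

print(detect_nod([0.1, 1.8, 2.0, 1.9, 0.2]))  # sustained rotation: a nod
print(detect_nod([0.1, 2.5, 0.1, 2.5, 0.1]))  # isolated spikes: jitter
```

The same pattern with the opposite sign (or a different axis) would implement the "stow" gesture, and debouncing prevents one long movement from triggering both.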
  • the voice command processing described herein, as well as the video analytics can be performed using computer processor 112, or can be performed with the assistance of a remote server.
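The launch/stow head gesture can be sketched as follows. This is an illustrative sketch only; the pitch-rate threshold, the sample format, and the toggle behavior are assumptions, not details from the patent text:

```python
# Assumed convention: the gyroscope 103 stream delivers pitch rates in
# rad/s; a rate spike above the threshold is treated as a deliberate nod.
NOD_THRESHOLD = 2.0

def detect_nod(pitch_rates, threshold=NOD_THRESHOLD):
    """Return True if any gyroscope pitch-rate sample exceeds the threshold."""
    return any(abs(rate) > threshold for rate in pitch_rates)

class DataViewer:
    def __init__(self):
        self.visible = False

    def on_gyro_samples(self, pitch_rates):
        if detect_nod(pitch_rates):
            self.visible = not self.visible  # first nod launches, second stows

viewer = DataViewer()
viewer.on_gyro_samples([0.1, 2.5, 0.2])   # nod detected -> viewer launched
viewer.on_gyro_samples([0.05, 0.1])       # no nod -> state unchanged
```

A production system would additionally debounce repeated samples from a single nod and fuse gyroscope and accelerometer data, but the toggle structure is the same.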
  • Fiducial markers and microelectromechanical systems (MEMS) sensors (such as digital compasses and six-degree-of-freedom accelerometer/gyroscope units) can be used to track the position of a medical device (e.g., an ultrasound imaging head).
  • MEMS: microelectromechanical systems
  • SLAM: simultaneous localization and mapping
  • Markerless tracking methods such as Parallel Tracking and Mapping (PTAM) can also be used.
  • PTAM is a camera tracking system for AR which requires no markers, pre-made maps, known templates, or inertial sensors.
  • Position tracking methods can be used to facilitate insertion of a virtual display of the output of a position-tracked medical device, so that the display appears conveniently near the clinician's area of operation.
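Anchoring a virtual display next to a tracked device reduces, at its core, to projecting the device's 3D position into screen coordinates and offsetting the panel. The sketch below assumes a simple pinhole camera model and invented intrinsics; a real system would use the headset's calibrated projection:

```python
# Sketch under assumed conventions: the tracked probe position is given
# in the headset camera frame (x right, y down, z forward, meters).

def project_point(p, focal_px, cx, cy):
    """Pinhole-project a camera-frame point to display pixels."""
    x, y, z = p
    return (cx + focal_px * x / z, cy + focal_px * y / z)

def panel_anchor(probe_pos, focal_px=800.0, cx=640.0, cy=360.0,
                 offset_px=(120.0, 0.0)):
    """Anchor the floating panel slightly to the right of the probe."""
    u, v = project_point(probe_pos, focal_px, cx, cy)
    return (u + offset_px[0], v + offset_px[1])

# Probe 10 cm right of center, 50 cm in front of the headset.
anchor = panel_anchor((0.10, 0.0, 0.50))
```

As the SLAM or PTAM tracker updates the probe pose each frame, recomputing `panel_anchor` keeps the virtual display glued beside the instrument.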
  • The processor 112 can receive real-time ultrasound images output from the ultrasound device.
  • The processor can then cause OHMDS 100 to display the output image generated by the ultrasound device.
  • The output images can be presented as a floating image positioned near the ultrasound probe in the field of view of the medical practitioner who is viewing the scene through the transparent optical display unit 102. With such an arrangement, the sonographer can avoid the need to turn away from the patient to view the output.
  • The processor can receive from a remote database imagery captured at an earlier time involving a previous ultrasound scan of the same body location which is currently undergoing ultrasound scanning. Such imagery from a previous scan can be correlated with the current position of the ultrasound scanning head and automatically displayed in the transparent optical display unit for comparison purposes.
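One way to correlate prior imagery with the current scan-head position is to key stored studies by a coarse body-location grid. Everything below is an assumed illustration (the 5 cm cell size, the study labels, and the in-memory dictionary standing in for the remote database):

```python
# Hypothetical sketch: prior ultrasound studies keyed by a quantized
# tracked position; entering the same grid cell retrieves the study.
GRID_CM = 5.0

def location_key(pos_cm):
    """Quantize a tracked (x, y, z) position in cm to a grid cell."""
    return tuple(int(c // GRID_CM) for c in pos_cm)

prior_scans = {location_key((12.0, 33.0, 4.0)): "liver_scan_2016-11-02"}

def prior_imagery_for(pos_cm):
    """Return the prior study for this cell, or None if there is none."""
    return prior_scans.get(location_key(pos_cm))

# Scanning within the same 5 cm cell retrieves the earlier study.
match = prior_imagery_for((13.5, 34.0, 3.0))
```

The quantization tolerates small positioning differences between the two scanning sessions while still distinguishing separate body regions.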
  • The OHMDS 100 can facilitate certain types of medical procedures by automatically displaying transverse slices or 3D representations of underlying organs of the body.
  • The processor 112 can utilize video analytics to determine a portion of a patient's body which is being observed and a point of view of such observation.
  • The processor 112 can also have remote access to database imagery specific to the patient.
  • Imagery can include imagery that has been obtained using one or more techniques such as Magnetic Resonance Imaging (MRI), computerized tomography (CT), positron emission tomography (PET), or ultrasound scanning.
  • MRI: Magnetic Resonance Imaging
  • CT: computerized tomography
  • PET: positron emission tomography
  • ultrasound scanning
  • The stored imagery data can then be overlaid upon the patient in the transparent optical display unit 102.
  • The medical practitioner will be presented with a live view of the patient observed through the transparent optical display. Overlaid on this live view will be the stored imagery data, which is automatically rotated and scaled to its correct anatomical position relative to the position of the patient in the medical practitioner's field of view.
  • The imagery can advantageously be presented overlaid on the patient's surgical field to better identify the specific location of the underlying organ, tumor, or disease process of interest.
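The "rotated and scaled to its correct anatomical position" step is, in essence, a registration transform applied to the stored imagery. The sketch below is a deliberately simplified 2D version under assumed parameters; a full implementation would estimate a 3D pose from the video analytics and fiducials:

```python
import math

# Simplified 2D registration sketch: rotate, scale, then translate a
# stored-image point into display space so it lands on the patient.

def register_point(p, angle_rad, scale, translation):
    """Apply rotation, then uniform scale, then translation to (x, y)."""
    x, y = p
    xr = math.cos(angle_rad) * x - math.sin(angle_rad) * y
    yr = math.sin(angle_rad) * x + math.cos(angle_rad) * y
    return (scale * xr + translation[0], scale * yr + translation[1])

# A point 10 px right of the image origin, patient rotated 90 degrees,
# displayed at half size, anchored at (300, 200) on the screen.
mapped = register_point((10.0, 0.0), math.pi / 2, 0.5, (300.0, 200.0))
```

Applying the same transform to every pixel (or, in practice, to the texture as a whole on the GPU) places the MRI/CT/PET slice over the corresponding anatomy in the live view.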
  • The processor 112 can be configured to automatically identify and visually indicate the correct position of prosthetic device components such as hips, knees, acetabular cups, and so on, to assist in the process of reconstructive surgery.
  • The processor 112 calculates and causes to be displayed to the medical practitioner, using OHMDS 100, certain information pertinent to the surgical procedure.
  • The processor can utilize video analytics to automatically identify a portion of a patient's body observed through the OHMDS where a medical procedure is to be performed. This process can be facilitated by the use of fiducial markers disposed on the patient.
  • The processor can generate overlay imagery. Such imagery, when viewed by the medical practitioner through the transparent optical display unit, will appear in the medical practitioner's field of view in the surgical field.
  • Imagery can graphically mark a predetermined location and angle/orientation of instruments which are to be inserted for arthroscopic or endoscopic surgeries.
  • The surgeon or medical practitioner can then use the graphical display to assist in positioning and orienting such instruments.
  • These techniques can provide a virtual endoscopy view to the medical practitioner that is updated in real time as the instrument is advanced into the patient.
  • Virtual colonoscopy data from a CT study can be combined with live imagery from an actual colonoscopy in order to guide the wearer of the OHMDS 100 to the correct location of a specific lesion that needs to be removed.
  • This process is performed by using the processor 112 to compare the calculated distance in the virtual colonoscopy that is needed in order to access the lesion with the real distance that the colonoscope has been advanced.
  • The imagery associated with the virtual colonoscopy can be presented on the transparent display of the OHMDS 100 along with the real video view from the scope, and a real-time map overview indicating the scope's location in the colon.
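The distance comparison described above can be sketched as a simple guidance function. The function names and the 2 cm tolerance are assumptions for illustration; only the compare-two-distances logic comes from the text:

```python
# Compare the CT-derived (virtual colonoscopy) path distance to the
# lesion against the measured insertion distance of the real scope.

def at_lesion(virtual_distance_cm, advanced_cm, tolerance_cm=2.0):
    """True when the scope is within tolerance of the computed lesion site."""
    return abs(virtual_distance_cm - advanced_cm) <= tolerance_cm

def guidance(virtual_distance_cm, advanced_cm):
    """Prompt to show on the transparent display during the procedure."""
    if at_lesion(virtual_distance_cm, advanced_cm):
        return "at lesion"
    return "advance" if advanced_cm < virtual_distance_cm else "withdraw"

status = guidance(virtual_distance_cm=62.0, advanced_cm=45.0)
```

The returned prompt would be rendered alongside the live scope video and the map overview, updating as the measured insertion distance changes.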
  • A further use of the accelerometer and SLAM data can involve identifying a region outside of the current area of interest in which to display relevant controls to manipulate AR interactions. For example, looking 10 degrees above a horizon line parallel with the floor would reveal relevant medical device controls which can be activated using hand gestures in front of the display.
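The "look above the horizon" interaction can be sketched from accelerometer data alone. The 10-degree threshold comes from the text; the axis convention and the gravity-vector math are assumed implementation details:

```python
import math

# Estimate head pitch from a gravity-dominated accelerometer sample and
# reveal the device controls when the wearer looks above the threshold.
REVEAL_DEG = 10.0

def head_pitch_deg(accel):
    """Pitch from an (ax, ay, az) sample in m/s^2; positive = looking up.
    Assumes x points forward and gravity dominates the reading."""
    ax, ay, az = accel
    return math.degrees(math.atan2(-ax, math.hypot(ay, az)))

def controls_visible(accel):
    return head_pitch_deg(accel) > REVEAL_DEG

# Looking up roughly 17 degrees reveals the medical device controls.
visible = controls_visible((-3.0, 0.0, 9.5))
```

In practice the SLAM pose would give a steadier pitch estimate than raw accelerometer samples, but the thresholding logic is identical.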
  • The OHMDS 100 can also facilitate other useful actions in a surgical environment to aid and assist a medical practitioner.
  • An embodiment OHMDS can display an automated checklist and count of devices, sponges, and so on which are required in the due course of surgery.
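A minimal sketch of such an automated surgical count follows. The item names and API are invented for illustration; the underlying rule, that every item counted into the field must be counted out before closing, is standard surgical practice:

```python
from collections import Counter

class SurgicalCount:
    """Track items in the surgical field; flag anything unaccounted for."""

    def __init__(self):
        self.in_field = Counter()

    def count_in(self, item, n=1):
        self.in_field[item] += n

    def count_out(self, item, n=1):
        self.in_field[item] -= n

    def outstanding(self):
        """Items with nonzero counts, to be flagged on the display."""
        return {item: n for item, n in self.in_field.items() if n != 0}

count = SurgicalCount()
count.count_in("sponge", 10)
count.count_in("needle", 2)
count.count_out("sponge", 10)
count.count_out("needle", 1)
missing = count.outstanding()   # one needle not yet accounted for
```

The headset would render `outstanding()` as a live checklist, clearing only when every count returns to zero.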
  • The OHMDS 100 can facilitate other functions and activities in a healthcare environment.
  • The OHMDS can have stored in memory (e.g., data storage unit 106) information concerning a specific location or locations in a facility where certain resources can be found. This information can be particularly important with respect to locations of certain types of equipment that can be needed on very short notice to deal with medical emergencies.
  • An example of equipment of this type is an automated defibrillator.
  • The processor can cause to be presented in the transparent optical display unit 102 directional prompts (e.g., audio and/or visual prompts) to guide the wearer to the sought-after medical resource.
  • Because the emergency equipment may be periodically moved to different locations (e.g., to different parts of a facility), it can be advantageous to incorporate a tracking and reporting device (such as a GPS tracker) within the emergency equipment.
  • The device can then periodically report its position within the facility to a central database.
  • Location information can then be remotely accessed by the OHMDS 100 when needed by using one or more of the data transceivers 114, 116.
  • The wearer of the OHMDS can then be directed to such equipment as described herein.
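The report-and-lookup cycle above can be sketched as follows. All identifiers, the room-label scheme, and the in-memory dictionary standing in for the central database are invented for illustration:

```python
# Hypothetical sketch of equipment tracking: trackers report positions
# periodically; the headset queries the latest reported location.

equipment_locations = {}  # stands in for the central database

def report_position(device_id, room, timestamp):
    """Called periodically by the tracker embedded in the equipment;
    later reports overwrite earlier ones."""
    equipment_locations[device_id] = {"room": room, "ts": timestamp}

def locate(device_id):
    """Remote lookup used by the headset to direct the wearer."""
    record = equipment_locations.get(device_id)
    return record["room"] if record else None

report_position("defib-7", room="ICU-3", timestamp=1483600000)
report_position("defib-7", room="ER-1", timestamp=1483603600)  # moved
location = locate("defib-7")   # latest reported location
```

Because each report overwrites the previous record, the headset always receives the most recent known location, which it can turn into directional prompts for the wearer.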

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A head-mounted display system includes a headset that positions a transparent display screen in alignment with the line of sight of at least one eye of a wearer of the headset. Data and images can be presented on the transparent display screen. A video camera is mounted on the headset and positioned to capture video images of a scene coinciding with the line of sight of the wearer of the headset. A computer processor mounted in the headset is configured to receive video image data obtained by the video camera; process the video image data to derive a pulse of a person; receive patient data transmitted wirelessly from a patient module connected to one or more patient sensors that are directly connected or attached to the person; correlate the transmitted patient data with the person in the scene by comparing the pulse derived from the video image with a second pulse specified in the transmitted patient data; and display at least one data element from the transmitted patient data on the transparent screen based on the correlating step.
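The correlation step described in the abstract, matching the video-derived pulse against pulses reported by wireless patient modules, can be sketched as follows. The tolerance value and the module identifiers are assumptions; only the compare-and-associate logic comes from the abstract:

```python
# Match a video-derived pulse (beats per minute) against the pulses
# reported by nearby patient modules; the closest match within
# tolerance associates that module's data with the person in view.

def pulses_match(video_bpm, sensor_bpm, tolerance_bpm=5.0):
    return abs(video_bpm - sensor_bpm) <= tolerance_bpm

def correlate(video_bpm, patient_modules, tolerance_bpm=5.0):
    """Return the id of the module whose reported pulse best matches
    the video-derived pulse, or None if no module is close enough."""
    best_id, best_diff = None, tolerance_bpm
    for module_id, sensor_bpm in patient_modules.items():
        diff = abs(video_bpm - sensor_bpm)
        if diff <= best_diff:
            best_id, best_diff = module_id, diff
    return best_id

match = correlate(72.0, {"module-A": 71.0, "module-B": 88.0})
```

Once a module is matched to the person in the scene, its data elements can be rendered on the transparent screen next to that person.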
PCT/US2017/012266 2016-01-05 2017-01-05 Augmented reality optical head-mounted display for medical monitoring, diagnosis and treatment WO2017120288A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662275092P 2016-01-05 2016-01-05
US62/275,092 2016-01-05

Publications (1)

Publication Number Publication Date
WO2017120288A1 true WO2017120288A1 (fr) 2017-07-13

Family

ID=59274432

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/012266 WO2017120288A1 (fr) 2016-01-05 2017-01-05 Augmented reality optical head-mounted display for medical monitoring, diagnosis and treatment

Country Status (1)

Country Link
WO (1) WO2017120288A1 (fr)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7774044B2 (en) * 2004-02-17 2010-08-10 Siemens Medical Solutions Usa, Inc. System and method for augmented reality navigation in a medical intervention procedure
US20150005644A1 (en) * 2013-04-18 2015-01-01 Digimarc Corporation Dermoscopic data acquisition employing display illumination
WO2015110859A1 (fr) * 2014-01-21 2015-07-30 Trophy Method for implant surgery using augmented visualization
WO2015126466A1 (fr) * 2014-02-21 2015-08-27 The University Of Akron Imaging and display system for guiding medical interventions
US20150360038A1 * 2014-06-13 2015-12-17 Boston Scientific Neuromodulation Corporation Heads-Up Display and Control of an Implantable Medical Device
WO2016014384A2 (fr) * 2014-07-25 2016-01-28 Covidien Lp Augmented surgical reality environment


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107693117A (zh) * 2017-09-29 2018-02-16 黑龙江蓝智科技有限公司 Augmented reality assisted surgery system and method for automatically registering and matching a 3D model with a surgical patient
CN107693117B (zh) * 2017-09-29 2020-06-12 苏州蓝软智能医疗科技有限公司 Assisted surgery system and method for automatically registering and matching a 3D model with a surgical patient
WO2019211713A1 (fr) * 2018-04-30 2019-11-07 Telefonaktiebolaget Lm Ericsson (Publ) Automated augmented reality rendering platform for providing remote expert assistance
US11681970B2 2018-04-30 2023-06-20 Telefonaktiebolaget Lm Ericsson (Publ) Automated augmented reality rendering platform for providing remote expert assistance
CN110584779A (zh) * 2019-08-20 2019-12-20 周苹 Head-mounted visual surgical site navigation system and operation method thereof
WO2023166311A1 (fr) * 2022-03-03 2023-09-07 Intelligent Ultrasound Limited Apparatus and method for monitoring a medical procedure

Similar Documents

Publication Publication Date Title
US11190752B2 (en) Optical imaging system and methods thereof
RU2740259C2 (ru) Позиционирование датчика ультразвуковой визуализации
US20220192776A1 (en) Augmented Reality Viewing and Tagging For Medical Procedures
US10932689B2 (en) Model registration system and method
JP6987893B2 (ja) 診断試験をリアルタイムの治療に統合する汎用デバイスおよび方法
CN103735312B (zh) 多模影像超声引导手术导航系统
US20240008935A1 (en) Tracking methods for image-guided surgery
JP6985262B2 (ja) 患者の体内における内視鏡の位置を追跡するための装置及び方法
US20170296292A1 (en) Systems and Methods for Surgical Imaging
US20210186355A1 (en) Model registration system and method
CN110494921A (zh) 利用三维数据增强患者的实时视图
CN109998678A (zh) 在医学规程期间使用增强现实辅助导航
WO2016138348A1 (fr) Systèmes et procédés de surveillance de procédure médicale
US20190365339A1 (en) Augmented reality interventional system providing contextual overylays
WO2017120288A1 (fr) Visiocasque optique à réalité augmentée pour surveillance, diagnostic et traitement médicaux
US20160073854A1 (en) Systems and methods using spatial sensor data in full-field three-dimensional surface measurement
JP6116754B2 (ja) 低侵襲手術において画像データを立体視表示するための装置およびその装置の作動方法
CN107669338A (zh) 用于导航和可视化的系统和方法
TW202108086A (zh) 應用於與手術導航整合之混合實境系統之數位影像實境對位套件與方法
Gsaxner et al. Augmented reality in oral and maxillofacial surgery
CN111417353A (zh) 外科手术形状传感光纤光学设备及其方法
US20130338493A1 (en) Surgical devices, systems and methods for highlighting and measuring regions of interest
US20220096004A1 (en) System for visualizing patient stress
CN114830638A (zh) 用于具有空间记忆的远程图解的系统和方法
JP6795744B2 (ja) 医療支援方法および医療支援装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17736296

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17736296

Country of ref document: EP

Kind code of ref document: A1