US11744652B2 - Visualization of predicted dosage - Google Patents

Visualization of predicted dosage

Info

Publication number
US11744652B2
Authority
US
United States
Prior art keywords
collimator
pose
generating
display
headset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US17/871,885
Other versions
US20220354591A1 (en)
Inventor
Long QIAN
Christopher Morley
Osamah Choudhry
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Medivis Inc
Original Assignee
Medivis Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 17/148,522 (US11172996B1)
Priority claimed from US 17/395,233 (US11857274B2)
Application filed by Medivis Inc
Priority to US 17/871,885 (US11744652B2)
Publication of US20220354591A1
Priority to US 18/208,136 (US20230320788A1)
Assigned to MediVis, Inc. (assignors: Morley, Christopher; Choudhry, Osamah; Qian, Long)
Application granted
Publication of US11744652B2
Legal status: Active
Anticipated expiration

Classifications

    • A61B 90/39: Markers, e.g. radio-opaque or breast lesions markers
    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/25: User interfaces for surgical systems
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012: Head tracking input arrangements
    • G06T 19/006: Mixed reality
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2065: Tracking using image or pattern recognition
    • A61B 2034/2068: Tracking using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • A61B 2090/364: Correlation of different images or relation of image positions in respect to the body
    • A61B 2090/365: Augmented reality, i.e. correlating a live optical image with another image
    • A61B 90/37: Surgical systems with images on a monitor during operation
    • A61B 2090/372: Details of monitor hardware
    • A61B 2090/3983: Reference marker arrangements for use with image guided surgery
    • A61B 90/50: Supports for surgical instruments, e.g. articulated arms
    • A61B 2090/502: Headgear, e.g. helmet, spectacles
    • A61B 90/361: Image-producing devices, e.g. surgical cameras
    • A61B 90/90: Identification means for patients or instruments, e.g. tags
    • A61B 90/94: Identification means for patients or instruments coded with symbols, e.g. text
    • G06T 2207/30204: Marker (indexing scheme for image analysis; subject of image)

Definitions

  • the Field Visualization Engine may further determine predicted interactions between the simulated radiation beam and the various types of tissue, muscle and/or organs of the patient's internal anatomy represented by the medical model data.
  • the AR headset device may generate an AR display 606 that incorporates visual indications in the overlay 306 to provide the user with a visualization of how the simulated radiation beam may emanate throughout the internal anatomy, the fall off of the simulated radiation beam and effects 506 the internal anatomy may experience as a consequence of exposure to the simulated radiation beam.
  • the Field Visualization Engine may trigger display in the AR display 606 of a visual cue upon determining that the emanation of the simulated radiation beam that corresponds with the collimator's current pose and power settings meets a similarity threshold (or is in alignment) with respect to the emanation of the planned dose of radiation and its fall off as displayed in the overlay 306 according to the mask layers 502 .
  • the visual cue informs the user that the collimator's current pose and power settings that correspond with the simulated radiation beam are optimal for actually delivering the planned radiation dose to the patient.
  • FIG. 7 illustrates an example machine of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
  • the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet.
  • the machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.
  • the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • the example computer system 700 includes a processing device 702 , a main memory 704 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 706 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 718 , which communicate with each other via a bus 730 .
  • Processing device 702 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 702 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 702 is configured to execute instructions 726 for performing the operations and steps discussed herein.
  • the computer system 700 may further include a network interface device 708 to communicate over the network 720 .
  • the computer system 700 also may include a video display unit 710 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 712 (e.g., a keyboard), a cursor control device 714 (e.g., a mouse), a signal generation device 716 (e.g., a speaker), a graphics processing unit 722 , a video processing unit 728 , and an audio processing unit 732 .
  • the data storage device 718 may include a machine-readable storage medium 724 (also known as a computer-readable medium) on which is stored one or more sets of instructions or software 726 embodying any one or more of the methodologies or functions described herein.
  • the instructions 726 may also reside, completely or at least partially, within the main memory 704 and/or within the processing device 702 during execution thereof by the computer system 700 , the main memory 704 and the processing device 702 also constituting machine-readable storage media.
  • the instructions 726 include instructions to implement functionality corresponding to the components of a device to perform the disclosure herein.
  • while the machine-readable storage medium 724 is shown in an example implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
  • the term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media and magnetic media.
  • the present disclosure also relates to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the intended purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • the present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure.
  • a machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer).
  • a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.

Abstract

Various embodiments of an apparatus, methods, systems and computer program products described herein are directed to Field Visualization Engine. The Field Visualization Engine tracks one or more collimator poses relative to one or more Augmented Reality (AR) headset device poses. Each respective collimator pose and each respective headset device pose corresponds to a three-dimensional (3D) unified coordinate space (“3D space”). The Field Visualization Engine generates an AR representation of a beam emanating from the collimator based at least on a current collimator pose and a current headset device pose. The Field Visualization Engine further generates an AR visualization of emanation of the beam throughout an AR display of medical data.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation-in-part of U.S. patent application Ser. No. 17/395,233, filed on Aug. 5, 2021, titled “MEDICAL INSTRUMENT WITH FIDUCIAL MARKERS,” which claims priority to U.S. patent application Ser. No. 17/148,522 (now U.S. Pat. No. 11,172,996), the entirety of which is incorporated herein by reference.
This application is a continuation-in-part of U.S. patent application Ser. No. 17/502,030, filed on Oct. 14, 2021, titled “INSTRUMENT-BASED REGISTRATION AND ALIGNMENT FOR AUGMENTED REALITY ENVIRONMENTS,” which also claims priority to U.S. patent application Ser. No. 17/148,522 (now U.S. Pat. No. 11,172,996), the entirety of which is incorporated herein by reference.
BACKGROUND
Current conventional systems have limitations with regard to two-dimensional (2D) and three-dimensional (3D) images in surgical settings. Surgical planning is necessary for every medical procedure. A surgeon and their team must have a plan for a case before entering an operating room, not just as a matter of good practice but to minimize malpractice liabilities and to enhance patient outcomes. Surgical planning is often conducted based on medical images, including DICOM scans (MRI, CT, etc.), which requires the surgeon to flip through numerous views/slices and use this information to mentally construct a 3D model of the patient so that the procedure may be planned. Accordingly, in such a scenario, the best course of action is often a surgeon's judgment call based on the data that they are provided.
SUMMARY
Various embodiments of an apparatus, methods, systems and computer program products described herein are directed to a Field Visualization Engine. Various embodiments are described herein as including an external instrument comprising a collimator. It is understood that external instruments of various embodiments of the Field Visualization Engine are not limited solely to a collimator. That is, embodiments of the Field Visualization Engine may substitute, for the collimator, any module that generates and transmits an energy beam or field (such as a magnetic field). For example, embodiments described herein include any type of external instrument that provides some form of treatment (or performs some type of diagnosis) of a patient.
According to various embodiments, the Field Visualization Engine tracks one or more positions and orientations of a collimator (“collimator poses”) relative to one or more positions and orientations of an Augmented Reality (AR) headset device (“headset poses” or “device poses”) worn by a user. Each respective collimator pose and each respective headset pose corresponds to a three-dimensional (3D) unified coordinate space (“3D space”). The Field Visualization Engine generates an AR representation of a beam (such as a radiation beam) emanating from the collimator based at least on a current collimator pose and a current headset pose. The Field Visualization Engine further generates an AR visualization of emanation of the beam throughout an AR display of medical data.
According to various embodiments, AR visualization of emanation of the beam may be based on a simulated beam (such as a simulated radiation beam) or may be a real-time AR visualization of an actual beam currently being delivered by the collimator.
In various embodiments, a user may wear an AR headset device as part of a component of the Field Visualization Engine. The AR headset device generates an AR display that includes 3D medical model data displayed as an overlay over a physical patient. The 3D medical model data may be representative of a portion of the physical patient's anatomy. As the user continually moves and the headset pose continually changes, the Field Visualization Engine captures the current headset pose and retrieves medical model data that corresponds with the user's perspective view. The Field Visualization Engine continually updates the overlay by rendering newly retrieved medical model data.
In various embodiments, the Field Visualization Engine incorporates mask layers into the AR display of the 3D medical model data. The mask layers provide a preview of how a sphere of a planned dose of radiation may emanate throughout the patient's internal anatomy and also provide a visual indication of predicted radiation fall off.
In various embodiments, the Field Visualization Engine determines the collimator pose for a collimator physically positioned proximate to the patient. The AR headset further generates an AR visualization of a simulated radiation beam generated by the collimator and targeted at the patient.
In one or more embodiments, the AR headset device displays the AR visualization of a simulated radiation beam relative to a current headset pose. The AR visualization of a simulated radiation beam represents a view of the simulated radiation beam from the user's current perspective view as the user moves and the headset pose continually changes.
In various embodiments, the AR display includes concurrent display of the medical model data overlay with mask layers and the AR visualization of the simulated radiation beam.
According to one or more embodiments, the Field Visualization Engine determines predicted interactions between the simulated radiation beam and the various types of targeted tissue, muscle and/or organs of the patient's internal anatomy represented by the medical model data.
In one or more embodiments, the AR headset device incorporates visual indications in the medical model data overlay to provide a visualization of how the simulated radiation beam may emanate throughout the internal anatomy, the fall off of the simulated radiation beam and the effects the internal anatomy may experience as a consequence of exposure to the simulated radiation beam.
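By way of a non-limiting illustration, the following Python sketch models one simple way such predicted interactions could be computed: per-voxel exponential attenuation of the simulated beam along a ray of labeled tissue. The tissue labels, attenuation coefficients and function names are assumptions introduced for this example only and are not taken from the present disclosure.

```python
import numpy as np

# Illustrative linear attenuation coefficients (per cm) keyed by tissue label;
# the labels and values are placeholders, not clinical data.
MU = {0: 0.00,   # air
      1: 0.02,   # soft tissue
      2: 0.05,   # muscle
      3: 0.12}   # bone

def dose_along_ray(tissue_labels, step_cm, entry_dose):
    """Predict how a simulated beam falls off as it passes through a row of
    labeled voxels, using simple per-voxel exponential attenuation.  The
    returned values could drive the colour of the overlay's visual
    indications (e.g., highlighting tissue above an exposure threshold)."""
    doses, dose = [], entry_dose
    for label in tissue_labels:
        doses.append(dose)                         # dose arriving at this voxel
        dose *= np.exp(-MU[int(label)] * step_cm)  # attenuate before the next voxel
    return np.array(doses)

# Example: the beam crosses soft tissue, then bone, then muscle (0.5 cm steps).
ray = np.array([1, 1, 1, 1, 3, 3, 2, 2, 2, 2])
print(np.round(dose_along_ray(ray, step_cm=0.5, entry_dose=1.0), 3))
```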
In some embodiments, the Field Visualization Engine triggers display of a visual cue upon determining that the emanation and fall off of the simulated radiation beam corresponding to the collimator's current pose are in alignment with the emanation of the planned dose of radiation and its fall off.
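As an illustrative sketch (not the claimed method), the alignment determination could be expressed as an overlap test between a dose volume predicted for the collimator's current pose and the planned dose volume; the Dice overlap metric and the example threshold below are assumptions for demonstration.

```python
import numpy as np

def doses_aligned(simulated_mask, planned_mask, threshold=0.95):
    """Return True when the volume predicted for the simulated beam (including
    fall off) overlaps the planned dose volume closely enough to trigger the
    visual cue.  Dice overlap and the 0.95 threshold are illustrative."""
    sim, plan = simulated_mask.astype(bool), planned_mask.astype(bool)
    intersection = np.logical_and(sim, plan).sum()
    dice = 2.0 * intersection / max(sim.sum() + plan.sum(), 1)
    return dice >= threshold

# Example: a planned dose volume and a simulated volume shifted by one voxel.
plan = np.zeros((32, 32, 32), dtype=bool)
plan[4:28, 4:28, 4:28] = True
sim = np.roll(plan, shift=1, axis=0)
print(doses_aligned(sim, plan))   # True: overlap is roughly 0.96
```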
According to various embodiments, the Field Visualization Engine (or a portion of the Field Visualization Engine) may be implemented in an external computer device(s) to track the collimator poses and the headset poses.
Various embodiments include a module(s) and/or one or more functionalities to redact privacy information/data (such as medical data), to encrypt information/data and to anonymize data to ensure the confidentiality and security of user, patient and system information/data as well as compliance with medical regulatory and privacy law(s) in the United States and/or international jurisdictions.
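A minimal sketch of such redaction/anonymization functionality is shown below, assuming a simple dictionary record; the field names and the salted-hash pseudonymization are illustrative assumptions, and any real deployment would need to satisfy the applicable regulations (e.g., HIPAA, GDPR).

```python
import hashlib

# Fields treated as direct identifiers in this sketch (assumed, not exhaustive).
IDENTIFYING_FIELDS = {"patient_name", "address", "phone", "birth_date"}

def anonymize_record(record, salt):
    """Drop direct identifiers and replace the patient ID with a salted,
    one-way pseudonym so records can still be linked within the system."""
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    pid = str(record.get("patient_id", ""))
    cleaned["patient_id"] = hashlib.sha256((salt + pid).encode()).hexdigest()[:16]
    return cleaned

record = {"patient_id": "12345", "patient_name": "Jane Doe",
          "birth_date": "1980-01-01", "modality": "CT"}
print(anonymize_record(record, salt="per-deployment-secret"))
```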
Further areas of applicability of the present disclosure will become apparent from the detailed description, the claims and the drawings. The detailed description and specific examples are intended for illustration only and are not intended to limit the scope of the disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure will become better understood from the detailed description and the drawings, wherein:
FIG. 1A is a diagram illustrating an exemplary environment in which some embodiments may operate.
FIG. 1B is a diagram illustrating an exemplary environment in which some embodiments may operate.
FIG. 2 is a diagram illustrating an exemplary method that may be performed in some embodiments.
FIG. 3 is a diagram illustrating an exemplary environment in which some embodiments may operate.
FIG. 4 is a diagram illustrating an exemplary environment in which some embodiments may operate.
FIG. 5 is a diagram illustrating an exemplary environment in which some embodiments may operate.
FIGS. 6A and 6B are each a diagram illustrating an exemplary environment in which some embodiments may operate.
FIG. 7 is a diagram illustrating an exemplary environment in which some embodiments may operate.
DETAILED DESCRIPTION
In this specification, reference is made in detail to specific embodiments of the invention. Some of the embodiments or their aspects are illustrated in the drawings.
For clarity in explanation, the invention has been described with reference to specific embodiments, however it should be understood that the invention is not limited to the described embodiments. On the contrary, the invention covers alternatives, modifications, and equivalents as may be included within its scope as defined by any patent claims. The following embodiments of the invention are set forth without any loss of generality to, and without imposing limitations on, the claimed invention. In the following description, specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In addition, well known features may not have been described in detail to avoid unnecessarily obscuring the invention.
In addition, it should be understood that steps of the exemplary methods set forth in this exemplary patent can be performed in different orders than the order presented in this specification. Furthermore, some steps of the exemplary methods may be performed in parallel rather than being performed sequentially. Also, the steps of the exemplary methods may be performed in a network environment in which some steps are performed by different computers in the networked environment.
Some embodiments are implemented by a computer system. A computer system may include a processor, a memory, and a non-transitory computer-readable medium. The memory and non-transitory medium may store instructions for performing methods and steps described herein.
A diagram of an exemplary network environment in which embodiments may operate is shown in FIG. 1A. In the exemplary environment 140, two clients 141, 142 are connected over a network 145 to a server 150 having local storage 151. Clients and servers in this environment may be computers. Server 150 may be configured to handle requests from clients.
The exemplary environment 140 is illustrated with only two clients and one server for simplicity, though in practice there may be more or fewer clients and servers. The computers have been termed clients and servers, though clients can also play the role of servers and servers can also play the role of clients. In some embodiments, the clients 141, 142 may communicate with each other as well as the servers. Also, the server 150 may communicate with other servers.
The network 145 may be, for example, local area network (LAN), wide area network (WAN), telephone networks, wireless networks, intranets, the Internet, or combinations of networks. The server 150 may be connected to storage 152 over a connection medium 160, which may be a bus, crossbar, network, or other interconnect. Storage 152 may be implemented as a network of multiple storage devices, though it is illustrated as a single entity. Storage 152 may be a file system, disk, database, or other storage.
In an embodiment, the client 141 may perform the method 200 or other method herein and, as a result, store a file in the storage 152. This may be accomplished via communication over the network 145 between the client 141 and server 150. For example, the client may communicate a request to the server 150 to store a file with a specified name in the storage 152. The server 150 may respond to the request and store the file with the specified name in the storage 152. The file to be saved may exist on the client 141 or may already exist in the server's local storage 151. In another embodiment, the server 150 may respond to requests and store the file with a specified name in the storage 151. The file to be saved may exist on the client 141 or may exist in other storage accessible via the network such as storage 152, or even in storage on the client 142 (e.g., in a peer-to-peer system).
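The store-file exchange described above can be sketched, purely for illustration, with an in-memory stand-in for the network 145 and storage 152; the message format and class names below are assumptions rather than a protocol defined by this disclosure.

```python
class Server:
    """Stand-in for server 150; storage_152 models the attached storage 152."""
    def __init__(self):
        self.storage_152 = {}

    def handle(self, request):
        if request.get("op") == "store":
            self.storage_152[request["name"]] = request["data"]
            return {"status": "ok", "stored_as": request["name"]}
        return {"status": "error", "reason": "unsupported operation"}

class Client:
    """Stand-in for client 141; the direct call models a request over network 145."""
    def __init__(self, server):
        self.server = server

    def store_file(self, name, data):
        return self.server.handle({"op": "store", "name": name, "data": data})

server = Server()
client = Client(server)
print(client.store_file("scan_001.dcm", b"...file bytes..."))
```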
In accordance with the above discussion, embodiments can be used to store a file on local storage such as a disk or on a removable medium like a flash drive, CD-R, or DVD-R. Furthermore, embodiments may be used to store a file on an external storage device connected to a computer over a connection medium such as a bus, crossbar, network, or other interconnect. In addition, embodiments can be used to store a file on a remote server or on a storage device accessible to the remote server.
Furthermore, cloud computing is another example where files are often stored on remote servers or remote storage systems. Cloud computing refers to pooled network resources that can be quickly provisioned so as to allow for easy scalability. Cloud computing can be used to provide software-as-a-service, platform-as-a-service, infrastructure-as-a-service, and similar features. In a cloud computing environment, a user may store a file in the “cloud,” which means that the file is stored on a remote network resource though the actual hardware storing the file may be opaque to the user.
FIG. 1B illustrates a block diagram of an example system 100 for various embodiments that includes at least a pose module 102, an AR representation module 104 and an AR emanation module 106.
The pose module 102 may perform functionality as illustrated in FIGS. 2, 3, 4, 5, 6A and 6B (“FIGS. 2-6B”). The pose module 102 may determine pose data and pose data changes for various types of devices, objects and/or items.
The AR representation module 104 may perform functionality as illustrated in FIGS. 2-6B. The AR representation module 104 may generate an AR representation of a beam or magnetic field.
The AR emanation module 106 may perform functionality as illustrated in FIGS. 2-6B. The AR emanation module 106 may generate an AR visualization of emanation of the beam throughout an AR display of medical data. The AR emanation module 106 may generate an AR visualization of emanation of a magnetic field throughout an AR display of medical data.
The system 100 may further include one or more user devices 140 (such as one or more Augmented Reality headset devices) to display output via a user interface generated by an application engine. An exemplary user device may include, for example, a spatial transformation module, a camera module, a physical landmark module, a user manipulation module and an augmented reality display module 118. It is understood that the user device(s) 140 may further include one or more of the modules 102, 104, 106, or respective portions of any respective module(s) may be distributed and implemented amongst a plurality of user devices 140 and one or more workstations.
Any module or component of the system 100 may have access to a 3D model of medical data 122 or may have one or more portions of the 3D model 122 stored locally. While the database(s) 120 is displayed separately, the databases and information maintained in a database may be combined together or further separated in a manner that promotes retrieval and storage efficiency and/or data security.
According to various embodiments, a database(s) associated with the system 100 maintains information, such as 3D medical model data, in a manner that promotes retrieval and storage efficiency and/or data security. In addition, the 3D medical model data may include rendering parameters, such as data based on selections and modifications to a 3D virtual representation of a medical model rendered for a previous Augmented Reality display. In various embodiments, one or more rendering parameters may be preloaded as a default value for a rendering parameter in a newly initiated session of the system 100.
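A minimal sketch of preloading rendering parameters as session defaults is given below; the parameter names and default values are hypothetical.

```python
# Hypothetical rendering parameters and defaults for a new session.
DEFAULT_RENDER_PARAMS = {"window_level": 40, "window_width": 400,
                         "opacity": 0.7, "clip_plane_mm": None}

def new_session_params(previous_session=None):
    """Start from the defaults, then preload any values the user selected or
    modified during a previous Augmented Reality display."""
    params = dict(DEFAULT_RENDER_PARAMS)
    if previous_session:
        params.update({k: v for k, v in previous_session.items()
                       if k in DEFAULT_RENDER_PARAMS})
    return params

print(new_session_params({"opacity": 0.5}))   # opacity carried over as a default
```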
In one or more embodiments, an Augmented-Reality (AR) headset device may implement one or more modules of the Field Visualization Engine. A user may wear the AR headset device that generates and displays an AR display. The AR headset device tracks one or more poses (and changes in poses) of various items and/or objects. In various embodiments, a camera(s) disposed on the AR headset device captures one or more images of the various items and/or objects as the AR headset device's position and orientation changes due to user movements. In some embodiments, the AR headset device generates an AR display, a 3D virtual representation of a medical model (“3D virtual medical model”), and a 3D virtual representation of a body part of the user (“3D virtual hands”).
In various embodiments, the Field Visualization Engine accesses one or more storage locations that contain respective portions of 3D medical model data. The 3D medical model data may include a plurality of slice layers of medical data associated with external and/or internal anatomies. For example, the 3D medical model data may include a plurality of slice layers of medical data for illustrating external and internal anatomical regions of a user's head, brain and skull, etc. It is understood that various embodiments may be directed to generating displays of any internal or external anatomical portions of the human body and/or animal bodies.
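For illustration, the slice-layer organization of the 3D medical model data could be represented as follows; the class name, axial-stack layout and spacing fields are assumptions made for this sketch.

```python
import numpy as np

class MedicalVolume:
    """3D medical model data held as a stack of 2D slice layers (e.g., one per
    axial CT/MRI slice).  The field names and layout are assumptions."""
    def __init__(self, slices, spacing_mm=(1.0, 1.0, 1.0)):
        self.slices = slices            # shape: (num_slices, height, width)
        self.spacing_mm = spacing_mm    # (slice thickness, row, column)

    def slice_at_depth(self, depth_mm):
        """Return the slice layer closest to a physical depth along the stack."""
        index = int(round(depth_mm / self.spacing_mm[0]))
        index = max(0, min(index, self.slices.shape[0] - 1))
        return self.slices[index]

volume = MedicalVolume(np.random.rand(120, 256, 256), spacing_mm=(2.0, 1.0, 1.0))
print(volume.slice_at_depth(35.0).shape)   # nearest slice layer: (256, 256)
```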
According to various embodiments, respective pose data of the AR headset device, a collimator, and a physical patient each represents a physical position and orientation in a 3D space defined by a unified coordinate system. In some embodiments, the AR headset device identifies respective coordinates for pose data relative to a predefined fixed reference point. In some embodiments, the collimator (or external instrument arm) and the patient may each have one or more fiducial markers tracked by the AR headset device. The respective pose data may be based on tracked positions and orientations of the fiducial markers. The Field Visualization Engine may further apply various types of spatial transformations to pose data.
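One common way to derive such pose data from tracked fiducial markers is a rigid (Kabsch/SVD) fit between an instrument's known marker layout and the marker positions currently observed by the headset; the sketch below assumes that approach and uses hypothetical names, without implying it is the method of this disclosure.

```python
import numpy as np

def pose_from_fiducials(model_pts, observed_pts):
    """Estimate the rigid transform (R, t) mapping an object's known fiducial
    layout (model_pts, Nx3) onto the marker positions the headset currently
    observes (observed_pts, Nx3), via the Kabsch/SVD method."""
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = oc - R @ mc
    return R, t

# Example: three markers on an instrument, observed 10 cm to the right.
model = np.array([[0.00, 0.00, 0.0], [0.05, 0.00, 0.0], [0.00, 0.08, 0.0]])
observed = model + np.array([0.10, 0.0, 0.0])
R, t = pose_from_fiducials(model, observed)
print(np.round(t, 3))   # approximately [0.1, 0.0, 0.0]
```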
As shown in flowchart 200 of FIG. 2, the Field Visualization Engine tracks one or more positions and orientations of a collimator (“collimator poses”) relative to one or more positions and orientations of an Augmented Reality (AR) headset device (“headset poses” or “device poses”) worn by a user (Act 202). Each respective collimator pose and each respective headset pose corresponds to a three-dimensional (3D) unified coordinate space (“3D space”).
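As a minimal sketch of Act 202, poses in the unified coordinate space can be represented as 4x4 homogeneous transforms and composed so that the collimator pose is expressed relative to the current headset pose; the Pose class and helper below are illustrative assumptions.

```python
import numpy as np

class Pose:
    """A position and orientation in the unified 3D coordinate space, stored
    as a 4x4 homogeneous transform (world-from-local)."""
    def __init__(self, rotation, translation):
        self.matrix = np.eye(4)
        self.matrix[:3, :3] = rotation      # 3x3 rotation
        self.matrix[:3, 3] = translation    # 3-vector position

    def relative_to(self, other):
        """Express this pose in the frame of `other`, e.g., the collimator
        pose relative to the current headset pose."""
        return np.linalg.inv(other.matrix) @ self.matrix

# Example for Act 202: where the collimator sits from the headset's viewpoint.
headset = Pose(np.eye(3), np.array([0.0, 1.6, 0.0]))
collimator = Pose(np.eye(3), np.array([0.5, 1.0, -2.0]))
collimator_in_headset = collimator.relative_to(headset)
print(np.round(collimator_in_headset[:3, 3], 2))   # approximately [0.5, -0.6, -2.0]
```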
The Field Visualization Engine generates an AR representation of a beam (such as a radiation beam) emanating from the collimator based at least on a current collimator pose and a current headset pose. (Act 204) Various embodiments generate an AR representation of a simulated radiation beam and/or a real-time AR representation of an actual radiation beam currently being delivered. Various embodiments generate an AR representation of a simulated magnetic field and/or a real-time AR representation of an actual magnetic field that is currently being generated. The Field Visualization Engine further generates an AR visualization of emanation of the beam throughout an AR display of medical data. (Act 206) In various embodiments, the AR visualization may be of emanation of a magnetic field.
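As one possible (assumed) geometric realization of the beam representation, the sketch below samples a cone of points along the collimator's facing axis and maps them into the headset frame for display; the cone length, half-angle, and sampling density are arbitrary placeholders, not parameters taken from the disclosure.

```python
import numpy as np

def beam_cone_points(world_T_collimator, length=1.5, half_angle_deg=5.0, rings=8, spokes=16):
    """Sample world-frame points on a cone emanating along the collimator's local +z axis."""
    half_angle = np.radians(half_angle_deg)
    points = []
    for i in range(1, rings + 1):
        distance = length * i / rings            # distance along the beam axis
        radius = distance * np.tan(half_angle)   # cone radius at that distance
        for j in range(spokes):
            theta = 2 * np.pi * j / spokes
            points.append([radius * np.cos(theta), radius * np.sin(theta), distance, 1.0])
    points = np.array(points).T                  # 4 x N homogeneous points in the collimator frame
    return (world_T_collimator @ points)[:3].T   # N x 3 points in the unified (world) frame

# Placeholder poses; the cone is re-sampled whenever the collimator pose changes and
# re-projected whenever the headset pose changes.
world_T_collimator = np.eye(4)
world_T_collimator[:3, 3] = [0.5, 1.2, 1.0]
world_T_headset = np.eye(4)
world_T_headset[:3, 3] = [0.0, 1.6, 0.0]

beam_world = beam_cone_points(world_T_collimator)
beam_homogeneous = np.c_[beam_world, np.ones(len(beam_world))].T
beam_in_headset_frame = (np.linalg.inv(world_T_headset) @ beam_homogeneous)[:3].T
```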
In various embodiments, as shown in FIG. 3, the Field Visualization Engine generates an AR overlay of medical data 306. The AR headset device displays the overlay 306 in an AR display. The overlay 306 may be based on 3D medical model data representative of a portion of an internal anatomy of a patient 304. The patient may be positioned proximate to a collimator 308. The Field Visualization Engine determines a display position for the overlay 306 according to a position and orientation defined in the 3D space such that the overlay 306 is displayed over a portion of the physical patient's body in alignment with the patient's internal anatomy. As the user moves around the patient and the headset pose continually changes, the Field Visualization Engine captures the current headset pose and retrieves the medical model data that corresponds with the user's perspective view as represented by that headset pose. The Field Visualization Engine updates the overlay 306 by rendering the newly retrieved medical model data in the overlay 306.
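Under assumptions, the overlay placement just described can be illustrated as anchoring the model frame to the tracked patient pose and re-deriving the render transform each frame from the latest headset pose; `patient_T_model` below stands in for a registration offset that is not specified here.

```python
import numpy as np

def overlay_render_transform(world_T_headset, world_T_patient, patient_T_model=None):
    """Pose of the medical-model overlay in the headset (render) frame.

    patient_T_model is an assumed registration offset aligning the 3D model with
    the patient's internal anatomy; identity is used as a placeholder.
    """
    if patient_T_model is None:
        patient_T_model = np.eye(4)
    world_T_model = world_T_patient @ patient_T_model
    return np.linalg.inv(world_T_headset) @ world_T_model

# Re-evaluated every frame: as the user moves, only world_T_headset changes,
# so the overlay stays locked to the patient's anatomy.
world_T_headset = np.eye(4)
world_T_headset[:3, 3] = [0.0, 1.6, 0.0]
world_T_patient = np.eye(4)
world_T_patient[:3, 3] = [0.5, 1.0, 1.2]

headset_T_overlay = overlay_render_transform(world_T_headset, world_T_patient)
```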
In one or more embodiments, as shown in FIG. 4 , the Field Visualization Engine determines a collimator pose for a collimator 308 physically positioned proximate to the patient 304. The AR headset further generates an AR visualization 402 of a simulated radiation beam targeted at the patient 304. It is understood that the Field Visualization Engine may utilize various types of parameters in generating the AR visualization 402 of a simulated radiation beam such as, for example, the distance between the collimator 308 and the patient 304, the patient's current position and orientation in the 3D space, an angle of the collimator and various power settings of the collimator. The AR headset device displays the AR visualization 402 of a simulated radiation beam relative to a current headset pose. The user is thereby provided a view of the simulated radiation beam from the user's current perspective view as the user moves and the headset pose continually changes.
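To illustrate how those parameters might interact, the toy model below scales a nominal power setting by an inverse-square distance term and an angular aiming factor. It is a non-clinical placeholder constructed for this sketch, not a dose engine described in the disclosure.

```python
import numpy as np

def simulated_beam_intensity(power_setting, world_T_collimator, world_T_patient, reference_distance=1.0):
    """Toy intensity estimate at the patient from collimator power, distance, and aim angle."""
    source = world_T_collimator[:3, 3]
    target = world_T_patient[:3, 3]
    distance = max(np.linalg.norm(target - source), 1e-9)
    beam_axis = world_T_collimator[:3, 2]                     # collimator's local +z axis in the world frame
    aim = (target - source) / distance
    angular_factor = max(float(np.dot(beam_axis, aim)), 0.0)  # drops toward zero as the aim diverges
    return power_setting * angular_factor * (reference_distance / distance) ** 2

# Placeholder poses and power setting.
world_T_collimator = np.eye(4)
world_T_collimator[:3, 3] = [0.5, 1.2, 0.0]
world_T_patient = np.eye(4)
world_T_patient[:3, 3] = [0.5, 1.2, 1.0]

intensity = simulated_beam_intensity(1.0, world_T_collimator, world_T_patient)
```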
According to one or more embodiments, as shown in FIG. 5, an AR display 502 generated by the AR headset device may further include concurrent display of the medical model data overlay 306 and the AR visualization 402 of a simulated radiation beam. The overlay 306 and the AR visualization 402 are each generated by the AR headset device relative to the headset pose(s) to provide views of the overlay 306 and the AR visualization 402 from the user's perspective.
In an embodiment(s), as shown in FIG. 6A, the Field Visualization Engine generates one or more mask layers 502 and incorporates the mask layers into the overlay 306. Each respective mask layer may represent a portion(s) of predicted emanation of a sphere of a planned radiation dose that is to be applied to the patient's internal anatomy. For example, a planned radiation dose may include an amount of radiation targeting a particular part of internal anatomy at a selected angle. As such, the mask layers provide a preview of how the sphere of the planned dose of radiation will emanate throughout the patient's internal anatomy and also provide a visual indication of predicted radiation fall off.
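A minimal sketch of the mask-layer idea follows, assuming the planned dose is approximated as a sphere whose normalized intensity falls off linearly with radius and each mask layer marks one isodose shell; the grid size, falloff model, and isodose levels are assumptions for illustration only.

```python
import numpy as np

def dose_sphere(grid_shape, center_voxel, radius_voxels):
    """Normalized spherical dose field: 1.0 at the center, falling linearly to 0.0 at the radius."""
    zz, yy, xx = np.indices(grid_shape)
    distance = np.sqrt((xx - center_voxel[0]) ** 2 +
                       (yy - center_voxel[1]) ** 2 +
                       (zz - center_voxel[2]) ** 2)
    return np.clip(1.0 - distance / radius_voxels, 0.0, None)

def isodose_mask_layers(dose, levels=(0.8, 0.5, 0.2)):
    """One boolean mask per isodose level, suitable for compositing into the AR overlay."""
    return {level: dose >= level for level in levels}

planned_dose = dose_sphere(grid_shape=(64, 64, 64), center_voxel=(32, 32, 32), radius_voxels=12)
mask_layers = isodose_mask_layers(planned_dose)   # e.g. mask_layers[0.5] marks the 50% isodose volume
```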
In various embodiments, as shown in FIG. 6B, the Field Visualization Engine may further determine predicted interactions between the simulated radiation beam and the various types of tissue, muscle and/or organs of the patient's internal anatomy represented by the medical model data. The AR headset device may generate an AR display 606 that incorporates visual indications in the overlay 306 to provide the user with a visualization of how the simulated radiation beam may emanate throughout the internal anatomy, the fall off of the simulated radiation beam, and effects 506 the internal anatomy may experience as a consequence of exposure to the simulated radiation beam.
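Predicted interaction with different tissue types could, under a deliberately simplified assumption, be approximated with Beer-Lambert-style attenuation along each ray through a labeled tissue volume; the label codes and attenuation coefficients below are placeholders, not clinical values or the method of the disclosure.

```python
import numpy as np

# Assumed per-voxel attenuation coefficients keyed by tissue label (placeholder values).
TISSUE_MU = {0: 0.00, 1: 0.02, 2: 0.05, 3: 0.15}  # air, fat, muscle, bone

def attenuate_along_ray(tissue_labels, entry_voxel, step, n_steps, incident=1.0):
    """March a ray through the labeled volume, returning exit intensity and per-step deposition."""
    position = np.asarray(entry_voxel, dtype=float)
    deposited = np.zeros(n_steps)          # per-step energy loss, usable for fall-off visualization
    intensity = incident
    upper = np.array(tissue_labels.shape) - 1
    for i in range(n_steps):
        index = tuple(np.clip(position.astype(int), 0, upper))
        mu = TISSUE_MU.get(int(tissue_labels[index]), 0.0)
        lost = intensity * (1.0 - np.exp(-mu))
        deposited[i] = lost
        intensity -= lost
        position += step
    return intensity, deposited

# A dummy all-air volume for illustration; in practice the labels would come from the medical model data.
labels = np.zeros((64, 64, 64), dtype=int)
exit_intensity, per_step_loss = attenuate_along_ray(
    labels, entry_voxel=(0, 32, 32), step=np.array([1.0, 0.0, 0.0]), n_steps=64)
```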
In some embodiments, the Field Visualization Engine may trigger display in the AR display 606 of a visual cue upon determining that the emanation of the simulated radiation beam that corresponds with the collimator's current pose and power settings meets a similarity threshold (or is in alignment) with respect to the emanation of the planned dose of radiation and its fall off as displayed in the overlay 306 according to the mask layers 502. As such, the visual cue informs the user that the collimator's current pose and power settings that correspond with the simulated radiation beam are optimal for actually delivering the planned radiation dose to the patient.
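One way the alignment check might be realized, offered as an assumption rather than the claimed method, is a Dice-style overlap between the planned-dose mask and the mask implied by the current simulated beam, compared against a similarity threshold that gates the visual cue.

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice coefficient between two boolean masks (1.0 means identical coverage)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total else 1.0

def should_trigger_cue(planned_mask, simulated_mask, threshold=0.95):
    """True when the simulated emanation is close enough to the plan to show the visual cue."""
    return dice_overlap(planned_mask, simulated_mask) >= threshold

# Toy masks: two slightly offset spheres standing in for the planned dose and the emanation
# implied by the collimator's current pose and power settings.
zz, yy, xx = np.indices((64, 64, 64))
planned_mask = (xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2 <= 12 ** 2
simulated_mask = (xx - 33) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2 <= 12 ** 2

if should_trigger_cue(planned_mask, simulated_mask, threshold=0.9):
    print("Collimator pose and power settings are aligned with the planned dose.")
```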
FIG. 7 illustrates an example machine of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.
The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 700 includes a processing device 702, a main memory 704 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 706 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 718, which communicate with each other via a bus 730.
Processing device 702 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 702 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 702 is configured to execute instructions 726 for performing the operations and steps discussed herein.
The computer system 700 may further include a network interface device 708 to communicate over the network 720. The computer system 700 also may include a video display unit 710 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 712 (e.g., a keyboard), a cursor control device 714 (e.g., a mouse), a graphics processing unit 722, a signal generation device 716 (e.g., a speaker), a video processing unit 728, and an audio processing unit 732.
The data storage device 718 may include a machine-readable storage medium 724 (also known as a computer-readable medium) on which is stored one or more sets of instructions or software 726 embodying any one or more of the methodologies or functions described herein. The instructions 726 may also reside, completely or at least partially, within the main memory 704 and/or within the processing device 702 during execution thereof by the computer system 700, the main memory 704 and the processing device 702 also constituting machine-readable storage media.
In one implementation, the instructions 726 include instructions to implement functionality corresponding to the components of a device to perform the disclosure herein. While the machine-readable storage medium 724 is shown in an example implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media and magnetic media.
Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “identifying” or “determining” or “executing” or “performing” or “collecting” or “creating” or “sending” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage devices.
The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description above. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein.
The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.
In the foregoing disclosure, implementations of the disclosure have been described with reference to specific example implementations thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. The disclosure and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

Claims (11)

What is claimed is:
1. A computer-implemented method, comprising:
tracking one or more positions and orientations of a collimator (“collimator poses”) relative to one or more positions and orientations of an Augmented Reality (AR) headset device (“headset poses”), each respective collimator pose and each respective headset pose corresponding to a three-dimensional (3D) unified coordinate space (“3D space”);
generating an AR representation of a beam emanating from the collimator based at least on a current collimator pose and a current headset pose;
determining a current position and an orientation of a physical patient (“patient pose”), the patient pose corresponding to the 3D space; and
generating an AR visualization of emanation of the beam throughout an AR display of medical data, comprising:
generating the AR display of the medical data as a visual overlay based at least on the patient pose and relative to the current headset pose, the medical data corresponding to one or more volumes for a portion of internal anatomy of the physical patient, wherein generating the AR display of the medical data comprises:
generating one or more mask layers, each respective mask layer comprising a visual representation of at least a portion of a predicted emanation of a planned dose of radiation; and
incorporating at least one of the mask layers into the AR display of medical data;
determining whether an alignment exists between the predicted emanation of the planned dose of radiation and the AR visualization of emanation based on the current collimator and headset poses; and
triggering a visual cue upon determining the alignment exists.
2. The computer-implemented method of claim 1, wherein generating an AR visualization of emanation of the beam throughout the AR display of medical data comprises:
generating a visual representation of at least a portion of fall off of the beam; and
incorporating display of the visual representation of the portion of fall off with the AR display of the medical data.
3. The computer-implemented method of claim 1, wherein generating an AR representation of a beam emanating from the collimator comprises:
generating the AR representation of the beam further based at least on one or more collimator settings and an amount of distance between the collimator and a beam target.
4. The computer-implemented method of claim 3, wherein the beam target comprises a portion of internal anatomy of a physical patient.
5. A system comprising one or more processors, and a non-transitory computer-readable medium including one or more sequences of instructions that, when executed by the one or more processors, cause the system to perform operations comprising:
tracking one or more positions and orientations of a collimator (“collimator poses”) relative to one or more positions and orientations of an Augmented Reality (AR) headset device (“headset poses”), each respective collimator pose and each respective headset pose corresponding to a three-dimensional (3D) unified coordinate space (“3D space”);
generating an AR representation of a beam emanating from the collimator based at least on a current collimator pose and a current headset pose;
determining a current position and an orientation of a physical patient (“patient pose”), the patient pose corresponding to the 3D space; and
generating an AR visualization of emanation of the beam throughout an AR display of medical data, comprising:
generating the AR display of the medical data as a visual overlay based at least on the patient pose and relative to the current headset pose, the medical data corresponding to one or more volumes for a portion of internal anatomy of the physical patient, wherein generating the AR display of the medical data comprises:
generating one or more mask layers, each respective mask layer comprising a visual representation of at least a portion of a predicted emanation of a planned dose of radiation; and
incorporating at least one of the mask layers into the AR display of medical data;
determining whether an alignment exists between the predicted emanation of the planned dose of radiation and the AR visualization of emanation based on the current collimator and headset poses; and
triggering a visual cue upon determining the alignment exists.
6. The system of claim 5, wherein generating an AR visualization of emanation of the beam throughout the AR display of medical data comprises:
generating a visual representation of at least a portion of fall off of the beam; and
incorporating display of the visual representation of the portion of fall off with the AR display of the medical data.
7. The system of claim 5, wherein generating an AR representation of a beam emanating from the collimator comprises:
generating the AR representation of the beam further based at least on one or more collimator settings and an amount of distance between the collimator and a beam target.
8. The system of claim 7, wherein the beam target comprises a portion of internal anatomy of a physical patient.
9. A computer program product comprising a non-transitory computer-readable medium having a computer-readable program code embodied therein to be executed by one or more processors, the program code including instructions for:
tracking one or more positions and orientations of a collimator (“collimator poses”) relative to one or more positions and orientations of an Augmented Reality (AR) headset device (“headset poses”), each respective collimator pose and each respective headset pose corresponding to a three-dimensional (3D) unified coordinate space (“3D space”);
generating an AR representation of a beam emanating from the collimator based at least on a current collimator pose and a current headset pose;
determining a current position and an orientation of a physical patient (“patient pose”), the patient pose corresponding to the 3D space; and
generating an AR visualization of emanation of the beam throughout an AR display of medical data, comprising:
generating the AR display of the medical data as a visual overlay based at least on the patient pose and relative to the current headset pose, the medical data corresponding to one or more volumes for a portion of internal anatomy of the physical patient, wherein generating the AR display of the medical data comprises:
generating one or more mask layers, each respective mask layer comprising a visual representation of at least a portion of a predicted emanation of a planned dose of radiation; and
incorporating at least one of the mask layers into the AR display of medical data;
determining whether an alignment exists between the predicted emanation of the planned dose of radiation and the AR visualization of emanation based on the current collimator and headset poses; and
triggering a visual cue upon determining the alignment exists.
10. The computer program product of claim 9, wherein generating an AR visualization of emanation of the beam throughout the AR display of medical data comprises:
generating a visual representation of at least a portion of fall off of the beam; and
incorporating display of the visual representation of the portion of fall off with the AR display of the medical data.
11. The computer program product of claim 9, wherein generating an AR representation of a beam emanating from the collimator comprises:
generating the AR representation of the beam further based at least on one or more collimator settings and an amount of distance between the collimator and a beam target.
US17/871,885 2021-01-13 2022-07-22 Visualization of predicted dosage Active US11744652B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/871,885 US11744652B2 (en) 2021-01-13 2022-07-22 Visualization of predicted dosage
US18/208,136 US20230320788A1 (en) 2021-09-29 2023-06-09 Surgical navigation trajectory in augmented reality display

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US17/148,522 US11172996B1 (en) 2021-01-13 2021-01-13 Instrument-based registration and alignment for augmented reality environments
US17/395,233 US11857274B2 (en) 2021-01-13 2021-08-05 Medical instrument with fiducial markers
US17/502,030 US20220218420A1 (en) 2021-01-13 2021-10-14 Instrument-based registration and alignment for augmented reality environments
US17/871,885 US11744652B2 (en) 2021-01-13 2022-07-22 Visualization of predicted dosage

Related Parent Applications (3)

Application Number Title Priority Date Filing Date
US17/395,233 Continuation-In-Part US11857274B2 (en) 2021-01-13 2021-08-05 Medical instrument with fiducial markers
US17/502,030 Continuation-In-Part US20220218420A1 (en) 2021-01-13 2021-10-14 Instrument-based registration and alignment for augmented reality environments
US17/961,423 Continuation-In-Part US20230024958A1 (en) 2021-01-13 2022-10-06 Stereo video in augmented reality

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/489,693 Continuation-In-Part US11931114B2 (en) 2021-03-05 2021-09-29 Virtual interaction with instruments in augmented reality

Publications (2)

Publication Number Publication Date
US20220354591A1 US20220354591A1 (en) 2022-11-10
US11744652B2 US11744652B2 (en) 2023-09-05

Family

ID=83901776

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/871,885 Active US11744652B2 (en) 2021-01-13 2022-07-22 Visualization of predicted dosage

Country Status (1)

Country Link
US (1) US11744652B2 (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7427272B2 (en) 2003-07-15 2008-09-23 Orthosoft Inc. Method for locating the mechanical axis of a femur
US20080228064A1 (en) 2005-10-17 2008-09-18 Koninklijke Philips Electronics N. V. Marker Tracking for Interventional Magnetic Resonance
US20100002921A1 (en) 2008-07-07 2010-01-07 Matthias Fenchel Medical image acquisition apparatus and operating method therefor
US20130113802A1 (en) * 2011-04-29 2013-05-09 University Health Network Methods and systems for visualization of 3d parametric data during 2d imaging
US20140171787A1 (en) 2012-12-07 2014-06-19 The Methodist Hospital Surgical procedure management systems and methods
US20150265367A1 (en) 2014-03-19 2015-09-24 Ulrich Gruhler Automatic registration of the penetration depth and the rotational orientation of an invasive instrument
US20190090955A1 (en) 2016-03-01 2019-03-28 Mirus Llc Systems and methods for position and orientation tracking of anatomy and surgical instruments
US20180272153A1 (en) * 2016-07-29 2018-09-27 Brainlab Ag System For Monitoring The Position Of A Patient Receiving 4 pi Radiation Therapy
US20200197107A1 (en) * 2016-08-16 2020-06-25 Insight Medical Systems, Inc. Systems and methods for sensory augmentation in medical procedures
US20180063386A1 (en) * 2016-08-31 2018-03-01 Siemens Healthcare Gmbh Machine learning based real-time radiation dose assessment
US20180193097A1 (en) 2017-01-11 2018-07-12 Stewart David MCLACHLIN Patient reference device
US20180253856A1 (en) 2017-03-01 2018-09-06 Microsoft Technology Licensing, Llc Multi-Spectrum Illumination-and-Sensor Module for Head Tracking, Gesture Recognition and Spatial Mapping
US20210038181A1 (en) * 2018-01-31 2021-02-11 Siemens Healthcare Gmbh Method of position planning for a recording system of a medical imaging device and medical imaging device
US20190311490A1 (en) * 2018-04-09 2019-10-10 Globus Medical, Inc. Predictive visualization of medical imaging scanner component movement
US20200005486A1 (en) 2018-07-02 2020-01-02 Microsoft Technology Licensing, Llc Device pose estimation using 3d line clouds
US20200352655A1 (en) 2019-05-06 2020-11-12 ARUS Inc. Methods, devices, and systems for augmented reality guidance of medical devices into soft tissue
US20210169581A1 (en) * 2019-12-10 2021-06-10 Globus Medical, Inc. Extended reality instrument interaction zone for navigated robotic surgery
US20210378756A1 (en) 2020-06-09 2021-12-09 Globus Medical, Inc. Surgical object tracking in visible light via fiducial seeding and synthetic image registration

Also Published As

Publication number Publication date
US20220354591A1 (en) 2022-11-10

Similar Documents

Publication Publication Date Title
US11172996B1 (en) Instrument-based registration and alignment for augmented reality environments
US11354813B2 (en) Dilated fully convolutional network for 2D/3D medical image registration
McJunkin et al. Development of a mixed reality platform for lateral skull base anatomy
US20230008051A1 (en) Utilization of a transportable ct-scanner for radiotherapy procedures
CN110235175B (en) Online learning enhanced map-based automatic segmentation
US20140350863A1 (en) Systems and methods for automatic creation of dose prediction models and therapy treatment plans as a cloud service
RU2604706C2 (en) Correlated image mapping pointer
Wang et al. Surgical instrument tracking by multiple monocular modules and a sensor fusion approach
de Oliveira et al. A hand‐eye calibration method for augmented reality applied to computer‐assisted orthopedic surgery
CN103732297A (en) Augmented-reality range-of-motion therapy system and method of operation thereof
EP3720554B1 (en) Patient positioning using a skeleton model
CN109859833A (en) The appraisal procedure and device of ablative surgery therapeutic effect
JP6099061B2 (en) Use of several different indicators to determine the position change of a radiation therapy target
Xu et al. Design and validation of a spinal surgical navigation system based on spatial augmented reality
Hu et al. Occlusion-robust visual markerless bone tracking for computer-assisted orthopedic surgery
US11744652B2 (en) Visualization of predicted dosage
US11931114B2 (en) Virtual interaction with instruments in augmented reality
Ružický et al. Processing and visualization of medical data in a multiuser environment using artificial intelligence
US11058891B2 (en) Systems and methods for cloud-based radiation therapy treatment planning
WO2023055556A2 (en) Ai-based atlas mapping slice localizer for deep learning autosegmentation
US20230320788A1 (en) Surgical navigation trajectory in augmented reality display
WO2023055907A1 (en) Virtual interaction with instruments in augmented reality
Jang et al. Construction and verification of a safety region for brain tumor removal with a telesurgical robot system
Chu et al. Application of holographic display in radiotherapy treatment planning II: a multi‐institutional study
AU2018219996B2 (en) Videographic display of real-time medical treatment

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

AS Assignment

Owner name: MEDIVIS, INC., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:QIAN, LONG;MORLEY, CHRISTOPHER;CHOUDHRY, OSAMAH;SIGNING DATES FROM 20220715 TO 20220722;REEL/FRAME:064330/0020

STCF Information on status: patent grant

Free format text: PATENTED CASE