WO2015084837A1 - Improvements for haptic augmented and virtual reality system for simulation of surgical procedures - Google Patents

Improvements for haptic augmented and virtual reality system for simulation of surgical procedures

Info

Publication number
WO2015084837A1
Authority
WO
WIPO (PCT)
Prior art keywords
rendering logic
haptic
virtual
haptics
user
Application number
PCT/US2014/068138
Other languages
French (fr)
Inventor
P. Pat BANERJEE
Cristian Luciano
Silvio RIZZI
Xiaorui Zhao
Jia Luo
Original Assignee
Immersive Touch, Inc.
Application filed by Immersive Touch, Inc.
Publication of WO2015084837A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/70Manipulators specially adapted for use in surgery
    • A61B34/76Manipulators having means for providing feel, e.g. force or tactile feedback
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101Computer-aided simulation of surgical operations
    • A61B2034/105Modelling of the patient, e.g. for ligaments or bones

Definitions

  • the present technology relates to methods, devices and systems for haptically- enabled simulation of surgical procedures using a haptic augmented and virtual reality system.
  • Figure 1 illustrates a perspective schematic view of one example of a known open surgery station, which can be used in a haptic augmented and virtual reality system of the present technology.
  • Figure 2 illustrates a block diagram of a known software and hardware architecture for the system of Figure 1.
  • Figure 3 illustrates a second perspective schematic view of the open surgery station of Figure 1.
  • Figure 4 illustrates a perspective schematic view of one example of a microsurgery station, which can be used in a haptic augmented and virtual reality system of the present technology.
  • Figure 5 illustrates a block diagram of an adjustable indicator incorporated into a haptic augmented and virtual reality system of the present technology.
  • Figure 6 illustrates a block diagram of a haptic augmented and virtual reality system of the present technology including an eye movement simulator.
  • Figure 7 illustrates a flow diagram of a method of operation of the eye movement simulator of Figure 6.
  • Figure 8 illustrates one example of a virtual aneurysm clip having a single contact point.
  • Figure 9 illustrates one example of a virtual aneurysm clip having multiple contact points.
  • Figure 10 illustrates one example of point-to-object collision detection.
  • Figure 11 illustrates a flow diagram for one example of a collision evaluation that can be used in performing point-to-object collision detection.
  • Figure 12 illustrates a flow diagram for one example of computing the collision identification data in a collision evaluation that can be used in performing point-to-object collision detection.
  • Figure 13 illustrates a comparison between graphics volume rendering and haptic transfer functions.
  • Figure 14 illustrates one example of multi-point collision detection.
  • Figure 15 illustrates a flow diagram for one example of a collision evaluation that can be used in performing multi-point collision detection.
  • Figure 16 illustrates a flow diagram for a second example of a collision evaluation that can be used in performing multi-point collision detection.
  • Figure 17 illustrates a flow diagram showing a cursor movement locking process.
  • Figure 18 illustrates a flow diagram showing a cursor movement unlocking process.
  • Figure 19 illustrates a plane formed by an ultrasound transducer that can be used in rendering 3D models of patient anatomy.
  • Figure 20 illustrates a spring model that can be used to determine skin deformation.
  • Figure 21 illustrates a surgical procedure simulation where secondary visualization data is provided.
  • the present technology provides devices, systems, and methods that can be used with, and implemented as improvements to, haptic augmented and virtual reality systems.
  • the haptic augmented and virtual reality systems can include at least one surgical simulation station.
  • a surgical simulation station can be an open surgery station that simulates performance of open surgery steps, or a microsurgery station that simulates performance of microsurgery steps. Examples of such systems are described in U.S. Patent Application Serial No. 13/628,841, entitled "Haptic Augmented and Virtual Reality Systems for Simulation of Surgical Procedures," filed on September 27, 2012, and examples of open surgery stations that can be used in such systems are described in U.S. Patent No. 7,812,815, entitled "Compact Haptic and Augmented Virtual Reality System," issued on October 12, 2010, the disclosures of each of which are hereby incorporated by reference herein in their entirety.
  • the term "open surgery station” should be understood to mean that an environment is provided in which the visual display provided to a user includes virtual reality aspects superimposed over real reality, and in which a user can see aspects of real reality in addition to the superimposed virtual reality aspects while performing steps of simulated open surgery.
  • a user can interact with displayed virtual patient anatomy, and a simulated surgical instrument can be displayed in a manner that it appears to be held in the actual hand of a user holding a haptic stylus as a proxy for the instrument.
  • the term "microsurgery station" should be understood to mean that an environment is provided in which the visual display provided to a user consists of virtual aspects, which are computer generated graphics, that the user can see while performing steps of simulated microsurgery.
  • a user can interact with displayed virtual patient anatomy using displayed virtual surgical instruments, and although the user uses a haptic stylus as a proxy for the instrument, the user does not see aspects of real reality.
  • Haptic augmented and virtual reality systems of the present technology can include at least one surgical simulation station.
  • a surgical simulation station is open surgery station 10 that simulates open surgery, one example of which is illustrated in Figures 1 and 3.
  • Another example of a surgical simulation station is microsurgery station 11 that simulates microsurgery, one example of which is illustrated in Figure 4.
  • Open surgery stations 10 and microsurgery stations 11 can each include some of the same types of physical components, and for ease of reference like physical components are labeled with like reference numbers for both stations 10 and 11 in Figures 1-4.
  • a user 12 can sit or stand at a physical desktop workspace 14 defined by a housing 16 that has an opening 18 on one side.
  • the open surgery station 10 can include a multi-sensorial computer interface that includes a stereoscopic vision interface 20, at least one haptic device 22, and a 3D sound system 24. Additionally, a head tracking device 26 and a hand tracking device in the form of at least one haptic robot stylus 27 can provide information regarding the user's interaction with the system as well as the user's visual perspective relating to the open surgery station 10.
  • With reference to the microsurgery station 11 for simulating microsurgical steps, the user (not shown in Figure 4) can sit or stand at a physical desktop workspace 14 defined by a housing 16 that has an opening 18 on one side.
  • the microsurgery station 11 can include a multi-sensorial computer interface that includes a binocular surgical microscopic eyepiece 31, at least one haptic device 22, and a 3D sound system 24. Additionally, a hand tracking device in the form of at least one haptic robot stylus 27 can provide information regarding the user's interaction with the system as well as the user's visual perspective relating to the microsurgery station 11.
  • Surgical procedures that can be simulated using haptic augmented and virtual reality systems of the present technology can include procedures that use a one- handed technique, or that require use of multiple hands.
  • the open surgery station 10 and the microsurgery station 11 can each include at least one, or two, haptic devices 22, which track the user's hand position and orientation and provide force feedback to the user.
  • the methods of aneurysm clipping simulation provided herein can include the simultaneous use of two haptic devices 22.
  • a 3D image of a first surgical tool can be collocated with a first haptic device 22, and an image of a second surgical tool can be collocated with a second haptic device 22.
  • the simulation method can include a user holding a first haptic device 22 in a first hand, such as the right hand, and superimposing an image of a first surgical tool, such as an aneurysm clip holder or an arachnoid knife over the first haptic device 22.
  • the simulation method can also include a user holding a second haptic device 22 in a second hand, such as the left hand, and superimposing an image of a second surgical tool, such as a suction tip over the second haptic device 22.
  • the open surgery station 10 and the microsurgery station 11 can each include a display system that allows the user to acquire depth perception. Each display system can be driven by graphics logic, which can control and update the graphics displayed by the display system.
  • the display system of the open surgery station 10 can use a display screen 28 that can be a single passive stereo monitor, a half-silvered mirror 30 to reflect the image of the display screen 28, and a head tracking system 26 to display a dynamic viewer-centered perspective.
  • the partially transparent mirror 30 can permit the user 12 to see both the virtual reality display and the user's hands, thus providing an augmented reality environment.
  • the user can hold and manipulate the haptic device 22 with its stylus 27 below the mirror 30.
  • the display system of the microsurgery station 11 can display a static perspective by using two display screens 28, which can be non-stereo monitors located side by side, and a binocular surgical eyepiece 31, which can consist of four first-surface mirrors oriented at an angle in such a way that the image of the left monitor is only seen by the left eye, and the image of the right monitor is only seen by the right eye.
  • the orientation and distance between the front-surface mirrors can be adjusted by the user to match his/her interocular distance.
  • a virtual projection plane can be located exactly at the center of the haptic workspace and oriented perpendicular to that line, whereas in the microsurgery station 11 the user can view the virtual projection through the binocular surgical microscopic eyepiece 31.
  • the partially transparent mirror 30 can preferably be sufficiently wide to allow the user to view virtual objects from different viewpoints (displaying the correct viewer-centered perspective) while permitting a comfortable range of movement.
  • the binocular surgical microscopic eyepiece 31 can be adjusted up or down, either manually or by an automatic up-down adjustor, and the interocular distance can also be adjusted for comfortable 3-dimensional viewing.
  • the height of the binocular surgical microscopic eyepiece 31 can be adjusted by moving the eyepiece mounting frame 33 up or down by activating a first foot pedal 34 or a hand switch 35 on housing 16.
  • one or more additional foot pedals 34 can be provided to activate certain simulated surgical instruments, such as a bipolar electrocautery forceps.
  • the computer 32 illustrated in Figure 3 can be operatively connected to any one or more surgical stations in the system, such as the open surgery station 10 and the microsurgery station 11.
  • any one or more surgical station in the system may be operatively connected to a separate computer 32.
  • the open surgery station 10 can be connected to a first computer 32
  • the microsurgery station 11 can be operatively connected to a second computer 32.
  • each separate computer 32 of the system can be linked via a wireless or wired network connection.
  • the one or more computers 32 can be components of a haptic augmented and virtual reality system that includes open surgery station logic that controls and operates the open surgery station 10, and microsurgery station logic that controls and operates the micro-surgery station 11.
  • the haptic augmented and virtual reality system can include a software library that provides, in real time, a high level layer that encapsulates the rendering of a scene graph on either display screen 28, the stereoscopic vision interface 20, the handling of the hand tracking device shown as a haptic robot stylus 27, an interface with a haptic device 22, and playback of 3D spatial audio on a 3D sound system 24.
  • a computer 32 can include haptics rendering logic that drives each haptic device of the open surgery station 10, and graphics logic that drives the display system of the open surgery station 10.
  • the computer 32 connected to the open surgery station 10 can also include open surgery station logic, which can integrate the haptics rendering logic and the graphics logic and provide real-time simulation of open surgery steps of a surgical procedure, including updating the open surgical views in real time in response to user operations performed with a haptic device of the open surgery station 10.
  • the open surgery station logic can also include an instrument library that includes a plurality of virtual surgical instruments that can each be selected by a user and displayed by the display system of the open surgery station 10. Some examples of instruments that can be included in the instrument library for use with the open surgery station 10 are discussed below with respect to the open surgery steps of the aneurysm clipping methodology.
  • a computer 32 can include haptics rendering logic that drives each haptic device of the micro-surgery station 11, and graphics logic that drives the display system of the micro-surgery station 11.
  • the computer 32 connected to the micro-surgery station 11 can also include microsurgery station logic, which can integrate the haptics rendering logic and the graphics logic and provide real-time simulation of microsurgery steps of a surgical procedure, including updating the microsurgical views in real time in response to user operations performed with a haptic device of the micro-surgery station 11.
  • the micro-surgery station logic can also include an instrument library that includes a plurality of virtual surgical instruments that can each be selected by a user and displayed by the display system of the micro-surgery station 11.
  • All logic and software components discussed herein can be implemented as instructions stored on a non-transient computer readable medium, such as a memory operatively connected to computer 32, and can be executed by one or more processors of computer 32.
  • In FIG. 2, one example of a software and hardware architecture for an open surgery station 10 is shown.
  • the architecture includes interconnected devices and logic, which are integrated by a 3D application program interface (API) 39.
  • Both the open surgery station 10 and the microsurgery station 11 include software and hardware for generating image data from scans of actual human anatomy.
  • the volume data pre-processing 40 can receive 2D image data, for example, generated by an input data source 41, which can be a medical scanner.
  • the volume data pre-processing 40 can provide 3D models to the 3D application program interface 39.
  • Examples of medical scanners that can be used as an input data source 41 for characterizing physical objects include a magnetic resonance imaging (MRI) scanner or a CT scanner, such as those typically used for obtaining medical images.
  • the volume data pre-processing 40 segments and combines the 2D images to create a virtual 3D volume of the sample that was scanned, for example a human head.
  • the volume data pre-processing 40 creates detailed 3D structures.
  • the characteristics of the various 3D structures will, with the interface to a haptic device 22, present different feel characteristics in the virtual reality environment, e.g. skin will feel soft and bone hard.
  • Haptics rendering logic 44 can monitor and control each haptic device 22 including each stylus 27.
  • the haptics rendering logic 44 can receive data from each haptic device and determine the position and orientation of each haptic device 22, for example a stylus 27, or a plurality of styluses 27 for different functions or for use by separate hands.
  • the haptics rendering logic 44 computes collision detections between a virtual device corresponding to the haptic device 22 and objects within the virtual 3D environment.
  • the haptics rendering logic 44 can also receive 3D models from the 3D application program interface 39. For example, collisions with a virtual device and imported 3D isosurfaces can be computed, and the haptics rendering logic can direct a haptic device 22 to generate the corresponding force feedback.
  • each isosurface is assigned different haptic materials, according to certain parameters: stiffness, viscosity, static friction and dynamic friction, as well as different physical properties such as density, mass, thickness, damping, bending, etc. Therefore, the user 12 can feel the different surfaces and textures of objects and surfaces in the virtual environment.
  • the user 12 can feel different sensations when touching skin, bone, and internal organs, such as the brain.
  • the graphics and haptics can be on two separate threads, which can be implemented, for example with a dual processor computer.
  • the haptics and graphics have their own update schedule, for example, haptics at 1000 Hz and graphics at about 30 Hz.
  • the system would synchronize two consecutive graphics updates after about every 30 haptic updates, and it is within the skill of artisans to modify the manner in which haptics and graphics update and synchronize.
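As an illustration only, a minimal Python sketch of two such loops is shown below; the shared-state layout, the explicit sleeps, and the loop rates being approximated with sleep intervals are assumptions for illustration, not details taken from the application:

```python
# Minimal sketch (not the patented implementation): haptics and graphics run in
# separate threads with independent update rates, sharing the latest stylus pose.
import threading
import time

state = {"pose": (0.0, 0.0, 0.0), "force": (0.0, 0.0, 0.0)}
lock = threading.Lock()
running = True

def haptic_loop():                      # servo loop, roughly 1000 Hz
    while running:
        with lock:
            pose = state["pose"]        # read stylus pose; collision/force math would go here
            state["force"] = (0.0, 0.0, 0.0)
        time.sleep(0.001)

def graphics_loop():                    # render loop, roughly 30 Hz
    while running:
        with lock:
            pose = state["pose"]        # redraw the virtual scene with the latest pose
        time.sleep(1.0 / 30.0)

threading.Thread(target=haptic_loop, daemon=True).start()
threading.Thread(target=graphics_loop, daemon=True).start()
time.sleep(0.1)                         # let both loops run briefly in this sketch
running = False
```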
  • Hand tracking is very useful because it allows users to use both hands to interact with the virtual scene. While the user can feel tactile sensations with a hand holding a haptic stylus 27, it is also possible to use a tracked hand to move the 3D objects, manipulate lights, or define planes in the same 3D working volume.
  • Graphics rendering logic 46 receives 3D models from the 3D application program interface 39. Also, the graphics rendering logic 46 receives virtual tool(s) information from the haptics rendering logic 44. With the models and other information, the graphics rendering logic 46 software generates and continuously updates, in real time, the stereoscopic 3D display that is displayed by a display screen 28.
  • the API 39 can provide a camera node that computes the correct viewer-centered perspective projection on the virtual projection plane. It can properly render both left and right views according to the position and orientation of the user's head given by the tracking system.
  • Sound rendering 49 can also be used to add auditory simulations to a virtual environment through each 3D sound system 24.
  • One example of sound rendering logic is Open Audio Library (OpenAL), which is a freely-available cross-platform 3D audio API that serves as a software interface to audio hardware. OpenAL can generate arrangements of sound sources around a listener in a virtual 3D environment. It handles sound-source directivity and distance-related attenuation and Doppler effects, as well as special effects such as reflection, obstruction, transmission, and reverberation.
  • the haptics rendering logic 44 can direct a haptic device 22 to generate force feedback corresponding to computed collisions with a virtual device and imported 3D isosurfaces. While the amount of the force feedback can vary within a range correlated to the nature and degree of the computed collision, different users may have different degrees of sensitivity to the force feedback they feel through the haptic device 22. Accordingly, some examples of haptic augmented and virtual reality systems of the present technology can include an adjustable indicator 100 to allow a user to personally tune and adjust the quality of haptic feedback from the simulator by controlling the level of the range of force feedback.
  • an adjustable indicator 100 can be used prior to or during simulation of a surgical procedure to adjust the range of force feedback that the simulator provides to the one or more haptic devices 22 being used during the simulation.
  • the adjustable indicator 100 may allow a user to increase or decrease the range of force feedback that a haptic device 22 provides to a user. Users with a high sensitivity to force feedback may use a lower level setting than users with a lower sensitivity to force feedback.
  • the adjustable indicator may be of any suitable type, such as a slider interface or rotatable knob, and can be placed at any suitable location within a haptic augmented and virtual reality system, such as on a surgical simulation station.
  • Each adjustable indicator 100 can be used to adjust the level of the range of force feedback provided by at least one haptic device 22 of a haptic augmented and virtual reality system. For example, when a surgical simulation station has more than one haptic device 22, a plurality of adjustable indicators 100 can be provided, each adjustable indicator 100 controlling the force feedback level of one of the haptic devices 22. Or, one adjustable indicator 100 can be provided that controls the force feedback level of all of the haptic devices 22 of the surgical simulation station.
  • a user can adjust the adjustable indicator 100 to a desired setting, which corresponds to a desired range level of force feedback.
  • the adjustable indicator 100 sends a signal to the haptics rendering logic 44 indicating the set force feedback range level.
  • the haptics rendering logic 44 receives the set force feedback range level from the adjustable indicator 100, and uses it when calculating the amount of force feedback that should be provided by the haptic device 22 during operation of the system, when a collision is computed between a virtual device and an imported 3D isosurface.
  • the haptics rendering logic 44 then sends a force feedback signal to the haptic device 22 indicating the calculated amount of force feedback, and the haptic device 22 provides the calculated amount of force feedback to the user.
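A minimal sketch of one way such a setting could scale the computed force is shown below; the clamping scheme and the device force limit are assumptions for illustration, not details from the application:

```python
# Minimal sketch (assumed scaling scheme): the adjustable indicator supplies a
# level in [0, 1] that scales the collision force before it is sent to the device.
def scaled_feedback(collision_force, indicator_level, max_force=3.3):
    """collision_force: force computed for the detected collision (newtons).
    indicator_level: user-selected setting from the adjustable indicator, 0..1.
    max_force: hypothetical device limit in newtons."""
    level = min(max(indicator_level, 0.0), 1.0)        # clamp the user setting
    force = collision_force * level                    # shrink or keep the force range
    return max(-max_force, min(max_force, force))      # respect the device limit
```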
  • haptic augmented and virtual reality systems of the present technology can be used for simulating ocular surgery.
  • Such systems include an eye movement simulator 202, as shown in Figure 6, which simulates eye movement during simulation of ocular surgery.
  • the rotation of the patient's eye can be caused by movements of the instrument, or by the patient's voluntary action. Eye rotation may increase the difficulty of the operation, and significant amounts of eye rotation may even damage the eye when an instrument is inside. Therefore, in many ocular surgeries, the physician operates a first instrument to conduct the surgery and a second instrument to help keep the eye steady.
  • a haptic augmented and virtual reality system can include a surgical simulation station 200 that includes an eye movement simulator 202, as well as a first haptic device 204 and a second haptic device 206, which each provide position and orientation information to the haptics rendering logic 212.
  • the first haptic device 204 can be driven by the haptics rendering logic 212, and can track a user's hand movements and provide force feedback to the user associated with a first simulated surgical instrument.
  • the second haptic device 206 can also be driven by the haptics rendering logic 212, and can track a user's hand movements and provide force feedback to the user associated with a second simulated surgical instrument, such as an eye steadying instrument.
  • graphics rendering logic 208 generates and continuously updates, in real time, a virtual 3D scene that is displayed by a display screen 210.
  • the graphics rendering logic 208 receives instrument data, such as position, force, and orientation information, for each haptic device 204, 206 from the haptics rendering logic 212.
  • the graphics rendering logic 208 causes surgical instrument models to be rendered in the virtual 3D scene and move along with the haptic devices 204, 206 based on the position and orientation information for each haptic device 204, 206 received by the graphics rendering logic 208 from the haptics rendering logic 212.
  • a 3D model of the eye, as well as the patient's face, is rendered both graphically and haptically in the scene.
  • the graphics rendering logic 208 continuously updates the virtual 3D scene including the 3D eye model and the 3D models of the first and second surgical instruments based on data received from the haptics rendering logic 212 and the eye movement simulator 202.
  • the haptics rendering logic 212 provides force feedback to the user through the first haptic device 204 and the second haptic device 206 based on the eye movement simulation data received from the eye movement simulator 202.
  • the eye movement simulator 202 can be implemented through eye movement simulation logic that, when executed by a processor of the system, simulates movement of a human eye during an ocular surgery simulation.
  • the eye movement simulation logic can be stored in a non-transient computer readable medium, such as a memory of computer 32, and can be executed by at least one processor of computer 32.
  • the eye movement simulator 202 receives input from the haptics rendering logic 212 and simulates eye movement of the 3D eye model during surgical simulation by providing eye movement simulation data to the haptics rendering logic 212 and the graphics rendering logic 208 of the surgical simulation station 200.
  • operation method 300 of the eye movement simulator 202 is illustrated in Figure 7.
  • operation method 300 of the eye movement simulator 202 starts at step 302, by applying a spherical joint to the eye model. This joint restricts the moving space of the eye model while preserving the freedom of rotation.
  • the eye movement simulator 202 applies swing and twist limitations to the spherical joint, which correspond to the rotating range of the human eye.
  • the eye movement simulator 202 may also apply spring and damper forces to the joint according to the resistance of eye rotation.
  • the eye movement simulator 202 receives the intersection force data based on the first haptic device 204 from the haptics rendering logic 212 at step 308. In the same step, the eye movement simulator 202 also receives steadying force data from the haptics rendering logic 212 based on the second haptic device 206. At intersection, the virtual instrument associated with the first haptic device 204 pierces the 3D model of the eye. At step 310, the eye movement simulator 202 then applies a fulcrum force to the first haptic device 204.
  • the eye movement simulator 202 applies a reaction force to the 3D model of the eye.
  • the reaction force applied to the 3D model of the eye can include a force with the same magnitude and opposite direction as the force of the intersection between the first haptic device 204 and the 3D model of the eye.
  • the reaction force applied to the 3D model of the eye can also include a voluntary movement force, which can be a force added to simulate voluntary movement of the eye by the patient.
  • the eye movement simulator 202 determines a resultant rotation of the eye, based on the fulcrum force applied to the first haptic device 204, the reaction force applied to the 3D eye model, and the steadying force provided by the virtual steadying device of the second haptic device 206.
  • the eye movement simulator 202 loops back to step 310 and applies a new fulcrum force to the first haptic device 204, based on the rotation of the eye determined in step 314. Since the fulcrum force will again affect the rotation of the eye model, a feedback loop is created. Damping can be applied within the eye movement simulator 202 to keep the loop stable.
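As an illustration only, the following Python sketch captures one damped spring-damper rotation update of the kind the feedback loop above describes; the dynamics, units, constants, and the way the swing limit is enforced are assumptions, not details from the application:

```python
# Minimal sketch (assumed dynamics) of one step of the Figure 7 feedback loop:
# instrument torque, steadying torque, joint spring, and damping combine into a
# rotation of the eye model; damping keeps the loop stable.
import numpy as np

def eye_rotation_step(torque_instrument, torque_steady, theta, omega, dt,
                      stiffness=0.5, damping=0.05, inertia=1e-3,
                      max_swing=np.radians(50)):
    """theta: current rotation vector of the eye model (axis * angle, radians).
    omega: current angular velocity. Torques are 3-vectors about the eye center.
    stiffness/damping model the joint's spring and damper; values are illustrative."""
    spring = -stiffness * theta                    # spring pulls the eye back toward rest
    damper = -damping * omega                      # damper keeps the feedback loop stable
    alpha = (torque_instrument + torque_steady + spring + damper) / inertia
    omega = omega + alpha * dt                     # integrate angular velocity
    theta = theta + omega * dt                     # integrate rotation
    angle = np.linalg.norm(theta)
    if angle > max_swing:                          # enforce the swing limit of the spherical joint
        theta = theta / angle * max_swing
        omega = np.zeros(3)
    return theta, omega
```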
  • the performance of the user can be evaluated by the variance of the eye orientation, the mean and variance of forces applied to the 3D eye model, or the fluency of completing a simple task.
  • the performance can be recorded for comparison or future review.
  • a haptic augmented and virtual reality system of the present technology can include surgical station logic that controls and operates each surgical station of the system.
  • the surgical station logic can integrate the haptics rendering logic and the graphics logic of the surgical station and provide real-time simulation of a surgical procedure, including updating the surgical views in real time in response to user operations performed with one or more haptic devices of the surgical station.
  • At least some haptic augmented and virtual reality systems of the present technology include a surgical simulation station having at least one haptic device 22 driven by haptics rendering logic 44, wherein the haptic device 22 tracks a user's hand movements and provides force feedback to the user associated with a first simulated surgical instrument having one or more contact points.
  • the graphics rendering logic 46 generates a virtual 3D scene that is provided to the user on a display, such as display screen 28, and the virtual 3D scene includes models of at least one aspect of patient anatomy and the first simulated surgical instrument.
  • the haptics rendering logic 44 can receive data from the haptic device 22 and determine the position and orientation of the simulated surgical instrument.
  • the haptics rendering logic 44 can also receive data from the graphics rendering logic 46 and determine the location of the at least one aspect of patient anatomy.
  • the haptics rendering logic 44 can use the data from the haptic device 22 and the graphics rendering logic 46 to perform collision detection to determine when there is a collision between any contact point of the simulated surgical instrument and any aspect of patient anatomy in the virtual 3D scene.
  • the haptics rendering logic 44 determines whether a collision has occurred for each haptic frame, using the previous haptic frame as the first haptic frame and the current haptic frame as the second haptic frame. In performing the evaluation of whether a collision has occurred, there are two considerations that are desirably taken into account. First, the evaluation is preferably efficient enough to not affect the performance of the haptics rendering servo loop, which may sustain a haptic frame rate of 1 kHz or more. Second, the evaluation is preferably robust enough to avoid undetected collisions.
  • One consideration is the step size with which successive discrete points along any line segment defining the movement of a contact point, such as 504 in Figure 10 and 910, 912, and 914 in Figure 14, are evaluated. If the step size is too big, a collision could be overlooked, especially for thin structures. On the other hand, a finely grained step size could help guarantee collisions are always detected, but it could also severely impact the haptic frame rate, since the algorithm is executed in the servo loop thread.
  • the following linear interpolation equation can be used, which parametrizes the line segment from start point 502a to end point 502b with parameter i, where i is in the interval [0,1], then gives any point P in the line segment as a function of i:
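The interpolation formula itself does not survive in this extract. A standard linear interpolation consistent with the surrounding description (an assumption, since the original equation is not reproduced) would be:

$$P(i) = P_{502a} + i\,(P_{502b} - P_{502a}), \qquad i \in [0, 1]$$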
  • the evaluation performed by the haptics rendering logic can traverse the line segment by varying i from 0 to 1, incrementing it by a value of delta in each successive iteration. Computations of P can be done in continuous space and further converted to discrete voxel coordinates for retrieving voxel values. No sub-voxel resolution is needed.
  • delta can be variable. For example, a variable step size as follows can be used:
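The step-size formula is likewise not reproduced in this extract. A variable delta consistent with the next two paragraphs, which keep the spatial distance between successive evaluated points equal to a constant K, would be:

$$\delta = \frac{K}{\lVert P_{502b} - P_{502a} \rVert}$$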
  • K is a constant that can depend on the voxel size, and can be equal to or less than the minimum dimension of the voxels in order to prevent undetected collisions.
  • Using a variable delta results in the line segment being divided into a higher number of steps when the haptic stylus is being moved at higher speeds. However, every time the algorithm is executed, the actual distance between successive points P to be evaluated in a given line segment is constant, regardless of the velocity of the haptic stylus.
  • Collision determination can be made using either point-to-object collision detection or multi-point collision detection.
  • Haptics rendering logic including either point-to-object collision detection or multi-point collision detection can be implemented by instructions stored on a non-transient computer readable medium, such as a memory of computer 32, and can be executed by at least one processor of the system, such as a processor of computer 32.
  • Point-to-object collision detection can be used to determine the interaction between a virtual surgical instrument and a virtual 3D model of patient anatomy.
  • Figure 8 illustrates one example of a virtual aneurysm clip 400, having a single contact point 400a for use in point-to-object collision detection.
  • the contact point 400a of aneurysm clip 400 would be evaluated for collisions with isosurfaces representing virtual anatomies.
  • the haptics rendering logic collects two sets of parameters within two consecutive haptic frames, and uses those collected parameters to determine whether a collision has occurred between a virtual tool 500, such as aneurysm clip 400, and an aspect of simulated patient anatomy 600.
  • the surface of the aspect of simulated patient anatomy 600 can be represented by voxel intensities.
  • the virtual tool 500 is located outside of the aspect of simulated patient anatomy 600, and the contact point of the virtual tool 500 is at start point 502a. Between the first haptic frame and a second haptic frame, the virtual tool 500 moves along path 504.
  • the contact point of the virtual tool 500 is at end point 502b.
  • the haptics rendering logic estimates the intersection point 502' between the surface of the simulated patient anatomy 600 and the line segment defined by path 504, from the start point 502a to the end point 502b.
  • When the haptics rendering logic determines that a collision occurred, it provides the 3D coordinates of the intersection point P, the normal vector N of the surface at the intersection point, and the touched side (front or back) of the shape surface.
  • the haptics rendering logic will also determine the forces associated with the collision.
  • FIGs 11 and 12 show flow diagrams of a collision evaluation 700 that can be performed by the haptics rendering logic of a haptic augmented and virtual reality system.
  • the process starts at step 702, where it evaluates whether the 3D coordinates of start point 502a are equal to the 3D coordinates of the end point 502b. If the haptic stylus has not moved in two successive haptic frames and there was no collision in the previous frame, then the start point and end point are equal, and the function returns a result of "false" at step 704.
  • At step 706, the logic can perform a rough bounding-box comparison between the line and the volume, returning a result of "false" at step 704 if they are disjoint. If the result of step 706 indicates that the line 504 is inside the volume bounding box, for each point P on the line segment from the start point 502a to the end point 502b, the collision evaluation 700 performs a collision determination loop 708 including steps 710, 712, and 714. At step 710, the collision evaluation 700 selects a point P. At step 712, the collision evaluation 700 identifies the voxel closest to the point P.
  • the collision evaluation 700 checks the intensity of the closest voxel V by a set of window transfer functions defining the multiple shapes. If the voxel intensity at step 714 is outside the windows specified through the transfer functions, then the haptic stylus has not yet collided with any simulated aspect of the patient anatomy 600, and the evaluation 700 continues by performing the collision determination loop 708 with the next point P along the line segment 504.
  • If no collision is detected for any point P along the line segment, the collision evaluation 700 ends the loop 708 at step 720 and proceeds to step 704 and returns a result of "false." However, when the closest voxel V in step 714 lies within any of the transfer function windows, the collision evaluation 700 returns the 3D coordinates of the point P as the surface contact point and proceeds to step 716. At step 716, the collision evaluation 700 computes the collision identification data as illustrated in Figure 12, and then proceeds to step 718 where the collision evaluation 700 returns a result of "true."
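As an illustration only, the following Python sketch mirrors the flow of collision evaluation 700 (steps 702 through 718); the array layout, the spacing/origin mapping from continuous space to voxels, the omission of the bounding-box test, and the constant K are assumptions, not details taken from the application:

```python
# Minimal sketch of point-to-object collision detection: walk the segment from
# start to end with a variable step and test each point against the haptic
# transfer-function windows.
import numpy as np

def collision_evaluation(start, end, volume, spacing, origin, windows, K=1.0):
    """start, end: 3D contact-point positions in two consecutive haptic frames.
    volume: 3D numpy array of voxel intensities; spacing/origin map space to voxels.
    windows: list of (lo, hi) intensity ranges, one per touchable haptic shape.
    Returns (True, point P, shape index) on collision, else (False, None, None)."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    length = np.linalg.norm(end - start)
    if length == 0.0:                                   # stylus did not move (step 702)
        return False, None, None
    delta = K / length                                  # variable step: constant spatial spacing K
    i = 0.0
    while i <= 1.0:
        P = start + i * (end - start)                   # linear interpolation along the path
        voxel = np.round((P - origin) / spacing).astype(int)   # nearest voxel (step 712)
        if np.all(voxel >= 0) and np.all(voxel < volume.shape):
            intensity = volume[tuple(voxel)]
            for shape, (lo, hi) in enumerate(windows):  # transfer-function window test (step 714)
                if lo <= intensity <= hi:
                    return True, P, shape               # surface contact point found
        i += delta
    return False, None, None                            # no collision along the segment
```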
  • the collision evaluation 700 can compute the collision identification data by starting at step 800 and proceeding to step 802 by providing the 3D coordinates of the point P at which the collision occurred.
  • the collision evaluation 700 can use the density of the closest voxel V, as determined in step 712, to determine which volumetric isosurface acting as a simulated aspect of the patient anatomy 600 has been touched by the haptic stylus by comparing it with the ranges defined by their transfer functions.
  • the collision evaluation 700 can determine the normal vector N, which is perpendicular to the volumetric isosurface at the contact point P, as shown in Figure 10, by computing the gradient of the neighbor voxels using the central differences method.
  • the collision evaluation 700 can define the tangential plane T, as shown in Figure 10, based on the contact point P and the normal vector N.
  • the tangential plane T can serve as an intermediate representation of the volumetric isosurface acting as a simulated aspect of the patient anatomy 600. Proceeding to step 810, the tangential plane T can be used to determine if the haptic stylus is touching either the front or back side of the volumetric isosurface.
  • the collision evaluation 700 determines whether the start point 502a is in front of the tangential plane T and the end point 502b is behind the tangential plane T.
  • the collision evaluation 700 continues to step 812, and determines that the collision occurred at the front face of the volumetric isosurface acting as a simulated aspect of the patient anatomy 600.
  • the collision evaluation 700 determines the direction of the normal vector N. If the start point 502a is not in front of the tangential plane T and the end point 502b is not behind the tangential plane T, the collision evaluation 700 continues to step 816, and determines that the collision occurred at the back face of the volumetric isosurface acting as a simulated aspect of the patient anatomy 600.
  • the collision evaluation 700 determines the direction of the normal vector N, which is the opposite of the direction that would be determined at step 814. From either step 814 or step 818, the collision evaluation 700 ends computation of the collision identification data at step 820.
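A companion sketch of the collision identification data of Figure 12 follows; the sign conventions for the gradient, the front/back test, and the interior-voxel assumption are illustrative choices consistent with the description above, not details from the application:

```python
# Minimal sketch of the collision identification data: surface normal from
# central differences and a front/back test against the tangential plane at P.
import numpy as np

def collision_identification(P, start, end, volume, spacing, origin):
    """P: contact point returned by the collision evaluation.
    start, end: stylus positions in the previous and current haptic frames."""
    v = np.round((np.asarray(P) - origin) / spacing).astype(int)
    x, y, z = v
    grad = np.array([                                   # central differences gradient
        volume[x + 1, y, z] - volume[x - 1, y, z],
        volume[x, y + 1, z] - volume[x, y - 1, z],
        volume[x, y, z + 1] - volume[x, y, z - 1]], float) / (2.0 * np.asarray(spacing))
    N = grad / (np.linalg.norm(grad) + 1e-12)           # normal to the isosurface at P
    d_start = np.dot(np.asarray(start) - P, N)          # signed distance to tangential plane T
    d_end = np.dot(np.asarray(end) - P, N)
    if d_start > 0 and d_end < 0:                       # crossed from front to back
        side = "front"
    else:
        side = "back"
        N = -N                                          # flip the normal for a back-face contact
    return P, N, side
```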
  • the haptics rendering logic of a haptic augmented and virtual reality system can simultaneously detect multiple simulated aspects of the patient anatomy 600, where each volumetric isosurface acting as a simulated aspect of the patient anatomy 600 can be defined by its individual ranges of voxel intensities.
  • a transfer function can be defined for each haptic shape whereby a binary output value is assigned to every possible voxel intensity, and the set of transfer functions can be used at step 714 of Figure 11.
  • FIG. 13 illustrates a comparison between graphics volume rendering and haptic transfer functions.
  • the first transfer function 840 exemplifies opacity as a function of voxel intensities, as can be used for graphics volume rendering. Gradually increasing or decreasing values of opacity, represented by ramps, are allowed and commonly used.
  • For a haptics transfer function, as shown in the second transfer function 860, only discrete binary outputs are permitted.
  • voxels with intensities within the rectangular window defined by the transfer function will be regarded as belonging to the shape and will, therefore, be touchable.
  • When the collision detection algorithm finds a voxel whose intensity is within the rectangular window, it will return TRUE, indicating a collision with the shape was detected.
  • a single transfer function may simultaneously specify graphics and haptics properties for each shape.
  • In Figure 13 it is shown how a haptic transfer function can be obtained from its graphics counterpart.
  • a second advantage is that preprocessing steps such as segmentation and construction of polygonal meshes for each shape are no longer needed.
  • the haptic transfer functions can resemble an operation of binary thresholding, by which different subsets may be determined from the original dataset according to their voxel intensities. Therefore, the specification of transfer functions can provide all the information needed to generate graphics and haptics visualization, operating only with the original (unmodified) 3D dataset.
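As a hypothetical illustration of obtaining a rectangular haptic window from a graphics opacity transfer function by binary thresholding (the threshold value and the sampled-array representation are assumptions):

```python
# Minimal sketch: intensities whose graphics opacity exceeds a threshold become
# the "touchable" haptic window for that shape, as in Figure 13.
import numpy as np

def haptic_window_from_opacity(intensities, opacities, threshold=0.05):
    """intensities: sorted 1D array of voxel intensity values.
    opacities: graphics opacity (0..1) assigned to each intensity.
    Returns the (lo, hi) intensity window used by the haptic transfer function."""
    touchable = np.asarray(opacities) > threshold        # binary thresholding of the opacity ramp
    if not touchable.any():
        return None                                      # nothing in this shape is touchable
    idx = np.where(touchable)[0]
    return intensities[idx[0]], intensities[idx[-1]]     # rectangular haptic window
```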
  • Multi-point collision detection can be used to provide haptic feedback to a user with respect to interactions between multiple points of the simulated surgical instrument and patient anatomy.
  • the simulated surgical instrument has a plurality of contact points.
  • Figure 9 illustrates the virtual aneurysm clip 400 of Figure 8, modified to have multiple contact points 400a, 400b, 400c, 400d, 400e, 400f, 400g and 400h.
  • each contact point 400a, 400b, 400c, 400d, 400e, 400f, 400g and 400h of the aneurysm clip 400 can be evaluated for collisions with isosurfaces representing aspect of patient anatomy.
  • the haptics rendering logic 44 can perform collision detection to determine when there is a collision between any one or more of the plurality of contact points of the simulated surgical instrument and at least one aspect of patient anatomy.
  • the haptics rendering logic can collect two sets of parameters within two consecutive haptic frames, and use those collected parameters to determine whether a collision has occurred between a simulated surgical instrument 900, such as aneurysm clip 400, and an aspect of simulated patient anatomy 902.
  • the surface of the aspect of simulated patient anatomy 902 can be represented by voxel intensities.
  • the simulated surgical instrument 900 is located outside of the aspect of simulated patient anatomy 902, and each of the plurality of contact points of the simulated surgical instrument 900 is at a start point 904a, 906a, and 908a.
  • a first contact point is located at the tip 916 of the simulated surgical instrument 900 and has start point 904a
  • a final contact point is located at the tail end 918 of the simulated surgical instrument 900 and has start point 908a.
  • the virtual tool 900 moves in accordance with the hand movement of a user, which is tracked by the haptic device 22, such as a haptic stylus 27 as shown in Figures 1 and 3.
  • the contact point that started at 904a is at end point 904b
  • the contact point that started at 906a is at end point 906b
  • the contact point that started at 908a is at end point 908b.
  • the haptics rendering logic can perform a collision evaluation for each line segment 910, 912, and 914 representing the travel path of each contact point from a first haptic frame to a second haptic frame.
  • When the haptics rendering logic determines that a collision occurred with respect to at least one of the contact points, it can provide the 3D coordinates of the first intersection point P1, the normal vector N of the surface at the intersection point P1, and the touched side (front or back) of the shape surface.
  • the haptics rendering logic can also determine the forces associated with the collision.
  • the graphics rendering logic can update the virtual 3D scene to display the simulated surgical instrument 900 in accordance with the positions of each of the contact points that correspond to the occurrence of the collision. For example, as shown in Figure 14, intersection point P1 is the first contact point of the simulated surgical instrument 900 that would collide with the aspect of simulated patient anatomy 902.
  • the haptics rendering logic 44 can determine that intermediate point 904' corresponds to the location of the contact point of the simulated surgical instrument 900 that started at start point 904a.
  • Graphics rendering logic 46 can update the virtual 3D scene based on the intersection point P1, and/or optionally on any intermediate points, such as intermediate point 904', provided by the haptics rendering logic 44. As a result, the display of the virtual 3D scene generated by the graphics rendering logic 46 will depict the simulated surgical instrument 900 at the location of the collision.
  • FIG. 15 illustrates one flow diagram of a collision evaluation 1000 that can be performed by the haptics rendering logic 44 of a haptic augmented and virtual reality system.
  • the process starts at step 1002, and proceeds to step 1004, where it evaluates whether the 3D coordinates of each start point 904a, 906a, and 908a of a contact point of the simulated surgical instrument 900 are equal to the 3D coordinates of each end point 904b, 906b, and 908b of a contact point of the simulated surgical instrument 900. If the haptic device 22 has not moved in two successive haptic frames and there was no collision in the previous frame, then the start point and end point are equal, and the function returns a result of "false" at step 1006.
  • At step 1008, the haptics rendering logic 44 can perform a rough bounding-box comparison between the line and the volume, returning a result of "false" at step 1006 if they are disjoint. If the result of step 1008 indicates that any of the line segments 910, 912, 914, representing the travel paths of the contact points, is inside the volume bounding box, the collision evaluation 1000 performs a collision determination loop 1010 for each point P on each line segment from the corresponding start point to the corresponding end point.
  • the collision evaluation 1000 selects each point P along each line segment and determines whether a collision occurred. When no collision occurred on a given line segment, the process returns to step 1012 and performs steps 1012, 1014, and 1016 for each of the points on the next line segment. If no collision occurred for any point on any of the line segments, the process ends loop 1010 at step 1018 and returns a result of "false" at step 1006.
  • When a collision is detected, the collision evaluation 1000 can proceed to step 1022, where it can compute the collision identification data for the intersection point P1 in accordance with the process of Figure 12, including the direction and magnitude of the normal vector N, as well as determining intermediate point P'. The collision evaluation 1000 then proceeds to step 1024 and returns a result of "true."
  • the order in which the collision evaluation 1000 can perform the collision determination loop 1010 for each point along each line segment 910, 912, and 914, can start with the first contact point at the tip 916 of the simulated surgical instrument 900, such as the contact point having the start point 904a and the end point 904b, and conclude with the final contact point at the tail end 918 of the simulated surgical instrument 900, such as the contact point having the start point 908a and the end point 908b.
  • the collision evaluation 1000 may stop once a first intersection point P1 is determined.
  • such an implementation is most accurate when the simulated surgical instrument 900 is only moved side to side and forward (tip always preceding tail).
  • When the simulated surgical instrument 900 is moved backwards (tail preceding tip), or tangentially to the isosurface representing an aspect of patient anatomy 902, as may be the case when simulating procedures such as navigation of the contour of spinal pedicles, the order in which the contact points composing the virtual instrument are evaluated for collisions may play an important role in preventing fall-through effects.
  • Figure 16 illustrates a flow diagram of a collision evaluation 1100 that can be performed by the haptics rendering logic 44 in which the contact points are evaluated in both directions, in order from a tip of the simulated surgical instrument to a tail end of the simulated surgical instrument, and in order from the tail end of the simulated surgical instrument to the tip of the simulated surgical instrument.
  • Use of collision evaluation 1100 can identify the true first intersection point P1, or multiple simultaneous intersection points P1.
  • The collision evaluation 1100 first evaluates whether the 3D coordinates of each start point 904a, 906a, and 908a of a contact point of the simulated surgical instrument 900 are equal to the 3D coordinates of each end point 904b, 906b, and 908b of a contact point of the simulated surgical instrument 900. If the haptic device 22 has not moved in two successive haptic frames and there was no collision in the previous frame, then the start point and end point are equal, and the function returns a result of "false" at step 1106. If the haptic device 22 has moved, and any start point is not equal to the corresponding end point, then the process continues to step 1108, where the haptics rendering logic 44 can perform a rough bounding-box comparison between the line and the volume, returning a result of "false" at step 1106 if they are disjoint.
  • the collision evaluation 1100 performs a collision determination loop 1110 for each point P on each line segment from the corresponding start point to the corresponding end point.
  • the collision determination loop 1110 determines any intersection points by performing the evaluation both in order from a tip 916 of the simulated surgical instrument 900 to a tail end 918 of the simulated surgical instrument 900, and in order from the tail end 918 of the simulated surgical instrument 900 to the tip 916 of the simulated surgical instrument 900.
  • the collision evaluation 1100 selects each point P along each line segment, from the tip 916 of the simulated surgical instrument 900 to the tail end 918 of the simulated surgical instrument 900, until a first collision point P1 is determined on an intersecting line segment K.
  • the collision evaluation 1100 proceeds to step 1118 and then back to step 1112.
  • the collision evaluation 1100 selects each point P along each line segment in the reverse order, from the tail end 918 of the simulated surgical instrument 900 to the tip 916 of the simulated surgical instrument 900, until a second collision point P1 is determined.
  • the collision evaluation 1100 can proceed to step 1128, where it can compute the collision identification data for each intersection point P1 in accordance with the process of Figure 12, including the direction and magnitude of the normal vector N, as well as determining intermediate point P'. If there are multiple intersection points P1, the direction and magnitude of the normal vector N can be an average of the normal vectors for each intersection point P1. The collision evaluation 1100 then proceeds to step 1130 and returns a result of "true."
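The bidirectional traversal can be sketched as follows; the data structure for the contact-point segments and the single-segment test passed in as point_collides are assumptions for illustration, not details from the application:

```python
# Minimal sketch of collision evaluation 1100: the contact-point segments are
# checked tip-to-tail and then tail-to-tip, and both first hits are reported so
# that fall-through can be detected.
def multipoint_collision(segments, point_collides):
    """segments: list of (start, end) pairs ordered from the instrument tip to its tail,
    one per contact point (e.g. the paths 910, 912, 914 in Figure 14).
    point_collides(start, end): single-segment test, such as the sketch after Figure 11,
    returning an intersection point or None."""
    hits = {}
    for order in (range(len(segments)), reversed(range(len(segments)))):
        for k in order:                        # tip-to-tail pass, then tail-to-tip pass
            p = point_collides(*segments[k])
            if p is not None:
                hits[k] = p                    # first intersecting segment in this direction
                break
    return hits                                # zero, one, or two intersection points P1
```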
  • Multi-point collision detection can be used in simulating any surgical procedure.
  • the virtual Jamshidi needle can be represented by multiple contact points, and multi-point collision detection can be used to assist a user in proper placement of the needle during insertion.
  • Placement of a subclavian central line, where the needle is advanced under and along the inferior border of the clavicle, is another procedure that can benefit from multi-point collision detection.
  • the needle can be represented by multiple contact points, and multi-point collision detection can be used to indicate failure if a collision is detected between the simulated clavicle and any contact point prior to the needle being inserted in the subclavian vein.
  • Locking the cursor movement effectively locks the visual representation of the simulated surgical instrument rendered by the graphics rendering logic 46 in the virtual 3D scene with respect to the simulated aspect of patient anatomy 902 with which the simulated surgical instrument 900 has collided, thus preventing the potential fall-through.
  • Traversing the points in the virtual instrument from both ends, using collision evaluation 1100 as shown in Figure 16, may result in identification of at least two different intersection points P1, where the simulated surgical instrument 900 collides with a simulated aspect of patient anatomy 902 at the same time. If these points are detected to be at a distance larger than a predefined constant D (in millimeters) along the length of the simulated surgical instrument 900, it signals that the undesired effect of fall-through is starting to occur.
  • In that case, the cursor locking mechanism can be implemented. Accordingly, the cursor movement can be locked when the absolute value of the coordinates of the first intersection point P1 minus the second intersection point P1 is greater than a predefined constant D (in millimeters) along the length of the simulated surgical instrument 900.
  • When the conditions to lock the cursor are satisfied, it is also necessary to establish the geometric conditions under which the cursor will be unlocked.
  • a normal vector must exist at intersection point P1.
  • the current dot product between the normal vector N at intersection point P1 and the contact plane is determined to provide the orientation of the simulated surgical instrument 900, and can be stored in a memory of the system and used to check if the cursor can be unlocked.
  • FIG 17 is a flow chart showing a cursor movement locking process 1200 that can be implemented by the surgical station logic, such as the graphics rendering logic 46 or haptics rendering logic 44.
  • Cursor movement locking process 1200 starts at step 1202 and proceeds to step 1204, where it determines whether the cursor movement is locked. If the cursor movement is already locked, the cursor movement locking process 1200 ends at step 1206. If the cursor movement is not already locked, the process proceeds to step 1208, and determines whether the conditions for locking cursor movement are met, such as whether the absolute value of the coordinates of the first intersection point P1 minus the second intersection point P1 is greater than a predefined constant D (in millimeters) along the length of the simulated surgical instrument 900.
• If the conditions are not met, the cursor movement locking process 1200 ends at step 1206. If the conditions are met, the process proceeds to step 1210, where it determines whether a normal vector exists, such as the normal vector associated with intersection point PI as determined by collision evaluation 1100. If a normal vector does not exist, the cursor movement locking process 1200 ends at step 1206. If a normal vector does exist, the process proceeds to step 1212, where the contact plane, such as tangential plane T in Figure 14, is determined. The process then proceeds to step 1214, where the orientation of the simulated surgical instrument 900 is determined based on the dot product between the normal vector N at intersection point PI and the contact plane. The process then proceeds to step 1216, where a result of "true" is provided and the cursor movement is locked.
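To make the geometric test concrete, the following is a minimal C++ sketch of the locking check of Figure 17, assuming a simple vector type and hypothetical names (CursorState, tryLockCursor, lockDistanceD); it illustrates the described conditions under stated assumptions and is not the actual implementation of process 1200. The contact plane is represented here by a single in-plane reference direction.

    #include <cmath>
    #include <optional>

    struct Vec3 { double x, y, z; };   // minimal vector type for this sketch

    static double distance(const Vec3& a, const Vec3& b) {
        return std::sqrt((a.x - b.x) * (a.x - b.x) +
                         (a.y - b.y) * (a.y - b.y) +
                         (a.z - b.z) * (a.z - b.z));
    }

    static double dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    struct CursorState {
        bool   locked = false;
        double lockedDot = 0.0;   // orientation value stored at the time of locking
    };

    // Lock the cursor when two intersection points lie farther apart than D
    // millimeters along the instrument and a surface normal exists
    // (steps 1204-1216 of Figure 17).
    bool tryLockCursor(CursorState& state,
                       const Vec3& intersectionA, const Vec3& intersectionB,
                       const std::optional<Vec3>& surfaceNormal,
                       const Vec3& contactPlaneAxis,
                       double lockDistanceD) {
        if (state.locked) return false;                                // step 1204
        if (distance(intersectionA, intersectionB) <= lockDistanceD)   // step 1208
            return false;
        if (!surfaceNormal) return false;                              // step 1210
        state.lockedDot = dot(*surfaceNormal, contactPlaneAxis);       // steps 1212-1214
        state.locked = true;                                           // step 1216
        return true;
    }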
• the conditions to unlock the cursor may be checked when there is an updated orientation of the simulated surgical instrument. If the dot product based on the normal vector of the current orientation of the simulated surgical instrument is larger than the dot product based on the normal vector of the simulated surgical instrument at the time that the cursor movement was locked, the simulated surgical instrument has been rotated in the direction opposite to the one that caused the locking, and therefore the cursor can be unlocked.
• Figure 18 is a flow chart showing a cursor movement unlocking process 1300 that can be implemented to determine when cursor movement unlocking conditions are met and cursor movement can be unlocked.
• the process starts at step 1302 and proceeds to step 1304, where it determines whether the cursor movement is locked. If the cursor movement is not already locked, the cursor movement unlocking process 1300 ends at step 1306. If the cursor movement is already locked, the process proceeds to step 1308, and determines the orientation of the simulated surgical instrument 900 based on the original dot product between the normal vector N at intersection point PI and the contact plane at the time the cursor movement was locked. The process then proceeds to step 1310, where the current dot product is determined based on any movement of the simulated surgical instrument 900.
  • the cursor movement unlocking process 1300 determines whether the current dot product is larger than the original dot product. If the current dot product is not larger than the original dot product, the process ends at step 1306. If the current dot product is larger than the original dot product, at step 1314, the cursor movement unlocking process 1300 returns a result of "false" and the cursor movement is unlocked.
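A companion sketch for the unlocking check of Figure 18 is given below, reusing the CursorState, Vec3, and dot helpers from the sketch above; the comparison of the current dot product against the stored one follows the description, while all names remain hypothetical.

    // Unlock the cursor when the instrument has rotated back past the
    // orientation recorded at locking time (steps 1304-1314 of Figure 18).
    bool tryUnlockCursor(CursorState& state,
                         const Vec3& currentNormal,
                         const Vec3& contactPlaneAxis) {
        if (!state.locked) return false;                          // step 1304
        double currentDot = dot(currentNormal, contactPlaneAxis); // step 1310
        if (currentDot <= state.lockedDot) return false;          // step 1312
        state.locked = false;                                     // step 1314
        return true;
    }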
• a surgeon may be able to both view the patient and have secondary visual data.
  • a surgeon may be able to see the patient, and may also obtain secondary visual guidance from a screen providing ultrasound or fluoroscopic data.
• the display of a surgical station in a haptic augmented and virtual reality system may include a virtual 3D scene that has at least two viewing areas, where a first viewing area depicts a primary visual model, such as a virtual stereoscopic 3D model of a portion of the patient, and a second viewing area depicts a secondary visual model.
  • Figure 21 illustrates a display 1600, which can be for example, a display screen 28 (as shown in Figures 1-4).
  • Display 1600 has a first viewing area 1602 depicting a virtual stereoscopic 3D model of a portion of the patient 1612, specifically the head, neck, and upper torso.
  • the second viewing area 1604 of the display 1600 depicts virtual secondary visual model 1614.
  • the secondary visual model 1614 as illustrated is virtual ultrasound imaging.
  • the secondary visual data 1614 may be virtual fluoroscopy.
  • the graphics rendering logic 46 can continuously update the virtual 3D scene, including the primary visual model and the secondary visual model based on data received from the haptics rendering logic 44. When one or more simulated surgical devices are also present in the virtual 3D scene, the graphics rendering logic 46 can also continuously update the virtual 3D scene to include the placement and movement of each simulated surgical device in both the primary visual model and the secondary visual model.
• One example of a surgical procedure simulation where secondary visualization data can be useful is central venous catheterization, as shown in Figure 21.
  • the secondary visual data 1614 is preferably virtual ultrasound imaging. If the simulated surgical procedure is percutaneous needle insertion or subclavian central line placement, virtual fluoroscopy might be preferred for use as the secondary visual data 1614.
  • a haptic augmented and virtual reality system including at least one surgical station can be used to simulate a central venous catheterization (CVC) procedure, also known as central line placement.
  • This percutaneous procedure consists of inserting a needle 1610 that will guide a catheter into a large vein in the neck, chest, or groin of a patient for administering medication or fluids, or obtaining blood tests or cardiovascular measurements.
  • ultrasound imagery 1614 can be used to provide visual guidance for the surgeon to recognize the internal anatomical structure of the neck, as well as the position and orientation of the needle being inserted.
  • the at least one surgical station of the haptic augmented and virtual reality system can have at least two haptic devices 22.
  • a first haptic device 22 can be driven by the haptics rendering logic 44, and can track a user's hand movements and provide force feedback to the user associated with a first simulated surgical instrument, such as an ultrasound transducer 1608 (shown in Figure 21).
  • a second haptic device 22 can also be driven by the haptics rendering logic 44, and can track a user's hand movements and provide force feedback to the user associated with a second simulated surgical instrument, such as needle 1610 (shown in Figure 21).
  • the haptics rendering logic 44 can use collision detection, such as multi-point collision detection described above with reference to Figure 14, to determine any collisions between either the ultrasound transducer 1608 or the needle 1610 and any of the volumetric isosurfaces representing aspects of the patient anatomy, such as the multiple layers of soft and hard tissues. Force feedback can be provided to the user through either the first or second haptic device, as appropriate, based on the collision detection performed by the haptics rendering logic 44.
  • Volume data pre-processing 40 can receive 2D image data, for example, generated by an input data source 41, which can be a CT scanner, and can generate 3D models that can be used by the graphics rendering logic 46 to generate the virtual 3D scene.
  • the original volume can be obtained from a dataset of CT DICOM images. This volume can be used to render a natural echoic image of anatomical structure.
  • Imaging can depend on the capabilities of the CT scanner and the protocol to follow for the particular surgical procedure.
  • CT imaging can be recorded in 16-bit imaging and converted to 8-bit to reduce GPU (Graphical Processing Unit) memory and computational requirements.
  • Gray-scale values from 0 to 255 can be used to correspond to the density of the tissue.
  • the distribution of values in the gray-scale can be used to provide a realistic appearance of the ultrasound image rendering.
  • the ultrasound can be displayed using a customized transfer function that properly displays the pixel information from the graph above.
  • Each pixel can be correlated to a specific tissue density. For example, the density of skin can be found around value 45, compared to bone around value 200.
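As a rough illustration of such a correlation, the sketch below maps an 8-bit gray-scale value to a tissue label; only the approximate centers (skin near value 45, bone near value 200) come from the description, and the bracketing thresholds are assumptions made for illustration.

    #include <cstdint>

    enum class Tissue { Air, Skin, SoftTissue, Bone };

    // Map an 8-bit CT-derived gray value to a tissue class. The thresholds are
    // illustrative placeholders around the stated densities.
    Tissue classifyVoxel(std::uint8_t gray) {
        if (gray < 20)  return Tissue::Air;
        if (gray < 70)  return Tissue::Skin;        // skin density near value 45
        if (gray < 170) return Tissue::SoftTissue;
        return Tissue::Bone;                        // bone density near value 200
    }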
  • a CT with contrast can be used.
  • the contrast dye can affect the opacity of the vessels in the CT, making them look the same as bone.
  • manual segmentation can be performed to separate specific anatomy: bone, vein, artery, etc.
  • a customized mask volume can also be retrieved, in which the distinct individual anatomical structures are assigned labels during the segmentation process. This information can be used to determine what anatomy is interacting with the transducer 1608 and if it will deform when applying pressure with the virtual transducer 1608.
  • the mask volume histogram can have a variant contour form compared to the original volume, as the voxels of the same type can be aggregated to specific intensity with opacity coefficients.
• An open source toolkit named VolView, which specializes in visualizing and analyzing 3D medical and scientific data, can be used to fulfill the volume retrieving objective.
  • the two volumes can be imported into the virtual scene and precisely collocated with the previously mentioned CUDA volume for the further image rendering.
• the piezoelectric material at the bottom of the transducer emits sector-shaped beams, which form a plane.
  • the plane 1404 can be interpolated by a set of the clipped slices from the view volume.
  • the slice stack can be aligned with the sector plane.
  • the virtual perspective camera node 1402 can be mounted right above the plane 1404, and faced downward.
  • the rotation of the camera can be carefully defined so that the lateral border of the ultrasound image is consistent with the lateral border of the transducer 1406.
  • the margin between the near and far distance attributes of the camera can define the depth of the image stack clipped.
  • the depth can be set to an optimal value to guarantee the resolution of the image, as it can fuse the consecutive slices and merge all the clipped isosurfaces into one single compounded image.
  • the height angle attribute can be set to scale the field of view region for the image.
  • the camera 1402 can be transformed corresponding to the placement of the virtual transducer 1406 in global view and can generate a 2D image of the clipped volume and carry out perspective rendering.
  • Open Inventor Graphics SDK can be utilized to render the perspective that the camera captures.
  • Open Inventor is an advanced object-oriented toolkit for OpenGL programming.
  • the lower level atomic instructions that are sent to the GPU devices are encapsulated into node classes.
• the nodes implementing the fundamental manipulations (transformation, texture generation and binding, lighting, etc.) are organized into a hierarchical tree structure (also known as a scene graph), and work together to render the "virtual world."
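As one concrete illustration of such a scene-graph camera node, the minimal Open Inventor-style sketch below builds a downward-facing perspective camera whose near/far margin defines the depth of the clipped slab and whose height angle scales the imaged field of view; the numeric values and the single-separator wiring are placeholders, not the values used by the described system.

    #include <Inventor/SbLinear.h>
    #include <Inventor/nodes/SoPerspectiveCamera.h>
    #include <Inventor/nodes/SoSeparator.h>

    // Build a downward-facing perspective camera for clipping the view volume.
    SoSeparator* buildUltrasoundCamera() {
        SoSeparator* root = new SoSeparator;
        root->ref();

        SoPerspectiveCamera* camera = new SoPerspectiveCamera;
        camera->position.setValue(0.0f, 0.05f, 0.0f);              // just above the sector plane
        camera->orientation.setValue(SbVec3f(1, 0, 0), -1.5708f);  // rotate -90 deg about X: face downward
        camera->nearDistance = 0.01f;                              // near/far margin = clipped slab depth
        camera->farDistance  = 0.06f;
        camera->heightAngle  = 0.6f;                               // scales the field of view region
        root->addChild(camera);
        return root;
    }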
  • the ultrasound image rendering of the soft tissue mainly includes the development of two classes derived from Open Inventor nodes: SoTissueDeformationShader and SoUltrasoundShader. Both classes contain internal GLSL shader implementation for photorealistic ultrasound image rendering.
• SoTissueDeformationShader can store clipped image slices from both the original volume and the segmented volume in a corresponding texture unit group node. These two types of images will be passed to a GLSL shader for the vertex and fragment rendering in parallel.
• the shader uses coordinates to access the pixel within both images and replaces the fragment color of the current pixel with that of the pixel at the computed position after deformation.
• the output of the shader can be used to render a rectangle shape under the entire scene as the input to the second developed node, SoUltrasoundShader. SoUltrasoundShader specifically deals with the ultrasound effect rendering.
  • this geometry is put under a scene texture node, which will also be passed to the internal GLSL shader to render the ultrasound effect.
• the scene texture node is necessary, instead of the ordinary texture node, because an Open Inventor limitation prevents reading out the content inside an active texture unit. Thus, this texture buffer cannot be transferred to the shader directly.
  • the texture can be eventually visualized onto the quad in the virtual scene as the final ultrasound image.
• the original clipped images from both volumes can go through a two-phase rendering pipeline: the first phase performs the tissue deformation rendering in particular, and in the second phase the internal shader uses the rendered scene from the first phase and generates the resulting ultrasound image.
  • the haptics rendering logic 44 can determine deformation and ultrasound effects that can be implemented under the corresponding shader nodes.
• the haptics rendering logic 44 can include Young's modulus properties for different tissues, which indicate the ability of a material to resume its original size and shape after being subjected to a deforming force or stress.
  • a mass-spring system can be used to represent deformation of soft tissue.
  • the bottom line of the 2-D image can be considered as the farthest end of the field of view region that the force transmission can reach, and thus no deformation is applied between the first image 1500 and the second image 1502.
  • the clipped plane of the upper body volume can be divided into parallel columns 1504a, 1506a, and 1508a, as shown in the first image 1500.
  • each column can be equal to the dimension of one single pixel, and each column contains a stack of pixels.
• column 1506a contains a stack of pixels, each having an initial length L01, L02, L03... L0n, that results in a column height of L0.
• Each pixel within a column can be treated as an equivalent spring. Under a contact force F0 at one single point on the skin surface, the spring system can be compressed, and the deformation extent of every point inside the tissue can be represented by a computed coefficient with respect to the original length of that pixel. According to Hooke's Law, each pixel can have an output for the shrunk length that depends on the Young's modulus factor as its physical metric. The shrunk length of each pixel L01', L02', L03', L04'... L0n' results in a shrunk length L0' of the column 1506b.
  • the surface contact force F0 and an internal distributed force can be determined by the haptics rendering logic 44.
• the vector of the surface contact force can be provided based upon the user's manipulation of a haptic device 22, while the surface deformation curve can obey a normal distribution function. It can be assumed that the force along the column traverses the isotropic tissue and thus remains constant. Considering the actual anatomical structure on the intersection plane, it can also reasonably be assumed that the skin surface will remain in a smooth contour obeying the 2-D normal distribution function. Hence, the deformation length at that particular point can be used to calculate a compensated force based on the overall elasticity coefficient, and the deformation of each pixel along the column below can be calculated accordingly.
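The compensated-force expression itself is not reproduced in this text; purely as an illustration of the per-column spring compression described above, the following C++ sketch compresses a column of pixel-springs under a contact force F0 using a Hooke's-law relation. The names, the stiffness derivation from Young's modulus, and the clamping policy are assumptions.

    #include <algorithm>
    #include <vector>

    struct PixelSpring {
        double restLength;   // L0i, original pixel length within the column
        double stiffness;    // derived from the tissue's Young's modulus (assumed)
    };

    // Compress one column of pixel-springs under a surface contact force F0,
    // assumed constant along the (isotropic) column, and return the shrunk
    // column height L0'.
    double compressColumn(const std::vector<PixelSpring>& column,
                          double contactForceF0,
                          std::vector<double>& shrunkLengths) {
        shrunkLengths.clear();
        double shrunkHeight = 0.0;
        for (const PixelSpring& pixel : column) {
            // Hooke's law: displacement = F / k, clamped so a pixel never
            // compresses below zero length.
            double displacement = contactForceF0 / pixel.stiffness;
            double newLength = std::max(0.0, pixel.restLength - displacement);
            shrunkLengths.push_back(newLength);
            shrunkHeight += newLength;
        }
        return shrunkHeight;
    }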
  • Reflection defines the ultrasound wave that echoes back to the transducer.
  • the gradient of the acoustic impedances between two adjacent pixels on incident interface is calculated. This scale is used to further compute the light intensity transmitted back to the transducer.
• Radial blur consists of two aspects: in the radial direction, the light intensity attenuates exponentially, while in the tangential direction, the gray values of the pixels are smoothed to produce a blurred effect that simulates the realistic inherent and coherent imaging of an ultrasound machine.
• a Perlin-based noise pattern is designed and precomputed onto the ultrasound image.
  • the seed of the pseudo noise generator is related to the real-time transducer position, orientation, and the clock time.
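The effects above are implemented in GLSL inside the Open Inventor shader nodes; purely as an illustration of the arithmetic, the C++ sketch below combines impedance-gradient reflection, exponential radial attenuation, and a pose- and time-seeded noise term. The constants and the cheap hash-based stand-in for the precomputed Perlin pattern are assumptions.

    #include <cmath>
    #include <cstdint>

    // Stand-in for the precomputed Perlin pattern; returns a value in [0, 1).
    static double pseudoNoise(int px, int py, std::uint32_t seed) {
        std::uint32_t h = seed ^ (static_cast<std::uint32_t>(px) * 374761393u)
                               ^ (static_cast<std::uint32_t>(py) * 668265263u);
        h = (h ^ (h >> 13)) * 1274126177u;
        return (h & 0xFFFFFFu) / static_cast<double>(0x1000000);
    }

    // Per-pixel intensity: reflection from the acoustic impedance gradient at
    // the interface, attenuated exponentially with radial depth, plus speckle
    // noise seeded by the transducer pose and clock time.
    double shadeUltrasoundPixel(double impedanceHere, double impedanceNext,
                                double radialDepth, int px, int py,
                                std::uint32_t poseTimeSeed) {
        double reflection = std::fabs(impedanceNext - impedanceHere) /
                            (impedanceNext + impedanceHere + 1e-9);
        const double attenuation = 0.5;   // illustrative attenuation coefficient
        double intensity = reflection * std::exp(-attenuation * radialDepth);
        intensity += 0.05 * pseudoNoise(px, py, poseTimeSeed);
        return intensity;
    }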
  • the volume data pre-processing 40 can provide the generated 3D models to the 3D application program interface 39, which can be received and used by the graphics rendering logic 46 to generate the virtual 3D scene.
  • the primary visual model displayed in the first viewing area 1602 can include a stereoscopic view of the 3D virtual upper torso and neck, including skin, muscle, bone, organs, veins, and arteries using CUDA.
  • a GPU-based ray-casting algorithm can be executed by the volume data pre-processing 40 based on the 3D model generated from the CT data, and the 3D application program interface 39 can render a dynamic virtual ultrasound image using GLSL. Visual effects of ultrasonic reflection and absorption, as well as soft tissue deformation and vein vessel collapse, can be displayed in the second viewing area 1604 to provide an ultrasound guided simulation.
  • a user can place a virtual ultrasound transducer 1608 on the surface of a patient's neck to assess the position and the appearance of the internal jugular vein and carotid artery next to it.
  • the haptic library 39 can render the perceptional deformation on the contact between the virtual ultrasound transducer 1608 and the skin.
• the IJ vein is compressible, in contrast to the carotid artery, which is not compressible.
• a user can manipulate the virtual ultrasound transducer 1608 through manipulation of its associated haptic device 22 in order to apply a small amount of force on the skin and view the characteristic compression of the vein in the second viewing area 1604 of the display 1600.
• the vessels will have a different appearance in the resulting image.
  • Different alignment of the needle 1610 will also affect the portion of the needle displayed in the ultrasound image in the second viewing area 1604. For example, when the needle is inserted perpendicular to the ultrasound beam, only a portion of the needle will be visualized, while in a parallel alignment the entire course of the needle can be visualized during the traversal.
  • the user can manipulate the virtual needle 1610 through manipulation of its associated haptic device 22 to insert the needle 1610 into the patient.
  • Haptic augmented and virtual reality systems as described above can be used in various applications. For example, such systems can be used for training, where residents can develop the kinesthetic and psychomotor skills required for conducting surgical procedures. Such systems can also be used for pre-surgical planning, particularly for challenging cases of patients with abnormal anatomy.
  • haptic augmented and virtual reality systems of the present technology can be integrated with existing hospital information systems.
  • An operation room (OR) workspace can be mapped onto and simulated by the workspace of a haptic augmented and virtual reality system by using appropriate coordinate transformations. While the OR is usually tracked optically, the tracking can be electromagnetic and physical when using a haptic augmented and virtual reality system.
  • the graphic display and haptic feedback provided to the user can be based not only on the simulated patient anatomy, but also on the simulated OR workspace.
• the haptics rendering logic 44 can cause a haptic device 22 to provide force feedback to the user based on aspects of the OR workspace that have been mapped into the workspace of the haptic augmented and virtual reality system.

Abstract

The present technology provides methods, devices and systems for haptically-enabled simulation of surgical procedures using a haptic augmented and virtual reality system.

Description

IMPROVEMENTS FOR HAPTIC AUGMENTED AND VIRTUAL REALITY SYSTEM FOR SIMULATION OF SURGICAL PROCEDURES
RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application Serial No. 61/910,806, filed December 2, 2013, currently pending.
FIELD OF THE INVENTION
[0002] The present technology relates to methods, devices and systems for haptically- enabled simulation of surgical procedures using a haptic augmented and virtual reality system.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] Specific examples have been chosen for purposes of illustration and description, and are shown in the accompanying drawings, forming a part of the specification.
[0004] Figure 1 illustrates a perspective schematic view of one example of a known open surgery station, which can be used in a haptic augmented and virtual reality system of the present technology.
[0005] Figure 2 illustrates a block diagram of a known software and hardware architecture for the system of Figure 1.
[0006] Figure 3 illustrates a second perspective schematic view of the open surgery station of Figure 1.
[0007] Figure 4 illustrates a perspective schematic view of one example of a microsurgery station, which can be used in a haptic augmented and virtual reality system of the present technology. [0008] Figure 5 illustrates a block diagram of an adjustable indicator incorporated into a haptic augmented and virtual reality system of the present technology.
[0009] Figure 6 illustrates a block diagram of a haptic augmented and virtual reality system of the present technology including an eye movement simulator.
[0010] Figure 7 illustrates a flow diagram of a method of operation of the eye movement simulator of Figure 6.
[0011] Figure 8 illustrates one example of a virtual aneurysm clip having a single contact point.
[0012] Figure 9 illustrates one example of a virtual aneurysm clip having multiple contact points.
[0013] Figure 10 illustrates one example of point-to-object collision detection.
[0014] Figure 11 illustrates a flow diagram for one example of a collision evaluation that can be used in performing point-to-object collision detection.
[0015] Figure 12 illustrates a flow diagram for one example of computing the collision identification data in a collision evaluation that can be used in performing point-to-object collision detection.
[0016] Figure 13 illustrates a comparison between graphics volume rendering and haptic transfer functions.
[0017] Figure 14 illustrates one example of multi-point collision detection.
[0018] Figure 15 illustrates a flow diagram for one example of a collision evaluation that can be used in performing multi-point collision detection.
[0019] Figure 16 illustrates a flow diagram for a second example of a collision evaluation that can be used in performing multi-point collision detection.
[0020] Figure 17 illustrates a flow diagram showing a cursor movement locking process.
[0021] Figure 18 illustrates a flow diagram showing a cursor movement unlocking process. [0022] Figure 19 illustrates a plane formed by an ultrasound transducer that can be used in rendering 3D models of patient anatomy.
[0023] Figure 20 illustrates a spring model that can be used to determine skin deformation.
[0024] Figure 21 illustrates a surgical procedure simulation where secondary visualization data is provided.
DETAILED DESCRIPTION
[0025] The present technology provides devices, systems, and methods that can be used with, and implemented as improvements to, haptic augmented and virtual reality systems. The haptic augmented and virtual reality systems can include at least one surgical simulation station. A surgical simulation station can be an open surgery station that simulates performance of open surgery steps, or a microsurgery station that simulates performance of microsurgery steps. Examples of such systems are described in U.S. Patent Application Serial No. 13/628,841, entitled "Haptic Augmented and Virtual Reality Systems for Simulation of Surgical Procedures," filed on September 27, 2012, and examples of open surgery stations that can be used in such systems are described in U.S. Patent No. 7,812,815, entitled "Compact Haptic and Augmented Virtual Reality System," issued on October 12, 2010, the disclosures of each of which are hereby incorporated by reference herein in their entirety.
[0026] As used herein, the term "open surgery station" should be understood to mean that an environment is provided in which the visual display provided to a user includes virtual reality aspects superimposed over real reality, and in which a user can see aspects of real reality in addition to the superimposed virtual reality aspects while performing steps of simulated open surgery. For example, at an open surgery station, a user can interact with displayed virtual patient anatomy, and a simulated surgical instrument can be displayed in a manner that it appears to be held in the actual hand of a user holding a haptic stylus as a proxy for the instrument. [0027] As used herein, the term "microsurgery station" should be understood to mean that an environment is provided in which the visual display provided to a user consists of virtual aspects, which are computer generated graphics, that the user can see while performing steps of simulated microsurgery. For example, at a microsurgery station, a user can interact with displayed virtual patient anatomy using displayed virtual surgical instruments, and although the user uses a haptic stylus as a proxy for the instrument, the user does not see aspects of real reality.
Haptic Augmented and Virtual Reality Systems
[0028] Haptic augmented and virtual reality systems of the present technology can include at least one surgical simulation station. One example of a surgical simulation station is open surgery station 10 that simulates open surgery, one example of which is illustrated in Figures 1 and 3. Another example of a surgical simulation station is microsurgery station 11 that simulates microsurgery, one example of which is illustrated in Figure 4. Open surgery stations 10 and microsurgery stations 11 can each include some of the same types of physical components, and for ease of reference like physical components are labeled with like reference numbers for both stations 10 and 11 in Figures 1-4.
[0029] With reference to the open surgery station 10 for simulating open surgical steps, a user 12 can sit or stand at a physical desktop workspace 14 defined by a housing 16 that has an opening 18 on one side. The open surgery station 10 can include a multi-sensorial computer interface that includes a stereoscopic vision interface 20, at least one haptic device 22, and a 3D sound system 24. Additionally, a head tracking device 26 and a hand tracking device in the form of at least one haptic robot stylus 27 can provide information regarding the user's interaction with the system as well as the user's visual perspective relating to the open surgery station 10.
[0030] With reference to the microsurgery station 11 for simulating microsurgical steps, the user (not shown in Figure 4) can sit or stand at a physical desktop workspace 14 defined by a housing 16 that has an opening 18 on one side. The microsurgery station 11 can include a multi-sensorial computer interface that includes a binocular surgical microscopic eyepiece 31, at least one haptic device 22, and a 3D sound system 24. Additionally, a hand tracking device in the form of at least one haptic robot stylus 27 can provide information regarding the user's interaction with the system as well as the user's visual perspective relating to the microsurgery station 11.
[0031] Surgical procedures that can be simulated using haptic augmented and virtual reality systems of the present technology can include procedures that use a one- handed technique, or that require use of multiple hands. Accordingly, the open surgery station 10 and the microsurgery station 11 can each include at least one, or two, haptic devices 22, which track the user's hand position and orientation and provide force feedback to the user. For example, since many surgical procedures tend to require a two-handed technique, the methods of aneurysm clipping simulation provided herein can include the simultaneous use of two haptic devices 22. A 3D image of a first surgical tool can be collocated with a first haptic device 22, and an image of a second surgical tool can be collocated with a second haptic device 22. For example, the simulation method can include a user holding a first haptic device 22 in a first hand, such as the right hand, and superimposing an image of a first surgical tool, such as an aneurysm clip holder or an arachnoid knife over the first haptic device 22. The simulation method can also include a user holding a second haptic device 22 in a second hand, such as the left hand, and superimposing an image of a second surgical tool, such as a suction tip over the second haptic device 22.
[0032] The open surgery station 10 and the microsurgery station 11 can each include a display system that allows the user to acquire depth perception. Each display system can be driven by graphics logic, which can control and update the graphics displayed by the display system. The display system of the open surgery station 10 can use a display screen 28 that can be a single passive stereo monitor, a half-silvered mirror 30 to reflect the image of the display screen 28, and a head tracking system 26 to display a dynamic viewer-centered perspective. The partially transparent mirror 30 can permit the user 12 to see both the virtual reality display and the user's hands, thus providing an augmented reality environment. The user can hold and manipulate the haptic device 22 with its stylus 27 below the mirror 30. The display system of the microsurgery station 11 can display a static perspective by using two display screens 28, which can be non-stereo monitors located side by side and a binocular surgical eyepiece 31 , which can consist of four first-surface mirrors oriented at an angle in such a way that the image of the left monitor is only seen by the left eye, and the image of the right monitor is only seen by the right eye. The orientation and distance between the front-surface mirrors can be adjusted by the user to match his/her interocular distance.
[0033] In the open surgery station 10, a virtual projection plane can be located exactly at the center of the haptic workspace and oriented perpendicular to that line, whereas in the microsurgery station 11 the user can view the virtual projection through the binocular surgical microscopic eyepiece 31. In the open surgery station 10 the partially transparent mirror 30 can preferably be sufficiently wide to allow the user to view virtual objects from different viewpoints (displaying the correct viewer-centered perspective) while permitting a comfortable range of movement. In contrast, in the microsurgery station 11, the binocular surgical microscopic eyepiece 31 can be adjusted up or down, either manually or by an automatic up-down adjustor, and the interocular distance can also be adjusted for comfortable 3-dimensional viewing. In one example, the height of the binocular surgical microscopic eyepiece 31 can be adjusted by moving the eyepiece mounting frame 33 up or down, by activating a first foot pedal 34 or by a hand switch 35 on housing 16. In some examples, one or more additional foot pedals 34 can be provided to activate certain simulated surgical instruments, such as a bipolar electrocautery forceps.
[0034] The computer 32 illustrated in Figure 3 can be operatively connected to any one or more surgical stations in the system, such as the open surgery station 10 and the microsurgery station 11. Alternatively, any one or more surgical station in the system may be operatively connected to a separate computer 32. For example, the open surgery station 10 can be connected to a first computer 32, and the microsurgery station 11 can be operatively connected to a second computer 32. In one example each separate computer 32 of the system can be linked via a wireless or wired network connection. The one or more computers 32 can be components of a haptic augmented and virtual reality system that includes open surgery station logic that controls and operates the open surgery station 10, and microsurgery station logic that controls and operates the micro-surgery station 11. The haptic augmented and virtual reality system can include a software library that provides, in real time, a high level layer that encapsulates the rendering of a scene graph on either display screen 28, the stereoscopic vision interface 20, the handling of the hand tracking device shown as a haptic robot stylus 27, an interface with a haptic device 22, and playback of 3D spatial audio on a 3D sound system 24.
[0035] With respect to the open surgery station 10, a computer 32 can include haptics rendering logic that drives each haptic device of the open surgery station 10, and graphics logic that drives the display system of the open surgery station 10. The computer 32 connected to the open surgery station 10 can also include open surgery station logic, which can integrate the haptics rendering logic and the graphics logic and provide real-time simulation of open surgery steps of a surgical procedure, including updating the open surgical views in real time in response to user operations performed with a haptic device of the open surgery station 10. The open surgery station logic can also include an instrument library that includes a plurality of virtual surgical instruments that can each be selected by a user and displayed by the display system of the open surgery station 10. Some examples of instruments that can be included in the instrument library for use with the open surgery station 10 are discussed below with respect to the open surgery steps of the aneurysm clipping methodology.
[0036] With respect to the micro-surgery station 11, a computer 32 can include haptics rendering logic that drives each haptic device of the micro-surgery station 11, and graphics logic that drives the display system of the micro-surgery station 11. The computer 32 connected to the micro-surgery station 11 can also include microsurgery station logic, which can integrate the haptics rendering logic and the graphics logic and provide real-time simulation of microsurgery steps of a surgical procedure, including updating the microsurgical views in real time in response to user operations performed with a haptic device of the micro-surgery station 11. The micro-surgery station logic can also include an instrument library that includes a plurality of virtual surgical instruments that can each be selected by a user and displayed by the display system of the micro-surgery station 11. Some examples of instruments that can be included in the instrument library for use with the micro-surgery station 11 are discussed below with respect to the micro-surgery steps of the aneurysm clipping methodology.
[0037] All logic and software components discussed herein can be implemented as instructions stored on a non-transient computer readable medium, such as a memory operatively connected to computer 32, and can be executed by one or more processors of computer 32.
[0038] Referring now to Figure 2, one example of a software and hardware architecture for an open surgery station 10 is shown. The architecture includes interconnected devices and logic, which are integrated by a 3D application program interface (API) 39.
[0039] Both the open surgery station 10 and the microsurgery station 11 include software and hardware for generating image data from scans of actual human anatomy. The volume data pre-processing 40 can receive 2D image data, for example, generated by an input data source 41, which can be a medical scanner. The volume data pre-processing 40 can provide 3D models to the 3D application program interface 39.
[0040] Examples of medical scanners that can be used as an input data source 40 for characterizing physical objects include a magnetic resonance imaging (MRI) scanner or a CT scanner, such as those typically used for obtaining medical images. The volume data pre-processing 40 segments and combines the 2D images to create a virtual 3D volume of the sample that was scanned, for example a human head. In an example embodiment for medical images that could be used, for example, for surgical training, the volume data pre-processing 40 creates detailed 3D structures. The characteristics of the various 3D structures will, with the interface to a haptic device 22, present different feel characteristics in the virtual reality environment, e.g. skin will feel soft and bone hard. Haptics rendering logic 44 can monitor and control each haptic device 22 including each stylus 27. The haptics rendering logic 44 can receive data from each haptic device and determine the position and orientation of each haptic device 22, for example a stylus 27, or a plurality of styluses 27 for different functions or for use by separate hands. The haptics rendering logic 44 computes collision detections between a virtual device corresponding to the haptic device 22 and objects within the virtual 3D environment. The haptics rendering logic 44 can also receive 3D models from the 3D application program interface 39. For example, collisions with a virtual device and imported 3D isosurfaces can be computed, and the haptics rendering logic can direct a haptic device 22 to generate the corresponding force feedback. In some examples, each isosurface is assigned different haptic materials, according to certain parameters: stiffness, viscosity, static friction and dynamic friction, as well as different physical properties such as density, mass, thickness, damping, bending, etc. Therefore, the user 12 can feel the different surfaces and textures of objects and surfaces in the virtual environment.
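A minimal sketch of how those per-isosurface haptic parameters might be grouped is shown below; the field names follow the list above, while the units and the example values are invented for illustration only.

    // Per-isosurface haptic material parameters (illustrative container only).
    struct HapticMaterial {
        double stiffness;
        double viscosity;
        double staticFriction;
        double dynamicFriction;
        double density;
        double mass;
        double thickness;
        double damping;
        double bending;
    };

    // Example assignments: bone is stiffer and less damped than skin.
    // All numbers are made up for the sake of the sketch.
    const HapticMaterial kSkinMaterial = {0.2, 0.4, 0.3, 0.2, 1.1, 1.0, 2.0, 0.6, 0.3};
    const HapticMaterial kBoneMaterial = {0.9, 0.1, 0.6, 0.5, 1.9, 1.0, 5.0, 0.1, 0.0};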
[0041] In a surgical simulation example, the user 12 can feel different sensations when touching skin, bone, and internal organs, such as the brain. In a preferred embodiment, the graphics and haptics can be on two separate threads, which can be implemented, for example, with a dual processor computer. The haptics and graphics have their own update schedules, for example, haptics at 1000 Hz and graphics at about 30 Hz. In that example, the system would synchronize the two threads, with consecutive graphics updates occurring after about every 30 haptic updates, and it is within the skill of artisans to modify the manner in which haptics and graphics update and synchronize.
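One way the two update rates could be laid out on separate threads is sketched below, assuming a simple mutex-guarded shared scene; the loop bodies are placeholders and the sleep-based pacing is only illustrative, not the scheduling used by the described system.

    #include <atomic>
    #include <chrono>
    #include <mutex>
    #include <thread>

    std::mutex sceneMutex;
    std::atomic<bool> running{true};

    void hapticLoop() {                   // ~1000 Hz servo loop
        while (running) {
            {
                std::lock_guard<std::mutex> lock(sceneMutex);
                // read device pose, run collision detection, output forces
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }

    void graphicsLoop() {                 // ~30 Hz display update
        while (running) {
            {
                std::lock_guard<std::mutex> lock(sceneMutex);
                // copy the latest instrument pose and scene state
            }
            // render the stereoscopic frame
            std::this_thread::sleep_for(std::chrono::milliseconds(33));
        }
    }

    int main() {
        std::thread haptics(hapticLoop), graphics(graphicsLoop);
        std::this_thread::sleep_for(std::chrono::seconds(1));
        running = false;
        haptics.join();
        graphics.join();
        return 0;
    }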
[0042] Hand tracking is very useful because it allows users to use both hands to interact with the virtual scene. While the user can feel tactile sensations with a hand holding a haptic stylus 27, it is also possible to use a tracked hand to move the 3D objects, manipulate lights, or define planes in the same 3D working volume. Graphics rendering logic 46 receives 3D models from the 3D application program interface 39. Also, the graphics rendering logic 46 receives virtual tool(s) information from the haptics rendering logic 44. With the models and other information, the graphics rendering logic 46 software generates and continuously updates, in real time, the stereoscopic 3D display that is displayed by a display screen 28.
[0043] The API 39 can provide a camera node that computes the correct viewer-centered perspective projection on the virtual projection plane. It can properly render both left and right views according to the position and orientation of the user's head given by the tracking system. [0044] Sound rendering 49 can also be used to add auditory simulations to a virtual environment through each 3D sound system 24. One example of sound rendering logic is Open Audio Library (OpenAL), which is a freely-available cross-platform 3D audio API that serves as a software interface to audio hardware. OpenAL can generate arrangements of sound sources around a listener in a virtual 3D environment. It handles sound-source directivity and distance-related attenuation and Doppler effects, as well as special effects such as reflection, obstruction, transmission, and reverberation.
Haptic Feedback Adjustment
[0045] As discussed above, the haptics rendering logic 44 can direct a haptic device 22 to generate force feedback corresponding to computed collisions with a virtual device and imported 3D isosurfaces. While the amount of the force feedback can vary within a range correlated to the nature and degree of the computed collision, different users may have different degrees of sensitivity to the force feedback they feel through the haptic device 22. Accordingly, some examples of haptic augmented and virtual reality systems of the present technology can include an adjustable indicator 100 to allow a user to personally tune and adjust the quality of haptic feedback from the simulator by controlling the level of the range of force feedback.
[0046] As shown in Figure 5, an adjustable indicator 100 can be used prior to or during simulation of a surgical procedure to adjust the range of force feedback that the simulator provides to the one or more haptic devices 22 being used during the simulation. For example, the adjustable indicator 100 may allow a user to increase or decrease the range of force feedback that a haptic device 22 provides to a user. Users with a high sensitivity to force feedback may use a lower level setting than users with a lower sensitivity to force feedback.
[0047] The adjustable indicator may be of any suitable type, such as a slider interface or rotatable knob, and can be placed at any suitable location within a haptic augmented and virtual reality system, such as on a surgical simulation station.
[0048] Each adjustable indicator 100 can be used to adjust the level of the range of force feedback provided by at least one haptic device 22 of a haptic augmented and virtual reality system. For example, when a surgical simulation station has more than one haptic device 22, a plurality of adjustable indicators 100 can be provided, each adjustable indicator 100 controlling the force feedback level of one of the haptic devices 22. Or, one adjustable indicator 100 can be provided that controls the force feedback level of all of the haptic devices 22 of the surgical simulation station.
[0049] A user can adjust the adjustable indicator 100 to a desired setting, which corresponds to a desired range level of force feedback. The adjustable indicator 100 sends a signal to the haptics rendering logic 44 indicating the set force feedback range level. The haptics rendering logic 44 receives the set force feedback range level from the adjustable indicator 100, and uses it when calculating the amount of force feedback that should be provided by the haptic device 22 during operation of the system, when a collision is computed between a virtual device and an imported 3D isosurface. The haptics rendering logic 44 then sends a force feedback signal to the haptic device 22 indicating the calculated amount of force feedback, and the haptic device 22 provides the calculated amount of force feedback to the user.
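A sketch of how the indicator's setting could enter that calculation is given below, assuming the setting is normalized to a 0-1 level and that the commanded force is clamped to a device maximum; both the names and the numbers are assumptions. In use, the haptics rendering logic would apply such a scaling to each computed collision force before sending the force feedback signal to the haptic device.

    #include <algorithm>

    struct ForceFeedbackTuner {
        double level = 1.0;            // set by the adjustable indicator 100, in 0..1
        double maxForceNewtons = 3.3;  // illustrative device limit

        // Scale the force computed for a collision into the user-selected range.
        double scale(double computedForce) const {
            return std::min(computedForce * level, maxForceNewtons);
        }
    };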
Eye Movement Simulation
[0050] Some examples of haptic augmented and virtual reality systems of the present technology can be used for simulating ocular surgery. Such systems include an eye movement simulator 202, as shown in Figure 6, which simulates eye movement during simulation of ocular surgery.
[0051] Generally, during ocular surgical procedures, the rotation of the patient's eye can be caused by some movements of the instrument, or by the patient's voluntary action. Eye rotation may increase the difficulty of the operation, and significant amounts of eye rotation may even damage the eye when an instrument is inside. Therefore, in many ocular surgeries, the physician operates a first instrument to conduct the surgery and a second instrument to help keep the eye steady.
[0052] Accordingly, as shown in Figure 6, a haptic augmented and virtual reality system can include a surgical simulation station 200 that includes an eye movement simulator 202, a first haptic device 204, and a second haptic device 206, which each provide position and orientation information to the haptics rendering logic 212. The first haptic device 204 can be driven by the haptics rendering logic 212, and can track a user's hand movements and provide force feedback to the user associated with a first simulated surgical instrument. The second haptic device 206 can also be driven by the haptics rendering logic 212, and can track a user's hand movements and provide force feedback to the user associated with a second simulated surgical instrument, such as an eye steadying instrument.
[0053] In the system of Figure 6, graphics rendering logic 208 generates and continuously updates, in real time, a virtual 3D scene that is displayed by a display screen 210. The stereoscopic 3D display includes 3D models of the virtual patient's face and the eye being operated on, as well as 3D models of the surgical instruments being used in the procedure. The graphics rendering logic 208 receives instrument data, such as position, force, and orientation information, for each haptic device 204, 206 from the haptics rendering logic 212.
[0054] In this application, the graphics rendering logic 208 causes surgical instrument models to be rendered in the virtual 3D scene and move along with the haptic devices 204, 206 based on the position and orientation information for each haptic device 204, 206 received by the graphics rendering logic 208 from the haptics rendering logic 212. A 3D model of the eye, as well as the patient's face are rendered both graphically and haptically in the scene. The graphics rendering logic 208 continuously updates the virtual 3D scene including the 3D eye model and the 3D models of the first and second surgical instruments based on data received from the haptics rendering logic 212 and the eye movement simulator 202. Additionally, the haptics rendering logic 212 provides force feedback to the user through the first haptic device 204 and the second haptic device 206 based on the eye movement simulation data received from the eye movement simulator 202.
[0055] The eye movement simulator 202 can be implemented through eye movement simulation logic that, when executed by a processor of the system, simulates movement of a human eye during an ocular surgery simulation. The eye movement simulation logic can be stored in a non-transient computer readable medium, such as a memory of computer 32, and can be executed by at least one processor of computer 32. The eye movement simulator 202 receives input from the haptics rendering logic 212 and simulates eye movement of the 3D eye model during surgical simulation by providing eye movement simulation data to the haptics rendering logic 212 and the graphics rendering logic 208 of the surgical simulation station 200.
[0056] The operation method 300 of the eye movement simulator 202 is illustrated in Figure 7. As shown, operation method 300 of the eye movement simulator 202 starts at step 302, by applying a spherical joint to the eye model. This joint restricts the moving space of the eye model while preserving the freedom of rotation. At step 304, the eye movement simulator 202 applies swing and twist limitations to the spherical joint, which correspond to the rotating range of the human eye. At step 306, the eye movement simulator 202 may also apply spring and damper forces to the joint according to the resistance of eye rotation.
[0057] During a surgical simulation, when the haptics rendering logic 212 of the system determines that an intersection has occurred between the first haptic device 204 and the 3D model of the eye, the eye movement simulator 202 receives the intersection force data based on the first haptic device 204 from the haptics rendering logic 212 at step 308. In the same step, the eye movement simulator 202 also receives steadying force data from the haptics rendering logic 212 based on the second haptic device 206. At intersection, the virtual instrument associated with the first haptic device 204 pierces the 3D model of the eye. At step 310, the eye movement simulator 202 then applies a fulcrum force to the first haptic device 204. At step 312, the eye movement simulator 202 applies a reaction force to the 3D model of the eye. The reaction force applied to the 3D model of the eye can include a force with the same magnitude and opposite direction as the force and direction of the intersection between the first haptic device 204 and the 3D model of the eye. The reaction force applied to the 3D model of the eye can also include a voluntary movement force, which can be a force added to simulate voluntary movement of the eye by the patient.
[0058] At step 314, the eye movement simulator 202 determines a resultant rotation of the eye, based on the fulcrum force applied to the first haptic device 204, the reaction force applied to the 3D eye model, and the steadying force provided by the virtual steadying device of the second haptic device 206. After the rotation is applied to the 3D eye model, the eye movement simulator 202 loops back to step 310 and applies a new fulcrum force to the first haptic device 204, based on the rotation of the eye determined in step 314. Since the fulcrum force will again affect the rotation of the eye model, a feedback loop is created. Damping can be applied within the eye movement simulator 202 to keep the loop stable.
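Reduced to a single rotation axis, one iteration of that feedback loop might look like the sketch below; the scalar torque model, the unit inertia, and the spring and damping constants are assumptions made for illustration, not the simulator's actual dynamics.

    struct EyeState {
        double angle = 0.0;             // rotation about the joint axis
        double angularVelocity = 0.0;
    };

    // One feedback-loop iteration (steps 310-314), with damping to keep the
    // loop stable. Forces from both haptic devices enter as scalar torques.
    void stepEyeRotation(EyeState& eye,
                         double intersectionForce,   // from the first haptic device
                         double steadyingForce,      // from the second haptic device
                         double voluntaryForce,      // simulated patient movement
                         double dt) {
        const double jointSpring = 5.0;  // resistance of the spherical joint
        const double damping = 2.0;      // keeps the feedback loop stable

        // Reaction on the eye: opposite to the intersection force, plus any
        // voluntary movement, counteracted by the steadying instrument.
        double netTorque = -intersectionForce + voluntaryForce - steadyingForce
                           - jointSpring * eye.angle
                           - damping * eye.angularVelocity;

        eye.angularVelocity += netTorque * dt;   // unit inertia assumed
        eye.angle += eye.angularVelocity * dt;
        // The new angle feeds back into the fulcrum force applied to the first
        // haptic device on the next iteration.
    }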
[0059] The performance of the user can be evaluated by the variance of the eye orientation, the mean and variance of forces applied to the 3D eye model, or the fluency of completing a simple task. The performance can be recorded for comparison or future review.
Collision Detection
[0060] As discussed above, a haptic augmented and virtual reality system of the present technology can include surgical station logic that controls and operates each surgical station of the system. The surgical station logic can integrate the haptics rendering logic and the graphics logic of the surgical station and provide real-time simulation of a surgical procedure, including updating the surgical views in real time in response to user operations performed with one or more haptic devices of the surgical station.
[0061] Referring to Figure 2, at least some haptic augmented and virtual reality systems of the present technology include a surgical simulation station having at least one haptic device 22 driven by haptics rendering logic 44, wherein the haptic device 22 tracks a user's hand movements and provides force feedback to the user associated with a first simulated surgical instrument having one or more contact points. The graphics rendering logic 46 generates a virtual 3D scene that is provided to the user on a display, such as display screen 28, and the virtual 3D scene includes models of at least one aspect of patient anatomy and the first simulated surgical instrument. The haptics rendering logic 44 can receive data from the haptic device 22 and determine the position and orientation of the simulated surgical instrument. The haptics rendering logic 44 can also receive data from the graphics rendering logic 46 and determine the location of the at least one aspect of patient anatomy. The haptics rendering logic 44 can use the data from the haptic device 22 and the graphics rendering logic 46 to perform collision detection to determine when there is a collision between any contact point of the simulated surgical instrument and any aspect of patient anatomy in the virtual 3D scene.
[0062] The haptics rendering logic 44 determines whether a collision has occurred for each haptic frame, using the previous haptic frame as the first haptic frame and the current haptic frame as the second haptic frame. In performing the evaluation of whether a collision has occurred, there are two considerations that are desirably taken into account. First, the evaluation is preferably efficient enough to not affect the performance of the haptics rendering servo loop, which may sustain a haptic frame rate of 1 KHz or more. Second, the evaluation is preferably robust enough to avoid undetected collisions. Both of these considerations are directly affected by the selection of the step size with which successive discrete points along any line segment defining the movement of a contact point, such as 504 in Figure 10 and 910, 912, and 914 in Figure 14, are evaluated. If the step size is too big, a collision could be overlooked, especially for thin structures. On the other hand, a finely grained step size could help guarantee collisions are always detected, but it could also severely impact the haptic frame rate, since the algorithm is executed in the servo loop thread. The following linear interpolation equation can be used, which parametrizes the line segment from start point 502a to end point 502b with parameter i, where i is in the interval [0,1], and gives any point P in the line segment as a function of i:
P = (1 - i)(Start Point) + i(End Point)
[0063] The evaluation performed by the haptics rendering logic can traverse the line segment by varying i from 0 to 1, incrementing it by a value of delta in each successive iteration. Computations of P can be done in continuous space and further converted to discrete voxel coordinates for retrieving voxel values. No sub-voxel resolution is needed.
[0064] Users can move the haptic stylus at various speeds, which can be reflected in the evaluation by corresponding variations of the line segment length from the start point 502a to the end point 502b. One approach for selecting delta would be to divide the line segment into a constant number of steps, so the number of iterations is constant for all moving speeds. However, such an approach may fail to detect collisions when the haptic device is moved at high speeds, especially when structures are thin. To address this issue, delta can be variable. For example, a variable step size as follows can be used:
Delta = K / |(End Point) - (Start Point)|
[0065] In the above equation, K is a constant that can depend on the voxel size, and can be equal to or less than the minimum dimension of the voxels in order to prevent undetected collisions. Using a variable delta results in the line segment being divided into a higher number of steps when the haptic stylus is being moved at higher speeds. However, every time the algorithm is executed, the actual distance between successive points P to be evaluated in a given line segment is constant, regardless of the velocity of the haptic stylus.
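Putting the interpolation equation and the variable step size together, a traversal of one haptic-frame path could look like the C++ sketch below, assuming K has already been chosen no larger than the minimum voxel dimension; the names are illustrative.

    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };

    // Sample the segment from start (502a) to end (502b) at a spacing of K,
    // using P = (1 - i) * Start + i * End with delta = K / |End - Start|.
    std::vector<Vec3> samplePath(const Vec3& start, const Vec3& end, double K) {
        std::vector<Vec3> points;
        double dx = end.x - start.x, dy = end.y - start.y, dz = end.z - start.z;
        double length = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (length == 0.0) {               // stylus did not move this frame
            points.push_back(start);
            return points;
        }
        double delta = K / length;         // more steps when the stylus moves faster
        for (double i = 0.0; i <= 1.0; i += delta) {
            points.push_back({(1 - i) * start.x + i * end.x,
                              (1 - i) * start.y + i * end.y,
                              (1 - i) * start.z + i * end.z});
        }
        return points;                     // spacing between samples is K regardless of speed
    }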
[0066] Collision determination can be made using either point-to-object collision detection or multi-point collision detection. Haptics rendering logic including either point-to-object collision detection or multi-point collision detection can be implemented by instructions stored on a non-transient computer readable medium, such as a memory of computer 32, and can be executed by at least one processor of the system, such as a processor of computer 32.
A. Point-To-Object Collision Detection
[0067] Point-to-object collision detection can be used to determine the interaction between a virtual surgical instrument and a virtual 3D model of patient anatomy. Figure 8 illustrates one example of a virtual aneurysm clip 400, having a single contact point 400a for use in point-to-object collision detection. In point-to-object collision detection, the contact point 400a of aneurysm clip 400 would be evaluated for collisions with isosurfaces representing virtual anatomies.
[0068] For example, as shown in Figure 10, for point-to-object collision detection, the haptics rendering logic collects two sets of parameters within two consecutive haptic frames, and uses those collected parameters to determine whether a collision has occurred between a virtual tool 500, such as aneurysm clip 400, and an aspect of simulated patient anatomy 600. The surface of the aspect of simulated patient anatomy 600 can be represented by voxel intensities. In a first haptic frame, the virtual tool 500 is located outside of the aspect of simulated patient anatomy 600, and the contact point of the virtual tool 500 is at start point 502a. Between the first haptic frame and a second haptic frame, the virtual tool 500 moves along path 504. At the second haptic frame, the contact point of the virtual tool 500 is at end point 502b. The haptics rendering logic estimates the intersection point 502' between the surface of the simulated patient anatomy 600 and the line segment defined by path 504, from the start point 502a to the end point 502b. When the haptics rendering logic determines that a collision occurred, it provides the 3D coordinates of the intersection point P, the normal vector N of the surface at the intersection point, and the touched side (front or back) of the shape surface. The haptics rendering logic will also determine the forces associated with the collision.
[0069] Figures 11 and 12 show flow diagrams of a collision evaluation 700 that can be performed by the haptics rendering logic of a haptic augmented and virtual reality system. The process starts at step 702, where it evaluates whether the 3D coordinates of start point 502a are equal to the 3D coordinates of the end point 502b. If the haptic stylus has not moved in two successive haptic frames and there was no collision in the previous frame, then the start point and end point are equal, and the function returns a result of "false" at step 704. If the haptic stylus has moved, then the process continues to step 706, where the logic can perform a rough bounding-box comparison between the line and the volume, proceeding to return a result of "false" at step 704 if they are disjoint. If the result of step 706 indicates that the line 504 is inside the volume bounding box, then for each point P on the line segment from the start point 502a to the end point 502b, the collision evaluation 700 performs a collision determination loop 708 including steps 710, 712, and 714. At step 710, the collision evaluation 700 selects a point P. At step 712, the collision evaluation 700 identifies the voxel closest to the point P. At step 714, the collision evaluation 700 checks the intensity of the closest voxel V against a set of window transfer functions defining the multiple shapes. If the voxel intensity at step 714 is outside the windows specified through the transfer functions, then the haptic stylus has not yet collided with any simulated aspect of the patient anatomy 600, and the evaluation 700 continues by performing the collision determination loop 708 with the next point P along the line segment 504. If none of the points on the line segment 504 collide with any simulated aspect of the patient anatomy 600, the collision evaluation 700 ends the loop 708 at step 720, proceeds to step 704, and returns a result of "false." However, when the intensity of the closest voxel V in step 714 lies within any of the transfer function windows, the collision evaluation 700 returns the 3D coordinates of the point P as the surface contact point and proceeds to step 716. At step 716, the collision evaluation 700 computes the collision identification data as illustrated in Figure 12, and then proceeds to step 718, where the collision evaluation 700 returns a result of "true."
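A sketch of the idea behind loop 708 follows, under the assumption that the CT volume is reachable through a caller-supplied nearest-voxel intensity lookup; the window struct and function names are illustrative, not the system's API.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

struct Vec3 { double x, y, z; };

// Rectangular window of touchable voxel intensities for one haptic shape.
struct HapticWindow { double minIntensity, maxIntensity; };

// Walk the sampled path between two haptic frames and stop at the first point
// whose nearest voxel intensity falls inside any transfer-function window.
// `intensityAt` is a caller-supplied nearest-neighbour lookup into the volume.
bool detectCollision(const std::vector<Vec3>& path,
                     const std::vector<HapticWindow>& windows,
                     const std::function<double(const Vec3&)>& intensityAt,
                     Vec3& contactPoint, std::size_t& shapeIndex) {
    for (const Vec3& p : path) {
        const double v = intensityAt(p);
        for (std::size_t s = 0; s < windows.size(); ++s) {
            if (v >= windows[s].minIntensity && v <= windows[s].maxIntensity) {
                contactPoint = p;   // surface contact point P
                shapeIndex = s;     // which shape (isosurface) was touched
                return true;
            }
        }
    }
    return false;                   // no touchable voxel along the segment
}
```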
[0070] As shown in Figure 12, the collision evaluation 700 can compute the collision identification data by starting at step 800 and proceeding to step 802 by providing the 3D coordinates of the point P at which the collision occurred. At step 804, the collision evaluation 700 can use the density of the closest voxel V, as determined in step 712, to determine which volumetric isosurface acting as a simulated aspect of the patient anatomy 600 has been touched by the haptic stylus by comparing it with the ranges defined by their transfer functions. At step 806, the collision evaluation 700 can determine the normal vector N, which is perpendicular to the volumetric isosurface at the contact point P, as shown in Figure 10, by computing the gradient of the neighbor voxels using the central differences method. At step 808, the collision evaluation 700 can define the tangential plane T, as shown in Figure 10, based on the contact point P and the normal vector N. The tangential plane T can serve as an intermediate representation of the volumetric isosurface acting as a simulated aspect of the patient anatomy 600. Proceeding to step 810, the tangential plane T can be used to determine if the haptic stylus is touching either the front or back side of the volumetric isosurface. At step 810, the collision evaluation 700 determines whether the start point 502a is in front of the tangential plane T and the end point 502b is behind the tangential plane T. If the start point 502a is in front of the tangential plane T and the end point 502b is behind the tangential plane T, the collision evaluation 700 continues to step 812, and determines that the collision occurred at the front face of the volumetric isosurface acting as a simulated aspect of the patient anatomy 600. At step 814, the collision evaluation 700 determines the direction of the normal vector N. If the start point 502a is not in front of the tangential plane T and the end point 502b is not behind the tangential plane T, the collision evaluation 700 continues to step 816, and determines that the collision occurred at the back face of the volumetric isosurface acting as a simulated aspect of the patient anatomy 600. At step 818, the collision evaluation 700 determines the direction of the normal vector N, which is the opposite of the direction that would be determined at step 814. From either step 814 or step 818, the collision evaluation 700 ends computation of the collision identification data at step 820.
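A compact sketch of the normal estimation and front/back-face test of Figure 12, again assuming a caller-supplied intensity lookup. The half-voxel offset h and the sign convention of the outward normal (the gradient or its negation, depending on whether the shape's interior is the brighter side) are left to the caller; this is an illustration, not the documented implementation.

```cpp
#include <cmath>
#include <functional>

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Gradient of the intensity field at contact point P, estimated by central
// differences of the neighboring samples; normalized to unit length.
Vec3 intensityGradient(const Vec3& p, double h,
                       const std::function<double(const Vec3&)>& intensityAt) {
    Vec3 g{(intensityAt({p.x + h, p.y, p.z}) - intensityAt({p.x - h, p.y, p.z})) / (2 * h),
           (intensityAt({p.x, p.y + h, p.z}) - intensityAt({p.x, p.y - h, p.z})) / (2 * h),
           (intensityAt({p.x, p.y, p.z + h}) - intensityAt({p.x, p.y, p.z - h})) / (2 * h)};
    const double len = std::sqrt(dot(g, g));
    if (len > 0.0) { g.x /= len; g.y /= len; g.z /= len; }
    return g;
}

// Front-face test: the stylus crossed from the positive side of the tangential
// plane (through P with normal N) to its negative side between the two frames.
bool touchedFrontFace(const Vec3& startPoint, const Vec3& endPoint,
                      const Vec3& contactP, const Vec3& normalN) {
    return dot(normalN, sub(startPoint, contactP)) > 0.0 &&
           dot(normalN, sub(endPoint, contactP)) < 0.0;
}
```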
[0071] When performing a collision evaluation 700, the haptics rendering logic of a haptic augmented and virtual reality system can simultaneously detect multiple simulated aspects of the patient anatomy 600, where each volumetric isosurface acting as a simulated aspect of the patient anatomy 600 can be defined by its individual ranges of voxel intensities. A transfer function can be defined for each haptic shape whereby a binary output value is assigned to every possible voxel intensity, and the set of transfer functions can be used at step 714 of Figure 11.
[0072] In graphics volume rendering techniques, piece-wise linear transfer functions are commonly used to specify color intensities and transparency. Similarly, transfer functions can be used to determine whether a voxel should be touchable or not based on its intensity. Figure 13 illustrates a comparison between graphics volume rendering and haptic transfer functions. The first transfer function 840 exemplifies opacity as a function of voxel intensities, as can be used for graphics volume rendering. Gradually increasing or decreasing values of opacity, represented by ramps, are allowed and commonly used. On the other hand, in a haptics transfer function, as shown in the second transfer function 860, only discrete binary outputs are permitted. In this way, voxels with intensities within the rectangular window defined by the transfer function will be regarded as belonging to the shape and will, therefore, be touchable. In other words, when the collision detection algorithm finds a voxel whose intensity is within the rectangular window, it will return TRUE, indicating a collision with the shape was detected.
[0073] There can be several advantages to using haptic transfer functions. For example, since they are similar to the ones commonly used in volume visualization techniques, a single transfer function may simultaneously specify graphics and haptics properties for each shape. Figure 13 shows how a haptic transfer function can be obtained from its graphics counterpart. As a result, all values with non-zero opacity will be touchable, and haptic parameters such as stiffness, static friction, and dynamic friction will be assigned to the corresponding voxels. A second advantage is that preprocessing steps such as segmentation and construction of polygonal meshes for each shape are no longer needed. In essence, the haptic transfer functions can resemble an operation of binary thresholding, by which different subsets may be determined from the original dataset according to their voxel intensities. Therefore, the specification of transfer functions can provide all the information needed to generate graphics and haptics visualization, operating only with the original (unmodified) 3D dataset.
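As a rough illustration of how a binary haptic window might be derived from a graphics transfer function, the sketch below treats every 8-bit intensity with non-zero opacity as touchable; the sampled-table representation, the single contiguous window, and the function name are assumptions made for the example.

```cpp
#include <utility>
#include <vector>

// Graphics transfer function sampled per 8-bit voxel intensity (index 0-255),
// each entry holding an opacity in [0, 1].
using OpacityTable = std::vector<double>;

// Derive a binary haptic window: every intensity whose opacity is non-zero is
// touchable. Returns the [min, max] intensity range of the window, or (-1, -1)
// if no intensity is touchable.
std::pair<int, int> hapticWindowFromOpacity(const OpacityTable& opacity) {
    int lo = -1, hi = -1;
    for (int i = 0; i < static_cast<int>(opacity.size()); ++i) {
        if (opacity[i] > 0.0) {
            if (lo < 0) lo = i;   // first touchable intensity
            hi = i;               // last touchable intensity seen so far
        }
    }
    return {lo, hi};
}
```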
B. Multi-Point Collision Detection
[0074] Multi-point collision detection can be used to provide haptic feedback to a user with respect to interactions between multiple points of the simulated surgical instrument and the patient anatomy. When the haptics rendering logic 44 uses multi-point collision detection, the simulated surgical instrument has a plurality of contact points. For example, Figure 9 illustrates the virtual aneurysm clip 400 of Figure 8, modified to have multiple contact points 400a, 400b, 400c, 400d, 400e, 400f, 400g and 400h. In multi-point collision detection, each contact point 400a, 400b, 400c, 400d, 400e, 400f, 400g and 400h of the aneurysm clip 400 can be evaluated for collisions with isosurfaces representing aspects of patient anatomy. The haptics rendering logic 44 can perform collision detection to determine when there is a collision between any one or more of the plurality of contact points of the simulated surgical instrument and at least one aspect of patient anatomy.
[0075] For example, as shown in Figure 14, for multi-point collision detection, the haptics rendering logic can collect two sets of parameters within two consecutive haptic frames, and use those collected parameters to determine whether a collision has occurred between a simulated surgical instrument 900, such as aneurysm clip 400, and an aspect of simulated patient anatomy 902. The surface of the aspect of simulated patient anatomy 902 can be represented by voxel intensities. In a first haptic frame, the simulated surgical instrument 900 is located outside of the aspect of simulated patient anatomy 902, and each of the plurality of contact points of the simulated surgical instrument 900 is at a start point 904a, 906a, or 908a. In the illustrated example, a first contact point is located at the tip 916 of the simulated surgical instrument 900 and has start point 904a, and a final contact point is located at the tail end 918 of the simulated surgical instrument 900 and has start point 908a. Between the first haptic frame and a second haptic frame, the virtual tool 900 moves in accordance with the hand movement of a user, which is tracked by the haptic device 22, such as a haptic stylus 27 as shown in Figures 1 and 3. At the second haptic frame, each of the plurality of contact points of the simulated surgical instrument 900 is at an end point. Specifically, the contact point that started at 904a is at end point 904b, the contact point that started at 906a is at end point 906b, and the contact point that started at 908a is at end point 908b.
[0076] The haptics rendering logic can perform a collision evaluation for each line segment 910, 912, and 914 representing the travel path of each contact point from a first haptic frame to a second haptic frame. When the haptics rendering logic determines that a collision occurred with respect to at least one of the contact points, it can provide the 3D coordinates of the first intersection point PI, the normal vector N of the surface at the intersection point PI, and the touched side (front or back) of the shape surface. The haptics rendering logic can also determine the forces associated with the collision.
[0077] In order to prevent the graphic fall-through illustrated in Figure 14, where a portion of the simulated surgical instrument 900 is within the aspect of simulated patient anatomy 902, the graphics rendering logic can update the virtual 3D scene to display the simulated surgical instrument 900 in accordance with the positions of each of the contact points that correspond to the occurrence of the collision. For example, as shown in Figure 14, intersection point PI is the point at which the first contact point of the simulated surgical instrument 900 would collide with the aspect of simulated patient anatomy 902. The haptics rendering logic 44 can determine that intermediate point 904' corresponds to the location of the contact point of the simulated surgical instrument 900 that started at start point 904a. Graphics rendering logic 46 can update the virtual 3D scene based on the intersection point PI, and/or optionally on any intermediate points, such as intermediate point 904', provided by the haptics rendering logic 44. As a result, the display of the virtual 3D scene generated by the graphics rendering logic 46 will depict the simulated surgical instrument 900 at the location of the collision.
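One plausible way to obtain intermediate positions such as intermediate point 904' is to place every contact point at the same parametric fraction of its own travel path as the fraction at which the first collision occurs; the helper below is an illustrative assumption rather than the system's documented method.

```cpp
struct Vec3 { double x, y, z; };

// Given the parametric fraction t (0 = first haptic frame, 1 = second haptic
// frame) at which the first collision occurs, place a contact point at the
// corresponding intermediate position along its own start-to-end path. Drawing
// the instrument at these positions keeps it resting on the surface instead of
// visually falling through the anatomy.
Vec3 intermediatePoint(const Vec3& start, const Vec3& end, double t) {
    return {start.x + t * (end.x - start.x),
            start.y + t * (end.y - start.y),
            start.z + t * (end.z - start.z)};
}
```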
[0078] Figure 15 illustrates one flow diagram of a collision evaluation 1000 that can be performed by the haptics rendering logic 44 of a haptic augmented and virtual reality system. The process starts at step 1002, and proceeds to step 1004, where it evaluates whether the 3D coordinates of each start point 904a, 906a, and 908a of a contact point of the simulated surgical instrument 900 are equal to the 3D coordinates of each end point 904b, 906b, and 908b of a contact point of the simulated surgical instrument 900. If the haptic device 22 has not moved in two successive haptic frames and there was no collision in the previous frame, then the start point and end point are equal, and the function returns a result of "false" at step 1006. If the haptic device 22 has moved, and any start point is not equal to the corresponding end point, then the process continues to step 1008, where the haptics rendering logic 44 can perform a rough bounding-box comparison between the line and the volume, proceeding to return a result of "false" at step 1006 if they are disjoint. If the result of step 1008 indicates that any of the line segments 910, 912, 914, representing the travel paths of the contact points, is inside the volume bounding box, the collision evaluation 1000 performs a collision determination loop 1010 for each point P on each line segment from the corresponding start point to the corresponding end point. At steps 1012, 1014, and 1016 in the collision determination loop 1010, the collision evaluation 1000 selects each point P along each line segment and determines whether a collision occurred. When no collision occurred on a given line segment, the process returns to step 1012 and performs steps 1012, 1014, and 1016 for each of the points on the next line segment. If no collision occurred for any point on any of the line segments, the process ends loop 1010 at step 1018 and returns a result of "false" at step 1006. At step 1016, when a collision is determined, the collision evaluation 1000 can proceed to step 1022, where it can compute the collision identification data for the intersection point PI in accordance with the process of Figure 12, including the direction and magnitude of the Normal vector N, as well as determining intermediate point P'. The collision evaluation 1000 then proceeds to step 1024 and returns a result of "true."
[0079] The order in which the collision evaluation 1000 performs the collision determination loop 1010 for each point along each line segment 910, 912, and 914 can start with the first contact point at the tip 916 of the simulated surgical instrument 900, such as the contact point having the start point 904a and the end point 904b, and conclude with the final contact point at the tail end 918 of the simulated surgical instrument 900, such as the contact point having the start point 908a and the end point 908b. In some examples, the collision evaluation 1000 may stop once a first intersection point PI is determined. However, such an implementation is most accurate when the simulated surgical instrument 900 is only moved side to side and forward (tip always preceding tail). When the simulated surgical instrument 900 may be moved backwards (tail preceding tip), or tangentially to the isosurface representing an aspect of patient anatomy 902, as may be the case when simulating procedures such as navigation of the contour of spinal pedicles, the order in which the contact points composing the virtual instrument are evaluated for collisions may play an important role in preventing fall-through effects.
[0080] Figure 16 illustrates a flow diagram of a collision evaluation 1100 that can be performed by the haptics rendering logic 44 in which the contact points are evaluated in both directions: in order from a tip of the simulated surgical instrument to a tail end of the simulated surgical instrument, and in order from the tail end of the simulated surgical instrument to the tip of the simulated surgical instrument. Use of collision evaluation 1100 can identify the true first intersection point PI, or multiple simultaneous intersection points PI.
[0081] As shown in Figure 16, the process starts at step 1102, and proceeds to step 1104, where it evaluates whether the 3D coordinates of each start point 904a, 906a, and 908a of a contact point of the simulated surgical instrument 900 are equal to the 3D coordinates of each end point 904b, 906b, and 908b of a contact point of the simulated surgical instrument 900. If the haptic device 22 has not moved in two successive haptic frames and there was no collision in the previous frame, then the start point and end point are equal, and the function returns a result of "false" at step 1106. If the haptic device 22 has moved, and any start point is not equal to the corresponding end point, then the process continues to step 1108, where the haptics rendering logic 44 can perform a rough bounding-box comparison between the line and the volume, proceeding to return a result of "false" at step 1106 if they are disjoint. If the result of step 1108 indicates that any of the line segments 910, 912, 914, representing the travel paths of the contact points, is inside the volume bounding box, the collision evaluation 1100 performs a collision determination loop 1110 for each point P on each line segment from the corresponding start point to the corresponding end point. The collision determination loop 1110 determines any intersection points by performing the evaluation both in order from the tip 916 of the simulated surgical instrument 900 to the tail end 918 of the simulated surgical instrument 900, and in order from the tail end 918 of the simulated surgical instrument 900 to the tip 916 of the simulated surgical instrument 900. At steps 1112-1120, the collision evaluation 1100 selects each point P along each line segment, from the tip 916 of the simulated surgical instrument 900 to the tail end 918 of the simulated surgical instrument 900, until a first collision point PI is determined on an intersecting line segment K. When no collision point PI is determined in the tip-to-tail direction, the collision evaluation 1100 proceeds to step 1118 and then back to step 1112. When a collision point is determined, the collision evaluation 1100 selects each point P along each line segment in the reverse order, from the tail end 918 of the simulated surgical instrument 900 to the tip 916 of the simulated surgical instrument 900, until a second collision point PI is determined. If the coordinates of the first collision point PI equal the coordinates of the second collision point PI, then there is only a single point at which the first collision occurred, such as shown in Figure 14. If no collision occurred for any point on any of the line segments, in either direction, the process ends loop 1110 at step 1114 and returns a result of "false" at step 1106. At step 1126, when a collision is determined, the collision evaluation 1100 can proceed to step 1128, where it can compute the collision identification data for each intersection point PI in accordance with the process of Figure 12, including the direction and magnitude of the Normal vector N, as well as determining intermediate point P'. If there are multiple intersection points PI, the direction and magnitude of the Normal vector N can be an average of the Normal vectors for each intersection point PI. The collision evaluation 1100 then proceeds to step 1130 and returns a result of "true."
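A sketch of the bidirectional traversal performed by loop 1110, where `firstHitOnSegment` stands in for the per-segment evaluation of Figure 11 and the contact-point segments are assumed to be ordered from tip to tail; the types and names are illustrative assumptions.

```cpp
#include <cstddef>
#include <functional>
#include <optional>
#include <vector>

struct Vec3 { double x, y, z; };

// Travel path of one contact point between two consecutive haptic frames.
struct Segment { Vec3 start, end; };

// Per-segment test: returns the first colliding point on a segment, if any.
using SegmentTest = std::function<std::optional<Vec3>(const Segment&)>;

struct BidirectionalHits {
    std::optional<Vec3> fromTip;   // first hit found scanning tip -> tail
    std::optional<Vec3> fromTail;  // first hit found scanning tail -> tip
};

// Evaluate the contact-point segments in both orders. If both hits exist and
// differ, the instrument touches the anatomy at two separate points at once,
// which is the situation the cursor-locking mechanism below guards against.
BidirectionalHits evaluateBothDirections(const std::vector<Segment>& tipToTail,
                                         const SegmentTest& firstHitOnSegment) {
    BidirectionalHits hits;
    for (std::size_t i = 0; i < tipToTail.size() && !hits.fromTip; ++i)
        hits.fromTip = firstHitOnSegment(tipToTail[i]);
    for (std::size_t i = tipToTail.size(); i > 0 && !hits.fromTail; --i)
        hits.fromTail = firstHitOnSegment(tipToTail[i - 1]);
    return hits;
}
```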
[0082] Multi-point collision detection can be used in simulating any surgical procedure. For example, in simulation of percutaneous needle insertion, the virtual Jamshidi needle can be represented by multiple contact points, and multi-point collision detection can be used to assist a user in proper placement of the needle during insertion. Additionally, subclavian central line placement, where the needle is advanced under and along the inferior border of the clavicle, is another procedure that can benefit from multi-point collision detection. The needle can be represented by multiple contact points, and multi-point collision detection can be used to indicate failure if a collision is detected between the simulated clavicle and any contact point prior to the needle being inserted in the subclavian vein.
Limiting Rotation of a Haptic Device
[0083] With at least some haptic devices 22, torque feedback cannot be provided to the haptic device. However, the rotation of a surgical instrument is a significant factor in some particular surgical circumstances, where the surgical instrument cannot be rotated due to the presence of various aspects of the patient anatomy. In order to limit the rotation of a simulated surgical instrument, cursor movement along the surface of the touched object with respect to the potential collision point can be frozen in order to prevent fall-through. As used herein, the term "cursor movement" means visual rotation of the simulated surgical instrument 900 as rendered by the graphics rendering logic 46 in the virtual 3D scene. Locking the cursor movement effectively locks the visual representation of the simulated surgical instrument rendered by the graphics rendering logic 46 in the virtual 3D scene with respect to the simulated aspect of patient anatomy 902 with which the simulated surgical instrument 900 has collided, thus preventing the potential fall-through.
[0084] Traversing the points in the virtual instrument from both ends, using collision evaluation 1100 as shown in Figure 16, may result in identification of at least two different intersection points PI, where the simulated surgical instrument 900 collides with a simulated aspect of patient anatomy 902 at the same time. If these points are detected to be at a distance larger than a predefined constant D (in millimeters) along the length of the simulated surgical instrument 900, it signals that the undesired effect of fall-through is starting to occur. In order to prevent the graphics rendering logic 46 from depicting the fall-through in the virtual 3D scene, the cursor locking mechanism can be implemented. Accordingly, the cursor movement can be locked when the absolute value of the coordinates of the first intersection point PI minus the second intersection point PI is greater than a predefined constant D (in millimeters) along the length of the simulated surgical instrument 900. When the conditions to lock the cursor are satisfied, it is also necessary to establish the geometric conditions under which the cursor will be unlocked. For that purpose, a normal vector must exist at intersection point PI. The current dot product between the normal vector N at intersection point PI and the contact plane is determined to provide the orientation of the simulated surgical instrument 900, and can be stored in a memory of the system and used to check whether the cursor can be unlocked.
[0085] Figure 17 is a flow chart showing a cursor movement locking process 1200 that can be implemented by the surgical station logic, such as the graphics rendering logic 46 or the haptics rendering logic 44. Cursor movement locking process 1200 starts at step 1202 and proceeds to step 1204, where it determines whether the cursor movement is locked. If the cursor movement is already locked, the cursor movement locking process 1200 ends at step 1206. If the cursor movement is not already locked, the process proceeds to step 1208, and determines whether the conditions for locking cursor movement are met, such as whether the absolute value of the coordinates of the first intersection point PI minus the second intersection point PI is greater than a predefined constant D (in millimeters) along the length of the simulated surgical instrument 900. If the conditions are not met, the cursor movement locking process 1200 ends at step 1206. If the conditions are met, the process proceeds to step 1210, where it determines whether a normal vector exists, such as the normal vector associated with intersection point PI as determined by collision evaluation 1100. If a normal vector does not exist, the cursor movement locking process 1200 ends at step 1206. If a normal vector does exist, the process proceeds to step 1212, where the contact plane, such as tangential plane T in Figure 14, is determined. The process then proceeds to step 1214, where the orientation of the simulated surgical instrument 900 is determined based on the dot product between the normal vector N at intersection point PI and the contact plane. The process then proceeds to step 1216, where a result of "true" is provided and the cursor movement is locked.
[0086] When the cursor is locked (the Lock variable is TRUE), the conditions to unlock the cursor may be checked whenever there is an updated orientation of the simulated surgical instrument. If the dot product based on the normal vector and the current orientation of the simulated surgical instrument is larger than the dot product stored at the time that the cursor movement was locked, the simulated surgical instrument has been rotated back, away from the direction that caused the locking, and the cursor can therefore be unlocked.
[0087] Figure 18 is a flow chart showing a cursor movement unlocking process 1300 that can be implemented to determine when the cursor movement unlocking conditions are met and the cursor movement can be unlocked. The process starts at step 1302 and proceeds to step 1304, where it determines whether the cursor movement is locked. If the cursor movement is not locked, the cursor movement unlocking process 1300 ends at step 1306. If the cursor movement is locked, the process proceeds to step 1308, and determines the orientation of the simulated surgical instrument 900 based on the original dot product between the normal vector N at intersection point PI and the contact plane at the time the cursor movement was locked. The process then proceeds to step 1310, where the current dot product is determined based on any movement of the simulated surgical instrument 900. At step 1312, the cursor movement unlocking process 1300 determines whether the current dot product is larger than the original dot product. If the current dot product is not larger than the original dot product, the process ends at step 1306. If the current dot product is larger than the original dot product, then at step 1314 the cursor movement unlocking process 1300 returns a result of "false" and the cursor movement is unlocked.
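A minimal sketch of the lock/unlock bookkeeping of Figures 17 and 18. As a simplification, the stored orientation measure here is the dot product between the surface normal at the first intersection point and the instrument's axis direction, standing in for the "normal vector and contact plane" formulation in the text; the constant D and all names are assumptions.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double distance(const Vec3& a, const Vec3& b) {
    const Vec3 d{a.x - b.x, a.y - b.y, a.z - b.z};
    return std::sqrt(dot(d, d));
}

struct CursorLock {
    bool locked = false;
    double lockedMeasure = 0.0;   // orientation measure stored at lock time

    // Lock when the two simultaneous intersection points are farther apart
    // along the instrument than the predefined constant D (in millimeters).
    void maybeLock(const Vec3& hitFromTip, const Vec3& hitFromTail,
                   const Vec3& normalAtHit, const Vec3& instrumentAxis, double D) {
        if (locked) return;
        if (distance(hitFromTip, hitFromTail) > D) {
            locked = true;
            lockedMeasure = dot(normalAtHit, instrumentAxis);
        }
    }

    // Unlock once the instrument has rotated back: the current orientation
    // measure exceeds the value stored when the lock was engaged.
    void maybeUnlock(const Vec3& normalAtHit, const Vec3& currentAxis) {
        if (locked && dot(normalAtHit, currentAxis) > lockedMeasure) locked = false;
    }
};
```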
Secondary Visualization
[0088] During various surgical procedures, it is not uncommon for a surgeon to view both the patient and secondary visual data. For example, a surgeon may be able to see the patient, and may also obtain secondary visual guidance from a screen providing ultrasound or fluoroscopic data.
[0089] In order to simulate such procedures, the display of a surgical station in a haptic augmented and virtual reality system may include a virtual 3D scene that has at least two viewing areas, where a first viewing area depicts a primary visual model, such as a virtual stereoscopic 3D model of a portion of the patient, and a second viewing area depicts a secondary visual model. For example, Figure 21 illustrates a display 1600, which can be, for example, a display screen 28 (as shown in Figures 1-4). Display 1600 has a first viewing area 1602 depicting a virtual stereoscopic 3D model of a portion of the patient 1612, specifically the head, neck, and upper torso. The second viewing area 1604 of the display 1600 depicts a virtual secondary visual model 1614. The secondary visual model 1614 as illustrated is virtual ultrasound imaging. In other examples, the secondary visual data 1614 may be virtual fluoroscopy. The graphics rendering logic 46 can continuously update the virtual 3D scene, including the primary visual model and the secondary visual model, based on data received from the haptics rendering logic 44. When one or more simulated surgical devices are also present in the virtual 3D scene, the graphics rendering logic 46 can also continuously update the virtual 3D scene to include the placement and movement of each simulated surgical device in both the primary visual model and the secondary visual model.
[0090] One example of a surgical procedure simulation where secondary visualization data can be useful is central venous catheterization, as shown in Figure 21. In such a procedure, the secondary visual data 1614 is preferably virtual ultrasound imaging. If the simulated surgical procedure is percutaneous needle insertion or subclavian central line placement, virtual fluoroscopy might be preferred for use as the secondary visual data 1614.
[0091] For the central venous catheterization illustrated in Figure 21, a haptic augmented and virtual reality system including at least one surgical station can be used to simulate a central venous catheterization (CVC) procedure, also known as central line placement. This percutaneous procedure consists of inserting a needle 1610 that will guide a catheter into a large vein in the neck, chest, or groin of a patient for administering medication or fluids, or for obtaining blood tests or cardiovascular measurements. When the internal jugular approach is used, as shown in Figure 21, ultrasound imagery 1614 can be used to provide visual guidance for the surgeon to recognize the internal anatomical structure of the neck, as well as the position and orientation of the needle being inserted. A characteristic collapse of the vein, produced when applying slight pressure to the skin with the ultrasound transducer 1608, helps the user to differentiate the vein from the artery, as the latter does not collapse.
[0092] With reference to Figure 2, the at least one surgical station of the haptic augmented and virtual reality system can have at least two haptic devices 22. A first haptic device 22 can be driven by the haptics rendering logic 44, and can track a user's hand movements and provide force feedback to the user associated with a first simulated surgical instrument, such as an ultrasound transducer 1608 (shown in Figure 21). A second haptic device 22 can also be driven by the haptics rendering logic 44, and can track a user's hand movements and provide force feedback to the user associated with a second simulated surgical instrument, such as needle 1610 (shown in Figure 21). The haptics rendering logic 44 can use collision detection, such as multi-point collision detection described above with reference to Figure 14, to determine any collisions between either the ultrasound transducer 1608 or the needle 1610 and any of the volumetric isosurfaces representing aspects of the patient anatomy, such as the multiple layers of soft and hard tissues. Force feedback can be provided to the user through either the first or second haptic device, as appropriate, based on the collision detection performed by the haptics rendering logic 44.
[0093] Volume data pre-processing 40 can receive 2D image data, for example, generated by an input data source 41, which can be a CT scanner, and can generate 3D models that can be used by the graphics rendering logic 46 to generate the virtual 3D scene.
[0094] There are at least two types of volumes that can be relevant in this example: one is used as the input of the deformable ultrasound rendering pipeline, and the other is used for CUDA-accelerated volumetric rendering and provides a high-quality volume display for haptic interaction. Both can be obtained from the scanned CT image. The original volume can be obtained from a dataset of CT DICOM images. This volume can be used to render a natural echoic image of the anatomical structure. Resolution of the imaging can depend on the capabilities of the CT scanner and the protocol followed for the particular surgical procedure. CT imaging can be recorded at 16 bits and converted to 8 bits to reduce GPU (Graphical Processing Unit) memory and computational requirements. Gray-scale values from 0 to 255 can be used to correspond to the density of the tissue. The distribution of values in the gray scale can be used to provide a realistic appearance of the ultrasound image rendering. The ultrasound can be displayed using a customized transfer function that properly displays the pixel information from this distribution. Each pixel can be correlated to a specific tissue density. For example, the density of skin can be found around value 45, compared to bone around value 200. In order to better visualize veins and arteries, a CT with contrast can be used. However, the contrast dye can affect the opacity of the vessels in the CT, making them look the same as bone. In order to overcome that problem, manual segmentation can be performed to separate specific anatomy: bone, vein, artery, etc. Besides the original volume, a customized mask volume can also be retrieved, in which the distinct individual anatomical structures are assigned labels during the segmentation process. This information can be used to determine what anatomy is interacting with the transducer 1608 and whether it will deform when pressure is applied with the virtual transducer 1608. The mask volume histogram can have a different contour than that of the original volume, as voxels of the same type can be aggregated to a specific intensity with opacity coefficients. An open source toolkit named VolView, which specializes in visualizing and analyzing 3-D medical and scientific data, can be used to retrieve the volumes. The two volumes can be imported into the virtual scene and precisely collocated with the previously mentioned CUDA volume for further image rendering.
[0095] With respect to rendering virtual ultrasound images, it is noted that when using a convex curvilinear array transducer in an actual ultrasound-guided procedure, the piezoelectric material at the bottom of the transducer emits sector-shaped beams, which form a plane. For purposes of simulation, as shown in Figure 19, the plane 1404 can be interpolated by a set of clipped slices from the view volume. The slice stack can be aligned with the sector plane. The virtual perspective camera node 1402 can be mounted right above the plane 1404, facing downward. The rotation of the camera can be carefully defined so that the lateral border of the ultrasound image is consistent with the lateral border of the transducer 1406. The margin between the near and far distance attributes of the camera can define the depth of the clipped image stack. The depth can be set to an optimal value to guarantee the resolution of the image, as it can fuse the consecutive slices and merge all the clipped isosurfaces into one single compounded image. The height angle attribute can be set to scale the field-of-view region for the image. After this configuration, the camera 1402 can be transformed to correspond to the placement of the virtual transducer 1406 in the global view, generating a 2D image of the clipped volume through perspective rendering.
[0096] In order to reduce the complexity of the GPU computation code, the Open Inventor Graphics SDK can be utilized to render the perspective that the camera captures. Open Inventor is an advanced object-oriented toolkit for OpenGL programming. The lower-level atomic instructions that are sent to the GPU devices are encapsulated into node classes. The nodes, including those for fundamental manipulation (transformation, texture generation and binding, lighting, etc.), are organized into a hierarchical tree structure (also known as a scene graph), and work together to render the "virtual world." The ultrasound image rendering of the soft tissue mainly includes the development of two classes derived from Open Inventor nodes: SoTissueDeformationShader and SoUltrasoundShader. Both classes contain internal GLSL shader implementations for photorealistic ultrasound image rendering. SoTissueDeformationShader can store clipped image slices from both the original volume and the segmented volume in a corresponding texture unit group node. These two types of images are passed to a GLSL shader for vertex and fragment rendering in parallel. The shader uses texture coordinates to access the pixels within both images and replaces the fragment color of the current pixel with that of the pixel at the computed position after deformation. The output of the shader can be used to render a rectangle shape under the entire scene as the input to the second developed node, SoUltrasoundShader. SoUltrasoundShader specifically deals with the ultrasound effect rendering. Because the previously deformed tissue image is already rendered onto this geometry, the geometry is put under a scene texture node, which is also passed to the internal GLSL shader to render the ultrasound effect. The scene texture node is necessary, instead of the ordinary texture node, because there is no way to read out the content inside an active texture unit due to an Open Inventor limitation; thus, this texture buffer cannot be transferred to the shader directly. After the ultrasound effect rendering, the texture can eventually be visualized onto the quad in the virtual scene as the final ultrasound image.
[0097] In summary, the original clipped images from both volumes can go through a two-phase rendering pipeline: a first phase for tissue deformation rendering in particular, and a second phase in which the internal shader uses the rendered scene from the first phase and generates the resulting ultrasound image. Under this framework, the haptics rendering logic 44 can determine deformation and ultrasound effects that can be implemented under the corresponding shader nodes.
[0098] As the initial step for determining deformation, the haptics rendering logic 44 can include Young's modulus properties for different tissues, which indicate the ability of a material to resume its original size and shape after being subjected to a deforming force or stress. As shown in Figure 20, a mass-spring system can be used to represent deformation of soft tissue. As illustrated, the bottom line of the 2-D image can be considered the farthest end of the field-of-view region that the force transmission can reach, and thus no deformation is applied there between the first image 1500 and the second image 1502. The clipped plane of the upper body volume can be divided into parallel columns 1504a, 1506a, and 1508a, as shown in the first image 1500. The width of each column can be equal to the dimension of one single pixel, and each column contains a stack of pixels. For example, column 1506a contains a stack of pixels, each having an initial length L01, L02, L03 ... L0n, that results in a column height of L0. Each pixel within a column can be treated as an equivalent spring. Under a contact force F0 at one single point on the skin surface, the spring system can be compressed, and the deformation extent of every point inside the tissue can be represented by a computed coefficient with respect to the original length of that pixel. According to Hooke's Law, each pixel can have an output for the shrunk length that depends on the Young's modulus factor as its physical metric. The shrunk lengths of the pixels L01', L02', L03', L04' ... L0n' result in a shrunk length L0' of the column 1506b.
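A minimal sketch of this column-wise mass-spring compression. The per-pixel strain is taken as F0 divided by that pixel's Young's modulus (with units and cross-sectional area normalized away), and the coordinate accumulation corresponds to the pixel substitution map discussed below; these are illustrative assumptions, not the patent's equations (1)-(5).

```cpp
#include <cstddef>
#include <vector>

// One pixel of a column in the clipped ultrasound plane, modeled as a spring.
struct PixelSpring {
    double restLength;      // L0i, the undeformed pixel length
    double youngsModulus;   // stiffness of the tissue at this pixel
};

// Compress a single column under a surface contact force f0: softer tissue
// (smaller modulus) shortens more. Returns the deformed length of each pixel;
// summing them gives the new column height L0'.
std::vector<double> compressColumn(const std::vector<PixelSpring>& column, double f0) {
    std::vector<double> deformed;
    deformed.reserve(column.size());
    for (const PixelSpring& p : column) {
        double strain = f0 / p.youngsModulus;   // Hooke-like strain coefficient
        if (strain > 1.0) strain = 1.0;         // do not let a pixel invert
        deformed.push_back(p.restLength * (1.0 - strain));
    }
    return deformed;
}

// New vertical coordinate of pixel i: the accumulated deformed lengths of all
// pixels below it (index 0 is the bottom, fixed end of the column).
std::vector<double> newPixelCoordinates(const std::vector<double>& deformed) {
    std::vector<double> y(deformed.size());
    double acc = 0.0;
    for (std::size_t i = 0; i < deformed.size(); ++i) { acc += deformed[i]; y[i] = acc; }
    return y;
}
```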
[0099] The surface contact force F0 and an internal distributed force can be determined by the haptics rendering logic 44. The vector of the surface contact force can be provided based upon the user's manipulation of a haptic device 22, while the surface deformation curve can obey a normal distribution function. It can be assumed that the force along the column traverses the isotropic tissue and thus remains constant. Considering the actual anatomical structure on the intersection plane, it can also reasonably be assumed that the skin surface will remain a smooth contour obeying the 2-D normal distribution function. Hence, the deformation length at that particular point can be used to calculate a compensated force based on the overall elasticity coefficient, and the deformation of each pixel along the column below can be calculated accordingly as follows:
[Equations (1)-(5), which define the per-pixel deformation and the resulting pixel substitution map, appear only as an image in the original document and are not reproduced here.]
[00100] The above algebraic equation sets (1)-(5) describe the computation of the pixel substitution map based on the calculated surface pressure F0 above. With the original coordinates of the pixels and the deformation coefficient, the new vertical coordinate of a given pixel can be calculated by accumulating the lengths of all the pixels below that point.
[00101] Four acoustic characteristics can be compounded into the eventual rendered image: reflection, absorption, radial blur, and Perlin-noise-based speckle. A GPU-based ray casting algorithm for rendering the image has been implemented in the OpenGL Shading Language (GLSL) and can be used.
[00102] Reflection defines the ultrasound wave that echoes back to the transducer. The gradient of the acoustic impedances between two adjacent pixels at the incident interface is calculated. This value is then used to compute the light intensity transmitted back to the transducer.
[00103] With respect to absorption, as the ultrasound wave transmits through the tissue, its energy is absorbed by the surrounding tissue, so the light intensity of the wave is attenuated to varying extents. This generates shadow artifacts in the ultrasound image.
[00104] Radial blur consists of two aspects: in the radial direction, the light intensity attenuates exponentially, while in the tangential direction, the gray values of the pixels are smoothed to produce a blurred effect that simulates the inherent coherent imaging of a real ultrasound machine.
[00105] With respect to speckle, a Perlin-based noise pattern is designed and precomputed onto the ultrasound image. The seed of the pseudo-random noise generator is related to the real-time transducer position and orientation, and to the clock time.
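To make the interplay of these effects concrete, the sketch below marches down one scanline of tissue impedance samples and accumulates the echo returned to the transducer from impedance gradients, with exponential absorption and a caller-supplied multiplicative speckle term; the constants, sampling scheme, and names are illustrative assumptions, not the GLSL implementation described above.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// March down one scanline of acoustic-impedance samples and accumulate the
// echo intensity returned from each depth: reflection from impedance
// gradients, exponential absorption with depth, and a multiplicative speckle
// factor (e.g. Perlin noise seeded from the transducer pose and clock time).
std::vector<double> renderScanline(const std::vector<double>& impedance,
                                   const std::vector<double>& speckle,
                                   double absorptionPerSample) {
    std::vector<double> echo(impedance.size(), 0.0);
    double beam = 1.0;                                       // outgoing beam intensity
    for (std::size_t i = 1; i < impedance.size(); ++i) {
        const double z1 = impedance[i - 1], z2 = impedance[i];
        const double sum = z1 + z2;
        const double r = (sum != 0.0) ? (z2 - z1) / sum : 0.0;  // reflection coefficient
        beam *= std::exp(-absorptionPerSample);              // attenuation with depth
        const double s = (i < speckle.size()) ? speckle[i] : 1.0;
        echo[i] = beam * r * r * s;                          // echo returned from depth i
    }
    return echo;
}
```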
[00106] Based on the foregoing, the volume data pre-processing 40 can provide the generated 3D models to the 3D application program interface 39, which can be received and used by the graphics rendering logic 46 to generate the virtual 3D scene. The primary visual model displayed in the first viewing area 1602 can include a stereoscopic view of the 3D virtual upper torso and neck, including skin, muscle, bone, organs, veins, and arteries, rendered using CUDA. A GPU-based ray-casting algorithm can be executed by the volume data pre-processing 40 based on the 3D model generated from the CT data, and the 3D application program interface 39 can render a dynamic virtual ultrasound image using GLSL. Visual effects of ultrasonic reflection and absorption, as well as soft tissue deformation and vein vessel collapse, can be displayed in the second viewing area 1604 to provide an ultrasound-guided simulation.
[00107] During simulation of a central venous catheterization (CVC) procedure, as shown in Figure 21, a user can place a virtual ultrasound transducer 1608 on the surface of a patient's neck to assess the position and the appearance of the internal jugular vein and the carotid artery next to it. Based on the collision detection performed by the haptics rendering logic and a spring-damper model, as discussed with respect to Figure 20, the haptic library 39 can render the perceived deformation at the contact between the virtual ultrasound transducer 1608 and the skin.
[00108] Because the internal jugular (IJ) vein is compressible, while the carotid artery is not, a user can manipulate the virtual ultrasound transducer 1608 through its associated haptic device 22 in order to apply a small amount of force on the skin and view the characteristic compression of the vein in the second viewing area 1604 of the display 1600. Depending on the positioning of the virtual ultrasound transducer 1608, the vessels will appear differently in the resulting image. The alignment of the needle 1610 will also affect the portion of the needle displayed in the ultrasound image in the second viewing area 1604. For example, when the needle is inserted perpendicular to the ultrasound beam, only a portion of the needle will be visualized, while in a parallel alignment the entire course of the needle can be visualized during the traversal. After the targeted placement spot is determined, the user can manipulate the virtual needle 1610 through manipulation of its associated haptic device 22 to insert the needle 1610 into the patient.
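One simple way to reason about this alignment effect is to treat the imaging plane as a thin slab and compute what fraction of the needle shaft lies within it: a needle aligned with the beam plane is almost fully visible, while a perpendicular one shows only a short cross-section. The sketch below is a geometric illustration under that assumption, not part of the described rendering pipeline.

```cpp
#include <algorithm>
#include <cmath>
#include <utility>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Fraction of the needle (hub to tip) lying inside a thin slab of half-thickness
// halfThickness around the ultrasound plane (point planePoint, unit normal
// planeNormal). Signed distances vary linearly along the shaft, so the visible
// part is the clipped parametric interval where |distance| <= halfThickness.
double visibleNeedleFraction(const Vec3& hub, const Vec3& tip,
                             const Vec3& planePoint, const Vec3& planeNormal,
                             double halfThickness) {
    const double dHub = dot({hub.x - planePoint.x, hub.y - planePoint.y, hub.z - planePoint.z},
                            planeNormal);
    const double dTip = dot({tip.x - planePoint.x, tip.y - planePoint.y, tip.z - planePoint.z},
                            planeNormal);
    const double denom = dTip - dHub;
    if (std::fabs(denom) < 1e-12)                        // needle parallel to the plane
        return std::fabs(dHub) <= halfThickness ? 1.0 : 0.0;
    double a = (-halfThickness - dHub) / denom;          // slab entry/exit parameters
    double b = ( halfThickness - dHub) / denom;
    if (a > b) std::swap(a, b);
    const double t0 = std::max(0.0, a);
    const double t1 = std::min(1.0, b);
    return std::max(0.0, t1 - t0);                       // visible fraction of the shaft
}
```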
Use of Systems
[00109] Haptic augmented and virtual reality systems as described above can be used in various applications. For example, such systems can be used for training, where residents can develop the kinesthetic and psychomotor skills required for conducting surgical procedures. Such systems can also be used for pre-surgical planning, particularly for challenging cases of patients with abnormal anatomy.
[00110] When used for planning surgical procedures to be carried out in an operating room, haptic augmented and virtual reality systems of the present technology can be integrated with existing hospital information systems. An operating room (OR) workspace can be mapped onto and simulated by the workspace of a haptic augmented and virtual reality system by using appropriate coordinate transformations. While the OR is usually tracked optically, the tracking can be electromagnetic and physical when using a haptic augmented and virtual reality system. The graphic display and haptic feedback provided to the user can be based not only on the simulated patient anatomy, but also on the simulated OR workspace. For example, with reference to Figure 2, the haptics rendering logic 44 can cause a haptic device 22 to provide force feedback to the user based on aspects of the OR workspace that have been mapped into the workspace of the haptic augmented and virtual reality system.
[00111] From the foregoing, it will be appreciated that although specific examples have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit or scope of this disclosure. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to particularly point out and distinctly claim the claimed subject matter.

Claims

CLAIMS What is claimed is:
1. A haptic augmented and virtual reality system including a surgical simulation station, wherein the surgical simulation station comprises: a first haptic device driven by haptics rendering logic, wherein the first haptic device tracks a user's hand movements and provides force feedback to the user associated with a first simulated surgical instrument having a plurality of contact points; and a display driven by graphics rendering logic, wherein the graphics rendering logic generates a virtual 3D scene that is provided to the user on the display, the virtual 3D scene including models of at least one aspect of patient anatomy and the first simulated surgical instrument; wherein the haptics rendering logic receives data from the haptic device and determines the position and orientation of the simulated surgical instrument, receives data from the graphics rendering logic and determines the location of the at least one aspect of patient anatomy, and performs collision detection to determine when there is a collision between any one or more of the plurality of contact points of the first simulated surgical instrument and the at least one aspect of patient anatomy; wherein the haptics rendering logic causes the first haptic device to provide the force feedback to the user based on the collision detection, and the graphics rendering logic continuously updates the virtual 3D scene based on data received from the haptics rendering logic that includes the collision detection.
2. The haptic augmented and virtual reality system of claim 1, wherein the haptics rendering logic performs collision detection for each of the plurality of contact points in order from a tip of the simulated surgical instrument to a tail end of the simulated surgical instrument, and in order from the tail end of the simulated surgical instrument to the tip of the simulated surgical instrument.
3. The haptic augmented and virtual reality system of claim 1, wherein the graphics rendering logic locks visual rotation of the simulated surgical instrument generated in the virtual 3D scene when locking conditions are met in order to prevent fall-through.
4. The haptic augmented and virtual reality system of claim 3, wherein the graphics rendering logic unlocks visual rotation of the simulated surgical instrument generated in the virtual 3D scene when unlocking conditions are met.
5. The haptic augmented and virtual reality system of claim 1, further comprising: a second haptic device driven by the haptics rendering logic, wherein the second haptic device tracks a user's hand movements and provides force feedback to the user associated with a second simulated surgical instrument having a plurality of contact points, and the virtual 3D scene includes a 3D eye model and 3D models of the first and second surgical instruments; and an eye movement simulator that receives input from the haptics rendering logic and simulates eye movement of the 3D eye model during surgical simulation by providing eye movement simulation data to the haptics rendering logic and the graphics rendering logic; wherein the graphics rendering logic continuously updates the virtual 3D scene including the 3D eye model and the 3D models of the first and second surgical instruments based on data received from the haptics rendering logic and the eye movement simulator, and the haptics rendering logic causes at least one of the haptic devices to provide force feedback to the user based on data received from the eye movement simulator.
6. The haptic augmented and virtual reality system of claim 1, further comprising: an adjustable indicator located on the system and set by a user that sends a signal to the haptics rendering logic and adjusts the range level of the force feedback.
7. The haptic augmented and virtual reality system of claim 1, wherein the virtual 3D scene comprises at least two viewing areas, including a first viewing area that depicts a primary visual model and a second viewing area that depicts a secondary visual model, and wherein the graphics rendering logic continuously updates the primary visual model and the secondary visual model based on data received from the haptics rendering logic.
8. The haptic augmented and virtual reality system of claim 7, wherein the secondary visual model comprises virtual ultrasound imaging or virtual fluoroscopy.
9. The haptic augmented and virtual reality system of claim 1, further comprising an operating room workspace mapped onto a workspace of the system for surgical planning, wherein the haptics rendering logic causes the first haptic device to provide force feedback to the user based on aspects of the operating room workspace.
10. A haptic augmented and virtual reality system including a surgical simulation station, wherein the surgical simulation station comprises: a first haptic device driven by haptics rendering logic, wherein the first haptic device tracks a user's hand movements and provides force feedback to the user associated with a first simulated surgical instrument; and a display driven by graphics rendering logic, wherein the graphics rendering logic generates a virtual 3D scene that is provided to the user on the display, the virtual 3D scene having at least two viewing areas, where a first viewing area depicts a primary visual model and a second viewing area depicts a secondary visual model; wherein the graphics rendering logic continuously updates the primary visual model and the secondary visual model based on data received from the haptics rendering logic.
11. A haptic augmented and virtual reality system including a surgical simulation station, wherein the surgical simulation station comprises: a first haptic device driven by haptics rendering logic, wherein the first haptic device tracks a user's hand movements and provides force feedback to the user associated with a first simulated surgical instrument; a second haptic device driven by haptics rendering logic, wherein the second haptic device tracks a user's hand movements and provides force feedback to the user associated with a second simulated surgical instrument; graphics rendering logic that generates a virtual 3D scene including a 3D eye model and 3D models of the first and second surgical instruments; and an eye movement simulator that receives input from the haptics rendering logic and simulates eye movement of the 3D eye model during surgical simulation by providing eye movement simulation data to the haptics rendering logic and the graphics rendering logic; wherein the graphics rendering logic continuously updates the virtual 3D scene including the 3D eye model and the 3D models of the first and second surgical instruments based on data received from the haptics rendering logic and the eye movement simulator, and the haptics rendering logic causes at least one of the haptic devices to provide force feedback to the user based on data received from the eye movement simulator.
12. The haptic augmented and virtual reality system of claim 11, wherein the eye movement simulator comprises logic that, when executed by a processor of the system, simulates movement of a human eye by: applying a spherical joint to the 3D eye model; applying swing and twist limitations to the spherical joint; receiving data from the haptics rendering logic, the data including intersection force data based on the first haptic device, and steadying force data based on the second haptic device; applying a fulcrum force to the first haptic device; applying a reaction force to the 3D eye model; determining a resultant eye rotation, based on the fulcrum force applied to the first haptic device, the reaction force applied to the 3D eye model, and the steadying force data; and applying the determined resultant eye rotation to the 3D eye model.
13. The haptic augmented and virtual reality system of claim 12, wherein the reaction force applied to the 3D eye model further includes a voluntary movement force.
14. A haptic augmented and virtual reality system including a surgical simulation station, wherein the surgical simulation station comprises: haptics rendering logic that directs a haptic device to generate force feedback to a user within a range; and an adjustable indicator located on the system and set by a user that sends a signal to the haptics rendering logic and adjusts the range level of the force feedback.
PCT/US2014/068138 2013-12-02 2014-12-02 Improvements for haptic augmented and virtual reality system for simulation of surgical procedures WO2015084837A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361910806P 2013-12-02 2013-12-02
US61/910,806 2013-12-02

Publications (1)

Publication Number Publication Date
WO2015084837A1 true WO2015084837A1 (en) 2015-06-11

Family

ID=53274028

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/068138 WO2015084837A1 (en) 2013-12-02 2014-12-02 Improvements for haptic augmented and virtual reality system for simulation of surgical procedures

Country Status (1)

Country Link
WO (1) WO2015084837A1 (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6704694B1 (en) * 1998-10-16 2004-03-09 Massachusetts Institute Of Technology Ray based interaction system
US20090009492A1 (en) * 2001-07-16 2009-01-08 Immersion Corporation Medical Simulation Interface Apparatus And Method
US20120109152A1 (en) * 2002-03-06 2012-05-03 Mako Surgical Corp., System and method for using a haptic device as an input device
US20070035511A1 (en) * 2005-01-25 2007-02-15 The Board Of Trustees Of The University Of Illinois. Compact haptic and augmented virtual reality system
US20090253109A1 (en) * 2006-04-21 2009-10-08 Mehran Anvari Haptic Enabled Robotic Training System and Method
US20080010706A1 (en) * 2006-05-19 2008-01-10 Mako Surgical Corp. Method and apparatus for controlling a haptic device
US20080143895A1 (en) * 2006-12-15 2008-06-19 Thomas Peterka Dynamic parallax barrier autosteroscopic display system and method
US20110081001A1 (en) * 2007-12-23 2011-04-07 Oraya Therapeutics, Inc. Methods and devices for orthovoltage ocular radiotherapy and treatment planning
US20110238079A1 (en) * 2010-03-18 2011-09-29 SPI Surgical, Inc. Surgical Cockpit Comprising Multisensory and Multimodal Interfaces for Robotic Surgery and Methods Related Thereto
US20110251483A1 (en) * 2010-04-12 2011-10-13 Inneroptic Technology, Inc. Image annotation in image-guided medical procedures

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NEUMANN.: "Virtual reality vitrectomy simulator.", DISS., 2000, Retrieved from the Internet <URL:ftp://II2.ai.mit.edu/pub/cimit/thesis.pdf> *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018035310A1 (en) * 2016-08-19 2018-02-22 The Penn State Research Foundation Dynamic haptic robotic trainer
US11373553B2 (en) 2016-08-19 2022-06-28 The Penn State Research Foundation Dynamic haptic robotic trainer
US11272988B2 (en) 2019-05-10 2022-03-15 Fvrvs Limited Virtual reality surgical training systems
WO2020229890A1 (en) * 2019-05-10 2020-11-19 Fvrvs Limited Virtual reality surgical training systems
US11839432B2 (en) 2019-05-10 2023-12-12 Fvrvs Limited Virtual reality surgical training systems
CN110703920B (en) * 2019-10-15 2022-03-08 南京邮电大学 Showpiece touch telepresence feedback equipment
CN110703920A (en) * 2019-10-15 2020-01-17 南京邮电大学 Showpiece touch telepresence feedback equipment
CN110910513A (en) * 2019-12-10 2020-03-24 上海市精神卫生中心(上海市心理咨询培训中心) Augmented reality system for assisting examinee in adapting to magnetic resonance scanning environment
CN110910513B (en) * 2019-12-10 2023-04-14 上海市精神卫生中心(上海市心理咨询培训中心) Augmented reality system for assisting examinee in adapting to magnetic resonance scanning environment
CN112002016A (en) * 2020-08-28 2020-11-27 中国科学院自动化研究所 Continuous curved surface reconstruction method, system and device based on binocular vision
CN112002016B (en) * 2020-08-28 2024-01-26 中国科学院自动化研究所 Continuous curved surface reconstruction method, system and device based on binocular vision
DE102021132665A1 (en) 2021-12-10 2023-06-15 Hightech Simulations Gmbh OPERATING SYSTEM WITH HAPTICS
CN114387839A (en) * 2022-01-19 2022-04-22 上海石指健康科技有限公司 Force feedback-based biological tissue simulation method and device and electronic equipment
CN115272379A (en) * 2022-08-03 2022-11-01 杭州新迪数字工程系统有限公司 Projection-based three-dimensional grid model outline extraction method and system
CN115272379B (en) * 2022-08-03 2023-11-28 上海新迪数字技术有限公司 Projection-based three-dimensional grid model outline extraction method and system

Similar Documents

Publication Publication Date Title
WO2015084837A1 (en) Improvements for haptic augmented and virtual reality system for simulation of surgical procedures
US10108266B2 (en) Haptic augmented and virtual reality system for simulation of surgical procedures
US9870446B2 (en) 3D-volume viewing by controlling sight depth
US11183296B1 (en) Method and apparatus for simulated contrast for CT and MRI examinations
US9491443B2 (en) Image processing method and image processing apparatus
US20160299565A1 (en) Eye tracking for registration of a haptic device with a holograph
Barnouin et al. A real-time ultrasound rendering with model-based tissue deformation for needle insertion
Law et al. Ultrasound image simulation with gpu-based ray tracing
Starkov et al. Ultrasound simulation with animated anatomical models and on-the-fly fusion with real images via path-tracing
Tanaiutchawoot et al. A path generation algorithm for biopsy needle insertion in a robotic breast biopsy navigation system
Eagleson et al. Visual perception and human–computer interaction in surgical augmented and virtual reality environments
Williams A method for viewing and interacting with medical volumes in virtual reality
Kim et al. Fast surface and volume rendering based on shear-warp factorization for a surgical simulator
Sutherland et al. Towards an augmented ultrasound guided spinal needle insertion system
Kirmizibayrak Interactive volume visualization and editing methods for surgical applications
Leu et al. Virtual Bone Surgery
Harders et al. New paradigms for interactive 3D volume segmentation
EP4231246A1 (en) Technique for optical guidance during a surgical procedure
Rianto A Virtual Reality Based Surgical Simulation as an Alternative of Halal Surgical Trainings and Better Surgical Planning
Rizzi Volume-based graphics and haptics rendering algorithms for immersive surgical simulation
Hong et al. Virtual angioscopy based on implicit vasculatures
EP3696650A1 (en) Direct volume haptic rendering
Sabater et al. Algorithm for haptic rendering of reconstructed 3D solid organs
Wang et al. Dynamic linear level octree-based volume rendering methods for interactive microsurgical simulation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14868387

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14868387

Country of ref document: EP

Kind code of ref document: A1