WO2007059477A2 - Virtual patient simulator - Google Patents

Virtual patient simulator

Info

Publication number
WO2007059477A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
medical
data
image data
scenario
Prior art date
Application number
PCT/US2006/060853
Other languages
English (en)
Other versions
WO2007059477A3 (fr)
Inventor
Danny L. Murphy
Alan Shih
Phillip C. Shum
Original Assignee
The UAB Research Foundation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The UAB Research Foundation
Publication of WO2007059477A2
Publication of WO2007059477A3

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/50 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining for simulation or modelling of medical disorders
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 - Indexing scheme for image generation or computer graphics
    • G06T 2210/41 - Medical

Definitions

  • Figure 1 is an exemplary graphical user interface.
  • Figure 2 is an exemplary display system.
  • Figure 3 is a flowchart illustrating one embodiment of a virtual patient image display data processing method.
  • Figure 4 is a flowchart illustrating an exemplary process for the virtual patient image display data processing method of Fig. 3.
  • Figure 5 is an exemplary operating environment.
  • Ranges may be expressed herein as from "about" one particular value, and/or to "about" another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent "about," it will be understood that the particular value forms another embodiment. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint.
  • Virtual Patient on a display system for example a stereoscopic display system configured to mimic a patient bed.
  • this system can use data representing human anatomy, for example and not meant to be limiting, data from the National Institutes of Health Visible Human Project.
  • the virtual patient can be an all-inclusive, mobile display system, for full-scale human anatomy that can be placed in existing lab spaces.
  • the simulator can be controlled through a graphical user interface (GUI).
  • In virtual reality (VR), the computer creates a sensory environment for the user to experience which may be, in one aspect, multisensory (although this is not essential), and the computer creates a sense of reality in response to user inputs or manipulations.
  • In one exemplary aspect, the disclosed system can utilize at least two types of VR: immersive and non-immersive.
  • Immersive VR creates the illusion that the user is actually in a different environment.
  • the system accomplishes this through the use of such devices as Head Mounted Displays (HMDs), earphones, and input devices such as gloves or wands.
  • Exemplary degrees of freedom (DOFs) include, without limitation: X, Y, Z, roll, pitch, and yaw.
  • Non-immersive VR creates an environment that is differentiable from the user's surrounding environment. It does not give the illusion that the user is transported to another world.
  • Non-immersive VR works by creating a 3-dimensional image and surround sound through the use of stereo projection systems, computer monitors, and/or stereo speakers.
  • Non-immersive VR can be run from a personal computer without added hardware.
  • movement in Immersive VR can be realized by a system through the use of optical, acoustical, magnetic, or mechanical hardware called trackers.
  • the input devices have as many of these trackers as possible, so that movement can be more accurately represented.
  • virtual gloves can have up to 3 trackers for each index, and more for the palm and wrist, so that the user can grab and press objects.
  • the trackers can be equipped with positioning sensors that tell a computer which direction the input device is facing and how it is tilted in all directions. This gives a sensor with six degrees of freedom.
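  • As a trivial illustration (types and units are assumed, not specified in the patent), a single sample reported by such a six-degree-of-freedom tracker can be represented as:

      // One 6-DOF tracker sample: position plus orientation, covering the six
      // degrees of freedom listed above (X, Y, Z, roll, pitch, and yaw).
      struct TrackerPose {
          double x, y, z;           // position (meters, assumed units)
          double roll, pitch, yaw;  // orientation (radians, assumed units)
      };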
  • Eye-tracking can be combined with HMDs. This can allow, for example, surgeons to move their eyes to the part of an image they want to enhance.
  • Another example of an output device that can be used in the present system is shuttered glasses.
  • This device updates an image to each eye every other frame, with the shutter closed on the other eye.
  • Shuttered glasses require a very high frame rate in order to keep the images from flickering.
  • This device is used for stereo monitors, and gives an accurate 3-d representation of a 2-d object, but does not immerse the user in the virtual world.
  • Another output device that can be used in the present system is a screen with multiple projectors.
  • the screen can be either planar or curved.
  • a problem when using multiple projectors on the same screen is that there can be visible edges between the projections. This can be remedied by using a soft-edge system wherein the projection becomes increasingly transparent at the edges and the projections overlap. This produces an almost perfect transition between the images.
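  • For illustration only (not from the original disclosure), a minimal sketch of such a soft-edge blend: a per-pixel weight that ramps smoothly from 0 to 1 across the overlap band, so that the two overlapping projectors' weights sum to one. The function name, overlap width, and gamma value are all assumptions.

      #include <cmath>

      // Per-pixel blend weight for a projector whose image overlaps its
      // neighbor by `overlapPx` pixels at the shared edge.  Inside the
      // overlap band the two projectors' weights sum to 1.0, producing
      // the smooth "soft-edge" transition described above.  The gamma
      // term compensates for the projector's nonlinear light output
      // (2.2 is a typical display gamma, assumed here).
      double softEdgeWeight(int x, int overlapPx, double gamma = 2.2)
      {
          if (x >= overlapPx)
              return 1.0;                                  // outside the overlap: full intensity
          double t = static_cast<double>(x) / overlapPx;   // 0..1 linear ramp into the band
          double w = t * t * (3.0 - 2.0 * t);              // smoothstep: zero slope at both ends
          return std::pow(w, 1.0 / gamma);                 // linear in light, not in signal
      }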
  • shuttered glasses can also be used: special glasses that alternate between making each lens either completely opaque or completely transparent. When the left eye is opaque, the right one is transparent. This is synchronized with the projectors that are projecting corresponding images on the screen.
  • A CAVE (Cave Automatic Virtual Environment) can use mirrors in a cube-shaped room to project stereo images onto the walls, giving the illusion that you are standing in a virtual world.
  • the world is constantly updated using trackers, and the user is allowed to move around almost completely uninhibited.
  • a major use for virtual reality is in medical education and training. Medical students can learn on a computer image of real organs. Examples of environments include laparoscopic surgery simulation, heart catheterization, limb trauma and simulated surgery with feedback.
  • the system can also be used in a 3D visual and aural scenario in which a trauma incident has occurred.
  • the patient is a 3D virtual model with visible and internal injuries, with the appropriate medical signs and symptoms, and true to life behavior.
  • the user can speak to the patient and hear responses.
  • the user has a set of tools and procedures which they may use in order to treat the patient. Afterwards there can be a post-session review which goes through the log of physiological data collected during the session.
  • This simulation can be performed with multiple patients and multiple care givers, more types of trauma and interventions, and interfaces for training aids.
  • any system of the Virtual Patient can be generated and displayed including, but not limited to, the digestive system, the respiratory system, the circulatory system, the muscular system, the skeletal system, the excretory system, the nervous system, the lymphatic system, the reproductive system, the endocrine system, the integumentary system, and the like.
  • any sub-part of a system may be generated and displayed, including but not limited to, organs, tissues, cells, veins, arteries, and the like.
  • any activity of a system or sub-part of a system may be generated and displayed, including but not limited to, blood flow, muscle contraction, digestion, cell death, and the like.
  • FIG. 1 illustrates an exemplary user interface for selecting a part of the virtual patient to display. This user interface allows a user to drill down to the desired part of the virtual patient to display.
  • An exemplary display configuration 200 shown in FIG. 2 can enable users to view human anatomy with high-resolution, full-scale 3D graphics.
  • a screen 205 can be disposed at a slight angle for comfortable viewing.
  • the screen can be disposed at a 5, 10, 15, 20, 25, 30, 35, 40, or 45 degree angle.
  • the screen 205 can be of varying sizes, for example, 72"x40.5".
  • At least one projector 511 can be used for rear-projected passive stereo display. More than one projector 511 can be used; the exemplary systems in FIG. 2 and FIG. 5 illustrate two projectors 511.
  • the projectors 511 can have varying resolutions, for example and without limitation, HDTV resolution (1920x1200).
  • a user can wear lightweight polarized glasses for viewing.
  • the result is an illusion that a full-size, 3D human is lying in front of the user on a virtual bed that is just below the surface of the screen.
  • the 3D graphics are generated by a high performance graphics workstation, allowing for realtime interactive visualization.
  • the system can utilize one or more high-resolution projectors.
  • the viewer views the projection through passive stereoscopic polarized glasses (similar to sunglasses) that route the left-eye image to the left eye, and the right-eye image to the right eye.
  • the system can be replaced by other types of stereoscopic displays with no functional detriment to the Virtual Patient system.
  • FIG. 2 further illustrates how throws from the projectors 511 can be contained within the system.
  • the light 206 and 207 first hits a small reflective surface, for example a mirror 203, at the top of the system, then a larger reflective surface, for example a mirror 204, at the bottom, and then the back of the screen 205.
  • the system can be easily tipped upright. In this upright position casters can allow the system to roll through doorways. This allows the system to be easily portable within a building.
  • Software in the system can perform various functions including reading data from a Visible Human dataset, generation of 3D stereo display of the data, and segmentation algorithms to differentiate skin, bones, and several other major anatomical features. Further functionality includes the ability to control attributes associated with each anatomical feature, such as color and opacity, the ability to hide and show each anatomical feature, and the ability for the user to manipulate the image with zoom, pan, and rotation control. Additionally, users can have the ability to set up a cutting plane so that viewers can look at the anatomical features on the cutting plane. Also, text overlays can be implemented so that a user can point at objects and have their medical terms displayed. Other features include stress analyses, fluid simulations in the lung airway or arteries, and more user interactions. Users can interact with the system to experience various medical scenarios. The display system can also be reused for other potential educational purposes with more customized software contents.
  • the system can use data from the NLM Visible Human Project® (http://www.nlm.nih.gov/research/visible/visible_human.html) to create complete, anatomically detailed, three-dimensional representations of the normal male and female human bodies. Data can exemplarily include transverse CT, MR and cryosection images of representative male and female patients. The male was sectioned at one millimeter intervals, the female at one-third of a millimeter intervals. This system can use the Visible Male or Visible Female dataset or any other similar data from other sources, for example, data in DICOM format.
  • the Visible Male dataset consists of MRI, CT, and anatomical images, including 70mm high-resolution photos of 1871 cryosections. This data is coregistered so that the information from all sources is in alignment, and it is then further processed into various subsets of the data representing different physical sections and levels of detail, to be dynamically reconstructed by volume rendering algorithms as it is viewed.
  • FIG. 3 shows an exemplary data processing method of the present system.
  • the system can receive raw medical image data, such as volumetric data.
  • This medical image data can be raw photo data, raw CT data, raw MRI data, raw ultrasound data, combinations thereof, and the like.
  • the data can be in common image formats such as BMP, JPEG, PNG, GIF, TIFF, PDF, and the like for the photo data, and in DICOM format, for example, for CT and MRI; DICOM is a common format for the majority of scanning devices.
  • the raw medical image data can undergo image processing.
  • Image processing can include image smoothing, image enhancement, image restoration, combinations thereof, and the like.
  • Block 302 is further described in blocks 404-406 of FIG. 4.
  • image smoothing can reduce noise in medical image data, for example, medical image data 403.
  • medical image data is generally collected for viewing by physicians, so no strong attempt is usually made to keep the voxels regular; in other words, the slice thickness is quite a bit different from the resolution of each acquired slice.
  • This has the effect of creating "stair-step" artifacts between slices, where the target volumes of interest differ significantly in area from slice to slice, leaving the segmentation algorithms to determine how one slice progresses to the next.
  • a light Gaussian blur filter can be applied. This filter can be set to be stronger in the direction of the individual slices to help smooth the "stair-step" effect. While this blurs the data, little information is lost. Most of the effect is to modify the slice artifacts to create smoother segments.
  • CT data collected during routine screenings with scalar values in the ranges of interest includes a significant proportion of noise, which is addressed by image enhancement (see FIG. 4, block 405).
  • Noise reduction techniques can be applied to allow for reasonable segmentations.
  • the predominant noise in CT data is "snowy"; in other words, a significant proportion of the voxels in the image volume differ greatly from their neighboring voxels, creating an appearance similar to the "snow" that appears on broadcast television when the received signal is low.
  • a median convolution filter can be applied, in which each voxel is set to the median value of its neighbor voxels.
  • a negative effect of this filter is the loss of voxel-level precision, as the new value of each voxel may have its true source anywhere in the region of the convolution kernel (in this case, up to one voxel away).
  • the median convolution filter used can be the vtkImageMedian3D class algorithm.
  • the Gaussian blur filter used can exemplarily be the vtkImageGaussianSmooth class algorithm.
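  • As an illustrative sketch only (the patent names the two VTK classes but not this exact pipeline), the filters can be chained in a standard VTK pipeline. The DICOM directory path, kernel size, and smoothing strengths below are assumptions.

      #include <vtkSmartPointer.h>
      #include <vtkDICOMImageReader.h>
      #include <vtkImageMedian3D.h>
      #include <vtkImageGaussianSmooth.h>

      int main()
      {
          // Read a CT/MRI series (DICOM is the common scanner format noted above).
          auto reader = vtkSmartPointer<vtkDICOMImageReader>::New();
          reader->SetDirectoryName("/path/to/dicom/series");   // illustrative path

          // Median filter: each voxel becomes the median of its 3x3x3 neighborhood,
          // suppressing "snowy" CT noise at the cost of one voxel of precision.
          auto median = vtkSmartPointer<vtkImageMedian3D>::New();
          median->SetInputConnection(reader->GetOutputPort());
          median->SetKernelSize(3, 3, 3);

          // Light Gaussian blur, deliberately stronger along Z (the slice axis)
          // to smooth the inter-slice "stair-step" artifacts.
          auto gauss = vtkSmartPointer<vtkImageGaussianSmooth>::New();
          gauss->SetInputConnection(median->GetOutputPort());
          gauss->SetStandardDeviations(0.5, 0.5, 1.5);         // illustrative strengths
          gauss->Update();
          return 0;
      }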
  • Image registration (see FIG. 4, block 406) can be performed since the source data from photos, CT and MRI are acquired from different image modalities at different times.
  • image registration can apply an Artificial Neural Network (ANN) algorithm to process the image data so that they can be co-located properly.
  • ANN based algorithms allow for accurate and fast image registration.
  • Discrete Cosine Transform (DCT) coefficients of the image are applied as the input to the ANN and the ANN outputs transformation model parameters. These parameters are then used to register the target image. Once all the data are processed, block 303, they can be stored into new datasets at block 407.
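  • A minimal sketch (assumed, not specified by the patent) of extracting the DCT coefficients that serve as the ANN's input: the orthonormal 2D DCT-II of an N x N patch, keeping only the k x k low-frequency terms as the feature vector. N, k, and the function name are illustrative.

      #include <cmath>
      #include <vector>

      // 2D DCT-II of an N x N image patch; the first k x k low-frequency
      // coefficients form the feature vector fed to the registration ANN,
      // which would then output transformation model parameters.
      std::vector<double> dctFeatures(const std::vector<double>& img, int N, int k)
      {
          const double PI = 3.14159265358979323846;
          std::vector<double> feat;
          feat.reserve(k * k);
          for (int u = 0; u < k; ++u) {
              for (int v = 0; v < k; ++v) {
                  double sum = 0.0;
                  for (int x = 0; x < N; ++x)
                      for (int y = 0; y < N; ++y)
                          sum += img[x * N + y]
                               * std::cos((2 * x + 1) * u * PI / (2.0 * N))
                               * std::cos((2 * y + 1) * v * PI / (2.0 * N));
                  double cu = (u == 0) ? std::sqrt(1.0 / N) : std::sqrt(2.0 / N);
                  double cv = (v == 0) ? std::sqrt(1.0 / N) : std::sqrt(2.0 / N);
                  feat.push_back(cu * cv * sum);   // orthonormal DCT-II scaling
              }
          }
          return feat;
      }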
  • FIG. 4 illustrates an exemplary method utilizing the present system.
  • the system can generate a user interface and provide the user interface to a user.
  • the user interface can be displayed through a control terminal (any type of computer display device), a primary stereoscopic display, or both.
  • This interface allows for manipulations and selections of a virtual patient, including rotation, translation, and alterations to the opacity transfer functions.
  • the user can manipulate a pointer on the stereoscopic display to indicate areas of interest and additionally display relevant text or image annotation.
  • a user can select a part of a virtual patient to display by selecting a geometry of the virtual patient.
  • a geometry of the virtual patient includes any part or sub-part of the virtual patient available in the medical image data.
  • the system can receive medical image data associated with the selected geometry, such as processed data, for example processed data 407, through a data reader module.
  • a data reader module can read data from the Visible Human dataset or other medical scanned data in VTK (Visualization Toolkit, http://www.vtk.org) format or any other format in which medical image data is stored.
  • the system can perform image segmentation on the medical image data, resulting in segmented image data.
  • Volumes of interest can be defined, creating new segment volumes in which the scalar values effectively range from 0 (entirely outside of the segment) to 1 (entirely within the segment).
  • algorithms that exploit localization of values and localization in space to define the segments from the collected data volume can be used.
  • the segment can be defined by a relatively simple binary threshold segmentation filter. This works well when a threshold value exists such that all voxels that are inside the desired segment fall on one side of the threshold value, and all voxels that are outside the desired segment fall on the other side of the threshold value.
  • the approach is to smooth the data volume with, for example, convolution blur filters or Gaussian smooth filters of the appropriate strength to achieve the desired resolution to the resulting segment, and then use a bimodal thresholding filter, for example, to create the segment volume.
  • ITK segmentation algorithms can be used.
  • the simple bimodal threshold filter used can be the itk::BinaryThresholdImageFilter class algorithm.
  • the connectivity-thresholding filter used can be the itk::ConnectedThresholdImageFilter class algorithm.
  • the binary erosion filter used can be the itk::BinaryErodeImageFilter class algorithm.
  • the curvature flow filter used can be the itk::MinMaxCurvatureFlowImageFilter class algorithm.
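  • For illustration, a minimal ITK sketch of the simple binary threshold segmentation named above; the file name and the threshold values are assumptions, not values from the patent.

      #include "itkImage.h"
      #include "itkImageFileReader.h"
      #include "itkBinaryThresholdImageFilter.h"

      int main()
      {
          using ImageType = itk::Image<short, 3>;
          using MaskType  = itk::Image<unsigned char, 3>;

          auto reader = itk::ImageFileReader<ImageType>::New();
          reader->SetFileName("ct_volume.mha");            // illustrative file name

          // Voxels whose (smoothed) scalar values fall inside [lower, upper]
          // become 1, everything else 0 -- the binary threshold that works
          // when the desired segment's values all fall on one side of a cut.
          auto thresh = itk::BinaryThresholdImageFilter<ImageType, MaskType>::New();
          thresh->SetInput(reader->GetOutput());
          thresh->SetLowerThreshold(300);                  // illustrative bone-like cut
          thresh->SetUpperThreshold(3000);
          thresh->SetInsideValue(1);
          thresh->SetOutsideValue(0);
          thresh->Update();                                // runs reader + filter
          return 0;
      }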
  • This module can have several built-in settings to attempt to process the image data and segment important features automatically to differentiate skin, bones, and other major anatomical features. However, due to the quality of the image data and complexity of human anatomy, it is not always possible to perform this task automatically and robustly. Therefore, the system also allows user interactions to modify the settings for better results.
  • the system can optionally store the segmented image data in a storage device.
  • volume rendering is a technique for directly displaying a sampled 3D scalar field without first fitting geometric primitives to the samples. This exemplary aspect aids in removing the ambiguity problem in generating isosurfaces.
  • the effect of stereoscopic direct volume rendering is achieved using ray-trace methods in realtime on a data volume that has been optimized for common levels of detail, allowing for completely interactive visualization. Using this technique, all parts of the virtual patient can be explored without foreknowledge. Pre-defined segments (such as particular interesting organs) or arbitrarily-chosen regions of the Virtual Patient can be made transparent to whatever degree desired.
  • volume rendering is used to describe the technique for the visualization of three-dimensional volumetric data. Traditionally, isosurfaces were used to render the geometric surfaces. However, there can be obscure cases in which the algorithms cannot determine whether the surface should pass through a voxel, producing spurious surfaces or erroneous holes in surfaces. Exemplary volume rendering techniques represent surfaces more accurately because they do not use intermediate geometrical representations. Exemplary volume rendering involves several steps: (a) forming an RGBA volume from the data, (b) reconstructing a continuous function from this discrete data set, and (c) projecting it onto the 2D viewing plane from the user-defined point of view.
  • An RGBA volume is a 3D dataset with a 4-component vector in each voxel, namely, the red, green, blue and opacity values.
  • the last component, opacity (A), varies between 0 and 1, with 0 representing total transparency and 1 being totally opaque.
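  • A sketch of step (a) above, forming the RGBA volume: a transfer function maps each voxel's raw scalar (e.g. a CT value) to color and opacity. The piecewise breakpoints and values here are purely illustrative.

      struct RGBA { double r, g, b, a; };   // per-voxel red, green, blue, opacity

      // Illustrative piecewise transfer function for CT-like scalars:
      // low values are left invisible, mid values faintly colored, and
      // high (bone-like) values rendered nearly opaque.
      RGBA transferFunction(double scalar)
      {
          if (scalar < 100.0) return {0.0, 0.0, 0.0, 0.0};     // air: fully transparent
          if (scalar < 300.0) {                                // soft tissue
              double t = (scalar - 100.0) / 200.0;
              return {0.8 * t, 0.4 * t, 0.3 * t, 0.2 * t};     // faint reddish, mostly clear
          }
          return {0.95, 0.95, 0.90, 0.9};                      // bone: near-opaque off-white
      }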
  • Volume rendering offers a better representation for displaying weak or fuzzy surfaces. This is a major approach in rendering medical images, where volume data is available from various imaging modalities, such as the X-ray-based Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scanners. These scanners produce three-dimensional stacks of parallel plane images with the distance between slices ranging from less than 1mm to about 8mm. However, the farther apart the slices, the less accurate the representation of the objects will be, due to the lower resolution and resulting lack of spatial information. These slices of image from either CT or MRI can be viewed individually. Unlike conventional X-ray images, a slice of CT image contains information only from that one plane, while the conventional X-ray image contains accumulated information from all the planes.
  • ray casting can be used to render high-quality images of solid objects.
  • Ray casting can be used in combination with viewing geometry data that contains actual discrete surface coordinates.
  • There are several techniques for volume rendering, such as ray casting, ray tracing, and splatting.
  • Splatting is one such technique; it differs from the ray casting approach in the projection method, approximating the projection with a Gaussian splat algorithm.
  • a ray casting algorithm renders the volumetric data without the use of geometric structure, thus removing the ambiguity issue in isosurface algorithms, such as the Marching Cubes method.
  • the algorithm is straightforward in principle, but the computation is intensive. For every pixel in the 2D output image on the screen, an imaginary ray is shot into the data volume, which is evenly partitioned. Along the ray, the color and opacity values are obtained by interpolation and finally composited in back-to-front order, i.e. starting at the background and moving towards the image plane, to determine the color for that pixel.
  • the color of the outgoing ray C_out at a location x_i is related to the color C_in of the incoming ray, and to the color C(x_i) and the opacity α(x_i) at that location, by the transparency formula.
  • the standard transparency formula is: C_out = C(x_i) · α(x_i) + C_in · (1 − α(x_i))
  • C_out is the outgoing intensity for voxel x_i along the ray, and C_in is the incoming intensity for the voxel.
  • the opacity acts as a data selector. That is, points with opacity values near 1 hide almost all the information along the ray between the background and the points. On the other hand, opacity values close to zero transfer the information almost unaltered, as in the dense-emitter model, where the color indicates the instantaneous emission rate and the opacity indicates the instantaneous absorption rate.
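  • A compact sketch of this back-to-front compositing for one ray, applying the transparency formula at each interpolated sample; the sample ordering and data layout are assumptions.

      #include <array>
      #include <vector>

      struct RGBA { double r, g, b, a; };   // one interpolated sample along the ray

      // Back-to-front compositing of the samples one ray collected from the
      // RGBA volume: each step applies C_out = C(x)·a(x) + C_in·(1 - a(x)),
      // blending the sample over whatever has accumulated behind it.
      std::array<double, 3> compositeRay(const std::vector<RGBA>& samples)
      {
          std::array<double, 3> c = {0.0, 0.0, 0.0};       // background color (C_in at start)
          // samples[0] is farthest from the eye; iterate toward the image plane.
          for (const RGBA& s : samples) {
              c[0] = s.r * s.a + c[0] * (1.0 - s.a);
              c[1] = s.g * s.a + c[1] * (1.0 - s.a);
              c[2] = s.b * s.a + c[2] * (1.0 - s.a);
          }
          return c;                                        // final pixel color for this ray
      }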
  • ray-trace volume rendering is achieved by virtually shooting rays from the position of the viewer's eyes into the dataset, and then collecting information from the intersecting voxels (volume elements) to determine what color reaches the eyes for that ray.
  • the system exemplarily can shoot between about 400,000 and 20,000,000, or between about 518,400 and 16,588,800, rays for each frame, at a rate of between about 10 and 60, or between about 15 and 30, frames per second.
  • a deconstruction algorithm can be used to convert the volume view into a textured polygonal view that cannot be differentiated from the volume view, but which discards data that does not add to the viewer's perception. Visible surfaces are extracted from the volume using volume contouring methods, and two-dimensional texture maps are then prepared to mimic the original volume view. Transparent regions in the volume are mimicked with multiple transparent surfaces. OpenGL real-time shaders and other techniques can be employed to maintain correct ordering of the indirect view polygons for transparency.
  • the system can display the resulting image.
  • the system can utilize a stereoscopic display, or any other virtual reality display system.
  • the stereoscopic display can comprise at least two display projectors fitted with polarizing lenses, a back-projection screen material that maintains light polarization upon diffusion, special glasses that restrict each eye to see only light of a particular polarization, and the viewer.
  • the image to be viewed can be rendered with two slightly different view transformations, reflecting the different locations of the ideal viewer's two eyes.
  • One projector displays the image rendered for the left eye's position, and the other projector displays the image rendered for the right eye's position.
  • the glasses restrict the light so that the left eye sees only the image rendered for it, and the right eye sees only the image rendered for it.
  • the viewer, presented with a reasonable stereoscopic image, will perceive depth.
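  • As a sketch of the two-eye rendering described above (assumed geometry, not from the patent): offset the ideal viewer's head position by half the interocular distance along the view's right axis, and render the scene once per eye. The 6.5 cm separation is a typical adult value, assumed here.

      #include <array>

      struct Vec3 { double x, y, z; };

      // Left and right eye positions for stereo rendering: the head position
      // shifted by half the interocular distance (iod) along the unit 'right'
      // vector of the current view.  Each eye gets its own view transform.
      std::array<Vec3, 2> eyePositions(Vec3 head, Vec3 right, double iod = 0.065)
      {
          const double h = iod / 2.0;
          Vec3 leftEye  = { head.x - right.x * h, head.y - right.y * h, head.z - right.z * h };
          Vec3 rightEye = { head.x + right.x * h, head.y + right.y * h, head.z + right.z * h };
          return { leftEye, rightEye };
      }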
  • the system can determine if the system is simulating a previously selected medical scenario. In other words, the system can determine if a simulation is currently running and whether the image currently being displayed is a part of that simulation. If, at block 411, the system is not simulating a previously selected medical scenario, the system can receive at least one medical scenario selection from the user at block 412. If, at block 412, no medical scenario selection is made, the system can terminate at block 413.
  • a medical scenario can be any event that affects a portion of the virtual patient that is being displayed. Examples of medical scenarios include, but are not limited to, anatomic exploration, coronary infarctions, kidney failure, drug administration, and the like.
  • the user can also select a desired user-interactive medical scenario wherein the user can manipulate the scenario in real time.
  • the system can determine the consequences necessary for scenario simulation (rendering) at block 414.
  • the system can utilize at least one consequence 415, to allow the system to accurately simulate the scenario.
  • a consequence describes how the part of the virtual patient observed in the image should react in a particular medical scenario.
  • a consequence can be predefined.
  • the consequence can be dynamic.
  • a pre-defined consequence allows for the simulation of how the part of the virtual patient should react based on prior knowledge of similar medical scenarios.
  • a dynamic consequence is a consequence generated by the system to simulate medical scenarios that either have not been recorded or that do not have pre-defined consequences. Dynamic consequences can be generated on the fly utilizing artificial intelligence techniques. For example, a dynamic consequence would be necessary to simulate how a heart would react to a combination of drugs that have never been administered to a heart before.
  • One illustrative example, and not meant to be limiting, of a user-interactive medical scenario and a predefined consequence is to show the system of blood vessels in the body, where a user can choose to cut one of them. If a wrong choice is made, for example cutting the artery instead of the vein, the result is a medical emergency, which can trigger an auditory alarm to indicate the consequence.
  • the system can utilize external medical equipment to indicate the consequence by, for example, displaying to the user the corresponding change in the virtual patient's blood pressure on a blood pressure monitor.
  • the system can simulate the selected medical scenario.
  • the user can control the system, at block 417, through an input device and the user interface.
  • the user can, for example, add another medical scenario for simulation, or manipulate the scenario, such as by selecting a pre-defined view of the image, moving the image, rotating the image, zooming in on the image, administering a drug, cutting away a section of the image, performing a surgical maneuver, and the like.
  • the system can utilize the predefined consequence 415 to generate a response to the user input and simulate the response in the image of the selected geometry.
  • the system can return to blocks 402-411, if necessary, to segment and volume render the medical image data to display any additional images necessary to simulate the medical scenario.
  • the determination at block 411 will result in the system returning to block 416 to continue the previously selected medical scenario.
  • Interaction with the Virtual Patient takes many forms. It is contemplated that the viewers can select pre-defined views, they can move and rotate the Virtual Patient in any direction or angle, they can zoom in to view particular features in detail, and they can cut away sections arbitrarily with cutting planes, spheres, etc.
  • the system integrates simulation results and available sound bites into a graphical representation of the Visible Human datasets so that a student can visually see how the flow is going through the heart and arteries, and correlate those images with sound bites of the disease to enhance the understanding of the mechanism of such disease.
  • computational fluid dynamics (CFD) and computational structural mechanics (CSM) simulations can be used for sections of body parts that are of medical significance (such as the heart), and the results can be superimposed on Visible Human data.
  • Both CFD and CSM utilize numerical geometry and high quality numerical meshes to take geometric information for computations, and produce visualizations of field information such as velocity, temperature, stress, and the like.
  • such information can be superimposed in 3D space with the Visible Human volumetric data to show the spatial orientation with other organs in the body. It is contemplated that, as more simulated results are created and deposited into the system of the present invention, more scenarios can be planned and expanded based on the availability of such simulated results.
  • the system can interconnect to medical equipment such as heart monitors, ventilators, etc., so that different conditions may be introduced by an instructor, prompting students to act as if they were dealing with a real patient.
  • the system can accurately display and simulate the effects of different medications on the body.
  • students can inject certain dosages of medication into the virtual patient through haptic devices and actually see the effects (positive/negative) on the different organs of the virtual patient.
  • the display system can be adapted for use in any health related profession, engineering, or any industry requiring 3D stereoscopic representation of an object.
  • FIG. 5 is a block diagram illustrating an exemplary operating environment for performing the disclosed method.
  • This exemplary operating environment is only an example of an operating environment and is not intended to suggest any limitation as to the scope of use or functionality of operating environment architecture. Neither should the operating environment be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment.
  • the system and method of the present invention can be operational with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well known computing systems, environments, and/or configurations that can be suitable for use with the system and method comprise, but are not limited to, personal computers, server computers, laptop devices, and multiprocessor systems. Additional examples comprise set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that comprise any of the above systems or devices, and the like.
  • The system and method of the present invention can be described in the general context of computer instructions, such as program modules, being executed by a computer.
  • program modules comprise routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the system and method of the present invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules can be located in both local and remote computer storage media including memory storage devices.
  • the system and method disclosed herein can be implemented via a general-purpose computing device in the form of a computer 501.
  • the components of the computer 501 can comprise, but are not limited to, one or more processors or processing units 503, a system memory 512, and a system bus 513 that couples various system components including the processor 503 to the system memory 512.
  • the system bus 513 represents one or more of several possible types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
  • bus architectures can comprise an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, an Accelerated Graphics Port (AGP) bus, and a Peripheral Component Interconnects (PCI) bus also known as a Mezzanine bus.
  • ISA Industry Standard Architecture
  • MCA Micro Channel Architecture
  • EISA Enhanced ISA
  • VESA Video Electronics Standards Association
  • AGP Accelerated Graphics Port
  • PCI Peripheral Component Interconnects
  • the bus 513, and all buses specified in this description can also be implemented over a wired or wireless network connection and each of the subsystems, including the processor 503, a mass storage device 504, an operating system 505, application software 506, data 507, a network adapter 508, system memory 512, an Input/Output Interface 510, a display adapter 509, a display device 511, and a human machine interface 502, can be contained within one or more remote computing devices 514a,b,c at physically separate locations, connected through buses of this form, in effect implementing a fully distributed system.
  • the computer 501 typically comprises a variety of computer readable media.
  • Exemplary readable media can be any available media that is accessible by the computer 501 and comprises, for example and not meant to be limiting, both volatile and non-volatile media, removable and non-removable media.
  • the system memory 512 comprises computer readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM).
  • RAM random access memory
  • ROM read only memory
  • the system memory 512 typically contains data such as data 507 and/or program modules such as operating system 505 and application software 506 that are immediately accessible to and/or are presently operated on by the processing unit 503.
  • the computer 501 can also comprise other removable/nonremovable, volatile/non-volatile computer storage media.
  • Figure 5 illustrates a mass storage device 504 which can provide non-volatile storage of computer code, computer readable instructions, data structures, program modules, and other data for the computer 501.
  • a mass storage device 504 can be a hard disk, a removable magnetic disk, a removable optical disk, magnetic cassettes or other magnetic storage devices, flash memory cards, CD- ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like.
  • any number of program modules can be stored on the mass storage device 504, including by way of example, an operating system 505 and application software 506.
  • Each of the operating system 505 and application software 506 (or some combination thereof) can comprise elements of the programming and the application software 506.
  • Data 507 can also be stored on the mass storage device 504.
  • Data 507 can be stored in any of one or more databases known in the art. Examples of such databases comprise, DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, mySQL, PostgreSQL, and the like.
  • the databases can be centralized or distributed across multiple systems.
  • Data 507 can comprise any data necessary to accomplish the invention, including but not limited to, medical image data both raw and processed, consequence data, and the like.
  • the user can enter commands and information into the computer 501 via an input device (not shown).
  • input devices comprise, but are not limited to, a keyboard, pointing device (e.g., a "mouse"), a microphone, a joystick, a scanner, tactile input devices such as gloves and other body coverings, and the like.
  • These and other input devices can be connected to the processing unit 503 via a human machine interface 502 that is coupled to the system bus 513, but can be connected by other interface and bus structures, such as a parallel port, game port, an IEEE 1394 port (also known as a Firewire port), a serial port, or a universal serial bus (USB).
  • a display device 511 can also be connected to the system bus 513 via an interface, such as a display adapter 509. It is contemplated that the computer 501 can have more than one display adapter 509 and the computer 501 can have more than one display device 511.
  • a display device can be a monitor, an LCD (Liquid Crystal Display), or a projector.
  • other output peripheral devices can comprise components such as speakers (not shown) and a printer (not shown) which can be connected to the computer 501 via Input/Output Interface 510.
  • the computer 501 can operate in a networked environment using logical connections to one or more remote computing devices 514a,b,c.
  • a remote computing device can be a personal computer, portable computer, a server, a router, a network computer, a peer device or other common network node, and so on.
  • Logical connections between the computer 501 and a remote computing device 514a,b,c can be made via a local area network (LAN) and a general wide area network (WAN).
  • a network adapter 508 can be implemented in both wired and wireless environments. Such networking environments are conventional and commonplace in offices, enterprise-wide computer networks, intranets, and the Internet 515.
  • Computer readable media can be any available media that can be accessed by a computer.
  • Computer readable media can comprise “computer storage media” and “communications media.”
  • “Computer storage media” comprise volatile and non-volatile, removable and non- removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.
  • Exemplary computer storage media comprises, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
  • The system can employ artificial intelligence techniques such as machine learning and iterative learning. Examples of such techniques include, but are not limited to, expert systems, case based reasoning, Bayesian networks, behavior based AI, neural networks, fuzzy systems, evolutionary computation (e.g. genetic algorithms), swarm intelligence (e.g. ant algorithms), and hybrid intelligent systems (e.g. expert inference rules generated through a neural network or production rules from statistical learning).
  • the processing of the disclosed system and method of the present invention can be performed by software components.
  • the disclosed system and method can be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers or other devices.
  • program modules comprise computer code, routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the disclosed method can also be practiced in grid-based and distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules can be located in both local and remote computer storage media including memory storage devices.

Abstract

Disclosed are systems and methods for generating and interfacing with a virtual patient on a display system, for example a stereoscopic display system configured to mimic a patient bed. This system can use data representing human anatomy, for example data from the National Institutes of Health Visible Human Project. The virtual patient can be an all-inclusive, mobile display system for full-scale human anatomy that can be placed in existing lab spaces. The simulator can be controlled through a graphical user interface (GUI).
PCT/US2006/060853 2005-11-11 2006-11-13 Virtual patient simulator WO2007059477A2 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US73545805P 2005-11-11 2005-11-11
US60/735,458 2005-11-11
US76450806P 2006-02-02 2006-02-02
US60/764,508 2006-02-02

Publications (2)

Publication Number Publication Date
WO2007059477A2 (fr) 2007-05-24
WO2007059477A3 WO2007059477A3 (fr) 2008-05-02

Family

ID=38049382

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/060853 WO2007059477A2 (fr) 2005-11-11 2006-11-13 Virtual patient simulator

Country Status (1)

Country Link
WO (1) WO2007059477A2 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040009459A1 (en) * 2002-05-06 2004-01-15 Anderson James H. Simulation system for medical procedures
US20040260170A1 (en) * 2003-06-20 2004-12-23 Confirma, Inc. System and method for adaptive medical image registration

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11016579B2 (en) 2006-12-28 2021-05-25 D3D Technologies, Inc. Method and apparatus for 3D viewing of images on a head display unit
US11520415B2 (en) 2006-12-28 2022-12-06 D3D Technologies, Inc. Interactive 3D cursor for use in medical imaging
US11315307B1 (en) 2006-12-28 2022-04-26 Tipping Point Medical Images, Llc Method and apparatus for performing rotating viewpoints using a head display unit
US11275242B1 (en) 2006-12-28 2022-03-15 Tipping Point Medical Images, Llc Method and apparatus for performing stereoscopic rotation of a volume on a head display unit
US11228753B1 (en) 2006-12-28 2022-01-18 Robert Edwin Douglas Method and apparatus for performing stereoscopic zooming on a head display unit
US11036311B2 (en) 2006-12-28 2021-06-15 D3D Technologies, Inc. Method and apparatus for 3D viewing of images on a head display unit
US8520024B2 (en) 2007-05-18 2013-08-27 Uab Research Foundation Virtual interactive presence systems and methods
WO2010057304A1 (fr) * 2008-11-21 2010-05-27 London Health Sciences Centre Research Inc. Hands-free pointing system
CN102282527A (zh) * 2008-11-21 2011-12-14 London Health Sciences Centre Research Inc. Hands-free pointing system
EP2255843A1 (fr) * 2009-05-29 2010-12-01 FluiDA Respi Method for determining treatments using patient-specific lung models and computer methods
WO2010136528A1 (fr) * 2009-05-29 2010-12-02 Fluidda Respi Method using patient-specific lung models for determining treatments, and computer methods
US8886500B2 (en) 2009-05-29 2014-11-11 Fluidda Respi Method for determining treatments using patient-specific lung models and computer methods
WO2011115835A3 (fr) * 2010-03-17 2011-11-10 Discus Investments, Llc Medical information generation and recordation methods and apparatus
US8458610B2 (en) 2010-03-17 2013-06-04 Discus Investments, Llc Medical information generation and recordation methods and apparatus
US9886552B2 (en) 2011-08-12 2018-02-06 Help Lighting, Inc. System and method for image registration of multiple video streams
US10181361B2 (en) 2011-08-12 2019-01-15 Help Lightning, Inc. System and method for image registration of multiple video streams
US10622111B2 (en) 2011-08-12 2020-04-14 Help Lightning, Inc. System and method for image registration of multiple video streams
US9959629B2 (en) 2012-05-21 2018-05-01 Help Lighting, Inc. System and method for managing spatiotemporal uncertainty
US9710968B2 (en) 2012-12-26 2017-07-18 Help Lightning, Inc. System and method for role-switching in multi-reality environments
US10482673B2 (en) 2013-06-27 2019-11-19 Help Lightning, Inc. System and method for role negotiation in multi-reality environments
US9940750B2 (en) 2013-06-27 2018-04-10 Help Lighting, Inc. System and method for role negotiation in multi-reality environments
US10957451B2 (en) * 2017-12-27 2021-03-23 General Electric Company Patient healthcare interaction device and methods for implementing the same
CN109979587A (zh) * 2017-12-27 2019-07-05 General Electric Company Patient healthcare interaction device and methods for implementing the same
US20190198169A1 (en) * 2017-12-27 2019-06-27 General Electric Company Patient healthcare interaction device and methods for implementing the same
US11532132B2 (en) 2019-03-08 2022-12-20 Mubayiwa Cornelious MUSARA Adaptive interactive medical training program with virtual patients

Also Published As

Publication number Publication date
WO2007059477A3 (fr) 2008-05-02

Similar Documents

Publication Publication Date Title
WO2007059477A2 (fr) Virtual patient simulator
Sutherland et al. Applying modern virtual and augmented reality technologies to medical images and models
US10592067B2 (en) Distributed interactive medical visualization system with primary/secondary interaction features
AU2018264095B2 (en) System and method for managing spatiotemporal uncertainty
US8520024B2 (en) Virtual interactive presence systems and methods
US9710968B2 (en) System and method for role-switching in multi-reality environments
US20230022782A1 (en) Glasses-Free Determination of Absolute Motion
EP3497600A1 (fr) Distributed interactive medical visualization system with user interface features
US20200205905A1 (en) Distributed interactive medical visualization system with user interface and primary/secondary interaction features
Çöltekin Foveation for 3D visualization and stereo imaging
Paul et al. A Deep Learning approach to 3D Viewing of MRI/CT images in Augmented Reality
Westwood Medicine meets virtual reality: art, science, technology: healthcare (r) evolution
Lin Interaction with medical volume data on the responsive workbench
NZ621149A (en) System and method for image registration of multiple video streams

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06839863

Country of ref document: EP

Kind code of ref document: A2