WO2008122006A1 - Computer-based virtual medical training method and apparatus - Google Patents

Computer-based virtual medical training method and apparatus

Info

Publication number: WO2008122006A1
Authority: WO (WIPO/PCT)
Prior art keywords: simulation, simulated, medical, probe, needle
Application number: PCT/US2008/059001
Other languages: French (fr)
Inventors: Greg Polens, John Stone, Sean Mccauley, David Napotnik, Mark Polaski, Rebecca Fitzgerald, Brian Law, Randall Neatrour, Angela Nichols, Brian Wilson, David Kinsey
Original Assignee: Mountaintop Technologies, Inc.
Application filed by Mountaintop Technologies, Inc.
Publication of WO2008122006A1

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B23/00: Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B23/28: Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
    • G09B23/285: Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine for injections, endoscopy, bronchoscopy, sigmoidoscopy, insertion of contraceptive devices or enemas

Definitions

  • VMT simulation software architecture consists of four main layers described as follows:
  • the "navigation" layer provides the common graphic frame or background of the user interface, Main Menu functionality, and common navigation controls.
  • the main menu appears when the learner first launches the simulation.
  • the main menu welcomes the learner and provides navigation links to the introductory tutorial, the practice simulation levels, the testing simulation level, and the Desk Reference.
  • the "content” layer includes the introduction, each of the tasks in the simulation, and the summary.
  • the "control layer" is the overseer. It manages the launching of support and simulation modules, tracks the learner's progress through the content, communicates with the learning management system, and determines which support features are available at various levels of play.
  • the control layer provides each module with a specific set of state variables as the module is launched. The values held within these variables change as the user acts in the interface. When the learner completes the module, the module passes the state variables back to the control layer. The control layer, in turn, passes the state variables to the next module in the sequence to specify its initial state.
  • the "support” layer contains items that provide instructional support
  • Learner: A learner is one who uses the simulation to practice learning tasks in order to increase their competence in performing supraclavicular blocks. There are three anticipated groups of learners: anesthesiologists, anesthesiology residents, and CRNAs. Each group has a different beginning level of skill and knowledge in the procedure. Learners may be required to successfully complete the simulation as part of a larger training program, assessment, or certification process.
  • System Administrator: A system administrator is responsible for deploying and maintaining the simulation on a SCORM 2004 conformant LMS.
  • Training Administrator: A training administrator works through the LMS to extract and report on learner progress and status in the course.
  • the simulation may be designed to adhere to the SCORM 2004 runtime API and packaging.
  • the simulation may be implemented as a single SCO. Simplicity is important not only to minimize confusion on the part of the user, but also to keep development costs within project scopes and to promote robust, high-quality design.
  • the VMT simulation may operate on a client computer with any conventional computing platform.
  • the requirements are as follows. These details are provided by way of example only, as embodiments may be implemented for operation with any number of potential computing platforms.
  • Microsoft Windows XP SP2
  • Microsoft Internet Explorer version 6.0 (with JavaScript enabled)
  • the VMT simulation can be launched and run from various resources, including but not limited to a learning management system (LMS) or from CD-ROM. Depending on which of these ways the simulation is launched, there may be slight differences in specific portions of the functionality as described herein.
  • Helpers contain didactic content that the user can access as needed while performing learning tasks. There are five standard helpers available: the Procedure Helper, the Anatomy Helper, the Medications Helper, the Equipment Helper, and the Risks Helper. Users can look up information in these five content areas, as required to successfully complete the learning tasks.
  • the didactic content appears over the main simulation interface.
  • when the user exits a helper, he/she returns to the simulation in the state it was in when the helper was invoked.
  • Feedback mechanisms are user-controlled. Each user can decide if and when he/she wants to view feedback. Examples of feedback mechanisms are described below. One form of feedback on the user's actions is provided through naturally occurring consequences in the simulation. For instance, if the user advances the needle in a medial direction, a pneumothorax may occur. Consequences are presented through visual or textual feedback in the interface.
  • the character of an experienced practitioner may serve as an avatar that provides instructive feedback in response to the user's actions.
  • the coach calls out incorrect or skipped actions with an explanation of the potential consequences.
  • the coach's feedback also directs the user to didactic content supporting the action in question.
  • the user has the option to show or hide the coach in the interface.
  • Cumulative scoring uses specific metric values that rise or fall as the user acts within the simulation, depending on the appropriateness of his/her choices.
  • the three metric values include: success rate, safety level, and patient satisfaction.
  • Success rate reflects how likely the block is to be effective based on the user's actions.
  • Safety level indicates the degree to which the user's actions might contribute to the development of complications and side effects, or harm to the patient.
  • Patient satisfaction is related to the amount of discomfort or anxiety the user's actions inflict on the patient.
  • the cumulative scoring gives the user a dynamic view of how well he/she is doing in the scenario. Cumulative scores are used to determine how successful the user is in each scenario, when a user may progress from one level to another, and ultimately, when the user has successfully completed the course. The user can choose to view the cumulative scoring at all times, or only when he/she wants to check his/her progress.
  • the interior window feedback feature is provided on the needle placement and catheter placement actions.
  • the interior window gives the user another view of the needle so he/she can see how his/her manipulation of the needle relates to the overall position of the body.
  • This interior window is present in early scenarios within a level and it disappears when the user has successfully completed a predetermined number of scenarios.
  • the user has to complete remaining scenarios in the level without the aid of the internal window to pass to the next level.
  • Just-in-time (JIT) information helps learners to embed repetitive processes or concepts to the point of automation.
  • JIT information is provided through helpers and feedback.
  • JIT helper icons appear in the interface contextually, as a shortcut to specific information or practice activities within the didactic content that are relevant to the learning task being performed.
  • the JIT helper icons appear in addition to the five standard icons that are always present.
  • Part-task practice is designed to aid the user in acquiring constituent skills that require a high degree of automaticity while performing the whole task.
  • VMT users have many opportunities for part-task practice within the simulation itself.
  • the user progresses through each scenario by performing constituent actions of the learning task.
  • the integration and synthesis of actions may be promoted by organizing and programming them in such a way that the user's choices in one action contribute to the starting states of future actions.
  • the simulation engine tracks his/her actions and decisions, and his/her resulting impact on the cumulative metrics.
  • the user can review a history of his/her performance in the scenario with an explanation of why the metrics responded as they did. This report helps users to reflect on what they did and the resulting consequences. Through this process of reflection, users can identify how to improve their performance. All simulation levels provide the history feature.
  • the metric values at the conclusion of a scenario determine the success or failure of the scenario. Cumulative metrics also are used to determine when a user is ready to move from one level to another and when the user has achieved a sufficient level of proficiency to graduate the course.
  • a scenario is represented as a patient record containing a description of the patient and his/her injuries, a medical history, results of a physical examination and lab tests, and in some scenarios, predetermined complications that arise.
  • Scenarios are generated from a predefined set of variables. At the start of a scenario, the VMT randomly assembles variables into a unique set, ensuring a large number of possible scenarios with little chance of duplication. Users have the ability to customize the scenario by changing certain variable options selected by the computer.
  • the level of play dictates which variables are used to generate scenarios.
  • the scenarios become progressively more complex in the number and type of variables as the user advances from one level to another.
  • the VMT web-based simulation may be delivered through common Web technologies: HTTP, HTML, JavaScript, and Flash, for example.
  • the modules are tested on systems that meet the client and server requirements described below.
  • the VMT web-based simulation may be designed, developed, and packaged to run from various resources, such as a SCORM-conformant learning management system (LMS) or a CD-ROM.
  • the VMT may, for example, implement Flash and ActionScript to present a web-based simulation.
  • the product may be built upon a Flash-based courseware framework.
  • One example of such framework uses a model-view-controller (MVC) design pattern to separate the user interface (view tier) from course content (model tier).
  • the control tier of the framework controls communication between the view and the model.
  • a fourth tier of functionality, page engines, is where specific content (e.g., simulation, just-in-time instruction, helper game) is rendered.
  • the simulation is designed to adhere to the SCORM 2004 runtime API and packaging.
  • VMT web-based simulation generates and presents a new scenario to the user.
  • Bookmarking may be used to store the highest difficulty level completed by the user.
  • when running from CD-ROM, the VMT web-based simulation enables users to work through the simulation, but no tracking or bookmarking data is saved between sessions.
  • Word processing software such as Microsoft Word, for example, may be used to develop storyboards that specify course content.
  • the storyboards are then converted from storyboard content to XML and accompanying simulations, animations, and interactive exercises are produced in Macromedia (Adobe) Flash 8.
  • Photoshop, Lightwave 3D, and/or other graphics authoring tools are used to create image assets. Some functionality is developed using JavaScript.
  • the resulting product may variously consist of XML files, HTML/JavaScript, Flash Shockwave files (SWFs), supporting graphic files (e.g., JPEG, GIF), and Cascading Style Sheets (CSS).
  • the VMT web-based simulation may, for example, employ the use of state engines.
  • a state engine is a set of software routines that track a range of variables and their current settings or "states". The software initiates actions or consequences when these states meet or exceed predefined conditions.
  • a data model is created to represent and track the factors and decisions involved in a supraclavicular block.
  • the state engine provides a mathematical representation of the supraclavicular block procedure. That is, it uses numbers and algorithms to model the opening scenario and to represent changes that occur as the user interacts with the simulation (a simplified sketch of such an engine appears at the end of this section).
  • the simulation software (functionality developed in Flash ActionScript) uses random number generation routines to select variables that define the opening scenario.
  • a scenario, in the context of the VMT web-based simulation, is an instance of the supraclavicular block procedure with certain pre-defined attributes. These include such things as:
  • Patient information (age, gender, height, weight, etc.)
  • the simulation provides the user with the opportunity to change some of the input variables. For example, if a single-injection scenario is presented, the user may choose to work through a continuous-infusion scenario instead.
  • the simulation engine can evaluate states to do such things as:
  • the simulation cannot retrieve a bookmark from the LMS when the user launches the simulation from CD-ROM or the first time the user launches the simulation from an LMS. In these circumstances, the simulation defaults to the easiest difficulty level.
  • Flash (the primary development platform used for this project) does not provide a built-in 3D graphics-rendering capability. Nonetheless, the VMT facilitates a 3D experience through the use of various 2D modeling techniques. For example, graphics in the simulation are drawn to create the illusion of depth and perspective. Objects appear smaller as they move toward the background, and so forth.
  • Some simulated tasks require that the user perform within a virtual 3D space. For example, users need to identify the correct adjustments to the needle insertion point and angle to obtain the correct motor response from the simulated patient. This requires that the simulation maintain certain state information regarding the needle: the insertion point, the angle of insertion, and the depth of insertion.
  • the needle position can be calculated mathematically, using (x,y,z) coordinates to track position.
  • Visual feedback of the needle position is provided to the user through 2D graphics.
  • Collision detection is used to determine when the needle tip is close enough to a nerve to produce a motor response or when the needle is too close to a nerve. Feedback and consequences are presented if a nerve or artery is accidentally pierced.
  • Various approaches may be used to implement collision detection in three dimensions. Examples include:
  • Didactic content presented in the helpers is encoded through standard web file formats, such as XML, HTML/Javascript, Flash SWF files, JPEG, GIF, and CSS.
  • JIT instruction is presented in a layer over the movie, using a separate Flash template.
  • Feedback is triggered by variables maintained by the state engine and changes that occur within the simulation and practice games as the user interacts with them. Feedback is presented in a layer over the movie, using a separate Flash template.
  • the user can choose to show or hide the following features:
  • the introduction allows the user to select the level at which he/she will play. If the user has previously used the simulation, the introduction screen displays the last level the user successfully completed as the default choice.
  • the introduction also contains an orientation, describing the features and tools available in the interface at each game level and demonstrating how to navigate the simulation. The user may view the orientation to help determine an appropriate level at which to play.
  • the GUI is divided into two main areas: the activity area and the supporting information area.
  • the activity area is on the right side of the screen. It is where all simulation actions and decisions take place.
  • the image in the activity area depicts the room in which blocks are performed, complete with a virtual patient.
  • the view of the space zooms in or out as necessary to support the task the user is performing.
  • a menu dock is available at the bottom of the activity area.
  • the dock includes all of the virtual tools the student needs to perform learning tasks and provides access to user selectable options and helpers.
  • To select a tool the user rolls over the icon for the tool and clicks on it.
  • Tool icons scale up in size when the user rolls over them. The user may show or hide the menu dock at his/her discretion.
  • the supporting information area on the bottom right side of the interface displays information the user may need to successfully perform learning tasks.
  • the information displayed changes depending on what the user chooses to view at any given time and presents a list of links to didactic content in the helpers.
  • the selection may be set up however desired, but in one example it may include gender and various races, and possibly fictional graphical "fun" characters.
  • the coach may not animate in certain embodiments, but rather is static with a text area that updates as the user performs actions in the simulation.
  • the coach's feedback includes correct/incorrect feedback, hints, suggestions, and links to didactic content that supports the current action.
  • a status box showing the current level the user is playing, the cumulative scoring metrics, and the last action performed is available in the interface.
  • the cumulative scoring metrics are displayed as a dynamic bar chart. The bars increase or decrease to reflect the overall success rate, safety level, and patient satisfaction as they are affected by the user's choices and actions.
  • the interior window feedback is present for certain learning tasks.
  • the window appears in the GUI.
  • the figure above shows one possible implementation of this feature.
  • the interior window displays another view of the position of the needle relative to the head/neck region.
  • when helpers are accessed from the simulation, a menu of all the didactic content appears. The user can choose the specific content he/she would like to view. Helper content appears in a window on top of the simulation GUI. The GUI behind the window is grayed out and all game elements deactivated until the helper window is closed. When a helper window is closed, the user is returned to the simulation in the state it was in when the helper content was selected.
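The state engine and the three cumulative scoring metrics described above (success rate, safety level, patient satisfaction) can be pictured with a short sketch. The TypeScript below is illustrative only: the interface names, metric scales, and thresholds are hypothetical and are not taken from this document.

```typescript
// Illustrative state engine tracking the three cumulative metrics.
// Names, scales, and thresholds are hypothetical, not taken from the document.

interface Metrics {
  successRate: number;         // how likely the block is to be effective (0-100)
  safetyLevel: number;         // how well the user's actions avoid complications (0-100)
  patientSatisfaction: number; // patient discomfort/anxiety, inverted (0-100)
}

interface ScenarioState {
  level: number;
  metrics: Metrics;
  lastAction: string;
  complications: string[];
}

interface ActionResult {
  name: string;
  delta: Partial<Metrics>;       // how this action moves the metrics
  triggersComplication?: string; // e.g. "pneumothorax"
}

const clamp = (v: number) => Math.max(0, Math.min(100, v));

function applyAction(state: ScenarioState, result: ActionResult): ScenarioState {
  const metrics: Metrics = {
    successRate: clamp(state.metrics.successRate + (result.delta.successRate ?? 0)),
    safetyLevel: clamp(state.metrics.safetyLevel + (result.delta.safetyLevel ?? 0)),
    patientSatisfaction: clamp(
      state.metrics.patientSatisfaction + (result.delta.patientSatisfaction ?? 0)),
  };
  const complications = result.triggersComplication
    ? [...state.complications, result.triggersComplication]
    : state.complications;
  return { ...state, metrics, complications, lastAction: result.name };
}

// The engine evaluates the state against predefined conditions, e.g. scenario pass/fail.
function scenarioPassed(state: ScenarioState): boolean {
  const { successRate, safetyLevel, patientSatisfaction } = state.metrics;
  return successRate >= 70 && safetyLevel >= 70 && patientSatisfaction >= 60; // placeholder thresholds
}

// Example: advancing the needle medially lowers safety and may trigger a pneumothorax.
const after = applyAction(
  { level: 1, metrics: { successRate: 50, safetyLevel: 100, patientSatisfaction: 80 },
    lastAction: "none", complications: [] },
  { name: "advance needle medially",
    delta: { safetyLevel: -40, patientSatisfaction: -10 },
    triggersComplication: "pneumothorax" });
console.log(after.metrics, scenarioPassed(after));
```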

Abstract

Virtual medical training with simulation of medical procedures. Data is received corresponding to manipulation of an external input/output interface. A medical procedure is simulated based upon the data generated by the external input/output interface. The simulation of the medical procedure includes a simulated three-dimensional patient model, as well as simulating placement of a medical probe in a simulated three-dimensional patient model. The simulated medical probe comprises a probe hub and a probe tip and the placement of the simulated medical probe comprises positioning the direction of the probe tip in relation to the probe hub and the simulated three-dimensional patient model and positioning the distance the probe tip is inserted into the simulated three-dimensional patient model.

Description

COMPUTER-BASED VIRTUAL MEDICAL TRAINING METHOD AND APPARATUS
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Application Ser. No.
60/907,420, entitled "Computer-Based Virtual Medical Training Method and Apparatus," filed on April 2, 2007, the entire contents of which are hereby incorporated by reference.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH AND
DEVELOPMENT
[0002] Research and development for this invention was sponsored by the U.S. Air
Force Medical Service Directorate of Modernization, under Contract Award No. W81XWH-06-2-0010.
BACKGROUND OF THE INVENTION
1. Field of the Invention
[0003] This invention relates generally to medical training and more particularly to a medical training method and apparatus that is suitable for a web-based training program, and which provides interactive training and feedback. More particularly, the invention provides a virtual medical training method and apparatus incorporating simulation of tactile feedback, three-dimensional spaces, and physiological processes without encumbering data transfer and processing in a web-based environment.
2. Description of the Related Art
[0004] A significant percentage of modern combat injuries involve extremities such as the arms, legs, hands, and feet. Modern day body armor protects a soldier's "kill zones" (the head and torso). However, in order to preserve the soldier's mobility, the body armor leaves extremities vulnerable to damage. Injuries involving shattered bones, exploded muscles, and severed limbs are extremely painful and require aggressive pain management, particularly while the patient is transported to specialized medical facilities, as well as during and after surgery.
[0005] Until recently, battlefield general anesthesia was the preferred method of treatment for pain control. Anesthesia has the capability to mask pain and relax muscles throughout the entire body, but it produces a state of complete unconsciousness, sometimes referred to as a "mini death." The anesthetizing procedure is risky, requiring constant monitoring of the patient's heart rate, blood pressure, and respiration for the duration of the procedure. As traditional anesthesia wears off, pain returns, and patients often require morphine or other highly addictive analgesics.
[0006] To decrease the side effects and risks associated with general anesthesia, battlefield pain control techniques have been modernized with the use of a regional anesthesia technique called a peripheral or regional nerve block. This procedure targets pain signals coming from a single injured area, leaving other body systems outside of the targeted area unaffected. The epidural block performed during childbirth is one familiar application of the regional anesthesia technique.
[0007] Currently, regional anesthesiology textbooks and websites provide merely didactic training content. The Air Force has identified the peripheral nerve block (specifically, the supraclavicular block) as an important joint force readiness skill for training transformation, and has determined there is an urgent need for a training program beyond the didactic training courses currently available.
SUMMARY OF THE INVENTION
[0008] The present invention provides virtual medical training with simulation of medical procedures.
[0009] Data is received corresponding to manipulation of an external input/output interface. A medical procedure is simulated based upon the data generated by the external input/output interface. The simulation of the medical procedure includes a simulated three- dimensional patient model, as well as simulating placement of a medical probe in a simulated three-dimensional patient model. The simulated medical probe comprises a probe hub and a
probe tip and the placement of the simulated medical probe comprises positioning the direction of the probe tip in relation to the probe hub and the simulated three-dimensional patient model and positioning the distance the probe tip is inserted into the simulated three- dimensional patient model.
[0010] The present invention can be embodied in various forms, including business processes, computer implemented methods, computer program products, computer systems and networks, user interfaces, application programming interfaces, and the like.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] These and other more detailed and specific features of the present invention are more fully disclosed in the following specification, reference being had to the accompanying drawings, in which:
[0012] FIG. 1 is a block diagram illustrating the overall VMT apparatus.
[0013] FIG. 2 is a block diagram illustrating the learning tasks which can be accomplished using the VMT.
[0014] FIGs. 3A-C are diagrams illustrating the needle insertion point selection, placement of the needle, and dragging of the needle hub.
[0015] FIG. 4 is a diagram demonstrating the representation of a three-dimensional needle using a two-dimensional graphical interface viewable by the user.
[0016] FIG. 5 is a diagram illustrating simulation of a needle shifting along a needle vector.
[0017] FIG. 6 is a diagram illustrating a representation of a needle on various slides.
[0018] FIG. 7 is a diagram illustrating a representation of a needle on various slides, other than the needlepoint slide.
[0019] FIG. 8 is a diagram illustrating a representation of a needle on various slides,
including radius coordinates.
[0020] FIG. 9 is a diagram illustrating a representation of a needle on various slides with radius coordinates.
[0021] FIG. 10 is a diagram illustrating a sphere representing a radius of electrical charge created by a needle.
[0022] FIG. 11 is a diagram illustrating the position of the needle and the position of
the radius of electrical charge on various slides.
[0023] FIG. 12 is a diagram illustrating programming logic for the simulation.
DETAILED DESCRIPTION OF THE INVENTION
[0024] In the following description, for purposes of explanation, numerous details are set forth, such as flowcharts and system configurations, in order to provide an understanding of one or more embodiments of the present invention. However, it is and will be apparent to one skilled in the art that these specific details are not required in order to practice the present invention.
[0025] One or more embodiments of the present invention may provide a training program that provides a web-based simulation with didactic content embedded for support, and further provide haptic feedback to the user or trainee.
[0026] In the web-based simulation trainer, which may be referred to as a Virtual
Medical Trainer (VMT), users perform three main learning tasks: identify needle insertion point, correct needle placement, and inject anesthetic. Each learning task is made up of a series of constituent skills (hereafter called actions). The learning tasks or scenarios and their constituent actions are listed below.
[0027] Identify Needle Insertion Point
• Find external landmarks
• Mark external landmarks
• Mark needle insertion point
[0028] Correct Needle Placement
• Set current of nerve stimulator
• Place needle
• Introduce needle
• Achieve general needle placement
• Fine tune needle placement based on motor responses produced by nerve stimulator
• Ensure safe distance between needle tip and nerve
[0029] Inject Anesthesia
• Aspirate for blood
• Administer test dose (Raj test)
• Inject anesthesia 5 ml at a time with aspiration every 5 ml and between each syringe
[0030] In scenarios calling for a continuous block, users perform the additional learning task of placing a catheter. This includes the following actions:
[0031] Place catheter
• Dilate perineural space
• Introduce catheter
• Withdraw needle
• Secure catheter
[0032] The VMT uses a simulated three-dimensional training environment having a three-dimensional patient model. The VMT simulation further provides a simulation of structures lying beneath the visible exterior of the three-dimensional patient model, including muscle, bone, vascular system, and nervous system structures.
[0033] The VMT provides the user with continuous haptic and didactic feedback during the training simulation.
[0034] The VMT further simulates medical complications that may arise during the procedure for which the user is training. The user should recognize and identify the complications, and select the appropriate course of action to remedy them.
[0035] The VMT is preferably a SCORM-compliant, E-Learning course used to teach
Air Force medical personnel the Supraclavicular Block of the Brachial Plexus (the
"Procedure"). This Procedure teaches students how to deliver anesthesia directly to the correct nerve in the shoulder that will numb the arm for surgery or to help manage pain. The
Procedure is done to avoid having the patient undergo general anesthesia, which could lead to complications.
[0036] As a SCORM-compliant course, the medical simulation resides on a Learning
Management System ("LMS") and runs totally within a web browser. Therefore, a platform such as Adobe's Flash technology is implemented in the medical simulation and the didactic content. Due to Flash's two-dimensional environment, there are additional features, described herein, that facilitate simulation of the needle, movement of the needle, and the human shoulder.
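Because the course is SCORM-compliant and runs from an LMS, progress data such as the bookmark described later in this document (the highest difficulty level completed) would be exchanged through the SCORM 2004 runtime API. The sketch below is a hedged illustration: API_1484_11 discovery and the cmi.location element are standard SCORM 2004 mechanisms, but exactly how the VMT stores its bookmark is not stated here.

```typescript
// Hedged sketch: bookmarking the highest completed level via the SCORM 2004 runtime API.
// The API discovery and data-model element shown are standard SCORM 2004; how the VMT
// actually uses them is an assumption made for illustration.

interface Scorm2004API {
  Initialize(param: ""): "true" | "false";
  Terminate(param: ""): "true" | "false";
  GetValue(element: string): string;
  SetValue(element: string, value: string): "true" | "false";
  Commit(param: ""): "true" | "false";
}

// Walk up parent frames looking for the LMS-provided API_1484_11 object.
function findAPI(win: Window): Scorm2004API | null {
  let w: Window | null = win;
  for (let i = 0; w && i < 10; i++) {
    const api = (w as any).API_1484_11;
    if (api) return api as Scorm2004API;
    if (w === w.parent) break;
    w = w.parent;
  }
  return null;
}

function saveHighestLevel(api: Scorm2004API, level: number): void {
  // cmi.location is commonly used as a lightweight bookmark.
  api.SetValue("cmi.location", String(level));
  api.Commit("");
}

function loadHighestLevel(api: Scorm2004API | null): number {
  // With no LMS (e.g. CD-ROM delivery) or no stored bookmark, default to the
  // easiest level, as the description notes.
  if (!api) return 1;
  const value = api.GetValue("cmi.location");
  return value === "" ? 1 : Number(value);
}
```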
[0037] The VMT provides learning tasks, which are concrete whole-task experiences that the user or learner is typically asked to perform in a real or simulated environment. The learning tasks require application of the constituent skills that make up the learning task, as opposed to studying general information about or related to those skills.
[0038] Task classes are represented in the degree of difficulty between "game levels" in the VMT simulation. A simulation level consists of a similar task class of each of the learning tasks. Each level is comprised of a set of scenarios in which the user practices all of the learning tasks multiple times under varying conditions. Each level is more difficult than the previous one (a configuration sketch follows the list below). Difficulty is increased by:
• Increasing the number and type of variables present in the scenarios
• Increasing the number of actions performed or the number of options available within the actions
• Decreasing the amount of supportive and JIT information that is available to the user
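As a non-authoritative sketch, the difficulty dimensions just listed might be captured in a level configuration like the following; all field names and values are hypothetical.

```typescript
// Illustrative only: one way to encode the difficulty dimensions listed above.
// Field names and values are hypothetical; the document does not prescribe a data format.

interface LevelDefinition {
  level: number;
  scenarioVariables: string[];     // which patient/scenario variables may appear
  availableActions: string[];      // actions (and options within them) the user performs
  jitSupportEnabled: boolean;      // whether just-in-time helpers are offered
  interiorWindowScenarios: number; // how many early scenarios show the interior window
}

const levels: LevelDefinition[] = [
  {
    level: 1,
    scenarioVariables: ["age", "gender"],
    availableActions: ["mark landmarks", "place needle", "inject anesthetic"],
    jitSupportEnabled: true,
    interiorWindowScenarios: 5,
  },
  {
    level: 2,
    scenarioVariables: ["age", "gender", "weight", "injury type", "complication"],
    availableActions: [
      "mark landmarks", "place needle", "set stimulator current",
      "aspirate", "test dose", "inject anesthetic", "place catheter",
    ],
    jitSupportEnabled: false,
    interiorWindowScenarios: 2,
  },
];

// A scenario is assembled by randomly choosing values for the level's variables,
// mirroring the random scenario generation described later in this document.
function generateScenario(def: LevelDefinition, options: Record<string, string[]>) {
  const pick = (xs: string[]) => xs[Math.floor(Math.random() * xs.length)];
  const scenario: Record<string, string> = {};
  for (const v of def.scenarioVariables) {
    if (options[v]) scenario[v] = pick(options[v]);
  }
  return scenario;
}
```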
[0039] FIG. 1 is a block diagram illustrating an example of the flow of information corresponding to the VMT system. The user 1 manipulates the external input/output interface 2, which may comprise a computer keyboard and mouse. The data generated by manipulation of the keyboard and/or mouse by the user is transferred to the computer 3 which runs the medical procedure training simulations 4 and the three-dimensional patient model on which the simulated medical procedures are performed. The computer then outputs signals to the display 5 based on the medical procedure training simulations 4. The user 1 views the display 5, which provides images representing the medical procedure training simulations 4 and the three-dimensional patient model. The user also receives both didactic and haptic feedback from the display 5.
[0040] FIG. 2 is a block diagram illustrating the medical procedure training simulations, including learning tasks performed by the user. The block diagram and the following description are provided for illustrative purposes and do not limit the medical procedure training simulations to the particular sequences shown and described. Additionally, the learning tasks and elements shown and described do not exclude the inclusion of additional elements or steps. The learning tasks are virtual three-dimensional simulations generated by the Virtual Medical Trainer (VMT) 6. The medical procedure learning tasks are performed by the user on a three-dimensional patient model using simulated medical tools or probes generated by the VMT 6 program. As explained further below, the functionality of the VMT 6 may be provided by executing software instructions that are stored in memory on any computing platform.
[0041] FIG. 2 illustrates the simulated learning tasks, which may be completed by the user, including the tasks of landmarking 7 (identifying a needle insertion point), placing a needle and setting a nerve stimulator 11, administering anesthesia 16, and inserting and securing a catheter 23. The VMT (such as through the computer display) provides feedback 10 as the user carries out the learning tasks.
[0042] During the landmarking 7 learning task, the user locates and identifies external landmarks 8 and marks external landmarks with an 'X' 9 on the three-dimensional patient model.
[0043] Following completion of the landmarking 7 learning task, the user may perform the learning task of placing a needle and setting a nerve stimulator 11. The nerve stimulator is located at the end of a simulated needle. The user first places the needle on the 'X' 12, which was marked during the landmarking 7 learning task. The user then sets the nerve stimulator 14. Based on motor responses simulated by the three-dimensional patient model, the user adjusts the depth and position of the needle and/or the nerve stimulator 15. At this point, the user may be required to return to the landmarking 7 learning task.
[0044] Following completion of the learning task of placing a needle and setting a nerve stimulator 11, the user may perform the learning task of injecting anesthesia 16. The user first aspirates for blood 17. The user then administers a test dose of anesthesia 18 to the three-dimensional patient model. Based on the response of the three-dimensional patient model to the test dose, the user may be required to return to the learning task of placing a needle and setting a nerve stimulator 11. Alternatively, the user may administer anesthesia to the three-dimensional patient model by injecting the anesthesia 19.
[0045] Complications 20 may arise during either the learning task of placing a needle and setting a nerve stimulator 11 or the learning task of injecting anesthesia 16. If a complication 20 arises during either of the learning tasks, the user is challenged to recognize and identify the complication and its underlying cause 21. To be successful, the user should preferably select the best course of action to remedy the complication 22.
[0046] Following completion of the learning task of injecting anesthesia 16, the user may perform the learning task of inserting and securing a catheter 23. The learning task of inserting and securing a catheter 23 is applicable in scenarios calling for a continuous block.
The user first inserts the catheter in the three-dimensional patient model 24. The user then withdraws the needle from the three-dimensional patient model 25, and secures the catheter in place 26.
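The task sequence of FIG. 2, including the returns forced by motor responses, test-dose results, and continuous-block scenarios, can be summarized as a small state machine. The following sketch paraphrases that flow; the task and event names are illustrative and are not the VMT's actual code.

```typescript
// Sketch of the FIG. 2 learning-task sequence as a simple state machine.
// Task and event names paraphrase the description; this is not the VMT's actual code.

type Task = "landmarking" | "needlePlacement" | "injectAnesthesia" | "catheter" | "complete";

type TaskEvent =
  | "landmarksMarked"      // external landmarks located and marked with an 'X'
  | "needleMisplaced"      // motor response indicates the needle must be repositioned
  | "stimulatorConfirmed"  // correct motor response obtained
  | "testDoseFailed"       // response to the test dose forces a return to needle placement
  | "anesthesiaInjected"
  | "catheterSecured";

function nextTask(current: Task, event: TaskEvent, continuousBlock: boolean): Task {
  switch (current) {
    case "landmarking":
      return event === "landmarksMarked" ? "needlePlacement" : "landmarking";
    case "needlePlacement":
      if (event === "needleMisplaced") return "landmarking";     // re-identify landmarks
      return event === "stimulatorConfirmed" ? "injectAnesthesia" : "needlePlacement";
    case "injectAnesthesia":
      if (event === "testDoseFailed") return "needlePlacement";  // reposition the needle
      if (event === "anesthesiaInjected") return continuousBlock ? "catheter" : "complete";
      return "injectAnesthesia";
    case "catheter":
      return event === "catheterSecured" ? "complete" : "catheter";
    default:
      return current;
  }
}

// Example: a continuous-block scenario proceeds to catheter placement after injection.
console.log(nextTask("injectAnesthesia", "anesthesiaInjected", true)); // "catheter"
```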
[0047] The VMT 6 additionally provides a desk reference 27 which is instructional and applicable to the VMT 6 learning tasks. The reference provides didactic content to aid and enhance the user's learning experience.
[0048] The VMT 6 records a history and summary 29 documenting the user's activities, and the results of the learning tasks. The history and summary 29 are available for viewing following the user's completion of learning tasks.
[0049] Three-dimensional Simulation
[0050] The goal of the VMT is to teach the whole Procedure; however, a preferred primary goal is to have the VMT test the ability of the student ("user") to find the correct nerve. Thus, the needle should preferably move in three dimensions.
[0051] As illustrated in FIGs. 3A-C, the user selects the insertion point 30 on the body image (see FIG. 3A), which is created by taking a snapshot of a three-dimensional model with the three-dimensional graphics software, Lightwave. The simulation places the needle 31 in the spot selected (see FIG. 3B) appearing to be straight up and down ("Normal Vector"), and the insertion point 30 (the anchor point) and the needle handle ("hub") can be dragged by the user's mouse. With the needle length remaining constant, dragging the needle hub 32 away from the insertion point 30 increases the angle of the needle 31 from the Normal Vector (see FIG. 3C).
[0052] The angle of the needle 31 is simulated in two-dimensions by increasing the visual length of the needle 31 on the screen. The further away the needle hub 32 is dragged, the longer the needle 31 appears to be to the student, and the greater the angle from the Normal Vector 33 (FIG. 4).
[0053] Therefore, dragging the needle hub 32 controls the needle's angle and pitch. To insert or remove the needle 31, the student may, for example, use the up or down arrow keys of the keyboard. This motion is simulated by moving the needle graphic along the needle vector while changing the length of an image mask 34 located at the insertion point 30. Thus, the needle 31 appears to be inserted into the skin by shifting it and hiding (or "masking") the tip. The mask 34 and the needle 31 beneath are invisible to the student (FIG. 5).
[0054] With the needle 31 being simulated in three dimensions and being inserted into and out of the body, the simulation further simulates the needle "hit detection" under the skin, simulating the needle 31 hitting the nerve, bone or other obstacles. The major concept is to have a 3D section of the shoulder "sliced" into layers. These layers or slides 35 show the internal organs (FIG. 6). These internal slides 35 are invisible to the student. The student has to use other cues to determine where the needle 31 is. The main indicator is the simulated electrical stimulator that delivers an electrical charge to the needlepoint 40 and subsequently the nerve. If placed correctly, the electrical charge visually contracts the muscles corresponding to the brachial plexus nerve. Other muscular visual effects occur if incorrect nerves are stimulated, allowing the student to know that the needle 31 was inserted incorrectly.
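One way to picture the "sliced" shoulder is as an ordered stack of slides, each at a known depth and carrying hit regions for the organs it intersects. The sketch below is a plain data model for illustration only; the actual implementation uses Flash movie clips, and the circular regions and coordinates shown are placeholders.

```typescript
// Illustrative data model for the "sliced" shoulder: an ordered stack of slides,
// each at a known depth, carrying hit regions for the organs it intersects.
// The real implementation uses Flash movie clips; circles stand in for organ shapes.

type Organ = "skin" | "artery" | "nerve" | "lung" | "collarBone" | "topRib";

interface HitRegion {
  organ: Organ;
  x: number;      // region center on the slide
  y: number;
  radius: number; // simplified circular region (placeholder)
}

interface Slide {
  zDepth: number;       // depth of this slide beneath the skin surface
  regions: HitRegion[]; // organs intersecting this slide
}

// Slides ordered from the surface downward, mirroring FIG. 6. Values are placeholders.
const slides: Slide[] = [
  { zDepth: 0,  regions: [{ organ: "skin",   x: 120, y: 80,  radius: 200 }] },
  { zDepth: 10, regions: [{ organ: "artery", x: 100, y: 90,  radius: 8 },
                          { organ: "nerve",  x: 115, y: 95,  radius: 6 }] },
  { zDepth: 20, regions: [{ organ: "topRib", x: 90,  y: 110, radius: 25 },
                          { organ: "lung",   x: 80,  y: 130, radius: 60 }] },
];

// Point-in-region test standing in for Flash's hitTest.
function hitTest(region: HitRegion, x: number, y: number): boolean {
  const dx = x - region.x;
  const dy = y - region.y;
  return dx * dx + dy * dy <= region.radius * region.radius;
}
```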
[0055] The hit detection algorithm may, for example, implement Flash's movie clip hitTest function, where, given a point's x and y coordinates, the function returns information as to whether the point lies within the movie clip. Given this information, a layer contains multiple movie clips for each body organ it intersects. The organs tracked are the skin, artery, nerves, lung and bones (collar and top rib). The skin is used to find the z coordinate of the insertion point 30. The hit detection algorithm proceeds down the slides 35 until the needle 31 circle intersects skin (FIG. 6).
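Continuing the slide sketch above, finding the z coordinate of the insertion point amounts to walking down the slides until the needle circle first intersects skin, and the same per-slide test reports any other organs that are hit. Again, this is an illustrative sketch rather than the patent's code.

```typescript
// Continues the slide sketch above (Slide, HitRegion, hitTest). Walk down the slide
// stack until the needle circle first lies within skin, giving the z coordinate of
// the insertion point (compare FIG. 6). Illustrative only.

function findInsertionDepth(stack: Slide[], needleX: number, needleY: number): number | null {
  for (const slide of stack) {
    const onSkin = slide.regions.some(
      r => r.organ === "skin" && hitTest(r, needleX, needleY));
    if (onSkin) return slide.zDepth; // first slide where the point falls within skin
  }
  return null; // the selected point does not lie over the patient model
}

// The same per-slide test reports which other organs a needle or charge circle touches,
// e.g. bone that should block further movement of the needle.
function organsHit(slide: Slide, x: number, y: number): Organ[] {
  return slide.regions.filter(r => hitTest(r, x, y)).map(r => r.organ);
}
```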
[0056] The needle 31 and the electrical charge are virtually represented as a cylinder 36 and a sphere, respectively. The needle cylinder 36 and electrical charge sphere are represented as needle circles 39 and electrical charge circles 37 on the slides 35 they intersect. By using this information, the needle shaft and electrical charge influence can be tracked on slides 35 other than the needlepoint slide 38 (FIG. 7). As seen in FIG. 7, the needle circles 39 correspond to where the needle 31 intersects the slide 35. These needle circles 39 are of the same radius; however, their location is dependent on the pitch, angle, and depth of the needle 31. Electrical charge circles 37 also represent the electrical influence, but their center is always the X, Y coordinates of the needlepoint 40 on the needlepoint slide 38. The radiuses of the electrical influences, represented by the electrical charge circles 37, decrease in slides 35 further away from the needlepoint slide.
[0057] Calculating the needle circles 39 on the slides 35 is done through conversion of spherical coordinates to Cartesian coordinates. Knowing the angle and pitch of the exposed needle and the depth of the needle under the skin facilitates calculating the x and y coordinates of the needle circle 39 on every slide 35.
[0058] The angle (theta) of the needle beneath the skin is opposite that of the drawn needle; see Formula (1). The next step is to determine the pitch (phi) of the needle 31. The pitch is calculated by using the screen (drawn) length 41 of the needle 31 and the actual (real) length 42 of the needle 31 above the skin (FIG. 8) and then adding 90 degrees; see Formula (2).
θ = aboveScreenAngle + (aboveScreenAngle < 180° ? 1 : -1) * π    (1)
φ = π/2 + cos⁻¹(drawnLength / realLength)    (2)
r = zDepth / cos α    (3)
Spherical to Cartesian coordinates (4):
x = r cos θ sin φ
y = r sin θ sin φ
z = zDepth
[0059] The radius (r) coordinate of a needle circle 39 is calculated using the pitch angle according to Formula (3). If the radius coordinate is greater than the zDepth of the slide 35, then the needle 31 intersects that slide 35 (FIG. 9). The alpha angle is the 90-degree complement of the pitch angle calculated using Formula (2). Unlike the angle (theta) and the pitch (phi), which remain constant over all slides 35 for a given needle position, the radius coordinate is different for each slide 35.
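The per-slide coordinate calculation of Formulas (1) through (4) can be transcribed directly into code. The sketch below is a literal, illustrative transcription in TypeScript (not the original ActionScript); the variable names and the use of radians are assumptions, and the geometry is taken exactly as stated in the formulas.

```typescript
// Literal, illustrative transcription of Formulas (1)-(4) (assumed names).
// Angles are handled in radians; zDepth is the depth of a slide below the skin.

interface NeedleAboveSkin {
  aboveScreenAngle: number; // on-screen angle of the drawn needle, in radians
  drawnLength: number;      // length of the needle as drawn on the screen
  realLength: number;       // actual length of the needle above the skin
}

function needleCircleOnSlide(n: NeedleAboveSkin, zDepth: number) {
  // Formula (1): the under-skin angle is opposite that of the drawn needle
  // (the patent's 180-degree test becomes a test against pi in radians).
  const theta =
    n.aboveScreenAngle + (n.aboveScreenAngle < Math.PI ? 1 : -1) * Math.PI;

  // Formula (2): pitch from the drawn-to-real length ratio, plus 90 degrees.
  const phi = Math.PI / 2 + Math.acos(n.drawnLength / n.realLength);

  // Formula (3): radial coordinate for this slide, using alpha, the
  // 90-degree complement of the pitch.
  const alpha = phi - Math.PI / 2;
  const r = zDepth / Math.cos(alpha);

  // Formula (4): spherical to Cartesian; z is simply the slide depth.
  return {
    x: r * Math.cos(theta) * Math.sin(phi),
    y: r * Math.sin(theta) * Math.sin(phi),
    z: zDepth,
  };
}
```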
[0060] With the spherical coordinates of each needle circle 39 calculated, the Cartesian coordinates may then be determined through the translation Formulas (4). Accordingly, it is possible to determine the deepest slide 35 the needle 31 has reached and the (x, y) coordinates of each needle circle 39 on every slide 35 the needle 31 intersects.
[0061] The second function of the hit detection algorithm is to determine which nerves are influenced by the electrical charge. The electrical charge is a virtual sphere 43 whose center is located at the needlepoint 40. It is represented as electrical charge circles 37 of decreasing radii on slides 35 that are further away from the needlepoint slide 38 (FIG. 10).
[0062] The electrical charge circles 37 vary in radius; however, the center of each electrical charge circle 37 remains the same as the needlepoint 40 (FIG. 11). The electrical charge circles can be calculated using the Pythagorean theorem according to Formula (5). The radius of each electrical charge circle 37 is calculated recursively until the radius reaches zero.
newRadius = √(originalRadius² - distanceBetweenSlides²)    (5)
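One reading of Formula (5), with the recursion expressed as stepping slide by slide away from the needlepoint slide, is sketched below in TypeScript for illustration; the function names and the interpretation of distanceBetweenSlides as the per-slide spacing are assumptions.

```typescript
// Illustrative reading of Formula (5): the cross-section radius of the charge
// sphere on a slide a given distance from the needlepoint slide.
function chargeCircleRadius(originalRadius: number, distanceFromNeedlepointSlide: number): number {
  const squared = originalRadius ** 2 - distanceFromNeedlepointSlide ** 2;
  return squared > 0 ? Math.sqrt(squared) : 0;
}

// Radii on successive slides, stepping away from the needlepoint slide until
// the influence falls to zero, matching the recursive calculation described.
function chargeCircleRadii(originalRadius: number, distanceBetweenSlides: number): number[] {
  const radii: number[] = [];
  for (let k = 1; ; k++) {
    const r = chargeCircleRadius(originalRadius, k * distanceBetweenSlides);
    if (r <= 0) break;
    radii.push(r);
  }
  return radii;
}
```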
[0063] With the above methods of hit detection, the simulation determines whether the needle 31 collides with body parts and which body parts the electrical charge influences. For example, when the needle collides with a bone, a message is returned to the needle movement function to stop allowing movement in that direction. The electrical charge sends trigger calls to the body image to animate the muscle that is receiving the shock. This method allows the student to perform three-dimensional tasks using two-dimensional software.
[0064] In one embodiment, the VMT web-based simulation is designed to help Air
Force anesthesiologists to meet the Readiness Skills Verifications (RSVs) for regional anesthesiology prior to deployment. It may also be used by anesthesiology residents and certified registered nurse anesthetists (CRNAs) as training prior to attempting a peripheral nerve block on a human patient.
[0065] Although any number of configurations may be provided for the described functionality, including those with fewer, more, or differently labeled modules and layers, one example of the VMT simulation software architecture consists of four main layers, described as follows:
[0066] 1) The "navigation" layer provides the common graphic frame or background of the user interface, Main Menu functionality, and common navigation controls. The main menu appears when the learner first launches the simulation. The main menu welcomes the learner and provides navigation links to the introductory tutorial, the practice simulation levels, the testing simulation level, and the Desk Reference.
[0067] 2) The "content" layer includes the introduction, each of the tasks in the simulation, and the summary. The control layer provides each module with a specific set of state variables as the module is launched. The values held within these variables change as the user acts in the interface. When the learner completes the module, the module passes the state variables back to the control layer. The control layer, in turn, passes the state variables to the next module in the sequence to specify its initial state.
[0068] 3) The "control layer" is the overseer. It manages the launching of support and simulation modules, tracks the learner's progress through the content, communicates with the learning management system, and determines which support features are available at various levels of play.
[0069] 4) The "support" layer contains items that provide instructional support
("scaffolding") to the learner. Some of these items appear in the simulation at lower game levels, but become unavailable as the user progresses to higher levels of play.
[0070] There are three classes of users of the VMT web-based simulation:
[0071] 1) Learner: A learner is one who uses the simulation to practice learning tasks in order to increase their competence in performing supraclavicular blocks. There are three anticipated groups of learners: anesthesiologists, anesthesiology residents, and CRNAs. Each group has a different beginning level of skill and knowledge in the procedure. Learners may be required to successfully complete the simulation as part of a larger training program, assessment, or certification process.
[0072] 2) System Administrator: A system administrator is responsible for deploying and maintaining the simulation on a SCORM 2004-conformant LMS.
[0073] 3) Training Administrator: A training administrator works through the LMS to extract and report on learner progress and status in the course.
[0074] As much as possible, all user interface and content text is contained within XML, rather than encoded directly into the Flash SWF modules. This facilitates the editing process, making it easier to implement textual changes in the simulation content.
[0075] To enable the VMT web-based simulation to be hosted on an LMS, the simulation may be designed to adhere to the SCORM 2004 runtime API and packaging. For simplicity of design and a cohesive user experience, the simulation may be implemented as a single SCO. Simplicity is important not only to minimize confusion on the part of the user, but also to keep development costs within project scope and to promote robust, high-quality design.
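As one illustrative way to keep text out of the compiled SWF modules, interface strings could be read from an XML file at startup, along the lines of the TypeScript sketch below. The file name, element structure, and use of fetch/DOMParser are assumptions for illustration only.

```typescript
// Illustrative: load user-interface strings from an external XML file so that
// textual changes do not require republishing the Flash modules. The file
// name and element structure are assumptions.
async function loadUiStrings(url: string): Promise<Map<string, string>> {
  const xmlText = await (await fetch(url)).text();
  const doc = new DOMParser().parseFromString(xmlText, "application/xml");
  const strings = new Map<string, string>();
  for (const node of Array.from(doc.querySelectorAll("string"))) {
    const id = node.getAttribute("id");
    if (id) strings.set(id, node.textContent ?? "");
  }
  return strings;
}

// Hypothetical usage:
//   const ui = await loadUiStrings("vmt_ui_strings.xml");
//   mainMenuTitle.text = ui.get("mainMenu.title") ?? "";
```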
[0076] The graphical user interface (GUI) for the VMT web-based simulation may be designed to look and feel like a game. This affects all aspects of the design, including color palette, image styles, navigation tools, scoring, and the overall organization of GUI elements. The ultimate design goal is to provide an intuitive interface that allows learners to focus on mastering learning tasks, rather than operation of the GUI.
[0077] The VMT simulation may operate on a client computer with any conventional computing platform. For example, in one embodiment, the requirements are as follows. These details are provided by way of example only, as embodiments may be implemented for operation with any number of potential computing platforms.
• Microsoft Windows XP, SP2
• Microsoft Internet Explorer version 6.0 (with JavaScript enabled)
• Macromedia Flash Player 8
• (If running from AFIADL Meridian LMS) at least 56 kbps Internet network access
• (If running from CD-ROM) CD-ROM or DVD drive
• 750 MHz Intel Pentium III processor or greater (or equivalent)
• 256 MB of RAM
• Keyboard and pointing device (e.g. mouse)
• A computer monitor with a resolution of 800 x 600, 256 colors
[0078] The VMT simulation can be launched and run from various resources, including but not limited to a learning management system (LMS) or from CD-ROM. Depending on which of these ways the simulation is launched, there may be slight differences in specific portions of the functionality as described herein.
[0079] At the beginning of the simulation, users are given the ability to select the level at which they would like to begin. Users also have the option to remove feedback mechanisms on their own by hiding them in the interface.
[0080] In order to learn a task where performance varies from one situation to another, students require information that helps them to bridge the gap between what they already know and new aspects of the task. Supportive information (traditionally called the "theory" or didactic content) provides this bridge. In the VMT simulation, supportive information takes two forms: helpers and feedback.
[0081] Helpers contain didactic content that the user can access as needed while performing learning tasks. There are five standard helpers available: the Procedure Helper, the Anatomy Helper, the Medications Helper, the Equipment Helper, and the Risks Helper. Users can look up information in these five content areas, as required to successfully complete the learning tasks. When the user accesses a helper, the didactic content appears over the main simulation interface. When the user exits a helper, he/she returns to the simulation in the state it was in when the helper was invoked.
[0082] There may, for example, be four types of feedback provided in the simulation interface. Feedback mechanisms are user-controlled. Each user can decide if and when he/she wants to view feedback. Examples of feedback mechanisms are described below:
[0083] One form of feedback to the user's actions is provided through naturally occurring consequences in the simulation. For instance, if the user advances the needle in a medial direction, a pneumothorax may occur. Consequences are presented through visual or textual feedback in the interface.
[0084] The character of an experienced practitioner may serve as an avatar that provides instructive feedback in response to the user's actions. The coach calls out incorrect or skipped actions with an explanation of the potential consequences. The coach's feedback also directs the user to didactic content supporting the action in question. The user has the option to show or hide the coach in the interface.
[0085] Cumulative scoring uses specific metric values that rise or fall as the user acts within the simulation, depending on the appropriateness of his/her choices. The three metric values include: success rate, safety level, and patient satisfaction. Success rate reflects how likely the block is to be effective based on the user's actions. Safety level indicates the degree to which the user's actions might contribute to the development of complications and side effects, or harm to the patient. Patient satisfaction is related to the amount of discomfort or anxiety the user's actions inflict on the patient.
[0086] The cumulative scoring gives the user a dynamic view of how well he/she is doing in the scenario. Cumulative scores are used to determine how successful the user is in each scenario, when a user may progress from one level to another, and ultimately, when the user has successfully completed the course. The user can choose to view the cumulative scoring at all times, or only when he/she wants to check his/her progress.
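A minimal sketch of how the three cumulative metrics might be tracked is shown below in TypeScript. The 0-100 scale, field names, and example deltas are assumptions; the patent specifies only that the metrics rise and fall with the appropriateness of the user's choices.

```typescript
// Illustrative cumulative scoring sketch (0-100 scale and field names assumed).
interface Metrics {
  successRate: number;         // how likely the block is to be effective
  safetyLevel: number;         // risk of complications, side effects, or harm
  patientSatisfaction: number; // discomfort or anxiety inflicted on the patient
}

function clamp(value: number): number {
  return Math.max(0, Math.min(100, value));
}

// Apply the outcome of a single user action to the cumulative metrics.
function applyOutcome(m: Metrics, delta: Partial<Metrics>): Metrics {
  return {
    successRate: clamp(m.successRate + (delta.successRate ?? 0)),
    safetyLevel: clamp(m.safetyLevel + (delta.safetyLevel ?? 0)),
    patientSatisfaction: clamp(m.patientSatisfaction + (delta.patientSatisfaction ?? 0)),
  };
}

// e.g. advancing the needle medially might lower safety and satisfaction
// (hypothetical values): applyOutcome(metrics, { safetyLevel: -15, patientSatisfaction: -5 });
```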
[0087] The interior window feedback feature is provided on the needle placement and catheter placement actions. The interior window gives the user another view of the needle so he/she can see how his/her manipulation of the needle relates to the overall position of the body.
[0088] This interior window is present in early scenarios within a level and it disappears when the user has successfully completed a predetermined number of scenarios.
The user has to complete the remaining scenarios in the level without the aid of the interior window to pass to the next level.
[0089] Just-in-time (JIT) information is provided to help users perform tasks that are completed in a similar way across varied situations. JIT helps learners to embed repetitive processes or concepts to the point of automation. In the VMT simulation, JIT information is provided through helpers and feedback. JIT helper icons appear in the interface contextually, as a shortcut to specific information or practice activities within the didactic content that are relevant to the learning task being performed. The JIT helper icons appear in addition to the five standard icons that are always present.
[0090] The feedback mechanisms described above provide both supportive information and JIT information to support both the recurrent and non-recurrent aspects of the learning tasks.
[0091] Part-task practice is designed to aid the user in acquiring constituent skills that require a high degree of automaticity while performing the whole task.
[0092] VMT users have many opportunities for part-task practice within the simulation itself. The user progresses through each scenario by performing constituent actions of the learning task. Additionally, the integration and synthesis of actions may be promoted by organizing and programming them in such a way that the user's choices in one action contribute to the starting states of future actions.
[0093] In addition to the part-task practice inherent to the simulation, there may be stand-alone practice "games" inside the helper content to allow users to practice and master lower-level recurrent skills. Tasks that require a high degree of automation during performance of the learning tasks, such as recognizing motor responses to nerve stimulation or controlling a needle tip from the hub, are handled in this way.
[0094] As users perform each action in the learning task, they gain or lose points, contributing to cumulative metrics. Some action outcomes may be absolute, adding to or subtracting from the metrics by a specific amount. Other actions may have a range of acceptable outcomes, each weighted to reflect a greater or lesser degree of impact on the metrics.
[0095] As a user progresses through a scenario, the simulation engine tracks his/her actions and decisions, and his/her resulting impact on the cumulative metrics. At the end of the scenario, the user can review a history of his/her performance in the scenario with an explanation of why the metrics responded as they did. This report helps users to reflect on what they did and the resulting consequences. Through this process of reflection, users can identify how to improve their performance. All simulation levels provide the history feature.
[0096] The metric values at the conclusion of a scenario determine the success or failure of the scenario. Cumulative metrics also are used to determine when a user is ready to move from one level to another and when the user has achieved a sufficient level of proficiency to graduate the course.
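The per-scenario history that drives the end-of-scenario report could be captured with a simple log of actions and their metric impacts, as in the illustrative TypeScript sketch below; the record fields and report format are assumptions.

```typescript
// Illustrative sketch of the per-scenario action history behind the
// end-of-scenario report (field names assumed).
interface MetricDeltas {
  successRate?: number;
  safetyLevel?: number;
  patientSatisfaction?: number;
}

interface ActionRecord {
  action: string;        // what the user did
  explanation: string;   // why the metrics responded as they did
  deltas: MetricDeltas;  // impact of this action on the cumulative metrics
}

class ScenarioHistory {
  private records: ActionRecord[] = [];

  log(record: ActionRecord): void {
    this.records.push(record);
  }

  // Rendered for the learner when the scenario ends, supporting reflection.
  report(): string {
    return this.records
      .map(r => `${r.action}: ${r.explanation}`)
      .join("\n");
  }
}
```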
[0097] Each learning task is performed within the context of a scenario. A scenario is represented as a patient record containing a description of the patient and his/her injuries, a medical history, results of a physical examination and lab tests, and in some scenarios, predetermined complications that arise. Scenarios are generated from a predefined set of variables. At the start of a scenario, the VMT randomly assembles variables into a unique set, ensuring a large number of possible scenarios with little chance of duplication. Users have the ability to customize the scenario by changing certain variable options selected by the computer.
[0098] The level of play dictates which variables are used to generate scenarios. The scenarios become progressively more complex in the number and type of variables as the user advances from one level to another.
[0099] The VMT web-based simulation may be delivered through common Web technologies: HTTP, HTML, JavaScript, and Flash, for example. The modules are tested on systems that meet the client and server requirements described below.
[00100] The VMT web-based simulation may be designed, developed, and packaged to run from various resources, such as:
• A Sharable Content Object Reference Model (SCORM) 2004-conformant learning management system (LMS)
• A standalone CD-ROM
[00101] As described, the VMT may, for example, implement Flash and ActionScript to present a web-based simulation. To support communication with a SCORM 2004-conformant LMS, the product may be built upon a Flash-based courseware framework.
[00102] One example of such a framework uses a model-view-controller (MVC) design pattern to separate the user interface (view tier) from course content (model tier). The control tier of the framework controls communication between the view and the model. A fourth tier of functionality, page engines, is where specific content (e.g., simulation, just-in-time instruction, helper game) is rendered.
[00103] To enable the VMT web-based simulation to be hosted on an LMS, the simulation is designed to adhere to the SCORM 2004 runtime API and packaging.
[00104] On SCORM 2004-conformant LMSs, completion tracking and scoring is saved between sessions.
[00105] Because the SCO is implemented as a state engine simulation, freezing and restoring all state variables between learning sessions would likely exceed the limits of the
SCORM runtime API and database functionality. For this reason, it is preferred not to implement suspend data. Each time the user starts a session the VMT web-based simulation generates and presents a new scenario to the user.
[00106] Bookmarking may be used to store the highest difficulty level completed by the user.
[00107] When running from CD-ROM, the VMT web-based simulation enables users to work through the simulation, but no tracking or bookmarking data is saved between sessions.
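For illustration, bookmarking the highest completed level on a SCORM 2004-conformant LMS might look like the TypeScript sketch below. The SCORM 2004 runtime calls (GetValue, SetValue, Commit) are standard, but the use of cmi.location for the level, the simplified global API discovery, and the fallback behavior are assumptions, not the patent's code.

```typescript
// Illustrative SCORM 2004 bookmarking sketch. The runtime calls below exist in
// the SCORM 2004 API, but storing the level in cmi.location, the simplified
// API lookup, and the defaults are assumptions for illustration.
interface Scorm2004Api {
  Initialize(param: ""): string;
  GetValue(element: string): string;
  SetValue(element: string, value: string): string;
  Commit(param: ""): string;
  Terminate(param: ""): string;
}

// In practice the API object is discovered by searching parent frames; here it
// is simply assumed to be a global supplied by the LMS, or absent on CD-ROM.
declare const API_1484_11: Scorm2004Api | undefined;

function saveHighestLevel(level: number): void {
  if (!API_1484_11) return; // e.g. launched from CD-ROM: nothing is saved
  API_1484_11.SetValue("cmi.location", String(level));
  API_1484_11.Commit("");
}

function loadHighestLevel(): number {
  if (!API_1484_11) return 1; // no bookmark available: easiest difficulty level
  const stored = API_1484_11.GetValue("cmi.location");
  return stored ? Number(stored) : 1;
}
```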
[00108] Word processing software such as Microsoft Word, for example, may be used to develop storyboards that specify course content. The storyboards are then converted from storyboard content to XML and accompanying simulations, animations, and interactive exercises are produced in Macromedia (Adobe) Flash 8. Photoshop, Lightwave 3D, and/or other graphics authoring tools are used to create image assets. Some functionality is developed using JavaScript.
[00109] The resulting product may variously consist of XML files, HTML/JavaScript,
Flash Shockwave files (SWFs), and supporting graphic files (e.g., JPEG, GIF). As needed, other common web files, such as Cascading Style Sheets (CSS), may be used.
[00110] The VMT web-based simulation may, for example, employ the use of state engines. A state engine is a set of software routines that track a range of variables and their current settings or "states". The software initiates actions or consequences when these states meet or exceed predefined conditions.
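The state engine concept can be pictured with a small sketch: tracked variables plus rules that fire consequences when predefined conditions are met. The TypeScript below is illustrative only; all names and the rule-evaluation strategy are assumptions.

```typescript
// Illustrative state engine sketch (assumed names): tracked variables plus
// rules whose consequences fire when the states meet predefined conditions.
type States = Record<string, number>;

interface Rule {
  description: string;
  condition(states: States): boolean;
  consequence(): void; // e.g. trigger a complication or visual feedback
}

class StateEngine {
  constructor(private states: States, private rules: Rule[]) {}

  set(name: string, value: number): void {
    this.states[name] = value;
    // Re-evaluate every rule each time a tracked state changes.
    for (const rule of this.rules) {
      if (rule.condition(this.states)) {
        rule.consequence();
      }
    }
  }

  get(name: string): number {
    return this.states[name];
  }
}

// A rule might, for example, fire a pneumothorax consequence when needle depth
// and medial angle both exceed hypothetical thresholds.
```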
[00111] A data model is created to represent and track the factors and decisions involved in a supraclavicular block. The state engine provides a mathematical representation of the supraclavicular block procedure. That is, it uses numbers and algorithms to model the opening scenario and to represent changes that occur as the user interacts with the simulation.
[00112] As the user begins a new session, the simulation software (functionality developed in Flash ActionScript) uses random number generation routines to select variables that define the opening scenario. A scenario, in the context of the VMT web-based simulation, is an instance of the supraclavicular block procedure with certain pre-defined attributes. These include such things as:
• Patient information (age, gender, height, weight, etc.)
• Patient condition (injuries, vital signs, lab results)
• Medical history
• Unforeseen complications (which are "predestined" to occur when the scenario is generated)
• Anesthesia and analgesia requirements (block type, duration)
[00113] These attributes vary from one scenario to another, providing users with the opportunity to practice the nerve block procedure with variations that simulate real life to some extent.
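Assembling an opening scenario from predefined variable sets by random selection might, purely for illustration, look like the TypeScript sketch below; the specific attributes, value lists, and ranges are hypothetical placeholders, not the patent's data.

```typescript
// Illustrative random scenario assembly (attribute lists are hypothetical
// placeholders, not the patent's data sets).
interface Scenario {
  age: number;
  gender: "male" | "female";
  injury: string;
  blockType: "single-injection" | "continuous-infusion";
  predestinedComplication: string | null;
}

function pick<T>(options: T[]): T {
  return options[Math.floor(Math.random() * options.length)];
}

function generateScenario(): Scenario {
  return {
    age: 18 + Math.floor(Math.random() * 50),
    gender: pick<Scenario["gender"]>(["male", "female"]),
    injury: pick(["forearm fracture", "hand laceration", "shoulder wound"]),
    blockType: pick<Scenario["blockType"]>(["single-injection", "continuous-infusion"]),
    predestinedComplication: pick([null, "intravascular injection", "pneumothorax"]),
  };
}
```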
[00114] At the start of a scenario, the simulation provides the user with the opportunity to change some of the input variables. For example, if a single-injection scenario is presented, the user may choose to work through a continuous-infusion scenario instead.
[00115] Various states that may exist within the simulation are defined. States identify those factors that:
• Change over the course of time within a scenario
• Vary from one scenario to another
[00116] As the simulation is presented to the user, the simulation engine can evaluate states to do such things as:
• Determine the user's progress within the procedure
• Model the patient's condition based on the user's performance in performing the procedure
• Evaluate the user's success in performing the procedure
• Determine the consequences, tools, prompts, and other visual stimuli to present to the user based on the current context
[00117] Users can choose to start a new scenario at any time. When a new practice scenario is started, the simulation prompts users to choose one of two levels of difficulty. Users may select either level, regardless of which levels they have already completed.
[00118] When the simulation has been launched from a SCORM-conformant LMS, it provides a status message that tells the user what tasks remain to be completed before the simulation is marked as "completed." This message is not displayed when the simulation is running from CD-ROM.
[00119] For the simulation to be marked as "completed" by the LMS, the user is tested for successful completion of a specified number of each of the following types of supraclavicular blocks at the most difficult level:
• Single-injection block
• Continuous-infusion block
[00120] Specific functionality of the simulation levels includes the following:
• The number of blocks that should be completed to ensure an adequate level of competency to graduate the course.
• When the scenario is generated, if the user has already completed the specified number of one block type, the other type of block is selected by default.
• The simulation cannot retrieve a bookmark from the LMS when the user launches the simulation from CD-ROM or the first time the user launches the simulation from an LMS. In these circumstances, the simulation defaults to the easiest difficulty level.
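The completion and default-selection rules listed above can be expressed compactly; the TypeScript sketch below is illustrative, and the required count per block type is a placeholder since the patent leaves that number configurable.

```typescript
// Illustrative completion check for the most difficult level (the required
// count per block type is a hypothetical placeholder).
interface CompletionState {
  singleInjectionPassed: number;    // passed at the most difficult level
  continuousInfusionPassed: number; // passed at the most difficult level
}

const REQUIRED_PER_BLOCK_TYPE = 2; // hypothetical value

function isCourseComplete(s: CompletionState): boolean {
  return (
    s.singleInjectionPassed >= REQUIRED_PER_BLOCK_TYPE &&
    s.continuousInfusionPassed >= REQUIRED_PER_BLOCK_TYPE
  );
}

// When a test scenario is generated, default to whichever block type still
// needs completions, as described in the list above.
function defaultBlockType(s: CompletionState): "single-injection" | "continuous-infusion" {
  if (s.singleInjectionPassed >= REQUIRED_PER_BLOCK_TYPE) return "continuous-infusion";
  if (s.continuousInfusionPassed >= REQUIRED_PER_BLOCK_TYPE) return "single-injection";
  return Math.random() < 0.5 ? "single-injection" : "continuous-infusion";
}
```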
[00121] Flash (the primary development platform used for this project) does not provide a built-in 3D graphics-rendering capability. Nonetheless, the VMT facilitates a 3D experience through the use of various 2D modeling techniques. For example, graphics in the simulation are drawn to create the illusion of depth and perspective. Objects appear smaller as they move toward the background, and so forth.
[00122] Some simulated tasks require that the user perform within a virtual 3D space. For example, users need to identify the correct adjustments to the needle insertion point and angle to obtain the correct motor response from the simulated patient. This requires that the simulation maintain certain state information regarding the needle: the insertion point, the angle of insertion, and the depth of insertion.
[00123] Using ActionScript, the needle position can be calculated mathematically, using (x, y, z) coordinates to track position. Visual feedback of the needle position is provided to the user through 2D graphics. Collision detection is used to determine when the needle tip is close enough to a nerve to produce a motor response or when the needle is too close to a nerve. Feedback and consequences are presented if a nerve or artery is accidentally pierced.
[00124] Various approaches may be used to implement collision detection in three dimensions; a sketch of the first approach appears after this list. Examples include:
• Use ActionScript math functions to compare an (x, y, z) location to an area mathematically
• Use Flash's standard timeline-based collision detection routines with layered objects to simulate depth
• Use Flash's standard timeline-based collision detection routines with a two-plane orthographic projection
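The first approach in the list, comparing an (x, y, z) location to a region mathematically, reduces to a distance test. The TypeScript sketch below is illustrative; the sphere model of a nerve and the threshold values are assumptions.

```typescript
// Illustrative mathematical collision test: treat the target nerve as a sphere
// and compare the needle tip's (x, y, z) location against distance thresholds.
// Threshold values and names are assumptions.
interface Point3D {
  x: number;
  y: number;
  z: number;
}

function distance(a: Point3D, b: Point3D): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function nerveResponse(
  needleTip: Point3D,
  nerveCenter: Point3D,
): "none" | "motor-response" | "pierced" {
  const d = distance(needleTip, nerveCenter);
  if (d < 1) return "pierced";        // too close: the nerve is contacted or pierced
  if (d < 5) return "motor-response"; // close enough to elicit a motor response
  return "none";                      // no visible effect (hypothetical thresholds)
}
```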
[00125] Didactic content presented in the helpers is encoded through standard web file formats, such as XML, HTML/JavaScript, Flash SWF files, JPEG, GIF, and CSS. JIT instruction is presented in a layer over the movie, using a separate Flash template.
[00126] Feedback is triggered by variables maintained by the state engine and changes that occur within the simulation and practice games as the user interacts with them. Feedback is presented in a layer over the movie, using a separate Flash template.
[00127] In one embodiment, in all levels except the final one, the user can choose to show or hide the following features:
• Coaching (Instructive Feedback)
• Interior Window
[00128] These features, as well as the helpers, are not available in the final level.
Consequences and cumulative scoring, however, are always displayed.
[00129] Upon entering the simulation, the user is presented with an introduction screen.
The introduction allows the user to select the level at which he/she will play. If the user has previously used the simulation, the introduction screen displays the last level the user successfully completed as the default choice.
[00130] The introduction also contains an orientation, describing the features and tools available in the interface at each game level and demonstrating how to navigate the simulation. The user may view the orientation to help determine an appropriate level at which to play.
[00131] The GUI is divided into two main areas: the activity area and the supporting information area.
[00132] The activity area is on the right side of the screen. It is where all simulation actions and decisions take place. The image in the activity area depicts the room in which blocks are performed, complete with a virtual patient. The view of the space zooms in or out as necessary to support the task the user is performing.
[00133] A menu dock is available at the bottom of the activity area. The dock includes all of the virtual tools the student needs to perform learning tasks and provides access to user selectable options and helpers. To select a tool, the user rolls over the icon for the tool and clicks on it. Tool icons scale up in size when the user rolls over them. The user may show or hide the menu dock at his/her discretion.
[00134] The supporting information area on the bottom right side of the interface displays information the user may need to successfully perform learning tasks. The information displayed changes depending on what the user chooses to view at any given time and presents a list of links to didactic content in the helpers.
[00135] Instructional feedback is provided to the user through a "coach". The coach is represented as an illustration of a practitioner. There are five characters from which the user may choose his or her coach.
[0136] The selection of coach characters may be set up in any desired manner; in one example, the characters may vary by gender and race, and may also include fictional graphical "fun" characters.
[0137] The coach may not animate in certain embodiments, but rather is static with a text area that updates as the user performs actions in the simulation. The coach's feedback includes correct/incorrect feedback, hints, suggestions, and links to didactic content that supports the current action.
[0138] A status box showing the current level the user is playing, the cumulative scoring metrics, and the last action performed is available in the interface. The cumulative scoring metrics are displayed as a dynamic bar chart. The bars increase or decrease to reflect the overall success rate, safety level, and patient satisfaction as they are affected by the user's choices and actions.
[0139] The interior window feedback is present for certain learning tasks. When a user initiates an action where the interior window is implemented, the window appears in the GUI. The figure above shows one possible implementation of this feature. The interior window displays another view of the position of the needle relative to the head/neck region.
[0140] When helpers are accessed from the simulation, a menu of all the didactic content appears. The user can choose the specific content he/she would like to view. Helper content appears in a window on top of the simulation GUI. The GUI behind the window is grayed out and all game elements are deactivated until the helper window is closed. When a helper window is closed, the user is returned to the simulation in the state it was in when the helper content was selected.
[00141] Thus embodiments of the present invention produce and provide a virtual medical training method and apparatus. Although the present invention has been described in considerable detail with reference to certain embodiments thereof, the invention may be variously embodied without departing from the spirit or scope of the invention. Therefore, the following claims should not be limited to the description of the embodiments contained herein in any way.

Claims

1. A computer implemented virtual medical training method for simulation of medical procedures, the method comprising: receiving data corresponding to an external input/output interface configured for user manipulation, wherein the external input/output interface generates the data in response to the user manipulation; and simulating a medical procedure based on the data generated by the external input/output interface, wherein the simulation of the medical procedure comprises a simulated three-dimensional patient model, wherein the simulation of the medical procedure comprises simulating placement of a medical probe in a simulated three-dimensional patient model, and wherein the simulated medical probe comprises a probe hub and a probe tip and the placement of the simulated medical probe comprises positioning the direction of the probe tip in relation to the probe hub and the simulated three-dimensional patient model and positioning the distance the probe tip is inserted into the simulated three-dimensional patient model.
2. The virtual medical training method according to claim 1, further comprising: facilitating a display of images based upon the simulation of the medical procedure, wherein the images depict the simulation of the medical procedure, and wherein the display further produces images providing haptic and didactic feedback.
3. The virtual medical training method according to claim 1, wherein the simulated medical probe is a simulated needle.
4. The virtual medical training method according to claim 1, wherein the placement of the simulated medical probe comprises a simulation of general needle positioning, a simulation of fine needle positioning, and a simulation of nerve proximity testing.
5. The virtual medical training method according to claim 1, wherein the simulation of the medical procedure further comprises a simulation of injecting anesthetic from the simulated needle into the simulated three-dimensional patient model.
6. The virtual medical training method according to claim 5, wherein the simulation of injecting anesthetic comprises the steps of aspirating for blood, injecting a test dose of anesthetic, and injecting anesthetic.
7. The virtual medical training method according to claim 3, wherein the simulation of the medical procedure comprises a simulation of inserting and securing a simulated catheter in the simulated three-dimensional patient model using the simulated needle.
8. The virtual medical training method according to claim 3, wherein the simulation of the medical procedure further comprises a simulation of setting a current on a nerve stimulator on the simulated needle.
9. The virtual medical training method according to claim 1, wherein the simulation of the medical procedure further comprises a simulation of complications in response to data based upon manipulation of the external input/output interface that falls outside designated parameters in the simulation of the medical procedure.
10. The virtual medical training method according to claim 9, wherein the simulated complications are visually represented, and wherein images depict the simulation of the medical procedure.
11. The virtual medical training method according to claim 9, wherein the simulated complications escalate in severity if the user fails to at least recognize the complications or select a correct course of action to correct the complications.
12. The virtual medical training method according to claim 1, wherein the simulation of the medical procedure further comprises a simulation of locating and marking anatomical landmarks on the simulated three-dimensional patient model.
13. The virtual medical training method according to claim 1, wherein the simulation determines the proximity of the probe in relation to simulated internal three- dimensional patient anatomy structures comprising internal muscle, bone, vascular system, and nervous system structures.
14. The virtual medical training method according to claim 4, wherein simulation of nerve proximity testing comprises a step of nerve stimulation using a nerve stimulator and a needle, wherein the simulated three-dimensional patient model provides a simulated motor response to the nerve stimulation.
15. A virtual medical training system for simulation of medical procedures, the system comprising: means for receiving data corresponding to an external input/output interface configured for user manipulation, wherein the external input/output interface generates the data in response to the user manipulation; and means for simulating a medical procedure based on the data generated by the external input/output interface, wherein the simulation of the medical procedure comprises a simulated three-dimensional patient model, wherein the simulation of the medical procedure comprises simulating placement of a medical probe in a simulated three-dimensional patient model, and wherein the simulated medical probe comprises a probe hub and a probe tip and the placement of the simulated medical probe comprises positioning the direction of the probe tip in relation to the probe hub and the simulated three-dimensional patient model and positioning the distance the probe tip is inserted into the simulated three-dimensional patient model.
16. The virtual medical training system according to claim 15, further comprising: means for facilitating a display of images based upon the simulation of the medical procedure, wherein the images depict the simulation of the medical procedure, and wherein the display further produces images providing haptic and didactic feedback.
17. The virtual medical training system according to claim 15, wherein the simulated medical probe is a simulated needle.
18. The virtual medical training system according to claim 15, wherein the placement of the simulated medical probe comprises a simulation of general needle positioning, a simulation of fine needle positioning, and a simulation of nerve proximity testing.
19. The virtual medical training system according to claim 15, wherein the simulation of the medical procedure further comprises a simulation of injecting anesthetic from the simulated needle into the simulated three-dimensional patient model.
20. The virtual medical training system according to claim 19, wherein the simulation of injecting anesthetic comprises the steps of aspirating for blood, injecting a test dose of anesthetic, and injecting anesthetic.
21. The virtual medical training system according to claim 17, wherein the simulation of the medical procedure comprises a simulation of inserting and securing a simulated catheter in the simulated three-dimensional patient model using the simulated needle.
22. The virtual medical training system according to claim 17, wherein the simulation of the medical procedure further comprises a simulation of setting a current on a nerve stimulator on the simulated needle.
23. The virtual medical training system according to claim 15, wherein the simulation of the medical procedure further comprises a simulation of complications in response to data based upon manipulation of the external input/output interface that falls outside designated parameters in the simulation of the medical procedure.
24. The virtual medical training system according to claim 23, wherein the simulated complications are visually represented, and wherein the images depict the simulation of the medical procedure.
25. The virtual medical training system according to claim 23, wherein the simulated complications escalate in severity if the user fails to at least recognize the complications or select a correct course of action to correct the complications.
26. The virtual medical training system according to claim 15, wherein the simulation of the medical procedure further comprises a simulation of locating and marking anatomical landmarks on the simulated three-dimensional patient model.
27. The virtual medical training system according to claim 15, wherein the simulation determines the proximity of the probe in relation to simulated internal three- dimensional patient anatomy structures comprising internal muscle, bone, vascular system, and nervous system structures.
28. The virtual medical training system according to claim 18, wherein simulation of nerve proximity testing comprises a step of nerve stimulation using a nerve stimulator and a needle, wherein the simulated three-dimensional patient model provides a simulated motor response to the nerve stimulation.
29. A virtual medical training apparatus for simulation of medical procedures, the apparatus comprising: a data module, which receives data corresponding to an external input/output interface configured for user manipulation, wherein the external input/output interface generates the data in response to the user manipulation; and a simulation module, which simulates a medical procedure based on the data generated by the external input/output interface, wherein the simulation of the medical procedure comprises a simulated three-dimensional patient model, wherein the simulation of the medical procedure comprises simulating placement of a medical probe in a simulated three-dimensional patient model, and wherein the simulated medical probe comprises a probe hub and a probe tip and the placement of the simulated medical probe comprises positioning the direction of the probe tip in relation to the probe hub and the simulated three-dimensional patient model and positioning the distance the probe tip is inserted into the simulated three-dimensional patient model.
30. The virtual medical training apparatus according to claim 29, further comprising: a display generation module, which facilitates a display of images based upon the simulation of the medical procedure, wherein the images depict the simulation of the medical procedure, and wherein the display further produces images providing haptic and didactic feedback.
PCT/US2008/059001 2007-04-02 2008-04-01 Computer-based virtual medical training method and apparatus WO2008122006A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US90742007P 2007-04-02 2007-04-02
US60/907,420 2007-04-02
US7838308A 2008-03-31 2008-03-31
US12/078,383 2008-03-31

Publications (1)

Publication Number Publication Date
WO2008122006A1 true WO2008122006A1 (en) 2008-10-09

Family

ID=39808720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/059001 WO2008122006A1 (en) 2007-04-02 2008-04-01 Computer-based virtual medical training method and apparatus

Country Status (1)

Country Link
WO (1) WO2008122006A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040126746A1 (en) * 2000-10-23 2004-07-01 Toly Christopher C. Medical physiological simulator including a conductive elastomer layer
US20030068606A1 (en) * 2001-10-09 2003-04-10 Medical Technology Systems, Inc. Medical simulator
US20040009459A1 (en) * 2002-05-06 2004-01-15 Anderson James H. Simulation system for medical procedures
US20040064298A1 (en) * 2002-09-26 2004-04-01 Robert Levine Medical instruction using a virtual patient
US20060127867A1 (en) * 2002-12-03 2006-06-15 Jan Grund-Pedersen Interventional simulator system

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104246855B (en) * 2009-06-29 2017-08-15 皇家飞利浦电子股份有限公司 Tumour ablation training system
WO2011001299A1 (en) * 2009-06-29 2011-01-06 Koninklijke Philips Electronics, N.V. Tumor ablation training system
JP2012532333A (en) * 2009-06-29 2012-12-13 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Tumor ablation training system and training method
CN104246855A (en) * 2009-06-29 2014-12-24 皇家飞利浦电子股份有限公司 Tumor ablation training system
US11562665B2 (en) 2009-06-29 2023-01-24 Koninklijke Philips N.V. Tumor ablation training system
CN101819724A (en) * 2010-05-04 2010-09-01 北京莲宇时空科技有限公司 Virtual training software platform based on SCORM (Sharable Content Object Reference Model)
WO2013150436A1 (en) * 2012-04-01 2013-10-10 Ariel-University Research And Development Company, Ltd. Device for training users of an ultrasound imaging device
US11854426B2 (en) 2012-10-30 2023-12-26 Truinject Corp. System for cosmetic and therapeutic training
US10902746B2 (en) 2012-10-30 2021-01-26 Truinject Corp. System for cosmetic and therapeutic training
US11403964B2 (en) 2012-10-30 2022-08-02 Truinject Corp. System for cosmetic and therapeutic training
US10643497B2 (en) 2012-10-30 2020-05-05 Truinject Corp. System for cosmetic and therapeutic training
US10896627B2 (en) 2014-01-17 2021-01-19 Truinjet Corp. Injection site training system
US10290232B2 (en) 2014-03-13 2019-05-14 Truinject Corp. Automated detection of performance characteristics in an injection training system
US10290231B2 (en) 2014-03-13 2019-05-14 Truinject Corp. Automated detection of performance characteristics in an injection training system
US10235904B2 (en) 2014-12-01 2019-03-19 Truinject Corp. Injection training tool emitting omnidirectional light
WO2016141089A1 (en) * 2015-03-02 2016-09-09 Foundation For Exxcellence In Women's Healthcare, Inc. System and method providing customizable and real-time input, tracking, and feedback of a trainee's competencies
US10500340B2 (en) 2015-10-20 2019-12-10 Truinject Corp. Injection system
US10743942B2 (en) 2016-02-29 2020-08-18 Truinject Corp. Cosmetic and therapeutic injection safety systems, methods, and devices
US10648790B2 (en) 2016-03-02 2020-05-12 Truinject Corp. System for determining a three-dimensional position of a testing tool
US10849688B2 (en) 2016-03-02 2020-12-01 Truinject Corp. Sensory enhanced environments for injection aid and social training
US11730543B2 (en) 2016-03-02 2023-08-22 Truinject Corp. Sensory enhanced environments for injection aid and social training
US10810907B2 (en) 2016-12-19 2020-10-20 National Board Of Medical Examiners Medical training and performance assessment instruments, methods, and systems
US11710424B2 (en) 2017-01-23 2023-07-25 Truinject Corp. Syringe dose and position measuring apparatus
US10269266B2 (en) 2017-01-23 2019-04-23 Truinject Corp. Syringe dose and position measuring apparatus
CN110520934A (en) * 2017-04-20 2019-11-29 贝克顿·迪金森公司 Treating diabetes train equipment
WO2018195255A1 (en) * 2017-04-20 2018-10-25 Becton, Dickinson And Company Diabetes therapy training device
WO2020154782A1 (en) * 2019-02-01 2020-08-06 Levindo Coelho Neto Hélcio Simulated virtual education platform for healthcare professionals and academics

Similar Documents

Publication Publication Date Title
WO2008122006A1 (en) Computer-based virtual medical training method and apparatus
US20210134068A1 (en) Interactive mixed reality system and uses thereof
US10417936B2 (en) Hybrid physical-virtual reality simulation for clinical training capable of providing feedback to a physical anatomic model
Ricciardi et al. A comprehensive review of serious games in health professions
US8480404B2 (en) Multimodal ultrasound training system
Issenberg et al. Simulation and new learning technologies
Williams et al. Implementation and evaluation of a haptic playback system
US20080187896A1 (en) Multimodal Medical Procedure Training System
Brazil et al. Haptic forces and gamification on epidural anesthesia skill gain
Van Loon et al. Establishing the required components for training in ultrasoundguided peripheral intravenous cannulation: a systematic review of available evidence
Cowan et al. A serious game for total knee arthroplasty procedure, education and training.
US20130302765A1 (en) Methods and systems for assessing and developing the mental acuity and behavior of a person
Strada et al. Holo-BLSD–A holographic tool for self-training and self-evaluation of emergency response skills
Bibin et al. SAILOR: a 3-D medical simulator of loco-regional anaesthesia based on desktop virtual reality and pseudo-haptic feedback
Sujatta First of all: Do not harm! Use of simulation for the training of regional anaesthesia techniques: Which skills can be trained without the patient as substitute for a mannequin
Brazil et al. Force modeling and gamification for Epidural Anesthesia training
Dicheva et al. Digital Transformation in Nursing Education: A Systematic Review on Computer-Aided Nursing Education Pedagogies, Recent Advancements and Outlook on the Post-COVID-19 Era
Lin et al. Game for health professional education
Staccini Serious games, simulations, and virtual patients
Liu et al. A computer-based simulator for diagnostic peritoneal lavage
Permanasari et al. Design of Gamification for Anatomy Learning Media
de Melo et al. Modeling the basic behaviors of Anesthesia Training in Relation to Puncture and Penetration Feedback
Yilmaz et al. Nursing education in the era of virtual reality
Simon et al. Design and evaluation of an immersive ultrasound-guided locoregional anesthesia simulator
Haase et al. Virtual reality and habitats for learning microsurgical skills

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08744850

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08744850

Country of ref document: EP

Kind code of ref document: A1