WO2018218175A1 - Laparoscopic training system - Google Patents

Laparoscopic training system

Info

Publication number
WO2018218175A1
Authority
WO
WIPO (PCT)
Prior art keywords
instrument
surgical
data
previous
cameras
Prior art date
Application number
PCT/US2018/034705
Other languages
French (fr)
Inventor
Joel B. Velasco
Jacob J. Filek
Nico SLABBER
Samantha Chan
Branden CARTER
Zachary MICHAELS
Nathan LANDINO
Brandon PERELES
Eduardo Bolanos
Cory S. HAGUE
Gregory K. Hofstetter
Sean KENNEDAY
Timothy Mcmorrow
Daniel Austin NORDMAN
Lindsey CHASE
Jigar Shah
Original Assignee
Applied Medical Resources Corporation
Priority date
Filing date
Publication date
Priority to US62/511,246 (US201762511246P)
Application filed by Applied Medical Resources Corporation
Publication of WO2018218175A1

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B23/00: Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B23/28: Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
    • G09B23/285: Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine for injections, endoscopy, bronchoscopy, sigmoidscopy, insertion of contraceptive devices or enemas

Abstract

A system for surgical training is provided. The system includes a laparoscopic surgical instrument with at least one retroreflective marker on its shaft. A typical box trainer is provided with two cameras and two accompanying infrared light sources. When the instrument is inserted through a top of the trainer to perform mock procedures or exercises, light from an infrared light source is reflected back by the retroreflective marker and captured by an adjacent video camera. The position of the instrument is calculated by triangulating the image data obtained from the two cameras. When the markers are obscured behind models or artificial organs disposed inside the trainer, an inertial measurement unit on the handle of the instrument provides data for calculating the instrument position to fill in the gap in useful image data. Instrument position data over time is provided for useful trainee feedback and performance assessment purposes.

Description

LAPAROSCOPIC TRAINING SYSTEM

Cross-Reference to Related Applications

[0001] This application claims priority to and the benefit of U.S. Provisional Patent Application Serial No. 62/511,246 entitled "Laparoscopic training system" filed on May 25, 2017 and incorporated herein by reference in its entirety.

Field of the Invention

[0002] This application relates to surgical training, and in particular, to laparoscopic training wherein a simulated torso is used to practice surgical procedures and techniques and an evaluative system provides feedback on the user's performance.

Background of the Invention

[0003] Laparoscopic surgery requires several small incisions in the abdomen for the insertion of trocars or small cylindrical tubes approximately 5 to 10 millimeters in diameter through which surgical instruments and a laparoscope are placed into the abdominal cavity. The laparoscope illuminates the surgical field and sends a magnified image from inside the body to a video monitor giving the surgeon a close-up view of the organs and tissues. The surgeon watches the live video feed and performs the operation by manipulating the surgical instruments placed through the trocars.

[0004] Minimally invasive surgical techniques performed laparoscopically can greatly improve patient outcomes because of greatly reduced trauma to the body. There is, however, a steep learning curve associated with minimally invasive surgery, which necessitates a method of training surgeons on these challenging techniques. There are a number of laparoscopic simulators on the market, most of which consist of some type of enclosure, and some type of barrier which can be pierced by surgical instruments in order to gain access to the interior. A simulated organ or practice station is placed inside the interior and surgical techniques are practiced on the simulated organ or practice station.

Summary of the Invention

[0005] According to one aspect of the invention, a system for surgical training is provided. The system includes a simulated surgical environment, also known as a trainer, defining an interior cavity between a top and a base. At least two cameras are positioned inside the simulated surgical environment along with at least two infrared light sources. At least one surgical instrument is provided. The instrument has an elongate shaft extending between a tip at a distal end and a handle at a proximal end of the instrument. The tip is manipulated via the handle and can be configured as laparoscopic scissors, a grasper, an energy-based device, a dissector, a needle driver or another type of surgical instrument. The distal end of the instrument includes at least one retroreflector, preferably located along the elongate shaft. A computer processor is connected to the at least two light sources and at least two cameras. The computer is configured to receive image data from the at least two cameras and output position data for the at least one instrument. Each light source is paired with a camera such that infrared light emitted from the light source is reflected from the at least one retroreflector back to its source. Accordingly, each camera is located very close to its light source: light from a source that is not near a camera is not reflected directly back to that camera; it merely increases the ambient lighting in the cavity and decreases the contrast between the retroreflectors and the background in the resulting image, which hinders the tracking of the instrument. The image data includes gray-scale video images and an associated time stamp for each frame or group of frames. The computer processor provides the coordinates, such as the Cartesian coordinates, for the location of the tip and a unit vector pointing in the direction of the distal end of the surgical instrument, along with an associated time stamp.
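The application leaves the triangulation itself to software. A minimal sketch of one conventional approach is given below: assuming two calibrated cameras whose optical centers and unit rays toward a marker are already known, the marker position is recovered as the midpoint of the shortest segment between the two rays. All names here are illustrative and not from the specification.

```python
import numpy as np

def triangulate_marker(c1, d1, c2, d2):
    """Return the 3D point closest to both camera rays.

    c1, c2: camera optical centers; d1, d2: unit ray directions from each
    camera toward the marker image (assumed derived beforehand from pixel
    coordinates and each camera's calibration).
    """
    c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # near zero only if the rays are parallel
    t1 = (b * e - c * d) / denom     # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom     # parameter of closest point on ray 2
    p1 = c1 + t1 * d1
    p2 = c2 + t2 * d2
    return (p1 + p2) / 2             # midpoint of the shortest segment

# Two cameras 0.4 m apart viewing a marker at (0.10, 0.20, 0.30):
marker = np.array([0.10, 0.20, 0.30])
cam1, cam2 = np.array([-0.2, 0.0, 0.0]), np.array([0.2, 0.0, 0.0])
ray1 = (marker - cam1) / np.linalg.norm(marker - cam1)
ray2 = (marker - cam2) / np.linalg.norm(marker - cam2)
print(triangulate_marker(cam1, ray1, cam2, ray2))  # recovers the marker position
```

With noisy real image data the two rays do not intersect exactly, and the midpoint serves as a least-squares compromise between them.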
In one variation, an inertial measurement unit (IMU) is provided on the handle of the surgical instrument. Data from the IMU is employed by the computer processor to fill in any missing position data arising, for example, from the retroreflectors on the instrument being hidden behind an artificial organ inside the trainer. The time spans of missing position data are short, and the IMU data is effective for calculating instrument position since the start and end points of any gap in position data are already known from the image data acquired by the cameras.

[0006] According to another aspect of the invention, an instrument for surgical training is provided. The instrument includes a handle at a proximal end and an instrument tip at a distal end. An elongate shaft extends between the handle and the tip. The instrument includes at least one retroreflective marker located circumferentially around the elongate shaft near the distal end. The instrument further includes an inertial measurement unit located on the handle.
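The specification notes that occlusion gaps are short and that their start and end positions are known from the cameras, but it does not disclose the fusion algorithm. One minimal sketch consistent with that description is dead reckoning by double integration of acceleration (assumed already rotated into the trainer frame with gravity removed), followed by a linear drift correction so the path ends exactly at the re-acquired optical position. Function and parameter names are illustrative assumptions.

```python
import numpy as np

def fill_tracking_gap(p_start, v_start, p_end, accel, dt):
    """Estimate instrument positions while the optical markers are occluded.

    p_start, v_start: position and velocity at the last optical fix;
    p_end: position at the next optical fix; accel: per-frame acceleration
    samples in the trainer frame, gravity removed; dt: sample period.
    """
    p = np.asarray(p_start, dtype=float)
    v = np.asarray(v_start, dtype=float)
    path = []
    for a in np.asarray(accel, dtype=float):   # simple Euler integration
        v = v + a * dt
        p = p + v * dt
        path.append(p)
    path = np.array(path)
    # Spread the accumulated integration drift linearly across the gap so
    # the estimate lands exactly on the re-acquired camera position.
    drift = np.asarray(p_end, dtype=float) - path[-1]
    ramp = (np.arange(1, len(path) + 1) / len(path))[:, None]  # 0 -> 1
    return path + drift * ramp

# Ten occluded frames at 100 Hz with constant 1 m/s^2 acceleration in x:
gap = fill_tracking_gap([0, 0, 0], [0, 0, 0], [0.006, 0, 0],
                        [[1.0, 0.0, 0.0]] * 10, dt=0.01)
print(gap[-1])  # ends at the next optical fix
```

Because the gaps are short, even this first-order integration with endpoint anchoring keeps the interpolated trajectory close to the true path.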

[0007] According to another aspect of the invention, a method for surgical training is provided. The method includes the step of providing a simulated surgical environment, such as a laparoscopic trainer, defining an interior cavity between a top and a base. At least two cameras are disposed inside the simulated surgical environment along with at least two infrared light sources positioned adjacent to each of the cameras inside the simulated surgical environment. At least one surgical instrument is provided having an elongate shaft extending between a tip at a distal end and a handle at a proximal end. The distal end of the instrument includes at least one retroreflector which is a retroreflective marker. A computer processor is provided and configured to receive image data from the at least two cameras with software and appropriate triangulation algorithms configured to provide position data for the at least one instrument over time. The method includes the steps of inserting the distal end of the surgical instrument into the simulated surgical environment through a port in the top of the trainer and manipulating the instrument about the port inside the interior cavity. The at least one retroreflector on the instrument is exposed to infrared light from the at least two infrared light sources. The infrared light is reflected from the at least one retroreflector and captured by the at least two cameras. The position of the distal end of the surgical instrument over time is calculated by the computer processor using data from the cameras.
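The method above depends on locating the bright reflection of each marker in the gray-scale frames before any triangulation can occur. The application does not specify the image-processing step; a minimal sketch of one conventional technique (fixed intensity threshold followed by an intensity-weighted centroid, which works because the retroreflector returns far more light to the co-located camera than the background does) is shown below. Names and the threshold value are illustrative.

```python
import numpy as np

def marker_centroid(gray, threshold=200):
    """Locate a single retroreflective marker in a gray-scale frame.

    Returns the (x, y) image position of the bright blob, or None when
    no pixel exceeds the threshold (e.g. the marker is occluded).
    """
    ys, xs = np.nonzero(gray > threshold)
    if xs.size == 0:
        return None                          # marker occluded this frame
    w = gray[ys, xs].astype(float)           # weight by pixel intensity
    return float((xs @ w) / w.sum()), float((ys @ w) / w.sum())

# Synthetic 120x160 frame with one bright marker blob:
frame = np.zeros((120, 160), dtype=np.uint8)
frame[40:50, 70:80] = 255
print(marker_centroid(frame))  # center of the 10x10 blob
```

A `None` result per frame is exactly the condition under which the IMU-based gap filling described above would take over.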

Brief Description of the Drawings

[0008] FIG. 1 is a perspective view of a surgical training device according to the present invention.

[0009] FIG. 2 is a perspective view of a surgical training device according to the present invention.

[0010] FIG. 3 is a perspective view of a surgical training device according to the present invention.

[0011] FIG. 3A is a top perspective view of a simulated abdominal wall and tray according to the present invention.

[0012] FIG. 4 is a top perspective view of someone performing a simulated procedure in a laparoscopic trainer.

[0013] FIG. 5 is a top perspective view of a surface curved in one direction only.

[0014] FIG. 6 is a top perspective view of a surface curved in two directions.

[0015] FIG. 7 is a top perspective, exploded view of a negative cavity vacuum mold according to the present invention.

[0016] FIG. 8 is a top perspective, exploded section view of a negative cavity vacuum mold according to the present invention.

[0017] FIG. 9 is a top perspective view, section view of a negative cavity vacuum mold according to the present invention.

[0018] FIG. 10 is a top perspective, exploded section view of a frame, piece of foam and vacuum mold according to the present invention.

[0019] FIG. 11A is a top perspective view of a piece of foam in place on a vacuum mold according to the present invention.

[0020] FIG. 11B is a top perspective view of a piece of foam formed on a vacuum mold according to the present invention.

[0021] FIG. 12 is a top perspective, exploded section view of a frame, unformed layer, formed layer and a vacuum mold according to the present invention.

[0022] FIG. 13A is a top perspective, section view of a second piece of foam in place on a vacuum before forming according to the present invention.

[0023] FIG. 13B is a top perspective, section view of layers of foam on a vacuum mold after forming according to the present invention.

[0024] FIG. 14 is a top perspective, exploded section view of a frame, a layer of foam before forming, a plurality of foam layers after forming and a vacuum mold according to the present invention.

[0025] FIG. 15A is a top perspective, section view of a frame, a layer of foam before forming, a plurality of foam layers after forming and a vacuum mold according to the present invention.

[0026] FIG. 15B is a top perspective, section view of a frame and a plurality of foam layers after forming and a vacuum mold according to the present invention.

[0027] FIG. 16 is a top perspective, exploded view of a foam layer and an uncured sheet of silicone to make an artificial skin layer according to the present invention.

[0028] FIG. 17A is a top perspective view of a foam layer in place on a layer of silicone to form an artificial skin layer according to the present invention.

[0029] FIG. 17B is a top perspective view of a foam layer adhered to a trimmed layer of silicone forming an artificial skin layer according to the present invention.

[0030] FIG. 18 is a top perspective, exploded section view of a weighted plug, a plurality of adhered foam layers after forming, a frame, a flat artificial skin layer and a vacuum mold according to the present invention.

[0031] FIG. 19A is a top perspective, exploded section view of a weighted plug, a plurality of adhered foam layers after forming and a skin layer before forming in place under a frame on a vacuum mold according to the present invention.

[0032] FIG. 19B is a top perspective, exploded section view of a weighted plug, a plurality of adhered foam layers after forming and a skin layer after forming in place under a frame and on a vacuum mold according to the present invention.

[0033] FIG. 19C is a top perspective, exploded section view of a weighted plug, a plurality of adhered foam layers after forming and a skin layer after forming in place under a frame and on a vacuum mold according to the present invention.

[0034] FIG. 19D is a top perspective, section view of a weighted plug, a plurality of adhered foam layers after forming, and a skin layer after forming in place under a frame and on a vacuum mold according to the present invention.

[0035] FIG. 20A is a top perspective view of a simulated abdominal wall according to the present invention.

[0036] FIG. 20B is a bottom perspective view of a simulated abdominal wall according to the present invention.

[0037] FIG. 21 is a top perspective view of a simulated abdominal wall and frame according to the present invention.

[0038] FIG. 22 is a top perspective, exploded view of a simulated abdominal wall between two frame halves according to the present invention.

[0039] FIG. 23 is a perspective, section view of a simulated abdominal wall and two frame halves showing an angled channel according to the present invention.

[0040] FIG. 24A is a top perspective, section view of a bottom frame half showing retention protrusions according to the present invention.

[0041] FIG. 24B is a cross-sectional view of a simulated abdominal wall and frame according to the present invention.

[0042] FIG. 25 is a side elevational view of a typical laparoscopic surgical procedure performed in a simulator according to the present invention.

[0043] FIG. 26A is a side elevational view of a laparoscopic grasper instrument according to the present invention.

[0044] FIG. 26B is a side elevational view of a laparoscopic scissor instrument according to the present invention.

[0045] FIG. 26C is a side elevational view of a laparoscopic dissector instrument according to the present invention.

[0046] FIG. 27 is a side elevational view of a laparoscopic dissector instrument shaft detached from a handle according to the present invention.

[0047] FIG. 28 is a schematic of a laparoscopic trainer containing artificial organs and two laparoscopic surgical instruments and camera connected to an external microprocessor during use according to the present invention.

[0048] FIG. 29 is a top view of a circuit board according to the present invention.

[0049] FIG. 30 is an electrical schematic of a strain gauge configuration according to the present invention.

[0050] FIG. 31A is a side elevational, section view of an instrument handle assembly and shaft assembly according to the present invention.

[0051] FIG. 31B is an end view of a movement arm and section of a rod of a surgical instrument according to the present invention.

[0052] FIG. 31C is a top, section view of a movement arm and rod of a surgical instrument according to the present invention.

[0053] FIG. 31D is an end view of a movement arm and section of a rod of a surgical instrument according to the present invention.

[0054] FIG. 31E is a top, section view of a movement arm and rod of a surgical instrument according to the present invention.

[0055] FIG. 32 is a top perspective view of a laparoscopic surgical instrument, trocar, camera and simulated organs inside a laparoscopic trainer according to the present invention.

[0056] FIG. 33 is a side elevational view of a laparoscopic instrument having an inertial motion unit on a handle assembly according to the present invention.

[0057] FIG. 34 is a flow chart of steps taken by a system according to the present invention.

[0058] FIG. 35 is a schematic of an accelerometer calibration method and equations for all axes in both positive and negative directions according to the present invention.

[0059] FIG. 36 is a schematic of a magnetometer calibration model according to the present invention.

[0060] FIG. 37 is a strain gauge calibration plot of measured voltage against actual force measured by a load cell for calibration according to the present invention.

[0061] FIG. 38 illustrates a trimming and segmentation method for calculating the timing according to the present invention.

[0062] FIG. 39 is a flow chart of data in a MARG algorithm, an IMU orientation estimation algorithm according to the present invention.

[0063] FIG. 40 illustrates a smoothness algorithm and an equation used for curvature calculations according to the present invention.

[0064] FIG. 41 is a schematic illustrating an economy of motion algorithm and equation according to the present invention.

[0065] FIG. 42 is a computer screen shot view of a user interface starting page according to the present invention.

[0066] FIG. 43A is a computer screen shot view of a user interface calibration screen according to the present invention.

[0067] FIG. 43B is a computer screen shot view of a user interface calibration screen according to the present invention.

[0068] FIG. 43C is a computer screen shot view of a user interface calibration screen according to the present invention.

[0069] FIG. 43D is a computer screen shot view of a user interface calibration screen according to the present invention.

[0070] FIG. 44 is a computer screen shot view of a user interface lesson selection screen according to the present invention.

[0071] FIG. 45 is a computer screen shot view of a user interface preview screen according to the present invention.

[0072] FIG. 46 is a computer screen shot view of a user interface questionnaire screen according to the present invention.

[0073] FIG. 47 is a computer screen shot view of a user interface learning module screen according to the present invention.

[0074] FIG. 48 is a computer screen shot view of a user interface user feedback screen according to the present invention.

[0075] FIG. 49 is a flowchart illustrating the path of data flow according to the present invention.

[0076] FIG. 50 is a sectional side view of a surgical training device with infrared light sources and cameras, simulated organs and instruments according to the present invention.

[0077] FIG. 51 illustrates two images of an instrument with retro-reflective markers captured by a camera in a surgical training device according to the present invention.

[0078] FIG. 52 is a schematic of two cameras locating a marker in 3D space according to the present invention.

[0079] FIG. 53 is a top perspective view of a distal end of an instrument with markers according to the present invention.

[0080] FIG. 54A is a side view of two distal ends of two instruments with markers according to the present invention.

[0081] FIG. 54B is a side view of two distal ends of two instruments with markers according to the present invention.

[0082] FIG. 54C is a side view of two distal ends of two instruments with markers according to the present invention.

[0083] FIG. 54D is a side view of two distal ends of two instruments with markers according to the present invention.

Detailed Description of the Invention

[0084] Turning now to FIGs. 1-3, there is shown a surgical training device 10 that allows a trainee to practice intricate surgical maneuvers in an environment that is safe and inexpensive. The device 10 is generally configured to mimic the torso of a patient, specifically the abdominal region. The surgical training device 10 provides an enclosure for simulating a body cavity 12 that is substantially obscured from the user. The cavity 12 is sized and configured for receiving simulated or live tissue, model organs, skill training models and the like. The body cavity 12 and the enclosed simulated organs and/or models are accessed via a penetrable tissue simulation region 14 that is penetrated by the user employing devices such as trocars to practice surgical techniques and procedures using real surgical instruments, such as but not limited to graspers, dissectors, scissors and energy-based fusion and cutting devices, on the simulated tissue or models located in the body cavity 12. The surgical training device 10 is particularly well suited for practicing laparoscopic or other minimally invasive surgical procedures.

[0085] Still referencing FIG. 1, the surgical training device 10 includes a top cover 16 connected to and spaced apart from a base 18. The top cover 16 includes an integrally formed depending portion and the base 18 includes an upwardly extending portion, both of which cooperate to form the sidewalls and backwall of the surgical training device 10. The surgical training device 10 includes a frontwall 20 that is hinged to the base 18 to form a door that opens to the cavity 12. The frontwall 20 includes a front opening 22 that provides lateral, side access to the cavity 12, which is useful for practicing vaginal hysterectomies and transanal procedures. The frontwall 20 is shown in a closed position in FIG. 1 and in an open position in FIGs. 2-3. A latch 24 is provided and configured to release the tissue simulation region 14 from the top cover 16. Another release button is configured to open the door. The tissue simulation region 14 is representative of the anterior surface of the patient and the cavity 12 between the top cover 16 and the base 18 is representative of an interior abdominal region of the patient where organs reside. The top cover 16 includes an opening that is configured to receive the tissue simulation region 14. The tissue simulation region 14 is convex from the outside to simulate an insufflated abdomen. The tissue simulation region 14 includes numerous layers representing muscle, fat and other layers as described in U.S. Patent No. 8,764,452 issued to Applied Medical Resources Corporation and incorporated herein by reference in its entirety. The tissue simulation region 14 will be described in greater detail below. The base 18 includes rails 26 shown in FIG. 3 that extend upwardly from the bottom surface inside the cavity 12. The rails 26 are configured to receive a tray 89 that carries simulated or live tissue, a model or a training game such as a skill exercise board including but not limited to a pegboard exercise.
The tray 89 is useful for an arrangement comprising a plurality of organs and/or for retaining fluid or simulated organs made of hydrogel and the like. The tray 89 is placed through the front opening and onto the rails 26, upon which it can then slide into the cavity 12. The tray 89 of FIG. 3A includes a platform supported by two depending legs that can be positioned along the rails 26. The legs may include notches to fix the position of the tray 89 along the rails 26, which is advantageous for reproducing an environment that is fixed with respect to an internal camera 415 and/or fixed insertion ports 90 for all trainee users for evaluation purposes. The platform of the tray 89 may be lined with a hook-and-loop-type fastener to facilitate removable attachment of a model or skill exercise board to the tray 89. The rails advantageously permit deeper trays to carry more artificial organs or to customize the distance between the top of the artificial organs and the simulated abdominal wall. A shorter distance, such as provided by a shallower tray, provides a smaller working space for surgical instruments and may increase the difficulty and/or realism of the procedure. Hence, the rails permit a second platform for artificial organs other than the bottom floor of the trainer, which is considered the first platform for artificial organs. The second platform is adjustable by interchanging trays, placing the artificial organs therein and sliding the tray onto the rails 26. Lights such as a strip of light emitting diodes (LEDs), sensors and video cameras, all generally designated by reference number 28, may also be provided within the cavity 12. The surgical training device 10 is also provided with a removable adapter 30. The adapter 30 extends between and connects with the top cover 16 and base 18.
The adapter 30 includes an aperture 32 that is cylindrical in shape and is sized and configured for connecting with a simulated organ such as a simulated vagina or colon, and is particularly useful for practicing lateral access procedures including but not limited to vaginal hysterectomies and transanal procedures. When a lumen-shaped artificial organ is connected to the adapter, the aperture 32 is in communication with the lumen interior. The opening 22 in the frontwall 20 is also in communication with the lumen interior, providing access into the lumen from outside the trainer. The adapter 30 connects to prongs in both the top cover 16 and the base 18. When connected, the aperture of the adapter 30 aligns with the opening 22 in the frontwall 20 and is located behind the frontwall 20. The backside of the frontwall 20 may include a recess sized and configured to receive the adapter 30, making it substantially flush with the front side of the frontwall 20. The frontwall 20, when closed and locked, also aids in keeping the adapter secure, especially when a procedure requires significant force to be applied on the artificial organ. The adapter 30 is interchangeable with an adapter that does not have an aperture 32 and is blank such that, when it is connected to the surgical training device, the opening 22 in the frontwall 20 is covered and light is not permitted to enter the cavity. The blank adapter is employed when the simulation does not require lateral access to the cavity. The base 18 further includes height-adjustable legs 34 to accommodate common patient positioning, patient height and angles. In one variation, the legs 34 are made of soft silicone molded around hardware. The hardware includes a cap screw, tee nut and a spacer. The spacer, made of nylon, provides a hard stop that contacts the bottom of the base once the legs are screwed in so that each leg is the same length.
The tee nut is used to grip the silicone foot to prevent it from spinning independently from the cap screw. The distal end of each of the legs is provided with a silicone molded foot. The silicone feet are semi-spherical and allow the unit to self-level and dampen vibrations because of the soft silicone composition.

[0086] The surgical training device 10 has an elegant and simple design with the ability to simulate different body types such as patients with a high body mass index. The trainer 10 can be used by one or more people at the same time and has a large area in the tissue simulation region to accommodate trocar/port placement for a variety of common procedures. The device 10 is configured to resemble a pre-insufflated abdomen and is, therefore, more anatomically accurate than other trainers that are simply box-like or do not have large tissue simulation regions curved to simulate an insufflated abdomen. The interior cavity 12 is configured to receive a tray that can slide on the rails 26 into the cavity 12 such that moist/wet live or simulated organs made of hydrogel material can be utilized in the practice of electrosurgical techniques. The rails 26 also advantageously permit the floor of the inserted tray to be closer to the tissue simulation region, reducing the vertical distance therebetween. The device 10 is also conveniently portable by one person.

[0087] The surgical trainer 10 is a useful tool for teaching, practicing and demonstrating various surgical procedures and their related instruments in simulation of a patient undergoing a surgical procedure. Surgical instruments are inserted into the cavity 12 through the tissue simulation region 14. Various tools and techniques may be used to penetrate the top cover 16 to perform mock procedures on simulated organs or practice models placed between the top cover 16 and the base 18. An external video display monitor connectable to a variety of visual systems for delivering an image to the monitor may be provided. For example, a laparoscope inserted through the tissue simulation region 14 and connected to a video monitor or computer can be used to observe, record and analyze the simulated procedure. The surgical instruments used in the procedure may also be sensorized and connected to a computer. Also, video recording is provided via the laparoscope to record the simulated procedure.

[0088] There are a number of ways that the tissue simulation region can be made. In one exemplary variation, the tissue simulation region simulates an abdominal wall. Previous versions have used layers of different types of flat foam and/or silicone sheets to simulate the look and/or feel of the different types of tissue present in the human abdominal wall. The sheets simulating an abdominal wall are curved in one or more directions.

[0089] One problem with previous versions is that the simulated abdominal wall requires some type of support structure to prevent collapse or buckling of the simulated abdominal wall during use. The support structure holding the simulated abdominal wall generally detracts from the overall feel and visual effect of the simulated abdominal wall, and often gets in the way during simulated procedures, especially during trocar placement.

[0090] An aesthetic shortcoming of this type of simulated abdominal wall is that the foam can only be made to curve in one direction, which greatly detracts from its realism. An actual insufflated abdomen curves in multiple directions, and it is a goal of the present invention to create a more lifelike simulation.

[0091] An abdominal wall with realistic curvature and landmarks is desirable for the training of proper port placement. Proper port placement allows safe access to the abdominal cavity and adequate triangulation for accessing the key anatomical structures throughout a simulated surgical procedure.

[0092] The simulated abdominal wall for use with the surgical training device 10 and its method of manufacture will now be described in greater detail. The simulated abdominal wall is a layered foam abdominal wall that has no need for additional internal or external support structures, and has the visual appeal of a truly convex surface with appropriate landmarks. The method of making the simulated abdominal wall involves laminating multiple layers of foam with the use of adhesive. As each subsequent layer of foam is added, the overall structure becomes more rigid. After several layers have been added, the simulated abdominal wall will tend to spring back to its original shape, even after being severely deformed, and retain enough rigidity to allow realistic puncture by trocars. The simulated abdominal wall has the convex visual appearance of an insufflated human abdomen. Also, the simulated abdominal wall of the present invention allows the user to place a trocar anywhere through its surface without interference from unrealistic underlying support structures. The simulated abdominal wall can withstand repeated use. Previous simulated abdomens have a rubber-like skin layer that is not bonded to the supporting foam materials, resulting in a simulated abdominal wall that appears worn after only one or two uses. A skin layer comprised of silicone mechanically bonded to an underlying foam layer has been created and integrated into the simulated abdominal wall. Because the silicone is securely bonded to the underlying foam, a much more durable skin layer is realized, and costs are driven down by reducing the frequency of abdominal wall replacement. Furthermore, in previous versions where the outer skin layer is not bound to the underlying layers, unrealistic spaces open up between the simulated abdominal wall layers during port placement. The present invention eliminates this issue. A method has been developed to give shape to the simulated abdominal wall.
This method meets the aforementioned goals, and is described in reference to the figures.

[0093] The method involves the use of a vacuum mold to form and join convex foam sheets. In the process, a foam sheet is placed on the vacuum mold and held in place with a frame. The vacuum pump is then turned on, and heat is applied to the foam. The heat relaxes the foam, allowing it to yield and stretch into and conform to the shape of the mold cavity due to the suction of the vacuum. Spray adhesive is applied to the foam in the mold and/or to a new sheet of foam. Next, a multitude of holes are poked through the first layer of foam so that the vacuum can act on the second layer of foam through the first. The order of hole-poking and glue application can be reversed and the process will still work. The frame is removed, the next sheet of foam is placed glue side down onto the vacuum mold (with the first foam layer still in place, glue side up), and the frame is replaced. Again, the vacuum pump is turned on and heat is applied to the top foam layer. As the two foam layers come into contact they are bonded together. This process is then repeated for each desired foam layer. With the addition of each foam layer, the simulated abdominal wall gains strength.

[0094] Once the desired foam layer configuration is completed, the simulated abdominal wall is then inserted into the abdominal wall frame. The abdominal wall frame is a two-piece component that secures the simulated abdominal wall around the perimeter by compressing it between the top and bottom frame parts, and allows the user to easily install and remove the wall from the surgical simulator enclosure. The geometry of the abdominal wall frame adds further support to the convex form and feel of the simulated abdominal wall by utilizing an angled channel along the perimeter that the simulated abdominal wall is compressed between.

[0095] The method described hereinbelow relies on a bent lamination mechanism formed, in part, by successively gluing surfaces together that have been made to curve. A structure that maintains the desired curvature emerges with each additional layer.

[0096] The method uses vacuum forming to achieve curved surfaces. In this second method, flat sheets of foam are placed over a negative cavity vacuum mold, a frame is placed over the foam to make an air-tight seal, and the vacuum mold is evacuated. As the vacuum is pulled, heat is applied to the foam, which allows the foam to yield and stretch into the mold cavity. When a new layer is to be added, a multitude of holes are poked through the previously formed foam layers. Adhesive is applied between the layers so that they form a bond across the entire curved surface.

[0097] After several layers of foam have been laminated together, the work-piece begins to maintain the curved shape of the mold. By adding or removing layers, the tactile response of the foam layers can be tailored for more lifelike feel.

[0098] Once the desired foam layer configuration is completed, the simulated abdominal wall is then inserted into the abdominal wall frame, which is a two-piece system consisting of a top and bottom frame that secures the simulated abdominal wall along the perimeter by compressing the foam layers in an angled channel created by the top and bottom frame components in a friction-fit or compression-fit engagement or the like. The design of the frame allows the user to easily install and remove the frame from the surgical simulator enclosure by snapping the perimeter of the frame to the surgical simulator enclosure. The geometry of the abdominal wall frame adds further support to the convex form of the simulated abdominal wall by utilizing an angled channel along the perimeter that the simulated abdominal wall is compressed between. The angled channel of the frame follows the natural shape of the simulated abdominal wall and, in contrast to simply compressing the simulated abdominal wall between two flat frame pieces, results in significantly increased support for the convex form, produces a realistic feel of the simulated abdominal wall, and advantageously prevents unwanted inversion of the simulated abdominal wall during normal use.

[0099] With reference to FIG. 4, a surgical training device, also called a trainer or surgical simulator 10, for laparoscopic procedures is shown that allows a trainee to practice intricate surgical maneuvers in an environment that is safe and inexpensive. These simulators 10 generally consist of an enclosure 111 comprising an illuminated environment as described above that can be accessed through surgical access devices commonly referred to as trocars 112. The enclosure is sized and configured to replicate a surgical environment.
For instance, the simulator may appear to be an insufflated abdominal cavity and may contain simulated organs 113 capable of being manipulated and "operated on" using real surgical instruments 114, such as but not limited to graspers, dissectors, scissors and even energy-based fusion and cutting devices. Additionally, the enclosure 111 may contain a simulated abdominal wall 115 to improve the realism of the simulation. The simulated abdominal wall 115 facilitates the practice of first entry and trocar 112 placement and advantageously provides a realistic tactile feel for the instruments moving through the simulated abdominal wall.

[0100] Turning to FIG. 5, a surface 116 curved in one direction is shown. Many of the current products on the market make use of a simulated abdominal wall that curves in only one direction as shown in FIG. 5. This shape is an approximation of the real shape of an insufflated abdomen, which is curved in several directions. Furthermore, a simulated abdominal wall curved in one direction as shown in FIG. 5 is not as structurally sound as a shape that curves in two directions. Simulated abdominal wall designs that are curved in only one direction often necessitate the use of additional internal support structures beyond a perimeter frame, such as a crisscrossing reinforcing spine or buttress. FIG. 6 shows a surface 116 that curves in two directions, which is more realistic and also more structurally sound than a surface that curves in only one direction. The simulated abdominal wall 141 of the present invention is curved in two directions as shown in FIG. 6.

[0101] In view of the foregoing, the present invention aims to eliminate the need for internal support structures while creating a shape that has a visual look and tactile feel that more closely mimic the real abdominal wall.

[0102] Turning now to FIG. 7, an exploded view of a negative cavity vacuum mold is shown consisting of a base 123, air outlet 124, frame 125, and main body 126. FIG. 8 shows an exploded section view of the same vacuum mold. In this view, air-holes 127 are seen to pierce the cavity 128. FIG. 9 shows an assembled section view of the vacuum mold, showing the plenum 129 created between the base 123 and main body 126, the frame seal 130 between the base 123 and main body 126, as well as the plenum seal 131 between the main body 126 and frame 125.

[0103] Looking now to FIG. 10, the vacuum mold is shown with a foam sheet 132 ready to be placed on the main body 126 and held in place with frame 125. FIG. 11A shows the flat foam sheet 132 prior to forming, located inside the main body and covered by the frame 125. FIG. 11B shows the formed foam sheet 133 after application of vacuum across the plenum. During the forming process, air is evacuated through outlet 124, which creates negative pressure in the plenum 129. This negative pressure acts through air holes 127 and sucks the flat foam sheet 132 towards the inner surface of the cavity 128. While air is being evacuated through outlet 124, heat is applied to the top of the foam, which allows the foam to stretch and make complete contact with the surface of the cavity.

[0104] FIG. 12 shows an exploded section view of a foam layer 132 being added to the work-piece. Prior to forming in the vacuum mold, a multitude of holes 142 must be poked through the formed foam layer 133 to allow the suction to act through its thickness, thus pulling the flat foam sheet 132 into the cavity. Also prior to placement in the vacuum mold, adhesive must be applied to the top side of the formed foam layer 133 as well as to the underside of the flat foam sheet 132. FIGs. 13A-13B show the flat foam sheet 132 being simultaneously formed and laminated to the formed foam sheet 133, and thus beginning to form the pre-made foam layers 134. Again, different types and colors of foam may be used to simulate the colors and textures present in a real abdominal wall.

[0105] An exploded view of this process is shown after several repetitions in FIG. 14, where a flat foam sheet 132 will be pressed against a plurality of pre-made foam layers 134 using frame 125. FIG. 15A shows a collapsed view of the aforementioned setup before, and FIG. 15B after, vacuum forming. Again, between adding layers, it is essential to poke a plurality of small holes 142 through the pre-made foam layers 134, as well as to apply adhesive to the top of the pre-made foam layers 134 and, if needed, to the underside of the next flat foam layer 132.

[0106] Turning now to FIG. 16, an exploded view of the skin layer is observed, showing skin foam layer 137 and uncured silicone layer 138. FIG. 17A shows the skin foam layer 137 in place on the uncured silicone layer 138. When the silicone cures on the foam, it creates a mechanical bond with the slightly porous foam material. Once the silicone is fully cured, the excess is trimmed, resulting in the trimmed skin layer 139 shown in FIG. 17B.

[0107] FIG. 18 shows an exploded view of the vacuum mold main body 126, the trimmed skin layer 139 with the silicone side facing the main body 126, the frame 125, the pre-made foam layers 134 and a weighted plug 140 used to press the layers together. FIG. 19A shows the trimmed skin layer 139 held in place on the vacuum mold's main body 126 by the frame 125, prior to evacuation of air in the mold. FIG. 19B shows the trimmed skin layer 139 pulled into the cavity of the vacuum mold, with the pre-made foam layers 134, with or without adhesive applied, ready to be pressed down into the cavity by the weighted plug 140. FIG. 19C shows the pre-made foam inserts 134 placed into the cavity on top of the trimmed skin layer 139. FIG. 19D shows the final step of the process, the placement of the weighted plug 140 on top of the pre-made foam insert 134.

[0108] FIGs. 20A and 20B show right side up and upside down section views of the final simulated abdominal wall 141 in its finished state, prior to having its edges bound by the simulated abdominal wall frame top and bottom halves 143, 144. The simulated abdominal wall 141 is approximately 12-15 inches wide by approximately 15-18 inches long, and the area of the domed simulated abdominal wall is between approximately 250-280 square inches. The large area permits multiple trocar ports to be placed, and permits them to be placed anywhere on the simulated abdominal wall. The simulated abdominal wall is also interchangeable with other simulated abdominal walls, including ones configured for obese and pediatric patients. Furthermore, the large simulated abdominal wall is not limited to practicing laparoscopic, minimally invasive procedures; it also advantageously permits open procedures to be performed through the simulated abdominal wall.

[0109] FIG. 21 shows the simulated abdominal wall 141 set into the simulated abdominal wall frame 143, 144. This unit is then fixed into a laparoscopic trainer. FIG. 22 shows the exploded view of the simulated abdominal wall 141 and frame assembly, which includes a top frame 143 and a bottom frame 144. The top frame 143 and bottom frame 144 can be assembled together via screws in the case of a re-usable frame system, or snapped together via heat staking or other low-cost assembly method.

[0110] With reference to FIG. 23, one of the key features of the simulated abdominal wall frame 145 is the angled channel 146 in which the simulated abdominal wall 141 is compressed. The angle of the channel 146 follows the contour of the simulated abdominal wall 141 and significantly increases the support and form of the convex simulated abdominal wall 141. In contrast, a simulated abdominal wall 141 that is compressed and retained between two flat frames is relatively weaker and more likely to invert or collapse during use.

[0111] FIG. 24A shows the protrusions 147 that are spaced around the perimeter of the bottom frame 144. These retaining protrusions 147 can also be present on the top frame 143, or both frame halves 143, 144. These retaining protrusions 147 provide additional retention of the simulated abdominal wall 141 within the simulated abdominal wall frame 145 by pressing or biting into the simulated abdominal wall as it is compressed between the frame top 143 and frame bottom 144. With reference to FIG. 24B, a simulated abdominal wall 141 is compressed between the two frame halves 143, 144 and is pierced by a retaining protrusion 147.

[0112] It should be noted that although one method is described here for layering pre-made foam sheets in order to create a curved surface with structural integrity, other methods are also within the scope of the present invention, including a casting mold that allows the user to sequentially build up a multitude of curved layers that are adhered to one another across their entire surface.

[0113] After the surgical training device 10 is assembled with the simulated abdominal wall in place atop the trainer, laparoscopic or endoscopic instruments are used to perform mock surgeries using the surgical training device 10 of the present invention. Generally, artificial tissue structures and organs sized and configured to represent actual anatomical features, skill-specific models or one or more skill practice stations are placed inside the trainer 10. Surgical simulators, such as the surgical training device 10 of the present invention, are especially useful when they include feedback for the user. In the mock procedure, the performance of the user is monitored, recorded and interpreted in the form of user feedback through integration of various sensing technologies into the simulated environment. The present invention provides low-cost sensorized instruments that are capable of monitoring the motion and force applied by a user to the simulated tissue and the like located inside the trainer cavity. The sensorized instruments are connected to a microprocessor, memory and video display and configured to receive data from various sensors, including but not limited to sensors located on the surgical instruments, analyze the data and provide appropriate feedback to assist in teaching and training the user. The present invention can be employed with multiple surgical instruments and accessories, including but not limited to graspers, dissectors, scissors, and needle drivers. Data gathered from a mock surgery can be used to compare a trainee's performance to that of an experienced surgeon or that of other trainees to provide appropriate feedback. Such a system may improve the rate of skill acquisition of trainees and, as a result, improve surgical outcomes and skills.

[0114] The present invention utilizes a number of sensing systems making use of a variety of fundamental sensing principles and technologies such as strain gauges. For example, a strain gauge commonly consists of a metallic foil pattern supported by a flexible backing. When applied properly to a structure of interest, stresses and strains experienced by the structure are transferred to the strain gauge as tension, compression or torsion on the metallic foil pattern. These mechanical stimuli alter the geometry of the foil pattern and, as a result, cause a change in the electrical resistance of the foil pattern, which can be measured. An additional aspect that is important to the use of strain gauges is the configuration in which they are utilized. Strain gauges are typically wired into an electrical circuit, commonly known as the Wheatstone bridge, which consists of two parallel voltage dividers. In this configuration, the difference between the electric nodes at the center of the voltage dividers of the circuit is amplified and measured. The configuration in which the strain gauges are both wired into the circuit and applied to an object of interest determines what loads the sensor system actually measures. For example, to measure axial strain, two strain gauges are aligned on opposite sides of a component and are also wired on opposite sides of the bridge circuit such that they do not share a node.
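The bridge arrangement described above can be sketched numerically. The following illustrative Python snippet is not part of the patent; the gauge factor, nominal resistance, strain, and excitation voltage are assumed values. It shows how two active gauges wired on opposite sides of the bridge so that they share no node produce an output of approximately V_ex · GF · ε / 2:

```python
def bridge_output(r1, r2, r3, r4, v_ex):
    """Differential voltage between the midpoints of the two voltage dividers."""
    return v_ex * (r2 / (r1 + r2) - r4 / (r3 + r4))

def gauge_resistance(r0, gauge_factor, strain):
    """Resistance of a foil gauge under strain: R = R0 * (1 + GF * eps)."""
    return r0 * (1.0 + gauge_factor * strain)

# Assumed values: 350-ohm gauges, gauge factor 2, 5 V excitation,
# 500 microstrain of axial tension seen by both active gauges.
R0, GF, V_EX = 350.0, 2.0, 5.0
eps = 500e-6

# Two active gauges on arms r2 and r3, which share no node; their
# resistance changes both push the output in the same direction.
r_active = gauge_resistance(R0, GF, eps)
v_out = bridge_output(R0, r_active, r_active, R0, V_EX)  # ~2.5 mV
```

Placing the active gauges on arms that share a node would instead cancel their contributions, which is why the text specifies the opposite-arm wiring for axial strain.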

[0115] Turning now to FIG. 25, surgical simulators 10 for laparoscopic procedures have been developed that allow a trainee to practice intricate surgical maneuvers in an environment that is safe and inexpensive. These simulators generally consist of a cavity 12 comprising an illuminated environment that can be accessed through surgical access devices commonly referred to as trocars 212 and 213. The enclosure is sized and configured to replicate a surgical environment, such as an insufflated abdominal cavity containing simulated organs 214 that are capable of being manipulated and "operated on" using real surgical instruments 216 and 217, such as but not limited to graspers, dissectors, scissors and even energy-based fusion and cutting devices. Laparoscopes/endoscopes or other cameras 215 are inserted into the cavity through the simulated abdominal wall. More advanced simulators may also make use of various sensors to record the user's performance and provide feedback. These advanced systems may record a variety of parameters, herein referred to as metrics, including but not limited to motion path length, smoothness of motion, economy of movement, force, etc.

[0116] In view of the foregoing, the present invention aims to monitor the force applied by a trainee, interpret the collected information and use it to improve user performance through feedback and appropriate teaching.

[0117] In reference to FIGs. 26A, 26B and 26C, a variety of laparoscopic instruments are shown including a grasper 218, a dissector 219 and scissors 220, respectively. These devices, although different in function, generally share certain key features. Each instrument includes a handle 221 which controls the operable distal end of the instrument. Actuation of the handle opens and closes the jaw-like tip to perform grasping, dissecting or cutting based on the type of instrument used. Additionally, the instrument is configured to permit rotation of the shaft 227 by way of an adjustable component 222 in reach of the user's fingers. A locking mechanism 223 is also provided at the handle to allow the surgeon/trainee to maintain the jaws of the instrument at a given position.

[0118] With further reference to FIG. 27, the present invention makes use of a scissor-type handle 221 that can be reused after each surgical procedure. The handle 221 is designed such that a variety of disposable shafts 227, each with a different tip element 218-220, can be fixed to the same handle 221. In the present system, the disposable shafts 227 have a ball end 229 connected to a rod 230 which articulates with the instrument's tips 218. This piece fits into a spherical slot 231 at the end of a movement arm 232 inside of the handle 221 that connects to the grips 225 and 226. Movement of the thumb grip 225 actuates the rod 230, which opens or closes the instrument tips 218. The ability of such a system to swap out shafts 227 advantageously permits a single handle 221 to house the necessary electronics while being interchangeable with a variety of different instrument shafts and tips.

[0119] As shown in FIG. 28, the electronics such as the circuit board and sensors for force sensing are enclosed in a housing 240 and connected to the handle 221 of the instrument. The electronics are electronically connected via a USB cord 238, 242 to an external computer 239. The following descriptions reference a reposable system 221 . Previously, the instruments with sensors located on the shaft were disposable and very difficult to sterilize if needed. However, with the sensors on the handle, the shaft assembly can be interchanged and discarded as needed. The reposable handle 221 is modified to incorporate housing 240 for a custom circuit board 241 .

[0120] The circuit board 241 is shown in FIG. 29. The board 241 includes sensors 244, microprocessor 247 and a communication port 242 configured for communication with an external computer 239. The board 241 includes a 9-degree-of-freedom inertial measurement unit (9-DOF IMU) 244 and a high-resolution analog-to-digital converter (ADC) 243. The IMU 244 is comprised of a 3-axis accelerometer, 3-axis gyroscope, and 3-axis magnetometer. There are electrostatic discharge (ESD) diodes located between the ADC and ground to prevent electrical failure when the device is exposed to electrical shock. When utilized together along with appropriate calculations, information regarding the user's movement can be determined.

[0121] The ADC 243 compares the voltages of the strain gauge bridge circuit seen in FIG. 30. As can be seen in FIG. 30, the strain gauges 313 and 314 are configured such that axially applied loads stress the gauges 313 and 314, resulting in a change in resistance between the gauges and the accompanying resistors 315 and 316 which form each node 317 and 318. Each strain gauge is connected to a resistor 315 and 316 such that this change in resistance results in a measurable difference between the resistive components forming each node 317 and 318 and, as a result, the voltage 319 measured between the nodes 317 and 318. The ADC 243 measures this difference and, through the use of appropriate calculations, the force applied at the instrument tip can be determined. In regards to communication with an external computer, as can be seen in FIG. 28, the board 241 located inside the housing 240 is connected to an external computer 239 and powered by way of a micro-USB type 2.0 connector 238, 242.
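The "appropriate calculations" from the ADC reading to a tip force can be illustrated as a linear calibration. This is only a sketch: the reference voltage, programmable gain, resolution, and newtons-per-volt slope below are assumed placeholder values, not parameters from the patent.

```python
ADC_BITS = 24       # resolution of the bridge ADC (assumed)
V_REF = 2.5         # ADC reference voltage in volts (assumed)
PGA_GAIN = 128      # amplifier gain ahead of the ADC (assumed)
N_PER_VOLT = 4000.0 # calibration slope from loading known weights (assumed)

def counts_to_volts(counts):
    """Map a signed ADC code back to the differential bridge voltage."""
    full_scale = 2 ** (ADC_BITS - 1)
    return (counts / full_scale) * V_REF / PGA_GAIN

def tip_force(counts, zero_counts):
    """Force at the instrument tip from a linear calibration fit.

    `zero_counts` is the reading captured with the jaws unloaded, so the
    conversion reports zero force at rest.
    """
    return N_PER_VOLT * counts_to_volts(counts - zero_counts)
```

In practice the slope and zero offset would be determined per instrument during the calibration step the document describes later, since no two gauges respond identically.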

[0122] In regards to communication with an external computer, as can be seen in FIG. 38, the board 241 is connected to an external computer 239 and powered by way of a micro-USB type 2.0 connector 242.

[0123] Turning now to FIG. 31A, force sensing technologies coupled to the handle 221 that make use of strain gauges 255 are provided. The present invention positions the strain gauges 255 on the movement arm 232 inside the handle 221. Wires 256 connected to the strain gauges pass through the handle 221 to the circuit board 241 inside the housing 240. It is worth noting that the present invention wires the strain gauges on the movement arm in a half-bridge configuration. With the strain gauges on the handle assembly, the longevity of the instrument is increased because when the shaft is interchanged with the handle there are no stresses placed on the gauges and connecting wires. During interchanging of the shaft, the wires remain advantageously concealed and protected inside the handle assembly and are not exposed or stretched inadvertently, as would be the case if the sensors were placed on the shaft. Placement of the sensors in the handle assembly advantageously allows for shorter wires. However, moving parts inside the handle may rub against and wear out the wires. Accordingly, the wires are coated in polyetheretherketone (PEEK) to protect them and prevent wear from abrasion encountered inside the handle. The small gauge of the wires and the PEEK coating prevent the lead wires from wearing and provide a longer lifetime and more accurate data.

[0124] As can be seen in FIGs. 31B-31C, strain gauges 255 are applied on opposite sides of the movement arm 232 such that a half-bridge may be formed by connecting the strain gauges 255 in the appropriate manner. In this fashion, applied force is monitored as a function of the axial deformation of the movement arm 232 during use. The sensitivity of this sensing setup is controllable, in part, by changing the material that the movement arm 232 is made of. A larger sensing range is implemented by making the movement arm 232 out of materials with high elastic moduli such as hardened steel. On the other hand, use of materials with lower elastic moduli, such as aluminum, results in a lower overall sensing range and a higher sensitivity, as the movement arm 232, and as a result the strain gauges 255, deform more under axial loading. Use of aluminum also increases the likelihood of a failure of the movement arm at the rear webbing and at the socket when exposed to high grasping forces. To mitigate the deformation, the thickness of the rear tabs was increased and the thickness of the front of the socket was also increased.
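The range-versus-sensitivity trade-off can be made concrete with the textbook relation ε = F / (A·E). The sketch below uses typical handbook elastic moduli and an assumed cross-sectional area; neither the load nor the arm dimensions come from the patent.

```python
def axial_strain(force_n, area_m2, modulus_pa):
    """Axial strain of a uniform member under load: eps = F / (A * E)."""
    return force_n / (area_m2 * modulus_pa)

E_STEEL = 200e9     # Pa, typical for hardened steel
E_ALUMINUM = 69e9   # Pa, typical for aluminum alloys
AREA = 4e-6         # m^2, illustrative movement-arm cross-section (assumed)
LOAD = 20.0         # N, illustrative grasping load (assumed)

strain_steel = axial_strain(LOAD, AREA, E_STEEL)
strain_al = axial_strain(LOAD, AREA, E_ALUMINUM)
# The lower-modulus aluminum arm strains roughly 3x more for the same load,
# so the gauges produce a larger signal (higher sensitivity) but the arm
# reaches its elastic limit, and thus its sensing range, sooner.
```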

[0125] With reference to FIGs. 31D-31E, the strain gauges 255 on the movement arm 232 are not only sensitive to axial loads produced while interacting with an object at the tips, but are also sensitive to bending stress 257 transferred from the force 258 applied to the instrument shaft 227 and to the movement arm 232. The movement arm 232 is preferably made of 7075 aluminum. The strain gauge is calibrated for outputting force at the tip of the instrument. This output is compared against a force value pre-determined to harm or damage tissue for a particular procedure. Such information as to the appropriate use of force and level of respect for tissue is provided to the user as feedback at the end of the procedure, as will be discussed later herein.

[0126] In addition to measuring the force applied by the user, a user's motion and instrument position may also be monitored in a mock surgical procedure or practice. Systems and methods are provided for tracking instrument position and user movement while training with simulated organ models. Feedback to the user is provided based on the collected and analyzed data to assist in teaching and training the user. Various and multiple surgical instruments and accessories, including but not limited to graspers, dissectors, scissors, needle drivers, etc. can be employed with the systems described herein for motion tracking. Data gathered from the sensorized surgical instruments can be used to compare an inexperienced trainee's performance to that of an experienced surgeon and provide appropriate feedback. Training in this manner may improve the rate of skill acquisition of trainees and, as a result, improve surgical outcomes.

[0127] With reference to FIG. 32, a surgical simulator 10 is shown for laparoscopic procedures that permits a trainee to practice intricate surgical maneuvers in an environment that is safe and inexpensive. The simulator 10 generally consists of a cavity 12 comprising an illuminated environment that can be accessed through surgical access devices commonly referred to as trocars 412. The enclosure is sized and configured to replicate a surgical environment. For instance, the simulator may appear to be an insufflated abdominal cavity and may contain simulated organs 413 capable of being manipulated and "operated on" using real surgical instruments 414, such as but not limited to graspers, dissectors, scissors and even energy-based fusion and cutting devices. Additionally, the enclosure often makes use of an internal camera 415 and external video monitor.

[0128] More advanced simulators may also make use of various sensors to record the user's performance and provide feedback. These advanced systems may record a variety of parameters, herein referred to as metrics, including but not limited to motion path length, smoothness of motion, economy of movement, force, etc. The present invention is configured to track the user's movements and the position of utilized instruments, interpret the collected information and use it to improve user performance through feedback and appropriate teaching instructions. Different methods for monitoring and collecting motion and position data will be now described.

[0129] In reference to FIG. 33, a laparoscopic grasper 416 is shown that includes an inertial measurement unit (IMU) 417 consisting of a magnetometer, gyroscope and accelerometer. Data collected from the IMU 417, such as acceleration, angle, etc., is utilized to determine metrics such as, but not limited to, motion smoothness, economy of motion and path length. This information is obtained by collecting the raw IMU data (such as acceleration, angular velocity, and azimuth) in real time and analyzing it on a connected computer.

[0130] After various data is collected from the one or more sensors described above, the data is processed to extract meaningful surgical laparoscopic skills assessment metrics for providing constructive user feedback. User feedback can be tailored to identify strengths and weaknesses without relying on the subjective assistance of a third party. Users can view their feedback after completing a module, task or procedure on the training system. Some examples of metrics that are computed for performance feedback include but are not limited to (i) the total time it takes for the procedure to be completed, (ii) the average smoothness of motion of tool tips, (iii) the average economy of motion (i.e. efficiency), (iv) the average speed of motion at the tool tips, (v) the average work done, and (vi) the average energy efficiency at the tool tips.
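Several of the listed metrics can be computed directly from a stream of tool-tip positions. The sketch below assumes a simple `(timestamp_seconds, x, y, z)` sample format, which is an illustrative choice rather than a format specified in the patent:

```python
import math

def motion_metrics(samples):
    """Basic assessment metrics from timestamped tool-tip positions.

    `samples` is a chronologically ordered list of (t, x, y, z) tuples.
    """
    total_time = samples[-1][0] - samples[0][0]
    # Path length: sum of straight segments between consecutive samples.
    path_length = sum(
        math.dist(a[1:], b[1:]) for a, b in zip(samples, samples[1:])
    )
    # Economy of motion: how close the path is to the direct line (1.0 = ideal).
    straight_line = math.dist(samples[0][1:], samples[-1][1:])
    economy = straight_line / path_length if path_length else 1.0
    avg_speed = path_length / total_time if total_time else 0.0
    return {
        "time_s": total_time,
        "path_length": path_length,
        "economy_of_motion": economy,
        "avg_speed": avg_speed,
    }
```

Smoothness metrics (typically based on jerk, the derivative of acceleration) and work/energy metrics would additionally need the force samples, so they are omitted from this sketch.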

[0131] A nine degree-of-freedom (DOF) inertial measurement unit (IMU) is used as the means for motion tracking. The IMU consists of a combination of sensors including an accelerometer, a magnetometer, and a gyroscope. Raw analog voltage measurements are converted into raw digital values in units pertinent to each specific sensor. The accelerometer measures the acceleration of the device on the x, y, and z axes (in both positive and negative directions) in reference to gravitational force, converted into units of acceleration (m/s²). The magnetometer measures the earth's magnetic field in gauss units. The gyroscope measures the angular velocity of the device about all three axes in radians per second (rad/s). A total of nine values are collected from the IMU per sample. For force measurement, two strain gauges are attached to a metal strut situated within the grasper, which is primarily used to translate the grasper actuation to the grasper tips. Each type of sensor is calibrated before data is collected. Samples are received approximately every 20 milliseconds, saved into a database upstream, and passed into the data analysis utility. The data analysis utility includes data preprocessing, orientation analysis, and metrics analysis.

[0132] Once raw data has been collected and calibrated, data is pre-processed, and some preliminary analysis is performed before metrics are calculated. The three reliable and well-tested metrics to measure a user's performance in simulators are (1) the time taken to complete the task, (2) smoothness of motion, and (3) economy of motion. Data analysis algorithms aim to quantify these metrics as will be detailed hereinbelow. Other metrics, such as average velocity of the tool tips and energy efficiency, will also be added into the analysis. Once metrics computation is complete, the results are graphically conveyed to the user for performance feedback. This overview of data processing and analysis is illustrated in FIG. 34.

[0133] Before any type of analysis is done with the data, the data is pre-processed to ensure that it reflects the true value as closely as possible. No two sensors are completely identical, and their signal responses will always present a slight margin of error due to inherent hardware variability. By calibrating the sensors, the difference between the raw sensor signal output and the true value is characterized as a constant or a function, depending on whether the relationship is linear or nonlinear. Each sensor will have a unique calibration constant or set of coefficients that are used to compensate for errors in all the signals generated from that specific sensor. For this invention, there are a total of four types of sensors (accelerometer, magnetometer, gyroscope, strain gauge) that need to be calibrated, each requiring a different calibration method.

[0134] Turning now to FIG. 35, the accelerometer 501 is calibrated using gravity as its reference. The IMU device is oriented with one of its three axes perpendicular to the ground and held at that orientation while the signal is recorded and averaged over a span of a few seconds. The same is done in the opposite orientation (same axis). This is repeated for all three axes. A total of six gravity acceleration values are measured, two for each of the x, y, and z axes. The average 518 of the two values will be the offset for each axis.
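By way of illustration, the per-axis offset computation described above may be sketched as follows (a minimal Python sketch; the function name and data layout are illustrative assumptions, not part of the disclosed system):

```python
def accel_offsets(readings):
    """Zero-g bias per axis from paired gravity measurements.

    readings maps each axis to (reading_with_axis_up, reading_with_axis_down),
    both in m/s^2; the mean of the pair is the constant offset 518 to
    subtract from that axis.
    """
    return {axis: (up + down) / 2.0 for axis, (up, down) in readings.items()}

# A slightly biased sensor reads +9.91 m/s^2 axis-up and -9.71 m/s^2 axis-down:
offsets = accel_offsets({"x": (9.91, -9.71), "y": (9.83, -9.79), "z": (9.86, -9.76)})
```

An unbiased axis would report equal and opposite readings, giving an offset of zero.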

[0136] The magnetometer is calibrated using the earth's magnetic field as its reference. Magnetic measurements will be subject to distortions. These distortions fall into one of two categories: hard iron or soft iron. Hard iron distortions are magnetic field offsets created by objects that are in the same reference frame as the object of interest. If a piece of ferrous or metallic material is physically attached to the same reference frame as the sensor, then this type of hard iron distortion will cause a permanent bias in the sensor output. This bias is also caused by the electrical components, the PCB, and the grasper handle that the circuit board is mounted on. Soft iron distortions are considered deflections or alterations in the existing magnetic field. These distortions will stretch or distort the magnetic field depending upon which direction the field acts relative to the sensor.

[0137] Referring now to FIG. 36, to calibrate the IMU, the IMU is oriented at as many angles and directions as possible to attain an even distribution of data points to model a spherical representation of the earth's magnetic field. Once the raw magnetometer data 502 is recorded, it is fit to an ellipsoid using a fitting algorithm. The ellipsoid center and coefficients are calculated. The center values reflect the hard iron bias of the device, while the coefficients characterize the soft iron distortion (i.e., the shape of the distorted magnetic field surrounding the device). Assuming that the earth's magnetic field is centered at the origin and is perfectly spherical, the center offset and the transformation matrix can be calculated as follows:

[0138] Magcenter = [mcenter_x, mcenter_y, mcenter_z]

Magtransform = the 3 x 3 transformation matrix computed from the ellipsoid coefficients

Magcalibrated = (Magraw - Magcenter) · Magtransform
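The correction above may be applied to raw samples as in the following sketch (illustrative Python/NumPy; it assumes the center and transformation matrix have already been obtained from the ellipsoid fit):

```python
import numpy as np

def calibrate_mag(raw, center, transform):
    """Apply the hard-iron offset (ellipsoid center) and soft-iron
    correction (transformation matrix) to raw magnetometer samples.

    raw: (N, 3) readings in gauss; center: (3,) hard-iron bias;
    transform: (3, 3) matrix mapping the fitted ellipsoid onto a sphere.
    """
    return (np.asarray(raw, float) - np.asarray(center, float)) @ transform

# Toy example: a pure hard-iron bias of (0.1, -0.2, 0.0) gauss, identity soft iron.
cal = calibrate_mag([[0.3, 0.1, 0.5]], [0.1, -0.2, 0.0], np.eye(3))
```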

[0139] The gyroscope measures angular velocity, which means that when the device is perfectly still, a perfect gyroscope's signal output will be 0 rad/s. To calibrate the gyroscope, the device is laid completely still while raw gyroscope signals are recorded. A total of three values are measured and used to compensate for the error and noise.

[0140] Gyrocalibrated = Gyroraw - GyroatRest

[0141] The strain gauges are calibrated using a load cell as a reference. Each grasper handle has two strain gauges placed on opposite sides of the metal strut as shown in FIG. 31 B. The strut is loaded axially, and the strain gauges are each interconnected to a Wheatstone bridge, which measures the change in resistance of the strain gauges due to the compression or elongation of the metal bar. Traditionally, a load cell can be used to directly characterize the strain gauge signal in response to load. The manner in which the bar is assembled into the handle is important because it can introduce complications that prevent accurate force measurements using the load cell. One end of the metal bar is connected to the actuator where the grasper is held, and the other end is connected to a rod that in turn actuates the grasper tips. Between each end of the bar and their respective sites of actuation are many joining parts that work together to transfer force from the handle to the grasper tips. These joining parts, in order to allow movement, are designed with clearance. When there is a change in direction of load (e.g. closing the grasper as soon as it has been opened), the separate parts will have to travel through this void before they make contact with their adjacent parts again. This phenomenon causes different force readings on the same load applied in opposite directions (i.e. compression or tension), and is known as "backlash". Calibration is done by observing the difference in response of the strain gauge and the load cell when the grasper is loaded (gripped), and when it is being released. FIG. 37 shows that the strain gauge response will follow a different trend line when force is being loaded and when it is being released due to backlash. An algorithm is written to automatically estimate the closest polynomial fits to both the upper 503 and lower 504 trend lines, resulting in two sets of polynomial coefficients. 
It separates the two lines by first fitting all the data points to create a single trend line that passes between the top and bottom trend lines of interest, which acts as a threshold line to separate data points that belong to the top line from those that belong to the bottom line. It then sweeps through the data over time and estimates whether the grasper is being loaded in one direction or the other. The algorithm then applies the corresponding polynomial coefficients to solve for true force being applied at the grasper tips.
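The two-trend-line separation described above may be sketched as follows (illustrative Python/NumPy; the polynomial degree and threshold strategy are assumptions about one possible implementation):

```python
import numpy as np

def fit_backlash_curves(force, strain, deg=2):
    """Separate loading and unloading trend lines in backlash-affected data.

    A single polynomial fit through all points serves as the threshold
    line; samples above it are assigned to the upper (loading) curve 503
    and samples below to the lower (unloading) curve 504, and each set
    is refit.  Returns the two coefficient arrays (numpy polyfit order).
    """
    force = np.asarray(force, float)
    strain = np.asarray(strain, float)
    mid = np.polyval(np.polyfit(force, strain, deg), force)
    upper = strain >= mid
    return (np.polyfit(force[upper], strain[upper], deg),
            np.polyfit(force[~upper], strain[~upper], deg))

# Synthetic hysteresis: loading reads 2F + 1, unloading reads 2F - 1.
f = np.linspace(0.0, 10.0, 25)
c_up, c_lo = fit_backlash_curves(np.r_[f, f], np.r_[2 * f + 1, 2 * f - 1])
```

Evaluating the two coefficient sets at a given force recovers the separate loading and unloading responses that the calibration compensates for.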

[0142] To ensure analysis is as relevant to actual surgery as possible, both the user's dominant and non-dominant hand movements are tracked simultaneously. After each of the sensors is calibrated correctly, and prior to performing any analysis, time is one metric that can be obtained. Unfortunately, due to the nature of certain surgical simulation procedures, the user is occasionally required to put down the device mid-session. Since the length of time in which the device stays inactive in this form does not directly reflect on the skill of the user, this idle factor is eliminated from the analysis in one variation. An algorithm to trim off these idle portions 505 is shown in FIG. 38. It does so by sweeping through each of the axes of the accelerometer data and calculating the derivative over time. When the derivative is zero or close to zero, it is assumed that there is no motion along that axis. If the derivative remains zero or close to zero for more than 6 seconds, that portion is considered idle and is set aside. A buffer 506 of approximately 3 seconds is added to each end of the idle start and end times 505 to account for movements relating to the picking up or putting down of the device. The final start and end idle times are used as a reference for downstream processing to identify the locations at which the data is to be segmented. Useful data separated by intermediate idle regions is segmented and stored into an array list in order 507 (i.e., a data set with 3 idle periods will have 4 data segments). Segments that belong to a single data set will be individually analyzed successively 507 and summed to find the total active time to complete the task. When at least one of the tools is not idle, the user is considered to be actively performing the task, and these active periods make up the total active time to complete the task.
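One possible implementation of the idle-trimming and segmentation step is sketched below (illustrative Python/NumPy; the derivative tolerance and data layout are assumptions, while the 6-second idle threshold and 3-second buffer follow the description above):

```python
import numpy as np

def active_segments(accel, dt=0.02, tol=1e-3, min_idle=6.0, buffer_s=3.0):
    """Split a recording into active segments, cutting out long idle runs.

    A sample is idle when the time derivative on every accelerometer axis
    is near zero; idle runs longer than min_idle seconds are removed,
    keeping a buffer at each end for pick-up/put-down motion 506.
    Returns (start, end) index pairs, end exclusive.
    """
    accel = np.asarray(accel, float)
    deriv = np.abs(np.diff(accel, axis=0)) / dt
    moving = np.r_[True, (deriv > tol).any(axis=1)]
    idle = np.flatnonzero(~moving)
    runs = np.split(idle, np.flatnonzero(np.diff(idle) > 1) + 1) if idle.size else []
    pad = int(round(buffer_s / dt))
    segments, start = [], 0
    for run in runs:
        if run.size * dt > min_idle:            # only long idles are trimmed
            a, b = run[0] + pad, run[-1] - pad  # keep the buffer on both sides
            if a > start:
                segments.append((start, a))
            start = b + 1
    if start < len(accel):
        segments.append((start, len(accel)))
    return segments

# 10 s of motion, 10 s perfectly still, 10 s of motion, sampled at 50 Hz:
a = np.zeros((1500, 3))
a[:500, 0] = np.arange(500) * 0.01
a[500:1000, 0] = a[499, 0]
a[1000:, 0] = a[999, 0] + np.arange(500) * 0.01
segs = active_segments(a)
```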
Once data has been segmented, and calibration has been applied to the raw data, this information can be used to calculate the orientation of the device over time. One sensor fusion algorithm that combines accelerometer, magnetometer, and gyroscope data to estimate the orientation of the device over time is the Magnetic, Angular Rate, and Gravity (MARG) filter developed by Sebastian Madgwick, illustrated in FIG. 39.

[0143] In order to understand how the algorithm is implemented, one must first understand how each component in the IMU contributes to the overall estimation of the orientation of the device. Since the gyroscope measures angular velocity in all three axes, theoretically, these values can be integrated to obtain angular displacement. Unfortunately, as with the case for most sensors and discrete digital signals, integration and quantization error are almost always unavoidable. The result is that these small errors in the estimated displacement will quickly accumulate over time until the estimated orientation "drifts" significantly and no longer estimates the orientation correctly. The accelerometer and magnetometer are therefore present to provide a reference for the gyroscope. Since the accelerometer measures acceleration along all three axes, it is also able to detect the direction gravity is pointed relative to its own orientation. When the device is tilted slightly at an angle, the direction of gravity relative to the orientation of the device also tilts slightly at an angle identical but opposite to the tilting motion. With some basic trigonometry, the roll and pitch of the device can be estimated. The roll and pitch are the angles at which the device is rotated about the axes on a plane parallel to the ground. There are several limitations to exclusively using the accelerometer to estimate orientation. Firstly, since accelerometers are also sensitive to acceleration forces other than gravity, data is susceptible to error if there is linear motion of the device. Secondly, yaw, which is the angle of rotation about the axis perpendicular to the ground, cannot be estimated, since the direction of gravity in relation to the orientation of the device will not change if the device is oriented north or east, for example. Yaw is, instead, estimated using the magnetometer.
The magnetometer is essentially a digital compass that provides information about the magnetic heading of the device, which can be converted into yaw angles. The accelerometer and magnetometer estimations, when combined with the gyroscope orientation estimations by an algorithm, act as a filter that helps dampen the effects of integration errors in the gyroscope.

[0144] When dealing with orientations in algorithms, some common mathematical representations include Euler angles and the quaternion representation. Referring to FIG. 39, the MARG filter uses the quaternion representation, and applies gradient-descent to optimize accelerometer and magnetometer data to the gyroscope data and estimate the measurement error of the gyroscope as a quaternion derivative. Quaternion results are converted back into Euler angles 509 for more intuitive postprocessing of the orientation data.

[0145] Still referencing FIG. 39, the Euler angles (roll 510, pitch 511, and yaw 512) represent the angle traveled on the x, y, and z axes respectively from the original orientation. Each Euler angle representation can be converted to a unit vector notation that represents orientation. Once the orientation vector is computed, analysis will proceed to metrics computation.
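The conversion of the quaternion results 509 back into Euler angles may be sketched as follows (a standard Z-Y-X aerospace-convention conversion in Python; not necessarily the exact formulation used in the MARG filter implementation):

```python
import math

def quat_to_euler(w, x, y, z):
    """Convert a unit quaternion to (roll, pitch, yaw) in radians using
    the Z-Y-X (yaw-pitch-roll) rotation convention."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw

# A pure 90-degree rotation about the vertical axis (yaw only):
roll, pitch, yaw = quat_to_euler(math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
```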

[0146] Total active time has already been estimated prior to the beginning of orientation analysis. Other metrics to consider include economy of motion and smoothness. With reference to FIG. 41, the economy of motion metric measures how well the user chooses the path of the tool tip to complete a specific procedure. In theory, the optimal path is the shortest path possible to complete the procedure, and economy of motion is the ratio of the shortest, most efficient path to the measured path length of the user. In reality, the optimal path is very difficult to estimate as it depends largely on the type of procedure and the varying approaches that may exist even among the best surgeons. To solve this problem, instead of estimating and comparing the measured path length to the shortest path length, the measured path length 515 is compared to the average path length of a pool of expert surgeons 516. Path length is calculated, first, by taking the dot product of adjacent orientation vectors in the data sequence, which gives the angle of change in orientation. Each angle in the data sequence multiplied by the length of the tool gives the arc length that the tip traveled. The total path length is the sum of this series of arc lengths. The path length calculated using this method is not the absolute path length, as this method assumes that there is no travel along the depth axis (i.e., the grasper moving in and out axially through the trocar). The reason for this limitation comes inherently from the IMU's inability to track 3D position. IMUs are only able to accurately track orientation information. The only means to estimate 3D position is through integrating the accelerometer data twice. Although this may be a mathematically correct approach, in reality, accelerometers are very noisy. Even after filtering, the noise will be amplified each time it is integrated.
Integrating each data point twice sequentially along the data series allows error to accumulate rapidly. In other words, three-dimensional position tracking can only be achieved for several seconds before the estimate drifts far away from its true 3D position. Nevertheless, the ratio of the expert path length to the user path length 517, though subject to error, is assumed to be proportional to the actual ratio of the true path lengths, and is used to measure economy of motion.
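The path-length and economy-of-motion computation described in paragraph [0146] may be sketched as follows (illustrative Python/NumPy, assuming a sequence of orientation unit vectors and a known instrument length):

```python
import numpy as np

def path_length(orientations, tool_length):
    """Tip path length from orientation unit vectors: the angle between
    adjacent vectors (via their dot product) times the instrument length
    gives each incremental arc; the total is the sum of these arcs."""
    v = np.asarray(orientations, float)
    dots = np.clip(np.einsum("ij,ij->i", v[:-1], v[1:]), -1.0, 1.0)
    return float(np.sum(np.arccos(dots)) * tool_length)

def economy_of_motion(expert_length, user_length):
    """Ratio 517 of the expert-pool average path length to the user's
    measured path length."""
    return expert_length / user_length

# A tip sweeping a quarter circle in three equal steps with a 0.3 m instrument:
vecs = [[np.cos(t), np.sin(t), 0.0] for t in np.linspace(0.0, np.pi / 2, 4)]
user_len = path_length(vecs, 0.3)
```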

[0147] With reference to FIG. 40, smoothness measures the frequency and variance of motion. It is assumed that expert data will typically show smoother motion paths than those of less experienced surgeons. Factors that may affect smoothness of motion include hesitation, misjudgment of the lateral and depth distance of tip to target, collision of the tool tips, and lack of speed and/or force control, all of which are more apparent in novices. To begin, the position of the tool tip is estimated. As described in the previous section, absolute 3D position tracking is not possible while using an IMU. Instead, a pseudo-2D position is projected from the lateral sweeping motion of a grasper pivoting at the entry point, assuming that there is no movement in depth. This 2D position coordinate represents the path the tip travels. The curvature K of the path is first calculated over time using the equation 513. Curvature gives a measure of the abruptness of change in path. The smoother the motion, the smaller the change from one curvature value to the next. Smoothness can then be quantified statistically as the ratio of the standard deviation of curvature change to the mean of curvature change 514. The smaller the resulting smoothness value, the less variability there is in motion, the smoother the motion path, and the more skilled the user.
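The curvature-based smoothness statistic may be sketched as follows (illustrative Python/NumPy; the derivative scheme is an assumption, while the ratio of the standard deviation of curvature change to its mean 514 follows the description above):

```python
import numpy as np

def smoothness(xy, dt=0.02):
    """Smoothness of a pseudo-2D tip path: curvature K is computed from
    first and second derivatives of the coordinates, and the metric is
    the standard deviation of the change in curvature divided by the
    mean change (smaller values indicate smoother motion)."""
    x, y = np.asarray(xy, float).T
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)
    k = np.abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
    dk = np.abs(np.diff(k))
    mean = float(np.mean(dk))
    return float(np.std(dk)) / mean if mean > 0 else 0.0

# A gently curving parabolic sweep sampled at 50 Hz:
t = np.linspace(0.0, 1.0, 200)
value = smoothness(np.c_[t, t ** 2])
```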

[0148] Other smoothness algorithms that have been tested or considered include one that applied the smoothness equation to each of the accelerometer data series separately and took the average of all the smoothness values; one that applied the smoothness equation to each of the position coordinates and took the average of the resultant smoothness values; and one that performed an auto-correlation of curvature. Auto-correlation is a way of calculating the similarity of a signal with itself at an offset time. This is useful for finding whether there is a smooth transition from one sample point to the next, by offsetting by only a second, or even a single data point, and determining how similar the offset signal is to the original signal.

[0149] Other metrics that are explored include the average velocity of the tool tips and energy efficiency. Average velocity is simply the distance travelled over time. Average velocity can be used in combination with other metrics to gauge confidence and familiarity with the procedure. The path length from one sample to the next has already been computed while determining the overall path length the tip of the tool travelled. The time increment between samples is recorded in the raw data and can be calculated by subtracting the previous time stamp from the most current time stamp along the sequential analysis. A velocity is calculated for each sample increment and the average is taken.
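This computation may be sketched as follows (a minimal Python sketch; the argument names are illustrative):

```python
def average_velocity(arc_lengths, timestamps):
    """Mean tip speed: each per-sample arc increment (already computed
    during path-length analysis) is divided by the elapsed time between
    the corresponding raw-data timestamps, and the velocities are averaged."""
    velocities = [arc / (t1 - t0)
                  for arc, t0, t1 in zip(arc_lengths, timestamps, timestamps[1:])]
    return sum(velocities) / len(velocities)

# Two increments: 0.1 m in 0.02 s and 0.2 m in 0.04 s, both 5 m/s:
v = average_velocity([0.1, 0.2], [0.0, 0.02, 0.06])
```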

[0150] Lastly, energy efficiency is computed using the force data collected from the strain gauge. Force information is important in determining if the user is using excessive forces in accomplishing the task and, hence, causing unnecessary tissue damage. Because each data set was segmented, each of these algorithms is applied to each segment sequentially, yielding the same number of metrics as there are segments in the data set. These individual metrics are averaged to determine the overall metric for that data set. Each individual device involved in the simulation session will have computed metrics associated with it, and these metrics will be combined for overall analysis.

[0151] The data is collected and analyzed via an interactive application installed on a computer or other microprocessing device. The application is presented via an interactive graphical user interface offering various learning modules, such as modules on specific laparoscopic procedures, and providing user feedback on metrics collected from the sensorized instruments. The software application guides users through selecting a learning module and provides users with constructive feedback, helping users increase surgical instrument handling skills and build manual dexterity.

[0152] The software can employ a variety of technologies, languages and frameworks to create an interactive software system. In particular, the JavaFX® software platform, which has cross-platform support, can be used to create the desktop application. JavaFX® applications are written in Java and can use Java® API libraries to access native system capabilities. JavaFX® also supports the use of Cascading Style Sheets for styling of the user interface. The SQLite® software library can also be used in the present invention as a self-contained, serverless, transactional SQL database engine. This database engine is used to create and insert data pertaining to each learning module into a database, as well as data collected from the user to later be analyzed. Each screen of the application is populated with learning module data stored in the SQL database.
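By way of illustration, the module-storage pattern may be sketched with Python's built-in sqlite3 bindings (the table and column names here are hypothetical, not the schema used by the actual application):

```python
import sqlite3

# Hypothetical learning-module table; the titles and descriptions queried
# here would populate the module selection screen.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE modules (id INTEGER PRIMARY KEY, title TEXT, description TEXT)")
conn.execute("INSERT INTO modules (title, description) VALUES (?, ?)",
             ("Total Laparoscopic Hysterectomy", "Practice the steps of a TLH."))
titles = [row[0] for row in conn.execute("SELECT title FROM modules ORDER BY id")]
```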

[0153] The JavaFX® embedded browser, which is based on the open source WebKit® web browser engine, may also be employed. This browser supports most web browser technologies including HTML5, JavaScript®, the Document Object Model, and Cascading Style Sheets. Each step of a laparoscopic procedure is displayed in an embedded web browser in the learning module screen.

[0154] The Data Driven Documents (D3) JavaScript® library may also be utilized to provide dynamic interactive visualizations of data. D3 binds data to the web browser's Document Object Model, to which D3 transformations can then be applied. D3 visualizations using analyzed data collected during the learning module can then be displayed in an embedded browser in the feedback screen.

[0155] The Webcam Capture Java® API can also be employed to capture images from the connected laparoscope to display to the user. The live feed from the laparoscope is embedded into the learning module screen.

[0156] With reference now to FIG. 42, the module devices screen page of the user interface displays all of the connected devices 601. The graphical user interface includes virtual buttons displaying whether the magnetometers on each instrument have been calibrated. Selecting the "calibrate" button adjacent to each specific instrument will take the user to the calibration screen page, where magnetometer data from that instrument will be actively recorded and stored for calibration.

[0157] Turning to FIGs. 43A-43D, the device calibration screen page is shown. The calibration screen page displays the three steps 602 of the magnetometer calibration process. The steps include orientation about the three axes to obtain magnetometer data plotted on the XY, XZ, and YZ planes. The application displays an animation that guides the user through the steps to properly calibrate the magnetometer for their specific sensor. In particular, the user is instructed by the calibration screen to rotate the instrument. Once the application has collected a sufficient amount of magnetometer data, based on the number of points plotted on a plane in a given number of quadrants, the magnetometer data is then stored in the database 630 to be used in the analytics algorithms. The analytics algorithms correct for magnetometer bias due to encountered sources of magnetic field using an ellipsoid fit algorithm. The other sensors are also calibrated at step 600 of the flow chart shown in FIG. 49.

[0158] With reference now to FIGs. 44 and 49, in the next step 650 the type of training module is selected on the module selection screen page using a virtual button. This screen displays available learning modules for the user to select. On the module selection screen 604, a selectable lesson 603, for example, entitled "Total Laparoscopic Hysterectomy" is displayed and includes the title and short description of the learning module. The lesson module screen is populated by querying the SQL database for stored learning modules titles and descriptions. Examples of training modules include the following surgical procedures: laparoscopic cholecystectomy, laparoscopic right colectomy, and total laparoscopic hysterectomy.

[0159] Turning to FIGs. 45 and 49, in the next step 652, after a learning module is selected on the module selection page 604, the module preview screen page 614 that corresponds to the selected learning module is displayed. The module learning objectives 605 and required instruments 606 are included on the screen and displayed to the user. A preview video 607 of the selected module is also embedded into the screen page. Information for each part of the module preview screen is populated by querying the SQL database for the selected module's description, objectives, required devices and preview video. For example, if a laparoscopic cholecystectomy module is selected, the video 607 at step 652 will explain what a laparoscopic cholecystectomy is and its advantages over other non-laparoscopic procedures. The video will provide a brief overview of major anatomical regions involved, and key techniques and skills required to complete the procedure. The required instruments 606 are displayed, for this example, to be four trocars, one grasper, two dissectors, one scissor, and may further include one clip applier and one optional retrieval bag. Step-by-step instructions are provided via the embedded video 607. Examples of other learning modules include laparoscopic right colectomy and total laparoscopic hysterectomy.

[0160] Each practice module is configured to familiarize the practitioner with the steps of the procedure and the relevant anatomy. It also permits the user to practice the surgical technique and strive for proficiency in completing the procedure safely and efficiently. To aid in tracking performance, metrics measuring operative efficiency are also computed and displayed at the end of the procedure.

[0161] Turning to FIGs. 46 and 49, in the next step 654, a demographics questionnaire is presented to the user. Each question and its corresponding set of answers is populated by querying the SQL database 620 for the selected module's survey questions and answers 608. The selected answer is then stored in a SQL database 630. Questions include the user's title, level of experience, number of procedures performed, number of procedures assisted, and the user's dominant hand.

[0162] With reference to FIGs. 47 and 49, in the next step 656, the learning module screen for the selected module is presented to the user. With particular reference to FIG. 47, the left side 609 of the graphical user interface is an embedded video of a live laparoscope image feed of the cavity of the trainer displayed to the user. On the right side, each surgical step 610 of the laparoscopic procedure is sequentially displayed to the user, accompanied by a brief instruction of the surgical step and an embedded instructional video 611 showing an example of a successful performance of the step. Laparoscopic instruments being used during the learning module are shown on the bottom 612 of the live laparoscope image feed. Data from laparoscopic instruments is streamed through serial ports and stored in the SQL database 630.
For example, if a laparoscopic cholecystectomy is selected as the learning module, the surgical steps 610 that are displayed to the user include: (1) First entry: place your first port and survey the abdominal cavity for abnormalities and relevant anatomy; (2) Place ancillary ports: under direct visualization, place your ancillary ports; (3) Retract gallbladder: position the patient; with your grasper, grasp the fundus of the gallbladder and retract cephalad and ipsilaterally to keep the region of the cystic duct, cystic artery and common bile duct exposed; (4) Dissect the Triangle of Calot: with your grasper, grasp the infundibulum of the gallbladder and retract inferolaterally to expose Calot's Triangle, and use your dissector to bluntly dissect the triangle of Calot until the Critical View of Safety is achieved and only two structures are visible entering the gallbladder; (5) Ligate and divide cystic duct and artery: ligate the cystic duct and artery by using your clip applier to place three clips on each structure, two proximally and one distally, and use your scissors to divide the cystic duct and cystic artery; (6) Dissect gallbladder from liver bed: retract the gallbladder in the superolateral direction using your grasper holding the infundibulum and scissors with or without electrosurgery; alternatively, a dedicated energy device may be used to carefully dissect the gallbladder entirely from the liver bed; (7) Specimen Extraction: extract the specimen through one of your ports; and (8) Port Removal: survey the abdominal cavity one last time before removing your ports.

[0163] Turning now to FIGs. 48 and 49, in the next step 658, data that is collected and stored during the learning module from the connected laparoscopic instruments is queried from the database 630 and run through analytics algorithms to output resulting metrics data. Resulting metrics data from the analytics is then displayed to the user using D3 visualizations in a web browser embedded in the feedback screen 613. As shown in FIG. 48, the user's time is displayed together with the average time and an expert's time to complete the module, providing comparative performance analysis to the user. Smoothness of motion and economy of motion are also displayed and compared with the average and expert results. Based on the information collected in the survey at step 654, module results are categorized accordingly as expert or non-expert data. The results are averaged for experts and non-experts and presented as shown.

[0164] Another method for tracking the position of multiple surgical instruments and accessories, including but not limited to graspers, dissectors, scissors, suture needles, needle drivers, energy devices, trocars, etc., with a high degree of precision and accuracy is also provided in the present invention. The method employs machine vision software to track infrared (IR) tags placed on instruments and accessories. The use of an IR tracking system with one or more fixed internal cameras 415 provides highly accurate and repeatable object tracking at a low cost. Data gathered from the present invention can be used to compare an inexperienced user's performance to that of an experienced surgeon and provide effective feedback to the user. Skills gained in this manner, before live surgery, have been shown to improve the skill level of trainees and surgeons.

[0165] Machine vision techniques accurately and repeatedly ascertain the position of multiple surgical instruments and accessories in a simulated surgical environment. Machine vision is a category of image-based sensing technologies that analyze recorded images to ascertain information including position, physical integrity, etc. In one variation, the present invention employs machine vision for position tracking, making use of IR lights, filters, and retro-reflective tape. Infrared light is a type of light having a wavelength greater than 700 nm. IR light is just beyond the visible spectrum and is useful for a variety of image or light-based technologies where an additional visible light source is undesirable and/or ambient lighting may cause problems with obtaining accurate sensor readings. In general, an optical filter is a device that selectively transmits light such that any light passing through the material will only be of a particular wavelength or set of wavelengths. The present invention makes use of IR filters. A retro-reflective material has the unique property of reflecting all light back to its source. This property enables the use of an infrared light source near an image sensor fit with an IR filter to drastically increase contrast between the retro-reflective material and the background. The images produced in this way are utilized by machine vision software to extract desired features such as position. With regard to the image analysis, the invention makes use of machine vision software for analysis of the retro-reflective material placed on desired targets. For IR computer vision applications, it is best to use a gray-scale camera image, as the only information of interest is how much IR light is reflected off a surface. The software identifies the desired portions of an image using thresholding. A threshold filter allows only a specified range of pixel values to be passed through to the analysis part of the software.
This information is then filtered by area. The total area is calculated by adding together the area of each individual pixel in each discrete marker (portion of information passed through the thresholding filter). The area is used to make sure that the detected markers are of the appropriate relative size in the frame of the image. This is used to filter out noise from the background and eliminate false readings when the camera is blocked. Next, the computer classifies each point by several factors: size of marker, location, and pattern. Relevant coordinates are then calculated and recorded.
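The thresholding, area filtering, and centroid extraction described above may be sketched as follows (illustrative Python/NumPy using a simple 4-connected flood fill; the threshold band and area limits are assumed values, not those of the actual system):

```python
import numpy as np

def find_markers(gray, lo=200, hi=255, min_area=3, max_area=50):
    """Locate retro-reflective markers in a gray-scale IR image.

    Pixels inside the threshold band are grouped into 4-connected blobs;
    blobs whose pixel-count area is implausibly small (noise) or large
    (e.g. a blocked camera) are discarded, and each surviving blob's
    centroid (row, col) is returned.
    """
    mask = (gray >= lo) & (gray <= hi)
    labels = np.zeros(mask.shape, int)
    centroids, next_label = [], 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        next_label += 1
        labels[seed] = next_label
        stack, pixels = [seed], []
        while stack:                      # simple flood fill
            r, c = stack.pop()
            pixels.append((r, c))
            for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = next_label
                    stack.append((rr, cc))
        if min_area <= len(pixels) <= max_area:
            centroids.append(tuple(np.mean(pixels, axis=0)))
    return centroids

img = np.zeros((20, 20), np.uint8)
img[2:4, 2:4] = 255       # 4-pixel marker: kept
img[10, 10] = 255         # single hot pixel: rejected as noise
img[12:20, 12:20] = 255   # 64-pixel blob: rejected as too large
markers = find_markers(img)
```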

[0166] With reference to FIG. 50, a surgical training device 10 for laparoscopic procedures is depicted. The device 10 allows a trainee to practice intricate surgical maneuvers in an environment that is safe and inexpensive. The surgical training device 10 generally comprises an enclosure 11 including an illuminated environment that can be accessed with surgical instruments 216, 217 through surgical access devices commonly referred to as trocars 212, 213. The enclosure 11 is sized and configured to replicate a surgical environment, for instance an insufflated abdominal cavity containing simulated organs 214. The enclosure 11 usually incorporates a camera 215 and video monitor. In one variation, the present invention makes use of IR position tracking to allow for low-cost, reliable monitoring of user performance in a simulated surgical environment. The enclosure 11 is flooded with IR light from one or more IR light sources 718, such as an IR LED ring encompassing the camera. Cameras 215 are modified to incorporate an IR pass-through filter. At least one of the laparoscopic instruments 216, 217, access devices such as the trocars 212, 213 and artificial organs 214 used in the mock procedure is fit with one or more appropriately sized retro-reflective markers 723, 724 and 725, 726, respectively. FIG. 51 illustrates the image captured by two of the internal cameras 215 of surgical instruments 216, 217 with markers 723, 724, respectively, inside the enclosure 11, or alternatively, of the same surgical instrument 216 with markers 723 by two different cameras to provide paired frames of the target markers 723 in 3D space. With the aid of the IR filter, the image captured by camera 215 effectively shows the position of the distal ends of the instruments 216, 217 by way of the IR reflective markers 723, 724. These isolated markers 723, 724 are then analyzed using an algorithm to obtain the coordinates of each marker.
The IR light source 718 takes the form of a ring of IR LEDs positioned around the lens of the camera 215. Because the retro-reflective tags reflect light back toward its source, each camera needs to be very close to an IR source in order to see the reflection. IR LEDs not near a camera would not highlight the tags and would only increase the ambient lighting inside the trainer, which would hinder the tracking of the instruments because there would be less contrast between the tags and the background. Various configurations of the retro-reflective markers can be tracked. In combination with appropriate machine vision software, the design and position of these markers can be used to acquire information useful for assessing surgical technique performance, such as depth of insertion, absolute 3D location, heading, rotation and the like. Also, with the camera 215 in a fixed position, the collected data is highly accurate, precise, and repeatable. An ideal embodiment makes use of retro-reflective markers specifically placed in three bands near the distal tip of the tool, as can be seen in FIG. 51. This provides a unique pattern that can be distinguished from the random noise background. Three cameras 215, each with an IR light source, are shown in FIG. 50. The calculation required to find the three-dimensional location of a point in space using at least two cameras is employed. The calculation includes the step of point correspondence matching and the step of three-dimensional (3D) reconstruction. In order for the algorithm to estimate the 3D location of a marker, the marker must be in view of at least two cameras. If there is more than one marker in view of at least two cameras, then the algorithm needs to match each marker in the first camera to the same marker as seen in the second camera. This problem of point correspondence can be solved using epipolar geometry, which requires exact knowledge of where the cameras are placed relative to each other.
Standard methods of camera calibration can be used to find this information in the form of the fundamental matrix. By using this fundamental matrix, it is possible to estimate where each marker should appear in the other camera image. By doing this for all the markers in both camera views, it is possible to match them based on which are closest to the estimated locations. The second stage of the algorithm uses a pair of two-dimensional (2D) locations to project them to a single location in 3D space. As illustrated in the schematic of FIG. 52, a ray can be traced from the camera center 758, through the image plane 759, and to the real 3D point 760 in space for both cameras 215a, 215b. As there will always be error in the reading of the marker location, these rays will not intersect exactly. This can be resolved by finding the point that is closest to both rays using a least-squares linear algebra solution.
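The closest-point-to-two-rays step has a standard closed-form least-squares solution, sketched below. The function name and test geometry are illustrative, not taken from the patent; each ray is the line p(t) = c + t*d from a camera center c through the back-projected marker direction d:

```python
def triangulate(c1, d1, c2, d2):
    """Least-squares closest point to two rays p(t) = c + t*d: solve the
    two normal equations for t1, t2 in closed form, then return the
    midpoint of the two closest points (standard 3D reconstruction)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    r = [p - q for p, q in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b            # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [ci + t1 * di for ci, di in zip(c1, d1)]
    p2 = [ci + t2 * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]

# Two cameras looking at a marker at (5, 5, 5); these exact rays
# intersect, while noisy rays would yield the closest-approach midpoint.
point = triangulate((0, 0, 0), (5, 5, 5), (10, 0, 0), (-5, 5, 5))
```

With noisy marker readings the two rays become skew, and the returned midpoint is exactly the "point closest to both rays" the text describes.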

[0167] Various types of retro-reflective materials may be used as markers in the present invention. The retro-reflective material may be powder coated onto the instrument/trocar, such as onto the instrument shaft. Retro-reflective material may also be provided as a spray paint and sprayed onto the instrument. Also, retro-reflective tape may be employed and applied as markers to the instruments. The tape can be applied to the instruments in a multitude of ways that allow for various positions and designs to distinguish instruments from one another. The retro-reflective tape may be applied to any instrument/trocar regardless of manufacturer. Instruments that can be used with computer vision include, but are not limited to, laparoscopic graspers, scopes, dissectors, scissors and needle drivers. FIG. 53 illustrates retro-reflective tape 770 applied as a marker 723 to a shaft of an instrument 216. The circumferential location on the instrument shaft where the tape is to be applied can be recessed so that the upper surface of the tape is flush with the outer surface of the instrument shaft. In this manner, the tape does not get caught in the trocar port seals as easily. In one variation, the retro-reflective material is a retro-reflective fabric attached to the shaft using an adhesive.

[0168] Generally, instruments that have a smooth surface finish are prone to reflecting infrared light and ambient light passed through the trocar, which may be sensed by the IR camera. In order to mitigate unwanted reflections and prevent the reflections from being mistaken for a retro-reflective marker 723, black oxidation is used to blacken a shiny stainless steel shaft and reduce the unwanted reflections. In another variation, chalkboard spray paint is applied to the instrument shafts to minimize the unwanted reflections. In another variation, the temperature and speed at which the dielectric sleeve is extruded are adjusted to dull the surface finish of the sleeve.
The sleeve is then placed over the shaft of the instrument to cover the stainless steel. In another variation, the stainless steel shaft is tumbled in any one or more mediums to reduce the reflections. In another variation, the stainless steel shaft of the instrument is blasted with beads to dull the finish and reduce glare. In another variation, the shaft is wrapped or spray painted with a material having a matte finish. Any of the above-mentioned methods may be combined to reduce unwanted reflections.

[0169] In order to provide objective performance feedback to the user, motion and force data is collected and analyzed to track user activity as shown in FIG. 49. In particular, with continued reference to FIG. 49, during a learning module 656, the software application connects to two IR cameras 215 and streams each camera's frame paired with the other to the IR tracking software. The IR tracking software then analyzes the paired frames and returns a list of detected instruments and their locations. For each detected instrument, the location information returned includes the x, y, z coordinates of the instrument location, the x, y, z direction vector, and the instrument identification associated with the calculated entry point (i.e., pivot point, port point) of the instrument in the simulated abdominal wall 141. The instrument location data is stored in the application's SQLite database 630 to be used in the analytics software and provided as module feedback 658, alone or together with the instrument's inertial measurement unit (IMU) data.
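A minimal sketch of how per-frame tracking output might be persisted to SQLite for later analytics. The table layout and column names here are assumptions invented for illustration, not the application's actual schema for database 630:

```python
import sqlite3

# In-memory stand-in for the application's SQLite store; the table and
# column names are illustrative assumptions, not the real schema.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE instrument_track (
                  ts            REAL,   -- seconds since module start
                  instrument_id TEXT,   -- from the marker ID pattern
                  x REAL, y REAL, z REAL,      -- tip location
                  dx REAL, dy REAL, dz REAL)   -- direction unit vector
           """)
db.executemany(
    "INSERT INTO instrument_track VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    [(0.00, "grasper", 1.0, 2.0, 3.0, 0.0, 0.0, -1.0),
     (0.03, "grasper", 1.1, 2.0, 2.9, 0.0, 0.0, -1.0)])

# The analytics stage can then replay one instrument's path in order.
path = db.execute(
    "SELECT x, y, z FROM instrument_track "
    "WHERE instrument_id = ? ORDER BY ts", ("grasper",)).fetchall()
```

Storing a timestamped row per detection is what lets the feedback module later compute per-instrument metrics and merge in IMU data on matching timestamps.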

[0170] The surgical training device 10 of the present invention employs both computer vision and IMU motion tracking to track the motion and position of laparoscopic instruments present in the enclosure 11. Computer vision is a reliable way to track objects in 3D space. In order to track the 3D position of the instrument tips, two cameras 215 are necessary to triangulate the position of a marked object. In this invention, IR cameras 215 with an IR light source 718 are used. The cameras are positioned approximately 90 degrees from each other to improve the triangulation calculation. The cameras 215 rely on the reflection of IR light from the retro-reflective markers 723-726 that are attached near the tip of each instrument as markers to track the positions of the instruments. In order to ensure continuous tracking of instrument tip position, the retro-reflective material on the instrument has to be in view of both cameras 215 at all times. Unfortunately, obstruction of the view of the instrument tips from any one of the cameras 215 is likely to occur during simulated procedures within the body form. This can occur in any one of a number of ways. For example, the instrument tip can be blocked by a simulated organ during a simulated procedure, the instrument can be pointed directly along its axis toward the camera 215 such that the retro-reflective material of the markers cannot be seen, or the instrument tip of one instrument can cross over another. When this occurs, the computer vision position tracking algorithm will not be able to accurately estimate the position of the instruments. In such cases, the IMU sensor together with computer algorithms permits calculation of the instrument positions even when the markers on the instruments are obstructed. The IMU sensor attached to the instrument is in itself a collection of three separate sensors. It contains a 3-axis accelerometer, a 3-axis magnetometer, and a 3-axis gyroscope.
Together, they form the 9-DOF IMU sensor 244 employed in the sensorized instruments. Through sensor fusion algorithms, the IMU sensor is capable of continuously and accurately estimating the orientation of the instrument. Although the IMU is incapable of 3D position tracking, it does not suffer from blind spots, and can, therefore, reliably and consistently provide orientation information for each instrument over time. The position of the instrument tip at the beginning of an obstruction, and when the instrument emerges back into view again, will be known from the image data. To fill in the blanks for position during the time of obstruction, the orientation information from the IMU sensors can be used to interpolate and estimate the missing data. This information is used in combination with a predictive algorithm called the Extended Kalman Filter. The Kalman Filter is an algorithm that is used to estimate unknown variables from a series of measurements observed over time. It takes into account the uncertainty of collected data arising from measurement error, noise, and other sources of error, and iteratively uses Bayesian inference to calculate the most statistically plausible estimate of a data point.

[0171] A Kalman model must first be built to represent the linear system the filter is trying to estimate; in this invention, that is the system of linear equations of motion. Starting from an initial state, the Kalman Filter calculates a predicted state and then compares it to the measured state. Once error is taken into account, the algorithm statistically decides how much to trust the predicted value and how much to trust the measured value by assigning each of them weights. This new value becomes the new previous value, and the filter iterates in this manner. The Extended Kalman Filter is at its core the same as the Kalman Filter, except that the algorithm is modeled for non-linear systems. Since the regular Kalman Filter only works for linear systems, the Extended Kalman Filter circumvents this limitation by linearizing the non-linear equations. In this invention, the motion produced by the tools may not necessarily be linear; therefore, the Extended Kalman Filter is used. In this invention, the Kalman Filter provides estimates for missing windows of data. The estimated data can then be used as a reference to match the beginning and end points of the missing data window, combined with IMU data through regression analysis.
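The predict/weigh/update cycle described above can be illustrated with a scalar linear Kalman filter. The Extended variant linearizes a non-linear motion model around the current estimate, but the gain computation is analogous; the noise constants and measurement values below are invented for the demo:

```python
def kalman_1d(measurements, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter: each step predicts, then blends the
    prediction and the measurement with a gain set by their relative
    uncertainties (q: process noise, r: measurement noise)."""
    x, p = x0, p0
    for z in measurements:
        p += q                 # predict: process noise grows uncertainty
        k = p / (p + r)        # gain: how much to trust the measurement
        x += k * (z - x)       # weighted correction toward measurement
        p *= 1 - k             # uncertainty shrinks after the update
    return x

# Noisy readings of a quantity whose true value is about 5; the filter
# converges toward it while smoothing the measurement noise.
estimate = kalman_1d([4.9, 5.2, 5.05, 4.8, 5.1])
```

Each pass through the loop is one "predicted state vs. measured state" comparison from the paragraph above, with the gain k playing the role of the assigned weights.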

[0172] With reference back to FIG. 50, the relative attachment point of the markers, along with a variety of programming methods, may be used to assist with situations where a marker is blocked by artificial organs or other obstacles. For instance, a plurality of cameras 215 can be used to minimize the chance of one instrument blocking another at any angle. In a further embodiment, the velocity of each marker is taken into account to predict where it should be even when blocked. With reference to FIGs. 54A-54D, the number, size, pattern, and/or position of the retro-reflective markers 723, 724 on each instrument 216, 217, respectively, may be altered to produce a unique identification (ID) for each instrument/accessory. The ID marker 723, 724 may consist of, but is not limited to, several circumferential lines of retro-reflective material about the distal end of the instrument forming a bar-code or other pattern, or markers of different sizes, thicknesses or quantities of circumferential lines whose relative size/distribution along the longitudinal axis of the instrument is used to distinguish between instruments 216, 217. An identification may be established on the basis of the total number of circumferential lines in a marker on an instrument or the relative distance between a certain number of markers on each instrument. Further embodiments may even involve a specified number of markers of a given size spaced a specified distance from one another. In this manner, countless ID tags may be established such that appropriate software can identify what type of instrument is being utilized in addition to what port it is being inserted through.
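One way a band-count-and-spacing ID scheme like the one described above could be decoded in software. The signature format and the registry contents are hypothetical, chosen only to show that matching on gap ratios makes the ID independent of how deep the instrument is inserted:

```python
def band_signature(band_positions):
    """Reduce detected band centers along the shaft to a scale-invariant
    signature: the band count plus the ratio of each gap to the total."""
    pos = sorted(band_positions)
    gaps = [b - a for a, b in zip(pos, pos[1:])]
    total = sum(gaps)
    return (len(pos), tuple(round(g / total, 2) for g in gaps))

# Hypothetical registry: same band count, different relative spacing.
REGISTRY = {(3, (0.5, 0.5)): "grasper",
            (3, (0.33, 0.67)): "scissors"}

def identify(band_positions):
    return REGISTRY.get(band_signature(band_positions), "unknown")
```

Because only relative spacing is used, the same instrument produces the same signature whether its bands appear near or far in the image.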

[0173] The IMU has proven to be a very powerful tool for motion tracking due to its ability to precisely estimate the orientation of an object. The IMU provides orientation information about the movements of users, and it also serves as useful training data for gesture recognition. An IMU tracks motion by tracking orientation; however, there are instances in which knowing the actual relative position of the object is useful. The sensors in an IMU work together through sensor fusion to mitigate the effects of sensor noise while also strengthening the accuracy of orientation estimation. One way to track position is to employ only the accelerometers in the IMU. Although, mathematically speaking, taking the second integral of acceleration will provide the position of an object, accelerometers are inherently noisy sensors, so errors will accumulate quickly and the estimated position will drift away from the true position within seconds.
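The drift problem from double-integrating accelerometer output can be demonstrated numerically. The bias value and time step below are arbitrary; the point is only that a constant sensor error turns into a position error that grows roughly quadratically with time, as 0.5·bias·t²:

```python
def drift_error(bias, dt, t_end):
    """Double-integrate the output of a stationary accelerometer whose
    only signal is a constant bias; the result is pure position error."""
    v = p = 0.0
    for _ in range(round(t_end / dt)):
        v += bias * dt     # integrate acceleration -> velocity
        p += v * dt        # integrate velocity -> position
    return p

err_1s = drift_error(bias=0.05, dt=0.01, t_end=1.0)   # ~0.025 m
err_2s = drift_error(bias=0.05, dt=0.01, t_end=2.0)   # ~0.10 m, ~4x
```

Doubling the elapsed time roughly quadruples the drift, which is why accelerometer-only position tracking fails within seconds.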

[0174] One way to attain absolute position is through computer vision. This is the main purpose the IR cameras are adopted to serve in the present invention. Not only does computer vision provide absolute position information with high accuracy, it is also capable of estimating the orientation of the tools of interest. However, it cannot completely replace the IMU in this application, as the IR computer vision system also has its limitations. IR computer vision is able to identify the number of tools in view, but it is not able to distinguish one from another. For example, if a tool were to be removed from the body-form cavity, or be briefly obscured from the camera, and quickly reappear within view again, this time perhaps at a different location, the system would have no way of knowing that it is the same tool. Secondly, there is a small, but nonetheless critical, chance that the tool will be obscured from camera view for more than just a moment, which may cause significant data loss. This can occur either when the tool is in a blind spot of the camera, or when it is interacting with another object in such a way that the object or another tool blocks the tool from the camera view. The loss of continuity of the data can weaken performance feedback and even produce unfair assessments between training sessions, or even between users. However, advantageously, the IMU data is employed in the present invention to supplement data lost during these situations, as well as to assist the computer vision system in distinguishing one tool from another through inference.

[0175] Since the system of the present invention is intended to be a learning tool for users to track their learning progress, the quality and accuracy of feedback are essential to meeting those goals. The enhanced ability to track the position of tools has major implications for the quality of feedback that can be provided. Two major high-level metrics being calculated are smoothness and path length, and both of these metrics rely on estimated position for accurate results. With the IMU alone, the calculation of these metrics can only rely on estimations of movement trajectory across a spherical surface with a fixed pivot, which is not reflective of the true nature of the tool's motions. With the introduction of computer vision, the true smoothness and path length can be easily estimated and provided in the training performance feedback to the user. This improvement in positional information advantageously also helps to create quality training data for gesture recognition, leading to overall higher feedback quality and accuracy.
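Path length falls out directly from the recovered 3D tip positions; for smoothness, the economy-of-motion ratio below is one common summary used in laparoscopic skills assessment, not necessarily the exact metric this system computes:

```python
import math

def path_length(points):
    """Total distance traveled: sum of consecutive segment lengths."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def economy_of_motion(points):
    """Straight-line distance over path length; 1.0 means a perfectly
    direct move, smaller values mean wasted motion."""
    total = path_length(points)
    return math.dist(points[0], points[-1]) / total if total else 1.0

# A direct move versus a detour between the same two endpoints.
direct = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
zigzag = [(0, 0, 0), (1, 1, 0), (2, 0, 0)]
```

With pivot-only IMU tracking these quantities could only be approximated on a sphere around the port; with triangulated positions they are computed on the actual tip trajectory.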

[0176] In another variation, a birefringent tracking method is employed. This method includes a light source and a camera that are both cross-polarized relative to each other. For example, a 0-degree linear polarizer is placed in front of a diffuse light source and a 90-degree linear polarizer is placed in front of the camera. This is done in order to block out nearly all of the detectable light from the light source. A birefringent material such as a polycarbonate film or wave retarder may then be introduced between the two polarizers. The birefringent material may be located on an instrument shaft. Incident polarized light may experience different refractive indexes throughout the birefringent material resulting in a rotation of the polarization plane, thereby, allowing light to pass through the 90-degree linear polarizer. This set-up is an effective way to optically filter and isolate birefringent markers placed along or within the instrument that is to be tracked. The light source may be of any wavelength. An additional band pass or notch filter may be placed in front of or behind the 90-degree linear polarizer in order to filter stray ambient light. The birefringent material may consist of any material that is transmissive to the wavelength of interest and exhibits anisotropic properties. This may include but is not limited to polycarbonate, acrylic, quartz, polystyrene, polypropylene, wave retarders, calcite, or any other birefringent material. The light source may be of any type including fluorescent, incandescent, laser or light emitting diode. Ideally, the light source has a narrowband wavelength emission. The polarizer may be a wire grid or coated film, glass or substrate. The camera may be a CCD, CMOS or micro bolometer array.
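The cross-polarized geometry described above has a standard Jones-calculus transmission formula that makes the mechanism concrete: with no retarder essentially nothing passes the crossed polarizers, while a half-wave retarder at 45 degrees passes everything. This is textbook polarization optics for an ideal retarder, not a formula taken from the patent:

```python
import math

def crossed_polarizer_transmission(theta, retardance):
    """Fractional intensity through polarizer -> wave retarder ->
    crossed polarizer, with the retarder's fast axis at angle theta
    (radians) to the first polarizer:
    T = sin^2(2*theta) * sin^2(retardance / 2)."""
    return math.sin(2 * theta) ** 2 * math.sin(retardance / 2) ** 2

dark = crossed_polarizer_transmission(0.3, 0.0)        # no birefringence
bright = crossed_polarizer_transmission(math.pi / 4,   # half-wave marker
                                        math.pi)       # at 45 degrees
```

This is why a birefringent marker on the shaft lights up against an otherwise dark background: only light whose polarization plane has been rotated by the marker reaches the camera.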

[0177] The cameras used for IR instrument tracking can be placed anywhere inside the box trainer. The effectiveness of the cameras depends upon how much of the interior cavity of the trainer the cameras can view. Cameras can be placed in the corners looking inward, or at the top looking downward. Having a camera placed on the bottom of the box looking up is less desirable, as its view would be blocked by the organ tray. Depending on how these cameras detect the instruments and the size and shape of the organs inside the cavity, there may be additional limitations on their locations. In one variation, infrared LEDs are mounted on the cameras. The light emitted from these LEDs is reflected off the retro-reflective tag, such as retro-reflective tape on the instruments, and returns to the cameras. Each camera also has an infrared filter that blocks all other wavelengths of light. If the cameras are pointed at each other, their LEDs shine directly into the opposite camera, creating a large bright spot that effectively blinds the camera to any instrument activity in the area. Therefore, the cameras are arranged such that they are not facing each other, such as by placing both cameras in adjacent corners on one side of the trainer facing toward another side of the trainer. However, other methods can be employed to alleviate this limitation. For example, in one variation, the cameras are equipped with infrared polarizing filters that filter out all infrared light that did not originate from the camera itself, thus allowing the cameras to be in view of each other without being blinded. The same effect can be achieved by using filters of different wavelengths for different cameras. An example of this would be to allow only one camera to see light in the 700 nm range, while the other would only see light in the 800 nm range. Another, more complicated, method would be to flash the IR LEDs only when the associated camera is recording its next frame of video.
When timed correctly, the LEDs of one camera would be turned off while the other camera had its own LEDs turned on. This method would require synchronizing the cameras to each other. In another variation, the IR LEDs may be provided on the shaft of the instrument, preferably at or near the distal end. These LEDs could be flashed at a specific frequency unique to each instrument, with the cameras detecting the location of the flashing LED, in order to distinguish the instruments from one another and to make each instrument readily identifiable.
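Frequency-tagged LED identification, as described above, could be decoded from a per-frame on/off stream. The blink rates, frame rate, registry, and tolerance below are invented for illustration (counting off-to-on transitions slightly undercounts at the window edges, which a matching tolerance absorbs):

```python
def blink_rate(samples, fps):
    """Estimate flash frequency from a per-frame on/off stream by
    counting off-to-on transitions (one per blink cycle)."""
    rises = sum(1 for a, b in zip(samples, samples[1:]) if not a and b)
    return rises / (len(samples) / fps)

def identify_by_rate(samples, fps, table, tol=1.5):
    """Match the measured rate against a registry of assigned rates."""
    rate = blink_rate(samples, fps)
    for name, hz in table.items():
        if abs(rate - hz) <= tol:
            return name
    return "unknown"

TABLE = {"grasper": 14.0, "scissors": 4.0}   # hypothetical assignment
fast = [1, 0] * 15             # 1 s at 30 fps, toggling every frame
slow = [1, 1, 1, 0, 0, 0] * 5  # same duration, slower flash
```

Since each camera already localizes the bright spot per frame, tagging instruments by blink rate lets the same image stream carry both position and identity.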

[0178] It is understood that various modifications may be made to the embodiments disclosed herein. Therefore, the above description should not be construed as limiting, but merely as exemplifications of preferred embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the present disclosure.

Claims

We claim:
1. A system for surgical training, comprising:
a simulated surgical environment defining an interior cavity between a top and a base;
at least two cameras positioned inside the simulated surgical environment;
at least two infrared light sources positioned inside the simulated surgical environment;
at least one surgical instrument having an elongate shaft extending between a tip at a distal end and a handle at a proximal end of the instrument; the distal end of the instrument including at least one retroreflector; and
a computer processor configured to receive image data from the at least two cameras and provide position data for the at least one instrument.
2. An instrument for surgical training, comprising:
a handle at a proximal end;
an instrument tip at a distal end;
an elongate shaft extending between the handle and the tip;
at least one retroreflective marker located circumferentially around the elongate shaft near the distal end; and
an inertial measurement unit located on the handle.
3. A method for surgical training comprising the steps of:
providing a simulated surgical environment defining an interior cavity between a top and a base; at least two cameras positioned inside the simulated surgical environment; at least two infrared light sources positioned inside the simulated surgical environment; at least one surgical instrument having an elongate shaft extending between a tip at a distal end and a handle at a proximal end of the instrument; the distal end of the instrument including at least one retroreflector; and a computer processor configured to receive image data from the at least two cameras and output position data for the at least one instrument;
inserting the distal end of the surgical instrument into the simulated surgical environment through a port in the top;
exposing the at least one retroreflector to infrared light from the at least two infrared light sources;
capturing, with the at least two cameras, infrared light reflected by the at least one retroreflector;
manipulating the surgical instrument about the port inside the interior cavity; and calculating the position of the distal end of the surgical instrument over time.
4. The system of any one of the previous claims wherein the image data comprises grey scale image data from the at least two cameras.
5. The system of any one of the previous claims wherein the position data comprises coordinates for the location of the tip of the instrument, a unit vector pointing along a longitudinal axis of the elongate shaft and toward the distal end, and/or a timestamp.
6. The system of any one of the previous claims wherein the processor includes software comprising a triangulating algorithm employing the image data received from the at least two cameras to calculate the position data.
7. The system of any one of the previous claims wherein the position data over time is displayed on a computer monitor.
8. The system of any one of the previous claims wherein each infrared light source is positioned adjacent to a camera and directed toward the interior of the cavity.
9. The system of any one of the previous claims wherein each infrared light source comprises a ring of light emitting diodes positioned around the camera and directed toward the interior cavity.
10. The system of any one of the previous claims wherein the instrument includes an inertial measurement unit comprising an accelerometer, magnetometer and gyroscope; the unit being located on the handle of the instrument.
11. The system of any one of the previous claims wherein the computer processor is configured to receive data from the inertial measurement unit and the software is configured to use data from the inertial measurement unit to calculate gap position data for the at least one instrument; gap position data having a start point and an end point calculated from the image data.
12. The system of any one of the previous claims wherein the instrument is selected from a group of laparoscopic instruments consisting of a grasper, scissors, energy device, needle driver and dissector.
13. The instrument of any one of the previous claims wherein the shaft includes an outer surface and at least one circumferential recess in the outer surface; the at least one retroreflective marker being located inside the recess such that the retroreflective marker is flush with the outer surface.
14. The instrument of any one of the previous claims wherein the retroreflective marker is a retroreflective tape, spray or fabric.
15. The instrument of any one of the previous claims wherein the at least one retroreflective marker comprises a pattern of retroreflective material located circumferentially around the shaft and along its length for identifying the instrument.
16. The instrument of any one of the previous claims wherein the shaft surrounding the at least one retroreflective marker has an outer surface treated to reduce reflection of light.
17. The instrument of any one of the previous claims wherein the inertial measurement unit includes an accelerometer, a magnetometer, and a gyroscope.
18. The method of any one of the previous claims wherein the step of calculating the position of the distal end of the surgical instrument includes using data from images captured by the at least two cameras.
19. The method of any one of the previous claims further including the steps of: providing an inertial measurement unit on the handle of the surgical instrument; wherein the step of calculating the position of the distal end of the surgical instrument includes the step of calculating the position of the distal end of the surgical instrument using data from the inertial measurement unit for gaps in data from images captured by the at least two cameras.
20. The method of any one of the previous claims further including the step of providing feedback in the form of a plot of position over time.
PCT/US2018/034705 2017-05-25 2018-05-25 Laparoscopic training system WO2018218175A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US201762511246P true 2017-05-25 2017-05-25
US62/511,246 2017-05-25

Publications (1)

Publication Number Publication Date
WO2018218175A1 true WO2018218175A1 (en) 2018-11-29

Family

ID=62621059

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/034705 WO2018218175A1 (en) 2017-05-25 2018-05-25 Laparoscopic training system

Country Status (1)

Country Link
WO (1) WO2018218175A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD866661S1 (en) * 2017-10-20 2019-11-12 American Association of Gynecological Laparoscopists, Inc. Training device assembly for minimally invasive medical procedures

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008103383A1 (en) * 2007-02-20 2008-08-28 Gildenberg Philip L Videotactic and audiotactic assisted surgical methods and procedures
US20100248200A1 (en) * 2008-09-26 2010-09-30 Ladak Hanif M System, Method and Computer Program for Virtual Reality Simulation for Medical Procedure Skills Training
US20110020779A1 (en) * 2005-04-25 2011-01-27 University Of Washington Skill evaluation using spherical motion mechanism
WO2011127379A2 (en) * 2010-04-09 2011-10-13 University Of Florida Research Foundation Inc. Interactive mixed reality system and uses thereof
US8764452B2 (en) 2010-10-01 2014-07-01 Applied Medical Resources Corporation Portable laparoscopic trainer
WO2014197793A1 (en) * 2013-06-06 2014-12-11 The Board Of Regents Of The University Of Nebraska Camera aided simulator for minimally invasive surgical training
US20160125762A1 (en) * 2014-11-05 2016-05-05 Illinois Tool Works Inc. System and method for welding system clamp assembly




Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18731688

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE