WO2023034123A1 - Ultrasonic sensor superposition

Ultrasonic sensor superposition

Info

Publication number: WO2023034123A1
Authority: WO (WIPO/PCT)
Prior art keywords: images, landmarks, ultrasound, image, surgical
Application number: PCT/US2022/041572
Other languages: French (fr)
Inventors: Michael G. Fourkas, Rachel R. Spykerman, Ruchi Dana, Fred R. Seddiqui
Original assignee: Duluth Medical Technologies, Inc.
Application filed by: Duluth Medical Technologies, Inc.
Publication of WO2023034123A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B90/361 Image-producing devices, e.g. surgical cameras
    • A61B90/37 Surgical systems with images on a monitor during operation
    • A61B90/50 Supports for surgical instruments, e.g. articulated arms
    • A61B17/00 Surgical instruments, devices or methods, e.g. tourniquets
    • A61B2017/00017 Electrical control of surgical instruments
    • A61B2017/00207 Electrical control of surgical instruments with hand gesture control or hand gesture recognition
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101 Computer-aided simulation of surgical operations
    • A61B2034/105 Modelling of the patient, e.g. for ligaments or bones
    • A61B2090/364 Correlation of different images or relation of image positions in respect to the body
    • A61B2090/365 Correlation of different images or relation of image positions in respect to the body: augmented reality, i.e. correlating a live optical image with another image
    • A61B2090/378 Surgical systems with images on a monitor during operation using ultrasound
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803 Fusion of input or preprocessed data
    • G06V10/82 Arrangements for image or video recognition or understanding using neural networks
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/647 Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Definitions

  • the apparatuses and methods described herein relate generally to ultrasonic imaging tools and systems for facilitating robotic surgery and minimally invasive surgical procedures.
  • a robotic device includes one or more movable arms that may carry various attachments at their distal ends.
  • the distal end of the robotic arm can be moved very swiftly and with a very high degree of precision.
  • the surgical arm can have many of the needed instruments attached.
  • a surgical instrument can be attached as one such attachment. These attachments are used at the surgeon’s discretion.
  • the robotic arm is highly mobile, and it can also be adjusted by the surgeon to best position an instrument at the surgical site in order to do the operation effectively.
  • A robotic surgery device provides significant support to the surgeon during a surgical procedure.
  • the instruments can rotate through several full turns, far more than a human wrist can.
  • the instrument motion can be scaled down to remove a natural tremor or to go in extremely slow motion if desired.
  • the instruments can be paused in any position or action to provide a steady, unmoving point of stability or retraction.
  • there are also physical health benefits to the surgeon while conducting the operation in comparison to conventional surgical procedures. Without the support of robotic arms, surgery can be physically demanding and can lead to neck, shoulder, or back problems for the surgeon. Additionally, the smaller incision compared to a conventional procedure noticeably results in faster recovery.
  • Some surgical robotic systems also use surgical navigation systems, which help the surgeon understand the location of underlying structures in the patient’s body.
  • with a surgical navigation system, a surgeon can apply an instrument to exactly the place at the surgical site where it needs to be applied. Oftentimes the patient’s body is moved during surgery. Even when the patient’s body is moved, the surgical navigation system may help determine the new location of the surgical site and can direct the robotic arms to the same anatomical location relative to the changed position of the patient’s body.
  • Robots can function autonomously or semi-autonomously.
  • “Semi-autonomous” robots are robots that operate along a predefined path. For example, if a tumor or tissue must be extracted from the surgical site by cutting along pre-defined boundaries, a semi-autonomous robot can be programmed to remove the tissue by cutting along those boundaries.
  • with semi-autonomous robots, there is often a switch that must be continuously pressed in order for the robot to function.
  • the surgeon can use a foot or a hand to continuously press the switch for the robot to operate.
  • when the switch is released, the robot stops functioning completely. In autonomous robots, once started, the robot performs the entire procedure with no input from the surgeon; the autonomous robot is essentially performing on its own.
  • the robotic system could combine autonomous, semi-autonomous, and even manual functions.
  • a semi-autonomous robot function can be to remove the non-healthy bone and to cut the surface of the bone in order to prepare a healthy, smooth bone surface, so that the implant can be positioned on top. This minimizes healthy bone loss.
  • the surgeon can manually switch off the semi-autonomous robot and move the robotic arm and place it back into the position desired. Once the occlusion is removed, the semi-autonomous robot can be turned back on.
  • This example of a combination of semi-autonomous and manual robots may be widely used in many surgeries.
  • some robotic systems also use pins and indexing arrays that are attached and inserted into remote portions of the healthy bone or healthy tissue.
  • these pins and arrays are used to send signals back to the robot.
  • the robot will be able to re-index itself back to the patient surgical site.
  • the use of these indexes and pin arrays may be detrimental to the patient’s health as these pins are inserted into healthy bone and healthy tissues.
  • when pins and arrays are used, they should be limited to the portion of the tissue or bone that is supposed to be removed during the surgery.
  • the system requires proper guidance and proper vision during surgery so that the robot is able to re-index itself.
  • it is desirable for robotic systems to have improved sensing technology to provide a better picture of the surgical site.
  • Described herein are systems (including apparatuses and devices) and methods that can address these and other needs related to sensing technologies in robotic surgeries, as well as in other surgical procedures.
  • the present disclosure relates to surgical methods and devices (apparatuses and systems) that combine ultrasound sensing with other sensing techniques.
  • the ultrasound sensor data and other sensing data can be superposed to provide more accurate information regarding location and other characteristics of anatomical structures, implants, or surgical devices within the body, for example, during a surgical procedure.
  • Ultrasound sensors and other imaging sensors may be positioned at different perspectives with respect to one or more anatomical features within an area of interest of a patient’s body to provide location information in three-dimensions (3D).
  • the other sensing techniques may include other imaging modalities, such as optical, radiography and/or other imaging modalities.
  • the ultrasound transducer system may include multiple transducers to acquire data from different fixed locations around the area of interest.
  • the transducers may be secured to the patient’s body using one or more mechanical fixtures, such as a strap, belt, or adhesive.
  • one or more moveable transducers (e.g., a handheld transducer) may be moved around the area of interest to provide the different perspectives.
  • the systems and methods may be well suited for use in robotic surgery.
  • the systems may be integrated into a surgical robotic system or may be a stand-alone system used in conjunction with a surgical robotic system.
  • the systems may employ artificial intelligence (AI) techniques, including AI-assisted robotic techniques.
  • These apparatuses may provide navigation and/or treatment planning using an AI agent to identify landmarks, in particular, anatomical landmarks, to navigate the robotic device, or to assist in navigation with the robotic device.
  • the AI-assisted image guidance may be well suited for use in any of a number of types of robotic surgeries, such as orthopedic surgeries, surgical laparoscopic removal of ovarian cysts, treatments of endometriosis, removal of ectopic pregnancy, hysterectomies, prostatectomies, and other types of surgeries.
  • A method of operating a robotic surgical device may comprise: detecting, using artificial intelligence (AI), one or more landmarks within an image of a first field of view of a camera associated with the robotic surgical device and within an image of a second field of view that overlaps with the first field of view; and determining the position of a robotic arm or a device held in the robotic arm by triangulating between the image of the first field of view and the image of the second field of view.
  • Any of these methods may include detecting pathological tissues from the first field of view and displaying an image of the first field of view in which the pathological tissues are marked.
  • Detecting pathological tissues may include determining from a pre-scan of the patient (e.g., from a plurality of CT and MRI scans) a classification and location of pathological tissues on a 3D model.
  • detecting one or more landmarks may include receiving CT and MRI scans to create a patient specific 3D model of a joint.
  • Any of these methods may include creating two separate neural networks to identify the one or more landmarks within the images of the first and second fields of view.
  • a first neural network may be trained from an input dataset comprising pre-procedure CT scan and/or MRI data.
  • the CT scan and/or MRI data may be from a plurality of different subjects.
  • the second neural network may be trained from an input dataset of arthroscopic camera images from a plurality of different surgeries.
  • Any of these methods may include producing a bounding box around a pixel location of the one or more landmarks in each of the images of the first and second fields of view.
  • a bounding box may be any shape (not limited to rectangular), and any appropriate size.
  • In any of these methods, determining the position of the robotic arm or the device held in the robotic arm by triangulating between the image of the first field of view and the image of the second field of view may comprise reconstructing a depth of the landmark relative to the robotic arm or the device held in the robotic arm.
  • Any of the methods described herein may also include continuously refining the AI to recognize the landmark.
  • the method may include selecting two or more landmarks to train the AI on, wherein at least one of the landmarks is selected for having a feature with high variance and a second landmark is selected for having a feature with low variance.
  • the machine learning (e.g., AI) agent may be periodically or continuously updated.
  • a method or apparatus as described herein may include continuously updating a database of optical images, MRI scans and CT scans used to train the AI.
  • any of these methods may be used with a table or bed mounted robotic arm.
  • any of these methods may include coupling the robotic surgical device to a table on which a patient is positioned.
  • FIG. 1 is a block diagram illustrating how an ultrasonic sensor can integrate into a robotic system.
  • FIGS. 2A-2C illustrate the use of a handheld ultrasonic wand and anatomic images produced of the shoulder tissue layers;
  • FIG. 2A shows an example of handheld ultrasonic wand being used to gather anatomical images of a patient’s shoulder;
  • FIG. 2B shows an example ultrasound image of internal structures of the shoulder;
  • FIG. 2C shows another example ultrasound image of internal structures of the shoulder.
  • FIG. 3A illustrates an example of three ultrasound transducers positioned around a shoulder joint using a securing strap.
  • FIG. 3B illustrates another example of a fixation device used to secure one or more ultrasound transducers to the patient’s body.
  • FIG. 3C illustrates an example of an ultrasound transducer being positioned against a shoulder using a robotic arm.
  • FIGS. 4A-4H illustrate an example design of a stretchable ultrasonic transducer array
  • FIG. 4A illustrates the transducer array structure
  • FIG. 4B illustrates an exploded view to illustrate each component in an element
  • FIG. 4C illustrates a bottom view of four elements, showing the morphology of the piezoelectric material and bottom electrodes
  • FIG. 4D illustrates a 1-3 piezoelectric composite
  • FIG. 4E illustrates a top view of four elements showing the morphology of the backing layer and top electrodes
  • FIG. 4F illustrates the stretchable transducer array when bent around a developable surface
  • FIG. 4G illustrates the stretchable transducer array when wrapped on a non-developable surface
  • FIG. 4H illustrates the stretchable transducer array in a mixed mode of folding, stretching and twisting.
  • FIG. 5 is an example of using visual imaging for robotic arm location.
  • FIG. 6 illustrates one example of a method of determining depth of a target point using a visual imaging technique as described herein.
  • FIG. 7 shows the architecture of a system and method using AI to determine 3D position based on a set of images.
  • FIG. 8 schematically illustrates a method of training a system to identify landmarks in 2D images.
  • FIG. 9 schematically illustrates a method of determining the depth of a tool (e.g., the robotic arm, an end-effector device, etc.) using multiple images to triangulate to an identified (e.g., an AI-identified) target point.
  • FIG. 10 is an example of identification of a target region in a series of 2D images as described herein; in this example, the images are arthroscopic images of a patient’s shoulder and the identified target regions are the subscapularis and the humeral head.
  • FIG. 11 schematically illustrates a method of using AI feature recognition to determine a location of a target region of a patient’s body and/or a location of a robotic arm.
  • FIG. 12 schematically illustrates a method of providing 3D localization and frame of reference.
  • FIG. 13 is an example of procedure locations, interventions, etc. that may be planned as described herein.
  • FIG. 14 is an example of 3D models of the system and/or patient tissue that may be generated as described herein.
  • the apparatuses and methods described herein include surgical systems and methods of operating them.
  • described herein are surgical methods and apparatuses for determining positioning information using a combination of imaging technologies including ultrasonic imaging, visual imaging, and/or artificial intelligence (AI) methods.
  • the imaging apparatuses can include imaging using ultrasound techniques combined with imaging using one or more other imaging modalities, such as optical, radiography and/or other imaging modalities.
  • Optical imaging generally involves imaging techniques using light (e.g., visual wavelengths and/or infrared wavelengths, including near-IR wavelengths).
  • the combining of different imaging modalities can improve the identification of anatomical structures. For instance, visual imaging alone generally cannot be used to visualize through opaque tissue surfaces, and pre-procedure and archetype models generally cannot identify locations of internal surfaces and features of deformable soft tissues.
  • Coupling ultrasound imaging with other imaging modalities, such as optical imaging, can provide a more complete view of anatomical structures.
  • ultrasound imaging can be used to “see through” visually opaque tissue surfaces to provide images of internal anatomical structures (e.g., tissue layers, bone, muscle, tendon, organs, and/or pathological structures).
  • the ultrasound images may be combined with images taken using other imaging modalities, such as visual imaging using visual wavelengths of light, of the same internal anatomical structures. Combining the different imaging modalities may also increase the accuracy of locating an anatomical structure in relation to, for example, a robotic arm end effector used to perform a surgical procedure.
  • the imaging apparatuses and methods described herein may be used in performing orthopedic procedures arthroscopically (through cannulation) and/or may be used in performing open surgical procedures. In either case the procedure may be visualized by the physician. Visualization may be used in coordination with a robotic surgical apparatus.
  • the robotic apparatus may include single or multiple automated, semi-automated, and/or manually controlled arms.
  • the apparatuses and methods described herein may be configured to coordinate with the robotic apparatus to assist in controlling the arms, including indicating/controlling the positioning of the robotic arm(s) and/or surgical devices held within the robotic arm(s) in 3D space.
  • FIG. 1 schematically illustrates one example of a system.
  • the system includes a planning module 105 and a controller module 107.
  • the planning module 105 may receive input from a variety of different inputs, including patient-specific information from one or more sources of patient imaging and anatomy (e.g., a CT/MRI input 103).
  • the planning module 105 may receive input from an anatomy library 111 that includes one or more (or a searchable/referenced description of a plurality of different) anatomical examples of the body regions to be operated on by the apparatus, such as a body joint (knee, shoulder, hip, elbow, wrist), body lumen (bowel, large intestine, small intestine, stomach, esophagus, lung, mouth, etc.), vasculature, etc.
  • the planning module may be configured to receive input from a physician, e.g., through one or more physician inputs 113, such as a keyboard, touchscreen, mouse, gesture input, etc.
  • the planning module may include one or more inputs from a tool/implant library 115.
  • a tool/implant library may include functional and/or structural descriptions of the one or more tools that may be operated by the apparatus, including tools having end effectors for operating on tissue.
  • the tools/implants may include cutting/ablating tools, cannula, catheters, imaging devices, etc.
  • the tool/implant library may include a memory or other storage media storing the information on the tool/implant (for the tool/implant library) or the anatomy (for the anatomy library).
  • the planning module 105 may also receive images as input from one or more sensors, such as an ultrasonic imaging device 110 (e.g., ultrasonic transducers), an optical imaging system (e.g., camera) and/or one or more other sensors of the system.
  • the planning module may use the input collected by the one or more of the sensors along with data from the anatomy library 111, physician input 113 and/or tool/implant library 115 to identify one or more landmarks (e.g., anatomical feature, tool and/or implant) within the collected images.
  • the planning module 105 may use AI to iteratively train itself to recognize features within the collected images.
  • the planning module 105 may provide instructions and/or command information to the controller to control or partially control the robotic arm(s) and/or devices/tools (including end effectors) held or manipulated by the robotic arm(s). For example, a planning module may pre-plan one or a series of movements controlling the operation of a robotic arm(s) and/or a device/tool/implant held in the robotic arm(s). In some variations the planning module may adjust the plan on the fly (e.g., in real or semi-real time), including based on the inputs, including but not limited to the physician input.
  • the planning module 105 and/or controller module 107 may receive information from the ultrasonic imaging device 110 and/or an optical imaging device 119 (e.g., camera).
  • the controller module 107 may control (and provide output) to a robotic arm module 109.
  • the robotic arm module 109 may include one or more robotic arms, such as a primary arm and one or more slave arms.
  • the ultrasonic imaging device 110 and/or optical imaging device 119 may be separately affixed to the patient, or manually operated.
  • the controller module 107 may also communicate with an artificial intelligence (AI) module 121 that may receive input from the controller module 107 and/or the planning module 105.
  • the AI module 121 may communicate with an augmented reality (AR) module 123.
  • AR augmented reality
  • controller module 107 may receive and transmit signals to and from one or more of the planning module 105, ultrasonic imaging device 110, optical imaging device 119, robotic arm module 109, AI module 121 and AR module in a feedback and/or feedforward configuration.
  • a remote database (e.g., cloud database 101) may be in communication with the planning module 105 and/or the controller module 107.
  • the remote database may store and/or review/audit information to/from the planning module 105 and/or the controller module 107.
  • the remote database may provide remote viewing and/or archiving of operational parameters, including control instructions, planning instructions, inputs (anatomy inputs, physician inputs, tool/implant library inputs, etc.).
  • the ultrasonic imaging system may include any number of ultrasonic transducers.
  • the ultrasonic imaging system may include a single transducer array.
  • the transducer array may include transducers arranged in a linear, curved, phased, or other configuration.
  • the transducer array may include one or more rows of sensors. In some variations the sensors are arranged to optimally identify tissue layers and identify the location of the tissue layers in the working anatomic space.
  • FIGS. 2A-2C illustrate the use of a handheld ultrasonic wand and anatomic images produced of the shoulder tissue layers. In this case, the handheld ultrasonic wand is used to gather anatomical images of a patient’s shoulder, as shown in FIG. 2A.
  • FIG. 2B shows an example ultrasound image of internal structures of the shoulder, including the deltoid, humeral head and greater tuberosity.
  • FIG. 2C shows another example ultrasound image of the shoulder, including the biceps tendon.
  • the ultrasound images can be used to identify and distinguish between different anatomical structures, such as bone, muscle and tendon.
  • transition regions between different anatomical structures can be apparent.
  • the visual distinction is based on the echogenicity of the object. Tissues that have higher echogenicity are generally referred to as hyperechoic and are often represented with lighter colors in the images. In contrast, tissues with lower echogenicity are generally referred to as hypoechoic and are often represented with darker colors in the images.
  • Areas that lack echogenicity are generally referred to as anechoic and are often displayed as completely dark in the images. Edges of an anatomical structure or spaces between anatomical structures may appear as light or dark outlines.
  • the ultrasound images can also be used to visualize layers of tissue, such as within a muscle.
  • the ultrasound images may also be used to identify pathological tissues.
  • the system can use ultrasonic imaging to independently landmark anatomic or surgical features, such as one or more of the anatomical features shown in FIGS. 2B and 2C. In some modes, the ultrasonic imaging can be used in conjunction with other imaging techniques to improve the accuracy of identification of the anatomical features.
  • the ultrasound imaging system may include more than one transducer array.
  • the multiple transducer arrays can be positioned around an anatomic space, which may facilitate imaging regions of anatomy that are shadowed in an image of a separate transducer array. Such an arrangement may also provide different viewing perspectives, which can be combined to provide a 3D view. For instance, three transducer arrays can be set up orthogonally (e.g., perpendicularly) with respect to each other.
  • FIG. 3A shows an example of an array of ultrasound transducers 302, 304 and 306 positioned posterior, anterior, and superior to a shoulder joint.
  • Each of the transducer arrays 302, 304 and 306 can be configured to gather images of a hyperechoic element, such as bony objects and/or surgical instruments.
  • one transducer array can be used to collect one or more images of the front of the shoulder joint and another transducer array can be used to collect one or more images of the rear of the joint. Collecting ultrasound images from these different perspectives can provide spatial information for construction of a 3D image of the area of interest.
  • the transducer arrays may be coupled and compensated. For instance, the transducer arrays may be timed to be collecting images only when another transducer array is not delivering ultrasonic energy.
  • one of the transducer arrays can be used to deliver energy, and more than one transducer array (e.g., including the transducer array used to deliver energy) can be used to receive energy.
  • This arrangement can allow the system to detect internal structures by not only relying on directly reflected energy.
  • the transducer arrays can fire at different characteristic frequencies and function at the same time. This can allow each receiving array to know which sending array delivered the signal it received. That is, this arrangement can allow the controller(s) to determine which transducer array delivered a signal by its characteristic energy frequency(ies).
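  • As a rough illustration of this frequency-multiplexing idea, the sketch below band-pass filters a received trace around each array’s characteristic frequency to attribute echo energy to the array that transmitted it. This is a minimal signal-processing sketch under assumed values (the sampling rate, center frequencies, and bandwidths are hypothetical), not the implementation described in this disclosure.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 40e6  # receive sampling rate (Hz), assumed
# Hypothetical characteristic center frequencies for three transducer arrays
ARRAY_BANDS = {"posterior": 2.5e6, "anterior": 5.0e6, "superior": 7.5e6}

def bandpass(trace, center_hz, fs=FS, half_bw=0.5e6, order=4):
    """Band-pass filter a received RF trace around one array's characteristic frequency."""
    lo, hi = center_hz - half_bw, center_hz + half_bw
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, trace)

def attribute_energy(trace):
    """Return the echo energy attributable to each transmitting array."""
    return {name: float(np.sum(bandpass(trace, f) ** 2))
            for name, f in ARRAY_BANDS.items()}

# Usage: rf = np.asarray(...)  # one received RF line
# print(attribute_energy(rf))
```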
  • the ultrasound transducers 302, 304 and 306 can be secured to the patient’s shoulder using one or more straps.
  • the straps may be configured to secure the transducers 302, 304 and 306 to the patient while allowing a surgical instrument (e.g., using a robotic arm) to access the shoulder joint.
  • Cables connected to the transducers 302, 304 and 306 can send and receive signals to and from the controller(s).
  • Other means for securing ultrasound transducers can include adhesive (e.g., adhesive tape) and/or other mechanical fixture.
  • FIG. 3B shows another example of a hands-free fixation device 301 that may be used to secure an ultrasound transducer probe to a patient’s body; in this case, to the patient’s chest (ProbeFix distributed by Mermaid Medical Iberia, based in Denmark).
  • one or more ultrasound transducers may be positioned using a robotic arm.
  • FIG. 3C shows an example of an ultrasound transducer mounted to an end effector of a robotic arm to obtain ultrasound images of a patient’s shoulder.
  • the robotic arm can be used to secure the transducer at one location, can facilitate movement and secure positioning driven by an operator, can be programmed to adjust its position in accordance with the rest of the system, and/or can be programmed in a repeated manner to facilitate construction of a 3D image.
  • the robotic arm can move the ultrasound transducer to different angles (e.g., orthogonal angles) relative to one another to establish a location of the landmark in a 3D coordinate system.
  • one or more ultrasound transducers are configured to operate in a handheld manner, such as shown in FIG. 2A.
  • multiple handheld transducers may be arranged around the anatomical area of interest to attain ultrasonic images from different angles to provide different perspectives.
  • the handheld transducer can be positioned at orthogonal positions as described herein.
  • the one or multiple ultrasonic transducers may be formed as a flexible array that can be adhered to the tissue surface.
  • a flexible array is shown in FIGS. 4A-4H, which is described in Hu et al., “Stretchable ultrasonic transducer arrays for three-dimensional imaging on complex surfaces,” Science Advances, 23 Mar 2018: Vol. 4, no. 3, which is incorporated by reference herein in its entirety.
  • tape, straps or other mechanical fixtures can be used to facilitate securing of the flexible transducer arrays on the patient.
  • Flexible ultrasound transducers may be mounted and secured onto the tissue surface (e.g., using adhesive), which allows the transducers to remain secured to the patient without repositioning of the transducers.
  • the transducers can remain adhered on the patient and flex with the patient’s skin if the patient moves, or is moved.
  • flexible transducers may be well suited for the AI-assisted methods described herein since the transducers may be securely affixed to the patient in a particular location, thereby providing consistent location-based results.
  • Flexible transducers may be well suited to provide ultrasonic image guidance in any of a number of robotic surgeries, like surgical laparoscopic removal of ovarian cysts, treatment of endometriosis, removal of ectopic pregnancy, hysterectomy, prostatectomy and/or orthopedic surgeries.
  • the ultrasonic system can detect tissue layers and types in a manner complementary to and independent of other modalities. This can be used to increase the accuracy of position identification. With the ultrasonic system, an understanding of tissue distance from the sensor is gained. Using this in addition to classic vision and AI techniques, a 3D location of that particular tissue can be obtained. This 3D location can be fused with other 3D positioning systems (such as AI/classic vision applied to arthroscopic images and AI applied to MRI/CT scans) to obtain a more certain estimation of 3D location in space.
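  • As a minimal sketch of this kind of fusion, and assuming hypothetical geometry for a linear-array probe whose pose in the room/robot frame is known from tracking, the code below converts a pixel in a B-mode image into a range from the transducer and then into a 3D point that could be fused with other positioning estimates.

```python
import numpy as np

SPEED_OF_SOUND = 1540.0  # m/s, typical soft-tissue assumption

def pixel_to_probe_frame(row, col, fs_hz, pitch_m, n_elements):
    """Convert a B-mode pixel (row = sample index, col = scan line) to a 3D point
    in the probe frame of a linear array: x lateral, y elevational (~0), z depth."""
    depth = SPEED_OF_SOUND * (row / fs_hz) / 2.0        # two-way travel time -> range
    lateral = (col - (n_elements - 1) / 2.0) * pitch_m  # element position across the array
    return np.array([lateral, 0.0, depth])

def probe_to_world(point_probe, R_world_probe, t_world_probe):
    """Transform a probe-frame point into the world (room/robot) frame,
    given the tracked probe pose (rotation R, translation t)."""
    return R_world_probe @ point_probe + t_world_probe

# Usage (hypothetical values):
# p = pixel_to_probe_frame(row=1200, col=64, fs_hz=40e6, pitch_m=0.3e-3, n_elements=128)
# p_world = probe_to_world(p, np.eye(3), np.array([0.10, 0.05, 0.30]))
```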
  • the ultrasound technology could be used to generate an ultrasound image.
  • the output could be retained in software as a data array.
  • the data could be integrated with the other system data while minimizing the visual stream to the user to an optimized level of visual content.
  • the ultrasonic guidance techniques described herein can be used in any of a number of surgical applications, including arthroscopic surgeries, laparoscopic surgeries, open surgeries (e.g., open orthopedic surgeries) and general surgeries.
  • the technology can provide various procedural advantages over conventional surgeries.
  • the ultrasound transducer arrays can be used to visualize the status and positioning of the sutures, suture management, and the configuration and positioning of instruments in real time. This can help the surgeon attain maximum precision in arthroscopic shoulder procedures such as those for shoulder instability (recurrent dislocation of the shoulder), impingement syndrome (pain on lifting the arm), rotator cuff tears, calcific tendinitis (calcium deposition in the rotator cuff), frozen shoulder (periarthritis), and/or removal of loose bodies.
  • Superposition of the ultrasound array images with arthroscopic view images would also help surgeons in cases of shoulder arthroscopic synovectomy for inflammatory conditions like rheumatoid arthritis and/or infections (e.g., tuberculosis), and/or synovial chondromatosis.
  • Interposition and superior capsule reconstruction both employ grafts in an attempt to restore joint stability and function.
  • ultrasound guidance in conjunction with the arthroscopic camera view can be used in interposition and/or superior capsule reconstruction by providing information to determine the respective graft measurements.
  • the supraspinatus is usually seen through the lateral portal, the glenoid is assessed and the defect is measured.
  • the ultrasound arrays can be used in assessing and measuring the sizes of various anatomical structures, implants, pathological structures, and/or can be used by the surgeon to measure the distance between two points within the joint. Artificial intelligence can be used to extract location of specific tissues within the image, and a 3D location in space can be extracted by using the location of tissue in the ultrasound image coupled with the distance to that tissue.
  • the ultrasound arrays along with live arthroscopic feed can help the surgeon make better decisions in assessing the graft sizing.
  • the ultrasound transducer arrays can help the surgeon by providing guidance in implant positioning and suture placement.
  • the surgeon can measure the graft size, the bone loss, bony spurs, loose particles, degenerative conditions and/or other features. Because of real-time information during the arthroscopic procedures, the surgeon can be guided during the procedure and can be better equipped to improve his/her technique and increase patient outcomes.
  • the use of ultrasound transducer arrays in conjunction with arthroscopic view can help the surgeon make more informed decisions regarding the arthroscopic repairs of the hip joint.
  • the use of ultrasound guidance along with arthroscopic camera view can allow the surgeon to assess the condition of the pathology and come up with better outcomes because of proper guidance from the superposition of ultrasound with arthroscopic view.
  • the ultrasound imaging techniques can also be used in knee arthroscopy procedures.
  • knee arthroscopy procedures can include those for repairing meniscus tears and/or articular cartilage injuries.
  • Knee arthroscopy is also used for ligament repairs and reconstruction (e.g., anterior cruciate ligament (ACL) reconstruction), removal of loose or foreign bodies, lysis of adhesions (cutting scar tissue to improve motion), debridement and repair of meniscal tears, irrigating out infection, lateral release (cutting tissue to improve patella pain) and/or fixation of fractures or osteochondral defects (bone/cartilage defects).
  • the ultrasound imaging techniques can allow the surgeon to superpose the ultrasound images with the arthroscopic images, thereby allowing for better guidance and better decisions, improved precision, and thus better outcomes.
  • the ultrasound transducer arrays can provide live guidance in terms of imaging the tissues being punctured through the arthroscope and cannula.
  • the ultrasound techniques can be used to prevent damage to neurovascular structures like the brachial plexus (in case of shoulder) or femoral artery (in case of hip).
  • Ultrasonic guidance can also be useful in case of open orthopedic surgeries like knee replacement, hip replacement and shoulder replacement.
  • the ultrasonic guidance can help in a number of robotic surgery applications, such as in cases where complex procedures are dealt with more precision, flexibility and control than is possible with conventional techniques. For example, kidney surgery, gallbladder surgery, radiosurgery for tumors, orthopedics surgery, cardiovascular surgery, trans-oral surgery and/or prostatectomy.
  • the ultrasonic imaging techniques may be used in laparoscopic procedures, which traditionally are characterized by imaging techniques that provide poor depth perception. The use of ultrasonic images in conjunction with laparoscopic images can help decipher the depth of various tissues and hence increase the safety and efficacy of laparoscopic procedures.
  • ultrasonic image guidance can provide depth perception and/or provide a way to detect the presence of any adhesions or cysts that might get missed using traditional laparoscopic imaging techniques.
  • Ultrasonic superposition can be helpful in cases of lumps and/or cancerous mass removal surgeries in any part of the body. For example, while doing the surgery, the surgeon can see if there are any remnants of a mass in a cavity and at the same time also have a look at the lymph node enlargements in and around the region of the mass. Ultrasonic superposition may be useful to surgeons to provide guidance and allow the surgeon to be in better control of their operations. Thus, the surgeon can make a more informed decision at each and every stage of a surgery.
  • one or multiple robotic arms may be used during the procedure.
  • a single (master) arm that has a precisely determined location relative to the physiologic target.
  • Other robotic arm(s) (slave(s)) can receive positional location from the master arm to determine their positions relative to the master arm.
  • there may be a primary arm for locating the system relative to a physiologic target and some or all of the other arms may be used to provide supplemental information.
  • Multiple arms can also be complementary in that each of the arms can provide locational information that, compiled together, gives precise information about all the elements of the system.
  • location sensing elements are not attached to a robotic arm but are precisely placed relative to the physiologic feature.
  • sensors can help protect the safety of the patient and doctor, for example, if there is a disturbance or if there is an instrument in the line of action. With the help of these sensors, the robotic arm would stop, so that it no longer disturbs the surgical site or injures anybody during the surgery.
  • such sensors could be based on optical sensing, and they could help stop the robot immediately in position if any limits are reached; these could be position, velocity, or acceleration limits.
  • ultrasound imaging and optical imaging may be used for determining positioning of the arm(s) (e.g., master robotic arm, etc.).
  • FIG. 5 shows an example of using optical imaging for master arm location.
  • a robotic arm 501 holds a device including a touch end effector 506 that contacts a bony landmark of the patient’s arm 507.
  • the apparatuses and methods described herein may be configured to perform orthopedic procedures arthroscopically (through cannulation) and/or can be performed through open surgical procedures. In either case the procedure may be visualized by the physician.
  • Ultrasonic arrays and visualization may be used in coordination with the localization of a robotic arm; the added depth perception can also be very helpful for the surgeon.
  • a visualization system and ultrasonic arrays may be used for position detection.
  • the 2D image obtained from the arthroscopic camera can be used to identify physiologic landmarks.
  • Various classical vision techniques can be used to identify particular landmarks by detecting 2D feature points (e.g., based on color, shape and/or orientation) unique to that landmark.
  • Example feature extraction algorithms include SIFT (Scale-Invariant Feature Transform), ORB (Oriented FAST and Rotated BRIEF), and SURF (Speeded Up Robust Features).
  • a 3D position can be extracted by using a stereo camera projection algorithm.
  • the 2D feature point may provide x and y locations in space, and z distance of the 3D position may be determined.
  • a depth estimation can be determined by analyzing two views of the same landmark and tracking the displacement of detected features across these two images.
  • the robotic arm may obtain the first image view of the landmark, 2D features of the landmark may then be extracted, and then the robotic arm may shift the arthroscopic camera horizontally by a small fixed distance to obtain the second view of the landmark (e.g., as part of a triangulation procedure).
  • the 2D features of the landmark may then be extracted. Matching feature points across the two images may then be found, and the displacement between these points may be determined.
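  • A minimal OpenCV sketch of this classical step, detecting ORB features in two views of the same landmark and matching them to measure per-feature displacement (the file names and parameters are placeholders, not values from this disclosure):

```python
import cv2
import numpy as np

def match_features(img_left, img_right, max_matches=50):
    """Detect ORB keypoints in two views and return matched pixel coordinates."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts1, pts2

# Usage (placeholder file names):
# left = cv2.imread("view_before_shift.png", cv2.IMREAD_GRAYSCALE)
# right = cv2.imread("view_after_shift.png", cv2.IMREAD_GRAYSCALE)
# pts1, pts2 = match_features(left, right)
# disparities = pts1[:, 0] - pts2[:, 0]  # horizontal displacement per matched feature
```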
  • the visual imaging could also be coupled with one or more “touch” techniques as described above.
  • a tool could touch the target tissue, and could, for example, have calibrated bands or have known physical dimensions. These could then be correlated to the pixels in the image and therefore provide calibrated indexing to the rest of the visible field.
  • either the same camera may be used and moved between the right and left positions, or a separate left and right camera may be used, and may take images of a target position (point P(x,y,z)) in space.
  • the x and y positions may be determined from the 2D image, and the changes based on a known separation between two or more images (e.g., left camera, right camera) may be used to determine the depth and therefore the z distance.
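  • The standard stereo relation behind this step is z = f·B/d, where f is the focal length in pixels, B is the known camera separation (baseline), and d is the pixel disparity of the matched feature. A small sketch, assuming an ideal pinhole model and rectified views:

```python
def triangulate_point(u_left, v_left, disparity_px, focal_px, baseline_m, cx, cy):
    """Recover a 3D point (camera frame) from one matched feature.
    u, v: pixel location in the left image; cx, cy: principal point."""
    z = focal_px * baseline_m / disparity_px   # depth from disparity
    x = (u_left - cx) * z / focal_px           # back-project x
    y = (v_left - cy) * z / focal_px           # back-project y
    return x, y, z

# Usage (hypothetical intrinsics): a feature at (640, 360) with 12 px disparity,
# an 800 px focal length, and a 5 mm baseline:
# print(triangulate_point(640, 360, 12.0, 800.0, 0.005, 640, 360))
```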
  • Artificial intelligence (AI) based algorithms can be used in place of, or in conjunction with, classical vision techniques to further improve the efficiency and accuracy of robotic arm positioning in any of the apparatuses and methods described herein.
  • with classical vision techniques, it may be difficult to generalize 2D features of a landmark across various patients, causing inaccuracies when determining the precise location of a landmark.
  • AI can be employed by training a system across a large dataset of images where a ground truth is provided dictating the correct position of a particular landmark for every image. The ground truth is a bounding box drawn around the desired landmark in the training images that is to be detected. The trained system may then be tested on a separate database of new images, and the detections produced by the system are compared against the ground truth to determine the accuracy of the detection. The train and test steps are repeated until an acceptable accuracy is achieved.
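  • A common way to score such detections against the ground-truth bounding boxes is intersection-over-union (IoU), with a detection typically counted as correct when IoU exceeds a threshold such as 0.5. This is a generic evaluation sketch, not a metric specified by this disclosure:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def detection_accuracy(predictions, ground_truths, threshold=0.5):
    """Fraction of test images whose predicted box overlaps ground truth above threshold."""
    hits = sum(iou(p, g) >= threshold for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)
```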
  • FIG. 7 illustrates the architecture of a CNN. It takes an image as input and then assigns importance to key aspects of the image by creating weights and biases. The image is passed through several layers to extract different features, thereby creating various weights and biases. These weights and biases indicate how well the network is trained; they are adjusted over many rounds of training and testing until an acceptable accuracy is achieved.
  • a bounding box may be produced as an output which provides an accurate pixel location of the landmark in the 2D image. From here, the above classical vision camera projection algorithm may be used to determine the depth portion of the 3D position.
  • each procedure could also be used to further build the image database and improve the accuracy of the neural network.
  • the trained neural network can detect a region (landmark) within the current image (following a similar architectural flow depicted in FIG. 7) which can then be used to determine a 3D position in space; this information may be used to determine relative position of the arm(s) to the body and/or each other.
  • An apparatus or method as described herein may include recording from real or simulated procedures. For example, a system may then be trained so that in procedures it recognizes location without calibration activities. Each procedure may be used to further build and refine the capability of the apparatus (e.g., the AI algorithm). Alternatively, or in conjunction, training may be performed using existing databases of physiologic information such as pre-procedure MRIs or CT scans.
  • a CNN can be trained on CT scans and MRI images to aid in the pre-surgery analysis.
  • the AI can be used in pre-procedure planning to visualize pathological tissues in a 3D model and then help the surgeon locate them during the surgery and further help remove these tissues during the procedure.
  • the system may be trained to identify pathological tissues like osteophytes, calcific tendinitis, pannus formations and also check for synovitis, osteitis and bone erosions.
  • the AI system algorithm may be trained to recognize these pathological tissues and study their removal with safe margins from healthy tissues, and to automatically further refine the capability of the AI system algorithm.
  • the methods and apparatuses described herein may include a pre-procedure analysis that includes training the neural network on CT scans and MRIs, rather than using the arthroscopic 2D image as input to the CNN.
  • the network is trained in a similar fashion to detect landmarks and pathological tissue in these scans.
  • a separate database of CT scans and MRIs may be used where again each scan will have the ground truth bounding box drawn for that landmark. This database may be fed into the CNN so that it may be trained and tested in the same fashion as explained above until an acceptable detection accuracy is met.
  • the AI, ultrasonic imaging and visual imaging may be used together.
  • any of these apparatuses and methods may employ classical vision techniques alongside AI-based algorithms to determine a 3D location of the robotic arm as well as detect landmarks and pathological tissues from CT and MRI scans while creating a patient-specific 3D model of the joint.
  • Two separate neural networks may be trained, e.g., a first having an input dataset of general pre-procedure CT scan and MRI data from various patients and a second having an input dataset of arthroscopic camera images from various surgeries.
  • the trained neural network in both cases may produce a bounding box of the pixel location of a landmark in the image/scan.
  • a camera projection technique may be used to reconstruct the depth from the 2D image, providing a 3D location in space, while in the case of a CT and MRI scan, a classification and location of pathological tissues may be produced on the 3D model.
  • a database of images and MRI/CT scans will be built, and as more procedures are performed, this database will continue to grow, thereby further improving the accuracy of both AI networks and creating a more robust system.
  • the neural network model may be continuously refined until minimal error and convergence are achieved.
  • transfer learning can be used to reduce the time in training.
  • Transfer learning allows the use of knowledge learned from previously trained networks as the basis for the network training.
  • the Faster Region-based CNN (Faster R-CNN) with Inception ResNet v2 model can be used, which was pre-trained on the COCO dataset of common everyday objects.
  • Although the image content of the COCO dataset is vastly different from surgical videos and CT/MRI scans, the features learned from the COCO pre-trained network may share some similarity with medical imaging.
  • the neural networks may train faster and have an improved performance, allowing for a higher prediction accuracy.
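  • As a hedged sketch of this transfer-learning setup, using torchvision’s COCO-pretrained Faster R-CNN with a ResNet-50 backbone (a different backbone than the Inception ResNet v2 variant named above), the pretrained detector’s box predictor is replaced so it predicts the surgical landmark classes while the pretrained feature weights are reused as the starting point:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_landmark_detector(num_landmark_classes):
    """Start from a COCO-pretrained Faster R-CNN and swap in a new box predictor
    for the surgical landmark classes (+1 for background)."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features,
                                                      num_landmark_classes + 1)
    return model

# e.g., two landmark classes (subscapularis, humeral head) plus background:
# model = build_landmark_detector(num_landmark_classes=2)
```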
  • a first key hyper-parameter can be the learning rate, which controls how much the weights of the neural network are adjusted with respect to the loss gradient.
  • the goal when training a network is to find the minimal error possible, and the learning rate helps define both the speed at which this minimum error is found and how precisely it is found.
  • regularization is a general class of techniques used to reduce error on the test set by generalizing the model beyond the training set.
  • L2 regularization is one such technique, in which larger weights in a network are penalized so as to constrain them from dominating the outcome of the network.
  • the last important hyper-parameter is the choice of optimizer used to help improve training speed and result accuracy.
  • the momentum optimizer is one such algorithm; it helps accelerate finding the minimum training loss by building up velocity as the minimum is approached. Since a pre-trained model is being utilized, the hyper-parameters set in that model can be used as a starting point, which will give a better result than selecting values at random. The parameters can be further tuned as iterations of training continue.
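  • In PyTorch terms, these three hyper-parameters map onto the optimizer construction; the values below are illustrative starting points, not settings taken from this disclosure:

```python
import torch

def make_optimizer(model, lr=0.002, momentum=0.9, weight_decay=1e-4):
    """SGD with momentum (accelerates descent) and weight decay (L2 regularization)."""
    params = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.SGD(params, lr=lr, momentum=momentum,
                           weight_decay=weight_decay)

# Usage: optimizer = make_optimizer(build_landmark_detector(2))
```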
  • Another factor that can allow for improved prediction accuracy is to choose a feature-rich landmark to detect in the surgical video.
  • landmarks may look similar to each other (for example, to a neural network, the glenoid and the humeral bone may look similar in color).
  • landmarks may not have a lot of features.
  • a landmark may be chosen that has variance in color and shape. For example, the subscapularis tendon in the shoulder may be a good candidate for detection as it has differences in color and distinct color patterns.
  • the methods and apparatuses described herein may determine one or more features to train on based on the variation of those features in a given factor within the training set, such as variance in color, variance in shape, etc. (which may be determined manually or automatically).
  • a second feature that has a low degree of variance in the same (or different) factor(s) may be combined with the first feature.
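  • One simple way to screen candidate landmarks for this kind of feature richness is to compare color variance across labeled training crops; this is only an illustrative sketch (the crop extraction and the choice of color variance as the statistic are assumptions, not a prescribed method):

```python
import numpy as np

def color_variance(crops):
    """Mean per-channel variance of pixel color within each labeled crop (HxWx3 arrays)."""
    return float(np.mean([crop.reshape(-1, 3).var(axis=0).mean() for crop in crops]))

def rank_landmarks(crops_by_landmark):
    """Rank candidate landmarks (e.g., 'subscapularis', 'glenoid') by color variance,
    highest first, as one proxy for how feature-rich each landmark is."""
    scores = {name: color_variance(crops) for name, crops in crops_by_landmark.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```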
  • FIG. 8 illustrates an example method of training an AI for an apparatus (e.g., system), including providing the database of images (e.g., ultrasonic images or optical images) for which landmarks have been labeled 801.
  • the database of images with labeled landmarks 801 may be provided and used to train and test a neural network 803. If the accuracy of the trained network is below a threshold (or acceptable accuracy) 805, it is retrained and retested (as sketched below); otherwise the network may be used to determine the location of a target and surround it with a 2D bounding box on the image that includes the target 807.
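  • A minimal sketch of the retrain-until-acceptable loop of FIG. 8 follows; the accuracy threshold, round cap and the caller-supplied training/evaluation callables are placeholders:

```python
def train_until_acceptable(model, train_round, evaluate,
                           accuracy_threshold=0.90, max_rounds=20):
    """Retrain and re-test until the detector reaches an acceptance threshold.

    train_round(model) and evaluate(model) are supplied by the caller; the
    0.90 threshold and 20-round cap are illustrative values only.
    """
    for _ in range(max_rounds):
        train_round(model)                         # fit on the labeled landmark database
        if evaluate(model) >= accuracy_threshold:  # e.g., mean IoU or detection rate
            return model                           # ready to produce 2D bounding boxes
    raise RuntimeError("Detector did not reach the acceptance threshold")
```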
  • FIG. 9 illustrates an example method of training an apparatus including an imaging system (e.g., ultrasound imaging system or optical camera system) for extracting features from two or more pictures (e.g., 2D images), preferably that are separated by a known distance, to determine, based on a common feature within the images, the coordinates (and particularly the depth) of the imaging source (e.g., ultrasonic transducer or optical camera on a robotic arm or arthroscope), and therefore the coordinates of the robotic arm and/or a device held by the robotic arm.
  • a pair of images 901, 903 may be taken from two positions a known distance apart, as illustrated and described above.
  • the AI system 905 may then be provided with both of the images (e.g., all of the images). Using these two images, separated by a known distance, one or more landmarks may be detected in these images 907, 907’ and a bounding box provided around a target feature(s). The same or a different target feature (or a point or region of the feature) may be extracted 911 for each image. The features between the two images may be matched and depth information may be extracted 913, using the known separation of the imaging source between the images, as shown in FIG. 6, described above, and sketched below. Thus, a location of the robotic arm (or a surgical device on the robotic arm) and/or a location of a target region of the patient’s body in 3D space may be determined 915.
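  • Under the simplifying assumption that the two images are related by a pure sideways shift of the imaging source (a stereo-like baseline equal to the known separation), the depth-extraction step can be sketched as follows; all numeric values are placeholders:

```python
def depth_from_disparity(u_first, u_second, baseline_mm, focal_px):
    """Depth of a matched feature from two views separated by a known distance.

    Assumes a rectified, stereo-like geometry: depth = f * B / disparity.
    """
    disparity = float(u_first - u_second)      # horizontal pixel shift of the feature
    if disparity <= 0:
        raise ValueError("Feature must shift between the two views")
    return focal_px * baseline_mm / disparity

# Example: the bounding-box center moved 18 px between images taken 5 mm apart
z = depth_from_disparity(u_first=652.0, u_second=634.0, baseline_mm=5.0, focal_px=1050.0)
print(f"Estimated depth: {z:.1f} mm")
```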
  • FIG. 10 illustrates one example of the methods described above, in which an image is taken by a camera (an arthroscopic view of a shoulder).
  • the system may accurately detect the attached portion of the subscapularis to the humeral head, as shown in the bounding box 1001 marked in FIG. 10. This identified region from each of the images may then be used to determine the location of the tool 1003 and/or robotic arm. As the robotic arm and/or tool move, the images may be constantly updated. In some variations, multiple images may be taken and identified from different positions to triangulate the depth and thus the x, y, z position of the tool and/or robotic arm. In this example, the system recognizes both the subscapularis and the humeral head.
  • Similar recognition and triangulating can be performed using ultrasound images to determine the x, y, z position of a region of interest, such as a feature within the bounding box 1001 (e.g., a portion of the subscapularis or humeral head).
  • FIG. 11 illustrates an example method of using AI feature recognition as part of a planning module (see, e.g., FIG. 1) to determine a location of a target region of a patient’s body and/or a location of a robotic arm and/or tool.
  • a planning module may be trained to recognize a feature using a training database 1101 (e.g., anatomy library, physician input, and/or tool/implant library).
  • the trained planning module may perform a preliminary feature recognition 1107 for one or more landmarks/features in one or more current images 1105 (e.g., collected from an ultrasound transducer and/or camera on the robotic arm or a device held by a robotic arm), applied based on one or more previous images 1103 (e.g., collected from an ultrasound transducer and/or camera on the robotic arm or a device held by a robotic arm).
  • the planning module can determine a location in 3D space (e.g., see FIG. 9) of a target region within the patient’s body and/or the robot arm 1109, and update the planning module accordingly.
  • the location information may also be used to determine a distance between the robotic arm (or a surgical device on the robotic arm) and the location of the one or more landmarks, or a distance between two points within the patient’s body.
  • the location information can be sent as output to the controller, robotic arm (e.g., master arm and/or slave arm) and/or an AR module 1111.
  • FIG. 12 illustrates an example method of combining ultrasound imaging and optical imaging to provide 3D localization using a planning module (see, e.g., FIG. 1).
  • Ultrasound images may be received from multiple perspectives 1201, for example, at different angles (e.g., orthogonally arranged) relative to each other.
  • two or more ultrasound transducers may be positioned around an area of interest (e.g., see FIGS. 3A) and/or one or more ultrasound transducers may be moved around the area of interest (e.g., see FIG. 3C).
  • a location in 3D space of a landmark (or other feature) in the ultrasound images may be determined 1203. This can include using a known reference distance and/or reference angle.
  • a known distance between an ultrasound transducer in a first position and the ultrasound transducer in a second position may provide a reference distance.
  • in another example, a known distance between a first ultrasound transducer and a second ultrasound transducer (e.g., see FIG. 3A) may provide the reference distance; a sketch of combining two such views follows below.
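  • A hedged sketch of combining two ultrasound views with a known relative placement into a single 3D landmark location follows; the probe poses, axis conventions and measured values are illustrative assumptions only:

```python
import numpy as np

def probe_point_to_world(point_in_probe, probe_rotation, probe_origin):
    """Map a point measured in a transducer's local frame into a shared frame.

    point_in_probe: (lateral, elevational, depth) in mm in the probe frame.
    probe_rotation / probe_origin: the probe's known pose in the shared frame
    (supplied by the known reference distance/angle between probe positions).
    """
    return probe_rotation @ np.asarray(point_in_probe, dtype=float) + probe_origin

# View A: probe at the shared-frame origin, un-rotated
p_a = probe_point_to_world([12.0, 0.0, 41.0], np.eye(3), np.zeros(3))

# View B: probe 80 mm along x, rotated about y so its beam looks back toward A
R_b = np.array([[0.0, 0.0, -1.0],
                [0.0, 1.0,  0.0],
                [1.0, 0.0,  0.0]])
p_b = probe_point_to_world([40.5, 0.0, 68.2], R_b, np.array([80.0, 0.0, 0.0]))

landmark_xyz = 0.5 * (p_a + p_b)    # fuse the two consistent estimates
print(landmark_xyz)
```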
  • the system can be configured to receive one or more optical images that include the landmark 1205.
  • the optical image(s) may be taken by a camera associated with a robotic arm (e.g., on the robotic arm or held by the robotic arm). In some instances, the optical image(s) may be taken by a camera on or attached to an endoscope (e.g., arthroscope).
  • the planning module may establish a frame of reference of the optical image by correlating the landmark in the ultrasound and optical images 1207. Such correlation may be performed by extracting and matching features between the ultrasound and optical images. Since the ultrasound and optical images are collected using different modalities, the same objects within the images may appear different. Thus, the correlation may include adjusting for any such appearance differences.
  • ultrasound imaging may be able to detect anatomical structures within the body that visual imaging techniques may not detect due to opaque tissue surfaces. Correlating the location of such features as detected by ultrasound imaging with surface images detected by visual imaging can provide a frame of reference for the optical image. Likewise, some anatomical structures may be better visualized using visual imaging compared to ultrasound imaging, and establishing a frame of reference for the visual image can help to identify the location of such features in the ultrasound images. One way of establishing such a shared frame of reference is sketched below.
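  • Assuming a handful of landmark positions have already been matched across the two modalities, one generic way such a shared frame of reference could be established is a rigid (Kabsch/SVD) alignment; this is a sketch of that general technique, not a statement of this disclosure's specific method:

```python
import numpy as np

def rigid_transform(points_us, points_cam):
    """Estimate rotation R and translation t mapping ultrasound-frame landmark
    positions onto the optical-camera frame (Kabsch/SVD alignment).

    points_us, points_cam: (N, 3) arrays of the same landmarks in each frame.
    """
    P = np.asarray(points_us, dtype=float)
    Q = np.asarray(points_cam, dtype=float)
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)                    # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    return R, t          # optical_point ≈ R @ ultrasound_point + t
```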
  • the ultrasound and optical images may be combined and displayed as a combined image 1209.
  • the combined image may include the optical image(s) overlaid on the ultrasound image(s), or vice versa, using an augmented reality module (e.g., see FIG. 1).
  • only the ultrasound images or the optical images are displayed, for example, to provide the images at higher speed.
  • the single modality images may include labels or markings identifying landmarks or other features of interest.
  • the combined images or single modality images may be displayed in real time, for example, to guide a surgeon during a surgical procedure; a minimal overlay sketch follows below.
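  • A minimal overlay sketch for the combined display, assuming the two images have already been registered to the shared frame of reference; the alpha value is illustrative and OpenCV is only one possible implementation choice:

```python
import cv2

def overlay_ultrasound_on_optical(optical_bgr, ultrasound_gray, alpha=0.35):
    """Blend an ultrasound image over the optical view for a combined display.

    alpha controls ultrasound opacity; 0 shows only the optical image.
    Assumes the ultrasound frame is already spatially registered to the view.
    """
    h, w = optical_bgr.shape[:2]
    us_resized = cv2.resize(ultrasound_gray, (w, h))
    us_bgr = cv2.cvtColor(us_resized, cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(us_bgr, alpha, optical_bgr, 1.0 - alpha, 0)

# combined = overlay_ultrasound_on_optical(camera_frame, us_frame, alpha=0.35)
# cv2.imshow("Combined view", combined)   # or route to an AR headset/monitor
```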
  • AR may allow a physician to assess where they are (e.g., in the patient) without having to manipulate the imaging camera.
  • a system may be constructed with AI data, from pre-recorded MRI/CT data, or from other database sources.
  • the AR image(s) could be displayed on a headset or on a monitor, such as the monitor that is displaying the current camera view.
  • Tissue image layers may be presented in such a way as to facilitate the procedure; this could be image layers that could be turned on or off, layers that could change in transparency, or other useful characteristics.
  • Tissues such as epidermis, sub-cutaneous fat, sheaths, muscles, cartilages, tendons, bones, lymphatic systems, vascular structures and nerves could be displayed in different color or patterns. This may assist the surgeon in accessing and assessing the surgical procedure with ease. This may allow a fast learning curve, allowing even less-experienced doctors to be better equipped to operate independently with the help of these tools. This may also be used for simulation of surgeries and/or for training of medical residents for surgeries, and/or for continuing medical education for newer procedures.
  • Any of the methods and apparatuses described herein may also or alternatively have associated procedure planning tools. These tools could be constructed from pre-procedure MRI or CT data. They could be used to precisely plan the procedure, including the location of screws or other procedural elements used in reconstruction. The motions and loading of the joints at the end of the procedure could then be mapped out in advance. The procedure could then be executed more efficiently, and the result may be precise and durable.
  • FIG. 13 illustrates examples of procedure locations, interventions, etc. that may be planned as described herein.
  • Any of these methods may also include one or more 3D models, which may be constructed from images. Simulations of motions, loads and interventions may be provided and performed using the 3D models.
  • FIG. 14 illustrates examples of 3D models, and in particular examples of how 3D models can be constructed from images. Simulations of motions, loads, and interventions can be performed.
  • pre-procedure MRI or CT data slices may be used to construct a 3D model of the patient’s tissue, as sketched below.
  • pre-procedure MRI or CT data may be used to prepare a 3D model of the joint, which could also show pathological tissues like calcifications, osteophytes, pannus, and/or bone erosions.
  • this 3D model may allow preplanning of the surgical procedure and different joint implants may be placed/overlapped onto the 3D model to check for the best fit for the patient.
  • the range of motion and joint loads can also be simulated with the help of this 3D model.
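  • For illustration, a 3D surface model (e.g., bone) can be extracted from stacked pre-procedure slices with a standard marching-cubes routine (scikit-image is assumed here only as one possible tool); the iso-value and array shapes below are placeholders:

```python
import numpy as np
from skimage import measure

def surface_model_from_slices(ct_slices, iso_value=300.0):
    """Build a triangulated 3D surface from stacked CT/MRI slices.

    ct_slices: (num_slices, H, W) array of intensity values.
    iso_value: illustrative bone threshold; choose per tissue of interest.
    """
    volume = np.asarray(ct_slices, dtype=np.float32)
    verts, faces, normals, _ = measure.marching_cubes(volume, level=iso_value)
    return verts, faces, normals     # feed into the planning and/or AR modules

# Implants could then be overlaid on (verts, faces) to check fit, and joint
# range-of-motion simulated against the reconstructed surfaces.
```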
  • the plan may then be saved for intra-operative review. Further, the ultrasonic arrays will help to reconfirm the incision angle and proper positioning of the instruments during the surgery.
  • a robotic system could help in assessing the accuracy of the procedure and its deviations from the planned procedure.
  • a robotic system could assist the surgeon by providing a real-time view of the operation plane on the 3D model.
  • the surgeon can also perform valgus and varus stress tests during the surgery and check the stressed range of motion under these stress test scenarios on the 3D model. With the help of these stressed range of motion tests, it may be easier to fix the implant and assess the post-operative range of motion.
  • the live feedback may allow a surgeon to modify the surgery in real-time by making adjustments.
  • the robotic system may help in providing a real-time view of glenoid retroversion and inclination, and the system can also show in real time the reaming and drilling depth and screw placement positions.
  • This real-time feedback may give the surgeon added ability to adjust the surgical plan intraoperatively, offering the surgeon more flexibility during these complex procedures.
  • the robotic system may help reproduce the preoperative plan with precise execution intraoperatively and may help the surgeon by providing live feedback, allowing for real-time adjustments.
  • a robotic surgical navigation system as described herein may index itself in relation to the bony landmarks.
  • the robotic surgical system can reposition itself to point towards the surgical site.
  • the humerus may be dislocated anteriorly several times during the procedure. This would be to prepare the head of the humerus for the implant and proper positioning in the glenohumeral joint.
  • with the help of ultrasound arrays and intra-operative visualization, a surgeon can virtually position the implant and balance the soft tissue in relation to the implant even in cases of joint movements during surgery.
  • the robotic surgical navigation system may help re-index itself in relation to the joint of the patient and reorient itself back in relation to the surgical plan.
  • Robotic systems as described herein may help in reducing damage to healthy tissues and minimize scar and adhesion formations due to increased precision during surgery. This may lead to minimal pain and suffering for the patient, faster recovery, custom fit, extra flexion and range of motion for the patient and shorter hospital stays. The rehabilitation of such patients could be easier, and there could be a lower chance of revision surgeries.
  • any of the robotic systems (including AI) described herein, along with the visualization capabilities of the ultrasonic arrays, may result in added accuracy in identifying bony landmarks and tissue landmarks such as nerves and tendon insertions onto the bone, even in the presence of scar tissue and adhesions.
  • Any of the methods (including user interfaces) described herein may be implemented as software, hardware or firmware, and may be described as a non-transitory computer-readable storage medium storing a set of instructions capable of being executed by a processor (e.g., computer, tablet, smartphone, etc.), that when executed by the processor causes the processor to control or perform any of the steps, including but not limited to: displaying, communicating with the user, analyzing, modifying parameters (including timing, frequency, intensity, etc.), determining, alerting, or the like.
  • spatially relative terms such as “under”, “below”, “lower”, “over”, “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is inverted, elements described as “under” or “beneath” other elements or features would then be oriented “over” the other elements or features. Thus, the exemplary term “under” can encompass both an orientation of over and under.
  • the device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
  • the terms “upwardly”, “downwardly”, “vertical”, “horizontal” and the like are used herein for the purpose of explanation only unless specifically indicated otherwise.
  • first and second may be used herein to describe various features/elements (including steps); however, these features/elements should not be limited by these terms, unless the context indicates otherwise. These terms may be used to distinguish one feature/element from another feature/element. Thus, a first feature/element discussed below could be termed a second feature/element, and similarly, a second feature/element discussed below could be termed a first feature/element without departing from the teachings of the present invention.
  • any of the apparatuses and methods described herein should be understood to be inclusive, but all or a sub-set of the components and/or steps may alternatively be exclusive, and may be expressed as “consisting of” or alternatively “consisting essentially of” the various components, steps, sub-components or sub-steps. [0107] As used herein in the specification and claims, including as used in the examples and unless otherwise expressly specified, all numbers may be read as if prefaced by the word “about” or “approximately,” even if the term does not expressly appear.
  • a numeric value may have a value that is +/- 0.1% of the stated value (or range of values), +/- 1% of the stated value (or range of values), +/- 2% of the stated value (or range of values), +/- 5% of the stated value (or range of values), +/- 10% of the stated value (or range of values), etc.
  • Any numerical values given herein should also be understood to include about or approximately that value, unless the context indicates otherwise. For example, if the value “10” is disclosed, then “about 10” is also disclosed.
  • any numerical range recited herein is intended to include all sub-ranges subsumed therein. It is also understood that when a value is disclosed, “less than or equal to” the value, “greater than or equal to” the value, and possible ranges between values are also disclosed, as appropriately understood by the skilled artisan. For example, if the value “X” is disclosed, then “less than or equal to X” as well as “greater than or equal to X” (e.g., where X is a numerical value) is also disclosed. It is also understood that, throughout the application, data is provided in a number of different formats, and that this data represents endpoints and starting points, and ranges for any combination of the data points.

Abstract

Surgical imaging methods and apparatuses, including systems, for identifying and determining the location of anatomical features and other features using a combination of ultrasound sensor data and other sensor data. The methods and apparatuses may be used for navigation and guidance during a surgical procedure. Also described are methods and apparatuses for robotic surgical procedures, including artificial intelligence (AI) assisted robotic surgeries. The methods may include using a combination of AI landmark-identification, ultrasound imaging and visual imaging. Also described herein are methods and apparatuses for determining how to train the AI.

Description

ULTRASONIC SENSOR SUPERPOSITION
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application is related to and claims priority from US Provisional application 63/239,871, filed 01 September 2021, the entirety of which is incorporated herein by reference.
INCORPORATION BY REFERENCE
[0002] All publications and patent applications mentioned in this specification are herein incorporated by reference in their entirety to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference including U.S. Provisional Patent Application No. 62/886,302, filed on August 13, 2019, entitled “Al Assisted Robotic Surgical Methods and Apparatuses”.
FIELD
[0003] The apparatuses and methods described herein relate generally to ultrasonic imaging tools and systems for facilitating robotic surgery and minimally invasive surgery procedures.
BACKGROUND
[0004] The use of surgical robots provides a number of benefits. One that is important to patients and surgeons is assisting the surgeon in enhancing the outcomes of surgical procedures. Often, a robotic device includes one or more distal movable arms that may have many other attachments at the end. The distal end of the robotic arm can be moved very swiftly and with a very high degree of precision. Also, the surgical arm can have many of the needed instruments attached. At the free distal end of the robotic arm, a surgical instrument can be attached as an attachment. These attachments are used at the surgeon’s discretion. The robotic arm is highly mobile, and it can also be adjusted by the surgeon to best position an instrument at the surgical site in order to perform the operation effectively. A robotic surgery device provides significant support to the surgeon’s surgical procedure. One of the many advantages of using a surgical robot is that the instruments can rotate several times over, which is much more than a human wrist can do. The instrument motion can be scaled down to remove a natural tremor or to move in extremely slow motion if desired. The instruments can be paused in any position or action to provide a steady, unmoving point of stability or retraction. Similarly, there are some physical health benefits to the surgeon while conducting the operation in comparison to conventional surgical procedures. Without the support of robotic arms, a surgery can be physically demanding and lead to neck, shoulder, or back problems for the surgeon. Additionally, making a smaller incision compared to a conventional procedure noticeably results in faster recovery.
[0005] Some surgical robotic systems also use surgical navigation systems, which help the surgeons understand the location of the underlying areas in the patient’s body. With the help of a surgical navigation system, a surgeon can apply the instruments to exactly the place where the instrument has to be applied at the patient surgical site. Oftentimes the patient body is moved during surgery. Even when the patient body is moved during the surgery, the surgical navigation system may help to understand the new location of the surgical site and can make the robotic arms point to the same patient site location with respect to the changed position of the patient body. This re-indexing of the robotic arm is possible with the help of image feedback from the camera to the robot; the detection of bony landmarks of the patient site helps in getting the image signals back to the system. Hence the surgeon is assisted during the surgery by avoiding confusion due to a change of patient position during the surgery.
[0006] Robots can function autonomously or semi-autonomously. “Semi-autonomous” robots are robots that operate in a predefined path. For example, if there is a pre-defined tumor or a tissue that has to be extracted from the patient surgical site by cutting pre-defined boundaries, the semi-autonomous robots can be programmed to remove the tissue from the patient site by cutting along the pre-defined boundaries. In semi-autonomous robots, oftentimes there is a switch that has to be continuously pressed in order for the robot to function. Also, in certain semi-autonomous robots, the surgeon can use a leg or a hand to continuously press the switch for the robot to operate. Also, in some cases of semi-autonomous robots, as soon as the surgeon stops pressing the button, the robot stops functioning completely. In autonomous robots, once started, the robot performs the entire procedure with no input from the surgeon. Thus, the autonomous robot is essentially performing on its own.
[0007] There are some robots which are not traditional robots; they are actually manual robots. In these manual robots, the surgeon has to control the robots to make them function. There is a control system through which the surgeon can enter commands and may give directions to the robot as to where to place the robotic arm. Also, in these manual robots, the robotic arm can be manually moved with the help of the surgeon, and the robotic arm can then be affixed to a surgical site. From then on, the robot can help in assisting the surgeon during a surgery. Thus, these types of systems operate in manual mode, while the arm retains the positional referencing.
[0008] In some cases, the robotic system could be a combination of autonomous, semi-autonomous and even manual functions. For example, in case of orthopedic joint replacement surgery, a semi-autonomous robot function can be to remove the non-healthy bone and to cut the surface of the bone in order to prepare a healthy, smooth bone surface, so that the implant can be positioned on top. This minimizes healthy bone loss. However, in certain conditions, there may be a situation in which some extra tissue or an instrument comes in between at the surgical site, which may occlude the view of the surgical site. In such cases, the surgeon can manually switch off the semi-autonomous robot and move the robotic arm and place it back into the position desired. Once the occlusion is removed, the semi-autonomous robot can be turned back on. This example of a combination of semi-autonomous and manual robots may be widely used in many surgeries.
[0009] Oftentimes, some robotic systems also use pins and indexing arrays that are attached and inserted into remote portions of the healthy bone or healthy tissue. These pins and arrays are used to send signals back to the robot. In cases where the patient body is moved during the surgery, the robot will be able to re-index itself back to the patient surgical site. However, the use of these indexes and pin arrays may be detrimental to the patient’s health as these pins are inserted into healthy bone and healthy tissues. Thus, it would be desirable to have a system in which there are no pins and indexing arrays. Alternatively, in cases where pins and arrays are used, they should be limited to the portion of the tissue or bone that is supposed to be removed after the surgery. In any re-indexing robotic system, the system requires proper guidance and proper vision during surgery so that the robot is able to re-index itself. Hence, it is desirable for robotic systems to have improved sensing technology to provide a better picture of the surgical site.
[0010] Described herein are systems (including apparatuses and devices) and methods that can address these and other needs related to sensing technologies in robotic surgeries, as well as using in other surgical procedures.
SUMMARY OF THE DISCLOSURE
[0011] The present disclosure relates to surgical methods and devices (apparatuses and systems) that combine ultrasound sensing with other sensing techniques. The ultrasound sensor data and other sensing data can be superposed to provide more accurate information regarding location and other characteristics of anatomical structures, implants, or surgical devices within the body, for example, during a surgical procedure. Ultrasound sensors and other imaging sensors may be positioned at different perspectives with respect to one or more anatomical features within an area of interest of a patient’s body to provide location information in three-dimensions (3D). The other sensing techniques may include other imaging modalities, such as optical, radiography and/or other imaging modalities.
[0012] Also described herein are ultrasound transducer systems for sensing different regions of the human body. The ultrasound transducer system may include multiple transducers to acquire data from different fixed locations around the area of interest. The transducers may be secured to the patient’s body using one or more mechanical fixtures, such as a strap, belt, or adhesive. Alternatively or additionally, one or more moveable transducers (e.g., handheld transducers) may be moved around the area of interest to provide the different perspectives.
[0013] These systems and methods may be well suited for use in robotic surgery. The systems may be integrated into a surgical robotic system or may be a stand-alone system used in conjunction with a surgical robotic system. The systems may employ artificial intelligence (AI) techniques, including AI assisted robotic techniques. These apparatuses may provide navigation and/or treatment planning using an AI agent to identify landmarks, in particular, anatomical landmarks, to navigate the robotic device, or to assist in navigation with the robotic device. The AI-assisted image guidance may be well suited for use in any of a number of types of robotic surgeries, such as orthopedic surgeries, surgical laparoscopic removal of ovarian cysts, treatments of endometriosis, removal of ectopic pregnancy, hysterectomies, prostatectomies, and other types of surgeries.
[0014] Also described herein are methods of operating robotic surgical devices. For example, described herein are methods of operating a robotic surgical device, the method comprising: detecting one or more landmarks within an image of a first field of view of a camera associated with the robotic surgical device and within an image of a second field of view that overlaps with the first field of view using an artificial intelligence (AI); and determining the position of a robotic arm or a device held in the robotic arm by triangulating between the image of the first field of view and the image of the second field of view.
[0015] Any of these methods may include detecting pathological tissues from the first field of view and displaying an image of the image of the first field of view in which the pathological tissues are marked. Detecting pathological tissues may include determining from a pre-scan of the patient (e.g., from a plurality of CT and MRI scans) a classification and location of pathological tissues on a 3D model. In some variations, detecting one or more landmarks may include receiving CT and MRI scans to create a patient specific 3D model of a joint.
[0016] Any of these methods may include creating two separate neural networks to identify the one or more landmarks within the images of the first and second fields of view. For example, a first neural network may be trained from an input dataset comprising a preprocedure CT scan and/or MRI data. The CT scan and/or MRI data may be from a plurality of different subjects. In some variations the second neural network may be trained from an input dataset of arthroscopic camera images from a plurality of different surgeries.
[0017] Any of these methods may include producing a bounding box around a pixel location of the one or more landmarks in each of the images of the first and second fields of view. As used herein a bounding box may be any shape (not limited to rectangular), and any appropriate size.
[0018] In any of these methods, determining the position of the robotic arm or the device held in the robotic arm by triangulating between the image of the first field of view and the image of the second field of view may comprise reconstructing a depth of the landmark relative to the robotic arm or the device held in the robotic arm.
[0019] Any of the methods described herein may also include continuously refining the AI to recognize the landmark.
[0020] In some variations, the method (or an apparatus configured to perform the method) may include selecting two or more landmarks to train the AI on, wherein at least one of the landmarks is selected for having a feature with high variance and a second landmark is selected for having a feature with low variance.
[0021] The machine learning (e.g., AI) agent may be periodically or continuously updated. For example, a method or apparatus as described herein may include continuously updating a database of optical images, MRI scans and CT scans used to train the AI.
[0022] As mentioned, because of the more accurate and adaptable AI-assisted guidance (e.g., by recognizing anatomical landmarks), any of these methods may be used with a table or bed mounted robotic arm. For example, any of these methods may include coupling the robotic surgical device to a table on which a patient is positioned.
[0023] These and other aspects are described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] Novel features of embodiments described herein are set forth with particularity in the appended claims. A better understanding of the features and advantages of the embodiments may be obtained by reference to the following detailed description that sets forth illustrative embodiments and the accompanying drawings.
[0025] FIG. 1 is a block diagram illustrating how an ultrasonic sensor can integrate into a robotic system.
[0026] FIGS. 2A-2C illustrate the use of a handheld ultrasonic wand and anatomic images produced of the shoulder tissue layers; FIG. 2A shows an example of a handheld ultrasonic wand being used to gather anatomical images of a patient’s shoulder; FIG. 2B shows an example ultrasound image of internal structures of the shoulder; and FIG. 2C shows another example ultrasound image of internal structures of the shoulder.
[0027] FIG. 3A illustrates an example of three ultrasound transducers positioned around a shoulder joint using a securing strap.
[0028] FIG. 3B illustrates another example of a fixation device used to secure one or more ultrasound transducers to the patient’s body.
[0029] FIG. 3C illustrates an example of an ultrasound transducer being positioned against a shoulder using a robotic arm.
[0030] FIGS. 4A-4H illustrate an example design of a stretchable ultrasonic transducer array; FIG. 4A illustrates the transducer array structure; FIG. 4B illustrates an exploded view to illustrate each component in an element; FIG. 4C illustrates a bottom view of four elements, showing the morphology of the piezoelectric material and bottom electrodes; FIG. 4D illustrates a 1-3 piezoelectric composite; FIG. 4E illustrates a top view of four elements showing the morphology of the backing layer and top electrodes; FIG. 4F illustrates the stretchable transducer array when bent around a developable surface; FIG. 4G illustrates the stretchable transducer array when wrapped on a non-developable surface; and FIG. 4H illustrates the stretchable transducer array in a mixed mode of folding, stretching and twisting.
[0031] FIG. 5 is an example of using visual imaging for robotic arm location.
[0032] FIG. 6 illustrates one example of a method of determining depth of a target point using a visual imaging technique as described herein.
[0033] FIG. 7 shows the architecture of a system and method using AI to determine 3D position based on a set of images.
[0034] FIG. 8 schematically illustrates a method of training a system to identify landmarks in 2D images. [0035] FIG. 9 schematically illustrates a method of determining the depth of a tool (e.g., the robotic arm, an end-effector device, etc.) using multiple images to triangulate to an identified (e.g., an AI-identified) target point.
[0036] FIG. 10 is an example of identification of a target region in a series of 2D images as described herein; in this example, the images are arthroscopic images of a patient’s shoulder and the identified target region(s) is the subscapularis and the humeral head.
[0037] FIG. 11 schematically illustrates a method of using AI feature recognition to determine a location of a target region of a patient’s body and/or a location of a robotic arm. [0038] FIG. 12 schematically illustrates a method of providing 3D localization and frame of reference.
[0039] FIG. 13 is an example of procedure locations, interventions, etc. that may be planned as described herein.
[0040] FIG. 14 is an example of 3D models of the system and/or patient tissue that may be generated as described herein.
DETAILED DESCRIPTION
[0041] The apparatuses and methods described herein include surgical systems and methods of operating them. In particular, described herein are surgical methods and apparatuses for determining positioning information using a combination of imaging technologies including ultrasonic imaging, visual imaging, and/or artificial intelligence (AI) methods.
[0042] The imaging apparatuses (e.g., systems) and methods described herein can include imaging using ultrasound techniques combined with imaging using one or more other imaging modalities, such as optical, radiography and/or other imaging modalities. Optical imaging generally involves imaging techniques using light (e.g., visual wavelengths and/or infrared wavelengths, including near-IR wavelengths). The combining of different imaging modalities can improve the identification of anatomical structures. For instance, visual imaging alone generally cannot be used to visualize through opaque tissue surfaces, and pre-procedure and archetype models generally cannot identify locations of internal surfaces and features of deformable soft tissues. Coupling ultrasound imaging with other imaging modalities, such as optical imaging, can provide a more complete view of anatomical structures. For example, ultrasound imaging can be used to “see through” visually opaque tissue surfaces to provide images of internal anatomical structures (e.g., tissue layers, bone, muscle, tendon, organs, and/or pathological structures). The ultrasound images may be combined with images taken using other imaging modalities, such as visual imaging using visual wavelengths of light, of the same internal anatomical structures. Combining the different imaging modalities may also increase the accuracy of locating an anatomical structure in relation to, for example, a robotic arm end effector used to perform a surgical procedure.
[0043] In some variations, the imaging apparatuses and methods described herein may be used in performing orthopedic procedures arthroscopically (through cannulation) and/or may be used in performing open surgical procedures. In either case the procedure may be visualized by the physician. Visualization may be used in coordination with a robotic surgical apparatus. The robotic apparatus may include single or multiple automated, semi-automated, and/or manually controlled arms. The apparatuses and methods described herein may be configured to coordinate with the robotic apparatus to assist in controlling the arms, including indicating/controlling the positioning of the robotic arm(s) and/or surgical devices held within the robotic arm(s) in 3D space.
[0044] For example, FIG. 1 schematically illustrates one example of a system. In this example, the system includes a planning module 105 and a controller module 107. The planning module 105 may receive input from a variety of different inputs, including patient-specific information from one or more sources of patient imaging and anatomy (e.g., a CT/MRI input 103). The planning module 105 may receive input from an anatomy library 111 that includes one or more (or a searchable/referenced description of a plurality of different) anatomical examples of the body regions to be operated on by the apparatus, such as a body joint (knee, shoulder, hip, elbow, wrist), body lumen (bowel, large intestine, small intestine, stomach, esophagus, lung, mouth, etc.), vasculature, etc. The planning module may be configured to receive input from a physician, e.g., through one or more physician inputs 113, such as a keyboard, touchscreen, mouse, gesture input, etc. In some variations, the planning module may include one or more inputs from a tool/implant library 115. A tool/implant library may include functional and/or structural descriptions of the one or more tools that may be operated by the apparatus, including tools having end effectors for operating on tissue. The tools/implants may include cutting/ablating tools, cannula, catheters, imaging devices, etc. The tool/implant library may include a memory or other storage media storing the information on the tool/implant (for the tool/implant library) or the anatomy (for the anatomy library).
[0045] The planning module 105 may also receive images as input from one or more sensors, such as an ultrasonic imaging device 110 (e.g., ultrasonic transducers), optical imaging system (e.g., camera) and/or one or more other sensors of the system. The planning module may use the input collected by one or more of the sensors along with data from the anatomy library 111, physician input 113 and/or tool/implant library 115 to identify one or more landmarks (e.g., anatomical feature, tool and/or implant) within the collected images. The planning module 105 may use AI to iteratively train itself to recognize features within the collected images.
[0046] The planning module 105 may provide instructions and/or command information to the controller to control or partially control the robotic arm(s) and/or devices/tools (including end effectors) held or manipulated by the robotic arm(s). For example, a planning module may pre-plan one or a series of movements controlling the operation of a robotic arm(s) and/or a device/tool/implant held in the robotic arm(s). In some variations the planning module may adjust the plan on the fly (e.g., in real or semi-real time) based on the inputs, including but not limited to the physician input. The planning module 105 and/or controller module 107 may receive information from the ultrasonic imaging device 110 and/or an optical imaging device 119 (e.g., camera).
[0047] The controller module 107 may control (and provide output) to a robotic arm module 109. The robotic arm module 109 may include one or more robotic arms, such as a primary arm and one or more slave arms. In some variations the ultrasonic imaging device 110 and/or optical imaging device 119 (e.g., camera) may be attached to or otherwise associated with the robotic arm. In some variations the ultrasonic imaging device 110 and/or optical imaging device 119 (e.g., camera) may be separately affixed to the patient, or manually operated. The controller module 107 may also communicate with an artificial intelligence (AI) module 121 that may receive input from the controller module 107 and/or the planning module 105. The AI module 121 may communicate with an augmented reality (AR) module 123.
[0048] In some variations the controller module 107 may receive and transmit signals to and from one or more of the planning module 105, ultrasonic imaging device 110, optical imaging device 119, robotic arm module 109, AI module 121 and AR module in a feedback and/or feedforward configuration.
[0049] A remote database (e.g., cloud database 101) may be in communication with the planning module 105 and/or the controller module 107. The remote database may store and/or review/audit information to/from the planning module 105 and/or the controller module 107. The remote database may provide remote viewing and/or archiving of operational parameters, including control instructions, planning instructions, inputs (anatomy inputs, physician inputs, tool/implant library inputs, etc.).
[0050] The ultrasonic imaging system may include any number of ultrasonic transducers. In one configuration, the ultrasonic imaging system may include a single transducer array. The transducer array may include transducers arranged in a linear, curved, phased, or other configuration. The transducer array may include one or more rows of sensors. In some variations the sensors are arranged to optimally identify tissue layers and identify the location of the tissue layers in the working anatomic space. FIGS. 2A-2C illustrate the use of a handheld ultrasonic wand and anatomic images produced of the shoulder tissue layers. In this case, the handheld ultrasonic wand is used to gather anatomical images of a patient’s shoulder, as shown in FIG. 2A. FIG. 2B shows an example ultrasound image of internal structures of the shoulder, including the deltoid, humeral head and greater tuberosity. FIG. 2C shows another example ultrasound image of the shoulder, including the biceps tendon. As shown, the ultrasound images can be used to identify and distinguish between different anatomical structures, such as bone, muscle and tendon. In addition, transition regions between different anatomical structures can be apparent. The visual distinction is based on the echogenicity of the object. Tissues that have higher echogenicity are generally referred to as hyperechoic and are often represented with lighter colors in the images. In contrast, tissues with lower echogenicity are generally referred to as hypoechoic and are often represented with darker colors in the images. Areas that lack echogenicity are generally referred to as anechoic and are often displayed as completely dark in the images. Edges of an anatomical structure or spaces between anatomical structures may appear as light or dark outlines. The ultrasound images can also be used to visualize layers of tissue, such as within a muscle. The ultrasound images may also be used to identify pathological tissues. In some modes, the system can use ultrasonic imaging to independently landmark anatomic or surgical features, such as one or more of the anatomical features shown in FIGS. 2B and 2C. In some modes, the ultrasonic imaging can be used in conjunction with other imaging techniques to improve the accuracy of identification of the anatomical features.
[0051] The ultrasound imaging system may include more than one transducer array. The multiple transducer arrays can be positioned around an anatomic space, which may facilitate imaging regions of anatomy that are shadowed in an image of a separate transducer array. Such an arrangement may also provide different viewing perspectives, which can be combined to provide a 3D view. For instance, three transducer arrays can be set up orthogonally (e.g., perpendicularly) with respect to each other. FIG. 3A shows an example of an array of ultrasound transducers 302, 304 and 306 positioned posterior, anterior, and superior to a shoulder joint. Each of the transducer arrays 302, 304 and 306 can be configured to gather images of a hyperechoic element, such as bony objects and/or surgical instruments. In one example, one transducer array can be used to collect one or more images of the front of the shoulder joint and another transducer array can be used to collect one or more images of the rear of the joint. Collecting ultrasound images from these different perspectives can provide spatial information for construction of a 3D image of the area of interest. To prevent artifacts, the transducer arrays may be coupled and compensated. For instance, the transducer arrays may be timed to be collecting images only when another transducer array is not delivering ultrasonic energy. In some cases, one of the transducer arrays can be used to deliver energy, and more than one transducer array (e.g., including the transducer array used to deliver energy) can be used to receive energy. This arrangement can allow the system to detect internal structures by not only relying on directly reflected energy. In some examples, the transducer arrays can fire at different characteristic frequencies and function at the same time. This can allow each receiving array to know which sending array delivered the signal it received. That is, this arrangement can allow the controller(s) to determine which transducer array delivered a signal by its characteristic energy frequency(ies), as sketched below.
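For illustration only, identifying the sending array by its characteristic frequency might be sketched as follows; the array names, center frequencies and sampling rate are assumptions for the sketch rather than values from this disclosure:

```python
import numpy as np

def identify_sending_array(received, sample_rate_hz, array_freqs_hz):
    """Decide which transducer array produced a received pulse by its
    characteristic frequency (arrays fire at distinct center frequencies).

    array_freqs_hz: mapping of array name -> nominal center frequency (Hz).
    """
    spectrum = np.abs(np.fft.rfft(received))
    freqs = np.fft.rfftfreq(len(received), d=1.0 / sample_rate_hz)
    peak = freqs[np.argmax(spectrum)]
    return min(array_freqs_hz, key=lambda name: abs(array_freqs_hz[name] - peak))

# Illustrative: three arrays at 3, 5 and 7.5 MHz, sampled at 40 MHz
arrays = {"posterior": 3.0e6, "anterior": 5.0e6, "superior": 7.5e6}
t = np.arange(0, 2e-5, 1 / 40e6)
echo = np.sin(2 * np.pi * 5.0e6 * t)                 # a pulse received from one array
print(identify_sending_array(echo, 40e6, arrays))    # -> "anterior"
```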
[0052] In the example of FIG. 3A, the ultrasound transducers 302, 304 and 306 can be secured to the patient’s shoulder using one or more straps. The straps may be configured to secure the transducers 302, 304 and 306 to the patient while allowing a surgical instrument (e.g., using a robotic arm) to access the shoulder joint. Cables connected to the transducers 302, 304 and 306 can send and receive signals to and from the controller(s). Other means for securing ultrasound transducers can include adhesive (e.g., adhesive tape) and/or other mechanical fixtures. FIG. 3B shows another example of a hands-free fixation device 301 that may be used to secure an ultrasound transducer probe to a patient’s body; in this case, to the patient’s chest (ProbeFix distributed by Mermaid Medical Iberia, based in Denmark).
[0053] Alternatively or additionally, one or more ultrasound transducers may be positioned using a robotic arm. FIG. 3C shows an example of an ultrasound transducer mounted to an end effector of a robotic arm to obtain ultrasound images of a patient’s shoulder. The robotic arm can be used to secure the transducer at one location, can facilitate movement and secure positioning driven by an operator, can be programmed to adjust its position in accordance with the rest of the system, and/or can be programmed in a repeated manner to facilitate construction of a 3D image. For example, the robotic arm can move the ultrasound transducer at different angles (e.g., at orthogonal angles) relative to each other in establishing a location of the landmark in a 3D coordinate system.
[0054] In some examples, one or more ultrasound transducers is configured to operate in a handheld manner, such as shown in FIG. 2A. In some variations, multiple handheld transducers may be arranged around the anatomical area of interest to attain ultrasonic images from different angles to provide different perspectives. For example, the handheld transducer can be positioned at orthogonal positions as described herein.
[0055] In some variations, the one or multiple ultrasonic transducers may be made in a flexible array that can be adhered to the tissue surface. One example of a flexible array is shown in FIGS. 4A-4H, which is described in Hu et al., “Stretchable ultrasonic transducer arrays for three-dimensional imaging on complex surfaces,” Science Advances, 23 Mar 2018: Vol. 4, no. 3, which is incorporated by reference herein in its entirety. In some cases, tape, straps or other mechanical fixtures can be used to facilitate securing of the flexible transducer arrays on the patient. Flexible ultrasound transducers may be mounted and secured onto the tissue surface (e.g., using adhesive), which allows the transducers to remain secured to the patient without repositioning of the transducers. For example, the transducers can remain adhered on the patient and flex with the patient’s skin if the patient moves, or is moved. Thus, flexible transducers may be well suited for the AI assisted methods described herein since the transducers may be securely affixed to the patient in a particular location, thereby providing consistent location-based results. Flexible transducers may be well suited to provide ultrasonic image guidance in any of a number of robotic surgeries, like surgical laparoscopic removal of ovarian cysts, treatment of endometriosis, removal of ectopic pregnancy, hysterectomy, prostatectomy and/or orthopedic surgeries.
[0056] Advantage in position identification. Similarly, the ultrasonic system can detect tissue layers and types in a manner complementary to and independent of other modalities. This can be used to increase accuracy of identification of position. With the ultrasonic system, an understanding of tissue distance from the sensor is gained. Using this in addition to classic vision and AI techniques, a 3D location of that particular tissue can be obtained. This 3D location can be fused with other 3D positioning systems (such as AI/classic vision applied to arthroscopic images and AI applied to MRI/CT scans) to obtain a more certain estimation of 3D location in space, as sketched below.
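A minimal sketch of one way such a fusion could be performed is inverse-variance weighting of the per-modality position estimates; the weighting scheme and numeric values are assumptions for illustration, not requirements of this disclosure:

```python
import numpy as np

def fuse_position_estimates(estimates):
    """Fuse 3D position estimates from different sources into one location.

    estimates: list of (xyz, sigma) pairs, where sigma is each source's
    expected error (mm); weights are inverse-variance (an illustrative choice).
    """
    weights = np.array([1.0 / (sigma ** 2) for _, sigma in estimates])
    points = np.array([np.asarray(xyz, dtype=float) for xyz, _ in estimates])
    return (weights[:, None] * points).sum(axis=0) / weights.sum()

# e.g., ultrasound-derived, arthroscopic-vision-derived and MRI-model-derived fixes
fused = fuse_position_estimates([([12.1, 4.0, 40.8], 1.5),
                                 ([11.7, 4.4, 41.5], 2.0),
                                 ([12.5, 3.6, 40.1], 3.0)])
print(fused)
```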
[0057] The ultrasound technology could be used to generate an ultrasound image. Alternatively, the output could be retained in software as a data array. In this way, the data could be integrated with the other system data while minimizing the visual stream to the user to an optimized level of visual content.
[0058] The ultrasonic guidance techniques described herein can be used in any of a number of surgical applications, including arthroscopic surgeries, laparoscopic surgeries, open surgeries (e.g., open orthopedic surgeries) and general surgeries. The technology can provide various procedural advantages over conventional surgeries.
[0059] For example, in the case of arthroscopic shoulder repair, the ultrasound transducer arrays can be used to visualize the status and positioning of the sutures, suture management and configuration and positioning of instruments in real time. This can help the surgeon attain maximum precision in arthroscopic shoulder procedures such as those for shoulder instability (recurrent dislocation of shoulder), impingement syndrome (pain on lifting the arm), rotator cuff tears, calcific tendinitis (calcium deposition in the rotator cuff), frozen shoulder (periarthritis), and/or removal of loose bodies. Superposition of the ultrasound array images with arthroscopic view images would also help surgeons in cases of shoulder arthroscopic synovectomy for inflammatory conditions like rheumatoid arthritis and/or infections (e.g., tuberculosis), and/or synovial chondromatosis.
[0060] In case of shoulder joint, for irreparable rotator cuff tears there are currently two main options: interposition and superior capsule reconstruction, which both employ grafts in an attempt to restore joint stability and function. In both of these options, ultrasound guidance in conjunction with the arthroscopic camera view can be used in interposition and/or superior capsule reconstruction by providing information to determine the respective graft measurements.
[0061] In case of shoulder arthroscopic repair surgeries, the supraspinatus is usually seen through the lateral portal, the glenoid is assessed and the defect is measured. The ultrasound arrays can be used in assessing and measuring the sizes of various anatomical structures, implants, pathological structures, and/or can be used by the surgeon to measure the distance between two points within the joint. Artificial intelligence can be used to extract location of specific tissues within the image, and a 3D location in space can be extracted by using the location of tissue in the ultrasound image coupled with the distance to that tissue. Thus, the ultrasound arrays along with live arthroscopic feed can help the surgeon make better decisions in assessing the graft sizing. For example, the ultrasound transducer arrays can help the surgeon by providing guidance in implant positioning and suture placement. Also, in terms of measurements, with the help of the ultrasound sensing, the surgeon can measure the graft size, the bone loss, bony spurs, loose particles, degenerative conditions and/or other features. Because of real-time information during the arthroscopic procedures, the surgeon can be guided during the procedure and can be better equipped to improve his/her technique and increase patient outcomes.
[0062] In the case of the hip joint, in cases of hip impingement such as femoroacetabular impingement (FAI), labral tears, or removal of loose fragments of cartilage inside the hip joint, the use of ultrasound transducer arrays in conjunction with the arthroscopic view can help the surgeon make more informed decisions regarding the arthroscopic repairs of the hip joint. In other hip joint conditions such as avascular necrosis, osteoarthritis, septic arthritis and/or pediatric arthroscopic conditions, the use of ultrasound guidance along with the arthroscopic camera view can allow the surgeon to assess the condition of the pathology and come up with better outcomes because of proper guidance from the superposition of ultrasound with the arthroscopic view.
[0063] The ultrasound imaging techniques can also be used in knee arthroscopy procedures. Such knee arthroscopy procedures can include those for repairing meniscus tears and/or articular cartilage injuries. Knee arthroscopy is also used for ligament repairs and reconstruction (e.g., anterior cruciate ligament (ACL) reconstruction), removal of loose or foreign bodies, lysis of adhesions (cutting scar tissue to improve motion), debridement and repair of meniscal tears, irrigating out infection, lateral release (cutting tissue to improve patella pain) and/or fixation of fractures or osteochondral defects (bone/cartilage defects). The ultrasound imaging techniques can allow the surgeon to superpose the ultrasound images with the arthroscopic images, thereby allowing for better guidance and better decisions, improved precision and thus better outcomes.
[0064] Also, in case of any arthroscopy, before making arthroscopic portals, a thorough understanding of the local anatomy may be necessary to prevent damage to neurovascular structures. The ultrasound transducer arrays can provide live guidance in terms of imaging the tissues being punctured through the arthroscope and cannula. Thus, the ultrasound techniques can be used to prevent damage to neurovascular structures like the brachial plexus (in case of shoulder) or femoral artery (in case of hip).
[0065] Ultrasonic guidance can also be useful in open orthopedic surgeries such as knee replacement, hip replacement and shoulder replacement. Ultrasonic guidance can also help in a number of robotic surgery applications, such as cases where complex procedures are performed with more precision, flexibility and control than is possible with conventional techniques. Examples include kidney surgery, gallbladder surgery, radiosurgery for tumors, orthopedic surgery, cardiovascular surgery, trans-oral surgery and/or prostatectomy.
[0066] The ultrasonic imaging techniques may be used in laparoscopic procedures, which traditionally are characterized by imaging techniques that provide poor depth perception. The use of ultrasonic images in conjunction with laparoscopic images can help decipher the depth of various tissues and hence increase the safety and efficacy of laparoscopic procedures. For example, in a diagnostic laparoscopic procedure for female infertility, super-positioning the ultrasonic array images with the laparoscopic images can make the diagnosis of endometriosis, endometrial cysts, salpingitis, pelvic adhesions, ovarian cysts, pelvic inflammatory diseases (tubal abscess or adhesions) and/or tubal dye fertility investigations faster and more precise, and thus more cost efficient. Also, in surgical laparoscopic procedures, such as removal of ovarian cysts, treatment of endometriosis, removal of ectopic pregnancy and/or hysterectomy, ultrasonic image guidance can provide depth perception and/or a way to detect adhesions or cysts that might be missed using traditional laparoscopic imaging techniques.
[0067] The addition of “depth perception” from ultrasonic superposition can also prove to be a tremendous help in general laparoscopic procedures, such as cholecystectomy, appendectomy, adrenalectomy, diaphragmatic hernia repair, weight loss procedures (e.g., Roux-en-Y gastric bypass, sleeve gastrectomy or adjustable gastric band), Nissen fundoplication and/or Heller myotomy.
[0068] Ultrasonic superposition can be helpful in surgeries to remove lumps and/or cancerous masses in any part of the body. For example, during the surgery the surgeon can check for any remnants of a mass in a cavity and at the same time assess lymph node enlargement in and around the region of the mass. Ultrasonic superposition may provide guidance and allow the surgeon to be in better control of the operation. Thus, the surgeon can make a more informed decision at every stage of a surgery.
[0069] In some procedures, one or multiple robotic arms may be used during the procedure. In one modality, a single (master) arm has a precisely determined location relative to the physiologic target. Other robotic arm(s) (slave(s)) can receive positional information from the master arm to determine their positions relative to it. In other configurations, there may be a primary arm for locating the system relative to a physiologic target, and some or all of the other arms may be used to provide supplemental information. Multiple arms can also be complementary in that each of the arms can provide locational information that, compiled together, gives precise information about all of the elements of the system. In some modalities, location sensing elements are not attached to a robotic arm but are precisely placed relative to the physiologic feature.
[0070] Sometimes, in order to find the location of the surgical site and the bony landmarks, physical screws may be used to help determine the position of the bony landmarks and hence index the robotic arm with respect to the physiological site in space. These screws can be placed on the physiological landmarks, and the robotic arm’s end effector can be used to touch each screw and additional surrounding locations so that the robot can find its position in 3D space. In joint replacement surgeries the patient’s joint is often moved during the surgery, but even in such cases the robot will be able to re-index itself back to the surgical site, thus helping the surgeon during surgery.
[0071] Sensors can also be incorporated to help protect the safety of the patient and doctor. For example, if there is a disturbance or an instrument in the line of action, these sensors can cause the robotic arm to stop so that it does not disturb the surgical site or injure anybody during the surgery. Similarly, other sensors, which could be based on optical sensing, could help stop the robot immediately in position if any limits are reached; these could be position, velocity or acceleration limits.
[0072] Alternatively, and/or additionally, ultrasound imaging and optical imaging may be used for determining the positioning of the arm(s) (e.g., master robotic arm, etc.). For example, FIG. 5 shows an example of using optical imaging for master arm location. In FIG. 5, a robotic arm 501 holds a device including a touch end effector 506 that contacts a bony landmark of the patient’s arm 507. In general, the apparatuses and methods described herein may be configured to perform orthopedic procedures arthroscopically (through cannulation) and/or through open surgical procedures. In either case the procedure may be visualized by the physician. Ultrasonic arrays and visualization may be used in coordination with the localization of a robotic arm, and the resulting depth perception may also be very helpful for the surgeon. As mentioned, in addition or alternatively to using touch for positioning, a visualization system (subsystem) and ultrasonic arrays may be used for position detection. The 2D image obtained from the arthroscopic camera can be used to identify physiologic landmarks. Various classical vision techniques can be used to identify particular landmarks by detecting 2D feature points (e.g., based on color, shape and/or orientation) unique to that landmark. Example feature extraction algorithms include SIFT (Scale-Invariant Feature Transform), ORB (Oriented FAST and Rotated BRIEF), and SURF (Speeded Up Robust Features).
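For illustration only, the following is a minimal sketch of classical 2D feature extraction for a candidate landmark region using OpenCV’s ORB detector (one of the algorithms named above). The function name, the optional bounding-box argument and the example frame are assumptions for illustration and are not part of the disclosure.

    import cv2

    def extract_landmark_features(image_bgr, region=None):
        # Detect ORB keypoints and descriptors, optionally restricted to a
        # rectangular region of interest (e.g., a candidate landmark).
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        mask = None
        if region is not None:
            x, y, w, h = region  # bounding box of the candidate landmark
            mask = cv2.rectangle(gray * 0, (x, y), (x + w, y + h), 255, -1)
        orb = cv2.ORB_create(nfeatures=500)
        keypoints, descriptors = orb.detectAndCompute(gray, mask)
        return keypoints, descriptors

    # Example usage on a hypothetical arthroscopic frame:
    # frame = cv2.imread("arthroscopic_frame.png")
    # kps, desc = extract_landmark_features(frame, region=(120, 80, 200, 150))

Descriptors extracted in this way can then be matched between views of the same landmark, as discussed below.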
[0073] Once the 2D feature points are detected for a landmark, a 3D position can be extracted by using a stereo camera projection algorithm. The 2D feature point may provide the x and y locations in space, and the z distance of the 3D position may then be determined. Rather than analyzing only one image view of the detected landmark, a depth estimate can be obtained by analyzing two views of the same landmark and tracking the displacement of detected features across these two images. The robotic arm may obtain the first image view of the landmark, 2D features of the landmark may then be extracted, and the robotic arm may then shift the arthroscopic camera horizontally by a small fixed distance to obtain the second view of the landmark (e.g., as part of a triangulation procedure). The 2D features of the landmark may again be extracted. Matching feature points across the two images may then be found, and the displacement between these points may be determined. A relationship in which depth is proportional to 1/displacement may allow the determination of the z distance of the 3D position. This is illustrated, for example, in FIG. 6.
[0074] In this example, the visual imaging could also be coupled with one or more “touch” techniques as described above. A tool could touch the target tissue and could, for example, have calibrated bands or known physical dimensions. These could then be correlated to the pixels in the image and therefore provide calibrated indexing to the rest of the visible field. In FIG. 6, either the same camera may be used and moved between the right and left positions, or separate left and right cameras may be used, to take images of a target position (point P(x,y,z)) in space. As mentioned above, the x and y positions may be determined from the 2D image, and the changes based on a known separation between two or more images (e.g., left camera, right camera) may be used to determine the depth and therefore the z distance.
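As one non-limiting illustration of the geometry described above for FIG. 6, the following sketch recovers a 3D point from two views separated by a known horizontal baseline, assuming a calibrated pinhole camera; the focal length, principal point and example pixel values are hypothetical and are not taken from the disclosure.

    def triangulate_point(u_left, v_left, u_right, focal_px, baseline_mm, cx, cy):
        # (u_left, v_left): pixel location of the matched feature in the first view;
        # u_right: its column in the second view after a purely horizontal shift.
        disparity = u_left - u_right              # displacement between the two views
        if disparity <= 0:
            raise ValueError("feature must have positive disparity")
        z = focal_px * baseline_mm / disparity    # depth scales with 1/displacement
        x = (u_left - cx) * z / focal_px
        y = (v_left - cy) * z / focal_px
        return x, y, z

    # Example with assumed values: feature at column 640 (left view) and 628
    # (right view), focal length 800 px, 2 mm camera shift, principal point (640, 360):
    # print(triangulate_point(640, 360, 628, 800, 2.0, 640, 360))  # z is about 133 mm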
Using AI for identifying location
[0075] Artificial Intelligence (AI) based algorithms can be used in place of, or in conjunction with, classical vision techniques to further improve the efficiency and accuracy of the robotic arm positioning in any of the apparatuses and methods described herein. With classical vision techniques it may be difficult to generalize the 2D features of a landmark across various patients, causing inaccuracies when determining the precise location of a landmark. AI can be employed by training a system across a large dataset of images in which a ground truth is provided dictating the correct position of a particular landmark for every image. The ground truth is a bounding box drawn around the desired landmark to be detected in each training image. The trained system may then be tested on a separate database of new images, and the detections produced by the system are compared against the ground truth to determine the accuracy of the detection. The train and test steps are repeated until an acceptable accuracy is achieved.
[0076] For training an AI system using images, a deep neural network called a Convolutional Neural Network (CNN) can be used to learn the key features of a landmark across the database of images. FIG. 7 illustrates the architecture of a CNN. It takes an image as input and then assigns importance to key aspects of the image by creating weights and biases. The image is passed through several layers to extract different features, thereby creating various weights and biases. These weights and biases are adjusted over many rounds of training and testing, and they determine how well the network performs, until an acceptable accuracy is achieved. Once a landmark has been correctly classified in an image, a bounding box may be produced as an output, which provides an accurate pixel location of the landmark in the 2D image. From here, the classical vision camera projection algorithm described above may be used to determine the depth portion of the 3D position.
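For illustration, the following is a minimal PyTorch sketch of a CNN that classifies a landmark and regresses its bounding box, in the spirit of FIG. 7; the layer sizes and the two-head design are assumptions for illustration and do not represent the architecture shown in the figure.

    import torch
    import torch.nn as nn

    class LandmarkCNN(nn.Module):
        def __init__(self, num_classes):
            super().__init__()
            # Convolutional layers extract features; their weights and biases are
            # adjusted over rounds of training and testing.
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            )
            self.classifier = nn.Linear(64 * 4 * 4, num_classes)  # which landmark
            self.box_head = nn.Linear(64 * 4 * 4, 4)              # (x, y, w, h)

        def forward(self, x):
            f = self.features(x).flatten(1)
            return self.classifier(f), self.box_head(f)

    # Example: a batch of one 3-channel 128x128 image.
    # logits, box = LandmarkCNN(num_classes=3)(torch.randn(1, 3, 128, 128))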
[0077] By using AI to determine a 3D position of the robotic arm, calibration activities may not be required. Each procedure could also be used to further build the image database and improve the accuracy of the neural network. For example, the trained neural network can detect a region (landmark) within the current image (following a similar architectural flow to that depicted in FIG. 7), which can then be used to determine a 3D position in space; this information may be used to determine the relative position of the arm(s) to the body and/or to each other. An apparatus or method as described herein may include recordings from real or simulated procedures. For example, a system may then be trained so that in subsequent procedures it recognizes location without calibration activities. Each procedure may be used to further build and refine the capability of the apparatus (e.g., the AI algorithm). Alternatively, or in conjunction, training may be performed using existing databases of physiologic information such as pre-procedure MRIs or CT scans.
Using AI in Pre-Procedure to Identify Pathological Tissues
[0078] In addition to using AI on the arthroscopic images obtained during the surgery, a CNN can be trained on CT scans and MRI images to aid in the pre-surgery analysis. The AI can be used in pre-procedure planning to visualize pathological tissues in a 3D model, and then help the surgeon locate them during the surgery and further help remove these tissues during the procedure. In the case of joint replacement surgeries, with the help of real or simulated procedures and pre-procedure MRIs and CT scans, the system may be trained to identify pathological tissues such as osteophytes, calcific tendinitis and pannus formations, and also to check for synovitis, osteitis and bone erosions. Further, during surgical procedures, the AI algorithm may be trained to recognize these pathological tissues and study their removal with safe margins from healthy tissues, automatically refining the capability of the AI algorithm further.
[0079] The methods and apparatuses described herein may include a pre-procedure analysis that includes training the neural network on CT scans and MRIs, rather than using the arthroscopic 2D image as input to the CNN. The network is trained in a similar fashion to detect landmarks and pathological tissue in these scans. A separate database of CT scans and MRIs may be used where, again, each scan has the ground truth bounding box drawn for that landmark. This database may be fed into the CNN so that it may be trained and tested in the same fashion as explained above until an acceptable detection accuracy is met.
[0080] In any of the methods and apparatuses described herein, the AI, ultrasonic imaging and visual imaging may be used together. For example, any of these apparatuses and methods may employ both classical vision techniques and AI based algorithms to determine a 3D location of the robotic arm, as well as detect landmarks and pathological tissues from CT and MRI scans while creating a patient-specific 3D model of the joint. Two separate neural networks may be trained, e.g., a first having an input dataset of general pre-procedure CT scan and MRI data from various patients and a second having an input dataset of arthroscopic camera images from various surgeries. The trained neural network in both cases may produce a bounding box of the pixel location of a landmark in the image/scan. In the case of an arthroscopic image, a camera projection technique may be used to reconstruct the depth from the 2D image, providing a 3D location in space, while in the case of a CT or MRI scan, a classification and location of pathological tissues may be produced on the 3D model. A database of images and MRI/CT scans will be built, and as more procedures are performed, this database will continue to grow, thereby further improving the accuracy of both AI networks and creating a more robust system. The neural network model may be continuously refined until minimal error and convergence are achieved.
[0081] When training each AI network, transfer learning can be used to reduce training time. Transfer learning allows the knowledge learned by previously trained networks to be used as the basis for network training. As an example, the Faster Region-based CNN with Inception ResNet v2 model can be used, which was pre-trained on the COCO dataset of common everyday objects. Although the image content of the COCO dataset is vastly different from surgical videos and CT/MRI scans, the features learned by the COCO pre-trained network may share some similarity with the medical imaging. By using this knowledge, the neural networks may train faster and have improved performance, allowing for a higher prediction accuracy.
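As a hedged sketch of this transfer-learning step: rather than the specific model named above, the example below uses torchvision’s COCO pre-trained Faster R-CNN (ResNet-50 FPN) as a stand-in and swaps its box predictor so it can be fine-tuned on the labeled surgical images. The choice of stand-in model and the class count are assumptions for illustration, not part of the disclosure.

    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    def build_landmark_detector(num_classes):
        # num_classes includes the background class, e.g. 2 for one landmark type.
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        # Replace the COCO head with a head sized to the surgical landmark classes,
        # keeping the pre-trained backbone weights as the starting point.
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
        return model

    # model = build_landmark_detector(num_classes=2)
    # Fine-tuning then proceeds on (image, {"boxes", "labels"}) pairs drawn from
    # the labeled surgical-image database described above.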
[0082] Key hyper-parameters can be fine-tuned to allow for improved accuracy while training the neural network. A first is the learning rate, which controls how much the weights of the neural network are adjusted with respect to the loss gradient. The goal when training a network is to find the minimal error possible, and the learning rate helps define both the speed at which this minimum error is found and the accuracy of the result. A lower learning rate increases the likelihood of finding the lowest error but also increases training time, so adjusting the learning rate to balance these two factors may be crucial. The next important hyper-parameter is regularization, a general class of techniques used to reduce error on the test set by generalizing the model beyond the training set. Often, acceptable loss values are seen in the training phase but not during the test phase; regularization helps by generalizing the model across training and testing. L2 is one such technique, in which larger weights in a network are penalized to constrain them from dominating the outcome of the network. The last important hyper-parameter is the choice of optimizer used to improve training speed and result accuracy. The momentum optimizer is one such algorithm, which accelerates finding the minimum training loss by building up velocity as it approaches that minimum. Since a pre-trained model is being utilized, the hyper-parameters set in that model can be used as a starting point, which will give a better result than selecting values at random. The parameters can be further tuned as iterations of training continue.
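For illustration only, the three hyper-parameters discussed above (learning rate, L2 regularization and a momentum-based optimizer) might be set as in the following sketch; the numeric values are placeholders to be tuned and are not values from the disclosure.

    import torch

    def make_optimizer(model, lr=1e-3, momentum=0.9, weight_decay=1e-4):
        # weight_decay applies an L2 penalty that discourages large weights;
        # momentum builds up velocity toward the minimum training loss.
        return torch.optim.SGD(
            model.parameters(), lr=lr, momentum=momentum, weight_decay=weight_decay)

    # A schedule can lower the learning rate as training progresses, trading off
    # convergence speed against the risk of overshooting the minimum error:
    # optimizer = make_optimizer(model)
    # scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)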
[0083] Another factor that can allow for improved prediction accuracy is choosing a feature-rich landmark to detect in the surgical video. Two difficulties arise when trying to achieve a high prediction accuracy in surgical videos. First, landmarks may look similar to each other (for example, the glenoid and humeral bone may, to a neural network, look similar in color). Second, landmarks may not have many distinguishing features. To overcome these two factors, a landmark may be chosen that has variance in color and shape. For example, the subscapularis tendon in the shoulder may be a good candidate for detection as it has a difference in colors and distinct color patterns. This may not be enough, however, so rather than just labeling images to show the ground truth of the subscapularis tendon, more than one landmark may be labeled within the same ground truth bounding box. For example, instead of only labeling the subscapularis in the training images, it may be better to label both a portion of the subscapularis and a portion of the humeral head where the subscapularis connects. This provides the neural network with more features to learn from, since there are more color features and more shape features to differentiate the class from other classes. Also, ultrasonic imaging will provide additional support to the surgeon to verify the position of the tissues and bones in space.
[0084] Thus, in some variations, the methods and apparatuses described herein may determine one or more features to train on based on the variance of the one or more features in a given factor within a training set, such as variance in color, variance in shape, etc. (which may be manually or automatically determined). In some variations, a second feature may be combined with the first feature, where the second feature has a low degree of variance in the same (or different) factor(s).
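A hedged sketch of how such a variance measure might be computed is shown below; the scoring rule (per-crop color variance averaged over the labeled crops of each candidate landmark) and the function names are illustrative assumptions rather than a method required by the disclosure.

    import numpy as np

    def color_richness(crop):
        # Mean per-channel pixel variance within one labeled crop (HxWx3 array):
        # higher values indicate more internal color variation (a feature-rich region).
        pixels = crop.reshape(-1, 3).astype(float)
        return float(pixels.var(axis=0).mean())

    def rank_landmarks(crops_by_landmark):
        # crops_by_landmark: dict mapping landmark name -> list of labeled crops.
        scores = {name: float(np.mean([color_richness(c) for c in crops]))
                  for name, crops in crops_by_landmark.items()}
        return sorted(scores, key=scores.get, reverse=True)  # most feature-rich first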
[0085] FIG. 8 illustrates an example method of training an AI for an apparatus (e.g., system), including providing the database of images (e.g., ultrasonic images or optical images) for which landmarks have been labeled 801. For example, in FIG. 8, the database of images with labeled landmarks 801 may be provided and used to train or test a neural network 803. If the accuracy of the trained network is below a threshold (or acceptable accuracy) 805, then it is retrained/tested; otherwise the network may be used to determine the location of a target and surround it with a 2D bounding box on the image that includes the target 807.
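For illustration, the train/test loop of FIG. 8 can be sketched as follows; the helper callables and the accuracy threshold are assumptions supplied by the caller and are not specified in the disclosure.

    def train_until_acceptable(model, train_db, test_db, train_one_round,
                               evaluate_accuracy, threshold=0.90, max_rounds=50):
        # train_one_round(model, db): fit the network on the labeled images (801/803).
        # evaluate_accuracy(model, db): compare detections against ground truth (805).
        for _ in range(max_rounds):
            train_one_round(model, train_db)
            if evaluate_accuracy(model, test_db) >= threshold:
                return model  # acceptable accuracy reached; ready to output boxes (807)
        raise RuntimeError("acceptable accuracy not reached; expand or relabel the database")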
[0086] FIG. 9 illustrates an example method of training an apparatus including an imaging system (e.g., ultrasound imaging system or optical camera system) to extract features from two or more pictures (e.g., 2D images), preferably separated by a known distance, to determine, based on a common feature within the images, the coordinates (and particularly the depth) of the imaging source (e.g., ultrasonic transducer or optical camera on a robotic arm or arthroscope), and therefore the coordinates of the robotic arm and/or a device held by the robotic arm. For example, a pair of images 901, 903 may be taken from two positions a known distance apart, as illustrated and described above. The AI system 905 may then be provided with both of the images (e.g., all of the images). Using these two images, separated by a known distance, one or more landmarks may be detected in these images 907, 907’ and a bounding box provided around a target feature(s). The same or a different target feature (or a point or region of the feature) may be extracted 911 for each image. The features between the two images may be matched and depth information may be extracted 713, using the known separation of the imaging source between the images, as shown in FIG. 6 and described above. Thus, a location of the robotic arm (or a surgical device on the robotic arm) and/or a location of a target region of the patient’s body in 3D space may be determined 915.
[0087] FIG. 10 illustrates one example of the methods described above, in which an image is taken by a camera (an arthroscopic view of a shoulder). The system may accurately detect the portion of the subscapularis attached to the humeral head, as shown in the bounding box 1001 marked in FIG. 10. This identified region from each of the images may then be used to determine the location of the tool 1003 and/or robotic arm. As the robotic arm and/or tool move, the images may be constantly updated. In some variations, multiple images may be taken and identified from different positions to triangulate the depth and thus the x, y, z position of the tool and/or robotic arm. In this example, the system recognizes both the subscapularis and the humeral head. Similar recognition and triangulation can be performed using ultrasound images to determine the x, y, z position of a region of interest, such as a feature within the bounding box 1001 (e.g., a portion of the subscapularis or humeral head).
[0088] FIG. 11 illustrates an example method of using AI feature recognition as part of a planning module (see, e.g., FIG. 1) to determine a location of a target region of a patient’s body and/or a location of a robotic arm and/or tool. For example, in FIG. 11, a planning module may be trained to recognize a feature using a training database 1101 (e.g., anatomy library, physician input, and/or tool/implant library). The trained planning module may perform a preliminary feature recognition 1107 for one or more landmarks/features in one or more current images 1105 (e.g., collected from an ultrasound transducer and/or camera on the robotic arm or a device held by a robotic arm), applied based on one or more previous images 1103 (e.g., collected from an ultrasound transducer and/or camera on the robotic arm or a device held by a robotic arm). Once the feature recognition(s) is/are determined to be sufficiently accurate (e.g., see FIG. 8), the planning module can determine a location in 3D space (e.g., see FIG. 9) of a target region within the patient’s body and/or the robotic arm 1109, and update the planning module accordingly. The location information may also be used to determine a distance between the robotic arm (or a surgical device on the robotic arm) and the location of the one or more landmarks, or a distance between two points within the patient’s body. The location information can be sent as output to the controller, robotic arm (e.g., master arm and/or slave arm) and/or an AR module 1111.
[0089] FIG. 12 illustrates an example method of combining ultrasound imaging and optical imaging to provide 3D localization using a planning module (see, e.g., FIG. 1). Ultrasound images may be received from multiple perspectives 1201, for example, at different angles (e.g., orthogonally arranged) relative to each other. For example, two or more ultrasound transducers may be positioned around an area of interest (e.g., see FIG. 3A) and/or one or more ultrasound transducers may be moved around the area of interest (e.g., see FIG. 3C). A location in 3D space of a landmark (or other feature) in the ultrasound images may be determined 1203. This can include using a known reference distance and/or reference angle. For example, a known distance between an ultrasound transducer in a first position and the ultrasound transducer in a second position (e.g., see FIG. 3C) may provide a reference distance. In another example, a known distance between a first ultrasound transducer and a second ultrasound transducer (e.g., see FIG. 3A) may provide a reference distance. In some cases, a known angle between the different perspectives of the ultrasound transducer(s) may be used to provide a reference angle. Determining the location in 3D space may include surrounding the area of interest in a 2D bounding box, as described herein. The system can be configured to receive one or more optical images that include the landmark 1205. As described herein, the optical image(s) may be taken by a camera associated with a robotic arm (e.g., on the robotic arm or held by the robotic arm). In some instances, the optical image(s) may be taken by a camera on or attached to an endoscope (e.g., arthroscope). The planning module may establish a frame of reference for the optical image by correlating the landmark in the ultrasound and optical images 1207. Such correlation may be performed by extracting and matching features between the ultrasound and optical images. Since the ultrasound and optical images are collected using different modalities, the same objects within the images may appear different. Thus, the correlation may include adjusting for any such appearance differences. Further, some objects appearing in images taken using one imaging modality may not be as apparent (or not apparent at all) in images taken using a different modality. For example, ultrasound imaging may be able to detect anatomical structures within the body that visual imaging techniques cannot detect due to opaque tissue surfaces. Correlating the location of such features as detected by ultrasound imaging with surface images detected by visual imaging can provide a frame of reference for the optical image. Likewise, some anatomical structures may be better visualized using visual imaging compared to ultrasound imaging, and establishing a frame of reference for the visual image can help to identify the location of such features in the ultrasound images. In some variations, the ultrasound and optical images may be combined and displayed as a combined image 1209. The combined image may include the optical image(s) overlaid on the ultrasound image(s), or vice versa, using an augmented reality module (e.g., see FIG. 1). In other variations, only the ultrasound images or the optical images are displayed, for example, to provide the images at higher speed. In such cases, the single-modality images may include labels or markings identifying landmarks or other features of interest.
In some instances, the combined images or single-modality images are displayed in real time, for example, to guide a surgeon during a surgical procedure.
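For illustration only, step 1203 might be sketched as below for the special case of two co-registered, orthogonal ultrasound planes sharing a common origin; the axis convention and the overlay example in the final comment are assumptions, not requirements of the disclosure.

    def landmark_3d_from_orthogonal_views(view_a, view_b):
        # view_a: (lateral, depth) coordinates (converted from pixels to mm) of the
        #         landmark in the first imaging plane (taken here as the x-z plane).
        # view_b: (lateral, depth) coordinates in the second, orthogonal plane (y-z).
        x, z_a = view_a
        y, z_b = view_b
        # Both planes image the same landmark, so their depth estimates should agree;
        # averaging reduces measurement noise.
        return x, y, (z_a + z_b) / 2.0

    # Once the optical image is registered to this frame of reference (step 1207),
    # a combined display (step 1209) could be produced with a weighted blend, e.g.:
    # combined = cv2.addWeighted(optical_registered, 0.6, ultrasound_bgr, 0.4, 0)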
[0090] In general, it may be advantageous to include AR feedback for the physician in any of the methods and apparatuses described herein. For example, AR may allow a physician to assess where they are (e.g., within the patient) without having to manipulate the imaging camera. Thus, a system may be constructed with AI data, from pre-recorded MRI/CT data, or from other database sources. The AR image(s) could be displayed on a headset or on a monitor, such as the monitor that is displaying the current camera view. Tissue image layers may be presented in a way that facilitates the procedure; these could be image layers that can be turned on or off, layers that can change in transparency, or layers with other useful characteristics. Tissues such as epidermis, sub-cutaneous fat, sheaths, muscles, cartilages, tendons, bones, lymphatic systems, vascular structures and nerves could be displayed in different colors or patterns. This may assist the surgeon in accessing and assessing the surgical procedure with ease. This may also shorten the learning curve, allowing even less-experienced doctors to be better equipped to operate independently with the help of these tools. This may also be used for simulation of surgeries, for training of medical residents, and/or for continuing medical education for newer procedures.
[0091] Any of the methods and apparatuses described herein may also or alternatively have associated procedure planning tools. These tools could be constructed from pre-procedure MRI or CT data. They could be used to precisely plan the procedure, including the location of screws or other procedural elements used in reconstruction. The motions and loading of the joints at the end of the procedure could then be mapped out in advance. The procedure could then be executed more efficiently, and the result may be more precise and durable. FIG. 13 illustrates examples of procedure locations, interventions, etc. that may be planned as described herein.
[0092] Any of these methods may also include one or more 3D models, which may be constructed from images. Simulations of motions, loads and interventions may be performed using the 3D models. For example, FIG. 14 illustrates examples of 3D models, and in particular examples of how 3D models can be constructed from images.
[0093] For example, pre-procedure MRI or CT data slices may be used to construct a 3D model of the patient’s tissue. In the case of an orthopedic joint replacement procedure, pre-procedure MRI or CT data may be used to prepare a 3D model of the joint, which could also show pathological tissues such as calcifications, osteophytes, pannus, and/or bone erosions. This 3D model may also allow pre-planning of the surgical procedure, and different joint implants may be placed/overlapped onto the 3D model to check for the best fit for the patient. The range of motion and joint loads can also be simulated with the help of this 3D model. The plan may then be saved for intra-operative review. Further, the ultrasonic arrays can help to reconfirm the incision angle and the proper positioning of the instruments during the surgery.
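As a hedged illustration of constructing such a 3D model from stacked CT slices, the sketch below uses the marching-cubes algorithm from scikit-image; the Hounsfield-unit threshold is an assumed bone-density cutoff, not a value from the disclosure.

    import numpy as np
    from skimage import measure

    def bone_surface_from_ct(ct_volume_hu, threshold_hu=300):
        # ct_volume_hu: 3D array of Hounsfield units (slices stacked along axis 0).
        # Returns mesh vertices and faces approximating the bone surface.
        verts, faces, normals, values = measure.marching_cubes(
            ct_volume_hu.astype(np.float32), level=threshold_hu)
        return verts, faces

    # The resulting mesh can be loaded into the planning module to overlay candidate
    # implants and simulate range of motion, as described above.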
[0094] Any of the apparatuses and methods described herein may be used intra-operatively. A robotic system could help in assessing the accuracy of the procedure and its deviations from the planned procedure. In case of any deviations, a robotic system could assist the surgeon by providing a real-time view of the operation plane on the 3D model. For example, in the case of knee replacement surgeries, the surgeon can also perform valgus and varus stress tests during the surgery and check the stressed range of motion under these stress test scenarios on the 3D model. With the help of these stressed range of motion tests, it may be easier to fix the implant and assess the post-operative range of motion. The live feedback may allow a surgeon to modify the surgery in real time by making adjustments. Similarly, in the case of shoulder replacement surgeries, the robotic system may help by providing a real-time view of glenoid retroversion and inclination, and the system can also show in real time the reaming and drilling depth and screw placement positions. This real-time feedback may give the surgeon an added ability to adjust the surgical plan intraoperatively, offering more flexibility during these complex procedures. The robotic system may help reproduce the preoperative plan with precise execution intraoperatively and help the surgeon by providing live feedback, allowing for real-time adjustments.
[0095] In shoulder replacement surgery, one of the common challenges faced by the surgeon is assessing the center line for the glenoid placement. This could be addressed by the use of a robotic system, and the central peg hole could be made with the help of live guidance from the robotic navigation system and the ultrasonic arrays for accurate visualization. A robotic system with an ultrasonic vision end effector as described herein could also help provide clear visibility into the glenoid vault in real time and thus provide more consistent, accurate glenoid placement in both standard cases and challenging cases of compromised rotator cuffs. Any of the robotic systems described herein could help achieve more accurate glenoid placement and thus minimize complications and increase implant survivability for the patients. With the help of any of these robotic apparatuses (e.g., systems), the shoulder replacement surgery may be performed according to the planned implant placement, giving accuracy in the version, inclination, tilt and implant placement; this may be a great improvement over standard surgeries or even surgeries performed with patient-specific instrumentation.
[0096] A robotic surgical navigation system as described herein may index itself in relation to the bony landmarks. Thus, when the patient’s joint is moved, the robotic surgical system can reposition itself to point toward the surgical site. For example, in the case of shoulder replacement surgery, the humerus may be dislocated anteriorly several times during the procedure in order to prepare the head of the humerus for the implant and its proper positioning in the glenohumeral joint. With the help of planning tools, ultrasound arrays and intra-operative visualization, a surgeon can virtually position the implant and balance the soft tissue in relation to the implant even in cases of joint movements during surgery. Similarly, in the case of knee replacement surgery, the knee joint is moved in order to perform the valgus and varus tests and carry out the stressed range of motion tests. In these cases of movement of the patient’s joint, the robotic surgical navigation system may re-index itself in relation to the joint of the patient and reorient itself back in relation to the surgical plan.
[0097] Robotic systems as described herein may help reduce damage to healthy tissues and minimize scar and adhesion formation due to increased precision during surgery. This may lead to less pain and suffering for the patient, faster recovery, a custom fit, extra flexion and range of motion for the patient, and shorter hospital stays. The rehabilitation of such patients could be easier, and there could be a lower chance of revision surgeries.
[0098] Another use of the robotic apparatuses described herein in joint replacement surgeries is in challenging cases, post-fracture cases or revision surgeries. For example, in a revision shoulder replacement, the axillary nerve is often displaced and difficult to identify in the presence of scar tissue and adhesions. It is important to avoid damage to the axillary nerve during the incisions. Similarly, in shoulder replacement surgeries in patients with prior shoulder fractures, the bone structure, axillary nerve and rotator cuff are often misaligned in comparison to a healthy shoulder joint. The incisions to the fascia should be made such that the axillary nerve is salvaged and kept away from the path of the operation. In such cases, any of the robotic systems (including AI) described herein, along with the visualization capabilities of the ultrasonic arrays, may provide added accuracy in identifying bony landmarks and tissue landmarks such as nerves and tendon insertions onto the bone, even in the presence of scar tissue and adhesions.
[0099] Any of the methods (including user interfaces) described herein may be implemented as software, hardware or firmware, and may be described as a non-transitory computer-readable storage medium storing a set of instructions capable of being executed by a processor (e.g., computer, tablet, smartphone, etc.) that, when executed by the processor, causes the processor to control or perform any of the steps, including but not limited to: displaying, communicating with the user, analyzing, modifying parameters (including timing, frequency, intensity, etc.), determining, alerting, or the like.
[0101] When a feature or element is herein referred to as being “on” another feature or element, it can be directly on the other feature or element or intervening features and/or elements may also be present. In contrast, when a feature or element is referred to as being “directly on” another feature or element, there are no intervening features or elements present. It will also be understood that, when a feature or element is referred to as being “connected”, “attached” or “coupled” to another feature or element, it can be directly connected, attached or coupled to the other feature or element or intervening features or elements may be present. In contrast, when a feature or element is referred to as being “directly connected”, “directly attached” or “directly coupled” to another feature or element, there are no intervening features or elements present. Although described or shown with respect to one embodiment, the features and elements so described or shown can apply to other embodiments. It will also be appreciated by those of skill in the art that references to a structure or feature that is disposed “adjacent” another feature may have portions that overlap or underlie the adjacent feature.
[0102] Terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. For example, as used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items and may be abbreviated as “/”.
[0103] Spatially relative terms, such as “under”, “below”, “lower”, “over”, “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is inverted, elements described as “under” or “beneath” other elements or features would then be oriented “over” the other elements or features. Thus, the exemplary term “under” can encompass both an orientation of over and under. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. Similarly, the terms “upwardly”, “downwardly”, “vertical”, “horizontal” and the like are used herein for the purpose of explanation only unless specifically indicated otherwise.
[0104] Although the terms “first” and “second” may be used herein to describe various features/elements (including steps), these features/elements should not be limited by these terms, unless the context indicates otherwise. These terms may be used to distinguish one feature/ element from another feature/element. Thus, a first feature/element discussed below could be termed a second feature/element, and similarly, a second feature/element discussed below could be termed a first feature/element without departing from the teachings of the present invention.
[0105] Throughout this specification and the claims which follow, unless the context requires otherwise, the word “comprise”, and variations such as “comprises” and “comprising”, mean that various components can be co-jointly employed in the methods and articles (e.g., compositions and apparatuses including devices and methods). For example, the term “comprising” will be understood to imply the inclusion of any stated elements or steps but not the exclusion of any other elements or steps.
[0106] In general, any of the apparatuses and methods described herein should be understood to be inclusive, but all or a sub-set of the components and/or steps may alternatively be exclusive, and may be expressed as “consisting of” or alternatively “consisting essentially of” the various components, steps, sub-components or sub-steps.
[0107] As used herein in the specification and claims, including as used in the examples and unless otherwise expressly specified, all numbers may be read as if prefaced by the word “about” or “approximately,” even if the term does not expressly appear. The phrase “about” or “approximately” may be used when describing magnitude and/or position to indicate that the value and/or position described is within a reasonable expected range of values and/or positions. For example, a numeric value may have a value that is +/- 0.1% of the stated value (or range of values), +/- 1% of the stated value (or range of values), +/- 2% of the stated value (or range of values), +/- 5% of the stated value (or range of values), +/- 10% of the stated value (or range of values), etc. Any numerical values given herein should also be understood to include about or approximately that value, unless the context indicates otherwise. For example, if the value “10” is disclosed, then “about 10” is also disclosed. Any numerical range recited herein is intended to include all sub-ranges subsumed therein. It is also understood that when a value is disclosed that is “less than or equal to” the value, “greater than or equal to” the value and possible ranges between values are also disclosed, as appropriately understood by the skilled artisan. For example, if the value “X” is disclosed, then “less than or equal to X” as well as “greater than or equal to X” (e.g., where X is a numerical value) is also disclosed. It is also understood that throughout the application data is provided in a number of different formats, and that this data represents endpoints and starting points, and ranges for any combination of the data points. For example, if a particular data point “10” and a particular data point “15” are disclosed, it is understood that greater than, greater than or equal to, less than, less than or equal to, and equal to 10 and 15 are considered disclosed, as well as between 10 and 15. It is also understood that each unit between two particular units is also disclosed. For example, if 10 and 15 are disclosed, then 11, 12, 13, and 14 are also disclosed.
[0108] Although various illustrative embodiments are described above, any of a number of changes may be made to various embodiments without departing from the scope of the invention as described by the claims. For example, the order in which various described method steps are performed may often be changed in alternative embodiments, and in other alternative embodiments one or more method steps may be skipped altogether. Optional features of various device and system embodiments may be included in some embodiments and not in others. Therefore, the foregoing description is provided primarily for exemplary purposes and should not be interpreted to limit the scope of the invention as it is set forth in the claims.
[0109] The examples and illustrations included herein show, by way of illustration and not of limitation, specific embodiments in which the subject matter may be practiced. As mentioned, other embodiments may be utilized and derived there from, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. Such embodiments of the inventive subject matter may be referred to herein individually or collectively by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept, if more than one is, in fact, disclosed. Thus, although specific embodiments have been illustrated and described herein, any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.

Claims

What is claimed is:
1. A method of operating a robotic surgical device, the method comprising: determining a location of the one or more landmarks in three-dimensional space by detecting one or more landmarks in ultrasound images collected from multiple perspectives; and correlating the one or more landmarks in the ultrasound images with one or more landmarks in an optical image from a camera associated with the surgical device to establish a frame of reference of an optical image.
2. The method of claim 1, further comprising generating a combined image using the ultrasound images and the optical image.
3. The method of claim 2, wherein a plurality of combined images is displayed in real time during a surgical procedure.
4. The method of claim 2, wherein the ultrasonic images overlay the optical images in the combined image.
5. The method of claim 2, wherein the optical images overlays the ultrasonic images in the combined image.
6. The method of claim 2, further comprising: detecting an anatomical structure, an implant, or a pathological structure; and displaying the anatomical structure, the implant, or the pathological structure in the combined images.
7. The method of claim 6, wherein the anatomical structure, the implant, or the pathological structure is marked in the combined images.
8. The method of claim 6, wherein the anatomical structure includes layers of tissue.
9. The method of claim 6, wherein the anatomical structure includes a joint.
10. The method of claim 1, further comprising determining a position of the surgical device in the optical image.
11. The method of claim 1, wherein the ultrasound images are collected from two or more ultrasonic transducers that are orthogonally and fixedly positioned with respect to each other.
12. The method of claim 1, wherein the ultrasound images are collected from one or more ultrasonic transducers that collect the ultrasound images at orthogonal positions with respect to each other.
13. The method of claim 1, wherein the optical images are collected from the camera that is coupled to the surgical device.
14. The method of claim 1, wherein the optical images are collected from an arthroscopic fiber optic camera system.
15. The method of claim 1, further comprising using artificial intelligence to identify the one or more landmarks in the ultrasound images.
16. The method of claim 1, wherein the one or more landmarks includes an anatomical structure, an implant, or a pathological structure.
17. The method of claim 1, further comprising determining a distance of the surgical device to the location of the one or more landmarks.
18. The method of claim 1, further comprising determining a distance between two points within the patient’s body.
19. A system comprising: one or more processors; and memory coupled to the one or more processors, the memory configured to store computer-program instructions, that, when executed by the one or more processors, perform a computer-implemented method comprising: determining a location of the one or more landmarks in three-dimensional space by detecting one or more landmarks in ultrasound images collected from multiple perspectives; and correlating the one or more landmarks in the ultrasound images with one or more landmarks in an optical image from a camera associated with the surgical device to establish a frame of reference of an optical image.
PCT/US2022/041572 2021-09-01 2022-08-25 Ultrasonic sensor superposition WO2023034123A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163239871P 2021-09-01 2021-09-01
US63/239,871 2021-09-01

Publications (1)

Publication Number Publication Date
WO2023034123A1 true WO2023034123A1 (en) 2023-03-09

Family

ID=85411578

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/041572 WO2023034123A1 (en) 2021-09-01 2022-08-25 Ultrasonic sensor superposition

Country Status (1)

Country Link
WO (1) WO2023034123A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070021738A1 (en) * 2005-06-06 2007-01-25 Intuitive Surgical Inc. Laparoscopic ultrasound robotic surgical system
US20100268067A1 (en) * 2009-02-17 2010-10-21 Inneroptic Technology Inc. Systems, methods, apparatuses, and computer-readable media for image guided surgery
US20180055529A1 (en) * 2016-08-25 2018-03-01 Ethicon Llc Ultrasonic transducer techniques for ultrasonic surgical instrument
US20190125454A1 (en) * 2017-10-30 2019-05-02 Ethicon Llc Method of hub communication with surgical instrument systems
US20200297430A1 (en) * 2019-03-22 2020-09-24 Globus Medical, Inc. System for neuronavigation registration and robotic trajectory guidance, robotic surgery, and related methods and devices
WO2021030536A1 (en) * 2019-08-13 2021-02-18 Duluth Medical Technologies Inc. Robotic surgical methods and apparatuses
US20210192759A1 (en) * 2018-01-29 2021-06-24 Philipp K. Lang Augmented Reality Guidance for Orthopedic and Other Surgical Procedures

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070021738A1 (en) * 2005-06-06 2007-01-25 Intuitive Surgical Inc. Laparoscopic ultrasound robotic surgical system
US20100268067A1 (en) * 2009-02-17 2010-10-21 Inneroptic Technology Inc. Systems, methods, apparatuses, and computer-readable media for image guided surgery
US20180055529A1 (en) * 2016-08-25 2018-03-01 Ethicon Llc Ultrasonic transducer techniques for ultrasonic surgical instrument
US20190125454A1 (en) * 2017-10-30 2019-05-02 Ethicon Llc Method of hub communication with surgical instrument systems
US20210192759A1 (en) * 2018-01-29 2021-06-24 Philipp K. Lang Augmented Reality Guidance for Orthopedic and Other Surgical Procedures
US20200297430A1 (en) * 2019-03-22 2020-09-24 Globus Medical, Inc. System for neuronavigation registration and robotic trajectory guidance, robotic surgery, and related methods and devices
WO2021030536A1 (en) * 2019-08-13 2021-02-18 Duluth Medical Technologies Inc. Robotic surgical methods and apparatuses

Similar Documents

Publication Publication Date Title
US20220160443A1 (en) Robotic surgical methods and apparatuses
CN103997982B (en) By operating theater instruments with respect to the robot assisted device that patient body is positioned
US9320421B2 (en) Method of determination of access areas from 3D patient images
US20140031668A1 (en) Surgical and Medical Instrument Tracking Using a Depth-Sensing Device
US20220151705A1 (en) Augmented reality assisted surgical tool alignment
US11617493B2 (en) Thoracic imaging, distance measuring, surgical awareness, and notification system and method
CN112867460A (en) Dual position tracking hardware mount for surgical navigation
JP2010524562A (en) Implant planning using captured joint motion information
CN109646089A (en) A kind of spine and spinal cord body puncture based on multi-mode medical blending image enters waypoint intelligent positioning system and method
US20230363831A1 (en) Markerless navigation system
US20220338935A1 (en) Computer controlled surgical rotary tool
US20210186615A1 (en) Multi-arm robotic system for spine surgery with imaging guidance
US20220160440A1 (en) Surgical assistive robot arm
Rembold et al. Surgical robotics: An introduction
WO2012033739A2 (en) Surgical and medical instrument tracking using a depth-sensing device
WO2023034123A1 (en) Ultrasonic sensor superposition
US20240081784A1 (en) Methods for improved ultrasound imaging to emphasize structures of interest and devices thereof
CN115331531A (en) Teaching device and method for full-true simulation arthroscopic surgery
JP2023505956A (en) Anatomical feature extraction and presentation using augmented reality
JP2021153773A (en) Robot surgery support device, surgery support robot, robot surgery support method, and program
WO2023064433A1 (en) Methods for surgical registration and tracking using hybrid imaging devices and systems thereof
Lathrop Dexterity and guidance without automation: Surgical robot-like capabilities at a fraction of the cost
WO2023076308A1 (en) Mixed reality guidance of ultrasound probe
Banez Design and evaluation of a planning aid for port placement in robot-assisted laparoscopic surgery
Chang Simulation assisted robotic orthopedic surgery in femoroacetabular impingement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22865330

Country of ref document: EP

Kind code of ref document: A1