EP2914194A1 - Imaging system, operating device comprising the imaging system, and imaging method - Google Patents

Imaging system, operating device comprising the imaging system, and imaging method

Info

Publication number
EP2914194A1
Authority
EP
European Patent Office
Prior art keywords
image data
selection
camera
image
volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP13786464.1A
Other languages
German (de)
English (en)
Inventor
Christian WINNE
Sebastian Engel
Erwin Keeve
Eckart Uhlmann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Charite Universitaetsmedizin Berlin
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Charite Universitaetsmedizin Berlin
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV and Charite Universitaetsmedizin Berlin
Publication of EP2914194A1
Legal status: Withdrawn


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002 Operational features of endoscopes
    • A61B1/00004 Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B90/361 Image-producing devices, e.g. surgical cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/147 Digital output to display device; Cooperation and interconnection of the display device with other functional units using display panels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/08 Volume rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/30 Polynomial surface description
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2065 Tracking using image or pattern recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364 Correlation of different images or relation of image positions in respect to the body
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364 Correlation of different images or relation of image positions in respect to the body
    • A61B2090/365 Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/25 User interfaces for surgical systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10068 Endoscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2215/00 Indexing scheme for image rendering
    • G06T2215/16 Using real world measurements to influence rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/555 Constructional details for picking-up images in sites, inaccessible due to their dimensions or hazardous conditions, e.g. endoscopes or borescopes

Definitions

  • The invention relates to an imaging system, in particular for an operating device, comprising: an image data acquisition unit, an image data processing unit and an image storage unit.
  • The invention further relates to an operating device.
  • The invention also relates to a method for imaging, comprising the steps of acquiring and providing image data and maintaining the image data, in particular in the medical or non-medical field.
  • In the prior art, endoscope navigation or instrument navigation is pursued to support a guidance device, in which optical or electromagnetic tracking methods are used for navigation.
  • Modular systems for an endoscope with expanding system modules, such as a tracking camera, a computing unit and a visual display unit for displaying a clinical navigation, are known.
  • Tracking is to be understood here essentially as a method for following moving objects, in the present case the mobile device head.
  • The goal of this tracking is usually to map the observed actual movement, in particular relative to a mapped environment, for technical use.
  • This may be the merging of the tracked (guided) object, i.e. the mobile device head, with another object (e.g. a target point or target trajectory in the environment), or simply the knowledge of the current "pose", i.e. the position and/or orientation and/or movement state of the tracked object.
  • Absolute data relating to the position and/or orientation (pose) of the object and/or the movement of the object are regularly used for the purpose of tracking, as for example in the abovementioned system.
  • The quality of the determined pose and/or movement information depends first of all on the quality of the observation, on the tracking algorithm used and on the modeling that serves to compensate for unavoidable measurement errors. Without modeling, however, the quality of the determined position and movement information is usually relatively poor.
  • Absolute coordinates of a mobile device head can, in the context of a medical application, for example, also be obtained from the relative relationship between a patient tracker and a tracker for the device head.
  • However, the additional effort, spatial and temporal, to provide the required trackers is enormous and proves to be highly problematic in an operating room with a large number of actors.
  • In addition, a signal connection to a locator detection system must be maintained, for example to a tracking camera or another detection module of the locator detection system.
  • This can be, for example, an optical but also an electromagnetic or similar signal connection. If such a signal connection, in particular an optical one, breaks off, e.g. when an actor steps into the image pick-up line between the tracking camera and a patient tracker, the device lacks the necessary navigation information. In this case, support of the guidance of the device head by navigation information is no longer given.
  • The guidance of the mobile device head may then be interrupted until navigation information is available again. Particularly in the case of the optical signal connection, this problem is known as the so-called "line of sight" problem.
  • A mobile device improved in this respect, with a tracking system, is known from WO 2006/131373 A2, the device being advantageously designed for the contactless determination and measurement of a spatial position and/or spatial orientation of bodies.
  • New approaches try to assist the navigation of a mobile device head by means of intraoperative magnetic resonance tomography or computer tomography in general, by coupling these with an imaging unit.
  • The registration of image data obtained, for example, by means of endoscopic video data with a preoperative CT image is described in the article by Mirota et al., "A System for Video-Based Navigation for Endoscopic Endonasal Skull Base Surgery", IEEE Transactions on Medical Imaging, Vol. 31, No. 4, April 2012, or in the article by Burschka et al., "Scale-invariant registration of monocular endoscopic images to CT scans for sinus surgery", Medical Image Analysis 9 (2005) 413-426.
  • An essential goal of the registration of image data obtained, for example, from endoscopic video data is to improve the accuracy of the registration.
  • The article "Enhanced visualization for minimally invasive surgery" describes that the field of view of an endoscope may be augmented with a so-called dynamic view expansion, which builds on previous observations using a simultaneous localization and mapping (SLAM) approach.
  • Generally, such and other methods are referred to as so-called registration methods.
  • Registration systems based on physical pointers regularly include so-called localizers, namely a first localizer to be attached to the patient to display the patient's coordinate system and an instrument localizer to display the coordinate system of a pointer or instrument.
  • The localizers can be detected by a 3D measuring camera, e.g. a stereoscopic camera, and the two coordinate systems can be associated in the course of image data processing and navigation.
  • A problem with the aforementioned approaches using physical pointer means is that the use of physical pointer means is comparatively complicated and also prone to failure in practice.
  • A problem with purely visually based registration and navigation approaches is their accuracy, which is ultimately determined by the resolution of the surgical camera used. It would be desirable to have an approach that can be implemented comparatively robustly with respect to disturbances and with reduced expenditure, and that nevertheless offers comparatively high accuracy.
  • The invention is based on the object of providing an imaging system, an operating device and a method by means of which a surface model of the operating environment can be registered in an improved manner in a volume representation of the operating environment.
  • In particular, the handling and availability of registered surgical sites should be improved.
  • The imaging system for the surgical device with a mobile handleable device head comprises: an image data acquisition unit having an image acquisition unit, in particular an operation camera, which is designed to capture image data of an operating environment;
  • an image data processing unit configured to provide the image data; and
  • an image storage unit configured to store the image data of the operation environment and volume data of a volume representation of the operation environment. According to the invention, there are further provided:
  • registration means, which are designed to localize the image acquisition unit, in particular the surgical camera, relative to the operating environment;
  • virtual pointer means, which are adapted to automatically provide, in particular to identify and/or display, a number of pixels of the image data (300); and
  • assignment means, in particular with computing means, which are designed to automatically associate a volume location of the volume representation with at least one of the pixels provided.
  • The pixels can be specified as surface locations.
  • The image data include surface rendering data and/or the volume data include volume rendering data.
  • The image data processing unit is designed to create a surface model of the operating environment by means of the image data, and/or an image storage unit is configured to hold the surface model representation data, and/or
  • the registration means are adapted to locate the operation camera relative to the surface model of the operating environment, and/or
  • the virtual pointer means are adapted to provide a number of surface sites in the surface model.
  • The image acquisition unit may basically comprise any type of imaging device.
  • The image acquisition unit may preferably be an operation camera directed at the operation environment.
  • An image acquisition unit may comprise an optical camera.
  • An image recording unit may, however, also image in a form other than optically in the visible range, for real or virtual images.
  • For example, the image acquisition unit can operate on the basis of infrared, ultraviolet or X-ray radiation.
  • The image recording unit may also comprise a device which is capable of generating a planar, optionally arbitrarily curved, topography from volume images, and insofar a virtual image; this can also be a sectional plane view of a volume image, e.g. in a sagittal, frontal or transverse plane of a body, as sketched below.
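  • As an illustration only: a minimal sketch of such sectional plane views, assuming the volume image is held as a NumPy array indexed volume[z, y, x] (the axis convention of a real CT/DVT data set would have to be taken from its header):

        import numpy as np

        def orthogonal_slice(volume, plane, index):
            """Extract a sectional plane view from a 3D volume image."""
            if plane == "transverse":   # axial section
                return volume[index, :, :]
            if plane == "frontal":      # coronal section
                return volume[:, index, :]
            if plane == "sagittal":
                return volume[:, :, index]
            raise ValueError("unknown plane: " + plane)

        # Example: middle sagittal section of a synthetic 128^3 volume
        vol = np.random.rand(128, 128, 128)
        section = orthogonal_slice(vol, "sagittal", 64)  # shape (128, 128)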
  • Said operation device may preferably have, in a periphery, a device head that can be handled mobile.
  • The mobile device head can in particular comprise a tool, an instrument, a sensor or a similar device.
  • The device head may be designed such that it has an image pickup unit, as may be the case with an endoscope, for example.
  • The image pickup unit may, however, also be arranged remotely from the device head, in particular for observing the device head, in particular a distal end thereof, in an operating environment.
  • The surgical device may be a medical device having a medical mobile device head, such as an endoscope, a pointing instrument or a surgical instrument or the like, with a distal end for arrangement relative to a body, in particular body tissue, preferably for insertion into or attachment to the body, in particular to a body tissue, in particular for processing or observation of a biological body, such as a tissue-like body or the like.
  • Such a surgical device may be a medical device, such as an endoscope, a pointing instrument or a peripheral surgical instrument, which may be used, for example, in the context of laparoscopy or another medical examination method with the aid of an optical instrument; such approaches have proven particularly useful in the field of minimally invasive surgery.
  • The device may, however, also be a non-medical device having a non-medical mobile device head, such as an endoscope, a pointing instrument or a tool or the like, with a distal end for arrangement relative to a body, in particular a technical object such as a device or an apparatus, preferably for attachment to the body, in particular to an object, in particular for processing or observation of a technical body, such as an object or apparatus or the like.
  • The device can thus also be used in a non-medical field of application.
  • The said system may be useful in a non-medical field of application, e.g. for a camera-based visual inspection from a distance, for example to protect against dangerous content, based on image data (e.g. 3D X-ray image data, ultrasound image data or microwave image data, etc.).
  • Another exemplary application is the investigation of internal cavities of components or assemblies using the system presented here, for example based on an endoscopic or endoscope-like camera system.
  • In such fields, too, the concept of the invention has proven itself.
  • Here, the use of optical vision instruments is helpful.
  • Tools, in particular in the field of robotics, can be attached to an operating device that is equipped with an imaging system, so that the tools can be navigated by means of the operating device.
  • The system can in particular increase the accuracy in assembly by industrial robots or make assembly activities feasible that were previously not possible with robots.
  • For a worker or mechanic, an assembly activity (e.g. the bolting of spark plugs) can be facilitated by instructions from a data processing system based on the above-mentioned imaging system attached to the tool, for example with respect to a component (e.g. a spark plug or screw).
  • The operating device of the type mentioned can preferably be equipped with a manual and/or automatic guide for guiding the mobile device head, wherein a guide device is designed for navigation in order to enable automatic guidance of the mobile device head.
  • The invention is equally applicable in a medical field and in a non-medical field, in particular non-invasively and without physical intervention on a body.
  • The method may preferably be restricted to a non-medical field.
  • The invention is based on the consideration that, when registering a volume representation to a surface model of the operating environment, the surgical camera has hitherto, irrespective of the type of registration, only been used as an image data acquisition means.
  • The invention has recognized that, in addition, when the operation camera is localized relative to the surface model, it is thereby also registered with respect to the surface model and the volume representation of the operating environment, so that a virtual pointing device can be produced. Accordingly, the invention provides:
  • registration means adapted to locate the operation camera relative to the surface model;
  • virtual pointer means adapted to automatically provide a number of surface sites in the surface model; and
  • calculating means designed to automatically assign to at least one of the surface sites provided a volume location of the volume representation.
  • The invention has recognized that, with a virtual pointer means produced in this way, the use of a physical pointer means is superfluous in the vast majority of cases. A number of surface sites can be provided in such an automated fashion that an operator or other user need only select the surface site of interest; the selection process is more effective and faster than the cumbersome use of a physical pointer. A number of surface sites in the surface model can thus be made available automatically and with reasonable effort, each with an assigned volume location of the volume representation. This leads to an effective registration of the surface location to the location of the volume representation assigned to it.
  • The concept of the invention is based on the recognition that the registration of the operating camera relative to the surface model also allows the registration with respect to the volume representation of the operating environment, and thus a surface location can be unambiguously assigned to a volume location. With reasonable computational effort this can be done for a number of sites, and these can be effectively provided to the surgeon or other user as a selection. This opens up the possibility for the surgeon or other user to view any objects imaged in the image of the operating environment, i.e. objects at specific but arbitrary locations of the operation environment, in the surface model and the volume rendering. This also gives access to places in the operating environment that would not be accessible with a physical pointer instrument.
  • This possibility opens up independently of the registration means for the surgical camera; these may comprise registration by means of an external localization of the surgical camera (e.g. by tracking and/or a pointer) and/or an internal localization by evaluation of the camera image data (a visual method by the camera itself).
  • In a first variant, the registration means preferably comprise a physical patient localizer, a physical camera localizer and an external optical localizer detection system.
  • A particularly preferred embodiment is illustrated in Fig. 3 and Fig. 4 of the drawings.
  • Said first variant of a further development considerably increases the accuracy of a registration through the use of physical localizers, i.e. a registration between image and volume representation or between image data and volume data, in particular between the surface model and the volume representation of the operating environment.
  • However, physical registration means may also undergo a change in position relative to the identified body, for example by slipping or coming loose relative to the body during surgery. This can be counteracted because the operation camera is also registered with a localizer.
  • The image or the image data, in particular the surface model, can advantageously be used to determine the pose of the operating camera relative to the operating environment. That is, even if a camera localizer is temporarily no longer detected by the external optical localizer detection system, the pose of the surgical camera relative to the operating environment can be recalculated from the surface model for the duration of the interruption. Thus, a fundamental weakness of a physical registration means is effectively compensated.
  • In another variant, the registration means can be designed essentially for virtual localization of the camera.
  • The registration means referred to here as virtual comprise, in particular, the image data acquisition unit, the image data processing unit and a navigation unit.
  • The image data acquisition unit is designed to acquire and provide image data of an environment of the device head, in particular continuously, and
  • the image data processing unit is adapted to create a map of the environment by means of the image data, and
  • the navigation unit is designed to indicate, by means of the image data and an image data flow, at least one position of the device head in a vicinity of the operating environment using the map, such that the mobile device head can be guided on the basis of the map.
  • Navigation is to be understood here as basically any type of map creation and indication of a position in the map and/or the indication of a destination point in the map, advantageously in relation to the position; in the following, it therefore covers the determination of a position with respect to a coordinate system and/or the specification of a destination point, in particular the indication of a route, advantageously shown on the map, between position and destination point.
  • The further development is based on a substantially image-data-based mapping and navigation in a map of the environment of the device head in the broader sense, that is, an environment that is not bound to a vicinity of the distal end of the device head, such as the visually detectable proximity at the distal end of an endoscope; the latter visually detectable proximity is referred to herein as the device head's operating environment.
  • A guide means with position reference to the device head can be assigned to it.
  • The guide means is preferably designed to make available information on the position of the device head with respect to the environment in the map, the environment going beyond the proximity environment.
  • The position reference of the guide means to the device head may advantageously be rigid.
  • However, the position reference need not be rigid as long as it is determinately variable or movable, or at least calibrated. This may for example be the case when the device head is at the distal end of a robot arm as part of a handling apparatus and the guide means is attached to the robot arm; variations in the non-rigid but basically deterministic position reference between guide means and device head, e.g. caused by errors or strains, are calibrated in this case.
  • An image data flow is understood to be the flow of image data points over time, which results when one considers a number of image data points at a first and a second time while their position, direction and/or speed change for a defined passage area; a minimal sketch follows below.
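  • A minimal sketch of such an image data flow, computed here as dense optical flow with OpenCV (the frames are synthetic stand-ins for consecutive operation-camera images; parameter values are illustrative):

        import cv2
        import numpy as np

        # Two consecutive grayscale frames from the camera stream
        frame_t0 = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
        frame_t1 = np.roll(frame_t0, shift=2, axis=1)  # simulated small motion

        # Dense image data flow: per-pixel displacement between t0 and t1
        flow = cv2.calcOpticalFlowFarneback(
            frame_t0, frame_t1, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

        # flow[y, x] = (dx, dy): change of the image point position over time
        mean_motion = flow.reshape(-1, 2).mean(axis=0)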
  • The guide means may comprise the image data acquisition unit.
  • The surface location in the surface model is assigned a surface coordinate of the surface model representation data.
  • The volume location of the volume representation has a volume coordinate associated with the volume representation data.
  • The data may be stored on the image storage unit in a suitable format, such as a data file, a stream or the like.
  • The surface location is preferably fixable as the intersection of a virtual viewing beam emanating from the surgical camera with the surface model.
  • A surface coordinate can be specified as a pixel of the operating camera assigned to the point of intersection.
  • After registration of the surgical camera relative to the surface model and registration of the volume rendering of the 3D image data to the patient or the surface model, such a 2D pixel can be registered, and the camera itself can also be located in the volume image data or localized relative to them.
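  • A minimal sketch of this step, under the assumption of a calibrated pinhole camera with intrinsic matrix K and a surface model given as triangles (Moeller-Trumbore intersection; all numeric values are illustrative):

        import numpy as np

        def pixel_to_ray(u, v, K):
            """Viewing-ray direction in the camera frame for a 2D pixel."""
            d = np.linalg.inv(K) @ np.array([u, v, 1.0])
            return d / np.linalg.norm(d)

        def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
            """Moeller-Trumbore intersection; returns the hit point or None."""
            e1, e2 = v1 - v0, v2 - v0
            p = np.cross(direction, e2)
            det = e1 @ p
            if abs(det) < eps:
                return None                      # ray parallel to triangle
            inv = 1.0 / det
            s = origin - v0
            u = (s @ p) * inv
            if u < 0 or u > 1:
                return None
            q = np.cross(s, e1)
            v = (direction @ q) * inv
            if v < 0 or u + v > 1:
                return None
            t = (e2 @ q) * inv
            return origin + t * direction if t > eps else None

        # Viewing beam from the camera origin through pixel (320, 240),
        # intersected with one triangle of the surface model:
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
        ray = pixel_to_ray(320, 240, K)
        hit = ray_triangle(np.zeros(3), ray,
                           np.array([-1.0, -1, 5]), np.array([1.0, -1, 5]),
                           np.array([0.0, 1, 5]))
        # 'hit' is the surface coordinate assigned to the 2D pixel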
  • Preferred further developments also provide advantageous possibilities for making a selection or definition of a surface location relative to a volume location available.
  • In a further development, a selection and/or monitor means is provided which is configured to group the freely selectable, automatically provided and fixed number of surface locations into a selection and to visualize the selection in a selection representation.
  • The selection representation may be an image, but also a selection menu, a list or another representation.
  • The selection representation may also be a linguistic representation or a sensory feature.
  • Preferably, the number of surface sites in the surface model is freely selectable, in particular free of physical display, i.e. provided without a physical pointer.
  • The number of surface locations in the surface model is particularly preferably made available only by virtual pointing means.
  • Nevertheless, the system is also suitable for allowing physical pointing means and for enabling them to locate a surface location relative to a volume representation.
  • The number of surface locations in the surface model can be automatically determined.
  • Preferably, the selection comprises at least one automatic pre-selection and one automatic final selection.
  • The at least one automatic preselection can comprise a number of cascaded automatic preselections so that, with appropriate interaction between the selection and the operator or other user, a desired final selection of a surface location registered to a volume location is finally available.
  • In a development, a selection and/or monitor means is designed to group the automatic selection on the basis of the image data and/or the surface model. This concerns in particular the preselection. Additionally or alternatively, it may also concern the final selection, in particular an evaluation procedure for the final selection.
  • Grouping may be performed on the basis of first grouping parameters, which include a distance measure, in particular a distance of the operation camera to structures represented in the image data.
  • The grouping parameters preferably also include a 2D and/or 3D topography, in particular a 3D topography of represented structures based on the created surface model; this may include a shape or a depth gradient of a structure.
  • The grouping parameters preferably also include a color, in particular a color or color change of the structure shown in the image data.
  • Such an automatic selection, grouped essentially on the basis of the image data and/or the surface model, can be supplemented by an automatic selection independent of the image data and/or of the surface model.
  • This can be done on the basis of second grouping parameters, which include a geometry preset or a grid preset. For example, a geometric distribution of pixels for selecting surface locations registered with volume locations and/or a rectangular or circular grid may be specified. In this way, it is possible to select locations which correspond to a specific geometric distribution and/or follow a certain shape or lie in a specific grid, for example in a specific quadrant or a specific area; a minimal sketch of such a content-independent preselection follows below.
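  • A minimal sketch of such a content-independent preselection (rectangular and circular grids of candidate 2D image positions; image size and counts are illustrative):

        import numpy as np

        def rect_grid(width, height, nx, ny, margin=0.1):
            """Rectangular grid of candidate image positions."""
            xs = np.linspace(margin * width, (1 - margin) * width, nx)
            ys = np.linspace(margin * height, (1 - margin) * height, ny)
            return [(int(x), int(y)) for y in ys for x in xs]

        def circular_grid(cx, cy, radius, n):
            """Candidate positions on a circle around (cx, cy)."""
            angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
            return [(int(cx + radius * np.cos(a)),
                     int(cy + radius * np.sin(a))) for a in angles]

        preselection = (rect_grid(640, 480, nx=5, ny=4)
                        + circular_grid(320, 240, radius=100, n=8))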
  • Furthermore, an automatic final selection can be implemented from the locations provided in a preselection by means of evaluation methods;
  • selected positions can also be grouped by means of evaluation procedures.
  • The evaluation methods include, for example, methods for statistical evaluation in connection with other image locations or image positions.
  • Mathematical filtering and/or logic methods are suitable here, such as, for example, a Kalman filter, fuzzy logic and/or a neural network; see the sketch below.
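  • One way to realize such a statistical evaluation is a constant-velocity Kalman filter smoothing a selected 2D image position over consecutive frames; the matrices below are illustrative choices, not values from this disclosure:

        import numpy as np

        F = np.eye(4); F[0, 2] = F[1, 3] = 1.0   # state: (x, y, vx, vy)
        H = np.eye(2, 4)                         # we observe (x, y)
        Q = 0.01 * np.eye(4)                     # process noise
        R = 4.0 * np.eye(2)                      # measurement noise (px^2)

        x = np.array([320.0, 240.0, 0.0, 0.0])   # initial state
        P = 100.0 * np.eye(4)

        def kalman_step(x, P, z):
            x = F @ x                            # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                  # update with measurement z
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
            return x, P

        for z in [np.array([322.0, 239.0]), np.array([324.0, 241.0])]:
            x, P = kalman_step(x, P, z)          # smoothed position in x[:2]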
  • An interaction with the selection and/or monitor means can implement a manual interaction between an operator or other user and the selection and/or monitor means in the context of manual selection support by one or more input features.
  • An input feature may be, for example, a keyboard means for the hand or foot of the surgeon or other user; for example, a computer mouse, a button, a pointer or the like. A gesture sensor that responds to a specific gesture can also be used as input means. A voice sensor or a touch-sensitive sensor, such as an input pad, is also possible.
  • Other mechanical input devices, such as keyboards, buttons or push buttons, are also suitable.
  • The subject matter of the claims comprises a mobile manageable medical device and a particularly non-invasive method for processing or observing a biological body, such as a tissue or the like.
  • An image acquisition unit may in particular be an endoscope or an ultrasound imaging unit or another imaging unit, in particular a previously mentioned unit, e.g. based on IR, X-ray or UV radiation.
  • 2D slice images or 3D volume images can also be registered for the operating environment.
  • A device head may also be a pointer instrument, a surgical instrument or a similar medical device for processing or observing a body or for detecting its own position, or the instrument position, relative to the environment.
  • The subject matter of the claims also comprises in particular a mobile manageable non-medical device and a particularly non-invasive method for processing or observing a technical body, such as an object or a device or the like.
  • The concept can also be applied successfully in industrial processing, positioning or monitoring processes.
  • Wherever a claimed mobile handleable device, such as an instrument, tool or sensor-like system, is used according to the described principle, the described essentially image-based concept is advantageous.
  • Embodiments of the invention will now be described with reference to the drawing in comparison with the prior art, which is also partly shown, in a medical application framework in which the concept is implemented with respect to a biological body; the embodiments also apply, however, to a non-medical application framework in which the concept is implemented with respect to a technical body.
  • Fig. 1 shows a basic scheme of a method and an apparatus for imaging, registering a surface model to a volume rendering, considering the operation camera, optionally based on a physical pointing device, a tracker, or a visual navigation method;
  • Fig. 2 shows an exemplary embodiment of an operating device with an imaging system, in which the registration of the surgical camera is essentially based on a visual navigation method, in particular as described in the aforementioned DE 10 2012 211 378.9, the disclosure of which is hereby incorporated in its entirety by reference into the disclosure of this application;
  • Fig. 3 shows an alternative embodiment of an operating device with an imaging system, in which localizers serve to localize not only the patient but also the surgical camera via an endoscope localizer, so that a virtual pointing device can be formed which automatically makes available a number of surface locations in the surface model; an endoscope is shown as a special camera system;
  • Fig. 4 shows a modification of the embodiment of an operating device shown in Fig. 3, in which a second localizer is mounted directly on the surgical camera instead of on the endoscope; the transformation TKL to be calibrated between camera localizer and camera origin (lens) is shown, the camera being represented by a camera icon, which stands generally for any advantageously usable camera;
  • Fig. 5 shows a detail X of Fig. 3 or Fig. 4 illustrating a preferred procedure for determining a surface location as the intersection of a virtual viewing beam emanating from the surgical camera with the surface model;
  • Fig. 6 is a flow chart illustrating a preferred method of implementing automatic pre-selection and automatic final selection by means of a selection and/or monitor means utilizing first and second grouping parameters to provide at least one surface location with associated volume location;
  • Fig. 7 shows a flow chart for a particularly preferred embodiment of a procedure for navigating arbitrary pixels in medical camera image data of an operation camera.
  • Fig. 1 shows a schematic diagram of the general structure of a method and a device for clinical navigation.
  • Fig. 1 shows an operating device 1000 with a mobile device head 100 and an imaging system 200.
  • The device head 100 is in the form of an endoscope 110.
  • An image data acquisition unit 210 of the imaging system 200 has, at the endoscope, an operation camera 211 which is designed to continuously capture and provide image data 300 of an operating environment OU of the device head 100, i.e. in the field of view with a near environment NU of the operation camera 211.
  • The image data 300 are processed and provided by an image data processing unit 220.
  • Objects here are a first, rather roundish object OU1 and a second, rather elongated object OU2.
  • The image data processing unit 220 is designed to create a surface model 310 of the operating environment OU by means of the image data 300; in addition, volume representation data, above all preoperatively obtained, of a volume representation 320 of the operating environment can be present.
  • The surface model 310 as well as the volume representation 320 can be stored in suitable storage areas 231, 232 of an image storage unit 230.
  • Corresponding representation data of the surface model 310 or representation data of the volume representation 320 are stored in the image memory unit 230.
  • The goal is now to determine certain surface locations OS1, OS2 in the view of the two-dimensional camera image, i.e. to register the surface model 310 to the volume rendering 320, so that the volume location VP1 is actually assigned to the surface location OS1 and the volume location VP2 corresponds to the surface location OS2.
  • This may be, for example, the registration of video data, namely the image data 300 and the surface model 310 derived therefrom, to preoperatively obtained 3D data, such as CT data, i.e. generally the volume representation. So far, three key approaches to registration have proven useful, some of which are described in detail below.
  • A first approach uses a pointer, either as a physical hardware instrument or, for example, as a laser pointer, to identify and locate certain surface locations OS.
  • A second approach uses the identification and visualization of surface locations in a video or similar image data 300 and registers them, for example to a CT data set, by means of a tracking system.
  • A third approach identifies and visualizes surface locations in a video or similar image data 300 and registers them to the volume rendering, such as a CT data set, by reconstructing the surface model 310 and registering it with the volume rendering 320 by suitable computing means.
  • An operation camera 211 has hitherto been used merely to monitor or visualize, for example, a pointer instrument or a pointer or another form of manual display of a surface location.
  • Computing means 240 are provided which are designed to match (register) the surface location OS1, OS2 identified by manual display to a volume location VP1, VP2 and thus to correctly associate the surface model 310 with the volume rendering 320.
  • In order to eliminate the difficulties associated with these manual interventions, the method and apparatus described in the present embodiment provide virtual pointing means 250 within the scope of the image data processing unit 220, which are designed to automatically provide a number of surface locations in the surface model 310; thus not only a single location is displayed manually, but any number of locations in the entire operating environment OU can be displayed.
  • The number of surface locations OS, in particular in the surface model 310, can be freely selectable, in particular free from a physical display. Additionally or alternatively, the number of surface locations OS in the surface model 310 may also be automatically determinable. In particular, the selection may at least partially include an automatic pre-selection and/or an automatic final selection.
  • A selection and/or monitor means 500 is designed to group an automatic selection, in particular in a preselection and/or final selection, on the basis of the image data 300 and/or the surface model 310, e.g. based on first grouping parameters: a distance measure, a 2D or 3D topography, a color. A selection and/or monitor means 500 may also be configured to group an automatic selection independently of the image data 300 and/or the surface model 310, in particular based on second grouping parameters comprising a geometry specification or a raster specification. The selection and/or monitor means 500 comprises a man-machine interface MMI, which is actuatable for manual selection support. In addition, registration means 260 are provided, as a hardware and/or software implementation.
  • The imaging properties of the image recording unit can be defined in fundamentally different ways, and these and other properties of the image recording can preferably be used to determine orientation KP2 and position KP1 (pose KP).
  • For this, a combination of spatial coordinates and directional coordinates of the imaging system can be used.
  • A spatial coordinate is, for example, a coordinate of a characterizing location of the imaging system, such as a focal point KP1 of an imaging unit, e.g. a lens, of the image pickup unit.
  • A direction coordinate is, for example, a coordinate of a direction vector of a visual beam; in Fig. 1, an orientation KP2 of the sight beam 330 is shown.
  • Fig. 2 illustrates an example of the video-based third approach mentioned above for registering a volume rendering 320 to a surface model 310.
  • The same reference numerals are used herein.
  • Fig. 2 shows a tissue structure G with objects 01, 02 which are recognizable in a volume representation 320 as a volume location VP or in a surface model as a surface location OS and are to be assigned to one another.
  • An advantage of the system 1001 of an operating device illustrated in Fig. 2 is the use of a navigation method, referred to as visual navigation, without additional external position measuring systems.
  • From image data 300, i.e. camera image data of an operation camera 211, e.g. in an endoscope 110, and/or image data 300 of an external camera 212 from an environment U of the device head 100, a map is created. This may be a map of the environment U and/or a map comprising the surface model 310 of the operation area OU.
  • A SLAM (simultaneous localization and mapping) method uses the image data 300 for orientation in an extended area, namely the operating environment OU, using the map of the environment U and/or the map of the operating environment OU.
  • In this way, a separate movement of the device head 100 is estimated, and a map of the area covered by the internal camera 211 or external camera 212 is continuously created.
  • During map generation and motion detection, the currently acquired sensor information is also checked for matches with the previously stored image map data. If a match is found, the system knows its own current position and orientation (pose) within the map.
  • A monocular SLAM method, in which feature points in the video image are continuously recorded and their movement in the image is evaluated, is already suitable as an information source; a sketch of this feature tracking follows below. If the surface map 310 explained with reference to Fig. 1 can now be registered to a volume data record 320 of the patient, the visualization of positions of objects OS1 in the video image within the 3D image data VP is possible. Likewise, the common use of visual navigation with classical tracking methods is possible to determine the absolute position of the created surface map.
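  • An illustrative sketch of the feature tracking at the core of such a monocular method, using OpenCV corner detection and pyramidal Lucas-Kanade tracking (not a complete SLAM pipeline; the frames are synthetic stand-ins):

        import cv2
        import numpy as np

        prev = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
        curr = np.roll(prev, 3, axis=0)          # simulated camera motion

        # Detect feature points in the first video image ...
        pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
        # ... and follow them into the next image
        next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

        ok = status.flatten() == 1
        tracked = next_pts[ok]                   # surviving feature points
        motion = (next_pts - pts)[ok]            # their movement in the image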
  • A disadvantage of this approach is the accuracy of navigation in the operating environment OU.
  • The navigation surface map to be created should range from the region of image data registration (e.g. the face for paranasal sinuses) to the surgical environment (e.g. ethmoidal cells). Due to the piecewise construction of the map and the addition of new data on the basis of the existing map material, errors in the map structure can accumulate. Problems can also arise if it is not possible to generate video image data with distinctive and traceable image content for certain areas of the surgical area. A reliable creation of a precise surface map with the help of, for example, a monocular SLAM procedure is thus a prerequisite for providing sufficient accuracy for surgical procedures.
  • Fig. 3 shows a modified system 1002 of an operating device 1000 with a device head 100 and an imaging system 200, applied to a tissue structure G in a patient, as a particularly preferred embodiment.
  • Pointer instruments have heretofore been used to acquire a 3D position of objects OS in an operating environment OU; localizers are attached to them for position sensing, and
  • the localizers can be detected with optical or electromagnetic position measuring systems.
  • The background to this is that in surgical interventions with intraoperative real-time imaging, like the endoscopy exemplified in Fig. 3 with an endoscope as the device head 100 or another laparoscopic application, it is often necessary to determine position information for structures shown in the camera image and to communicate it to the surgeon or another user.
  • The special navigation instruments referred to as pointers, with a localizer, are guided by the surgeon by hand or with a robot arm to touch a tissue structure G in the operating environment OU. From the visualization of the pointer position in the image data 300, the surgeon or other user can infer a tissue position 02 of the surface location OS.
  • However, the use of special navigation instruments requires additional instrument changes during the procedure and thus complicates the operation and an exact sampling of the tissue G by the surgeon.
  • Therefore, a new solution for the determination of pixels in current image data 300 of an intraoperative real-time imaging is proposed, which makes it possible to register a surface location OS to a volume location VP or, concretely, to bring it into correspondence with a 3D position in the reference coordinate system of the 3D image data (volume rendering 320) of the patient, e.g. CT image data.
  • For this purpose, the position of the surgical camera 211 as well as of the patient, and thus of the tissue structure G in the operating environment OU, is detected by means of a position measuring system 400.
  • The position measuring system has a position measuring unit 410, which can be formed, for example, as an optical or electromagnetic measuring unit, as well as a first localizer 420 and a second localizer 430.
  • The first localizer 420 may be attached to the endoscope 110, as shown in Fig. 3, thereby representing a rigid, at least determinate, connection 422 to the surgical camera 211, or preferably directly, as shown in Fig. 4, to a surgical camera 212 provided externally of the endoscope 110, or a similar camera system.
  • The localizers 420 or 421 and 430 are designed as so-called optical trackers with localizer balls and can be attached to the object to be displayed (the endoscope 110 or the external surgical camera 212) or to the patient (and thus to the tissue structure G).
  • Possible camera systems for realizing the surgical camera 211, 212 are conventional cameras (e.g. endoscopes), but also 3D time-of-flight cameras or stereoscopic camera systems.
  • Time-of-flight cameras provide, in addition to a color or gray value image of the operating environment OU, an image with depth information.
  • With these, a surface model 310 can already be generated from a single image, such that the calculation of the 3D position relative to the endoscope optics of the surgical camera 211 is possible for each 2D position in the image.
  • Furthermore, the transformation of the 3D coordinates supplied by the camera (surface coordinates of the surface model representation data) to the reference coordinate system R421 or R420 of the camera localizer or instrument localizer 421, 420 can be calculated; a minimal sketch of this follows below.
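  • A minimal sketch of this step for a time-of-flight camera, assuming a calibrated intrinsic matrix K and a homogeneous transform T_KL from the camera frame to the camera-localizer frame (all numeric values are placeholders):

        import numpy as np

        def unproject(u, v, depth, K):
            """3D position in the camera frame for a 2D pixel with depth."""
            return depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

        K = np.array([[570.0, 0, 320], [0, 570.0, 240], [0, 0, 1]])
        T_KL = np.eye(4)
        T_KL[:3, 3] = [0.0, 0.0, -150.0]   # e.g. localizer 150 mm behind lens

        p_cam = unproject(400, 260, depth=85.0, K=K)   # mm, camera frame
        p_loc = (T_KL @ np.append(p_cam, 1.0))[:3]     # camera-localizer frame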
  • Stereoscopic camera systems provide camera images of the operating environment simultaneously from two slightly different positions and thus allow a reconstruction of a 3D surface, as a surface model 310, of the objects shown in the camera images.
  • The surface model 310, which is reconstructed on the basis of one or more pairs of images, can be realized, for example, as a point cloud and referenced to the localizer 420, 421 via suitable transformations TLL1, TKL.
  • By means of the surgical camera 211, 212, based on a sequence of one or more consecutive camera images, a surface model of the operating environment OU is thus represented; here the operating environment is visualized by the detection area KOL of the camera system, as it lies essentially within the outer limits of the field of vision of the operation camera 211, 212. On the basis of this surface model 310, the calculation of 3D positions 01 for arbitrary 2D image positions 02 in the image data 300 takes place. After successful registration of the 3D patient image data to the patient localizer 430, 3D coordinates can be transferred, using the position measuring system 400 or the position measuring unit 410, into the reference coordinate system of the 3D image data 320.
  • In the following, the reference coordinate system R430 of the object localizer 430, that is, of the 3D image data 320 of the patient, the reference coordinate system R420 or R421 of the camera localizer 420, 421, and the reference coordinate system R212 of the operation camera 212 are referred to; the latter two merge into one another by a simple transformation TKL.
  • The principle of the navigation method illustrated in Fig. 3 and Fig. 4 is to determine, for pixels in the camera image, i.e. the image data 300 or the associated surface model, a corresponding 3D position, namely a volume coordinate of the volume representation data, in the pre- or intraoperative volume image data of the patient.
  • The localization of the endoscope relative to the patient takes place with the aid of an external position measuring system 400, using the described optical measuring unit 410 with the associated reference coordinate system R410.
  • The optical measuring unit can be realized, as required, with a different camera technology for generating a surface model 310.
  • The 3D position of a pixel determined from the camera image data is transformed using the transformations TKL, TLL1 (transformation between the camera localizer reference coordinate system and the position measuring system), obtained by measurement, registration and calibration, TLL2 (transformation between the reference coordinate systems of the object localizer and of the position measuring system) and TLL3 (transformation between the reference coordinate systems of the 3D image data and the object localizer), and can then be used for visualization; the chaining of these transforms is sketched below.
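  • A minimal sketch of chaining these transforms as homogeneous 4x4 matrices (identity placeholders stand in for the measured, registered and calibrated transforms; direction conventions vary, so in practice some transforms enter inverted):

        import numpy as np

        def compose(*Ts):
            """Compose homogeneous 4x4 transforms left to right."""
            M = np.eye(4)
            for T in Ts:
                M = M @ T
            return M

        # Placeholders: T_KL  camera origin -> camera localizer
        #               T_LL1 camera localizer -> position measuring system
        #               T_LL2 position measuring system -> object localizer
        #               T_LL3 object localizer -> 3D image data frame
        T_KL = T_LL1 = T_LL2 = T_LL3 = np.eye(4)

        T_cam_to_volume = compose(T_LL3, T_LL2, T_LL1, T_KL)
        p_cam = np.array([10.0, 5.0, 80.0, 1.0])   # point in camera frame
        p_volume = T_cam_to_volume @ p_cam          # point in volume frame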
  • G designates the object of a tissue structure shown in the camera image data.
  • The localizers 420, 421, 430 have optical trackers 420T, 421T, 430T in the form of spheres, which, together with the associated object (endoscope 110, camera 212, tissue structure G), can be detected by the position measuring system 400.
  • Fig. 3 and Fig. 4 also show the volume representations 320 with their associated reference coordinate system R320, which results from the reference coordinate system R430 of the tissue structure G via the indicated transformation TLL3.
  • In this way, the reference coordinate system R212 of the image data 300 can be brought into correspondence with the reference coordinate system of the volume representation 320.
  • A volume rendering can be realized as a collection of volume coordinates of volume rendering data, e.g. for a CT, DVT or MRI image; insofar, the volume representation 320, shown as a cube, is an exemplary representation of patient 3D image data.
  • Fig. 5 illustrates the objects used by the exemplified concept as well as the principle of operation in connection with an optical position measuring system.
  • The main component of the present concept is, as explained, the operating camera 211, 212 with its associated reference coordinate system R420 or R421, R212 (via transformation TKL).
  • The position of the surgical camera can be determined purely computationally via a SLAM method or, as explained with reference to Fig. 3 and Fig. 4, with the aid of a localizer 420, 421 within the position measuring system 400.
  • The position of the camera origin (for example, the main lens) and the orientation or viewing direction of the camera thereby define the camera coordinate system R212; this results relative to the reference coordinate system R421, R420 of the camera localizer by calibration, measurement or from the construction of the arrangement (dimensions of the camera, imaging geometries and dimensions of the camera, dimensions of the endoscope 110).
  • The surgical camera 211, 212 is aligned by the surgeon or another user onto a tissue structure G, the position of which can also be determined with the help of the localizer 430 fixedly connected thereto.
  • The camera image data, i.e. image data 300, which image the tissue structure G, are evaluated and used to represent the surface model 310 of the surgical area OU shown in Fig. 5.
  • The 3D position can then be determined as a surface coordinate of the surface model representation data for a certain pixel in the reference coordinate system of the camera R212 or R421, R420.
  • For this purpose, a viewing beam 330 shown in Fig. 5 is calculated for a desired pixel 301 of the image data 300 of the camera image.
  • The desired 3D position, as the surface coordinate of the surface model representation data, can thus be determined as the intersection of the viewing beam 330 with the surface model 310, as the point 311; for a surface model realized as a point cloud, this can be approximated as sketched below.
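  • Where the surface model 310 is realized as a point cloud, the intersection point 311 can be approximated as the cloud point closest to the viewing beam and nearest to the camera; a minimal sketch (threshold and data are illustrative):

        import numpy as np

        def ray_hit_in_point_cloud(origin, direction, cloud, max_dist=2.0):
            """Approximate ray/surface intersection for a point cloud."""
            d = direction / np.linalg.norm(direction)
            rel = cloud - origin
            t = rel @ d                                   # distance along ray
            perp = np.linalg.norm(rel - np.outer(t, d), axis=1)
            valid = (t > 0) & (perp < max_dist)
            if not np.any(valid):
                return None
            idx = np.argmin(np.where(valid, t, np.inf))   # nearest valid hit
            return cloud[idx]

        cloud = np.random.rand(1000, 3) * 100.0           # synthetic model
        hit = ray_hit_in_point_cloud(np.zeros(3),
                                     np.array([0.3, 0.4, 1.0]), cloud)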
  • The image data 300 represent the camera image positioned in the focal plane of the surgical camera 211, 212.
  • FIGS. 3 to 5 have considerable advantages over the embodiment shown in FIG. 2 and can continue to be operated even in the event of a short-term failure of the position measuring system 400.
  • a failure of the position measuring system 400 may occur, for example, if there is an interruption of the line of sight of the optical measuring unit 410 to the localizers 420, 430, 421.
  • In this case, the previously created surface model 310 can be used to determine the camera position, i.e. to determine the pose KP from the focal point coordinate KP1 and the orientation KP2 of the viewing beam 330 relative to the patient or the tissue structure G.
  • For this purpose, the current topology of the operating environment OU relative to the operation camera 211, 212 is determined on the basis of a camera image, i.e. the image data 300 or an image sequence thereof.
  • These data, e.g. surface coordinates of a point cloud, can then be registered to the existing surface model 310. Owing to the known transformation from the patient localizer 430 to the 3D image data 320 as well as to the existing surface model 310, the 3D positions of arbitrary pixels OS1, OS2, as well as of the camera 211, 212 itself, can be calculated in the volume image data 320, thus ensuring continuous navigation support for the surgeon or other user.
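The registration of image-derived surface coordinates to the existing surface model 310 can be illustrated with a bare-bones iterative-closest-point (ICP) alignment. This is a minimal sketch without outlier handling, assuming both point clouds are numpy arrays of shape (N, 3); it is not the registration procedure of the patent itself.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    """Rigidly align 'source' (points from the current camera image) to
    'target' (points sampled from the existing surface model 310).
    Returns a 4x4 transform from source to target coordinates."""
    tree = cKDTree(target)
    src = source.copy()
    T_total = np.eye(4)
    for _ in range(iters):
        _, idx = tree.query(src)             # nearest surface-model point
        tgt = target[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                       # Kabsch rotation
        if np.linalg.det(R) < 0:             # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        T_total = T @ T_total
    return T_total

rng = np.random.default_rng(0)
target = rng.random((200, 3))
source = target + np.array([0.05, 0.0, 0.0])    # shifted copy
print(np.round(icp(source, target)[:3, 3], 3))  # approximately [-0.05, 0, 0]
```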
  • The aim of these methods is to automatically identify one or more image positions of interest, which are then used for a subsequent manual or automatic final selection of the image position.
  • An exemplary selection of steps is explained with reference to FIG. 5, which enables a selection, in particular a preselection and final selection, of the navigated image positions.
  • The presented method allows the calculation and visualization of the 3D position in the patient for different 2D positions in the camera image. In this application, it is necessary to select a point for which the navigation information is calculated and displayed.
  • The following automatic and manual methods are suitable for this purpose; the following criteria can be taken into account as a basis for an automatic preselection of image positions:
  • A set of image positions specified by the system independently of the image content or the surface model can be used as a preselection. For example, a geometric distribution of the pixels in a rectangular or circular grid is conceivable (see the sketch below).
  • The following evaluation methods can be used: evaluation of the image positions based on the criteria specified in the automatic preselection.
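A content-independent geometric preselection, such as the rectangular grid mentioned above, could look as follows; the grid size and border margin are arbitrary illustrative choices.

```python
def grid_preselection(width, height, nx=5, ny=4, margin=0.1):
    """Content-independent preselection: an nx-by-ny rectangular grid of
    candidate 2D image positions, keeping a relative margin to the border."""
    xs = [int(width * (margin + (1 - 2 * margin) * i / (nx - 1))) for i in range(nx)]
    ys = [int(height * (margin + (1 - 2 * margin) * j / (ny - 1))) for j in range(ny)]
    return [(x, y) for y in ys for x in xs]

print(grid_preselection(1920, 1080)[:5])  # first row of candidate pixels
```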
  • The manual methods are characterized by the involvement of the user.
  • The following methods are suitable for the manual selection of an image position, if necessary using a previous automatic preselection of image positions (a schematic sketch of the common selection logic follows after the list):
  • For a manually selected image position, the corresponding 3D position is subsequently determined and displayed.
  • Gesture control: with the help of suitable sensors (e.g. a PMD camera or Microsoft Kinect), the movement of the user can be recorded and evaluated.
  • In particular, the hand movement can be tracked and interpreted as a gesture which allows the selection of the desired 2D image point.
  • For example, a hand movement to the left moves the image position to be controlled to the left as well.
  • In the same way, the final selection of the given image position could be controlled.
  • Foot pedal: the image position can also be controlled via a foot pedal. If the surgeon presses the foot pedal, the currently displayed image position is finally selected and used to calculate and visualize a 3D position on the patient.
  • Voice control: the selected image position can be moved directly by voice commands. For example, the voice command "left" shifts the 2D image position to the left by a predetermined offset.
  • Mechanical input devices: if mechanical input devices (e.g. keyboard, control buttons) are connected to the navigation system, the user can control the 2D image position via these devices. Either a shift of the current image position or a change of the selection within a set of preselected image positions can be triggered.
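All of the manual methods listed above reduce to two operations on a common selection state: moving the current 2D image position and confirming the final selection. The following sketch makes this explicit; the event strings and the pixel offset are illustrative assumptions, not an interface defined by the patent.

```python
OFFSET = 10  # pixels per discrete "left"/"right"/... command (assumed value)

class SelectionState:
    """Tracks the currently controlled 2D image position. Event names such as
    'voice:left' or 'pedal' are illustrative placeholders."""

    def __init__(self, pos, width, height):
        self.pos = list(pos)
        self.w, self.h = width, height
        self.confirmed = None

    def handle(self, event):
        moves = {"left": (-OFFSET, 0), "right": (OFFSET, 0),
                 "up": (0, -OFFSET), "down": (0, OFFSET)}
        kind, _, arg = event.partition(":")
        if kind in ("voice", "gesture", "key") and arg in moves:
            dx, dy = moves[arg]                      # shift, clamped to image
            self.pos[0] = min(max(self.pos[0] + dx, 0), self.w - 1)
            self.pos[1] = min(max(self.pos[1] + dy, 0), self.h - 1)
        elif kind == "pedal":                        # foot pedal confirms
            self.confirmed = tuple(self.pos)         # -> compute & show 3D position

s = SelectionState((960, 540), 1920, 1080)
for e in ["voice:left", "gesture:up", "pedal"]:
    s.handle(e)
print(s.confirmed)  # -> (950, 530)
```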
  • The novelty of this concept therefore basically lies in the possibility of calculating the corresponding 3D position in the volume image data for arbitrary 2D positions in the image data of an intraoperatively used and tracked camera. Also new are the described methods for selecting or defining the 2D position in the camera image for which the 3D information is to be calculated and displayed.
  • A navigation system thus visualizes position information in the 3D image data of the patient for arbitrary image positions of the camera image data.
  • Technical application areas include in particular medical technology, but also all other applications in which an instrument-like system according to the described principle is used.
  • FIG. 6 shows an operating device 1000 with a device head 100, as already illustrated with reference to FIG. 1, and an exemplary image representation of a monitor module in versions (A), (B), (C), in detail a representation either of the image data 300, the surface model 310 or the volume representation 320, or a combination thereof.
  • The surface model 310 is combined with the volume representation 320 via a computing means 240.
  • The referenced representation of the surface model 310 with the volume representation 320 as well as the pose of the operation camera 211 is achieved because the selection and/or monitor means provides image data 300 with an automatic selection of surface locations OS1, OS2, which can be selected via a mouse pointer 501 of the monitor module 500.
  • For this, the option of a mouse pointer 501' is selected in the selection menu 520 of the monitor module 500.
  • A predefinition of the surface locations OS1, OS2 can take place according to the specifications of the monitor module via a distance measure, a topography or a color representation in the image data.
  • A default geometry 521, in this case a circle specification, can be set via the selection menu 510, so that only the surface location OS1 is displayed insofar as the final selection determines this.
  • Likewise, a raster selection 531 can be selected in a selection menu 530, for example to display all structures in the second quadrant; this only leads to the display of the second surface location OS2.
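The effect of the default geometry 521 and the raster selection 531 can be mimicked as simple 2D filters over candidate surface-location pixels. This is a sketch under assumptions: the quadrant numbering and the sample coordinates are illustrative, not taken from the patent.

```python
def inside_circle(p, center, radius):
    """Default-geometry filter 521: keep 2D positions inside a circle."""
    return (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2 <= radius ** 2

def in_quadrant(p, width, height, quadrant=2):
    """Raster filter 531: keep positions in a given image quadrant.
    Quadrants follow the mathematical convention (2 = upper left); the
    numbering actually intended by the patent is an assumption here."""
    right, upper = p[0] >= width / 2, p[1] < height / 2
    return {1: right and upper, 2: (not right) and upper,
            3: (not right) and not upper, 4: right and not upper}[quadrant]

candidates = [(300, 200), (1500, 250), (900, 800)]
print([p for p in candidates if in_quadrant(p, 1920, 1080, 2)])  # -> [(300, 200)]
```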
  • FIG. 7 shows a preferred sequence of steps for carrying out a method for the medical navigation of arbitrary pixels in medical camera image data 300. It is to be understood that each of the method steps explained below can also be implemented as an action unit in the context of a computer program product designed to execute the explained method step.
  • Each of the action units recognizable in FIG. 7 can be realized within the scope of a previously explained image data processing unit, image storage unit and/or navigation unit.
  • The action units are suitable for implementation in a registering means, a virtual pointer means and a correspondingly formed computing means; i.e. a registering means adapted to localize the operation camera relative to the surface model; a virtual pointer means adapted to automatically provide a number of surface locations in the surface model; a computing means designed to automatically assign a volume location of the volume representation to at least one of the provided surface locations.
  • First, a preoperative volume representation 320 with volume representation data is provided, here for example in a memory unit 232 of an image storage unit 230.
  • Then image data 300 are provided as camera image data of an operation camera 211, 212, from which a surface model 310 can be created via the image data processing unit 220 shown in FIG. 1 and stored in a memory unit 231 of the image data storage unit 230 in a method step VS3.
  • Next, a camera registration takes place by means of the registering means, in particular in the context of a position measuring system 400 and/or a visual navigation, for example using a SLAM method.
  • In the process, the pose KP of the operation camera 211, 212 is recorded, namely in a method step VS4.1 the focal point coordinate KP1 and in a method step VS4.2 the orientation KP2.
  • In a further method step, a virtual pointer means, such as for example a viewing beam 330, can be made available.
  • With the virtual pointer means, a number of surface locations in the surface model can be made available automatically.
  • The surface location 311 can be fixed as the intersection of a virtual viewing beam 330 emanating from the surgical camera 211, 212 with the surface model 310, in particular as a surface coordinate assigned via the intersection point to a pixel 301 of the surgical camera 211, 212.
  • In a sixth method step VS6, using suitable transformations (here TKL, TLL1, TLL2, TLL3), a referencing of the volume representation 320 and the surface model 310 can take place via a computing module 240.
  • Subsequently, a preselection and/or final selection of all possible objects, in particular surface locations and/or volume locations OS, VP, can be made available by means of a selection and/or monitor means.
  • The selected locations can be displayed on an output module by referencing the camera 211 and the volume and surface representations 320, 310, as has been explained with reference to FIG. 6, for example.
  • Finally, a loop can be made, for example, to the previously explained nodes K1 and/or K2 in order to let the method run again from the beginning; a loop to only one of the nodes K1, K2 may also be formed.
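Taken together, the method steps of FIG. 7 form a control loop. The following schematic Python sketch strings the steps together with stub functions; every name is a placeholder standing in for the corresponding unit described above, not a real API.

```python
# Placeholder units; each stub stands in for a component described in the
# text (an editorial assumption, not part of the patent).
acquire_volume   = lambda: "volume representation 320"            # preoperative
acquire_image    = lambda: "image data 300"                       # camera image
build_surface    = lambda img: "surface model 310"                # VS3
register_camera  = lambda img: ("KP1", "KP2")                     # VS4.1 / VS4.2
point_locations  = lambda surf, pose: ["OS1", "OS2"]              # pointer means
reference_volume = lambda vol, surf, pose: "referenced scene"     # VS6
select_location  = lambda locs: locs[0]                           # pre/final selection

def navigation_loop(max_iters=3):
    """Schematic control flow of the method steps of FIG. 7, looping back
    (nodes K1/K2) after each display update."""
    volume = acquire_volume()                            # provided once
    for _ in range(max_iters):                           # loop via K1/K2
        image   = acquire_image()
        surface = build_surface(image)
        pose    = register_camera(image)
        locs    = point_locations(surface, pose)
        scene   = reference_volume(volume, surface, pose)
        print("selection:", select_location(locs), "in", scene)

navigation_loop()
```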

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Biophysics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Endoscopes (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to an imaging system (200), in particular for an operating device (1000) having a manually manipulable, mobile device head (100), comprising: an image data acquisition unit (210) having an image recording unit, in particular an operation camera (211, 212), which is designed to acquire image data (300) of an operating environment (OU); an image data processing unit (220) which is designed to provide the image data; and an image storage unit (230) which is designed to hold available the image data (300) of the operating environment as well as the volume data of a volume representation (320) associated with the operating environment. The invention is characterized in that there are further provided: registering means (260) designed to localize the image recording unit, in particular the operation camera (211, 212), relative to the operating environment; virtual pointer means (250) configured to automatically hold available, in particular to identify and/or indicate, a plurality of surface locations (OS) of the image data (300); and assignment means, in particular comprising computing means (240), designed to automatically assign a volume location (VP) of the volume representation (320) to at least one of the surface locations (OS) held available.
EP13786464.1A 2012-11-05 2013-11-04 Système d'imagerie, dispositif d'exploitation comportant le système d'imagerie, et procédé d'imagerie Withdrawn EP2914194A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102012220115.7A DE102012220115A1 (de) 2012-11-05 2012-11-05 Bildgebendes System, Operationsvorrichtung mit dem bildgebenden System und Verfahren zur Bildgebung
PCT/EP2013/072926 WO2014068106A1 (fr) 2012-11-05 2013-11-04 Système d'imagerie, dispositif d'exploitation comportant le système d'imagerie, et procédé d'imagerie

Publications (1)

Publication Number Publication Date
EP2914194A1 true EP2914194A1 (fr) 2015-09-09

Family

ID=49546397

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13786464.1A Withdrawn EP2914194A1 (fr) 2012-11-05 2013-11-04 Système d'imagerie, dispositif d'exploitation comportant le système d'imagerie, et procédé d'imagerie

Country Status (4)

Country Link
US (1) US20150287236A1 (fr)
EP (1) EP2914194A1 (fr)
DE (1) DE102012220115A1 (fr)
WO (1) WO2014068106A1 (fr)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101638477B1 (ko) * 2014-09-19 2016-07-11 주식회사 고영테크놀러지 옵티컬 트래킹 시스템 및 옵티컬 트래킹 시스템의 좌표계 정합 방법
KR101650821B1 (ko) * 2014-12-19 2016-08-24 주식회사 고영테크놀러지 옵티컬 트래킹 시스템 및 옵티컬 트래킹 시스템의 트래킹 방법
US11026750B2 (en) * 2015-01-23 2021-06-08 Queen's University At Kingston Real-time surgical navigation
GB2536650A (en) 2015-03-24 2016-09-28 Augmedics Ltd Method and system for combining video-based and optic-based augmented reality in a near eye display
JP6392192B2 (ja) * 2015-09-29 2018-09-19 富士フイルム株式会社 画像位置合せ装置、画像位置合せ装置の作動方法およびプログラム
FI20155784A (fi) * 2015-11-02 2017-05-03 Cryotech Nordic Oü Automatisoitu järjestelmä laser-avusteiseen dermatologiseen hoitoon ja ohjausmenetelmä
US9788770B1 (en) * 2016-04-14 2017-10-17 Verily Life Sciences Llc Continuous monitoring of tumor hypoxia using near-infrared spectroscopy and tomography with a photonic mixer device
DE102016117263B4 (de) 2016-09-14 2024-05-29 Carl Zeiss Meditec Ag Optisches Beobachtungsgerätsystem
US11779192B2 (en) * 2017-05-03 2023-10-10 Covidien Lp Medical image viewer control from surgeon's camera
WO2018206086A1 (fr) * 2017-05-09 2018-11-15 Brainlab Ag Génération d'une image de réalité augmentée d'un dispositif médical
US10432913B2 (en) 2017-05-31 2019-10-01 Proximie, Inc. Systems and methods for determining three dimensional measurements in telemedicine application
US11980507B2 (en) 2018-05-02 2024-05-14 Augmedics Ltd. Registration of a fiducial marker for an augmented reality system
EP3578126B1 (fr) * 2018-06-08 2023-02-22 Stryker European Operations Holdings LLC Système de navigation chirurgicale
US11766296B2 (en) 2018-11-26 2023-09-26 Augmedics Ltd. Tracking system for image-guided surgery
WO2020176401A1 (fr) * 2019-02-25 2020-09-03 The Johns Hopkins University Visualisation interactive de troncs de vol en réalité augmentée
US11980506B2 (en) 2019-07-29 2024-05-14 Augmedics Ltd. Fiducial marker
DE102019132308A1 (de) * 2019-11-28 2021-06-02 Carl Zeiss Meditec Ag Optisches Beobachtungssystem mit kontaktloser Zeigeeinheit, Betriebsverfahren und Computerprogrammprodukt
US11382712B2 (en) 2019-12-22 2022-07-12 Augmedics Ltd. Mirroring in image guided surgery
US20220395334A1 (en) * 2019-12-23 2022-12-15 Covidien Lp Systems and methods for guiding surgical procedures
US11896445B2 (en) 2021-07-07 2024-02-13 Augmedics Ltd. Iliac pin and adapter

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8644907B2 (en) * 1999-10-28 2014-02-04 Medtronic Navigaton, Inc. Method and apparatus for surgical navigation
DE10262124A1 (de) * 2002-12-04 2005-09-01 Siemens Ag Verfahren zur Visualisierung von dreidimensionalen Datensätzen, Verfahren zum Betrieb eines bildgebenden medizinischen Untersuchugsgeräts und Verfahren zur graphischen Positionierung einer mittels eines bildgebenden medizinischen Untersuchungsgeräts zu messenden Schicht in einem dreidimensionalen Datensatz einer Vorbereitungsmessung
US7492930B2 (en) * 2003-02-04 2009-02-17 Aesculap Ag Method and apparatus for capturing information associated with a surgical procedure performed using a localization device
FR2855292B1 (fr) * 2003-05-22 2005-12-09 Inst Nat Rech Inf Automat Dispositif et procede de recalage en temps reel de motifs sur des images, notamment pour le guidage par localisation
US7756563B2 (en) * 2005-05-23 2010-07-13 The Penn State Research Foundation Guidance method based on 3D-2D pose estimation and 3D-CT registration with application to live bronchoscopy
JP5741885B2 (ja) * 2005-06-09 2015-07-01 ナヴィスイス エージー 物体の空間位置そして/または空間方位の非接触決定及び測定用システムと方法、特に医療器具に関するパターン又は構造体を含む特に医療器具の較正及び試験方法
US7728868B2 (en) * 2006-08-02 2010-06-01 Inneroptic Technology, Inc. System and method of providing real-time dynamic imagery of a medical procedure site using multiple modalities
US8503759B2 (en) * 2007-04-16 2013-08-06 Alexander Greer Methods, devices, and systems useful in registration
US8218847B2 (en) * 2008-06-06 2012-07-10 Superdimension, Ltd. Hybrid registration method
DE102009040430B4 (de) * 2009-09-07 2013-03-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung, Verfahren und Computerprogramm zur Überlagerung eines intraoperativen Livebildes eines Operationsgebiets oder des Operationsgebiets mit einem präoperativen Bild des Operationsgebiets
AU2011266778B2 (en) * 2010-06-16 2014-10-30 A2 Surgical Method of determination of access areas from 3D patient images
CN103209656B (zh) * 2010-09-10 2015-11-25 约翰霍普金斯大学 配准过的表面下解剖部的可视化
WO2012056034A1 (fr) * 2010-10-28 2012-05-03 Fiagon Gmbh Accessoire de navigation pour appareils optiques en médecine et procédé associé
EP2452649A1 (fr) * 2010-11-12 2012-05-16 Deutsches Krebsforschungszentrum Stiftung des Öffentlichen Rechts Visualisation de données anatomiques à réalité améliorée
DE102012220116A1 (de) * 2012-06-29 2014-01-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Mobil handhabbare Vorrichtung, insbesondere zur Bearbeitung oder Beobachtung eines Körpers, und Verfahren zur Handhabung, insbesondere Kalibrierung, einer Vorrichtung
US10105149B2 (en) * 2013-03-15 2018-10-23 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery

Also Published As

Publication number Publication date
US20150287236A1 (en) 2015-10-08
WO2014068106A1 (fr) 2014-05-08
DE102012220115A1 (de) 2014-05-22

Similar Documents

Publication Publication Date Title
EP2914194A1 (fr) Système d'imagerie, dispositif d'exploitation comportant le système d'imagerie, et procédé d'imagerie
US20220346886A1 (en) Systems and methods of pose estimation and calibration of perspective imaging system in image guided surgery
US20230301725A1 (en) Systems and methods of registration for image-guided procedures
EP2867855A1 (fr) Dispositif manuel mobile pour le traitement ou l'observation d'un corps
DE102007059599B4 (de) Vorrichtung für eine medizinische Intervention und Betriebsverfahren für eine Vorrichtung für eine medizinische Intervention
EP3076369B1 (fr) Procede et dispositif destines a la representation d'un objet
WO2012107041A1 (fr) Système endoscopique de traitement d'image pourvu de moyens générant des informations de mesure géométriques dans la plage de prise de vues d'une caméra numérique optique
RU2015110976A (ru) Роботизированное устройство и системное программное обеспечение, аппаратное обеспечение и способы использования для хирургического вмешательства, направляемого по изображению и с применением робота
DE102014203097A1 (de) Verfahren zum Unterstützen einer Navigation einer endoskopischen Vorrichtung
DE202011110755U1 (de) Navigationsaufsatz für optische Geräte in der Medizin und Vorrichtung zum Darstellen von Bilddaten
EP4213755B1 (fr) Système d'assistance chirurgicale
DE102014102425B4 (de) Mikroskopsystem und Mikroskopieverfahren unter Verwendung digitaler Marker
DE19807884A1 (de) Verfahren und Vorrichtung zur intraoperativen rechnergestützten Bestimmung von räumlichen Koordinaten anatomischer Zielobjekte
DE102008032508B4 (de) Medizinische Untersuchungs- und Behandlungseinrichtung zur Planung und Durchführung einer Punktion eines Patienten sowie zugehöriges Verfahren
WO2018007091A1 (fr) Dispositif d'imagerie dans une salle d'opération
DE102011006537A1 (de) Verfahren zur Registrierung eines ersten Koordinatensystems einer ersten medizinischen Bildgebungseinrichtung mit einem zweiten Koordinatensystem einer zweiten medizinischen Bildgebungseinrichtung und/oder einem dritten Koordinatensystem eines medizinischen Instruments bezüglich einer medizinischen Navigationseinrichtung und medizinisches Untersuchungs- und/oder Behandlungssystem
DE102011050240A1 (de) Vorrichtung und Verfahren zur Bestimmung der relativen Position und Orientierung von Objekten
EP3626176B1 (fr) Procédé d'assistance d'un utilisateur, produit programme informatique, support de données et système d'imagerie
EP1667067B1 (fr) Procédé et appareil pour calibrer un instrument médical
DE102014219581A1 (de) Verfahren, Vorrichtung und Computerprogramm zur Registrierung eines medizinischen Bildes mit einer anatomischen Struktur
DE102005012295A1 (de) Verfahren zu endoskopischen Navigation und zur Eichung von Endoskopsystemen sowie System
Marzi et al. Continuous feature-based tracking of the inner ear for robot-assisted microsurgery
DE102010064320B4 (de) Optischer Zeiger für ein Chirurgieassistenzsystem
US20230317252A1 (en) Conversion and transfer of real-time volumetric image data for a medical device
US20230360212A1 (en) Systems and methods for updating a graphical user interface based upon intraoperative imaging

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150605

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20161021

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20170503