WO2017042171A1 - Apparatus for imaging in a medical treatment - Google Patents

Apparatus for imaging in a medical treatment

Info

Publication number
WO2017042171A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
image
area
medical treatment
interest
Prior art date
Application number
PCT/EP2016/070992
Other languages
French (fr)
Inventor
Bernardus Hendrikus Wilhelmus Hendriks
Harold Agnes Wilhelmus Schmeitz
Thirukumaran Thangaraj KANAGASABAPATHI
Johan Juliana Dries
Drazenko Babic
Jurgen Jean Louis HOPPENBROUWERS
Robert Johannes Frederik Homan
Ronaldus Frederik Johannes Holthuizen
Original Assignee
Koninklijke Philips N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips N.V.
Publication of WO2017042171A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B90/361 Image-producing devices, e.g. surgical cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/77
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/273 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion removing elements interfering with the pattern to be recognised
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/113 Recognition of static hand signs
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2065 Tracking using image or pattern recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/08 Accessories or related features not otherwise provided for
    • A61B2090/0818 Redundant systems, e.g. using two independent measuring systems and comparing the signals
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B90/361 Image-producing devices, e.g. surgical cameras
    • A61B2090/3612 Image-producing devices, e.g. surgical cameras with images taken automatically
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364 Correlation of different images or relation of image positions in respect to the body
    • A61B2090/365 Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364 Correlation of different images or relation of image positions in respect to the body
    • A61B2090/366 Correlation of different images or relation of image positions in respect to the body using projection of images directly onto the body
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364 Correlation of different images or relation of image positions in respect to the body
    • A61B2090/368 Correlation of different images or relation of image positions in respect to the body changing the image on a display according to the operator's position
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37 Surgical systems with images on a monitor during operation
    • A61B2090/371 Surgical systems with images on a monitor during operation with simultaneous use of two cameras
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/30 Devices for illuminating a surgical field, the devices having an interrelation with other surgical devices or with a surgical procedure
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Definitions

  • the present invention relates to an apparatus for imaging in a medical treatment for use with a surgical illumination system, to a medical system for imaging in a medical treatment, and to a method for imaging in a medical treatment for use with a surgical illumination system, as well as to a computer program element and a computer readable medium.
  • surgical (task) lighting is used to illuminate the treatment area.
  • a camera has been integrated into a surgical light, allowing observation of the surgery area and providing feedback to the surgeon.
  • a surgical robot system includes a slave system to perform a surgical operation on a patient and an imaging system that includes an image capture unit including a plurality of cameras to acquire a plurality of affected area images, an image generator detecting an occluded region in each of the affected area images acquired by the plurality of cameras, removing the occluded region therefrom, warping each of the affected area images from which the occluded region is removed, and matching the affected area images to generate a final image, and a controller driving each of the plurality of cameras of the image capture unit to acquire the plurality of affected area images and inputting the acquired plurality of affected area images to the image generator to generate a final image.
  • US 2003/0164953 A1 relates to a system for the combined shadow-free illumination of a pre-definable area and for referencing three-dimensional spatial coordinates, and to an active or passive referencing system, each in particular for referencing surgical or medical instruments.
  • the system is characterized in that at least two cameras and the light source (operation lamp) are held together such that the optical signals detected by the cameras, for referencing three-dimensional spatial co-ordinates in the area illuminated by the light source, can be evaluated. Since the field of view of the light source, in its conventional use, is not obscured or only negligibly obscured, this field of view can simultaneously be used for optical navigation by the cameras held together with the light source.
  • the camera image may not always be useful, as the surgery area may be obstructed for example with the hands of the surgeon himself.
  • a computer program element as defined in appended claim 14.
  • a computer readable medium as defined in appended claim 16.
  • an apparatus for imaging in a medical treatment for use with a surgical illumination system, comprising: a first camera configured to be positioned at a first position; a second camera configured to be positioned at a second position; a processing unit; and an output unit.
  • the first camera is configured to acquire at least one image of an area of interest of a medical treatment area from the first position.
  • the second camera is configured to acquire at least one image of the area of interest of the medical treatment area from the second position.
  • the first camera is configured to provide the at least one image acquired by the first camera to the processing unit, and the second camera is configured to provide the at least one image acquired by the second camera to the processing unit.
  • the processing unit is configured to generate at least one combined image of the area of interest of the medical treatment area from the at least one image provided by the first camera and the at least one image provided by the second camera, wherein the at least one combined image does not contain at least some of the image data representative of the part of the object.
  • the output unit is configured to output data representative of the area of interest of the medical treatment area.
  • the apparatus can completely remove the object from the combined image or can remove a proportion of the object such that the combined image is improved.
  • an obstruction in a first image can be removed using information from a second image.
  • image data acquired by a number of cameras can be combined in a form where an occluding object is removed, providing a surgeon with information representing an un-obscured view of the area of interest.
  • Images acquired by the cameras can be used to determine the distance of the cameras away from the object.
  • Images acquired by the cameras can be used to determine the distance of the cameras away from the medical treatment area, for example as a function of the geometric position and orientation of an arm or boom on which the cameras are mounted. This means that the distance the medical treatment area is away from the first camera and/or second camera can also be determined from acquired imagery.
  • the object can be situated between the medical treatment area and the first camera such that the at least one image acquired by the first camera comprises the image data representative of the part of the object.
  • the object can also be situated between the medical treatment area and the second camera such that the at least one image acquired by the second camera comprises image data representative of the part of the object.
  • the processing unit is configured to generate at least one combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera.
  • the at least one combined image does not contain at least some of the image data representative of the part of the object acquired by the first camera and does not contain at least some of the image data representative of the part of the object acquired by the second camera.
  • the object is situated between the medical treatment area and the first camera and between the medical treatment area and the second camera, i.e. above the medical treatment area or surgical field.
  • objects, such as the hands of a surgeon, that appear in the first camera image and additionally or alternatively in the second camera image can be removed, and a combined image can be made where the occluded areas of the medical treatment area (or surgery field) are recovered by combining the information from the first camera image and from the second camera image.
  • a combined image can be generated to provide an un-obscured image of the area of interest.
  • the first camera and the second camera are separated by a known distance.
  • based on the separation of the cameras, the parallax for the object can be determined from the first camera image and the second camera image. This means that the distance of the object from the first camera and/or second camera can be determined.
  • the at least one image acquired by the first camera comprises at least two images acquired at different times. This means that, for the combined image where occluding objects are removed, the removal process can be improved by using information from prior image frames. This is because the occluding object may not always be completely removable using images from a single moment in time. For example, if there is a part of the area of interest that is occluded by the object as viewed by both the first camera and the second camera at a specific moment in time, then in the combined image this part of the area of interest will be occluded. However, that occluded part of the area of interest may not be occluded in both an image acquired now with the first camera and an image acquired at an earlier time by the first camera.
  • the combined image from the first and second cameras acquired now, which has an occluded part of the area of interest, can be augmented with an earlier image acquired by the first camera where that part of the area of interest is not occluded.
  • the prior image acquired by the first camera can be used to fill in the gap, to provide a combined image with no occluded areas.
  • the apparatus comprises a third camera configured to be positioned at a third position.
  • the third camera is configured to acquire at least one image of the area of interest of the medical treatment area from the third position, and wherein the third camera is configured to provide the at least one image acquired by the third camera to the processing unit.
  • the processing unit is configured to generate the combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera and the at least one image acquired by the third camera. In this manner, further robustness is provided in removing an occluding object from the combined image. This is because the possibility that there is a common area that is occluded is reduced when there are three images being combined. Additionally, by using a third camera at a third position the parallax computations used to determine the distance to the object become more robust, enabling improved distance calculations and providing further robustness for removing an occluding object.
  • the apparatus comprises a tracking system; wherein the processing unit is configured to track the object using the tracking system. In this manner, the apparatus provides an additional layer of robustness relating to removing image data representative of the object from the combined image because the position of the object is provided with better certainty.
  • the processor is configured to track the position of a surgeon, and wherein the output unit is configured to output data representative of the area of interest of the medical treatment area from a viewing direction of the surgeon.
  • the apparatus is configured to cooperate with a surgical illumination system.
  • the processing unit is configured to provide control input to the surgical illumination system.
  • the light intensity on the medical treatment area can be kept constant or adjusted as necessary. For example, as a surgical lamp is moved away from the medical treatment area the light output from the surgical illumination system can be increased in order that the light intensity at the medical treatment area remains as required.
  • the first camera and/or second camera is a hyperspectral camera, or the first camera and/or second camera is a multispectral camera.
  • the apparatus can enhance tissue contrast between different tissue components and this information can be provided to a surgeon.
  • Combining the tissue contrast enhancement provided by the hyperspectral/multispectral camera with the removal of occluding objects in a combined image results in an enhanced visualization apparatus for use during surgery, which is not hampered by occluding objects.
  • the object to be removed from images can be better identified. For example, a surgeon's hands can be identified from the spectral content of the imagery associated with the blue gloves they are wearing. In other words, by providing enhanced contrast in images with enhanced object identification, the robustness of the apparatus for removing an object from images is improved.
  • the output unit comprises: a projection system, wherein the processing unit is configured to provide input to the projection system such that image data is projected from the projection system onto the area of interest of the medical treatment area.
  • a system for imaging in a medical treatment, comprising: a surgical illumination system; and an apparatus for imaging in a medical treatment as described above.
  • the apparatus is configured to cooperate with the surgical illumination system.
  • a method for imaging in a medical treatment for use with a surgical illumination system comprising:
  • the method further comprises:
  • the method comprises generating at least one combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera, wherein the at least one combined image does not contain at least some of the image data representative of the part of the object acquired by the first camera and does not contain at least some of the image data representative of the part of the object acquired by the second camera.
  • the first position and second position are separated by a known distance.
  • the at least one image acquired by the first camera comprises at least two images acquired at different times.
  • the method further comprises providing at least one image of the area of interest of the medical treatment area from a third position, wherein the at least one image was acquired by a third camera.
  • the method comprises tracking the object. In an example, the method comprises tracking the position of the surgeon, and wherein outputting of data representative of the area of interest of the medical treatment area comprises outputting data representative of the area of interest of the medical treatment area from a viewing direction of the surgeon.
  • the method comprises cooperating with a surgical illumination system. In an example, the method comprises controlling input to the surgical illumination system.
  • the at least one image of the area of interest of a medical treatment area from the first position and/or second position comprises hyperspectral information or comprises multispectral information.
  • the method comprises projecting image data onto the area of interest of the medical treatment area.
  • the method comprises providing input to a projection system such that image data is projected from the projection system onto the area of interest of the medical treatment area.
  • a computer program element for controlling an apparatus as previously described which, when the computer program element is executed by a processing unit, is adapted to perform the method steps as previously described.
  • Fig. 1 shows a schematic set up of an example of an apparatus for imaging in a medical treatment.
  • Fig. 2 shows a schematic set up of an example of a system for imaging in a medical treatment.
  • Fig. 3 shows an example of a method for imaging in a medical treatment.
  • Fig. 4 shows a schematic set up of another example of an apparatus for imaging in a medical treatment.
  • Fig. 5 shows schematic representations of images acquired by two cameras of an apparatus for imaging in a medical treatment and a combined image.
  • Fig. 1 shows an example of an apparatus 10 for imaging in a medical treatment.
  • the apparatus comprises: a first camera 20 configured to be positioned at a first position; a second camera 30 configured to be positioned at a second position; a processing unit 40; and an output unit 50.
  • the first camera 20 is configured to acquire at least one image 60 of an area of interest 70 of a medical treatment area 80 from the first position.
  • the second camera 30 is configured to acquire at least one image 90 of the area of interest 70 of the medical treatment area 80 from the second position.
  • the first camera 20 is further configured to provide the at least one image 60 acquired by the first camera 20 to the processing unit 40.
  • the second camera 30 is further configured to provide the at least one image 90 acquired by the second camera 30 to the processing unit 40.
  • the processing unit 40 is configured to generate at least one combined image 110 of the area of interest 70 of the medical treatment area 80 from the at least one image 60 provided by the first camera 20 and the at least one image 90 provided by the second camera 30.
  • the processing unit 40 is configured to generate the at least one combined image 110 such that the at least one combined image 110 does not contain at least some of the image data representative of the part of the object 100.
  • the output unit 50 is configured to output data representative of the area of interest 70 of the medical treatment area 80.
  • the at least one combined image does not contain the image data representative of the part of the object.
  • the apparatus can completely remove the object from the combined image or can remove a proportion of the object such that the combined image is improved.
  • image data representative of the part of the object in the at least one image acquired by the first camera at a location of the area of interest is replaced by image data at the corresponding location of the area of interest in the at least one image acquired by the second camera that does not contain image data representative of the part of the object.
  • an obstruction in a first image can be removed using information from a second image.
  • image data acquired by a number of cameras can be combined in a form where an occluding object is removed, providing a surgeon with information representing an un-obscured view of the area of interest.
  • the combined image is generated by removing the image data representative of the part of the object from the at least one first image acquired by the first camera.
  • the at least one image acquired by the first camera is overlaid with the at least one image acquired by the second camera.
  • the pixel information from the second image is used to replace the pixel information in the first image.
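  • As a minimal sketch of this pixel replacement, assuming the two images are already registered to a common viewpoint and a binary mask marking the occluded pixels of the first image has been computed (names and shapes are assumptions):

      import numpy as np

      def combine_images(img1: np.ndarray, img2: np.ndarray,
                         occluded1: np.ndarray) -> np.ndarray:
          """Replace occluded pixels of img1 with the co-located pixels of img2.

          img1, img2: registered H x W x 3 images of the area of interest.
          occluded1:  H x W boolean mask, True where img1 shows the object.
          """
          combined = img1.copy()
          # Pixel information from the second image replaces occluded pixels.
          combined[occluded1] = img2[occluded1]
          return combined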
  • the at least one image acquired by the first camera is recalculated in order that it appears to have been acquired from a position between the position of the first camera and the position of the second camera. This can be done using the at least one image acquired by the second camera.
  • the at least one combined image is recalculated in order that it appears to have been acquired from a position between the position of the first camera and the position of the second camera. This can be done using the at least one image acquired by the first camera and the at least one image acquired by the second camera. In other words, the image can be recalculated as seen from a different point of view.
  • the first camera and/or second camera is configured to acquire a reference image.
  • a reference target is used for acquisition of the reference images.
  • the reference target comprises a checkerboard pattern.
  • the reference target is placed at the position of the medical treatment area. In this manner, images acquired by the cameras can be used to determine the distance of the cameras away from the medical treatment area, for example as a function of the geometric position and orientation of the arm or boom on which the cameras are mounted. This means that the distance the medical treatment area is away from the first camera and/or second camera can also be determined from acquired imagery.
  • the first camera and second camera are registered by each acquiring an image of the reference target, such as the checkerboard pattern.
  • the first and second cameras are registered with respect to different positions and orientations of the cameras.
  • the first and second cameras can be positioned at the maximum distance away from a surgical table or moved as close as possible to the surgical table, and can be positioned at all positions and orientations in between, acquiring imagery of the reference target.
  • the overlaying of the first image with the second image may comprise a warping of the second image such that features in the first image correspond with features in the second image, at the same locations in the images, when they are overlaid.
  • the term "overlaid" is used here to describe and explain the process of replacing pixels in the first image with pixels in the second image, and does not mean that the images are actually overlaid one on top of the other.
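  • As a sketch of how such a warping could be derived, assuming both cameras have imaged the checkerboard reference target (the pattern size and the use of a single homography, which registers only the plane of the target, are assumptions):

      import cv2
      import numpy as np

      PATTERN = (9, 6)  # assumed number of inner checkerboard corners

      def homography_from_target(gray1: np.ndarray, gray2: np.ndarray) -> np.ndarray:
          """Estimate the mapping from second-camera pixels to first-camera
          pixels using corner correspondences on the reference target."""
          ok1, corners1 = cv2.findChessboardCorners(gray1, PATTERN)
          ok2, corners2 = cv2.findChessboardCorners(gray2, PATTERN)
          if not (ok1 and ok2):
              raise RuntimeError("reference target not found in both views")
          H, _ = cv2.findHomography(corners2, corners1, cv2.RANSAC)
          return H

      def warp_second_onto_first(img2: np.ndarray, H: np.ndarray, first_shape) -> np.ndarray:
          """Warp the second image so its features line up with the first image."""
          h, w = first_shape[:2]
          return cv2.warpPerspective(img2, H, (w, h))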
  • the at least one image acquired by the first camera is combined with the at least one image acquired by the second camera to generate at least one combined 3D image.
  • the combined image in addition to having the object removed from the image can be presented as a 3D image.
  • such imagery can be used, for example, in student training.
  • the object can be situated between the medical treatment area and the first camera such that the at least one image acquired by the first camera comprises the image data representative of the part of the object and the object can be situated between the medical treatment area and the second camera such that the at least one image acquired by the second camera comprises image data representative of the part of the object.
  • the processing unit is again configured to generate at least one combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera.
  • the processing unit is configured to generate the at least one combined image such that it does not contain at least some of the image data representative of the part of the object acquired by the first camera and does not contain at least some of the image data representative of the part of the object acquired by the second camera.
  • the at least one combined image does not contain image data representative of the part of the object acquired by the first camera. In an example, the at least one combined image does not contain image data representative of the part of the object acquired by the first camera and does not contain image data representative of the part of the object acquired by the second camera.
  • the first camera and second camera are set apart by a substantial distance.
  • “Substantial” means that the cameras are positioned apart such that they view the area of interest from different angular positions, such that an object situated above the medical treatment area will not occlude exactly the same part of the area of interest as viewed by both cameras.
  • the object is situated between the medical treatment area and the first camera and between the medical treatment area and the second camera, i.e. above the medical treatment area or surgical field.
  • objects, such as the hands of a surgeon, that appear in the first camera image and additionally or alternatively in the second camera image can be removed, and a combined image can be made where the occluded areas of the medical treatment area (or surgery field) are recovered by combining the information from the first camera image and from the second camera image.
  • a combined image can be generated to provide an un-obscured image of the area of interest.
  • the time at which the at least one image is acquired by the first camera is substantially the same as that for the at least one image acquired by the second camera.
  • the time at which the at least one image is acquired by the first camera is substantially different to the time at which the at least one image is acquired by the second camera.
  • the first and second camera can operate in real time and combine images acquired at approximately the same time.
  • the time of acquisition of the at least one first image can be different to that for the at least one second image. For example, an image acquired by the first camera now, rather than being combined with an image acquired by the second camera now, is combined with a prior image acquired by the second camera.
  • the processing unit is configured to identify the object and alternatively or additionally provide information relating to the object.
  • the processing unit uses image processing to determine if an object is a hand.
  • the processing unit uses image processing to determine if an object is a surgical instrument such as a scalpel.
  • the processing unit uses color based segmentation.
  • the processing unit uses spectral based segmentation. For example, the processing unit can determine if an object is a surgeon's hand, from the shape of the object and/or the color of the gloves being worn (e.g. the green-blue nitrile type of gloves), in order to enhance the removal of this object from the combined image. The same applies to other objects such as surgical instruments.
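  • A minimal sketch of such color-based segmentation, with HSV bounds for green-blue nitrile gloves chosen purely for illustration:

      import cv2
      import numpy as np

      # Illustrative HSV bounds for green-blue nitrile gloves; real bounds would be tuned.
      GLOVE_LO = np.array([80, 60, 60], dtype=np.uint8)
      GLOVE_HI = np.array([130, 255, 255], dtype=np.uint8)

      def glove_mask(bgr: np.ndarray) -> np.ndarray:
          """Return a boolean mask marking pixels likely to belong to gloved hands."""
          hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, GLOVE_LO, GLOVE_HI)
          # Remove small speckles so only hand-sized regions survive.
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
          return mask.astype(bool)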
  • output data representative of the area of interest of the medical treatment area is presented on a screen, for example on a visual display unit (VDU).
  • the first camera and the second camera are separated by a known distance.
  • "known distance" means a known lateral distance.
  • a first reference line can be defined that extends from the area of interest of the medical treatment area through the position of the first camera
  • a second reference line can be defined that extends from the area of interest of the medical treatment area through the position of the second camera.
  • a mid reference line can then be defined that is midway between the first and second reference lines and which extends through the area of interest of the medical treatment area.
  • the known distance is the length of a line extending from the first reference line to the second reference line that is perpendicular to the mid reference line and which extends through the position of the first camera.
  • the known distance is the length of a line extending from the first reference line to the second reference line that is perpendicular to the mid reference line and which extends through the position of the second camera.
  • the "known distance" is a geometric distance between the position of the first camera and the position of the second camera.
  • the first camera is at a known position.
  • the second camera is at a known position.
  • the first camera and the second camera are at known positions.
  • the first camera is separated from the medical treatment area by a known distance.
  • the second camera is separated from the medical treatment area by a known distance.
  • the first camera and the second camera are separated from the medical treatment area by known distances.
  • the medical treatment area is at a known position.
  • the distance the object is away from the first camera and/or second camera can be determined.
  • the distance to the medical treatment area can be determined.
  • the distance the object is away from the medical treatment area can be determined.
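  • As a worked sketch of the underlying stereo geometry: with the known lateral distance (baseline) b, a focal length f expressed in pixels, and a measured parallax (disparity) d, the distance follows from Z = f · b / d (all numbers illustrative):

      def depth_from_disparity(disparity_px: float, baseline_m: float,
                               focal_px: float) -> float:
          """Pinhole stereo relation Z = f * b / d for rectified views."""
          if disparity_px <= 0:
              raise ValueError("object must be matched in both camera images")
          return focal_px * baseline_m / disparity_px

      # e.g. cameras 0.5 m apart, f = 1400 px, measured disparity of 700 px:
      # depth_from_disparity(700.0, 0.5, 1400.0) -> 1.0 m from the cameras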
  • the processing unit is configured to remove image data representative of the object from the combined image when the distance of the object from the medical treatment area exceeds a threshold. In an example, the processing unit is configured to remove image data representative of the object from the combined image when the distance of the object from the first camera is below a threshold and alternatively or additionally the distance of the object from the second camera is below a threshold. In this manner, the surgeon is provided with information regarding the surgical treatment, for example with respect to their hands and any surgical instrument they are using during interaction with a patient.
  • for example, if a nurse moves their hand over the patient, the apparatus can generate a combined image for the surgeon where the nurse's hand has been removed from the imagery. The surgeon can then continue to concentrate on what their hands and surgical instruments are doing. Also, as the surgeon raises their own hands above the patient, they will remain visible to the surgeon until a threshold is reached, at which point the apparatus will provide a combined image with the surgeon's hand removed from the imagery.
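  • A minimal sketch of this threshold logic, with depths measured from the cameras and the threshold value chosen only for illustration:

      HEIGHT_THRESHOLD_M = 0.10  # illustrative: remove objects more than 10 cm above the area

      def should_remove(object_depth_m: float, area_depth_m: float) -> bool:
          """Remove the object from the combined image once it is lifted
          sufficiently far above the medical treatment area."""
          height_above_area = area_depth_m - object_depth_m
          return height_above_area > HEIGHT_THRESHOLD_M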
  • the at least one image acquired by the first camera comprises at least two images acquired at different times.
  • the at least one image acquired by the second camera comprises at least two images acquired at different times.
  • image data representative of the part of the object in the at least one image acquired by the first camera at a location of the area of interest is replaced by image data at the corresponding location of the area of interest in the at least one image acquired by the second camera that does not contain image data representative of the part of the object.
  • an obstruction in a first image can be removed using information from a second image acquired by a second camera and additionally using information from a prior image acquired by the first camera.
  • an image acquired by the first camera may have an object covering a center portion of the area of interest.
  • An image acquired by the second camera may have the object offset from the center portion of the area of interest, such that most of the object in the first image can be replaced with imagery of the area of interest from the second image.
  • in this case, a common occluded area remains.
  • a prior image acquired by the first camera can be used, where in that prior image the object is also offset from the center portion of the area of interest and where there is no common occluded area for the three images.
  • the prior image acquired by the first camera can then be used to replace the remaining part of the object in the later image acquired by the first camera (that was not replaced by image information acquired by the second camera) with imagery of the area of interest.
  • the resultant combined image shows the area of interest without the object. This means that, for the combined image where occluding objects are removed, the removal process can be improved by using information from prior image frames. Without them, the occluding object may not be completely removable from the combined image.
  • the combined image from the first and second cameras acquired now, which has an occluded part of the area of interest, can be augmented with an earlier image acquired by the first camera where that part of the area of interest is not occluded.
  • the prior image acquired by the first camera can be used to fill in the gap, to provide a combined image with no occluded areas.
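  • Extending the earlier combine sketch, a residual common occlusion could be filled from a registered prior frame of the first camera (the mask bookkeeping shown here is an assumption):

      import numpy as np

      def fill_from_prior(combined: np.ndarray, still_occluded: np.ndarray,
                          prior_frame: np.ndarray, prior_occluded: np.ndarray) -> np.ndarray:
          """Fill pixels occluded in both current views from an earlier frame
          in which those pixels were visible."""
          usable = still_occluded & ~prior_occluded  # prior frame sees these pixels
          out = combined.copy()
          out[usable] = prior_frame[usable]
          return out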
  • the apparatus comprises a third camera 120 configured to be positioned at a third position.
  • the third camera is configured to acquire at least one image of the area of interest of the medical treatment area from the third position, and wherein the third camera is configured to provide the at least one image acquired by the third camera to the processing unit.
  • the processing unit is configured to generate the combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera and the at least one image acquired by the third camera.
  • the processing unit is configured to generate at least one combined image that does not contain at least some of the image data representative of the part of the object. In an example, the at least one combined image does not contain image data representative of the part of the object.
  • the separation between the first and third cameras is substantially less than the separation between the first and second cameras and substantially less than the separation between the second and third cameras.
  • the processing unit is configured to generate at least one paired image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the third camera.
  • the processing unit is configured to generate a combined image of the area of interest of the medical treatment area from the at least one paired image and the at least one image acquired by the second camera.
  • cameras can be provided in pairs, and by keeping the spacing between the two cameras forming a pair not too large, feature detection is improved.
  • the processing unit can use image processing of the paired image to detect features in the paired image, such as determining different tissue components, and this image processing is improved when the cameras used to generate the paired image are not too far apart. Then, by having a pair of cameras spaced away from another camera, the ability is provided to generate a combined image that does not contain imagery of the object. In other words, the processing unit can remove an occluding object from the combined image, whilst at the same time providing improved feature recognition within the combined imagery.
  • the apparatus comprises a fourth camera configured to be positioned at a fourth position; wherein the fourth camera is configured to acquire at least one image of the area of interest of the medical treatment area from the fourth position, and wherein the fourth camera is configured to provide the at least one image acquired by the fourth camera to the processing unit.
  • the processing unit is configured to generate a combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera and the at least one image acquired by the third camera and the at least one image acquired by the fourth camera.
  • the processing unit is configured to generate at least one combined image that does not contain at least some of the image data representative of the part of the object.
  • the separation between the first and third cameras is substantially less than the separation between the first and second cameras and substantially less than the separation between the second and third cameras.
  • the separation between the second and fourth cameras is substantially less than the separation between the first and second cameras and substantially less than the separation between the second and third cameras.
  • the processing unit is configured to generate at least one first paired image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the third camera.
  • the processing unit is configured to generate at least one second paired image of the area of interest of the medical treatment area from the at least one image acquired by the second camera and the at least one image acquired by the fourth camera.
  • the processing unit is configured to generate a combined image of the area of interest of the medical treatment area from the at least one first paired image and the at least one second paired image.
  • cameras can be provided in pairs, and by keeping the spacing between the two cameras within each pair not too large, feature detection is improved, which also leads to improved calculation of the distance of an occluding object from the cameras. Then, by having a pair of cameras spaced away from another pair of cameras, the ability is provided to remove an occluding object from the combined image with improved robustness, whilst at the same time providing improved feature recognition within the combined imagery.
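  • A sketch of this two-stage combination, reusing the combine_images() function from the earlier sketch (the per-stage occlusion masks are assumptions):

      def combine_four_cameras(img1, img3, occ1, img2, img4, occ2, occ_paired1):
          """Combine within each closely spaced pair first, then across the
          widely spaced pairs."""
          paired1 = combine_images(img1, img3, occ1)   # first pair: cameras 1 and 3
          paired2 = combine_images(img2, img4, occ2)   # second pair: cameras 2 and 4
          return combine_images(paired1, paired2, occ_paired1)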
  • the apparatus comprises a tracking system 130; wherein the processing unit is configured to track the object using the tracking system.
  • the tracking system comprises the first camera and additionally or alternatively comprises the second camera.
  • the processing unit uses information contained within the at least one image acquired by the first camera and additionally or alternatively uses information contained within the at least one image acquired by the second camera and additionally or alternatively uses information contained within the combined image in order to track the object.
  • the processing unit is configured to use image processing in order to track the object.
  • the processing unit is configured to track the object based on the shape of the object.
  • the processing unit is configured to track the object based on marker recognition.
  • the apparatus provides an additional layer of robustness relating to removing image data representative of the object from the combined image because the position of the object is provided with better certainty.
  • an object can be tracked in time and its future position can be expected to be adjacent to the current position, or at least a distance away from the current position taking into account the time between image acquisitions and a likely speed of movement. Therefore, an object that has been previously removed from the combined image can be determined to be an object to be removed from the present combined image with better certainty.
  • image processing may have determined that an object is the surgeon's hand and should be removed from the combined image. The object is tracked and can continue to be removed from the combined image, even if at a later point in time the surgeon's hand is in an orientation such that it cannot be determined at that time to be a hand.
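  • A minimal sketch of such persistence: once an object has been classified as, e.g., a hand, it is gated to its last known position so removal can continue even when classification momentarily fails (the gating distance is illustrative):

      import numpy as np

      MAX_STEP_PX = 40  # illustrative plausible movement between image acquisitions

      class ObjectTrack:
          """Keep following an object that was previously marked for removal."""

          def __init__(self, centroid: np.ndarray):
              self.centroid = centroid

          def update(self, candidate_centroids: list) -> bool:
              """Accept the nearest detected region within the gating distance."""
              for c in candidate_centroids:
                  c = np.asarray(c, dtype=float)
                  if np.linalg.norm(c - self.centroid) < MAX_STEP_PX:
                      self.centroid = c
                      return True
              return False  # track lost; fall back to re-detection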
  • the processor is configured to track the position of a surgeon 140, and wherein the output unit is configured to output data representative of the area of interest of the medical treatment area from a viewing direction of the surgeon.
  • the apparatus is configured to cooperate with a surgical illumination system 150.
  • the surgical illumination system is a surgical light.
  • the first and second cameras are spaced apart by more than half the diameter of the surgical light.
  • the first and second cameras are attached to a mount that is different to the mount for the surgical illumination system.
  • the first and second cameras are attached to the ceiling.
  • the surgical illumination system is positioned such that the area of interest of the medical treatment area is illuminated.
  • the apparatus is configured to be retrofitted to a surgical illumination system, such as a surgical light.
  • the first and second cameras can be provided with an "add-on" mechanism in which the cameras can be attached to a pre-existing surgical light.
  • the first and second cameras are incorporated into the surgical illumination system.
  • the first and second cameras are positioned at the outer rim of the surgical light.
  • the at least one image acquired by the first camera is recalculated in order that it appears to have been acquired from a position between the position of the first camera and the position of the second camera.
  • the first image can be recalculated as if it was acquired at the center of the surgical light. This can be done using the at least one image acquired by the second camera.
  • the at least one combined image is recalculated in order that it appears to have been acquired from a position between the position of the first camera and the position of the second camera, for example as if it has been acquired at the center of the surgical light. This can be done using the at least one image acquired by the first camera and the at least one image acquired by the second camera. In other words, the image can be recalculated as seen from a different point of view.
  • the first and second cameras are not rigidly connected to a light of the surgical illumination system. In an example, the first and second cameras are rigidly connected to the surgical light.
  • having the first and second cameras separate from the surgical illumination system, or not rigidly attached to the light of the surgical illumination system, enables the first and second cameras to be positioned independently of the light of the surgical illumination system. Having the first and second cameras attached to the surgical illumination system leads to simplicity of operation, as the cameras automatically point in the correct direction when the surgical light is moved.
  • the apparatus comprises a third and a fourth camera.
  • the third and fourth cameras can be incorporated into the surgical illumination system.
  • the third and fourth cameras are positioned at the outer rim of the surgical light.
  • the first and third cameras are spaced apart by less than half the diameter of the surgical light.
  • the second and fourth cameras are spaced apart by less than half the diameter of the surgical light.
  • the processing unit is configured to provide control input to the surgical illumination system.
  • the processing unit is configured to provide adjustment of the light intensity of the surgical illumination system on the basis of the first camera and alternatively or additionally on the basis of the second camera.
  • adjustment of the light intensity is based on the at least one image acquired by the first camera and alternatively or additionally is based on the at least one image acquired by the second camera.
  • the apparatus is configured to provide gesture based illumination.
  • the processing unit is configured to interpret gestures - for example, interpret movements made by the surgeon's hands.
  • the processing unit is configured to control the illumination based on the movements made by the surgeon, for example increasing the light intensity on the basis of a particular movement of the hand.
  • the light intensity on the medical treatment area can be kept constant or adjusted as necessary.
  • the light output from the surgical illumination system can be increased in order that the light intensity at the medical treatment area remains as required.
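  • A minimal sketch of such a constant-intensity loop, using proportional feedback on the mean brightness of the imaged treatment area (the setpoint, gain, and lamp control interface are assumptions):

      import numpy as np

      TARGET_BRIGHTNESS = 180.0  # illustrative setpoint, 8-bit gray level
      GAIN = 0.01                # illustrative proportional gain

      def adjust_illumination(roi_gray: np.ndarray, lamp_level: float) -> float:
          """Raise the lamp output as the imaged area darkens, e.g. when the
          surgical lamp is moved away from the medical treatment area."""
          error = TARGET_BRIGHTNESS - float(roi_gray.mean())
          return float(np.clip(lamp_level + GAIN * error, 0.0, 1.0))

      # new_level = adjust_illumination(gray[y0:y1, x0:x1], current_level)
      # lamp.set_output(new_level)  # hypothetical lamp control interface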
  • the tracking system is incorporated into the surgical illumination system.
  • the surgical light is mounted on an arm or boom and the tracking system is also mounted on the arm or boom. This prevents problems and conflicts with placement of the tracking system and surgical illumination system. Also, ease of use of the apparatus is increased because only one object has to be positioned correctly.
  • the first camera is a hyperspectral camera, or the first camera is a multispectral camera.
  • the second camera is a hyperspectral camera, or the second camera is a multispectral camera.
  • any one or any number of the cameras is a hyperspectral camera, or a multispectral camera.
  • the term "hyperspectral" refers to a camera that can collect and enable processing of information from across a range of the electromagnetic spectrum. This information may extend beyond the visible range.
  • the term "multispectral" refers to a camera that can capture image data at specific frequencies across the electromagnetic spectrum.
  • the wavelengths may be separated by filters or by the use of instruments that are sensitive to particular wavelengths (for example gratings or prisms).
  • the first camera and additionally or alternatively the second camera is a hyperspectral or multispectral filter-wheel camera for hyperspectral or multispectral imaging with a spectral range of 400 to 1000 nm (nanometer) or from 1000 to 1700 nm or from 500 to 1700 nm.
  • the first and/or second camera has a number of interchangeable filters, for instance 6, 8 or even more.
  • the first and/or second camera has a charge-coupled device (CCD) with a resolution of 1392 x 1040 pixels, i.e. physical points in a raster image.
  • the first and/or second camera has an indium gallium arsenide (InGaAs) or any other semiconductor sensor with a resolution of 640 x 512 pixels, or a sensor with any other pixel resolution.
  • the apparatus can enhance tissue contrast between different tissue components and this information can be provided to a surgeon.
  • Combining the tissue contrast enhancement provided by the hyperspectral/multispectral camera with the removal of occluding objects in a combined image results in an enhanced visualization apparatus for use during surgery, which is not hampered by occluding objects.
  • the object to be removed from images can be better identified.
  • a surgeon's hands can be identified from the spectral content of the imagery associated with the blue gloves they are wearing.
  • the robustness of the apparatus for removing an object from images is improved.
  • the first camera and/or second camera is a thermal camera.
  • the output unit comprises: a projection system 160.
  • the processing unit is configured to provide input to the projection system such that image data is projected from the projection system onto the area of interest of the medical treatment area.
  • the projection system is incorporated into the surgical illumination system.
  • image data projected onto the area of interest of the medical treatment area comprises the at least one image acquired by the first camera.
  • image data projected onto the area of interest of the medical treatment area comprises the at least one image acquired by the second camera.
  • image data projected onto the area of interest of the medical treatment area comprises the at least one combined image.
  • image data projected onto the area of interest of the medical treatment area comprises X-ray image data.
  • image data projected onto the area of interest of the medical treatment area comprises 3D reconstruction information.
  • image data projected onto the area of interest of the medical treatment area comprises CT image data.
  • image data projected onto the area of interest of the medical treatment area comprises MRI image data.
  • image data projected onto the area of interest of the medical treatment area comprises ultrasound image data.
  • the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises the at least one image acquired by the first camera onto a screen. In an example, the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises the at least one image acquired by the second camera onto a screen. In an example, the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises the at least one combined image onto a screen. In an example, the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises X-ray image data onto a screen. In an example, the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises CT image data onto a screen.
  • the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises MRI image data onto a screen. In an example, the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises ultrasound image data onto a screen.
  • the surgeon can be presented with data relating to the area of interest of the medical treatment area that augments the visual data that the surgeon is presented with by their own eyes.
  • the first and/or second camera can operate over a wavelength range extending beyond the visible and this information can be presented to the surgeon overlaid over the medical treatment area or provided for example on a screen.
  • the surgeon can be provided with information acquired by complementary systems, such as X-ray, CT, MRI and/or ultrasound systems.
  • the first camera and additionally or alternatively the second camera is a camera configured to provide depth information.
  • the first and/or second camera can acquire time-of-flight data.
  • Fig. 2 shows an example of a system 200 for imaging in a medical treatment.
  • the system 200 comprises a surgical illumination system 150; and an apparatus 10 for imaging in a medical treatment.
  • the apparatus 10 is provided as an example according to the above-mentioned Fig. 1.
  • the apparatus 10 is configured to cooperate with the surgical illumination system 150.
  • image data representative of a part of the object in the at least one image acquired by the first camera at a location of the area of interest is replaced by image data at the corresponding location of the area of interest in the at least one image acquired by the second camera that does not contain image data representative of the part of the object.
  • the at least one combined image does not contain image data representative of the part of the object.
  • Fig. 3 shows a method 300 for imaging in a medical treatment for use with a surgical illumination system.
  • the method comprises the following:
  • in a first providing step 310, also referred to as step a), at least one image of an area of interest of a medical treatment area from a first position is provided, wherein the at least one image was acquired by a first camera.
  • a second providing step 320, also referred to as step b)
  • at least one image of the area of interest of the medical treatment area from a second position is provided, wherein the at least one image was acquired by a second camera.
  • the method further comprises the following:
  • a generating step 330, also referred to as step c): at least one combined image of the area of interest of the medical treatment area is generated from the at least one image provided by the first camera and from the at least one image provided by the second camera.
  • the at least one combined image is generated such that it does not contain at least some of the image data representative of the part of the object.
  • an outputting step 340, also referred to as step d): data representative of the area of interest of the medical treatment area is output.
  • the method comprises generating at least one combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera, wherein the at least one combined image does not contain at least some of the image data representative of the part of the object acquired by the first camera and does not contain at least some of the image data representative of the part of the object acquired by the second camera.
  • the first position and second position are separated by a known distance.
  • the at least one image acquired by the first camera comprises at least two images acquired at different times.
  • the method further comprises providing at least one image of the area of interest of the medical treatment area from a third position, wherein the at least one image was acquired by a third camera.
  • the method comprises tracking the object. In an example, the method comprises tracking the position of the surgeon, and wherein outputting of data representative of the area of interest of the medical treatment area comprises outputting data representative of the area of interest of the medical treatment area from a viewing direction of the surgeon.
  • the method comprises cooperating with a surgical illumination system. In an example, the method comprises controlling input to the surgical illumination system.
  • the at least one image of the area of interest of a medical treatment area from the first position comprises hyperspectral information or comprises multispectral information.
  • the method comprises projecting image data onto the area of interest of the medical treatment area.
  • the method comprises providing input to a projection system such that image data is projected from the projection system onto the area of interest of the medical treatment area.
  • Fig. 4 shows a schematic set up of another example of an apparatus 10 for imaging in a medical treatment.
  • the apparatus 10 has been added to, or in other words is working in cooperation with, a surgical illumination system 150 such as a surgical light 150.
  • the surgical light 150 is attached to the ceiling of a surgery room by way of a manipulation arm.
  • a first camera 20 is attached to one side of the surgical light 150, and is arranged to view an area of interest 70 of the medical treatment area 80, which in the example shown is a part 70 of a patient 80 lying on a surgical table 170.
  • a second camera 30 is attached to the other side of the surgical light 150, and is similarly arranged to view the area of interest 70 of the medical treatment area 80.
  • a surgeon 140 is performing a procedure.
  • the hands 100 of the surgeon 140 form an object that is partially occluding the area of interest 70 as seen by cameras 20, 30.
  • an image 60 of the area of interest 70 of the medical treatment area 80 has been acquired by the first camera 20, where a hand 100 of the surgeon 140 is partially occluding the area of interest 70.
  • An image 90 of the area of interest 70 of the medical treatment area 80 has been acquired by the second camera 30, with the hand 100 of the surgeon 140 again partially occluding the area of interest 70. Due to the cameras 20, 30 being substantially separated in distance, in the images 60, 90 objects close to the cameras will have a larger shift than objects further away. In this example, the hand 100 of the surgeon 140 is closer to the cameras than the medical treatment area 80.
  • the hand 100 of the surgeon 140 shows a larger shift. Based on the shift, the distance of the hand 100 from the cameras 20, 30 can be calculated. Based on the shift, the distance of the area of interest 70 of the medical treatment area 80 from the cameras 20, 30 can also be calculated (a minimal sketch of this shift-based distance computation is given after this list). In another example, the distance of the area of interest 70 of the medical treatment area from the cameras 20, 30 is determined from the known position of the surgical light 150 with respect to the surgical table 170. In this other example, the distance of the cameras 20, 30 to the area of interest 70 is determined from the known geometry of the surgical room in combination with the geometric arrangement of the arm holding the surgical light, or the distance can be determined through other means such as GPS, radar, or gyroscope position determination systems, for example.
  • the hand 100 of the surgeon 140 that is substantially above the area of interest 70 can be identified and removed from the images 60, 90.
  • a combined image 110 can then be created by filling in the pixels removed from the first image 60 with the equivalent pixels from the second image 90, taken at locations where the corresponding pixels have not been removed from the second image 90.
  • holes created in the images 60, 90 by the removal of the object 100, such as a surgeon's hand 100 can be filled in again by combining the images 60, 90.
  • the combined image 110 is then presented to the surgeon 140 on a screen 50.
  • the cameras 20 and/or 30 can be multispectral or hyperspectral cameras.
  • the wavelength bands for camera 20 and/or 30 may lie in the visible or non-visible light spectrum, and may comprise several wavelength bands.
  • a computer program or computer program element for controlling an appropriate system that is characterized by being configured to execute the method steps according to one of the preceding embodiments.
  • the computer program element might therefore be stored on a computer unit, which might also be part of an embodiment.
  • This computing unit may be configured to perform or induce performing of the steps of the method described above. Moreover, it may be configured to operate the components of the above described apparatus.
  • the computing unit can be configured to operate automatically and/or to execute the orders of a user.
  • a computer program may be loaded into a working memory of a data processor.
  • the data processor may thus be equipped to carry out the method according to one of the preceding embodiments.
  • This exemplary embodiment of the invention covers both a computer program that uses the invention right from the beginning and a computer program that, by means of an update, turns an existing program into a program that uses the invention.
  • the computer program element might be able to provide all necessary steps to fulfill the procedure of an exemplary embodiment of the method as described above.
  • a computer readable medium such as a CD-ROM
  • the computer readable medium has a computer program element stored on it which computer program element is described by the preceding section.
  • a computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.
  • the computer program may also be presented over a network like the World Wide Web and can be downloaded into the working memory of a data processor from such a network.
  • a medium for making a computer program element available for downloading is provided, which computer program element is arranged to perform a method according to one of the previously described embodiments of the invention.
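
As referenced in the discussion of Fig. 5 above, the shift (parallax) of an object between the images 60, 90 can be converted into a distance. The following Python fragment is a minimal illustrative sketch only, assuming rectified views with a known baseline and a focal length obtained from calibration; the function name and parameters are not taken from the source.

```python
import cv2

def estimate_feature_distances(img_first, img_second, baseline_m, focal_px):
    """Estimate the distance of matched features from a rectified camera pair.

    img_first/img_second: 8-bit grayscale views from the two cameras.
    baseline_m:           camera separation in metres (the 'known distance').
    focal_px:             focal length in pixels, from calibration.
    """
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img_first, None)
    kp2, des2 = orb.detectAndCompute(img_second, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    depths = []
    for m in matcher.match(des1, des2):
        # Horizontal pixel shift (disparity) of the same feature in both views.
        shift = abs(kp1[m.queryIdx].pt[0] - kp2[m.trainIdx].pt[0])
        if shift > 0:
            # Standard stereo relation: depth Z = f * B / disparity, so a
            # nearby object such as the hand 100 shows a larger shift.
            depths.append(focal_px * baseline_m / shift)
    return depths
```

Features with the smallest depths would correspond to the occluding object, while the larger depths correspond to the medical treatment area itself.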

Abstract

The present invention relates to an apparatus for imaging in a medical treatment for use with a surgical illumination system. It is described to provide (310) at least one image of an area of interest of a medical treatment area from a first position, wherein the at least one image was acquired by a first camera. At least one image of the area of interest of the medical treatment area from a second position is provided (320), wherein the at least one image was acquired by a second camera. If an object is situated between the medical treatment area and the first camera such that the at least one image provided by the first camera comprises image data representative of a part of the object, at least one combined image of the area of interest of the medical treatment area is generated (330) from the at least one image provided by the first camera and from the at least one image provided by the second camera. The at least one combined image is generated such that it does not contain at least some of the image data representative of the part of the object. Data representative of the area of interest of the medical treatment area is output (340).

Description

Apparatus for imaging in a medical treatment
FIELD OF THE INVENTION
The present invention relates to an apparatus for imaging in a medical treatment for use with a surgical illumination system, to a medical system for imaging in a medical treatment, and to a method for imaging in a medical treatment for use with a surgical illumination system, as well as to a computer program element and a computer readable medium.
BACKGROUND OF THE INVENTION
In surgery rooms, surgical (task) lighting is used to illuminate the treatment area. A camera has been integrated into a surgical light, allowing observation of the surgery area and providing feedback to the surgeon.
US2014/288413 Al describes that a surgical robot system includes a slave system to perform a surgical operation on a patient and an imaging system that includes an image capture unit including a plurality of cameras to acquire a plurality of affected area images, an image generator detecting an occluded region in each of the affected area images acquired by the plurality of cameras, removing the occluded region therefrom, warping each of the affected area images from which the occluded region is removed, and matching the affected area images to generate a final image, and a controller driving each of the plurality of cameras of the image capture unit to acquire the plurality of affected area images and inputting the acquired plurality of affected area images to the image generator to generate a final image.
US2003/0164953 Al relates to a system for the combined shadow-free illumination of a pre-definable area and for referencing three-dimensional spatial coordinates, and to an active or passive referencing system, each in particular for referencing surgical or medical instruments. The system is characterized in that at least two cameras and the light source (operation lamp) are held together such that the optical signals detected by the cameras, for referencing three-dimensional spatial coordinates in the area illuminated by the light source, can be evaluated. Since the field of view of the light source, in its conventional use, is not obscured or only negligibly obscured, this field of view can simultaneously be used for optical navigation by the cameras held together with the light source.
However, the camera image may not always be useful, as the surgery area may be obstructed for example with the hands of the surgeon himself.
SUMMARY OF THE INVENTION
It would be advantageous to have an improved technique for imaging in a medical treatment for use with a surgical illumination system.
The object of the present invention is solved with the subject matter of the independent claims, wherein further embodiments are incorporated in the dependent claims. It should be noted that the following described aspects of the invention apply also for the apparatus for imaging in a medical treatment for use with a surgical illumination system, to a medical system for imaging in a medical treatment, and to a method for imaging in a medical treatment for use with a surgical illumination system, as well as to a computer program element and a computer readable medium.
In an aspect, there is provided an apparatus for imaging in a medical treatment for use with a surgical illumination system as defined in appended claim 1. In another aspect, there is provided a system for imaging in a medical treatment as defined in appended claim 12. In another aspect, there is provided a method for imaging in a medical treatment for use with a surgical illumination system as defined in appended claim 13. In another aspect, there is provided a computer program element as defined in appended claim 14. In another aspect, there is provided a computer readable medium as defined in appended claim 16.
According to an example, there is provided an apparatus for imaging in a medical treatment for use with a surgical illumination system, the apparatus comprising: a first camera configured to be positioned at a first position; a second camera configured to be positioned at a second position; a processing unit; and an output unit.
The first camera is configured to acquire at least one image of an area of interest of a medical treatment area from the first position. The second camera is configured to acquire at least one image of the area of interest of the medical treatment area from the second position. The first camera is configured to provide the at least one image acquired by the first camera to the processing unit, and the second camera is configured to provide the at least one image acquired by the second camera to the processing unit. If an object is situated between the medical treatment area and the first camera such that the at least one image provided by the first camera comprises image data representative of a part of the object, the processing unit is configured to generate at least one combined image of the area of interest of the medical treatment area from the at least one image provided by the first camera and the at least one image provided by the second camera, wherein the at least one combined image does not contain at least some of the image data representative of the part of the object. The output unit is configured to output data representative of the area of interest of the medical treatment area.
As a result, the apparatus can completely remove the object from the combined image or can remove a proportion of the object such that the combined image is improved. In other words, an obstruction in a first image can be removed using information from a second image. In this manner, image data acquired by a number of cameras can be combined in a form where an occluding object is removed, providing a surgeon with information representing an un-obscured view of the area of interest. Images acquired by the cameras can be used to determine the distance of the cameras away from the object. Images acquired by the cameras can be used to determine the distance of the cameras away from the medical treatment area, for example as a function of the geometric position and orientation of an arm or boom on which the cameras are mounted. This means that the distance the medical treatment area is away from the first camera and/or second camera can also be determined from acquired imagery.
In an example, the object can be situated between the medical treatment area and the first camera such that the at least one image acquired by the first camera comprises the image data representative of the part of the object. The object can also be situated between the medical treatment area and the second camera such that the at least one image acquired by the second camera comprises image data representative of the part of the object. In this example, the processing unit is configured to generate at least one combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera. In this example, the at least one combined image does not contain at least some of the image data representative of the part of the object acquired by the first camera and does not contain at least some of the image data representative of the part of the object acquired by the second camera.
Due to the first camera and second camera being at different positions, the object situated between the medical treatment area and the first camera and situated between the medical treatment area and the second camera (i.e. above the medical treatment area or surgical field) will generate different obstructions of the surgical field (or area of interest) as seen by the first camera and as seen by the second camera due to the different parallax. This means that objects, such as the hands of a surgeon that appear in the first camera image and additionally or alternatively in the second camera image can be removed and a combined image can be made where the occluded areas of the medical treatment area (or surgery field) can be recovered by combining the information from the first camera image and from the second camera image. In this manner, if neither image acquired by the first camera or second camera can be used alone to provide an un-obscured or non-occluded image of the area of interest, a combined image can be generated to provide an un-obscured image of the area of interest.
In an example, the first camera and the second camera are separated by a known distance. By knowing the separation of the cameras, based on the first camera image and the second camera image the parallax for the object can be determined. This means that the distance the object is away from the first camera and/or second camera can be determined.
In an example, the at least one image acquired by the first camera comprises at least two images acquired at different times. This means that for the combined image where occluding objects are removed, the removal process can be improved by using information from prior image frames. In other words, the occluding object may not be able to be completely removed from the combined image. For example, if there is a part of the area of interest that is occluded by the object as viewed by both the first camera and second camera at a specific moment in time, then in the combined image this part of the area of interest will be occluded. However, that occluded part of the area of interest may not be occluded in both an image acquired now with the first camera and an image acquired at an earlier time by the first camera. Therefore, the combined image from the first and second cameras acquired now, which has an occluded part of the area of interest, can be augmented with an earlier image acquired by the first camera where that part of the area of interest is not occluded. The prior image acquired by the first camera can be used to fill in the gap, to provide a combined image with no occluded areas.
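
A minimal illustrative sketch of this temporal fill-in is given below; it assumes all frames have already been warped into a common reference view and that boolean masks of the occluded pixels are available. The names and interfaces are assumptions, not taken from the source.

```python
import numpy as np

def fill_from_history(combined, still_occluded, prior_frames, prior_masks):
    """Fill pixels that both cameras missed using earlier frames.

    combined:        HxWx3 combined image with holes where both views were occluded
    still_occluded:  HxW boolean mask of those remaining holes
    prior_frames:    earlier (registered) frames from the first camera
    prior_masks:     matching boolean masks of where the object was at that time
    """
    out = combined.copy()
    for frame, was_occluded in zip(prior_frames, prior_masks):
        # Pixels that are a hole now but were visible in this earlier frame.
        usable = still_occluded & ~was_occluded
        out[usable] = frame[usable]
        still_occluded = still_occluded & ~usable
        if not still_occluded.any():
            break  # every hole has been filled
    return out
```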
In an example, the apparatus comprises a third camera configured to be positioned at a third position. In this example, the third camera is configured to acquire at least one image of the area of interest of the medical treatment area from the third position, and wherein the third camera is configured to provide the at least one image acquired by the third camera to the processing unit. The processing unit is configured to generate the combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera and the at least one image acquired by the third camera. In this manner, further robustness is provided in removing an occluded object from the combined image. This is because the possibility that there is a common area that is occluded is reduced when there are three images being combined. Additionally, by using a third camera at a third position the parallax computations used to determine the distance to the object become more robust, enabling for improved distance calculations, providing further robustness for removing an occluded object.
In an example, the apparatus comprises a tracking system; wherein the processing unit is configured to track the object using the tracking system. In this manner, the apparatus provides an additional layer of robustness relating to removing image data representative of the object from the combined image because the position of the object is provided with better certainty. In an example, the processor is configured to track the position of a surgeon, and wherein the output unit is configured to output data representative of the area of interest of the medical treatment area from a viewing direction of the surgeon.
In an example, the apparatus is configured to cooperate with a surgical illumination system.
In an example, the processing unit is configured to provide control input to the surgical illumination system. In this manner, the light intensity on the medical treatment area can be kept constant or adjusted as necessary. For example, as a surgical lamp is moved away from the medical treatment area the light output from the surgical illumination system can be increased in order that the light intensity at the medical treatment area remains as required.
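
As a rough illustration of such intensity control, the sketch below approximates the lamp as a point source whose illuminance falls off with the square of distance; the drive-level interface and constants are hypothetical, not taken from the source.

```python
def lamp_drive_level(distance_m, reference_distance_m=1.0, reference_level=0.5):
    """Scale lamp output so illuminance at the treatment area stays constant.

    Treats the lamp as a point source (inverse-square fall-off): the drive
    level is raised as the lamp moves away. `reference_level` is the drive
    (0..1) that gives the desired illuminance at `reference_distance_m`.
    """
    level = reference_level * (distance_m / reference_distance_m) ** 2
    return min(level, 1.0)  # clamp to the lamp's maximum output
```

The distance input could come, for example, from the parallax-based distance estimate described earlier.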
In an example, the first camera and/or second camera is a hyperspectral camera, or the first camera and/or second camera is a multispectral camera. In this manner, the apparatus can enhance tissue contrast between different tissue components and this information can be provided to a surgeon. Combining the tissue contrast enhancement provided by the hyperspectral/multispectral camera with the removal of occluding objects in a combined image results in an enhanced visualization apparatus for use during surgery, which is not hampered by occluding objects. By providing a hyperspectral or multispectral image, the object to be removed from images can be better identified. For example, a surgeon's hands can be identified from the spectral content of the imagery associated with the blue gloves they are wearing. In other words, by providing enhanced contrast in images with enhanced object identification, the robustness of the apparatus for removing an object from images is improved.
In an example, the output unit comprises: a projection system, wherein the processing unit is configured to provide input to the projection system such that image data is projected from the projection system onto the area of interest of the medical treatment area.
According to an example, there is provided a system for imaging in a medical treatment comprising:
a surgical illumination system; and
an apparatus for imaging in a medical treatment according to any one of the preceding examples.
The apparatus is configured to cooperate with the surgical illumination system. According to an example, there is provided a method for imaging in a medical treatment for use with a surgical illumination system, comprising:
a) providing at least one image of an area of interest of a medical treatment area from a first position, wherein the at least one image was acquired by a first camera;
b) providing at least one image of the area of interest of the medical treatment area from a second position, wherein the at least one image was acquired by a second camera;
wherein if an object is situated between the medical treatment area and the first camera such that the at least one image provided by the first camera comprises image data representative of a part of the object, the method further comprises:
c) generating at least one combined image of the area of interest of the medical treatment area from the at least one image provided by the first camera and the at least one image provided by the second camera, wherein the at least one combined image does not contain at least some of the image data representative of the part of the object; and
d) outputting of data representative of the area of interest of the medical treatment area.
In an example, if the object is situated between the medical treatment area and the first camera such that the at least one image acquired by the first camera comprises the image data representative of the part of the object and the object is situated between the medical treatment area and the second camera such that the at least one image acquired by the second camera comprises image data representative of the part of the object, the method comprises generating at least one combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera, wherein the at least one combined image does not contain at least some of the image data representative of the part of the object acquired by the first camera and does not contain at least some of the image data representative of the part of the object acquired by the second camera.
In an example, the first position and second position are separated by a known distance.
In an example, the at least one image acquired by the first camera comprises at least two images acquired at different times.
In an example, the method further comprises providing at least one image of the area of interest of the medical treatment area from a third position, wherein the at least one image was acquired by a third camera.
In an example, the method comprises tracking the object. In an example, the method comprises tracking the position of the surgeon, and wherein outputting of data representative of the area of interest of the medical treatment area comprises outputting data representative of the area of interest of the medical treatment area from a viewing direction of the surgeon.
In an example, the method comprises cooperating with a surgical illumination system. In an example, the method comprises controlling input to the surgical illumination system.
In an example, the at least one image of the area of interest of a medical treatment area from the first position and/or second position comprises hyperspectral information or comprises multispectral information.
In an example, the method comprises projecting image data onto the area of interest of the medical treatment area. In an example, the method comprises providing input to a projection system such that image data is projected from the projection system onto the area of interest of the medical treatment area.
According to another example, there is provided a computer program element for controlling an apparatus as previously described which, when the computer program element is executed by a processing unit, is adapted to perform the method steps as previously described.
According to another example, there is provided a computer readable medium having stored thereon the computer program element as previously described.
Advantageously, the benefits provided by any of the above aspects and examples equally apply to all of the other aspects and examples and vice versa. The above aspects and examples will become apparent from and be elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
Exemplary embodiments will be described in the following with reference to the following drawings:
Fig. 1 shows a schematic set up of an example of apparatus for imaging in a medical treatment;
Fig. 2 shows a schematic set up of an example of a system for imaging in a medical treatment;
Fig. 3 shows an example of a method for imaging in a medical treatment;
Fig. 4 shows a schematic set up of another example of apparatus for imaging in a medical treatment;
Fig. 5 shows schematic representations of images acquired by two cameras of an apparatus for imaging in a medical treatment and a combined image.
DETAILED DESCRIPTION OF EMBODIMENTS
Fig. 1 shows an example of an apparatus 10 for imaging in a medical treatment. The apparatus comprises: a first camera 20 configured to be positioned at a first position; a second camera 30 configured to be positioned at a second position; a processing unit 40; and an output unit 50. The first camera 20 is configured to acquire at least one image 60 of an area of interest 70 of a medical treatment area 80 from the first position. The second camera 30 is configured to acquire at least one image 90 of the area of interest 70 of the medical treatment area 80 from the second position. The first camera 20 is further configured to provide the at least one image 60 acquired by the first camera 20 to the processing unit 40, and the second camera 30 is further configured to provide the at least one image 90 acquired by the second camera 30 to the processing unit 40. If an object 100 is situated between the medical treatment area 80 and the first camera 20 such that the at least one image 60 provided by the first camera 20 comprises image data representative of a part of the object 100, the processing unit 40 is configured to generate at least one combined image 110 of the area of interest 70 of the medical treatment area 80 from the at least one image 60 provided by the first camera 20 and the at least one image 90 provided by the second camera 30. The processing unit 40 is configured to generate the at least one combined image 110 such that the at least one combined image 110 does not contain at least some of the image data representative of the part of the object 100. The output unit 50 is configured to output data representative of the area of interest 70 of the medical treatment area 80.
In an example, the at least one combined image does not contain the image data representative of the part of the object.
In other words, the apparatus can completely remove the object from the combined image or can remove a proportion of the object such that the combined image is improved.
In an example, in the combined image, image data representative of the part of the object in the at least one image acquired by the first camera at a location of the area of interest is replaced by image data at the corresponding location of the area of interest in the at least one image acquired by the second camera that does not contain image data representative of the part of the object. In other words, an obstruction in a first image can be removed using information from a second image. In this manner, image data acquired by a number of cameras can be combined in a form where an occluding object is removed, providing a surgeon with information representing an un-obscured view of the area of interest.
In an example, the combined image is generated by removing the image data representative of the part of the object from the at least one first image acquired by the first camera. In an example, the at least one image acquired by the first camera is overlaid with the at least one image acquired by the second camera. In an example, at a location in the first image where image data representative of the part of the object has been removed, the pixel information from the second image is used to replace the pixel information in the first image.
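
A minimal sketch of this pixel replacement is shown below. It assumes the second image has already been warped into the first camera's frame (see the registration discussion below) and that a boolean mask of the object pixels is available; the names are illustrative, not taken from the source.

```python
import numpy as np

def combine_images(first_img, second_img_warped, occlusion_mask):
    """Replace occluded pixels in the first view with the registered second view.

    first_img:         HxWx3 image from the first camera
    second_img_warped: HxWx3 image from the second camera, already warped
                       into the first camera's frame
    occlusion_mask:    HxW boolean mask marking object pixels in first_img
    """
    combined = first_img.copy()
    # Only the masked (occluded) locations are taken from the second view.
    combined[occlusion_mask] = second_img_warped[occlusion_mask]
    return combined
```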
In an example, the at least one image acquired by the first camera is recalculated in order that it appears to have been acquired from a position between the position of the first camera and the position of the second camera. This can be done using the at least one image acquired by the second camera. In an example, the at least one combined image is recalculated in order that it appears to have been acquired from a position between the position of the first camera and the position of the second camera. This can be done using the at least one image acquired by the first camera and the at least one image acquired by the second camera. In other words, the image can be recalculated as seen from a different point of view.
In an example, the first camera and/or second camera is configured to acquire a reference image. In an example a reference target is used for acquisition of the reference images. In an example, the reference target comprises a checkerboard pattern. In an example, the reference target is placed at the position of the medical treatment area. In this manner, images acquired by the cameras can be used to determine the distance of the cameras away from the medical treatment area, for example as a function of the geometric position and orientation of the arm or boom on which the cameras are mounted. This means that the distance the medical treatment area is away from the first camera and/or second camera can also be determined from acquired imagery.
In an example, the first camera and second camera are registered by each acquiring an image of the reference target, such as the checkerboard pattern. In an example, the first and second cameras are registered with respect to different positions and orientations of the cameras. For example, the first and second cameras can be positioned at the maximum distance away from a surgical table or moved as close as possible to the surgical table, and be positioned at all positions and orientations in between, and acquire imagery of the reference target. By registering the cameras, the distance dependent shift of pixels from the second image to the first image can be determined as part of the process of overlaying the image acquired by the first camera with the image acquired by the second camera. The overlaying of the first image with the second image may comprise a warping of the second image such that features in the first image correspond with features in the second image, at the same locations in the images, when they are overlaid. The term "overlaid" is used here to describe and explain the process of replacing pixels in the first image with pixels in the second image, and does not mean that the images are actually overlaid one on top of the other.
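
An illustrative sketch of such a registration, using OpenCV's checkerboard detection to fit a homography that maps the second view onto the first, is given below. This is a simplification that is accurate for scene points near the plane of the reference target; the pattern size and function names are assumptions.

```python
import cv2

def register_cameras(ref_img_first, ref_img_second, pattern=(9, 6)):
    """Register the second camera to the first using a checkerboard target.

    Both cameras image the same checkerboard placed at the treatment area;
    a homography mapping the second view onto the first is fitted from the
    detected inner corners.
    """
    ok1, c1 = cv2.findChessboardCorners(ref_img_first, pattern)
    ok2, c2 = cv2.findChessboardCorners(ref_img_second, pattern)
    if not (ok1 and ok2):
        raise RuntimeError("checkerboard not found in one of the views")
    H, _ = cv2.findHomography(c2, c1, cv2.RANSAC)
    return H

# Later, a live frame from the second camera can be warped into the first view:
# warped = cv2.warpPerspective(frame_second, H, frame_first.shape[1::-1])
```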
In an example, the at least one image acquired by the first camera is combined with the at least one image acquired by the second camera to generate at least one combined 3D image. In other words, the combined image, in addition to having the object removed from the image, can be presented as a 3D image. Such imagery is used for example in student training.
According to an example, the object can be situated between the medical treatment area and the first camera such that the at least one image acquired by the first camera comprises the image data representative of the part of the object and the object can be situated between the medical treatment area and the second camera such that the at least one image acquired by the second camera comprises image data representative of the part of the object. In such a situation the processing unit is again configured to generate at least one combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera. The processing unit is configured to generate the at least one combined image such that it does not contain at least some of the image data representative of the part of the object acquired by the first camera and does not contain at least some of the image data representative of the part of the object acquired by the second camera.
In an example, the at least one combined image does not contain image data representative of the part of the object acquired by the first camera. In an example, the at least one combined image does not contain image data representative of the part of the object acquired by the first camera and does not contain image data representative of the part of the object acquired by the second camera.
In an example, the first camera and second camera are set apart by a substantial distance. "Substantial" means that the cameras are positioned apart such that they view the area of interest from different angular positions, such that an object situated above the medical treatment area will not occlude exactly the same part of the area of interest as viewed by both cameras.
Due to the first camera and second camera being at different positions, the object situated between the medical treatment area and the first camera and situated between the medical treatment area and the second camera (i.e. above the medical treatment area or surgical field) will generate different obstructions of the surgical field (or area of interest) as seen by the first camera and as seen by the second camera due to the different parallax. This means that objects, such as the hands of a surgeon that appear in the first camera image and additionally or alternatively in the second camera image can be removed and a combined image can be made where the occluded areas of the medical treatment area (or surgery field) can be recovered by combining the information from the first camera image and from the second camera image. In this manner, if neither image acquired by the first camera or second camera can be used alone to provide an un-obscured or non-occluded image of the area of interest, a combined image can be generated to provide an un-obscured image of the area of interest.
In an example, the time at which the at least one image is acquired by the first camera is substantially the same as that for the at least one image acquired by the second camera. In an example, the time at which the at least one image is acquired by the first camera is substantially different to the time at which the at least one image is acquired by the second camera. In other words, in an example the first and second camera can operate in real time and combine images acquired at approximately the same time. However, in another example the time of acquisition of the at least one first image can be different to that for the at least one second image. For example, an image acquired by the first camera now, rather than being combined with an image acquired by the second camera now, is combined with a prior image acquired by the second camera. In this manner, if an object leads to there being a common occluded area at a particular moment in time, by looking at a previous image the object may be positioned within the two images (acquired at different times) such that there is no common occluded area, and a combined image that does not contain image data of the object can be generated more effectively. In other words, the object can be more effectively removed from the combined image.
In an example, the processing unit is configured to identify the object and alternatively or additionally provide information relating to the object. In an example, the processing unit uses image processing to determine if an object is a hand. In an example, the processing unit uses image processing to determine if an object is a surgical instrument such as a scalpel. In an example, the processing unit uses color based segmentation. In an example, the processing unit uses spectral based segmentation. For example, the processing unit can determine if an object is a surgeon's hand, from the shape of the object and/or the color of the gloves being worn (e.g. the green-blue nitrile type of gloves), in order to enhance the removal of this object from the combined image. The same applies to other objects such as surgical instruments.
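
A minimal sketch of such colour-based segmentation is given below; the HSV thresholds are assumed values for green-blue nitrile gloves and would need tuning for the actual gloves and lighting conditions.

```python
import cv2
import numpy as np

def glove_mask(bgr_img):
    """Segment blue-green surgical gloves by colour as an occlusion mask.

    Returns an HxW boolean mask that is True where glove-coloured pixels
    (an assumed cyan/blue-green hue band) are found.
    """
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    lower = np.array([75, 60, 60])     # assumed lower bound for cyan/blue-green
    upper = np.array([110, 255, 255])  # assumed upper bound
    mask = cv2.inRange(hsv, lower, upper)
    # Close small gaps so the mask covers the hand as one region.
    kernel = np.ones((7, 7), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask.astype(bool)
```

The resulting mask could serve directly as the `occlusion_mask` in the pixel-replacement sketch above.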
In an example, output data representative of the area of interest of the medical treatment area is presented on a screen, for example on a visual display unit VDU.
According to an example, the first camera and the second camera are separated by a known distance. In an example, "known distance" means a known lateral distance. For example a first reference line can be defined that extends from the area of interest of the medical treatment area through the position of the first camera, and a second reference line can be defined that extends from the area of interest of the medical treatment area through the position of the second camera. A mid reference line can then be defined that is midway between the first and second reference lines and which extends through the area of interest of the medical treatment area. In an example, the known distance is the length of a line extending from the first reference line to the second reference line that is perpendicular to the mid reference line and which extends through the position of the first camera. In an example, the known distance is the length of a line extending from the first reference line to the second reference line that is perpendicular to the mid reference line and which extends through the position of the second camera. In an example, the "known distance" is a geometric distance between the position of the first camera and the position of the second camera. In an example, the first camera is at a known position. In an example, the second camera is at a known position. In an example, the first camera and the second camera are at known positions. In an example, the first camera is separated from the medical treatment area by a known distance. In an example, the second camera is separated from the medical treatment area by a known distance. In an example, the first camera and the second camera are separated from the medical treatment area by known distances. In an example, the medical treatment area is at a known position. By knowing the separation of the cameras, based on the first camera image and the second camera image the parallax for the object can be determined. This means that the distance the object is away from the first camera and/or second camera can be determined. In an example, by knowing the separation of the cameras, based on the first camera image and the second camera image the distance to the medical treatment area can be determined. By knowing a position of one of the cameras with respect to the medical treatment area, the distance the object is away from the medical treatment area can be determined.
In an example, the processing unit is configured to remove image data representative of the object from the combined image when the distance of the object from the medical treatment area exceeds a threshold. In an example, the processing unit is configured to remove image data representative of the object from the combined image when the distance of the object from the first camera is below a threshold and alternatively or additionally the distance of the object from the second camera is below a threshold. In this manner, the surgeon is provided with information regarding the surgical treatment, for example with respect to their hands and any surgical instrument they are using during interaction with a patient. However, if for example a nurse moves a hand over the field of view at a distance above the patient, and the hand would lead to occluded imagery, the apparatus can generate a combined image for the surgeon where the nurse's hand has been removed from the imagery. The surgeon can then continue to concentrate on what their hands and surgical instruments are doing. Also, as the surgeon raises their own hands above the patient, they will be visible to the surgeon until a threshold is reached, at which point the apparatus will provide a combined image with the surgeon's hand removed from the imagery.
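
A sketch of such a threshold decision is given below; the threshold value is purely illustrative and is not taken from the source.

```python
def should_remove(object_to_area_dist_m, threshold_m=0.25):
    """Decide whether a detected object should be removed from the combined view.

    Objects well above the treatment area (e.g. a hand passing over the
    field) are removed; objects close to the patient, such as the surgeon's
    working hands, are kept visible. The 0.25 m threshold is an assumed
    example value.
    """
    return object_to_area_dist_m > threshold_m
```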
According to an example, the at least one image acquired by the first camera comprises at least two images acquired at different times. In an example, the at least one image acquired by the second camera comprises at least two images acquired at different times. In an example, in the combined image, image data representative of the part of the object in the at least one image acquired by the first camera at a location of the area of interest is replaced by image data at the corresponding location of the area of interest in the at least one image acquired by the second camera that does not contain image data representative of the part of the object, and is also replaced by prior image data at the corresponding location of the area of interest in the at least one image acquired by the first camera that does not contain image data representative of the part of the object. In other words, an obstruction in a first image can be removed using information from a second image acquired by a second camera and additionally using information from a prior image acquired by the first camera. For example, an image acquired by the first camera may have an object covering a center portion of the area of interest. An image acquired by the second camera may have the object offset from the center portion of the area of interest, such that most of the object in the first image can be replaced with imagery of the area of interest from the second image. However, a common occluded area remains. Therefore, a prior image acquired by the first camera can be used, where in that prior image the object is also offset from the center portion of the area of interest and where there is no common occluded area for the three images. The prior image acquired by the first camera can then be used to replace the remaining part of the object in the later image acquired by the first camera (that was not replaced by image information acquired by the second camera) with imagery of the area of interest. The resultant combined image then shows the area of interest without the object. This means that for the combined image where occluding objects are removed, the removal process can be improved by using information from prior image frames. In other words, the occluding object may not be able to be completely removed from the combined image. For example, if there is a part of the area of interest that is occluded by the object as viewed by both the first camera and second camera at a specific moment in time, then in the combined image this part of the area of interest will be occluded. However, that occluded part of the area of interest may not be occluded in both an image acquired now with the first camera and an image acquired at an earlier time by the first camera. Therefore, the combined image from the first and second cameras acquired now, which has an occluded part of the area of interest, can be augmented with an earlier image acquired by the first camera where that part of the area of interest is not occluded. The prior image acquired by the first camera can be used to fill in the gap, to provide a combined image with no occluded areas. Similarly, in an example, additionally or alternatively an earlier or prior image acquired by the second camera can be used to fill in an occluded area.
According to an example, the apparatus comprises a third camera 120 configured to be positioned at a third position. The third camera is configured to acquire at least one image of the area of interest of the medical treatment area from the third position, and the third camera is configured to provide the at least one image acquired by the third camera to the processing unit. The processing unit is configured to generate the combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera and the at least one image acquired by the third camera.
In an example, if an object is situated between the medical treatment area and the first camera such that the at least one image acquired by the first camera comprises image data representative of the part of the object, the processing unit is configured to generate at least one combined image that does not contain at least some of the image data representative of the part of the object. In an example, if an object is situated between the medical treatment area and the first camera such that the at least one image acquired by the first camera comprises image data representative of the part of the object and the object is situated between the medical treatment area and the second camera such that the at least one image acquired by the second camera comprises image data representative of the part of the object, the processing unit is configured to generate at least one combined image that does not contain at least some of the image data representative of the part of the object. In an example, if an object is situated between the medical treatment area and the first camera such that the at least one image acquired by the first camera comprises image data representative of the part of the object and the object is situated between the medical treatment area and the second camera such that the at least one image acquired by the second camera comprises image data representative of the part of the object and the object is situated between the medical treatment area and the third camera such that the at least one image acquired by the third camera comprises image data representative of the part of the object, the processing unit is configured to generate at least one combined image that does not contain at least some of the image data representative of the part of the object. In an example, the at least one combined image does not contain image data representative of the part of the object.
In this manner, further robustness is provided in removing an occluded object from the combined image. This is because the possibility that there is a common area that is occluded is reduced when there are three images being combined. Additionally, by using a third camera at a third position the parallax computations used to determine the distance to the object become more robust, enabling for improved distance calculations, providing further robustness for removing an occluded object.
In an example, the separation between the first and third cameras is substantially less than the separation between the first and second cameras and substantially less than the separation between the second and third cameras. In an example, the processing unit is configured to generate at least one paired image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the third camera. In this example, the processing unit is configured to generate a combined image of the area of interest of the medical treatment area from the at least one paired image and the at least one image acquired by the second camera. In other words, cameras can be provided in pairs and by having the spacing between the two cameras forming a pair that is not too large, feature detection is improved.
In an example, the processing unit can use image processing of the paired image to detect features in the paired image, such as determining different tissue components, and this image processing can be improved when the cameras used to generate the paired image are not too far apart. Then, by having this pair of cameras spaced apart from another camera, a combined image that does not contain imagery of the object can be generated. In other words, the processing unit can remove an occluding object from the combined image, whilst at the same time providing for improved feature recognition within the combined imagery.
In an example, by having a pair of cameras feature detection is improved, and this improves the robustness of the parallax calculation. This is because one way to improve the robustness of parallax computations is to identify the same objects in acquired images, and this can be facilitated by having cameras acting in pairs.
In an example, the apparatus comprises a fourth camera configured to be positioned at a fourth position; wherein the fourth camera is configured to acquire at least one image of the area of interest of the medical treatment area from the fourth position, and wherein the fourth camera is configured to provide the at least one image acquired by the fourth camera to the processing unit. In an example, the processing unit is configured to generate a combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera and the at least one image acquired by the third camera and the at least one image acquired by the fourth camera. In an example, if an object is situated between the medical treatment area and the first camera such that the at least one image acquired by the first camera comprises image data representative of the part of the object and the object is situated between the medical treatment area and the second camera such that the at least one image acquired by the second camera comprises image data representative of the part of the object and the object is situated between the medical treatment area and the third camera such that the at least one image acquired by the third camera comprises image data representative of the part of the object and the object is situated between the medical treatment area and the fourth camera such that the at least one image acquired by the fourth camera comprises image data representative of the part of the object, the processing unit is configured to generate at least one combined image that does not contain at least some of the image data representative of the part of the object.
In an example, the separation between the first and third cameras is substantially less than the separation between the first and second cameras and substantially less than the separation between the second and third cameras. In an example, the separation between the second and fourth cameras is substantially less than the separation between the first and second cameras and substantially less than the separation between the second and third cameras. In an example, the processing unit is configured to generate at least one first paired image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the third camera. In an example, the processing unit is configured to generate at least one second paired image of the area of interest of the medical treatment area from the at least one image acquired by the second camera and the at least one image acquired by the fourth camera. In an example, the processing unit is configured to generate a combined image of the area of interest of the medical treatment area from the at least one first paired image and the at least one second paired image.
In other words, cameras can be provided in pairs and, by keeping the spacing between the two cameras within each pair not too large, feature detection is improved, which also leads to improved calculation of the distance of an occluding object from the cameras. Then, by having one pair of cameras spaced apart from another pair of cameras, an occluding object can be removed from the combined image with improved robustness, whilst at the same time providing for improved feature recognition within the combined imagery.
According to an example, the apparatus comprises a tracking system 130; wherein the processing unit is configured to track the object using the tracking system. In an example, the tracking system comprises the first camera and additionally or alternatively comprises the second camera. In an example, the processing unit uses information contained within the at least one image acquired by the first camera and additionally or alternatively uses information contained within the at least one image acquired by the second camera and additionally or alternatively uses information contained within the combined image in order to track the object. In an example, the processing unit is configured to use image processing in order to track the object. In an example, the processing unit is configured to track the object based on the shape of the object. In an example, the processing unit is configured to track the object based on marker recognition.
In this manner, the apparatus provides an additional layer of robustness relating to removing image data representative of the object from the combined image because the position of the object is provided with better certainty. For example, an object can be tracked in time and its future position can be expected to be adjacent to the current position, or at least a distance away from the current position taking into account the time between image acquisitions and a likely speed of movement. Therefore, an object that has been previously removed from the combined image can be determined to be an object to be removed from the present combined image with better certainty. In other words, in an example, image processing may have determined that an object is the surgeon's hand and should be removed from the combined image. The object is tracked and can continue to be removed from the combined image, even if at a later point in time the surgeon's hand is in an orientation such that it cannot be determined at that time to be a hand.
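
As an illustrative sketch of such tracking, a minimal constant-velocity centroid predictor is shown below; a practical tracker would be more elaborate, and the class and its interface are assumptions, not taken from the source.

```python
import numpy as np

class CentroidTracker:
    """Track a removed object between frames by predicting its centroid.

    Once an object (e.g. a hand) has been identified, its next position is
    expected near the current one, so it can keep being removed even when a
    single frame gives an ambiguous shape.
    """
    def __init__(self):
        self.prev = None
        self.velocity = np.zeros(2)

    def update(self, centroid):
        """Record the latest observed centroid (x, y) and update velocity."""
        c = np.asarray(centroid, dtype=float)
        if self.prev is not None:
            self.velocity = c - self.prev
        self.prev = c

    def predict(self):
        """Predict where the object should appear in the next frame."""
        return None if self.prev is None else self.prev + self.velocity
```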
According to an example, the processor is configured to track the position of a surgeon 140, and wherein the output unit is configured to output data representative of the area of interest of the medical treatment area from a viewing direction of the surgeon.
According to an example, the apparatus is configured to cooperate with a surgical illumination system 150.
In an example, the surgical illumination system is a surgical light. In an example, the first and second cameras are spaced apart by more than half the diameter of the surgical light. In an example, the first and second cameras are attached to a mount that is different to the mount for the surgical illumination system. In an example, the first and second cameras are attached to the ceiling. In an example, the surgical illumination system is positioned such that the area of interest of the medical treatment area is illuminated. In an example, the apparatus is configured to be retrofitted to a surgical illumination system, such as a surgical light. In other words, the first and second cameras can be provided with an "add-on" mechanism in which the cameras can be attached to a pre-existing surgical light. In an example, the first and second cameras are incorporated into the surgical illumination system. In an example, the first and second cameras are positioned at the outer rim of the surgical light.
In an example, the at least one image acquired by the first camera is recalculated so that it appears to have been acquired from a position between the position of the first camera and the position of the second camera. For example, the first image can be recalculated as if it had been acquired at the center of the surgical light. This can be done using the at least one image acquired by the second camera. In an example, the at least one combined image is recalculated so that it appears to have been acquired from a position between the position of the first camera and the position of the second camera, for example as if it had been acquired at the center of the surgical light. This can be done using the at least one image acquired by the first camera and the at least one image acquired by the second camera. In other words, the image can be recalculated as seen from a different point of view.
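A minimal sketch of one way such a viewpoint recalculation could be performed, assuming the scene around the area of interest is approximately planar at a known depth so that a single plane-induced homography suffices; the OpenCV call is standard, but the function, its parameters and the planarity assumption are illustrative, not taken from the disclosure:

```python
# Illustrative sketch only: re-rendering the first camera's image as if taken
# from a virtual camera midway between the two real camera positions.
import cv2
import numpy as np

def virtual_midpoint_view(image_first, K, R, t, plane_normal, plane_depth):
    """Warp image_first as if seen from a virtual camera between the cameras.

    Conventions assumed here: a point x1 in the first camera frame maps to
    the virtual frame as x_mid = R @ x1 + t, and the scene plane satisfies
    plane_normal . x1 = plane_depth (unit normal, depth in metres).
    K is the 3x3 intrinsic matrix, assumed identical for both viewpoints."""
    n = np.asarray(plane_normal, dtype=float).reshape(3, 1)
    t = np.asarray(t, dtype=float).reshape(3, 1)
    # Plane-induced homography: H = K (R + t n^T / d) K^-1
    H = K @ (R + t @ n.T / plane_depth) @ np.linalg.inv(K)
    h, w = image_first.shape[:2]
    return cv2.warpPerspective(image_first, H, (w, h))
```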
In an example, the first and second cameras are not rigidly connected to a light of the surgical illumination system. In an example, the first and second cameras are rigidly connected to the surgical light.
Having the first and second cameras separate from the surgical illumination system, or not rigidly attached to its light, enables the first and second cameras to be positioned independently from the light of the surgical illumination system. Having the first and second cameras attached to the surgical illumination system leads to simplicity of operation, because the cameras automatically point in the correct direction as the surgical light is moved.
In an example in which the apparatus comprises a third and a fourth camera, the third and fourth cameras can be incorporated into the surgical illumination system. In an example, the third and fourth cameras are positioned at the outer rim of the surgical light. In an example, the first and third cameras are spaced apart by less than half the diameter of the surgical light. In an example, the second and fourth cameras are spaced apart by less than half the diameter of the surgical light.
According to an example, the processing unit is configured to provide control input to the surgical illumination system. In an example, the processing unit is configured to provide adjustment of the light intensity of the surgical illumination system on the basis of the first camera and alternatively or additionally on the basis of the second camera. In an example, adjustment of the light intensity is based on the at least one image acquired by the first camera and alternatively or additionally is based on the at least one image acquired by the second camera. In an example, the apparatus is configured to provide gesture based illumination. For example, on the basis of the provided at least one image acquired by the first camera or alternatively or additionally on the basis of the provided at least one image acquired by the second camera, the processing unit is configured to interpret gestures - for example, interpret movements made by the surgeon's hands. In this example, the processing unit is configured to control the illumination based on the movements made by the surgeon, for example increasing the light intensity on the basis of a particular movement of the hand.
In this manner, the light intensity on the medical treatment area can be kept constant or adjusted as necessary. For example, as the surgical lamp is moved away from the medical treatment area the light output from the surgical illumination system can be increased in order that the light intensity at the medical treatment area remains as required.
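As a hedged illustration of such closed-loop intensity control, a simple proportional controller acting on the mean brightness of the region of interest would suffice; the lamp-power interface and all constants below are assumptions for the sake of the example:

```python
# Illustrative sketch only: keep the measured brightness of the area of
# interest constant as the lamp or table moves, by adjusting lamp power.
import numpy as np

def adjust_lamp_power(roi_image, current_power, target_brightness=180.0,
                      gain=0.005, power_range=(0.1, 1.0)):
    """Return an updated lamp power (0..1) from the mean grey level of the
    region of interest in the latest camera frame."""
    brightness = float(np.mean(roi_image))   # mean grey level, 0..255
    error = target_brightness - brightness   # positive -> scene too dark
    new_power = current_power + gain * error # proportional correction
    return float(np.clip(new_power, *power_range))
```

A gesture-based control path would sit in front of such a loop, with the recognized gesture changing the target brightness rather than the power directly.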
In an example, the tracking system is incorporated into the surgical illumination system. For example, the surgical light is mounted on an arm or boom and the tracking system is also mounted on the arm or boom. This prevents problems and conflicts with placement of the tracking system and surgical illumination system. Also, ease of use of the apparatus is increased because only one object has to be positioned correctly.
According to an example, the first camera is a hyperspectral camera, or the first camera is a multispectral camera. In an example, the second camera is a hyperspectral camera, or the second camera is a multispectral camera. In an example, any one or any number of the cameras is a hyperspectral camera or a multispectral camera. The term "hyperspectral" refers to a camera that can collect and enable processing of information from across a range of the electromagnetic spectrum; this information may extend beyond the visible range. The term "multispectral" refers to a camera that can capture image data at specific frequencies across the electromagnetic spectrum. In an example, the wavelengths may be separated by filters or by the use of instruments that are sensitive to particular wavelengths (for example gratings or prisms), i.e. multiple spectra are used, which is the reason for the term "multispectral". This may include light from frequencies beyond the visible range, such as infrared, which is also reflected in the prefix "hyper" of the aforementioned term "hyperspectral". Spectral imaging - be it multispectral or hyperspectral - may allow extraction of additional information from an image, especially information that the human eye fails to capture with its receptors for red, green and blue. In an example, the first camera and additionally or alternatively the second camera is a hyperspectral or multispectral filter-wheel camera for hyperspectral or multispectral imaging with a spectral range of 400 to 1000 nm (nanometers), or from 1000 to 1700 nm, or from 500 to 1700 nm. In an example, the first and/or second camera has several interchangeable filters, for instance 6, 8 or even more. In an example, the first and/or second camera has a charge-coupled device (CCD) with a resolution of 1392 x 1040 pixels or physical points in a raster image. In an example, the first and/or second camera has an indium gallium arsenide (InGaAs) sensor or any other semiconductor sensor with a resolution of 640 x 512 pixels, or a sensor with any other pixel resolution.
In this manner, the apparatus can enhance tissue contrast between different tissue components and this information can be provided to a surgeon. Combining the tissue contrast enhancement provided by the hyperspectral/multispectral camera with the removal of occluding objects in a combined image results in an enhanced visualization apparatus for use during surgery, which is not hampered by occluding objects. By providing a hyperspectral or multispectral image, the object to be removed from images can be better identified. For example, a surgeon's hands can be identified from the spectral content of the imagery associated with the blue gloves they are wearing. In other words, by providing enhanced contrast in images with enhanced object identification, the robustness of the apparatus for removing an object from images is improved.
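A minimal sketch of how spectral content might be used to flag glove pixels, assuming a band-indexed multispectral cube; the band indices and threshold are illustrative assumptions, not values from the disclosure:

```python
# Illustrative sketch only: flag pixels whose spectral signature matches a
# blue surgical glove in a multispectral cube of shape H x W x B.
import numpy as np

def glove_mask(cube, blue_band=1, red_band=3, ratio_threshold=1.6):
    """Return a boolean mask of pixels dominated by blue reflectance,
    assuming bands are ordered from blue toward infrared."""
    blue = cube[..., blue_band].astype(float) + 1e-6
    red = cube[..., red_band].astype(float) + 1e-6
    return (blue / red) > ratio_threshold
```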
In an example, the first camera and/or second camera is a thermal camera.
According to an example, the output unit comprises: a projection system 160. The processing unit is configured to provide input to the projection system such that image data is projected from the projection system onto the area of interest of the medical treatment area. In an example, the projection system is incorporated into the surgical illumination system. In an example, image data projected onto the area of interest of the medical treatment area comprises the at least one image acquired by the first camera. In an example, image data projected onto the area of interest of the medical treatment area comprises the at least one image acquired by the second camera. In an example, image data projected onto the area of interest of the medical treatment area comprises the at least one combined image. In an example, image data projected onto the area of interest of the medical treatment area comprises X-ray image data. In an example, image data projected onto the area of interest of the medical treatment area comprises 3D reconstruction information. In an example, image data projected onto the area of interest of the medical treatment area comprises CT image data. In an example, image data projected onto the area of interest of the medical treatment area comprises MRI image data. In an example, image data projected onto the area of interest of the medical treatment area comprises ultrasound image data.
In an example, the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises the at least one image acquired by the first camera onto a screen. In an example, the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises the at least one image acquired by the second camera onto a screen. In an example, the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises the at least one combined image onto a screen. In an example, the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises X-ray image data onto a screen. In an example, the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises CT image data onto a screen. In an example, the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises MRI image data onto a screen. In an example, the output unit is configured to output data representative of the area of interest of the medical treatment area that comprises ultrasound image data onto a screen.
In other words, the surgeon can be presented with data relating to the area of interest of the medical treatment area that augments what the surgeon sees with their own eyes. For example, the first and/or second camera can operate over a wavelength range extending beyond the visible, and this information can be presented to the surgeon overlaid on the medical treatment area or provided, for example, on a screen.
Or the surgeon can be provided with information acquired by complementary systems, such as X-ray, CT, MRI and/or ultrasound systems.
In an example, the first camera and additionally or alternatively the second camera is a camera configured to provide depth information. For example, the first and/or second camera can acquire time-of-flight data.
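Where per-pixel depth is available, occluder detection reduces to a threshold against the known depth of the treatment area; a minimal sketch, with the margin chosen purely for illustration:

```python
# Illustrative sketch only: with time-of-flight depth, anything significantly
# closer than the treatment plane can be marked as an occluder directly,
# without stereo matching.
import numpy as np

def occluder_mask_from_depth(depth_map, treatment_depth_m, margin_m=0.10):
    """True where the measured depth is well above the treatment plane."""
    return depth_map < (treatment_depth_m - margin_m)
```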
Fig. 2 shows an example of a system 200 for imaging in a medical treatment.
The system 200 comprises a surgical illumination system 150 and an apparatus 10 for imaging in a medical treatment. The apparatus 10 is provided as an example according to the above-mentioned Fig. 1. The apparatus 10 is configured to cooperate with the surgical illumination system 150.
In an example, in the combined image, image data representative of a part of the object in the at least one image acquired by the first camera at a location of the area of interest is replaced by image data at the corresponding location of the area of interest in the at least one image acquired by the second camera that does not contain image data representative of the part of the object. In an example, the at least one combined image does not contain image data representative of the part of the object.
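A minimal sketch of this replacement step, assuming the two images have already been registered to a common viewpoint and boolean occlusion masks are available for each; names are illustrative:

```python
# Illustrative sketch only: form the combined image by replacing occluded
# pixels of the first (registered) image with the corresponding pixels of
# the second image wherever the second view is unobstructed.
import numpy as np

def combine_images(img_first, img_second, occluded_first, occluded_second):
    """occluded_*: boolean masks of object pixels in each registered image."""
    combined = img_first.copy()
    fillable = occluded_first & ~occluded_second  # hole in 1, visible in 2
    combined[fillable] = img_second[fillable]
    return combined
```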
Fig. 3 shows a method 300 for imaging in a medical treatment for use with a surgical illumination system. The method comprises the following:
In a first providing step 310, also referred to as step a), at least one image of an area of interest of a medical treatment area from a first position is provided, wherein the at least one image was acquired by a first camera.
In a second providing step 320, also referred to as step b), at least one image of the area of interest of the medical treatment area from a second position is provided, wherein the at least one image was acquired by a second camera.
If an object is situated between the medical treatment area and the first camera such that the at least one image provided by the first camera comprises image data representative of a part of the object, the method further comprises the following:
In a generating step 330, also referred to as step c), at least one combined image of the area of interest of the medical treatment area is generated from the at least one image provided by the first camera and from the at least one image provided by the second camera. The at least one combined image is generated such that it does not contain at least some of the image data representative of the part of the object.
In an outputting step 340, also referred to as step d), data representative of the area of interest of the medical treatment area is output.
In an example, if the object is situated between the medical treatment area and the first camera such that the at least one image acquired by the first camera comprises the image data representative of the part of the object and the object is situated between the medical treatment area and the second camera such that the at least one image acquired by the second camera comprises image data representative of the part of the object, the method comprises generating at least one combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera, wherein the at least one combined image does not contain at least some of the image data representative of the part of the object acquired by the first camera and does not contain at least some of the image data representative of the part of the object acquired by the second camera. In an example, the first position and second position are separated by a known distance.
In an example, the at least one image acquired by the first camera comprises at least two images acquired at different times.
In an example, the method further comprises providing at least one image of the area of interest of the medical treatment area from a third position, wherein the at least one image was acquired by a third camera.
In an example, the method comprises tracking the object. In an example, the method comprises tracking the position of the surgeon, and wherein outputting of data representative of the area of interest of the medical treatment area comprises outputting data representative of the area of interest of the medical treatment area from a viewing direction of the surgeon.
In an example, the method comprises cooperating with a surgical illumination system. In an example, the method comprises providing control input to the surgical illumination system.
In an example, the at least one image of the area of interest of a medical treatment area from the first position comprises hyperspectral information or comprises multispectral information.
In an example, the method comprises projecting image data onto the area of interest of the medical treatment area. In an example, the method comprises providing input to a projection system such that image data is projected from the projection system onto the area of interest of the medical treatment area.
Fig. 4 shows a schematic set-up of another example of an apparatus 10 for imaging in a medical treatment. In the example, the apparatus 10 has been added to, or in other words is working in cooperation with, a surgical illumination system 150 such as a surgical light 150. The surgical light 150 is attached to the ceiling of a surgery room by way of a manipulation arm. A first camera 20 is attached to one side of the surgical light 150 and is arranged to view an area of interest 70 of the medical treatment area 80, which in the example shown is a part 70 of a patient 80 lying on a surgical table 170. A second camera 30 is attached to the other side of the surgical light 150 and is similarly arranged to view the area of interest 70 of the medical treatment area 80. A surgeon 140 is performing a procedure. During the procedure the hands 100 of the surgeon 140 form an object that partially occludes the area of interest 70 as seen by the cameras 20, 30.

Referring to Fig. 5, an image 60 of the area of interest 70 of the medical treatment area 80 has been acquired by the first camera 20, where a hand 100 of the surgeon 140 is partially occluding the area of interest 70. An image 90 of the area of interest 70 of the medical treatment area 80 has been acquired by the second camera 30, with the hand 100 of the surgeon 140 again partially occluding the area of interest 70. Because the cameras 20, 30 are substantially separated in distance, objects close to the cameras show a larger shift between the images 60, 90 than objects further away. In this example, the hand 100 of the surgeon 140 is closer to the cameras than the medical treatment area 80 and therefore shows a larger shift. Based on the shift, the distance of the hand 100 from the cameras 20, 30 can be calculated. Based on the shift, the distance of the area of interest 70 of the medical treatment area 80 from the cameras 20, 30 can also be calculated. In another example, the distance of the area of interest 70 of the medical treatment area from the cameras 20, 30 is determined from the known position of the surgical light 150 with respect to the surgical table 170. In this other example, the distance of the cameras 20, 30 to the area of interest 70 is determined from the known geometry of the surgery room in combination with the geometric arrangement of the arm holding the surgical light, or the distance can be determined through other means, such as GPS, radar or gyroscope position determination systems. Therefore, the hand 100 of the surgeon 140 that is substantially above the area of interest 70 can be identified and removed from the images 60, 90. A combined image 110 can then be created by filling in the pixels removed from the first image 60 with the equivalent pixels of the second image 90 at those locations where the pixels have not been removed in the second image 90. In other words, holes created in the images 60, 90 by the removal of the object 100, such as a surgeon's hand 100, can be filled in again by combining the images 60, 90. The combined image 110 is then presented to the surgeon 140 on a screen 50.
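A minimal sketch of the shift-based classification described above, assuming a calibrated focal length and baseline and per-feature pixel shifts obtained from matching; the function and its parameters are illustrative, not part of the disclosure:

```python
# Illustrative sketch only, following the Fig. 4/5 walkthrough: features that
# shift strongly between the two views must be close to the cameras, so they
# can be classified as belonging to the occluding hand rather than to the
# treatment area. f and B would come from calibration.
import numpy as np

def classify_matches(shifts_px, focal_px, baseline_m, treatment_depth_m,
                     margin_m=0.10):
    """shifts_px: per-feature horizontal shift between images 60 and 90.
    Returns True for features belonging to a nearby occluder."""
    shifts = np.maximum(np.asarray(shifts_px, dtype=float), 1e-6)
    depths = focal_px * baseline_m / shifts  # Z = f * B / d
    return depths < (treatment_depth_m - margin_m)
```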
The cameras 20 and/or 30 can be multispectral or hyperspectral cameras. The wavelength bands for camera 20 and/or 30 may lie in the visible or non-visible light spectrum and may comprise several wavelength bands, for instance:
(1) Blue: 0.450-0.520 μm (micrometers)
(2) Green: 0.515-0.600 μm
(3) Red: 0.60-0.69 μm
(4) Visible: 0.45-0.7 μm
(5) Infrared: 0.7-1.0 μm
(6) Near infrared: 1.0-3.0 μm
(7) Mid infrared: 3.0-50.0 μm
(8) Far infrared: 50.0-1000.0 μm
Extending the wavelength range enables the tissue contrast between several structures to be enhanced.
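As one hedged illustration of why an extended wavelength range can enhance tissue contrast, a normalized difference of two bands (for example near-infrared against red) is a common construction; the band choices and scaling below are assumptions, not values from the disclosure:

```python
# Illustrative sketch only: a normalized band-difference image that tends to
# separate tissue types with different near-infrared reflectance.
import numpy as np

def band_ratio_contrast(cube, nir_band, red_band):
    """Normalized difference of two bands of an H x W x B spectral cube,
    mapped to 0..1 for display."""
    nir = cube[..., nir_band].astype(float)
    red = cube[..., red_band].astype(float)
    ndi = (nir - red) / (nir + red + 1e-6)  # in [-1, 1]
    return (ndi + 1.0) / 2.0
```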
In another exemplary embodiment, a computer program or computer program element is provided for controlling an appropriate system that is characterized by being configured to execute the method steps according to one of the preceding embodiments.
The computer program element might therefore be stored on a computing unit, which might also be part of an embodiment. This computing unit may be configured to perform or induce performing of the steps of the method described above. Moreover, it may be configured to operate the components of the above-described apparatus. The computing unit can be configured to operate automatically and/or to execute the orders of a user. A computer program may be loaded into a working memory of a data processor. The data processor may thus be equipped to carry out the method according to one of the preceding embodiments.
This exemplary embodiment of the invention covers both a computer program that uses the invention right from the beginning and a computer program that, by means of an update, turns an existing program into a program that uses the invention.
Furthermore, the computer program element might be able to provide all necessary steps to fulfill the procedure of an exemplary embodiment of the method as described above.
According to a further exemplary embodiment of the present invention, a computer readable medium, such as a CD-ROM, is presented, wherein the computer readable medium has stored on it the computer program element described in the preceding section.
A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.
However, the computer program may also be presented over a network like the World Wide Web and can be downloaded into the working memory of a data processor from such a network. According to a further exemplary embodiment of the present invention, a medium for making a computer program element available for downloading is provided, which computer program element is arranged to perform a method according to one of the previously described embodiments of the invention.

It has to be noted that embodiments of the invention are described with reference to different subject matters. In particular, some embodiments are described with reference to method type claims, whereas other embodiments are described with reference to device type claims. However, a person skilled in the art will gather from the above and the following description that, unless otherwise notified, in addition to any combination of features belonging to one type of subject matter, any combination between features relating to different subject matters is also considered to be disclosed with this application.
However, all features can be combined providing synergetic effects that are more than the simple summation of the features.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing a claimed invention, from a study of the drawings, the disclosure, and the dependent claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.


CLAIMS:
1. An apparatus (10) for imaging in a medical treatment for use with a surgical illumination system, the apparatus comprising:
a first camera (20) configured to be positioned at a first position; a second camera (30) configured to be positioned at a second position;
a processing unit (40);
an output unit (50);
wherein the first camera is configured to acquire at least one image (60) of an area of interest (70) of a medical treatment area (80) from the first position, and the second camera is configured to acquire at least one image (90) of the area of interest of the medical treatment area from the second position,
wherein the first camera is configured to provide the at least one image acquired by the first camera to the processing unit, and the second camera is configured to provide the at least one image acquired by the second camera to the processing unit;
wherein if an object (100) is situated between the medical treatment area and the first camera such that the at least one image provided by the first camera comprises image data representative of a part of the object, the processing unit is configured to generate at least one combined image (110) of the area of interest of the medical treatment area from the at least one image provided by the first camera and the at least one image provided by the second camera, wherein the at least one combined image does not contain at least some of the image data representative of the part of the object;
wherein, in the combined image, image data representative of the part of the object in the at least one image acquired by the first camera at a location of the area of interest is replaced by image data at the corresponding location of the area of interest in the at least one image acquired by the second camera that does not contain image data representative of the part of the object; and
wherein the output unit is configured to output data representative of the area of interest of the medical treatment area.
2. Apparatus according to claim 1, wherein if the object is situated between the medical treatment area and the first camera such that the at least one image acquired by the first camera comprises the image data representative of the part of the object and the object is situated between the medical treatment area and the second camera such that the at least one image acquired by the second camera comprises image data representative of the part of the object, the processing unit is configured to generate at least one combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera, wherein the at least one combined image does not contain at least some of the image data representative of the part of the object acquired by the first camera and does not contain at least some of the image data representative of the part of the object acquired by the second camera.
3. Apparatus according to claim 1 or 2, wherein the first camera and the second camera are separated by a known distance.
4. Apparatus according to any of claims 1-3, wherein the at least one image acquired by the first camera comprises at least two images acquired at different times.
5. Apparatus according to any of claims 1-4, comprising:
- a third camera (120) configured to be positioned at a third position;
wherein the third camera is configured to acquire at least one image of the area of interest of the medical treatment area from the third position, and wherein the third camera is configured to provide the at least one image acquired by the third camera to the processing unit; and
wherein the processing unit is configured to generate the combined image of the area of interest of the medical treatment area from the at least one image acquired by the first camera and the at least one image acquired by the second camera and the at least one image acquired by the third camera.
6. Apparatus according to any of claims 1-5, comprising:
a tracking system (130);
wherein the processing unit is configured to track the object using the tracking system.
7. Apparatus according to claim 6, wherein the processor is configured to track the position of a surgeon (140), and
wherein the output unit is configured to output data representative of the area of interest of the medical treatment area from a viewing direction of the surgeon.
8. Apparatus according to any of claims 1-7, wherein the apparatus is configured to cooperate with a surgical illumination system (150).
9. Apparatus according to claim 8, wherein the processing unit is configured to provide control input to the surgical illumination system.
10. Apparatus according to any of claims 1-9, wherein the first camera is a hyperspectral camera, or the first camera is a multispectral camera.
11. Apparatus according to any of claims 1-10, wherein the output unit comprises:
a projection system (160),
wherein the processing unit is configured to provide input to the projection system such that image data is projected from the projection system onto the area of interest of the medical treatment area.
12. A system (200) for imaging in a medical treatment comprising:
a surgical illumination system (150); and
an apparatus (10) for imaging in a medical treatment according to any one of the preceding claims;
wherein the apparatus is configured to cooperate with the surgical illumination system.
13. A method (300) for imaging in a medical treatment for use with a surgical illumination system, comprising:
a) providing (310) at least one image of an area of interest of a medical treatment area from a first position, wherein the at least one image was acquired by a first camera; b) providing (320) at least one image of the area of interest of the medical treatment area from a second position, wherein the at least one image was acquired by a second camera; wherein if an object is situated between the medical treatment area and the first camera such that the at least one image provided by the first camera comprises image data representative of a part of the object, the method further comprises:
c) generating (330) at least one combined image of the area of interest of the medical treatment area from the at least one image provided by the first camera and the at least one image provided by the second camera, wherein the at least one combined image does not contain at least some of the image data representative of the part of the object; wherein, in the combined image, image data representative of the part of the object in the at least one image acquired by the first camera at a location of the area of interest is replaced by image data at the corresponding location of the area of interest in the at least one image acquired by the second camera that does not contain image data representative of the part of the object; and
d) outputting (340) data representative of the area of interest of the medical treatment area.
14. A computer program element for controlling an apparatus according to one of claims 1 to 12, which when executed by a processor is configured to carry out the method of claim 13.
15. A computer readable medium having stored thereon the program element of claim 14.
PCT/EP2016/070992 2015-09-10 2016-09-06 Apparatus for imaging in a medical treatment WO2017042171A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EPEP15184656.5 2015-09-10
EP15184656 2015-09-10

Publications (1)

Publication Number Publication Date
WO2017042171A1 true WO2017042171A1 (en) 2017-03-16

Family

ID=54140288

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/070992 WO2017042171A1 (en) 2015-09-10 2016-09-06 Apparatus for imaging in a medical treatment

Country Status (1)

Country Link
WO (1) WO2017042171A1 (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030164953A1 (en) 2002-03-01 2003-09-04 Thomas Bauch Operation lamp with camera system for 3D referencing
US20060079752A1 (en) * 2004-09-24 2006-04-13 Siemens Aktiengesellschaft System for providing situation-dependent, real-time visual support to a surgeon, with associated documentation and archiving of visual representations
US20090088773A1 (en) * 2007-09-30 2009-04-02 Intuitive Surgical, Inc. Methods of locating and tracking robotic instruments in robotic surgical systems
US20140288413A1 (en) 2013-03-21 2014-09-25 Samsung Electronics Co., Ltd. Surgical robot system and method of controlling the same

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11612307B2 (en) 2016-11-24 2023-03-28 University Of Washington Light field capture and rendering for head-mounted displays
US11882993B2 (en) 2019-12-30 2024-01-30 Cilag Gmbh International Method of using imaging devices in surgery
US11864729B2 (en) 2019-12-30 2024-01-09 Cilag Gmbh International Method of using imaging devices in surgery
US11937770B2 (en) 2019-12-30 2024-03-26 Cilag Gmbh International Method of using imaging devices in surgery
US11925309B2 (en) 2019-12-30 2024-03-12 Cilag Gmbh International Method of using imaging devices in surgery
US11744667B2 (en) * 2019-12-30 2023-09-05 Cilag Gmbh International Adaptive visualization by a surgical system
US11832996B2 (en) 2019-12-30 2023-12-05 Cilag Gmbh International Analyzing surgical trends by a surgical system
US11850104B2 (en) 2019-12-30 2023-12-26 Cilag Gmbh International Surgical imaging system
US11925310B2 (en) 2019-12-30 2024-03-12 Cilag Gmbh International Method of using imaging devices in surgery
US20210196109A1 (en) * 2019-12-30 2021-07-01 Ethicon Llc Adaptive visualization by a surgical system
US11896442B2 (en) 2019-12-30 2024-02-13 Cilag Gmbh International Surgical systems for proposing and corroborating organ portion removals
US11908146B2 (en) 2019-12-30 2024-02-20 Cilag Gmbh International System and method for determining, adjusting, and managing resection margin about a subject tissue
US11741619B2 (en) 2021-01-04 2023-08-29 Propio, Inc. Methods and systems for registering preoperative image data to intraoperative image data of a scene, such as a surgical scene
US20230020346A1 (en) * 2021-07-14 2023-01-19 Cilag Gmbh International Scene adaptive endoscopic hyperspectral imaging system
WO2023052960A1 (en) * 2021-09-29 2023-04-06 Cilag Gmbh International Surgical devices, systems, and methods using fiducial identification and tracking

Similar Documents

Publication Publication Date Title
WO2017042171A1 (en) Apparatus for imaging in a medical treatment
JP7444932B2 (en) Systems and programs for bleeding detection systems
US11793390B2 (en) Endoscopic imaging with augmented parallax
US11275249B2 (en) Augmented visualization during surgery
US11033188B2 (en) Imaging device and method for generating an image of a patient
EP3073894B1 (en) Corrected 3d imaging
JP6609616B2 (en) Quantitative 3D imaging of surgical scenes from a multiport perspective
US20190090955A1 (en) Systems and methods for position and orientation tracking of anatomy and surgical instruments
US20180053335A1 (en) Method and Device for Displaying an Object
US20150281680A1 (en) System and method for triangulation-based depth and surface visualization
US20180263710A1 (en) Medical imaging apparatus and surgical navigation system
US20200129240A1 (en) Systems and methods for intraoperative planning and placement of implants
WO2020045015A1 (en) Medical system, information processing device and information processing method
JP2022087198A (en) Surgical microscope having a data unit and method for overlaying images
KR20130108320A (en) Visualization of registered subsurface anatomy reference to related applications
CA3063693A1 (en) Systems and methods for detection of objects within a field of view of an image capture device
KR100726028B1 (en) Augmented reality projection system of affected parts and method therefor
JP7392654B2 (en) Medical observation system, medical observation device, and medical observation method
KR101667152B1 (en) Smart glasses system for supplying surgery assist image and method for supplying surgery assist image using smart glasses
JP2024501897A (en) Method and system for registering preoperative image data to intraoperative image data of a scene such as a surgical scene
JP2020074926A (en) Medical observation system, signal processing device and medical observation method
US20170091554A1 (en) Image alignment device, method, and program
EP3666166B1 (en) System and method for generating a three-dimensional model of a surgical site
US20220096165A1 (en) Interventional device tracking

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16763015

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16763015

Country of ref document: EP

Kind code of ref document: A1