WO2009144729A1 - Laparoscopic camera array

Publication number: WO2009144729A1
Authority: WIPO (PCT)
Application number: PCT/IL2009/000536
Other languages: French (fr)
Inventors: Noam Hassidov, Moshe Shoham
Original assignee: Technion Research & Development Foundation Ltd.
Priority: US 61/071,955

Classifications

    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; illuminating arrangements therefor
    • A61B 1/32 Devices for opening or enlarging the visual field, e.g. of a tube of the body
    • A61B 1/0008 Insertion part of the endoscope body characterised by distal tip features
    • A61B 1/04 Endoscopes combined with photographic or television appliances
    • A61B 1/3132 Endoscopes for introducing through surgical openings, for laparoscopy
    • A61B 1/00183 Optical arrangements characterised by the viewing angles, for variable viewing angles
    • A61B 2017/00283 Minimally invasive surgery with a device releasably connected to an inner wall of the abdomen during surgery, e.g. an illumination source

Abstract

Insertion and deployment methods are described for multi-camera arrays for laparoscopic use, mounted on rigid intra-body structures and deployed in predefined fixed positions and orientations. This enables the image synthesizing algorithms to stitch the 3-D mesh and video images more accurately and rapidly than prior art methods, in which the camera positions and orientations are not predefined and some registration procedure may be necessary to define their positions relative to each other. The structures are constructed such that they can be inserted through a single body incision, making the procedure minimally invasive, and are deployed to their extended position only once within the body cavity. Additionally, methods of scanning the work region using a laser line source are presented, enabling the generation of a three-dimensional mesh of the profile of the work region, onto which video images of the work region can be readily stitched.

Description

LAPAROSCOPIC CAMERA ARRAY

FIELD OF THE INVENTION

The present invention relates to the field of enhancing laparoscopic vision for surgical procedures, especially by the use of camera arrays deployed in known configurations in the region of the surgical site.

BACKGROUND OF THE INVENTION

Laparoscopic surgery is an MIS (Minimally Invasive Surgery) procedure that enables physicians to operate on organs, such as those in the abdominal cavity, via small ports rather than long incisions in the abdominal wall. Laparoscopic procedures involve minimal tissue trauma and fast recovery times. However, unlike in open surgery, the physician has a limited field of view and no direct view of the operated organs. This results in poor hand-eye coordination and long training periods to become proficient. The physician is required to allocate significant cognitive resources in order to align the surgical tools, organs and hands to fit the video images.

Studies have shown that optimal positioning of the laparoscope and monitor yields high performance, both in task execution time and in error rate, but during surgery, circumstances may arise that require the physician to deviate from optimal positioning and to move the laparoscope, causing performance to decay.

The pursuit of optimal laparoscope positioning and of the best image may be tiring for the physician, and may require any one of a number of arrangements, such as a single surgeon operating with one hand holding the laparoscope and the other hand holding a surgical tool; two surgeons, with the junior surgeon holding the laparoscope and the senior surgeon operating with both hands holding the surgical tools; or a single surgeon operating with both hands holding the surgical tools and the laparoscope maneuvered by a robot.

There are a number of problems associated with laparoscopic surgery related to the visual perception which the surgeon has of the operating area. In classical "open surgery" methods, the surgeon learns by looking at anatomy charts of the human body, followed by views of a cadaver during his studies and in an operating room during his training period, and finally when he performs surgery. As a result, the surgeon's eyes and cognitive processes perceive the subject being operated on with what may be termed "Natural Body Space Perception".

In laparoscopic surgery, the surgeon looks through a small pupil (i.e. the camera) with a limited field of view, and the natural body space perception is missing. Aligned Multi-Camera Arrays (MCA) that enable an improved display for laparoscopic surgery procedures have been described in a number of prior art publications. The camera arrays contain several cameras, each with a limited field of view. The stitched image covers a wide workspace, enabling the surgeon to perform a task without the need to rotate and tilt the laparoscope in an attempt to find the optimal viewing angle. This wide format image cannot be captured when using a classical laparoscope, due to the laparoscope's limited field of view, which covers only a given workspace. This situation is shown in Figs. 1A to 2B.

The laparoscopic image is generally presented "as is" on a standard monitor mounted on a rack adjusted to the optimal height opposite the surgeon, as is customary today in the laparoscopy operating room. The surgeon looks at the monitor as if he were viewing a television, and he has to match the 2-dimensional vertical image presented on the monitor to the scene captured by the laparoscope's camera, in a situation where the camera is repeatedly steered within the body cavity, possibly rotating about 3 axes and moving in 3 directions. This task may be confusing for the surgeon.

There now follows a review of some of the issues raised in the prior art in the field.

Image-based rendering

According to this system, a given 3-D object is set on a fixed spot, and a single camera captures images while orbiting the 3-D object at a set radius (R1), in a 360-degree rotation, looking at the center of the 3-D object. Assuming a static picture is taken every 3.6 degrees during the orbital rotation, then 100 pictures are generated by the end of the process. Using an image processing algorithm, and without the need to reconstruct the exact 3-D geometry of the object, a new image can be generated that simulates a picture taken at 5 degrees (for example), a position where the camera has never actually been. This image is set on the orbital rotation track (at radius R1) and can be zoomed in to magnify without changing the perspective.

Light field rendering

If pictures are taken every 1/3 of a degree, then a collection of up to a thousand pictures is obtained. Due to the large database, a new image can be generated that simulates a picture taken at any new position between the 3-D object and the orbital rotation track (at radius R1), without the need to reconstruct the exact 3-D geometry. In fact it is possible to generate a new image captured from a position where the camera has never been. If the images are dense enough, then using image algorithms, one can construct a new image with correct perspective views from observer positions where one has never actually stood.
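
The core operation in these rendering techniques can be illustrated with a minimal sketch: to synthesize a view at an arbitrary angle, find the two captured views that bracket it on the orbital track and blend them in proportion to angular distance. This is only a one-dimensional caricature of full light field rendering, which interpolates in four dimensions; all names below are illustrative.

```python
import numpy as np

def synthesize_view(views, step_deg, theta_deg):
    """Approximate the view at angle theta_deg by blending the two captured
    views that bracket it on the orbital track (views taken every step_deg
    degrees). A 1-D angular interpolation only."""
    pos = (theta_deg % 360.0) / step_deg
    lo = int(pos) % len(views)
    hi = (lo + 1) % len(views)          # wrap around the 360-degree track
    w = pos - int(pos)                  # blend weight toward the next view
    return (1.0 - w) * np.asarray(views[lo]) + w * np.asarray(views[hi])
```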

Both the image-based rendering technique and the light field rendering technique are based on a large collection of pictures (hundreds and up to thousands of pictures). Using such an enormous database requires high-performance computing, which could be challenging to run in real time, as needed in live surgery. Therefore, those methods are generally unsuitable for stitching images from multi-camera arrays using the computing capabilities currently available.

3-D geometry and light field

If a real 3-D geometrical model of the object is available, then it is possible to use the same methodology but with fewer images taken (less dense capturing, possibly only every 10 degrees). In this case the images that are captured by the camera are projected onto the 3-D geometrical model, and then a new image can be generated. This image can be set to a new position where the camera has never been, but will be based upon reconstruction from both the 3-D geometry and the images from the orbital camera.

In this 3-D geometry and light field technique, a pre-captured 3-D model is needed. When live surgery is being performed, the organs and the tools move and change continually. Therefore, a 3-D geometry that has been pre-captured cannot be integrated into the process. If the organs move, the 3-D surface will change and the "new" images will be projected upon an "old" 3-D surface, causing the real-time image to look distorted. The method mentioned above is therefore not suitable for stitching images in real-time multi-camera array applications.

Multi-camera arrays

Stereoscopic optical systems that produce three-dimensional views are well known. A majority of these systems include two separate cameras that provide separate side-by-side images, similar to the human eyes. The eyes, located a given distance apart, enable the human brain to process the captured images and to reconstruct (a) panoramic views with low 3-D perception and (b) a limited front view with high 3-D perception. This idea has been developed over the years. US Patent Nos. 6,704,043, 7,116,352 and 7,154,527 show several examples of such systems.

The stereoscopic optical systems technique as presented above is similar to the auto-stereoscopic 3-D display in its benefits and limitations. Contemporary laparoscopes can present a 3-D image on a 3-D display. Using such a technique might improve depth perception, but the laparoscope is still subject to tilting and rotation, which causes disorientation and poor hand-eye coordination.

Fish-eye camera

Another method used is the fish-eye view correction algorithm, as presented in US Patent No. 5,313,306. The acquisition method presented uses a fish-eye camera that can be mounted on the tip of an endoscope. The distorted image is processed by a computing unit, transforming it into a straightened and flattened view that can be easily viewed.

This technique as presented above has several limitations: (a) despite the image correction, the edges of the image have low resolution due to a lack of pixels at the circumference of the image; (b) if a surgical tool is maneuvered above the workplane, it is difficult to correct the distorted image to fit both the 2-D workplane and the 3-D working tool; and (c) since there is a single focal point, the fish-eye camera can easily be obstructed by a working tool.

A number of endoscopes using Multi Camera Arrays for use in the medical imaging fields, and especially for laparoscopic use, have been described in prior art publications. Included are WO 2008/006180 for "Endoscopic Vision System" to Catholic University of Leuven; US 2003/0176794 for "Surgical Imaging Device" to M.P. Whitman; US 2006/0020213 for "Surgical Imaging Device" to M.P. Whitman; US 2002/0007110 for "Endoscope, in particular having Stereo-Lateral-View Optics" to K. Irion; US 2003/0191364 for "Stereo Laparoscope with Discrete Working Distance" to R. Czarnek et al.; US 2005/0272979 for "Visual Means of an Endoscope" to F. Pauker et al.; WO 2001/049165 for "Endoscope" to I. Herrmann; US 2006/0122580 for "Tool Insertion Device for use in Minimally Invasive Surgery" to P. Dannan; US 6,261,226 for "Electronically Steerable Endoscope" to M. A. McKenna et al., and WO 2007/110620 for "Endoscope with a Plurality of Image Capturing Means" to T. A. Salem.

The disclosures of each of the publications mentioned in this section and in other sections of the specification, are hereby incorporated by reference, each in its entirety.

SUMMARY OF THE INVENTION

The methods and systems described in the current application tackle the problem of visual space perception within the operating cavity in two ways: (a) acquisition of a wide and high quality image using laparoscopic instrumentation, and

(b) presenting the image in such a way that the surgeon will have an image perception as close as possible to the "Natural Body Space Perception".

Unlike the above-described prior art systems, in the present innovation the MCA system performs the following tasks:

(a) Continuous on-going registration of the inner cavity 3-D geometry, and acquiring new 3-D geometry even if the organs change their shape and location.

(b) Capturing real-time images using several cameras (a camera array).

The presented system projects the images from the cameras onto the 3-D geometry and stitches the images into one continuous large image. It is this new image that is presented to the surgeon.

The above process includes the following stages:

3-D geometry acquisition; multi-camera image acquisition; image processing; and image presentation to the surgeon, all performed in real time.

GIS 3-D geometry and aerial photography

The previously mentioned methodology (3-D geometry and light field) is similar to what is done in aerial photography. The first layer is a 3-D geometry, based on a GIS (Geographic Information System) model of a given landscape, and the second layer consists of images taken by an airplane carrying a still camera and keeping a fixed altitude. After the flight, the images are projected onto the 3-D GIS geometry, resulting in a 3-D stitched image. This 3-D stitched image can then be the source for a new, smaller image taken from a point to which the airplane has never traveled. Moreover, assuming that the airplane is flying at a 7 km altitude taking 100 images (in a 10x10 array), the new stitched image can be presented as if a satellite had captured it as a single zoomed, high-resolution image from a 300 km altitude.

In the GIS 3-D geometry and aerial photograph technique, the process takes place after the plane has landed, and a 3-D model is built upon known GIS files. This method can yield good results but requires high computing power, and a pre-defined 3-D GIS file is needed. The method mentioned above is not suitable for stitching images in multi-camera array applications.

Seeing through obstacles

It is known that if a needle is held in front of the eye, since the needle is narrower than the pupil of the eye, it adds a haze to the view of the world, but it does not completely obscure any part of it. An obvious application of this principle is to "see through" objects consisting of many small parts, like trees or crowds of people. It is inconvenient to build a camera with a lens that is larger than a tree leaf, not to mention a person, but such a camera can be simulated by capturing and re-sampling a light field. For example, if there is an array of N x N cameras pointing at a scene, one can simulate the focusing effect of a lens as large as the array in the following way. Consider a single pixel in the output image. Using geometrical optics, calculate the point's location on the plane of best focus that would be imaged onto this pixel by the giant lens. Now select the image sample from the view recorded by each camera, possibly with interpolation from neighboring samples, whose line of sight passes through that point. Add together these N x N samples. Repeat this procedure for each of the P x P pixels in the output image.

Thus, after work proportional to N² x P², a perspective view of the scene has been constructed, but using a synthetic camera having a large aperture and therefore a shallow depth of field. This could be described as "digital focusing".
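
For a planar camera array viewing a roughly fronto-parallel scene, this procedure reduces to the classic shift-and-add form of synthetic-aperture refocusing, sketched below. The integer-pixel shifts, wrap-around image edges, and the alpha focus parameter are simplifying assumptions for illustration, not part of the disclosure.

```python
import numpy as np

def refocus(images, offsets, alpha):
    """Shift-and-add synthetic-aperture refocusing over a planar camera array.

    images:  list of HxW grayscale frames, one per camera
    offsets: list of (u, v) camera positions relative to the array centre,
             expressed in pixels of parallax per unit of the focus parameter
    alpha:   focus parameter selecting the plane of best focus
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (u, v) in zip(images, offsets):
        # A point on the chosen focus plane moves by alpha*(u, v) between
        # cameras; undoing that shift aligns all views of the plane.
        dy, dx = int(round(alpha * v)), int(round(alpha * u))
        acc += np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
    # Averaging the aligned views keeps the focus plane sharp, while objects
    # off the plane (e.g. occluders) blur into a haze.
    return acc / len(images)
```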

In the "seeing through obstacles" technique, the process is based on a large array of cameras and dose not use a 3-D geometrical model. Introducing such a large array into the body's cavity in difficult and involves several technical barriers. The computing power needed to transfer tens of live video streams into one real time video stream is difficult to achieve at present.

In the present disclosure, the MCA uses a 3-D geometry model that is generated by the system, and only a limited number of cameras is needed to cover the workspace. For example, if 4 cameras (cameras 1-4) cover the entire workspace, adding 2 additional cameras (2a, 3a) will yield new images overlapping those of the original 4 cameras. Therefore, if one of cameras 1-4 is partly blocked, such as by an organ or a working tool, the system can compensate using camera 2a or 3a and generate a "clear" image even though one of the cameras is partly blocked.

Arrays of lenses

If the range of viewpoints spans a short baseline (from inches to microns), then the multiple camera array can be replaced with a single camera and an array of lenses. The use of lens arrays to capture light fields is known as integral photography. If a sensor is placed behind an array of small lenses (lenslets), each lenslet records a perspective view of the scene observed from that position on the array. This constitutes a light field.

Inserting a microlens array between the sensor and the main lens of a photographic camera creates a plenoptic camera. Starting from the light fields recorded by such a camera, one can create perspective flybys and multi-perspective panoramas, although the range of available viewpoints is limited by the diameter of the camera's aperture. This works best in macro-photography, where the scene is close to the camera and is therefore large relative to the camera's aperture.

Auto-stereoscopic 3-D display

A small linear camera array is used, together with a special image display method. The six cameras are set in a linear array (side by side), and each camera image is presented on a given portion of a screen, in such a way that each eye views a different image. The surgeon thus views a different image with each eye, enabling him to reconstruct a 3-D image and enhance his perception through stereoscopic viewing. A linear array mounted in the tip of a 10 mm laparoscope has been demonstrated.

The use of rod lenses was proposed to transfer the images from end to end, as these retain the picture definition. At the proximal end the rod lenses will image onto CCD chips. Each rod lens can image onto one or three chips, the latter option providing higher color definition. The housing for the CCD chips should be similar in size to the housing for existing 2D laparoscope cameras.

The auto-stereoscopic 3-D display technique comprises 6 rod lenses mounted inside a 10 mm laparoscope tip. The laparoscope can be expanded up to a 15 mm outer diameter, in order to gain a wider area of lenses. The above-mentioned configuration will yield a 3-D image that is presented on a TV screen and can be more easily interpreted by the human mind. Depth perception will help the surgeon to perform his task, but the laparoscope used is still subject to tilting and rotation, which causes disorientation and poor hand-eye coordination.

In the present invention, the cameras in the MCA are not mounted inside a single laparoscope tip, and the distance between the cameras is not limited. The only limitation is the cavity size and space. The distance between the cameras/rod lenses in the auto-stereoscopic 3-D display spans only a few millimeters, but the distance between the cameras in the present MCA can span up to several centimeters.

Moreover, in the auto-stereoscopic 3-D display technique, the camera array and the light source are adjacent, almost collinear, with only a few millimeters separating them. In such illumination conditions the shadow is cast behind the object and cannot be seen by the surgeon. In the present invention, the light sources are placed far from the cameras and can cast a shadow that the surgeon can view. Viewing such a shadow helps in estimating the distance of the tool to the surface and improves depth perception.

Shadow casting and 3-D perception

Shadows are enfolded in the images that we capture every day. Cast shadows assist in gaining a solid 3-D perception; we can estimate the distance between objects with the assistance of shadows. For example, if two people are standing one behind the other in the observer's line of sight, the observer will see the first person while the second is hidden, and in this case the observer cannot determine the distance between them. But if the light source (the sun or a street lamp) is located far from the observer's eye, a shadow will be projected on the ground, which, when viewed by the observer, enables an estimation of the distance between the pair to be made.

When using a single-camera laparoscope, the camera and the light source are almost co-aligned, with only a few millimeters separating them. In this setup the shadow is cast behind the object and the camera cannot capture it, resulting in low depth perception.

In the prior art, a system has been described that presents a shadow generated by both the endoscope's embedded light source and an additional light located far from the camera. This concept results in higher performance during endoscopic tasks. However, it might require an additional incision in the abdominal wall in order to insert the secondary light source.

In the present innovation, the multiple light sources are embedded in the MCA system. Therefore, only one incision in the abdominal wall is needed, as performed today in a laparoscopy procedure.

The above-described prior art describes stitching methods for generating a single image from a number of overlapping separate images. It is known that when stitching images of 2-D origin, taken by an array of 2-D cameras or scanners, the stitching process is straightforward and there is virtually no image distortion. On the other hand, when using images of 3-D origin, the stitching process is not simple and the image may be distorted in a manner unacceptable for medical or surgical use.

A method is described in this disclosure, of reducing image distortion arising from the stitching of overlapping 3-D images taken from cameras in different locations. The proposed method is based on the fact that the surgeon is primarily interested in a distortion-free, high quality image of the region where he is working, and image distortions of surrounding tissues and organs may not be critical.

The image stitching software is therefore adapted, and involves an algorithm containing a number of steps to achieve this object:

(a) The tip of the working tool is found, by known image processing routines.

(b) The camera in which the tip of the tool is best viewed is then defined as the master camera, and its image the master image, while the other images become slave images.

(c) The software stitches the slave images to the master image and to the other slave images, in such a manner that the master image has minimal distortion. Any distortion in the slave images, being peripheral to the centre of the region of interest, is of less importance and can therefore be neglected.

(d) The system returns to stage (a).

(e) If the system finds that the working tool tip position has changed, the software will determine which camera provides the highest quality image of the tool-tip region, and will redefine that camera as supplying the master image for the stitching process.

By this means the surgeon is provided with a high quality image at the point at which he is working, coupled with a reasonable view of the surrounding areas to which he may need to make reference during the surgical procedure.

In the article "Low-Cost Laser Range Scanner and Fast Surface Registration Approach" by S. Winkelbach et al., published in "Lecture notes in computer science" on pages 718-728, VoI 4174/2006, by Springer Verlag (2006), there is described a method of scanning with a mechanically scanned line laser source. Software for use with such a method is available from the David Laserscanner organization. The reflection of this line laser source from the surface profile of the internal features within a body cavity, is imaged by any of the cameras of the array, and signal processing algorithms are used to build a complete three-dimensional image from these profiles. Images synthesized from the acquisition of just the outer profiles of the internal structures of the cavity, is believed to provide more readily distinguishable features than the prior art video images taken of the internal body environment, with its uniform red color and few distinguishing highlights. Furthermore, the image processing algorithms available for use in the prior art video imaging methods may have difficulties in stitching images of body parts having strong three-dimensional features, whereas surface profilers can readily handle such depth features.

This disclosure describes a number of novel insertion and deployment methods for multi-camera arrays. Some of the above-mentioned prior art references describe methods of insertion and deployment of multiple cameras. The implementations described in the present disclosure describe systems in which the cameras are deployed in essentially predefined fixed positions and fixed orientations. This has the operative advantage that the image synthesizing algorithms can be made quicker and simpler. This may enable more accurate and speedier stitching of the 3-D mesh and images than prior art methods, where the cameras may be regarded as floating within the body cavity, or may be fixed to a wall of the cavity, where they are subject to movements, and where some sort of registration procedure is necessary to define their positions relative to each other.

The specific deployment systems and methods shown in this disclosure generally involve multi-camera arrays mounted on rigid intra-body structures or frames, used to support the cameras in predefined positions and orientations. These structures are generally constructed such that they can be inserted through a single incision, making the procedure minimally invasive, and are deployed to their extended position only once within the body cavity. The prior art generally shows cameras whose positions are variable, or intra cavity cameras which would appear to require an additional incision for each camera.

With regard to the display of the intra-cavity images to the surgeon, in contrast to prior art methods, this disclosure describes the mounting of the monitor in a position over the abdomen of the patient, exactly over the operation site, such that the surgeon sees the images in the exact position and orientation used by his hands while performing the operation. This is described as a "virtual abdominal window", and it should be advantageous to the surgeon in providing a more lifelike, real-time image aligned correctly relative to his tactile motions. The proposed display system of the present invention presents a real-time image, in a correctly oriented position on the monitor, such that a natural view is provided to the surgeon, based only on images obtained in real time from the body cavity. As an alternative to images obtained from a camera array within the body cavity, similar results can be obtained using separate cameras inserted into the body cavity through separate laparoscopic entries.

One exemplary implementation of the systems described in this disclosure involves a laparoscopic imaging system comprising:

(i) a longitudinal element having dimensions such that it will pass through a single laparoscopic port in the wall of a body cavity, the longitudinal element comprising:

(a) a plurality of at least three imaging elements for generating images, and

(b) at least one illuminating source, and

(ii) an image processing system adapted to transform at least some of the images to at least one new image containing more information than that contained in any one of the at least some images, wherein the plurality of imaging elements and the at least one illuminating source are adapted to be deployed after passage into the body cavity, into an expanded structure, in which the plurality of imaging elements and the at least one illuminating source are disposed in locations known relative to each other.

In such a laparoscopic imaging system, the longitudinal element may have lateral dimensions substantially smaller than the expanded structure. Furthermore, the laparoscopic port may have an internal diameter of less than 15 mm.

In any of these exemplary systems, the longitudinal element may comprise a number of fingers, the fingers being attached to the longitudinal element such that they can be deployed at an angle to the axis of the longitudinal element. In such a situation, at least one of the fingers may be attached to the longitudinal element by means of either a hinge or a pivot. Additionally, at least one of the fingers may comprise at least one imaging element and one illumination source.

In any of these implementations, the fingers may be deployable to positions having angular orientations substantially different from the axis of the longitudinal element. In such cases, the angular orientations may conveniently be aligned at from 60 to 120 degrees from the axis of the longitudinal element.

Yet other implementations may involve laparoscopic imaging systems as described above, in which at least one of the fingers may comprise an encoder to measure the angle of deployment of the finger. Furthermore, the fingers may be disposed such that a surgical working tool can be passed through the longitudinal element at least after deployment of the fingers, and optionally also before.

Additional implementations may involve a laparoscopic imaging system as described hereinabove, and in which the light for at least one illumination source is provided from a source outside of the body cavity. Also, the image from at least one of the imaging elements may be captured by a camera located outside of the body cavity. The laparoscope may also comprise a disposable flexible sheath which covers the fingers, such that the fingers do not require sterilization between procedures.

Additionally, alternative implementations of any of the above-described systems may further involve a laparoscopic imaging system wherein the expanded structure comprises at least one inflatable balloon, which is inflated only after the longitudinal element has passed through the laparoscopic port.

In any of these laparoscopic imaging systems, at least one of the fingers may comprise two imaging elements, such that the two imaging elements can generate a 3-dimensional image from one finger. Alternatively, according to further exemplary implementations, the at least one image containing more information than that contained in any one of the at least some plurality of images may be presented on a display in such a manner that items in the at least one image are all displayed in their real-life orientation. In such cases, the display may be mounted relative to the body cavity such that the image is displayed in a position and orientation close to that of the real-life position and orientation of the body cavity.

Another example implementation can involve a laparoscopic imaging device comprising a helically shaped conduit having disposed within its walls a plurality of imaging elements for generating images, and at least one illuminating source, the conduit having openings in a wall in the general direction of the axis of the helix, such that the imaging elements and the at least one illuminating source have fields of view in the direction of the axis of the helix, wherein the helix has an outer dimension such that it can be inserted with a corkscrew-like motion through a laparoscopic port into a body cavity, such that the plurality of imaging elements and the at least one illuminating source are disposed in the body cavity in locations known relative to each other. In such an exemplary laparoscopic imaging device, a proximal end of the helix may be adapted to remain outside of the body cavity, such that the conduit can be manipulated such that the laparoscopic imaging device views different regions of the body cavity. Additionally, the known locations of the plurality of imaging elements and the at least one illuminating source may be such as to enable 3-D image reconstruction to be performed without the need for a registration procedure. This laparoscopic device may be such that the helical conduit passes through a laparoscopic port having an internal diameter of less than 15 mm.

Additional implementations described in this disclosure may involve a laparoscopic imaging system comprising:

(i) a plurality of imaging devices generating a plurality of 2-dimensional images, (ii) at least one illuminating source, and

(iii) a laser line scanner adapted to be scanned at a predetermined rate over an item to be viewed, such that a 3-dimensional virtual mesh is formed of the profile of the item illuminated by the laser line scanner, wherein the plurality of 2-dimensional images is projected and stitched onto the 3-dimensional mesh. In such a system, the laser line scanner may incorporate pre-scanned calibration data to enable the three-dimensional virtual mesh to be acquired without a reference plane.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be understood and appreciated more fully from the following detailed description, taken in conjunction with the drawings, in which:

Fig. 1A is an isometric view of a prior art laparoscope;
Fig. 1B is a top view of the work-plane as viewed by a prior art laparoscope;
Fig. 2A is an isometric view of the camera array concept;
Fig. 2B is a top view of the work-plane as viewed by a Multi Camera Array (MCA);
Fig. 2C is an isometric view of the MCA, monitor and working tools;
Fig. 3A is a cross section of an MCA with a single "focal point" mode;
Fig. 3B is a cross section of an MCA with a parallel line-of-sight mode;
Fig. 3C is a cross section of an MCA with a free-standing cameras mode;
Fig. 3D is a cross section of an MCA with deflection mirrors;
Figs. 4A-F are cross sections of an MCA insertion and deployment via a port into a cavity;
Fig. 5A is a cross section rendering the integration of a working channel in an MCA;
Fig. 5B is a cross section rendering the use and rotation of a surgical tool within an MCA;
Figs. 6A-B are cross sections of an MCA comprising 2 light sources and one camera;
Figs. 6C-D are cross sections of an MCA comprising 2 cameras and one light source;
Figs. 6E-G are cross sections of an MCA comprising a light source and a 2-mirrored camera;
Fig. 7A is a top view of an MCA deployed in a cavity;
Fig. 7B is a cross section of an MCA with external light source and cameras;
Fig. 8A is a top view of an MCA deployed in a cavity comprising a helical frame;
Figs. 8B-C are cross sections of an MCA comprising a helical frame;
Figs. 9A-B are cross sections of an MCA insertion into a single-use sheath;
Figs. 9C-F are cross sections of an MCA deployment via a port into a cavity;
Fig. 10A is a top view of an MCA with deploying & stabilizing balloons;
Figs. 10B-C are cross sections of an MCA with deploying & stabilizing balloons;
Fig. 10D is an isometric view of an MCA with deploying & stabilizing balloons;
Figs. 11A-B are laser line scanning implementations of the above described systems;
Fig. 12 is a flow chart of the complete imaging procedure;
Fig. 13 is a flow chart of a method of reducing image distortion from image stitching; and
Fig. 14 is a flow chart of the laser line scanning implementation of Figs. 11A and 11B.

DETAILED DESCRIPTION

Reference is now made to Fig. 1A, which shows schematically a prior art laparoscope camera 10 viewing an object plane, which, due to its limited field of view, covers only a limited workspace 12. As shown in Fig. 1B, such a single camera cannot capture a wide format image, such that the surgeon needs to move or tilt the laparoscope head in order to cover a wider field of view.

Multiple camera arrays 20 (MCA) are used, as shown schematically in Fig. 2A, in order to provide a wider field of view 22. Each camera has a limited field of view, but the stitched image from all of these cameras covers a wide workspace, as shown in Fig. 2B, enabling the surgeon to perform a task within a limited bodily cavity, without the need to rotate or tilt the laparoscope while seeking the optimal viewing angle.

Fig. 2C illustrates schematically an isometric view of an MCA 20 mounted within a subject's abdominal cavity 24, showing the working tools 26 used to perform the surgical procedure, and a monitor 28 on which the generated composite image is displayed to the surgeon.

Reference is now made to Figs. 3A to 3D, showing schematically various different arrangements by which such an MCA system can be implemented. In Fig. 3A, several cameras 33 are shown mounted inside the bodily cavity 32 looking forwards towards the plane 31 to be viewed, each having a limited field of view (FOV) 35. The cameras may be aligned such that the line of sight 34 of each camera diverges from a single point 36.

In Fig. 3B, the several cameras 33 are arranged to be forward looking towards the working plane 31, with their lines of sight 34 parallel and perpendicular to the imaged plane 31. This arrangement is simple to hold, and its images may be simpler to stitch together to form a composite wide-FOV image than in the example of Fig. 3A.

In Fig. 3C the cameras 33 are shown disposed in a "free standing" mode inside the bodily cavity, with their lines of sight 34 arranged such that they look in several directions.

In Fig. 3D the camera line of sight is deflected using mirrors 37, enabling the cameras 33 to be positioned horizontally. This arrangement reduces the space needed for the MCA, and enables a larger working space for the surgeon within the bodily cavity 32. The mirrors 37 can be flat or curved.

Though some of the camera arrangements shown in Figs. 3A to 3D have been used in the prior art, the mechanical arrangements for maintaining the cameras in their predetermined positions may have been bulky or inflexible, or may have involved multiple incisions in the abdominal wall. Figs. 4A to 10D now illustrate a number of novel configurations described in the present disclosure for inserting and deploying such an MCA in a minimally invasive manner, taking up a minimum amount of space within the bodily cavity, and providing optimum positional flexibility to the surgeon using the MCA.

Reference is now made to Figs. 4A to 4F, which show a first exemplary deployment scheme via a single laparoscopic opening into the body cavity 42. These drawings illustrate one practical way of implementing the arrangement shown in Fig. 3D. In Fig. 4B, the cavity wall 42 is shown punctured using a standard tool, and a port 41 is inserted into the resulting opening 40. The MCA system is provided in a novel stowed configuration, and is shown in Fig. 4C being introduced into the port's lumen 40. The stowed MCA system may conveniently comprise several rotatable fingers 44 connected to the main housing 42 of the MCA by means of rotation hinges 43. The MCA system will be described in more detail in relation to Fig. 4F hereinbelow. After the MCA has been fully inserted into the cavity, as shown in Fig. 4D, the fingers 44 are rotated on the hinges 43, as shown in Fig. 4E, until deployed and locked in their operating mode, as shown in Fig. 4F. In Fig. 4E, there is seen an optional encoder 43A operating on one of the hinges, which supplies data to the system controller regarding the angle of orientation of the finger 44 relative to the axis of the MCA housing 42. In the particular example of the MCA fingers shown in Figs. 4A to 4F, the cameras 45 are disposed along the length of the fingers 44, and the field of view 45A is directed from the work plane into the cameras by means of inclined mirrors 46. Light sources 47, also directed towards the work plane, provide internal illumination for the imaging process.

Using such an arrangement, the MCA assembly can be inserted into the body cavity in a minimally invasive manner, and once inserted, can be deployed across the top of the bodily cavity so that it occupies a minimum of space in the cavity, as proposed in the example of Fig. 3D.

One of the advantages of the arrangement shown in Figs. 4A to 4F is that once the MCA array has been deployed from its housing 42, the insertion port 40 can be used for the insertion of surgical tools or other surgical apparatus. This is illustrated schematically in Figs. 5A and 5B. In Fig. 5A, there is shown a central lumen or working channel 52 enabling a surgical tool 50 to be inserted through the center of the MCA housing, able to perform operative tasks on the subject at its working end 51. The working channel 52 enables the tool to pass into the cavity only after the MCA system has been fully inserted and deployed into the body cavity. By tilting the MCA housing 42 or the port 41, the surgeon can adjust the movement of the working tool, as shown in Fig. 5B. If the camera fingers are not locked relative to the MCA housing, they may maintain the same view by rotating on their hinges. The images from the MCA can be sent via a video cable 53 to the working station for processing.

Reference is now made to Figs. 6A-6B, in which there is shown schematically an MCA comprising finger elements each incorporating two light sources 47A, 47B, and one camera 45. The light from one of the two light sources is projected onto the work plane, and the light reflected therefrom returns via the mirror 46A to the camera 45. Using such an arrangement, each light may be turned on separately in a continuous sequential cycle, and a pair of images is obtained within a short time frame, each image illuminated from a different light source. If the MCA is not moved between these images, the difference between the images is due only to the change of the light source. Such a difference in the images, knowing the distance between the light source and the camera, and using triangulation, enables the system to calculate the distance from the camera to points located on the work plane.
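
One way such a triangulation can work is through the displacement of a cast shadow between the two illuminations: for two sources a known baseline apart at a known height above the work plane, similar triangles give the height of the shadow-casting edge. The sketch below illustrates only this 2-D geometry under those assumptions; it is not the disclosure's actual algorithm.

```python
def height_from_shadow_shift(shift, baseline, light_height):
    """Height h of an edge above the work plane, from the shift of its cast
    shadow between two lights set `baseline` apart at height `light_height`.

    With both lights at height H and the edge at height h, similar triangles
    give shift = baseline * h / (H - h), hence h = H * shift / (baseline + shift).
    All quantities are in the same length units.
    """
    return light_height * shift / (baseline + shift)
```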

Reference is now made to Figs. 6C-6D, in which there is shown schematically an MCA comprising finger elements each incorporating two cameras 45, 45B, with their associated reflectors 46A, 46B, and one light source 47. The light source is continually turned on, and the cameras 45, 45B capture the images in a coordinated manner. Two different FOVs 45A, 45C are viewed by the cameras, and since the cameras are located a known distance apart, a post-process stereoscopic vision algorithm can be used to extract the 3-D surface representation of the work plane.
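
As a minimal sketch of such a post-process step, assuming the two views are rectified so that corresponding points lie on the same image row, a naive block-matching search recovers a disparity per pixel, and triangulation with the known baseline converts it to depth via Z = f·B/d. The window size, disparity range, and wrap-around at image borders are illustrative simplifications.

```python
import numpy as np
from scipy.signal import fftconvolve

def stereo_depth(left, right, focal_px, baseline_mm, max_disp=64, win=7):
    """Naive block-matching stereo for a rectified image pair (sketch only).

    For each candidate disparity d, score every pixel by the sum of absolute
    differences over a win x win window, keep the best-scoring d, and convert
    it to depth via triangulation: Z = f * B / d.
    """
    kern = np.ones((win, win))
    best_d = np.ones(left.shape)
    best_cost = np.full(left.shape, np.inf)
    for d in range(1, max_disp):
        # Shifting the right image by d columns aligns points at disparity d.
        diff = np.abs(left.astype(np.float64) -
                      np.roll(right.astype(np.float64), d, axis=1))
        sad = fftconvolve(diff, kern, mode='same')
        better = sad < best_cost
        best_cost[better] = sad[better]
        best_d[better] = d
    return focal_px * baseline_mm / best_d   # depth map, in baseline units
```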

Reference is now made to Figs. 6E to 6G, in which there is shown schematically an MCA each of whose fingers comprises a single light source 47 and a camera 45 with two mirrors: 46A, which is a semi-reflecting mirror, and 46B, which is a full reflector. The light source 47 illuminates the work plane continuously, and the image is captured by the single camera 45. One aperture enables the light 45A to travel via the semi-reflecting mirror 46A into the camera 45, and another aperture, a known distance from the first one, enables the light 45B to travel via the fully reflecting mirror 46B into the camera 45 through the semi-reflecting mirror 46A.

The finger element includes a shutter 24, which can be operated by a rotary motor 23. When the shutter 24 is set to an open mode, as shown in Fig. 6F, light from both images 45A and 45B is captured by the camera. When the shutter 24 is set to a closed mode, as shown in Fig. 6G, light from only a single image 45A is captured by the camera. Using an image processing algorithm, the "closed mode picture" can be subtracted from the "open mode picture", resulting in a residual image that comprises the 45B view only. Two images, of fields of view 45A and 45B, have thus been generated, and since the distance between the mirrors 46A, 46B is known, a post-process stereoscopic vision algorithm can be used to extract the 3-D surface of the work plane from these two generated images.
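
The subtraction step itself is simple; a minimal sketch (frame names are illustrative), assuming the two frames are taken close enough in time that the scene has not moved between them:

```python
import numpy as np

def recover_mirrored_view(open_frame, closed_frame):
    """The closed-shutter frame contains only the direct view (45A); the
    open-shutter frame contains the sum of both views (45A + 45B).
    Subtracting them leaves the mirrored view 45B alone."""
    diff = open_frame.astype(np.int16) - closed_frame.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```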

Reference is now made to Figs. 7A and 7B, in which there is shown schematically an MCA arrangement with the light source and cameras disposed externally to the body cavity. Fig. 7A is a plan view of this arrangement, and Fig. 7B is a side elevation view, with the level of the plan view 7A marked. Several fingers 44A to 44E are deployed inside the cavity, through the laparoscopy port 41 in the cavity wall. The cameras 77 may be CCD or CMOS cameras, and are mounted outside the bodily cavity. The external light source 70 conveys the illumination via optic fibers 78 to the lenses 79 located in the fingers, which then project the light onto the work area. The reflected light travels back to the mirrors 46, and is collimated by the imaging lens 76 into the imaging optic fiber 75 and back to the external cameras 77. The cameras and the light source may be mounted within an external frame 71. Alternatively, the cameras can be mounted inside the bodily cavity, as per any of the above-described implementations, with the illuminating source outside, to avoid excessive heat dissipation at the operating site.

Reference is now made to Figs. 8A to 8C, in which there is shown schematically an MCA arrangement using a helical frame to enable the cameras to be deployed in the cavity. Fig. 8A is a plan view of this arrangement, and Fig. 8B is a side elevation view. The cameras 45A to 45D are mounted inside a helical pipe 81, fashioned in a corkscrew shape. The helical pipe 81 comprises cameras 45, electrical cables 82 and light sources 83, as shown in the magnified end view of the helical pipe shown in Fig. 8C. The helical pipe may be inserted into a single opening 41 in the cavity wall by a rotary motion. One advantage of this method is that the cameras are located in a given location and cannot be moved, maintaining a set distance and orientation at all times. This enables simple and fast 3-D image reconstruction.

Reference is now made to Figs. 9A to 9F, which illustrate MCA systems using single-use disposable sheaths. Such use eliminates the need to wash and sterilize the MCA between insertions. In Figs. 9A and 9B, there is shown how the MCA fingers 94 are inserted into the disposable sheaths 95 preoperatively, with the electrical cables 93 also enclosed within the sheath. Then, as shown in Figs. 9C to 9E, the MCA is inserted via a port 41 into a body cavity, and there deployed, as described previously. Fig. 9F is a magnified view of the viewing port of a finger, showing that the sheath 95 has a clear window 95a that enables the camera and light source to perform optically.

Reference is now made to Figs. 10A to 10D, in which there is shown schematically an MCA arrangement using inflatable balloons for deploying and stabilizing the fingers. In Fig. 10A there is shown such a structure, comprising semi-flexible MCA fingers 104 and their protective covers 105, which, because of the somewhat flexible nature of their support structure, are able to move somewhat within the cavity. In order to affix them in a stabilized formation, a set of inflatable balloons 106 is connected between points on the fingers 104, 105, which, when inflated with a suitable fluid, whether a gas or a liquid, maintain the ends of the fingers 104, 105 in their predetermined positions. The benefit of such a fluid-filled structure is that it can be readily inserted into a laparoscopic cavity opening 41 when in a deflated state, but when inflated, is slightly flexible and should not therefore harm the inner organs and the cavity wall, while at the same time maintaining the MCA fingers and their internal optics in a predetermined position, for efficient viewing of the working area. Moreover, in an emergency condition, the balloons can be easily deflated and the MCA swiftly removed. A surgical tool 107 can be inserted through the neck of the structure.

Fig. 10B shows a plan view of the inflatable MCA array; Fig. 10C shows a cutaway side view showing the hollow centre of the inflatable connecting balloon 106 with the inflation fluid 107 inside, and the hollow nature of the finger protective covers 105, showing the internal hollow 104 for the camera and illumination optical arrangements; Fig. 10D shows the inflatable MCA arrangement fully inflated within the body cavity, illuminating and viewing the working plane.

Reference is now made to Figs. 11A and 11B, which are schematic drawings of the arrangement by which a laser scanner can be used to generate a three-dimensional image within the body cavity. The video cameras provide continuous video images of the region under observation. However, in the body cavity, the generally dominant red color present makes it difficult to perform frame matching using pixel features which are visible in adjacent cameras. A scanning laser line 150A, which may be generated from a laser beam 150 scanned across the surface of an organ 152 by means of a motor 151, is used in order to detect the profile of objects within the viewed region of interest, enabling the generation of a 3-D mesh which accurately copies the profile being scanned. The individual video images can then be accurately stitched onto the 3-D mesh generated by the scanning laser line. Since the light collected from the laser line is monochromatic, it is unaffected by any background ambient image color, and the 3-D mesh is therefore accurately generated regardless of the dominance of the illumination in the video images. By this means, using the scanning laser to generate 3-D image information enables the MCA to discriminate features in the work plane better than a pixel matching system.
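
A sketch of how a single camera frame might be reduced to one 3-D profile: detect the strongest red response in each image column, back-project each detected pixel to a camera ray, and intersect that ray with the known laser plane. All calibration names here (fx, fy, cx, cy and the plane parameters) are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def laser_profile(frame, fx, fy, cx, cy, plane_n, plane_d):
    """Extract one 3-D profile from an RGB frame showing the laser line.

    The monochromatic line is taken as the strongest red-minus-green response
    in each image column (a confidence test for columns where the line is
    absent is omitted here). Each detected pixel is back-projected to a
    camera ray and intersected with the known laser plane n.p = d."""
    red = frame[..., 0].astype(np.float64) - frame[..., 1]
    rows = np.argmax(red, axis=0).astype(np.float64)   # line row per column
    cols = np.arange(frame.shape[1], dtype=np.float64)
    # Back-project pixels to ray directions in camera coordinates.
    dirs = np.stack([(cols - cx) / fx, (rows - cy) / fy,
                     np.ones_like(cols)], axis=1)
    # Ray p(t) = t * dir meets the plane n.p = d at t = d / (n . dir).
    t = plane_d / (dirs @ np.asarray(plane_n, dtype=np.float64))
    return dirs * t[:, None]                           # Nx3 points in space
```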

Fig. 12 is a flow chart showing one exemplary procedure used by the MCA of the present invention to generate the images for display to the surgeon.

In step 120, the cameras capture images of the three dimensional work region.

In step 121, the image processing software extrapolates the three-dimensional geometry of the work plane from these images.

In step 122, the software projects these images onto the three dimensional geometry generated in step 121.

In step 123, the software crops and stitches these images into a new virtual three-dimensional image.

In step 124, a virtual camera is disposed above the 3-D virtual work region, enabling generation of new images according to the surgeon's wishes, by zooming in or out, panning left or right or by rotating the virtual camera.

In step 125, the newly generated image is displayed on the external monitor.
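
The whole cycle can be summarized structurally as below. Every processing helper is passed in as a callable, because the disclosure does not fix the underlying algorithms; the sketch only pins down the order of steps 120 to 125.

```python
def imaging_cycle(cameras, monitor, virtual_pose,
                  extract_geometry, project, stitch, render):
    """One pass of the Fig. 12 pipeline (structural sketch only).
    cameras expose .capture() and .pose; the four callables stand in for
    the patent's unspecified processing algorithms."""
    frames = [cam.capture() for cam in cameras]        # step 120
    mesh = extract_geometry(frames)                    # step 121
    layers = [project(f, cam.pose, mesh)               # step 122
              for f, cam in zip(frames, cameras)]
    model = stitch(layers, mesh)                       # step 123
    view = render(model, virtual_pose)                 # step 124: virtual camera
    monitor.show(view)                                 # step 125
```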

Reference is now made to Fig. 13, which is a flowchart of a method of reducing image distortion arising from the stitching of overlapping 3-D images taken from cameras in different locations.

In step 130, all of the cameras capture images of the work region.

In step 131, in any of the images where the tip of the surgical tool is at all visible, the location of the tip is determined by known image processing routines.

In step 132, the camera where the tip of the tool is best viewed, which is generally the camera with the tip shown closest to the center of the field of view, is then defined as the master camera, while the other cameras are designated slave cameras.

In step 133, a master image is generated from the master camera, and slave images from the slave cameras.

In step 134, the software stitches the slave images to the master image and to the other slave images, in such a manner that the master image has minimal distortion. In other words, no attempt is made to optimize the distortion over the whole field of view, which comprises both the master and the slave images.

In step 135, the composite image is displayed on the system monitor, and control returns to step 130, to check for the visibility of the tip again in all of the camera images. If the system finds that the working tool tip position has changed, a new determination is made of which camera provides the highest quality image of the tool-tip region, and that camera is redefined as supplying the master image for the stitching process.

By this means the surgeon is provided with a high quality image at the point at which he is working, coupled with a reasonable view of the surrounding areas to which he may need to make reference during the surgical procedure.
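
A minimal sketch of the master-camera selection in steps 131-132; detect_tip is a hypothetical callable standing in for the disclosure's "known image processing routines", returning the tip's (x, y) pixel position or None when the tip is not visible.

```python
import math

def choose_master(frames, detect_tip):
    """Return the index of the master camera: the one whose frame shows the
    tool tip closest to the image centre, or None if no frame shows the tip.
    detect_tip(frame) -> (x, y) pixel position of the tip, or None."""
    best_idx, best_dist = None, math.inf
    for i, frame in enumerate(frames):
        tip = detect_tip(frame)
        if tip is None:
            continue                      # tip not visible in this camera
        h, w = frame.shape[:2]
        dist = math.hypot(tip[0] - w / 2.0, tip[1] - h / 2.0)
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx
```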

Reference is now made to Fig. 14, which is a flow chart illustrating the method by which the laser line scanning implementation of Figs. 11A and 11B is executed in the system software.

In step 140, the illuminating light sources of the laparoscopic system provide light for the video imaging cameras to take their 2-D images of the body cavity.

In step 141, the scanned laser projects a laser line onto the working region, including the object of interest in the body cavity.

In step 142, the video cameras capture 2-D images of the object, which include the laser line displayed on the internal profile of the body cavity.

In step 143, the laser profiling software extracts the 3-D surface of the object imaged in each camera.

In step 144, the software stitches together all the 3-D surfaces obtained from the video cameras into one 3-D mesh of the surface profile of the internal profile of the body cavity.

In step 145, the software then takes each 2-D video image, as generated in step 142, and projects each onto the 3-D mesh of the object as generated in step 144.

In step 146, the software stitches the edges of the images to generate a new three-dimensional image of the entire object.

Finally, in step 147, the 3-D image of the entire object is presented on either a 2-D or a 3-D monitor.

Using any of the above systems, a 2-D image can be presented on a conventional monitor, or a 3-D image on a special monitor; this is unlike other systems, which require a dedicated 3-D display system to provide an enhanced image.

It is appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of various features described hereinabove, as well as variations and modifications thereto which would occur to a person of skill in the art upon reading the above description and which are not in the prior art.

Claims

We claim:
1. A laparoscopic imaging system comprising:
a longitudinal element having dimensions such that it will pass through a single laparoscopic port in the wall of a body cavity, said longitudinal element comprising:
a plurality of at least three imaging elements for generating images; and
at least one illuminating source; and
an image processing system adapted to transform at least some of said images to at least one new image containing more information than that contained in any one of said at least some images;
wherein said plurality of imaging elements and said at least one illuminating source are adapted to be deployed, after passage into said body cavity, into an expanded structure in which said plurality of imaging elements and said at least one illuminating source are disposed in locations known relative to each other.
2. A laparoscopic imaging system according to claim 1 and wherein said longitudinal element has lateral dimensions substantially smaller than said expanded structure.
3. A laparoscopic imaging system according to either of claims 1 and 2, wherein said laparoscopic port has an internal diameter of less than 15 mm.
4. A laparoscopic imaging system according to any of the previous claims and wherein said longitudinal element comprises a number of fingers, said fingers being attached to said longitudinal element such that said fingers can be deployed at an angle to the axis of said longitudinal element.
5. A laparoscopic imaging system according to claim 4 and wherein at least one of said fingers is attached to said longitudinal element by means of either one of a hinge or a pivot.
6. A laparoscopic imaging system according to claim 4 and wherein at least one of said fingers comprises at least one imaging element and one illumination source.
7. A laparoscopic imaging system according to claim 4 and wherein said fingers are deployable to positions having angular orientations substantially different from the axis of said longitudinal element.
8. A laparoscopic imaging system according to claim 7 and wherein said angular orientations are at from 60 to 120 degrees from said axis of said longitudinal element.
9. A laparoscopic imaging system according to claim 4, wherein at least one of said fingers comprises an encoder to measure the angle of deployment of said finger.
10. A laparoscopic imaging system according to any of claims 4 to 9 and wherein said fingers are disposed such that a surgical working tool can be passed through said longitudinal element at least after deployment of said fingers.
11. A laparoscopic imaging system according to any of the previous claims and wherein the light for at least one illumination source is provided from a source outside of said body cavity.
12. A laparoscopic imaging system according to any of the previous claims and wherein the image from at least one of said imaging elements is captured by a camera located outside of said body cavity.
13. A laparoscopic imaging system according to any of the previous claims further comprising a disposable flexible sheath which covers said fingers, such that said fingers do not require sterilization between procedures.
14. A laparoscopic imaging system according to claim 1 wherein said expanded structure comprises at least one inflatable balloon, which is inflated only after said longitudinal element has passed through said laparoscopic port.
15. A laparoscopic imaging system according to any of the previous claims, wherein at least one of said fingers comprises two imaging elements, such that said two imaging elements can generate a 3-dimensional image from one finger.
16. A laparoscopic imaging system according to any of the previous claims, wherein said at least one image containing more information than that contained in any one of said at least some of said plurality of images is presented on a display in such a manner that items in said at least one image are all displayed in their real-life orientation.
17. A laparoscopic imaging system according to claim 16, wherein said display is mounted relative to said body cavity such that the image is displayed in a position and orientation close to that of the real-life position and orientation of said body cavity.
18. A laparoscopic imaging device comprising:
a helically shaped conduit having disposed within its walls a plurality of imaging elements for generating images, and at least one illuminating source,
said conduit having openings in a wall in the general direction of the axis of said helix, such that said imaging elements and said at least one illuminating source have fields of view in the direction of the axis of said helix,
wherein said helix has an outer dimension such that it can be inserted with a corkscrew-like motion through a laparoscopic port into a body cavity, such that said plurality of imaging elements and said at least one illuminating source are disposed in said body cavity in locations known relative to each other.
19. A laparoscopic imaging device according to claim 18, wherein a proximal end of said helix is adapted to remain outside of said body cavity, such that said conduit can be manipulated such that said laparoscopic imaging device views different regions of said body cavity.
20. A laparoscopic imaging device according to claim 18, wherein said laparoscopic port has an internal diameter of less than 15 mm.
21. A laparoscopic imaging device according to claim 18, wherein said known locations of said plurality of imaging elements and said at least one illuminating source enable 3-D image reconstruction to be performed without the need for a registration procedure.
22. A laparoscopic imaging system comprising:
a plurality of imaging devices generating a plurality of 2-dimensional images;
at least one illuminating source; and
a laser line scanner adapted to be scanned at a predetermined rate over an item to be viewed, such that a 3-dimensional virtual mesh is formed of the profile of said item illuminated by said laser line scanner,
wherein said plurality of 2-dimensional images is projected and stitched onto said 3-dimensional mesh.
23. A laparoscopic imaging system according to claim 22, wherein said laser line scanner incorporates prescanned calibration data to enable said 3-dimensional virtual mesh to be acquired without a reference plane.
PCT/IL2009/000536 2008-05-28 2009-05-31 Laparoscopic camera array WO2009144729A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US7195508P 2008-05-28 2008-05-28
US61/071,955 2008-05-28

Publications (1)

Publication Number Publication Date
WO2009144729A1 true WO2009144729A1 (en) 2009-12-03

Family

ID=41376660

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2009/000536 WO2009144729A1 (en) 2008-05-28 2009-05-31 Laparoscopic camera array

Country Status (1)

Country Link
WO (1) WO2009144729A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6421559B1 (en) * 1994-10-24 2002-07-16 Transscan Medical Ltd. Tissue characterization based on impedance images and on impedance measurements
US20060067573A1 (en) * 2000-03-08 2006-03-30 Parr Timothy C System, method, and apparatus for generating a three-dimensional representation from one or more two-dimensional images
US20080111513A1 (en) * 2003-07-08 2008-05-15 Board Of Regents Of The University Of Nebraska Robot for surgical applications
US20070244367A1 (en) * 2004-02-22 2007-10-18 Doheny Eye Institute Methods and systems for enhanced medical procedure visualization
US20080051817A1 (en) * 2004-04-26 2008-02-28 Patrick Leahy Surgical Device
US20060020213A1 (en) * 2004-07-09 2006-01-26 Whitman Michael P Surgical imaging device
US20070135803A1 (en) * 2005-09-14 2007-06-14 Amir Belson Methods and apparatus for performing transluminal and other procedures
US20070142880A1 (en) * 2005-11-07 2007-06-21 Barnard William L Light delivery apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KOJIMA ET AL.: "High-precise Angle Measurement Technology and High-precision Angle Sensor", ASPE ANNUAL PROCEEDINGS, 24 October 2004 (2004-10-24) - 29 October 2004 (2004-10-29), ORLANDO, FLORIDA *
SATAVA ET AL.: "3-D Vision Technology applied to advanced minimally invasive surgery systems", SURG ENDOSC, vol. 7, 1993, pages 429 - 431 *

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9403281B2 (en) 2003-07-08 2016-08-02 Board Of Regents Of The University Of Nebraska Robotic devices with arms and related methods
US10307199B2 (en) 2006-06-22 2019-06-04 Board Of Regents Of The University Of Nebraska Robotic surgical devices and related methods
US8968332B2 (en) 2006-06-22 2015-03-03 Board Of Regents Of The University Of Nebraska Magnetically coupleable robotic surgical devices and related methods
US8834488B2 (en) 2006-06-22 2014-09-16 Board Of Regents Of The University Of Nebraska Magnetically coupleable robotic surgical devices and related methods
US9883911B2 (en) 2006-06-22 2018-02-06 Board Of Regents Of The University Of Nebraska Multifunctional operational component for robotic devices
US9579088B2 (en) 2007-02-20 2017-02-28 Board Of Regents Of The University Of Nebraska Methods, systems, and devices for surgical visualization and device manipulation
US9179981B2 (en) 2007-06-21 2015-11-10 Board Of Regents Of The University Of Nebraska Multifunctional operational component for robotic devices
US9956043B2 (en) 2007-07-12 2018-05-01 Board Of Regents Of The University Of Nebraska Methods, systems, and devices for surgical access and procedures
US8974440B2 (en) 2007-08-15 2015-03-10 Board Of Regents Of The University Of Nebraska Modular and cooperative medical devices and related systems and methods
US10335024B2 (en) 2007-08-15 2019-07-02 Board Of Regents Of The University Of Nebraska Medical inflation, attachment and delivery devices and related methods
US8894633B2 (en) 2009-12-17 2014-11-25 Board Of Regents Of The University Of Nebraska Modular and cooperative medical devices and related systems and methods
US9264695B2 (en) 2010-05-14 2016-02-16 Hewlett-Packard Development Company, L.P. System and method for multi-viewpoint video capture
US8968267B2 (en) 2010-08-06 2015-03-03 Board Of Regents Of The University Of Nebraska Methods and systems for handling or delivering materials for natural orifice surgery
US9445800B2 (en) 2011-01-04 2016-09-20 The Johns Hopkins University Minimally invasive laparoscopic retractor
CN103281971B (en) * 2011-01-04 2017-02-15 约翰霍普金斯大学 Minimally invasive laparoscopic retractor
CN103281971A (en) * 2011-01-04 2013-09-04 约翰霍普金斯大学 Minimally invasive laparoscopic retractor
AU2012202350B2 (en) * 2011-05-19 2014-07-31 Covidien Lp Integrated visualization apparatus, systems and methods thereof
EP2524664A1 (en) * 2011-05-19 2012-11-21 Tyco Healthcare Group LP Integrated surgical visualization apparatus, systems and methods thereof
US10350000B2 (en) 2011-06-10 2019-07-16 Board Of Regents Of The University Of Nebraska Methods, systems, and devices relating to surgical end effectors
US9060781B2 (en) 2011-06-10 2015-06-23 Board Of Regents Of The University Of Nebraska Methods, systems, and devices relating to surgical end effectors
US9757187B2 (en) 2011-06-10 2017-09-12 Board Of Regents Of The University Of Nebraska Methods, systems, and devices relating to surgical end effectors
US10111711B2 (en) 2011-07-11 2018-10-30 Board Of Regents Of The University Of Nebraska Robotic surgical devices, systems, and related methods
US9089353B2 (en) 2011-07-11 2015-07-28 Board Of Regents Of The University Of Nebraska Robotic surgical devices, systems, and related methods
US9918708B2 (en) 2012-03-29 2018-03-20 Lapspace Medical Ltd. Tissue retractor
US9662018B2 (en) 2012-03-30 2017-05-30 Covidien Lp Integrated self-fixating visualization devices, systems and methods
US9498292B2 (en) 2012-05-01 2016-11-22 Board Of Regents Of The University Of Nebraska Single site robotic device and related systems and methods
US10219870B2 (en) 2012-05-01 2019-03-05 Board Of Regents Of The University Of Nebraska Single site robotic device and related systems and methods
US9010214B2 (en) 2012-06-22 2015-04-21 Board Of Regents Of The University Of Nebraska Local control robotic surgical devices and related methods
US9770305B2 (en) 2012-08-08 2017-09-26 Board Of Regents Of The University Of Nebraska Robotic surgical devices, systems, and related methods
US9332242B2 (en) 2012-11-26 2016-05-03 Gyrus Acmi, Inc. Dual sensor imaging system
US20140228644A1 (en) * 2013-02-14 2014-08-14 Sony Corporation Endoscope and endoscope apparatus
US9545190B2 (en) * 2013-02-14 2017-01-17 Sony Corporation Endoscope apparatus with rotatable imaging module
US9888966B2 (en) 2013-03-14 2018-02-13 Board Of Regents Of The University Of Nebraska Methods, systems, and devices relating to force control surgical systems
US9743987B2 (en) 2013-03-14 2017-08-29 Board Of Regents Of The University Of Nebraska Methods, systems, and devices relating to robotic surgical devices, end effectors, and controllers
EP2786696A1 (en) * 2013-04-04 2014-10-08 Dürr Dental AG Dental camera system
US9220414B2 (en) 2013-04-04 2015-12-29 Duerr Dental Ag Dental camera system
WO2015111582A1 (en) * 2014-01-23 2015-07-30 シャープ株式会社 In-body monitoring camera system and auxiliary tool set
JP6027275B2 (en) * 2014-01-23 2016-11-16 シャープ株式会社 In-body monitoring camera system and auxiliary tool set
US10342561B2 (en) 2014-09-12 2019-07-09 Board Of Regents Of The University Of Nebraska Quick-release end effectors and related systems and methods
JP2017536215A (en) * 2014-09-15 2017-12-07 ヴィヴィッド メディカル インコーポレイテッド Single-use, port-deployable articulating endoscope
US10230326B2 (en) 2015-03-24 2019-03-12 Carrier Corporation System and method for energy harvesting system planning and performance
CN104783889A (en) * 2015-04-01 2015-07-22 上海交通大学 Endoscopic surgery mechanical arm system and visual feedback device thereof
EP3387982A4 (en) * 2015-12-07 2019-07-24 Kyocera Corp Trocar and low height type lens optical system
JP2016185342A (en) * 2016-06-09 2016-10-27 ソニー株式会社 Endoscope and endoscope apparatus
WO2018046092A1 (en) * 2016-09-09 2018-03-15 Siemens Aktiengesellschaft Method for operating an endoscope, and endoscope

Similar Documents

Publication Publication Date Title
Fuchs et al. Augmented reality visualization for laparoscopic surgery
US8114172B2 (en) System and method for 3D space-dimension based image processing
EP0571827B1 (en) System and method for augmentation of endoscopic surgery
JP5707449B2 (en) Tool position and identification indicator displayed in a boundary area of a computer display screen
US9033870B2 (en) Pluggable vision module and portable display for endoscopy
ES2292593T3 (en) Guidance system.
US9931023B2 (en) Stereo imaging miniature endoscope with single imaging chip and conjugated multi-bandpass filters
US9636188B2 (en) System and method for 3-D tracking of surgical instrument in relation to patient body
US20110009694A1 (en) Hand-held minimally dimensioned diagnostic device having integrated distal end visualization
JP5368198B2 (en) Solid state variable direction of view endoscope
US9629523B2 (en) Binocular viewing assembly for a surgical visualization system
EP1685790A1 (en) Endoscope device and imaging method using the same
US8237775B2 (en) System and method for 3D space-dimension based image processing
ES2215985T3 (en) Superimposing x-ray image data or scanner image data of a patient with video images.
US20070032701A1 (en) Insertable device and system for minimal access procedure
US7559895B2 (en) Combining tomographic images in situ with direct vision using a holographic optical element
CN104918572B (en) Digital system for surgical video capture and display
JP5153620B2 (en) System for superimposing images associated with a continuously guided endoscope
EP1891882A2 (en) Deflectable tip videoarthroscope
EP2217132B1 (en) Insertable surgical imaging device
US6768496B2 (en) System and method for generating an image from an image dataset and a video image
US6414708B1 (en) Video system for three dimensional imaging and photogrammetry
US20080147018A1 (en) Laparoscopic cannula with camera and lighting
JP4631057B2 (en) Endoscope system
US20140347395A1 (en) Augmented Reality Methods and Systems Including Optical Merging of a Plurality of Component Optical Images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09754348

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09754348

Country of ref document: EP

Kind code of ref document: A1