WO2022218851A1 - An endoscope system and a method thereof - Google Patents

An endoscope system and a method thereof

Info

Publication number
WO2022218851A1
Authority
WO
WIPO (PCT)
Prior art keywords
endoscope system
location
imaging
probe
image
Prior art date
Application number
PCT/EP2022/059413
Other languages
French (fr)
Inventor
Pieter MUYSHONDT
Joris DIRCKX
Sam VAN DER JEUGHT
Original Assignee
Universiteit Antwerpen
Priority date
Filing date
Publication date
Application filed by Universiteit Antwerpen filed Critical Universiteit Antwerpen
Publication of WO2022218851A1 publication Critical patent/WO2022218851A1/en

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/06, A61B 1/07: with illuminating arrangements, using light-conductive means, e.g. optical fibres
    • A61B 1/00002, A61B 1/00004, A61B 1/00006: Operational features of endoscopes characterised by electronic signal processing of control signals
    • A61B 1/00009: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/00064, A61B 1/00071, A61B 1/0008, A61B 1/00096: Constructional details of the endoscope body; insertion part; distal tip features; optical elements
    • A61B 1/00163, A61B 1/00165: Optical arrangements with light-conductive means, e.g. fibre optics
    • A61B 1/00194: Optical arrangements adapted for three-dimensional imaging
    • A61B 1/002: having rod-lens arrangements
    • A61B 1/04, A61B 1/042: combined with photographic or television appliances, characterised by a proximal camera, e.g. a CCD camera
    • A61B 1/045: Control thereof
    • A61B 1/0605: with illuminating arrangements for spatially modulated illumination
    • A61B 1/0655: Control therefor
    • A61B 1/0661, A61B 1/0669: Endoscope light sources, at proximal end of an endoscope

Definitions

  • the present disclosure relates to an endoscope system for frontal imaging and more specifically to an endoscope system for frontal imaging based on fringe projection profilometry.
  • Rigid endoscopes are used in particular in medicine or other technology for examining hollow spaces such as bodily cavities, engine combustion areas, power units, and the like. Depending on the depth of the hollow area to be examined, various conversion lengths are bridged by the linear lens system integrated within the body of the endoscope.
  • Rigid endoscopes comprise optical fibres placed around a metal tube which respectively act as illumination and imaging probes.
  • the metal tube or barrel comprises objective and rod lenses placed therewithin which guide the captured illumination towards an eyepiece such as a magnifying lens.
  • Such rigid endoscopes provide only a qualitative two-dimensional, 2D, presentation of the inspected space and, thus, do not allow inspection of visually less distinguishable defects.
  • Furthermore, most endoscopes are rather bulky and are not suitable for inspecting narrow cavities such as the ear canal, the middle ear, the nasal cavity, fuel injectors of engines, hydraulic manifolds, and so on.
  • an endoscope system for three-dimensional frontal imaging characterized by the features of claim 1.
  • the endoscope system comprises an endoscope device including a substantially rigid illumination probe and a substantially rigid imaging probe both having a substantially rigid body, also called a barrel, which body can, for example, be substantially cylindrical.
  • the probes are further arranged substantially in parallel and on top of one another. A very small angle between the probes, for example smaller than 2°, may however be admissible.
  • the probes may for example engage each other or may for example be permanently attached to each other.
  • the illumination probe and the imaging probe may for example be comprised in a single housing for further protection.
  • the illumination probe and the imaging probe may be positioned at a relatively short distance from each other, for example at a distance of 5 mm or less, such that a single housing can still enclose both the illumination probe and the imaging probe.
  • a relatively compact endoscope device can be provided, in particular a dual-barrel endoscope device.
  • the probes are configured to have an oblique viewing angle relative to one another.
  • the probes are further configured for fringe projection profilometry (FPP), such as single-shot FPP or preferably, multi-shot FPP.
  • the illumination probe can for example project one or more fringe patterns, i.e. structured light patterns, onto an object’s surface, while the imaging probe can for example record reflections of the projected patterns from the object’s surface. Due to the relative viewing angle between the probes, reflected fringe patterns will be recorded by the imaging probe at a relative angle that enables depth quantification of the imaged surface and therefore a three-dimensional representation of the object’s surface.
  • the probes of the endoscope device can further have respective viewing angles in the range of zero to eighty degrees and a difference between the viewing angles of at least 10 degrees.
  • This can for example allow one of the probes to have a zero-degree viewing angle and the other an oblique viewing angle ranging from 10 to 80 degrees, in particular an oblique viewing angle of approximately 30 degrees, thus greatly simplifying the construction of the endoscope device.
  • the combination of a zero-degree viewing angle and a 30-degree viewing angle can be an advantageous compromise between requirements for depth measurements favouring relatively large viewing angles and for prevention of image deformation (i.e. difference of recorded image with respect to projected image) favouring relatively small viewing angles.
  • the illumination probe and the imaging probe can advantageously have at least partially overlapping fields of view.
  • the fields of view of the probes depend on their respective viewing angles and define how much of the environment, i.e. the object’s surface, or what portion of it, will be illuminated by the illumination probe and what portion of the environment will be recorded by the imaging probe.
  • At least partially overlapping fields of view can allow illumination of at least part of an object’s surface by the illumination probe and imaging of at least part of said object’s surface by the imaging probe.
  • At least one, and more preferably each, of the probes can further comprise an array of optical lenses.
  • the probes can comprise a bundle of optical fibres.
  • the optical lenses may be distributed within a substantially rigid body of the at least one probe and along its length, such that the first lens of the array is positioned near a head end of the probe.
  • the first end or head end of the probe is to be understood as that end of the probe configured to be directed to an object to be imaged, for example at the tip or head of the probe, contrary to a second end or rear end of the probe opposite to the first end, which second or rear end may be configured to be coupled to further devices.
  • the first lens of the optical lens array of the illumination probe is an objective lens and is positioned near the head end of the probe, in particular the illumination end of the probe, i.e. near its tip.
  • the objective lens may be configured to enlarge an image to be projected.
  • the first lens of the optical lens array of the imaging probe is an objective lens and is positioned near the head end of the probe, in particular the imaging end of the probe, i.e. towards its tip.
  • the objective lens may be configured to enlarge a recorded image.
  • Each of the probes further comprises a prism placed within its substantially rigid body and in front of the objective lens, so at the first end or head end of said respective probe.
  • the prism is configured to define the probe’s viewing angle, in particular the oblique viewing angle, and in turn the probe’s field of view.
  • the prism may be fixed or controllable, i.e. the prism may have a fixed or a controllable angle, respectively.
  • the viewing angles of the probes may be easily modified.
  • the objective lens and the fixed prism of the respective probes are removably attachable to the respective probes. This way, the endoscope device may be easily modified to comply with the specifications of the use case.
  • the optical lens array allows projecting fringe patterns as well as recording reflections of the projected patterns which contain well-defined geometric distortions, e.g. barrel or pincushion distortions.
  • Because these geometric distortions are well-defined, they may be compensated for, for example, by calibrating the endoscope system.
  • Through endoscope calibration, a high-quality as well as distortion-free three-dimensional image of the inspected object’s surface is obtained.
  • the endoscope system can preferably further comprise an imaging device.
  • the imaging device is operatively coupled to the imaging probe and configured to record one or more images observed by the imaging probe.
  • the imaging device may be a digital camera such as a CCD image sensor.
  • the imaging device is coupled to the imaging probe via an optical lens that guides the light captured by the imaging probe towards the imaging device.
  • the endoscope system can preferably further comprise a projector device.
  • the projector device is operatively coupled to the illumination probe and is configured to produce light, such as uniform light or structured light, commonly referred to as fringe patterns, and project said light via the illumination probe.
  • the projector device may for example be configured to produce one or more fringe patterns and to project said one or more fringe patterns via the illumination probe.
  • the projector device may be a Digital Light Processing projector (DLP projector), or any other suitable device.
  • the projector device is coupled to the illumination probe via an optical system that guides the light produced by the projector device into the illumination probe.
  • the optical system may, for example, comprise an optical lens and a prism mirror.
  • the resolution of the projected image, i.e. the light pattern projected by the illumination probe, as well as the resolution of the recorded image are defined by the properties of the projector device and the imaging device, respectively.
  • the resolution of the obtained three-dimensional representation of the object’s surface is defined by the projection and imaging devices.
  • the projector device and the optical system further form a projector unit which may be separate from the endoscope device comprising the probes.
  • the imaging device and the optical lens form an imaging unit which may be also separate from the endoscope device comprising the probes.
  • the projector unit and the imaging unit may be removably attached and/or releasably coupled to the respective probes, thereby allowing easy replacement of damaged parts.
  • the imaging unit and the projector unit are placed within the same housing to form one digital unit.
  • the endoscope system further comprises a controller that is configured to be operatively coupled to the projecting device and the imaging device.
  • the controller is further configured to control the operation of both the projecting device and the imaging device.
  • the controller is configured to control the light projected by the illumination probe and the light captured by the imaging probe.
  • the controller is configured to synchronize the light projection with the image recording.
  • the controller may preferably be configured to synchronize projection of the one or more fringe patterns with recording of images of said one or more fringe patterns, such that the projection and the recording are performed substantially simultaneously.
  • the light projected by the illumination probe may be light with a uniform pattern or it may be light with one or more structured patterns or fringe patterns.
  • the fringe patterns may for example include a sinusoidal intensity profile and relative phase shifts. Different fringe patterns may thus be projected by varying the relative phase shift between the fringe patterns.
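  • By way of illustration, the snippet below sketches how such phase-shifted sinusoidal fringe patterns could be generated in software before being fed to the projector device; the resolution, fringe period and four-shot configuration are illustrative assumptions, not values prescribed by the present disclosure.

```python
import numpy as np

def make_fringe_patterns(width=912, height=1140, periods=16, n_shifts=4):
    """Generate n_shifts sinusoidal fringe patterns of a single spatial frequency
    with relative phase shifts of 2*pi/n_shifts (pi/2 for four-shot FPP).
    Width, height and fringe count are illustrative, not taken from the disclosure."""
    x = np.arange(width)
    patterns = []
    for n in range(n_shifts):
        shift = 2.0 * np.pi * n / n_shifts
        # Sinusoidal intensity profile along the horizontal axis, scaled to 8 bit.
        row = 0.5 + 0.5 * np.cos(2.0 * np.pi * periods * x / width + shift)
        patterns.append(np.tile((255 * row).astype(np.uint8), (height, 1)))
    return patterns  # list of 2D uint8 images to feed to the projector device
```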
  • the endoscope system which may for example be an endoscope system as described above, may be configured to employ a single-shot or a multi-shot fringe projection profilometry, FPP, to derive a three-dimensional image of the object’s surface.
  • the endoscope system projects one (single-shot) or more (multi-shot) fringe patterns of a single frequency and records images of their reflections by the object’s surface.
  • one or more images of reflected fringe patterns are obtained.
  • the obtained one or more images are processed by means of fringe profilometry analysis which involves triangulation of the location of the fringe patterns on the object’s surface in relation to the illumination and imaging probes of the endoscope device.
  • a relative phase map of the object’s surface is obtained, which defines the relative phase as a function of the location of the pixels in the image coordinates, or, in other words, as a function of the pixel coordinates in the recorded image.
  • the relative phase map needs to be transformed into an absolute phase map.
  • an image of a reflection of a reference marker projected onto the object’s surface is obtained and/or recorded by means of the endoscope system.
  • a location of the reference marker in the image is then determined by means of any image processing technique suitable for the purpose, such as thresholding.
  • a functional relation of the relative phase to the absolute phase at the reference location, i.e. at the location of the reference marker in the recorded image, is thereby established.
  • the relative phase map may then be converted into an absolute phase map describing the absolute phase as a function of the location of the pixels expressed in coordinates of the recorded image. The resulting absolute phase map is mapped to an absolute phase-to-depth map, which may be obtained through calibration of the endoscope system, thereby obtaining a depth map or a three-dimensional image of the object’s surface, i.e. a three-dimensional profile describing the depth of the object’s surface as a function of the location of the pixels as expressed in coordinates of the recorded image.
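  • As a minimal sketch of the conversion just described, assuming the relative phase map, the marker location and the absolute phase known at that location are already available, and that a calibrated phase-to-depth function exists (all names below are hypothetical):

```python
import numpy as np

def relative_to_absolute_and_depth(rel_phase, marker_ij, abs_phase_at_marker, phase_to_depth):
    """Shift the relative (unwrapped) phase map by a constant offset so that it matches
    the absolute phase known at the reference-marker pixel, then map phase to depth.
    `phase_to_depth` is a hypothetical callable obtained from calibration."""
    i0, j0 = marker_ij
    offset = abs_phase_at_marker - rel_phase[i0, j0]  # constant phase offset
    abs_phase = rel_phase + offset                     # absolute phase map
    return abs_phase, phase_to_depth(abs_phase)        # depth map of the surface
```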
  • the method may further comprise the following steps, which may be considered as a method for relating location information to phase-to-depth information for an endoscope system.
  • the method comprises obtaining, by means of the endoscope system, images of at least one projected marker at respective predetermined depths from the endoscope system.
  • the reference marker may be substantially circular.
  • the marker is projected onto, for example, a blank calibration plate oriented substantially perpendicularly to the bisector of the projection and observation axes of the probes, which enclose an angle due to the oblique relative viewing angle, and the plate is displaced by known distances along the depth direction, i.e. along the bisector.
  • the method further comprises determining the location of the reference marker in the respective images.
  • the reference marker location in a respective image corresponds to the location of the pixel including the reflection of the reference marker in the imaging device coordinates. Due to the relative viewing angle between the probes of the endoscope device, the location of the reflected reference marker in the recorded images will vary with the location of the plate along the depth direction, i.e. the location of the marker in the recorded images at the respective depths will be offset from its location in the projected image. This offset describes a monotonic relationship between the location of the marker in the recorded image and the placement of the plate. As the plate locations are predetermined, the locations of the reference marker in the respective images can be directly related to the distance or the depth of the plate placement with respect to the endoscope device. As a result, an unambiguous and unique location-to-depth map for the endoscope system is obtained by employing a projection of a reference marker in combination with known image processing techniques.
  • the step of taking into account depth information can preferably include mapping the location of the reference marker to the location-to-depth map for the endoscope system. So, the location of the marker in the image is then mapped to the location-to-depth map for the endoscope system to obtain the depth information of the object’s surface at the location of the reference marker.
  • the method may further comprise the steps of deriving, from an absolute phase-to-depth map for the endoscope system, absolute phase values for the respective predetermined depths.
  • An absolute phase-to-depth map for an endoscope is typically obtained during the endoscope calibration.
  • This absolute phase-to-depth calibration map defines the absolute phase as a function of the pixel coordinates in the recorded image and the depth. Otherwise said, the absolute phase-to-depth map defines the absolute phase-to-depth relation for respective pixel locations.
  • a location-to-absolute phase map may be obtained for the reference marker.
  • the step of taking into account depth information of the object’s surface can then preferably include mapping the location of the reference marker to the location- to-absolute phase map for the endoscope system.
  • an absolute phase at the location of the reference marker may be obtained.
  • the location-to-absolute phase map can provide an unambiguous and unique functional relation between the imaging device coordinates and absolute phase.
  • the above-described methods can, for example, be performed by any computing system suitable for the purpose.
  • a computing system comprises at least one processor and at least one memory including computer program code, wherein the at least one memory and computer program code are configured to, with the at least one processor, cause the computing system to perform the above-described methods.
  • a computer program product is disclosed comprising computer-executable instructions for causing a computer to perform the method according to second or third example aspects.
  • a computer readable storage medium is disclosed comprising computer-executable instructions for performing the method as described above when the program is run on a computer.
  • FIG.1 shows a schematic presentation of the endoscope system according to the present disclosure
  • FIG.2A shows steps for deriving calibration maps for the endoscope system according to an example embodiment of the present disclosure
  • FIG.2B shows steps for deriving location-to-depth and location-to-phase calibration maps according to the present disclosure
  • FIG.3A and 3B show steps for deriving a three-dimensional image of an object’s surface according to the present disclosure
  • FIG.4A shows examples of a surface profile imaged at various depths using the endoscope system according to the present disclosure.
  • FIG.4B shows cross-sections along the X- and Y-directions of the surface profiles of FIG.4A.
  • FIG.5 shows an example embodiment of a suitable computing system for performing one or several steps in embodiments of the present disclosure.
  • FIG.1 shows a block scheme of the endoscope system according to a preferred embodiment.
  • the endoscope system 100 comprises an endoscope device 101 including an illumination probe 110 and an imaging probe 120 placed substantially on top of one another. The probes are thus parallel to one another and adjacent to each other, for example engaging each other.
  • the illumination probe 110 comprises a substantially rigid body which can, for example, be substantially cylindrical.
  • the illumination probe 110 further comprises a system of an objective 118 and rod lenses 112-117 distributed within its rigid body and along its length, with the objective lens 118 placed at or near the illumination end of the probe, i.e. the tip or head of the probe. In other words, the system of lenses is distributed along the optical axis of the illumination probe.
  • the imaging probe 120 comprises a substantially rigid body which can, for example, be substantially cylindrical.
  • the imaging probe 120 further comprises a system of an objective 128 and rod lenses 122 to 127 distributed within its rigid body and along its length with the objective lens 128 placed at or near the imaging end of the probe, i.e. the probe’s tip.
  • the system of lenses is distributed along the optical axis of the imaging probe.
  • the endoscope device 101 in particular each of the probes 110, 120, may further comprise a prism placed in front of the objective lens 118, 128, so at the first end or head end of said respective probe.
  • the prism is configured to define the probe’s viewing angle, in particular the oblique viewing angle.
  • the probes may have, for example, viewing angles in the range of 10 to 80 degrees.
  • the viewing angle of the probes may depend on the use case, i.e. the specifications of the objects to be inspected such as an ear canal, a middle ear, endonasal components, vocal folds, engine cylinders, engine fuel injectors, hydraulic manifolds, and so on.
  • the objective lens 118 of the illumination probe may for example have a viewing angle of zero degrees, while the objective lens 128 of the imaging probe can have a viewing angle of 30 degrees, thus providing an oblique relative viewing angle of 30 degrees. This makes the endoscope suitable for imaging cavities of objects such as the ear canal and the middle ear of a human.
  • the illumination probe 110 and the imaging probe 120 may each be enclosed in a housing forming the so-called endoscope barrel.
  • the endoscope device 101 may form a dual barrel endoscope.
  • a single housing may enclose both the illumination probe 110 and the imaging probe 120 simultaneously.
  • a barrel of the illumination probe 110 or of the imaging probe 120 may have a diameter ranging from approximately 2 mm to approximately 20 mm, thus allowing imaging of various objects' surfaces.
  • a 2 mm diameter barrel allows imaging of narrow cavities such as the nasal cavity of an average human
  • a 5.7 mm diameter allows imaging of the ear canal of an average human
  • a 20 mm diameter allows imaging of engine cylinders, engine fuel injectors, hydraulic manifolds, and so on.
  • a diameter of each of the barrels of the illumination probe 110 and the imaging probe 120 need not be the same.
  • a first barrel may have a diameter of approximately 3 mm
  • a second barrel may have a diameter of approximately 2.7 mm, resulting in a dual barrel endoscope having a largest diameter of approximately 5.7 mm which is suitable for otologic applications.
  • the endoscope system 100 can further comprise a projector device 130, such as a DLP projector (Digital Light Processing projector), that is operatively coupled to the illumination probe 110 via an optical system comprising for example an optical lens 132 and a 90° prism mirror 134.
  • the projector device and the optical system may thus be positioned under a 90°-angle relative to the optical axis of the illumination probe, for example vertically when in use, or in any other suitable position.
  • the optical lens 132 and the prism mirror 134 may be removably attached to a second end or rear end of the illumination probe and thus forming part of the illumination probe, while the projector device 130 may be removably attached to the optical system formed by the optical lens and the prism mirror.
  • the projector device 130 is also removably attached to the illumination probe via the optical system.
  • the DLP projector contains flash memory for onboard image storage. In this way, the projector may be capable of buffering the images received from the controller.
  • the endoscope system 100 can further comprise an imaging device 140 such as a digital imaging sensor, such as a CCD sensor, that is operatively coupled to the imaging probe 120 via an optical lens 142.
  • the imaging device together with the optical lens are thus positioned along the optical axis of the imaging probe and thus of the endoscope.
  • in the example shown in FIG.1, the optical lens 142 is shown as located at a distance from the imaging probe; it should however be understood that the lens 142 is removably attached to the base of the imaging probe, thereby forming part of the imaging probe, while the imaging device is removably attached to the lens 142.
  • the imaging device 140 is also removably attached to the imaging probe via the lens 142.
  • the endoscope system 100 further comprises a controller 150 which is operatively connected to the projecting and the imaging devices 130 and 140.
  • the controller may be connected to the projecting device and the imaging device via for example a coaxial, Bayonet Neill-Concelman, BNC, connector, or a gigabit ethernet interface.
  • the controller 150 is further configured to control the operation of the projector and the imaging devices. More particularly, the controller 150 is configured to generate images, feed the respective images to the projector device 130, and activate the projector device which upon activation first converts the respective images into light patterns and then projects the light patterns into the illumination probe 110 thus illuminating the object’s surface with the respective light patterns.
  • the controller 150 may generate various images corresponding to light patterns such as uniform light patterns, fringe patterns, or any other light patterns. The controller thus not only controls what light patterns are projected but also their time of projection.
  • the controller 150 is further configured to activate the imaging device 140 which upon activation records the reflections by the object’s surface of the projected light patterns. Thus, the controller controls at what time the imaging device 140 records the light patterns reflected by the object’s surface.
  • FIG.2A shows the calibration steps to obtain an absolute phase-to-depth and depth-dependent in-plane calibration maps, which may be done using known procedures
  • FIG.2B shows the calibration steps to derive a location-to-depth and location-to-absolute phase calibration maps for one or more reference locations, which is specific for the present method.
  • the calibration of the endoscope system requires calibration 210 of the imaging pathway and a calibration 220 of the illumination pathway.
  • the endoscope system records images of a calibration plate including for example a 9×9 checkerboard pattern or a grid with a square size of, for example, 0.5 × 0.5 mm².
  • the controller 150 generates an image with uniform intensity and feeds that image to the projector device 130, and, then activates the projector device 130. Upon activation, the projector device projects 211 uniform light into the illumination probe and onto the calibration plate.
  • the controller 150 activates the imaging device 140 which records an image of the calibration plate illuminated by the illumination probe.
  • the imaging device 140 thus obtains measurements or images 212 by recording images of the calibration plate with the calibration plate oriented perpendicularly to the bisector of the projection and observation axes, or in other words the bisector of the viewing angle, which provides the optimal sensitivity for depth measurement.
  • the images are recorded at multiple distances by displacing the calibration plate sequentially by, for example, 0.1 mm along the depth direction, i.e. the bisector.
  • the calibration plate is fixed to a translation stage, which allows the plate to be placed at various distances from the endoscope device.
  • the recorded images are processed 213 to derive a depth- dependent in-plane calibration map.
  • This may be done by employing, for example, the pin-hole camera model with radial distortion as described by Tsai, R. in “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses”, IEEE J Robot Autom 3:323-344, 1987.
  • the pin-hole and distortion parameters may be calculated semi-automatically using a Levenberg-Marquardt algorithm following the approach disclosed by Zhang, Z. in “A flexible new technique for camera calibration”, IEEE Trans Pattern Anal Mach Intell 22:1330-1334, 2000.
  • an optimization problem is set up that aims to minimize the difference between the output of the parametric camera model and the recorded images.
  • the algorithm solves this nonlinear optimization problem iteratively, yielding the camera parameters which map the real-world coordinates to the camera frame.
  • a functional relationship between pixel coordinates of the imaging device 140 and real-world two-dimensional, 2D, coordinates in the system of the calibration plate for different depths is obtained, which represents the depth-dependent in-plane calibration map.
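  • A minimal sketch of such an imaging-pathway calibration, here using OpenCV's checkerboard routines rather than the semi-automatic implementation referenced above (the 8×8 inner-corner count assumes the 9×9 checkerboard mentioned earlier; all parameter values are illustrative):

```python
import numpy as np
import cv2

def calibrate_imaging_pathway(images, square_size_mm=0.5, pattern=(8, 8)):
    """Estimate pinhole camera and distortion parameters (Zhang-style calibration)
    from grayscale images of a checkerboard recorded at several plate positions.
    `pattern` is the number of inner corners (8x8 for a 9x9 checkerboard)."""
    # Plate-frame coordinates of the inner corners (z = 0 in the plate plane).
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size_mm

    obj_points, img_points = [], []
    image_size = images[0].shape[1], images[0].shape[0]   # (width, height)
    for gray in images:
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

    # Levenberg-Marquardt refinement of the camera matrix and distortion coefficients.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    return K, dist, rvecs, tvecs
```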
  • the calibration 220 of the illumination or projection pathway is done similarly to the calibration 210 of the imaging pathway.
  • the endoscope system projects 221 one or more fringe patterns onto a further calibration plate, for example onto a flat white plate, to obtain 222 one or more measurements or images at the respective distances.
  • the number and the type of fringe patterns projected depend on how the endoscope will be used to take actual measurements. If the endoscope takes measurements using a single-shot FPP mode of a single frequency, then the one fringe pattern will be used for the calibration. Further, the same fringe pattern used for calibrating the projection probe has to be used for the actual measurements. If the endoscope is used in a multi-shot FPP mode, for example, in four-shot FPP mode, then four fringe patterns are required. Again, the same four fringe patterns used for calibrating the projection probe have to be used for the actual measurements.
  • the endoscope system employs a four-shot FPP mode using four 8-bit fringe patterns of a single frequency with sinusoidal intensity profiles and relative phase shifts of π/2.
  • four images for each depth are obtained. Due to the relative viewing angle between the illumination and imaging probes, the reflected fringe patterns recorded by the imaging device are deformed by the calibration plate. More particularly, the location of a light point of the fringe pattern in the recorded image will vary with the distance of the calibration plate. In the example of FIG.1, the greater the distance between the calibration plate and the endoscope device, the bigger the offset between the location of the light point in the recorded image and the location of that light point in the projected image. Exploiting this observation allows describing the intensity profile of the four images at the respective depths observed by the imaging device as follows:
  • $I_1(i,j) = I'(i,j) + I''(i,j)\cos[\varphi(i,j)]$ , (1)
  • $I_2(i,j) = I'(i,j) + I''(i,j)\cos[\varphi(i,j) + \pi/2]$ , (2)
  • $I_3(i,j) = I'(i,j) + I''(i,j)\cos[\varphi(i,j) + \pi]$ , (3)
  • $I_4(i,j) = I'(i,j) + I''(i,j)\cos[\varphi(i,j) + 3\pi/2]$ , (4)
  • where $I_1(i,j)$ to $I_4(i,j)$ are the four recorded fringe pattern intensities of the successive phase-shifted images at pixel indices $i$ and $j$, $I'(i,j)$ is the average or background intensity, $I''(i,j)$ is the profile modulation intensity, and $\varphi(i,j)$ is the phase map of the fringe patterns, which can be obtained by
  • $\varphi(i,j) = \arctan\left[\dfrac{I_4(i,j) - I_2(i,j)}{I_1(i,j) - I_3(i,j)}\right]$ . (5)
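  • Equations (1) to (5) translate directly into the following computation of the wrapped phase map from the four recorded images (a generic four-step phase-shifting sketch, not the authors' implementation):

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Recover the phase map of Equation (5) from the four phase-shifted fringe images
    (float arrays of identical shape); arctan2 yields values in [-pi, +pi]."""
    return np.arctan2(I4 - I2, I1 - I3)

def modulation(I1, I2, I3, I4):
    """Fringe modulation intensity I''(i, j), useful for masking low-contrast pixels."""
    return 0.5 * np.sqrt((I4 - I2) ** 2 + (I1 - I3) ** 2)
```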
  • the controller is able to determine 223 absolute phase maps $\varphi(i,j)$ for the respective depths.
  • a functional relationship between the absolute phase and pixel locations for every depth value is obtained by fitting a 2D polynomial, for example, a 5x5-order polynomial to the obtained absolute phase maps.
  • a three-dimensional data cube is obtained, herein referred to as the absolute phase-to-depth map, which stores absolute phase value as a function of the pixel location in the image and the depth.
  • the relationships between the pixel coordinates and the real-world in-plane coordinates obtained in the imaging pathway calibration may also be converted to a regular 3D interpolation grid, thus relating absolute phase values to in-plane X- and Y-coordinates for all pixels.
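  • One possible way to assemble such an absolute phase-to-depth data cube is sketched below: a low-order 2D polynomial is fitted to the absolute phase map at each calibration depth and the fitted surfaces are stacked along the depth axis. The 5th-order fit follows the example above; everything else is an illustrative assumption.

```python
import numpy as np

def fit_phase_surface(phase, order=5):
    """Least-squares fit of a 2D polynomial (terms x^m * y^n with m, n <= order)
    to one absolute phase map, evaluated back on the full pixel grid."""
    h, w = phase.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y, z = xx.ravel() / w, yy.ravel() / h, phase.ravel()   # normalised coordinates
    A = np.stack([x ** m * y ** n for m in range(order + 1) for n in range(order + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return (A @ coeffs).reshape(h, w)

def phase_to_depth_cube(phase_maps, depths):
    """Stack the fitted phase surfaces into a (depth, i, j) cube: the absolute
    phase-to-depth map storing absolute phase as a function of pixel location and depth."""
    return np.stack([fit_phase_surface(p) for p in phase_maps], axis=0), np.asarray(depths)
```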
  • the determined absolute phase maps $\varphi(i,j)$ for the respective depths contain $2\pi$ discontinuities due to the limited principal value domain $[-\pi, +\pi]$ of the arctangent function of Equation (5).
  • These discrete $2\pi$ jumps can be removed by means of a fast 2D spatial phase-unwrapping algorithm, for example using the method described by Herraez MA et al. in “Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path”, Appl Opt 2002; 41:7437-44.
  • continuous absolute phase maps for the respective depths are obtained.
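  • A sketch of this unwrapping step using an off-the-shelf 2D spatial unwrapper (scikit-image's unwrap_phase is based on the reliability-sorting algorithm of Herraez et al. cited above):

```python
from skimage.restoration import unwrap_phase

def unwrap(wrapped):
    """Remove the 2*pi discontinuities of a wrapped phase map with values in [-pi, +pi],
    yielding a continuous phase map (up to a global 2*pi*k offset)."""
    return unwrap_phase(wrapped)
```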
  • a method to determine a unique absolute phase map for the surface is needed.
  • the endoscope system projects 231 an image of a reference marker onto a blank calibration plate at the respective distances to obtain 232 measurements or images at said respective distances.
  • the controller 150 generates an image containing a reference marker, for example, a circular marker of 8-pixel diameter, feeds the image to the projector device 130 and activates the projector device 130 to project the reference marker, i.e. a light point or a dot, into the illumination probe and onto the calibration plate.
  • Multiple measurements are taken with the calibration plate placed at respective distances from the endoscope in the same way as described above.
  • the recorded images are two-dimensional grayscale images representing phase maps characterizing how the projected light point is reflected by the calibration plate, or rather how much the projected light point is offset by the calibration plate at the respective depths.
  • the greater the distance of the object’s surface at the reference location the bigger the offset between the location of the marker in the recorded image and the location of the marker in the projected image.
  • the controller determines the pixel location of the reference marker in the recorded images, herein called reference locations, by employing image processing techniques suitable for the purpose, such as thresholding.
  • The image processing technique will thus identify a small, confined area of connected pixels characterized by the highest intensity and a centroid point corresponding to the centre of the reference marker in the recorded image.
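  • The marker localisation can be sketched as a simple threshold-and-centroid operation; the relative threshold is an arbitrary illustrative choice.

```python
import numpy as np
from scipy import ndimage

def marker_location(image, rel_threshold=0.8):
    """Locate the projected reference marker in a grayscale image: threshold at a
    fraction of the maximum intensity, keep the largest connected blob, and return
    its centroid (row, column) in imaging-device coordinates."""
    mask = image >= rel_threshold * image.max()
    labels, n = ndimage.label(mask)
    if n == 0:
        raise ValueError("no marker found above the threshold")
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    marker = labels == (np.argmax(sizes) + 1)    # largest blob assumed to be the marker
    return ndimage.center_of_mass(marker)        # sub-pixel centroid (i, j)
```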
  • the controller proceeds to relate 234 the reference locations to the respective depths which are known, yielding a unique functional relation between the reference locations and the depth, i.e. a location-to-depth map, which is a one-dimensional map - a curve - defining the location of the reference marker in the depth direction.
  • the controller further relates the reference locations at the respective depths to the absolute phase- to-depth calibration map obtained in step 223, yielding a unique functional relation between the reference locations and the absolute phase, i.e. a location-to-absolute phase map.
  • the location-to-absolute phase calibration map is a one dimensional map, i.e. a curve.
  • this functional relationship is fitted using a third-order polynomial function to allow retrieval of the absolute phase map of arbitrary surfaces when a reference marker image is projected on it as will be detailed further below.
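  • A sketch of turning the per-depth marker locations into the two one-dimensional calibration curves: a third-order polynomial is used as stated above, and the marker is parameterised here by a single image coordinate (which coordinate varies with depth depends on the probe arrangement, so this is an assumption).

```python
import numpy as np

def fit_marker_curves(marker_coords, depths, abs_phases, order=3):
    """Fit the location-to-depth and location-to-absolute-phase calibration curves.
    marker_coords : marker centroid coordinate (e.g. row index) at each plate position
    depths        : the known plate depths
    abs_phases    : absolute phase sampled at the marker pixel for each depth,
                    taken from the absolute phase-to-depth calibration map."""
    loc_to_depth = np.polyfit(marker_coords, depths, order)      # polynomial coefficients
    loc_to_phase = np.polyfit(marker_coords, abs_phases, order)
    return loc_to_depth, loc_to_phase

# During a measurement the curves are simply evaluated at the detected marker location:
#   depth_at_marker = np.polyval(loc_to_depth, detected_coord)
#   phase_at_marker = np.polyval(loc_to_phase, detected_coord)
```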
  • the endoscope system can be used to image an arbitrary surface, i.e. to obtain a three-dimensional representation of any object’s surface.
  • the operation of the controller and therefore of the endoscope system to image an object’s surface will now be explained in detail with reference to FIG.3A and 3B, which show the steps performed by the endoscope system to image a surface, such as a cavity, of an object.
  • the imaging of the object’s surface is performed in two main steps.
  • the first step 310 shown in FIG.3A involves the projection of one or more fringe patterns, for example phase-shifted fringe patterns in multi-shot FPP, on the surface of the object 10 to determine the relative phase map of the object’s surface 10, while the second step 320 shown in FIG.3B involves the projection of a reference marker on the object's surface 10 to determine the absolute phase at the location of the reference marker based on the location-to-absolute phase map.
  • the absolute phase at the location of the reference marker can then provide a unique relation between the relative phase map and the absolute phase map or the distance or the depth of the object’s surface.
  • one or more fringe patterns are projected onto the object’s surface.
  • the number of fringe patterns depends on the type of fringe projection profilometry, FPP, employed. If a single-shot FPP is used, then a single fringe pattern of a single frequency is projected, while if a multi-shot FPP is used, two or more phase-shifted fringe patterns of a single frequency are projected.
  • Single-shot FPP may allow relatively fast projection of a set or series of images and does not need to rely on synchronization of projection and recording of images. Multi-shot FPP may be more advantageous in terms of calculation speed and accuracy. In the following description, multi-shot FPP will be described.
  • the same four 8-bit fringe patterns with sinusoidal intensity profiles and relative phase shifts of π/2 used in the endoscope calibration are now projected onto the object’s surface.
  • the controller 150 thus generates four grayscale images each of which corresponds to a respective phase-shifted fringe pattern.
  • the controller then feeds the respective images to the projection device 130 one by one and activates the projector device 130 so that the respective fringe patterns are projected into the illumination probe and onto the object’s surface one after another.
  • the controller activates the imaging device which upon activation records four images corresponding respectively to the fringe patterns reflected from the object’s surface, thereby obtaining a measurement 312 or image of the object’s surface.
  • This requires that the projection device projects the fringe patterns on the object’s surface at least for the duration needed for the imaging device to record the respective reflected fringe patterns.
  • One way to assure this is to perform steps 311 and 312 simultaneously.
  • the reflected fringe patterns recorded by the imaging device are deformed by the object's surface, i.e. the location of a light point of the fringe pattern in the recorded image will vary with the distance of the object’s surface.
  • the intensity profile observed by the imaging device is described in the same way as shown in Equations (1) to (4).
  • the controller determines 313 a relative phase map $\varphi'(i,j)$ of the object’s surface from the recorded images by exploiting the relation in Equation (5). More particularly, the determined phase map $\varphi'(i,j)$ represents relative phase values $\varphi'$ at respective pixel locations $i,j$.
  • a three-dimensional representation for the object’s surface is obtained 314 by relating the obtained relative phase map of the object’s surface with the depth of the object’s surface at a reference location.
  • the depth of the object’s surface at a reference location is obtained by performing the second main step 320.
  • the second main step 320 involves the projection of an image of a reference marker on the object’s surface 10 to determine the unique relation between the relative phase map and the distance or the depth of the object’s surface at the location of the reference marker.
  • an image of a reference marker is projected 321 onto the object’s surface.
  • the controller 150 generates an image containing a reference marker, for example, a circular marker of 8-pixel diameter, feeds the image to the projector device 130 and activates the projector device 130 to project the reference marker, i.e. a light point or a dot, into the illumination probe and onto the surface of the object’s cavity.
  • the light point is thus projected at a specific location on the object’s surface which is herein referred to as the reference location.
  • the controller then proceeds, in step 322, to activate the imaging device which upon activation records an image comprising the projected light point reflected from the object surface, thereby obtaining a measurement of the reference marker.
  • the recorded image is a two-dimensional grayscale image characterizing how the projected light point is reflected by the object’s surface, or rather by how much the projected light point is offset by the object’s surface.
  • the controller determines 323 the distance or the depth of the object’s surface at the reference location as follows. First, the controller determines the pixel location of the reference marker in the recorded image by image processing the recorded image. Any image processing techniques suitable for the purpose, such as thresholding, may be used. Based on the determined location, the controller then obtains the depth of the object’s surface at that location from the location-to-depth map obtained during the calibration. Alternatively, the location-to-absolute phase map, obtained during calibration, may be used to determine absolute phase at the location of the reference marker.
  • the controller then proceeds to step 314 to correct the relative phase map of the object’s surface $\varphi'(i,j)$ obtained in step 313, using the depth obtained at the location of the marker in step 323 and the absolute phase-to-depth calibration map obtained during the calibration, or directly the absolute phase obtained at the location of the reference marker.
  • as a result, an absolute phase map of the object’s surface is obtained.
  • the controller converts the obtained absolute phase map of the object’s surface to a depth map, i.e. a three-dimensional representation of the object’s surface.
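  • The final conversion can be sketched as a per-pixel interpolation of the absolute phase against the calibrated phase-to-depth cube; a monotonic phase-versus-depth relation per pixel is assumed, and the plain Python loop is kept for clarity rather than speed.

```python
import numpy as np

def phase_map_to_depth_map(abs_phase, cube, depths):
    """Convert an absolute phase map (i, j) into a depth map using the calibration
    cube of shape (n_depths, i, j) by interpolating phase -> depth for every pixel."""
    n_depths, h, w = cube.shape
    depths = np.asarray(depths, dtype=float)
    depth_map = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            phases = cube[:, i, j]
            # np.interp requires increasing sample points; flip if phase falls with depth.
            if phases[0] > phases[-1]:
                phases, d = phases[::-1], depths[::-1]
            else:
                d = depths
            depth_map[i, j] = np.interp(abs_phase[i, j], phases, d)
    return depth_map
```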
  • the second main step 320 may be performed prior to performing the first main step 310.
  • the depth of the object’s surface at the reference location will then be readily available to the controller once step 323 is completed.
  • the controller can directly proceed to perform step 314 to derive the three- dimensional representation of the object’s surface.
  • steps 311 and 312 need to be performed for different sections of the object’s surface and/or at various instants.
  • the projection of the fringe patterns is preferably done in a loop fashion, and the activation period of the imaging device, and therefore its exposure time, is preferably chosen high enough to maximize the intensity of the light captured by the imaging device without exceeding its dynamic range, but low enough to maximize the number of measured surface sections per unit of time, without dropping below the lower bound at which proper activation and real-time depth calculation are no longer possible.
  • FIG.4A shows the reconstructed 3D surface profile of a 9-mm diameter sphere imaged with the endoscope system of FIG.1 at five discrete distances separated by 1 mm along the bisector of the central projection and imaging axes, i.e. the Z-axis.
  • the sphere is coated with a white dye to provide optimal diffuse light reflectivity. No smoothing is applied to the presented data.
  • the five panels in FIG.4A show a triangulation of the reconstructed 3D point cloud of the imaged sphere.
  • the colour variation depicts the depth error relative to the best-fitting sphere.
  • FIG.4B shows cross-sections along the X- and Y-direction of the sphere for the obtained point cloud and the corresponding best-fitting sphere, of which the location is depicted in the 3D plots of FIG.4A. From the results, it can be seen that the measurement errors increase towards the edge of the reconstructed surfaces, i.e. towards the edge of the field of view of the endoscope.
  • the 2D root-mean-square, RMS values of the difference in depth between the 3D point cloud and the best-fitting sphere are calculated.
  • the RMS depth error amounted to 12.4, 14.0, 18.3, 21.7, and 26.8 µm respectively for the five successive distances in increasing order, indicating that depth precision decreases with distance.
  • the 2D resolution of the endoscope system decreases with distance, since on average one depth value is obtained per 12.1, 15.9, 19.3, 22.4, and 25.1 µm respectively for the five distances in increasing order.
  • the encompassed area of the object's surface imaged by the imaging device increases with distance, as can be seen in FIG.4A. To find the optimal distance to the endoscope device at which an object's surface profile is measured, a compromise should be made between depth precision and 2D resolution on the one hand, and the size of the encompassed area of the imaged object on the other hand.
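  • The accuracy figures above can in principle be reproduced by fitting a sphere to the reconstructed point cloud and taking the RMS of the residuals; the generic least-squares sphere fit below is a sketch and not the authors' evaluation code.

```python
import numpy as np
from scipy.optimize import least_squares

def sphere_fit_rms(points):
    """Fit a sphere to an (N, 3) point cloud and return (centre, radius, rms_error),
    where rms_error is the root-mean-square distance of the points to the fitted sphere."""
    def residuals(p):
        centre, r = p[:3], p[3]
        return np.linalg.norm(points - centre, axis=1) - r

    c0 = points.mean(axis=0)                              # initial centre guess
    r0 = np.linalg.norm(points - c0, axis=1).mean()       # initial radius guess
    sol = least_squares(residuals, x0=np.r_[c0, r0])      # Levenberg-Marquardt-style refinement
    rms = np.sqrt(np.mean(residuals(sol.x) ** 2))
    return sol.x[:3], sol.x[3], rms
```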
  • FIG.5 shows a computing system enabling to implement embodiments of the method for calibration and/or the method for imaging an object’s surface by means of the endoscope according to the present disclosure.
  • the computing system 400 may, in general, be formed as a suitable general-purpose computer and comprise a bus 410, a processor 402, a local memory 404, one or more optional input interfaces 414, one or more optional output interfaces 416, a communication interface 412, a storage element interface 406, and one or more storage elements 408.
  • Bus 410 may comprise one or more conductors that permit communication among the components of the computing system 400.
  • Processor 402 may include any type of conventional processor or microprocessor that interprets and executes programming instructions.
  • Local memory 404 may include a random-access memory, RAM, or another type of dynamic storage device that stores information and instructions for execution by processor 402 and/or read-only memory, ROM, or another type of static storage device that stores static information and instructions for use by processor 402.
  • Input interface 414 may comprise one or more conventional mechanisms that permit an operator or user to input information to the computing device 400, such as a keyboard 420, a mouse 430, a pen, voice recognition, and/or biometric mechanisms, a camera, etc.
  • Output interface 416 may comprise one or more conventional mechanisms that output information to the operator or user, such as a display 440, etc.
  • Communication interface 412 may comprise any transceiver-like mechanism such as for example one or more Ethernet interfaces that enables computing system 400 to communicate with other devices and/or systems, for example with other computing devices such as computing device 452.
  • the communication interface 412 of computing system 400 may be connected to such another computing system by means of a local area network, LAN, or a wide area network, WAN, such as for example the internet.
  • Storage element interface 406 may comprise a storage interface such as for example a Serial Advanced Technology Attachment, SATA, interface or a Small Computer System Interface, SCSI, for connecting bus 410 to one or more storage elements 408, such as one or more local disks, for example, SATA disk drives, and control the reading and writing of data to and/or from these storage elements 408.
  • computing system 400 could thus correspond to the controller 150 in the embodiments illustrated by FIG.1.
  • Terms such as top, bottom, over, under, and the like are introduced for descriptive purposes and not necessarily to denote relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances and embodiments of the invention are capable of operating according to the present invention in other sequences, or in orientations different from the one(s) described or illustrated above.

Abstract

The present disclosure relates to an endoscope system for three-dimensional frontal imaging, comprising an endoscope device including a rigid illumination probe and a rigid imaging probe configured for fringe projection profilometry, FPP, wherein the probes are arranged substantially on top of one another and configured to have an oblique viewing angle relative to one another. The present disclosure further relates to a method for relating location information to phase-to-depth information for an endoscope system and to a method for deriving a three-dimensional image of an object's surface by means of an endoscope system.

Description

AN ENDOSCOPE SYSTEM AND A METHOD THEREOF
Technical Field
[01] The present disclosure relates to an endoscope system for frontal imaging and more specifically to an endoscope system for frontal imaging based on fringe projection profilometry.
Background
[02] Rigid endoscopes are used in particular in medicine or other technology for examining hollow spaces such as bodily cavities, engine combustion areas, power units, and the like. Depending on the depth of the hollow area to be examined, various conversion lengths are bridged by the linear lens system integrated within the body of the endoscope.
[03] Rigid endoscopes comprise optical fibres placed around a metal tube which respectively act as illumination and imaging probes. The metal tube or barrel comprises objective and rod lenses placed therewithin which guide the captured illumination towards an eyepiece such as a magnifying lens. Such rigid endoscopes, however, provide only a qualitative two-dimensional, 2D, presentation of the inspected space and, thus, do not allow inspection of visually less distinguishable defects.
[04] Furthermore, most endoscopes are rather bulky and are not suitable for inspecting narrow cavities such as the ear canal, the middle ear, the nasal cavity, fuel injectors of engines, hydraulic manifolds, and so on.
Summary
[05] It is an object of embodiments of the present disclosure to provide an endoscope system capable of providing high-quality and high-resolution imaging of inspected spaces. It is a further object of embodiments of the present disclosure to provide an endoscope system suitable for inspecting narrow cavities. [06] The scope of protection sought for various embodiments of the invention is set out by the independent claims. The embodiments and features described in this specification that do not fall within the scope of the independent claims, if any, are to be interpreted as examples useful for understanding various embodiments of the invention.
[07] The objects are achieved, according to a first example aspect of the present disclosure, by an endoscope system for a three-dimensional frontal imaging characterized by the features of claim 1. In particular, the endoscope system comprises an endoscope device including a substantially rigid illumination probe and a substantially rigid imaging probe both having a substantially rigid body, also called a barrel, which body can, for example, be substantially cylindrical. The probes are further arranged substantially in parallel and on top of one another. A very small angle between the probes, for example smaller than 2°, may however be admissible. The probes may for example engage each other or may for example be permanently attached to each other. The illumination probe and the imaging probe may for example be comprised in a single housing for further protection. Alternatively, the illumination probe and the imaging probe may be positioned at a relatively short distance from each other, for example at a distance of 5 mm or less, such that a single housing can still enclose both the illumination probe and the imaging probe. In this way, a relatively compact endoscope device can be provided, in particular a dual-barrel endoscope device. At the same time, the probes are configured to have an oblique viewing angle relative to one another. The probes are further configured for fringe projection profilometry (FPP), such as single-shot FPP or, preferably, multi-shot FPP. In this way, the illumination probe can for example project one or more fringe patterns, i.e. structured light patterns, onto an object’s surface while the imaging probe can for example record reflections of the projected patterns from the object’s surface. Due to the relative viewing angle between the probes, reflected fringe patterns will be recorded by the imaging probe at a relative angle that enables depth quantification of the imaged surface and therefore three-dimensional representation of the object’s surface.
[08] The probes of the endoscope device can further have respective viewing angles in the range of zero to eighty degrees and a difference between the viewing angles of at least 10 degrees. This can for example allow one of the probes to have a zero-degree viewing angle and the other an oblique viewing angle ranging from 10 to 80 degrees, in particular an oblique viewing angle of more or less 30 degrees, thus greatly simplifying the construction of the endoscope device. The combination of a zero-degree viewing angle and a 30-degree viewing angle can be an advantageous compromise between requirements for depth measurements favouring relatively large viewing angles and for prevention of image deformation (i.e. difference of recorded image with respect to projected image) favouring relatively small viewing angles.
[09] The illumination probe and the imaging probe can advantageously have at least partially overlapping fields of view. The fields of view of the probes depend on their respective viewing angles and define how much of the environment, i.e. the object’s surface, or what portion of it, will be illuminated by the illumination probe and what portion of the environment will be recorded by the imaging probe. At least partially overlapping fields of view can allow illumination of at least part of an object’s surface by the illumination probe and imaging of at least part of said object’s surface by the imaging probe.
[10] It is preferred that at least one, and more preferably each, of the probes can further comprise an array of optical lenses. Alternatively, the probes can comprise a bundle of optical fibres. The optical lenses may be distributed within a substantially rigid body of the at least one probe and along its length and such that the first lens of the array is positioned near a head end of the probe. The first end or head end of the probe is to be understood as that end of the probe configured to be directed to an object to be imaged, for example at the tip or head of the probe, contrary to a second end or rear end of the probe opposite to the first end, which second or rear end may be configured to be coupled to further devices. More specifically, the first lens of the optical lens array of the illumination probe is an objective lens and is positioned near the head end of the probe, in particular the illumination end of the probe, i.e. near its tip. The objective lens may be configured to enlarge an image to be projected. Similarly, the first lens of the optical lens array of the imaging probe is an objective lens and is positioned near the head end of the probe, in particular the imaging end of the probe, i.e. towards its tip. The objective lens may be configured to enlarge a recorded image. Each of the probes further comprises a prism placed within its substantially rigid body and in front of the objective lens, so at the first end or head end of said respective probe. The prism is configured to define the probe’s viewing angle, in particular the oblique viewing angle, and in turn the probe’s field of view. The prism may be fixed or controllable, i.e. the prism may have a fixed or a controllable angle, respectively. Thus, by varying the type of prism or by varying the angle of the prism having a controllable angle, the viewing angles of the probes may be easily modified. Preferably, the objective lens and the fixed prism of the respective probes are removably attachable to the respective probes. This way, the endoscope device may be easily modified to comply with the specifications of the use case. Further, the optical lens array, including the objective lens and the prism, allows projecting fringe patterns as well as recording reflections of the projected patterns which contain well-defined geometric distortions, e.g. barrel or pincushion distortions. As these geometric distortions are well-defined, they may be compensated for, for example, by calibrating the endoscope system. As a result, after endoscope calibration, a high-quality, as well as a distortion-free three-dimensional image of the inspected object’s surface, is obtained.
[11] The endoscope system can preferably further comprise an imaging device. The imaging device is operatively coupled to the imaging probe and configured to record one or more images observed by the imaging probe. The imaging device may be a digital camera such as a CCD image sensor. Preferably, the imaging device is coupled to the imaging probe via an optical lens that guides the light captured by the imaging probe towards the imaging device.
[12] The endoscope system can preferably further comprise a projector device. The projector device is operatively coupled to the illumination probe and is configured to produce light, such as uniform light or structured light, commonly referred to as fringe patterns, and project said light via the illumination probe. The projector device may for example be configured to produce one or more fringe patterns and to project said one or more fringe patterns via the illumination probe. The projector device may be a Digital Light Processing projector (DLP projector), or any other suitable device. Preferably, the projector device is coupled to the illumination probe via an optical system that guides the light produced by the projector device into the illumination probe. The optical system may, for example, comprise an optical lens and a prism mirror.
[13] As the coupling between the projector device and the imaging device to the respective probes is done by means of an optical lens, the resolution of the projected image, i.e. the light pattern projected by the illumination probe, as well as the resolution of the recorded image are defined by the properties of the projector and the imaging devices, respectively. In other words, the resolution of the obtained three-dimensional representation of the object’s surface is defined by the projection and imaging devices.
[14] According to a preferred embodiment, the projector device and the optical system further form a projector unit which may be separate from the endoscope device comprising the probes. Similarly, the imaging device and the optical lens form an imaging unit which may be also separate from the endoscope device comprising the probes. Preferably, the projector unit and the imaging unit may be removably attached and/or releasably coupled to the respective probes, thereby allowing easy replacement of damaged parts. Preferably, the imaging unit and the projector unit are placed within the same housing to form one digital unit.
[15] The endoscope system further comprises a controller that is configured to be operatively coupled to the projecting device and the imaging device. The controller is further configured to control the operation of both the projecting device and the imaging device. In particular, the controller is configured to control the light projected by the illumination probe and the light captured by the imaging probe. In other words, the controller is configured to synchronize the light projection with the image recording. The controller may preferably be configured to synchronize projection of the one or more fringe patterns with recording of images of said one or more fringe patterns, such that the projection and the recording are performed substantially simultaneously.
[16] The light projected by the illumination probe may be light with a uniform pattern or it may be light with one or more structured patterns or fringe patterns. The fringe patterns may for example include a sinusoidal intensity profile and relative phase shifts. Different fringe patterns may thus be projected by varying the relative phase shift between the fringe patterns.
[17] According to another aspect of the invention, a method for deriving a three-dimensional frontal image of an object’s surface by means of an endoscope system is disclosed characterized by the features of claim 9. In particular, the endoscope system, which may for example be an endoscope system as described above, may be configured to employ a single-shot or a multi-shot fringe projection profilometry, FPP, to derive a three-dimensional image of the object’s surface. According to the FPP, the endoscope system projects one (single-shot) or more (multi-shot) fringe patterns of a single frequency and records images of their reflections by the object’s surface. Thus, one or more images of reflected fringe patterns are obtained. The obtained one or more images are processed by means of fringe profilometry analysis which involves triangulation of the location of the fringe patterns on the object’s surface in relation to the illumination and imaging probes of the endoscope device. As a result of the processing, a relative phase map of the object’s surface is obtained which defines the relative phase as a function of the location of the pixels in the image coordinates, or, in other words, as a function of the pixel coordinates in the recorded image. To obtain a three-dimensional representation of the object’s surface, the relative phase map needs to be transformed into an absolute phase map. Thereto, an image of a reflection of a reference marker projected onto the object’s surface is obtained and/or recorded by means of the endoscope system. A location of the reference marker in the image is then determined by means of any image processing technique suitable for the purpose, such as thresholding. Thus, a functional relation of the relative phase to the absolute phase at the reference location, i.e. at the location of the reference marker in the recorded image, is obtained. Based on that functional relation, the relative phase map may then be converted into an absolute phase map describing the absolute phase as a function of the location of the pixels expressed in coordinates of the recorded image, and the resulting absolute phase map is mapped to an absolute phase-to-depth map, which may be obtained through calibration of the endoscope system, thereby obtaining a depth map or a three-dimensional image of the object’s surface, i.e. a three-dimensional profile describing the depth of the object’s surface as a function of the location of the pixels as expressed in coordinates of the recorded image.
[18] Thus, by taking a single measurement of the reference marker and relating the location of the marker in the recorded image to depth, actual depth information at a specific location on the object’s surface is obtained. The depth information can be then used together with an absolute phase-to-depth map for the endoscope to convert the relative phase map of the object’s surface into a three-dimensional image of the object’s surface.
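By way of illustration only, and using the notation of the detailed description below, this conversion can be summarized as follows (the exact form of the offset term is an assumption made for this sketch, not the claimed formulation):

    φ(i,j) = φ'(i,j) + [φ_ref - φ'(i_ref, j_ref)]

where φ'(i,j) is the relative phase map, (i_ref, j_ref) is the determined location of the reference marker in the recorded image, and φ_ref is the absolute phase at that location derived from the depth information. Evaluating the absolute phase-to-depth map at φ(i,j) then yields the depth, i.e. the three-dimensional profile, for every pixel (i,j).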
[19] The method may further comprise the following steps, which may be considered as a method for relating location information to phase-to-depth information for an endoscope system. In particular, the method comprises obtaining, by means of the endoscope system, images of at least one projected marker at respective predetermined depths from the endoscope system. The reference marker may be substantially circular. To capture these images, the marker is projected onto, for example, a blank calibration plate oriented substantially perpendicularly to the bisector of the projection and observation axis of the probes, which include an angle due to the oblique viewing angle between the probes, and by displacing the plate at known distances along the depth direction, i.e. the bisector. The method further comprises determining the location of the reference marker in the respective images. This may be done by any image processing technique suitable for this purpose, such as thresholding. The reference marker location in a respective image corresponds to the location of the pixel including the reflection of the reference marker in the imaging device coordinates. Due to the relative viewing angle between the probes of the endoscope device, the location of the reflected reference marker in the recorded images will vary with the location of the plate along the depth direction, i.e. the location of the marker in the recorded images at the respective depths will be offset from its location in the projected image. This offset describes a monotonic relationship between the location of the marker in the recorded image and the placement of the plate. As the plate locations are predetermined, the locations of the reference marker in the respective images can be directly related to the distance or the depth of the plate placement with respect to the endoscope device. As a result, an unambiguous and unique location-to-depth map for the endoscope system is obtained by employing a projection of a reference marker in combination with known image processing techniques.
[20] The step of taking into account depth information can preferably include mapping the location of the reference marker to the location-to-depth map for the endoscope system. So, the location of the marker in the image is then mapped to the location-to-depth map for the endoscope system to obtain the depth information of the object’s surface at the location of the reference marker.
[21] The method may further comprise the steps of deriving, from an absolute phase-to-depth map for the endoscope system, absolute phase values for the respective predetermined depths. An absolute phase-to-depth map for an endoscope is typically obtained during the endoscope calibration. This absolute phase-to-depth calibration map defines the functional relation between the absolute phase as a function of the pixel coordinates in the recorded image and the depth. Otherwise said, the absolute phase-to-depth map defines the absolute phase-to-depth relation for respective pixel locations. Thus, by relating the determined absolute phase values for the respective predetermined depths to the respective determined locations, a location-to-absolute phase map may be obtained for the reference marker.
[22] The step of taking into account depth information of the object’s surface can then preferably include mapping the location of the reference marker to the location- to-absolute phase map for the endoscope system. In this way, an absolute phase at the location of the reference marker may be obtained. Again, the location-to-absolute phase map can provide an unambiguous and unique functional relation between the imaging device coordinates and absolute phase.
[23] According to further embodiments, the above-described methods can, for example, be performed by any computing system suitable for the purpose. Such a computing system comprises at least one processor and at least one memory including computer program code, wherein the at least one memory and computer program code are configured to, with the at least one processor, cause the computing system to perform the above-described methods. [24] According to a further aspect, a computer program product is disclosed comprising computer-executable instructions for causing a computer to perform the method according to the second or third example aspects.
[25] According to a further aspect, a computer readable storage medium is disclosed comprising computer-executable instructions for performing the method as described above when the program is run on a computer.
Brief Description of the Drawings
[26] Some example embodiments will now be described with reference to the accompanying drawings.
[27] FIG.1 shows a schematic presentation of the endoscope system according to the present disclosure;
[28] FIG.2A shows steps for deriving calibration maps for the endoscope system according to an example embodiment of the present disclosure;
[29] FIG.2B shows steps for deriving location-to-depth and location-to-phase calibration maps according to the present disclosure;
[30] FIG.3A and 3B show steps for deriving a three-dimensional image of an object’s surface according to the present disclosure;
[31] FIG.4A shows examples of a surface profile imaged at various depths using the endoscope system according to the present disclosure; and
[32] FIG.4B shows cross-sections along the X- and Y-directions of the surface profiles of FIG.4A.
[33] FIG.5 shows an example embodiment of a suitable computing system for performing one or several steps in embodiments of the present disclosure.
Detailed Description of Embodiment(s)
[34] FIG.1 shows a block scheme of the endoscope system according to a preferred embodiment. The endoscope system 100 comprises an endoscope device 101 including an illumination probe 110 and an imaging probe 120 placed substantially on top of one another. The probes are thus parallel and adjacent to one another, for example engaging each other. The illumination probe 110 comprises a substantially rigid body which can, for example, be substantially cylindrical. The illumination probe 110 further comprises a system of an objective 118 and rod lenses 112-117 distributed within its rigid body and along its length with the objective lens 118 placed at or near the illumination end of the probe, i.e. the tip or head of the probe. In other words, the system of lenses is distributed along the optical axis of the illumination probe. Similarly, the imaging probe 120 comprises a substantially rigid body which can, for example, be substantially cylindrical. The imaging probe 120 further comprises a system of an objective 128 and rod lenses 122 to 127 distributed within its rigid body and along its length with the objective lens 128 placed at or near the imaging end of the probe, i.e. the probe’s tip. In other words, the system of lenses is distributed along the optical axis of the imaging probe. The endoscope device 101, in particular each of the probes 110, 120, may further comprise a prism placed in front of the objective lens 118, 128, so at the first end or head end of said respective probe. The prism is configured to define the probe’s viewing angle, in particular the oblique viewing angle. The probes may have, for example, viewing angles in the range of 10 to 80 degrees. The viewing angle of the probes may depend on the use case, i.e. the specifications of the objects to be inspected such as an ear canal, a middle ear, endonasal components, vocal folds, engine cylinders, engine fuel injectors, hydraulic manifolds, and so on. In the example shown in FIG.1, the objective lens 118 of the illumination probe may for example have a viewing angle of zero degrees, while the objective lens 128 of the imaging probe can have a viewing angle of 30 degrees, thus providing an oblique relative viewing angle of 30 degrees. This makes the endoscope suitable for imaging cavities of objects such as the ear canal and the middle ear of a human.
[35] The illumination probe 110 and the imaging probe 120 may each be enclosed in a housing forming the so-called endoscope barrel. In this way, the endoscope device 101 may form a dual barrel endoscope. Additionally and/or alternatively, a single housing may enclose both the illumination probe 110 and the imaging probe 120 simultaneously. A barrel of the illumination probe 110 or of the imaging probe 120 may have a diameter ranging from more or less 2 mm to more or less 20 mm, thus allowing imaging of various objects’ surfaces. A 2 mm diameter barrel allows imaging of narrow cavities such as the nasal cavity of the average human population, a 5.7 mm diameter allows imaging of the ear canal of the average human population, and a 20 mm diameter allows imaging of engine cylinders, engine fuel injectors, hydraulic manifolds, and so on. The diameters of the barrels of the illumination probe 110 and the imaging probe 120 need not be the same. As an example, a first barrel may have a diameter of more or less 3 mm and a second barrel may have a diameter of more or less 2.7 mm, resulting in a dual barrel endoscope having a largest diameter of more or less 5.7 mm, which is suitable for otologic applications.
[36] The endoscope system 100 can further comprise a projector device 130, such as a DLP projector (Digital Light Processing projector), that is operatively coupled to the illumination probe 110 via an optical system comprising for example an optical lens 132 and a 90° prism mirror 134. The projector device and the optical system may thus be positioned at a 90° angle relative to the optical axis of the illumination probe, for example vertically when in use, or in any other suitable position. In the example of FIG.1, the optical lens 132 and the prism mirror 134 may be removably attached to a second end or rear end of the illumination probe, thus forming part of the illumination probe, while the projector device 130 may be removably attached to the optical system formed by the optical lens and the prism mirror. Thus, in this example, it should be understood that the projector device 130 is also removably attached to the illumination probe via the optical system. Further, in the example of FIG.1, the DLP projector contains flash memory for onboard image storage. In this way, the projector may be capable of buffering the images received from the controller.
[37] The endoscope system 100 can further comprise an imaging device 140 such as a digital imaging sensor, such as a CCD sensor, that is operatively coupled to the imaging probe 120 via an optical lens 142. The imaging device together with the optical lens is thus positioned along the optical axis of the imaging probe and thus of the endoscope. For illustrative purposes, the optical lens 142 is shown as located at a distance from the imaging probe; however, in the example shown in FIG.1, it should be understood that the lens 142 is removably attached to the base of the imaging probe, thus forming part of the imaging probe, while the imaging device is removably attached to the lens 142. Thus, in this example, it should be understood that the imaging device 140 is also removably attached to the imaging probe via the lens 142.
[38] The endoscope system 100 further comprises a controller 150 which is operatively connected to the projecting and the imaging devices 130 and 140. Preferably, the controller may be connected to the projecting device and the imaging device via for example a coaxial, Bayonet Neill-Concelman, BNC, connector, or a gigabit Ethernet interface. The controller 150 is further configured to control the operation of the projector and the imaging devices. More particularly, the controller 150 is configured to generate images, feed the respective images to the projector device 130, and activate the projector device which upon activation first converts the respective images into light patterns and then projects the light patterns into the illumination probe 110, thus illuminating the object’s surface with the respective light patterns. The controller 150 may generate various images corresponding to light patterns such as uniform light patterns, fringe patterns, or any other light patterns. The controller thus not only controls what light patterns are projected but also their time of projection. The controller 150 is further configured to activate the imaging device 140 which upon activation records the reflections by the object’s surface of the projected light patterns. Thus, the controller controls at what time the imaging device 140 records the light patterns reflected by the object’s surface.
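A minimal control-loop sketch of this projection/recording synchronization is given below, in Python. It is purely illustrative: the projector and camera objects and their project() and capture() methods are hypothetical placeholders standing in for the projector device 130 and the imaging device 140, not an actual device API.

    # Illustrative sketch only; the projector and camera interfaces are
    # hypothetical placeholders, not a real device API.
    def acquire_fringe_images(projector, camera, fringe_patterns):
        """Project each fringe pattern and record its reflection in turn."""
        recorded = []
        for pattern in fringe_patterns:
            projector.project(pattern)   # illuminate the object's surface
            image = camera.capture()     # record the reflected pattern
            recorded.append(image)
        return recorded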
[39] Before doing any measurement with the endoscope system, the endoscope system needs to be calibrated. The calibration of the endoscope system will now be explained in detail with reference to FIG.2A and FIG.2B, wherein FIG.2A shows the calibration steps to obtain the absolute phase-to-depth and depth-dependent in-plane calibration maps, which may be done using known procedures, and FIG.2B shows the calibration steps to derive the location-to-depth and location-to-absolute phase calibration maps for one or more reference locations, which are specific to the present method.
[40] Generally speaking, the calibration of the endoscope system requires a calibration 210 of the imaging pathway and a calibration 220 of the illumination pathway. [41] To calibrate 210 the imaging pathway, in the first step, the endoscope system records images of a calibration plate including for example a 9x9 checkerboard pattern or a grid with a square size of, for example, 0.5 × 0.5 mm2. For this purpose, the controller 150 generates an image with uniform intensity, feeds that image to the projector device 130, and then activates the projector device 130. Upon activation, the projector device projects 211 uniform light into the illumination probe and onto the calibration plate. In the next step, the controller 150 activates the imaging device 140 which records an image of the calibration plate illuminated by the illumination probe. The imaging device 140 thus obtains measurements or images 212 by recording images of the calibration plate with the calibration plate oriented perpendicularly to the bisector of the projection and observation axis, or in other words the bisector of the viewing angle, providing the optimal sensitivity for depth measurement. The images are recorded at multiple distances by displacing the calibration plate sequentially by, for example, 0.1 mm along the depth direction, i.e. the bisector. For this purpose, the calibration plate is fixed to a translation stage, which allows the plate to be placed at various distances from the endoscope device.
[42] In the next step, the recorded images are processed 213 to derive a depth-dependent in-plane calibration map. This may be done by employing, for example, the pin-hole camera model with radial distortion as described by Tsai, R. in “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses”, IEEE J Robot Autom 3:323-344, 1987. Further, the pin-hole and distortion parameters may be calculated semi-automatically using a Levenberg-Marquardt algorithm following the approach disclosed by Zhang, Z. in “A flexible new technique for camera calibration”, IEEE Trans Pattern Anal Mach Intell 22:1330-1334, 2000. Summarized, in this procedure an optimization problem is set up that aims to minimize the difference between the output of the parametric camera model and the recorded images. The algorithm then solves this nonlinear optimization problem iteratively, yielding the camera parameters which map the real-world coordinates to the camera frame. As a result, a functional relationship between pixel coordinates of the imaging device 140 and real-world two-dimensional, 2D, coordinates in the system of the calibration plate for different depths is obtained, which represents the depth-dependent in-plane calibration map. Mathematically, this functional relationship can be expressed as (x,y) = f(i,j; z), where (x,y) stand for the real-world coordinates, (i,j) for the pixel coordinates in the recorded image, and z for the depth at which the relation applies.
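As an illustrative software analogue of this step (an assumption made for the sketch, not the procedure of the embodiment): OpenCV's calibrateCamera routine, which implements Zhang's method, estimates pin-hole and distortion parameters from detected checkerboard corners.

    import numpy as np
    import cv2

    # obj_points: list of (N, 3) float32 arrays with checkerboard corner
    # coordinates in the calibration-plate frame (e.g. a 0.5 mm grid);
    # img_points: list of (N, 2) float32 arrays with the detected pixel
    # coordinates of those corners in the recorded images.
    def calibrate_imaging_pathway(obj_points, img_points, image_size):
        rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, image_size, None, None)
        return camera_matrix, dist_coeffs, rms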
[43] The calibration 220 of the illumination or projection pathway is done similarly to the calibration 210 of the imaging pathway. However, for the calibration of the projection pathway, instead of projecting uniform light, the endoscope system projects 221 one or more fringe patterns onto a further calibration plate, for example onto a flat white plate, to obtain 222 one or more measurements or images at the respective distances. The number and the type of fringe patterns projected depend on how the endoscope will be used to take actual measurements. If the endoscope takes measurements using a single-shot FPP mode of a single frequency, then one fringe pattern will be used for the calibration. Further, the same fringe pattern used for calibrating the projection probe has to be used for the actual measurements. If the endoscope is used in a multi-shot FPP mode, for example in four-shot FPP mode, then four fringe patterns are required. Again, the same four fringe patterns used for calibrating the projection probe have to be used for the actual measurements.
[44] In the example of FIG.1, the endoscope system employs a four-shot FPP mode using four 8-bit fringe patterns of a single frequency with sinusoidal intensity profiles and relative phase shifts of π/2. As a result of the measurements, four images for each depth are obtained. Due to the relative viewing angle between the illumination and imaging probes, the reflected fringe patterns recorded by the imaging device are deformed by the calibration plate. More particularly, the location of a light point of the fringe pattern in the recorded image will vary with the distance of the calibration plate. In the example of FIG.1, the greater the distance between the calibration plate and the endoscope device, the bigger the offset between the location of the light point in the recorded image and the location of that light point in the projected image. Exploiting this observation allows describing the intensity profile of the four images at the respective depths observed by the imaging device as follows:
I1(i,j) = I'(i,j) + I''(i,j) cos[φ(i,j)], (1)
I2(i,j) = I'(i,j) + I''(i,j) cos[φ(i,j) + π/2], (2)
I3(i,j) = I'(i,j) + I''(i,j) cos[φ(i,j) + π], (3)
I4(i,j) = I'(i,j) + I''(i,j) cos[φ(i,j) + 3π/2], (4)
where I1(i,j) to I4(i,j) are the four recorded fringe pattern intensities of the successive phase-shifted images at pixel indices i and j, I'(i,j) is the average or background intensity, I''(i,j) the profile modulation intensity, and φ(i,j) the phase map of the fringe patterns, which can be obtained by
φ(i,j) = arctan[(I4(i,j) - I2(i,j)) / (I1(i,j) - I3(i,j))]. (5)
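A minimal numerical sketch of Equation (5), assuming the π/2 phase-shift convention given above (the exact combination of images depends on that convention), could look as follows in Python:

    import numpy as np

    def wrapped_phase(i1, i2, i3, i4):
        """Four-step phase shifting: wrapped phase in [-pi, +pi] per pixel."""
        return np.arctan2(i4 - i2, i1 - i3)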
[45] By exploiting the relation in Equation (5), the controller is able to determine 223 absolute phase maps φ(i,j) for the respective depths. A functional relationship between the absolute phase and pixel locations for every depth value is obtained by fitting a 2D polynomial, for example a 5x5-order polynomial, to the obtained absolute phase maps. As a result, a three-dimensional data cube is obtained, herein referred to as the absolute phase-to-depth map, which stores the absolute phase value as a function of the pixel location in the image and the depth. Mathematically, this functional relationship can be represented as φ(i,j,z).
[46] Next, interpolation between the absolute phase maps obtained at the different depths is performed. This allows deriving a depth map or the depth profile of an arbitrary 3D object's surface directly from its relative phase map as a function of the image pixel coordinates (i,j) as detailed further below. To enable real-time derivation of the depth map of the object’s surface, the absolute phase-to-depth relation for all pixels is interpolated on an equidistant phase array. As a result, a regular 3D interpolation grid is obtained, which allows fast 3D gridded interpolation suited for depth extraction in real-time. Furthermore, the relationships between the pixel coordinates and the real-world in-plane coordinates obtained in the imaging pathway calibration may also be converted to a regular 3D interpolation grid, thus relating absolute phase values to in-plane X- and Y-coordinates for all pixels.
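One possible way to realize such a gridded interpolation in software is sketched below; the use of scipy and the array layout are assumptions made for the sketch only.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # phase_axis: equidistant absolute-phase samples; i_axis, j_axis: pixel
    # indices; depth_cube[p, i, j]: depth for phase phase_axis[p] at pixel (i, j).
    def make_depth_lookup(phase_axis, i_axis, j_axis, depth_cube):
        return RegularGridInterpolator((phase_axis, i_axis, j_axis), depth_cube)

    # usage: z_ij = make_depth_lookup(...)((phi_abs[i, j], i, j))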
[47] As described in further detail below, the determined absolute phase maps φ(i,j) for the respective depths contain 2π discontinuities due to the limited principal value domain [-π, +π] of the arctangent function of Equation (5). These discrete 2π jumps can be removed by means of a fast 2D spatial phase-unwrapping algorithm, for example using the method described by Herraez MA et al. in “Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path”, Appl Opt 2002; 41:7437-44. After the unwrapping, continuous absolute phase maps for the respective depths are obtained. However, to allow unambiguous phase-to-depth correspondence for measurements of arbitrary surfaces, a method to determine a unique absolute phase map for the surface is needed.
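For completeness, such a 2D spatial unwrapping could, for example, be performed with an off-the-shelf routine; the sketch below uses scikit-image's unwrap_phase, which follows the reliability-sorting approach of Herraez et al., as an assumed stand-in for the embodiment's implementation.

    import numpy as np
    from skimage.restoration import unwrap_phase

    def continuous_phase(wrapped):
        """Remove the 2*pi jumps from a wrapped phase map."""
        return unwrap_phase(wrapped)  # expects values in [-pi, +pi]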
[48] To do so, referring to FIG. 2B, the endoscope system projects 231 an image of a reference marker onto a blank calibration plate at the respective distances to obtain 232 measurements or images at said respective distances. For this purpose, the controller 150 generates an image containing a reference marker, for example a circular marker of 8-pixel diameter, feeds the image to the projector device 130 and activates the projector device 130 to project the reference marker, i.e. a light point or a dot, via the illumination probe onto the calibration plate. Multiple measurements are taken with the calibration plate placed at respective distances from the endoscope in the same way as described above.
[49] As described above, due to the relative viewing angle between the probes 110 and 120, the location of the marker in the recorded images will change with the depth of the calibration plate, describing a monotonic relationship. The recorded images are two-dimensional grayscale images characterizing how the projected light point is reflected by the calibration plate, or rather how much the projected light point is offset by the calibration plate at the respective depths. In the example of FIG.1, the greater the distance of the calibration plate at the reference location, the bigger the offset between the location of the marker in the recorded image and the location of the marker in the projected image. This observation is herein exploited to relate the location of the reference marker in the recorded image to the distance of the calibration plate with respect to the endoscope device, i.e. the depth.
[50] In the next step, i.e. step 233, the controller determines the pixel location of the reference marker in the recorded images, herein called reference locations, by employing image processing techniques suitable for the purpose, such as thresholding. The image processing technique will thus identify a small, confined area of connected pixels characterized by the highest intensity and a centroid point corresponding to the centre of the reference marker in the recorded image.
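A minimal sketch of such a marker-detection step (simple global thresholding plus centroid; the threshold factor is an assumption):

    import numpy as np

    def marker_location(image, threshold=0.8):
        """Centroid (row, col) of the pixels above the intensity threshold."""
        mask = image >= threshold * image.max()
        coords = np.argwhere(mask)      # (row, col) indices of bright pixels
        return coords.mean(axis=0)      # centroid of the reference marker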
[51] After identifying the reference locations in the respective images, the controller proceeds to relate 234 the reference locations to the respective depths, which are known, yielding a unique functional relation between the reference locations and the depth, i.e. a location-to-depth map, which is a one-dimensional map - a curve - relating the location of the reference marker in the image to the depth. The controller further relates the reference locations at the respective depths to the absolute phase-to-depth calibration map obtained in step 223, yielding a unique functional relation between the reference locations and the absolute phase, i.e. a location-to-absolute phase map. Again, the location-to-absolute phase calibration map is a one-dimensional map, i.e. a curve. Similarly to the above, this functional relationship is fitted using a third-order polynomial function to allow retrieval of the absolute phase map of arbitrary surfaces when a reference marker image is projected onto it, as will be detailed further below.
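These one-dimensional calibration curves can be fitted, for example, with a least-squares polynomial fit; in the sketch below the third-order choice follows the text, while the use of the marker row coordinate as abscissa is an assumption.

    import numpy as np

    def fit_reference_curves(marker_rows, depths, phases, order=3):
        """Fit the location-to-depth and location-to-absolute-phase curves."""
        loc_to_depth = np.polyfit(marker_rows, depths, order)
        loc_to_phase = np.polyfit(marker_rows, phases, order)
        return loc_to_depth, loc_to_phase

    # evaluation during a measurement, e.g.:
    # depth_at_marker = np.polyval(loc_to_depth, measured_marker_row)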
[52] An image with multiple reference markers may also be employed. In this case, multiple reference locations for the respective depths will be obtained instead of one. This allows coping with deviations between the location of the reference marker used to obtain the location-to-depth or location-to-absolute phase calibration maps and the location of the reference marker used for obtaining depth information for the object’s surface.
[53] Once calibrated, the endoscope system can be used to image an arbitrary surface, i.e. to obtain a three-dimensional representation of any object’s surface. The operation of the controller and therefore the endoscope system to image an object’s surface will now be explained in detail with reference to FIG.3A and 3B, which show steps performed by the endoscope system to image a surface, such as a cavity, of an object. The imaging of the object’s surface is performed in two main steps. The first step 310 shown in FIG. 3A involves the projection of one or more fringe patterns, for example phase-shifted fringe patterns in multi-shot FPP, on the surface of the object 10 to determine the relative phase map of the object’s surface 10, while the second step 320 shown in FIG. 3B involves the projection of a reference marker on the object's surface 10 to determine the absolute phase at the location of the reference marker based on the location-to-absolute phase map. The absolute phase at the location of the reference marker can then provide a unique relation between the relative phase map and the absolute phase map or the distance or the depth of the object’s surface.
[54] In the first step, i.e. step 311, one or more fringe patterns are projected onto the object’s surface. The number of fringe patterns depends on the type of fringe projection profilometry, FPP, employed. If a single-shot FPP is used, then a single fringe pattern of a single frequency is projected, while if a multi-shot FPP is used, two or more phase-shifted fringe patterns of a single frequency are projected. Single-shot FPP may allow relatively fast projection of a set or series of images and does not need to rely on synchronization of projection and recording of images. Multi-shot FPP may be more advantageous in terms of calculation speed and accuracy. In the following description, multi-shot FPP will be described. In the example of FIG.1, the same four 8-bit fringe patterns with sinusoidal intensity profiles and relative phase shifts of π/2 used in the endoscope calibration are now projected onto the object’s surface. The controller 150 thus generates four grayscale images each of which corresponds to a respective phase-shifted fringe pattern. The controller then feeds the respective images to the projection device 130 one by one and activates the projector device 130 so that the respective fringe patterns are projected via the illumination probe onto the object’s surface one after another.
[55] In the next step, the controller activates the imaging device which upon activation records four images corresponding respectively to the fringe patterns reflected from the object’s surface, thereby obtaining a measurement 312 or image of the object’s surface. This requires that the projection device projects the fringe patterns on the object’s surface at least for the duration needed for the imaging device to record the respective reflected fringe patterns. One way to assure this is to perform steps 311 and 312 simultaneously. [56] As detailed above, due to the relative viewing angle between the illumination and imaging probes, the reflected fringe patterns recorded by the imaging device are deformed by the object's surface, i.e. the location of a light point of the fringe pattern in the recorded image will vary with the distance of the object’s surface. Thus, the intensity profile observed by the imaging device is described in the same way as shown in Equations (1) to (4).
[57] The controller then determines 313 a relative phase map φ'(i,j) of the object’s surface from the recorded images by exploiting the relation in Equation (5). More particularly, the determined phase map φ'(i,j) represents relative phase values φ' at respective pixel locations i,j.
[58] In the last step of the imaging method, a three-dimensional representation for the object’s surface is obtained 314 by relating the obtained relative phase map of the object’s surface with the depth of the object’s surface at a reference location. The depth of the object’s surface at a reference location is obtained by performing the second main step 320. Thus, to convert the relative phase map obtained from step 313 to a three-dimensional representation of the object’s surface in step 314, the method proceeds to first perform the second main step 320 of the imaging process and then continues with performing step 314.
[59] As detailed above and shown in FIG. 3B, the second main step 320 involves the projection of an image of a reference marker on the object’s surface 10 to determine the unique relation between the relative phase map and the distance or the depth of the object’s surface at the location of the reference marker. Thus, an image of a reference marker is projected 321 onto the object’s surface. To do so, the controller 150 generates an image containing a reference marker, for example a circular marker of 8-pixel diameter, feeds the image to the projector device 130 and activates the projector device 130 to project the reference marker, i.e. a light point or a dot, via the illumination probe onto the surface of the object’s cavity. The light point is thus projected at a specific location on the object’s surface which is herein referred to as the reference location.
[60] The method then proceeds to step 322 to activate the imaging device which upon activation records an image comprising the projected light point reflected from the object surface, thereby obtaining a measurement of the reference marker. Similarly to above, to assure recording of the reflected light dot, steps 321 and 322 may also be performed simultaneously.
[61] The recorded image is a two-dimensional grayscale image characterizing how the projected light point is reflected by the object’s surface, or rather how much the projected light point is offset by the object’s surface. The controller then determines 323 the distance or the depth of the object’s surface at the reference location as follows. First, the controller determines the pixel location of the reference marker in the recorded image by image processing the recorded image. Any image processing technique suitable for the purpose, such as thresholding, may be used. Based on the determined location, the controller then obtains the depth of the object’s surface at that location from the location-to-depth map obtained during the calibration. Alternatively, the location-to-absolute phase map, obtained during calibration, may be used to determine the absolute phase at the location of the reference marker.
[62] Once the depth or absolute phase of the object’s surface at the reference location is determined, the controller proceeds to step 314 to correct the relative phase map of the object’s surface φ'(i,j) obtained in step 313 using the depth obtained at the location of the marker in step 323 and the absolute phase-to-depth calibration map obtained during the calibration, or directly the absolute phase obtained at the location of the reference marker. As a result of this correction, an absolute phase map of the object’s surface, i.e. φ(i,j), is obtained. Next, the controller converts the obtained absolute phase map of the object’s surface to a depth map, i.e. z(i,j), of the object’s surface by using the absolute phase-to-depth calibration map and finally maps the obtained depth map z(i,j) onto the depth-dependent in-plane calibration map to obtain the three-dimensional representation of the object’s surface z(x,y).
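A condensed sketch of this correction and conversion chain is given below; it reuses the hypothetical helpers from the earlier sketches (the loc_to_phase curve and the depth_lookup interpolator) and is illustrative only.

    import numpy as np

    def surface_depth_map(phi_rel, marker_ij, loc_to_phase, depth_lookup):
        """Relative phase map plus reference marker -> depth map z(i, j)."""
        i_ref, j_ref = marker_ij
        phi_ref = np.polyval(loc_to_phase, i_ref)              # absolute phase at the marker
        phi_abs = phi_rel + (phi_ref - phi_rel[i_ref, j_ref])  # shift to absolute phase
        rows, cols = np.indices(phi_abs.shape)
        pts = np.stack([phi_abs.ravel(), rows.ravel(), cols.ravel()], axis=1)
        return depth_lookup(pts).reshape(phi_abs.shape)        # z(i, j)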
[63] Alternatively, the second main step 320 may be performed prior to performing the first main step 310. In that case, the depth of the object’s surface at the reference location will be readily available to the controller once step 314 is completed. Thus, the controller can directly proceed to perform step 314 to derive the three- dimensional representation of the object’s surface. [64] In the case the object’s surface to be imaged is larger than the field of view of the endoscope device, or when a displacement of the object’s surface is to be measured, for example when the object or its surface is moving, steps 311 and 312 need to be performed for different sections of the object’s surface and/or at various instants. In such cases, the projection of the fringe patterns is preferably done in a loop fashion and the activation period for the imaging device, and, therefore, its exposure time, is preferably chosen high enough to maximize the intensity of the light captured by the imaging device without exceeding its dynamic range, but low enough to maximize the number of measured surface sections per unit of time, without exceeding the lower bound below which proper activation and real-time depth calculation are no longer possible.
[65] FIG.4A shows the reconstructed 3D surface profile of a 9-mm diameter sphere imaged with the endoscope system of FIG.1 at five discrete distances separated by 1 mm along the bisector of the central projection and imaging axis, i.e. the Z-axis. The sphere is coated with a white dye to provide optimal diffuse light reflectivity. No smoothing is applied to the presented data. The five panels in FIG.4A show a triangulation of the reconstructed 3D point cloud of the imaged sphere. The colour variation depicts the depth error relative to the best-fitting sphere. FIG.4B shows cross-sections along the X- and Y-direction of the sphere for the obtained point cloud and the corresponding best-fitting sphere, of which the location is depicted in the 3D plots of FIG.4A. From the results, it can be seen that the measurement errors increase towards the edge of the reconstructed surfaces, i.e. towards the edge of the field of view of the endoscope. To quantify the endoscope system’s mean measurement error as a function of relative distance to the endoscope device, the 2D root-mean-square, RMS, values of the difference in depth between the 3D point cloud and the best-fitting sphere are calculated. The RMS depth error amounted to 12.4, 14.0, 18.3, 21.7, and 26.8 µm respectively for the five successive distances in increasing order, indicating that depth precision decreases with distance. At the same time, the 2D resolution of the endoscope system decreases with distance, since on average one depth value is obtained per 12.1, 15.9, 19.3, 22.4, and 25.1 µm respectively for the five distances in increasing order. On the other hand, the encompassed area of the object's surface imaged by the imaging device increases with distance, as can be seen in FIG.4A. To find the optimal distance to the endoscope device at which an object's surface profile is measured, a compromise should be made between depth precision and 2D resolution on the one hand, and the size of the encompassed area of the imaged object on the other hand.
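The quoted RMS figures correspond to a computation of the following form (a sketch; z_fit stands for the best-fitting sphere evaluated at the measured in-plane coordinates):

    import numpy as np

    def rms_depth_error(z_measured, z_fit):
        """2D root-mean-square difference between measured and fitted depth."""
        return np.sqrt(np.mean((z_measured - z_fit) ** 2))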
[66] FIG.5 shows a computing system enabling to implement embodiments of the method for calibration and/or the method for imaging an object’s surface by means of the endoscope according to the present disclosure. The computing system 400 may, in general, be formed as a suitable general-purpose computer and comprise a bus 410, a processor 402, a local memory 404, one or more optional input interfaces 414, one or more optional output interfaces 416, a communication interface 412, a storage element interface 406, and one or more storage elements 408. Bus 410 may comprise one or more conductors that permit communication among the components of the computing system 400. Processor 402 may include any type of conventional processor or microprocessor that interprets and executes programming instructions. Local memory 404 may include a random-access memory, RAM, or another type of dynamic storage device that stores information and instructions for execution by processor 402 and/or read-only memory, ROM, or another type of static storage device that stores static information and instructions for use by processor 402. Input interface 414 may comprise one or more conventional mechanisms that permit an operator or user to input information to the computing device 400, such as a keyboard 420, a mouse 430, a pen, voice recognition, and/or biometric mechanisms, a camera, etc. Output interface 416 may comprise one or more conventional mechanisms that output information to the operator or user, such as a display 440, etc. Communication interface 412 may comprise any transceiver-like mechanism such as for example one or more Ethernet interfaces that enables computing system 400 to communicate with other devices and/or systems, for example with other computing devices such as computing device 452. The communication interface 412 of computing system 400 may be connected to such another computing system by means of a local area network, LAN, or a wide area network, WAN, such as for example the internet. Storage element interface 406 may comprise a storage interface such as for example a Serial Advanced Technology Attachment, SATA, interface or a Small Computer System Interface, SCSI, for connecting bus 410 to one or more storage elements 408, such as one or more local disks, for example, SATA disk drives, and control the reading and writing of data to and/or from these storage elements 408. Although the storage element(s) 408 above is/are described as a local disk, in general, any other suitable computer-readable media such as a removable magnetic disk, optical storage media such as a CD-ROM or DVD-ROM disk, solid state drives, flash memory cards, ... could be used. Computing system 400 could thus correspond to the controller 150 in the embodiments illustrated by FIG.1.
[67] Although the present invention has been illustrated by reference to specific embodiments, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied with various changes and modifications without departing from the scope thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the scope of the claims are therefore intended to be embraced therein.
[68] It will furthermore be understood by the reader of this patent application that the words "comprising" or "comprise" do not exclude other elements or steps, that the words "a" or "an" do not exclude a plurality, and that a single element, such as a computer system, a processor, or another integrated unit may fulfil the functions of several means recited in the claims. Any reference signs in the claims shall not be construed as limiting the respective claims concerned. The terms "first", "second", "third", "a", "b", "c", and the like, when used in the description or in the claims are introduced to distinguish between similar elements or steps and are not necessarily describing a sequential or chronological order. Similarly, the terms "top", "bottom", "over", "under", and the like are introduced for descriptive purposes and not necessarily to denote relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances and embodiments of the invention are capable of operating according to the present invention in other sequences, or in orientations different from the one(s) described or illustrated above.

Claims

Claims
1. An endoscope system (100) for a three-dimensional frontal imaging, the endoscope system including an endoscope device comprising a substantially rigid illumination probe (110) and a substantially rigid imaging probe (120) configured for fringe projection profilometry (FPP), wherein the probes are arranged substantially in parallel and on top of one another and wherein the probes are configured to have an oblique viewing angle relative to one another.
2. The endoscope system (100) according to claim 1, wherein the substantially rigid probes have respective viewing angles in the range of zero to eighty degrees and a difference between the viewing angles of at least 10 degrees.
3. The endoscope system (100) according to claims 1 or 2, wherein the substantially rigid probes have at least partially overlapping fields of view.
4. The endoscope system (100) according to any one of the preceding claims, wherein at least one of the substantially rigid probes (110; 120) comprises an optical lens array (112-118; 122-128).
5. The endoscope system (100) according to any one of the preceding claims, wherein the endoscope system further comprises an imaging device (140) operatively coupled to the imaging probe (120) and configured to record one or more images observed by the imaging probe (120).
6. The endoscope system (100) according to any one of the preceding claims, wherein the endoscope system further comprises a projector device (130) operatively coupled to the illumination probe (110) and configured to produce one or more fringe patterns and to project said one or more fringe patterns via the illumination probe (110).
7. The endoscope system (100) according to claims 5 and 6, wherein the endoscope system further comprises a controller (150) operatively coupled to the projector device (130) and to the imaging device (140), wherein the controller is preferably configured to synchronize projection of the one or more fringe patterns with recording of images of said one or more fringe patterns.
8. The endoscope system (100) according to any of the preceding claims 6 - 7, wherein the fringe patterns include a sinusoidal intensity profile and relative phase shifts.
9. A method for deriving a three-dimensional frontal image of an object’s surface by means of an endoscope system, the method comprising the steps of:
- obtaining (312), by means of the endoscope system, in particular an endoscope system according to any of the preceding claims, an image of one or more fringe patterns of a single frequency projected onto the object’s surface;
- determining (313), therefrom, a relative phase map of the object’s surface;
- obtaining (322), by means of the endoscope system, an image of a reference marker projected onto the object’s surface;
- determining, therefrom, a location of the reference marker in the image; and
- mapping (314) the relative phase map of the object’s surface to an absolute phase-to-depth map for the endoscope system by taking into account depth information of the object’s surface at the location of the reference marker in the image, thereby obtaining a three-dimensional image of the object’s surface.
10. The method according to claim 9, further comprising the steps of:
- obtaining (232), by means of the endoscope system, images of the reference marker projected on a reference plate at respective predetermined depths from the endoscope system;
- determining (233) locations of the reference marker in the respective images; and
- relating (234) the determined locations of the reference marker to the predetermined depths, thereby obtaining a location-to-depth map for the endoscope system.
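The location-to-depth map of claim 10 could, for example, be realised with a simple polynomial fit; the polynomial model, the use of only the marker's x-coordinate, and the function names below are assumptions made for this sketch rather than features of the application.

    import numpy as np

    def location_to_depth_map(marker_locations, plate_depths, degree=2):
        """Relate detected marker image locations to the known calibration depths.

        marker_locations : (x, y) pixel coordinates of the marker, one per calibration image
        plate_depths     : known depths of the reference plate for those images (e.g. in mm)
        """
        x = np.asarray(marker_locations, dtype=float)[:, 0]        # marker assumed to shift mainly along one axis
        coeffs = np.polyfit(x, np.asarray(plate_depths, dtype=float), degree)
        return np.poly1d(coeffs)                                    # callable map: depth = f(x_location)

A subsequent call such as location_to_depth_map(locations, depths)(x_detected) would then yield the depth of the object's surface at the detected marker location, which is the depth information used in claim 11 and in the mapping step of claim 9.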
11. The method according to claim 10, wherein the step of taking into account depth information of the object’s surface includes mapping (314) the location of the reference marker to the location-to-depth map for the endoscope system, thereby obtaining depth information of the object’s surface at the reference location.
12. The method according to any of the preceding claims 9 - 11, further comprising the steps of:
- deriving, from an absolute phase-to-depth map for the endoscope system, absolute phase values for the respective predetermined depths; and
- relating the determined absolute phase values for the respective predetermined depths to the respective determined locations, thereby obtaining a location-to-absolute phase map.
13. The method according to claim 12, wherein the step of taking into account depth information of the object’s surface includes mapping the location of the reference marker to the location-to-absolute phase map for the endoscope system, thereby obtaining an absolute phase at the location of the reference marker.
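Finally, a rough, non-limiting sketch of how claims 9 and 11 to 13 could fit together: the relative phase map is offset so that its value at the marker location equals the absolute phase obtained from the location-to-absolute-phase map, after which a phase-to-depth calibration converts it to a depth image. The callables passed in below are hypothetical placeholders for the calibrations described in claims 10 and 12, so no particular (for example linear) phase-to-depth relation is assumed.

    import numpy as np

    def absolute_depth_map(rel_phase, marker_xy, location_to_abs_phase, phase_to_depth):
        """Anchor a relative phase map to the absolute phase at the marker and convert it to depth.

        rel_phase             : 2-D relative (unwrapped) phase map with an unknown offset
        marker_xy             : (x, y) pixel location of the detected reference marker
        location_to_abs_phase : callable returning the absolute phase for a marker x-location
        phase_to_depth        : callable mapping absolute phase values to depth values
        """
        x, y = marker_xy
        offset = location_to_abs_phase(x) - rel_phase[int(round(y)), int(round(x))]
        abs_phase = rel_phase + offset        # phase map now anchored at the marker location
        return phase_to_depth(abs_phase)      # per-pixel depth: the three-dimensional frontal image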
14. A computer program product comprising computer-executable instructions for causing a computer to perform the method according to any of claims 9 to 13.
15. A computer readable storage medium comprising computer-executable instructions for performing the method according to any of claims 9 to 13 when the program is run on a computer.
PCT/EP2022/059413 2021-04-12 2022-04-08 An endoscope system and a method thereof WO2022218851A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP21167806.5 2021-04-12
EP21167806 2021-04-12

Publications (1)

Publication Number Publication Date
WO2022218851A1 true WO2022218851A1 (en) 2022-10-20

Family

ID=75477914

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/059413 WO2022218851A1 (en) 2021-04-12 2022-04-08 An endoscope system and a method thereof

Country Status (1)

Country Link
WO (1) WO2022218851A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6503195B1 (en) * 1999-05-24 2003-01-07 University Of North Carolina At Chapel Hill Methods and systems for real-time structured light depth extraction and endoscope using real-time structured light depth extraction
JP2013240590A (en) * 2012-04-26 2013-12-05 Yamaguchi Univ Three-dimensional shape acquisition device from stereoscopic endoscopic image
US20140296644A1 (en) * 2013-03-31 2014-10-02 Gyrus Acmi, Inc. (D.B.A. Olympus Surgical Technologies America) Panoramic organ imaging
US20180042466A1 (en) * 2016-08-12 2018-02-15 The Johns Hopkins University Compact endoscope design for three-dimensional surgical guidance

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HERRAEZ, M. A. ET AL.: "Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path", APPL OPT, vol. 41, 2002, pages 7437 - 7444
TSAI, R.: "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses", IEEE J ROBOT AUTOM, vol. 3, 1987, pages 323 - 344, XP011217413, DOI: 10.1109/JRA.1987.1087109
ZHANG, Z.: "A flexible new technique for camera calibration", IEEE TRANS PATTERN ANAL MACH INTELL, vol. 22, 2000, pages 1330 - 1334

Similar Documents

Publication Publication Date Title
Hu et al. Microscopic fringe projection profilometry: A review
CN110514143B (en) Stripe projection system calibration method based on reflector
Zuo et al. Micro Fourier transform profilometry (μFTP): 3D shape measurement at 10,000 frames per second
US7256899B1 (en) Wireless methods and systems for three-dimensional non-contact shape sensing
TW488145B (en) Three-dimensional profile scanning system
CN107218928B (en) A kind of complexity multi- piping branch system detection method
CN109556540A (en) A kind of contactless object plane degree detection method based on 3D rendering, computer
Xie et al. Simultaneous calibration of the intrinsic and extrinsic parameters of structured-light sensors
Acosta et al. Laser triangulation for shape acquisition in a 3D scanner plus scan
CN105528770A (en) Projector lens distortion correcting method
CN108895986B (en) Microscopic three-dimensional shape measuring device based on fringe imaging projection
CN109307483A (en) A kind of phase developing method based on structured-light system geometrical constraint
CN110672037A (en) Linear light source grating projection three-dimensional measurement system and method based on phase shift method
CN100523720C (en) Optical non-contact three-dimensional measuring instrument
CN111353997B (en) Real-time three-dimensional surface defect detection method based on fringe projection
TWI388797B (en) Three - dimensional model reconstruction method and its system
Frankowski et al. DLP-based 3D metrology by structured light or projected fringe technology for life sciences and industrial metrology
CN115540775A (en) 3D video extensometer of CCD single-phase machine
CN110146032B (en) Synthetic aperture camera calibration method based on light field distribution
CN106415198B (en) image recording method and coordinate measuring machine for carrying out said method
Cheng et al. A practical micro fringe projection profilometry for 3-D automated optical inspection
CN111272099A (en) Surface structure light precision detection system for three-dimensional surface morphology of aero-engine blade
CN210426454U (en) Surface three-dimensional contour detection device for composite surface product
CN2914032Y (en) Optics non-contact type three-dimensional shaped measuring instrument
WO2022218851A1 (en) An endoscope system and a method thereof

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 22721376
     Country of ref document: EP
     Kind code of ref document: A1
NENP Non-entry into the national phase
     Ref country code: DE
122  Ep: pct application non-entry in european phase
     Ref document number: 22721376
     Country of ref document: EP
     Kind code of ref document: A1