CN101836852B - Medical endoscope containing structured light three-dimensional imaging system - Google Patents

Medical endoscope containing structured light three-dimensional imaging system

Info

Publication number
CN101836852B
Authority
CN
China
Prior art keywords
channel
structured light
illumination
Prior art date
Legal status
Expired - Fee Related
Application number
CN2010101792565A
Other languages
Chinese (zh)
Other versions
CN101836852A (en)
Inventor
王宽全
左旺孟
纪筱鹏
陈彦军
吴秋峰
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN2010101792565A priority Critical patent/CN101836852B/en
Publication of CN101836852A publication Critical patent/CN101836852A/en
Application granted granted Critical
Publication of CN101836852B publication Critical patent/CN101836852B/en

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/06 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes, with illuminating arrangements
    • A61B 1/0605 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes, with illuminating arrangements for spatially modulated illumination

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Endoscopes (AREA)

Abstract

The invention provides a medical endoscope containing a structured light three-dimensional imaging system. It relates to medical endoscopes and solves the problem that conventional three-dimensional observation techniques cannot be directly applied to a medical endoscope, owing to space limitations, in expanded endonasal brain surgery. The endoscope comprises a working scope tube, a computation and processing module, and a structured light channel. The working scope tube contains an imaging channel and an illumination channel; illumination optical fibers are arranged in the illumination channel, and the signal input end of the computation and processing module is connected to the electrical signal output end of the imaging channel. The structured light channel is arranged inside the illumination channel: it receives the light beam output from the tail end of the illumination fiber, converts it into structured light, and outputs the structured light to the outside of the illumination channel. The invention overcomes the defects of the prior art and can be applied to expanded endonasal brain surgery.

Description

Medical endoscope comprising structured light three-dimensional imaging system
Technical Field
The present invention relates to a medical endoscope.
Background
Endoscopic imaging is a typical medical imaging technique and plays an important role in medical diagnosis, surgical navigation and related fields. As the precision demanded of diagnosis and surgical navigation increases, three-dimensional imaging technology is applied more and more widely in medical endoscopy. Take brain tumor surgery as an example: traditional treatments such as craniotomy cut open the patient's skull or facial bone, which often seriously damages the patient's appearance and requires a long postoperative recovery period. Recently, expanded endonasal brain surgery (EEN) has received great attention in the clinical treatment of brain tumors. This surgical approach introduces a miniature endoscope and surgical instruments through the nasal cavity, locates the brain tumor precisely, and resects it. Smooth execution of an EEN operation requires effective guidance from an endoscopic image navigation system, whose performance directly affects both the accuracy of tumor localization and the dexterity of the surgical operation. An endoscopic image navigation system generally includes two subsystems: (1) an endoscopic imaging system, which acquires images of the surgical site in real time; and (2) a navigation system, which maps the position of the surgical device onto pre-operative CT or MRI data. FIG. 1 is a schematic view of a typical rigid endoscope. As shown in FIG. 1, the inner working scope tube has two channels: an imaging channel 0-4, used to image the organ surface, and an illumination channel 0-6, used to output the light beam.
The optical elements of the imaging channel 0-4 comprise, from the distal end, a divergent lens 0-5 for observing a larger field angle, an objective lens 0-3 for focusing, a rod-shaped relay member 0-2 for transferring the image, and a magnifying eyepiece 0-7. The illumination channel contains only illumination fibers 0-1 connected to a light source, and is thus much simpler in construction than the imaging channel 0-4.
The endoscope image navigation systems currently applied in EEN surgery remain far from adequate: because a three-dimensional scene cannot be recovered from the acquired images or video, navigation exhibits deviations of up to about 2 cm. Lacking three-dimensional scene information of the surgical site, the physician often has to probe the tissue surface to gauge depth, or rely on personal experience to make a subjective judgment. An accurate three-dimensional visualization of the scene can therefore markedly improve the surgeon's operating dexterity and the accuracy of brain tumor localization, and has significant technical and medical application value.
In recent years, three-dimensional modeling of endoscopic images has made some progress and yielded preliminary research results. However, because the endoscope used in EEN surgery must be very small to pass through the nasal cavity to the skull base region, conventional stereo vision techniques such as multi-view stereo are often not directly applicable due to space limitations. To date, no feasible technique for three-dimensional structure modeling in EEN has been reported domestically or abroad.
Disclosure of Invention
The invention aims to solve the problem that conventional stereoscopic observation techniques cannot be directly applied to a medical endoscope, owing to space limitations, in expanded endonasal brain surgery, and provides a medical endoscope comprising a structured light three-dimensional imaging system.
The medical endoscope comprising the structured light three-dimensional imaging system includes a working scope tube with an imaging channel and an illumination channel in which an illumination optical fiber is arranged, and further comprises a calculation processing module whose signal input end is connected to the electrical signal output end of the imaging channel;
the structured light channel is arranged in the illumination channel, light beams output by the tail end of the illumination optical fiber are received by the structured light channel, the light beams generate structured light after passing through the structured light channel, and the structured light is output to the outside of the illumination channel by the structured light channel; the structured light channel consists of a focusing lens group, a miniature grid screen and a projection lens group, and light beams output by the tail end of the illumination optical fiber sequentially pass through the focusing lens group, the miniature grid screen and the projection lens group to generate structured light output;
the distance D between the end of the illuminating optical fiber and the miniature grid screen is determined according to a grid resolution standard, and the distance D needs to satisfy the following constraint condition:
$$\frac{H}{D} \ge \frac{L}{D+Z},$$
where H denotes the radius of the structured light channel, L denotes the radius of the fiber illumination area, and Z denotes the distance between the micro-grid screen and the target.
The medical endoscope comprising the structured light three-dimensional imaging system acquires three-dimensional shape information of the surfaces of medical tissues and organs by adding a structured light channel inside the illumination channel of the endoscope and by combining a grid-deformation-based structured light three-dimensional reconstruction method with a geometry-based defocus three-dimensional reconstruction method. It can be applied to expanded endonasal brain surgery; the three-dimensional image is produced by structured light reconstruction, and no extra space is occupied.
Drawings
FIG. 1 is a schematic structural view of a typical rigid body endoscope; FIG. 2 is a schematic view of the construction of a medical endoscope of the present invention; FIG. 3 is a schematic view of the structure of a structured light channel in a medical endoscope of the present invention; fig. 4 is a schematic sectional view of a medical endoscope according to the present invention.
Detailed Description
The first embodiment is as follows: the embodiment is described with reference to fig. 2 and fig. 3, and the medical endoscope including the structured light three-dimensional imaging system of the embodiment includes a working lens tube, the working lens tube includes an imaging channel 1 and an illumination channel 2, an illumination optical fiber 3 is disposed in the illumination channel 2, and the medical endoscope further includes a calculation processing module 4, and a signal input end of the calculation processing module 4 is connected to an electrical signal output end of the imaging channel 1;
the LED lamp further comprises a structured light channel 5, the structured light channel 5 is arranged in the illumination channel 2, light beams output by the tail end of the illumination optical fiber 3 are received by the structured light channel 5, the light beams generate structured light after passing through the structured light channel 5, and the structured light is output to the outside of the illumination channel 2 through the structured light channel 5. The above structure can be seen in fig. 2.
The diameter of each illumination fiber 3 is about 10 micrometers (μm); a thin fiber bundle is used because its end can be approximated as a point source.
As shown in fig. 2, the endoscope includes an imaging channel 1 and an illumination channel 2. The imaging channel 1 comprises a divergent lens for observing a large field angle, an objective lens for focusing, a rod-shaped relay member for transferring the image, and a magnifying eyepiece. The illumination channel 2 houses an illumination optical fiber 3 connected to a light source, and a structured light channel 5 is arranged in the illumination channel 2 to generate structured light. In fig. 2, "Target" denotes the imaged target.
In expanded endonasal brain surgery the endoscope must be as small as possible, since it is inserted through the nasal cavity. As shown in fig. 4, the diameter of the rigid endoscope is about 4 mm, the diameter of the imaging channel 1 about 2.8 mm, and the diameter of the structured light channel 5 about 1 mm.
Referring to fig. 3, the structured light channel 5 may be composed of a focusing lens group 51, a micro grid screen 52 and a projection lens group 53, and a light beam output from the end of the illumination fiber 3 sequentially passes through the focusing lens group 51, the micro grid screen 52 and the projection lens group 53 to generate structured light output.
The micro grid screen 52 is a key element of the design and needs to be solid, highly regular, and of high resolution (in terms of grid-unit dimensions). In this embodiment, the micro grid screen 52 may be made of a carbon polymer material; specifically, a carbon nanotube/epoxy resin composite or a carbon nanotube/polyurethane composite may be used.
Let b1 denote the distance from the micro grid screen 52 to the equivalent optical center of the focusing lens group 51, and b2 the distance from the micro grid screen 52 to the equivalent optical center of the projection lens group 53; the design requires 1:5 < b1:b2 < 1:3.
The light beam output by the structured light channel 5 is structured light, and due to the size and structure limitations of the endoscope, a complex structured light coding mode cannot be generated, and only a single grid mode can be generated.
When the projection lens group 53 can be approximated as a thin lens module, the distance b2 is determined by two factors: the distance Z2 from the equivalent optical center of the projection lens group 53 to the target, and the magnification R of the projection lens group 53, i.e. b2 = Z2/R. The distance Z2 can be estimated from statistical studies of the clinical characteristics of EEN; for endonasal imaging, Z2 is typically 10-20 mm. The magnification R is determined by the light source used.
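As a concrete check of the relations above, a minimal sketch computes b2 from Z2 and R and verifies the design ratio 1:5 < b1:b2 < 1:3 stated earlier. All numeric values and names here are illustrative assumptions, not taken from the patent.

```python
def grid_screen_to_projector_distance(z2_mm: float, r: float, b1_mm: float) -> float:
    """Sketch of the thin-lens placement rule b2 = Z2 / R, plus the stated
    design constraint 1:5 < b1:b2 < 1:3. Names and values are illustrative."""
    b2_mm = z2_mm / r                      # b2 = Z2 / R (thin-lens approximation)
    ratio = b1_mm / b2_mm                  # b1 : b2
    if not (1.0 / 5.0 < ratio < 1.0 / 3.0):
        raise ValueError(f"b1:b2 = {ratio:.3f} lies outside (1:5, 1:3)")
    return b2_mm

# Target at Z2 = 15 mm with magnification R = 5 gives b2 = 3 mm;
# b1 = 0.8 mm then satisfies the ratio constraint (0.8/3 ≈ 0.267).
b2 = grid_screen_to_projector_distance(15.0, 5.0, 0.8)
```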
The focusing lens group 51 may be composed of a first plano-convex lens 511 and a second plano-convex lens 512, and a convex surface of the first plano-convex lens 511 and a convex surface of the second plano-convex lens 512 are oppositely disposed, a flat surface of the first plano-convex lens 511 serves as a light input end of the focusing lens group 51, and a flat surface of the second plano-convex lens 512 serves as a light output end of the focusing lens group 51.
Both the first plano-convex lens 511 and the second plano-convex lens 512 can adopt achromatic doublet lenses.
In the present embodiment, the endoscope uses, as a light source, cold light generated by xenon gas or a metal halide, and an achromatic doublet lens can be used as a focusing lens in consideration of a wide spectrum of the light source, so that chromatic aberration error due to a change in the refractive index of the lens with respect to the wavelength of light can be minimized.
Let d1 denote the focal length of the first plano-convex lens 511 and d2 that of the second plano-convex lens 512; the distance between their optical centers may be d1 + d2.
The projection lens group 53 may be composed of a third plano-convex lens 531 and a fourth plano-convex lens 532, with their convex surfaces facing each other; the flat surface of the third plano-convex lens 531 serves as the light input end of the projection lens group 53, and the flat surface of the fourth plano-convex lens 532 as its light output end.
Let d3 denote the focal length of the third plano-convex lens 531 and d4 that of the fourth plano-convex lens 532; the distance between their optical centers may be d3 + d4.
The distance D between the end of the illumination fiber 3 and the micro-grid screen 52 can be determined according to a grid resolution criterion, said distance D satisfying the following constraints:
$$\frac{H}{D} \ge \frac{L}{D+Z},$$
where H denotes the radius of the structured light tunnel 5, L denotes the radius of the fiber illumination area, and Z denotes the distance between the micro grid screen 52 and the target.
Subject to this constraint, choosing a larger value of D ensures a higher grid resolution.
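Rearranging the constraint H/D ≥ L/(D+Z) gives H(D+Z) ≥ LD, i.e. D(L−H) ≤ HZ; when L > H this bounds D from above by HZ/(L−H). A short sketch (all values are illustrative assumptions) computes the largest admissible D:

```python
def max_grid_screen_distance(h_mm: float, l_mm: float, z_mm: float) -> float:
    """Largest D satisfying H/D >= L/(D+Z).

    H(D+Z) >= L*D  <=>  D*(L-H) <= H*Z, so when L > H the bound is
    D <= H*Z/(L-H); when L <= H any D satisfies the constraint.
    """
    if l_mm <= h_mm:
        return float("inf")                # constraint holds for all D
    return h_mm * z_mm / (l_mm - h_mm)

# Structured light channel radius H = 0.5 mm, fiber illumination radius
# L = 2 mm, screen-to-target distance Z = 15 mm:
d_max = max_grid_screen_distance(0.5, 2.0, 15.0)   # largest admissible D, in mm
```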
And the calculation processing module 4 is configured to perform three-dimensional reconstruction on the image obtained by the imaging channel 1 to obtain a three-dimensional image of the image.
The specific process of the calculation processing module 4 for three-dimensional reconstruction of the image obtained by the imaging channel 1 is as follows:
For a sharp (in-focus) region of the target, the surface three-dimensional shape is reconstructed by extracting grid corner points and applying the grid-deformation-based structured light three-dimensional reconstruction method; for a blurred region of the target, the surface three-dimensional shape is reconstructed with the geometry-based defocus three-dimensional reconstruction method.
The grid-deformation-based structured light three-dimensional reconstruction method proceeds as follows:
For any point P on the object, let its coordinates in the world coordinate system be $(X_w, Y_w, Z_w)$, its coordinates in the camera reference frame be $(X_w^c, Y_w^c, Z_w^c)$, and its coordinates in the projection lens reference frame be $(X_w^p, Y_w^p, Z_w^p)$. The origin of the camera reference frame is the optical center of the CCD lens of the camera in the imaging channel (1); the origin of the projection lens reference frame is the optical center of the projection lens group (53) in the structured light channel (5). The camera image coordinate system $(u_c, v_c)$ is defined at the center of the CCD, and the projection lens image coordinate system $(u_p, v_p)$ at the center of the projection lens group (53). $f_c$ is the focal length of the CCD lens and $f_p$ the focal length of the projection lens group (53).
The world coordinates $(X_w, Y_w, Z_w)$ of the spatial point P and its camera reference coordinates $(X_w^c, Y_w^c, Z_w^c)$ are related by

$$\begin{bmatrix} X_w^c \\ Y_w^c \\ Z_w^c \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + \begin{bmatrix} t_1 \\ t_2 \\ t_3 \end{bmatrix}$$
Consider the skew distortion of the image: skew means that the X and Y axes of the image are not orthogonal. Although they are orthogonal in most cases, they may fail to be when the optical axis and the imaging plane are not exactly perpendicular. Assuming the skew angle between the X and Y axes is α, we obtain:
$$\begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = \begin{bmatrix} f_{c1} & f_{c1}\tan\alpha & u_0 \\ 0 & f_{c2} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

where $f_{c1}$ is the focal length of the CCD lens in the u direction and $f_{c2}$ its focal length in the v direction;
Thus:

$$Z_w^c \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = \begin{bmatrix} f_{c1} & f_{c1}\tan\alpha & u_0 \\ 0 & f_{c2} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$
Combining the camera's intrinsic and extrinsic parameter matrices into a single matrix $[a_{ij}^c]$ and writing the scale factor as $S_c = Z_w^c$, we obtain:

$$S_c \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = \begin{bmatrix} a_{11}^c & a_{12}^c & a_{13}^c & a_{14}^c \\ a_{21}^c & a_{22}^c & a_{23}^c & a_{24}^c \\ a_{31}^c & a_{32}^c & a_{33}^c & a_{34}^c \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$
Similarly, since the projector can be regarded as the inverse of a camera, we obtain:

$$S_p \begin{bmatrix} u_p \\ v_p \\ 1 \end{bmatrix} = \begin{bmatrix} a_{11}^p & a_{12}^p & a_{13}^p & a_{14}^p \\ a_{21}^p & a_{22}^p & a_{23}^p & a_{24}^p \\ a_{31}^p & a_{32}^p & a_{33}^p & a_{34}^p \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$
elimination of ScAnd SpObtaining:
X w Y w Z w = a 11 c - u c a 31 c a 12 c - u c a 32 c a 13 c - u c a 33 c a 21 c - v c a 31 c a 22 c - v c a 32 c a 23 c - v c a 33 c a 11 p - u p a 31 p a 12 p - u p a 32 p a 13 p - u p a 33 p - 1 u c a 34 c - a 14 c v c a 34 c - a 24 c u p a 34 p - a 14 p
after the image obtained by the camera is decoded, each code value can be mapped to the corresponding position of the code pattern projected by the projection lens group (53), namely, a corresponding relation exists between the code values:
φ(uc,vc)=φ(up),
the concrete form of the above formula is determined by the adopted coding mode, and different coding modes correspond to different concrete forms;
for a calibrated structured light system, the internal and external parameters of the camera and projector are known. If matching (coresponsondence) of the image point of the spatial point on the camera and the projection point on the projector, i.e. determining the Correspondence, can be achieved, the coordinates of the spatial point P can be obtained, thereby achieving three-dimensional reconstruction. Due to the size and structure limitations of the endoscope, a complex structured light coding mode cannot be generated, only a single grid mode can be generated, and the three-dimensional shape of the surface can be reconstructed by extracting grid angular points and utilizing a triangulation method.
The geometry-based defocus three-dimensional reconstruction method proceeds as follows:
Step one: randomly generate T equifocal-plane images $r_j$; for each, record the intensity distribution $I_{1,j}$ of its defocused image at object distance z0 and the intensity distribution $I_{2,j}$ at object distance z1, where j = 1, 2, ..., T, 10 mm ≤ z0 ≤ 20 mm, and 10 mm ≤ z1 ≤ 20 mm;
Step two: construct a training sample set based on $\{(I_{1,j}, I_{2,j}) \mid j = 1, 2, \ldots, T\}$, and denote each image pair $I_j = (I_{1,j}, I_{2,j})$;
Step three: according to the minimum mean deviation principle,

$$\hat{S}, \hat{r} = \arg\min \| I_j - H_S r_j \|^2,$$

where $\hat{r}$ denotes the restored equifocal-plane image, $\hat{S}$ the estimate of the depth information of the image, and $H_S$ the linear defocus transformation operator corresponding to depth S;
Step four: for each depth level S, solve for the linear operator $H_S^\perp$ that minimizes $\| H_S^\perp I_S \|^2$, where $I_S$ denotes a defocused image pair of depth S;
Step five: when imaging with the endoscope, adjust the camera to obtain two images $I = (I_1, I_2)$; using the operators $H_S^\perp$ obtained in step four, the depth information of the image is obtained from

$$\hat{S} = \arg\min_S \| H_S^\perp I \|^2,$$

thereby realizing three-dimensional reconstruction of the image.
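Steps four and five can be sketched with a simpler stand-in for the learned operators: instead of the large-margin, Stiefel-manifold optimization detailed below, this illustration builds each $H_S^\perp$ as the orthogonal projector onto the complement of the span of training defocused pairs at depth S (which automatically satisfies the idempotence and symmetry constraints), then estimates depth by minimizing the residual. All data here are synthetic and the construction is an assumption, not the patent's learning procedure:

```python
import numpy as np

def null_space_projector(a: np.ndarray) -> np.ndarray:
    """H = I - A A^+ : orthogonal projector onto the complement of span(A).
    It satisfies H @ H = H and H.T = H, the constraints stated for H_S."""
    return np.eye(a.shape[0]) - a @ np.linalg.pinv(a)

def estimate_depth(i_pair: np.ndarray, projectors: dict) -> float:
    """Step five, sketched: pick the depth whose operator best annihilates
    the stacked defocused image pair."""
    return min(projectors, key=lambda s: float(np.linalg.norm(projectors[s] @ i_pair)))

# Synthetic training data: for each depth level, a few stacked defocused
# pairs spanning that depth's subspace (illustrative stand-in for I_S).
rng = np.random.default_rng(0)
train = {s: rng.standard_normal((8, 3)) for s in (10.0, 15.0, 20.0)}
projectors = {s: null_space_projector(a) for s, a in train.items()}

# A query pair lying in the depth-15 subspace is assigned depth 15.
query = train[15.0] @ np.array([0.3, -1.2, 0.7])
depth = estimate_depth(query, projectors)
```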
The specific process of step four is as follows:
For each depth level S, solve for the linear operator $H_S^\perp$ that minimizes $\| H_S^\perp I_S \|^2$, where $I_S$ denotes a defocused image pair of depth S.
A large-margin learning problem is constructed to learn the linear operator:

$$\min_{H_S^\perp} \; (1-\mu) \sum_i \| H_S^\perp I_i \|^2 + \mu \sum_{i,l} (1 - y_{i,l}) \left[ 1 + \| H_S^\perp I_i \|^2 - \| H_S^\perp I_l \|^2 \right],$$

subject to $(H_S^\perp)^2 = H_S^\perp$ and $(H_S^\perp)^t = H_S^\perp$. If the depth corresponding to $I_i$ is S, then $y_{i,l} = 1$; otherwise $y_{i,l} = 0$. Here μ = 0.5 is a trade-off parameter.
The linear operator is then solved by gradient descent on the Stiefel manifold.
The working principle of the invention is as follows:
a structured light channel in the endoscope emits light in a specific mode, after the light is projected to the surface of an organ, a camera in an imaging channel captures an image of the surface of the organ, and then a calculation processing module extracts the three-dimensional shape of the organ by utilizing a structured light three-dimensional reconstruction method through analyzing deformation information of the light in the image.
The medical endoscope comprising the structured light three-dimensional imaging system adds a pipe inside the illumination channel of a rigid medical endoscope, with a focusing lens group, a miniature grid screen and a projection lens group arranged in the pipe. Combined with the endoscope's optical fiber bundle, this realizes a structured light generating system; the grid-deformation-based structured light three-dimensional reconstruction method and the geometry-based defocus three-dimensional reconstruction method are then used together to acquire three-dimensional shape information of the surfaces of medical tissues and organs.

Claims (1)

1. A medical endoscope comprising a structured light three-dimensional imaging system, the endoscope comprising a working scope tube, the working scope tube comprising an imaging channel (1) and an illumination channel (2), an illumination optical fiber (3) being arranged in the illumination channel (2); the medical endoscope further comprising a calculation processing module (4), the signal input end of the calculation processing module (4) being connected with the electrical signal output end of the imaging channel (1); the medical endoscope also comprising a structured light channel (5), the structured light channel (5) being arranged in the illumination channel (2), a light beam output by the tail end of the illumination optical fiber (3) being received by the structured light channel (5), the light beam generating structured light after passing through the structured light channel (5), and the structured light being output to the outside of the illumination channel (2) by the structured light channel (5); the structured light channel (5) consisting of a focusing lens group (51), a miniature grid screen (52) and a projection lens group (53), the light beam output by the tail end of the illumination optical fiber (3) sequentially passing through the focusing lens group (51), the miniature grid screen (52) and the projection lens group (53) to generate the structured light output;
characterized in that the distance D between the tail end of the illumination optical fiber (3) and the miniature grid screen (52) is determined according to a grid resolution criterion, said distance D satisfying the following constraint:

$$\frac{H}{D} \;\ge\; \frac{L}{D+Z},$$

where H denotes the radius of the structured light channel (5), L denotes the radius of the fiber illumination area, and Z denotes the distance between the miniature grid screen (52) and the target.
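One reading of the constraint is that the light cone passing through the channel aperture of radius H at distance D must still cover the illumination area of radius L at the target distance Z beyond the grid screen. Rearranging H(D+Z) ≥ LD gives D(L − H) ≤ HZ, so for L > H the bound is D ≤ HZ/(L − H), while for L ≤ H any D > 0 satisfies it. A small sketch of this check, with all parameter names illustrative rather than taken from the patent:

```python
def max_grid_distance(H: float, L: float, Z: float) -> float:
    """Largest D satisfying H/D >= L/(D+Z).

    Rearranged: H*(D+Z) >= L*D  =>  D*(L-H) <= H*Z,
    so for L > H the bound is D <= H*Z/(L-H);
    for L <= H the inequality holds for any D > 0.
    """
    if L <= H:
        return float("inf")
    return H * Z / (L - H)


def satisfies_criterion(H: float, L: float, Z: float, D: float) -> bool:
    """Direct check of the claim's grid-resolution constraint."""
    return H / D >= L / (D + Z)
```

For example, with H = 1 mm, L = 3 mm and Z = 10 mm the bound is D ≤ 5 mm: at D = 5 the inequality holds with equality, while D = 6 violates it.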
CN2010101792565A 2010-05-21 2010-05-21 Medical endoscope containing structured light three-dimensional imaging system Expired - Fee Related CN101836852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101792565A CN101836852B (en) 2010-05-21 2010-05-21 Medical endoscope containing structured light three-dimensional imaging system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201110261463.XA Division CN102283626B (en) 2010-05-21 2010-05-21 Medical endoscope containing structured light three-dimensional imaging system

Publications (2)

Publication Number Publication Date
CN101836852A CN101836852A (en) 2010-09-22
CN101836852B true CN101836852B (en) 2012-07-18

Family

ID=42740715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101792565A Expired - Fee Related CN101836852B (en) 2010-05-21 2010-05-21 Medical endoscope containing structured light three-dimensional imaging system

Country Status (1)

Country Link
CN (1) CN101836852B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103765159B (en) * 2011-09-02 2017-08-29 皇家飞利浦有限公司 The fast and dense point cloud imaging drawn using probability voxel
CN103513330B (en) * 2012-06-28 2016-04-27 耿征 Miniature three-dimensional imaging device and 3-D data collection method
US10512508B2 (en) 2015-06-15 2019-12-24 The University Of British Columbia Imagery system
CN105996961B (en) * 2016-04-27 2018-05-11 安翰光电技术(武汉)有限公司 3D three-dimensional imagings capsule endoscope system and method based on structure light
CN108388070A (en) * 2017-02-03 2018-08-10 深圳奥比中光科技有限公司 Fibre-optical projector and apply its depth camera
CN110613510B (en) * 2018-06-19 2020-07-21 清华大学 Self-projection endoscope device
CN110426837B (en) * 2019-07-19 2024-08-02 青岛智能产业技术研究院 Multi-eye three-dimensional endoscopic imaging system based on single lens
CN112577458B (en) * 2019-09-27 2024-02-02 沈阳华慧高新技术有限公司 Three-dimensional scanning endoscope device, calibration method and use method thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6503195B1 (en) * 1999-05-24 2003-01-07 University Of North Carolina At Chapel Hill Methods and systems for real-time structured light depth extraction and endoscope using real-time structured light depth extraction
CN101264002A (en) * 2008-05-06 2008-09-17 中国科学院上海光学精密机械研究所 Three-dimensional endoscopic measurement device and method based on grating projection
CN101305899A (en) * 2008-07-09 2008-11-19 中国科学院上海光学精密机械研究所 Three-dimensional endoscopic measurement device and method based on amplitude type transmission grating projection

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003253626A1 (en) * 2002-06-07 2003-12-22 University Of North Carolina At Chapel Hill Methods and systems for laser based real-time structured light depth extraction

Also Published As

Publication number Publication date
CN101836852A (en) 2010-09-22

Similar Documents

Publication Publication Date Title
CN101836852B (en) Medical endoscope containing structured light three-dimensional imaging system
JP6257728B2 (en) Surgical support system, operating method of surgical support system, information processing program, and information processing apparatus
CN106236006B (en) 3D optical molecular image laparoscope imaging systems
CN109222865B (en) Multimode imaging endoscope system
EP2829218B1 (en) Image completion system for in-image cutoff region, image processing device, and program therefor
CN109561810B (en) Endoscopic apparatus and method for endoscopy
CN101797182A (en) Nasal endoscope minimally invasive operation navigating system based on augmented reality technique
Noonan et al. A stereoscopic fibroscope for camera motion and 3D depth recovery during minimally invasive surgery
CN102283626B (en) Medical endoscope containing structured light three-dimensional imaging system
CN111588464A (en) Operation navigation method and system
CN209172253U (en) A kind of multi-modality imaging endoscopic system
CN110169821A (en) A kind of image processing method, apparatus and system
CN110141363B (en) Spine multi-stage registration system based on structured light scanning
CN114300095A (en) Image processing apparatus, image processing method, image processing device, image processing apparatus, and storage medium
Furukawa et al. Calibration of a 3d endoscopic system based on active stereo method for shape measurement of biological tissues and specimen
CN109771052B (en) Three-dimensional image establishing method and system based on multi-view imaging and multi-polarization state imaging
CN107485447B (en) Device and method for navigating pose of surgical instrument for knee cartilage grafting
Caccianiga et al. Dense 3d reconstruction through lidar: A comparative study on ex-vivo porcine tissue
CN112741689B (en) Method and system for realizing navigation by using optical scanning component
Ben-Hamadou et al. Construction of extended 3D field of views of the internal bladder wall surface: A proof of concept
CN117323002A (en) Neural endoscopic surgery visualization system based on mixed reality technology
US11310481B2 (en) Imaging device, system, method and program for converting a first image into a plurality of second images
CN110623626A (en) Two-dimensional-three-dimensional imaging converter for two-dimensional laparoscope
CN110623625A (en) Three-dimensional imaging converter for two-dimensional laparoscope
Caccianiga et al. Dense 3D reconstruction through lidar: A new perspective on computer-integrated surgery

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120718

Termination date: 20130521