CN111685711B - Medical endoscope three-dimensional imaging system based on 3D camera - Google Patents


Info

Publication number
CN111685711B
CN111685711B CN202010449010.9A
Authority
CN
China
Prior art keywords
infrared
image
dimensional
light
visible light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010449010.9A
Other languages
Chinese (zh)
Other versions
CN111685711A (en)
Inventor
陈晓禾
洪凯程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Institute of Biomedical Engineering and Technology of CAS
Original Assignee
Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Institute of Biomedical Engineering and Technology of CAS filed Critical Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority to CN202010449010.9A priority Critical patent/CN111685711B/en
Publication of CN111685711A publication Critical patent/CN111685711A/en
Application granted granted Critical
Publication of CN111685711B publication Critical patent/CN111685711B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00163 Optical arrangements
    • A61B1/00193 Optical arrangements adapted for stereoscopic vision
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00064 Constructional details of the endoscope body
    • A61B1/00071 Insertion part of the endoscope body
    • A61B1/0008 Insertion part of the endoscope body characterised by distal tip features
    • A61B1/00096 Optical elements
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00163 Optical arrangements
    • A61B1/00186 Optical arrangements with imaging filters
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00163 Optical arrangements
    • A61B1/00188 Optical arrangements with focusing or zooming features
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/05 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances characterised by the image sensor, e.g. camera, being in the distal end portion
    • A61B1/051 Details of CCD assembly

Abstract

The invention discloses a medical endoscope three-dimensional imaging system based on a 3D camera, comprising an infrared light source module, a white light source module, an infrared imaging module, a visible light imaging module and an image processing module. A two-dimensional image is acquired through the visible light imaging module, and the depth information of the measured object is obtained through the infrared imaging module; the image processing module fuses the two-dimensional image with the depth information and reconstructs them to obtain a three-dimensional image of the measured object. According to the invention, two-dimensional image information of a lesion is obtained through the visible light unit and depth information through the infrared light unit; after information fusion by the image processing module, a three-dimensional image of the lesion can be generated and displayed on the upper computer. The three-dimensional image carries more comprehensive information than a traditional two-dimensional image, allows the depth details of the lesion to be observed more accurately, assists the doctor in observing blood vessels, tissues and lesion structures, and improves the success rate of diagnosis and treatment.

Description

Medical endoscope three-dimensional imaging system based on 3D camera
Technical Field
The invention relates to the technical field of optical imaging, in particular to a medical endoscope three-dimensional imaging system based on a 3D camera.
Background
A medical endoscope uses a tiny CCD/CMOS sensor at its front end as a probe that enters the body through a flexible tube to acquire images; it assists the doctor in observing lesions during surgery and diagnosis, is convenient to operate, provides the most intuitive image data, and magnifies local details. Existing endoscope imaging systems mainly use two-dimensional image endoscopes, generating high-definition images from a single light source and a single sensor; they meet the basic requirements of diagnosis and treatment and are miniaturized and modular. In surgery, however, lesions with large volumes and deep cavities are common; two-dimensional observation is then limited, and more depth information is needed to generate a three-dimensional image that restores image details. A reliable solution for obtaining three-dimensional images of lesions is currently lacking.
Disclosure of Invention
Aiming at the above deficiencies in the prior art, the technical problem to be solved by the present invention is to provide a medical endoscope three-dimensional imaging system based on a 3D camera.
In order to solve the above technical problem, the invention adopts the following technical scheme: a 3D camera-based medical endoscope three-dimensional imaging system, comprising an infrared light source module, a white light source module, an infrared imaging module, a visible light imaging module and an image processing module;
the white light source module emits visible light to irradiate the object to be measured, and the visible light imaging module acquires a two-dimensional image of the object to be measured;
the infrared light source module emits infrared structural light to irradiate the object to be measured, and infrared light reflected by the object to be measured is received by the infrared imaging module to obtain depth information of the object to be measured;
and the image processing module performs information fusion on the obtained two-dimensional image and the depth information of the measured object, and reconstructs the two-dimensional image and the depth information to obtain a three-dimensional image of the measured object.
Preferably, the device further comprises an upper computer for displaying the obtained three-dimensional image.
Preferably, the infrared light source module comprises an infrared light emitter, a light beam shaping unit, a light beam diffraction unit and a light beam projection unit;
the light beam shaping unit comprises a beam expanding element and a collimating element which are sequentially arranged along an incident light path, the light beam diffraction unit comprises a diffraction element, and the light beam projection unit comprises a projection lens group.
Preferably, the infrared light emitter generates a single-path infrared laser beam; the beam is expanded by the beam expanding element and enters the collimating element for collimation, and the collimated beam is diffracted by the diffraction element and finally passes through the projection lens group, which outputs modulated infrared structured light.
Preferably, the infrared light source module further includes a first power supply, a first driving circuit connected to the infrared light emitter, and a first controller connected to the first driving circuit.
Preferably, the white light source module includes a white light emitter, a second power supply, a second driving circuit connected to the white light emitter, a lens group connected to the white light emitter, a second controller connected to the second driving circuit, and a temperature detection sensor and a heat sink both connected to the second controller.
Preferably, the infrared imaging module comprises a lens group, a near-infrared narrow-band filter and an infrared CCD image detector;
infrared light reflected by the object to be measured is focused by the lens group, eliminating aberration, field curvature and distortion; the near-infrared narrowband filter then removes the visible component, and the remaining infrared light falls on the infrared CCD image detector, producing infrared image data from which the depth information of the measured object is obtained.
Preferably, the visible light imaging module comprises an optical lens, an optical compensation unit, a broad spectrum filter, a visible light CMOS image detector, a detection sensor, a coil and a third controller;
the optical compensation unit comprises a zoom ring, a focusing ring and an aperture; the detection sensor comprises a position sensor and a horizontal/vertical gyroscope sensor; the coil comprises an electromagnetic coil and a driving coil, where the electromagnetic coil controls the rotation of the focusing ring for automatic focusing, and the driving coil drives the rotation of the zoom ring and the dilation and contraction of the aperture to adjust the focal length, field angle and light intake.
Preferably, the work flow of the visible light imaging module comprises:
1) Optical imaging: the visible light reflected by the object to be measured is first processed by the optical lens, eliminating aberration, field curvature and distortion, and forming a primary image;
2) Zooming adjustment: the third controller collects position data of the position sensor and calculates the position data, then controls the driving coil, rotates the zooming ring, adjusts the focal length and the field angle of a lens and completes the adaptation of the current distance of the measured object;
3) Automatic focusing: the third controller completes the calculation of the image distance according to the focal length after zooming adjustment, reads the data of the horizontal/vertical gyroscope sensor at the same time, judges whether horizontal/vertical deviation exists or not, and compensates if the horizontal/vertical deviation exists; driving the electromagnetic coil to control the focusing ring to rotate so as to carry out automatic focusing;
4) Exposure adjustment: the third controller reads the current exposure parameters of the visible light CMOS image detector, evaluates the current light inlet quantity, controls the driving coil to adjust the size of the aperture and adjusts the exposure quantity to a proper range;
5) Filtering by using an optical filter: the light passing through the optical lens passes through the broad spectrum filter again to filter out the non-visible light part;
6) Visible light image reception: and the visible light passing through the wide-spectrum filter is received by the visible light CMOS image detector to obtain an RGB two-dimensional color image.
Preferably, the method for realizing three-dimensional image reconstruction by information fusion by the image processing module comprises the following steps:
1) Two-dimensional image acquisition: the white light source module works to irradiate a measured object, and the visible light imaging module acquires a two-dimensional color image of the measured object;
2) Infrared image acquisition: the infrared light source module operates and emits infrared structured light to irradiate the measured object. During acquisition this structured light is laser light of a specific viewing angle and specific duration in a multi-point ranging mode, and the photo-response integration characteristic of the infrared CCD image detector is used for the calculation. Specifically, each pixel in the infrared CCD image detector uses two synchronized trigger switches S1 (active from t1 to t2) and S2 (active from t2 to t2 + dt) to control the periods during which its charge-retention element collects the reflected light, yielding responses c1 and c2; the distance between the measured object and each pixel in the infrared CCD image detector is then:
d = (c / 2) · (t2 − t1) · c2 / (c1 + c2);
where c is the speed of light;
thereby obtaining depth information of the measured object;
3) Median filtering: channel separation is performed on the two-dimensional color image obtained in step 1), and median filtering is applied to the three channels respectively to remove salt-and-pepper noise. Let f(i, j) and g(i, j) be the original image and the filtered image respectively, with a filtering window of size (2n + 1) × (2n + 1), where n is the filtering radius; then:
g(i, j) = Mid{f(i − k, j − k), k ∈ (−n, n)};
4) Gaussian filtering: Gaussian filtering is applied to the image obtained in step 3) to remove high-frequency noise. Specifically, the convolution kernel is determined by the radius of the filtering window; assuming a pixel point in the image is denoted p(i, j), the Gaussian convolution kernel over the filtering window is expressed as:
G(x, y) = (1 / (2πσ²)) · exp(−((x − i)² + (y − j)²) / (2σ²));
where σ is the standard deviation of the Gaussian function;
5) And (3) calculating three-dimensional coordinates: converting coordinates of different positions of the measured object into three-dimensional coordinates by using the depth information of the measured object obtained in the step 2) to form 3D point cloud data;
6) Three-dimensional image generation: three-dimensional image reconstruction is realized by the Delaunay triangulation method, completing the reconstruction of the two-dimensional image in the depth direction and generating the three-dimensional image after filtering. The Delaunay triangulation method comprises the following steps:
6-1) first, the point cloud data obtained in step 5) are projected onto the XOY plane, and a triangle enclosing all of the projected two-dimensional data points is constructed;
6-2) a point is inserted into the triangle and connected to the 3 vertices of the triangle that contains it, forming 3 new triangles;
6-3) the empty-circumcircle test is performed on the new triangles one by one, and edges that fail the test are removed using the LOP local optimization procedure;
6-4) steps 6-2) and 6-3) are repeated until all points have been inserted;
6-5) calculating the longest side of three sides of each triangle, and removing the triangles with the longest side larger than a preset threshold value;
6-6) outputting a grid map of the surface of the object in the three-dimensional space;
7) The reconstruction of the three-dimensional image is completed and the three-dimensional image is output.
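Step 5) converts depth samples into three-dimensional coordinates. A minimal back-projection sketch under a pinhole-camera assumption follows; the intrinsics fx, fy, cx, cy and the helper name are illustrative, not from the patent:

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (row-major list of rows, metres) into 3D
    camera-frame coordinates with a pinhole model: x = (u - cx) * z / fx,
    y = (v - cy) * z / fy.  The intrinsics are illustrative placeholders."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip missing / invalid depth samples
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Toy 2x2 depth map with the principal point at pixel (0, 0)
cloud = depth_to_point_cloud([[1.0, 2.0], [0.0, 4.0]],
                             fx=500.0, fy=500.0, cx=0.0, cy=0.0)
```

The resulting (x, y, z) triples are the 3D point cloud data that step 6) triangulates.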
The invention has the beneficial effects that:
(1) The optical system of the invention comprises a visible light path and an infrared light path. A two-dimensional image of the lesion is obtained through the visible light path and depth information through the infrared light path; after information fusion by the image processing module, a three-dimensional image of the lesion can be generated and displayed on the upper computer. The three-dimensional image carries more comprehensive information than a traditional two-dimensional image, allows the depth details of the lesion to be observed more accurately, assists the doctor in observing blood vessels, tissues and lesion structures, and improves the success rate of diagnosis and treatment;
(2) The image processing function is powerful and strongly real-time: the system can acquire images in 1080p and lower formats and generate three-dimensional pictures, with the output video frame rate adjustable from 24 fps to 60 fps;
(3) The system has a good imaging effect: a multi-stage feedback loop regulates the light source illumination in real time under closed-loop control, producing pictures of appropriate brightness;
(4) The processing and computing part of the system uses a dedicated image processing chip, giving a small volume and high efficiency; the infrared light source and the white light source are both LED emitters, so the system's power consumption can be reduced to tens of watts and noise is reduced;
(5) The system of the invention has simple structure and convenient use.
Drawings
FIG. 1 is a schematic structural diagram of a three-dimensional imaging system of a medical endoscope based on a 3D camera according to the present invention;
FIG. 2 is a schematic diagram of an infrared light source module according to the present invention;
FIG. 3 is a schematic diagram of an optical path of an infrared light source module according to the present invention;
FIG. 4 is a schematic structural diagram of a white light source module according to the present invention;
FIG. 5 is a schematic diagram of the principle structure of an infrared imaging module of the present invention;
FIG. 6 is a schematic structural diagram of a visible light imaging module according to the present invention;
FIG. 7 is a flowchart of the operation of the image processing module of the present invention;
description of reference numerals:
1-an infrared emitting laser; 2-beam shaping unit; 3-a diffractive element; 4-a projection lens group; 20-a beam expanding element; 21-collimating element.
Detailed Description
The present invention is further described in detail below with reference to examples so that those skilled in the art can practice the invention with reference to the description.
It will be understood that terms such as "having," "including," and "comprising," as used herein, do not preclude the presence or addition of one or more other elements or groups thereof.
As shown in fig. 1, a medical endoscope three-dimensional imaging system based on a 3D camera according to the embodiment is characterized by comprising: the system comprises an infrared light source module, a white light source module, an infrared imaging module, a visible light imaging module and an image processing module;
the white light source module emits visible light to irradiate the object to be measured, and a two-dimensional image of the object to be measured is obtained through the visible light imaging module;
the infrared light source module emits infrared structural light to irradiate a measured object, and infrared light reflected by the measured object is received by the infrared imaging module to obtain depth information of the measured object;
and the image processing module performs information fusion on the obtained two-dimensional image and the depth information of the measured object, and reconstructs the two-dimensional image and the depth information to obtain a three-dimensional image of the measured object.
In a preferred embodiment, the system further comprises an upper computer for displaying the obtained three-dimensional image.
The infrared light source module is used for generating near-infrared structure light with specific wavelength to irradiate the measured object, and the light reflected by the measured object is received by the infrared imaging module to complete space identification. Referring to fig. 2, in one embodiment, the infrared light source module includes an infrared light emitter 1 (which may employ a near-infrared LED laser emitter), a beam shaping unit 2, a beam diffraction unit, and a beam projection unit; the beam shaping unit 2 includes a beam expanding element 20 and a collimating element 21 arranged in sequence along an incident optical path, the beam diffraction unit includes a diffraction element 3, and the beam projection unit includes a projection lens group 4.
The working process of the infrared light source module comprises the following steps:
(1) Optical beam expanding: the infrared emission laser 1 generates single-path infrared laser, completes laser beam expansion through the beam expansion element 20, and enters the collimation element 21 after beam expansion;
(2) Laser collimation: after beam expansion the laser diverges to a certain degree; it is corrected by the collimating element 21, realizing beam-shaping calibration;
(3) Pattern generation: the diffraction element 3 diffracts the calibrated infrared light to generate the modulated light required for structured-light illumination;
(4) Optical projection: the divergence angle of the diffraction-generated speckle pattern is limited, so the projection lens group 4 is used to expand the projection angle of the pattern and output the modulated infrared structured light.
Further, the infrared light source module further comprises a first power supply, a first driving circuit connected with the infrared light emitter, and a first controller connected with the first driving circuit. The first controller realizes the control of the infrared light emitter through the first driving circuit.
In one embodiment, the white light source module includes a white light emitter, a second power source, a second driving circuit connected to the white light emitter, a lens group connected to the white light emitter, a second controller connected to the second driving circuit, and a temperature detection sensor and a heat sink both connected to the second controller. In this embodiment, the white light emitter may be a high-power white LED element; the temperature detection sensor monitors its temperature, and the second controller drives the heat sink in real time accordingly. The second controller controls the white light emitter through the second driving circuit.
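The temperature feedback on the white light emitter might look like the following minimal sketch; the setpoint, ramp band and function name are invented for illustration and are not specified by the patent:

```python
def heat_sink_duty(temp_c, setpoint_c=45.0, band_c=5.0):
    """Illustrative duty cycle for the white-light LED heat sink: 0 at or
    below the setpoint, ramping linearly to 1 over band_c degrees above it.
    The 45 degC setpoint and 5 degC band are assumptions for this sketch,
    not values taken from the patent."""
    if temp_c <= setpoint_c:
        return 0.0
    return min(1.0, (temp_c - setpoint_c) / band_c)
```

A proportional ramp like this avoids the on/off chatter of a simple threshold controller.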
The infrared imaging module is used for receiving infrared light reflected by the object and collecting space information of the object. In one embodiment, the infrared imaging module comprises a lens group, a near infrared narrowband filter and an infrared CCD image detector. The working process of the infrared imaging module comprises the following steps:
(1) Optical imaging: the infrared component of the light reflected by the measured object is focused by the front-end lens group, eliminating aberration, field curvature and distortion, and forming a primary image;
(2) Wavelength selection: the infrared CCD image detector receives only near-infrared light of a specific waveband; the visible component of the received light is filtered out by the near-infrared narrowband filter, and the infrared light is passed to the infrared CCD image detector;
(3) Infrared image reception: the received infrared light is converted into an electrical signal by the dedicated infrared CCD image detector, and the final infrared image is obtained through processing and calculation, yielding the depth information of the measured object.
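The depth recovery from two gated charge responses, described in step 2) of the image processing flow, can be sketched as follows. The ratio form and all constants here are assumptions — it is the standard two-window pulsed time-of-flight relation, used because the patent's own distance equation is reproduced only as an image:

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def tof_distance(c1, c2, pulse_s):
    """Range estimate for a two-shutter pulsed time-of-flight pixel.
    c1 and c2 are the charges integrated in windows S1 (t1..t2) and
    S2 (t2..t2 + dt); pulse_s is the pulse width t2 - t1 in seconds.
    The ratio form d = (c/2) * pulse * c2 / (c1 + c2) is an assumed
    standard relation, not the patent's verbatim equation."""
    return 0.5 * C_LIGHT * pulse_s * c2 / (c1 + c2)

d_near = tof_distance(c1=100.0, c2=0.0, pulse_s=30e-9)  # all charge early: zero range
d_mid = tof_distance(c1=50.0, c2=50.0, pulse_s=30e-9)   # even split: half max range
```

The later the echo arrives, the larger the share of charge falling into the second window, so the ratio c2 / (c1 + c2) grows monotonically with distance.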
The visible light imaging module receives the visible-wavelength component of light from the measured object and generates a planar two-dimensional image; the image resolution requirement is high and must reach high-definition level. In one embodiment, the visible light imaging module comprises an optical lens, an optical compensation unit, a broad spectrum filter, a visible light CMOS image detector, a detection sensor, a coil and a third controller;
the optical compensation unit comprises a zoom ring, a focusing ring and an aperture, the detection sensor comprises a position sensor and a horizontal/vertical gyroscope sensor, the coil comprises an electromagnetic coil and a driving coil, the electromagnetic coil is used for controlling the rotation of the focusing ring to carry out automatic focusing, and the driving coil is used for driving the rotation of the zoom ring and the expansion and contraction of the aperture to adjust the focal length, the angle of view and the light entering amount.
The work flow of the visible light imaging module comprises the following steps:
1) Optical imaging: the visible light reflected by the measured object is first processed by the optical lens, eliminating aberration, field curvature and distortion, and forming a primary image;
2) Zooming adjustment: the third controller collects position data of the position sensor and calculates the position data, then controls the driving coil, appropriately rotates the zoom ring, adjusts the focal length and the field angle of the lens and completes the adaptation to the current distance of the measured object;
3) Automatic focusing: and the third controller completes the calculation of the image distance according to the focal length after zooming adjustment, simultaneously reads the data of the horizontal/vertical gyroscope sensor, judges whether horizontal/vertical deviation exists or not, and compensates if the horizontal/vertical deviation exists: driving an electromagnetic coil to control the focusing ring to rotate so as to carry out automatic focusing;
4) Exposure adjustment: the third controller reads the exposure parameters of the current visible light CMOS image detector, evaluates the current light inlet quantity, controls the driving coil to adjust the size of the aperture and adjusts the exposure quantity to a proper range;
5) Filtering by using an optical filter: the light passing through the optical lens passes through a broad-spectrum filter to filter out the non-visible light part;
6) Visible light image reception: the visible light passing through the broad-spectrum filter is received by a visible light CMOS image detector to generate three-dimensional RGB data, and the three-dimensional RGB data are combined to obtain an RGB two-dimensional high-resolution color image.
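The exposure-adjustment step 4) above can be illustrated with a small sketch. The mid-grey target, deadband and helper names are assumptions; the patent only states that the controller evaluates the light intake and drives the aperture coil:

```python
def frame_mean(frame):
    """Mean pixel level of a row-major greyscale frame."""
    total = sum(sum(row) for row in frame)
    count = sum(len(row) for row in frame)
    return total / count

def aperture_step(mean_level, target=128.0, deadband=16.0):
    """Exposure evaluation as in step 4): compare the frame's mean level
    with a mid-grey target and return +1 (open the aperture), 0 (hold)
    or -1 (close it).  The target and deadband values are invented for
    this sketch; the real controller reads the CMOS detector's exposure
    parameters instead of a raw frame mean."""
    if mean_level > target + deadband:
        return -1
    if mean_level < target - deadband:
        return 1
    return 0
```

Iterating this decision each frame forms the closed-loop brightness control described among the beneficial effects.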
The image processing module receives and processes multi-format images and is built around an image processing chip; the three-dimensional shape of the measured object is constructed by data fusion. The method by which the image processing module realizes three-dimensional image reconstruction through information fusion comprises the following steps:
1) Two-dimensional image acquisition: the image processing module controls the white light source module to start to work to irradiate the measured object, and the visible light imaging module acquires a two-dimensional color image of the measured object and stores the two-dimensional color image into a cache;
2) Infrared image acquisition: the image processing module controls the infrared light source module to start working; it emits infrared structured light to irradiate the measured object. During acquisition this structured light is laser light of a specific field angle and specific duration in a multi-point ranging mode, and the photo-response integration characteristic of the infrared CCD image detector is used for the calculation. Specifically, each pixel in the infrared CCD image detector uses two synchronized trigger switches S1 (active from t1 to t2) and S2 (active from t2 to t2 + dt) to control the periods during which its charge-retention element collects the reflected light, yielding responses c1 and c2; the distance from the measured object to each pixel in the infrared CCD image detector is:
d = (c / 2) · (t2 − t1) · c2 / (c1 + c2);
where c is the speed of light;
thereby obtaining the depth information of the measured object;
3) Median filtering: carrying out channel separation on the two-dimensional color image obtained in the step 1), and carrying out median filtering on three channels respectively to remove salt and pepper noise; let f (i, j) and g (i, j) be the original image and the filtered image respectively, the size of the filtering window is (2n + 1) × (2n + 1), and n is the filtering radius, then:
g(i,j)=Mid{f(i-k,j-k),k∈(-n,n)};
4) Gaussian filtering: Gaussian filtering is applied to the image obtained in step 3) to remove high-frequency noise. Specifically, the convolution kernel is determined by the radius of the filtering window; assuming a pixel point in the image is denoted p(i, j), the Gaussian convolution kernel over the filtering window is expressed as:
G(x, y) = (1 / (2πσ²)) · exp(−((x − i)² + (y − j)²) / (2σ²));
where σ is the standard deviation of the Gaussian function;
5) Calculating three-dimensional coordinates: converting coordinates of different positions of the measured object into three-dimensional coordinates by using the depth information of the measured object obtained in the step 2) to form 3D point cloud data;
6) Three-dimensional image generation: three-dimensional image reconstruction is realized by the Delaunay triangulation method, completing the reconstruction of the two-dimensional image in the depth direction and generating the three-dimensional image after filtering. The Delaunay triangulation method comprises the following steps:
6-1) first, the point cloud data obtained in step 5) are projected onto the XOY plane, and a triangle enclosing all of the projected two-dimensional data points is constructed;
6-2) a point is inserted into the triangle and connected to the 3 vertices of the triangle that contains it, forming 3 new triangles;
6-3) the empty-circumcircle test is performed on the new triangles one by one, and edges that fail the test are removed using the LOP local optimization procedure;
6-4) repeating the steps 6-2) and 6-3) until all the points are inserted;
6-5) calculating the longest edge of three edges of each triangle, and removing the triangles of which the longest edge is greater than a preset threshold value;
6-6) outputting a grid map of the surface of the object in three-dimensional space;
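Steps 6-1) to 6-6) can be approximated with SciPy's Delaunay triangulation, which performs the point insertion and local optimization of steps 6-2) to 6-4) internally; only the longest-edge filter of step 6-5) is written out explicitly. The sample points, the threshold and the scipy dependency are assumptions of this sketch:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_cloud(points, max_edge):
    """Mesh a 3-D point cloud: project onto the XOY plane (step 6-1),
    Delaunay-triangulate the projection (steps 6-2 to 6-4), then drop
    triangles whose longest edge exceeds max_edge (step 6-5)."""
    tri = Delaunay(points[:, :2])
    kept = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        longest = max(np.linalg.norm(a - b),
                      np.linalg.norm(b - c),
                      np.linalg.norm(c - a))
        if longest <= max_edge:
            kept.append(simplex)
    return np.array(kept)  # step 6-6: vertex indices of the surface mesh

pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                [1.1, 1.2, 0.2], [5, 5, 0]], dtype=float)
mesh = triangulate_cloud(pts, max_edge=2.0)  # the outlier at (5, 5) is cut off
```

The long-edge filter removes the spurious triangles that Delaunay triangulation stretches across gaps or toward outlier points, leaving only the plausible surface mesh.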
7) Finishing the reconstruction of the three-dimensional image, outputting the three-dimensional image, and waiting for the next image acquisition.
While embodiments of the invention have been disclosed above, the invention is not limited to the applications listed in the description and the embodiments; it is fully applicable in all fields to which the invention is suited, and further modifications may readily be effected by those skilled in the art. The invention is therefore not limited to the specific details described, provided the general concept defined by the claims and the scope of equivalents is not departed from.

Claims (9)

1. A medical endoscope three-dimensional imaging system based on a 3D camera, characterized by comprising: an infrared light source module, a white light source module, an infrared imaging module, a visible light imaging module and an image processing module;
the white light source module emits visible light to irradiate the object to be measured, and the visible light imaging module acquires a two-dimensional image of the object to be measured;
the infrared light source module emits infrared structural light to irradiate the object to be measured, and infrared light reflected by the object to be measured is received by the infrared imaging module to obtain depth information of the object to be measured;
the image processing module performs information fusion on the obtained two-dimensional image and the depth information of the measured object, and reconstructs the two-dimensional image and the depth information to obtain a three-dimensional image of the measured object;
the method for realizing three-dimensional image reconstruction by information fusion of the image processing module comprises the following steps:
1) Two-dimensional image acquisition: the white light source module is used for irradiating a measured object, and the visible light imaging module is used for acquiring a two-dimensional color image of the measured object;
2) Collecting infrared images: the infrared light source module operates and emits infrared structured light to illuminate the measured object; this infrared structured light is laser light with a specific viewing angle and a specific duration. A multi-point ranging mode is adopted during acquisition, and the photoresponse integration characteristic of the infrared CCD image detector is used for the calculation, specifically: each pixel in the infrared CCD image detector uses two synchronously triggered switches S1 and S2 to control the periods during which the charge-holding element of the pixel collects the reflected light, yielding the responses c1 and c2; the distance between the measured object and each pixel in the infrared CCD image detector is:
d = (c · tp / 2) · c2 / (c1 + c2)

wherein c is the speed of light and tp is the duration of the laser pulse;
thereby obtaining depth information of the measured object;
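For illustration only, and assuming the standard two-shutter pulsed time-of-flight relation d = (c · tp / 2) · c2 / (c1 + c2) (with tp the laser pulse duration), the per-pixel distance calculation might be sketched as:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(c1, c2, pulse_width):
    """Distance from the two gated charge responses c1 and c2.

    The reflected pulse arrives delayed by the fraction c2 / (c1 + c2)
    of the pulse width; the factor 1/2 accounts for the round trip.
    """
    return 0.5 * C * pulse_width * c2 / (c1 + c2)

# Example: a 40 ns pulse where the second shutter collects 1/4 of the charge
d = tof_distance(c1=3.0, c2=1.0, pulse_width=40e-9)  # about 1.5 m
```

Because both shutters integrate the same reflected pulse, the ratio c2 / (c1 + c2) cancels the dependence on surface reflectivity and illumination intensity.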
3) Median filtering: performing channel separation on the two-dimensional color image obtained in the step 1), and performing median filtering on three channels respectively to remove salt and pepper noise; let f (i, j) and g (i, j) be the original image and the filtered image respectively, the size of the filtering window is (2n + 1) × (2n + 1), and n is the filtering radius, then:
g(i,j) = Med{ f(i+k, j+l) | k, l ∈ [−n, n] };
4) Gaussian filtering: performing Gaussian filtering on the image obtained in the step 3) to remove high-frequency noise, specifically: determining a convolution kernel according to the radius of the filtering window, and assuming that a pixel point in an image is represented as p (i, j), expressing a Gaussian convolution kernel in the filtering window as:
G(x, y) = (1 / (2πσ²)) · exp(−((x − i)² + (y − j)²) / (2σ²))

wherein σ is the standard deviation of the Gaussian function;
5) Calculating three-dimensional coordinates: converting coordinates at different positions of the measured object into three-dimensional coordinates by using the depth information of the measured object obtained in step 2) to form 3D point cloud data;
6) Three-dimensional image generation: three-dimensional image reconstruction is realized by a Delaunay triangulation method, which reconstructs the filtered two-dimensional image in the depth direction and generates a three-dimensional image; the Delaunay triangulation method comprises the following steps:
6-1) first projecting the point cloud data obtained in step 5) onto the XOY plane, and constructing an initial triangle for the projected two-dimensional data that encloses all of the data points;
6-2) inserting a point into the triangulation and connecting it to the 3 vertices of the triangle that encloses it, forming 3 new triangles;
6-3) performing the empty-circumcircle test on the new triangles one by one, and using the LOP (Local Optimization Procedure) to remove the edges that fail the test;
6-4) repeating the steps 6-2) and 6-3) until all points have been inserted;
6-5) calculating the longest edge of three edges of each triangle, and removing the triangles of which the longest edge is greater than a preset threshold value;
6-6) outputting a grid map of the surface of the object in the three-dimensional space;
7) Finishing the reconstruction of the three-dimensional image and outputting the three-dimensional image.
2. The 3D camera-based medical endoscopic three-dimensional imaging system according to claim 1, further comprising an upper computer for displaying the obtained three-dimensional image.
3. The 3D camera-based medical endoscope three-dimensional imaging system according to claim 1, characterized in that the infrared light source module comprises an infrared light emitter, a beam shaping unit, a beam diffraction unit and a beam projection unit;
the light beam shaping unit comprises a beam expanding element and a collimating element which are sequentially arranged along an incident light path, the light beam diffraction unit comprises a diffraction element, and the light beam projection unit comprises a projection lens group.
4. The medical endoscope three-dimensional imaging system based on the 3D camera as claimed in claim 3, wherein the infrared light emitter generates a single path of infrared laser, the laser beam is expanded by the beam expanding element and enters the collimating element for collimation, the collimated light beam is diffracted by the diffraction element, and finally the modulated infrared structured light is output through the projection lens group.
5. The 3D camera-based medical endoscope three-dimensional imaging system according to claim 4, characterized in that the infrared light source module further comprises a first power supply, a first driving circuit connected with the infrared light emitter and a first controller connected with the first driving circuit.
6. The 3D-camera-based medical endoscope three-dimensional imaging system according to claim 5, characterized in that the white light source module comprises a white light emitter, a second power supply, a second driving circuit connected with the white light emitter, a lens group connected with the white light emitter, a second controller connected with the second driving circuit, and a temperature detection sensor and a heat sink both connected with the second controller.
7. The 3D camera-based medical endoscope three-dimensional imaging system according to claim 6, characterized in that said infrared imaging module comprises a lens group, a near infrared narrowband filter and an infrared CCD image detector;
infrared light reflected by the measured object is focused by the lens group to eliminate aberration, field curvature and distortion; the near-infrared narrowband filter then removes the visible light component, and the remaining infrared light strikes the infrared CCD image detector, yielding infrared image data and the depth information of the measured object.
8. The 3D camera-based medical endoscope three-dimensional imaging system according to claim 7, characterized in that the visible light imaging module comprises an optical lens, an optical compensation unit, a broad spectrum filter, a visible light CMOS image detector, a detection sensor, a coil and a third controller;
the optical compensation unit comprises a zoom ring, a focusing ring and a diaphragm, the detection sensor comprises a position sensor and a horizontal/vertical gyroscope sensor, the coil comprises an electromagnetic coil and a driving coil, the electromagnetic coil is used for controlling the focusing ring to rotate so as to carry out automatic focusing, and the driving coil is used for driving the zoom ring to rotate and the diaphragm to stretch and retract so as to adjust the focal length, the field angle and the light incoming amount.
9. The 3D camera-based medical endoscope three-dimensional imaging system according to claim 8, characterized in that the workflow of the visible light imaging module comprises:
1) Optical imaging: visible light reflected by the measured object undergoes preliminary processing through the optical lens to eliminate aberration, field curvature and distortion, forming a preliminary image;
2) Zooming adjustment: the third controller collects position data of the position sensor and calculates the position data, then controls the driving coil, rotates the zooming ring, adjusts the focal length and the field angle of a lens and completes adaptation to the current distance of the measured object;
3) Automatic focusing: the third controller completes the calculation of the image distance according to the focal length after zooming adjustment, simultaneously reads the data of the horizontal/vertical gyroscope sensor, judges whether horizontal/vertical deviation exists or not, and compensates if the horizontal/vertical deviation exists: driving the electromagnetic coil to control the focusing ring to rotate so as to carry out automatic focusing;
4) Exposure adjustment: the third controller reads the current exposure parameters of the visible light CMOS image detector, evaluates the current light inlet quantity, controls the driving coil to adjust the size of the aperture and adjusts the exposure quantity to a proper range;
5) Filtering by using an optical filter: the light passing through the optical lens is filtered by the broad spectrum filter to remove non-visible light;
6) Visible light image reception: and the visible light passing through the broad-spectrum filter is received by the visible light CMOS image detector to obtain an RGB two-dimensional color image.
CN202010449010.9A 2020-05-25 2020-05-25 Medical endoscope three-dimensional imaging system based on 3D camera Active CN111685711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010449010.9A CN111685711B (en) 2020-05-25 2020-05-25 Medical endoscope three-dimensional imaging system based on 3D camera


Publications (2)

Publication Number Publication Date
CN111685711A CN111685711A (en) 2020-09-22
CN111685711B true CN111685711B (en) 2023-01-03





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant