CN112294453A - Microsurgery surgical field three-dimensional reconstruction system and method - Google Patents

Microsurgery surgical field three-dimensional reconstruction system and method

Info

Publication number
CN112294453A
CN112294453A (application CN202011084952.8A; granted publication CN112294453B)
Authority
CN
China
Prior art keywords
photosensitive element
infrared
dimensional reconstruction
image
point cloud
Prior art date
Legal status
Granted
Application number
CN202011084952.8A
Other languages
Chinese (zh)
Other versions
CN112294453B (en)
Inventor
刘威
邵航
唐洁
廖家胜
阮程
黄海亮
Current Assignee
Yangtze Delta Region Institute of Tsinghua University Zhejiang
Original Assignee
Zhejiang Future Technology Institute (jiaxing)
Priority date
Filing date
Publication date
Application filed by Zhejiang Future Technology Institute (jiaxing) filed Critical Zhejiang Future Technology Institute (jiaxing)
Priority to CN202011084952.8A priority Critical patent/CN112294453B/en
Publication of CN112294453A publication Critical patent/CN112294453A/en
Application granted granted Critical
Publication of CN112294453B publication Critical patent/CN112294453B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/20: Surgical microscopes characterised by non-optical aspects
    • A61B90/36: Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37: Surgical systems with images on a monitor during operation
    • A61B2090/364: Correlation of different images or relation of image positions in respect to the body
    • A61B2090/367: Correlation of different images or relation of image positions in respect to the body creating a 3D dataset from 2D images using position information
    • A61B2090/373: Surgical systems with images on a monitor during operation using light, e.g. by using optical scanners


Abstract

A microsurgical field three-dimensional reconstruction system and method are disclosed. A visible light viewpoint acquisition unit collects pattern information of the measured scene; an infrared light viewpoint acquisition unit collects infrared speckle patterns of the measured scene; and a three-dimensional reconstruction calculation control unit controls the shooting of the visible light viewpoint acquisition unit and the infrared light viewpoint acquisition unit and fuses the patterns obtained by the two units to produce a three-dimensional reconstruction result. The technical scheme introduces multi-viewpoint joint optimization and infrared-speckle-based enhancement of object surface texture into high-precision three-dimensional reconstruction. Through the arrangement of the infrared photosensitive elements and the speckle projectors, the surface structure of the operative field can be acquired accurately and used as an operative-field prior to optimize the three-dimensional reconstruction model obtained under visible light, so that reconstruction accuracy under the microscope is improved without affecting the main optical path of the microscope.

Description

Microsurgery surgical field three-dimensional reconstruction system and method
Technical Field
The invention relates to the technical field of microscopic stereoscopic imaging, and in particular to a microsurgical field three-dimensional reconstruction system and method.
Background
The microscope is a common auxiliary device in fine surgical operations: by virtue of its magnification, a surgeon can clearly see the fine tissues of the human body in the operative field and treat the patient precisely. In recent years, three-dimensional reconstruction of the operative field area has drawn the attention of researchers in medical imaging. Compared with traditional CT/MRI imaging, vision-based reconstruction captures the color texture of the operative field surface, provides the surgeon with a more intuitive three-dimensional visual experience, and also supports digital measurement of the operative field from the reconstruction result, offering intraoperative guidance. It therefore has great application value.
Existing methods for three-dimensional reconstruction of the operative area fall roughly into two classes. The first is binocular stereo vision, which reconstructs the operative area from the parallax produced by the microscope's dual optical paths and can often reconstruct only the area within a limited viewing angle. Moreover, the scene under a microscope is special compared with other vision applications: under the microscope's illumination source, the operative field contains many specular reflection areas as well as many texture-free areas. These factors often degrade the result of stereo matching and finally make the three-dimensional reconstruction hard to use clinically. The second class is structured light reconstruction, using single-frame or multi-frame structured light. Although its reconstruction accuracy is high, it requires an expensive structured light projector and is too time-consuming for real-time clinical use. In summary, a new technical scheme for three-dimensional reconstruction of the microsurgical field is urgently needed.
Disclosure of Invention
Therefore, the invention provides a microsurgical field three-dimensional reconstruction system and method that achieve multi-viewpoint, high-precision three-dimensional reconstruction of the operative field and solve the failure of three-dimensional reconstruction in specular reflection areas and texture-free areas of the operative area.
In order to achieve the above purpose, the invention provides the following technical scheme: a microsurgical field three-dimensional reconstruction system, comprising:
a visible light viewpoint acquisition unit, used for acquiring pattern information of the measured scene; the visible light viewpoint acquisition unit comprises a first photosensitive element, a first optical zoom body, a second photosensitive element, a second optical zoom body and a main field objective;
the first photosensitive element serves as the first viewpoint in operative field acquisition, receiving photons emitted from the surface of the measured object and presenting an image of the measured object at the first observation angle; the first optical zoom body employs an optical zoom lens group to change the magnification of the measured object on the first photosensitive element;
the second photosensitive element serves as the second viewpoint in operative field acquisition, receiving photons emitted from the surface of the measured object and presenting an image of the measured object at the second observation angle; the second optical zoom body employs an optical zoom lens group to change the magnification of the measured object on the second photosensitive element;
the main field objective is used for determining and changing the microscope working distance formed by the optical paths of the first and second observation angles;
an infrared light viewpoint acquisition unit, used for acquiring infrared speckle patterns of the measured scene; the infrared light viewpoint acquisition unit comprises a first speckle projector, a first infrared optical lens assembly, a third photosensitive element, a second speckle projector, a second infrared optical lens assembly and a fourth photosensitive element;
the first speckle projector is used for projecting laser speckle, which is projected through the first infrared optical lens assembly onto the surface of the measured object to form a first group of infrared speckle spots in a given pattern; after reflection from the surface of the measured object, the first group of infrared speckle spots is imaged on the third photosensitive element through the first infrared optical lens assembly;
the second speckle projector is used for projecting laser speckle, which is projected through the second infrared optical lens assembly onto the surface of the measured object to form a second group of infrared speckle spots in a given pattern; after reflection from the surface of the measured object, the second group of infrared speckle spots is imaged on the fourth photosensitive element through the second infrared optical lens assembly;
a three-dimensional reconstruction calculation control unit, used for controlling the shooting of the visible light viewpoint acquisition unit and the infrared light viewpoint acquisition unit, and for fusing the pattern obtained by the visible light viewpoint acquisition unit with the pattern obtained by the infrared light viewpoint acquisition unit to obtain a three-dimensional reconstruction result.
As a preferred scheme of the microsurgical field three-dimensional reconstruction system, the visible light viewpoint acquisition unit further comprises an illumination light source assembly, and the illumination light source assembly is used for illuminating the measured object.
As a preferred scheme of the microsurgical field three-dimensional reconstruction system, the first speckle projector, the first infrared optical lens assembly and the third photosensitive element are positioned on one side of the main field objective; the second speckle projector, the second infrared optical lens assembly and the fourth photosensitive element are positioned on the other side of the main-field objective lens.
As a preferred scheme of the microsurgical field three-dimensional reconstruction system, the first photosensitive element and the second photosensitive element adopt color photosensitive elements sensitive to visible light; the third photosensitive element and the fourth photosensitive element adopt grayscale photosensitive elements sensitive to infrared light.
As a preferred scheme of the microsurgical field three-dimensional reconstruction system, the three-dimensional reconstruction calculation control unit comprises a synchronous camera and a calculation device; the synchronous camera is respectively connected with the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element; the computing equipment is connected with the synchronous camera and used for processing data obtained by the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element to obtain a final three-dimensional reconstruction result.
The invention also provides a three-dimensional reconstruction method of the microsurgical field, which is used for the three-dimensional reconstruction system of the microsurgical field and comprises the following steps:
step 1, calibrating a first photosensitive element, a second photosensitive element, a third photosensitive element and a fourth photosensitive element under a preset microscope magnification to obtain internal parameters of the first photosensitive element
Figure BDA0002720066770000031
Internal parameter of the second photosensitive element
Figure BDA0002720066770000032
Internal parameter of the third photosensitive element
Figure BDA0002720066770000033
And fourth photosensitive element intrinsic parameter
Figure BDA0002720066770000034
And acquiring external parameters of the second photosensitive element relative to the first photosensitive element
Figure BDA0002720066770000035
External parameter of the third photosensitive element relative to the first photosensitive element
Figure BDA0002720066770000036
And the external parameter of the fourth photosensitive element relative to the first photosensitive element
Figure BDA0002720066770000037
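The patent does not prescribe a calibration procedure; the following sketch shows one conventional way to obtain such intrinsic and extrinsic parameters with OpenCV, assuming a planar checkerboard target imaged by the sensors at the preset magnification (the board geometry, square size and image lists are illustrative assumptions, not values from the patent):

```python
import cv2
import numpy as np

BOARD = (9, 6)    # inner-corner count of the assumed checkerboard
SQUARE = 1.0      # assumed square size, in arbitrary metric units

def find_corners(images):
    # Build the planar object points once; collect per-frame detections.
    objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE
    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
        found, corners = cv2.findChessboardCorners(gray, BOARD)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    return obj_pts, img_pts, size

def calibrate_intrinsics(images):
    # Intrinsic parameters K and distortion coefficients of one element.
    obj_pts, img_pts, size = find_corners(images)
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist, size

def calibrate_extrinsics(images_a, images_b, Ka, da, Kb, db, size):
    # Extrinsics of element b relative to element a (e.g. T21, T31, T41);
    # assumes the board is detected in every frame of both lists so the
    # per-frame correspondences stay aligned.
    obj_pts, pts_a, _ = find_corners(images_a)
    _, pts_b, _ = find_corners(images_b)
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, pts_a, pts_b, Ka, da, Kb, db, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return R, T
```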
Step 2, under a given microscope magnification i, controlling the first, second, third and fourth photosensitive elements through the synchronous camera so that they shoot the measured object simultaneously, and recording the image $I_1^i$ generated by the first photosensitive element, the image $I_2^i$ generated by the second photosensitive element, the image $I_3^i$ generated by the third photosensitive element, and the image $I_4^i$ generated by the fourth photosensitive element.
Step 3, adopting the intrinsic and extrinsic parameters of the first photosensitive element and of the second photosensitive element and applying a stereo rectification algorithm from computer vision to the image pair $(I_1^i, I_2^i)$, so that point pairs with the same features in the first image $I_1^i$ and the second image $I_2^i$ become row-aligned, obtaining the corrected image pair $(\bar{I}_1^i, \bar{I}_2^i)$ and the reprojection matrix $Q_1$ of the corrected first photosensitive element;

adopting the intrinsic and extrinsic parameters of the third photosensitive element and of the fourth photosensitive element and applying the stereo rectification algorithm to the image pair $(I_3^i, I_4^i)$, so that point pairs with the same features in the third image $I_3^i$ and the fourth image $I_4^i$ become row-aligned, obtaining the corrected image pair $(\bar{I}_3^i, \bar{I}_4^i)$ and the reprojection matrix $Q_3$ of the corrected third photosensitive element.
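For reference, a minimal sketch of this rectification step, assuming OpenCV's stereo rectification routines (which perform the row alignment and also return the reprojection matrix $Q$ consumed by step 5):

```python
import cv2

def rectify_pair(img1, img2, K1, d1, K2, d2, R, T):
    # Row-align a calibrated image pair; Q is the reprojection matrix
    # used later for triangulation.
    size = img1.shape[1::-1]  # (width, height)
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    rect1 = cv2.remap(img1, m1x, m1y, cv2.INTER_LINEAR)
    rect2 = cv2.remap(img2, m2x, m2y, cv2.INTER_LINEAR)
    return rect1, rect2, Q
```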
Step 4, for the corrected image pair $(\bar{I}_1^i, \bar{I}_2^i)$ and the corrected image pair $(\bar{I}_3^i, \bar{I}_4^i)$ respectively, obtaining the disparity map $d_{12}$ of the pair $(\bar{I}_1^i, \bar{I}_2^i)$ and the disparity map $d_{34}$ of the pair $(\bar{I}_3^i, \bar{I}_4^i)$ using a dense matching algorithm.
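The dense matching algorithm is left open at this step and narrowed only in the preferred scheme below; as an illustrative stand-in (not the patent's prescribed matcher), a semi-global block matcher can produce the disparity maps on the rectified pairs:

```python
import cv2
import numpy as np

def dense_disparity(rect1_gray, rect2_gray, num_disp=128, block=5):
    # Semi-global matching on a rectified pair; OpenCV returns fixed-point
    # disparities scaled by 16, hence the final division.
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disp,   # must be divisible by 16
        blockSize=block,
        P1=8 * block * block,      # penalties on small and large
        P2=32 * block * block,     # disparity changes (smoothness)
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2)
    return matcher.compute(rect1_gray, rect2_gray).astype(np.float32) / 16.0
```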
Step 5, for the first corrected image $\bar{I}_1^i$ and the second corrected image $\bar{I}_2^i$ of the corrected pair, based on the reprojection matrix $Q_1$ and the disparity map $d_{12}$, using triangulation in computer vision to obtain the spatial coordinates of each point of $\bar{I}_1^i$ in the camera coordinate system of the first photosensitive element, generating the spatial point cloud $P_1$;

for the third corrected image $\bar{I}_3^i$ and the fourth corrected image $\bar{I}_4^i$ of the corrected pair, based on the reprojection matrix $Q_3$ and the disparity map $d_{34}$, using triangulation to obtain the spatial coordinates of each point of $\bar{I}_3^i$ in the camera coordinate system of the third photosensitive element, generating the spatial point cloud $P_2$.
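A compact sketch of this triangulation step, assuming OpenCV: the reprojection matrix from rectification lifts each pixel with its disparity into camera-frame coordinates, and masking out invalid disparities (an assumed, common practice rather than a detail stated in the patent) yields the point cloud:

```python
import cv2
import numpy as np

def disparity_to_cloud(disp, Q, min_disp=0.5):
    # Lift every pixel with a valid disparity into the camera frame of the
    # pair's first element, yielding the spatial point cloud (P1 or P2).
    pts = cv2.reprojectImageTo3D(disp, Q)
    mask = disp > min_disp
    return pts[mask].reshape(-1, 3)
```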
Step 6, adopting the spatial point cloud $P_1$ and the spatial point cloud $P_2$ to eliminate erroneous reconstruction results in texture-free areas, thereby correcting the spatial point cloud $P_1$.
As a preferable scheme of the microsurgical field three-dimensional reconstruction method, the dense matching algorithm in step 4 uses a dense optical flow algorithm or a deep-learning-based stereo matching algorithm, as sketched below.
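A sketch of the dense optical flow variant, using Farnebäck flow as an illustrative choice: on a rectified, row-aligned pair the vertical flow should be near zero and the horizontal component serves as the disparity estimate:

```python
import cv2

def disparity_from_flow(rect1_gray, rect2_gray):
    # Positional args: pyr_scale, levels, winsize, iterations, poly_n,
    # poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(
        rect1_gray, rect2_gray, None, 0.5, 4, 21, 3, 5, 1.1, 0)
    # On a row-aligned pair a point at column x in the first image sits at
    # x - d in the second, so the horizontal flow equals -d.
    return -flow[..., 0]
```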
As a preferable scheme of the microsurgical field three-dimensional reconstruction method, the step 6 comprises the following steps:
Step 6.1, based on the spatial relation between the third photosensitive element and the first photosensitive element, transforming the spatial point cloud $P_2$ from the coordinate system of the third photosensitive element into the coordinate system of the first photosensitive element to form the transformed spatial point cloud $P_2'$;
Step 6.2, rendering the transformed spatial point cloud $P_2'$ by point cloud triangulation in computer vision to obtain the rendered spatial point cloud $\tilde{P}_2$;
Step 6.3, adopting the rendered spatial point cloud $\tilde{P}_2$ to optimize the spatial point cloud $P_1$:

For each point $P_{1t}(X_{1t}, Y_{1t}, Z_{1t})$ in the spatial point cloud $P_1$, obtain the set of neighboring points $N(P_{1t}) = \{P_{1t}^1, \dots, P_{1t}^n\}$, where $n$ denotes the number of neighborhood points and $P_{1t}^j$ is a neighborhood point of $P_{1t}$.

Using least squares, find the fitting plane $Ax + By + Cz + D = 0$ of the neighborhood points, which yields the normal vector $(A, B, C)$ at $P_{1t}$; then, from the point-direction form of a line, obtain the line $l$ through $P_{1t}$ parallel to the normal vector:

$$\frac{x - X_{1t}}{A} = \frac{y - Y_{1t}}{B} = \frac{z - Z_{1t}}{C}$$

Then take the intersection point of the line $l$ with the rendered spatial point cloud $\tilde{P}_2$ as the new coordinates of $P_{1t}$.

Iterating the above process completes the position optimization of the points in the spatial point cloud $P_1$ and yields the optimized spatial point cloud $P_1^*$ under visible light.
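The following sketch illustrates step 6.3 under stated assumptions: neighborhoods come from a k-d tree, the plane $Ax + By + Cz + D = 0$ is fitted by an SVD-based least squares (whose smallest singular vector gives the normal $(A, B, C)$), and the intersection of the line $l$ with the rendered surface is found by a brute-force Möller-Trumbore test over the mesh triangles. The neighborhood size $k$ and all helper names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_normal(neighbors):
    # Least-squares plane through the neighborhood; the normal (A, B, C) is
    # the singular vector of the centered points with smallest singular value.
    centered = neighbors - neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def line_mesh_hit(p, n, verts, tris, eps=1e-9):
    # Nearest intersection of the line p + t*n (t of either sign) with the
    # mesh; brute force here, a BVH or raycasting structure in practice.
    best_t, hit = None, None
    for i0, i1, i2 in tris:
        v0, v1, v2 = verts[i0], verts[i1], verts[i2]
        e1, e2 = v1 - v0, v2 - v0
        h = np.cross(n, e2)
        a = e1 @ h
        if abs(a) < eps:
            continue                      # line parallel to this triangle
        f = 1.0 / a
        s = p - v0
        u = f * (s @ h)
        if u < 0.0 or u > 1.0:
            continue
        q = np.cross(s, e1)
        v = f * (n @ q)
        if v < 0.0 or u + v > 1.0:
            continue
        t = f * (e2 @ q)
        if best_t is None or abs(t) < abs(best_t):
            best_t, hit = t, p + t * n
    return hit

def refine_cloud(P1, mesh_verts, mesh_tris, k=8):
    tree = cKDTree(P1)
    refined = P1.copy()
    for idx, p in enumerate(P1):
        _, nbr = tree.query(p, k=k + 1)   # k neighbors plus the point itself
        normal = fit_normal(P1[nbr])
        hit = line_mesh_hit(p, normal, mesh_verts, mesh_tris)
        if hit is not None:
            refined[idx] = hit            # snap P1t onto the infrared surface
    return refined                        # optimized visible-light cloud P1*
```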
The invention collects pattern information of the measured scene through the visible light viewpoint acquisition unit and infrared speckle patterns of the measured scene through the infrared light viewpoint acquisition unit; the three-dimensional reconstruction calculation control unit controls the shooting of both units and fuses the patterns they obtain into a three-dimensional reconstruction result. The technical scheme introduces multi-viewpoint joint optimization and infrared-speckle-based enhancement of object surface texture into high-precision three-dimensional reconstruction; through the arrangement of the infrared photosensitive elements and the speckle projectors, the surface structure of the operative field is acquired accurately and used as an operative-field prior to optimize the three-dimensional reconstruction model under visible light, improving reconstruction accuracy under the microscope without affecting the main optical path of the microscope.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It should be apparent that the drawings in the following description are merely exemplary, and that other embodiments can be derived from the drawings provided by those of ordinary skill in the art without inventive effort.
FIG. 1 is a schematic diagram of a three-dimensional reconstruction system for a microsurgical field provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a hardware relationship of a three-dimensional microsurgical field reconstruction system provided in an embodiment of the present invention;
fig. 3 is a schematic flow chart of a three-dimensional reconstruction method of a microsurgical field provided in an embodiment of the present invention.
Detailed Description
The present invention is described in terms of particular embodiments; other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure. It is to be understood that the described embodiments are merely exemplary of the invention and are not intended to limit the invention to the particular embodiments disclosed. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1 and 2, there is provided a microsurgical field three-dimensional reconstruction system comprising:
visible light viewpoint acquisition unit 110: used for acquiring pattern information of the measured scene; the visible light viewpoint acquisition unit 110 comprises a first photosensitive element 111, a first optical zoom body 113, a second photosensitive element 112, a second optical zoom body 114, and a main field objective 116;
the first photosensitive element 111 serves as the first viewpoint in operative field acquisition, receiving photons emitted from the surface of the measured object and presenting an image of the measured object at the first observation angle; the first optical zoom body 113 employs an optical zoom lens group to change the magnification of the measured object on the first photosensitive element 111;
the second photosensitive element 112 serves as the second viewpoint in operative field acquisition, receiving photons emitted from the surface of the measured object and presenting an image of the measured object at the second observation angle; the second optical zoom body 114 employs an optical zoom lens group to change the magnification of the measured object on the second photosensitive element 112;
the main field objective 116 is used for determining and changing the microscope working distance formed by the optical paths of the first and second observation angles;
infrared light viewpoint acquisition unit 120: used for acquiring infrared speckle patterns of the measured scene; the infrared light viewpoint acquisition unit 120 comprises a first speckle projector 123, a first infrared optical lens assembly 122, a third photosensitive element 121, a second speckle projector 126, a second infrared optical lens assembly 125, and a fourth photosensitive element 124;
the first speckle projector 123 is used for projecting laser speckle, which is projected through the first infrared optical lens assembly 122 onto the surface of the measured object to form a first group of infrared speckle spots in a given pattern; after reflection from the surface of the measured object, the first group of infrared speckle spots is imaged on the third photosensitive element 121 through the first infrared optical lens assembly 122;
the second speckle projector 126 is used for projecting laser speckle, which is projected through the second infrared optical lens assembly 125 onto the surface of the measured object to form a second group of infrared speckle spots in a given pattern; after reflection from the surface of the measured object, the second group of infrared speckle spots is imaged on the fourth photosensitive element 124 through the second infrared optical lens assembly 125;
three-dimensional reconstruction calculation control unit 130: used for controlling the shooting of the visible light viewpoint acquisition unit 110 and the infrared light viewpoint acquisition unit 120, and for fusing the pattern obtained by the visible light viewpoint acquisition unit 110 with the pattern obtained by the infrared light viewpoint acquisition unit 120 to obtain a three-dimensional reconstruction result.
Specifically, the visible light viewpoint collecting unit 110 further includes an illumination light source assembly 115, and the illumination light source assembly 115 is configured to illuminate the object to be measured. The illumination light source assembly 115 provides sufficient illumination for the object to be measured, and ensures the imaging quality of the object to be measured on the first photosensitive element 111 and the second photosensitive element 112.
Specifically, the first photosensitive element 111 serves as the first observation angle in multi-viewpoint acquisition, receiving photons emitted from the surface of the measured object and finally presenting an image of the measured object at the first observation angle; the first optical zoom body 113 is a set of optical zoom lenses that can change the magnification of the measured object on the first photosensitive element 111. The second optical zoom body 114 and the second photosensitive element 112 serve as the second observation angle of the measured object; their function is identical to that of the first observation angle, differing only in the viewing angle on the observed object. The main field objective 116 is used to determine and vary the working distance of the microscope formed by the optical paths of the first and second viewing angles.
Specifically, the first speckle projector 123, the first infrared optical lens assembly 122 and the third photosensitive element 121 are located on one side of the main-field objective 116; the second speckle projector 126, second infrared optical lens assembly 125 and fourth photosensitive element 124 are located on the other side of the main-field objective 116. The first photosensitive element 111 and the second photosensitive element 112 are color photosensitive elements which sense visible light; the third photosensitive element 121 and the fourth photosensitive element 124 adopt a grayscale photosensitive element for infrared light.
The infrared light viewpoint acquisition unit 120 is composed of two infrared light acquisition devices located on either side of the microscope body. Taking one of them as an example, the acquisition device is composed of a third photosensitive element 121, a first speckle projector 123 and a first infrared optical lens assembly 122. The first speckle projector 123 is used to project laser speckle, which is projected onto the object surface through the first infrared optical lens assembly 122 to form infrared speckle spots with a specific pattern. After reflection from the object surface, the speckle spots are imaged on the third photosensitive element 121 through the first infrared optical lens assembly 122.
Specifically, the first infrared optical lens assembly 122 has two functions, on one hand, the speckle is projected onto the surface of the object through the internal spectroscope, and on the other hand, the infrared light reflected by the surface of the object is projected onto the third photosensitive element 121 through the first infrared optical lens assembly 122. The magnification of the first infrared optical lens assembly 122 is comparable to the minimum magnification of the first optical zoom body 113. The third photosensitive element 121, the first photosensitive element 111, and the second photosensitive element 112 are slightly different in image formation manner, the third photosensitive element 121 is a grayscale photosensitive element that is sensitive to infrared light, and the first photosensitive element 111 and the second photosensitive element 112 are color photosensitive elements that are sensitive to visible light.
Specifically, the first photosensitive element 111 and the second photosensitive element 112 differ from the third photosensitive element 121 and the fourth photosensitive element 124 in both principle and function. In principle, the first photosensitive element 111 and the second photosensitive element 112 image by visible light, while the third photosensitive element 121 and the fourth photosensitive element 124 image in the infrared band. In function, since a speckle projector is paired with each of the third photosensitive element 121 and the fourth photosensitive element 124, these elements receive not only the illumination light reflected by the object surface but also the speckles reflected from it. The advantage of this design is that, owing to the fine speckles, detail in the originally texture-free and highlight areas imaged by the third photosensitive element 121 and the fourth photosensitive element 124 is enhanced, so the stereo matching problem is effectively alleviated and the quality of three-dimensional reconstruction under infrared light is improved.
In addition, it should be noted that the light emitted by the first speckle projector 123 and the second speckle projector 126 lies in the infrared band, while the first photosensitive element 111 and the second photosensitive element 112 image visible light and have low quantum efficiency in the infrared band, so the speckles do not appear in the images of the visible light photosensitive elements.
Specifically, the three-dimensional reconstruction calculation control unit 130 includes a synchronous camera 131 and a computing device 132. The synchronous camera 131 is connected to the first photosensitive element 111, the second photosensitive element 112, the third photosensitive element 121 and the fourth photosensitive element 124, and is responsible for controlling the simultaneous shooting of the four photosensitive elements. The computing device 132 is connected to the synchronous camera 131 and processes the data obtained by the four photosensitive elements to obtain the final three-dimensional reconstruction result.
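The patent does not specify how the synchronous camera 131 triggers the sensors; as a purely illustrative software approximation (true synchronization would use a hardware trigger), OpenCV's grab()/retrieve() split can latch frames on all four devices with minimal skew. Device indices are assumptions:

```python
import cv2

CAM_INDICES = [0, 1, 2, 3]  # visible (first/second) and infrared (third/fourth)

def open_cameras(indices):
    caps = [cv2.VideoCapture(i) for i in indices]
    for i, cap in zip(indices, caps):
        if not cap.isOpened():
            raise RuntimeError(f"camera {i} failed to open")
    return caps

def capture_synchronized(caps):
    # grab() latches a frame on each device with minimal delay between
    # calls; retrieve() then decodes the latched frames, approximating a
    # common trigger instant.
    for cap in caps:
        cap.grab()
    frames = []
    for cap in caps:
        ok, frame = cap.retrieve()
        if not ok:
            raise RuntimeError("frame retrieval failed")
        frames.append(frame)
    return frames  # [I1, I2, I3, I4]
```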
Referring to fig. 3, the present invention further provides a three-dimensional reconstruction method of a microsurgical field, which is used for the three-dimensional reconstruction system of the microsurgical field, and comprises the following steps:
s1, calibrating the first light sensing element 111, the second light sensing element 112, the third light sensing element 121 and the fourth light sensing element 124 under the preset microscope magnification to obtain the internal parameters of the first light sensing element 111
Figure BDA0002720066770000091
Internal parameters of the second photosensitive element 112
Figure BDA0002720066770000092
Internal parameter of the third photosensitive element 121
Figure BDA0002720066770000093
And the internal parameters of the fourth photosensitive element 124
Figure BDA0002720066770000094
And obtains the external parameters of the second photosensitive element 112 relative to the first photosensitive element 111
Figure BDA0002720066770000095
External parameters of the third photosensitive element 121 relative to the first photosensitive element 111
Figure BDA0002720066770000096
And the external parameter of the fourth photosensitive element 124 relative to the first photosensitive element 111
Figure BDA0002720066770000097
S2, under a given microscope magnification i, controlling the first photosensitive element 111, the second photosensitive element 112, the third photosensitive element 121 and the fourth photosensitive element 124 through the synchronous camera 131 so that they shoot the measured object simultaneously, and recording the image $I_1^i$ generated by the first photosensitive element 111, the image $I_2^i$ generated by the second photosensitive element 112, the image $I_3^i$ generated by the third photosensitive element 121, and the image $I_4^i$ generated by the fourth photosensitive element 124.
S3, adopting the intrinsic and extrinsic parameters of the first photosensitive element 111 and of the second photosensitive element 112 and applying a stereo rectification algorithm from computer vision to the image pair $(I_1^i, I_2^i)$, so that point pairs with the same features in the first image $I_1^i$ and the second image $I_2^i$ become row-aligned, obtaining the corrected image pair $(\bar{I}_1^i, \bar{I}_2^i)$ and the reprojection matrix $Q_1$ of the corrected first photosensitive element 111;

adopting the intrinsic and extrinsic parameters of the third photosensitive element 121 and of the fourth photosensitive element 124 and applying the stereo rectification algorithm to the image pair $(I_3^i, I_4^i)$, so that point pairs with the same features in the third image $I_3^i$ and the fourth image $I_4^i$ become row-aligned, obtaining the corrected image pair $(\bar{I}_3^i, \bar{I}_4^i)$ and the reprojection matrix $Q_3$ of the corrected third photosensitive element 121.
S4, for the corrected image pair $(\bar{I}_1^i, \bar{I}_2^i)$ and the corrected image pair $(\bar{I}_3^i, \bar{I}_4^i)$ respectively, obtaining the disparity map $d_{12}$ of the pair $(\bar{I}_1^i, \bar{I}_2^i)$ and the disparity map $d_{34}$ of the pair $(\bar{I}_3^i, \bar{I}_4^i)$ using a dense matching algorithm.
S5, for the first corrected image $\bar{I}_1^i$ and the second corrected image $\bar{I}_2^i$ of the corrected pair, based on the reprojection matrix $Q_1$ and the disparity map $d_{12}$, using triangulation in computer vision to obtain the spatial coordinates of each point of $\bar{I}_1^i$ in the camera coordinate system of the first photosensitive element 111, generating the spatial point cloud $P_1$;

for the third corrected image $\bar{I}_3^i$ and the fourth corrected image $\bar{I}_4^i$ of the corrected pair, based on the reprojection matrix $Q_3$ and the disparity map $d_{34}$, using triangulation to obtain the spatial coordinates of each point of $\bar{I}_3^i$ in the camera coordinate system of the third photosensitive element 121, generating the spatial point cloud $P_2$.
S6, adopting the spatial point cloud $P_1$ and the spatial point cloud $P_2$ to eliminate erroneous reconstruction results in texture-free areas, thereby correcting the spatial point cloud $P_1$.
Specifically, in S5, the triangulation method in computer vision obtains the spatial coordinates of each point of the first corrected image $\bar{I}_1^i$ in the camera coordinate system of the first photosensitive element 111 by the formula

$$[X, Y, Z, W]^T = Q_1 \, [x, y, d_{12}(x, y), 1]^T$$

where $(x, y)$ denotes a point of the first corrected image $\bar{I}_1^i$, $d_{12}(x, y)$ denotes the disparity value at $(x, y)$ in the disparity map, and $(X, Y, Z, W)$ are the homogeneous spatial coordinates of $(x, y)$ in the coordinate system of the photosensitive element. In this way, the spatial point cloud $P_1$ corresponding to the image captured by the first photosensitive element 111 is obtained. Similarly, the spatial point cloud $P_2$ is obtained from the stereo image pair formed by the third photosensitive element 121 and the fourth photosensitive element 124.
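The formula transcribes directly into a few lines of numpy, shown here as a sketch; dividing by the homogeneous coordinate $W$ recovers metric coordinates, which is what cv2.reprojectImageTo3D does over the whole disparity map:

```python
import numpy as np

def triangulate_pixel(Q1, x, y, d):
    # Homogeneous reprojection of one pixel (x, y) with disparity d.
    X, Y, Z, W = Q1 @ np.array([x, y, d, 1.0])
    return np.array([X, Y, Z]) / W
```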
Specifically, the dense matching algorithm in S4 uses a dense optical flow algorithm or a deep learning-based stereo matching algorithm.
Specifically, S6 includes:
s6.1, based on the space relation between the third photosensitive element 121 and the first photosensitive element 111, the space point cloud P in the coordinate system of the third photosensitive element 1212Transforming to the coordinate system of the first photosensitive element 111 to form a transformed space point cloud
Figure BDA0002720066770000106
Specifically, for any point (X)p2,Yp2,Zp2)∈P2The space coordinate of the first photosensitive element 111 in the coordinate system is (X)p1,Yp1,Zp1) Wherein the following relationship is satisfied:
Figure BDA0002720066770000107
P2the model under the new coordinate system is a spatial point cloud
Figure BDA0002720066770000108
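As a sketch of S6.1, assuming the extrinsics are available as a rotation R31 and translation t31 (names illustrative), the transformation can be applied to the whole cloud at once in homogeneous form:

```python
import numpy as np

def transform_cloud(P2, R31, t31):
    # Assemble the 4x4 homogeneous transform T31 from R31 and t31.
    T31 = np.eye(4)
    T31[:3, :3] = R31
    T31[:3, 3] = t31.ravel()
    homo = np.hstack([P2, np.ones((len(P2), 1))])   # Nx4 homogeneous points
    return (homo @ T31.T)[:, :3]                    # transformed cloud P2'
```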
S6.2, rendering the spatial point cloud $P_2'$ by point cloud triangulation in computer vision to obtain the rendered spatial point cloud $\tilde{P}_2$.
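One way to realize this point cloud triangulation, assuming the Open3D library is acceptable: ball pivoting meshes the transformed cloud, and the resulting vertices and faces feed the line-intersection refinement sketched earlier. The radius is an assumed tuning parameter:

```python
import numpy as np
import open3d as o3d

def mesh_from_cloud(points, radius=1.0):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.estimate_normals()   # normals are required by ball pivoting
    radii = o3d.utility.DoubleVector([radius, radius * 2])
    mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
        pcd, radii)
    return np.asarray(mesh.vertices), np.asarray(mesh.triangles)
```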
S6.3, adopting the rendered spatial point cloud $\tilde{P}_2$ to optimize the spatial point cloud $P_1$:

For each point $P_{1t}(X_{1t}, Y_{1t}, Z_{1t})$ in the spatial point cloud $P_1$, obtain the set of neighboring points $N(P_{1t}) = \{P_{1t}^1, \dots, P_{1t}^n\}$, where $n$ denotes the number of neighborhood points and $P_{1t}^j$ is a neighborhood point of $P_{1t}$.

Using least squares, find the fitting plane $Ax + By + Cz + D = 0$ of the neighborhood points, which yields the normal vector $(A, B, C)$ at $P_{1t}$; then, from the point-direction form of a line, obtain the line $l$ through $P_{1t}$ parallel to the normal vector:

$$\frac{x - X_{1t}}{A} = \frac{y - Y_{1t}}{B} = \frac{z - Z_{1t}}{C}$$

Then take the intersection point of the line $l$ with the rendered spatial point cloud $\tilde{P}_2$ as the new coordinates of $P_{1t}$.

Iterating the above process completes the position optimization of the points in the spatial point cloud $P_1$ and yields the optimized spatial point cloud $P_1^*$ under visible light.
The invention collects pattern information of the measured scene through the visible light viewpoint acquisition unit 110 and infrared speckle patterns of the measured scene through the infrared light viewpoint acquisition unit 120; the three-dimensional reconstruction calculation control unit 130 controls the shooting of the visible light viewpoint acquisition unit 110 and the infrared light viewpoint acquisition unit 120 and fuses the patterns obtained by the two units into a three-dimensional reconstruction result. The technical scheme introduces multi-viewpoint joint optimization and infrared-speckle-based enhancement of object surface texture into high-precision three-dimensional reconstruction; through the arrangement of the infrared photosensitive elements and the speckle projectors, the surface structure of the operative field is acquired accurately and used as an operative-field prior to optimize the three-dimensional reconstruction model under visible light, improving reconstruction accuracy under the microscope without affecting the main optical path of the microscope.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.

Claims (8)

1. A microsurgical field three-dimensional reconstruction system, comprising:
a visible light viewpoint acquisition unit, used for acquiring pattern information of the measured scene; the visible light viewpoint acquisition unit comprises a first photosensitive element, a first optical zoom body, a second photosensitive element, a second optical zoom body and a main field objective;
the first photosensitive element serves as the first viewpoint in operative field acquisition, receiving photons emitted from the surface of the measured object and presenting an image of the measured object at the first observation angle; the first optical zoom body employs an optical zoom lens group to change the magnification of the measured object on the first photosensitive element;
the second photosensitive element serves as the second viewpoint in operative field acquisition, receiving photons emitted from the surface of the measured object and presenting an image of the measured object at the second observation angle; the second optical zoom body employs an optical zoom lens group to change the magnification of the measured object on the second photosensitive element;
the main field objective is used for determining and changing the microscope working distance formed by the optical paths of the first and second observation angles;
an infrared light viewpoint acquisition unit, used for acquiring infrared speckle patterns of the measured scene; the infrared light viewpoint acquisition unit comprises a first speckle projector, a first infrared optical lens assembly, a third photosensitive element, a second speckle projector, a second infrared optical lens assembly and a fourth photosensitive element;
the first speckle projector is used for projecting laser speckle, which is projected through the first infrared optical lens assembly onto the surface of the measured object to form a first group of infrared speckle spots in a given pattern; after reflection from the surface of the measured object, the first group of infrared speckle spots is imaged on the third photosensitive element through the first infrared optical lens assembly;
the second speckle projector is used for projecting laser speckle, which is projected through the second infrared optical lens assembly onto the surface of the measured object to form a second group of infrared speckle spots in a given pattern; after reflection from the surface of the measured object, the second group of infrared speckle spots is imaged on the fourth photosensitive element through the second infrared optical lens assembly;
a three-dimensional reconstruction calculation control unit, used for controlling the shooting of the visible light viewpoint acquisition unit and the infrared light viewpoint acquisition unit, and for fusing the pattern obtained by the visible light viewpoint acquisition unit with the pattern obtained by the infrared light viewpoint acquisition unit to obtain a three-dimensional reconstruction result.
2. The microsurgical field three-dimensional reconstruction system of claim 1, wherein the visible light viewpoint collecting unit further comprises an illumination light source assembly for illuminating the object to be measured.
3. The microsurgical field three-dimensional reconstruction system of claim 1, wherein the first speckle projector, first infrared optical lens assembly and third photosensitive element are located at one side of the main field objective; the second speckle projector, the second infrared optical lens assembly and the fourth photosensitive element are positioned on the other side of the main-field objective lens.
4. The microsurgical field three-dimensional reconstruction system of claim 1, wherein the first photosensitive element and the second photosensitive element are color photosensitive elements sensitive to visible light, and the third photosensitive element and the fourth photosensitive element are grayscale photosensitive elements sensitive to infrared light.
5. The microsurgical field three-dimensional reconstruction system of claim 1, wherein the three-dimensional reconstruction computational control unit comprises a synchronized camera and a computing device; the synchronous camera is respectively connected with the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element; the computing equipment is connected with the synchronous camera and used for processing data obtained by the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element to obtain a final three-dimensional reconstruction result.
6. A microsurgical field three-dimensional reconstruction method for a microsurgical field three-dimensional reconstruction system as claimed in any one of claims 1 to 5, characterized in that it comprises the following steps:
step 1, calibrating a first photosensitive element, a second photosensitive element, a third photosensitive element and a fourth photosensitive element under a preset microscope magnification to obtain internal parameters of the first photosensitive element
Figure FDA0002720066760000021
Internal parameter of the second photosensitive element
Figure FDA0002720066760000022
Internal parameter of the third photosensitive element
Figure FDA0002720066760000023
And fourth photosensitive element intrinsic parameter
Figure FDA0002720066760000024
And acquiring external parameters of the second photosensitive element relative to the first photosensitive element
Figure FDA0002720066760000025
The third photosensitive element is opposite toExternal parameter of the first photosensitive element
Figure FDA0002720066760000026
And the external parameter of the fourth photosensitive element relative to the first photosensitive element
Figure FDA0002720066760000027
step 2, under a given microscope magnification i, controlling the first, second, third and fourth photosensitive elements through the synchronous camera so that they shoot the measured object simultaneously, and recording the image $I_1^i$ generated by the first photosensitive element, the image $I_2^i$ generated by the second photosensitive element, the image $I_3^i$ generated by the third photosensitive element, and the image $I_4^i$ generated by the fourth photosensitive element;
step 3, adopting the intrinsic and extrinsic parameters of the first photosensitive element and of the second photosensitive element and applying a stereo rectification algorithm from computer vision to the image pair $(I_1^i, I_2^i)$, so that point pairs with the same features in the first image $I_1^i$ and the second image $I_2^i$ become row-aligned, obtaining the corrected image pair $(\bar{I}_1^i, \bar{I}_2^i)$ and the reprojection matrix $Q_1$ of the corrected first photosensitive element;

adopting the intrinsic and extrinsic parameters of the third photosensitive element and of the fourth photosensitive element and applying the stereo rectification algorithm to the image pair $(I_3^i, I_4^i)$, so that point pairs with the same features in the third image $I_3^i$ and the fourth image $I_4^i$ become row-aligned, obtaining the corrected image pair $(\bar{I}_3^i, \bar{I}_4^i)$ and the reprojection matrix $Q_3$ of the corrected third photosensitive element;
step 4, for the corrected image pair $(\bar{I}_1^i, \bar{I}_2^i)$ and the corrected image pair $(\bar{I}_3^i, \bar{I}_4^i)$ respectively, obtaining the disparity map $d_{12}$ of the pair $(\bar{I}_1^i, \bar{I}_2^i)$ and the disparity map $d_{34}$ of the pair $(\bar{I}_3^i, \bar{I}_4^i)$ using a dense matching algorithm;
step 5, for the first corrected image $\bar{I}_1^i$ and the second corrected image $\bar{I}_2^i$ of the corrected pair, based on the reprojection matrix $Q_1$ and the disparity map $d_{12}$, using triangulation in computer vision to obtain the spatial coordinates of each point of $\bar{I}_1^i$ in the camera coordinate system of the first photosensitive element, generating the spatial point cloud $P_1$;

for the third corrected image $\bar{I}_3^i$ and the fourth corrected image $\bar{I}_4^i$ of the corrected pair, based on the reprojection matrix $Q_3$ and the disparity map $d_{34}$, using triangulation to obtain the spatial coordinates of each point of $\bar{I}_3^i$ in the camera coordinate system of the third photosensitive element, generating the spatial point cloud $P_2$;

step 6, adopting the spatial point cloud $P_1$ and the spatial point cloud $P_2$ to eliminate erroneous reconstruction results in texture-free areas, thereby correcting the spatial point cloud $P_1$.
7. The microsurgical field three-dimensional reconstruction method of claim 6, wherein the dense matching algorithm in the step 4 uses a dense optical flow algorithm or a deep learning based stereo matching algorithm.
8. The microsurgical field three-dimensional reconstruction method of claim 6, wherein the step 6 comprises:
step 6.1, based on the spatial relation between the third photosensitive element and the first photosensitive element, transforming the spatial point cloud $P_2$ from the coordinate system of the third photosensitive element into the coordinate system of the first photosensitive element to form the transformed spatial point cloud $P_2'$;

step 6.2, rendering the transformed spatial point cloud $P_2'$ by point cloud triangulation in computer vision to obtain the rendered spatial point cloud $\tilde{P}_2$;

step 6.3, adopting the rendered spatial point cloud $\tilde{P}_2$ to optimize the spatial point cloud $P_1$:

for each point $P_{1t}(X_{1t}, Y_{1t}, Z_{1t})$ in the spatial point cloud $P_1$, obtaining the set of neighboring points $N(P_{1t}) = \{P_{1t}^1, \dots, P_{1t}^n\}$, where $n$ denotes the number of neighborhood points and $P_{1t}^j$ is a neighborhood point of $P_{1t}$;

finding by least squares the fitting plane $Ax + By + Cz + D = 0$ of the neighborhood points, which yields the normal vector $(A, B, C)$ at $P_{1t}$, and then, from the point-direction form of a line, obtaining the line $l$ through $P_{1t}$ parallel to the normal vector:

$$\frac{x - X_{1t}}{A} = \frac{y - Y_{1t}}{B} = \frac{z - Z_{1t}}{C}$$

then taking the intersection point of the line $l$ with the rendered spatial point cloud $\tilde{P}_2$ as the new coordinates of $P_{1t}$;

iterating the above process to complete the position optimization of the points in the spatial point cloud $P_1$, obtaining the optimized spatial point cloud $P_1^*$ under visible light.
CN202011084952.8A, filed 2020-10-12 (priority 2020-10-12): Microsurgery surgical field three-dimensional reconstruction system and method. Status: Active. Granted as CN112294453B.

Priority Applications (1)

Application Number: CN202011084952.8A; Priority Date: 2020-10-12; Filing Date: 2020-10-12; Title: Microsurgery surgical field three-dimensional reconstruction system and method

Publications (2)

CN112294453A (application publication), published 2021-02-02
CN112294453B (granted patent), published 2022-04-15

Family

ID: 74489833

Family Applications (1)

CN202011084952.8A (Active), filed 2020-10-12: Microsurgery surgical field three-dimensional reconstruction system and method

Country Status (1)

CN: CN112294453B (en)

Cited By (1)

* Cited by examiner, † Cited by third party

CN113721359A * (priority 2021-09-06, published 2021-11-30, 戴朴): System and method for real-time three-dimensional measurement of key indexes in ear microsurgery

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279987A (en) * 2013-06-18 2013-09-04 厦门理工学院 Object fast three-dimensional modeling method based on Kinect
CN103337094A (en) * 2013-06-14 2013-10-02 西安工业大学 Method for realizing three-dimensional reconstruction of movement by using binocular camera
CN103337071A (en) * 2013-06-19 2013-10-02 北京理工大学 Device and method for structure-reconstruction-based subcutaneous vein three-dimensional visualization
US20130343634A1 (en) * 2012-06-26 2013-12-26 Xerox Corporation Contemporaneously reconstructing images captured of a scene illuminated with unstructured and structured illumination sources
CN103810708A (en) * 2014-02-13 2014-05-21 西安交通大学 Method and device for perceiving depth of laser speckle image
CN105608734A (en) * 2015-12-23 2016-05-25 王娟 Three-dimensional image information acquisition apparatus and image reconstruction method therefor
CN106691491A (en) * 2017-02-28 2017-05-24 赛诺威盛科技(北京)有限公司 CT (computed tomography) positioning system implemented by using visible light and infrared light and CT positioning method
US20170154436A1 (en) * 2015-05-27 2017-06-01 Zhuhai Ritech Technology Co. Ltd. Stereoscopic vision three dimensional measurement method and system for calculating laser speckle as texture
CN106875468A (en) * 2015-12-14 2017-06-20 深圳先进技术研究院 Three-dimensional reconstruction apparatus and method
US20170249053A1 (en) * 2011-02-10 2017-08-31 Edge 3 Technologies, Inc. Near Touch Interaction
CN108921027A (en) * 2018-06-01 2018-11-30 杭州荣跃科技有限公司 A kind of running disorder object recognition methods based on laser speckle three-dimensional reconstruction
CN109242812A (en) * 2018-09-11 2019-01-18 中国科学院长春光学精密机械与物理研究所 Image interfusion method and device based on conspicuousness detection and singular value decomposition
CN109903376A (en) * 2019-02-28 2019-06-18 四川川大智胜软件股份有限公司 A kind of the three-dimensional face modeling method and system of face geological information auxiliary
CN110363806A (en) * 2019-05-29 2019-10-22 中德(珠海)人工智能研究院有限公司 A method of three-dimensional space modeling is carried out using black light projection feature
CN110940295A (en) * 2019-11-29 2020-03-31 北京理工大学 High-reflection object measurement method and system based on laser speckle limit constraint projection
CN111009007A (en) * 2019-11-20 2020-04-14 华南理工大学 Finger multi-feature comprehensive three-dimensional reconstruction method
CN111145342A (en) * 2019-12-27 2020-05-12 山东中科先进技术研究院有限公司 Binocular speckle structured light three-dimensional reconstruction method and system
CN111260765A (en) * 2020-01-13 2020-06-09 浙江未来技术研究院(嘉兴) Dynamic three-dimensional reconstruction method for microsurgery operative field
CN111491151A (en) * 2020-03-09 2020-08-04 浙江未来技术研究院(嘉兴) Microsurgical stereoscopic video rendering method
CN111685711A (en) * 2020-05-25 2020-09-22 中国科学院苏州生物医学工程技术研究所 Medical endoscope three-dimensional imaging system based on 3D camera

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170249053A1 (en) * 2011-02-10 2017-08-31 Edge 3 Technologies, Inc. Near Touch Interaction
US20130343634A1 (en) * 2012-06-26 2013-12-26 Xerox Corporation Contemporaneously reconstructing images captured of a scene illuminated with unstructured and structured illumination sources
CN103337094A (en) * 2013-06-14 2013-10-02 西安工业大学 Method for realizing three-dimensional reconstruction of movement by using binocular camera
CN103279987A (en) * 2013-06-18 2013-09-04 厦门理工学院 Object fast three-dimensional modeling method based on Kinect
CN103337071A (en) * 2013-06-19 2013-10-02 北京理工大学 Device and method for structure-reconstruction-based subcutaneous vein three-dimensional visualization
CN103810708A (en) * 2014-02-13 2014-05-21 西安交通大学 Method and device for perceiving depth of laser speckle image
US20170154436A1 (en) * 2015-05-27 2017-06-01 Zhuhai Ritech Technology Co. Ltd. Stereoscopic vision three dimensional measurement method and system for calculating laser speckle as texture
CN106875468A (en) * 2015-12-14 2017-06-20 深圳先进技术研究院 Three-dimensional reconstruction apparatus and method
CN105608734A (en) * 2015-12-23 2016-05-25 王娟 Three-dimensional image information acquisition apparatus and image reconstruction method therefor
CN106691491A (en) * 2017-02-28 2017-05-24 赛诺威盛科技(北京)有限公司 CT (computed tomography) positioning system implemented by using visible light and infrared light and CT positioning method
CN108921027A (en) * 2018-06-01 2018-11-30 杭州荣跃科技有限公司 Moving obstacle recognition method based on laser speckle three-dimensional reconstruction
CN109242812A (en) * 2018-09-11 2019-01-18 中国科学院长春光学精密机械与物理研究所 Image fusion method and device based on saliency detection and singular value decomposition
CN109903376A (en) * 2019-02-28 2019-06-18 四川川大智胜软件股份有限公司 Three-dimensional face modeling method and system assisted by facial geometric information
CN110363806A (en) * 2019-05-29 2019-10-22 中德(珠海)人工智能研究院有限公司 Method for three-dimensional space modeling using invisible light projection features
CN111009007A (en) * 2019-11-20 2020-04-14 华南理工大学 Finger multi-feature comprehensive three-dimensional reconstruction method
CN110940295A (en) * 2019-11-29 2020-03-31 北京理工大学 High-reflection object measurement method and system based on laser speckle limit constraint projection
CN111145342A (en) * 2019-12-27 2020-05-12 山东中科先进技术研究院有限公司 Binocular speckle structured light three-dimensional reconstruction method and system
CN111260765A (en) * 2020-01-13 2020-06-09 浙江未来技术研究院(嘉兴) Dynamic three-dimensional reconstruction method for microsurgery operative field
CN111491151A (en) * 2020-03-09 2020-08-04 浙江未来技术研究院(嘉兴) Microsurgical stereoscopic video rendering method
CN111685711A (en) * 2020-05-25 2020-09-22 中国科学院苏州生物医学工程技术研究所 Medical endoscope three-dimensional imaging system based on 3D camera

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113721359A (en) * 2021-09-06 2021-11-30 戴朴 System and method for real-time three-dimensional measurement of key indexes in ear microsurgery
CN113721359B (en) * 2021-09-06 2024-07-05 戴朴 System and method for real-time three-dimensional measurement of key indexes in ear microsurgery

Also Published As

Publication number Publication date
CN112294453B (en) 2022-04-15

Similar Documents

Publication Publication Date Title
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
US9392262B2 (en) System and method for 3D reconstruction using multiple multi-channel cameras
US7953271B2 (en) Enhanced object reconstruction
JP7379704B2 (en) System and method for integrating visualization camera and optical coherence tomography
CN105203044B (en) To calculate stereo vision three-dimensional measurement method and system of the laser speckle as texture
JP4343341B2 (en) Endoscope device
JP6458732B2 (en) Image processing apparatus, image processing method, and program
WO2017008226A1 (en) Three-dimensional facial reconstruction method and system
TWI520576B (en) Method and system for converting 2d images to 3d images and computer-readable medium
CN107123156A (en) A kind of active light source projection three-dimensional reconstructing method being combined with binocular stereo vision
CN109579695B (en) Part measuring method based on heterogeneous stereoscopic vision
US11986240B2 (en) Surgical applications with integrated visualization camera and optical coherence tomography
CN110992431B (en) Combined three-dimensional reconstruction method for binocular endoscope soft tissue image
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
Malti et al. Combining conformal deformation and Cook–Torrance shading for 3-D reconstruction in laparoscopy
CN100348949C (en) Three-dimensional foot type measuring and modeling method based on specific grid pattern
CN112294453B (en) Microsurgery surgical field three-dimensional reconstruction system and method
CN115205491A (en) Method and device for handheld multi-view three-dimensional reconstruction
CN106303501B (en) Stereo-picture reconstructing method and device based on image sparse characteristic matching
EP4379316A1 (en) Measuring system providing shape from shading
Zhao et al. Geometrical-analysis-based algorithm for stereo matching of single-lens binocular and multi-ocular stereovision system
JP6890422B2 (en) Information processing equipment, control methods and programs for information processing equipment
CN212163540U (en) Omnidirectional stereoscopic vision camera configuration system
Zhao et al. Augmented Reality Calibration with Stereo Image Registration for Surgical Navigation
Bellmann et al. A benchmarking dataset for performance evaluation of automatic surface reconstruction algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240311

Address after: 314050 9F, No. 705, Asia Pacific Road, Nanhu District, Jiaxing City, Zhejiang Province

Patentee after: ZHEJIANG YANGTZE DELTA REGION INSTITUTE OF TSINGHUA University

Country or region after: China

Address before: No.152 Huixin Road, Nanhu District, Jiaxing City, Zhejiang Province 314000

Patentee before: ZHEJIANG FUTURE TECHNOLOGY INSTITUTE (JIAXING)

Country or region before: China