CN112294453A - A system and method for three-dimensional reconstruction of microsurgery operating field - Google Patents

A system and method for three-dimensional reconstruction of microsurgery operating field

Info

Publication number
CN112294453A
CN112294453A
Authority
CN
China
Prior art keywords
photosensitive element
infrared
dimensional reconstruction
point cloud
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011084952.8A
Other languages
Chinese (zh)
Other versions
CN112294453B (en)
Inventor
刘威
邵航
唐洁
廖家胜
阮程
黄海亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiaxing Zhitong Technology Co ltd
Original Assignee
Zhejiang Future Technology Institute (jiaxing)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Future Technology Institute (jiaxing) filed Critical Zhejiang Future Technology Institute (jiaxing)
Priority to CN202011084952.8A priority Critical patent/CN112294453B/en
Publication of CN112294453A publication Critical patent/CN112294453A/en
Application granted granted Critical
Publication of CN112294453B publication Critical patent/CN112294453B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/20 - Surgical microscopes characterised by non-optical aspects
    • A61B90/36 - Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37 - Surgical systems with images on a monitor during operation
    • A61B2090/364 - Correlation of different images or relation of image positions in respect to the body
    • A61B2090/367 - Correlation of different images or relation of image positions in respect to the body creating a 3D dataset from 2D images using position information
    • A61B2090/373 - Surgical systems with images on a monitor during operation using light, e.g. by using optical scanners

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Gynecology & Obstetrics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A system and method for three-dimensional reconstruction of the microsurgical operating field are disclosed. Pattern information of the measured scene is acquired by a visible-light viewpoint acquisition unit; an infrared speckle pattern of the measured scene is acquired by an infrared viewpoint acquisition unit; and a three-dimensional reconstruction calculation control unit controls the shooting of both acquisition units and fuses the patterns obtained by the visible-light unit with the patterns obtained by the infrared unit to produce a three-dimensional reconstruction result. The scheme introduces multi-viewpoint joint optimization and an infrared-speckle-based mechanism for enhancing object surface texture into high-precision three-dimensional reconstruction. Through the arrangement of the infrared photosensitive elements and speckle projectors, the surface structure of the operating field can be acquired accurately and used as a prior to optimize the three-dimensional reconstruction model under visible light, improving reconstruction accuracy under the microscope without affecting the microscope's main optical path.

Description

Microsurgery surgical field three-dimensional reconstruction system and method
Technical Field
The invention relates to the technical field of microscopic stereoscopic imaging, and in particular to a system and a method for three-dimensional reconstruction of the microsurgical operating field.
Background
The microscope is a commonly used auxiliary device in fine surgical operations; by virtue of its magnification, a doctor can clearly see the fine tissues of the human body in the operating field and treat the patient precisely. In recent years, three-dimensional reconstruction of the operating-field area has drawn increasing attention from researchers in medical imaging. Compared with traditional CT/MRI imaging, vision-based reconstruction captures the color texture of the operating-field surface, provides doctors with a more intuitive three-dimensional visual experience, and also supports digital measurement of the operating field from the reconstruction result, offering intraoperative guidance. It therefore has great application value.
Existing methods for three-dimensional reconstruction of the operating area fall roughly into two classes. The first is binocular stereo vision, which reconstructs the operating area from the parallax produced by the microscope's dual optical paths and often reconstructs only the region within a limited viewing angle. Moreover, the scene under a microscope has special characteristics compared with other vision applications: under the microscope's illumination source the operating field contains many specular-reflection regions as well as large textureless regions, and these factors often degrade the stereo matching result, ultimately making the reconstruction difficult to use in clinic. The second class is structured-light reconstruction, such as single-frame and multi-frame structured light; although its reconstruction accuracy is high, it requires an expensive structured-light projector and is too time-consuming for real-time clinical use. In summary, a new technical scheme for three-dimensional reconstruction of the microsurgical operating field is urgently needed.
Disclosure of Invention
Therefore, the invention provides a system and a method for three-dimensional reconstruction of the microsurgical operating field, which realize multi-viewpoint, high-precision reconstruction of the operating field and solve the failure of three-dimensional reconstruction in specular-reflection and textureless regions of the operating area.
In order to achieve the above purpose, the invention provides the following technical scheme. A microsurgical operating-field three-dimensional reconstruction system comprises:
a visible-light viewpoint acquisition unit, used to acquire pattern information of the measured scene; the visible-light viewpoint acquisition unit comprises a first photosensitive element, a first optical zoom body, a second photosensitive element, a second optical zoom body, and a main-field objective;
the first photosensitive element serves as the first viewpoint in operating-field viewpoint acquisition, receiving photons emitted from the surface of the measured object and presenting the image of the measured object at the first observation angle; the first optical zoom body uses an optical zoom lens group to change the magnification of the measured object on the first photosensitive element;
the second photosensitive element serves as the second viewpoint in operating-field viewpoint acquisition, receiving photons emitted from the surface of the measured object and presenting the image of the measured object at the second observation angle; the second optical zoom body uses an optical zoom lens group to change the magnification of the measured object on the second photosensitive element;
the main-field objective is used to determine and change the microscope working distance formed by the optical paths of the first and second observation angles;
an infrared viewpoint acquisition unit, used to acquire the infrared speckle pattern of the measured scene; the infrared viewpoint acquisition unit comprises a first speckle projector, a first infrared optical lens assembly, a third photosensitive element, a second speckle projector, a second infrared optical lens assembly, and a fourth photosensitive element;
the first speckle projector is used to project laser speckle, which is projected through the first infrared optical lens assembly onto the surface of the measured object to form a first group of infrared speckle spots in a given pattern; after reflection from the object surface, the first group of infrared speckle spots is imaged on the third photosensitive element through the first infrared optical lens assembly;
the second speckle projector is used to project laser speckle, which is projected through the second infrared optical lens assembly onto the surface of the measured object to form a second group of infrared speckle spots in a given pattern; after reflection from the object surface, the second group of infrared speckle spots is imaged on the fourth photosensitive element through the second infrared optical lens assembly;
a three-dimensional reconstruction calculation control unit, used to control the shooting of the visible-light viewpoint acquisition unit and the infrared viewpoint acquisition unit, and to fuse the pattern obtained by the visible-light viewpoint acquisition unit with the pattern obtained by the infrared viewpoint acquisition unit to obtain the three-dimensional reconstruction result.
As a preferred scheme of the microsurgical field three-dimensional reconstruction system, the visible light viewpoint acquisition unit further comprises an illumination light source assembly, and the illumination light source assembly is used for illuminating the measured object.
As a preferred scheme of the microsurgical field three-dimensional reconstruction system, the first speckle projector, the first infrared optical lens assembly and the third photosensitive element are positioned on one side of the main field objective; the second speckle projector, the second infrared optical lens assembly and the fourth photosensitive element are positioned on the other side of the main-field objective lens.
As a preferred scheme of the microsurgical field three-dimensional reconstruction system, the first photosensitive element and the second photosensitive element are color photosensitive elements sensitive to visible light; the third photosensitive element and the fourth photosensitive element are grayscale photosensitive elements sensitive to infrared light.
As a preferred scheme of the microsurgical field three-dimensional reconstruction system, the three-dimensional reconstruction calculation control unit comprises a synchronous camera and a calculation device; the synchronous camera is respectively connected with the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element; the computing equipment is connected with the synchronous camera and used for processing data obtained by the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element to obtain a final three-dimensional reconstruction result.
The invention also provides a three-dimensional reconstruction method of the microsurgical field, which is used for the three-dimensional reconstruction system of the microsurgical field and comprises the following steps:
step 1, calibrating a first photosensitive element, a second photosensitive element, a third photosensitive element and a fourth photosensitive element under a preset microscope magnification to obtain internal parameters of the first photosensitive element
Figure BDA0002720066770000031
Internal parameter of the second photosensitive element
Figure BDA0002720066770000032
Internal parameter of the third photosensitive element
Figure BDA0002720066770000033
And fourth photosensitive element intrinsic parameter
Figure BDA0002720066770000034
And acquiring external parameters of the second photosensitive element relative to the first photosensitive element
Figure BDA0002720066770000035
External parameter of the third photosensitive element relative to the first photosensitive element
Figure BDA0002720066770000036
And the external parameter of the fourth photosensitive element relative to the first photosensitive element
Figure BDA0002720066770000037
Step 2: at a given microscope magnification i, control the first, second, third, and fourth photosensitive elements through the synchronous camera so that they photograph the measured object simultaneously, and record the images $I_1^i$, $I_2^i$, $I_3^i$, and $I_4^i$ generated by the first, second, third, and fourth photosensitive elements respectively.
Step 3: using the intrinsic and extrinsic parameters of the first photosensitive element and the intrinsic and extrinsic parameters of the second photosensitive element, rectify the image pair $(I_1^i, I_2^i)$ with a stereo rectification algorithm from computer vision so that point pairs with the same features in the first image $I_1^i$ and the second image $I_2^i$ are row-aligned, obtaining the rectified image pair $(\hat{I}_1^i, \hat{I}_2^i)$ and the reprojection matrix $Q_1$ of the rectified first photosensitive element;
using the intrinsic and extrinsic parameters of the third photosensitive element and the intrinsic and extrinsic parameters of the fourth photosensitive element, rectify the image pair $(I_3^i, I_4^i)$ with the stereo rectification algorithm so that point pairs with the same features in the third image $I_3^i$ and the fourth image $I_4^i$ are row-aligned, obtaining the rectified image pair $(\hat{I}_3^i, \hat{I}_4^i)$ and the reprojection matrix $Q_3$ of the rectified third photosensitive element.
Step 4: apply a dense matching algorithm to the rectified image pairs $(\hat{I}_1^i, \hat{I}_2^i)$ and $(\hat{I}_3^i, \hat{I}_4^i)$ respectively, obtaining the disparity map $d_{12}$ of the pair $(\hat{I}_1^i, \hat{I}_2^i)$ and the disparity map $d_{34}$ of the pair $(\hat{I}_3^i, \hat{I}_4^i)$.
Step 5: for the first rectified image $\hat{I}_1^i$ and the second rectified image $\hat{I}_2^i$, based on the reprojection matrix $Q_1$ and the disparity map $d_{12}$, use triangulation from computer vision to obtain the spatial coordinates of every point of $\hat{I}_1^i$ in the camera coordinate system of the first photosensitive element, generating the spatial point cloud $P_1$;
for the third rectified image $\hat{I}_3^i$ and the fourth rectified image $\hat{I}_4^i$, based on the reprojection matrix $Q_3$ and the disparity map $d_{34}$, use triangulation from computer vision to obtain the spatial coordinates of every point of $\hat{I}_3^i$ in the camera coordinate system of the third photosensitive element, generating the spatial point cloud $P_2$.
Step 6: use the spatial point clouds $P_1$ and $P_2$ to eliminate erroneous reconstruction results in textureless regions, correcting the spatial point cloud $P_1$.
As a preferred scheme of the microsurgical field three-dimensional reconstruction method, the dense matching algorithm in step 4 uses a dense optical-flow algorithm or a deep-learning-based stereo matching algorithm.
As a preferred scheme of the microsurgical field three-dimensional reconstruction method, step 6 comprises:
Step 6.1: based on the spatial relation between the third photosensitive element and the first photosensitive element, transform the spatial point cloud $P_2$ located in the coordinate system of the third photosensitive element into the coordinate system of the first photosensitive element, forming the transformed spatial point cloud $P_2'$;
Step 6.2: render the transformed spatial point cloud $P_2'$ using point-cloud triangulation from computer vision, obtaining the rendered point cloud $\tilde{P}_2$;
Step 6.3: use the rendered point cloud $\tilde{P}_2$ to optimize the spatial point cloud $P_1$:
for each point $P_{1t}(X_{1t}, Y_{1t}, Z_{1t})$ in the spatial point cloud $P_1$, obtain the set of neighboring points $N = \{P_{1t}^1, P_{1t}^2, \ldots, P_{1t}^n\}$, where n is the number of neighborhood points and $P_{1t}^j$ is a neighborhood point of $P_{1t}$;
use the least-squares method to fit the plane $Ax + By + Cz + D = 0$ to the neighborhood points of $P_{1t}$, obtaining the normal vector $(A, B, C)$ at $P_{1t}$; then, from the point-direction equation, find the line $l$ through $P_{1t}$ parallel to the normal vector:
$\frac{x - X_{1t}}{A} = \frac{y - Y_{1t}}{B} = \frac{z - Z_{1t}}{C}$;
then take the intersection point of the line $l$ with the rendered point cloud $\tilde{P}_2$ as the new coordinates of $P_{1t}$;
iterate the above process to complete the position optimization of the points in the spatial point cloud $P_1$, obtaining the optimized spatial point cloud $\hat{P}_1$ under visible light.
The invention acquires pattern information of the measured scene through the visible-light viewpoint acquisition unit, acquires the infrared speckle pattern of the measured scene through the infrared viewpoint acquisition unit, and uses the three-dimensional reconstruction calculation control unit to control the shooting of both acquisition units and fuse their patterns into a three-dimensional reconstruction result. The scheme introduces multi-viewpoint joint optimization and infrared-speckle-based enhancement of object surface texture into high-precision three-dimensional reconstruction; through the arrangement of the infrared photosensitive elements and speckle projectors, the surface structure of the operating field is acquired accurately and used as a prior to optimize the three-dimensional reconstruction model under visible light, improving reconstruction accuracy under the microscope without affecting the microscope's main optical path.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It should be apparent that the drawings in the following description are merely exemplary, and other drawings can be derived from them by those of ordinary skill in the art without inventive effort.
FIG. 1 is a schematic diagram of a three-dimensional reconstruction system for a microsurgical field provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a hardware relationship of a three-dimensional microsurgical field reconstruction system provided in an embodiment of the present invention;
fig. 3 is a schematic flow chart of a three-dimensional reconstruction method of a microsurgical field provided in an embodiment of the present invention.
Detailed Description
The present invention is described below through particular embodiments; other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure. The described embodiments are merely a part of the embodiments of the invention, not all of them, and are not intended to limit the invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
Referring to fig. 1 and 2, there is provided a microsurgical field three-dimensional reconstruction system comprising:
a visible-light viewpoint acquisition unit 110, used to acquire pattern information of the measured scene; the visible-light viewpoint acquisition unit 110 includes a first photosensitive element 111, a first optical zoom body 113, a second photosensitive element 112, a second optical zoom body 114, and a main-field objective 116;
the first photosensitive element 111 serves as the first viewpoint in operating-field viewpoint acquisition, receiving photons emitted from the surface of the measured object and presenting the image of the object at the first observation angle; the first optical zoom body 113 uses an optical zoom lens group to change the magnification of the measured object on the first photosensitive element 111;
the second photosensitive element 112 serves as the second viewpoint in operating-field viewpoint acquisition, receiving photons emitted from the surface of the measured object and presenting the image of the object at the second observation angle; the second optical zoom body 114 uses an optical zoom lens group to change the magnification of the measured object on the second photosensitive element 112;
the main-field objective 116 is used to determine and change the microscope working distance formed by the optical paths of the first and second observation angles;
an infrared viewpoint acquisition unit 120, used to acquire the infrared speckle pattern of the measured scene; the infrared viewpoint acquisition unit 120 includes a first speckle projector 123, a first infrared optical lens assembly 122, a third photosensitive element 121, a second speckle projector 126, a second infrared optical lens assembly 125, and a fourth photosensitive element 124;
the first speckle projector 123 is used to project laser speckle, which is projected through the first infrared optical lens assembly 122 onto the surface of the measured object to form a first group of infrared speckle spots in a given pattern; after reflection from the object surface, the first group of infrared speckle spots is imaged on the third photosensitive element 121 through the first infrared optical lens assembly 122;
the second speckle projector 126 is used to project laser speckle, which is projected through the second infrared optical lens assembly 125 onto the surface of the measured object to form a second group of infrared speckle spots in a given pattern; after reflection from the object surface, the second group of infrared speckle spots is imaged on the fourth photosensitive element 124 through the second infrared optical lens assembly 125;
a three-dimensional reconstruction calculation control unit 130, used to control the shooting of the visible-light viewpoint acquisition unit 110 and the infrared viewpoint acquisition unit 120, and to fuse the pattern obtained by the visible-light viewpoint acquisition unit 110 with the pattern obtained by the infrared viewpoint acquisition unit 120 to obtain the three-dimensional reconstruction result.
Specifically, the visible light viewpoint collecting unit 110 further includes an illumination light source assembly 115, and the illumination light source assembly 115 is configured to illuminate the object to be measured. The illumination light source assembly 115 provides sufficient illumination for the object to be measured, and ensures the imaging quality of the object to be measured on the first photosensitive element 111 and the second photosensitive element 112.
Specifically, the first photosensitive element 111 serves as the first observation angle in multi-viewpoint acquisition, receiving photons emitted from the surface of the measured object and finally presenting the image of the measured object at that angle; the first optical zoom body 113 is a set of optical zoom lenses that changes the magnification of the measured object on the first photosensitive element 111. The second optical zoom body 114 and the second photosensitive element 112 serve as the second observation angle of the measured object; their function is identical to that of the first observation angle, differing only in the viewing angle on the observed object. The main-field objective 116 is used to determine and vary the working distance of the microscope formed by the optical paths of the first and second observation angles.
Specifically, the first speckle projector 123, the first infrared optical lens assembly 122 and the third photosensitive element 121 are located on one side of the main-field objective 116; the second speckle projector 126, second infrared optical lens assembly 125 and fourth photosensitive element 124 are located on the other side of the main-field objective 116. The first photosensitive element 111 and the second photosensitive element 112 are color photosensitive elements which sense visible light; the third photosensitive element 121 and the fourth photosensitive element 124 adopt a grayscale photosensitive element for infrared light.
The infrared viewpoint acquisition unit 120 consists of two infrared acquisition devices, located on either side of the microscope body. Taking one of them as an example, the device consists of the third photosensitive element 121, the first speckle projector 123, and the first infrared optical lens assembly 122. The first speckle projector 123 projects laser speckle, which passes through the first infrared optical lens assembly 122 onto the object surface to form infrared speckle spots with a specific pattern. After reflection from the object surface, the speckle spots are imaged on the third photosensitive element 121 through the first infrared optical lens assembly 122.
Specifically, the first infrared optical lens assembly 122 has two functions, on one hand, the speckle is projected onto the surface of the object through the internal spectroscope, and on the other hand, the infrared light reflected by the surface of the object is projected onto the third photosensitive element 121 through the first infrared optical lens assembly 122. The magnification of the first infrared optical lens assembly 122 is comparable to the minimum magnification of the first optical zoom body 113. The third photosensitive element 121, the first photosensitive element 111, and the second photosensitive element 112 are slightly different in image formation manner, the third photosensitive element 121 is a grayscale photosensitive element that is sensitive to infrared light, and the first photosensitive element 111 and the second photosensitive element 112 are color photosensitive elements that are sensitive to visible light.
Specifically, there are differences in principle and function between the first photosensitive element 111 and the second photosensitive element 112 in design, and the third photosensitive element 121 and the fourth photosensitive element 124. In principle, the first and second photosensitive elements 111 and 112 image by means of visible light, and the third and fourth photosensitive elements 121 and 124 image in the infrared light band. Functionally, since the speckle projector is added to both the third photosensitive element 121 and the fourth photosensitive element 124, the third photosensitive element 121 and the fourth photosensitive element 124 receive the illumination light reflected by the object surface and also receive the speckles reflected by the object surface. The advantage of this design is that due to the existence of the fine speckles, the original non-textured and highlight areas in the third photosensitive element 121 and the fourth photosensitive element 124 are enhanced in detail, so that the stereo matching problem is effectively solved, and the quality of three-dimensional reconstruction under infrared light is enhanced.
In addition, it should be noted that the light emitted by the first speckle projector 123 and the second speckle projector 126 lies in the infrared band, while the first photosensitive element 111 and the second photosensitive element 112 image in the visible band and have low quantum efficiency in the infrared band, so the speckles do not appear in the images of the visible-light photosensitive elements.
Specifically, the three-dimensional reconstruction calculation control unit 130 includes a synchronous camera 131 and a computing device 132. The synchronous camera 131 is connected to the first photosensitive element 111, the second photosensitive element 112, the third photosensitive element 121, and the fourth photosensitive element 124 respectively, and is responsible for controlling the simultaneous shooting of the four photosensitive elements. The computing device 132 is connected to the synchronous camera 131 and processes the data obtained from the four photosensitive elements to obtain the final three-dimensional reconstruction result.
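By way of illustration only, the capture-and-process entry point can be sketched in a few lines. This sketch assumes four cameras reachable through OpenCV's VideoCapture with hypothetical device indices; the actual hardware described here uses the dedicated synchronous camera 131 as a hardware trigger rather than sequential software grabs.

import cv2

# Hypothetical device indices for photosensitive elements 111, 112, 121, 124.
CAMS = [0, 1, 2, 3]

def grab_synchronized_frames():
    """Latch one frame from each photosensitive element as close together in
    time as a software trigger allows, then decode the latched frames."""
    caps = [cv2.VideoCapture(i) for i in CAMS]
    for cap in caps:
        cap.grab()                                   # latch frames first...
    frames = [cap.retrieve()[1] for cap in caps]     # ...then decode them
    for cap in caps:
        cap.release()
    return frames                                    # [I1, I2, I3, I4]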
Referring to fig. 3, the present invention further provides a three-dimensional reconstruction method of a microsurgical field, which is used for the three-dimensional reconstruction system of the microsurgical field, and comprises the following steps:
s1, calibrating the first light sensing element 111, the second light sensing element 112, the third light sensing element 121 and the fourth light sensing element 124 under the preset microscope magnification to obtain the internal parameters of the first light sensing element 111
Figure BDA0002720066770000091
Internal parameters of the second photosensitive element 112
Figure BDA0002720066770000092
Internal parameter of the third photosensitive element 121
Figure BDA0002720066770000093
And the internal parameters of the fourth photosensitive element 124
Figure BDA0002720066770000094
And obtains the external parameters of the second photosensitive element 112 relative to the first photosensitive element 111
Figure BDA0002720066770000095
External parameters of the third photosensitive element 121 relative to the first photosensitive element 111
Figure BDA0002720066770000096
And the external parameter of the fourth photosensitive element 124 relative to the first photosensitive element 111
Figure BDA0002720066770000097
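As an illustration of S1, the following is a minimal sketch of calibrating one pair of photosensitive elements from synchronized chessboard views with OpenCV; the board geometry, square size, and image lists are placeholder assumptions, and the patent does not prescribe a particular calibration routine.

import cv2
import numpy as np

def calibrate_pair(imgs_a, imgs_b, board=(9, 6), square=1.0):
    """Estimate intrinsics (K, dist) of two cameras and the pose (R, t) of
    camera b relative to camera a from synchronized chessboard views."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, pts_a, pts_b = [], [], []
    for ia, ib in zip(imgs_a, imgs_b):
        ok_a, ca = cv2.findChessboardCorners(ia, board)
        ok_b, cb = cv2.findChessboardCorners(ib, board)
        if ok_a and ok_b:
            obj_pts.append(objp)
            pts_a.append(ca)
            pts_b.append(cb)
    size = (imgs_a[0].shape[1], imgs_a[0].shape[0])
    _, Ka, da, _, _ = cv2.calibrateCamera(obj_pts, pts_a, size, None, None)
    _, Kb, db, _, _ = cv2.calibrateCamera(obj_pts, pts_b, size, None, None)
    # Keep the per-camera intrinsics fixed and estimate only the relative pose
    _, _, _, _, _, R, t, _, _ = cv2.stereoCalibrate(
        obj_pts, pts_a, pts_b, Ka, da, Kb, db, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return Ka, da, Kb, db, R, t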
S2: at a given microscope magnification i, control the first photosensitive element 111, the second photosensitive element 112, the third photosensitive element 121, and the fourth photosensitive element 124 through the synchronous camera 131 so that they photograph the measured object simultaneously, recording the images $I_1^i$, $I_2^i$, $I_3^i$, and $I_4^i$ generated by the first, second, third, and fourth photosensitive elements respectively.
S3: using the intrinsic and extrinsic parameters of the first photosensitive element 111 and the intrinsic and extrinsic parameters of the second photosensitive element 112, rectify the image pair $(I_1^i, I_2^i)$ with a stereo rectification algorithm from computer vision so that point pairs with the same features in the first image $I_1^i$ and the second image $I_2^i$ are row-aligned, obtaining the rectified image pair $(\hat{I}_1^i, \hat{I}_2^i)$ and the reprojection matrix $Q_1$ of the rectified first photosensitive element 111;
using the intrinsic and extrinsic parameters of the third photosensitive element 121 and the intrinsic and extrinsic parameters of the fourth photosensitive element 124, rectify the image pair $(I_3^i, I_4^i)$ in the same way so that point pairs with the same features in the third image $I_3^i$ and the fourth image $I_4^i$ are row-aligned, obtaining the rectified image pair $(\hat{I}_3^i, \hat{I}_4^i)$ and the reprojection matrix $Q_3$ of the rectified third photosensitive element 121.
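S3 corresponds to standard stereo rectification. The sketch below assumes the intrinsics, distortion coefficients, and relative pose (R, t) come from the calibration of S1; cv2.stereoRectify also returns the reprojection matrix Q used later in S5.

import cv2

def rectify_pair(img_a, img_b, Ka, da, Kb, db, R, t):
    """Row-align a stereo pair; return the rectified images and Q."""
    size = (img_a.shape[1], img_a.shape[0])
    Ra, Rb, Pa, Pb, Q, _, _ = cv2.stereoRectify(Ka, da, Kb, db, size, R, t)
    map_ax, map_ay = cv2.initUndistortRectifyMap(Ka, da, Ra, Pa, size, cv2.CV_32FC1)
    map_bx, map_by = cv2.initUndistortRectifyMap(Kb, db, Rb, Pb, size, cv2.CV_32FC1)
    rect_a = cv2.remap(img_a, map_ax, map_ay, cv2.INTER_LINEAR)
    rect_b = cv2.remap(img_b, map_bx, map_by, cv2.INTER_LINEAR)
    return rect_a, rect_b, Q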
S4: apply a dense matching algorithm to the rectified image pairs $(\hat{I}_1^i, \hat{I}_2^i)$ and $(\hat{I}_3^i, \hat{I}_4^i)$ respectively, obtaining the disparity map $d_{12}$ of the pair $(\hat{I}_1^i, \hat{I}_2^i)$ and the disparity map $d_{34}$ of the pair $(\hat{I}_3^i, \hat{I}_4^i)$.
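One possible realization of S4 is semi-global block matching, sketched below; the patent only requires some dense matcher, and a dense optical-flow method or a learned stereo network would serve equally. All parameters are illustrative.

import cv2

def dense_disparity(rect_a, rect_b, max_disp=160):
    """Dense disparity on a rectified pair; inputs may be color (visible
    pair) or grayscale (infrared pair). Parameters are illustrative."""
    if rect_a.ndim == 3:
        rect_a = cv2.cvtColor(rect_a, cv2.COLOR_BGR2GRAY)
        rect_b = cv2.cvtColor(rect_b, cv2.COLOR_BGR2GRAY)
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0, numDisparities=max_disp, blockSize=5,
        P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
    # OpenCV returns fixed-point disparities scaled by 16
    return sgbm.compute(rect_a, rect_b).astype("float32") / 16.0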
S5: for the first rectified image $\hat{I}_1^i$ and the second rectified image $\hat{I}_2^i$, based on the reprojection matrix $Q_1$ and the disparity map $d_{12}$, use triangulation from computer vision to obtain the spatial coordinates of every point of $\hat{I}_1^i$ in the camera coordinate system of the first photosensitive element 111, generating the spatial point cloud $P_1$;
for the third rectified image $\hat{I}_3^i$ and the fourth rectified image $\hat{I}_4^i$, based on the reprojection matrix $Q_3$ and the disparity map $d_{34}$, use triangulation from computer vision to obtain the spatial coordinates of every point of $\hat{I}_3^i$ in the camera coordinate system of the third photosensitive element 121, generating the spatial point cloud $P_2$.
S6: use the spatial point clouds $P_1$ and $P_2$ to eliminate erroneous reconstruction results in textureless regions, correcting the spatial point cloud $P_1$.
Specifically, in S5 the triangulation computes, for each point of the first rectified image $\hat{I}_1^i$, its spatial coordinates in the camera coordinate system of the first photosensitive element 111 by the reprojection
$[X\ Y\ Z\ W]^T = Q_1 \cdot [x\ y\ d_{12}(x, y)\ 1]^T$,
where $(x, y)$ is a point of the first rectified image $\hat{I}_1^i$, $d_{12}(x, y)$ is the disparity value at $(x, y)$ in the disparity map, and $(X, Y, Z, W)$ are the homogeneous spatial coordinates of $(x, y)$ in the coordinate system of the photosensitive element (the Euclidean point is $(X/W, Y/W, Z/W)$). In this way the spatial point cloud $P_1$ corresponding to the image captured by the first photosensitive element 111 is obtained. Similarly, the spatial point cloud $P_2$ is obtained from the stereo image pair formed by the third photosensitive element 121 and the fourth photosensitive element 124.
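This reprojection is exactly what OpenCV's reprojectImageTo3D performs with the matrix Q obtained at rectification; a minimal sketch, with masking of invalid disparities added as a practical detail:

import cv2

def disparity_to_point_cloud(disp, Q):
    """Apply [X Y Z W]^T = Q [x y d 1]^T to every pixel (the division by W
    is done internally) and keep only points with positive disparity."""
    pts = cv2.reprojectImageTo3D(disp, Q)
    mask = disp > 0
    return pts[mask].reshape(-1, 3)   # point cloud as an (N, 3) array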
Specifically, the dense matching algorithm in S4 uses a dense optical flow algorithm or a deep learning-based stereo matching algorithm.
Specifically, S6 includes:
S6.1: based on the spatial relation between the third photosensitive element 121 and the first photosensitive element 111, transform the spatial point cloud $P_2$ located in the coordinate system of the third photosensitive element 121 into the coordinate system of the first photosensitive element 111, forming the transformed spatial point cloud $P_2'$. Specifically, any point $(X_{p2}, Y_{p2}, Z_{p2}) \in P_2$ has spatial coordinates $(X_{p1}, Y_{p1}, Z_{p1})$ in the coordinate system of the first photosensitive element 111 satisfying
$[X_{p1}\ Y_{p1}\ Z_{p1}\ 1]^T = T_{31}^i \cdot [X_{p2}\ Y_{p2}\ Z_{p2}\ 1]^T$.
The model of $P_2$ under the new coordinate system is the spatial point cloud $P_2'$.
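A sketch of the S6.1 transform, assuming the extrinsic is available as a 4x4 homogeneous matrix mapping the third element's frame into the first element's frame (it would be inverted if the calibrated convention were the reverse):

import numpy as np

def transform_point_cloud(P2, T31):
    """Map an (N, 3) point cloud from the third element's coordinate system
    into the first element's coordinate system with a 4x4 matrix T31."""
    homo = np.hstack([P2, np.ones((P2.shape[0], 1))])   # (N, 4) homogeneous
    mapped = (T31 @ homo.T).T
    return mapped[:, :3] / mapped[:, 3:4]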
S6.2: render the transformed spatial point cloud $P_2'$ using point-cloud triangulation from computer vision, obtaining the rendered point cloud $\tilde{P}_2$.
S6.3: use the rendered point cloud $\tilde{P}_2$ to optimize the spatial point cloud $P_1$:
for each point $P_{1t}(X_{1t}, Y_{1t}, Z_{1t})$ in the spatial point cloud $P_1$, obtain the set of neighboring points $N = \{P_{1t}^1, P_{1t}^2, \ldots, P_{1t}^n\}$, where n is the number of neighborhood points and $P_{1t}^j$ is a neighborhood point of $P_{1t}$;
use the least-squares method to fit the plane $Ax + By + Cz + D = 0$ to the neighborhood points of $P_{1t}$, obtaining the normal vector $(A, B, C)$ at $P_{1t}$; then, from the point-direction equation, find the line $l$ through $P_{1t}$ parallel to the normal vector:
$\frac{x - X_{1t}}{A} = \frac{y - Y_{1t}}{B} = \frac{z - Z_{1t}}{C}$;
then take the intersection point of the line $l$ with the rendered point cloud $\tilde{P}_2$ as the new coordinates of $P_{1t}$;
iterate the above process to complete the position optimization of the points in the spatial point cloud $P_1$, obtaining the optimized spatial point cloud $\hat{P}_1$ under visible light.
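A hedged sketch of S6.3: neighbors are found with a k-d tree, the least-squares plane normal is taken as the smallest singular vector of the centered neighborhood (equivalent to the plane fit described above), and the line-mesh intersection uses trimesh's ray casting. The use of scipy and trimesh, and the neighborhood size k = 10, are assumptions for illustration.

import numpy as np
from scipy.spatial import cKDTree
import trimesh  # assumed available for the meshed point cloud and ray casting

def optimize_point_cloud(P1, mesh, k=10):
    """For each point of P1: fit a plane to its k nearest neighbors, cast
    the line l along the plane normal, and move the point to the nearest
    intersection with the mesh rendered from P2."""
    tree = cKDTree(P1)
    out = P1.copy()
    for t, p in enumerate(P1):
        _, idx = tree.query(p, k=k)
        nbrs = P1[idx]
        centered = nbrs - nbrs.mean(axis=0)
        normal = np.linalg.svd(centered)[2][-1]   # least-squares plane normal
        locs, _, _ = mesh.ray.intersects_location(
            ray_origins=np.array([p, p]),
            ray_directions=np.array([normal, -normal]))
        if len(locs):
            out[t] = locs[np.argmin(np.linalg.norm(locs - p, axis=1))]
    return out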
The invention acquires pattern information of the measured scene through the visible-light viewpoint acquisition unit 110, acquires the infrared speckle pattern of the measured scene through the infrared viewpoint acquisition unit 120, and uses the three-dimensional reconstruction calculation control unit 130 to control the shooting of the visible-light viewpoint acquisition unit 110 and the infrared viewpoint acquisition unit 120 and to fuse their patterns into a three-dimensional reconstruction result. The scheme introduces multi-viewpoint joint optimization and infrared-speckle-based enhancement of object surface texture into high-precision three-dimensional reconstruction; through the arrangement of the infrared photosensitive elements and speckle projectors, the surface structure of the operating field is acquired accurately and used as a prior to optimize the three-dimensional reconstruction model under visible light, improving reconstruction accuracy under the microscope without affecting the microscope's main optical path.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.

Claims (8)

1.一种显微手术术野三维重建系统,其特征在于,包括:1. A three-dimensional reconstruction system of a microsurgery field is characterized in that, comprising: 可见光视点采集单元:用于采集被测量场景的图案信息;所述可见光视点采集单元包括第一感光元件、第一光学变倍体、第二感光元件、第二光学变倍体及主视野物镜;Visible light viewpoint collection unit: used to collect pattern information of the scene to be measured; the visible light viewpoint collection unit includes a first photosensitive element, a first optical zoom body, a second photosensitive element, a second optical zoom body, and a main field of view objective lens; 所述第一感光元件作为术野视点采集中的第一视角接收被测物体表面发出的光子并呈现被测物体在第一观测视角下的像;所述第一光学变倍体采用光学变倍镜组改变被测物体在所述第一感光元件上的放大倍率;The first photosensitive element receives photons emitted from the surface of the measured object as the first viewing angle in the operative field viewpoint collection and presents the image of the measured object under the first observation viewing angle; the first optical zoom body adopts an optical zoom The lens group changes the magnification of the measured object on the first photosensitive element; 所述第二感光元件作为术野视点采集中的第二视角接收被测物体表面发出的光子并呈现被测物体在第二观测视角下的像;所述第二光学变倍体采用光学变倍镜组改变被测物体在所述第二感光元件上的放大倍率;The second photosensitive element receives photons emitted from the surface of the measured object as the second viewing angle in the operative field viewpoint collection and presents the image of the measured object under the second observation viewing angle; the second optical zoom body adopts an optical zoom The lens group changes the magnification of the measured object on the second photosensitive element; 所述主视野物镜用于确定和改变由第一观测视角和第一观测视角的光路所形成的显微镜工作距离;The main field of view objective lens is used to determine and change the microscope working distance formed by the first observation angle of view and the optical path of the first observation angle of view; 红外光视点采集单元:用于采集被测量场景的红外散斑图案;所述红外光视点采集单元包括第一散斑投射器、第一红外光学透镜组件、第三感光元件、第二散斑投射器、第二红外光学透镜组件和第四感光元件;Infrared light viewpoint collection unit: used to collect the infrared speckle pattern of the measured scene; the infrared light viewpoint collection unit includes a first speckle projector, a first infrared optical lens assembly, a third photosensitive element, and a second speckle projection a device, a second infrared optical lens assembly and a fourth photosensitive element; 所述第一散斑投射器用于投射激光散斑,所述激光散斑通过所述第一红外光学透镜组件投射到被测物体表面形成具有给定图案形式的第一组红外散斑点;被测物体表面上的第一组红外散斑点反射后通过所述第一红外光学透镜组件在所述第三光学感光元件上成像;The first speckle projector is used to project laser speckles, and the laser speckles are projected onto the surface of the object to be measured through the first infrared optical lens assembly to form a first group of infrared speckles with a given pattern; The first group of infrared speckles on the surface of the object are reflected on the third optical photosensitive element through the first infrared optical lens assembly; 所述第二散斑投射器用于投射激光散斑,所述激光散斑通过所述第二红外光学透镜组件投射到被测物体表面形成具有给定图案形式的第二组红外散斑点;被测物体表面上的第二组红外散斑点反射后通过所述第二红外光学透镜组件在所述第四光学感光元件上成像;The second speckle projector is used to project laser speckles, and the laser speckles are projected onto the surface of the object to be measured through the second infrared optical lens assembly to form a second group of infrared speckles with a given pattern; The second group of infrared speckles on the surface of the object are reflected on the fourth optical photosensitive element through the second infrared optical lens assembly; 三维重建计算控制单元:用于控制所述可见光视点采集单元和红外光视点采集单元的拍摄,并将所述可见光视点采集单元得到的图案与所述红外视点采集单元得到的图案进行信息融合,以获得三维重建结果。Three-dimensional reconstruction calculation control unit: used to control the shooting of the 
visible light viewpoint collection unit and the infrared light viewpoint collection unit, and to perform information fusion between the pattern obtained by the visible light viewpoint collection unit and the pattern obtained by the infrared viewpoint collection unit, to Obtain 3D reconstruction results. 2.根据权利要求1所述的一种显微手术术野三维重建系统,其特征在于,所述可见光视点采集单元还包括照明光源组件,所述照明光源组件用于给所述被测物体进行照明。2 . The three-dimensional reconstruction system for a microsurgery operating field according to claim 1 , wherein the visible light viewpoint collection unit further comprises an illumination light source assembly, and the illumination light source assembly is used to perform the measurement on the measured object. 3 . illumination. 3.根据权利要求1所述的一种显微手术术野三维重建系统,其特征在于,所述第一散斑投射器、第一红外光学透镜组件和第三感光元件位于所述主视野物镜的一侧;所述第二散斑投射器、第二红外光学透镜组件和第四感光元件位于所述主视野物镜的另外一侧。3 . The three-dimensional reconstruction system for a microsurgery operating field according to claim 1 , wherein the first speckle projector, the first infrared optical lens assembly and the third photosensitive element are located in the main field objective lens. 4 . The second speckle projector, the second infrared optical lens assembly and the fourth photosensitive element are located on the other side of the main field objective lens. 4.根据权利要求1所述的一种显微手术术野三维重建系统,其特征在于,所述第一感光元件和第二感光元件采用对可见光感知的彩色感光元件;所述第三感光元件和第四感光元件采用对红外光的灰度感光元件。4 . The three-dimensional reconstruction system for a microsurgery field according to claim 1 , wherein the first photosensitive element and the second photosensitive element are color photosensitive elements that perceive visible light; the third photosensitive element And the fourth photosensitive element adopts a grayscale photosensitive element for infrared light. 5.根据权利要求1所述的一种显微手术术野三维重建系统,其特征在于,所述三维重建计算控制单元包括同步相机和计算设备;所述同步相机分别与所述第一感光元件、第二感光元件、第三感光元件和第四感光元件连接;所述计算设备与所述同步相机连接,计算设备用于将第一感光元件、第二感光元件、第三感光元件和第四感光元件获得的数据进行处理得到最终的三维重建结果。5 . The three-dimensional reconstruction system for a microsurgery operating field according to claim 1 , wherein the three-dimensional reconstruction calculation control unit comprises a synchronous camera and a computing device; the synchronous camera is respectively connected with the first photosensitive element. 6 . , the second photosensitive element, the third photosensitive element and the fourth photosensitive element are connected; the computing device is connected with the synchronous camera, and the computing device is used to connect the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element The data obtained by the photosensitive element is processed to obtain the final 3D reconstruction result. 6.一种显微手术术野三维重建方法,用于如权利要求1至5任一项的显微手术术野三维重建系统,其特征在于,包括以下步骤:6. A method for three-dimensional reconstruction of a microsurgery operating field, which is used for the three-dimensional reconstruction system of a microsurgery operating field according to any one of claims 1 to 5, characterized in that, comprising the following steps: 步骤1、对第一感光元件、第二感光元件、第三感光元件和第四感光元件在预设显微镜放大倍率下进行标定,获取第一感光元件内参数
Figure FDA0002720066760000021
第二感光元件内参数
Figure FDA0002720066760000022
第三感光元件内参数
Figure FDA0002720066760000023
和第四感光元件内参数
Figure FDA0002720066760000024
并获取第二感光元件相对于第一感光元件的外参数
Figure FDA0002720066760000025
第三感光元件相对于第一感光元件的外参数
Figure FDA0002720066760000026
和第四感光元件相对于第一感光元件的外参数
Figure FDA0002720066760000027
Step 1. Calibrate the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element under the preset microscope magnification, and obtain the internal parameters of the first photosensitive element
Figure FDA0002720066760000021
Internal parameters of the second photosensitive element
Figure FDA0002720066760000022
Internal parameters of the third photosensitive element
Figure FDA0002720066760000023
and the fourth photosensitive element internal parameters
Figure FDA0002720066760000024
And get the external parameters of the second photosensitive element relative to the first photosensitive element
Figure FDA0002720066760000025
External parameters of the third photosensitive element relative to the first photosensitive element
Figure FDA0002720066760000026
and the extrinsic parameters of the fourth photosensitive element relative to the first photosensitive element
Figure FDA0002720066760000027
步骤2、在给定显微镜放大倍率i下,通过同步相机控制第一感光元件、第二感光元件、第三感光元件和第四感光元件,使第一感光元件、第二感光元件、第三感光元件和第四感光元件同时拍摄被测物体,记录第一感光元件所生成的图像
Figure FDA0002720066760000028
第二感光元件所生成的图像
Figure FDA0002720066760000029
第三感光元件所生成的图像
Figure FDA00027200667600000210
和第四感光元件所生成的图像
Figure FDA00027200667600000211
Step 2. Under a given microscope magnification i, control the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element by synchronizing the camera, so that the first photosensitive element, the second photosensitive element and the third photosensitive element are The element and the fourth photosensitive element shoot the object to be measured at the same time, and record the image generated by the first photosensitive element
Figure FDA0002720066760000028
Image generated by the second photosensitive element
Figure FDA0002720066760000029
Image generated by the third photosensitive element
Figure FDA00027200667600000210
and the image generated by the fourth photosensitive element
Figure FDA00027200667600000211
步骤3、采用第一感光元件的内参数和外参数、第二感光元件的内参数和外参数,利用计算机视觉中的立体校正算法对图像对
Figure FDA00027200667600000212
进行校正,使得图像对
Figure FDA00027200667600000213
中第一图像
Figure FDA00027200667600000214
和第二图像
Figure FDA00027200667600000215
具有相同特征的点对实现行对齐,得到校正图像对
Figure FDA0002720066760000031
并得到校正后第一感光元件的重投影矩阵Q1
Step 3. Using the internal parameters and external parameters of the first photosensitive element and the internal parameters and external parameters of the second photosensitive element, use the stereo correction algorithm in computer vision to pair the images.
Figure FDA00027200667600000212
Correction is made so that the image pair
Figure FDA00027200667600000213
in the first image
Figure FDA00027200667600000214
and the second image
Figure FDA00027200667600000215
Pairs of points with the same feature are aligned to obtain corrected image pairs
Figure FDA0002720066760000031
and obtain the reprojection matrix Q 1 of the corrected first photosensitive element;
采用第三感光元件的内参数和外参数、第四感光元件的内参数和外参数,利用计算机视觉中的立体校正算法对图像对
Figure FDA0002720066760000032
进行校正,使得图像对
Figure FDA0002720066760000033
中第三图像
Figure FDA0002720066760000034
和第四图像
Figure FDA0002720066760000035
具有相同特征的点对实现行对齐,得到校正图像对
Figure FDA0002720066760000036
并得到校正后第三感光元件的重投影矩阵Q3
Using the intrinsic and extrinsic parameters of the third photosensitive element and the intrinsic and extrinsic parameters of the fourth photosensitive element, the stereo correction algorithm in computer vision is used to compare the image pairs.
Figure FDA0002720066760000032
Correction is made so that the image pair
Figure FDA0002720066760000033
middle third image
Figure FDA0002720066760000034
and the fourth image
Figure FDA0002720066760000035
Pairs of points with the same feature are aligned to obtain corrected image pairs
Figure FDA0002720066760000036
and obtain the reprojection matrix Q 3 of the third photosensitive element after correction;
步骤4、分别对所述校正图像对
Figure FDA0002720066760000037
和校正图像对
Figure FDA0002720066760000038
使用稠密匹配算法,获得所述图像对
Figure FDA0002720066760000039
的视差图d12以及所述图像对
Figure FDA00027200667600000310
Figure FDA00027200667600000311
的视差图d34
Step 4. Pair the corrected images respectively
Figure FDA0002720066760000037
and corrected image pairs
Figure FDA0002720066760000038
Using a dense matching algorithm, the image pair is obtained
Figure FDA0002720066760000039
The disparity map d 12 and the image pair
Figure FDA00027200667600000310
Figure FDA00027200667600000311
the disparity map d 34 ;
步骤5、对所述校正图像对
Figure FDA00027200667600000312
中的第一校正图像
Figure FDA00027200667600000313
和第二校正图像
Figure FDA00027200667600000314
基于所述重投影矩阵Q1和视差图d12,使用计算机视觉中的三角测量方法得到第一校正图像
Figure FDA00027200667600000315
中每一点在所述第一感光元件的相机坐标系下的空间坐标,生成空间点云P1
Step 5. Pair the corrected image
Figure FDA00027200667600000312
The first corrected image in
Figure FDA00027200667600000313
and the second corrected image
Figure FDA00027200667600000314
Based on the reprojection matrix Q 1 and the disparity map d 12 , a first corrected image is obtained using the triangulation method in computer vision
Figure FDA00027200667600000315
the spatial coordinates of each point in the camera coordinate system of the first photosensitive element to generate a spatial point cloud P 1 ;
对所述校正图像对
Figure FDA00027200667600000316
中的第三校正图像
Figure FDA00027200667600000317
和第四校正图像
Figure FDA00027200667600000318
基于所述重投影矩阵Q3和视差图d34,使用计算机视觉中的三角测量方法得到第三校正图像
Figure FDA00027200667600000319
中每一点在所述第三感光元件相机坐标系下的空间坐标,生成空间点云P2
for the corrected image pair
Figure FDA00027200667600000316
The third corrected image in
Figure FDA00027200667600000317
and the fourth corrected image
Figure FDA00027200667600000318
Based on the reprojection matrix Q 3 and the disparity map d 34 , a third corrected image is obtained using the triangulation method in computer vision
Figure FDA00027200667600000319
the spatial coordinates of each point in the third photosensitive element camera coordinate system to generate a spatial point cloud P 2 ;
Step 6. Use the spatial point cloud P1 and the spatial point cloud P2 to eliminate erroneous reconstruction results in textureless regions, thereby correcting the spatial point cloud P1.
7. The method for three-dimensional reconstruction of a microsurgery operating field according to claim 6, wherein the dense matching algorithm in step 4 is a dense optical flow algorithm or a deep-learning-based stereo matching algorithm.

8. The method for three-dimensional reconstruction of a microsurgery operating field according to claim 6, wherein step 6 comprises:

Step 6.1. Based on the spatial relationship between the third photosensitive element and the first photosensitive element, transform the spatial point cloud P2 from the coordinate system of the third photosensitive element into the coordinate system of the first photosensitive element, forming the transformed spatial point cloud P2';
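The transform in step 6.1 is an ordinary rigid motion; a minimal sketch, assuming R_31 and t_31 (rotation and translation from the third to the first photosensitive element, illustrative names) come from the stereo calibration:

```python
import numpy as np

def transform_cloud(P, R, t):
    """Apply X' = R @ X + t to each row of an (N, 3) point cloud."""
    return (R @ P.T).T + t.reshape(1, 3)

# P2_cloud is the (N, 3) cloud P2 from the triangulation sketch above
P2_prime = transform_cloud(P2_cloud, R_31, t_31)   # transformed cloud P2'
```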
Step 6.2. Render the transformed spatial point cloud P2' using point cloud triangulation from computer vision, obtaining the rendered spatial point cloud P2'';
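One simple way to triangulate such a cloud, shown only as a possible realization of step 6.2, is a 2.5D Delaunay triangulation over the (x, y) projection, keeping z per vertex:

```python
import numpy as np
from scipy.spatial import Delaunay

# Triangulate the (x, y) footprint of P2'; each simplex indexes three
# vertices of P2_prime and defines one triangle of the rendered surface.
tri = Delaunay(P2_prime[:, :2])
mesh_faces = tri.simplices   # (M, 3) vertex indices into P2_prime
```

The line-surface intersection of step 6.3 would then be computed against these triangles.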
Step 6.3. Use the rendered spatial point cloud P2'' to optimize the spatial point cloud P1:
For each point P1t(X1t, Y1t, Z1t) in the spatial point cloud P1, obtain its set of neighboring points N = {P1t(1), P1t(2), …, P1t(n)}, where n is the number of neighborhood points and P1t(i) is the i-th neighborhood point of P1t;
Fit a plane Ax + By + Cz + D = 0 to the neighborhood points of P1t by the least squares method, giving the normal vector (A, B, C) at P1t; then, from the point-direction form of a line, obtain the line l that passes through P1t and is parallel to that normal vector:

(x − X1t) / A = (y − Y1t) / B = (z − Z1t) / C;
Then take the intersection of the line l with the rendered spatial point cloud P2'' as the new coordinates of P1t;
Iterate the above process to complete the position optimization of the points in the spatial point cloud P1, obtaining the optimized spatial point cloud P1' under visible light.
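A compact sketch of the step 6.3 loop: least-squares plane fitting via SVD gives the per-point normal, and the line l is then intersected with the rendered surface. For brevity this sketch replaces the true line-mesh intersection with a projection of the nearest P2' point onto l; the neighborhood size k and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_plane_normal(pts):
    """Unit normal (A, B, C) of the least-squares plane through pts."""
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)   # smallest singular vector = normal
    return vt[-1]

tree1 = cKDTree(P1_cloud)    # neighborhood queries within P1
tree2 = cKDTree(P2_prime)    # crude stand-in for the rendered surface P2''

P1_opt = P1_cloud.copy()
for i, p in enumerate(P1_cloud):
    _, idx = tree1.query(p, k=8)           # neighborhood N of P1t
    n = fit_plane_normal(P1_cloud[idx])    # normal (A, B, C) at P1t
    # Line l: x = p + s*n. Approximate its intersection with the rendered
    # surface by projecting the nearest P2' point onto l.
    q = P2_prime[tree2.query(p, k=1)[1]]
    s = float(np.dot(q - p, n))
    P1_opt[i] = p + s * n                  # new coordinates of P1t
```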
CN202011084952.8A 2020-10-12 2020-10-12 Microsurgery surgical field three-dimensional reconstruction system and method Active CN112294453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011084952.8A CN112294453B (en) 2020-10-12 2020-10-12 Microsurgery surgical field three-dimensional reconstruction system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011084952.8A CN112294453B (en) 2020-10-12 2020-10-12 Microsurgery surgical field three-dimensional reconstruction system and method

Publications (2)

Publication Number Publication Date
CN112294453A true CN112294453A (en) 2021-02-02
CN112294453B CN112294453B (en) 2022-04-15

Family

ID=74489833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011084952.8A Active CN112294453B (en) 2020-10-12 2020-10-12 Microsurgery surgical field three-dimensional reconstruction system and method

Country Status (1)

Country Link
CN (1) CN112294453B (en)


Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170249053A1 (en) * 2011-02-10 2017-08-31 Edge 3 Technologies, Inc. Near Touch Interaction
US20130343634A1 (en) * 2012-06-26 2013-12-26 Xerox Corporation Contemporaneously reconstructing images captured of a scene illuminated with unstructured and structured illumination sources
CN103337094A (en) * 2013-06-14 2013-10-02 西安工业大学 Method for realizing three-dimensional reconstruction of movement by using binocular camera
CN103279987A (en) * 2013-06-18 2013-09-04 厦门理工学院 Object fast three-dimensional modeling method based on Kinect
CN103337071A (en) * 2013-06-19 2013-10-02 北京理工大学 Device and method for structure-reconstruction-based subcutaneous vein three-dimensional visualization
CN103810708A (en) * 2014-02-13 2014-05-21 西安交通大学 Method and device for perceiving depth of laser speckle image
US20170154436A1 (en) * 2015-05-27 2017-06-01 Zhuhai Ritech Technology Co. Ltd. Stereoscopic vision three dimensional measurement method and system for calculating laser speckle as texture
CN106875468A (en) * 2015-12-14 2017-06-20 深圳先进技术研究院 Three-dimensional reconstruction apparatus and method
CN105608734A (en) * 2015-12-23 2016-05-25 王娟 Three-dimensional image information acquisition apparatus and image reconstruction method therefor
CN106691491A (en) * 2017-02-28 2017-05-24 赛诺威盛科技(北京)有限公司 CT (computed tomography) positioning system implemented by using visible light and infrared light and CT positioning method
CN108921027A (en) * 2018-06-01 2018-11-30 杭州荣跃科技有限公司 A kind of running disorder object recognition methods based on laser speckle three-dimensional reconstruction
CN109242812A (en) * 2018-09-11 2019-01-18 中国科学院长春光学精密机械与物理研究所 Image interfusion method and device based on conspicuousness detection and singular value decomposition
CN109903376A (en) * 2019-02-28 2019-06-18 四川川大智胜软件股份有限公司 A three-dimensional face modeling method and system assisted by face geometry information
CN110363806A (en) * 2019-05-29 2019-10-22 中德(珠海)人工智能研究院有限公司 A Method of Using Invisible Light Casting Features for 3D Space Modeling
CN111009007A (en) * 2019-11-20 2020-04-14 华南理工大学 Finger multi-feature comprehensive three-dimensional reconstruction method
CN110940295A (en) * 2019-11-29 2020-03-31 北京理工大学 High-reflection object measurement method and system based on laser speckle limit constraint projection
CN111145342A (en) * 2019-12-27 2020-05-12 山东中科先进技术研究院有限公司 A binocular speckle structured light three-dimensional reconstruction method and system
CN111260765A (en) * 2020-01-13 2020-06-09 浙江未来技术研究院(嘉兴) A Dynamic 3D Reconstruction Method of Microsurgery Field
CN111491151A (en) * 2020-03-09 2020-08-04 浙江未来技术研究院(嘉兴) Microsurgical stereoscopic video rendering method
CN111685711A (en) * 2020-05-25 2020-09-22 中国科学院苏州生物医学工程技术研究所 Medical endoscope three-dimensional imaging system based on 3D camera

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113721359A (en) * 2021-09-06 2021-11-30 戴朴 System and method for real-time three-dimensional measurement of key indexes in ear microsurgery
CN113721359B (en) * 2021-09-06 2024-07-05 戴朴 System and method for real-time three-dimensional measurement of key indexes in ear microsurgery
CN114782631A (en) * 2022-04-29 2022-07-22 四川中天鑫源生命科技有限公司 Cell shadow picture formation of image VR dress equipment system

Also Published As

Publication number Publication date
CN112294453B (en) 2022-04-15

Similar Documents

Publication Publication Date Title
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
Schmalz et al. An endoscopic 3D scanner based on structured light
JP7379704B2 (en) System and method for integrating visualization camera and optical coherence tomography
US7953271B2 (en) Enhanced object reconstruction
JP4343341B2 (en) Endoscope device
CN102506757B (en) Self-positioning method in multi-angle measurement of binocular stereo measurement system
US9392262B2 (en) System and method for 3D reconstruction using multiple multi-channel cameras
CN112967330B (en) Endoscopic image three-dimensional reconstruction method combining SfM and binocular matching
CN109186491A (en) Parallel multi-thread laser measurement system and measurement method based on homography matrix
US20210169320A1 (en) Surgical applications with integrated visualization camera and optical coherence tomography
CN113108721A (en) High-reflectivity object three-dimensional measurement method based on multi-beam self-adaptive complementary matching
AU2006295455A1 (en) Artifact mitigation in three-dimensional imaging
CN102831601A (en) Three-dimensional matching method based on union similarity measure and self-adaptive support weighting
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
CN112294453B (en) Microsurgery surgical field three-dimensional reconstruction system and method
CN107610215B (en) A high-precision multi-angle oral three-dimensional digital imaging model construction method
CN114004880B (en) Point cloud and strong reflection target real-time positioning method of binocular camera
CN1544883A (en) Three-dimensional foot type measuring and modeling method based on specific grid pattern
CN114264253B (en) Device and method for non-contact measurement of three-dimensional profile of high-temperature object
ES2734676T3 (en) Stereoscopic display system and endoscope method using a shadow-based algorithm
JP4750197B2 (en) Endoscope device
CN117765042A (en) Registration method and device for oral tomographic image, computer equipment and storage medium
CN112804515A (en) Omnidirectional stereoscopic vision camera configuration system and camera configuration method
Guo et al. An accurate speckle 3D reconstruction system based on binocular endoscope
CN111481293A (en) Multi-viewpoint optical positioning method and system based on optimal viewpoint selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240311

Address after: 314050 9F, No. 705, Asia Pacific Road, Nanhu District, Jiaxing City, Zhejiang Province

Patentee after: ZHEJIANG YANGTZE DELTA REGION INSTITUTE OF TSINGHUA University

Country or region after: China

Address before: No.152 Huixin Road, Nanhu District, Jiaxing City, Zhejiang Province 314000

Patentee before: ZHEJIANG FUTURE TECHNOLOGY INSTITUTE (JIAXING)

Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20241121

Address after: 314000 room 307, building 1, No. 152, Huixin Road, Daqiao Town, Nanhu District, Jiaxing City, Zhejiang Province

Patentee after: Jiaxing Zhitong Technology Co.,Ltd.

Country or region after: China

Address before: 314050 9F, No. 705, Asia Pacific Road, Nanhu District, Jiaxing City, Zhejiang Province

Patentee before: ZHEJIANG YANGTZE DELTA REGION INSTITUTE OF TSINGHUA University

Country or region before: China
