CN112294453B - Microsurgery surgical field three-dimensional reconstruction system and method
- Publication number: CN112294453B
- Application number: CN202011084952.8A
- Authority: CN (China)
- Prior art keywords: photosensitive element, infrared, three-dimensional reconstruction, image, acquisition unit
- Legal status: Active
Classifications
- A61B90/20—Surgical microscopes characterised by non-optical aspects
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/367—Correlation of different images or relation of image positions in respect to the body creating a 3D dataset from 2D images using position information
- A61B2090/373—Surgical systems with images on a monitor during operation using light, e.g. by using optical scanners
Abstract
A three-dimensional reconstruction system and method for the microsurgical operative field are disclosed. A visible light viewpoint acquisition unit collects image information of the measured scene; an infrared light viewpoint acquisition unit collects infrared speckle images of the same scene; and a three-dimensional reconstruction calculation control unit controls the shooting of both acquisition units and fuses the images obtained by the visible light viewpoint acquisition unit with those obtained by the infrared viewpoint acquisition unit to obtain a three-dimensional reconstruction result. The technical scheme introduces multi-viewpoint joint optimization and infrared-speckle enhancement of object surface texture into high-precision three-dimensional reconstruction: by arranging the infrared photosensitive elements and speckle projectors, the surface structure of the operative field can be acquired accurately and used as a prior to optimize the three-dimensional reconstruction model under visible light, improving reconstruction accuracy under the microscope without affecting the main optical path of the microscope.
Description
Technical Field
The invention relates to the technical field of stereoscopic microscopy imaging, and in particular to a three-dimensional reconstruction system and method for the microsurgical operative field.
Background
The microscope is a common auxiliary device in fine surgical operations: with its magnification, a doctor can clearly see the fine tissues of the human body in the operative field and treat the patient precisely. In recent years, three-dimensional reconstruction of the operative field area has attracted attention from researchers in the medical imaging field. Compared with traditional CT/MRI imaging, vision-based image reconstruction captures the color texture of the operative field surface, provides doctors with a more intuitive three-dimensional visual experience, and allows digital measurement of the operative field based on the reconstruction result, offering intraoperative guidance to the doctor. The technology therefore has great application value.
Existing methods for three-dimensional reconstruction of the operating area fall roughly into two categories. The first is based on binocular stereo vision, reconstructing the operating area from the parallax produced by the microscope's dual optical paths; it can often reconstruct only the area within a limited viewing angle. Moreover, the scene under a microscope has peculiarities compared with other vision applications: under the microscope illumination source the operative field contains many specular-reflection areas as well as large texture-free areas, and these factors often degrade the results of stereo matching algorithms, ultimately making the three-dimensional reconstruction difficult to use in clinic. The second category is structured-light three-dimensional reconstruction, with single-frame and multi-frame variants. Although its reconstruction accuracy is high, it requires an expensive structured-light projector and is time-consuming, making real-time clinical use difficult. In summary, a new technical scheme for three-dimensional reconstruction of the microsurgical operative field is urgently needed.
Disclosure of Invention
Therefore, the invention provides a microsurgical operative field three-dimensional reconstruction system and method for realizing multi-viewpoint, high-precision three-dimensional reconstruction of the operative field and solving the failure of three-dimensional reconstruction in specular-reflection and texture-free areas of the operating region.
In order to achieve the above purpose, the invention provides the following technical scheme: a microsurgical field three-dimensional reconstruction system, comprising:
a visible light viewpoint acquisition unit: used for acquiring image information of the measured scene; the visible light viewpoint acquisition unit comprises a first photosensitive element, a first optical zoom body, a second photosensitive element, a second optical zoom body and a main field objective;
the first photosensitive element serves as the first viewing angle in operative field viewpoint acquisition, receiving photons emitted from the surface of the measured object and presenting an image of the measured object at the first observation angle; the first optical zoom body adopts an optical zoom lens group to change the magnification of the measured object on the first photosensitive element;
the second photosensitive element serves as the second viewing angle in operative field viewpoint acquisition, receiving photons emitted from the surface of the measured object and presenting an image of the measured object at the second observation angle; the second optical zoom body adopts an optical zoom lens group to change the magnification of the measured object on the second photosensitive element;
the main field objective is used for determining and changing the working distance of the microscope formed by the optical paths of the first and second observation angles;
an infrared light viewpoint acquisition unit: used for acquiring infrared speckle images of the measured scene; the infrared light viewpoint acquisition unit comprises a first speckle projector, a first infrared optical lens assembly, a third photosensitive element, a second speckle projector, a second infrared optical lens assembly and a fourth photosensitive element;
the first speckle projector is used for projecting laser speckles, which are projected through the first infrared optical lens assembly onto the surface of the measured object to form a first group of infrared speckle spots in a given pattern; after being reflected from the surface of the measured object, the first group of infrared speckle spots is imaged on the third photosensitive element through the first infrared optical lens assembly;
the second speckle projector is used for projecting laser speckles, which are projected through the second infrared optical lens assembly onto the surface of the measured object to form a second group of infrared speckle spots in a given pattern; after being reflected from the surface of the measured object, the second group of infrared speckle spots is imaged on the fourth photosensitive element through the second infrared optical lens assembly;
a three-dimensional reconstruction calculation control unit: used for controlling the shooting of the visible light viewpoint acquisition unit and the infrared light viewpoint acquisition unit, and for fusing the images obtained by the visible light viewpoint acquisition unit with the images obtained by the infrared light viewpoint acquisition unit to obtain a three-dimensional reconstruction result.
As a preferred scheme of the microsurgical field three-dimensional reconstruction system, the visible light viewpoint acquisition unit further comprises an illumination light source assembly, and the illumination light source assembly is used for illuminating the measured object.
As a preferred scheme of the microsurgical field three-dimensional reconstruction system, the first speckle projector, the first infrared optical lens assembly and the third photosensitive element are positioned on one side of the main field objective; the second speckle projector, the second infrared optical lens assembly and the fourth photosensitive element are positioned on the other side of the main-field objective lens.
As a preferred scheme of the microsurgical field three-dimensional reconstruction system, the first photosensitive element and the second photosensitive element adopt color photosensitive elements sensitive to visible light; the third photosensitive element and the fourth photosensitive element adopt grayscale photosensitive elements sensitive to infrared light.
As a preferred scheme of the microsurgical field three-dimensional reconstruction system, the three-dimensional reconstruction calculation control unit comprises a synchronous camera and a calculation device; the synchronous camera is respectively connected with the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element; the computing equipment is connected with the synchronous camera and used for processing data obtained by the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element to obtain a final three-dimensional reconstruction result.
The invention also provides a three-dimensional reconstruction method of the microsurgical field, which is used for the three-dimensional reconstruction system of the microsurgical field and comprises the following steps:
step 1, calibrating a first photosensitive element, a second photosensitive element, a third photosensitive element and a fourth photosensitive element under a preset microscope magnification to obtain internal parameters of the first photosensitive elementInternal parameter of the second photosensitive elementInternal parameter of the third photosensitive elementAnd fourth photosensitive element intrinsic parameterAnd acquiring external parameters of the second photosensitive element relative to the first photosensitive elementExternal parameter of the third photosensitive element relative to the first photosensitive elementAnd the external parameter of the fourth photosensitive element relative to the first photosensitive element
Step 2, under a given microscope magnification i, controlling the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element through the synchronous camera, enabling the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element to shoot a measured object at the same time, and recording an image generated by the first photosensitive elementImage generated by the second photosensitive elementImage generated by the third photosensitive elementAnd an image produced by the fourth photosensitive element
Step 3, adopting the internal parameters and the external parameters of the first photosensitive element and the internal parameters and the external parameters of the second photosensitive element, and utilizing a stereo correction algorithm in computer vision to carry out image pair alignmentCorrecting the image pairFirst image ofAnd a second imageRealizing line alignment of point pairs with the same characteristics to obtain a corrected image pairAnd obtaining a reprojection matrix Q of the corrected first photosensitive element1;
Adopting the internal parameter and the external parameter of the third photosensitive element and the internal parameter and the external parameter of the fourth photosensitive element to carry out stereo correction algorithm in computer vision on the image pairCorrecting the image pairMiddle third imageAnd a fourth imageRealizing line alignment of point pairs with the same characteristics to obtain a corrected image pairAnd obtaining a reprojection matrix Q of the corrected third photosensitive element3;
Step 4, respectively correcting the image pairsAnd correcting the image pairObtaining the image pair using a dense matching algorithmOf (d) a parallax map12And the pair of images Of (d) a parallax map34;
Step 5, correcting the image pairThe first corrected image ofAnd a second corrected imageBased on the reprojection matrix Q1And a disparity map d12Obtaining a first corrected image using triangulation in computer visionGenerating a space point cloud P by the space coordinates of each point in the first photosensitive element under the camera coordinate system1;
For the corrected image pairThe third corrected image of (1)And a fourth corrected imageBased on the weightProjection matrix Q3And a disparity map d34Obtaining a third corrected image using triangulation in computer visionGenerating a space point cloud P by the space coordinates of each point in the third photosensitive element camera coordinate system2;
Step 6, adopting the space point cloud P1And the spatial point cloud P2Eliminating the error reconstruction result of the non-texture area to correct the spatial point cloud P1。
As a preferable scheme of the microsurgical field three-dimensional reconstruction method, the dense matching algorithm in the step 4 uses a dense optical flow algorithm or a deep learning-based stereo matching algorithm.
As a preferable scheme of the microsurgical field three-dimensional reconstruction method, the step 6 comprises the following steps:
Step 6.1, based on the spatial relation between the third photosensitive element and the first photosensitive element, transforming the spatial point cloud P2 from the coordinate system of the third photosensitive element to the coordinate system of the first photosensitive element, forming the transformed spatial point cloud P2′;
Step 6.2, triangulating the transformed spatial point cloud P2′ using point cloud triangulation from computer vision and rendering it, obtaining the rendered spatial surface P2″;
for each point P1t(X1t, Y1t, Z1t) in the spatial point cloud P1, obtaining its set of neighborhood points {P1t^1, ..., P1t^n}, where n denotes the number of neighborhood points and P1t^k is a neighborhood point of P1t;
finding by least squares the fitting plane Ax + By + Cz + D = 0 of the neighborhood points, taking (A, B, C) as the normal vector at P1t, and writing, in point-direction form, the line l through P1t parallel to this normal vector:
(x - X1t)/A = (y - Y1t)/B = (z - Z1t)/C;
then taking the intersection point of the line l with the rendered surface P2″ as the new coordinates of P1t;
iterating the above process to complete the position optimization of the points in the spatial point cloud P1, obtaining the optimized spatial point cloud P1′ under visible light.
The invention collects image information of the measured scene through the visible light viewpoint acquisition unit, collects infrared speckle images of the measured scene through the infrared light viewpoint acquisition unit, and uses the three-dimensional reconstruction calculation control unit to control the shooting of both acquisition units and to fuse the images they produce into a three-dimensional reconstruction result. The technical scheme introduces multi-viewpoint joint optimization and infrared-speckle enhancement of object surface texture into high-precision three-dimensional reconstruction: by arranging the infrared photosensitive elements and speckle projectors, the surface structure of the operative field can be acquired accurately and used as a prior to optimize the three-dimensional reconstruction model under visible light, improving reconstruction accuracy under the microscope without affecting the main optical path of the microscope.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are merely exemplary; those of ordinary skill in the art can derive other embodiments from them without inventive effort.
FIG. 1 is a schematic diagram of a three-dimensional reconstruction system for a microsurgical field provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a hardware relationship of a three-dimensional microsurgical field reconstruction system provided in an embodiment of the present invention;
fig. 3 is a schematic flow chart of a three-dimensional reconstruction method of a microsurgical field provided in an embodiment of the present invention.
Detailed Description
The present invention is described below in terms of particular embodiments, and other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure. It is to be understood that the described embodiments are merely exemplary and do not limit the invention to the particular embodiments disclosed. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Referring to fig. 1 and 2, there is provided a microsurgical field three-dimensional reconstruction system comprising:
visible light viewpoint acquisition unit 110: used for acquiring image information of the measured scene; the visible light viewpoint acquisition unit 110 includes a first photosensitive element 111, a first optical zoom body 113, a second photosensitive element 112, a second optical zoom body 114, and a main field objective 116;
the first photosensitive element 111 serves as the first viewing angle in operative field viewpoint acquisition, receiving photons emitted from the surface of the measured object and presenting an image of the measured object at the first observation angle; the first optical zoom body 113 adopts an optical zoom lens group to change the magnification of the measured object on the first photosensitive element 111;
the second photosensitive element 112 serves as the second viewing angle in operative field viewpoint acquisition, receiving photons emitted from the surface of the measured object and presenting an image of the measured object at the second observation angle; the second optical zoom body 114 adopts an optical zoom lens group to change the magnification of the measured object on the second photosensitive element 112;
the main field objective 116 is used for determining and changing the working distance of the microscope formed by the optical paths of the first and second observation angles;
infrared light viewpoint acquisition unit 120: used for acquiring infrared speckle images of the measured scene; the infrared light viewpoint acquisition unit 120 includes a first speckle projector 123, a first infrared optical lens assembly 122, a third photosensitive element 121, a second speckle projector 126, a second infrared optical lens assembly 125, and a fourth photosensitive element 124;
the first speckle projector 123 is used for projecting laser speckles, which are projected through the first infrared optical lens assembly 122 onto the surface of the measured object to form a first group of infrared speckle spots in a given pattern; after being reflected from the surface of the measured object, the first group of infrared speckle spots is imaged on the third photosensitive element through the first infrared optical lens assembly 122;
the second speckle projector 126 is used for projecting laser speckles, which are projected through the second infrared optical lens assembly 125 onto the surface of the measured object to form a second group of infrared speckle spots in a given pattern; after being reflected from the surface of the measured object, the second group of infrared speckle spots is imaged on the fourth photosensitive element through the second infrared optical lens assembly 125;
three-dimensional reconstruction calculation control unit 130: used for controlling the shooting of the visible light viewpoint acquisition unit 110 and the infrared light viewpoint acquisition unit 120, and for fusing the images obtained by the visible light viewpoint acquisition unit 110 with the images obtained by the infrared light viewpoint acquisition unit 120 to obtain a three-dimensional reconstruction result.
Specifically, the visible light viewpoint collecting unit 110 further includes an illumination light source assembly 115, and the illumination light source assembly 115 is configured to illuminate the object to be measured. The illumination light source assembly 115 provides sufficient illumination for the object to be measured, and ensures the imaging quality of the object to be measured on the first photosensitive element 111 and the second photosensitive element 112.
Specifically, the first photosensitive element 111 serves as the first observation angle in multi-viewpoint acquisition, receiving photons emitted from the surface of the measured object and finally presenting an image of the measured object at the first observation angle; the first optical zoom body 113 is a set of optical zoom lenses that changes the magnification of the measured object on the first photosensitive element 111. The second optical zoom body 114 and the second photosensitive element 112 serve as the second observation angle of the measured object; their function is identical to that of the first observation angle, differing only in the viewing angle on the observed object. The main field objective 116 is used to determine and vary the working distance of the microscope formed by the optical paths of the first and second viewing angles.
Specifically, the first speckle projector 123, the first infrared optical lens assembly 122 and the third photosensitive element 121 are located on one side of the main field objective 116; the second speckle projector 126, the second infrared optical lens assembly 125 and the fourth photosensitive element 124 are located on the other side of the main field objective 116. The first photosensitive element 111 and the second photosensitive element 112 are color photosensitive elements sensitive to visible light; the third photosensitive element 121 and the fourth photosensitive element 124 are grayscale photosensitive elements sensitive to infrared light.
The infrared light viewpoint acquisition unit 120 consists of two infrared light acquisition devices, located on either side of the microscope body. Taking one of them as an example, the acquisition device consists of a third photosensitive element 121, a first speckle projector 123 and a first infrared optical lens assembly 122. The first speckle projector 123 projects laser speckles, which pass through the first infrared optical lens assembly 122 onto the object surface to form infrared speckle spots with a specific pattern. The speckle spots on the object surface are reflected and imaged onto the third photosensitive element by the first infrared optical lens assembly 122.
Specifically, the first infrared optical lens assembly 122 has two functions: on one hand, the speckles are projected onto the object surface through its internal beam splitter; on the other hand, the infrared light reflected by the object surface is projected through the first infrared optical lens assembly 122 onto the third photosensitive element 121. The magnification of the first infrared optical lens assembly 122 is comparable to the minimum magnification of the first optical zoom body 113. The third photosensitive element 121 differs slightly from the first photosensitive element 111 and the second photosensitive element 112 in the way it forms images: the third photosensitive element 121 is a grayscale element sensitive to infrared light, whereas the first photosensitive element 111 and the second photosensitive element 112 are color elements sensitive to visible light.
Specifically, the first photosensitive element 111 and the second photosensitive element 112 differ in both principle and function from the third photosensitive element 121 and the fourth photosensitive element 124. In principle, the first and second photosensitive elements 111 and 112 image by means of visible light, while the third and fourth photosensitive elements 121 and 124 image in the infrared band. Functionally, because a speckle projector is paired with each of the third photosensitive element 121 and the fourth photosensitive element 124, these elements receive not only the illumination light reflected by the object surface but also the speckles reflected by it. The advantage of this design is that, owing to the fine speckles, detail in the originally texture-free and highlight areas seen by the third photosensitive element 121 and the fourth photosensitive element 124 is enhanced, which effectively alleviates the stereo matching problem and improves the quality of three-dimensional reconstruction under infrared light.
In addition, it should be noted that the light emitted by the first speckle projector 123 and the second speckle projector 126 lies in the infrared band, while the first photosensitive element 111 and the second photosensitive element 112 image in visible light and have low quantum efficiency in the infrared band, so the speckles do not appear in the images of the visible-light photosensitive elements.
Specifically, the three-dimensional reconstruction calculation control unit 130 includes a synchronous camera 131 and a computing device 132. The synchronous camera 131 is connected to the first photosensitive element 111, the second photosensitive element 112, the third photosensitive element 121 and the fourth photosensitive element 124, and is responsible for controlling the simultaneous shooting of the four photosensitive elements. The computing device 132 is connected to the synchronous camera 131 and processes the data obtained by the four photosensitive elements to obtain the final three-dimensional reconstruction result.
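As an illustration of the capture control only, the following is a minimal sketch, assuming the four photosensitive elements are exposed to the host as ordinary video devices with hypothetical indices 0-3; a real synchronous camera 131 would normally use a hardware trigger line rather than software grabbing.

```python
import cv2

# Minimal synchronized-capture sketch (assumption: the four photosensitive
# elements appear as video devices 0-3; a hardware trigger, as implied by
# the synchronous camera 131, is the more accurate production choice).
caps = [cv2.VideoCapture(i) for i in range(4)]  # elements 111, 112, 121, 124

def grab_synchronized():
    # grab() latches a frame on every device before any decoding happens,
    # keeping the four exposures as close together in time as possible.
    for cap in caps:
        cap.grab()
    return [cap.retrieve()[1] for cap in caps]  # images I1, I2, I3, I4
```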
Referring to fig. 3, the present invention further provides a three-dimensional reconstruction method of a microsurgical field, which is used for the three-dimensional reconstruction system of the microsurgical field, and comprises the following steps:
S1, calibrating the first photosensitive element 111, the second photosensitive element 112, the third photosensitive element 121 and the fourth photosensitive element 124 under a preset microscope magnification to obtain their intrinsic parameters K1, K2, K3 and K4 respectively, and obtaining the extrinsic parameter T21 of the second photosensitive element 112 relative to the first photosensitive element 111, the extrinsic parameter T31 of the third photosensitive element 121 relative to the first photosensitive element 111, and the extrinsic parameter T41 of the fourth photosensitive element 124 relative to the first photosensitive element 111.
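For concreteness, a minimal OpenCV calibration sketch follows. It assumes a planar checkerboard observed by each sensor pair and corner lists already detected; the function and variable names are illustrative, not taken from the patent.

```python
import cv2

# Calibration sketch: per-sensor intrinsics via cv2.calibrateCamera, then
# the extrinsics of another sensor relative to the first via
# cv2.stereoCalibrate with the intrinsics held fixed.
def calibrate_pair(obj_pts, img_pts_first, img_pts_other, image_size):
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts_first,
                                          image_size, None, None)
    _, Kx, dx, _, _ = cv2.calibrateCamera(obj_pts, img_pts_other,
                                          image_size, None, None)
    # Solve only for the rotation R and translation t between the sensors.
    _, _, _, _, _, R, t, _, _ = cv2.stereoCalibrate(
        obj_pts, img_pts_first, img_pts_other, K1, d1, Kx, dx,
        image_size, flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, d1, Kx, dx, R, t  # (R, t) plays the role of T21, T31 or T41
```

Calling this once for each of the pairs (111, 112), (111, 121) and (111, 124) yields the extrinsics playing the roles of T21, T31 and T41.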
S2, under a given microscope magnification i, controlling the first photosensitive element 111, the second photosensitive element 112, the third photosensitive element 121 and the fourth photosensitive element 124 through the synchronous camera 131 so that they shoot the measured object simultaneously, and recording the images I1, I2, I3 and I4 generated by the first, second, third and fourth photosensitive elements respectively.
S3, using the intrinsic and extrinsic parameters of the first photosensitive element 111 and of the second photosensitive element 112, rectifying the image pair (I1, I2) with a stereo rectification algorithm from computer vision so that point pairs with the same features become row-aligned in the first image I1 and the second image I2, obtaining the rectified image pair (I1′, I2′) and the reprojection matrix Q1 of the rectified first photosensitive element 111;
using the intrinsic and extrinsic parameters of the third photosensitive element 121 and of the fourth photosensitive element 124, rectifying the image pair (I3, I4) in the same way, obtaining the rectified image pair (I3′, I4′) and the reprojection matrix Q3 of the rectified third photosensitive element 121.
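A minimal rectification sketch with OpenCV, under the same assumptions as above: cv2.stereoRectify aligns the epipolar lines with the image rows and also returns the reprojection matrix Q used later for triangulation (Q1 for the visible pair, Q3 for the infrared pair).

```python
import cv2

# Rectification sketch: compute row-aligning remap tables for one sensor
# pair together with the reprojection matrix Q.
def rectify_pair(K_a, d_a, K_b, d_b, R, t, image_size):
    R_a, R_b, P_a, P_b, Q, _, _ = cv2.stereoRectify(K_a, d_a, K_b, d_b,
                                                    image_size, R, t)
    map_a = cv2.initUndistortRectifyMap(K_a, d_a, R_a, P_a,
                                        image_size, cv2.CV_32FC1)
    map_b = cv2.initUndistortRectifyMap(K_b, d_b, R_b, P_b,
                                        image_size, cv2.CV_32FC1)
    return map_a, map_b, Q

# Example use: I1r = cv2.remap(I1, *map_a, cv2.INTER_LINEAR) gives I1'.
```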
S4, applying a dense matching algorithm to the rectified image pairs (I1′, I2′) and (I3′, I4′) respectively, obtaining the disparity map d12 of (I1′, I2′) and the disparity map d34 of (I3′, I4′).
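As one concrete choice of dense matching (S4 equally admits a dense optical flow algorithm or a learning-based stereo matcher, as noted below), a semi-global block matching sketch; the parameter values are conventional starting points, not values from the patent.

```python
import cv2

# Dense-matching sketch using semi-global block matching on a rectified pair.
def disparity_map(rect_left, rect_right, num_disp=128, block=5):
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0, numDisparities=num_disp, blockSize=block,
        P1=8 * block * block, P2=32 * block * block,
        uniquenessRatio=10, speckleWindowSize=100, speckleRange=2)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    return sgbm.compute(rect_left, rect_right).astype('float32') / 16.0

# d12 = disparity_map(I1r, I2r); d34 = disparity_map(I3r, I4r)
```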
S5, for the first rectified image I1′ and the second rectified image I2′, based on the reprojection matrix Q1 and the disparity map d12, obtaining by triangulation in computer vision the spatial coordinates of each point of I1′ in the camera coordinate system of the first photosensitive element 111, generating the spatial point cloud P1;
for the third rectified image I3′ and the fourth rectified image I4′, based on the reprojection matrix Q3 and the disparity map d34, obtaining by triangulation in computer vision the spatial coordinates of each point of I3′ in the camera coordinate system of the third photosensitive element 121, generating the spatial point cloud P2;
S6, using the spatial point cloud P1 and the spatial point cloud P2, eliminating erroneous reconstruction results in texture-free areas to correct the spatial point cloud P1.
Specifically, in S5 the spatial coordinates of each point of the first rectified image I1′ in the camera coordinate system of the first photosensitive element 111 are obtained by triangulation according to
[X, Y, Z, W]^T = Q1 · [x, y, d12(x, y), 1]^T,
where (x, y) denotes a point in the first rectified image I1′, d12(x, y) is the disparity value at (x, y) in the disparity map, and (X, Y, Z, W) are the homogeneous spatial coordinates of (x, y) in the coordinate system of the photosensitive element, so that the Euclidean coordinates are (X/W, Y/W, Z/W). In this way the spatial point cloud P1 corresponding to the image captured by the first photosensitive element 111 is obtained. Similarly, the spatial point cloud P2 under the stereo image pair formed by the third photosensitive element 121 and the fourth photosensitive element 124 is obtained.
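In OpenCV this per-pixel Q-matrix relation is provided by cv2.reprojectImageTo3D; a minimal sketch, assuming the disparity map and rectified image from the previous steps (variable names illustrative):

```python
import cv2

# Point-cloud sketch: apply [X,Y,Z,W]^T = Q [x,y,d,1]^T at every pixel and
# keep only pixels with a valid disparity.
def cloud_from_disparity(disp, Q, rectified_img):
    pts = cv2.reprojectImageTo3D(disp, Q)   # H x W x 3 of (X/W, Y/W, Z/W)
    valid = disp > disp.min()               # mask out unmatched pixels
    return pts[valid], rectified_img[valid] # P1 (or P2) with its colors
```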
Specifically, the dense matching algorithm in S4 uses a dense optical flow algorithm or a deep learning-based stereo matching algorithm.
Specifically, S6 includes:
S6.1, based on the spatial relation between the third photosensitive element 121 and the first photosensitive element 111, transforming the spatial point cloud P2 from the coordinate system of the third photosensitive element 121 to the coordinate system of the first photosensitive element 111, forming the transformed spatial point cloud P2′. Specifically, for any point (Xp2, Yp2, Zp2) ∈ P2, its spatial coordinates (Xp1, Yp1, Zp1) in the coordinate system of the first photosensitive element 111 satisfy
[Xp1, Yp1, Zp1, 1]^T = T31 · [Xp2, Yp2, Zp2, 1]^T,
where T31 is taken as the homogeneous transform mapping coordinates of the third photosensitive element 121 into the coordinate system of the first photosensitive element 111.
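A minimal sketch of this coordinate transform, assuming T31 is available as a 4x4 homogeneous matrix in the direction stated above (invert it first if a calibration toolchain defines the extrinsic the other way around):

```python
import numpy as np

# Rigid-transform sketch: carry the N x 3 point cloud P2 into the first
# sensor's coordinate frame, producing the transformed cloud P2'.
def transform_cloud(P2, T31):
    P2_h = np.hstack([P2, np.ones((len(P2), 1))])  # N x 4 homogeneous points
    return (P2_h @ T31.T)[:, :3]                   # N x 3 points in frame 111
```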
S6.2, triangulating the transformed spatial point cloud P2′ using point cloud triangulation from computer vision and rendering it, obtaining the rendered spatial surface P2″;
for each point P1t(X1t, Y1t, Z1t) in the spatial point cloud P1, obtaining its set of neighborhood points {P1t^1, ..., P1t^n}, where n denotes the number of neighborhood points and P1t^k is a neighborhood point of P1t;
finding by least squares the fitting plane Ax + By + Cz + D = 0 of the neighborhood points, taking (A, B, C) as the normal vector at P1t, and then writing, in point-direction form, the line l through P1t parallel to this normal vector:
(x - X1t)/A = (y - Y1t)/B = (z - Z1t)/C;
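One standard way to realize this least-squares plane fit is via the SVD of the centered neighborhood, whose smallest right singular vector is the plane normal; a minimal sketch with illustrative names:

```python
import numpy as np

# Normal-estimation sketch: fit the plane Ax + By + Cz + D = 0 to a point's
# neighborhood and return the line l through the point along the normal.
def normal_line(p1t, neighbors):
    pts = np.asarray(neighbors, dtype=float)
    centered = pts - pts.mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]  # unit normal (A, B, C)
    return lambda s: p1t + s * normal        # l(s) = P1t + s * (A, B, C)
```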
then taking the intersection point of the line l with the rendered surface P2″ as the new coordinates of P1t;
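Intersecting the line l with the rendered surface P2″ reduces to ray/triangle tests against the mesh triangles; a minimal Moller-Trumbore sketch (mesh traversal and the choice among multiple hits are omitted):

```python
import numpy as np

# Ray/triangle intersection sketch (Moller-Trumbore). Because the fitted
# normal may point either way, test both the direction and its negation,
# or keep the hit with the smallest |t|.
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None                    # line parallel to the triangle
    inv_det = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv_det
    return origin + t * direction      # candidate new coordinates of P1t
```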
iterating the above process to complete the position optimization of the points in the spatial point cloud P1, obtaining the optimized spatial point cloud P1′ under visible light.
The invention collects image information of the measured scene through the visible light viewpoint acquisition unit 110, collects infrared speckle images of the measured scene through the infrared light viewpoint acquisition unit 120, and uses the three-dimensional reconstruction calculation control unit 130 to control the shooting of the visible light viewpoint acquisition unit 110 and the infrared light viewpoint acquisition unit 120 and to fuse the images they produce into a three-dimensional reconstruction result. The technical scheme introduces multi-viewpoint joint optimization and infrared-speckle enhancement of object surface texture into high-precision three-dimensional reconstruction: by arranging the infrared photosensitive elements and speckle projectors, the surface structure of the operative field can be acquired accurately and used as a prior to optimize the three-dimensional reconstruction model under visible light, improving reconstruction accuracy under the microscope without affecting the main optical path of the microscope.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to those skilled in the art that modifications or improvements may be made on the basis of the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.
Claims (8)
1. A microsurgical field three-dimensional reconstruction system, comprising:
a visible light viewpoint acquisition unit: used for acquiring image information of the measured scene; the visible light viewpoint acquisition unit comprises a first photosensitive element, a first optical zoom body, a second photosensitive element, a second optical zoom body and a main field objective;
the first photosensitive element serves as the first viewing angle in operative field viewpoint acquisition, receiving photons emitted from the surface of the measured object and presenting an image of the measured object at the first observation angle; the first optical zoom body adopts an optical zoom lens group to change the magnification of the measured object on the first photosensitive element;
the second photosensitive element serves as the second viewing angle in operative field viewpoint acquisition, receiving photons emitted from the surface of the measured object and presenting an image of the measured object at the second observation angle; the second optical zoom body adopts an optical zoom lens group to change the magnification of the measured object on the second photosensitive element;
the main field objective is used for determining and changing the working distance of the microscope formed by the optical paths of the first and second observation angles;
an infrared light viewpoint acquisition unit: used for acquiring infrared speckle images of the measured scene; the infrared light viewpoint acquisition unit comprises a first speckle projector, a first infrared optical lens assembly, a third photosensitive element, a second speckle projector, a second infrared optical lens assembly and a fourth photosensitive element;
the first speckle projector is used for projecting laser speckles, which are projected through the first infrared optical lens assembly onto the surface of the measured object to form a first group of infrared speckle spots in a given pattern; after being reflected from the surface of the measured object, the first group of infrared speckle spots is imaged on the third photosensitive element through the first infrared optical lens assembly;
the second speckle projector is used for projecting laser speckles, which are projected through the second infrared optical lens assembly onto the surface of the measured object to form a second group of infrared speckle spots in a given pattern; after being reflected from the surface of the measured object, the second group of infrared speckle spots is imaged on the fourth photosensitive element through the second infrared optical lens assembly;
a three-dimensional reconstruction calculation control unit: used for controlling the shooting of the visible light viewpoint acquisition unit and the infrared light viewpoint acquisition unit, and for fusing the images obtained by the visible light viewpoint acquisition unit with the images obtained by the infrared light viewpoint acquisition unit to obtain a three-dimensional reconstruction result.
2. The microsurgical field three-dimensional reconstruction system of claim 1, wherein the visible light viewpoint collecting unit further comprises an illumination light source assembly for illuminating the object to be measured.
3. The microsurgical field three-dimensional reconstruction system of claim 1, wherein the first speckle projector, first infrared optical lens assembly and third photosensitive element are located at one side of the main field objective; the second speckle projector, the second infrared optical lens assembly and the fourth photosensitive element are positioned on the other side of the main-field objective lens.
4. The microsurgical field three-dimensional reconstruction system of claim 1, wherein the first photosensitive element and the second photosensitive element are color photosensitive elements sensitive to visible light, and the third photosensitive element and the fourth photosensitive element are grayscale photosensitive elements sensitive to infrared light.
5. The microsurgical field three-dimensional reconstruction system of claim 1, wherein the three-dimensional reconstruction computational control unit comprises a synchronized camera and a computing device; the synchronous camera is respectively connected with the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element; the computing equipment is connected with the synchronous camera and used for processing data obtained by the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element to obtain a final three-dimensional reconstruction result.
6. A microsurgical field three-dimensional reconstruction method for a microsurgical field three-dimensional reconstruction system as claimed in any one of claims 1 to 5, characterized in that it comprises the following steps:
step 1, calibrating a first photosensitive element, a second photosensitive element, a third photosensitive element and a fourth photosensitive element under a preset microscope magnification to obtain the intrinsic parameters K1, K2, K3 and K4 of the first, second, third and fourth photosensitive elements respectively, and acquiring the extrinsic parameters T21, T31 and T41 of the second, third and fourth photosensitive elements relative to the first photosensitive element;
step 2, under a given microscope magnification i, controlling the first photosensitive element, the second photosensitive element, the third photosensitive element and the fourth photosensitive element through a synchronous camera so that they shoot the measured object simultaneously, and recording the images I1, I2, I3 and I4 generated by the first, second, third and fourth photosensitive elements respectively;
step 3, using the intrinsic and extrinsic parameters of the first photosensitive element and of the second photosensitive element, rectifying the image pair (I1, I2) with a stereo rectification algorithm from computer vision so that point pairs with the same features become row-aligned in the first image I1 and the second image I2, obtaining the rectified image pair (I1′, I2′) and the reprojection matrix Q1 of the rectified first photosensitive element;
using the intrinsic and extrinsic parameters of the third photosensitive element and of the fourth photosensitive element, rectifying the image pair (I3, I4) in the same way, obtaining the rectified image pair (I3′, I4′) and the reprojection matrix Q3 of the rectified third photosensitive element;
step 4, applying a dense matching algorithm to the rectified image pairs (I1′, I2′) and (I3′, I4′) respectively, obtaining the disparity map d12 of (I1′, I2′) and the disparity map d34 of (I3′, I4′);
step 5, for the first rectified image I1′ and the second rectified image I2′, based on the reprojection matrix Q1 and the disparity map d12, obtaining by triangulation in computer vision the spatial coordinates of each point of I1′ in the camera coordinate system of the first photosensitive element, generating the spatial point cloud P1;
for the third rectified image I3′ and the fourth rectified image I4′, based on the reprojection matrix Q3 and the disparity map d34, obtaining by triangulation in computer vision the spatial coordinates of each point of I3′ in the camera coordinate system of the third photosensitive element, generating the spatial point cloud P2;
step 6, using the spatial point cloud P1 and the spatial point cloud P2, eliminating erroneous reconstruction results in texture-free areas to correct the spatial point cloud P1.
7. The microsurgical field three-dimensional reconstruction method of claim 6, wherein the dense matching algorithm in the step 4 uses a dense optical flow algorithm or a deep learning based stereo matching algorithm.
8. The microsurgical field three-dimensional reconstruction method of claim 6, wherein the step 6 comprises:
step 6.1, based on the spatial relation between the third photosensitive element and the first photosensitive element, transforming the spatial point cloud P2 from the coordinate system of the third photosensitive element to the coordinate system of the first photosensitive element, forming the transformed spatial point cloud P2′;
step 6.2, triangulating the transformed spatial point cloud P2′ using point cloud triangulation from computer vision and rendering it, obtaining the rendered spatial surface P2″;
for each point P1t(X1t, Y1t, Z1t) in the spatial point cloud P1, obtaining its set of neighborhood points {P1t^1, ..., P1t^n}, where n denotes the number of neighborhood points and P1t^k is a neighborhood point of P1t;
finding by least squares the fitting plane Ax + By + Cz + D = 0 of the neighborhood points, taking (A, B, C) as the normal vector at P1t, and writing, in point-direction form, the line l through P1t parallel to this normal vector: (x - X1t)/A = (y - Y1t)/B = (z - Z1t)/C;
then taking the intersection point of the line l with the rendered surface P2″ as the new coordinates of P1t;
and iterating the above process to complete the position optimization of the points in the spatial point cloud P1, obtaining the optimized spatial point cloud P1′ under visible light.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011084952.8A | 2020-10-12 | 2020-10-12 | Microsurgery surgical field three-dimensional reconstruction system and method |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112294453A | 2021-02-02 |
| CN112294453B | 2022-04-15 |
Family Applications (1)

ID=74489833

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011084952.8A | Microsurgery surgical field three-dimensional reconstruction system and method | 2020-10-12 | 2020-10-12 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN112294453B |
Families Citing this family (1)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN113721359B | 2021-09-06 | 2024-07-05 | 戴朴 | System and method for real-time three-dimensional measurement of key indexes in ear microsurgery |
Citations (16)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN103279987A | 2013-06-18 | 2013-09-04 | 厦门理工学院 | Fast three-dimensional object modeling method based on Kinect |
| CN103337094A | 2013-06-14 | 2013-10-02 | 西安工业大学 | Method for three-dimensional reconstruction of motion using a binocular camera |
| CN103337071A | 2013-06-19 | 2013-10-02 | 北京理工大学 | Device and method for structure-reconstruction-based subcutaneous vein three-dimensional visualization |
| CN103810708A | 2014-02-13 | 2014-05-21 | 西安交通大学 | Method and device for perceiving depth of a laser speckle image |
| CN105608734A | 2015-12-23 | 2016-05-25 | 王娟 | Three-dimensional image information acquisition apparatus and image reconstruction method therefor |
| CN106691491A | 2017-02-28 | 2017-05-24 | 赛诺威盛科技(北京)有限公司 | CT positioning system and method implemented using visible light and infrared light |
| CN106875468A | 2015-12-14 | 2017-06-20 | 深圳先进技术研究院 | Three-dimensional reconstruction apparatus and method |
| CN108921027A | 2018-06-01 | 2018-11-30 | 杭州荣跃科技有限公司 | Moving-obstacle recognition method based on laser speckle three-dimensional reconstruction |
| CN109242812A | 2018-09-11 | 2019-01-18 | 中国科学院长春光学精密机械与物理研究所 | Image fusion method and device based on saliency detection and singular value decomposition |
| CN109903376A | 2019-02-28 | 2019-06-18 | 四川川大智胜软件股份有限公司 | Three-dimensional face modeling method and system assisted by face geometric information |
| CN110363806A | 2019-05-29 | 2019-10-22 | 中德(珠海)人工智能研究院有限公司 | Method for three-dimensional space modeling using invisible-light projected features |
| CN110940295A | 2019-11-29 | 2020-03-31 | 北京理工大学 | High-reflectivity object measurement method and system based on laser speckle epipolar-constraint projection |
| CN111009007A | 2019-11-20 | 2020-04-14 | 华南理工大学 | Finger multi-feature comprehensive three-dimensional reconstruction method |
| CN111145342A | 2019-12-27 | 2020-05-12 | 山东中科先进技术研究院有限公司 | Binocular speckle structured light three-dimensional reconstruction method and system |
| CN111260765A | 2020-01-13 | 2020-06-09 | 浙江未来技术研究院(嘉兴) | Dynamic three-dimensional reconstruction method for the microsurgical operative field |
| CN111491151A | 2020-03-09 | 2020-08-04 | 浙江未来技术研究院(嘉兴) | Microsurgical stereoscopic video rendering method |
Family Cites Families (4)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| US8970589B2 | 2011-02-10 | 2015-03-03 | Edge 3 Technologies, Inc. | Near-touch interaction with a stereo camera grid structured tessellations |
| US9141868B2 | 2012-06-26 | 2015-09-22 | Xerox Corporation | Contemporaneously reconstructing images captured of a scene illuminated with unstructured and structured illumination sources |
| CN105203044B | 2015-05-27 | 2019-06-11 | 珠海真幻科技有限公司 | Stereo-vision three-dimensional measurement method and system using computed laser speckle as texture |
| CN111685711B | 2020-05-25 | 2023-01-03 | 中国科学院苏州生物医学工程技术研究所 | Medical endoscope three-dimensional imaging system based on a 3D camera |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| 2024-03-11 | TR01 | Transfer of patent right | Patentee after: ZHEJIANG YANGTZE DELTA REGION INSTITUTE OF TSINGHUA UNIVERSITY, 9F, No. 705, Asia Pacific Road, Nanhu District, Jiaxing City, Zhejiang Province, 314050, China. Patentee before: ZHEJIANG FUTURE TECHNOLOGY INSTITUTE (JIAXING), No. 152 Huixin Road, Nanhu District, Jiaxing City, Zhejiang Province, 314000, China. |