CN110728745B - Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model - Google Patents

Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model

Info

Publication number
CN110728745B
CN110728745B — granted publication of application CN201910874161.6A
Authority
CN
China
Prior art keywords
coordinate system
new
image
stereoscopic vision
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910874161.6A
Other languages
Chinese (zh)
Other versions
CN110728745A (en)
Inventor
屠大维 (Tu Dawei)
金攀 (Jin Pan)
庄苏锋 (Zhuang Sufeng)
张旭 (Zhang Xu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201910874161.6A
Publication of CN110728745A
Application granted
Publication of CN110728745B
Legal status: Active

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 15/00 — 3D [three-dimensional] image rendering
    • G06T 15/50 — Lighting effects

Abstract

The invention provides an underwater binocular stereoscopic vision three-dimensional reconstruction method based on a multi-layer refraction image model. It belongs to the field of underwater computer vision and is typically applied to the three-dimensional reconstruction of underwater objects. The method computes direction-information images and position images for the left and right cameras based on the light-field multi-layer refraction theory; a disparity map can then be obtained directly from the direction-information images using an in-air stereo matching method. Finally, corresponding points on the left and right direction-information images are determined from the disparity map, and the three-dimensional coordinates of each matched point are computed by combining the position-image data at the corresponding coordinates. Traversing the whole image in this way yields the point cloud of the entire matching area. The invention not only realizes three-dimensional reconstruction of underwater objects but also markedly improves computational efficiency while maintaining high accuracy.

Description

Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model
Technical Field
The invention belongs to the field of underwater computer vision research, relates to an underwater three-dimensional reconstruction method based on a multi-layer refraction image model, and particularly relates to a binocular stereoscopic vision three-dimensional reconstruction method in a multi-layer refraction system in an underwater light field model.
Background
With the progress of science and technology, people have gained a new understanding of the exploitation and utilization of ocean resources, and countries are sparing no effort to develop underwater detection technology. Underwater three-dimensional reconstruction is an important technical means for surveying deep lakes and oceans, and can be used for underwater topography scanning, submarine archaeology, and body-shape data acquisition of slow-moving underwater organisms. At present sonar is the main underwater detection technology, but its accuracy is low and cannot meet the requirements of precise underwater detection. Visual detection allows intuitive observation of the underwater environment and yields more accurate three-dimensional information.
Although three-dimensional reconstruction technology in air is now mature, applying precision optical instruments underwater is difficult because of the special imaging environment: a waterproof housing must usually be added to the camera, so besides the image degradation caused by the absorption and scattering of light in water, light is refracted at the water / housing-glass / air interfaces. The in-air epipolar constraint model therefore no longer applies, and the original in-air three-dimensional reconstruction methods fail.
Lin Junyi et al., in the paper "A laser line scanning technique based on binocular stereo vision" (Machine Design and Manufacture, 2011(8): 200-202), propose a three-dimensional reconstruction method based on the epipolar constraint. It describes the underwater refraction environment with a conventional camera model and corrects the deviation caused by refraction with distortion parameters. However, the method is inaccurate, calibration within the field of view is required in each different body of water, and the camera and calibration plate must remain relatively stationary during calibration, so its practical operability is poor.
Chinese patent CN201410195345.7 proposes a three-dimensional reconstruction method for underwater targets based on line-structured light. It performs three-dimensional reconstruction with a single camera combined with line-structured light, and uses calibration data obtained at different underwater depths to correct the laser-stripe center in the captured image and so eliminate the influence of refraction. Although more accurate than the traditional pinhole imaging model, the method requires calibration with a calibration plate at different water depths, with calibration pictures taken at different depths across the field of view; this is difficult to realize underwater, so its practical operability is poor.
Disclosure of Invention
To solve the above problems, the invention provides an underwater binocular stereoscopic vision three-dimensional reconstruction method based on a multi-layer refraction image model. The method offers good practical operability and improves the computational efficiency of the algorithm while maintaining high accuracy.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
an underwater binocular stereoscopic vision three-dimensional reconstruction method based on a multi-layer refraction image model comprises the following steps:
step 1: calculating direction information images of the left camera and the right camera, namely dir_L and dir_R;
step 2: calculating position images of corresponding pixels, namely pos_L and pos_R;
step 3: calculating a disparity map disp by utilizing a matching algorithm in air based on the direction information images dir_L and dir_R of the left and right cameras, and determining coordinate corresponding points on the left and right direction information images according to the disparity map;
step 4: and calculating the three-dimensional coordinates of the matching points according to the direction information image and the position image calculated above.
With the above solution, the present invention has the following obvious advantages:
1. High precision. The rays are described in the light field's multi-layer refraction coordinate system, which is far more accurate than traditional distortion correction and introduces no systematic error.
2. The computed left and right direction images can be used directly with in-air matching algorithms, giving good portability.
3. The amount of computation is reduced and the computation is faster, effectively improving the execution efficiency of the system.
Drawings
FIG. 1 is a flowchart of an algorithm of the present invention.
Fig. 2 is a direction information image calculated by the present invention, wherein a is a right camera direction information image and b is a left camera direction information image.
Fig. 3 is a position image calculated by the present invention, where a is a right camera position image and b is a left camera position image.
Fig. 4 is a point cloud image calculated by the algorithm of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
As shown in fig. 1, an underwater binocular stereoscopic three-dimensional reconstruction method based on a multi-layer refraction image model comprises the following steps:
step 1: left and right direction information images are calculated.
First, the embodiment adopts the multi-layer refraction model from the underwater stereoscopic vision system calibration method of Chinese patent CN201710702222. After the camera is enclosed in a waterproof housing, the z-axis of the camera coordinate system (i.e. the camera optical axis) is generally not perpendicular to the "air-water" interface. Therefore, a multi-layer refraction coordinate system whose z-axis is perpendicular to the air-water interface is established; the normal-vector parameters (n_L, n_R) are obtained with the calibration method of CN201710702222, and the transformation between the camera coordinate system and the multi-layer refraction coordinate system is computed from them. The relationship between the multi-layer refraction coordinate system and the camera coordinate system can be expressed as:
P_c = ^cR_r · P_r + ^c t_r
^cR_r = [ n_c × z_c,  n_c × (n_c × z_c),  n_c ]
^c t_r = [0, 0, 0]^T
z_c = [0, 0, 1]^T
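As a hedged illustration of the transformation above, the rotation ^cR_r can be assembled directly from the calibrated interface normal. The sketch below assumes `n_c` is the normal expressed in camera coordinates and normalizes each column; the function name is illustrative, not from the patent:

```python
import numpy as np

def refraction_frame_rotation(n_c):
    """Assemble R = [n_c x z_c, n_c x (n_c x z_c), n_c] from the formula
    above, with each column normalised so the result is a rotation."""
    n_c = np.asarray(n_c, dtype=float)
    n_c = n_c / np.linalg.norm(n_c)
    z_c = np.array([0.0, 0.0, 1.0])
    x = np.cross(n_c, z_c)              # n_c x z_c
    x = x / np.linalg.norm(x)           # degenerate if n_c is parallel to z_c
    y = np.cross(n_c, x)                # n_c x (n_c x z_c)
    return np.column_stack([x, y, n_c])
```

The third column is n_c itself, so the rotation maps the refraction frame's z-axis onto the interface normal, as the formula requires.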
A stereoscopic vision coordinate system is then established from the camera's multi-layer refraction model: the optical center of the left camera is the origin; the line joining the left and right optical centers is the x-axis; the y-axis is the cross product of the z-axis of the left multi-layer refraction coordinate system (i.e. the interface normal) with the x-axis; and the z-axis is the cross product of the x-axis and the y-axis. This gives the stereoscopic vision coordinate system:
P_r = ^rR_new · P_new + ^r t_new
^rR_new = [ n_x,  z_r × n_x,  n_x × (z_r × n_x) ]
^r t_new = [0, 0, 0]^T
z_r = [0, 0, 1]^T
where n_x is the unit vector from the left optical center to the right optical center, expressed in the multi-layer refraction coordinate system. The relationship between the multi-layer refraction coordinate system and the stereoscopic vision coordinate system can be expressed as P_new = ^newR_r · P_r + ^new t_r, where ^newR_r = (^rR_new)^(-1). A direction-image reference matrix is then defined, and left and right direction-image matrices are established in the stereoscopic vision coordinate system.
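The construction of the stereoscopic vision coordinate system described above can be sketched as follows (an assumed reading of the axis definitions; `stereo_frame_rotation` and its arguments are illustrative names, with all vectors expressed in the left multi-layer refraction frame):

```python
import numpy as np

def stereo_frame_rotation(c_left, c_right, z_r=(0.0, 0.0, 1.0)):
    """Sketch of ^rR_new = [n_x, z_r x n_x, n_x x (z_r x n_x)]:
    x-axis along the baseline between the optical centres,
    y-axis = interface normal x baseline, z-axis = x x y."""
    n_x = np.asarray(c_right, float) - np.asarray(c_left, float)
    n_x = n_x / np.linalg.norm(n_x)
    y = np.cross(np.asarray(z_r, float), n_x)
    y = y / np.linalg.norm(y)
    z = np.cross(n_x, y)
    return np.column_stack([n_x, y, z])   # columns are the new axes
```

Since the columns are built pairwise orthogonal and unit length, the result is a proper rotation whose first column is the baseline direction.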
Next, the direction vector of each ray is calculated in the multi-layer refraction coordinate system. First, the direction vectors I_L_stereo and I_R_stereo of the ray corresponding to each pixel of the left and right direction-information images are calculated in the stereoscopic vision coordinate system. Then, using the coordinate transformation P_r = ^rR_new · P_new between the stereoscopic vision coordinate system and the multi-layer refraction coordinate system, the ray direction vectors I_L_refract and I_R_refract in the multi-layer refraction coordinate system are obtained.
According to the light-field representation, the rays of the left and right direction-information image points that enter the air after propagating and refracting through the multi-layer interfaces are calculated and converted into ray vectors. Following the light-field model described in Chinese patent CN109490251A, "Underwater refractive index self-calibration method based on light field multilayer refraction model", the rays of the left and right direction-information images are expressed in light-field form:
A ray L_r propagates a distance d_0 and then refracts from the water into the air; the incident and refracted rays can be expressed as

^1L_r = R(s_0, t_0, 1.333, 1) × T(d_0) × ^0L_r

where R(s, t, μ, μ′) denotes refraction at the interface between media of refractive indices μ and μ′, and T(d) denotes propagation over a distance d. From this formula the rays that reach the air after the image points of the left and right direction-information images propagate and refract through the water are obtained, and they are then converted into ray vectors.
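Under the assumption that (s, t) are direction slopes with respect to the interface normal in the two-plane light-field parameterization, the operators T(d) and R(s, t, μ, μ′) can be sketched as below. This is an illustrative reimplementation via Snell's law, not the patent's exact matrix form:

```python
import numpy as np

def propagate(ray, d):
    """T(d): slide a (u, v, s, t) ray a distance d along the z-axis."""
    u, v, s, t = ray
    return np.array([u + d * s, v + d * t, s, t])

def refract(ray, mu, mu_new):
    """R(s, t, mu, mu'): Snell refraction at a flat interface whose
    normal is the z-axis, applied to the ray's direction slopes."""
    u, v, s, t = ray
    d = np.array([s, t, 1.0])
    d = d / np.linalg.norm(d)
    eta = mu / mu_new                      # ratio of refractive indices
    cos_i = d[2]                           # incidence angle w.r.t. normal
    cos_t = np.sqrt(1.0 - eta ** 2 * (1.0 - cos_i ** 2))
    d_new = eta * d + (cos_t - eta * cos_i) * np.array([0.0, 0.0, 1.0])
    return np.array([u, v, d_new[0] / d_new[2], d_new[1] / d_new[2]])
```

Chaining `refract(propagate(ray, d0), 1.333, 1.0)` mirrors the composition ^1L_r = R(s_0, t_0, 1.333, 1) × T(d_0) × ^0L_r above.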
The ray vectors are converted into the left and right camera coordinate systems, the pixel position on the image corresponding to each image point of the direction-information image is calculated from the intrinsic parameters of the left and right cameras, and mapping tables for the x and y directions are established.
The underwater binocular vision measurement system then acquires images of the underwater target, with green scattered-spot laser illumination used to add texture to the underwater images. After distortion correction of the captured left and right images, the left and right direction-information images dir_L and dir_R can be calculated rapidly with the remap function in OpenCV and the x- and y-direction mapping tables.
Step 2: left and right position images are calculated.
First, from the ray vector of the left image, the intersection of the ray with the interface is calculated. In light-field form,

L′_L = R(s_L, t_L, μ_L, μ′_L) × L_L = R(s_L, t_L, μ_L, μ′_L) × T(d_L) × L_r^L = (u′_L, v′_L, s′_L, t′_L)^T

so the intersection of the left-image ray with the interface is C_L = (u′_L, v′_L, d_L), and the ray direction is I_L = (s′_L, t′_L, 1)^T.
the above intersection point is then converted into a left stereoscopic coordinate system. Position and posture conversion matrix based on left stereoscopic vision coordinate system and left multilayer refraction coordinate system new R r L Transforming the intersection point and direction of the calculated light ray and the interface to obtain an intersection point C of the light ray and the interface under a new coordinate system new L Direction of light I new L
C new Lnew R r L C L =(u new L ,v new L ,d new L )
Next, a new light-field coordinate system is established from the stereoscopic vision coordinate system and the position information of each ray in that system is obtained. The new light-field coordinate system is defined as follows: the u-v coordinate system is parallel to the x-y plane of the stereoscopic vision coordinate system, with coincident origins; the parallel plane at unit distance from the u-v plane is the s-t plane, with the s-t axes parallel to the u-v axes. In the new light-field coordinate system the ray is represented as

L_new^L = [u_new^L, v_new^L, s_new^L, t_new^L]^T

Its intersection with the x-y plane of the new coordinate system is obtained by propagating back by d_new^L:

L′_new^L = T(−d_new^L) × L_new^L = [u′_new^L, v′_new^L, s_new^L, t_new^L]^T

From P_new^L = (u′_new^L, v′_new^L, 0) the position of each left-image ray is obtained and stored in the left position image. The same method gives the position of each right-image ray, P_new^R = (u′_new^R, v′_new^R, 0). As shown in fig. 3, the position image is a two-channel image.
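The final back-propagation step T(−d_new) that turns an interface intersection into a position on the x-y plane can be sketched as follows (illustrative names, same slope parameterization assumed as before):

```python
import numpy as np

def position_on_xy_plane(ray_new, d_new):
    """Apply T(-d_new) to a (u, v, s, t) ray expressed in the new
    light-field frame; the (u, v) entry then becomes the ray's
    intersection with the x-y plane, stored as P_new = (u', v', 0)."""
    u, v, s, t = ray_new
    return np.array([u - d_new * s, v - d_new * t, 0.0])
```

Running this for every pixel fills the two-channel position image described above.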
Step 3: based on the left and right direction-information images obtained above (as shown in fig. 2), the disparity map disp is calculated with the in-air SGBM matching algorithm. Let a pixel of the left direction-information image have coordinates (x_l, y_l), and let x_disp be the disparity value at the corresponding location of disp; then

x_r = x_l + x_disp
y_r = y_l

so that pixel (x_l, y_l) of the left image and pixel (x_r, y_r) of the right image correspond to each other.
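A hedged sketch of the correspondence lookup defined by the two formulas above (in practice `disp` would come from an in-air matcher such as OpenCV's `cv2.StereoSGBM_create(...).compute(dir_L, dir_R)`; the function name is illustrative):

```python
import numpy as np

def corresponding_pixel(disp, x_l, y_l):
    """Per the formulas above: x_r = x_l + x_disp, y_r = y_l, where
    x_disp is the disparity value stored at (x_l, y_l)."""
    x_disp = int(disp[y_l, x_l])
    return x_l + x_disp, y_l
```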
Step 4: from the corresponding coordinates (x_l, y_l) and (x_r, y_r), the left and right direction vectors ^l a and ^r a are read from the direction-information images. The left position image (fig. 3(a)) stores the point positions ^l q of the left camera in the left multi-layer refraction coordinate system, and the right position image (fig. 3(b)) stores the point positions ^r q of the right camera in the right multi-layer refraction coordinate system. The right camera's direction vector ^r a and point position ^r q are then converted into the left camera's multi-layer refraction coordinate system, giving ^r a′ and ^r q′. Since the point P lies on both straight lines simultaneously in the left camera's multi-layer refraction coordinate system, it satisfies the constraints

^l a × (P − ^l q) = 0,  ^r a′ × (P − ^r q′) = 0

Replacing the cross products with their antisymmetric-matrix representation converts the constraints into the linear system

[^l a]_× P = [^l a]_× ^l q,  [^r a′]_× P = [^r a′]_× ^r q′

Finally, singular value decomposition of this system yields the three-dimensional coordinates of the matched point P. Traversing the whole image in this way gives the point cloud of the matching area (as shown in fig. 4).
The underwater three-dimensional reconstruction based on the multi-layer refraction model is completed.

Claims (4)

1. An underwater binocular stereoscopic vision three-dimensional reconstruction method based on a multi-layer refraction image model is characterized by comprising the following steps of:
step 1: calculating direction information images of the left camera and the right camera, namely dir_L and dir_R;
step 2: calculating position images of corresponding pixels, namely pos_L and pos_R;
step 3: calculating a disparity map disp by utilizing a matching algorithm in air based on the direction information images dir_L and dir_R of the left and right cameras, and determining coordinate corresponding points on the left and right direction information images according to the disparity map;
step 4: calculating three-dimensional coordinates of the matching points according to the calculated direction information images and the position images;
the step 1 calculates the direction information images of the left camera and the right camera, and specifically comprises the following steps:
step 1-1: a multi-layer refraction three-dimensional model of a camera is adopted to establish a three-dimensional vision coordinate system; defining a reference matrix of a direction image, and establishing left and right direction images under a stereoscopic vision coordinate system;
step 1-2: calculating, in the stereoscopic vision coordinate system, the ray direction vectors I_L_stereo and I_R_stereo corresponding to each pixel of the left and right direction-information images, and, from the coordinate transformation P_r = ^rR_new · P_new between the stereoscopic vision coordinate system and the multi-layer refraction coordinate system, calculating the ray direction vectors I_L_refract and I_R_refract in the multi-layer refraction coordinate system;
step 1-3: according to the light field model, expressing the rays of the left and right direction-information images as a light field: for a ray ^nL_r in a medium of refractive index μ_n that propagates a distance d_n and then refracts on entering a medium of refractive index μ_(n+1), the incident and refracted rays are expressed as

^(n+1)L_r = R(s_n, t_n, μ_n, μ_(n+1)) × T(d_n) × ^nL_r

where R(·) is the refraction operator and T(·) the propagation operator; calculating from this formula the rays that reach the air after the image points of the left and right direction-information images propagate and refract through the multi-layer interfaces, and converting them into ray vectors;
step 1-4: converting the ray vectors into the left and right camera coordinate systems, calculating from the intrinsic parameters of the left and right cameras the pixel position on the image corresponding to each image point of the direction image, and establishing a position mapping table;
step 1-5: acquiring an underwater target image with the underwater binocular vision measurement system, and rapidly calculating the left and right direction-information images with the remap function in OpenCV according to the position mapping table of step 1-4;
the step 2 of calculating the left and right position images specifically comprises the following steps:
step 2-1: using light field representation, the light vectors according to steps 1-4Calculating the intersection point of the light ray and the interface, and representing the light ray by using a light field:
L’=R(s,t,μ,μ’)×L=R(s,t,μ,μ’)×T(d)×L r
=(u’,v’,s’,t’) T
the intersection point of the light ray and the interface is obtained by the following steps: c= (u ', v', d);
light direction:
step 2-2: converting the intersection into the corresponding stereoscopic vision coordinate system: using the pose transformation ^newR_r between the stereoscopic vision coordinate system and the multi-layer refraction coordinate system, transforming the calculated ray-interface intersection and direction to obtain the intersection C_new and ray direction I_new in the new coordinate system:

C_new = ^newR_r · C = (c_x, c_y, d_new),  I_new = ^newR_r · I
Step 2-3: establishing a new light field coordinate system according to the stereoscopic vision coordinate system, and solving the position information of the light rays under the stereoscopic vision coordinate system;
the new light field coordinate system is defined as follows: the u-v coordinate system is parallel to the x-y plane of the stereoscopic vision coordinate system, and the origin is coincident with the origin of the stereoscopic vision coordinate system; the parallel plane which is a unit length away from the u-v plane is defined as an s-t plane, and the s-t coordinate system is parallel to the u-v coordinate system, then the light ray is expressed as follows in the new light field coordinate system:
intersection of the light field with the xy plane of the new coordinate system:
L new ’=T(-d new )×L new =[u new ’,v new ’,s new ,t new ] T
according to P new =(u new ’,v new ' 0) obtaining the position data of each ray of the left and right images, and storing the position data in the position images.
2. The method for three-dimensional reconstruction of underwater binocular stereoscopic vision based on a multi-layer refraction image model according to claim 1, wherein the establishing of the stereoscopic vision coordinate system in the step 1-1 comprises the following steps:
step 1-1-1: taking a left camera optical center as an origin, wherein the connecting line direction of the left camera optical center and the right camera optical center is the x axis of a left stereoscopic vision coordinate system;
step 1-1-2: taking the cross product of the z-axis of the left multi-layer refraction coordinate system (i.e. the interface normal) with the x-axis of the left stereoscopic vision coordinate system as the y-axis of the left stereoscopic vision coordinate system;
step 1-1-3: taking the cross product of the x-axis and the y-axis as the z-axis;
step 1-1-4: and translating the left stereoscopic vision coordinate system to the right camera optical center to obtain the right stereoscopic vision coordinate system.
3. The method for underwater binocular stereoscopic vision three-dimensional reconstruction based on the multi-layer refraction image model according to claim 1, wherein determining the corresponding points of the left and right direction-information images in step 3 comprises: calculating the disparity map with an in-air matching algorithm based on the left and right direction-information images obtained in step 1; letting a pixel of the left direction-information image have coordinates (x_l, y_l), with x_disp the disparity value at the corresponding location of disp, then

x_r = x_l + x_disp
y_r = y_l

so that pixel (x_l, y_l) and pixel (x_r, y_r) correspond to each other.
4. The method for underwater binocular stereoscopic vision three-dimensional reconstruction based on the multi-layer refraction image model according to claim 1, wherein calculating the three-dimensional coordinates of the matched points in step 4 comprises: obtaining the left and right direction vectors ^l a and ^r a from the pixel data at the coordinates (x_l, y_l) and (x_r, y_r); the left position image storing the point positions ^l q of the left camera in the left multi-layer refraction coordinate system, and the right position image storing the point positions ^r q of the right camera in the right multi-layer refraction coordinate system; converting the right camera's direction vector ^r a and point position ^r q into the left camera's multi-layer refraction coordinate system, giving ^r a′ and ^r q′; since the point P lies on both straight lines simultaneously in the left camera's multi-layer refraction coordinate system, it satisfies the constraints

^l a × (P − ^l q) = 0,  ^r a′ × (P − ^r q′) = 0

converting these constraints, using the antisymmetric-matrix representation in place of the cross product, into the linear system

[^l a]_× P = [^l a]_× ^l q,  [^r a′]_× P = [^r a′]_× ^r q′

and finally performing singular value decomposition on this system to calculate the three-dimensional coordinates of the matched point P.
CN201910874161.6A 2019-09-17 2019-09-17 Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model Active CN110728745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910874161.6A CN110728745B (en) 2019-09-17 2019-09-17 Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model


Publications (2)

Publication Number Publication Date
CN110728745A CN110728745A (en) 2020-01-24
CN110728745B (en) 2023-09-15

Family

ID=69218997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910874161.6A Active CN110728745B (en) 2019-09-17 2019-09-17 Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model

Country Status (1)

Country Link
CN (1) CN110728745B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563921B * 2020-04-17 2022-03-15 Northwestern Polytechnical University Underwater point cloud acquisition method based on binocular camera
CN111784753B * 2020-07-03 2023-12-05 Jiangsu University of Science and Technology Stereo matching method for three-dimensional reconstruction of the forward view field before recovery and docking of an autonomous underwater robot
CN114967763B * 2022-08-01 2022-11-08 University of Electronic Science and Technology of China Plant protection unmanned aerial vehicle sowing control method based on image positioning

Citations (3)

Publication number Priority date Publication date Assignee Title
CN107507242A (en) * 2017-08-16 2017-12-22 Wuxi Research Institute of Huazhong University of Science and Technology Multilayer dioptric system imaging model construction method based on a light field model
CN108921936A (en) * 2018-06-08 2018-11-30 Shanghai University Underwater laser grating matching and stereo reconstruction method based on a light field model
CN109490251A (en) * 2018-10-26 2019-03-19 Shanghai University Underwater refractive index self-calibration method based on a light field multilayer refraction model

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN101876533B (en) * 2010-06-23 2011-11-30 北京航空航天大学 Microscopic stereovision calibrating method
US10019809B2 (en) * 2015-09-22 2018-07-10 The Governors Of The University Of Alberta Underwater 3D image reconstruction utilizing triple wavelength dispersion and camera system thereof


Non-Patent Citations (1)

Title
Zhang Xu et al. "Monocular vision calibration method for a stereo target used in robot pose measurement." Infrared and Laser Engineering, 2017(11): 221-229. *

Also Published As

Publication number Publication date
CN110728745A (en) 2020-01-24

Similar Documents

Publication Publication Date Title
CN110044300B (en) Amphibious three-dimensional vision detection device and detection method based on laser
CN102679959B (en) Omnibearing 3D (Three-Dimensional) modeling system based on initiative omnidirectional vision sensor
CN106091984B (en) A kind of three dimensional point cloud acquisition methods based on line laser
CN110728745B (en) Underwater binocular stereoscopic vision three-dimensional reconstruction method based on multilayer refraction image model
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN108734776A (en) A kind of three-dimensional facial reconstruction method and equipment based on speckle
CN106204731A (en) A kind of multi-view angle three-dimensional method for reconstructing based on Binocular Stereo Vision System
CN103903222B (en) Three-dimensional sensing method and three-dimensional sensing device
CN107358632B (en) Underwater camera calibration method applied to underwater binocular stereo vision
Kunz et al. Hemispherical refraction and camera calibration in underwater vision
CN105547189A (en) Mutative scale-based high-precision optical three-dimensional measurement method
CN109544628B (en) Accurate reading identification system and method for pointer instrument
CN111127540B (en) Automatic distance measurement method and system for three-dimensional virtual space
CN114998499A (en) Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
CN112258586B (en) Calibration method for stereoscopic vision model parameters of single plane mirror
WO2024045632A1 (en) Binocular vision and imu-based underwater scene three-dimensional reconstruction method, and device
CN105004324A (en) Monocular vision sensor with triangulation ranging function
CN112634379B (en) Three-dimensional positioning measurement method based on mixed vision field light field
CN109490251A (en) Underwater refractive index self-calibrating method based on light field multilayer refraction model
CN108010125A (en) True scale three-dimensional reconstruction system and method based on line-structured light and image information
CN107560554A (en) A kind of three-dimensional information vision measuring method based on relay lens
CN115359127A (en) Polarization camera array calibration method suitable for multilayer medium environment
CN106500625A (en) A kind of telecentricity stereo vision measuring apparatus and its method for being applied to the measurement of object dimensional pattern micron accuracies
CN115714855A (en) Three-dimensional visual perception method and system based on stereoscopic vision and TOF fusion
CN110378967B (en) Virtual target calibration method combining grating projection and stereoscopic vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant