CN112995639B - Fine three-dimensional sensing method for underwater target - Google Patents

Info

Publication number: CN112995639B (application CN202110162626.2A; published as CN112995639A, granted as CN112995639B)
Authority: CN (China); original language: Chinese (zh)
Inventors: Cong Yang (丛杨), Feng Yun (冯云), Gu Changjun (古长军), Tang Xu (唐旭)
Original and current assignee: Shenyang Institute of Automation of CAS
Legal status: Active (granted)
Prior art keywords: camera, point cloud, checkerboard, cloud reconstruction, picture

Classifications

    • H04N 13/204 — Stereoscopic video systems; image signal generators using stereoscopic image cameras
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H04N 23/10 — Cameras or camera modules comprising electronic image sensors, for generating image signals from different wavelengths
    • H04N 23/80 — Camera processing pipelines; components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a fine three-dimensional perception method for an underwater target. The method comprises the following steps: an underwater-sealed picture acquisition camera, point cloud reconstruction camera, floodlight, embedded processor and line laser are fixed in sequence on a rigid cross bar, which is fixed on an underwater turntable; the relative positions of the point cloud reconstruction camera and the line laser are fixed. The equipment is calibrated before target reconstruction. During target reconstruction, the line laser projects its laser line synchronously with each point cloud reconstruction camera exposure, and the embedded processor reconstructs a single-line point cloud from the scan picture and the calibration parameters. The turntable rotates the rigid cross bar at fixed intervals to scan the whole target, and the embedded processor splices all single-line point clouds according to the calibration parameters to generate the target point cloud. The floodlight illuminates synchronously with the picture acquisition camera, which shoots a color picture of the target. The invention realizes fine three-dimensional perception of underwater targets through an underwater laser three-dimensional reconstruction technique.

Description

Fine three-dimensional sensing method for underwater target
Technical Field
The invention belongs to the field of machine vision, and particularly relates to an underwater environment measuring method.
Background
71% of the earth's surface is ocean, containing large amounts of mineral and biological resources, yet humans have explored less than 5% of it. With the increasingly deep exploration and development of deep-sea resources and the protection of the related environments, improving the capability of fine perception and measurement of submarine topography is urgently needed; such fine perception measurement technology can also be used to detect submarine targets such as sunken ships and submarine pipelines and cables.
Sonar, widely applied to submarine topography measurement, offers accuracy and resolution only at the meter level; by comparison, optical laser scanning measurement offers millimeter-level resolution and can better meet the requirement of fine perception of submarine topography. However, with the addition of an underwater sealed cabin, light reflected underwater is refracted twice at the cabin window before entering the camera lens; the window thickness increases with the working depth of the underwater reconstruction system, and its influence on the light cannot be ignored. In traditional methods, the equipment is either placed in water and calibrated according to a camera pinhole model, or the in-air pinhole calibration result is directly compensated with a focal-length correction, the two refractions at the window being treated uniformly as camera distortion. Such pinhole-only calibration, neglecting the two refractions at a thick window, deforms the underwater three-dimensional reconstruction result and seriously reduces the perception accuracy of underwater topography.
Disclosure of Invention
The invention provides an underwater environment sensing method for fine, high-precision perception of underwater three-dimensional targets. The method can reconstruct three-dimensional data of the underwater environment with high precision, realizing high-precision measurement and perception of the underwater environment and ultimately providing data for underwater environment detection.
The technical scheme adopted by the invention for realizing the purpose is as follows: a fine three-dimensional perception system for underwater targets comprises a point cloud reconstruction camera, a line laser, a picture acquisition camera, a floodlight, a rigid cross bar, an underwater turntable and an embedded processor;
the point cloud reconstruction camera is used for acquiring a point cloud reconstruction map of the target;
the line laser is used for projecting a laser line;
the picture acquisition camera is used for acquiring a color picture of the target;
the floodlight is used for providing illumination for the picture acquisition camera;
the rigid cross bar is used for fixing the point cloud reconstruction camera, the line laser, the picture acquisition camera and the floodlight;
the underwater rotary table is connected with the rigid cross rod and is used for rotating the rigid cross rod to scan the whole target;
and the embedded processor is used for controlling the rotation of the underwater rotary table, controlling the brightness of the line laser and the floodlight, controlling the point cloud reconstruction camera and the picture acquisition camera to respectively acquire the target point cloud and the color picture, generating point cloud data in real time and transmitting the data to the outside in an off-line manner.
The picture acquisition camera, the point cloud reconstruction camera, the floodlight, the embedded processor and the line laser are respectively arranged in the sealed cabin and are fixed on the rigid cross bar; the rigid cross rod is fixed on the underwater rotary table; the picture acquisition camera and the point cloud reconstruction camera are arranged at one end of the rigid cross rod, and the line laser is arranged at the other end of the rigid cross rod.
A fine three-dimensional perception method for underwater targets comprises the following steps:
calibrating the point cloud reconstruction camera by controlling the rigid cross bar to act;
and (4) performing three-dimensional perception on the underwater target by using a point cloud reconstruction camera and a picture acquisition camera.
The method for calibrating the point cloud reconstruction camera by controlling the rigid cross bar to act comprises the following steps:
calibrating internal parameters and distortion parameters of a point cloud reconstruction camera;
calibrating the pose parameters of a front cover of a sealed cabin of a point cloud reconstruction camera;
calibrating a coordinate transformation matrix of the point cloud reconstruction camera and the rotary table;
and calibrating a laser plane equation under the point cloud reconstruction camera coordinate system.
The calibration point cloud reconstruction camera internal parameter and distortion parameter method comprises the following steps:
1) placing the point cloud reconstruction camera in the air, and removing a front cover of a sealed cabin of the point cloud reconstruction camera;
2) a point cloud reconstruction camera acquires pictures of the checkerboard in different postures;
3) respectively extracting checkerboard angular points of the collected pictures;
4) and calibrating the internal parameters and distortion of the point cloud reconstruction camera by using Zhang's calibration method according to the number of squares of the checkerboard, the size parameters and the checkerboard corner points in the pictures.
The method for calibrating the position and posture parameters of the camera sealed cabin front cover comprises the following steps:
1) measuring the thickness d1 of the point cloud reconstruction camera's sealed cabin front cover window, and obtaining the refractive index n_g of the front cover window and the refractive index n_w of the water in the use environment;
2) Placing the checkerboard right in front of the point cloud reconstruction camera and fixing;
3) the point cloud reconstruction camera is placed in the air with the front cover of its sealed cabin removed, and a checkerboard picture I_a is shot;
4) the front cover of the sealed cabin is installed, the camera is placed in water, and a checkerboard picture I_w is shot;
5) the checkerboard corner pixel coordinates C_a and C_w of pictures I_a and I_w are respectively obtained;
6) the camera incident light vectors V_a and V_w corresponding to the corner pixel coordinates C_a and C_w are respectively calculated;
7) the window interface direction V_a × V_w is calculated at each corner point; the common perpendicular of the interface directions of all corner points is the normal direction n of the front cover; the window interface is the front cover window plane, which isolates the air inside the sealed cabin from the water outside the sealed cabin;
8) the transformation matrix RT between the point cloud reconstruction camera and the checkerboard coordinate system is calculated according to the number of squares of the checkerboard, the size parameters and the checkerboard corner points of picture I_a, and the coordinates P of the checkerboard corner points in the camera coordinate system are solved;
9) the light propagation direction v_a inside the sealed cabin is obtained according to the checkerboard corner points of picture I_w and the camera intrinsics; the light propagation direction v_g inside the sealed cabin window and the propagation direction v_w of the light outside the sealed cabin are calculated from the normal direction n of the sealed cabin window using Snell's law of refraction;
10) the distance d0 from the window to the point cloud reconstruction camera is obtained through the geometric constraint that v_a passes through the origin O of the camera coordinate system and v_w passes through the checkerboard corner coordinates P, given the window normal direction n and the window thickness d1.
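As an illustration of step 7), the front cover normal n can be recovered as the common perpendicular of all the per-corner interface directions V_a × V_w, e.g. as the right singular vector of least singular value of the stacked directions. The following is a minimal numpy sketch on synthetic data (the ground-truth normal and the simulated directions are assumptions for this check, not values from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed ground-truth window normal (synthetic check only).
n_true = np.array([0.1, -0.2, 1.0])
n_true /= np.linalg.norm(n_true)

# Step 7: each corner yields an interface direction V_a x V_w that lies in
# the window plane, i.e. is perpendicular to the front cover normal n.
# Simulate such directions (with a little noise) instead of real image data.
dirs = np.cross(n_true, rng.normal(size=(50, 3)))
dirs += 1e-3 * rng.normal(size=dirs.shape)

# The common perpendicular of all interface directions is the right
# singular vector with the smallest singular value.
_, _, Vt = np.linalg.svd(dirs)
n_est = Vt[-1]

print(abs(np.dot(n_est, n_true)))  # close to 1.0 (sign is ambiguous)
```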
The calibration point cloud reconstruction camera and turntable coordinate transformation matrix comprises the following steps:
1) placing the point cloud reconstruction camera in the air, and removing a front cover of a sealed cabin of the point cloud reconstruction camera;
2) the checkerboard is fixed in front of the camera, the rotary table is rotated, and pictures of the checkerboard at different positions in the field range of the camera are collected;
3) extracting the pixel coordinates of the checkerboard corner points in the collected picture;
4) calculating a conversion matrix between a point cloud reconstruction camera coordinate system and a checkerboard coordinate system under each picture according to the pixel coordinates of the checkerboard angular points in the collected pictures;
5) according to the conversion matrix, coordinate points of the origin of the point cloud reconstruction camera coordinate system under each acquired picture under the checkerboard coordinate system are worked out, and a fitting circular equation formed by the coordinate points relative to the checkerboard is worked out;
6) obtaining coordinates of the checkerboard angular points under a point cloud reconstruction camera coordinate system;
7) obtaining coordinates of the checkerboard angular points under a coordinate system of the rotary table;
8) and solving a conversion matrix of the point cloud reconstruction camera and the turntable coordinate system according to coordinate values of the checkered corner points under the point cloud reconstruction camera coordinate system and the turntable coordinate system.
The laser plane equation under the coordinate system of the calibration point cloud reconstruction camera comprises the following steps:
1) the point cloud reconstruction camera is installed on a front cover of a sealed cabin and is placed in water;
2) the checkerboard is moved back and forth relative to the point cloud reconstruction camera within its field of view, ensuring that the line laser falls on the checkerboard at every position; at each position a laser-off picture I_c and a laser-on picture I_l are collected, forming a picture pair;
3) extracting the checkerboard corner points of picture I_c and calculating the transformation matrix between the point cloud reconstruction camera coordinate system and the checkerboard coordinate system;
4) converting the checkerboard corner coordinates into three-dimensional points using the transformation matrix, calculating the back-projection coordinates of the checkerboard corner points using the window refraction parameters d0, d1, v_a, v_g, v_w, n and the point cloud reconstruction camera intrinsics, and iteratively correcting the transformation matrix to minimize the error between the original checkerboard corner coordinates and the back-projected coordinates;
5) solving the equation of the plane containing the checkerboard from the corner coordinates and the corrected transformation matrix;
6) extracting the line laser center pixel coordinates of picture I_l;
7) solving the underwater ray equation corresponding to each line laser center pixel coordinate;
8) intersecting each underwater ray with the checkerboard plane to obtain the three-dimensional coordinates of the laser line;
9) and fitting the laser line three-dimensional coordinates from all picture pairs to a plane equation, namely the laser plane equation.
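Step 9) is a standard least-squares plane fit. A minimal numpy sketch (the sample laser-line points below are assumed synthetic data, not patent values):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns unit normal n and offset d such
    that n . p + d ~ 0 for every point p (the laser plane of step 9)."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]                       # direction of least variance
    return n, -np.dot(n, centroid)

# Synthetic laser-line points on the plane 0.1*x - z + 0.5 = 0.
rng = np.random.default_rng(1)
x = np.linspace(-0.2, 0.2, 40)
y = rng.uniform(-0.1, 0.1, 40)
pts = np.column_stack([x, y, 0.5 + 0.1 * x])

n, d = fit_plane(pts)
print(np.abs(pts @ n + d).max())    # residual ~ 0
```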
The method for three-dimensional perception of the underwater target by adopting the point cloud reconstruction camera and the picture acquisition camera comprises the following steps:
1) performing underwater target local single line reconstruction through a point cloud reconstruction camera;
2) the rigid cross rod is controlled by the rotation of the rotary table to carry out underwater target scanning reconstruction;
3) and carrying out underwater target color picture shooting and storage through synchronous illumination of the picture acquisition camera and the floodlight.
The method for reconstructing the underwater target local single line through the point cloud reconstruction camera comprises the following steps:
1) a point cloud reconstruction camera shoots a target picture, and line laser synchronously projects line laser during shooting;
2) the embedded processor extracts a line laser center of the picture;
3) the embedded processor corrects the camera's incident light in the water according to the line laser center coordinates, the window refraction parameters d0, d1, n, and the point cloud reconstruction camera intrinsics;
4) the embedded processor reconstructs a single-line point cloud of a line laser irradiation target according to the incident light of the camera in the water and a laser plane equation;
5) and the embedded processor stores the point cloud and the picture result in real time.
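The core of step 4) is intersecting the refraction-corrected in-water ray with the calibrated laser plane. A minimal sketch, with an assumed example plane and refraction exit point (not values from the patent):

```python
import numpy as np

def intersect_ray_plane(origin, direction, n, d):
    """Intersect the corrected in-water ray origin + t * direction with
    the laser plane n . p + d = 0 (single-line reconstruction, step 4)."""
    t = -(np.dot(n, origin) + d) / np.dot(n, direction)
    return origin + t * direction

# Assumed example: laser plane z = 1 (n = (0,0,1), d = -1); the ray starts
# at the glass-water exit point p_gw along the refracted direction v_w.
p_gw = np.array([0.0, 0.0, 0.06])
v_w = np.array([0.1, 0.0, 1.0])
v_w /= np.linalg.norm(v_w)

point = intersect_ray_plane(p_gw, v_w, np.array([0.0, 0.0, 1.0]), -1.0)
print(point)  # lands on the plane z = 1
```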
The method for scanning and reconstructing the underwater target by controlling the rigid cross rod through the rotation of the rotary table comprises the following steps:
1) the turntable rotates to control the rigid cross rods to rotate at fixed intervals, so that the line laser scanning of the whole target is ensured;
2) performing local single line reconstruction on the underwater target at each fixed angle;
3) the embedded processor splices a single line reconstruction point cloud according to the fixed rotation angle and a coordinate transformation matrix of the point cloud reconstruction camera and the rotary table to generate a target point cloud;
4) and the embedded processor stores the point cloud and the picture result in real time.
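The splicing of step 3) amounts to rotating each single-line cloud about the turntable axis by its scan angle and concatenating the results. A hedged numpy sketch, assuming each cloud has already been expressed in the turntable frame via the calibrated camera-turntable transform:

```python
import numpy as np

def axis_angle_rotation(axis, theta):
    """Rodrigues rotation matrix about a unit axis by angle theta."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def splice_scans(single_lines, axis, step_deg):
    """Rotate the i-th single-line cloud by i * step_deg about the
    turntable axis and stack everything into one target cloud."""
    parts = []
    for i, cloud in enumerate(single_lines):
        R = axis_angle_rotation(axis, np.deg2rad(i * step_deg))
        parts.append(cloud @ R.T)
    return np.vstack(parts)

# Assumed toy scan: the same line observed at 0 and 90 degrees.
line = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.5]])
cloud = splice_scans([line, line], axis=[0, 0, 1], step_deg=90.0)
print(cloud.round(6))
```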
The invention has the following beneficial effects and advantages:
1. the invention can reconstruct the underwater environment in real time, so that three-dimensional data of the underwater environment are measured in real time, providing data for underwater environment detection;
2. according to the invention, a high-precision equipment calibration method is adopted, so that the influence of water body refraction on the reconstruction precision is reduced, and the reconstruction precision of an underwater environment is improved;
3. the invention adopts a split structure, and can change the system structure according to the measurement requirement and the carrying equipment, thereby improving the measurement capability and the deployment convenience.
Drawings
FIG. 1 is a schematic structural diagram of a fine three-dimensional perception system for underwater targets according to the invention;
FIG. 2 is a schematic diagram of a hardware structure of an underwater target fine three-dimensional sensing system of the invention;
the system comprises a camera, a floodlight, an embedded processor, a rigid cross bar and an underwater turntable, wherein 1 is a picture acquisition camera, 2 is a point cloud reconstruction camera, 3 is a floodlight, 4 is the embedded processor, 5 is a line laser, 6 is the rigid cross bar and 7 is the underwater turntable;
FIG. 3 is a schematic view of a refraction calibration principle;
where O is the origin of the camera coordinate system, P is the object point in the camera coordinate system, d0 is the window-to-camera distance, d1 is the thickness of the sealed cabin window, v_a is the light propagation direction inside the sealed cabin, v_g is the light propagation direction inside the sealed cabin window, v_w is the light propagation direction outside the sealed cabin, and n is the normal direction of the sealed cabin window;
FIG. 4 is a schematic diagram of an underwater three-dimensional reconstruction;
wherein 2 is the point cloud reconstruction camera, 5 is the line laser, O is the origin of the camera coordinate system, P is the object point in the camera coordinate system, v_a is the light propagation direction inside the sealed cabin, v_g is the light propagation direction inside the sealed cabin window, v_w is the light propagation direction outside the sealed cabin, and r is the light plane of the line laser in the water;
FIG. 5 is a flow chart of a method for calibrating a device.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The device comprises an underwater-sealed picture acquisition camera, point cloud reconstruction camera, floodlight, embedded processor and line laser, fixed in sequence on a rigid cross bar; the rigid cross bar is fixed on an underwater turntable, and the relative positions of the point cloud reconstruction camera and the line laser are fixed. The equipment is calibrated before target reconstruction: the internal parameters and distortion parameters of the point cloud reconstruction camera are calibrated with the device placed in the air; the pose parameters of the camera's sealed cabin front cover are calibrated from pictures taken both in the air and in the water; the coordinate transformation matrix between the camera and the turntable is calibrated in the air; and the laser plane equation in the camera coordinate system is calibrated in the water. During target reconstruction the line laser projects its laser line synchronously with each point cloud reconstruction camera exposure, and the embedded processor reconstructs a single-line point cloud from the scan picture and the calibration parameters. The turntable rotates the rigid cross bar at fixed intervals to scan the whole target, and the embedded processor splices all single-line point clouds according to the calibration parameters to generate the target point cloud. The floodlight illuminates synchronously with the picture acquisition camera, which shoots a color picture of the target. The embedded processor stores the point cloud and picture results in real time, and can transmit the data results to the outside offline.
According to the invention, the underwater target is finely sensed by an underwater laser three-dimensional reconstruction technology.
The basic structure of the present invention is shown in fig. 1. The fine three-dimensional sensing hardware system for underwater targets is constructed mainly from an underwater-sealed point cloud reconstruction camera, a line laser, a picture acquisition camera, a floodlight, a rigid cross bar, an underwater turntable and an embedded processor; the software method covers equipment calibration that accounts for window refraction, single-line point cloud reconstruction, target point cloud splicing and color picture shooting.
The hardware system of the invention is shown in figure 2, a picture collecting camera 1, a point cloud reconstruction camera 2, a floodlight 3, an embedded processor 4 and a line laser 5 are all fixed on a rigid cross bar 6 in sequence; the rigid cross bar is fixed on the underwater rotary table 7; the relative positions of the point cloud reconstruction camera 2 and the line laser 5 are fixed.
The picture acquisition camera 1 is a color camera, placed in an underwater sealed cabin and fixed on the rigid cross bar 6 via the point cloud reconstruction camera 2. The point cloud reconstruction camera 2 is a black-and-white camera, placed in an underwater sealed cabin and fixed on the rigid cross bar 6. If the point cloud reconstruction camera is used in an outdoor shallow-water area or another area with bright ambient light, a band-pass interference filter can be added in front of the camera; the short-wavelength cut-off of the passband is chosen according to the camera's field of view, taking into account the blue-shift effect of a band-pass interference filter at oblique incidence. The picture acquisition camera 1 and the point cloud reconstruction camera 2 are each connected to the embedded processor 4 through a gigabit Ethernet cable, a trigger line and a power cable. The embedded processor 4 controls both cameras over gigabit Ethernet and receives the collected picture data; the trigger lines pass the trigger signals that the cameras emit synchronously at exposure to the circuit board of the embedded processor 4, in order to trigger the floodlight 3 and the line laser 5 respectively.
The floodlight 3 is a white LED array capable of emitting a high-brightness flash, and the line laser 5 is a green or blue high-power line laser emitter; both are sealed underwater and fixed on the rigid cross bar 6. The floodlight 3 and the line laser 5 are each connected to the embedded processor 4 through a serial line and a power cable; the embedded processor 4 configures the brightness parameters of the floodlight 3 and the line laser 5 over the serial lines, and the circuit board of the embedded processor 4 routes the trigger lines generated by the picture acquisition camera 1 and the point cloud reconstruction camera 2 to the trigger input ports of the floodlight 3 and the line laser 5 respectively. The floodlight 3 uniformly illuminates the area in front, covering the field of view of the picture acquisition camera 1, during that camera's exposure; the line laser 5 projects a vertical laser line within the field of view of the point cloud reconstruction camera 2 during that camera's exposure.
The underwater rotary table 7 is fixed at the bottom of the rigid cross rod 6, and underwater sealing is already performed. The underwater rotary table 7 is connected with the embedded processor 4 through a group of serial port lines and a power line, the embedded processor 4 controls the rotation of the underwater rotary table 7 through the serial port lines, and the underwater rotary table 7 drives the whole device to rotate through the rigid cross rod.
The embedded processor 4 is placed in an underwater sealed cabin, fixed on a rigid cross bar 6 and connected to the water surface through a transmission cable so as to supply power and transmit data off line. The embedded processor 4 is used for controlling other underwater equipment, processing and storing data in real time.
The flow chart of the device calibration method of the invention is shown in fig. 5. Firstly, calibrating internal parameters and distortion parameters of a point cloud reconstruction camera; then calibrating the pose parameters of the front cover of the point cloud reconstruction camera; calibrating a coordinate transformation matrix of the point cloud reconstruction camera and the rotary table; and finally calibrating a laser plane equation under the point cloud reconstruction camera coordinate system.
When calibrating the internal parameters and distortion parameters of the point cloud reconstruction camera, the specific implementation steps are as follows:
(1) removing a front cover of a sealed cabin of the point cloud reconstruction camera, and placing the point cloud reconstruction camera in the air;
(2) the checkerboard is placed in different poses at different positions within the field of view of the point cloud reconstruction camera, and the camera shoots a picture at each pose;
(3) respectively extracting the checkerboard angular points in each acquired picture;
(4) and calibrating the internal parameters and distortion of the point cloud reconstruction camera by using Zhang's calibration method according to the number and size of the checkerboard squares and the checkerboard corner points in the pictures.
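For reference, the model that Zhang's calibration method fits in step (4) is pinhole projection with radial distortion. A minimal numpy sketch of this forward model (the intrinsic matrix and distortion values below are illustrative assumptions; in practice a library routine such as OpenCV's calibrateCamera performs the actual fit):

```python
import numpy as np

def project_point(K, dist, X_cam):
    """Project a 3-D point in camera coordinates to pixel coordinates
    using the pinhole model with radial distortion (k1, k2) -- the model
    that Zhang's calibration fits from checkerboard corners."""
    x, y = X_cam[0] / X_cam[2], X_cam[1] / X_cam[2]   # normalized image plane
    k1, k2 = dist
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2             # radial distortion
    xd, yd = x * factor, y * factor
    u = K[0, 0] * xd + K[0, 2]                        # apply intrinsics
    v = K[1, 1] * yd + K[1, 2]
    return np.array([u, v])

# Assumed illustrative intrinsics (not values from the patent).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
print(project_point(K, (0.0, 0.0), np.array([0.1, 0.05, 1.0])))  # → [400. 280.]
```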
As shown in fig. 3, when calibrating the pose parameters of the point cloud reconstruction camera front cover, the specific implementation steps are as follows:
(1) measuring the thickness d of the window of the front cover of the sealed cabin1Measuring or finding refractive index n of window of front cover of sealed cabingAnd the refractive index n of water in the environment of usew
(2) Placing the checkerboard right in front of the point cloud reconstruction camera and fixing, and ensuring that the checkerboard is filled with the field of view of the camera in water as much as possible;
(3) removing the front cover of the sealed cabin of the point cloud reconstruction camera, placing the camera in the air, and shooting a checkerboard picture Ia
(4) Installing a point cloud reconstruction camera sealed cabin front cover, placing the point cloud reconstruction camera sealed cabin front cover in water, and shooting a checkerboard picture Iw
(5) respectively extracting the checkerboard corner pixel coordinates Ca and Cw from pictures Ia and Iw;
(6) multiplying the checkerboard corner pixel coordinates by the camera internal parameters to obtain the corresponding camera incident-light vectors Va and Vw;
(7) computing Va × Vw to obtain the window interface direction corresponding to each checkerboard corner; the calculated normal is the common normal of the air-glass interface and the glass-water interface of the window, which are taken to be parallel;
(8) the common perpendicular of the interface directions corresponding to all the checkerboard corners is the normal direction n of the front cover;
(9) from the number and size of the checkerboard squares and the checkerboard corners in picture Ia, calculating the transformation matrix RT between the camera and the checkerboard coordinate system;
(10) multiplying the checkerboard corner coordinates in the checkerboard coordinate system by the transformation matrix RT to obtain the corner points P in the camera coordinate system;
(11) multiplying the checkerboard corners of picture Iw by the camera internal parameters to obtain the light propagation direction va inside the sealed cabin;
(12) from the normal direction n of the sealed-cabin window, calculating the light propagation direction vg inside the window and the propagation direction vw outside the cabin using Fresnel's law;
(13) from the origin O, the direction va and the window normal direction n, the intersection point p_ag of the ray with the air-glass interface is obtained; its coordinate is a function of the distance d0;
(14) from the point p_ag, the direction vg and the window normal direction n, the intersection point p_gw of the ray with the glass-water interface is obtained; its coordinate is likewise a function of d0;
(15) since the direction of the segment from P to p_gw is the same as vw, the distance d0 from the window to the camera can be solved.
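The geometry of steps (11) through (15) can be sketched in code. The following is a minimal numpy illustration, not the patent's own implementation: `refract` applies the vector form of the refraction law (Snell's law, which the text refers to as Fresnel's law), and `solve_d0` uses the fact that p_gw is linear in d0, so the collinearity constraint of step (15) reduces to a linear least-squares solve. All function and variable names are illustrative.

```python
import numpy as np

def refract(d, n, eta):
    """Refraction direction at a flat interface (Snell's law in vector form;
    the text calls this the Fresnel law). eta = n_in / n_out; total internal
    reflection is not handled."""
    d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    if cos_i < 0:                    # orient the normal against the ray
        n, cos_i = -n, -cos_i
    sin2_t = eta**2 * (1.0 - cos_i**2)
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

def solve_d0(v_a, normal, d1, n_g, n_w, P):
    """Steps (11)-(15) for one corner: v_a is the in-cabin ray from picture
    Iw, normal/d1 the window normal and thickness, n_g/n_w the glass and
    water refractive indices, P the corner's 3D position from picture Ia."""
    v_a = v_a / np.linalg.norm(v_a)
    n = normal / np.linalg.norm(normal)
    v_g = refract(v_a, n, 1.0 / n_g)         # air -> glass
    v_w = refract(v_g, n, n_g / n_w)         # glass -> water
    # p_ag = d0 * A and p_gw = d0 * A + d1 * B are linear in d0
    A = v_a / np.dot(n, v_a)
    B = v_g / np.dot(n, v_g)
    # step (15): (P - p_gw) must be parallel to v_w, i.e. their cross
    # product vanishes; this is linear in d0, solved in least squares
    lhs = np.cross(A, v_w)
    rhs = np.cross(P - d1 * B, v_w)
    return float(np.dot(lhs, rhs) / np.dot(lhs, lhs))
```

In practice d0 would be solved jointly (or averaged) over all checkerboard corners rather than from a single one.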
When calibrating a coordinate transformation matrix of a point cloud reconstruction camera and a turntable, the method comprises the following specific implementation steps:
(1) removing a front cover of a sealed cabin of the point cloud reconstruction camera, and placing the equipment in the air;
(2) the checkerboard is fixed in front of the camera, and the rotary table is rotated;
(3) the camera respectively shoots pictures at different positions of the checkerboard within the field range of the camera;
(4) respectively extracting the pixel coordinates of the checkerboard angular points in each picture;
(5) calculating a conversion matrix between the camera and the checkerboard coordinate system under each picture;
(6) solving a coordinate point of the camera origin under the checkerboard coordinate system under each picture;
(7) fitting a space circular equation by using a coordinate point of the origin of the camera;
(8) generating coordinates of the checkerboard angular points in a camera coordinate system;
(9) converting coordinates of the generated checkerboard angular points in a camera coordinate system into coordinates in a turntable coordinate system by using a space circular equation;
(10) solving the transformation matrix between the camera and the turntable coordinate system from the checkerboard corner coordinates in the camera and turntable coordinate systems.
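Steps (5) through (7), solving the camera origins in the checkerboard frame and fitting a space circle through them, can be sketched as follows. This is an illustrative numpy implementation, not the patent's code: the circle's plane is found by SVD and the in-plane circle by an algebraic (Kasa) fit; the fitted axis is the turntable rotation axis used in the remaining steps.

```python
import numpy as np

def fit_space_circle(points):
    """Fit a circle in 3D to the camera-origin positions (step (7)).
    Returns (center, axis, radius); names are illustrative."""
    P = np.asarray(points, float)
    centroid = P.mean(axis=0)
    # plane normal = direction of least variance of the origin positions
    _, _, Vt = np.linalg.svd(P - centroid)
    axis = Vt[2]
    u, v = Vt[0], Vt[1]                       # orthonormal in-plane basis
    xy = np.c_[(P - centroid) @ u, (P - centroid) @ v]
    # Kasa fit: x^2 + y^2 = 2a x + 2b y + c
    A = np.c_[2 * xy, np.ones(len(xy))]
    sol, *_ = np.linalg.lstsq(A, (xy**2).sum(axis=1), rcond=None)
    a, b, c = sol
    radius = np.sqrt(c + a * a + b * b)
    center = centroid + a * u + b * v
    return center, axis, radius
```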
When a laser plane equation under a point cloud reconstruction camera coordinate system is calibrated, the specific implementation steps are as follows:
(1) installing a point cloud reconstruction camera sealed cabin front cover, and putting the equipment in water;
(2) the checkerboard moves back and forth within the field of view of the point cloud reconstruction camera, ensuring that the line laser irradiates the checkerboard at each position;
(3) at each checkerboard position, the line laser is switched off and the point cloud reconstruction camera collects picture Ic; the laser is then switched on and the camera collects picture Il, the two forming a picture pair;
(4) extracting the checkerboard corners of picture Ic;
(5) calculating a conversion matrix between the camera and the checkerboard coordinate system;
(6) the checkerboard corner coordinates are converted into spatial three-dimensional points using the transformation matrix; the back-projection coordinates of the checkerboard corners are then obtained from the window refraction parameters d0, d1, va, vg, vw, n and the camera internal parameters;
(7) the Levenberg-Marquardt algorithm is used for iterative optimization of a transformation matrix, so that the error between the original checkerboard corner coordinates and the back projection coordinates is minimized;
(8) solving a checkerboard equation according to the coordinates of the checkerboard angular points and the corrected conversion matrix;
(9) extracting the line-laser center pixel coordinates of picture Il;
(10) solving an underwater ray equation corresponding to the central pixel coordinate of the line laser;
(11) the intersection point of the ray equation and the checkerboard equation is a three-dimensional coordinate of the line laser center;
(12) fitting the line-laser center three-dimensional coordinates of all the pictures into a plane equation, namely the laser plane equation.
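Step (12), fitting all the collected line-laser center points to a single plane, is a total-least-squares fit; a minimal numpy sketch, assuming an (N, 3) point array and illustrative names:

```python
import numpy as np

def fit_laser_plane(points):
    """Fit the laser plane to the 3D line-laser center points gathered from
    all picture pairs. Returns (n, d) with the plane n . x = d, |n| = 1."""
    P = np.asarray(points, float)
    centroid = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - centroid)
    n = Vt[2]                        # normal = least-variance direction
    return n, float(np.dot(n, centroid))
```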
The underwater three-dimensional reconstruction comprises local single-line reconstruction and scanning reconstruction of the underwater target. The basic principle is shown in fig. 4: the ray equation of the incident light va in the air can be solved using the camera internal parameters and the laser-center pixel in the picture; using the window parameters, the ray equations of the incident light vg in the window and the incident light vw in the water are calculated by Fresnel's law; the underwater laser plane equation of the laser r emitted by the line laser is obtained through calibration, and the spatial coordinate of the line-plane intersection point P, namely the three-dimensional spatial position of the reconstructed target point, is solved.
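The principle of fig. 4 can be illustrated with a short sketch. Names are hypothetical; `refract` implements the direction form of Snell's law that the text calls Fresnel's law:

```python
import numpy as np

def refract(d, n, eta):
    """Refraction direction at a flat interface; eta = n_in / n_out.
    Total internal reflection is not handled."""
    d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    if cos_i < 0:
        n, cos_i = -n, -cos_i
    sin2_t = eta**2 * (1.0 - cos_i**2)
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

def reconstruct_point(pixel, K, n, d0, d1, n_g, n_w, plane_n, plane_d):
    """One laser-center pixel to one 3D point, per the fig. 4 principle.
    n, d0, d1, n_g, n_w: window normal, camera-window distance, window
    thickness and refractive indices; plane_n . x = plane_d is the
    calibrated underwater laser plane."""
    v_a = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    v_a /= np.linalg.norm(v_a)                   # ray inside the cabin
    v_g = refract(v_a, n, 1.0 / n_g)             # air -> glass
    v_w = refract(v_g, n, n_g / n_w)             # glass -> water
    p_ag = (d0 / np.dot(n, v_a)) * v_a           # hit on air-glass plane
    p_gw = p_ag + (d1 / np.dot(n, v_g)) * v_g    # hit on glass-water plane
    t = (plane_d - np.dot(plane_n, p_gw)) / np.dot(plane_n, v_w)
    return p_gw + t * v_w                        # line-plane intersection P
```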
The specific implementation steps are as follows:
(1) the point cloud reconstruction camera shoots a target picture, and the line laser synchronously projects the line laser when the point cloud reconstruction camera shoots;
(2) the embedded processor extracts a line laser center of a shot picture;
(3) the embedded processor multiplies the camera internal parameters by the line laser central coordinates to obtain camera incident rays in the sealed cabin;
(4) using Fresnel's law and the window refraction parameters d0, d1, n, calculating the camera incident rays in the water corresponding to the camera incident rays in the sealed cabin;
(5) the embedded processor reconstructs a single-line point cloud according to the incident light of the camera in the water and a laser plane equation;
(6) if the system works in the single-line reconstruction mode, namely the turntable is stationary and the three-dimensional point cloud corresponding to one laser line is reconstructed from one picture, the embedded processor stores the point cloud and picture results and the reconstruction process ends;
(7) if the system works in the scanning mode, namely the turntable rotates and the three-dimensional point cloud of the whole scene is reconstructed from a series of pictures, the embedded processor splices the single-line reconstruction point clouds to generate the target point cloud;
(8) if the scanning work is finished, the embedded processor stores the point cloud and picture results and ends the reconstruction process;
(9) if scanning is to continue, the turntable rotates by the fixed angle and the process starts again from step (1).
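The splicing of step (7) can be sketched as follows, assuming the calibrated camera-to-turntable transform (R_ct, t_ct), a turntable axis aligned with z, and a fixed angular step between stops; all names are illustrative:

```python
import numpy as np

def rot_z(theta):
    """Rotation about the turntable axis (taken as z, an assumption)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def stitch_scans(scans, step_deg, R_ct, t_ct):
    """Splice single-line point clouds into one target cloud.
    scans: list of (N, 3) arrays in the camera frame, one per turntable
    stop; step_deg: fixed rotation between stops; R_ct, t_ct: the
    calibrated camera-to-turntable transform."""
    merged = []
    for k, pts in enumerate(scans):
        in_table = pts @ R_ct.T + t_ct            # camera -> turntable frame
        theta = np.deg2rad(k * step_deg)
        merged.append(in_table @ rot_z(theta).T)  # account for table rotation
    return np.vstack(merged)
```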
When the underwater target color picture is shot, the picture collecting camera shoots the color picture of the target, the floodlight synchronously illuminates when shooting, and the embedded processor stores the target color picture.

Claims (5)

1. A fine three-dimensional perception system for underwater targets is characterized by comprising a point cloud reconstruction camera, a line laser, a picture acquisition camera, a floodlight, a rigid cross bar, an underwater turntable and an embedded processor;
the point cloud reconstruction camera is used for acquiring a point cloud reconstruction map of the target;
a line laser for projecting a line laser;
the picture acquisition camera is used for acquiring a color picture of the target;
the floodlight is used for providing illumination for the picture acquisition camera;
the rigid cross bar is used for fixing the point cloud reconstruction camera, the line laser, the picture acquisition camera and the floodlight;
the underwater rotary table is connected with the rigid cross rod and is used for rotating the rigid cross rod to scan the whole target;
the embedded processor is used for controlling the rotation of the underwater rotary table, controlling the brightness of the line laser and the floodlight, controlling the point cloud reconstruction camera and the picture acquisition camera to respectively acquire a target point cloud and a color picture, generating point cloud data in real time and transmitting the data to the outside in an off-line manner;
the picture acquisition camera, the point cloud reconstruction camera, the floodlight, the embedded processor and the line laser are respectively arranged in the sealed cabin and are fixed on the rigid cross bar; the rigid cross rod is fixed on the underwater rotary table; the picture acquisition camera and the point cloud reconstruction camera are arranged at one end of the rigid cross rod, and the line laser is arranged at the other end of the rigid cross rod;
calibrating the point cloud reconstruction camera by controlling the rigid cross bar to act;
using a point cloud reconstruction camera and a picture acquisition camera to perform underwater target three-dimensional perception;
the method for calibrating the point cloud reconstruction camera by controlling the rigid cross bar to act comprises the following steps:
calibrating internal parameters and distortion parameters of a point cloud reconstruction camera;
calibrating the pose parameters of a front cover of a sealed cabin of a point cloud reconstruction camera;
calibrating a coordinate transformation matrix of the point cloud reconstruction camera and the rotary table;
calibrating a laser plane equation under a point cloud reconstruction camera coordinate system;
the method for carrying out underwater target three-dimensional perception by using the point cloud reconstruction camera and the picture acquisition camera comprises the following steps:
1) performing underwater target local single line reconstruction through a point cloud reconstruction camera;
2) the rigid cross rod is controlled by the rotation of the rotary table to carry out underwater target scanning reconstruction;
3) carrying out underwater target color picture shooting and storing through synchronous lighting of a picture acquisition camera and a floodlight;
the method for calibrating the position and posture parameters of the camera sealed cabin front cover comprises the following steps:
1) measuring the window thickness d1 of the sealed-cabin front cover of the point cloud reconstruction camera, and obtaining the refractive index ng of the front-cover window and the refractive index nw of the water in the use environment;
2) Placing the checkerboard right in front of the point cloud reconstruction camera and fixing;
3) the point cloud reconstruction camera is placed in the air, the front cover of its sealed cabin is removed, and a checkerboard picture Ia is shot;
4) the point cloud reconstruction camera is fitted with the sealed-cabin front cover and placed in water, and a checkerboard picture Iw is shot;
5) respectively obtaining the checkerboard corner pixel coordinates Ca and Cw of pictures Ia and Iw;
6) respectively calculating the camera incident-light vectors Va and Vw corresponding to the corner pixel coordinates Ca and Cw;
7) calculating the window interface direction Va × Vw for each corner point; the common perpendicular of the interface directions of all the corner points is the normal direction n of the sealed-cabin front-cover window; the window interface is the window plane of the sealed-cabin front cover, which isolates the air inside the sealed cabin from the water outside it;
8) from the number and size of the checkerboard squares and the checkerboard corners in picture Ia, calculating the transformation matrix RT between the point cloud reconstruction camera and the checkerboard coordinate system, and solving the coordinates P of the checkerboard corners in the camera coordinate system;
9) from the checkerboard corners of picture Iw and the camera internal parameters, obtaining the light propagation direction va inside the sealed cabin; from the normal direction n of the sealed-cabin front-cover window, calculating the light propagation direction vg inside the window and the propagation direction vw outside the cabin using Fresnel's law;
10) from va passing through the origin O of the camera coordinate system, vw passing through the checkerboard corner coordinate P, the normal direction n of the sealed-cabin front-cover window and the window thickness d1, the distance d0 from the front-cover window to the point cloud reconstruction camera is obtained through the geometric constraint relation;
Calibrating the laser plane equation under the point cloud reconstruction camera coordinate system comprises the following steps:
1) the point cloud reconstruction camera is installed on a front cover of a sealed cabin and is placed in water;
2) the checkerboard moves back and forth relative to the point cloud reconstruction camera within the camera's field of view, ensuring that the line laser irradiates the checkerboard at each position; at each position a picture Ic with the line laser off and a picture Il with the line laser on are collected, forming a picture pair;
3) extracting the checkerboard corners of picture Ic and calculating the transformation matrix between the point cloud reconstruction camera coordinate system and the checkerboard coordinate system;
4) converting the checkerboard corner coordinates into spatial three-dimensional points using the transformation matrix; calculating the back-projection coordinates of the checkerboard corners using the refraction parameters d0, d1, va, vg, vw, n of the sealed-cabin front-cover window and the internal parameters of the point cloud reconstruction camera; and iteratively correcting the transformation matrix to minimize the error between the original checkerboard corner coordinates and the back-projection coordinates;
5) solving an equation of a plane where the checkerboards are located according to the coordinates of the angular points of the checkerboards and the corrected conversion matrix;
6) extracting the line-laser center pixel coordinates of picture Il;
7) solving an underwater ray equation corresponding to the central pixel coordinate of the line laser;
8) the intersection point of the underwater ray equation and the plane where the checkerboard is located is a line laser three-dimensional coordinate;
9) fitting the linear laser three-dimensional coordinates of all the pictures into a plane equation, namely a laser plane equation;
the method for reconstructing the underwater target local single line through the point cloud reconstruction camera comprises the following steps:
1) a point cloud reconstruction camera shoots a target picture, and line laser synchronously projects line laser during shooting;
2) the embedded processor extracts a line laser center of the picture;
3) the embedded processor solves the camera incident ray in the water from the line-laser center coordinates, the refraction parameters d0, d1, n of the sealed-cabin front-cover window, and the internal parameters of the point cloud reconstruction camera;
4) the embedded processor reconstructs a single-line point cloud of a line laser irradiation target according to the incident light of the camera in the water and a laser plane equation;
5) the embedded processor stores the point cloud and picture results in real time.
2. A fine three-dimensional perception method for underwater targets is characterized by comprising the following steps:
calibrating the point cloud reconstruction camera by controlling the rigid cross bar to act;
using a point cloud reconstruction camera and a picture acquisition camera to perform underwater target three-dimensional perception;
the method for calibrating the point cloud reconstruction camera by controlling the rigid cross bar to act comprises the following steps:
calibrating internal parameters and distortion parameters of a point cloud reconstruction camera;
calibrating the pose parameters of a front cover of a sealed cabin of a point cloud reconstruction camera;
calibrating a coordinate transformation matrix of the point cloud reconstruction camera and the rotary table;
calibrating a laser plane equation under a point cloud reconstruction camera coordinate system;
the method for carrying out underwater target three-dimensional perception by using the point cloud reconstruction camera and the picture acquisition camera comprises the following steps:
1) performing underwater target local single line reconstruction through a point cloud reconstruction camera;
2) the rigid cross rod is controlled by the rotation of the rotary table to carry out underwater target scanning reconstruction;
3) carrying out underwater target color picture shooting and storing through synchronous lighting of a picture acquisition camera and a floodlight;
the method for calibrating the position and posture parameters of the camera sealed cabin front cover comprises the following steps:
1) measuring the window thickness d1 of the sealed-cabin front cover of the point cloud reconstruction camera, and obtaining the refractive index ng of the front-cover window and the refractive index nw of the water in the use environment;
2) Placing the checkerboard right in front of the point cloud reconstruction camera and fixing;
3) the point cloud reconstruction camera is placed in the air, the front cover of its sealed cabin is removed, and a checkerboard picture Ia is shot;
4) the point cloud reconstruction camera is fitted with the sealed-cabin front cover and placed in water, and a checkerboard picture Iw is shot;
5) respectively obtaining the checkerboard corner pixel coordinates Ca and Cw of pictures Ia and Iw;
6) respectively calculating the camera incident-light vectors Va and Vw corresponding to the corner pixel coordinates Ca and Cw;
7) calculating the window interface direction Va × Vw for each corner point; the common perpendicular of the interface directions of all the corner points is the normal direction n of the sealed-cabin front-cover window; the window interface is the window plane of the sealed-cabin front cover, which isolates the air inside the sealed cabin from the water outside it;
8) from the number and size of the checkerboard squares and the checkerboard corners in picture Ia, calculating the transformation matrix RT between the point cloud reconstruction camera and the checkerboard coordinate system, and solving the coordinates P of the checkerboard corners in the camera coordinate system;
9) from the checkerboard corners of picture Iw and the camera internal parameters, obtaining the light propagation direction va inside the sealed cabin; from the normal direction n of the sealed-cabin front-cover window, calculating the light propagation direction vg inside the window and the propagation direction vw outside the cabin using Fresnel's law;
10) from va passing through the origin O of the camera coordinate system, vw passing through the checkerboard corner coordinate P, the normal direction n of the sealed-cabin front-cover window and the window thickness d1, the distance d0 from the front-cover window to the point cloud reconstruction camera is obtained through the geometric constraint relation;
Calibrating the laser plane equation under the point cloud reconstruction camera coordinate system comprises the following steps:
1) the point cloud reconstruction camera is installed on a front cover of a sealed cabin and is placed in water;
2) the checkerboard moves relative to the point cloud reconstruction camera within the camera's field of view, ensuring that the line laser irradiates the checkerboard at each position; at each position a picture Ic with the line laser off and a picture Il with the line laser on are collected, forming a picture pair;
3) extracting the checkerboard corners of picture Ic and calculating the transformation matrix between the point cloud reconstruction camera coordinate system and the checkerboard coordinate system;
4) converting the checkerboard corner coordinates into spatial three-dimensional points using the transformation matrix; calculating the back-projection coordinates of the checkerboard corners using the refraction parameters d0, d1, va, vg, vw, n of the sealed-cabin front-cover window and the internal parameters of the point cloud reconstruction camera; and iteratively correcting the transformation matrix to minimize the error between the original checkerboard corner coordinates and the back-projection coordinates;
5) solving an equation of a plane where the checkerboards are located according to the coordinates of the angular points of the checkerboards and the corrected conversion matrix;
6) extracting the line-laser center pixel coordinates of picture Il;
7) solving an underwater ray equation corresponding to the central pixel coordinate of the line laser;
8) the intersection point of the underwater ray equation and the plane where the checkerboard is located is a line laser three-dimensional coordinate;
9) fitting the linear laser three-dimensional coordinates of all the pictures into a plane equation, namely a laser plane equation;
the method for reconstructing the underwater target local single line through the point cloud reconstruction camera comprises the following steps:
1) a point cloud reconstruction camera shoots a target picture, and line laser synchronously projects line laser during shooting;
2) the embedded processor extracts a line laser center of the picture;
3) the embedded processor solves the camera incident ray in the water from the line-laser center coordinates, the refraction parameters d0, d1, n of the sealed-cabin front-cover window, and the internal parameters of the point cloud reconstruction camera;
4) the embedded processor reconstructs a single-line point cloud of a line laser irradiation target according to the incident light of the camera in the water and a laser plane equation;
5) the embedded processor stores the point cloud and picture results in real time.
3. The fine three-dimensional perception method for the underwater target according to claim 2, wherein the calibrating of the point cloud reconstruction camera internal parameters and distortion parameters comprises the following steps:
1) placing the point cloud reconstruction camera in the air, and removing a front cover of a sealed cabin of the point cloud reconstruction camera;
2) a point cloud reconstruction camera acquires pictures of the checkerboard in different postures;
3) respectively extracting checkerboard angular points of the collected pictures;
4) calibrating the internal parameters and distortion of the point cloud reconstruction camera by the Zhang Zhengyou calibration method, from the number and size of the checkerboard squares and the checkerboard corners in the pictures.
4. The fine three-dimensional perception method for the underwater target according to claim 2, wherein the calibrating of the coordinate transformation matrix of the point cloud reconstruction camera and the turntable comprises the following steps:
1) placing the point cloud reconstruction camera in the air, and removing a front cover of a sealed cabin of the point cloud reconstruction camera;
2) the checkerboard is fixed in front of the camera, the rotary table is rotated, and pictures of the checkerboard at different positions in the field range of the camera are collected;
3) extracting the pixel coordinates of the checkerboard corner points in the collected picture;
4) calculating a conversion matrix between a point cloud reconstruction camera coordinate system and a checkerboard coordinate system under each picture according to the pixel coordinates of the checkerboard angular points in the collected pictures;
5) according to the conversion matrix, coordinate points of the origin of the point cloud reconstruction camera coordinate system under each acquired picture under the checkerboard coordinate system are worked out, and a fitting circular equation formed by the coordinate points relative to the checkerboard is worked out;
6) obtaining coordinates of the checkerboard angular points under a point cloud reconstruction camera coordinate system;
7) obtaining coordinates of the checkerboard angular points under a coordinate system of the rotary table;
8) solving the transformation matrix between the point cloud reconstruction camera and the turntable coordinate system from the coordinate values of the checkerboard corners in the two coordinate systems.
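Step 8), recovering the transform from the same corners expressed in both coordinate systems, is the orthogonal Procrustes (Kabsch) problem; a minimal numpy sketch with illustrative names:

```python
import numpy as np

def solve_rigid_transform(pts_cam, pts_table):
    """Find R, t minimising || R @ p_cam + t - p_table || over corresponding
    checkerboard corners (Kabsch / orthogonal Procrustes)."""
    A = np.asarray(pts_cam, float)
    B = np.asarray(pts_table, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # nearest proper rotation
    t = cb - R @ ca
    return R, t
```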
5. The fine three-dimensional perception method for the underwater target according to claim 2, wherein scanning reconstruction of the underwater target is performed by controlling the rigid cross rod through rotation of the rotary table, and the method comprises the following steps:
1) the turntable rotates to control the rigid cross rods to rotate at fixed intervals, so that the line laser scanning of the whole target is ensured;
2) performing local single line reconstruction on the underwater target at each fixed angle;
3) the embedded processor splices a single line reconstruction point cloud according to the fixed rotation angle and a coordinate transformation matrix of the point cloud reconstruction camera and the rotary table to generate a target point cloud;
4) the embedded processor stores the point cloud and picture results in real time.
CN202110162626.2A 2021-02-05 2021-02-05 Fine three-dimensional sensing method for underwater target Active CN112995639B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110162626.2A CN112995639B (en) 2021-02-05 2021-02-05 Fine three-dimensional sensing method for underwater target


Publications (2)

Publication Number Publication Date
CN112995639A CN112995639A (en) 2021-06-18
CN112995639B true CN112995639B (en) 2022-04-15

Family

ID=76348210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110162626.2A Active CN112995639B (en) 2021-02-05 2021-02-05 Fine three-dimensional sensing method for underwater target

Country Status (1)

Country Link
CN (1) CN112995639B (en)

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DK2909807T3 (en) * 2012-10-17 2020-04-14 Cathx Res Ltd Improvements to underwater imaging for underwater studies
GB201407270D0 (en) * 2014-04-24 2014-06-11 Cathx Res Ltd 3D data in underwater surveys
US10019809B2 (en) * 2015-09-22 2018-07-10 The Governors Of The University Of Alberta Underwater 3D image reconstruction utilizing triple wavelength dispersion and camera system thereof
CN105678742B (en) * 2015-12-29 2018-05-22 哈尔滨工业大学深圳研究生院 A kind of underwater camera scaling method
CN105787997B (en) * 2016-03-27 2018-12-25 中国海洋大学 Underwater high-precision three-dimensional reconstructing device and method
CN106952341B (en) * 2017-03-27 2020-03-31 中国人民解放军国防科学技术大学 Underwater scene three-dimensional point cloud reconstruction method and system based on vision
CN107358632B (en) * 2017-06-29 2020-01-14 西北工业大学 Underwater camera calibration method applied to underwater binocular stereo vision
CN207034549U (en) * 2017-08-07 2018-02-23 王俊霞 A kind of high speed dynamic target measurement device
CN109059873A (en) * 2018-06-08 2018-12-21 上海大学 Underwater 3 D reconstructing device and method based on light field multilayer refraction model
CN109029284B (en) * 2018-06-14 2019-10-22 大连理工大学 A kind of three-dimensional laser scanner based on geometrical constraint and camera calibration method
CN109242908B (en) * 2018-07-12 2021-08-03 中国科学院自动化研究所 Calibration method for underwater binocular vision measurement system
CN109490251A (en) * 2018-10-26 2019-03-19 上海大学 Underwater refractive index self-calibrating method based on light field multilayer refraction model
CN109507658B (en) * 2018-11-21 2020-09-22 浙江大学 All-round tracking positioner of underwater robot coastal waters bed motion
CN110044300B (en) * 2019-01-22 2024-04-09 中国海洋大学 Amphibious three-dimensional vision detection device and detection method based on laser
CN110260820B (en) * 2019-04-29 2021-07-06 上海大学 Underwater binocular stereo vision measurement system and method based on dynamic reference coordinate system
CN110161485B (en) * 2019-06-13 2021-03-26 同济大学 External parameter calibration device for laser radar and vision camera
CN110763152B (en) * 2019-10-09 2021-08-20 哈尔滨工程大学 Underwater active rotation structure light three-dimensional vision measuring device and measuring method
CN111207670A (en) * 2020-02-27 2020-05-29 河海大学常州校区 Line structured light calibration device and method
CN112254668A (en) * 2020-10-12 2021-01-22 青岛图海纬度科技有限公司 Underwater three-dimensional scanning imaging device and imaging method


Similar Documents

Publication Publication Date Title
CN105787997B (en) Underwater high-precision three-dimensional reconstructing device and method
KR102015606B1 (en) Multi-line laser array three-dimensional scanning system and multi-line laser array three-dimensional scanning method
CN110044300B (en) Amphibious three-dimensional vision detection device and detection method based on laser
WO2020093436A1 (en) Three-dimensional reconstruction method for inner wall of pipe
Digumarti et al. Underwater 3D capture using a low-cost commercial depth camera
CN103591939B (en) Based on simulation sea bed topographic survey method and the measurement mechanism of active stereo vision technique
CN107358632B (en) Underwater camera calibration method applied to underwater binocular stereo vision
CN107505324A (en) 3D scanning means and scan method based on binocular collaboration laser
CN114283203B (en) Calibration method and system of multi-camera system
CN109444056A (en) A kind of underwater spectral reflectivity in-situ measurement device of binocular imaging formula and measurement method
CN110910506B (en) Three-dimensional reconstruction method and device based on normal detection, detection device and system
CN113066132B (en) 3D modeling calibration method based on multi-equipment acquisition
CN109781068A (en) The vision measurement system ground simulation assessment system and method for space-oriented application
CN114047185A (en) Visible light imaging device and monitoring method suitable for shallow sea coral reef underwater monitoring
US20240244330A1 (en) Systems and methods for capturing and generating panoramic three-dimensional models and images
CN112991532B (en) Underwater high-precision three-dimensional reconstruction method based on photometric stereo method and laser triangulation method
CN112995639B (en) Fine three-dimensional sensing method for underwater target
CN116824067A (en) Indoor three-dimensional reconstruction method and device thereof
CN104502378B (en) X-ray CT (Computed Tomography) device
Detry et al. Turbid-water subsea infrastructure 3D reconstruction with assisted stereo
CN115830225A (en) Underwater RGB-D three-dimensional reconstruction system and method
CN207600397U (en) A kind of abyssopelagic organism measuring device
CN112113505B (en) Portable scanning measurement device and method based on line structured light
CN108981609A (en) A kind of complexity precision castings 3 D measuring method
CN209181735U (en) Amphibious 3D vision detection device based on laser

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant