CN116402886A - Multi-vision three-dimensional measurement method for underwater robot - Google Patents

Multi-vision three-dimensional measurement method for underwater robot

Info

Publication number
CN116402886A
CN116402886A
Authority
CN
China
Prior art keywords
camera
target
marine product
target marine
mechanical arm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310243314.3A
Other languages
Chinese (zh)
Inventor
毕胜
游锦春
付先平
刘晓凯
金国华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202310243314.3A priority Critical patent/CN116402886A/en
Publication of CN116402886A publication Critical patent/CN116402886A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/0095 Means or methods for testing manipulators
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration by the use of histogram techniques
    • G06T5/70
    • G06T5/80
    • G06T5/90
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a multi-vision three-dimensional measurement method for an underwater robot, comprising the following steps: calibrating a monocular camera and a binocular camera, correcting their distortion to obtain camera parameter information, and computing each corrected frame of the underwater image; preprocessing to obtain a clear underwater image; taking the left camera of the binocular camera as the main camera and feeding its frames into a target marine product detection program to obtain the target's position in the pixel coordinate system; computing the three-dimensional coordinates of the target marine product relative to the main camera's coordinate origin; navigating the AUV to the vicinity of the target marine product based on these coordinates, then measuring the target's position and size; and controlling the mechanical arm to move according to the measured position, while the eye-in-hand monocular camera, together with the measured target size, estimates the target position in real time and updates the new three-dimensional coordinates as the arm's motion end point, so that the target marine product is grabbed when the arm reaches that end point.

Description

Multi-vision three-dimensional measurement method for underwater robot
Technical Field
The invention relates to the technical field of machine vision, in particular to a multi-vision three-dimensional measurement method for an underwater robot.
Background
Because refraction, scattering and absorption of light in the underwater environment, together with suspended marine matter, degrade optical sensing, the three-dimensional position of an underwater target is usually measured visually rather than with lidar or similar sensors. In underwater seafood fishing in particular, the following methods are common. The first is fixed-position fishing with a monocular camera and a low-degree-of-freedom grabbing arm: it assumes that all products to be caught lie on a gentle seabed, pre-estimates which position in the monocular camera's field of view can be reached when the arm is lowered, and, while the robot cruises along the bottom, triggers a preset grabbing action whenever a target appears at that position in the field of view. The second is flexible-position fishing with a binocular camera and a high-degree-of-freedom grabbing arm, which computes target depth information in real time from the binocular camera and converts it into coordinates relative to the arm. The third uses a suction pump to draw the target into a collection box.
However, with the fixed-position technique using a low-degree-of-freedom arm and a monocular camera, the graspable position is fixed and the underwater robot itself is fairly bulky, so there are few scenes in which grabbing can proceed smoothly. With the flexible-position technique using a high-degree-of-freedom arm and a binocular camera, binocular measurement is computationally expensive, and the mechanical arm often occludes the binocular view during grabbing, causing stereo matching to fail; this therefore needs improvement. Pump-type suction is highly efficient but easily causes irreversible damage to the seabed, and because most seafood is soft, the suction process readily degrades its quality. Moreover, the strong suction draws in sediment and gravel together with the seafood; the resulting impacts can damage the machine and easily inflict secondary damage on the seafood.
Disclosure of Invention
The multi-vision three-dimensional measurement method for an underwater robot provided by the invention addresses the limited fishing environments of monocular vision when an underwater autonomous fishing robot grabs a target object and, for binocular vision, the low accuracy and speed of stereo matching caused by the loss of feature information in underwater scenes, as well as the occlusion of the binocular view during grabbing. The technical means adopted by the invention comprise the following steps:
calibrating and correcting distortion of a monocular camera and a binocular camera respectively to obtain camera parameter information, and obtaining each frame of image of the corrected underwater image based on the camera parameter information;
preprocessing the corrected underwater image to obtain an underwater clear image;
taking the left camera of the binocular camera as a main camera, and inputting the underwater clear image acquired by the main camera into a target detection program to obtain a detection frame position in a pixel coordinate system;
measuring the distance to the target marine product with the binocular camera, and obtaining the three-dimensional coordinates of the target marine product relative to the main camera's coordinate origin;
based on the three-dimensional coordinates, navigating the AUV to the vicinity of the position of the target marine product, and calculating the accurate position and the size of the target marine product by using a binocular camera;
and controlling the mechanical arm to move according to the obtained target marine product position; while the mechanical arm moves, estimating the target position with the eye-in-hand monocular camera and the measured target marine product size, updating the newly obtained three-dimensional coordinates as the mechanical arm's motion end point, and grabbing the target marine product when the arm reaches the end point.
setting a maximum disparity value, and expanding the detection frame position in the pixel coordinate system leftward by that many pixels to define the range for searching corresponding points on the imaging plane of the right camera of the binocular camera;
performing stereo matching within this range by the AD-Census stereo matching method to obtain the disparity value corresponding to each pixel point in the range and form a disparity map;
calculating, from the calibrated baseline distance of the binocular camera, the three-dimensional coordinates, with the main camera as the coordinate origin, corresponding to each pixel point in the disparity map:
D = f × B / disp
wherein D is the actual distance along the given direction, f is the camera focal length in pixel units, disp is the disparity value of the point, and B is the baseline distance;
the method comprises the steps of using a Garbecut algorithm to take the position of a detection frame as the input of the algorithm, dividing the real outline and occupied pixel points of a target marine product, and representing the real outline and the occupied pixel points in a mask mode;
finding out a minimum rectangular area surrounding the target marine product according to the contour information, and obtaining coordinates of four corner points of the area so as to obtain a minimum circumscribed rectangle;
and calculating the midpoint coordinates of the four sides of the minimum circumscribed rectangle, and obtaining the corresponding world coordinates to obtain the size of the target marine product, wherein the included angle between the minimum circumscribed rectangle and the x axis of the pixel coordinates is the posture of the target marine product.
Controlling the mechanical arm to move according to the obtained three-dimensional coordinates of the target marine product;
while the mechanical arm moves, calculating the target position in real time with the eye-in-hand monocular camera and the obtained length information of the target marine product, using the formula:
Figure BDA0004125126300000031
wherein d1, d2 and l1, l2 are respectively the vertical and horizontal distances from the two end points of the target object to the optical center of the camera, yaw and pitch are the yaw angle and pitch angle of the camera, h1 and h2 are respectively the vertical distances from the imaging points of the two object end points to the optical center, H is the height of the camera from the ground, f is the camera focal length, and Len is the real length of the target object.
According to the multi-vision three-dimensional measurement method for an underwater robot provided by the invention, prior knowledge of the target object is exploited while the flexibility of the grabbing arm is increased: monocular ranging, whose computational cost is much smaller, measures the position of an underwater target of known shape, so that the automatic grabbing of underwater marine products finally balances accuracy and speed.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort to a person skilled in the art.
FIG. 1 is a general flow chart of the method disclosed in the present invention;
FIG. 2 is a flow chart of a method for measuring the size of a target object by a binocular vision system according to the present invention;
FIG. 3 is a schematic diagram of object recognition in the method for measuring the size of an object by a binocular vision system according to the present invention;
FIG. 4 is a schematic diagram of Grabcut algorithm object segmentation in the method for measuring object size by binocular vision system of the present invention;
FIG. 5 is a schematic diagram of a smallest circumscribed rectangle in the method for measuring the size of a target object by a binocular vision system according to the present invention;
FIG. 6 is a schematic diagram of calculation of measurement points in the method for measuring the size of a target object by the binocular vision system of the present invention;
FIG. 7 is a simplified perspective view of a monocular ranging model according to the present invention;
fig. 8 is a schematic top view of a monocular ranging model according to the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1, the invention provides a multi-vision three-dimensional measurement method for an underwater robot, which specifically comprises the following steps:
s1: calibrating and correcting distortion of a monocular camera (hereinafter referred to as monocular camera) with eyes on a grabbing arm and a binocular camera (hereinafter referred to as binocular camera) with eyes outside hands respectively to obtain camera parameters, and calculating each frame of corrected image through the camera parameters;
s2: performing image preprocessing on the corrected image to obtain a clearer underwater clear image, so that the subsequent processing is convenient;
s3: taking the left camera of the binocular camera as a main camera, and sending frames acquired by the main camera into a target marine product detection program to obtain the detection frame position of the target marine product in a pixel coordinate system;
s4: the detected target marine product is subjected to distance measurement by using a binocular camera, and three-dimensional coordinates of the target marine product relative to a coordinate origin with a main camera are obtained;
s5: navigating the AUV to the vicinity of the position of the marine product according to the obtained three-dimensional coordinates;
s6: measuring and calculating the position and the size of the target seafood by using a binocular camera;
s7: and (3) controlling the mechanical arm to move according to the obtained target marine product position, estimating the target position by utilizing the monocular camera of the eyes on the hands and the target marine product size obtained in the step (S6) while the mechanical arm moves, updating the estimated new three-dimensional coordinate into a mechanical arm movement end point, and grabbing when the mechanical arm moves to the end point.
The following specific mode is adopted in the S1:
s11: preparing a planar checkerboard calibration plate with a known size;
s12: shooting a plurality of calibration images from different angles respectively, and ensuring that the positions, angles, sizes, exposure and other conditions of the calibration plates in the images are diversified as much as possible;
s13: the calibration images shot by the monocular camera and the binocular camera are respectively imported into Camera Calibrator Toolbox and Stereo Camera Calibrator Toolbox in MATLAB, and the side length of the checkerboard is set;
s14: after the successful importing, clicking a Calibrate button to obtain a calibration result, selecting a part with a calibration error smaller than 0.4, and respectively exporting camera parameters of a monocular camera and a binocular camera;
s15: calculating correction mapping of the left camera and the right camera by using a stereoRectify function in OpenCV, wherein the correction mapping is used for correcting an original image into an image under a new view angle;
s16: calculating a mapping table required by the distortion correction and the stereo correction of the camera by using an initUndicatrifyfap function in the OpenCV, and mapping pixel coordinates in an original image to pixel coordinates in a new corrected image;
s17: and applying correction mapping to the original image by using a remap function in OpenCV to obtain a new corrected image.
The following specific mode is adopted in the S2:
s21: the median filtering is used for effectively reducing noise in the image, in particular to salt and pepper noise-like floaters;
s22: the original low-contrast area in the image is expanded to the whole gray level range by reassigning the gray level of the image in a histogram equalization mode, so that the contrast of the underwater image is improved, and the underwater image is clearer and brighter;
s23: and using homomorphic filtering to remove shadows and reflections caused by using an artificial light source under water, enhancing detail information of the image, removing image noise and retaining color information of the image.
The following mode is specifically adopted in the S3:
s31: loading a weight file which is trained and can identify marine products;
s32: the frame acquired by the left camera of the binocular camera is sent to the target seafood detection program to obtain the position of the seafood detection frame in the pixel coordinate system, and the coordinates (x) of the upper left corner of the detection frame are used 1 ,y 1 ) And the coordinates of the lower right corner point (x 2 ,y 2 ) The position of the detection frame can be represented;
the following method is specifically adopted in the S4:
s41: according to a preset maximum parallax maxDIsp, a detection frame on an imaging surface is expanded leftwards by maxDIsp pixels to be used as a range for searching homonymous points on the imaging surface of a right camera of the binocular camera;
s42: performing stereo matching by using an AD-Census stereo matching method in the range of S41 to obtain a parallax value corresponding to each pixel point in the range, and forming a parallax map;
s43: and (2) calculating three-dimensional coordinates which correspond to each pixel point in the parallax map and take the main camera as the origin of coordinates according to the baseline distance of the binocular camera, which is obtained by the marking in the step (S2).
D = f × B / disp
wherein D is the actual distance along the given direction, f is the camera focal length in pixel units, disp is the disparity value of the point, and B is the baseline distance.
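The relation D = f × B / disp extends to full three-dimensional coordinates by back-projecting each pixel through the pinhole model. A minimal sketch with illustrative calibration values (the focal length, baseline and principal point are assumptions, not the patent's):

```python
# Illustrative calibration values: pixel focal length, baseline (metres),
# and principal point of the main (left) camera.
f, B = 700.0, 0.12
cx, cy = 320.0, 240.0

def reproject(u, v, disp):
    """Depth from disparity, Z = f * B / disp (the relation of S43),
    then pinhole back-projection for the X and Y components."""
    Z = f * B / disp
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z

# A point imaged at the principal point with 42 px of disparity lies 2 m ahead.
X, Y, Z = reproject(320.0, 240.0, 42.0)
```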
The following manner is specifically adopted in S6, as shown in fig. 2:
s61: using Grabcut algorithm, taking the position of the detection frame obtained in the step S4 as the input of the algorithm, dividing the real outline and the occupied pixel points of the target marine product as shown in fig. 3, and representing the real outline and the occupied pixel points in a mask manner as shown in fig. 4;
s62: detecting contour information by using a findContours function in OpenCV according to the mask information obtained in the step S61;
s63: finding a minimum rectangular area surrounding the target seafood by using minarea rect in OpenCV according to the contour information obtained in S62, and obtaining coordinates of four corner points of the area by using boupoint, as shown in fig. 5;
s64: the coordinates of the middle points of the four sides of the minimum bounding rectangle in S63 are calculated, as shown in fig. 6, the world coordinates corresponding to the points are calculated according to the method of S5, the length and width of the target seafood are calculated to be the size of the target seafood, and the included angle of the x-axis of the minimum bounding rectangle and the pixel coordinates in the return information of the minimum bounding rectangle in S63 is the gesture of the target seafood.
The following method is specifically adopted in S7:
s71: controlling the mechanical arm to move according to the three-dimensional coordinates of the target marine product obtained in the step S5;
s72: while the mechanical arm moves, the monocular camera with eyes on the hand and the length information of the target marine product obtained in the step S6 are utilized, and as shown in fig. 7 and 8, the position of the target marine product is calculated in real time by utilizing a formula;
Figure BDA0004125126300000071
wherein d1, d2 and l1, l2 are respectively the vertical and horizontal distances from the two end points of the target marine product to the optical center of the camera, yaw and pitch are the yaw angle and pitch angle of the camera, h1 and h2 are respectively the vertical distances from the imaging points of the two object end points to the optical center, H is the height of the camera from the ground, f is the camera focal length, and Len is the real length of the target marine product.
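The exact S72 formula appears in the filing only as an image. The sketch below shows only the basic pinhole known-length relation (distance ≈ f × Len / pixel span) on which such known-size monocular ranging builds; it deliberately omits the patent's corrections for the camera height H and the pitch and yaw angles, and all numbers are illustrative:

```python
# Illustrative values: pixel focal length, and the real target length Len
# recovered by the binocular measurement of S6 (both are assumptions).
f = 700.0      # focal length in pixel units
Len = 0.15     # real target length in metres

def range_from_known_length(h1, h2):
    """Basic pinhole relation: an object of real length Len spanning
    |h1 - h2| pixels on the sensor lies roughly f * Len / |h1 - h2| away.
    The patent's full model additionally uses H, pitch and yaw."""
    return f * Len / abs(h1 - h2)

# Endpoints imaged 35 px apart -> about 3 m away.
d = range_from_known_length(100.0, 135.0)
```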
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (4)

1. A multi-vision three-dimensional measurement method for an underwater robot, comprising:
calibrating and correcting distortion of a monocular camera and a binocular camera respectively to obtain camera parameter information, and obtaining each frame of image of the corrected underwater image based on the camera parameter information;
preprocessing the corrected underwater image to obtain an underwater clear image;
taking the left camera of the binocular camera as a main camera, and inputting the underwater clear image acquired by the main camera into a target detection program to obtain a detection frame position in a pixel coordinate system;
measuring the distance to the target marine product with the binocular camera, and obtaining the three-dimensional coordinates of the target marine product relative to the main camera's coordinate origin;
based on the three-dimensional coordinates, navigating the AUV to the vicinity of the position of the target marine product, and calculating the accurate position and the size of the target marine product by using a binocular camera;
and controlling the mechanical arm to move according to the obtained target marine product position; while the mechanical arm moves, estimating the target position with the eye-in-hand monocular camera and the measured target marine product size, updating the newly obtained three-dimensional coordinates as the mechanical arm's motion end point, and grabbing the target marine product when the arm reaches the end point.
2. The multi-vision three-dimensional measurement method for an underwater robot according to claim 1, wherein:
setting a maximum disparity value, and expanding the detection frame position in the pixel coordinate system leftward by that many pixels to define the range for searching corresponding points on the imaging plane of the right camera of the binocular camera;
performing stereo matching within this range by the AD-Census stereo matching method to obtain the disparity value corresponding to each pixel point in the range and form a disparity map;
calculating, from the calibrated baseline distance of the binocular camera, the three-dimensional coordinates, with the main camera as the coordinate origin, corresponding to each pixel point in the disparity map:
D = f × B / disp
wherein D is the actual distance along the given direction, f is the camera focal length in pixel units, disp is the disparity value of the point, and B is the baseline distance.
3. The multi-vision three-dimensional measurement method for an underwater robot according to claim 2, wherein:
using the GrabCut algorithm with the detection frame position as the input of the algorithm, segmenting the real contour and the occupied pixel points of the target marine product and representing them as a mask;
finding out a minimum rectangular area surrounding the target marine product according to the contour information, and obtaining coordinates of four corner points of the area so as to obtain a minimum circumscribed rectangle;
and calculating the midpoint coordinates of the four sides of the minimum circumscribed rectangle, and obtaining the corresponding world coordinates to obtain the size of the target marine product, wherein the included angle between the minimum circumscribed rectangle and the x axis of the pixel coordinates is the posture of the target marine product.
4. A multi-vision three-dimensional measurement method for an underwater robot according to claim 3, characterized in that:
controlling the mechanical arm to move according to the obtained three-dimensional coordinates of the target marine product;
while the mechanical arm moves, calculating the target position in real time with the eye-in-hand monocular camera and the obtained length information of the target marine product, using the formula:
Figure FDA0004125126290000021
wherein d1, d2 and l1, l2 are respectively the vertical and horizontal distances from the two end points of the target object to the optical center of the camera, yaw and pitch are the yaw angle and pitch angle of the camera, h1 and h2 are respectively the vertical distances from the imaging points of the two object end points to the optical center, H is the height of the camera from the ground, f is the camera focal length, and Len is the real length of the target object.
CN202310243314.3A 2023-03-14 2023-03-14 Multi-vision three-dimensional measurement method for underwater robot Pending CN116402886A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310243314.3A CN116402886A (en) 2023-03-14 2023-03-14 Multi-vision three-dimensional measurement method for underwater robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310243314.3A CN116402886A (en) 2023-03-14 2023-03-14 Multi-vision three-dimensional measurement method for underwater robot

Publications (1)

Publication Number Publication Date
CN116402886A true CN116402886A (en) 2023-07-07

Family

ID=87015024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310243314.3A Pending CN116402886A (en) 2023-03-14 2023-03-14 Multi-vision three-dimensional measurement method for underwater robot

Country Status (1)

Country Link
CN (1) CN116402886A (en)

Similar Documents

Publication Publication Date Title
CN107844750B (en) Water surface panoramic image target detection and identification method
US10234873B2 (en) Flight device, flight control system and method
CN111563921B (en) Underwater point cloud acquisition method based on binocular camera
US20170293796A1 (en) Flight device and flight control method
CN111897349B (en) Autonomous obstacle avoidance method for underwater robot based on binocular vision
CN111721259B (en) Underwater robot recovery positioning method based on binocular vision
CN110322457A (en) A kind of de-stacking method of 2D in conjunction with 3D vision
CN111429533B (en) Camera lens distortion parameter estimation device and method
CN109448045B (en) SLAM-based planar polygon measurement method and machine-readable storage medium
WO2018232518A1 (en) Determining positions and orientations of objects
WO2014044126A1 (en) Coordinate acquisition device, system and method for real-time 3d reconstruction, and stereoscopic interactive device
CN108257185A (en) More checkerboard angle point detection process and camera marking method
CN108107837A (en) A kind of glass processing device and method of view-based access control model guiding
CN113137920A (en) Underwater measurement equipment and underwater measurement method
JP2692603B2 (en) 3D measurement method
CN112734652B (en) Near-infrared blood vessel image projection correction method based on binocular vision
CN115082777A (en) Binocular vision-based underwater dynamic fish form measuring method and device
CN111105467A (en) Image calibration method and device and electronic equipment
CN113313116A (en) Vision-based accurate detection and positioning method for underwater artificial target
CN115108466A (en) Intelligent positioning method for container spreader
CN115830018B (en) Carbon block detection method and system based on deep learning and binocular vision
CN116402886A (en) Multi-vision three-dimensional measurement method for underwater robot
CN116596987A (en) Workpiece three-dimensional size high-precision measurement method based on binocular vision
CN115841668A (en) Binocular vision apple identification and accurate positioning method
CN108489469A (en) A kind of monocular distance measuring device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination