CN111721259A - Underwater robot recovery positioning method based on binocular vision - Google Patents


Info

Publication number
CN111721259A
Authority
CN
China
Prior art keywords: image, light source, pixel, camera, underwater
Prior art date: 2020-06-24
Legal status: Granted
Application number: CN202010594951.1A
Other languages: Chinese (zh)
Other versions: CN111721259B (en)
Inventor
朱志宇
朱志鹏
齐坤
曾庆军
戴晓强
赵强
Current Assignee: Jiangsu University of Science and Technology
Original Assignee: Jiangsu University of Science and Technology
Priority date: 2020-06-24
Filing date: 2020-06-24
Publication date: 2020-09-29
Application filed by Jiangsu University of Science and Technology
Priority to CN202010594951.1A
Publication of CN111721259A
Application granted
Publication of CN111721259B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 3/00 - Measuring distances in line of sight; Optical rangefinders
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a binocular vision-based underwater robot recovery positioning method. Two underwater CCD cameras photograph a calibration plate to obtain the parameters of the binocular camera, including the internal parameter matrix, the external parameter matrix, the distortion coefficients, and the rotation and translation matrices between the cameras. A visual image captured by the underwater binocular camera is acquired as the input image to be analyzed; the input image is converted to grayscale and binarized, and the connected domains in the image are identified; the light sources are matched, the underwater image is morphologically processed, and the final light source center point coordinates are obtained; finally, the relative position of the AUV and the docking station is resolved. The method applies short-distance, high-precision binocular vision positioning to the autonomous docking process of underwater AUV recovery and replaces the Hough circle detection method with centroid detection and connected-domain detection algorithms, which improves the real-time performance of computing the relative position of the AUV and the docking station, improves positioning stability, and ensures the AUV docking success rate.

Description

Underwater robot recovery positioning method based on binocular vision
Technical Field
The invention relates to a binocular vision-based underwater robot recovery positioning method, and belongs to the technical field of underwater robot recovery.
Background
An Autonomous Underwater Vehicle (AUV) works in the ocean environment without a cable, and recovery of the AUV is one of the important topics in AUV research. In recent years, underwater optical vision has produced abundant research results, but because of interference factors such as the dim lighting of the underwater environment and abundant suspended organisms, the acquired images suffer severe noise and color distortion, which greatly affects the description of underwater scenes and target positioning, and in turn affects the operation tasks of the underwater robot and its recovery.
Therefore, researching an underwater optical vision target detection and positioning system that ensures measurement accuracy, real-time performance, and stability, and that provides attitude and position information to the underwater robot, makes recovery of the AUV possible. Underwater visual detection and target positioning technology thus has important research significance and practical value for short-distance AUV recovery positioning.
Disclosure of Invention
The invention aims to provide a binocular vision-based underwater robot recovery positioning method that supplies accurate position information to the AUV, so that the AUV can be conveniently and reliably recovered.
The purpose of the invention is realized through the following procedure:
Step 1: photograph a calibration plate with two underwater CCD cameras to acquire the parameters of the binocular camera, including the internal parameter matrix, the external parameter matrix, the distortion coefficients, and the rotation and translation matrices between the cameras;
Step 2: acquire a visual image captured by the underwater binocular camera as the input image to be analyzed;
Step 3: convert the input image to grayscale, binarize it, and identify the connected domains in the image;
Step 4: match the light sources, morphologically process the underwater image, and obtain the final light source center point coordinates;
Step 5: resolve the relative position of the AUV and the docking station.
In conclusion, the invention is mainly used to accurately acquire the position information of the docking station while the autonomous underwater robot is being recovered after completing its underwater tasks. The process comprises the following steps: calibrating the underwater binocular camera (computing the internal and external parameters of the binocular camera); correcting the binocular images (distortion correction and stereo rectification); matching the feature points of the binocular images (morphological processing to obtain the light source information, centroid detection to obtain the image coordinates of the light source centers, matching the center points, and removing mismatches); calculating the position information of the docking station relative to the autonomous underwater robot; and information fusion (integrating the visual positioning and dead-reckoning positioning data with Kalman filtering so that their advantages complement each other). A hedged sketch of the image-correction step is given below.
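As a concrete illustration of the binocular image-correction step, the following is a minimal sketch using OpenCV's rectification functions. All numeric values (image size, intrinsics K1/K2, distortion D1/D2, R, T) are placeholder assumptions, not values from the patent; in practice they come from the Step 1 calibration.

```python
import cv2
import numpy as np

size = (640, 480)                          # image (width, height), assumed
K1 = np.array([[800.0, 0.0, 320.0],        # assumed pinhole intrinsics
               [0.0, 800.0, 240.0],
               [0.0, 0.0, 1.0]])
K2 = K1.copy()
D1 = np.zeros(5)                           # assumed distortion coefficients
D2 = np.zeros(5)
R = np.eye(3)                              # rotation between the two cameras
T = np.array([[-120.0], [0.0], [0.0]])     # baseline, assumed in mm

# Rectifying transforms: epipolar lines become horizontal after remapping.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)

def rectify_pair(left, right):
    """Undistort and rectify a raw left/right image pair."""
    return (cv2.remap(left, m1x, m1y, cv2.INTER_LINEAR),
            cv2.remap(right, m2x, m2y, cv2.INTER_LINEAR))
```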
The advantage of dead-reckoning positioning is its high data update frequency: the system frequency band is high, the navigation data output is stable, and short-term performance is good. It can provide the relative position and attitude of the AUV and the docking device, which is the basis of all behaviors in the docking process. Fusing it with vision compensates, to a great extent, for the low update rate and poor stability of optical visual positioning alone.
Compared with the prior art, the invention has the following advantages:
(1) The method combines computer vision technology with information fusion technology to realize real-time positioning during AUV underwater docking, improves positioning precision, and overcomes the long update period and poor robustness of visual positioning alone.
(2) Traditional underwater light source detection uses Hough circle detection, which is computationally heavy and time-consuming; replacing it with a centroid detection algorithm gives fast computation and real-time response, improving positioning rapidity.
(3) Considering the special underwater environment and the influence of water quality on the light sources, the method uses morphological erosion and dilation to eliminate irregular fine noise at the light source edges and smooth the light source contours, improving the accuracy of the subsequent centroid detection and the precision of the computed light source center coordinates.
(4) In the binarization stage of the underwater image, a Wellner fast adaptive binarization algorithm is adopted, which eliminates the uneven-brightness problem of traditional binarization of underwater images and reduces or avoids missed light sources.
Drawings
FIG. 1 is an overall flow chart of a binocular vision-based underwater robot recovery positioning method of the present invention;
FIG. 2 is a flow chart of the calibration of an underwater binocular camera based on the Zhang Zhengyou calibration method of the present invention;
FIG. 3 illustrates the method for classifying the light source center coordinates according to the present invention;
FIG. 4 is a general model of binocular vision established by the present invention;
FIG. 5 is a flow chart of a process for fusing visual positioning information with dead reckoning positioning information in accordance with the present invention;
FIG. 6 is a flowchart of a method for calculating connected components in step three of the present invention.
Detailed Description
The invention is further elucidated with reference to the accompanying drawings.
As shown in fig. 1 to 6, the binocular vision-based underwater robot recovery positioning method of the present invention specifically comprises the following steps:
Step 1: photograph a calibration plate with two underwater CCD cameras to acquire the parameters of the binocular camera, including the internal parameter matrix, the external parameter matrix, the distortion coefficients, and the rotation and translation matrices between the cameras.
The basic camera parameters are calibrated with the Zhang Zhengyou planar calibration method. First, print a 7 x 10 black-and-white checkerboard calibration plate and photograph several images of it underwater from different angles; detect the feature points in the images to solve the internal and external parameters of the camera under the ideal distortion-free condition, using maximum likelihood estimation to improve precision; solve the actual radial distortion coefficients with the least squares method; then jointly optimize the internal and external parameters and the distortion coefficients with the maximum likelihood method to improve estimation precision; finally, the accurate internal and external parameters and distortion coefficients of the camera are obtained. A minimal OpenCV sketch of this step follows.
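The sketch below uses OpenCV's implementation of Zhang Zhengyou's method. The file naming pattern (left_*.png, right_*.png) is an assumption for illustration, as is the corner pattern: taking the 7 x 10 count as squares gives a 9 x 6 grid of inner corners.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)   # inner corners of a 7 x 10 board of squares (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:                      # keep views seen by both cameras
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

# Per-camera intrinsics and distortion (maximum-likelihood refinement inside).
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, gl.shape[::-1], None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, gr.shape[::-1], None, None)

# Rotation R and translation T relating the two cameras.
_, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, D1, K2, D2, gl.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
```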
Step two: acquiring a visual image shot by an underwater binocular camera as an input image to be analyzed;
step three: graying and binaryzation processing an input image, and judging a connected domain in the image;
The image is binarized with a Wellner adaptive threshold to obtain a clear black-and-white image of the underwater light sources. The Wellner algorithm is specifically:
f_s(n) = f_s(n-1) · (1 - 1/s) + p(n);    g_s(n) = 1 if p(n) > (f_s(n)/s) · (100 + t)/100, otherwise g_s(n) = 0
where p(n) is the gray value of the n-th pixel, f_s(n) is a running sum over approximately the previous s pixels (so f_s(n)/s is the local mean), g_s(n) is the value after binarization, s is the number of preceding pixels considered and is set to one eighth of the image width, and t is a percentage margin; this gives the algorithm the advantage of overcoming uneven brightness caused by the illumination angle. A hedged implementation sketch follows.
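Because the patent's formula survives only as an image, the sketch below implements the standard Wellner moving-average threshold consistent with the variable definitions above; the comparison direction (bright pixels become 1) and the value of the margin t are assumptions.

```python
import numpy as np

def wellner_binarize(gray, t=15):
    """Row-wise moving-average binarization; bright pixels become 1."""
    h, w = gray.shape
    s = max(w // 8, 1)                      # s = one eighth of the image width
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        f = 127.0 * s                       # running sum, seeded at mid-gray
        for j in range(w):
            p = float(gray[i, j])
            f = f * (1.0 - 1.0 / s) + p     # f_s(n) = f_s(n-1)(1 - 1/s) + p(n)
            # pixel becomes 1 when it is at least t percent brighter than the
            # local mean f/s (direction chosen for bright light sources)
            out[i, j] = 1 if p > (f / s) * (100 + t) / 100.0 else 0
    return out
```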
The number of white regions in the image is determined with a connected-domain detection algorithm; a runnable sketch of this two-pass labelling appears after the list. The algorithm comprises the following steps:
(1) scan the image pixel by pixel; if the current pixel value is 0, move to the next scanning position;
(2) if the current pixel value is 1, check its two neighbours to the left and above;
(3) considering the combination of these two neighbours: if both are 0, give the current pixel a new label, indicating the start of a new connected domain;
(4) if exactly one of the two neighbours is 1, mark the current pixel with that neighbour's label;
(5) if both neighbour values are 1 and their labels are the same, the current pixel takes that label;
(6) if both neighbour values are 1 but the labels differ, assign the smaller label to the current pixel and record the two labels as equivalent;
(7) repeating this cycle over the whole image finds all connected domains and yields their number.
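The following is a runnable sketch of the two-pass labelling the list describes, using a union-find table to record label equivalences; in practice OpenCV's cv2.connectedComponents performs the same job in one call.

```python
import numpy as np

def label_components(binary):
    """Two-pass labelling of a 0/1 image; returns (labels, count)."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = [0]                                  # union-find over labels

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    nxt = 1
    for i in range(h):                            # pass 1: provisional labels
        for j in range(w):
            if binary[i, j] == 0:
                continue                          # rule (1)
            left = labels[i, j - 1] if j > 0 else 0
            up = labels[i - 1, j] if i > 0 else 0
            if left == 0 and up == 0:             # rule (3): new domain
                parent.append(nxt)
                labels[i, j] = nxt
                nxt += 1
            elif left and up:                     # rules (5) and (6)
                rl, ru = find(left), find(up)
                labels[i, j] = min(rl, ru)
                parent[max(rl, ru)] = min(rl, ru) # record the equivalence
            else:                                 # rule (4): one neighbour is 1
                labels[i, j] = left or up

    roots = {}
    for i in range(h):                            # pass 2: resolve equivalences
        for j in range(w):
            if labels[i, j]:
                labels[i, j] = roots.setdefault(find(labels[i, j]), len(roots) + 1)
    return labels, len(roots)
```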
Step 4: light source matching. On the basis of the obtained connected domains, smooth the binarized light source image to obtain the final light source center point coordinates (a hedged sketch follows this list). Specifically:
(1) eliminate the noise near the light sources with a morphological erosion algorithm, so that the light source connected domains are fully highlighted;
(2) smooth the edges of the light source connected domains with a morphological dilation algorithm;
(3) obtain the center coordinates of each light source with a centroid detection algorithm:
x_c = (Σ_i Σ_j I_ij · x_ij) / (Σ_i Σ_j I_ij),    y_c = (Σ_i Σ_j I_ij · y_ij) / (Σ_i Σ_j I_ij)
where I_ij is the light intensity received at pixel (i, j) of the image and (x_c, y_c) are the center point coordinates;
(4) mark the light source coordinates: the centroids with the minimum and maximum horizontal coordinates are marked left and right respectively, and those with the minimum and maximum vertical coordinates are marked up and down respectively;
(5) finally, match the light source coordinates that carry the same mark in the two images acquired by the binocular camera.
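A hedged sketch of these sub-steps follows; the 5 x 5 structuring element and the four-light-source layout are assumptions.

```python
import cv2
import numpy as np

def light_source_centers(binary, gray):
    """Erode/dilate the binary image, then intensity-weighted centroids."""
    kernel = np.ones((5, 5), np.uint8)            # kernel size assumed
    clean = cv2.dilate(cv2.erode(binary, kernel), kernel)
    n, labels = cv2.connectedComponents(clean)
    centers = []
    for k in range(1, n):
        ys, xs = np.nonzero(labels == k)
        I = gray[ys, xs].astype(np.float64)       # I_ij for each pixel
        centers.append((float(np.sum(I * xs) / np.sum(I)),   # x_c
                        float(np.sum(I * ys) / np.sum(I))))  # y_c
    return centers

def mark_four(centers):
    """Mark four centroids left/right/up/down for stereo matching."""
    return {"left":  min(centers, key=lambda c: c[0]),
            "right": max(centers, key=lambda c: c[0]),
            "up":    min(centers, key=lambda c: c[1]),  # smaller y = higher
            "down":  max(centers, key=lambda c: c[1])}
```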
Step 5: resolve the relative position of the AUV and the docking station.
Assume the coordinate system of the left camera coincides with the world coordinate system, and let O_1-X_1Y_1 be the image coordinate system of the left camera, with focal length f_l. The camera coordinate system O_C2-X_C2Y_C2 of the right camera has the corresponding image coordinate system O_2-X_2Y_2 on its imaging plane, with focal length f_r. A space point P projects onto the left and right imaging planes at P_l(X_l, Y_l) and P_r(X_r, Y_r). The projection models of the left and right cameras are then:
X_l = f_l · x / z,  Y_l = f_l · y / z;    X_r = f_r · x_r / z_r,  Y_r = f_r · y_r / z_r
There is a fixed correspondence between the two cameras: through a translation matrix T = [T_X T_Y T_Z]^T and a rotation matrix
R = [ r_1 r_2 r_3 ; r_4 r_5 r_6 ; r_7 r_8 r_9 ]
the right camera can be brought into complete coincidence with the left camera, and the correspondence can be expressed as:
[ x_r  y_r  z_r ]^T = R · [ x  y  z ]^T + T
Combining the above formulas gives the expression of the point P(x, y, z) in terms of P_l(X_l, Y_l) and P_r(X_r, Y_r):
x = z · X_l / f_l,    y = z · Y_l / f_l,
z = f_l · (f_r · T_X - X_r · T_Z) / ( X_r · (r_7 · X_l + r_8 · Y_l + r_9 · f_l) - f_r · (r_1 · X_l + r_2 · Y_l + r_3 · f_l) )
With the internal and external parameters of the left and right cameras obtained by calibration and the matched points P_l(X_l, Y_l) and P_r(X_r, Y_r), the three-dimensional position information of the AUV can be obtained. A simplified numerical sketch follows.
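For illustration, the sketch below computes the 3D position in the simplified parallel-camera case (R = I, translation along X only), which follows from the general expression above; the focal length f (pixels) and baseline b (mm) are assumed values, and the image coordinates are taken relative to the principal point.

```python
def triangulate(Xl, Yl, Xr, Yr, f=800.0, b=120.0):
    """Return (x, y, z) of a matched point in the left-camera frame."""
    d = Xl - Xr                       # disparity of the matched centroids
    if abs(d) < 1e-9:
        raise ValueError("zero disparity: mismatch or point at infinity")
    z = f * b / d                     # depth from similar triangles
    return z * Xl / f, z * Yl / f, z

# Example: a 40-pixel disparity gives z = 800 * 120 / 40 = 2400 mm.
print(triangulate(100.0, -20.0, 60.0, -20.0))
```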
Step 6: fuse the position data obtained by binocular vision with the position data obtained by dead reckoning. Specifically:
(1) Select the pose information during the docking process as the state variable of the Kalman filter, X = [ξ η ζ ψ]^T;
(2) The discretized state equation is:
X(k+1) = Φ(k+1, k) · X(k) + U(k) · a(k) + W(k)
with state transition matrix Φ(k+1, k) = diag{φ_ξ, φ_η, φ_ζ, φ_ψ}, where:
[The φ_ξ block is given as an equation image in the original document.]
φ_η, φ_ζ, and φ_ψ have the same form as φ_ξ, with the corresponding variables substituted;
the control matrix is U(k) = diag{U_ξ, U_η, U_ζ, U_ψ}, where:
[The U_ξ block is given as an equation image in the original document.]
U_η, U_ζ, and U_ψ have the same form as U_ξ, with the corresponding variables substituted. In the two formulas above, T is the sampling period; τ_aξ, τ_aη, τ_aζ, and τ_aψ are the acceleration-related time constants of the four degrees of freedom; and a = [a_ξ a_η a_ζ a_ψ] is the acceleration in each degree of freedom;
(3) Establish the observation equation: the information given by visual navigation, comprising position information and attitude information, is taken as the observation of the system, which yields the observation equation Z(k) = H · X(k) + V(k),
where Z = [ξ_obs η_obs ζ_obs ψ_obs]^T and
[The observation matrix H is given as an equation image in the original document.] A minimal fusion sketch of this step follows.
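Since the Φ(k+1, k), U(k), and H blocks survive only as equation images, the following is a minimal fusion sketch under simplifying assumptions (a 4-dimensional pose state, H equal to the identity, and illustrative noise levels): dead reckoning drives the high-rate prediction, and each slower binocular-vision fix corrects it.

```python
import numpy as np

n = 4                                   # state: [xi, eta, zeta, psi] (assumed)
x = np.zeros(n)                         # fused pose estimate
P = np.eye(n)                           # estimate covariance
Q = 0.01 * np.eye(n)                    # process noise: dead-reckoning drift
Rm = 0.10 * np.eye(n)                   # measurement noise: visual fix
H = np.eye(n)                           # observation picks the pose directly

def predict(dr_increment):
    """High-rate step: propagate the pose with a dead-reckoning increment."""
    global x, P
    x = x + dr_increment                # Phi = I plus control input, assumed
    P = P + Q

def update(z_visual):
    """Low-rate step: correct with a binocular-vision pose observation."""
    global x, P
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z_visual - H @ x)
    P = (np.eye(n) - K @ H) @ P
```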
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (7)

1. A binocular vision-based underwater robot recovery positioning method, characterized by comprising the following steps:
Step 1: photograph a calibration plate with two underwater CCD cameras to acquire the parameters of the binocular camera, including the internal parameter matrix, the external parameter matrix, the distortion coefficients, and the rotation and translation matrices between the cameras;
Step 2: acquire a visual image captured by the underwater binocular camera as the input image to be analyzed;
Step 3: convert the input image to grayscale, binarize it, and identify the connected domains in the image;
Step 4: match the light sources, morphologically process the underwater image, and obtain the final light source center point coordinates;
Step 5: resolve the relative position of the AUV and the docking station.
2. The binocular vision-based underwater robot recovery positioning method of claim 1, wherein the Zhang Zhengyou calibration method used in Step 1 obtains the camera parameters through the following specific steps:
print a 7 x 10 black-and-white checkerboard calibration plate and photograph several images of it underwater from different angles;
detect the feature points in the images to solve the internal and external parameters of the camera under the ideal distortion-free condition, using maximum likelihood estimation to improve precision;
solve the actual radial distortion coefficients with the least squares method;
and jointly optimize the internal and external parameters and the distortion coefficients with the maximum likelihood method to improve estimation precision, finally obtaining the accurate internal and external parameters and distortion coefficients of the camera.
3. The binocular vision-based underwater robot recovery positioning method of claim 1, wherein in Step 3 the image is binarized with a Wellner adaptive threshold to obtain a black-and-white image of the underwater light sources, the Wellner algorithm being specifically:
f_s(n) = f_s(n-1) · (1 - 1/s) + p(n);    g_s(n) = 1 if p(n) > (f_s(n)/s) · (100 + t)/100, otherwise g_s(n) = 0
where p(n) is the gray value of the n-th pixel, f_s(n) is a running sum over approximately the previous s pixels, g_s(n) is the binarized value, s is the number of preceding pixels considered and equals one eighth of the image width, and t is a percentage margin.
4. The binocular vision-based underwater robot recovery positioning method of claim 3, wherein in Step 3 a connected-domain detection algorithm obtains the number of connected domains in the image, i.e., the number of light sources, the algorithm comprising the following steps:
scan the image pixel by pixel; if the current pixel value is 0, move to the next scanning position;
if the current pixel value is 1, check its two neighbours to the left and above;
considering the combination of these two neighbours: if both are 0, give the current pixel a new label, indicating the start of a new connected domain;
if exactly one of the two neighbours is 1, mark the current pixel with that neighbour's label;
if both neighbour values are 1 and their labels are the same, the current pixel takes that label;
if both neighbour values are 1 but the labels differ, assign the smaller label to the current pixel and record the two labels as equivalent;
and repeating this cycle over the whole image finds all connected domains and yields their number.
5. The binocular vision-based underwater robot recovery positioning method of claim 1, wherein in Step 4, on the basis of the obtained connected domains, the binarized light source image is smoothed: morphological erosion and dilation operations from image processing eliminate the pixel noise around the connected domains where the light sources lie, highlight those connected domains, and smooth the light source edges.
6. The binocular vision-based underwater robot recovery positioning method of claim 1, wherein Step 4 applies a centroid detection algorithm whose core is to sum all horizontal and vertical coordinates within a connected domain, count the number of pixels in the connected domain, and average to obtain the center coordinates of the connected domain; the center coordinates of each light source are then marked in turn, the light sources with the minimum and maximum horizontal coordinates being marked left and right and those with the minimum and maximum vertical coordinates being marked up and down,
x_c = (Σ_i Σ_j I_ij · x_ij) / (Σ_i Σ_j I_ij),    y_c = (Σ_i Σ_j I_ij · y_ij) / (Σ_i Σ_j I_ij)
where I_ij is the light intensity received at pixel (i, j) of the image and (x_c, y_c) are the center point coordinates;
and finally the light source coordinates carrying the same mark in the two binocular camera images are matched.
7. The binocular vision-based underwater robot recovery positioning method of claim 1, wherein in Step 5 the coordinate system of the left camera is assumed to coincide with the world coordinate system; O_1-X_1Y_1 is the image coordinate system of the left camera, with focal length f_l; the camera coordinate system O_C2-X_C2Y_C2 of the right camera has the corresponding image coordinate system O_2-X_2Y_2 on its imaging plane, with focal length f_r; a space point projects onto the left and right imaging planes at P_l(X_l, Y_l) and P_r(X_r, Y_r); the projection models of the left and right cameras are then:
X_l = f_l · x / z,  Y_l = f_l · y / z;    X_r = f_r · x_r / z_r,  Y_r = f_r · y_r / z_r
there is a fixed correspondence between the two cameras: through a translation matrix T = [T_X T_Y T_Z]^T and a rotation matrix
R = [ r_1 r_2 r_3 ; r_4 r_5 r_6 ; r_7 r_8 r_9 ]
the right camera can be brought into complete coincidence with the left camera, and the correspondence can be expressed as:
[ x_r  y_r  z_r ]^T = R · [ x  y  z ]^T + T
combining the above formulas gives the expression of the point P(x, y, z) in terms of P_l(X_l, Y_l) and P_r(X_r, Y_r):
x = z · X_l / f_l,    y = z · Y_l / f_l,
z = f_l · (f_r · T_X - X_r · T_Z) / ( X_r · (r_7 · X_l + r_8 · Y_l + r_9 · f_l) - f_r · (r_1 · X_l + r_2 · Y_l + r_3 · f_l) )
and with the internal and external parameters of the left and right cameras obtained by calibration and the matched points P_l(X_l, Y_l) and P_r(X_r, Y_r), the three-dimensional position information of the AUV is obtained.
CN202010594951.1A 2020-06-24 2020-06-24 Underwater robot recovery positioning method based on binocular vision Active CN111721259B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010594951.1A CN111721259B (en) 2020-06-24 2020-06-24 Underwater robot recovery positioning method based on binocular vision


Publications (2)

Publication Number Publication Date
CN111721259A (en) 2020-09-29
CN111721259B (en) 2022-05-03

Family

ID=72568964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010594951.1A Active CN111721259B (en) 2020-06-24 2020-06-24 Underwater robot recovery positioning method based on binocular vision

Country Status (1)

Country Link
CN (1) CN111721259B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103714345A (en) * 2013-12-27 2014-04-09 Tcl集团股份有限公司 Method and system for detecting fingertip space position based on binocular stereoscopic vision
CN104091324A (en) * 2014-06-16 2014-10-08 华南理工大学 Quick checkerboard image feature matching algorithm based on connected domain segmentation
CN104182982A (en) * 2014-08-27 2014-12-03 大连理工大学 Overall optimizing method of calibration parameter of binocular stereo vision camera
CN105225251A (en) * 2015-09-16 2016-01-06 三峡大学 Over the horizon movement overseas target based on machine vision identifies and locating device and method fast
US20190204084A1 (en) * 2017-09-29 2019-07-04 Goertek Inc. Binocular vision localization method, device and system
CN108765495A (en) * 2018-05-22 2018-11-06 山东大学 A kind of quick calibrating method and system based on binocular vision detection technology
CN109242908A (en) * 2018-07-12 2019-01-18 中国科学院自动化研究所 Scaling method for underwater two CCD camera measure system

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784655A (en) * 2020-06-24 2020-10-16 江苏科技大学 Underwater robot recovery positioning method
CN112215901A (en) * 2020-10-09 2021-01-12 哈尔滨工程大学 Multifunctional calibration plate device for underwater calibration
CN112215901B (en) * 2020-10-09 2023-08-01 哈尔滨工程大学 Multifunctional calibration plate device for underwater calibration
CN112634377A (en) * 2020-12-28 2021-04-09 深圳市杉川机器人有限公司 Camera calibration method of sweeping robot, terminal and computer readable storage medium
WO2022205525A1 (en) * 2021-04-01 2022-10-06 江苏科技大学 Binocular vision-based autonomous underwater vehicle recycling guidance false light source removal method
CN113034399A (en) * 2021-04-01 2021-06-25 江苏科技大学 Binocular vision based autonomous underwater robot recovery and guide pseudo light source removing method
CN113109762A (en) * 2021-04-07 2021-07-13 哈尔滨工程大学 Optical vision guiding method for AUV (autonomous Underwater vehicle) docking recovery
CN113109762B (en) * 2021-04-07 2022-08-02 哈尔滨工程大学 Optical vision guiding method for AUV (autonomous Underwater vehicle) docking recovery
CN113313116A (en) * 2021-06-20 2021-08-27 西北工业大学 Vision-based accurate detection and positioning method for underwater artificial target
CN113895594A (en) * 2021-09-22 2022-01-07 中国船舶重工集团公司第七0七研究所九江分部 AUV recovery method based on underwater dynamic recovery platform
CN114638883A (en) * 2022-03-09 2022-06-17 西南交通大学 Vision-limited repositioning target method for insulator water washing robot
CN114638883B (en) * 2022-03-09 2023-07-14 西南交通大学 Visual limited repositioning target method for insulator water flushing robot
CN116452878A (en) * 2023-04-20 2023-07-18 广东工业大学 Attendance checking method and system based on deep learning algorithm and binocular vision
CN116452878B (en) * 2023-04-20 2024-02-02 广东工业大学 Attendance checking method and system based on deep learning algorithm and binocular vision
CN116740334A (en) * 2023-06-23 2023-09-12 河北大学 Unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO
CN116740334B (en) * 2023-06-23 2024-02-06 河北大学 Unmanned aerial vehicle intrusion detection positioning method based on binocular vision and improved YOLO

Also Published As

Publication number Publication date
CN111721259B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
CN111721259B (en) Underwater robot recovery positioning method based on binocular vision
CN111784655B (en) Underwater robot recycling and positioning method
CN109255813B (en) Man-machine cooperation oriented hand-held object pose real-time detection method
CN110497187B (en) Sun flower pattern assembly system based on visual guidance
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN110648367A (en) Geometric object positioning method based on multilayer depth and color visual information
CN107844750A (en) A kind of water surface panoramic picture target detection recognition methods
CN108597009B (en) Method for detecting three-dimensional target based on direction angle information
CN111897349A (en) Underwater robot autonomous obstacle avoidance method based on binocular vision
CN111862201A (en) Deep learning-based spatial non-cooperative target relative pose estimation method
CN112132874B (en) Calibration-plate-free heterogeneous image registration method and device, electronic equipment and storage medium
CN111627072A (en) Method and device for calibrating multiple sensors and storage medium
CN111179233B (en) Self-adaptive deviation rectifying method based on laser cutting of two-dimensional parts
CN114029946A (en) Method, device and equipment for guiding robot to position and grab based on 3D grating
CN113324478A (en) Center extraction method of line structured light and three-dimensional measurement method of forge piece
CN115830018B (en) Carbon block detection method and system based on deep learning and binocular vision
CN114140439A (en) Laser welding seam feature point identification method and device based on deep learning
CN106952262B (en) Ship plate machining precision analysis method based on stereoscopic vision
CN113313116B (en) Underwater artificial target accurate detection and positioning method based on vision
CN108109154A (en) A kind of new positioning of workpiece and data capture method
CN115629066A (en) Method and device for automatic wiring based on visual guidance
CN111738971B (en) Circuit board stereoscopic scanning detection method based on line laser binocular stereoscopic vision
Li et al. Vision-based target detection and positioning approach for underwater robots
CN112484680A (en) Sapphire wafer positioning and tracking method based on circle detection
CN116594351A (en) Numerical control machining unit system based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant