CN114248893A - An operational underwater robot for sea cucumber fishing and its control method - Google Patents


Info

Publication number
CN114248893A
Authority
CN
China
Prior art keywords
sea cucumber
underwater robot
propeller
visual image
control board
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210183134.6A
Other languages
Chinese (zh)
Other versions
CN114248893B (en)
Inventor
位耀光
张树斌
安冬
李道亮
刘金存
吴英昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Agricultural University
Original Assignee
China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Agricultural University
Priority to CN202210183134.6A
Publication of CN114248893A
Application granted
Publication of CN114248893B
Legal status: Active
Anticipated expiration

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B63SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63CLAUNCHING, HAULING-OUT, OR DRY-DOCKING OF VESSELS; LIFE-SAVING IN WATER; EQUIPMENT FOR DWELLING OR WORKING UNDER WATER; MEANS FOR SALVAGING OR SEARCHING FOR UNDERWATER OBJECTS
    • B63C11/00Equipment for dwelling or working underwater; Means for searching for underwater objects
    • B63C11/52Tools specially adapted for working underwater, not otherwise provided for
    • AHUMAN NECESSITIES
    • A01AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01KANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K80/00Harvesting oysters, mussels, sponges or the like
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/04Control of altitude or depth
    • G05D1/06Rate of change of altitude or depth
    • G05D1/0692Rate of change of altitude or depth specially adapted for under-water vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Environmental Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Ocean & Marine Engineering (AREA)
  • Animal Husbandry (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Mechanical Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to an operation-type underwater robot for sea cucumber fishing and a control method thereof, belonging to the field of underwater robots. During grasping, a sea cucumber recognition and tracking algorithm based on MobileNet-Transformer-GCN recognizes and continuously tracks the sea cucumber to be caught while locating it in real time; a rapidly-exploring random tree algorithm plans a path from the operation-type underwater robot to the target point; and an Actor-Critic reinforcement learning model controls the robot to move along that path, achieving precise control and autonomous grasping of the sea cucumber fishing robot in complex underwater environments.

Description

An operational underwater robot for sea cucumber fishing and its control method

Technical Field

The invention relates to the field of underwater robots, and in particular to an operation-type underwater robot for sea cucumber fishing and a control method thereof.

Background Art

With growing demand for high-quality seafood such as sea cucumbers, scallops, and sea urchins, manual harvesting is not only slow and low-yield but also costly and dangerous. Technologies that replace divers with intelligent automated equipment have therefore received wide attention. Automated, intelligent fishing equipment can greatly improve the efficiency of sea cucumber harvesting, reduce dependence on labor, maximize yield, lower the risks of the operation, and reduce unnecessary losses. The operation-type underwater robot is one kind of intelligent sea cucumber fishing equipment: an underwater robot equipped with a robotic arm, combined with target recognition, detection, and tracking and with corresponding stabilizing control algorithms, can grasp sea cucumbers automatically and greatly reduce the difficulty of the harvest. However, the design and control of such a robot are very complex. On the one hand, the underwater environment is complex: the vehicle is acted on not only by its own buoyancy and gravity but also by ocean currents and tides, which disturb its balance and stability and increase the difficulty of control. On the other hand, underwater imaging conditions are poor: illumination is low, red light attenuates quickly, and color casts are severe, so recognizability and visible range are limited, which further increases the difficulty of identifying and catching sea cucumbers.

Summary of the Invention

The purpose of the present invention is to provide an operation-type underwater robot for sea cucumber fishing and a control method thereof, so as to achieve precise control and autonomous grasping of a sea cucumber fishing robot in complex underwater environments.

To achieve the above object, the present invention provides the following scheme:

An operation-type underwater robot for sea cucumber fishing, comprising: a body frame, thrusters, a second control board, a first control board, a camera mechanism, and a grasping mechanism;

the thrusters, the second control board, the first control board, and the camera mechanism are all mounted on the body frame, and the fixed end of the grasping mechanism is connected to the body frame;

the camera mechanism is connected to the signal input of the second control board, and the signal outputs of the second control board are connected to the first control board and to the grasping mechanism respectively; the camera mechanism is used to capture a forward visual image and a downward visual image of the operation-type underwater robot and to transmit both images to the second control board; the second control board is used to determine the three-dimensional position of a sea cucumber from the downward visual image and to perform online path planning based on that three-dimensional position and the forward visual image;

the first control board is connected to the thrusters and is used to adjust them according to the online path plan, so that the operation-type underwater robot moves along the planned path;

the second control board is further configured to control the grasping mechanism to catch the sea cucumber when the operation-type underwater robot reaches the target point of the online path plan.

Optionally, the camera mechanism includes a forward-looking monocular camera and a downward-looking binocular camera;

the forward-looking monocular camera and the downward-looking binocular camera are both connected to the signal input of the second control board; the forward-looking monocular camera captures the forward visual image of the operation-type underwater robot and transmits it to the second control board; the downward-looking binocular camera captures the downward visual image of the operation-type underwater robot and transmits it to the second control board.

Optionally, the thrusters include first, second, third, fourth, fifth, sixth, seventh, and eighth propeller thrusters;

the first, second, third, and fourth propeller thrusters are arranged in the horizontal plane in a vectored configuration, with the first and second at the front of the body frame and the third and fourth at the rear;

all eight propeller thrusters are connected to the first control board;

the first control board controls the first through fourth propeller thrusters to provide thrust in the horizontal fore-aft directions, and controls the fifth through eighth propeller thrusters to provide thrust in the vertical direction, so that the operation-type underwater robot moves along the planned path.
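The thruster layout above can be sketched as a thrust-allocation problem. The matrices below are illustrative assumptions (45-degree vectored angles, unit moment arms); the patent specifies only which thrusters act horizontally and which act vertically:

```python
import numpy as np

# Hypothetical allocation for the eight-thruster layout: thrusters 1-4 are
# vectored in the horizontal plane (45-degree angles are an assumption),
# thrusters 5-8 point vertically. Each row gives the body-frame wrench
# components one thruster contributes per unit thrust.
c = np.cos(np.pi / 4)
A_h = np.array([            # rows: thrusters 1-4; cols: surge Fx, sway Fy, yaw Mz
    [c,  c,  1.0],          # front-left
    [c, -c, -1.0],          # front-right
    [c, -c,  1.0],          # rear-left
    [c,  c, -1.0],          # rear-right
])
A_v = np.array([            # rows: thrusters 5-8; cols: heave Fz, roll Mx, pitch My
    [1.0,  1.0,  1.0],
    [1.0, -1.0,  1.0],
    [1.0,  1.0, -1.0],
    [1.0, -1.0, -1.0],
])

def allocate(fx, fy, mz, fz, mx, my):
    """Least-squares thrust per thruster reproducing the commanded wrench."""
    horiz = np.linalg.pinv(A_h.T) @ np.array([fx, fy, mz])
    vert = np.linalg.pinv(A_v.T) @ np.array([fz, mx, my])
    return np.concatenate([horiz, vert])   # thrusts for thrusters 1-8

thrusts = allocate(10.0, 0.0, 0.0, 5.0, 0.0, 0.0)  # pure surge + heave command
```

The pseudoinverse spreads each commanded force or moment across the redundant thrusters with minimum total effort, which is a common choice for this kind of over-actuated layout.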

Optionally, the grasping mechanism includes: a base, a shoulder joint, an elbow joint, a forearm joint, a gripper, and first, second, third, and fourth servos;

the first, second, third, and fourth servos are all connected to the second control board;

the shoulder joint is fixed to the body frame through the base; the base carries the first servo, which, under the control of the second control board, drives the shoulder joint to rotate about the vertical axis;

the elbow joint is connected in series to the shoulder joint, with the second servo mounted between them; under the control of the second control board, the second servo drives the elbow joint to rotate about an axis perpendicular to the shoulder joint's axis;

the forearm joint is connected in series to the elbow joint, with the third servo mounted between them; under the control of the second control board, the third servo drives the forearm joint to rotate about an axis perpendicular to the elbow joint's centerline;

the gripper and the fourth servo are mounted above the forearm joint; the gripper consists of two opposed mesh jaws; under the control of the second control board, the fourth servo drives the two jaws to rotate in opposite directions, opening and closing the gripper.

Optionally, the operation-type underwater robot further includes a loading cage and a fifth servo;

the loading cage is mounted on the body frame, opposite the grasping mechanism;

the loading cage is connected to the second control board through the fifth servo; when the grasping mechanism finishes grasping and moves to a preset position, the second control board drives the fifth servo to open the loading cage automatically and load the sea cucumber caught by the grasping mechanism.

A control method for the operation-type underwater robot for sea cucumber fishing, the control method comprising:

acquiring, in real time, the forward visual image and the downward visual image of the operation-type underwater robot captured by the camera mechanism;

from the downward visual image acquired in real time, identifying and continuously tracking the sea cucumber to be caught with a sea cucumber recognition and tracking algorithm based on MobileNet-Transformer-GCN, while locating the pixel coordinates of the sea cucumber in real time;

converting the pixel coordinates into world coordinates through binocular stereo matching to obtain the three-dimensional position of the sea cucumber to be caught;

setting the three-dimensional position of the sea cucumber as the target point, and planning a path from the operation-type underwater robot to the target point with a rapidly-exploring random tree (RRT) algorithm;
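The path-planning step can be sketched with a minimal rapidly-exploring random tree. The sampling bounds, step size, and goal bias below are assumptions for illustration; the patent does not specify them:

```python
import math
import random

def rrt_path(start, goal, step=0.2, goal_tol=0.3, max_iter=5000, seed=0):
    """Minimal rapidly-exploring random tree in 3-D: grow a tree from `start`
    until a node lands within `goal_tol` of `goal`, then follow parent links
    back. Collision checks are omitted; a real planner would reject edges
    that hit obstacles seen in the forward camera image."""
    rng = random.Random(seed)
    nodes, parent = [tuple(start)], {0: None}
    for _ in range(max_iter):
        # Goal-biased sampling: 20% of samples pull the tree toward the goal.
        sample = tuple(goal) if rng.random() < 0.2 else tuple(
            rng.uniform(-5.0, 5.0) for _ in range(3))
        near = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        d = math.dist(nodes[near], sample)
        if d == 0.0:
            continue
        # Extend one step from the nearest node toward the sample.
        new = tuple(n + step * (s - n) / d for n, s in zip(nodes[near], sample))
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if math.dist(new, goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:           # walk parent links back to the root
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

path = rrt_path((0.0, 0.0, 0.0), (2.0, 1.0, -1.5))
```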

controlling, based on the forward visual image, the operation-type underwater robot to move along the path with an Actor-Critic reinforcement learning model, the robot hovering in place after reaching the target point; the Actor-Critic reinforcement learning model compresses its sample space by clustering with a Gaussian mixture model;

grasping the sea cucumber to be caught with the grasping mechanism of the operation-type underwater robot according to inverse kinematics.
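The final grasp relies on inverse kinematics of the shoulder-elbow-forearm chain. A closed-form sketch for a comparable 3-DOF arm (assumed link lengths and a simplified joint layout, not dimensions from the patent) might look like:

```python
import math

def arm_ik(x, y, z, l1=0.25, l2=0.20):
    """Closed-form inverse kinematics for a 3-DOF arm: shoulder yaw about
    the vertical axis, then a planar two-link chain (elbow and forearm).
    Link lengths l1, l2 are illustrative assumptions."""
    yaw = math.atan2(y, x)                     # shoulder rotation
    r = math.hypot(x, y)                       # reach in the yaw plane
    d2 = r * r + z * z                         # squared distance to target
    cos_f = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(cos_f) > 1.0:
        raise ValueError("target out of reach")
    forearm = math.acos(cos_f)                 # distal (forearm) angle
    elbow = math.atan2(z, r) - math.atan2(
        l2 * math.sin(forearm), l1 + l2 * math.cos(forearm))
    return yaw, elbow, forearm

yaw, elbow, forearm = arm_ik(0.2, 0.1, -0.15)  # target below and ahead
```

This is the standard two-link elbow solution; a real controller would also pick between elbow-up and elbow-down branches and respect servo travel limits.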

Optionally, after acquiring in real time the forward and downward visual images captured by the camera mechanism, the method further includes:

applying an underwater image enhancement algorithm based on a Siamese convolutional neural network to perform color correction and dehazing enhancement on the forward and downward visual images. The Siamese convolutional neural network includes a first branch CNN and a second branch CNN: the first branch, constrained by the color features of the label image, is responsible for color correction; the second branch, constrained by texture features, is responsible for sharpening. After applying their respective feature constraints, the two branches perform convolutional feature transformations; the two branch features are then fused by point-wise multiplication, and a final convolutional layer generates the enhanced image.
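The two-branch fusion can be illustrated at the shape level with plain NumPy; the real branches would be trained multi-layer CNNs, and all weights below are random stand-ins for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3(x, w):
    """'Same' 3x3 convolution on a single-channel image with zero padding."""
    out = np.zeros_like(x)
    p = np.pad(x, 1)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * w)
    return out

def enhance(img, w_color, w_texture, w_out):
    """Shape-level sketch of the two-branch design: a color-correction
    branch and a texture/sharpening branch each transform the image, the
    branch outputs are fused by point-wise multiplication, and one final
    convolution produces the enhanced image."""
    color = np.maximum(conv3x3(img, w_color), 0.0)      # color branch + ReLU
    texture = np.maximum(conv3x3(img, w_texture), 0.0)  # texture branch + ReLU
    fused = color * texture                             # point-wise product
    return conv3x3(fused, w_out)                        # final conv layer

img = rng.random((8, 8))                                # toy underwater frame
out = enhance(img, rng.normal(size=(3, 3)), rng.normal(size=(3, 3)),
              rng.normal(size=(3, 3)))
```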

Optionally, identifying and continuously tracking the sea cucumber to be caught with the MobileNet-Transformer-GCN-based recognition and tracking algorithm from the downward visual image acquired in real time, while locating the pixel coordinates of the sea cucumber in real time, specifically includes:

scaling the downward visual image acquired in real time to obtain a scaled downward visual image;

passing the scaled downward visual image sequentially through a first lightweight module, a second lightweight module, a third lightweight module, a first Transformer-GCN module, a fourth lightweight module, a second Transformer-GCN module, a fifth lightweight module, and a global pooling module, outputting a feature map;

mapping the feature map to obtain the prediction result for the sea cucumber to be caught, the prediction result including target position, target class, and confidence;

feeding the feature map into a fully connected module to obtain deep identity features;

extracting histogram-of-gradients features from the scaled downward visual image as hand-crafted identity features;

mapping the hand-crafted identity features to the same dimension as the deep identity features with principal component analysis;

fusing the mapped hand-crafted identity features with the deep identity features to obtain fused identity features;

feeding the fused identity features into a filtering module and computing the response value of each detected target in the scaled downward visual image;

selecting the detected target with the largest response value in the scaled downward visual image as the currently tracked sea cucumber to be caught.

Optionally, converting the pixel coordinates into world coordinates through binocular stereo matching to obtain the three-dimensional position of the sea cucumber to be caught specifically includes:

calibrating the cameras with Zhang Zhengyou's calibration method to obtain the intrinsic matrix and the distortion coefficients of the cameras;

converting the pixels of the downward visual image into the camera coordinate system using the intrinsic matrix;

correcting the pixels of the downward visual image with the distortion coefficients in the camera coordinate system, and converting the corrected pixels back to the pixel coordinate system;

computing the distance D from a point in space to the camera plane with the formula D = fT/d, where f is the focal length obtained by calibration, T is the baseline between the two cameras of the binocular pair, and d is the disparity;

according to the distance from a point in space to the camera plane, converting the pixel coordinates into world coordinates by

X = T·x1 / (x1 − x2),  Y = T·y1 / (x1 − x2),  Z = f·T / (x1 − x2),

which gives the three-dimensional position (X, Y, Z) of the sea cucumber to be caught, where (x1, y1) and (x2, y2) are the pixel coordinates of the sea cucumber in the images captured by the two cameras of the binocular pair, and the disparity is d = x1 − x2.
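The conversion based on D = fT/d can be sketched directly; the helper below assumes a rectified stereo pair with pixel coordinates already centered on the principal point:

```python
def triangulate(x1, y1, x2, y2, f, T):
    """Rectified-stereo triangulation: disparity d = x1 - x2, depth
    Z = f*T/d, then X = x1*Z/f and Y = y1*Z/f."""
    d = x1 - x2
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    Z = f * T / d                 # distance to the camera plane (D = fT/d)
    X = x1 * Z / f                # equivalently T*x1/(x1 - x2)
    Y = y1 * Z / f
    return X, Y, Z

# Example: f = 500 px, baseline T = 0.1 m, 20 px disparity -> 2.5 m depth.
X, Y, Z = triangulate(100.0, 50.0, 80.0, 50.0, f=500.0, T=0.1)
```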

Optionally, controlling the operation-type underwater robot to move along the path with the Actor-Critic reinforcement learning model, the robot hovering after reaching the target point, specifically includes:

acquiring the current motion state of the operation-type underwater robot, including the yaw, pitch, and roll angles, the three-dimensional coordinates, the angular velocity, and the linear velocity, and feeding it into the online policy network of the actor network;

computing the current reward value from the path and the reward function of the Actor-Critic reinforcement learning model, R = r0 − ρ1·||(Δφ, Δθ, Δψ)||_2 − ρ2·||(Δx, Δy, Δz)||_2, where Δx, Δy, Δz are the three-dimensional coordinate state quantities; Δφ, Δθ, Δψ are the yaw, pitch, and roll state quantities; r0 is a reward constant; ρ1·||(Δφ, Δθ, Δψ)||_2 is the weighted 2-norm of the relative orientation error; ρ2·||(Δx, Δy, Δz)||_2 is the weighted 2-norm of the relative position error; and ρ1 and ρ2 are the first and second coefficients;

fusing the current reward value with the state function into a training sample and adding it to the sample space; the state function is s = [g, Δx, Δy, Δz, Δφ, Δθ, Δψ, u, v, w, p, q, r], where g is a state constant, u, v, w are the linear velocity state quantities, and p, q, r are the angular velocity state quantities;
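The reward and state construction above can be sketched as follows; the constants r0, ρ1, ρ2, and g are assumed values, since the patent leaves them unspecified:

```python
import math

R0, RHO1, RHO2 = 1.0, 0.5, 0.5    # reward constant and weights (assumed values)
G = 9.81                          # state constant g (value is an assumption)

def reward(dpos, dang):
    """R = r0 - rho1*||(dphi, dtheta, dpsi)||_2 - rho2*||(dx, dy, dz)||_2:
    the smaller the pose error relative to the planned path, the larger
    the reward."""
    return (R0 - RHO1 * math.sqrt(sum(a * a for a in dang))
               - RHO2 * math.sqrt(sum(p * p for p in dpos)))

def state(dpos, dang, lin_vel, ang_vel):
    """s = [g, dx, dy, dz, dphi, dtheta, dpsi, u, v, w, p, q, r]."""
    return [G, *dpos, *dang, *lin_vel, *ang_vel]

# One training sample: reward fused with the 13-element state vector.
sample = (reward((0.0, 0.0, 0.0), (0.0, 0.0, 0.0)),
          state((0.0, 0.0, 0.0), (0.0, 0.0, 0.0),
                (0.1, 0.0, 0.0), (0.0, 0.0, 0.0)))
```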

fusing the samples in the sample space with a Gaussian mixture model to compress the sample space; the Gaussian mixture model is

P(x) = Σ_{k=1}^{K} α_k · N(x | μ_k, σ_k),

where P(x) is the compressed sample space, K is the number of sample classes in the sample space before compression, α_k is the class distribution probability, μ_k and σ_k are the class mean and class variance, (R_i, s_i) is the i-th sample in the sample space before compression, and N is the Gaussian density function of the k-th sub-model;

feeding the samples of the compressed sample space into the target state-action network of the critic (evaluation) network, computing gradients, and updating the parameters of the online state-action network;

optimizing and updating the online policy network of the actor (action) network with the gradients computed by the critic network and, after accumulating gradients over multiple steps, updating the parameters of the target policy network from the gradients of the online policy network;

generating a new state function through the online policy network of the actor network, which is used to control the manipulator and the propeller thrusters;

repeating the above steps until convergence, so that the motion of the operation-type underwater robot follows the planned path.
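The update cycle above resembles a DDPG-style actor-critic. The linear networks and the Polyak averaging below are simplifying stand-ins (the patent describes a delayed parameter copy after accumulating gradients), not the patent's actual networks:

```python
import numpy as np

rng = np.random.default_rng(3)

DIM_S, DIM_A = 13, 8       # state length and actuator count (assumed values)

# Linear stand-ins for the four networks: the actor's online/target policy
# networks and the critic's online/target state-action (Q) networks.
w_pi = rng.normal(scale=0.1, size=(DIM_A, DIM_S))     # online policy
w_pi_tgt = w_pi.copy()                                # target policy
w_q = rng.normal(scale=0.1, size=DIM_S + DIM_A)       # online Q
w_q_tgt = w_q.copy()                                  # target Q

def policy(w, s):
    return np.tanh(w @ s)                             # bounded commands

def q_value(w, s, a):
    return float(w @ np.concatenate([s, a]))

def train_step(s, a, r, s_next, gamma=0.99, lr=1e-3, tau=0.01):
    """One update: the target networks form the TD target, the online Q
    regresses toward it, the online policy ascends the critic's gradient,
    and the target networks slowly track the online ones."""
    global w_q, w_pi, w_q_tgt, w_pi_tgt
    y = r + gamma * q_value(w_q_tgt, s_next, policy(w_pi_tgt, s_next))
    td_err = q_value(w_q, s, a) - y
    w_q = w_q - lr * td_err * np.concatenate([s, a])  # critic step
    # Deterministic policy gradient: dQ/da through the linear critic,
    # then through the tanh policy.
    dq_da = w_q[DIM_S:]
    a_pi = policy(w_pi, s)
    w_pi = w_pi + lr * np.outer(dq_da * (1.0 - a_pi ** 2), s)
    w_q_tgt = (1.0 - tau) * w_q_tgt + tau * w_q       # soft target updates
    w_pi_tgt = (1.0 - tau) * w_pi_tgt + tau * w_pi

s0 = rng.normal(size=DIM_S)
train_step(s0, policy(w_pi, s0), 1.0, rng.normal(size=DIM_S))
```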

According to the specific embodiments provided by the present invention, the present invention discloses the following technical effects:

The invention discloses an operation-type underwater robot for sea cucumber fishing and a control method thereof. The camera mechanism captures the forward and downward visual images of the robot; the second control board determines the three-dimensional position of the sea cucumber from the downward visual image and performs online path planning from that position and the forward visual image; the first control board adjusts the thrusters according to the online path plan so that the robot moves along the planned path; and when the robot reaches the target point, the second control board controls the grasping mechanism to catch the sea cucumber. During grasping, the MobileNet-Transformer-GCN recognition and tracking algorithm identifies and continuously tracks the sea cucumber to be caught while locating its pixel coordinates in real time; a rapidly-exploring random tree algorithm plans the path from the robot to the target point; an Actor-Critic reinforcement learning model controls the robot along that path; and the grasping mechanism grabs the sea cucumber according to inverse kinematics, achieving precise control and autonomous grasping of the sea cucumber fishing robot in complex underwater environments.

Brief Description of the Drawings

To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

Fig. 1 is a front view of the operation-type underwater robot for sea cucumber fishing provided by the present invention;

Fig. 2 is a rear view of the operation-type underwater robot for sea cucumber fishing provided by the present invention;

Fig. 3 is a bottom view of the operation-type underwater robot for sea cucumber fishing provided by the present invention;

Fig. 4 is a side view of the operation-type underwater robot for sea cucumber fishing provided by the present invention;

Fig. 5 is a schematic structural diagram of the grasping mechanism provided by the present invention;

Fig. 6 is a framework diagram of the control method of the operation-type underwater robot for sea cucumber fishing provided by the present invention;

Fig. 7 is a schematic diagram of the underwater image enhancement algorithm provided by the present invention;

Fig. 8 is a structural block diagram of the sea cucumber recognition and tracking algorithm provided by the present invention;

Fig. 9 is a schematic structural diagram of the lightweight module provided by the present invention;

Fig. 10 is a schematic structural diagram of the Transformer-GCN module provided by the present invention;

Fig. 11 is a schematic diagram of the sea cucumber recognition and tracking algorithm provided by the present invention;

Fig. 12 is a schematic diagram of the Actor-Critic reinforcement learning model provided by the present invention.

Reference numerals: 1: first control cabin; 2: power compartment; 31: first propeller thruster; 32: second propeller thruster; 33: third propeller thruster; 34: fourth propeller thruster; 41: fifth propeller thruster; 42: sixth propeller thruster; 43: seventh propeller thruster; 44: eighth propeller thruster; 5: downward-looking binocular camera; 6: second control cabin; 7: loading cage; 81: gripper; 82: forearm joint; 83: elbow joint; 84: shoulder joint; 85: base; 9: body frame.

具体实施方式Detailed ways

下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, but not all of the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative efforts shall fall within the protection scope of the present invention.

本发明的目的是提供一种面向海参捕捞的作业型水下机器人及其控制方法,以实现复杂水下环境下的海参捕捞机器人精准控制和自主抓取。The purpose of the present invention is to provide an operational underwater robot for sea cucumber fishing and a control method thereof, so as to realize precise control and autonomous grasping of the sea cucumber fishing robot in a complex underwater environment.

为使本发明的上述目的、特征和优点能够更加明显易懂,下面结合附图和具体实施方式对本发明作进一步详细的说明。In order to make the above objects, features and advantages of the present invention more clearly understood, the present invention will be described in further detail below with reference to the accompanying drawings and specific embodiments.

本发明提供了一种面向海参捕捞的作业型水下机器人，如图1-4所示，作业型水下机器人包括：本体框架9、推进器、第二控制板、第一控制板、摄像机构和抓取机构。The present invention provides an operation-type underwater robot for sea cucumber fishing. As shown in Figures 1-4, the operation-type underwater robot includes: a body frame 9, propellers, a second control board, a first control board, a camera mechanism and a grasping mechanism.

推进器、第二控制板、第一控制板和摄像机构均设置在本体框架9上，抓取机构的固定端与本体框架9连接。摄像机构与第二控制板的信号输入端连接，第二控制板的信号输出端分别与第一控制板和抓取机构连接；摄像机构用于拍摄作业型水下机器人的前方视觉图像和下方视觉图像，并将前方视觉图像和下方视觉图像均传输至第二控制板；第二控制板用于根据下方视觉图像确定海参的三维位置信息，并根据海参的三维位置信息和前方视觉图像进行在线路径规划。第一控制板与推进器连接，第一控制板用于根据在线路径规划调节推进器，使作业型水下机器人按照在线路径规划进行运动。第二控制板还用于在作业型水下机器人到达在线路径规划的目标点时，控制抓取机构对海参进行捕捞。The propellers, the second control board, the first control board and the camera mechanism are all arranged on the body frame 9, and the fixed end of the grasping mechanism is connected to the body frame 9. The camera mechanism is connected with the signal input end of the second control board, and the signal output end of the second control board is connected with the first control board and the grasping mechanism respectively. The camera mechanism is used to capture the forward visual image and the lower visual image of the operation-type underwater robot and to transmit both images to the second control board. The second control board is used to determine the three-dimensional position information of the sea cucumber from the lower visual image, and to perform online path planning based on the sea cucumber's three-dimensional position information and the forward visual image. The first control board is connected with the propellers and adjusts them according to the online path plan, so that the operation-type underwater robot moves along the planned path. The second control board is also used to control the grasping mechanism to catch the sea cucumber when the operation-type underwater robot reaches the target point of the online path plan.

如图5所示，摄像机构包括：前视单目摄像头和俯视双目摄像头5。前视单目摄像头和俯视双目摄像头5均与第二控制板的信号输入端连接；前视单目摄像头用于拍摄作业型水下机器人的前方视觉图像，并将前方视觉图像传输至第二控制板；俯视双目摄像头5用于拍摄作业型水下机器人的下方视觉图像，并将下方视觉图像传输至第二控制板。As shown in FIG. 5, the camera mechanism includes a forward-looking monocular camera and a top-view binocular camera 5, both connected to the signal input end of the second control board. The forward-looking monocular camera captures the forward visual image of the operation-type underwater robot and transmits it to the second control board; the top-view binocular camera 5 captures the lower visual image of the operation-type underwater robot and transmits it to the second control board.

前视单目高清摄像头，固定于第一控制舱1内的云台上，可通过舵机调整其视角，其用于获取作业型海参捕捞机器人运动前方的视野信息，以实现机器人的跟踪和避障等任务。俯视双目摄像头5固定于机器人框架中间位置，通过水密连接线连接到第一个控制仓。第一个控制舱内的视觉信息处理模块，可用于双目摄像机和单目摄像机所获取的视频信息的处理和传输。此外第一控制舱1内还包括九轴惯性传感器和深度传感器，用于获取机器人姿态信息和深度信息。The forward-looking monocular high-definition camera is fixed on the pan-tilt in the first control cabin 1, and its viewing angle can be adjusted by a steering gear. It is used to obtain the visual information ahead of the moving sea cucumber fishing robot, supporting tasks such as tracking and obstacle avoidance. The top-view binocular camera 5 is fixed in the middle of the robot frame and is connected to the first control compartment through a watertight cable. The visual information processing module in the first control cabin handles the processing and transmission of the video captured by the binocular and monocular cameras. In addition, the first control cabin 1 also contains a nine-axis inertial sensor and a depth sensor for acquiring the robot's attitude and depth information.

第二控制板位于第二个控制仓内，第二控制板具有图形处理单元、传感器处理器、网络通信模块，功耗低，最大功耗30瓦，能够对深度网络模型进行量化加速，能够运行目标识别、目标检测和跟踪算法，实时识别前视视野中的障碍物和俯视视野中的目标，并能够通过所述双目摄像头计算出目标的三维位置信息和姿态信息。第二控制板还能够运行自主控制算法、导航算法，实现海参捕捞作业机器人的自主控制。The second control board is located in the second control compartment. It has a graphics processing unit, a sensor processor and a network communication module, with low power consumption (30 watts maximum). It can quantize and accelerate deep network models and run target recognition, detection and tracking algorithms, identifying obstacles in the forward view and targets in the downward view in real time, and it can calculate the three-dimensional position and attitude information of the target through the binocular camera. The second control board can also run autonomous control and navigation algorithms to realize the autonomous control of the sea cucumber fishing robot.

位于第一控制舱下方的为海参捕捞作业水下机器人的电源仓2，为机器人自主控制时提供必要的电源动力；电源仓2下方为所属机器人视觉系统中的双目摄像头，用于机械臂抓取时获取目标三维位置信息和姿态信息。框架后方下部为第二控制舱6，第二控制舱6内设置第二控制板，同时第二控制舱6与第一控制舱1之间通过水密连接线连接。Below the first control cabin is the power compartment 2 of the sea cucumber fishing underwater robot, which supplies the power necessary for autonomous control. Below the power compartment 2 is the binocular camera of the robot's vision system, used to obtain the target's three-dimensional position and attitude information during robotic-arm grasping. The lower rear part of the frame is the second control cabin 6, in which the second control board is arranged; the second control cabin 6 and the first control cabin 1 are connected by a watertight cable.

推进器包括：第一螺旋桨推进器31、第二螺旋桨推进器32、第三螺旋桨推进器33、第四螺旋桨推进器34、第五螺旋桨推进器41、第六螺旋桨推进器42、第七螺旋桨推进器43和第八螺旋桨推进器44。第一螺旋桨推进器31、第二螺旋桨推进器32、第三螺旋桨推进器33和第四螺旋桨推进器34在水平方向上按照矢量型进行布置，第一螺旋桨推进器31和第二螺旋桨推进器32布置在本体框架9的水平方向的前方，第三螺旋桨推进器33和第四螺旋桨推进器34布置在本体框架9的水平方向的后方。第一螺旋桨推进器31、第二螺旋桨推进器32、第三螺旋桨推进器33、第四螺旋桨推进器34、第五螺旋桨推进器41、第六螺旋桨推进器42、第七螺旋桨推进器43和第八螺旋桨推进器44均与第一控制板连接。第一控制板用于控制第一螺旋桨推进器31、第二螺旋桨推进器32、第三螺旋桨推进器33和第四螺旋桨推进器34为作业型水下机器人提供水平前后方向上的推力，并控制第五螺旋桨推进器41、第六螺旋桨推进器42、第七螺旋桨推进器43和第八螺旋桨推进器44为作业型水下机器人提供垂直方向上的推力，使作业型水下机器人按照在线路径规划进行运动。The propellers include: a first propeller 31, a second propeller 32, a third propeller 33, a fourth propeller 34, a fifth propeller 41, a sixth propeller 42, a seventh propeller 43 and an eighth propeller 44. The first propeller 31, the second propeller 32, the third propeller 33 and the fourth propeller 34 are arranged in a vectored layout in the horizontal plane: the first propeller 31 and the second propeller 32 are arranged at the front of the body frame 9 in the horizontal direction, and the third propeller 33 and the fourth propeller 34 at the rear. All eight propellers 31-44 are connected to the first control board. The first control board controls the first to fourth propellers 31-34 to provide horizontal fore-aft thrust for the operation-type underwater robot, and controls the fifth to eighth propellers 41-44 to provide vertical thrust, so that the robot moves according to the online path plan.

第一螺旋桨推进器31至第八螺旋桨推进器44均为六自由度矢量型螺旋桨推进器，其用于为海参捕捞机器人提供动力以驱动机器人运动和精准姿态控制。第一螺旋桨推进器31至第八螺旋桨推进器44通过水密连接线连接到第一个控制仓上，控制仓可以通过驱动信号驱动螺旋桨旋转，控制机器人运动方向。The first to eighth propellers 31-44 are all six-degree-of-freedom vectored propellers, used to power the sea cucumber fishing robot for motion and precise attitude control. They are connected to the first control compartment through watertight cables; the control compartment drives the propellers with drive signals to control the robot's direction of motion.

抓取机构包括：基座85、肩关节84、肘关节83、小臂关节82、夹爪81、第一舵机、第二舵机、第三舵机和第四舵机。第一舵机、第二舵机、第三舵机和第四舵机均与第二控制板连接。肩关节84通过基座85固定到本体框架9上；基座85上装有第一舵机，第一舵机用于在第二控制板的控制下驱动肩关节84绕垂直轴心方向旋转。肘关节83串联于肩关节84上，肩关节84与肘关节83之间装有第二舵机，第二舵机用于在第二控制板的控制下驱动肘关节83绕垂直于肩关节84轴心方向旋转。小臂关节82串联于肘关节83上，小臂关节82与肘关节83之间装有第三舵机，第三舵机用于在第二控制板的控制下驱动小臂关节82绕垂直于肘关节83中心线方向旋转。夹爪81与第四舵机装在小臂关节82上方；夹爪81包括两个相对设置的网状结构；第四舵机用于在第二控制板的控制下驱动两个相对设置的网状结构向相反方向旋转，实现夹爪81的开合控制。The grasping mechanism includes: a base 85, a shoulder joint 84, an elbow joint 83, a forearm joint 82, gripper jaws 81, and first to fourth steering gears, all of which are connected to the second control board. The shoulder joint 84 is fixed to the body frame 9 through the base 85; the base 85 carries the first steering gear, which drives the shoulder joint 84 to rotate about the vertical axis under the control of the second control board. The elbow joint 83 is connected in series to the shoulder joint 84, with the second steering gear mounted between them; it drives the elbow joint 83 to rotate about an axis perpendicular to that of the shoulder joint 84 under the control of the second control board. The forearm joint 82 is connected in series to the elbow joint 83, with the third steering gear mounted between them; it drives the forearm joint 82 to rotate about an axis perpendicular to the centerline of the elbow joint 83. The gripper jaws 81 and the fourth steering gear are mounted above the forearm joint 82; the jaws 81 comprise two oppositely arranged mesh structures, and the fourth steering gear drives the two mesh structures to rotate in opposite directions under the control of the second control board, realizing the opening and closing of the jaws 81.

夹爪81为海参状条形网状结构，所述结构为柔性结构，在塑料骨架的外层套有硅胶套，适用于海参的捕捞。The gripper jaw 81 is a sea-cucumber-shaped strip mesh structure; the structure is flexible, with a silicone sleeve over the outer layer of the plastic skeleton, making it suitable for catching sea cucumbers.

抓取机构为三自由度机械臂，其包括串联电机驱动结构、机械刚体传动结构和夹爪81，驱动结构通过传动结构驱动夹爪81实现对海参的抓取，串联电机驱动结构由第一舵机、第二舵机、第三舵机和第四舵机构成，机械刚体传动结构由基座85、肩关节84、肘关节83、小臂关节82构成。所有驱动舵机通过水密连接线连接到第二个控制仓，仓内装有控制板可以驱动机械臂舵机旋转。The grasping mechanism is a three-degree-of-freedom robotic arm comprising a series motor drive structure, a rigid mechanical transmission structure and the gripper jaws 81. The drive structure drives the jaws 81 through the transmission structure to grasp sea cucumbers. The series motor drive structure consists of the first, second, third and fourth steering gears; the rigid mechanical transmission structure consists of the base 85, the shoulder joint 84, the elbow joint 83 and the forearm joint 82. All drive servos are connected to the second control compartment through watertight cables; the control board inside the compartment drives the arm servos.

作业型水下机器人还包括：装载网箱7和第五舵机。装载网箱7设置在本体框架9上，装载网箱7与抓取机构相对设置。装载网箱7通过第五舵机与第二控制板连接，装载网箱7用于在抓取机构抓取结束运动到预设位置时，在第二控制板的控制下通过第五舵机驱动装载网箱7自动打开，装载抓取机构捕捞的海参，便于抓取后的海参的盛放回收。The operation-type underwater robot also includes a loading cage 7 and a fifth steering gear. The loading cage 7 is arranged on the body frame 9, opposite the grasping mechanism, and is connected to the second control board through the fifth steering gear. When the grasping mechanism finishes grasping and moves to the preset position, the loading cage 7 is automatically opened, driven by the fifth steering gear under the control of the second control board, to hold the sea cucumbers caught by the grasping mechanism and facilitate their collection.

优选地,装载网箱7为亚克力材质。装载网箱7固定于机器人前部下方。Preferably, the loading cage 7 is made of acrylic material. The loading cage 7 is fixed under the front of the robot.

作业型水下机器人还包括：低功率探照灯，固定于本体框架9的前方支架及支架下方。其用于在较暗水下条件下对前方进行照明，扩大海参捕捞机器人视野范围。第二控制板能够通过获取到的视觉信息判断所述机器人所处环境下的明暗度变化，当明暗度低于阈值时，照明系统自动开启进行补光，扩大机器人在水下黑暗场景中的可视范围。The operation-type underwater robot also includes low-power searchlights, fixed on the front bracket of the body frame 9 and below the bracket. They illuminate the area ahead in dark underwater conditions, expanding the fishing robot's field of view. The second control board can judge changes in ambient brightness from the acquired visual information; when the brightness falls below a threshold, the lighting system is automatically turned on for supplementary light, extending the robot's visual range in dark underwater scenes.

第二控制板执行基于孪生网络的水下图像增强算法；基于MobileNet-Transformer-GCN的海参识别检测算法和海参目标跟踪定位算法；水下机器人在线运动规划与协调控制算法；海参跟踪控制与悬浮抓取算法，无需上位机即可实现自主控制。The second control board executes an underwater image enhancement algorithm based on a Siamese network; a sea cucumber recognition and detection algorithm and a sea cucumber target tracking and localization algorithm based on MobileNet-Transformer-GCN; an online motion planning and coordinated control algorithm for the underwater robot; and a sea cucumber tracking control and hovering-grasp algorithm, so that autonomous control is achieved without a host computer.

本发明基于前述的面向海参捕捞的作业型水下机器人,还提供了一种面向海参捕捞的作业型水下机器人的控制方法,如图6所示,控制方法包括:Based on the aforementioned operation-type underwater robot for sea cucumber fishing, the present invention also provides a control method for the operation-type underwater robot for sea cucumber fishing, as shown in FIG. 6 , the control method includes:

步骤1,实时获取摄像机构拍摄的作业型水下机器人的前方视觉图像和下方视觉图像。In step 1, the forward visual image and the lower visual image of the operation-type underwater robot captured by the camera mechanism are acquired in real time.

通过双目摄像头和单目摄像头获取机器人前方和下方工作空间的视觉图像。其中单目摄像头用于引导水下机器人从当前位置向海参目标位置移动，具体通过目标检测和跟踪算法实现，其中的运动过程通过规划算法和控制算法实现；双目摄像头用于机器人到达目标位置后的抓取控制，具体通过目标检测和跟踪定位实现，抓取过程通过逆运动学进行计算。Visual images of the workspaces ahead of and below the robot are obtained through the monocular and binocular cameras. The monocular camera guides the underwater robot from its current position toward the sea cucumber target position, implemented by the target detection and tracking algorithms, with the motion realized by the planning and control algorithms; the binocular camera is used for grasp control after the robot reaches the target position, implemented by target detection, tracking and localization, with the grasping motion computed by inverse kinematics.

步骤1之后还包括：After step 1, the method further includes:

采用基于孪生卷积神经网络的水下图像增强算法对前方视觉图像和下方视觉图像进行颜色校正和去雾增强。如图7所示，孪生卷积神经网络包括第一分支卷积神经网络和第二分支卷积神经网络；第一分支卷积神经网络由标签图像的颜色特征进行约束，负责图像的颜色校正；第二分支卷积神经网络由纹理特征进行约束，负责图像的清晰化；第一分支卷积神经网络和第二分支卷积神经网络分别经过特征的约束后进行卷积特征变换操作，最后通过点乘的方式对两个分支特征进行拼接，拼接后经过一层卷积变换生成最终的清晰化图像。An underwater image enhancement algorithm based on a Siamese convolutional neural network performs color correction and dehazing enhancement on the forward and lower visual images. As shown in Figure 7, the Siamese network includes a first and a second branch convolutional neural network. The first branch is constrained by the color features of the label image and is responsible for color correction; the second branch is constrained by texture features and is responsible for sharpening the image. After their respective feature constraints, the two branches perform convolutional feature transformations; finally the two branch features are fused by element-wise multiplication, and the fused result passes through one convolutional layer to generate the final enhanced image.

所用颜色特征为颜色一阶矩、颜色二阶矩和颜色三阶矩,其提取公式具体表示为:The color features used are the first-order moment of color, the second-order moment of color and the third-order moment of color, and the extraction formula is specifically expressed as:

μ_i = (1/N) ∑_{j=1}^{N} p_{ij}
σ_i = [(1/N) ∑_{j=1}^{N} (p_{ij} − μ_i)²]^{1/2}
s_i = [(1/N) ∑_{j=1}^{N} (p_{ij} − μ_i)³]^{1/3}
其中p_{ij}为第i个颜色通道中第j个像素的取值，N为像素总数。where p_{ij} is the value of the j-th pixel in the i-th color channel and N is the total number of pixels.

所用纹理特征为局部二值模式算子所提取的特征。特征的提取是通过算子模板进行卷积提取，以3*3的卷积算子为例，四周的八个像素值分别与中心像素进行比较，大于中心像素值的为1，小于中心像素值的为0。两个分支经过两种特征的约束后进一步进行卷积特征变换操作，最后通过点乘的方式对两个分支特征进行拼接，拼接后经过一层卷积变换生成最终的清晰化图像。The texture features used are those extracted by the local binary pattern (LBP) operator. Features are extracted by convolving an operator template over the image; taking a 3×3 operator as an example, each of the eight surrounding pixels is compared with the center pixel, coded as 1 if greater than the center value and 0 if smaller. After being constrained by the two kinds of features, the two branches further perform convolutional feature transformations; finally the two branch features are fused by element-wise multiplication, and the fused result passes through one convolutional layer to generate the final enhanced image.
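上述颜色矩与局部二值模式特征的提取可用如下Python代码示意（基于numpy的简化示例，邻居遍历顺序与比较规则按上文描述，为示例性实现）：The colour-moment and LBP feature extraction described above can be sketched in Python as follows (a simplified numpy sketch; the neighbour ordering is an illustrative choice, and the comparison rule follows the text):

```python
import numpy as np

def color_moments(img):
    """按通道计算颜色一阶矩(均值)、二阶矩(标准差)、三阶矩。
    Per-channel colour moments: mean, standard deviation, third moment."""
    feats = []
    for c in range(img.shape[2]):
        p = img[:, :, c].astype(np.float64).ravel()
        mu = p.mean()
        sigma = np.sqrt(((p - mu) ** 2).mean())
        third = ((p - mu) ** 3).mean()
        s = np.sign(third) * abs(third) ** (1.0 / 3.0)
        feats.extend([mu, sigma, s])
    return np.array(feats)

def lbp_3x3(gray):
    """3×3局部二值模式：八邻域与中心比较，大于中心记1，否则记0。
    Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # 邻居偏移按顺时针顺序编码为8位二进制数 clockwise neighbour offsets -> 8 bits
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = gray[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((neigh > center).astype(np.uint8) << bit)
    return out
```

对RGB三通道图像，颜色矩特征共9维；LBP输出图可进一步统计直方图作为纹理描述。For a three-channel image the colour-moment vector is 9-dimensional; a histogram of the LBP map can then serve as the texture descriptor.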

所设计水下图像增强算法，能够适应于水下环境中海参与背景颜色相差较大的情况，该情况下，海参的边缘特征明显，能够进行较好的纹理特征提取。The designed underwater image enhancement algorithm is suited to underwater scenes in which the colors of the sea cucumber and the background differ considerably; in such cases the edge features of the sea cucumber are distinct, allowing good texture feature extraction.

步骤2,根据实时获取的下方视觉图像,利用基于MobileNet-Transformer-GCN的海参识别跟踪算法,识别并持续跟踪待捕捞海参,同时实时定位待捕捞海参的像素坐标。Step 2: According to the real-time acquired visual image below, the sea cucumber identification and tracking algorithm based on MobileNet-Transformer-GCN is used to identify and continuously track the sea cucumbers to be fished, and simultaneously locate the pixel coordinates of the sea cucumbers to be fished in real time.

该算法以双目和单目摄像头所采集的水下图像为输入，实时检测输出海参的位置信息。所设计海参检测与海参目标跟踪算法为单步算法，能够同时检测和跟踪，能够更好的适用于机器人平台，算法流程为检测、跟踪、定位，最终所获取的位置信息用于机械爪的抓取。The algorithm takes the underwater images collected by the binocular and monocular cameras as input, and detects and outputs the position information of sea cucumbers in real time. The designed sea cucumber detection and target tracking algorithm is a single-step algorithm that detects and tracks simultaneously, which makes it well suited to the robot platform. The algorithm flow is detection, tracking and localization, and the obtained position information is finally used for grasping by the gripper.

所涉及的检测识别算法融合了MobileNet、Transformer与GCN的特点，头部部分先通过卷积层进行特征提取，所用的卷积为分解卷积，分解卷积分为两步：纵深卷积和点卷积，通过分解卷积可以减少模型的计算量，轻量化模型使其适用于水下作业机器人；在纵深卷积部分，通过膨胀卷积替换普通卷积，膨胀卷积能进一步使得模型轻量化，既能获得较大的感受野，提高检测精度，也能降低计算量；通过分解卷积可以实现图像特征从低维到高维的映射，输入到Transformer模型中，进一步进行特征提取。在Transformer中没有采用全连接结构，而是采用了GCN结构，通过在进行特征提取的同时训练图边的权重，学习不同特征之间的关系，增加归纳偏置；在模型末端部分分为两个分支，一个分支为检测结果，另一个分支为身份特征的提取，身份特征可用于目标的跟踪。在跟踪模型上，针对水下海参之间相似度大、身份特征提取难的问题，融合人工特征进行身份特征的提取；针对相似度大、模型偏差容易增大的问题，采用模拟退火和回溯的方法进行修正，并通过相关滤波提升跟踪速度，使其适用于海参捕捞机器人。The detection and recognition algorithm combines the characteristics of MobileNet, Transformer and GCN. The head first performs feature extraction through convolutional layers using factorized convolutions, which split the computation into two steps, depthwise convolution and pointwise convolution; factorizing the convolutions reduces the computational cost, and the lightweight model makes it suitable for underwater operation robots. In the depthwise part, ordinary convolutions are replaced by dilated convolutions, which further lighten the model, yielding a larger receptive field and higher detection accuracy while reducing computation. The factorized convolutions map the image features from low to high dimensions, and the result is input into the Transformer model for further feature extraction. Instead of a fully connected structure, the Transformer uses a GCN structure: while features are being extracted, the weights of the graph edges are trained to learn the relationships between different features and increase the inductive bias. The end of the model splits into two branches: one branch outputs the detection results, and the other extracts identity features, which are used for target tracking. In the tracking model, to address the high similarity between underwater sea cucumbers and the resulting difficulty of identity feature extraction, hand-crafted features are fused into the identity features; to address the tendency of the model bias to grow under high similarity, simulated annealing and backtracking are used for correction, and correlation filtering raises the tracking speed, making the algorithm suitable for a sea cucumber fishing robot.

检测识别和跟踪算法用于获取海参在机器人视觉范围内的坐标位置即像素坐标。所设计的检测识别算法融合了轻量化模块与Transformer-GCN模块。The detection, recognition and tracking algorithms are used to obtain the coordinate position of the sea cucumber within the robot's field of view, namely its pixel coordinates. The designed detection and recognition algorithm integrates lightweight modules with Transformer-GCN modules.

所设计算法由轻量化模块和Transformer-GCN模块组成。The designed algorithm consists of a lightweight module and a Transformer-GCN module.

参照图9，每个轻量化模块由三个卷积核组成：n×n×C_i维度的输入先经过1×1×C_e维度的卷积核卷积计算，变为n×n×C_e维度的特征图；然后再经过3×3×C_e维度卷积核卷积计算，变为(n/2)×(n/2)×C_e维度的特征图；最后经过1×1×C_p维度的卷积核卷积计算，输出(n/2)×(n/2)×C_p维度的特征图作为这一模块的输出特征图。Referring to Figure 9, each lightweight module consists of three convolution kernels: an input of dimension n×n×C_i first passes through a 1×1×C_e convolution kernel, becoming an n×n×C_e feature map; it then passes through a 3×3×C_e convolution kernel, becoming an (n/2)×(n/2)×C_e feature map; finally a 1×1×C_p convolution kernel produces the (n/2)×(n/2)×C_p feature map that is the output of this module.
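轻量化模块的维度变换及分解卷积的计算量优势可用如下代码示意（纯Python示例；3×3卷积按步长2、同尺寸填充处理，此为示例性假设）：The dimension flow of the lightweight module and the computational saving of factorized convolution can be sketched as follows (a plain-Python sketch; the stride-2, same-padding 3×3 convolution is an illustrative assumption):

```python
def lightweight_block_shapes(n, c_i, c_e, c_p):
    """轻量化模块特征图维度变化：1×1卷积 → 3×3卷积(假设步长2) → 1×1卷积。
    Trace the feature-map shapes through the lightweight block."""
    shapes = [(n, n, c_i)]                 # 输入 input
    shapes.append((n, n, c_e))             # 1×1×C_e 点卷积 pointwise conv
    shapes.append((n // 2, n // 2, c_e))   # 3×3×C_e 卷积，假设步长为2 assumed stride 2
    shapes.append((n // 2, n // 2, c_p))   # 1×1×C_p 点卷积投影 pointwise projection
    return shapes

def factorized_conv_cost(k, c_in, c_out, n):
    """比较标准卷积与分解卷积(纵深卷积+点卷积)在n×n特征图上的乘加次数。
    Multiply-accumulate counts of standard vs factorized (depthwise+pointwise) conv."""
    standard = n * n * k * k * c_in * c_out
    factorized = n * n * k * k * c_in + n * n * c_in * c_out
    return standard, factorized
```

例如 k=3、C_in=64、C_out=128 时，分解卷积的计算量约为标准卷积的 (1/C_out + 1/k²)，即不足标准卷积的六分之一。For k=3, C_in=64, C_out=128, the factorized cost is roughly (1/C_out + 1/k²) of the standard cost.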

参照图10，Transformer-GCN模块中，n×n×C_i维度的输入先经过3×3×C_i维度的卷积核卷积计算，变为(n/2)×(n/2)×C_i维度的特征图，此特征图标记为T_1；然后再经过1×1×C_t维度卷积核卷积计算，变为(n/2)×(n/2)×C_t维度的特征图；再将特征图拉伸展开变为(n²/4)×C_t维度的特征图，此时特征集合可以表示为F=[f_0, f_1, …, f_Ct]，每个子特征f_i的维度为n²/4；针对特征集合构建强连通图，连通图边权重W通过余弦相似度计算，即w_ij=(f_i·f_j)/(‖f_i‖‖f_j‖)，其中i∈(1,…,C_t)，j∈(1,…,C_t)，此时特征图可以转化为图卷积特征，即G=(F,W)；利用卷积核通过图卷积计算，即G_e=G∗θ_1，可以得到该层图卷积网络特征，其中θ_1为卷积核，此时图卷积特征为F'=[f_0', f_1', …, f_Ct']，每个子特征f_i'维度为n_g；再利用图卷积核θ_2计算下一层图卷积特征，此时图卷积特征为F''=[f_0'', f_1'', …, f_Ct'']，每个子特征f_i''维度为n²/4；将图卷积特征拼接为(n/2)×(n/2)×C_t维度的特征图；再经过1×1×C_i维度的卷积核卷积计算，输出(n/2)×(n/2)×C_i维度的特征图，此特征图标记为T_2；最后再将T_1与T_2进行拼接作为模块的最后输出，拼接后特征图维度为(n/2)×(n/2)×2C_i。Referring to Figure 10, in the Transformer-GCN module an input of dimension n×n×C_i first passes through a 3×3×C_i convolution kernel, becoming an (n/2)×(n/2)×C_i feature map marked T_1; it then passes through a 1×1×C_t convolution kernel, becoming an (n/2)×(n/2)×C_t feature map, which is flattened into an (n²/4)×C_t feature map. The feature set can then be written F = [f_0, f_1, …, f_Ct], where each sub-feature f_i has dimension n²/4. A strongly connected graph is built over the feature set, with edge weights W computed by cosine similarity, i.e. w_ij = (f_i·f_j)/(‖f_i‖‖f_j‖) with i ∈ (1, …, C_t) and j ∈ (1, …, C_t); the feature map can thus be converted into graph-convolution features, i.e. G = (F, W). Graph convolution with a kernel, G_e = G∗θ_1, yields this layer's graph-convolution features, where θ_1 is the convolution kernel; the features are then F' = [f_0', f_1', …, f_Ct'], each sub-feature f_i' of dimension n_g. A second graph-convolution kernel θ_2 computes the next layer's features F'' = [f_0'', f_1'', …, f_Ct''], each sub-feature f_i'' of dimension n²/4. The graph-convolution features are reassembled into an (n/2)×(n/2)×C_t feature map, which then passes through a 1×1×C_i convolution kernel to output an (n/2)×(n/2)×C_i feature map marked T_2. Finally, T_1 and T_2 are concatenated as the module's output; the concatenated feature map has dimension (n/2)×(n/2)×2C_i.
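上述基于余弦相似度的图构建与两层图卷积可用如下numpy代码示意（简化示例；图卷积取"边权重聚合后与卷积核相乘"的常见形式，此为示例性假设）：The cosine-similarity graph construction and the two graph-convolution layers above can be sketched in numpy as follows (a simplified sketch; the graph-convolution form "aggregate by edge weights, then project by the kernel" is an illustrative assumption):

```python
import numpy as np

def cosine_adjacency(F):
    """F: (C_t, m)，每行是一个子特征f_i；返回余弦相似度边权重矩阵W。
    Each row of F is a sub-feature f_i; returns the cosine-similarity edge weights W."""
    norms = np.linalg.norm(F, axis=1, keepdims=True) + 1e-12
    Fn = F / norms
    return Fn @ Fn.T                       # w_ij = f_i·f_j / (||f_i|| ||f_j||)

def graph_conv(F, W, theta):
    """单层图卷积：按边权重聚合邻居特征，再与卷积核theta相乘。
    One graph-convolution layer: aggregate neighbours by W, project with theta."""
    return W @ F @ theta

# 示例：C_t=4个子特征，维度m=8；theta1降到n_g=6，theta2升回8
rng = np.random.default_rng(0)
F = rng.standard_normal((4, 8))
W = cosine_adjacency(F)
theta1 = rng.standard_normal((8, 6))
theta2 = rng.standard_normal((6, 8))
F2 = graph_conv(graph_conv(F, W, theta1), W, theta2)
```

两层图卷积先将子特征降维至 n_g 再恢复原维度，使输出可重新折叠回 (n/2)×(n/2)×C_t 的特征图。The two layers compress each sub-feature to n_g dimensions and restore the original dimension, so the output can be folded back into the (n/2)×(n/2)×C_t map.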

所设计算法输出端包括两个分支,可同时完成目标检测和跟踪功能。The output end of the designed algorithm includes two branches, which can complete the function of target detection and tracking at the same time.

在采集图片缩放到640×640×3后，先后输入到轻量化模块1、轻量化模块2、轻量化模块3、Transformer-GCN模块1、轻量化模块4、Transformer-GCN模块2、轻量化模块5中，此时所输出深度特征维度为4×4×2048，然后通过全局池化将特征变为1×1×2048。After the captured image is resized to 640×640×3, it is fed successively through lightweight module 1, lightweight module 2, lightweight module 3, Transformer-GCN module 1, lightweight module 4, Transformer-GCN module 2 and lightweight module 5, yielding a deep feature of dimension 4×4×2048, which is then reduced to 1×1×2048 by global pooling.

针对目标检测分支，如图11所示，通过全连接层将全局池化后的特征映射为1×6维度的预测结果，其中包括海参的目标位置（中心点坐标x,y和目标框宽和高w,h），目标的类别（是否为海参），置信度（检测为海参的概率）六个预测结果。For the detection branch, as shown in Figure 11, the globally pooled feature is mapped through a fully connected layer into a 1×6 prediction comprising six outputs: the target position of the sea cucumber (the center coordinates x, y and the box width and height w, h), the target category (whether it is a sea cucumber), and the confidence (the probability that the detection is a sea cucumber).

针对跟踪分支，首先将全局池化后的特征通过全连接层进行计算，将特征映射为1×1×256维度的特征，这一特征称为深度身份特征；同时对于原始图片提取梯度直方图特征作为人工身份特征，所提取梯度直方图特征维度是不确定的，通过主成分分析将特征维度映射为1×1×256。融合映射后的人工身份特征和深度身份特征输入到相关滤波中进行预测，相关滤波计算公式如下所示：For the tracking branch, the globally pooled feature is first mapped through a fully connected layer into a 1×1×256-dimensional feature, called the deep identity feature. At the same time, gradient histogram features are extracted from the original image as hand-crafted identity features; since the dimension of the extracted gradient histogram features is not fixed, principal component analysis maps them to 1×1×256. The fused hand-crafted and deep identity features are input into the correlation filter for prediction, computed as follows:

y = x ∗ w

其中x为融合特征，w为滤波模板，通过卷积计算每个目标的响应值y，响应值最大的目标为当前的追踪目标。where x is the fused feature and w is the filter template; the response value y of each target is computed by convolution, and the target with the largest response is taken as the current tracking target.
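相关滤波响应的计算可用如下代码示意（numpy简化示例；滤波模板w的训练过程未包含，候选特征与模板均为示例性假设）：The correlation-filter response can be sketched as follows (a simplified numpy sketch; training of the template w is omitted, and the candidate features and template are illustrative assumptions):

```python
import numpy as np

def correlation_response(candidates, w):
    """candidates: (k, d)，k个候选目标的融合身份特征；w: (d,) 滤波模板。
    返回响应值最大的候选索引及全部响应值。
    Score each candidate's fused identity feature against the filter template
    and return the index of the best match plus all responses."""
    y = candidates @ w                 # y = x * w：相关(点积)响应 correlation response
    best = int(np.argmax(y))           # 响应值最大的目标为当前追踪目标
    return best, y

# 示例：三个候选目标，第二个与模板最相关
w = np.array([1.0, 0.0, 1.0])
cands = np.array([[0.2, 0.9, 0.1],
                  [0.8, 0.1, 0.7],
                  [0.1, 0.2, 0.3]])
idx, y = correlation_response(cands, w)
```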

在一个示例中,参照图8,步骤2具体包括:In one example, referring to FIG. 8 , step 2 specifically includes:

2-1,缩放实时获取的下方视觉图像,获得缩放后的下方视觉图像;2-1, zoom the lower visual image obtained in real time to obtain the zoomed lower visual image;

2-2,将缩放后的下方视觉图像依次输入第一轻量化模块、第二轻量化模块、第三轻量化模块、第一Transformer-GCN模块、第四轻量化模块、第二Transformer-GCN模块、第五轻量化模块和全局池化模块，输出特征图；2-2. Feed the scaled lower visual image successively through the first lightweight module, the second lightweight module, the third lightweight module, the first Transformer-GCN module, the fourth lightweight module, the second Transformer-GCN module, the fifth lightweight module and the global pooling module, and output the feature map;

2-3,将特征图进行映射,获得待捕捞海参的预测结果;预测结果包括目标位置、目标类别和置信度;2-3, map the feature map to obtain the prediction result of the sea cucumber to be caught; the prediction result includes the target location, target category and confidence;

2-4,将特征图输入全连接模块,获得深度身份特征;2-4, input the feature map into the fully connected module to obtain deep identity features;

2-5,从缩放后的下方视觉图像中提取梯度直方图特征,作为人工身份特征;2-5, extract the gradient histogram feature from the zoomed lower visual image as an artificial identity feature;

2-6,采用主成分分析将人工身份特征映射至与深度身份特征相同的维度;2-6, using principal component analysis to map artificial identity features to the same dimension as deep identity features;

2-7,融合映射后的人工身份特征和深度身份特征,获得融合后的身份特征;2-7, fuse the mapped artificial identity features and deep identity features to obtain the fused identity features;

2-8,将融合后的身份特征输入滤波模块,计算缩放后的下方视觉图像中每个检测目标的响应值;2-8, input the fused identity feature into the filtering module, and calculate the response value of each detection target in the zoomed lower visual image;

2-9,选取缩放后的下方视觉图像中响应值最大的检测目标确定为当前追踪的待捕捞海参。2-9, select the detection target with the largest response value in the zoomed lower visual image and determine it as the currently tracked sea cucumber to be caught.
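步骤2-4至2-9的身份特征融合与响应计算流程可用如下代码示意（numpy简化示例；PCA实现方式与"相加融合"均为示例性假设）：Steps 2-4 to 2-9 (identity-feature fusion and response computation) can be sketched as follows (a simplified numpy sketch; the PCA implementation and the additive fusion are illustrative assumptions):

```python
import numpy as np

def pca_project(X, dim):
    """2-6：主成分分析，将特征X (k, d)映射到dim维。
    Project features to `dim` dimensions via PCA (SVD of centred data)."""
    Xc = X - X.mean(axis=0)
    # SVD右奇异向量即主成分方向 principal axes are the right singular vectors
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dim].T

def fuse_and_track(deep_feats, hand_feats, w):
    """2-6/2-7/2-8/2-9：PCA对齐维度 → 融合 → 相关滤波响应 → 取最大响应目标。
    Align dims with PCA, fuse, score with the filter template, pick the argmax."""
    dim = deep_feats.shape[1]
    hand = pca_project(hand_feats, dim)    # 2-6 映射到与深度特征相同维度
    fused = deep_feats + hand              # 2-7 融合（此处取相加，示例性假设）
    y = fused @ w                          # 2-8 每个检测目标的响应值
    return int(np.argmax(y))               # 2-9 响应值最大的检测目标

rng = np.random.default_rng(1)
deep = rng.standard_normal((5, 3))         # 5个检测目标的深度身份特征
hand = rng.standard_normal((5, 7))         # 维度不定的人工身份特征(梯度直方图)
idx = fuse_and_track(deep, hand, np.ones(3))
```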

步骤3,通过双目立体匹配将像素坐标转换为世界坐标,获得待捕捞海参的三维位置。In step 3, the pixel coordinates are converted into world coordinates through binocular stereo matching to obtain the three-dimensional position of the sea cucumber to be fished.

在抓取过程中,获取跟踪目标的像素坐标之后,需要通过双目立体匹配实现像素坐标到世界坐标的转换,根据检测和跟踪结果获得海参的真实世界坐标。In the grabbing process, after obtaining the pixel coordinates of the tracking target, it is necessary to convert the pixel coordinates to the world coordinates through binocular stereo matching, and obtain the real world coordinates of the sea cucumber according to the detection and tracking results.

在一个示例中,具体包括:In one example, this includes:

3-1,采用张正友标定法对相机进行标定,获得相机的内参矩阵[f,1/d x ,1/d y ,c x ,c y ]和畸变系数[k 1,k 2,k 3,p 1,p 2],同时还包括外参旋转矩阵R和平移向量t3-1. Use Zhang Zhengyou's calibration method to calibrate the camera to obtain the camera's internal parameter matrix [ f , 1/ d x , 1/ d y , c x , c y ] and distortion coefficients [ k 1 , k 2 , k 3 , p 1 , p 2 ], and also includes the external parameter rotation matrix R and translation vector t .

3-2,利用内参矩阵将下方视觉图像像素转换到相机坐标系。为了使双目摄像头左右视图的像素位置对齐，需要根据摄像机标定得到的摄像机内参数、旋转矩阵和平移向量，对图像进行畸变校正和成像原点对准。该方法首先通过摄像头内参矩阵将图像像素转换到相机坐标系，在相机坐标系中通过畸变系数进行换算后再将图像像素转换到像素坐标系。3-2. Use the intrinsic matrix to convert the pixels of the lower visual image to the camera coordinate system. To align the pixel positions of the left and right views of the binocular camera, distortion correction and imaging-origin alignment are performed on the images using the camera intrinsics, rotation matrix and translation vector obtained from calibration. The method first converts the image pixels to the camera coordinate system through the intrinsic matrix, applies the distortion coefficients in the camera coordinate system, and then converts the pixels back to the pixel coordinate system.

3-3. In the camera coordinate system, the pixels of the lower visual image are converted through the distortion coefficients, and the converted pixels are transformed back into the pixel coordinate system.

3-4. After distortion removal and pixel alignment, the disparity map is computed to obtain the three-dimensional coordinates of the target and realize the conversion from pixel coordinates to world coordinates. The distance D from a point in space to the camera plane is computed with the formula D = fT/d, where f is the focal length obtained by calibration, T is the distance between the two cameras of the binocular rig, and d is the disparity value.

3-5. From the distance of a point to the camera plane, the pixel coordinates are converted into world coordinates by

X = x1·Z/f, Y = y1·Z/f, Z = fT/(x1 − x2)

to obtain the three-dimensional position of the sea cucumber to be caught, where (X, Y, Z) are the three-dimensional position coordinates of the sea cucumber and (x1, y1) and (x2, y2) are its pixel coordinates in the images captured by the two cameras in binocular vision. Traversing every pixel of the disparity map with this formula yields the target's depth map and the world coordinates of each pixel.
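The conversion in steps 3-4 and 3-5 can be sketched in a few lines; a rectified camera pair, pixel coordinates measured from the principal point, and the disparity convention d = x1 − x2 are assumptions of this sketch:

```python
def triangulate(x1, y1, x2, f, T):
    """Recover world coordinates (X, Y, Z) of a matched pixel pair.

    x1, y1: pixel coordinates in the left image (relative to the principal point)
    x2:     x pixel coordinate of the same point in the right image
    f:      focal length in pixels; T: baseline between the two cameras
    """
    d = x1 - x2        # disparity
    Z = f * T / d      # distance to the camera plane: D = fT/d
    X = x1 * Z / f     # back-project through the pinhole model
    Y = y1 * Z / f
    return X, Y, Z
```

Calling this for every pixel of the disparity map produces the depth map described above.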

The designed sea cucumber detection model and target tracking model accomplish the detection and tracking tasks simultaneously, and both are tailored to the sea cucumber fishing task.

After the three-dimensional position of the target is obtained, motion planning is carried out with the underwater robot's motion planning algorithm.

Step 4: the three-dimensional position of the sea cucumber to be caught is set as the target point, and a rapidly-exploring random tree (RRT) algorithm plans the path from the working underwater robot to the target point.

The path from the robot to the target is planned by the rapidly-exploring random tree algorithm as follows:

4-1. Initialize the root node q init.

4-2. In each outer loop, a random point q rand is sampled. The tree is then traversed to find the node q near closest to q rand, and a step variable eps is defined. In each subsequent inner loop, the tree is extended from q near toward q rand by the step eps, reaching a new node q new, and the path is checked for collisions: if no collision occurs, q new is added to the tree and the next inner loop begins; if a collision occurs, no node is added. The inner loop continues until q rand is reached, after which the outer loop starts again.

4-3. Repeat step 4-2 until the end point is reached, obtaining the path from the current position of the sea cucumber fishing robot to the target point.
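Steps 4-1 to 4-3 can be sketched as a basic two-dimensional rapidly-exploring random tree; the goal bias, workspace bounds and iteration cap below are illustrative choices not specified in the text:

```python
import math
import random

def rrt(q_init, q_goal, is_free, eps=0.5, max_iter=5000, goal_tol=0.5,
        bounds=(0.0, 10.0), goal_bias=0.1):
    """Grow a rapidly-exploring random tree from q_init toward q_goal."""
    nodes = [q_init]
    parent = {0: None}
    for _ in range(max_iter):
        # 4-2: sample a random point (occasionally the goal itself)
        if random.random() < goal_bias:
            q_rand = q_goal
        else:
            q_rand = (random.uniform(*bounds), random.uniform(*bounds))
        # find the tree node closest to the random point
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], q_rand))
        q_near = nodes[i_near]
        dist = math.dist(q_near, q_rand)
        if dist == 0.0:
            continue
        # extend by the step eps from q_near toward q_rand
        t = min(eps, dist) / dist
        q_new = (q_near[0] + t * (q_rand[0] - q_near[0]),
                 q_near[1] + t * (q_rand[1] - q_near[1]))
        if not is_free(q_new):   # collision: do not add the node
            continue
        nodes.append(q_new)
        parent[len(nodes) - 1] = i_near
        # 4-3: stop once the end point is reached, then backtrack the path
        if math.dist(q_new, q_goal) < goal_tol:
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None
```

The is_free callback stands in for the collision check; in the robot it would query the obstacle map built from the forward camera.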

Coordinated motion planning and control of the sea cucumber fishing robot is accomplished with a task-priority algorithm; for the fishing task, the robot's motion is divided into four priority levels. The first level is motion guided by the forward-looking camera: an online motion planning algorithm plans the robot's position and path to the target point in real time, i.e. motion in three-dimensional position. The second level is motion guided by the binocular camera: once the robot is within a certain distance of the target, the sea cucumber is localized binocularly while online path planning adjusts the robot's position until the target lies within the manipulator's workspace. The third level is adjustment of the manipulator's posture to maximize its workspace. The fourth level is the manipulator motion that catches the sea cucumber.

Step 5: according to the forward visual image, the working underwater robot is controlled to move along the path based on an Actor-Critic reinforcement learning model, and hovers after reaching the target point; the Actor-Critic model clusters and compresses the sample space with a Gaussian mixture model.

Based on the designed Actor-Critic reinforcement learning model, sea cucumber tracking control and hover-and-grab are carried out along the planned path. Motion control of the fishing robot is three-dimensional, so the three attitude angles, the three-dimensional coordinates, the angular velocities and the linear velocities must be controlled simultaneously.

As shown in Figure 12, the specific flow of the designed model is as follows:

5-1. After the propeller thrusters and the manipulator move according to a random initial state function, the current motion posture is obtained and passed into the actor network, and the current reward value is computed from the desired plan obtained in step 3 and the reward function. The designed reward function is:

R = r0 − ρ1‖(Δφ, Δθ, Δψ)‖2 − ρ2‖(Δx, Δy, Δz)‖2

其中Δxyz为三维坐标状态量,Δφ,Δθψ分别为偏航角状态量、俯仰角状态量、滚转角状态量,r 0为奖励常量,ρ 1||Δφ,Δθψ||为相对方位误差的二范数,ρ 2||Δxyz||2为相对位置误差的二范数。Among them, Δ x , Δ y , Δ z are the three-dimensional coordinate state quantities, Δφ, Δ θ , Δ ψ are the yaw angle state quantities, pitch angle state quantities, and roll angle state quantities, respectively, r 0 is the reward constant, ρ 1 || Δφ, Δ θ , Δ ψ || are the two-norm of the relative orientation error, and ρ 2 ||Δ x , Δ y , Δ z || 2 are the two-norm of the relative position error.

5-2. The reward value and the state function are fused into a training sample and added to the training set space, while samples are fused through a Gaussian mixture model to compress the sample space. The state function is expressed as:

s = [g, Δx, Δy, Δz, Δφ, Δθ, Δψ, u, v, w, p, q, r]

where g is a state constant, u, v, w are the linear velocity state quantities, and p, q, r are the angular velocity state quantities. The current training sample space can therefore be expressed as

[R, s] = [[R1, s1], …, [Rn, sn]]

The sample space grows over time and increases the computational load, so the algorithm designed in the present invention clusters and compresses it with a Gaussian mixture model, which for this sample space can be expressed as

P(x) = ∑k=1..K πk·N(x | μk, σk)

There are K classes of samples in this sample space, and πk is the class distribution probability, i.e. the weight of the k-th Gaussian component, so it satisfies ∑k=1..K πk = 1. μk and σk are the class mean and class variance, respectively. πk, μk and σk need to be solved for, and the solved μk serve as the reconstructed samples.

The log-likelihood function of this Gaussian mixture model can be expressed as:

ln L = ∑i=1..n ln( ∑k=1..K πk·N(xi | μk, σk) )

The Gaussian mixture model can be solved with the expectation-maximization (EM) algorithm; the computation proceeds as follows:

Step 1: initialize μk and σk, then solve the conditional probability (responsibility) of each sample, namely:

γ(i, k) = πk·N(xi | μk, σk) / ∑j=1..K πj·N(xi | μj, σj)

Step 2: with the posterior probabilities obtained, μk, σk and πk are further optimized by maximizing the log-likelihood function:

μk = ∑i γ(i, k)·xi / ∑i γ(i, k)

σk = ∑i γ(i, k)·(xi − μk)² / ∑i γ(i, k)

πk = (1/n)·∑i γ(i, k)

These two steps are repeated until convergence; the solved μk constitute the reconstructed sample space, with Q as the intermediate variable of the EM iteration.
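The two EM steps above, applied to scalar samples, can be sketched as follows; one-dimensional features and quantile-based initialization are simplifying assumptions of this sketch:

```python
import math

def gmm_em(xs, K=2, iters=100):
    """Fit a K-component 1-D Gaussian mixture by expectation-maximization."""
    n = len(xs)
    xs_sorted = sorted(xs)
    # initialize class means at evenly spaced quantiles of the data
    mu = [xs_sorted[(2 * k + 1) * n // (2 * K)] for k in range(K)]
    var = [1.0] * K          # class variances sigma_k
    pi = [1.0 / K] * K       # component weights, summing to 1
    for _ in range(iters):
        # E-step: posterior responsibility gamma(i, k) of component k for x_i
        gamma = []
        for x in xs:
            w = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(K)]
            s = sum(w)
            gamma.append([wk / s for wk in w])
        # M-step: re-estimate means, variances and weights
        for k in range(K):
            Nk = sum(g[k] for g in gamma)
            mu[k] = sum(g[k] * x for g, x in zip(gamma, xs)) / Nk
            var[k] = max(sum(g[k] * (x - mu[k]) ** 2
                             for g, x in zip(gamma, xs)) / Nk, 1e-6)
            pi[k] = Nk / n
    return pi, mu, var
```

The converged means μk then serve as the compressed (reconstructed) samples, as described in the text.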

5-3. The samples are input into the target state-action network within the critic network, gradients are computed, and the parameters of the online state-action network are updated. In addition, after gradients have been accumulated over multiple steps, the target state-action network parameters are updated from the online state-action network.

5-4. The online policy network is optimized and its parameters updated with the gradient obtained from the critic network. In addition, after gradients have been accumulated over multiple steps, the target policy network parameters are updated from the online policy network.
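The delayed target-network updates in 5-3 and 5-4 can be illustrated with a parameter-blending helper; the Polyak coefficient tau is an assumption, since the text only states that target parameters are refreshed from the online networks after several gradient accumulations (tau = 1 reproduces a hard periodic copy):

```python
def update_target(target_params, online_params, tau=0.01):
    """Blend online network parameters into the target network (soft update).

    target_params, online_params: flat lists of parameter values.
    tau = 1.0 reproduces the hard periodic copy; small tau gives the slowly
    moving targets that stabilize Actor-Critic training."""
    return [(1.0 - tau) * t + tau * o
            for t, o in zip(target_params, online_params)]
```

In practice this is applied to every weight tensor of the target critic and target actor after each batch of accumulated gradients.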

5-5. The actor network generates a new state function for controlling the manipulator and the propeller thrusters.

5-6. Steps 5-1 to 5-5 are repeated until convergence, i.e. the motion of the manipulator and the propeller thrusters conforms to the planned trajectory and posture, and the sea cucumber is grasped.

Step 6: based on inverse kinematics, the sea cucumber to be caught is grasped by the gripper 81 of the working underwater robot's grabbing mechanism.

The invention realizes precise control and autonomous grasping for a sea cucumber fishing robot in complex underwater environments.

The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the identical or similar parts of the embodiments may be cross-referenced.

Specific examples are used herein to explain the principles and implementations of the present invention; the descriptions of the above embodiments are only intended to help understand the method of the invention and its core idea. Meanwhile, those of ordinary skill in the art may, following the idea of the invention, make changes to the specific implementation and scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. An operation type underwater robot for sea cucumber fishing, characterized by comprising: a body frame, a propeller, a second control board, a first control board, a camera shooting mechanism and a grabbing mechanism;
the propeller, the second control board, the first control board and the camera shooting mechanism are all arranged on the body frame, and the fixed end of the grabbing mechanism is connected with the body frame;
the camera shooting mechanism is connected with a signal input end of the second control board, and a signal output end of the second control board is respectively connected with the first control board and the grabbing mechanism; the camera shooting mechanism is used for shooting a front visual image and a lower visual image of the operation type underwater robot and transmitting the front visual image and the lower visual image to the second control board; the second control board is used for determining three-dimensional position information of the sea cucumber according to the lower visual image and carrying out online path planning according to the three-dimensional position information of the sea cucumber and the front visual image;
the first control board is connected with the propeller and used for adjusting the propeller according to the on-line path plan so that the operation type underwater robot moves according to the on-line path plan;
the second control board is also used for controlling the grabbing mechanism to catch the sea cucumbers when the operation type underwater robot reaches a target point of the online path planning.
2. The working underwater robot for sea cucumber fishing according to claim 1, wherein the camera mechanism comprises: a forward looking monocular camera and a look down binocular camera;
the front-view monocular camera and the overlooking binocular camera are both connected with the signal input end of the second control board; the front monocular camera is used for shooting a front visual image of the operation type underwater robot and transmitting the front visual image to the second control board; and the overlooking binocular camera is used for shooting a lower visual image of the operation type underwater robot and transmitting the lower visual image to the second control board.
3. The working type underwater robot for sea cucumber fishing according to claim 1, wherein the propeller comprises: a first propeller thruster, a second propeller thruster, a third propeller thruster, a fourth propeller thruster, a fifth propeller thruster, a sixth propeller thruster, a seventh propeller thruster, and an eighth propeller thruster;
the first propeller thruster, the second propeller thruster, the third propeller thruster and the fourth propeller thruster are arranged in the horizontal direction according to the vector thruster, the first propeller thruster and the second propeller thruster are arranged in front of the horizontal direction of the body frame, and the third propeller thruster and the fourth propeller thruster are arranged behind the horizontal direction of the body frame;
the first propeller thruster, the second propeller thruster, the third propeller thruster, the fourth propeller thruster, the fifth propeller thruster, the sixth propeller thruster, the seventh propeller thruster and the eighth propeller thruster are all connected with the first control board;
the first control board is used for controlling the first propeller thruster, the second propeller thruster, the third propeller thruster and the fourth propeller thruster to provide thrust in the horizontal front-back direction for the operation type underwater robot, and controlling the fifth propeller thruster, the sixth propeller thruster, the seventh propeller thruster and the eighth propeller thruster to provide thrust in the vertical direction for the operation type underwater robot, so that the operation type underwater robot moves according to the on-line path planning.
4. The working type underwater robot for sea cucumber fishing according to claim 1, wherein the catching mechanism comprises: the device comprises a base, a shoulder joint, an elbow joint, a forearm joint, a clamping jaw, a first steering engine, a second steering engine, a third steering engine and a fourth steering engine;
the first steering engine, the second steering engine, the third steering engine and the fourth steering engine are all connected with the second control board;
the shoulder joint is fixed to the body frame through a base; the base is provided with a first steering engine, and the first steering engine is used for driving the shoulder joint to rotate around the direction of the vertical axis under the control of the second control board;
the elbow joint is connected in series with the shoulder joint, a second steering engine is arranged between the shoulder joint and the elbow joint and used for driving the elbow joint to rotate around the direction vertical to the axis of the shoulder joint under the control of a second control board;
the forearm joint is connected in series with the elbow joint, a third steering engine is arranged between the forearm joint and the elbow joint and used for driving the forearm joint to rotate around the direction vertical to the elbow joint center line under the control of the second control board;
the clamping jaw and the fourth steering engine are arranged above the small arm joint; the clamping jaw comprises two oppositely arranged net structures; the fourth steering engine is used for driving the two oppositely arranged net structures to rotate in opposite directions under the control of the second control board, so as to realize the opening and closing control of the clamping jaw.
5. The working underwater robot for sea cucumber fishing according to claim 1, further comprising: the loading net cage and a fifth steering engine;
the loading net cage is arranged on the body frame, and the loading net cage is arranged opposite to the grabbing mechanism;
the loading net box is connected with the second control board through a fifth steering engine and is used for driving the loading net box to be automatically opened under the control of the second control board when the grabbing mechanism finishes grabbing and moves to a preset position, and loading the sea cucumbers caught by the grabbing mechanism.
6. A control method for a sea cucumber fishing operation type underwater robot is characterized by comprising the following steps:
acquiring a front visual image and a lower visual image of the operation type underwater robot shot by a camera shooting mechanism in real time;
according to the lower visual image obtained in real time, a sea cucumber identification and tracking algorithm based on MobileNet-Transformer-GCN is utilized to identify and continuously track the sea cucumber to be caught, and meanwhile, the pixel coordinates of the sea cucumber to be caught are positioned in real time;
converting the pixel coordinates into world coordinates through binocular stereo matching to obtain the three-dimensional position of the sea cucumber to be caught;
setting the three-dimensional position of the sea cucumber to be caught as a target point, and planning a path from the operation type underwater robot to the target point by adopting a rapid search tree algorithm;
controlling the operation type underwater robot to move according to the path based on an Actor-Critic reinforcement learning model according to the front visual image, and suspending the operation type underwater robot after the operation type underwater robot runs to a target point; the Actor-Critic reinforcement learning model performs clustering compression on a sample space through a Gaussian mixture model;
according to inverse kinematics, the sea cucumbers to be caught are caught by a grabbing mechanism of the operation type underwater robot.
7. The control method of the working underwater robot for sea cucumber catching according to claim 6, wherein the real-time acquisition of the front visual image and the lower visual image of the working underwater robot photographed by the photographing means further comprises:
carrying out color correction and defogging enhancement on the front visual image and the lower visual image by adopting an underwater image enhancement algorithm based on a twin convolutional neural network; the twin convolutional neural network comprises a first branch convolutional neural network and a second branch convolutional neural network; the first branch convolutional neural network is constrained by the color characteristics of the label image and is responsible for color correction of the image; the second branch convolutional neural network is constrained by texture features and is responsible for the image definition; and the first branch convolutional neural network and the second branch convolutional neural network are subjected to convolutional characteristic transformation operation after characteristic constraint, finally the two branch characteristics are spliced in a dot-product mode, and a final clear image is generated through one layer of convolutional transformation after splicing.
8. The control method of the working underwater robot for sea cucumber fishing according to claim 6, wherein the method for identifying and continuously tracking the sea cucumber to be fished and simultaneously locating the pixel coordinates of the sea cucumber to be fished in real time by using a sea cucumber identification and tracking algorithm based on MobileNet-transform-GCN according to the real-time acquired lower visual image specifically comprises the following steps:
zooming the real-time acquired lower visual image to obtain a zoomed lower visual image;
inputting the zoomed lower visual image into a first lightweight module, a second lightweight module, a third lightweight module, a first transform-GCN module, a fourth lightweight module, a second transform-GCN module, a fifth lightweight module and a global pooling module in sequence, and outputting a characteristic diagram;
mapping the characteristic graph to obtain a prediction result of the sea cucumber to be caught; the prediction result comprises a target position, a target category and a confidence coefficient;
inputting the feature map into a full-connection module to obtain a depth identity feature;
extracting gradient histogram features from the zoomed lower visual image as artificial identity features;
mapping the artificial identity features to dimensions the same as the depth identity features by principal component analysis;
fusing the mapped artificial identity features and the depth identity features to obtain fused identity features;
inputting the fused identity characteristics into a filtering module, and calculating the response value of each detection target in the zoomed lower visual image;
and selecting the detection target with the maximum response value in the zoomed lower visual image to determine the current tracked sea cucumber to be caught.
9. The control method for the working underwater robot for sea cucumber fishing according to claim 6, wherein the pixel coordinates are converted into world coordinates through binocular stereo matching to obtain the three-dimensional position of the sea cucumber to be fished, and specifically comprises the following steps:
calibrating the camera by adopting the Zhang Zhengyou calibration method to obtain an internal reference matrix and distortion coefficients of the camera;
converting the lower visual image pixels to a camera coordinate system using the internal reference matrix;
converting the lower visual image pixels in a camera coordinate system through a distortion coefficient, and converting the converted lower visual image pixels into a pixel coordinate system;
using the formula D = fT/d, calculating the distance D from a point in space to the camera plane; wherein f is the focal length obtained by calibration, T is the distance between the two cameras of the binocular camera, and d is the disparity value;
according to the distance from a point in space to the camera plane, converting the pixel coordinates into world coordinates by the formula X = x1·Z/f, Y = y1·Z/f, Z = fT/(x1 − x2), to obtain the three-dimensional position of the sea cucumber to be caught; wherein (X, Y, Z) are the three-dimensional position coordinates of the sea cucumber to be caught, and (x1, y1) and (x2, y2) are respectively the pixel coordinates of the sea cucumber to be caught in the images captured by the two cameras in binocular vision.
10. The control method for the sea cucumber catching-oriented working underwater robot as claimed in claim 6, wherein the Actor-Critic-based reinforcement learning model controls the working underwater robot to move according to the path and to suspend after the working underwater robot runs to a target point, specifically comprises:
acquiring the current motion attitude of the operation type underwater robot, and transmitting the current motion attitude into an online strategy network in a mobile network; the current motion attitude comprises a yaw angle, a pitch angle, a roll angle, a three-dimensional coordinate, an angular velocity and a linear velocity;
calculating a current reward value according to the path and the reward function of the Actor-Critic reinforcement learning model; the reward function is R = r0 − ρ1‖(Δφ, Δθ, Δψ)‖2 − ρ2‖(Δx, Δy, Δz)‖2; wherein Δx, Δy, Δz are three-dimensional coordinate state quantities; Δφ, Δθ, Δψ are respectively a yaw angle state quantity, a pitch angle state quantity and a roll angle state quantity; r0 is a reward constant; ρ1‖(Δφ, Δθ, Δψ)‖2 is the two-norm of the relative orientation error; ρ2‖(Δx, Δy, Δz)‖2 is the two-norm of the relative position error; and ρ1 and ρ2 are a first coefficient and a second coefficient, respectively;
fusing the current reward value and the state function into a training sample and adding the training sample into a sample space; the state function is s = [g, Δx, Δy, Δz, Δφ, Δθ, Δψ, u, v, w, p, q, r]; wherein g is a state constant, u, v, w are linear velocity state quantities, p, q, r are angular velocity state quantities, and s is the state function;
fusing the samples in the sample space through a Gaussian mixture model, and compressing the sample space; the Gaussian mixture model is P(x) = ∑k=1..K πk·N(x | μk, σk); wherein P(x) is the compressed sample space, K is the number of sample classes in the sample space before compression, πk is the class distribution probability, μk and σk are respectively the class mean and class variance, (Ri, si) is the i-th sample in the sample space before compression, and N is the Gaussian distribution density function of the sub-model;
inputting the compressed samples in the sample space into a target-behavior network in an evaluation network, performing gradient calculation, and updating parameters of the online state-behavior network;
optimizing and updating the parameters of the online policy network in the action network through the gradient computed by the evaluation network, and updating the parameters of the target policy network through the online policy network gradient after accumulating gradients multiple times;
generating a new state function through an online strategy network in the action network, wherein the new state function is used for controlling the mechanical arm and the propeller;
and circulating the steps until convergence, so that the motion result of the operation type underwater robot conforms to the path.
CN202210183134.6A 2022-02-28 2022-02-28 An operational underwater robot for sea cucumber fishing and its control method Active CN114248893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210183134.6A CN114248893B (en) 2022-02-28 2022-02-28 An operational underwater robot for sea cucumber fishing and its control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210183134.6A CN114248893B (en) 2022-02-28 2022-02-28 An operational underwater robot for sea cucumber fishing and its control method

Publications (2)

Publication Number Publication Date
CN114248893A true CN114248893A (en) 2022-03-29
CN114248893B CN114248893B (en) 2022-05-13

Family

ID=80796982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210183134.6A Active CN114248893B (en) 2022-02-28 2022-02-28 An operational underwater robot for sea cucumber fishing and its control method

Country Status (1)

Country Link
CN (1) CN114248893B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114700947A (en) * 2022-04-20 2022-07-05 中国科学技术大学 Robot based on visual-touch fusion and grabbing system and method thereof
CN114739389A (en) * 2022-05-17 2022-07-12 中国船舶科学研究中心 Deep sea operation type cable controlled submersible underwater navigation device and use method thereof
CN114973391A (en) * 2022-06-30 2022-08-30 北京万里红科技有限公司 Eyeball tracking method, device and equipment applied to metacarpal space
CN115009478A (en) * 2022-06-15 2022-09-06 江苏科技大学 An intelligent underwater fishing robot and its fishing method
CN116062130A (en) * 2022-12-20 2023-05-05 昆明理工大学 A Shallow Water Underwater Robot Based on Full Degrees of Freedom
CN116243720A (en) * 2023-04-25 2023-06-09 广东工业大学 AUV underwater object searching method and system based on 5G networking
CN116255908A (en) * 2023-05-11 2023-06-13 山东建筑大学 Sea creature positioning measurement device and method for underwater robot
CN116405644A (en) * 2023-05-31 2023-07-07 湖南开放大学(湖南网络工程职业学院、湖南省干部教育培训网络学院) Remote control system and method for computer network equipment
CN117029838A (en) * 2023-10-09 2023-11-10 广东电网有限责任公司阳江供电局 Navigation control method and system for underwater robot
CN118124760A (en) * 2024-03-19 2024-06-04 哈尔滨工程大学 An autonomous underwater operation robot and underwater operation method based on binocular vision
CN118651412A (en) * 2024-07-01 2024-09-17 瀚科智翔无人科技(南京)有限公司 An autonomous load conveying device based on a flying platform
CN118752508A (en) * 2024-09-09 2024-10-11 齐鲁空天信息研究院 A motion planning and control method for an unmanned underwater sea cucumber suction robot

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130096549A (en) * 2012-02-22 2013-08-30 한국과학기술원 Jellyfish-polyp removal robot using remotely operated vehicle
KR20140013209A (en) * 2012-07-20 2014-02-05 삼성중공업 주식회사 Subsea equipment, underwater operation system and underwater operation method
CN106780356A (en) * 2016-11-15 2017-05-31 天津大学 Image defogging method based on convolutional neural networks and prior information
CN107146248A (en) * 2017-04-27 2017-09-08 杭州电子科技大学 A Stereo Matching Method Based on Two-Stream Convolutional Neural Network
CN107977671A (en) * 2017-10-27 2018-05-01 浙江工业大学 A tongue image classification method based on multi-task convolutional neural networks
CN111062990A (en) * 2019-12-13 2020-04-24 哈尔滨工程大学 A Binocular Vision Localization Method for Underwater Robot Target Grasping
CN112809703A (en) * 2021-02-10 2021-05-18 中国人民解放军国防科技大学 Bottom sowing sea cucumber catching robot based on ESRGAN enhanced super-resolution and CNN image recognition
CN113500610A (en) * 2021-07-19 2021-10-15 浙江大学台州研究院 Underwater harvesting robot
CN113561178A (en) * 2021-07-30 2021-10-29 燕山大学 An underwater robot intelligent grasping device and method thereof
WO2022021804A1 (en) * 2020-07-28 2022-02-03 谈斯聪 Underwater robot device and underwater regulation and control management optimization system and method

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114700947A (en) * 2022-04-20 2022-07-05 中国科学技术大学 Robot based on visual-touch fusion and grabbing system and method thereof
CN114739389A (en) * 2022-05-17 2022-07-12 中国船舶科学研究中心 Deep-sea operation-class tethered submersible underwater vehicle and method of use thereof
CN115009478A (en) * 2022-06-15 2022-09-06 江苏科技大学 An intelligent underwater fishing robot and its fishing method
CN115009478B (en) * 2022-06-15 2023-10-27 江苏科技大学 Intelligent underwater fishing robot and fishing method thereof
CN114973391A (en) * 2022-06-30 2022-08-30 北京万里红科技有限公司 Eye tracking method, apparatus, and device applied to metaverse space
CN116062130A (en) * 2022-12-20 2023-05-05 昆明理工大学 A Shallow Water Underwater Robot Based on Full Degrees of Freedom
CN116243720A (en) * 2023-04-25 2023-06-09 广东工业大学 AUV underwater object searching method and system based on 5G networking
CN116243720B (en) * 2023-04-25 2023-08-22 广东工业大学 A 5G network-based AUV underwater object-finding method and system
CN116255908B (en) * 2023-05-11 2023-08-15 山东建筑大学 Underwater robot-oriented marine organism positioning measurement device and method
CN116255908A (en) * 2023-05-11 2023-06-13 山东建筑大学 Sea creature positioning measurement device and method for underwater robot
CN116405644A (en) * 2023-05-31 2023-07-07 湖南开放大学(湖南网络工程职业学院、湖南省干部教育培训网络学院) Remote control system and method for computer network equipment
CN116405644B (en) * 2023-05-31 2024-01-12 湖南开放大学(湖南网络工程职业学院、湖南省干部教育培训网络学院) Remote control system and method for computer network equipment
CN117029838A (en) * 2023-10-09 2023-11-10 广东电网有限责任公司阳江供电局 Navigation control method and system for underwater robot
CN117029838B (en) * 2023-10-09 2024-01-23 广东电网有限责任公司阳江供电局 Navigation control method and system for underwater robot
CN118124760A (en) * 2024-03-19 2024-06-04 哈尔滨工程大学 An autonomous underwater operation robot and underwater operation method based on binocular vision
CN118651412A (en) * 2024-07-01 2024-09-17 瀚科智翔无人科技(南京)有限公司 An autonomous load conveying device based on a flying platform
CN118752508A (en) * 2024-09-09 2024-10-11 齐鲁空天信息研究院 A motion planning and control method for an unmanned underwater sea cucumber suction robot

Also Published As

Publication number Publication date
CN114248893B (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN114248893B (en) An operational underwater robot for sea cucumber fishing and its control method
Rohan et al. Convolutional neural network-based real-time object detection and tracking for parrot AR drone 2
CN110543859B (en) Sea cucumber autonomous identification and grabbing method based on deep learning and binocular positioning
CN108491880B (en) Object classification and pose estimation method based on neural network
US12131529B2 (en) Virtual teach and repeat mobile manipulation system
Wang et al. Real-time underwater onboard vision sensing system for robotic gripping
CN105225269A (en) Motion-based object modeling system
CN112347900B (en) An automatic grasping method of monocular vision underwater target based on distance estimation
CN113681552B (en) Five-dimensional grabbing method for robot hybrid object based on cascade neural network
Ji-Yong et al. Design and vision based autonomous capture of sea organism with absorptive type remotely operated vehicle
CN115578460B (en) Robot grabbing method and system based on multi-mode feature extraction and dense prediction
CN113752255A (en) A real-time grasping method of robotic arm with six degrees of freedom based on deep reinforcement learning
CN117001675A (en) Double-arm cooperative control non-cooperative target obstacle avoidance trajectory planning method
CN103226693B (en) Identification and spatial positioning apparatus and method for fishing targets based on full-view stereo vision
Yan et al. Autonomous vision-based navigation and stability augmentation control of a biomimetic robotic hammerhead shark
Venna et al. Application of image-based visual servoing on autonomous drones
CN116659516B (en) Deep stereo attention visual navigation method and device based on binocular parallax mechanism
CN114578817B (en) Control method of intelligent transport vehicle based on multi-sensor detection and multi-data fusion
Zhang et al. Underwater autonomous grasping robot based on multi-stage cascade DetNet
CN112124537A (en) Intelligent control method for underwater robot for autonomous absorption and fishing of benthos
CN117464689A (en) A robot brainwave adaptive grabbing method and system based on autonomous exploration algorithm
CN117808876A (en) Unmanned vehicle mechanical arm autonomous grabbing system and method suitable for small target
Tang et al. Online camera-gimbal-odometry system extrinsic calibration for fixed-wing UAV swarms
Hai et al. Object Detection and Multiple Objective Optimization Manipulation Planning for Underwater Autonomous Capture in Oceanic Natural Aquatic Farm
Suzui et al. Toward 6 dof object pose estimation with minimum dataset

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant