CN114248893A - Operation type underwater robot for sea cucumber fishing and control method thereof - Google Patents
- Publication number: CN114248893A (application CN202210183134.6A)
- Authority
- CN
- China
- Legal status: Granted
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B63—SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
- B63C—LAUNCHING, HAULING-OUT, OR DRY-DOCKING OF VESSELS; LIFE-SAVING IN WATER; EQUIPMENT FOR DWELLING OR WORKING UNDER WATER; MEANS FOR SALVAGING OR SEARCHING FOR UNDERWATER OBJECTS
- B63C11/00—Equipment for dwelling or working underwater; Means for searching for underwater objects
- B63C11/52—Tools specially adapted for working underwater, not otherwise provided for
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K80/00—Harvesting oysters, mussels, sponges or the like
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/04—Control of altitude or depth
- G05D1/06—Rate of change of altitude or depth
- G05D1/0692—Rate of change of altitude or depth specially adapted for under-water vehicles
Abstract
The invention relates to an operation type underwater robot for sea cucumber fishing and a control method thereof, belonging to the field of underwater robots. During grabbing, a sea cucumber recognition and tracking algorithm based on MobileNet-Transformer-GCN recognizes and continuously tracks the sea cucumber to be caught while locating it in real time. A path from the operation type underwater robot to the target point is planned with a rapidly-exploring random tree (RRT) algorithm, and the robot is controlled to move along this path by an Actor-Critic reinforcement learning model, achieving accurate control and autonomous grabbing of the sea cucumber fishing robot in a complex underwater environment.
Description
Technical Field
The invention relates to the field of underwater robots, in particular to an operation type underwater robot for sea cucumber fishing and a control method thereof.
Background
With people's growing demand for high-quality marine products such as sea cucumbers, scallops and sea urchins, manual fishing suffers from low speed, low yield, high cost and a large danger coefficient. Technology that replaces divers with intelligent automatic fishing equipment has therefore attracted wide attention. Automatic intelligent fishing equipment can greatly improve the efficiency of sea cucumber fishing, reduce the dependence on manpower, maximize the sea cucumber yield, lower the risk of fishing, and avoid unnecessary losses. The operation type underwater robot is one kind of intelligent sea cucumber fishing equipment. With an underwater robot carrying a mechanical arm, combined with target recognition, detection, tracking and corresponding stability control algorithms, autonomous sea cucumber grabbing can be achieved and the difficulty of sea cucumber fishing greatly reduced. However, the design and control of such an underwater operation robot are very complex. On the one hand, the underwater environment is complicated: the robot is affected not only by its own buoyancy and gravity but also by ocean currents and tides, which disturb its balance and stability and increase the control difficulty. On the other hand, underwater imaging conditions are poor: brightness is low, red light attenuates quickly, and color cast is severe, so the recognizability and visible range of the underwater scene are limited, which increases the difficulty of recognizing and fishing sea cucumbers.
Disclosure of Invention
The invention aims to provide an operation type underwater robot for sea cucumber fishing and a control method thereof, so as to realize accurate control and autonomous grabbing of the sea cucumber fishing robot in a complex underwater environment.
In order to achieve the purpose, the invention provides the following scheme:
an operation type underwater robot for sea cucumber fishing, the operation type underwater robot comprising: a body frame, a propeller, a second control board, a first control board, a camera mechanism and a grabbing mechanism;
the propeller, the second control board, the first control board and the camera mechanism are all arranged on the body frame, and the fixed end of the grabbing mechanism is connected with the body frame;
the camera mechanism is connected with a signal input end of the second control board, and a signal output end of the second control board is respectively connected with the first control board and the grabbing mechanism; the camera mechanism is used for shooting a front visual image and a lower visual image of the operation type underwater robot and transmitting them to the second control board; the second control board is used for determining three-dimensional position information of the sea cucumber according to the lower visual image and performing online path planning according to the three-dimensional position information of the sea cucumber and the front visual image;
the first control board is connected with the propeller and used for adjusting the propeller according to the on-line path plan so that the operation type underwater robot moves according to the on-line path plan;
the second control board is also used for controlling the grabbing mechanism to catch the sea cucumbers when the operation type underwater robot reaches a target point of the online path planning.
Optionally, the camera mechanism includes: a forward looking monocular camera and a look down binocular camera;
the front-view monocular camera and the overlooking binocular camera are both connected with the signal input end of the second control board; the front monocular camera is used for shooting a front visual image of the operation type underwater robot and transmitting the front visual image to the second control board; and the overlooking binocular camera is used for shooting a lower visual image of the operation type underwater robot and transmitting the lower visual image to the second control board.
Optionally, the propeller comprises: a first propeller thruster, a second propeller thruster, a third propeller thruster, a fourth propeller thruster, a fifth propeller thruster, a sixth propeller thruster, a seventh propeller thruster, and an eighth propeller thruster;
the first propeller thruster, the second propeller thruster, the third propeller thruster and the fourth propeller thruster are arranged in a vectored configuration in the horizontal plane; the first propeller thruster and the second propeller thruster are arranged at the front of the body frame in the horizontal direction, and the third propeller thruster and the fourth propeller thruster are arranged at the rear of the body frame in the horizontal direction;
the first propeller thruster, the second propeller thruster, the third propeller thruster, the fourth propeller thruster, the fifth propeller thruster, the sixth propeller thruster, the seventh propeller thruster and the eighth propeller thruster are all connected with the first control board;
the first control board is used for controlling the first propeller thruster, the second propeller thruster, the third propeller thruster and the fourth propeller thruster to provide thrust in the horizontal front-back direction for the operation type underwater robot, and controlling the fifth propeller thruster, the sixth propeller thruster, the seventh propeller thruster and the eighth propeller thruster to provide thrust in the vertical direction for the operation type underwater robot, so that the operation type underwater robot moves according to the on-line path planning.
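As a rough illustration of how the first control board could map a desired horizontal-plane force command onto the four vectored horizontal thrusters, the following Python sketch uses a least-squares allocation matrix. The thruster mounting angles and positions are invented for the example and are not taken from the patent.

```python
import numpy as np

# Hypothetical geometry: four horizontal thrusters mounted at 45-degree
# vectored angles at the corners of the frame (illustrative values only).
ang = np.deg2rad([45, -45, 135, -135])      # thrust directions in the body frame
px = np.array([0.3, 0.3, -0.3, -0.3])       # thruster x positions (m)
py = np.array([-0.2, 0.2, -0.2, 0.2])       # thruster y positions (m)

# Allocation matrix B: rows are surge force X, sway force Y, yaw moment N.
B = np.vstack([
    np.cos(ang),
    np.sin(ang),
    px * np.sin(ang) - py * np.cos(ang),
])

def allocate(tau):
    """Least-squares thrust allocation: tau = [X, Y, N] -> 4 thruster forces."""
    return np.linalg.pinv(B) @ tau

tau = np.array([10.0, 0.0, 1.5])   # desired surge force and yaw moment
u = allocate(tau)                  # per-thruster forces; B @ u reproduces tau
```

The four vertical thrusters would get an analogous allocation matrix for heave, roll and pitch.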
Optionally, the grabbing mechanism comprises: a base, a shoulder joint, an elbow joint, a forearm joint, a clamping jaw, a first steering engine, a second steering engine, a third steering engine and a fourth steering engine;
the first steering engine, the second steering engine, the third steering engine and the fourth steering engine are all connected with the second control board;
the shoulder joint is fixed to the body frame through the base; the base is provided with the first steering engine, which is used for driving the shoulder joint to rotate about the vertical axis under the control of the second control board;
the elbow joint is connected in series with the shoulder joint; the second steering engine is arranged between the shoulder joint and the elbow joint and is used for driving the elbow joint to rotate about an axis perpendicular to the shoulder joint axis under the control of the second control board;
the forearm joint is connected in series with the elbow joint; the third steering engine is arranged between the forearm joint and the elbow joint and is used for driving the forearm joint to rotate about an axis perpendicular to the elbow joint center line under the control of the second control board;
the clamping jaw and the fourth steering engine are arranged above the forearm joint; the clamping jaw comprises two oppositely arranged mesh structures; the fourth steering engine is used for driving the two oppositely arranged mesh structures to rotate in opposite directions under the control of the second control board, so as to open and close the clamping jaw.
Optionally, the operation type underwater robot further includes: the loading net cage and a fifth steering engine;
the loading net cage is arranged on the body frame, and the loading net cage is arranged opposite to the grabbing mechanism;
the loading net box is connected with the second control board through a fifth steering engine and is used for driving the loading net box to be automatically opened under the control of the second control board when the grabbing mechanism finishes grabbing and moves to a preset position, and loading the sea cucumbers caught by the grabbing mechanism.
A control method of an operation type underwater robot for sea cucumber fishing comprises the following steps:
acquiring a front visual image and a lower visual image of the operation type underwater robot shot by a camera shooting mechanism in real time;
according to the lower visual image obtained in real time, a sea cucumber identification and tracking algorithm based on MobileNet-Transformer-GCN is utilized to identify and continuously track the sea cucumber to be caught, and meanwhile, the pixel coordinates of the sea cucumber to be caught are positioned in real time;
converting the pixel coordinates into world coordinates through binocular stereo matching to obtain the three-dimensional position of the sea cucumber to be caught;
setting the three-dimensional position of the sea cucumber to be caught as a target point, and planning a path from the operation type underwater robot to the target point by adopting a rapidly-exploring random tree (RRT) algorithm;
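The patent names a rapid search tree planner but gives no implementation details. As a hedged illustration, a minimal obstacle-free 2-D RRT might look like the following; the workspace bounds, step size and goal bias are invented for the sketch.

```python
import math
import random

def rrt(start, goal, step=0.5, goal_tol=0.5, iters=5000, seed=0):
    """Minimal 2-D rapidly-exploring random tree (obstacle-free sketch)."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Sample a random point, biasing 10% of samples toward the goal.
        q = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        # Find the nearest existing tree node.
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], q))
        # Steer one bounded step from the nearest node toward the sample.
        d = math.dist(nodes[i], q)
        t = min(1.0, step / d) if d > 0 else 0.0
        new = (nodes[i][0] + t * (q[0] - nodes[i][0]),
               nodes[i][1] + t * (q[1] - nodes[i][1]))
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (9.0, 9.0))
```

A real planner for the robot would sample in 3-D and reject edges that collide with obstacles detected in the front visual image.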
controlling the operation type underwater robot to move along the path based on an Actor-Critic reinforcement learning model according to the front visual image, the operation type underwater robot hovering after reaching the target point; the Actor-Critic reinforcement learning model performs clustering compression on the sample space through a Gaussian mixture model;
according to inverse kinematics, the sea cucumbers to be caught are caught by a grabbing mechanism of the operation type underwater robot.
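The inverse kinematics invoked above is not spelled out in the patent. For a shoulder-elbow pair moving in a plane, a closed-form sketch could look like this; the link lengths are illustrative assumptions.

```python
import math

def two_link_ik(x, y, l1=0.25, l2=0.20):
    """Closed-form IK for a planar 2-link arm (elbow-down solution).
    Link lengths l1, l2 are illustrative values, not from the patent."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None                      # target out of reach
    q2 = math.acos(c2)                   # elbow angle
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def fk(q1, q2, l1=0.25, l2=0.20):
    """Forward kinematics, used here to verify the IK solution."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

q = two_link_ik(0.3, 0.1)   # joint angles that place the jaw at (0.3, 0.1)
```

The full four-servo arm adds a base rotation about the vertical axis, which decouples from the planar solution.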
Optionally, the acquiring, in real time, a front visual image and a lower visual image of the working underwater robot shot by the camera mechanism further includes:
performing color correction and defogging enhancement on the front visual image and the lower visual image by adopting an underwater image enhancement algorithm based on a twin convolutional neural network; the twin convolutional neural network comprises a first branch convolutional neural network and a second branch convolutional neural network; the first branch convolutional neural network is constrained by the color characteristics of the label image and is responsible for color correction of the image; the second branch convolutional neural network is constrained by texture features and is responsible for image sharpness; after the feature constraints, the first and second branch convolutional neural networks each perform a convolutional feature transformation; finally the two branch features are fused by element-wise multiplication, and the final clear image is generated from the fused features through one layer of convolutional transformation.
Optionally, the identifying and tracking sea cucumbers to be caught and continuously tracking the sea cucumbers to be caught by using a sea cucumber identifying and tracking algorithm based on MobileNet-Transformer-GCN according to the lower visual image obtained in real time, and simultaneously locating pixel coordinates of the sea cucumbers to be caught in real time specifically include:
scaling the lower visual image acquired in real time to obtain a scaled lower visual image;
inputting the scaled lower visual image into a first lightweight module, a second lightweight module, a third lightweight module, a first Transformer-GCN module, a fourth lightweight module, a second Transformer-GCN module, a fifth lightweight module and a global pooling module in sequence, and outputting a feature map;
mapping the feature map to obtain a prediction result for the sea cucumber to be caught; the prediction result comprises a target position, a target category and a confidence coefficient;
inputting the feature map into a fully-connected module to obtain a depth identity feature;
extracting histogram-of-gradient features from the scaled lower visual image as artificial identity features;
mapping the artificial identity features to the same dimension as the depth identity features by principal component analysis;
fusing the mapped artificial identity features and the depth identity features to obtain fused identity features;
inputting the fused identity features into a filtering module, and calculating the response value of each detection target in the scaled lower visual image;
and selecting the detection target with the maximum response value in the scaled lower visual image as the currently tracked sea cucumber to be caught.
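The identity-feature fusion and response-selection steps can be sketched numerically. In this hedged example the feature dimensions, the synthetic detection features, and the choice of detection 2 as the tracked template are all invented; PCA is computed via SVD of the centered data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two feature streams: 20 detections with 64-D
# histogram-of-gradient ("artificial") features and 8-D network ("depth")
# identity features. All shapes and values here are illustrative.
hog = rng.normal(size=(20, 64))
depth = rng.normal(size=(20, 8))

# Principal component analysis maps the artificial features down to the
# depth-feature dimension (PCA via SVD of the centered data).
centered = hog - hog.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
hog_8 = centered @ vt[:8].T          # mapped artificial identity features

# Fuse the two identity features, then score every detection against the
# template of the currently tracked target (here: detection 2).
fused = np.concatenate([hog_8, depth], axis=1)
template = fused[2]
resp = fused @ template / (np.linalg.norm(fused, axis=1)
                           * np.linalg.norm(template))
best = int(np.argmax(resp))          # detection with the maximum response value
```

In the patent's pipeline the scoring is done by a correlation-filter module rather than the plain cosine similarity used here.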
Optionally, the pixel coordinates are converted into world coordinates through binocular stereo matching, so as to obtain a three-dimensional position of the sea cucumber to be caught, and the method specifically includes:
calibrating the cameras by adopting the Zhang Zhengyou calibration method to obtain the intrinsic parameter matrix and distortion coefficients of each camera;
converting the lower visual image pixels to the camera coordinate system using the intrinsic parameter matrix;
correcting the lower visual image pixels in the camera coordinate system with the distortion coefficients, and converting the corrected pixels back into the pixel coordinate system;
using the formula D = fT/d to calculate the distance D from a point in space to the camera plane, where f is the focal length obtained by calibration, T is the baseline distance between the two cameras of the binocular pair, and d is the disparity value;
according to the distance D from the point in space to the camera plane, converting the pixel coordinates into world coordinates through the standard triangulation relations X = Dx1/f, Y = Dy1/f, Z = D, with disparity d = x1 − x2, to obtain the three-dimensional position of the sea cucumber to be caught; where (X, Y, Z) are the three-dimensional position coordinates of the sea cucumber to be caught, and (x1, y1) and (x2, y2) are respectively the pixel coordinates of the sea cucumber in the images captured by the two cameras of the binocular vision system.
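Numerically, the depth and world-coordinate computation can be sketched as follows. The focal length and baseline values are made up for the example, and the principal point is taken as the image origin.

```python
# Hypothetical stereo parameters: focal length f in pixels, baseline T in
# meters (illustrative values, not from the patent).
f, T = 800.0, 0.12

def triangulate(x1, y1, x2):
    """Rectified-stereo triangulation with the principal point at the origin:
    Z = f*T/d with disparity d = x1 - x2, then X = Z*x1/f, Y = Z*y1/f."""
    d = x1 - x2
    Z = f * T / d
    return Z * x1 / f, Z * y1 / f, Z

# A target seen at pixel (40, -10) in the left image and (24, -10) in the
# right image: disparity 16 px.
X, Y, Z = triangulate(40.0, -10.0, 24.0)
```

A production implementation would first rectify the image pair so that matching points share the same row, making the disparity purely horizontal as assumed here.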
Optionally, the controlling of the operation type underwater robot to move along the path based on the Actor-Critic reinforcement learning model, with the robot hovering after reaching the target point, specifically includes:
acquiring the current motion attitude of the operation type underwater robot and feeding it into the online policy network in the action network; the current motion attitude comprises a yaw angle, a pitch angle, a roll angle, three-dimensional coordinates, angular velocities and linear velocities;
calculating the current reward value according to the reward function of the Actor-Critic reinforcement learning model and the path; the reward function is R = r0 − ρ1||(Δφ, Δθ, Δψ)||2 − ρ2||(Δx, Δy, Δz)||2, where Δx, Δy, Δz are the three-dimensional coordinate state quantities; Δφ, Δθ, Δψ are respectively the yaw, pitch and roll angle state quantities; r0 is a constant reward; ρ1||(Δφ, Δθ, Δψ)||2 is the two-norm of the relative orientation error and ρ2||(Δx, Δy, Δz)||2 is the two-norm of the relative position error; and ρ1 and ρ2 are respectively the first and second coefficients;
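The reward function above can be evaluated directly; in this sketch the constants r0, ρ1 and ρ2 are placeholder values rather than the patent's.

```python
import math

def reward(d_pos, d_att, r0=1.0, rho1=0.5, rho2=0.5):
    """R = r0 - rho1*||(dphi, dtheta, dpsi)||_2 - rho2*||(dx, dy, dz)||_2.
    The constants r0, rho1, rho2 are illustrative, not from the patent."""
    return r0 - rho1 * math.hypot(*d_att) - rho2 * math.hypot(*d_pos)

# Zero tracking error earns the full constant reward; any position or
# orientation deviation is penalized in proportion to its two-norm.
r_perfect = reward((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
r_off = reward((3.0, 4.0, 0.0), (0.0, 0.0, 0.0))
```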
fusing the current reward value and the state function into a training sample and adding it to the sample space; the state function is s = [g, Δx, Δy, Δz, Δφ, Δθ, Δψ, u, v, w, p, q, r], where g is a state constant; u, v, w are the linear velocity state quantities; p, q, r are the angular velocity state quantities; and s is the state function;
fusing the samples in the sample space through a Gaussian mixture model and compressing the sample space; the Gaussian mixture model is P(x) = Σ_{k=1}^{K} π_k N(x | μ_k, σ_k), where P(x) is the compressed sample-space density, K is the number of sample classes in the sample space before compression, π_k is the class distribution probability, μ_k and σ_k are respectively the class mean and class variance, (R_i, s_i) is the i-th sample in the sample space before compression, and N is the Gaussian distribution density function of a sub-model;
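The mixture density P(x) can be evaluated directly from its sub-models; in this sketch the two components, their weights and their parameters are invented for illustration.

```python
import math

def gaussian(x, mu, sigma):
    """Density N(x | mu, sigma) of one Gaussian sub-model."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gmm_density(x, pis, mus, sigmas):
    """Mixture density P(x) = sum_k pi_k * N(x | mu_k, sigma_k)."""
    return sum(p * gaussian(x, m, s) for p, m, s in zip(pis, mus, sigmas))

# Two illustrative components with equal weight; fitting the weights and
# parameters to the stored samples would normally be done with EM.
p = gmm_density(0.0, [0.5, 0.5], [0.0, 2.0], [1.0, 1.0])
```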
inputting the compressed samples in the sample space into the target state-action network in the critic (evaluation) network, performing gradient calculation, and updating the parameters of the online state-action network;
optimizing and updating the parameters of the online policy network in the action network through the gradient calculated by the critic network, and updating the parameters of the target policy network with the gradient of the online policy network after the gradient has been accumulated multiple times;
generating a new state function through the online policy network in the action network; the new state function is used for controlling the mechanical arm and the propellers;
and repeating the above steps until convergence, so that the motion of the operation type underwater robot follows the path.
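The actor/critic update pattern described above can be shown on a toy problem. The following sketch uses a one-state, two-action environment with a tabular softmax actor and a scalar value critic; the environment, learning rates and network-free tabular form are all simplifications invented for illustration.

```python
import math
import random

def softmax(h):
    m = max(h)
    e = [math.exp(x - m) for x in h]
    s = sum(e)
    return [x / s for x in e]

# Toy environment: action 1 always earns reward 1, action 0 earns 0.
rng = random.Random(0)
h = [0.0, 0.0]            # actor: action preferences (policy = softmax(h))
V = 0.0                   # critic: state-value estimate
alpha_a, alpha_c = 0.1, 0.05

for _ in range(2000):
    pi = softmax(h)
    a = 0 if rng.random() < pi[0] else 1   # sample an action from the policy
    r = float(a == 1)                      # reward from the environment
    delta = r - V                          # TD error: the critic's critique
    V += alpha_c * delta                   # critic update
    for b in range(2):                     # actor: policy-gradient update
        grad = (1.0 if b == a else 0.0) - pi[b]
        h[b] += alpha_a * delta * grad

pi = softmax(h)   # the learned policy strongly prefers the rewarded action
```

The patent's version replaces the tabular actor and critic with online/target policy and state-action networks and draws its training samples from the GMM-compressed sample space.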
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention discloses a sea cucumber catching-oriented operation type underwater robot and a control method thereof.A camera shooting mechanism shoots a front visual image and a lower visual image of the operation type underwater robot, a second control board determines three-dimensional position information of a sea cucumber according to the lower visual image and carries out online path planning according to the three-dimensional position information and the front visual image of the sea cucumber, a first control board adjusts a propeller according to the online path planning to enable the operation type underwater robot to move according to the online path planning, and a second control board controls a grabbing mechanism to catch the sea cucumber when the operation type underwater robot reaches a target point of the online path planning. In the grabbing process, a sea cucumber recognition and tracking algorithm based on MobileNet-transform-GCN is used for recognizing and continuously tracking sea cucumbers to be caught, simultaneously, pixel coordinates of the sea cucumbers to be caught are located in real time, a path from an operation type underwater robot to a target point is planned by adopting a rapid search tree algorithm, the operation type underwater robot is controlled to move according to the path based on an Actor-Critic reinforcement learning model, the sea cucumbers to be caught are grabbed through a grabbing mechanism of the operation type underwater robot according to inverse kinematics, and accurate control and autonomous grabbing of the sea cucumber catching robot in a complex underwater environment are achieved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a front view of the working type underwater robot for sea cucumber fishing according to the present invention;
FIG. 2 is a rear view of the working type underwater robot for sea cucumber fishing according to the present invention;
FIG. 3 is a bottom view of the working underwater robot for sea cucumber fishing according to the present invention;
FIG. 4 is a side view of the working type underwater robot for sea cucumber fishing according to the present invention;
FIG. 5 is a schematic structural view of a grasping mechanism according to the present invention;
FIG. 6 is a frame diagram of the control method of the working underwater robot for sea cucumber fishing according to the present invention;
FIG. 7 is a schematic diagram of an underwater image enhancement algorithm provided by the present invention;
FIG. 8 is a block diagram of the sea cucumber identification and tracking algorithm provided by the present invention;
FIG. 9 is a schematic structural view of a lightweight module according to the present invention;
FIG. 10 is a schematic structural diagram of a Transformer-GCN according to the present invention;
FIG. 11 is a schematic diagram of a sea cucumber identification and tracking algorithm provided by the present invention;
FIG. 12 is a schematic diagram of an Actor-Critic reinforcement learning model provided by the present invention.
Description of the reference numerals: 1-first control cabin, 2-power supply cabin, 31-first propeller thruster, 32-second propeller thruster, 33-third propeller thruster, 34-fourth propeller thruster, 41-fifth propeller thruster, 42-sixth propeller thruster, 43-seventh propeller thruster, 44-eighth propeller thruster, 5-overlooking binocular camera, 6-second control cabin, 7-loading net cage, 81-clamping jaw, 82-forearm joint, 83-elbow joint, 84-shoulder joint, 85-base, 9-body frame.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide an operation type underwater robot for sea cucumber fishing and a control method thereof, so as to realize accurate control and autonomous grabbing of the sea cucumber fishing robot in a complex underwater environment.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The invention provides an operation type underwater robot for sea cucumber fishing. As shown in fig. 1-4, the operation type underwater robot comprises: a body frame 9, propellers, a second control board, a first control board, a camera mechanism and a grabbing mechanism.
The propellers, the second control board, the first control board and the camera mechanism are all arranged on the body frame 9, and the fixed end of the grabbing mechanism is connected with the body frame 9. The camera mechanism is connected with the signal input end of the second control board, and the signal output end of the second control board is respectively connected with the first control board and the grabbing mechanism. The camera mechanism shoots a front visual image and a lower visual image of the operation type underwater robot and transmits them to the second control board. The second control board determines the three-dimensional position information of the sea cucumber according to the lower visual image and performs online path planning according to this position information and the front visual image. The first control board is connected with the propellers and adjusts them according to the online path plan so that the operation type underwater robot moves accordingly. The second control board also controls the grabbing mechanism to catch the sea cucumber when the robot reaches the target point of the online path plan.
As shown in fig. 5, the camera mechanism includes a forward-looking monocular camera and the overlooking binocular camera 5. Both are connected with the signal input end of the second control board. The forward-looking monocular camera shoots the front visual image of the operation type underwater robot and transmits it to the second control board; the overlooking binocular camera 5 shoots the lower visual image and transmits it to the second control board.
The forward-looking monocular camera is fixed on a pan-tilt in the first control cabin 1, and its viewing angle can be adjusted by a steering engine. It acquires visual-field information ahead of the moving operation type sea cucumber fishing robot, so as to realize tasks such as tracking and obstacle avoidance. The overlooking binocular camera 5 is fixed at the middle of the robot frame and is connected to the first control cabin through a watertight cable. The visual information processing module in the first control cabin processes and transmits the video information acquired by the binocular and monocular cameras. In addition, the first control cabin 1 also contains a nine-axis inertial sensor and a depth sensor for acquiring the attitude information and depth information of the robot.
The second control board is located in the second control cabin and carries a graphics processing unit, a sensor processor and a network communication module. Its power consumption is low, at most 30 watts. It can quantize and accelerate the deep network models; run the target recognition, target detection and tracking algorithms; recognize obstacles in the forward field of view and targets in the downward field of view in real time; and compute the three-dimensional position information and attitude information of a target through the binocular camera. The second control board also runs the autonomous control and navigation algorithms, realizing autonomous control of the sea cucumber fishing operation robot.
The power supply cabin 2 is located below the first control cabin and provides the necessary power to the underwater robot for sea cucumber fishing operation during autonomous control. The binocular camera of the robot vision system is arranged below the power supply cabin 2 and acquires the three-dimensional position information and attitude information of a target during mechanical-arm grabbing. The second control cabin 6 is arranged at the lower rear of the frame; the second control board is installed inside it, and the second control cabin 6 is connected with the first control cabin 1 through a watertight cable.
The propeller includes: first propeller thruster 31, second propeller thruster 32, third propeller thruster 33, fourth propeller thruster 34, fifth propeller thruster 41, sixth propeller thruster 42, seventh propeller thruster 43 and eighth propeller thruster 44. The first propeller thruster 31, the second propeller thruster 32, the third propeller thruster 33, and the fourth propeller thruster 34 are arranged in a vector type in the horizontal direction, the first propeller thruster 31 and the second propeller thruster 32 are arranged in front of the body frame 9 in the horizontal direction, and the third propeller thruster 33 and the fourth propeller thruster 34 are arranged behind the body frame 9 in the horizontal direction. The first propeller thruster 31, the second propeller thruster 32, the third propeller thruster 33, the fourth propeller thruster 34, the fifth propeller thruster 41, the sixth propeller thruster 42, the seventh propeller thruster 43 and the eighth propeller thruster 44 are all connected with the first control board. The first control board is used for controlling the first propeller thruster 31, the second propeller thruster 32, the third propeller thruster 33 and the fourth propeller thruster 34 to provide thrust in the horizontal front-back direction for the operation type underwater robot, and controlling the fifth propeller thruster 41, the sixth propeller thruster 42, the seventh propeller thruster 43 and the eighth propeller thruster 44 to provide thrust in the vertical direction for the operation type underwater robot, so that the operation type underwater robot moves according to the on-line path planning.
The first propeller thruster 31 to the eighth propeller thruster 44 together form a vectored layout providing six-degree-of-freedom actuation; they power the sea cucumber catching robot's motion and precise attitude control. They are connected to the first control cabin through watertight cables, and the control cabin drives the propellers to rotate with drive signals to control the robot's direction of motion.
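As a rough illustration of how the first control board could map motion commands onto this eight-thruster layout, the sketch below assumes a 45° cant angle for the four horizontal vectored thrusters and an illustrative lever arm; neither value is given in the patent.

```python
import numpy as np

# Assumed geometry (not from the patent): horizontal thrusters canted 45 deg,
# lever arm of 0.2 m from each thruster to the frame centre.
ANGLE = np.deg2rad(45)
L = 0.2

def mix_horizontal(surge, sway, yaw):
    """Map body-frame surge/sway/yaw commands onto the four horizontal
    vectored thrusters (31, 32, 33, 34)."""
    c, s = np.cos(ANGLE), np.sin(ANGLE)
    return np.array([
        surge * c + sway * s + yaw * L,   # thruster 31 (front-left)
        surge * c - sway * s - yaw * L,   # thruster 32 (front-right)
        surge * c - sway * s + yaw * L,   # thruster 33 (rear-left)
        surge * c + sway * s - yaw * L,   # thruster 34 (rear-right)
    ])

def mix_vertical(heave, roll, pitch):
    """Map heave/roll/pitch commands onto the four vertical thrusters
    (41, 42, 43, 44) at the frame corners."""
    return np.array([
        heave + roll * L + pitch * L,     # thruster 41
        heave - roll * L + pitch * L,     # thruster 42
        heave + roll * L - pitch * L,     # thruster 43
        heave - roll * L - pitch * L,     # thruster 44
    ])
```

A pure surge command loads all four horizontal thrusters equally, while a pure yaw command produces equal and opposite thrusts that sum to zero net force.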
The grabbing mechanism includes: a base 85, a shoulder joint 84, an elbow joint 83, a forearm joint 82, a clamping jaw 81, and a first, second, third and fourth steering engine, all of which are connected with the second control board. The shoulder joint 84 is fixed to the body frame 9 by the base 85; the base 85 carries the first steering engine, which drives the shoulder joint 84 to rotate about the vertical axis under the control of the second control board. The elbow joint 83 is connected in series with the shoulder joint 84, with the second steering engine between them; it drives the elbow joint 83 to rotate about an axis perpendicular to the shoulder joint 84 under the control of the second control board. The forearm joint 82 is connected in series with the elbow joint 83, with the third steering engine between them; it drives the forearm joint 82 to rotate about a direction perpendicular to the centre line of the elbow joint 83 under the control of the second control board. The clamping jaw 81 and the fourth steering engine are arranged above the forearm joint 82. The clamping jaw 81 comprises two oppositely arranged mesh structures; the fourth steering engine drives the two mesh structures to rotate in opposite directions under the control of the second control board, thereby controlling the opening and closing of the clamping jaw 81.
The clamping jaw 81 is a flexible strip-shaped mesh structure shaped to fit the sea cucumber; a silicone sleeve covers its plastic skeleton, making it suitable for catching sea cucumbers.
The grabbing mechanism is a three-degree-of-freedom mechanical arm comprising a serial motor drive structure, a rigid mechanical transmission structure and the clamping jaw 81; the drive structure actuates the clamping jaw 81 through the transmission structure to grab the sea cucumber. The serial motor drive structure comprises the first, second, third and fourth steering engines, and the rigid transmission structure comprises the base 85, the shoulder joint 84, the elbow joint 83 and the forearm joint 82. All driving steering engines are connected to the second control cabin through watertight cables, and the control board inside the cabin drives the mechanical-arm steering engines to rotate.
The operation type underwater robot further includes a loading net cage 7 and a fifth steering engine. The loading net cage 7 is arranged on the body frame 9, opposite the grabbing mechanism, and is connected with the second control board through the fifth steering engine. When the grabbing mechanism finishes grabbing and moves to a preset position, the fifth steering engine, under the control of the second control board, automatically opens the loading net cage 7 to receive the sea cucumber caught by the grabbing mechanism, which facilitates storage and recovery of the grabbed sea cucumbers.
Preferably, the loading net cage 7 is made of acrylic material and is fixed under the front of the robot.
The operation type underwater robot further includes a low-power searchlight fixed below the front bracket of the body frame 9. It illuminates the area ahead in dark underwater conditions and enlarges the visual-field range of the sea cucumber fishing robot. The second control board judges brightness changes in the environment from the acquired visual information; when the brightness falls below a threshold, the lighting system is switched on automatically to supplement light, enlarging the robot's visible range in dark underwater scenes.
The second control board executes an underwater image enhancement algorithm based on a twin network; a MobileNet-Transformer-GCN-based sea cucumber recognition and detection algorithm and sea cucumber target tracking and positioning algorithm; an online motion planning and coordinated control algorithm for the underwater robot; and a sea cucumber tracking control and hovering-grab algorithm, realizing autonomous control without an upper computer.
The invention is based on the above-mentioned working underwater robot for sea cucumber fishing, and also provides a control method for the working underwater robot for sea cucumber fishing, as shown in fig. 6, the control method comprises:
Step 1: visual images of the working spaces ahead of and below the robot are acquired through the binocular camera and the monocular camera. The monocular camera guides the underwater robot from its current position to the sea cucumber target position; this is realized by the target detection and tracking algorithm, with the motion itself realized by the planning and control algorithms. The binocular camera is used for grabbing control after the robot reaches the target position; this is realized by target detection, tracking and positioning, and the grabbing motion is computed by inverse kinematics.
Color correction and defogging enhancement are performed on the front visual image and the lower visual image with an underwater image enhancement algorithm based on a twin convolutional neural network. As shown in fig. 7, the twin convolutional neural network includes a first branch convolutional neural network and a second branch convolutional neural network. The first branch is constrained by the color features of the label image and is responsible for the color correction of the image; the second branch is constrained by texture features and is responsible for image sharpness. After the feature constraints, both branches undergo a further convolutional feature transformation; the two branch features are finally spliced by dot product, and a final clear image is generated through one layer of convolution after splicing.
The color features used are the first, second and third color moments. For channel i with N pixels p_ij, they are given by:

μ_i = (1/N) Σ_j p_ij, σ_i = ((1/N) Σ_j (p_ij − μ_i)²)^(1/2), s_i = ((1/N) Σ_j (p_ij − μ_i)³)^(1/3)
the texture features are extracted by the local binary pattern operator. The extraction of the characteristics is carried out by convolution extraction through an operator template, taking a convolution operator of 3 x 3 as an example, the eight pixel values around are respectively compared with the central pixel, the value which is larger than the central pixel value is 1, and the value which is smaller than the central pixel value is 0. And after the two branches are constrained by the two features, further performing convolution feature transformation operation, finally splicing the two branch features in a point multiplication mode, and generating a final clear image through a layer of convolution transformation after splicing.
The designed underwater image enhancement algorithm is suited to underwater scenes where the color difference between the sea cucumber and the background is large; in that case the sea cucumber's edge features are distinct and texture feature extraction works well.
Step 2: using the MobileNet-Transformer-GCN-based sea cucumber recognition and tracking algorithm, identify and continuously track the sea cucumber to be caught from the lower visual images acquired in real time, while locating its pixel coordinates in real time.
The algorithm takes the underwater images collected by the binocular and monocular cameras as input, and detects and outputs the position information of the sea cucumber in real time. The designed sea cucumber detection and target tracking algorithm is a single-step algorithm that detects and tracks simultaneously, making it well suited to the robot platform; the algorithm flow comprises detection, tracking and positioning, and the obtained position information is finally used by the mechanical claw for grabbing.
The detection and recognition algorithm fuses the features of MobileNet, Transformer and GCN. The head extracts features through convolutional layers using factorized convolution, which splits into a depthwise convolution and a pointwise convolution; factorization reduces the model's computation, making it light enough for an underwater operation robot. In the depthwise part, dilated convolution replaces ordinary convolution: it further lightens the model, obtains a larger receptive field, improves detection accuracy and also reduces computation. The factorized convolution maps the image features from low to high dimension, after which they are input into a Transformer model for further feature extraction. Instead of a fully connected structure, the Transformer adopts a GCN structure, which trains the weights of the graph edges while extracting features, learns the relations between different features, and increases inductive bias. The tail of the model splits into two branches: one outputs the detection result, the other extracts identity features used for tracking the target. On the tracking model, since underwater sea cucumbers are highly similar to one another and identity features are hard to extract, the identity features are fused with hand-crafted features; and since the high similarity easily increases model drift, a correction based on simulated annealing and backtracking is adopted, while correlation filtering speeds up tracking, making the method suitable for the sea cucumber catching robot.
The detection, recognition and tracking algorithm acquires the pixel coordinates of the sea cucumber within the robot's visual range. The designed calculation method fuses two building blocks: a lightweight module and a Transformer-GCN module.
Referring to fig. 9, each lightweight module consists of three convolution stages. An n × n × C_i input is first processed by a 1 × 1 convolution with C_e kernels into an n × n × C_e feature map; then by a 3 × 3 depthwise convolution with C_e kernels, halving the spatial size into an (n/2) × (n/2) × C_e feature map; and finally by a 1 × 1 convolution with C_p kernels, outputting an (n/2) × (n/2) × C_p feature map as the output of the module.
Referring to fig. 10, in the Transformer-GCN module an n × n × C_i input is first processed by a 3 × 3 convolution with C_i kernels, halving the spatial size into an (n/2) × (n/2) × C_i feature map labeled T_1. It is then processed by a 1 × 1 convolution with C_t kernels into an (n/2) × (n/2) × C_t feature map, which is flattened into an (n²/4) × C_t feature map; this feature set can be written F = [f_0, f_1, …, f_Ct], where each sub-feature f_i has dimension n²/4. A fully connected graph is built on this feature set, whose edge weights W are computed by cosine similarity, i.e. W_ij = (f_i · f_j)/(‖f_i‖ ‖f_j‖), where i ∈ (1, …, C_t), j ∈ (1, …, C_t). The feature map is thereby converted into a graph-convolution feature G = (F, W). One layer of graph convolution G_e = G ∗ θ_1 with graph convolution kernel θ_1 yields features F′ = [f_0′, f_1′, …, f_Ct′], each sub-feature f_i′ of dimension n_g; a second graph convolution kernel θ_2 yields the next layer's features F″ = [f_0″, f_1″, …, f_Ct″], each sub-feature f_i″ of dimension n²/4. The graph-convolution features are concatenated into an (n/2) × (n/2) × C_t feature map; a 1 × 1 convolution with C_i kernels then outputs an (n/2) × (n/2) × C_i feature map labeled T_2. Finally T_1 and T_2 are spliced as the final output of the module; the spliced feature map has dimension (n/2) × (n/2) × 2C_i.
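A minimal sketch of the cosine-similarity graph construction and one graph-convolution layer G_e = G ∗ θ described above, in NumPy. The plain W F θ propagation rule (without degree normalization) is an assumption about the patent's unstated graph-convolution form.

```python
import numpy as np

def cosine_adjacency(F):
    """Edge-weight matrix W with W_ij = cosine similarity of sub-features
    f_i and f_j; F has one sub-feature per row."""
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    Fn = F / np.maximum(norms, 1e-12)   # guard against zero rows
    return Fn @ Fn.T

def graph_conv(F, theta):
    """One graph-convolution layer: propagate features over the cosine
    graph and project with kernel theta (assumed rule: W F theta)."""
    W = cosine_adjacency(F)
    return W @ F @ theta
```

Stacking two such layers with kernels θ_1 and θ_2 reproduces the two graph-convolution steps of the module.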
The output end of the calculation method comprises two branches, and the target detection and tracking functions can be simultaneously completed.
After the collected picture is scaled to 640 × 640 × 3, it is input in sequence into lightweight module 1, lightweight module 2, lightweight module 3, Transformer-GCN module 1, lightweight module 4, Transformer-GCN module 2 and lightweight module 5; the output depth feature then has dimension 4 × 4 × 2048, which global pooling reduces to 1 × 1 × 2048.
For the target detection branch, as shown in fig. 11, the globally pooled features are mapped by a fully connected layer to a 1 × 6 prediction comprising the target position of the sea cucumber (center-point coordinates x and y and target-box width and height w and h), the target category (whether the target is a sea cucumber) and the confidence (the probability that the detected target is a sea cucumber).
For the tracking branch, the globally pooled features are first passed through a fully connected layer and mapped to features of dimension 1 × 1 × 256, called the depth identity features. At the same time, gradient histogram features are extracted from the original picture as artificial identity features; since the dimension of the extracted gradient histogram is not fixed, it is mapped to 1 × 1 × 256 by principal component analysis. The artificial identity features and the depth identity features are fused and input into correlation filtering for prediction, with the correlation filter computed as:

y = w ⊛ x

where x is the fused feature, w is the filter template, and the response value y of each target is computed by convolution; the target with the maximum response value is the current tracking target.
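The response computation y = w ⊛ x above can be sketched with FFT-based circular cross-correlation; the dense-response formulation and the way candidates are compared are assumptions rather than the patent's exact filter design.

```python
import numpy as np

def correlation_response(x, w):
    """Circular cross-correlation of a fused feature vector x with a
    filter template w via the FFT; returns the dense response map y."""
    X = np.fft.fft(x)
    W = np.fft.fft(w, n=len(x))
    return np.real(np.fft.ifft(X * np.conj(W)))

def best_target(candidates, w):
    """Pick the candidate whose peak response against the template is
    largest — the 'current tracking target' rule from the text."""
    peaks = [correlation_response(x, w).max() for x in candidates]
    return int(np.argmax(peaks))
```

For a template equal to the feature itself, the zero-lag response equals the feature's energy and is the maximum of the response map.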
In one example, referring to fig. 8, step 2 specifically includes:
2-1, zooming the real-time acquired lower visual image to obtain a zoomed lower visual image;
2-2, sequentially inputting the zoomed lower visual image into a first lightweight module, a second lightweight module, a third lightweight module, a first Transformer-GCN module, a fourth lightweight module, a second Transformer-GCN module, a fifth lightweight module and a global pooling module, and outputting a feature map;
2-3, mapping the characteristic graph to obtain a prediction result of the sea cucumber to be caught; the prediction result comprises a target position, a target category and a confidence coefficient;
2-4, inputting the feature map into a full-connection module to obtain a depth identity feature;
2-5, extracting gradient histogram features from the zoomed lower visual image as artificial identity features;
2-6, mapping the artificial identity features to the same dimension as the depth identity features by principal component analysis;
2-7, fusing the mapped artificial identity features and the depth identity features to obtain fused identity features;
2-8, inputting the fused identity characteristics into a filtering module, and calculating the response value of each detection target in the zoomed lower visual image;
and 2-9, selecting the detection target with the maximum response value in the zoomed lower visual image to determine the detection target as the currently tracked sea cucumber to be caught.
Step 3: convert the pixel coordinates into world coordinates through binocular stereo matching to obtain the three-dimensional position of the sea cucumber to be caught.
In the grabbing process, after the pixel coordinates of the tracking target are obtained, the conversion from the pixel coordinates to world coordinates needs to be realized through binocular stereo matching, and the real world coordinates of the sea cucumber are obtained according to the detection and tracking results.
In one example, the method specifically comprises the following steps:
3-1, calibrating the camera with the Zhang Zhengyou calibration method to obtain the camera intrinsic matrix [f, 1/d_x, 1/d_y, c_x, c_y], the distortion coefficients [k_1, k_2, k_3, p_1, p_2], the extrinsic rotation matrix R and the translation vector t.
3-2, converting the pixels of the lower visual image into the camera coordinate system using the intrinsic matrix. To align the pixel positions of the left and right views of the binocular camera, distortion correction and imaging-origin alignment are performed on the image using the camera intrinsics, rotation matrix and translation vector obtained by calibration. The image pixels are first converted into the camera coordinate system through the camera intrinsic matrix, corrected there through the distortion coefficients, and then converted back into the pixel coordinate system.
3-3, correcting the lower-visual-image pixels in the camera coordinate system through the distortion coefficients, and converting the corrected pixels back into the pixel coordinate system;
3-4, after distortion removal and pixel alignment, calculating the disparity map to acquire the three-dimensional coordinates of the target and convert the pixel coordinates into world coordinates. The distance D from a point in space to the camera plane is calculated with the formula D = fT/d, where f is the focal length acquired by calibration, T is the baseline distance between the two binocular cameras, and d is the disparity value;
3-5, according to the distance from the point in space to the camera plane, converting the pixel coordinates into world coordinates with X = x_1·D/f, Y = y_1·D/f, Z = D, obtaining the three-dimensional position of the sea cucumber to be caught; here (X, Y, Z) are the three-dimensional position coordinates of the sea cucumber to be caught, and (x_1, y_1) and (x_2, y_2) are its pixel coordinates in the images shot by the two cameras of the binocular pair, with disparity d = x_1 − x_2. Traversing each pixel of the disparity map with the above formulas yields the depth map of the target, and thus the coordinates of every pixel in the world coordinate system.
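The triangulation in steps 3-4 and 3-5 can be sketched as below; the pixel coordinates are assumed to be measured relative to the principal point of a rectified stereo pair, and the variable names mirror the formulas above.

```python
def stereo_point(x1, y1, x2, f, T):
    """Triangulate a rectified stereo match: depth D = f*T/d with
    disparity d = x1 - x2, then back-project (x1, y1) through the
    pinhole model to world coordinates (X, Y, Z)."""
    d = x1 - x2
    if d <= 0:
        raise ValueError("non-positive disparity: point at or behind infinity")
    D = f * T / d        # distance to the camera plane
    X = x1 * D / f       # back-projection assumes principal-point-relative pixels
    Y = y1 * D / f
    return X, Y, D
```

For example, with focal length 500 px, a 6 cm baseline and a 10 px disparity, the point lies 3 m from the camera plane.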
The designed sea cucumber detection model and the target tracking model can simultaneously realize the sea cucumber detection and tracking tasks, and the designed detection and tracking model is suitable for the sea cucumber catching task.
And after the target three-dimensional position information is acquired, performing motion planning through an underwater robot motion planning algorithm.
Step 4: set the three-dimensional position of the sea cucumber to be caught as the target point, and plan a path from the operation type underwater robot to the target point with a rapidly-exploring random tree (RRT) algorithm.
The path between the robot and the target is planned by the rapidly-exploring random tree algorithm as follows:
4-1, initializing the root node q_init.
4-2, in each outer loop, randomly selecting a random point q_rand. After the random point is generated, traverse every node in the tree to find the node q_near nearest to it, and define a step-size variable eps. In each subsequent inner loop, extend one step of length eps from q_near towards q_rand to a new node q_new and check whether the path collides. If not, add the new node q_new to the tree and enter the next inner loop; if a collision occurs, enter the next inner loop without adding a node. Continue until q_rand is reached, then start a new outer loop.
And 4-3, repeating the step 4-2 until the end point is reached, and acquiring the path from the current sea cucumber catching robot to the target point.
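The loop in steps 4-1 to 4-3 can be sketched as a minimal three-dimensional rapidly-exploring random tree; the workspace bounds, step size, goal tolerance and collision callback below are placeholders, not values from the patent.

```python
import random
import math

def rrt(start, goal, collides, eps=1.0, goal_tol=1.0, max_iter=20000,
        bounds=((0, 6), (0, 6), (0, 6)), seed=0):
    """Minimal 3-D RRT: grow a tree from `start` by stepping `eps`
    towards uniformly sampled points; `collides(p, q)` reports whether
    the segment p-q hits an obstacle. Returns a path or None."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        q_rand = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        # nearest node in the tree (q_near)
        i_near = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], q_rand))
        q_near = nodes[i_near]
        dist = math.dist(q_near, q_rand)
        if dist == 0:
            continue
        step = min(eps, dist) / dist
        q_new = tuple(a + (b - a) * step for a, b in zip(q_near, q_rand))
        if collides(q_near, q_new):
            continue   # skip colliding extensions
        parent[len(nodes)] = i_near
        nodes.append(q_new)
        if math.dist(q_new, goal) <= goal_tol and not collides(q_new, goal):
            # walk back up the tree to recover the path (step 4-3)
            path, i = [goal], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None
```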
The coordinated motion planning control of the sea cucumber catching robot is completed through a task priority algorithm, and the motion priority of the underwater robot can be divided into four levels according to the sea cucumber catching task. The first stage is movement guided by the forward-looking camera, the position and the path from the fishing robot to a target point are planned in real time through an online movement planning algorithm, and the first stage is movement on a three-dimensional position; the second stage is the movement of the fishing robot guided by the binocular camera, after the robot moves to a certain distance near the target, the sea cucumber is positioned through the binocular camera, meanwhile, the online path planning is carried out, and the position of the fishing robot is adjusted until the target is positioned in the mechanical arm movement space; the third stage is the adjustment of the posture of the mechanical arm, so that the operation space of the mechanical arm is maximized; and the fourth stage is mechanical arm movement and sea cucumber catching.
Step 5: based on the designed Actor-Critic reinforcement learning model, realize the sea cucumber tracking control and hovering-grab algorithm along the planned path. The motion control of the sea cucumber catching robot is three-dimensional, so the three attitude angles, the three-dimensional coordinates, the angular velocities and the linear velocities must be controlled simultaneously.
as shown in fig. 12, the specific flow of the designed model is as follows:
5-1, after the propeller thrusters and the mechanical arm move according to a random initial state function, acquire the current motion attitude, feed it into the actor network, and calculate the current reward value from the planned expectation and the reward function obtained in step 3. The designed reward function is:
R = r_0 − ρ_1‖(Δφ, Δθ, Δψ)‖₂ − ρ_2‖(Δx, Δy, Δz)‖₂
where Δx, Δy, Δz are the three-dimensional coordinate state quantities; Δφ, Δθ, Δψ are the yaw, pitch and roll angle state quantities respectively; r_0 is a constant reward; ρ_1‖(Δφ, Δθ, Δψ)‖₂ is the two-norm relative orientation error term; and ρ_2‖(Δx, Δy, Δz)‖₂ is the two-norm relative position error term.
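The reward shaping above transcribes directly; the values of the constants r_0, ρ_1 and ρ_2 below are illustrative, since the patent does not give them.

```python
import numpy as np

def reward(d_pos, d_ang, r0=1.0, rho1=0.1, rho2=0.1):
    """Reward = constant bonus minus weighted 2-norms of the orientation
    error (d_ang = Δφ, Δθ, Δψ) and position error (d_pos = Δx, Δy, Δz).
    r0, rho1, rho2 are assumed values."""
    d_pos = np.asarray(d_pos, dtype=float)
    d_ang = np.asarray(d_ang, dtype=float)
    return r0 - rho1 * np.linalg.norm(d_ang) - rho2 * np.linalg.norm(d_pos)
```

At zero error the agent receives the full constant reward; any position or orientation error strictly reduces it.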
5-2, fusing the reward value and the state function into a training sample, adding it to the training set space, then fusing the samples through a Gaussian mixture model and compressing the sample space. The state function is represented as follows:
s=[g,Δx,Δy,Δz,Δφ,Δθ,Δψ,u,v,w,p,q,r]
where g is a state constant, u, v, w are the linear velocity state quantities, and p, q, r are the angular velocity state quantities. The current training sample space can therefore be expressed as
[R,s]=[[R 1,s 1],• • •,[R n,s n]]
As the sample space grows over time, the computational load increases; the calculation method of the invention therefore clusters and compresses the sample space through a Gaussian mixture model, which can be expressed as

p(x̂) = Σ_{k=1}^{K} π_k N(x̂ | μ_k, σ_k)

There are K classes of samples in this sample space; π_k is the class distribution probability, i.e. the weight of the Gaussian mixture, and therefore satisfies Σ_{k=1}^{K} π_k = 1; μ_k and σ_k are the class mean and class variance respectively. π_k, μ_k and σ_k must be solved for, where x̂ denotes the reconstructed samples.
The log-likelihood function of the Gaussian mixture model can be expressed as:

ln L = Σ_n ln Σ_{k=1}^{K} π_k N(x̂_n | μ_k, σ_k)
the Gaussian mixture model can be solved through an expected maximum algorithm, and the specific calculation process is as follows:
the first step is as follows: by initialisationAndσ k to solve for the conditional probability of a certain sample, i.e.:
The second step: with the posterior probabilities γ(z_nk) obtained, further optimize π_k, μ_k and σ_k by maximizing the log-likelihood function:

N_k = Σ_n γ(z_nk), π_k = N_k / N, μ_k = (1/N_k) Σ_n γ(z_nk) x̂_n, σ_k = (1/N_k) Σ_n γ(z_nk)(x̂_n − μ_k)²
Repeat the two steps until convergence; the solved x̂ is the reconstructed sample space, and Q is the intermediate variable of the EM iteration.
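The two EM steps above can be sketched for a one-dimensional sample space; the quantile-based initialization and the variance floor are assumptions added for numerical stability.

```python
import numpy as np

def gmm_em(x, K=2, iters=100):
    """Fit a 1-D Gaussian mixture by expectation-maximisation.
    E-step: posterior responsibilities gamma(z_nk).
    M-step: re-estimate weights pi_k, means mu_k, variances var_k."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    pi = np.full(K, 1.0 / K)
    mu = np.quantile(x, np.linspace(0.1, 0.9, K))  # assumed init
    var = np.full(K, x.var() + 1e-6)
    for _ in range(iters):
        # E-step: responsibilities, shape (n, K)
        dens = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        gamma = pi * dens
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation
        Nk = gamma.sum(axis=0)
        pi = Nk / n
        mu = (gamma * x[:, None]).sum(axis=0) / Nk
        var = (gamma * (x[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-9
    return pi, mu, var
```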
5-3, inputting the samples into the target state-action network in the critic network, performing the gradient calculation, and updating the parameters of the online state-action network. In addition, after the gradients have been accumulated several times, the target state-action network parameters are updated from the online state-action network gradients.
5-4, optimizing and updating the parameters of the online policy network through the gradient obtained from the critic network. In addition, after accumulating the gradients several times, the target policy network parameters are updated from the online policy network gradients.
5-5, generating a new state function through the actor network, and controlling the mechanical arm and the propeller thrusters.
5-6, looping from 5-1 to 5-5 until convergence, at which point the motion of the mechanical arm and the propeller thrusters follows the planned motion trajectory and attitude, and the sea cucumber is grabbed.
Step 6: according to inverse kinematics, the clamping jaw 81 of the grabbing mechanism of the operation type underwater robot catches the sea cucumber.
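A closed-form inverse-kinematics sketch for a yaw-shoulder-elbow arm like the three-degree-of-freedom grabbing mechanism: base yaw about the vertical axis, then a planar two-link chain (shoulder-elbow). The link lengths and joint layout are illustrative assumptions, not dimensions from the patent; a forward-kinematics function is included to verify the solution.

```python
import math

# Illustrative link lengths in metres (assumed, not from the patent).
L1, L2 = 0.15, 0.12   # upper arm (shoulder-elbow) and forearm

def ik_3dof(x, y, z):
    """Closed-form IK: base yaw theta1, shoulder pitch theta2, elbow
    angle theta3 reaching radius r = hypot(x, y) at height z."""
    theta1 = math.atan2(y, x)                     # base yaw
    r = math.hypot(x, y)
    c3 = (r * r + z * z - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c3 <= 1.0:
        raise ValueError("target out of reach")
    theta3 = math.acos(c3)                        # elbow angle
    theta2 = (math.atan2(z, r)
              - math.atan2(L2 * math.sin(theta3),
                           L1 + L2 * math.cos(theta3)))
    return theta1, theta2, theta3

def fk_3dof(theta1, theta2, theta3):
    """Forward kinematics, used to check the IK solution round-trips."""
    r = L1 * math.cos(theta2) + L2 * math.cos(theta2 + theta3)
    z = L1 * math.sin(theta2) + L2 * math.sin(theta2 + theta3)
    return r * math.cos(theta1), r * math.sin(theta1), z
```

The second control board would solve joint angles this way for each target position reported by the binocular camera before commanding the steering engines.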
The invention can realize accurate control and autonomous grabbing by the sea cucumber catching robot in complex underwater environments.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.
Claims (10)
1. An operation type underwater robot for sea cucumber fishing, characterized by comprising: a body frame, a propeller, a second control board, a first control board, a camera mechanism and a grabbing mechanism;
the propeller, the second control board, the first control board and the camera mechanism are all arranged on the body frame, and the fixed end of the grabbing mechanism is connected with the body frame;
the camera shooting mechanism is connected with a signal input end of the second control board, and a signal output end of the second control board is respectively connected with the first control board and the grabbing mechanism; the camera shooting mechanism is used for shooting a front visual image and a lower visual image of the operation type underwater robot and transmitting the front visual image and the lower visual image to the second control board; the second control board is used for determining three-dimensional position information of the sea cucumber according to the lower visual image and carrying out online path planning according to the three-dimensional position information of the sea cucumber and the front visual image;
the first control board is connected with the propeller and used for adjusting the propeller according to the on-line path plan so that the operation type underwater robot moves according to the on-line path plan;
the second control board is also used for controlling the grabbing mechanism to catch the sea cucumbers when the operation type underwater robot reaches a target point of the online path planning.
2. The working underwater robot for sea cucumber fishing according to claim 1, wherein the camera mechanism comprises: a forward looking monocular camera and a look down binocular camera;
the front-view monocular camera and the overlooking binocular camera are both connected with the signal input end of the second control board; the front monocular camera is used for shooting a front visual image of the operation type underwater robot and transmitting the front visual image to the second control board; and the overlooking binocular camera is used for shooting a lower visual image of the operation type underwater robot and transmitting the lower visual image to the second control board.
3. The working type underwater robot for sea cucumber fishing according to claim 1, wherein the propeller comprises: a first propeller thruster, a second propeller thruster, a third propeller thruster, a fourth propeller thruster, a fifth propeller thruster, a sixth propeller thruster, a seventh propeller thruster, and an eighth propeller thruster;
the first propeller thruster, the second propeller thruster, the third propeller thruster and the fourth propeller thruster are arranged in the horizontal direction in a vectored-thruster configuration; the first propeller thruster and the second propeller thruster are arranged at the front of the body frame in the horizontal direction, and the third propeller thruster and the fourth propeller thruster are arranged at the rear of the body frame in the horizontal direction;
the first propeller thruster, the second propeller thruster, the third propeller thruster, the fourth propeller thruster, the fifth propeller thruster, the sixth propeller thruster, the seventh propeller thruster and the eighth propeller thruster are all connected with the first control board;
the first control board is used for controlling the first propeller thruster, the second propeller thruster, the third propeller thruster and the fourth propeller thruster to provide thrust in the horizontal front-back direction for the operation type underwater robot, and controlling the fifth propeller thruster, the sixth propeller thruster, the seventh propeller thruster and the eighth propeller thruster to provide thrust in the vertical direction for the operation type underwater robot, so that the operation type underwater robot moves according to the on-line path planning.
4. The working type underwater robot for sea cucumber fishing according to claim 1, wherein the catching mechanism comprises: the device comprises a base, a shoulder joint, an elbow joint, a forearm joint, a clamping jaw, a first steering engine, a second steering engine, a third steering engine and a fourth steering engine;
the first steering engine, the second steering engine, the third steering engine and the fourth steering engine are all connected with the second control board;
the shoulder joint is fixed to the body frame through a base; the base is provided with a first steering engine, and the first steering engine is used for driving the shoulder joint to rotate around the direction of the vertical axis under the control of the second control board;
the elbow joint is connected in series with the shoulder joint; a second steering engine is arranged between the shoulder joint and the elbow joint and is used for driving the elbow joint to rotate about an axis perpendicular to the shoulder joint axis under the control of the second control board;
the forearm joint is connected in series with the elbow joint; a third steering engine is arranged between the forearm joint and the elbow joint and is used for driving the forearm joint to rotate about an axis perpendicular to the elbow joint center line under the control of the second control board;
the clamping jaw and the fourth steering engine are arranged above the forearm joint; the clamping jaw comprises two oppositely arranged net-shaped structures; the fourth steering engine is used for driving the two oppositely arranged net-shaped structures to rotate in opposite directions under the control of the second control board, so as to realize opening and closing control of the clamping jaw.
5. The working underwater robot for sea cucumber fishing according to claim 1, further comprising: the loading net cage and a fifth steering engine;
the loading net cage is arranged on the body frame, and the loading net cage is arranged opposite to the grabbing mechanism;
the loading net cage is connected with the second control board through a fifth steering engine; when the grabbing mechanism finishes grabbing and moves to a preset position, the fifth steering engine drives the loading net cage to open automatically under the control of the second control board, so as to load the sea cucumbers caught by the grabbing mechanism.
6. A control method for a sea cucumber fishing operation type underwater robot is characterized by comprising the following steps:
acquiring a front visual image and a lower visual image of the operation type underwater robot shot by a camera shooting mechanism in real time;
according to the lower visual image obtained in real time, a sea cucumber identification and tracking algorithm based on MobileNet-Transformer-GCN is utilized to identify and continuously track the sea cucumber to be caught, and meanwhile, the pixel coordinates of the sea cucumber to be caught are positioned in real time;
converting the pixel coordinates into world coordinates through binocular stereo matching to obtain the three-dimensional position of the sea cucumber to be caught;
setting the three-dimensional position of the sea cucumber to be caught as a target point, and planning a path from the operation type underwater robot to the target point by adopting a rapid search tree algorithm;
controlling the operation type underwater robot to move according to the path based on an Actor-Critic reinforcement learning model according to the front visual image, and suspending the operation type underwater robot after the operation type underwater robot runs to a target point; the Actor-Critic reinforcement learning model performs clustering compression on a sample space through a Gaussian mixture model;
according to inverse kinematics, the sea cucumbers to be caught are caught by a grabbing mechanism of the operation type underwater robot.
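The path-planning step of claim 6 can be illustrated with a minimal rapidly-exploring random tree, assuming the "rapid search tree algorithm" refers to an RRT; the planar 10×10 workspace, step size, goal tolerance and the caller-supplied collision check `is_free` are all illustrative assumptions:

```python
import math
import random

random.seed(0)

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iter=5000):
    """Minimal rapidly-exploring random tree in a 10x10 planar workspace.
    is_free(p) is a caller-supplied collision check (assumed interface)."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        sample = (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        # extend one step from the nearest node toward the random sample
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:             # walk back to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None                              # no path found within max_iter

path = rrt((0.0, 0.0), (9.0, 9.0), is_free=lambda p: True)  # obstacle-free toy run
```

In the claimed method the target point would be the triangulated sea cucumber position and `is_free` would encode the underwater obstacle map.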
7. The control method of the working underwater robot for sea cucumber catching according to claim 6, wherein the real-time acquisition of the front visual image and the lower visual image of the working underwater robot photographed by the photographing means further comprises:
carrying out color correction and defogging enhancement on the front visual image and the lower visual image by adopting an underwater image enhancement algorithm based on a twin convolutional neural network; the twin convolutional neural network comprises a first branch convolutional neural network and a second branch convolutional neural network; the first branch convolutional neural network is constrained by the color characteristics of the label image and is responsible for color correction of the image; the second branch convolutional neural network is constrained by texture features and is responsible for image sharpness; after the feature constraint, the first and second branch convolutional neural networks each perform a convolutional feature transformation, the features of the two branches are then fused by dot-product (element-wise) multiplication, and a final clear image is generated from the fused features through one further layer of convolution.
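The two-branch enhancement of claim 7 can be sketched in numpy as follows. This only illustrates the dataflow (two feature branches, element-wise dot-product fusion, one final convolution); the random kernels and toy single-channel image stand in for the trained branch networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3(x, k):
    """Naive zero-padded 'same' 3x3 convolution on a single-channel image."""
    h, w = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * k)
    return out

img = rng.random((32, 32))                       # toy degraded underwater frame

# random kernels stand in for the trained color and texture branches
k_color, k_texture, k_final = (rng.standard_normal((3, 3)) * 0.1 for _ in range(3))

f_color = np.maximum(conv3x3(img, k_color), 0)      # branch 1: color correction
f_texture = np.maximum(conv3x3(img, k_texture), 0)  # branch 2: texture/sharpness

fused = f_color * f_texture        # element-wise (dot-product) feature fusion
enhanced = conv3x3(fused, k_final) # one further convolution yields the output
```

A real implementation would use multi-channel trained convolutions, but the branch-split, multiply, and final-convolution stages follow the claim's structure.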
8. The control method of the working underwater robot for sea cucumber fishing according to claim 6, wherein identifying and continuously tracking the sea cucumber to be caught, while locating its pixel coordinates in real time, by using the MobileNet-Transformer-GCN-based sea cucumber identification and tracking algorithm on the real-time acquired lower visual image specifically comprises the following steps:
zooming the real-time acquired lower visual image to obtain a zoomed lower visual image;
inputting the zoomed lower visual image sequentially into a first lightweight module, a second lightweight module, a third lightweight module, a first Transformer-GCN module, a fourth lightweight module, a second Transformer-GCN module, a fifth lightweight module and a global pooling module, and outputting a characteristic diagram;
mapping the characteristic graph to obtain a prediction result of the sea cucumber to be caught; the prediction result comprises a target position, a target category and a confidence coefficient;
inputting the feature map into a full-connection module to obtain a depth identity feature;
extracting gradient histogram features from the zoomed lower visual image as artificial identity features;
mapping the artificial identity features to dimensions the same as the depth identity features by principal component analysis;
fusing the mapped artificial identity features and the depth identity features to obtain fused identity features;
inputting the fused identity characteristics into a filtering module, and calculating the response value of each detection target in the zoomed lower visual image;
and selecting the detection target with the maximum response value in the zoomed lower visual image as the currently tracked sea cucumber to be caught.
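The identity-feature fusion and response selection of claim 8 can be sketched as follows, with random toy features standing in for the network outputs; the feature dimensions, the SVD-based PCA projection and the additive fusion are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

N_DET, DEEP_DIM, HOG_DIM = 20, 16, 36   # assumed detection count and dimensions

deep_feats = rng.random((N_DET, DEEP_DIM))   # toy depth identity features
hog_feats = rng.random((N_DET, HOG_DIM))     # toy gradient-histogram features

# PCA via SVD: map the hand-crafted features to the deep-feature dimension
centered = hog_feats - hog_feats.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
hog_mapped = centered @ vt[:DEEP_DIM].T      # shape (N_DET, DEEP_DIM)

fused = deep_feats + hog_mapped              # fused identity features

template = fused[0]                # identity template of the tracked target
responses = fused @ template       # correlation-style response per detection
tracked = int(np.argmax(responses))          # detection with maximum response
```

The claimed filtering module would compute responses against a learned template rather than the first detection; the argmax selection matches the final step of claim 8.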
9. The control method for the working underwater robot for sea cucumber fishing according to claim 6, wherein the pixel coordinates are converted into world coordinates through binocular stereo matching to obtain the three-dimensional position of the sea cucumber to be fished, and specifically comprises the following steps:
calibrating the cameras by adopting the Zhang Zhengyou calibration method to obtain the internal reference matrix and the distortion coefficients of the cameras;
converting the lower visual image pixels to a camera coordinate system using the internal reference matrix;
correcting the lower visual image pixels in the camera coordinate system through the distortion coefficients, and converting the corrected pixels back into the pixel coordinate system;
calculating the distance D from a point in space to the camera plane using the formula D = fT/d; wherein f is the focal length obtained by calibration, T is the baseline distance between the two cameras of the binocular pair, and d is the disparity value;
according to the distance from a point in space to the camera plane, converting the pixel coordinates into world coordinates using the triangulation formulas X = x1·T/d, Y = y1·T/d, Z = fT/d, so as to obtain the three-dimensional position of the sea cucumber to be caught; wherein (X, Y, Z) are the three-dimensional position coordinates of the sea cucumber to be caught, and (x1, y1) and (x2, y2) are its pixel coordinates in the images captured by the two cameras of the binocular pair, the disparity being d = x1 − x2.
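The depth and triangulation relations of claim 9 can be checked numerically; the focal length and baseline below are assumed calibration values, and pixel coordinates are taken relative to the principal point:

```python
# assumed calibration values, for illustration only
f = 800.0   # focal length in pixels, from Zhang's calibration
T = 0.06    # baseline between the two cameras, in metres

def triangulate(x1, y1, x2):
    """Stereo triangulation: Z = f*T/d, X = x1*T/d, Y = y1*T/d, with d = x1 - x2.
    Pixel coordinates are assumed relative to the principal point."""
    d = x1 - x2                       # disparity between the two images
    if d <= 0:
        raise ValueError("disparity must be positive")
    return x1 * T / d, y1 * T / d, f * T / d

X, Y, Z = triangulate(x1=420.0, y1=310.0, x2=400.0)   # d = 20 px -> Z = 2.4 m
```

A disparity of 20 pixels with these assumed values places the sea cucumber 2.4 m from the camera plane.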
10. The control method for the sea cucumber catching-oriented working underwater robot as claimed in claim 6, wherein the Actor-Critic-based reinforcement learning model controls the working underwater robot to move according to the path and to suspend after the working underwater robot runs to a target point, specifically comprises:
acquiring the current motion attitude of the operation type underwater robot, and transmitting the current motion attitude into the online policy network in the action network; the current motion attitude comprises a yaw angle, a pitch angle, a roll angle, three-dimensional coordinates, an angular velocity and a linear velocity;
calculating a current reward value according to the reward function of the Actor-Critic reinforcement learning model and the path; the reward function is R = r0 − ρ1‖Δφ, Δθ, Δψ‖2 − ρ2‖Δx, Δy, Δz‖2; wherein Δx, Δy, Δz are the three-dimensional coordinate state quantities; Δφ, Δθ, Δψ are the yaw angle, pitch angle and roll angle state quantities respectively; r0 is a constant reward term; ρ1‖Δφ, Δθ, Δψ‖2 is the weighted two-norm of the relative orientation error; ρ2‖Δx, Δy, Δz‖2 is the weighted two-norm of the relative position error; and ρ1 and ρ2 are a first coefficient and a second coefficient respectively;
fusing the current reward value and the state function into a training sample and adding the training sample into the sample space; the state function is s = [g, Δx, Δy, Δz, Δφ, Δθ, Δψ, u, v, w, p, q, r]; wherein g is a state constant, u, v, w are the linear velocity state quantities, p, q, r are the angular velocity state quantities, and s is the state function;
fusing the samples in the sample space through a Gaussian mixture model and compressing the sample space; the Gaussian mixture model is P(x) = Σ_{k=1}^{K} α_k N(x | μ_k, σ_k); wherein P(x) is the compressed sample space, K is the number of sample classes in the sample space before compression, α_k is the class distribution probability, μ_k and σ_k are the class mean and class variance respectively, (R_i, s_i) is the i-th sample in the sample space before compression, and N is the Gaussian density function of the sub-model;
inputting the compressed samples in the sample space into a target-behavior network in an evaluation network, performing gradient calculation, and updating parameters of the online state-behavior network;
optimizing and updating parameters of an online policy network in the action network through the gradient obtained by evaluating network calculation, and updating parameters of a target policy network through the gradient of the online policy network after accumulating the gradient for multiple times;
generating a new state function through the online policy network in the action network, the new state function being used for controlling the mechanical arm and the propeller;
and circulating the steps until convergence, so that the motion result of the operation type underwater robot conforms to the path.
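The reward function and the Gaussian-mixture compression of the sample space in claim 10 can be sketched as follows; the reward constants, the one-dimensional toy sample space and the EM fitting loop are illustrative assumptions, not the patented parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

r0, rho1, rho2 = 1.0, 0.5, 0.5       # assumed reward constants

def reward(d_orient, d_pos):
    """R = r0 - rho1*||dphi, dtheta, dpsi||_2 - rho2*||dx, dy, dz||_2"""
    return r0 - rho1 * np.linalg.norm(d_orient) - rho2 * np.linalg.norm(d_pos)

# toy one-dimensional sample space: two clusters of training samples
samples = np.concatenate([rng.normal(-1.0, 0.2, 200), rng.normal(2.0, 0.3, 200)])

K = 2                                # number of sample classes after compression
mu = np.array([-2.0, 3.0])           # initial class means
sigma = np.array([1.0, 1.0])         # initial class standard deviations
alpha = np.full(K, 1.0 / K)          # class distribution probabilities

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

for _ in range(50):                              # EM fitting of the mixture
    resp = alpha * gauss(samples[:, None], mu, sigma)   # E-step responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    nk = resp.sum(axis=0)                               # M-step updates
    mu = (resp * samples[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (samples[:, None] - mu) ** 2).sum(axis=0) / nk)
    alpha = nk / len(samples)

compressed = mu                      # K class centres replace the 400 raw samples
```

Replacing raw samples with the K fitted class centres is what shrinks the replay sample space before the Critic gradient step.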
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210183134.6A CN114248893B (en) | 2022-02-28 | 2022-02-28 | Operation type underwater robot for sea cucumber fishing and control method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114248893A true CN114248893A (en) | 2022-03-29 |
CN114248893B CN114248893B (en) | 2022-05-13 |
Family
ID=80796982
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210183134.6A Active CN114248893B (en) | 2022-02-28 | 2022-02-28 | Operation type underwater robot for sea cucumber fishing and control method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114248893B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114700947A (en) * | 2022-04-20 | 2022-07-05 | 中国科学技术大学 | Robot based on visual-touch fusion and grabbing system and method thereof |
CN114739389A (en) * | 2022-05-17 | 2022-07-12 | 中国船舶科学研究中心 | Deep sea operation type cable controlled submersible underwater navigation device and use method thereof |
CN114973391A (en) * | 2022-06-30 | 2022-08-30 | 北京万里红科技有限公司 | Eyeball tracking method, device and equipment applied to metacarpal space |
CN115009478A (en) * | 2022-06-15 | 2022-09-06 | 江苏科技大学 | Intelligent underwater fishing robot and fishing method thereof |
CN116062130A (en) * | 2022-12-20 | 2023-05-05 | 昆明理工大学 | Shallow water underwater robot based on full degree of freedom |
CN116243720A (en) * | 2023-04-25 | 2023-06-09 | 广东工业大学 | AUV underwater object searching method and system based on 5G networking |
CN116255908A (en) * | 2023-05-11 | 2023-06-13 | 山东建筑大学 | Underwater robot-oriented marine organism positioning measurement device and method |
CN116405644A (en) * | 2023-05-31 | 2023-07-07 | 湖南开放大学(湖南网络工程职业学院、湖南省干部教育培训网络学院) | Remote control system and method for computer network equipment |
CN117029838A (en) * | 2023-10-09 | 2023-11-10 | 广东电网有限责任公司阳江供电局 | Navigation control method and system for underwater robot |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130096549A (en) * | 2012-02-22 | 2013-08-30 | 한국과학기술원 | Jellyfish-polyp removal robot using remotely operated vehicle |
KR20140013209A (en) * | 2012-07-20 | 2014-02-05 | 삼성중공업 주식회사 | Subsea equipment, underwater operation system and underwater operation method |
CN106780356A (en) * | 2016-11-15 | 2017-05-31 | 天津大学 | Image defogging method based on convolutional neural networks and prior information |
CN107146248A (en) * | 2017-04-27 | 2017-09-08 | 杭州电子科技大学 | A kind of solid matching method based on double-current convolutional neural networks |
CN107977671A (en) * | 2017-10-27 | 2018-05-01 | 浙江工业大学 | A kind of tongue picture sorting technique based on multitask convolutional neural networks |
CN111062990A (en) * | 2019-12-13 | 2020-04-24 | 哈尔滨工程大学 | Binocular vision positioning method for underwater robot target grabbing |
CN112809703A (en) * | 2021-02-10 | 2021-05-18 | 中国人民解放军国防科技大学 | Bottom sowing sea cucumber catching robot based on ESRGAN enhanced super-resolution and CNN image recognition |
CN113500610A (en) * | 2021-07-19 | 2021-10-15 | 浙江大学台州研究院 | Underwater harvesting robot |
CN113561178A (en) * | 2021-07-30 | 2021-10-29 | 燕山大学 | Intelligent grabbing device and method for underwater robot |
WO2022021804A1 (en) * | 2020-07-28 | 2022-02-03 | 谈斯聪 | Underwater robot device and underwater regulation and control management optimization system and method |
Also Published As
Publication number | Publication date |
---|---|
CN114248893B (en) | 2022-05-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114248893B (en) | Operation type underwater robot for sea cucumber fishing and control method thereof | |
CN112258618B (en) | Semantic mapping and positioning method based on fusion of prior laser point cloud and depth map | |
US12072705B2 (en) | Intelligent decision-making method and system for unmanned surface vehicle | |
CN108491880B (en) | Object classification and pose estimation method based on neural network | |
CN111602517B (en) | Distributed visual active perception method for string-type fruits and application of distributed visual active perception method | |
Wang et al. | 360sd-net: 360 stereo depth estimation with learnable cost volume | |
CN109702741B (en) | Mechanical arm vision grasping system and method based on self-supervision learning neural network | |
CN111666998B (en) | Endoscope intelligent intubation decision-making method based on target point detection | |
CN111738261A (en) | Pose estimation and correction-based disordered target grabbing method for single-image robot | |
CN113553943B (en) | Target real-time detection method and device, storage medium and electronic device | |
CN116255908B (en) | Underwater robot-oriented marine organism positioning measurement device and method | |
Li et al. | Learning view and target invariant visual servoing for navigation | |
CN112418171A (en) | Zebra fish spatial attitude and heart position estimation method based on deep learning | |
CN111831010A (en) | Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice | |
CN114998573A (en) | Grabbing pose detection method based on RGB-D feature depth fusion | |
CN110866548A (en) | Infrared intelligent matching identification and distance measurement positioning method and system for insulator of power transmission line | |
Wang et al. | An adaptive and online underwater image processing algorithm implemented on miniature biomimetic robotic fish | |
Charco et al. | Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem. | |
Sleaman et al. | Indoor mobile robot navigation using deep convolutional neural network | |
CN116359910A (en) | Multi-view collaborative tracking method and device for fast moving target under low illumination condition | |
CN115810188A (en) | Method and system for identifying three-dimensional pose of fruit on tree based on single two-dimensional image | |
Zhang et al. | Underwater autonomous grasping robot based on multi-stage cascade DetNet | |
CN118333909B (en) | Sea surface scene-oriented multi-view image acquisition and preprocessing system and method | |
Zheng et al. | Policy-based monocular vision autonomous quadrotor obstacle avoidance method | |
Reskó et al. | Artificial neural network based object tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||