CN113084817A - Object searching and grabbing control method of underwater bionic robot in turbulent flow environment


Info

Publication number
CN113084817A
CN113084817A (application CN202110406037.4A)
Authority
CN
China
Prior art keywords
underwater
bionic robot
moment
control
robot
Prior art date
Legal status
Granted
Application number
CN202110406037.4A
Other languages
Chinese (zh)
Other versions
CN113084817B (en)
Inventor
王宇
王睿
蔡明学
王硕
谭民
马进
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN202110406037.4A
Publication of CN113084817A
Application granted
Publication of CN113084817B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/1628: Programme controls characterised by the control loop
    • B25J9/1633: Programme controls characterised by the control loop compliant, force, torque control, e.g. combined with position control
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by motion, path, trajectory planning
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems
    • Y02A90/30: Technologies for adaptation to climate change; assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention belongs to the field of underwater bionic robot control, and particularly relates to an object searching and grabbing control method of an underwater bionic robot in a turbulent flow environment, aiming at solving the problem that existing underwater bionic robots have difficulty searching for and grabbing objects in a turbulent flow environment. The method comprises: extracting the position and contour of obstacles; acquiring a visual image and detecting whether it contains an object to be grabbed; if so, solving the expected motion angle of each joint of the mechanical arm, calculating the disturbance force of the mechanical arm during grabbing, and compensating the disturbance force in the control of the underwater bionic robot body to obtain the total control input force/moment; otherwise, acquiring an external turbulence estimate, and from it the control input force/moment at times t+1 to t+N; and acquiring the control quantities of the bionic webs on the two sides, and controlling the underwater bionic robot to realize object searching or grabbing control. The invention realizes object search and autonomous grabbing of the underwater bionic robot in a turbulent flow environment and improves control robustness.

Description

Object searching and grabbing control method of underwater bionic robot in turbulent flow environment
Technical Field
The invention belongs to the field of underwater bionic robot control, and particularly relates to an object searching and grabbing control method, system and device of an underwater bionic robot in a turbulent flow environment.
Background
With the continuous deepening of human exploration of the marine environment and the growing demand for its development, underwater operation robots, as important tools for marine development, are widely applied to underwater object retrieval, archaeology, emergency rescue, marine product harvesting and the like. Improving the autonomy and intelligence of underwater operation robots, developing autonomous underwater operation robots, and researching autonomous operation control methods so as to realize autonomous underwater operation form the research vision for underwater operation robots in the future.
In order to improve the autonomy and intelligence of the underwater bionic robot, the robot should have a certain autonomous navigation capability, be able to search for a target from its current initial point, and then complete the operation task. At present, navigation of underwater bionic robots mainly includes vision-based underwater SLAM (Simultaneous Localization And Mapping) and underwater Terrain-Aided Navigation (TAN) based on underwater acoustic equipment. Among vision-based underwater SLAM navigation technologies, for example: in 2008, researchers at the University of Girona in Spain used a SLAM algorithm based on extended Kalman filtering to navigate over submarine topography; Hong et al in 2016 proposed an underwater SLAM method based on monocular vision, which focused on solving the loop-closure problem when determining the relative motion of a robot through image matching. However, due to the particularity of the underwater environment, in practical applications underwater images are affected by the lighting and turbidity of the underwater environment, are accompanied by large noise and interference, and often need image enhancement for correction. In addition, the underwater environment is often time-varying, which degrades map-matching accuracy and makes the application of vision-based underwater SLAM autonomous navigation difficult.
Underwater terrain navigation based on underwater acoustic devices applies prior-map technology. For example: Song et al proposed a novel underwater topography matching method for terrain navigation, improving the matching accuracy through an underwater digital terrain map and real-time depth measurements from a multi-beam sonar; Chowdhary proposed a method for underwater terrain navigation planning using a priori sparse map, by means of which an underwater bionic robot can rapidly perform map matching with data acquired by equipment such as sonar and a multi-beam Doppler velocimeter; Kim et al proposed a seabed-terrain-following method based on supervised learning, which follows terrain changes in real time according to preset map track points using a sonar installed at the bottom of the underwater bionic robot, achieving the purpose of navigation. However, underwater terrain navigation based on a prior map needs a relatively accurate digital map, and the underwater bionic robot must carry a high-precision navigation system and underwater acoustic sensors to obtain good navigation precision. In addition, underwater terrain map matching may accumulate errors due to ocean currents; the resulting accuracy depends on the navigation system, the underwater acoustic sensors, the accuracy of the map and the variation of the terrain, and these accumulated errors are the main factor restricting the development of prior-map-based underwater terrain navigation technology.
When an underwater autonomous operation task is carried out in a real water area, on the one hand, the underwater operation robot needs underwater navigation technology to determine the relative position of the underwater target object; on the other hand, when the target object is grabbed, turbulent flow affects the control precision of the end position of the robot's mechanical arm. Research on underwater grabbing control is a key technology for realizing autonomous operation of underwater operation robots, in particular hovering operation control in a disturbed-flow state.
The underwater operation robot is a highly redundant, strongly coupled nonlinear system, and its complex dynamic model and the uncertainty of the unknown hydrodynamic model affect the precision and effect of grabbing control and autonomous grabbing. Existing grabbing control methods for underwater operation robots mainly comprise body-arm coordinated control based on redundant kinematics, body-arm decoupled control based on a dynamic model, and model-free grabbing control. However, although research on autonomous operation control methods based on redundant kinematics has made great progress, most of the literature mainly uses simulation results to show the effectiveness of the proposed methods, and platform experimental verification is lacking. When an autonomous operation task is executed in a marine environment, the complexity of the underwater environment makes it difficult to establish an accurate dynamic model of the underwater operation robot, time-varying disturbed flow exists externally, and the effect of body-arm decoupled control based on a dynamic model is therefore limited. Based on the above, the invention provides an object searching and grabbing control method of an underwater bionic robot in a disturbed-flow environment.
Disclosure of Invention
In order to solve the above problems in the prior art, that is, to solve the problem that the existing underwater biomimetic robot is difficult to search for and autonomously grasp an object in a turbulent environment, the first aspect of the present invention provides an object search and grasp control method for an underwater biomimetic robot in a turbulent environment, the method comprising:
s10, acquiring a sonar image within a set angle-scanning distance range in front of the underwater bionic robot through an obstacle avoidance sonar, and extracting the position and the outline of an obstacle after preprocessing the sonar image;
s20, acquiring a visual image acquired by a front camera of the underwater bionic robot, detecting whether the visual image contains an object to be grabbed, if so, acquiring the position of the object to be grabbed, and jumping to the step S50; otherwise, jumping to step S30;
s30, generating a corresponding obstacle avoidance route by combining the position and contour information of the obstacle and a preset search path; during driving forward according to the obstacle avoidance route, based on state information and control input force/moment at the moment t of the underwater bionic robot, obtaining an external disturbed flow estimated value observed at the current moment through a pre-constructed disturbed flow observer based on a radial basis function neural network; the state information comprises position, quality, speed, acceleration and course; t represents the current time;
s40, inputting the external turbulence estimation value, the control input force/moment and the state information of the underwater bionic robot at the moment t into a pre-constructed LSTM prediction model, acquiring the control input force/force rejection of the underwater bionic robot at the moment t +1-t + N as a first force/moment, and jumping to the step S60;
s50, based on the position of an object to be grabbed, combining the outline and position information of an obstacle, solving the expected motion angle of each joint of the mechanical arm of the underwater bionic robot through reverse motion, calculating the disturbance force of the mechanical arm in the grabbing process by utilizing a Newton-Euler model, compensating the disturbance force into the control of the underwater bionic robot body, and obtaining the total control input force/moment as a first force/moment;
s60, based on the first force/moment of the underwater bionic robot, constructing a mapping relation between the kinematic control quantity and the control parameters through fuzzy reasoning to obtain bionic web control quantities on two sides; controlling the underwater bionic robot to realize object searching or grabbing control according to the control quantity of the bionic webs on the two sides;
s70, executing steps S10-S60 in a circulating mode until all the objects to be grabbed are obtained.
In some preferred embodiments, the preprocessing comprises: performing threshold segmentation, median filtering, erosion and dilation processing on the sonar image.
In some preferred embodiments, the dynamic model of the underwater biomimetic robot is:
M(η)η̈ + C(η̇)η̇ + D(η̇)η̇ = u + d

wherein η = [x, y, z, ψ]ᵀ denotes the position and heading of the underwater bionic robot, η̇ and η̈ respectively denote the velocity and acceleration of the underwater bionic robot, M(η) denotes the mass and added-mass matrix, C(η̇) denotes the Coriolis and centripetal force matrix, D(η̇) denotes the linear damping matrix, u denotes the control input force/moment, and d denotes the time-varying disturbance.
In some preferred embodiments, the disturbed-flow observer RBF-DOB based on the radial basis function neural network is constructed as shown in formulas (4)-(8) of the detailed description, wherein σ is an internal variable of the DOB, σ̇ denotes the first derivative of the internal variable of the DOB, d̂ is the estimated value of the observed external turbulence, f(·) denotes a nonlinear function, Φ is an invertible matrix, Γ is the gain of the DOB, F̂ is the approximation of the system's unknown hydrodynamic parameters, Ŵ denotes the adaptive weights of the RBF network, and h(·) denotes the Gaussian-function output of the hidden layer of the RBF network.
In some preferred embodiments, the disturbance compensation of the mechanical arm to the underwater bionic robot body is calculated by:

^B f_B = ^B R_0 ^0 f_0

^B τ_B = ^B R_0 ^0 τ_0 + ^B P_{B→0} × ^B R_0 ^0 f_0

wherein the feedforward-compensated arm disturbance amount applied to the bionic body is formed from ^B f_B and ^B τ_B, ^B R_0 is the (3×3) rotation matrix from coordinate system 0 to coordinate system B according to the D-H parameters, ^B P_{B→0} is the (3×1) position column vector, expressed in coordinate system B, from the origin of coordinate system B to the origin of coordinate system 0, and ^i f_i and ^i τ_i respectively denote the generalized force and moment of link i in coordinate system i.
In some preferred embodiments, the RBF neural network in the disturbed-flow observer RBF-DOB is:

Y = W*ᵀ h(ξ) + δ

h_i = exp(−‖ξ − c_i‖² / (2b²)), i = 1, 2, ..., n

wherein ξ denotes the input vector of the RBF neural network, i indexes the nodes of the intermediate hidden layer, h_i denotes the Gaussian-function output of the i-th node in the hidden layer of the RBF neural network, c denotes the center vector of the Gaussian function, b is the basis width, W* = [w₁, w₂, ..., w_n]ᵀ denotes the ideal weight vector from the hidden layer to the output layer of the RBF neural network, and δ is the approximation error of the RBF neural network.
In a second aspect of the present invention, an object searching and grasping control system for an underwater biomimetic robot in a turbulent flow environment is provided, the system comprising: the device comprises an obstacle extraction module, an object to be grabbed detection module, an external turbulence estimation value calculation module, an LSTM prediction module, a grabbing disturbance compensation module, a control quantity acquisition module and a circulation module;
the obstacle extraction module is configured to acquire a sonar image within a set angle-scanning distance range in front of the underwater bionic robot through obstacle avoidance sonar, and extract the position and the outline of an obstacle after preprocessing the sonar image;
the object to be grabbed detection module is configured to acquire the visual image collected by the front camera of the underwater bionic robot and detect whether the visual image contains an object to be grabbed; if so, the position of the object to be grabbed is acquired and processing jumps to the grabbing disturbance compensation module; otherwise, processing jumps to the external disturbed flow estimated value calculation module;
the external disturbed flow estimated value calculation module is configured to generate a corresponding obstacle avoidance route by combining the position and contour information of the obstacles with a preset search path, and, while the robot advances along the obstacle avoidance route, to obtain the external disturbed-flow estimate observed at the current moment through a pre-constructed disturbed-flow observer based on a radial basis function neural network, based on the state information and control input force/moment of the underwater bionic robot at time t; the state information comprises position, mass, velocity, acceleration and heading; t represents the current time;
the LSTM prediction module is configured to input the external turbulence estimate, the control input force/moment and the state information of the underwater bionic robot at time t into a pre-constructed LSTM prediction model, obtain the control input force/moment of the underwater bionic robot at times t+1 to t+N as the first force/moment, and jump to the control quantity acquisition module;
the grabbing disturbance compensation module is configured to solve the expected motion angle of each joint of the mechanical arm of the underwater bionic robot through inverse kinematics, based on the position of the object to be grabbed and in combination with the contour and position information of the obstacles, calculate the disturbance force of the mechanical arm during the grabbing process using the Newton-Euler model, and compensate the disturbance force in the control of the underwater bionic robot body to obtain the total control input force/moment as the first force/moment;
the control quantity acquisition module is configured to construct a mapping relation between the kinematic control quantity and the control parameters through fuzzy reasoning based on the first force/moment of the underwater bionic robot to obtain the control quantity of the bionic webbings on two sides; controlling the underwater bionic robot to realize object searching or grabbing control according to the control quantity of the bionic webs on the two sides;
the circulating module is configured to circularly execute the obstacle extracting module and the control quantity acquiring module until all the targets to be grabbed are obtained.
In a third aspect of the present invention, a storage device is provided, in which a plurality of programs are stored, and the programs are adapted to be loaded and executed by a processor to implement the above-mentioned object searching and grabbing control method for an underwater biomimetic robot in a turbulent flow environment.
In a fourth aspect of the present invention, a processing apparatus is provided, which includes a processor, a storage device; a processor adapted to execute various programs; a storage device adapted to store a plurality of programs; the program is suitable for being loaded and executed by a processor to realize the object searching and grabbing control method of the underwater bionic robot in the turbulent flow environment.
The invention has the beneficial effects that:
the invention realizes the object search and the autonomous grabbing of the underwater bionic robot in the turbulent flow environment and improves the control robustness.
(1) The method constructs a turbulent flow observer based on a radial basis function neural network, and the RBF neural network approaches unknown hydrodynamic parameters in the DOB, so that the accuracy of turbulent flow estimation is improved;
(2) from the aspect of dynamic control, the influence of the motion of the mechanical arm on the body is compensated by respectively controlling the motion of the underwater bionic robot body and the motion of the mechanical arm and utilizing a Newton-Euler model, decoupling control is realized, and the robustness of the control of the underwater bionic robot is improved;
(3) the method combines a long short-term memory (LSTM) network with nonlinear model predictive control, and uses the NMPC rolling optimization strategy with the nonlinear optimization algorithm C/GMRES to solve, in each control cycle, for the current optimal control input and control sequence that minimize the objective function while satisfying the constraints, thereby improving the object searching and grabbing efficiency of the underwater bionic robot.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of an object searching and grabbing control method of an underwater bionic robot in a disturbed flow environment according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a frame of an object searching and grasping control system of an underwater biomimetic robot in a disturbed flow environment according to an embodiment of the present invention;
FIG. 3 is a detailed framework schematic diagram of an autonomous object grabbing method under a turbulent flow environment of an underwater bionic robot according to an embodiment of the invention;
FIG. 4 is a schematic diagram of sonar operation and imaging according to one embodiment of the present invention;
fig. 5 is a schematic diagram of a sonar image preprocessing flow according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an imaging process and image filtering results of a pool wall of a sonar scanning laboratory according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an underwater object search strategy according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an RBF-DOB spoiler observer according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an RBF neural network according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an LSTM predictive model network framework according to an embodiment of the present invention;
FIG. 11 is a schematic representation of an ROS based underwater simulation environment of an embodiment of the present invention;
fig. 12 is a schematic view illustrating a target object searched by binocular vision in a tour process of the underwater bionic robot according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of a sensor data variation curve during an underwater target search process according to an embodiment of the present invention;
FIG. 14 is a schematic diagram of a sequence of video screenshots for searching an underwater target object in accordance with an embodiment of the present invention;
FIG. 15 is a diagram illustrating a state variation curve of a robot according to an embodiment of the present invention;
FIG. 16 is a schematic diagram of a sequence of automatically capturing video shots by an object in a turbulent flow environment according to an embodiment of the present invention;
FIG. 17 is a schematic diagram illustrating an estimated spoiler value during a process of searching for and grabbing an object according to an embodiment of the present invention;
fig. 18 is a schematic diagram illustrating changes in joint angles of a robot arm during a process of gripping a target object according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The invention relates to an object searching and grabbing control method of an underwater bionic robot in a disturbed flow environment, which comprises the following steps of:
s10, acquiring a sonar image within a set angle-scanning distance range in front of the underwater bionic robot through an obstacle avoidance sonar, and extracting the position and the outline of an obstacle after preprocessing the sonar image;
s20, acquiring a visual image acquired by a front camera of the underwater bionic robot, detecting whether the visual image contains an object to be grabbed, if so, acquiring the position of the object to be grabbed, and jumping to the step S50; otherwise, jumping to step S30;
s30, generating a corresponding obstacle avoidance route by combining the position and contour information of the obstacle and a preset search path; during driving forward according to the obstacle avoidance route, based on state information and control input force/moment at the moment t of the underwater bionic robot, obtaining an external disturbed flow estimated value observed at the current moment through a pre-constructed disturbed flow observer based on a radial basis function neural network; the state information comprises position, quality, speed, acceleration and course; t represents the current time;
s40, inputting the external turbulence estimation value, the control input force/moment and the state information of the underwater bionic robot at the moment t into a pre-constructed LSTM prediction model, acquiring the control input force/force rejection of the underwater bionic robot at the moment t +1-t + N as a first force/moment, and jumping to the step S60;
s50, based on the position of an object to be grabbed, combining the outline and position information of an obstacle, solving the expected motion angle of each joint of the mechanical arm of the underwater bionic robot through reverse motion, calculating the disturbance force of the mechanical arm in the grabbing process by utilizing a Newton-Euler model, compensating the disturbance force into the control of the underwater bionic robot body, and obtaining the total control input force/moment as a first force/moment;
s60, based on the first force/moment of the underwater bionic robot, constructing a mapping relation between the kinematic control quantity and the control parameters through fuzzy reasoning to obtain bionic web control quantities on two sides; controlling the underwater bionic robot to realize object searching or grabbing control according to the control quantity of the bionic webs on the two sides;
s70, executing steps S10-S60 in a circulating mode until all the objects to be grabbed are obtained.
In order to more clearly explain the object searching and grabbing control method of the underwater bionic robot in the disturbed flow environment, the following will expand the detailed description of the steps in one embodiment of the method in accordance with the accompanying drawings.
S10, acquiring a sonar image within a set angle-scanning distance range in front of the underwater bionic robot through an obstacle avoidance sonar, and extracting the position and the outline of an obstacle after preprocessing the sonar image;
in this embodiment, the obstacle is detected by the obstacle avoidance sonar, which is specifically as follows:
s11, scanning the obstacle in front of the robot according to a fixed stepping angle by using a mechanical scanning sonar Micro-DST;
and S12, generating sonar images of underwater obstacles by analyzing the echo intensity (Bin value) and return time of each Ping of sonar data. Fig. 4 illustrates the principle of obstacle detection by the DST sonar: the sonar emits sound waves at a certain frequency within the set scanning angle; when a sound wave encounters an obstacle, the intensity (Bin value) of the returned wave is large, and the screen color scale displays different colors according to the different returned Bin values. By calculating the return time and azimuth angle of the sound wave, the azimuth information of the obstacle is determined and can be displayed on an acoustic radar map.
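To make the geometry concrete, the sketch below converts one Ping's echo-intensity profile into Cartesian obstacle points in the robot's body frame. It is only an illustrative reading of the principle described above; the linear bin-to-range mapping, the threshold handling, and the data layout are assumptions, not the patented implementation.

```python
import numpy as np

def ping_to_points(bearing_rad, bins, bin_threshold, max_range):
    """Convert one Ping's echo intensities (Bin values) into obstacle points.

    bearing_rad   : scan angle of this Ping relative to the robot's heading
    bins          : 1-D array of echo intensities; index ~ time of flight
    bin_threshold : intensities below this are treated as background noise
    max_range     : scanning-distance limit used to suppress multipath returns
    """
    bins = np.asarray(bins, dtype=float)
    # Each bin represents a range; assume uniform spacing out to max_range.
    ranges = np.linspace(0.0, max_range, num=bins.size)
    # Keep only strong returns inside the limited scanning distance.
    r = ranges[bins >= bin_threshold]
    # Polar (range, bearing) -> Cartesian in the robot's body frame.
    x = r * np.cos(bearing_rad)
    y = r * np.sin(bearing_rad)
    return np.stack([x, y], axis=1)
```

Accumulating these points over a full scan yields the sonar image from which the obstacle contour is extracted in step S14.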
And S13, limiting the scanning distance of the sonar, performing threshold segmentation, median filtering, erosion and dilation operations on the sonar image, and removing the background noise of the sonar image. Specifically:
the scanning distance is defined. In the underwater object searching process, the sonar scanning distance can be limited according to the requirements of the working range, so that the phenomenon that one point corresponds to multiple return distances due to the multipath effect is avoided, and unreasonable noise information is removed.
Threshold segmentation, namely calculating a threshold of the Bin values in each Ping of data, as shown in formula (1):

Bin_Threshold = Bin_min + (Bin_max − Bin_min)/2 (1)

wherein Bin_Threshold is the desired threshold of the Bin values in the Ping data, Bin_max is the maximum Bin value in the Ping data, and Bin_min is the minimum Bin value in the Ping data; Bin values smaller than the threshold are discarded, and Bin values larger than the threshold are retained.
Median filtering, namely calculating the median-filter output of the two-dimensional sonar image, as shown in formula (2):

g(x, y) = Med{f(x − k, y − l)}, (k, l) ∈ Ω (2)

wherein g(·) denotes the image after median filtering, f(·) denotes the original image, Ω is the two-dimensional filtering template, x and y respectively denote the pixel coordinates of the sonar image, k and l respectively denote the indexes within the two-dimensional filtering template, and Med denotes taking the median.
Erosion and dilation, namely subjecting the sonar image to erosion and dilation processing; erosion and dilation can be applied alternately.
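Taken together, formulas (1)-(2) and the morphological operations amount to a short image pipeline. A minimal OpenCV sketch is given below, assuming an 8-bit grayscale sonar image; the kernel sizes and the single erode/dilate pass are illustrative choices, not values fixed by the patent.

```python
import cv2
import numpy as np

def preprocess_sonar_image(img):
    """Threshold segmentation (1), median filtering (2), then erosion and dilation."""
    # Formula (1): threshold halfway between the minimum and maximum Bin value.
    bin_min, bin_max = float(img.min()), float(img.max())
    threshold = bin_min + (bin_max - bin_min) / 2.0
    _, seg = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)

    # Formula (2): median filtering over a two-dimensional template (5x5 assumed).
    filtered = cv2.medianBlur(seg, 5)

    # Erosion then dilation to remove residual speckle noise.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.dilate(cv2.erode(filtered, kernel), kernel)
```

The obstacle contours and positions of step S14 can then be extracted from the cleaned image, for example with cv2.findContours.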
In addition, fig. 5 exemplarily shows the sonar image processing flow. The filtering effect on sonar images is illustrated as follows: an experiment verifying sonar obstacle scanning and image processing was carried out in a laboratory pool, with the sonar scanning angle set to 180° and the maximum scanning distance limited to 5 m, scanning the pool wall to build an image. The laboratory pool is approximately 5 m long and 4 m wide, with the robot positioned in the pool as shown in fig. 6(g). Figs. 6(a)-6(e) show the imaging of the pool wall during the 180° sonar scan, and fig. 6(f) shows the contour map of the pool wall obtained by the sonar image processing procedure. Fig. 7 shows the underwater search trajectory of the bionic robot: the robot performs heading control using inertial navigation and detects obstacles ahead through the sonar; when there are multiple search targets in the underwater region, after grabbing one target the underwater bionic robot continues to search for and grab the remaining targets along the search trajectory.
S14: and extracting the outline and position information of the obstacle in the sonar image.
S20, acquiring a visual image acquired by a front camera of the underwater bionic robot, detecting whether the visual image contains an object to be grabbed, if so, acquiring the position of the object to be grabbed, and jumping to the step S50; otherwise, jumping to step S30;
in the embodiment, the target to be captured is identified and positioned through the visual image acquired by the front camera of the underwater bionic robot. The method comprises the steps of acquiring a visual image acquired by a front camera of the underwater bionic robot, detecting whether the visual image contains an object to be grabbed, if not, searching for the object, otherwise, grabbing the target object.
S30, generating a corresponding obstacle avoidance route by combining the position and contour information of the obstacles with a preset search path; while advancing along the obstacle avoidance route, based on the state information and control input force/moment of the underwater bionic robot at time t, obtaining the external disturbed-flow estimate observed at the current moment through a pre-constructed disturbed-flow observer based on a radial basis function neural network; the state information comprises position, mass, velocity, acceleration and heading; t represents the current time;
in the embodiment, the underwater bionic robot detects the obstacle through the obstacle avoidance sonar, predicts and follows the change of the underwater terrain, keeps a relatively proper distance with the underwater terrain all the time, and identifies and positions the underwater target object by using the vision system according to the search track. The method comprises the following specific steps:
s31, generating a corresponding obstacle avoidance route by combining the position and contour information of the obstacle and a preset search path, and driving the underwater bionic robot to move forward according to the obstacle avoidance route;
and S32, estimating external turbulence in real time in the process that the underwater bionic robot drives to advance according to the obstacle avoidance route. According to the method, an external disturbed flow estimated value is obtained through a pre-constructed disturbed flow observer RBF-DOB based on a radial basis function neural network, wherein the DOB can estimate current disturbed flow information through a robot state and a control action and correct the estimated value by using a difference value between an estimated output disturbed flow value and an actual output disturbed flow value; the RBF neural network mainly approximates unknown hydrodynamic parameters in the DOB, improving the accuracy of the spoiler estimation, as shown in fig. 8. The method comprises the following specific steps:
s321, constructing a dynamic model of the underwater bionic robot, as shown in formula (3):
Figure BDA0003022372450000131
wherein eta is [ x, y, z, psi ═ x, y, z, phi]Indicating the position and heading of the robot,
Figure BDA0003022372450000132
respectively representing the velocity and acceleration of the robot, M (eta) representing a mass and an additional mass matrix,
Figure BDA0003022372450000133
a matrix of coriolis forces and centripetal forces is represented,
Figure BDA0003022372450000134
a linear damping matrix is represented, u represents the control input force/torque, and d represents the disturbance over time.
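For illustration, formula (3) can be solved for the acceleration as a forward-dynamics routine. The sketch below uses diagonal placeholder matrices for a 4-DOF state η = [x, y, z, ψ]; the actual matrices depend on the vehicle's hydrodynamic identification, so every numeric value here is an assumption.

```python
import numpy as np

def forward_dynamics(eta_dot, M, C, D, u, d):
    """Solve formula (3), M*eta_ddot + C*eta_dot + D*eta_dot = u + d, for eta_ddot."""
    rhs = u + d - (C + D) @ eta_dot
    return np.linalg.solve(M, rhs)

M = np.diag([30.0, 30.0, 35.0, 8.0])        # mass + added mass (assumed values)
C = np.diag([0.5, 0.5, 0.6, 0.1])           # Coriolis/centripetal terms (assumed)
D = np.diag([6.0, 6.0, 7.0, 1.5])           # linear damping (assumed)
eta_dot = np.array([0.2, 0.0, 0.0, 0.05])   # current velocity and yaw rate
u = np.array([5.0, 0.0, 0.0, 0.2])          # control input force/moment
d = np.array([0.5, 0.4, 0.0, 0.7])          # external disturbance
eta_ddot = forward_dynamics(eta_dot, M, C, D, u, d)
```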
S322, designing the RBF-DOB disturbed-flow observer according to the dynamic model of the underwater bionic robot, as shown in formulas (4), (5), (6), (7) and (8), wherein σ is an internal variable of the DOB, σ̇ denotes the first derivative of the internal variable of the DOB, d̂ is the estimated value of the observed external turbulence, f(·) denotes a nonlinear function, Φ is an invertible matrix, Γ is the gain of the DOB, F̂ is the approximation of the system's unknown hydrodynamic parameters, Ŵ denotes the adaptive weights of the RBF network, and h(·) denotes the Gaussian-function output of the hidden layer of the RBF network.
S323, designing the RBF neural network used in the RBF-DOB disturbed-flow observer, as shown in formulas (9) and (10):

h_i = exp(−‖ξ − c_i‖² / (2b²)), i = 1, 2, ..., n (9)

Y = W*ᵀ h(ξ) + δ (10)

wherein ξ denotes the input vector of the RBF neural network, i indexes the nodes of the intermediate hidden layer, h_i denotes the Gaussian-function output of the i-th node in the hidden layer, c denotes the center vector of the Gaussian function, b is the basis width, W* = [w₁, w₂, ..., w_n]ᵀ denotes the ideal weight vector from the hidden layer to the output layer, and δ is the approximation error of the network. Fig. 9 is a schematic structural diagram of the RBF neural network.
The state information and control input force/moment of the underwater bionic robot at the current time t are input into the constructed turbulence observer based on the radial basis function neural network to obtain the external turbulence estimate observed at the current moment, wherein the state information includes position, mass, velocity, acceleration and heading.
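Because formulas (4)-(8) appear in the original publication only as images, the following sketch shows merely the general shape of such an observer: an internal variable σ is integrated so that the estimate d̂ tracks the disturbance implied by formula (3), with an RBF network approximating the unknown hydrodynamic term online. The observer form d̂ = σ + Γ·M·η̇, the gradient-style weight update, and all gains are assumptions made for illustration, not the patent's exact equations.

```python
import numpy as np

def rbf_hidden(xi, centers, b):
    """Gaussian hidden-layer outputs, cf. formula (9)."""
    return np.exp(-np.sum((centers - xi) ** 2, axis=1) / (2.0 * b ** 2))

class RBFDisturbanceObserver:
    """Generic RBF-augmented disturbance observer (structure assumed)."""

    def __init__(self, M, gain, centers, b, adapt_rate, dim):
        self.M, self.gain = M, gain
        self.centers, self.b = centers, b
        self.adapt_rate = adapt_rate
        self.sigma = np.zeros(dim)                  # internal variable of the DOB
        self.W = np.zeros((centers.shape[0], dim))  # adaptive RBF weights

    def update(self, eta_dot, u, dt):
        # Estimate: internal variable plus a term proportional to the momentum M*eta_dot.
        d_hat = self.sigma + self.gain * (self.M @ eta_dot)
        # RBF approximation of the unknown hydrodynamic forces, cf. formula (10).
        h = rbf_hidden(eta_dot, self.centers, self.b)
        F_hat = self.W.T @ h
        # Internal dynamics: drive d_hat toward the disturbance implied by formula (3).
        self.sigma += dt * (-self.gain * (d_hat + u - F_hat))
        # Gradient-style adaptation of the RBF weights (assumed law).
        self.W += dt * self.adapt_rate * np.outer(h, d_hat)
        return d_hat
```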
S40, inputting the external turbulence estimate, the control input force/moment and the state information of the underwater bionic robot at time t into a pre-constructed LSTM prediction model, obtaining the control input force/moment of the underwater bionic robot at times t+1 to t+N as the first force/moment, and jumping to step S60;
in this embodiment, the LSTM prediction model is composed of a state prediction network and a turbulent flow prediction network, and is used for predicting the state of the NMPC system in a turbulent flow state. Fig. 10 is a schematic diagram of the LSTM prediction model network framework, and it can be seen from the diagram that the input of the LSTM prediction model network is the state η (k) of the system at the current time k, and the control input u (k) is [ u ], (k) ]* 0(k),u(k+1|k),...,u(k+N-1|k)]Estimated turbulence value
Figure BDA0003022372450000151
The value of η (k) can be obtained by a sensor, and u (k) is a sequence of control quantities obtained by NMPC roll optimization. And after being obtained by using the RBF-DOB turbulence observer, the turbulence estimation value is input into the turbulence prediction network, so that a turbulence prediction value sequence at N moments in the future is obtained, and the turbulence prediction value sequence is input into the state prediction network according to the time sequence. The state prediction network predicts the state value of the system at each moment according to the state of the system, the control input sequence and the disturbed flow prediction sequence, and the actual state value is obtainedAnd predicting the state sequence at N moments in the future.
According to the LSTM prediction model network, the prediction states obtained with each sampling time as the starting point of prediction are as shown in formulas (11), (12), (13) and (14), with the initial condition

η(k|k) = η(k) (13)

wherein k is the sampling time, u(k+i−1|k) and η(k+i|k) respectively denote the control input and state of the system, d̂(k+i−1|k) denotes the disturbance estimate, i = 1, ..., N+1 is the prediction-state index, h(·) and h′(·) denote nonlinear activation functions, and W_η, W_u, W_d, W′, W″ denote the corresponding weight matrices.
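The recursion of formulas (11)-(14) can be sketched as a rollout of the two networks. The single tanh layers below stand in for the trained LSTM cells, and all weight shapes are illustrative assumptions; only the data flow (turbulence sequence predicted first, then fed step by step into the state prediction) follows the description above.

```python
import numpy as np

def predict_horizon(eta_k, u_seq, d_hat_k, nets, N):
    """Roll the turbulence and state prediction networks N steps forward.

    eta_k   : measured state at time k (formula (13): eta(k|k) = eta(k))
    u_seq   : control sequence of length N from the NMPC rolling optimization
    d_hat_k : RBF-DOB turbulence estimate at time k
    nets    : dict of weight matrices W_p, W_eta, W_u, W_d (assumed shapes)
    """
    d_pred, eta_pred = [d_hat_k], [eta_k]
    for i in range(N):
        # Turbulence prediction network: propagate the disturbance sequence.
        d_next = np.tanh(nets["W_p"] @ d_pred[-1])
        # State prediction network: current state, control input, predicted turbulence.
        eta_next = np.tanh(nets["W_eta"] @ eta_pred[-1]
                           + nets["W_u"] @ u_seq[i]
                           + nets["W_d"] @ d_next)
        d_pred.append(d_next)
        eta_pred.append(eta_next)
    return np.array(eta_pred[1:]), np.array(d_pred[1:])
```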
The state sequence of the robot at future moments is obtained through the LSTM prediction model network, and with the NMPC rolling optimization strategy the nonlinear optimization algorithm C/GMRES is used to solve, in each control period, for the current optimal control input and control sequence that minimize the objective function while satisfying the constraints, as shown in formulas (15) and (16), where the objective takes the form

J(k) = Σ_{j=1}^{N} ‖r(k+j|k) − η(k+j|k)‖²_Q + Σ_{j=1}^{N_u} ‖Δu(k+j−1|k)‖²_R (15)

and formula (16) expresses the constraints on the control input, wherein r(k+j|k) denotes the desired state reference value from sampling time k, η(k+j|k) denotes the state prediction output from sampling time k, Δu(k+j|k) = u(k+j|k) − u(k+j−1|k) denotes the increment of the control input, N (N ≥ 1) denotes the prediction horizon, N_u (N ≥ N_u ≥ 1) denotes the control horizon, and Q and R are symmetric positive definite weight matrices.
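A direct transcription of the tracking objective read into formula (15) is shown below; Q, R and the horizons are placeholders, and the C/GMRES solver that actually minimizes this cost subject to the constraints of formula (16) is not reproduced.

```python
import numpy as np

def nmpc_cost(r_ref, eta_pred, u_seq, u_prev, Q, R, Nu):
    """Weighted tracking error over N steps plus weighted control increments over Nu."""
    J = 0.0
    for j in range(len(eta_pred)):        # prediction horizon N
        e = r_ref[j] - eta_pred[j]
        J += e @ Q @ e
    u_last = u_prev
    for j in range(Nu):                   # control horizon Nu
        du = u_seq[j] - u_last            # delta-u(k+j|k) = u(k+j|k) - u(k+j-1|k)
        J += du @ R @ du
        u_last = u_seq[j]
    return J
```

In each control period the rolling optimization minimizes this cost and applies only the first element of the optimal control sequence.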
The controlled object is subjected to experiments to collect data samples for training the LSTM prediction model network. A mathematical model is used in simulation to generate turbulence values; the RBF-DOB observes these to obtain a turbulence estimate at each sampling moment; the turbulence prediction network in the LSTM prediction model network is trained with these estimates, and the training is organized according to the prediction-horizon length designed for the system. For example, the following settings may be made: the data samples are divided into a training group and a test group. Because the dimensions of the input sample data are not uniform, the data samples are normalized to facilitate gradient computation and accelerate convergence. The training model is finally determined by analyzing the loss value of each training cycle during the training process.
The state prediction sequence and control input sequence of the system at each moment are obtained through NMPC simulation experiments and combined with the obtained turbulence prediction sequence to form training sample data, with which the state prediction network is trained offline.
S50, based on the position of the object to be grabbed and combining the contour and position information of the obstacles, solving the expected motion angle of each joint of the mechanical arm of the underwater bionic robot through inverse kinematics, calculating the disturbance force of the mechanical arm during the grabbing process using the Newton-Euler model, and compensating the disturbance force in the control of the underwater bionic robot body to obtain the total control input force/moment as the first force/moment;
in the embodiment, when an object to be grabbed is detected through a front camera of the underwater bionic robot, the position of the object to be grabbed is obtained, the contour and the position information of an obstacle-avoiding sonar are combined, the expected motion angle of each joint of the mechanical arm of the underwater bionic robot is solved through reverse motion, the disturbance force of the mechanical arm in the grabbing process is calculated by utilizing a Newton-Euler model, the disturbance force is compensated to the control of the underwater bionic robot body, and the total control input force/moment is obtained and serves as the first force/moment.
The disturbance compensation of the robot arm to the underwater bionic robot body comprises the following specific calculation processes:
s51, calculating the force/moment of the underwater bionic robot mechanical arm base according to the Newton-Euler method, as shown in the formula (17) (18):
BfBB R0 0f0 (17)
BτBB R0 0τ0+B PB→0×B R0 0f0 (18)
wherein the content of the first and second substances,BR0is a (3 × 3) rotation matrix from coordinate system 0 to B according to D-H parameters;BPB→0is from the origin of the coordinate system B to the origin of the coordinate system 0, and represents a (3 × 1) position column vector in the coordinate system B;ifiiτirespectively representing the generalized force and moment of the connecting rod i under the coordinate system i.
S52, calculating the mechanical arm disturbance force and moment compensated to the bionic body in a feedforward mode according to the force and moment of the mechanical arm base, wherein the mechanical arm disturbance force and moment compensated to the bionic body in a feedforward mode are shown in a formula (19):
Figure BDA0003022372450000171
wherein the content of the first and second substances,
Figure BDA0003022372450000172
the mechanical arm disturbance amount of the bionic body is compensated for feedforward.
S60, based on the first force/moment of the underwater bionic robot, constructing a mapping relation between the kinematic control quantity and the control parameters through fuzzy reasoning to obtain bionic web control quantities on two sides; controlling the underwater bionic robot to realize object searching or grabbing control according to the control quantity of the bionic webs on the two sides;
in the embodiment, a mapping relation between the dynamic control quantity and the control parameters of the underwater bionic robot is established based on fuzzy reasoning to obtain the control quantities of the bionic webs on two sides; and controlling the underwater bionic robot according to the control quantity of the bionic webs on the two sides to realize object searching and grabbing control.
S70, executing steps S10-S60 in a circulating mode until all the objects to be grabbed are obtained.
In this embodiment, when there are a plurality of search targets in the underwater region, after one of the targets is grabbed by the underwater biomimetic robot, the underwater biomimetic robot continues to search for the remaining targets according to the search trajectory and grab the remaining targets.
In addition, in order to verify the effectiveness of the invention, underwater target object searching and grabbing simulation experiments were carried out in an ROS-based underwater simulation environment. Fig. 11 shows the constructed ROS-based underwater simulation environment, in which several underwater target objects are arranged on terrain with varying relief; the robot performs target search using binocular vision through the underwater object search strategy and then completes grabbing of the target objects in the disturbed-flow environment. The disturbed-flow environment is constructed by applying to the underwater operation robot a constant disturbance (amplitude 0.5), a sinusoidally varying disturbance (amplitude 0.4) and a constant disturbance (amplitude 0.7) in two translational directions and the yaw direction, respectively. Fig. 12 exemplarily shows target object search through binocular vision during the robot's tour. Fig. 13 shows the variation curves of the sensor data during the underwater target search. Fig. 14 shows an exemplary sequence of video screenshots of the underwater target object search. The robot of this embodiment keeps a distance of about 1 m from the underwater terrain at all times and can search for objects along the search trajectory in the underwater environment.
Figs. 15 and 16 respectively show the state variation curves of the underwater operation robot during autonomous grabbing in the disturbed-flow environment and a video screenshot sequence. Fig. 17 shows the turbulence values in three directions estimated with the RBF-DOB turbulence observer; it can be seen that the estimated values substantially conform to the set turbulence values. Fig. 18 shows the angle change of each joint of the mechanical arm during the grabbing process. It can be seen that when the target object appears in the workspace of the mechanical arm, the gripper at the end of the arm starts to move, the relative position of the arm end and the target object is computed in real time, and finally the object is grabbed.
An object search and capture control system of an underwater biomimetic robot in a disturbed flow environment according to a second embodiment of the present invention, as shown in fig. 2, specifically includes: the system comprises an obstacle extraction module 100, an object to be grabbed detection module 200, an external turbulence estimation value calculation module 300, an LSTM prediction module 400, a grabbing disturbance compensation module 500, a control quantity acquisition module 600 and a circulation module 700;
the obstacle extraction module 100 is configured to acquire a sonar image within a set angle-scanning distance range in front of the underwater bionic robot through obstacle avoidance sonar, and extract the position and the outline of an obstacle after preprocessing the sonar image;
the object to be grabbed detection module 200 is configured to acquire the visual image collected by the front camera of the underwater bionic robot and detect whether the visual image contains an object to be grabbed; if so, the position of the object to be grabbed is acquired and processing jumps to the grabbing disturbance compensation module 500; otherwise, processing jumps to the external disturbed flow estimated value calculation module 300;
the external disturbed flow estimated value calculation module 300 is configured to generate a corresponding obstacle avoidance route by combining the position and contour information of the obstacles with a preset search path, and, while the robot advances along the obstacle avoidance route, to obtain the external disturbed-flow estimate observed at the current moment through a pre-constructed disturbed-flow observer based on a radial basis function neural network, based on the state information and control input force/moment of the underwater bionic robot at time t; the state information comprises position, mass, velocity, acceleration and heading; t represents the current time;
the LSTM prediction module 400 is configured to input the external turbulence estimate, the control input force/moment and the state information of the underwater bionic robot at time t into a pre-constructed LSTM prediction model, obtain the control input force/moment of the underwater bionic robot at times t+1 to t+N as the first force/moment, and jump to the control quantity acquisition module 600;
the grabbing disturbance compensation module 500 is configured to solve the expected motion angle of each joint of the mechanical arm of the underwater bionic robot through inverse kinematics, based on the position of the object to be grabbed and in combination with the contour and position information of the obstacles, calculate the disturbance force of the mechanical arm during the grabbing process using the Newton-Euler model, and compensate the disturbance force in the control of the underwater bionic robot body to obtain the total control input force/moment as the first force/moment;
the control quantity obtaining module 600 is configured to construct a mapping relation between a kinematic control quantity and control parameters through fuzzy reasoning based on the first force/moment of the underwater bionic robot to obtain bionic web control quantities on two sides; controlling the underwater bionic robot to realize object searching or grabbing control according to the control quantity of the bionic webs on the two sides;
the loop module 700 is configured to loop the obstacle extraction module 100-the control amount acquisition module 600 until all the targets to be grabbed are obtained.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again.
It should be noted that, the object searching and grasping control system of the underwater biomimetic robot in the disturbed flow environment provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical applications, the above functions may be allocated to different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the above embodiment may be combined into one module, or may be further split into a plurality of sub-modules, so as to complete all or part of the above described functions. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
A storage device according to a third embodiment of the present invention stores therein a plurality of programs, which are suitable for being loaded by a processor and implementing the above-described object search and capture control method for an underwater biomimetic robot in a turbulent flow environment.
A processing apparatus according to a fourth embodiment of the present invention includes a processor, a storage device; a processor adapted to execute various programs; a storage device adapted to store a plurality of programs; the program is suitable for being loaded and executed by a processor to realize the object searching and grabbing control method of the underwater bionic robot in the turbulent flow environment.
It can be clearly understood by those skilled in the art that, for convenience and brevity, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method examples, and are not described herein again.
It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (9)

1. An object searching and grabbing control method of an underwater bionic robot in a disturbed flow environment is characterized by comprising the following steps:
s10, acquiring a sonar image within a set angle-scanning distance range in front of the underwater bionic robot through an obstacle avoidance sonar, and extracting the position and the outline of an obstacle after preprocessing the sonar image;
s20, acquiring a visual image acquired by a front camera of the underwater bionic robot, detecting whether the visual image contains an object to be grabbed, if so, acquiring the position of the object to be grabbed, and jumping to the step S50; otherwise, jumping to step S30;
s30, generating a corresponding obstacle avoidance route by combining the position and contour information of the obstacle and a preset search path; during driving forward according to the obstacle avoidance route, based on state information and control input force/moment at the moment t of the underwater bionic robot, obtaining an external disturbed flow estimated value observed at the current moment through a pre-constructed disturbed flow observer based on a radial basis function neural network; the state information comprises position, quality, speed, acceleration and course; t represents the current time;
s40, inputting the external turbulence estimation value, the control input force/moment and the state information of the underwater bionic robot at the moment t into a pre-constructed LSTM prediction model, acquiring the control input force/force rejection of the underwater bionic robot at the moment t +1-t + N as a first force/moment, and jumping to the step S60;
s50, based on the position of an object to be grabbed, combining the outline and position information of an obstacle, solving the expected motion angle of each joint of the mechanical arm of the underwater bionic robot through reverse motion, calculating the disturbance force of the mechanical arm in the grabbing process by utilizing a Newton-Euler model, compensating the disturbance force into the control of the underwater bionic robot body, and obtaining the total control input force/moment as a first force/moment;
s60, based on the first force/moment of the underwater bionic robot, constructing a mapping relation between the kinematic control quantity and the control parameters through fuzzy reasoning to obtain bionic web control quantities on two sides; controlling the underwater bionic robot to realize object searching or grabbing control according to the control quantity of the bionic webs on the two sides;
s70, executing steps S10-S60 in a circulating mode until all the objects to be grabbed are obtained.
2. The method for controlling object searching and grabbing of the underwater bionic robot in the disturbed flow environment according to claim 1, wherein the preprocessing comprises: performing threshold segmentation, median filtering, erosion and dilation processing on the sonar image.
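For illustration only: a minimal sketch of the claim-2 preprocessing chain (threshold segmentation, median filtering, erosion, dilation) using OpenCV. The threshold value, kernel size, and iteration counts are assumptions.

import cv2
import numpy as np

def preprocess_sonar(img: np.ndarray) -> np.ndarray:
    # Threshold segmentation: keep strong echoes, suppress background.
    _, binary = cv2.threshold(img, 80, 255, cv2.THRESH_BINARY)
    # Median filtering removes the speckle noise typical of sonar returns.
    denoised = cv2.medianBlur(binary, 5)
    # Erosion followed by dilation removes small spurious blobs while
    # roughly preserving obstacle contours.
    kernel = np.ones((3, 3), np.uint8)
    eroded = cv2.erode(denoised, kernel, iterations=1)
    return cv2.dilate(eroded, kernel, iterations=1)

Obstacle positions and contours can then be read off the cleaned mask, for example with cv2.findContours.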
3. The method for controlling object searching and grabbing of the underwater biomimetic robot in the disturbed flow environment according to claim 1, wherein the dynamic model of the underwater biomimetic robot is:
$$M(\eta)\ddot{\eta} + C(\eta,\dot{\eta})\dot{\eta} + D(\eta,\dot{\eta})\dot{\eta} = u + d$$

wherein \(\eta = [x, y, z, \psi]^{T}\) represents the position and heading of the underwater bionic robot, \(\dot{\eta}\) and \(\ddot{\eta}\) respectively represent the velocity and acceleration of the underwater bionic robot, \(M(\eta)\) represents the mass and added-mass matrix, \(C(\eta,\dot{\eta})\) represents the matrix of Coriolis and centripetal forces, \(D(\eta,\dot{\eta})\) represents the linear damping matrix, \(u\) represents the control input force/moment, and \(d\) represents the time-varying disturbance value.
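For illustration only: under the claim-3 model the instantaneous disturbance is the model residual \(d = M(\eta)\ddot{\eta} + C(\eta,\dot{\eta})\dot{\eta} + D(\eta,\dot{\eta})\dot{\eta} - u\), computable from measured velocity and acceleration. The 4-DOF matrices and sample values below are assumptions, not identified parameters of the robot.

import numpy as np

M = np.diag([12.0, 12.0, 14.0, 3.0])   # mass + added mass (assumed)
C = np.zeros((4, 4))                   # Coriolis/centripetal, small at low speed
D = np.diag([6.0, 6.0, 8.0, 1.5])      # linear damping (assumed)

eta_d = np.array([0.3, 0.0, -0.05, 0.01])    # measured velocity [x y z psi]
eta_dd = np.array([0.02, 0.0, 0.0, 0.001])   # measured acceleration
u = np.array([2.5, 0.0, -0.2, 0.05])         # applied control force/moment

d = M @ eta_dd + C @ eta_d + D @ eta_d - u   # disturbance residual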
4. The method for controlling object searching and grabbing of the underwater bionic robot under the disturbed flow environment according to claim 3, wherein the disturbed flow observer RBF-DOB based on the radial basis function neural network is:
$$\hat{d} = \sigma + \Phi\dot{\eta}$$

$$\dot{\sigma} = -L\sigma - L\left(\Phi\dot{\eta} + u - \hat{f}(\eta,\dot{\eta})\right)$$

$$L = \Phi M(\eta)^{-1}$$

$$\hat{f}(\eta,\dot{\eta}) = \hat{W}^{T}h(\cdot)$$

wherein \(\sigma\) is an internal variable of the DOB and \(\dot{\sigma}\) represents its first derivative, \(\hat{d}\) is the observed estimated value of the external disturbed flow, \(f(\cdot)\) represents a nonlinear equation, \(\Phi\) is an invertible matrix, \(L\) is the gain of the DOB, \(\hat{f}(\eta,\dot{\eta})\) is the approximation of the system's hydrodynamic parameters, \(\hat{W}\) represents the adaptive weights of the RBF network, and \(h(\cdot)\) represents the Gaussian function output of the hidden layer of the RBF network.
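For illustration only: one discrete-time update of a disturbance observer in the standard form sketched above (internal state \(\sigma\), estimate \(\hat{d} = \sigma + \Phi\dot{\eta}\), gain \(L = \Phi M^{-1}\)), with an RBF network standing in for the hydrodynamic approximation. Phi, the step size, and the RBF parameters are assumptions rather than the patent's tuned values.

import numpy as np

def rbf_approx(eta_d, centers, widths, weights):
    # Gaussian hidden layer followed by a linear output layer.
    h = np.exp(-np.sum((eta_d - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    return weights.T @ h                 # approximated hydrodynamic term

def dob_step(sigma, eta_d, u, M, Phi, f_hat, dt):
    L = Phi @ np.linalg.inv(M)           # observer gain
    d_hat = sigma + Phi @ eta_d          # current disturbance estimate
    # Internal-state dynamics drive d_hat toward the true disturbance.
    sigma_dot = -L @ sigma - L @ (Phi @ eta_d + u - f_hat)
    return sigma + dt * sigma_dot, d_hat

M = np.diag([12.0, 12.0, 14.0, 3.0])
Phi = 2.0 * np.eye(4)
eta_d = np.array([0.3, 0.0, -0.05, 0.01])
u = np.array([2.5, 0.0, -0.2, 0.05])
f_hat = rbf_approx(eta_d, centers=np.zeros((5, 4)),
                   widths=np.ones(5), weights=np.zeros((5, 4)))
sigma, d_hat = dob_step(np.zeros(4), eta_d, u, M, Phi, f_hat, dt=0.01)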
5. The method for controlling object searching and grabbing of the underwater bionic robot under the disturbed flow environment according to claim 1, wherein the disturbance compensation of the robot arm to the underwater bionic robot body is calculated by:
$${}^{B}f_{B} = {}^{B}R_{0}\,{}^{0}f_{0}$$

$${}^{B}\tau_{B} = {}^{B}R_{0}\,{}^{0}\tau_{0} + {}^{B}P_{B\to 0} \times \left({}^{B}R_{0}\,{}^{0}f_{0}\right)$$

wherein \([{}^{B}f_{B}^{T},\,{}^{B}\tau_{B}^{T}]^{T}\) is the feedforward compensation of the mechanical-arm disturbance applied to the bionic body, \({}^{B}R_{0}\) is the 3×3 rotation matrix from coordinate system 0 to coordinate system B obtained from the D-H parameters, \({}^{B}P_{B\to 0}\) is the 3×1 position column vector, expressed in coordinate system B, from the origin of coordinate system B to the origin of coordinate system 0, and \({}^{i}f_{i}\) and \({}^{i}\tau_{i}\) respectively represent the generalized force and moment of link \(i\) in coordinate system \(i\).
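For illustration only: a numeric sketch of the claim-5 wrench transformation, rotating the arm's base reaction force/moment from coordinate system 0 into the body frame B and adding the lever-arm moment. The rotation matrix, frame offset, and wrench values below are assumptions.

import numpy as np

B_R_0 = np.eye(3)                        # rotation from frame 0 to frame B (D-H)
B_P_B0 = np.array([0.25, 0.0, -0.05])    # origin of frame 0 expressed in B (m)

f_0 = np.array([1.2, -0.4, 3.0])         # arm base reaction force in frame 0
tau_0 = np.array([0.1, 0.05, -0.02])     # arm base reaction moment in frame 0

B_f_B = B_R_0 @ f_0                                # force mapped into frame B
B_tau_B = B_R_0 @ tau_0 + np.cross(B_P_B0, B_f_B)  # moment plus lever-arm term
# Stacking [B_f_B, B_tau_B] gives the feedforward term compensated into
# the control of the robot body.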
6. The method for controlling object searching and grabbing of the underwater bionic robot under the disturbed flow environment according to claim 1, wherein the RBF neural network in the disturbed flow observer RBF-DOB is:
$$Y = W^{*T}h(\xi) + \delta$$

$$h_{i} = \exp\left(-\frac{\lVert \xi - c_{i} \rVert^{2}}{2b_{i}^{2}}\right),\quad i = 1, 2, \ldots, n$$

wherein \(\xi\) represents the input vector of the RBF neural network, \(i\) denotes the \(i\)-th node of the intermediate hidden layer, \(h_{i}\) represents the Gaussian function output of the \(i\)-th node of the hidden layer of the RBF neural network, \(c_{i}\) represents the center vector of the Gaussian function, \(b_{i}\) is the baseband width, \(W^{*} = [w_{1}, w_{2}, \ldots, w_{n}]^{T}\) represents the ideal weight vector from the hidden layer to the output layer of the RBF neural network, \(\delta\) is the approximation error of the RBF neural network, and \(Y\) represents the true value approximated by the RBF neural network.
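For illustration only: a minimal sketch of the claim-6 forward pass, \(Y \approx W^{*T}h(\xi)\) with Gaussian hidden units. The node count, centers, widths, and weights are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, in_dim = 7, 4
centers = rng.uniform(-1.0, 1.0, size=(n_nodes, in_dim))   # c_i
widths = np.full(n_nodes, 0.8)                             # b_i
W = rng.normal(size=(n_nodes, 1))                          # hidden-to-output

def rbf_forward(xi):
    # Gaussian hidden layer h_i = exp(-||xi - c_i||^2 / (2 b_i^2)).
    h = np.exp(-np.sum((xi - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    return W.T @ h                      # approximates Y up to the error delta

y = rbf_forward(np.array([0.3, -0.1, 0.05, 0.0]))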
7. An object searching and grabbing control system of an underwater bionic robot in a disturbed flow environment, characterized in that the system comprises: an obstacle extraction module, an object to be grabbed detection module, an external disturbed flow estimated value calculation module, an LSTM prediction module, a grabbing disturbance compensation module, a control quantity acquisition module and a loop module;
the obstacle extraction module is configured to acquire a sonar image within a set scanning angle and distance range in front of the underwater bionic robot through an obstacle avoidance sonar, and to extract the position and contour of obstacles after preprocessing the sonar image;
the object to be grabbed detection module is configured to obtain a visual image captured by the front camera of the underwater bionic robot and detect whether the visual image contains an object to be grabbed; if so, it acquires the position of the object to be grabbed and jumps to the grabbing disturbance compensation module; otherwise, it jumps to the external disturbed flow estimated value calculation module;
the external disturbed flow estimated value calculation module is configured to generate a corresponding obstacle avoidance route by combining the position and contour information of the obstacles with a preset search path; while navigating forward along the obstacle avoidance route, it obtains an estimated value of the external disturbed flow observed at the current moment through a pre-constructed disturbed flow observer based on a radial basis function neural network, using the state information and the control input force/moment of the underwater bionic robot at time t; the state information comprises position, mass, velocity, acceleration and heading; t represents the current time;
the LSTM prediction module is configured to input the external disturbed flow estimated value, the control input force/moment and the state information of the underwater bionic robot at time t into a pre-constructed LSTM prediction model, obtain the control input forces/moments of the underwater bionic robot at times t+1 to t+N as the first force/moment, and jump to the control quantity acquisition module;
the grabbing disturbance compensation module is configured to solve the expected motion angles of the joints of the mechanical arm of the underwater bionic robot through inverse kinematics, based on the position of the object to be grabbed and the contour and position information of the obstacles, calculate the disturbance force of the mechanical arm during the grabbing process using a Newton-Euler model, and compensate the disturbance force into the control of the underwater bionic robot body to obtain the total control input force/moment as the first force/moment;
the control quantity acquisition module is configured to construct a mapping relation between the kinematic control quantity and the control parameters through fuzzy inference, based on the first force/moment of the underwater bionic robot, to obtain the control quantities of the bionic webs on both sides, and to control the underwater bionic robot according to the control quantities of the bionic webs on both sides to realize object searching or grabbing control;
the loop module is configured to cyclically execute the modules from the obstacle extraction module through the control quantity acquisition module until all the objects to be grabbed are obtained.
8. A storage device in which a plurality of programs are stored, wherein the programs are adapted to be loaded and executed by a processor to implement the object searching and grabbing control method of the underwater bionic robot in the turbulent flow environment according to any one of claims 1-6.
9. A processing device, comprising a processor and a storage device; the processor is adapted to execute programs; the storage device is adapted to store a plurality of programs; and the programs are adapted to be loaded and executed by the processor to implement the object searching and grabbing control method of the underwater bionic robot in the turbulent flow environment according to any one of claims 1-6.
CN202110406037.4A 2021-04-15 2021-04-15 Object searching and grabbing control method of underwater robot in turbulent flow environment Active CN113084817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110406037.4A CN113084817B (en) 2021-04-15 2021-04-15 Object searching and grabbing control method of underwater robot in turbulent flow environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110406037.4A CN113084817B (en) 2021-04-15 2021-04-15 Object searching and grabbing control method of underwater robot in turbulent flow environment

Publications (2)

Publication Number Publication Date
CN113084817A true CN113084817A (en) 2021-07-09
CN113084817B CN113084817B (en) 2022-08-19

Family

ID=76677899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110406037.4A Active CN113084817B (en) 2021-04-15 2021-04-15 Object searching and grabbing control method of underwater robot in turbulent flow environment

Country Status (1)

Country Link
CN (1) CN113084817B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7296530B1 (en) * 2005-12-12 2007-11-20 United States Of America As Represented By The Secretary Of The Navy Unmanned system for underwater object inspection, identification and/or neutralization
KR20120129002A * 2011-05-18 2012-11-28 Pusan National University Industry-University Cooperation Foundation Underwater robot and Method for controlling the same
CN102902271A * 2012-10-23 2013-01-30 Shanghai University Binocular vision-based robot target identifying and gripping system and method
CN103869824A * 2014-03-05 2014-06-18 Changzhou Campus of Hohai University Biological antenna model-based multi-robot underwater target searching method and device
WO2016165316A1 * 2015-04-15 2016-10-20 Shanghai Maritime University Underwater body device of underwater robot and method for autonomous obstacle avoidance
CN106708069A * 2017-01-19 2017-05-24 Institute of Automation of Chinese Academy of Science Coordinated planning and control method of underwater mobile operation robot
KR20190106093A * 2018-03-07 2019-09-18 Hydrobot Tech & Research Co., Ltd. Data collection apparatus for exploring seabed
KR20190136386A * 2018-05-30 2019-12-10 POSTECH Academy-Industry Foundation Underwater Robot and Method for Sampling of Weight Object in Underwater
CN108858199A * 2018-07-27 2018-11-23 Institute of Automation of Chinese Academy of Science Vision-based method for a service robot to grasp a target object
CN110488847A * 2019-08-09 2019-11-22 Institute of Automation of Chinese Academy of Science Visual servo based hovering control method, system and device for a bionic underwater robot
KR20210034895A * 2019-09-23 2021-03-31 POSTECH Academy-Industry Foundation Method and Underwater Robot for Scan Route Setting of Underwater Object using Acoustic Camera
CN110969158A * 2019-11-06 2020-04-07 Institute of Automation of Chinese Academy of Science Target detection method, system and device based on underwater operation robot vision
CN111652118A * 2020-05-29 2020-09-11 Dalian Maritime University Marine product autonomous grabbing guiding method based on underwater target neighbor distribution
CN112318508A * 2020-08-14 2021-02-05 Dalian Maritime University Method for evaluating strength of underwater robot-manipulator system subjected to ocean current disturbance

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MINGXUE CAI et al., "Grasping Marine Products With Hybrid-Driven Underwater Vehicle-Manipulator System", IEEE Transactions on Automation Science and Engineering *
SHEN Xiong, XU Guohua, et al., "Target Search Research for an Underwater Robot with Deficient Positioning", China Mechanical Engineering *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115229780A * 2021-10-18 2022-10-25 CloudMinds Robotics Co., Ltd. Mechanical arm motion path planning method and device
CN114800487A * 2022-03-14 2022-07-29 Institute of Automation of Chinese Academy of Science Underwater robot operation control method based on disturbance observation technology
CN114800487B * 2022-03-14 2024-02-02 Institute of Automation of Chinese Academy of Science Underwater robot operation control method based on disturbance observation technology
CN115303455A * 2022-09-16 2022-11-08 Peking University Underwater bionic robot motion control method, device, equipment and storage medium
CN117055586A * 2023-06-28 2023-11-14 Institute of Automation of Chinese Academy of Science Underwater robot tour search and grabbing method and system based on self-adaptive control
CN117055586B * 2023-06-28 2024-05-14 Institute of Automation of Chinese Academy of Science Underwater robot tour search and grabbing method and system based on self-adaptive control

Also Published As

Publication number Publication date
CN113084817B (en) 2022-08-19

Similar Documents

Publication Publication Date Title
CN113084817B (en) Object searching and grabbing control method of underwater robot in turbulent flow environment
CN110333739B (en) AUV (autonomous Underwater vehicle) behavior planning and action control method based on reinforcement learning
CN108319293B (en) UUV real-time collision avoidance planning method based on LSTM network
CN109239709B (en) Autonomous construction method for local environment map of unmanned ship
Silveira et al. An open-source bio-inspired solution to underwater SLAM
Mu et al. End-to-end navigation for autonomous underwater vehicle with hybrid recurrent neural networks
CN103776453A (en) Combination navigation filtering method of multi-model underwater vehicle
CN108008099A (en) A kind of pollution sources localization method
CN109579850B (en) Deepwater intelligent navigation method based on auxiliary inertial navigation to water velocity
CN111830978A (en) Under-actuated unmanned ship obstacle avoidance path planning and control method and system
Zeng et al. Exploiting ocean energy for improved AUV persistent presence: path planning based on spatiotemporal current forecasts
Oliveira et al. Three-dimensional mapping with augmented navigation cost through deep learning
Feng et al. Automatic tracking method for submarine cables and pipelines of AUV based on side scan sonar
Inzartsev et al. Underwater pipeline inspection method for AUV based on laser line recognition: Simulation results
Lin et al. The fuzzy-based visual intelligent guidance system of an autonomous underwater vehicle: realization of identifying and tracking underwater target objects
CN116958439B (en) Pipeline three-dimensional reconstruction method based on multi-sensor fusion in full water environment
Demim et al. NH∞-SLAM algorithm for autonomous underwater vehicle
CN113064422A (en) Autonomous underwater vehicle path planning method based on double neural network reinforcement learning
Ebert et al. Deep radar sensor models for accurate and robust object tracking
CN108459614B (en) UUV real-time collision avoidance planning method based on CW-RNN network
CN114609925B (en) Training method of underwater exploration strategy model and underwater exploration method of bionic machine fish
CN115690343A (en) Robot laser radar scanning and mapping method based on visual following
CN115373383A (en) Autonomous obstacle avoidance method and device for garbage recovery unmanned boat and related equipment
CN114384509A (en) Safe driving decision generation method supported by intelligent driving vehicle data
Cristi et al. Motion estimation and modeling of the environment for underwater vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant