CN115139315A - Grabbing motion planning method for picking mechanical arm - Google Patents

Grabbing motion planning method for picking mechanical arm

Info

Publication number
CN115139315A
CN115139315A (application CN202210884969.4A)
Authority
CN
China
Prior art keywords
picking
mechanical arm
planning
fruit
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210884969.4A
Other languages
Chinese (zh)
Inventor
王东
潘青慧
连捷
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202210884969.4A priority Critical patent/CN115139315A/en
Publication of CN115139315A publication Critical patent/CN115139315A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
        • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
            • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
                • B25J9/00 Programme-controlled manipulators
                    • B25J9/16 Programme controls
                        • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
                            • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
                        • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
                            • B25J9/1697 Vision controlled systems
                • B25J11/00 Manipulators not otherwise provided for
                • B25J15/00 Gripping heads and other end effectors
                    • B25J15/08 Gripping heads and other end effectors having finger members
                • B25J18/00 Arms
                • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
                    • B25J19/02 Sensing devices
                        • B25J19/021 Optical sensing devices
                            • B25J19/023 Optical sensing devices including video camera means

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a grabbing motion planning method for a picking mechanical arm. By dividing the working space of the mechanical arm and defining a pre-picking plane, the whole picking planning process is split into an off-line planning part and an on-line planning part, the off-line part being trained with the PPO reinforcement learning algorithm. First, the fruit positions are obtained by primary recognition at a wide viewing angle and projected onto the pre-picking plane to obtain the pre-picking points, and a trajectory to each pre-picking point is output by reinforcement learning. The off-line and on-line planning parts are then modeled together as a traveling salesman problem, and the optimal picking sequence is planned. Finally, the mechanical arm reaches the pre-picking points in the planned order, identifies the fruit stalks at a near viewing angle, and completes picking. Adopting off-line planning and a planned picking sequence significantly improves the success rate and picking efficiency of the picking robot.

Description

Grabbing motion planning method for picking mechanical arm
Technical Field
The invention belongs to the field of agricultural picking robots, and particularly relates to a grabbing motion planning method for a picking mechanical arm, suitable for automatic picking of sweet pepper (pimento) fruits in a plant factory or greenhouse.
Background
As the aging of the Chinese population accelerates, the demographic dividend is gradually disappearing, driving labor costs up, while the outflow of rural population against the background of new-type industrialization, urbanization and agricultural modernization has sharply reduced the agricultural labor force. China is a large agricultural country; every harvest season a great deal of labor is needed to pick fruits and vegetables such as tomatoes, sweet peppers, apples and oranges, and workers often have to pick through hot days, with high labor intensity and low picking efficiency. To reduce harvesting cost and improve harvesting efficiency, it is necessary to develop intelligent agriculture and fruit-and-vegetable picking robots with autonomous perception and accurate grasping capability. With the continuous development of machine vision, sensor and robot technologies, more and more intelligent picking robots are replacing humans in fruit-and-vegetable picking tasks, but their design still faces many problems that urgently need to be solved.
The first problem is that identification by the vision system is not accurate enough. The vision system plays the role of a person's eyes: only what is seen accurately can be grasped accurately. The agricultural picking environment is unstructured, and factors such as branch-and-leaf occlusion and illumination changes greatly affect the identification accuracy of fruit stems. In addition, fruit stems are much thinner than the fruits themselves and are difficult to identify accurately, so improving the robustness and accuracy of the identification algorithm is very necessary.
The second problem is that picking robots are inefficient and unreliable. A six-degree-of-freedom industrial mechanical arm is used for picking, and the arm is expected to reach the picking point on the fruit stem quickly and efficiently in a specific posture. The working space of the arm divides into a dexterous region, reachable in any posture, and a non-dexterous region. Because of the posture constraints during picking, in the non-dexterous region the inverse kinematics frequently has no solution, or has a solution that requires a large change in the arm's motion, so the fruit cannot be picked along the expected path. This leads to low picking efficiency and unreliable systems (low success rates), and ultimately to farmers refusing to use these picking robots. Therefore, ensuring that a picking robot can pick quickly, reliably and efficiently remains a pressing research need in the field of mobile picking.
Disclosure of Invention
The invention aims to overcome the influence of occlusion, illumination and thin fruit stalks on stem-identification accuracy in the picking environment, and to solve the problems of low efficiency and unreliability of picking robots. To this end, the invention provides a grabbing motion planning method for a picking mechanical arm.
The specific technical scheme of the invention is as follows:
a picking mechanical arm grabbing motion planning method comprises the following steps:
step 1, sensing a picking environment by using a binocular camera1 under a wide viewing angle, primarily identifying fruits, and identifying the number of the fruits and three-dimensional coordinates of each fruit under a camera coordinate system to obtain an initial picking point;
step 2, defining a pre-picking plane between the mechanical arm and the target fruit, and projecting the initial picking point obtained in the step 1 to the pre-picking plane to obtain a pre-picking point;
step 3, obtaining the track from the initial position to each pre-picking point of the mechanical arm according to off-line planning; modeling the fruits to be picked under the current visual angle by combining the offline planned track and the online plan, and planning the optimal picking sequence of the mechanical arms to reach the pre-picking points in sequence; the mechanical arm firstly reaches a pre-picking point after the picking sequence is planned, then carries out secondary recognition through the binocular camera2 under a near visual angle to obtain a corresponding target picking point, then carries out picking through online planning, returns to the pre-picking point after the picking is finished, and then reaches the next pre-picking point according to the planned picking sequence;
and 4, repeating the step 3 by the mechanical arm until all the pre-picking points in the pre-picking plane in the step 2 are traversed and returning to the initial position of the mechanical arm.
Further, yolov is adopted for the primary identification of the fruit in the step 1.
Further, the pre-picking plane in step 2 is set as follows: as shown in FIG. 10, the working space of the mechanical arm is a sphere of radius R cm; the pre-picking plane is 10-15 cm from the fruit, so its distance from the arm base coordinate system lies in (R-15, R-10), and its length is R cm and width 0.75R cm. The arrow indicates the position and attitude of the arm with respect to the pre-picking plane. The size of the pre-picking plane can be adjusted as required, and obvious changes or variations remain within the protection scope of the invention.
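To make the plane placement concrete, the following minimal Python sketch computes the quantities defined above; the reach R = 85 cm and the 12.5 cm fruit gap are purely illustrative values, not taken from the patent.

```python
def pre_picking_plane(R, gap=12.5):
    """Return (distance_from_base, length, width) of the pre-picking plane.

    The plane sits 10-15 cm in front of the fruit, i.e. between R-15 and
    R-10 cm from the arm base; `gap` (hypothetical) picks a point in that
    interval. Length is R cm and width 0.75R cm, as in the text.
    """
    distance = R - gap          # lies in (R-15, R-10) for gap in (10, 15)
    return distance, R, 0.75 * R

# Example with an assumed 85 cm reach.
d, length, width = pre_picking_plane(85.0)
```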
Further, step 3, the off-line planning is to train by adopting a reinforcement learning algorithm and output a track reaching each pre-picking point.
Further, in step 3, the reinforcement learning algorithm is a PPO algorithm.
Further, in step 3, the whole picking process, the off-line planning and the on-line planning of the mechanical arm are modeled, the order in which the arm visits the pre-picking points is planned, an energy cost function is constructed, and the minimum sum of the arm's joint-angle changes is used as the optimization index; the problem is modeled as a traveling salesman problem, and the order of reaching the pre-picking points, i.e. the arm's optimal picking sequence, is planned.
Further, in step 3, the secondary identification specifically comprises: after the arm reaches the pre-picking point, the fruits are identified with yolov using binocular camera2 at a near viewing angle, and the regions of interest of the fruit stems are obtained.
Further, a lightweight instance segmentation network, yolact, is applied to the region of interest to obtain the position of the fruit stalk and thus the target picking point. This addresses the difficulty that fruit stalks are thinner than the fruits and hard to identify accurately, as well as the influence of branch-and-leaf occlusion and large illumination changes in the picking environment.
Further, in step 3, the on-line planning specifically comprises: the mechanical arm is controlled for picking through the MoveIt framework in the ROS robot operating system; bidirectional RRT is used for the arm's path planning during on-line planning; and the BioIK (biologically inspired inverse kinematics) algorithm is used to solve the arm's inverse kinematics.
To solve the problems of low efficiency and unreliability (low success rate) of the picking robot, the working space of the mechanical arm is divided and a pre-picking plane is defined, splitting the whole picking planning process into an off-line planning part and an on-line planning part; the off-line part is trained with the Proximal Policy Optimization (PPO) reinforcement learning algorithm. Based on yolov, binocular vision camera1 performs primary identification at a wide viewing angle to obtain the positions of the sweet pepper fruits; these positions are projected onto the pre-picking plane to obtain the pre-picking points, and the trajectory to each pre-picking point is output by reinforcement learning. The picking process is as follows: the arm first reaches a pre-picking point, then performs secondary identification and on-line planning to pick, returns to the pre-picking point after picking, and proceeds to the next pre-picking point. This process repeats until all fruit at the current viewing angle is picked, and the arm finally returns to its initial position. A schematic of the on-line and off-line planning during picking is shown in fig. 7.
The on-line planning process of the mechanical arm is computed after the arm reaches a point on the pre-picking plane, and proceeds as follows: the arm starts from the pre-picking point, reaches the target picking point to pick, and returns to the pre-picking point. The sum of all joint-angle changes during on-line planning is taken as the energy consumed by the arm; the smaller this sum, the smaller the energy cost. As the schematic of fig. 7 shows, the energy consumed by the on-line planning part to complete picking of all fruits at the current viewing angle can be regarded as a constant.
The off-line planning stage is the process in which the arm traverses all pre-picking points in sequence from its starting point and then returns to the initial position. In this stage, different visiting orders of the pre-picking points consume different energy costs. To reduce the arm's energy cost during picking and improve picking efficiency, the off-line and on-line planning parts are modeled as a traveling salesman problem, and the optimal picking sequence is planned.
Since the energy consumed by on-line planning during picking is the same regardless of order, the on-line energy cost function can be treated as a constant when planning the optimal picking sequence. The arm starts from its initial position, passes through all pre-picking points in turn and finally returns to the initial position; the sequence minimizing the sum of all joint-angle changes is the optimal picking sequence and consumes the least energy. The optimal sequence is solved with a genetic algorithm and is computed by the time the arm reaches a pre-picking point.
The success rate and picking efficiency of the picking robot can be remarkably improved by adopting off-line planning and picking sequence planning.
Drawings
Fig. 1 is a flow chart of a picking mechanical arm grabbing motion planning method.
Fig. 2 is a schematic view of the real working environment of the picking robot.
Fig. 3 is a schematic diagram of a coordinate conversion relationship between the robot arm and 2 binocular cameras.
FIG. 4 is a schematic diagram of a reinforcement learning simulation environment.
Fig. 5 is a schematic view of a region of interest of a sweet pepper fruit stem.
Fig. 6 is a schematic view of a fruit projected onto a pre-picking plane.
Fig. 7 is a schematic diagram of the picking robot offline planning and online planning.
Fig. 8 is a schematic diagram of secondary identification and online planning of a picking robot.
Fig. 9 is a schematic diagram of a yolact network structure of a sweet pepper stem identification algorithm of the invention.
Fig. 10 is a schematic view of the pre-picking plane size dimensions.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
The invention provides a grabbing motion planning method for a picking mechanical arm, applied to the fruit-picking operation of agricultural picking robots in standardized planting scenes such as plant factories and greenhouses. Sweet pepper is taken as the example fruit, and the planning method for the picking arm's grabbing motion is realized; a flow chart is shown in fig. 1. The specific implementation steps are as follows:
a picking mechanical arm grabbing motion planning method comprises the following steps:
Step 1: build the experimental equipment of the picking robot, which consists of two parts: the picking robot and the pre-picking plane. During sweet pepper picking, the pre-picking plane divides the arm's motion planning into an off-line planning part and an on-line planning part, as shown in fig. 2. Because sweet pepper stalks occupy few pixels in the whole image, identification is done in two stages: the fruit position is initially identified by binocular vision camera1 at a wide viewing angle, and after the arm reaches a pre-picking point on the pre-picking plane, the stalk is accurately identified by binocular vision camera2 at a near viewing angle. Binocular vision camera1 is installed at the position shown in fig. 3 and is mainly responsible for the initial identification of the fruit position; its camera coordinate system is camera1_frame, and the conversion between camera1_frame and the arm base coordinate system base_link is determined through camera calibration (eye-to-hand).
The conversion between the camera coordinate system camera1_frame and the arm base coordinate system base_link is represented by (x, y, z, yaw1, pitch1, roll1): the base_link coordinate system is rotated by yaw1 about the X axis, pitch1 about the Y axis and roll1 about the Z axis, then translated by x along the X axis, y along the Y axis and z along the Z axis of base_link. Equation (1) converts the obtained three-dimensional coordinates of a sweet pepper from camera1_frame to base_link, denoted base_link_P; R_camera1_base_link is the rotation matrix from the camera1 coordinate system to the arm base coordinate system, and T_camera1_base_link is the translation vector from the camera1 coordinate system to the arm base coordinate system.
base_link_P = R_camera1_base_link · camera1_frame_P + T_camera1_base_link    (1)
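Equation (1) is a standard rigid-body transform; a minimal NumPy sketch follows. The rotation matrix and translation vector here are placeholders standing in for a real calibration result.

```python
import numpy as np

def camera1_to_base_link(p_camera1, R_camera1_base_link, T_camera1_base_link):
    """Equation (1): transform a point from camera1_frame to base_link."""
    return (R_camera1_base_link @ np.asarray(p_camera1)
            + np.asarray(T_camera1_base_link))

# Hypothetical calibration: camera axes aligned with the base, 0.5 m x-offset.
R = np.eye(3)
T = np.array([0.5, 0.0, 0.0])
p_base = camera1_to_base_link([0.1, 0.2, 0.3], R, T)
```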
Binocular vision camera2 is mounted on the end of a six-degree-of-freedom Universal Robots (UR) mechanical arm. The conversion between the camera coordinate system camera2_frame and the arm end coordinate system tool0 is determined through hand-eye calibration and represented by (x1, y1, z1, yaw, pitch, roll): the tool0 coordinate system is rotated by yaw about the X axis, pitch about the Y axis and roll about the Z axis, then translated by x1 along the X axis, y1 along the Y axis and z1 along the Z axis of tool0. The binocular vision camera and the six-degree-of-freedom arm are started, and the URDF model of the arm and camera is loaded; the URDF model describes the position and posture of each joint of the arm and their conversion relation to the camera's coordinate position. As shown in fig. 3, the arm base coordinate system is base_link, the camera2 coordinate system is camera2_frame, and the arm end coordinate system is tool0; the positional relation between camera2_frame and the arm base coordinate system is determined using translation and rotation. Equation (2) converts the obtained three-dimensional coordinates of a sweet pepper from camera2_frame to the arm end coordinate system tool0, denoted tool0_P; R_camera2_tool0 is the rotation matrix and T_camera2_tool0 the translation vector from the camera2 coordinate system to the arm end coordinate system.
Equation (3) then converts the coordinates from the arm end coordinate system tool0 to the arm base coordinate system base_link; base_link_P is the sweet pepper's position in the base_link reference frame, R_tool0_base_link is the rotation matrix and T_tool0_base_link the translation vector from the arm end coordinate system to the arm base coordinate system.
tool0_P = R_camera2_tool0 · camera2_frame_P + T_camera2_tool0    (2)
base_link_P = R_tool0_base_link · tool0_P + T_tool0_base_link    (3)
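Equations (2) and (3) chain two rigid-body transforms; a sketch with placeholder transforms (identity rotations and small translations, not calibration values) shows the composition:

```python
import numpy as np

def camera2_to_base_link(p_camera2, R_c2_tool0, T_c2_tool0,
                         R_tool0_base, T_tool0_base):
    """Chain equations (2) and (3): camera2_frame -> tool0 -> base_link."""
    p_tool0 = R_c2_tool0 @ np.asarray(p_camera2) + T_c2_tool0   # equation (2)
    return R_tool0_base @ p_tool0 + T_tool0_base                # equation (3)

# Hypothetical transforms: camera 0.1 m ahead of the flange along z,
# flange at (0.3, 0, 0.5) in the base frame, all rotations identity.
p = camera2_to_base_link(np.array([0.0, 0.0, 0.25]),
                         np.eye(3), np.array([0.0, 0.0, 0.1]),
                         np.eye(3), np.array([0.3, 0.0, 0.5]))
```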
Step 2: perform off-line learning based on reinforcement learning; the arm is controlled to reach the pre-picking plane in a specific posture using the PPO reinforcement learning algorithm, and the learned model is used for off-line planning. The real environment is set up as shown in fig. 2: the arrows in the pre-picking plane represent the grasping posture of the arm end and are perpendicular to the plane; the intersection of an arrow with the plane is the position at which the arm reaches the pre-picking plane, and the pose indicated by one arrow can be represented by (x, y, z, alpha, beta, gamma), comprising the position to be reached by the arm end and the grasping posture. A simulation environment is built and training is performed in the PyBullet simulator, shown in fig. 4. The solid white lines represent a predefined bounding box; if the arm end leaves the bounding box during training, a negative reward is given and the episode is terminated and restarted. The object in the figure represents the target point during training; the grasping posture of the arm at the target point is (alpha, beta, gamma), and the training points on the pre-picking plane are generated by a function f_random(x, y, z), where (x, y, z) are position coordinates on the pre-picking plane in the arm base coordinate system and random(a, b) denotes a random number between a and b.
[Equation (4): definition of the sampling function f_random; the original image is not recoverable]
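Since equation (4) survives only as an image, here is a sketch of what such a sampler could look like based on the surrounding description; the fixed plane distance and the y/z coordinate ranges are hypothetical, and `random.uniform` stands in for the text's random(a, b).

```python
import random

def f_random(x_plane=0.7, y_range=(-0.4, 0.4), z_range=(0.2, 0.8)):
    """Sample a training point (x, y, z) on the pre-picking plane.

    x_plane is the plane's fixed distance from the arm base; y_range and
    z_range (hypothetical values, in metres) span the plane's width and
    height in the arm base coordinate system.
    """
    return (x_plane,
            random.uniform(*y_range),
            random.uniform(*z_range))
```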
The PPO reinforcement learning algorithm is used to control the arm to reach the pre-picking plane in a specific posture. The reward function during learning is defined as follows: the arm receives a positive reward when it reaches the target point, a negative reward when the end leaves the predefined bounding box, and a negative reward when the maximum number of motion steps is reached without reaching the target point. The distance between the arm end coordinates and the target point coordinates is the basis of the reward function. The end coordinates in the current state are (current.x, current.y, current.z), and the target point coordinates are (target.x, target.y, target.z). The criterion is expressed by equation 8: when the distance is at most 1, i.e. the arm end enters the sphere of radius 1 cm centered on the target point, the arm is considered to have reached the target point.
square_dx = (current.x - target.x)^2    (5)
square_dy = (current.y - target.y)^2    (6)
square_dz = (current.z - target.z)^2    (7)
distance = sqrt(square_dx + square_dy + square_dz)    (8)
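The reward rule and the distance of equations (5)-(8) can be sketched as a single step-reward function; the reward magnitudes (+1/-1) are hypothetical choices, since the text only specifies positive and negative rewards.

```python
import math

def reach_reward(current, target, bbox_min, bbox_max, step, max_steps):
    """Reward rule described above; coordinates in cm. Returns (reward, done)."""
    # Equations (5)-(8): Euclidean distance from arm end to target point.
    distance = math.sqrt(sum((c - t) ** 2 for c, t in zip(current, target)))
    if distance <= 1.0:          # end within the 1 cm sphere: success
        return 1.0, True
    outside = any(c < lo or c > hi
                  for c, lo, hi in zip(current, bbox_min, bbox_max))
    if outside:                  # end left the predefined bounding box
        return -1.0, True
    if step >= max_steps:        # step budget exhausted without reaching target
        return -1.0, True
    return 0.0, False            # episode continues
```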
Step 3: primarily identify the positions of the sweet pepper fruits in the picking environment. The RGB images of binocular vision camera1 are subscribed and the fruit positions are identified with the deep-learning detector yolov. The detection framework is configured in the ROS system, a data set of sweet peppers is collected and labeled with labeling software, and yolov is trained. The ROS node simultaneously subscribes to the camera's depth image and the aligned RGB topic; the binocular camera collects images, yolov identifies the sweet peppers, and the ROS system returns the identification result, framing the positions of the sweet peppers in the 640x480 image. The four corners of a recognition rectangle (sweet pepper fruit identification box A in fig. 5) have pixel coordinates A_1(x1, y1), A_2(x2, y2), A_3(x3, y3), A_4(x4, y4), and the center position C(x_i, y_i) of the fruit is found from the four corner positions by equation 9:
C(x_i, y_i) = ((x1 + x2 + x3 + x4)/4, (y1 + y2 + y3 + y4)/4)    (9)
The pixel point at the center of each sweet pepper fruit is calculated from its identification frame. When the deep-learning algorithm identifies N sweet pepper fruits, the center pixel coordinates of the N fruits are obtained through equation 9 and denoted C_i(x_i, y_i), i ∈ N. By matching against the depth image of the binocular vision camera, the corresponding coordinates in the camera's world coordinate system, P_i = [X_i, Y_i, Z_i], i ∈ N, are obtained through equation 10.
X_i = (x_i - u_0) · Z · dx / f,  Y_i = (y_i - v_0) · Z · dy / f,  Z_i = Z    (10)
In the formula, (x_i, y_i) are the pixel coordinates in the depth image, Z is the depth value from the binocular camera, f is the focal length of the camera, dx and dy are the physical sizes (in mm) of one pixel in the horizontal and vertical directions, and u_0, v_0 are the horizontal and vertical pixel midpoints.
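Equations (9) and (10) together map a detection box to a 3D point; a sketch with the symbols defined above (all numeric values in the example are illustrative):

```python
def fruit_center(corners):
    """Equation (9): center pixel of the recognition box from its 4 corners."""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return sum(xs) / 4.0, sum(ys) / 4.0

def pixel_to_camera(x_i, y_i, Z, f, dx, dy, u0, v0):
    """Equation (10): back-project a pixel with depth Z to camera coordinates.

    f is the focal length in mm; dx, dy the per-pixel physical size in mm;
    (u0, v0) the image midpoint in pixels.
    """
    X = (x_i - u0) * dx * Z / f
    Y = (y_i - v0) * dy * Z / f
    return X, Y, Z
```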
Step 4: from the primary recognition result P_i = [X_i, Y_i, Z_i], i ∈ N, of step 3, obtain the positions of the points projected onto the pre-picking plane. The coordinates of the N fruits projected onto the pre-picking plane are found and taken as the pre-arrival points of the mechanical arm, as shown schematically in fig. 6. The N projected points on the pre-picking plane are denoted A_i (i = 1…N). At the arm's initial position, the Z axis of the binocular camera's world coordinate system is perpendicular to the pre-picking plane at distance d_l, so the projection of a recognized sweet pepper onto the pre-picking plane is A_i = [X_i, Y_i, d_l], i ∈ N. The positions of all pre-picking points relative to the arm base coordinate system base_link are then obtained through the conversion between the camera coordinate system camera1_frame and base_link.
base_link_P_i(X_i, Y_i, Z_i) = R_camera1_base_link · A_i(X_i, Y_i, d_l) + T_camera1_base_link  (i ∈ N)    (11)
Step 5: plan the arm's optimal picking sequence according to the positions of the pre-picking points. The model trained by the reinforcement-learning off-line planning in step 2 is loaded, the position coordinates of the pre-picking points are input, and the trajectories by which the arm end tool0 reaches each pre-picking point A_i, i ∈ N, are obtained. The planning process consists of two parts: the arm reaches the pre-picking plane, then picks through secondary planning. During picking, the arm must first reach a pre-picking point and then perform secondary perception and recognition to pick. Whatever picking order is adopted, the energy cost consumed by on-line planning is the same; in the off-line planning stage, different orders of visiting the pre-picking points consume different energy costs. To reduce the energy cost of the picking process, the off-line and on-line planning parts are modeled and an energy cost function is constructed. The cost of the on-line planning part is denoted Z_c and the planning cost of the off-line stage M_c. The trajectory of the arm moving from point A_i to A_{i+1} is represented in equation 12 by the function F_c(A_i, A_{i+1}), where θ_N is a 6-dimensional vector representing the angular positions of the arm's 6 joints. The sum of the joint-angle changes over the arm's motion represents the cost function: the smaller the total joint-angle change over the pre-picking points, the lower the consumed energy cost. The function D(F_c(A_i, A_{i+1})) denotes the sum of joint-angle changes as the arm moves from A_i to A_{i+1}.
F_c(A_i, A_(i+1)) = {θ_1, θ_2, …, θ_N}   (12)
D(F_c(A_i, A_(i+1))) = Σ_(k=1)^(N−1) Σ_(j=1)^(6) |θ_(k+1,j) − θ_(k,j)|   (13)
D(F_c(A_i, A_(i+1))) denotes the energy cost of the mechanical arm moving from A_i to A_(i+1), and T_c denotes the sum of the energy cost consumptions, expressed by equation 14.
T_c = Σ_(i=1)^(N−1) D(F_c(A_i, A_(i+1)))   (14)
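Equations 13 and 14, as reconstructed from the surrounding description, can be sketched as follows; a trajectory F_c is assumed to be a discretized sequence of 6-joint configurations:

```python
import numpy as np

def trajectory_cost(trajectory):
    """Equation 13 (as reconstructed from the description): the sum of
    absolute joint-angle changes over a trajectory, where `trajectory`
    is an (N, 6) array of 6-joint configurations theta_1..theta_N."""
    traj = np.asarray(trajectory)
    return np.abs(np.diff(traj, axis=0)).sum()

def total_cost(segment_trajectories):
    """Equation 14: total energy cost T_c over the trajectory segments
    between consecutive pre-picking points in a given order."""
    return sum(trajectory_cost(t) for t in segment_trajectories)

# Toy 2-waypoint segment: each of the 6 joints moves by 0.1 rad,
# so the segment cost is close to 0.6.
seg = [[0.0] * 6, [0.1] * 6]
print(trajectory_cost(seg))
```

A picking order with a smaller `total_cost` over its segments is preferred, matching the optimization index described above.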
As shown in fig. 7, the energy consumed by the mechanical arm to pick all fruits at the current viewing angle can be modeled as a traveling salesman problem: the mechanical arm starts from the starting position, passes through every pre-picking point in turn, and finally returns to the starting position. The picking sequence that minimizes the sum of the joint angle changes of the mechanical arm is the optimal picking sequence and consumes the least energy. The optimal picking sequence is solved by a genetic algorithm, which computes the order in which the mechanical arm reaches the pre-picking points.
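The traveling-salesman formulation above can be sketched with a toy genetic algorithm. Here the per-segment costs D(F_c(A_i, A_(i+1))) are abstracted into a precomputed cost matrix, and the operators (order crossover, swap mutation, elitist survival) are common choices for permutation problems, not ones specified by the patent:

```python
import random

def tour_cost(order, cost):
    """Cost of visiting the pre-picking points in `order`, starting from
    and returning to node 0 (the arm's starting position)."""
    path = [0] + list(order) + [0]
    return sum(cost[a][b] for a, b in zip(path, path[1:]))

def genetic_tsp(cost, pop_size=60, generations=200, seed=0):
    """Toy genetic algorithm for the picking-order TSP: permutation
    encoding, order crossover (OX), and swap mutation."""
    rng = random.Random(seed)
    nodes = list(range(1, len(cost)))          # pre-picking points
    pop = [rng.sample(nodes, len(nodes)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: tour_cost(o, cost))
        survivors = pop[: pop_size // 2]       # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            a, b = sorted(rng.sample(range(len(nodes)), 2))
            piece = p1[a:b]                    # order crossover (OX)
            child = piece + [g for g in p2 if g not in piece]
            if rng.random() < 0.2:             # swap mutation
                i, j = rng.sample(range(len(nodes)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda o: tour_cost(o, cost))
    return best, tour_cost(best, cost)

# Illustrative cost matrix: 4 pre-picking points on a line.
cost = [[abs(i - j) for j in range(5)] for i in range(5)]
order, c = genetic_tsp(cost)
```

In the method above, entry (i, j) of the cost matrix would hold D(F_c(A_i, A_j)) computed from the off-line planned trajectories.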
Step 6, controlling the mechanical arm to reach each pre-picking point in turn according to the planned optimal picking sequence, then performing secondary identification and on-line planning to pick, as shown in fig. 8. The secondary recognition is completed by camera2 at the near viewing angle, as follows: when the mechanical arm reaches a pre-picking point, the fruit stems of the sweet peppers are identified. The RGB image of the binocular vision camera2 is subscribed, the positions of the sweet pepper fruits are identified by the deep learning algorithm yolov, and the region of interest of the sweet pepper fruit stems is inferred from the obtained fruit positions, as shown by identification frame B in fig. 5. The detailed steps are as follows: the position of the sweet pepper fruit is determined by its minimum enclosing rectangle, the centroid of the fruit is calculated, and the region-of-interest identification frame B of the fruit stem is inferred upwards from the centroid; the pixel position of the fruit-stem rectangle in the 640 x 480 image is then obtained from frame B. Frame B is obtained by keeping the length of the fruit identification frame A unchanged and translating frame A upwards by a set offset, giving the sweet pepper stem region B shown in fig. 5. The framed region B contains sweet pepper fruits, leaves, stems and fruit stalks. The stem region is then processed by the lightweight instance segmentation network yolact to obtain the picking points. A schematic representation of the yolact network recognizing sweet pepper fruit stalks is shown in fig. 9.
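The upward inference of stem frame B from fruit frame A can be sketched as follows. The translation distance is treated here as a fraction of the fruit-box height; this ratio is an illustrative assumption, not the patent's exact formula:

```python
def infer_stem_roi(fruit_box, offset_ratio=1.0):
    """Sketch of the stem region-of-interest inference: keep the width
    of fruit frame A and translate it upwards towards the stem to get
    frame B, clipping at the top of the 640x480 image. `offset_ratio`
    (upward shift as a fraction of the fruit-box height) is an
    illustrative assumption."""
    x, y, w, h = fruit_box            # frame A: top-left corner + size, px
    top = y - int(h * offset_ratio)   # frame B top edge before clipping
    by = max(0, top)                  # clip at image row 0
    bh = max(0, top + h - by)         # remaining height after clipping
    return (x, by, w, bh)

roi = infer_stem_roi((300, 200, 80, 100))
print(roi)  # (300, 100, 80, 100)
```

The returned rectangle would then be cropped from the camera2 image and passed to the yolact instance segmentation network.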
Step 7, picking the sweet pepper once the picking point is reached. Fruit picking is performed by on-line planning towards the identified picking point, and the mechanical arm is controlled to complete the picking of the sweet pepper. The mechanical arm is then controlled to reach the next pre-picking point, and step 7 is repeated until all pre-picking points have been traversed. The robot then moves to the next picking location and the above steps are repeated.
The algorithm of this embodiment was evaluated in a gazebo simulation environment against a comparison algorithm that uses neither off-line planning nor picking-sequence planning; the picking success rate and the average time to pick a single fruit served as evaluation indexes. In the comparison algorithm, after the position of a fruit is identified, the fruit stem is identified in a fixed posture at a position 15 cm in front of the fruit, the fruit is then picked, and the arm returns to the initial position after picking. This process is repeated until all fruits are picked, with the picking order randomized. In the simulation, 8 fruits were to be picked, and 10 groups of experiments were run with random fruit positions in each group. The results show that the comparison algorithm achieved a picking success rate of 77% with a single-fruit picking time of 22 s, while the method with off-line planning and a planned picking sequence achieved a success rate of 92% with a single-fruit picking time of 16 s. The picking success rate is thus improved by 15 percentage points and the picking efficiency by 27.3%. The experiments show that off-line planning and picking-sequence planning significantly improve the success rate and picking efficiency of the picking robot.

Claims (10)

1. A picking mechanical arm grabbing motion planning method is characterized by comprising the following steps:
step 1, sensing the picking environment with the binocular camera1 at a wide viewing angle, performing primary fruit identification, and identifying the number of fruits and the three-dimensional coordinates of each fruit in the camera coordinate system to obtain initial picking points;
step 2, defining a pre-picking plane between the mechanical arm and the target fruit, and projecting the initial picking point obtained in the step 1 to the pre-picking plane to obtain a pre-picking point;
step 3, obtaining the trajectory of the mechanical arm from the initial position to each pre-picking point by off-line planning; modeling the fruits to be picked at the current viewing angle by combining the off-line planned trajectory with on-line planning, and planning the optimal picking sequence in which the mechanical arm reaches the pre-picking points in turn; after the picking sequence is planned, the mechanical arm first reaches a pre-picking point, then performs secondary recognition through the binocular camera2 at the near viewing angle to obtain the corresponding target picking point, then picks through on-line planning, returns to the pre-picking point after picking, and then moves to the next pre-picking point according to the planned picking sequence;
and step 4, the mechanical arm repeating step 3 until all pre-picking points in the pre-picking plane of step 2 have been traversed, then returning to its initial position.
2. The picking mechanical arm grabbing motion planning method according to claim 1, wherein in step 1 the binocular vision camera is used to perform primary fruit identification at the wide viewing angle, and the identification method is yolov.
3. The picking mechanical arm grabbing motion planning method according to claim 1, wherein the pre-picking plane in step 2 is set as follows: the working space of the mechanical arm is a sphere with radius R cm; the pre-picking plane is located 10-15 cm from the fruit, the distance from the pre-picking plane to the mechanical arm base coordinate system lies in (R-15, R-10), the length of the pre-picking plane is R cm, and its width is 0.75R cm.
4. The method for planning the grabbing motion of the picking manipulator according to claim 1, wherein in step 3, the off-line planning is training by using a reinforcement learning algorithm, and a trajectory reaching each pre-picking point is output.
5. The method for planning the grabbing motion of the picking arm according to claim 4, wherein in step 3, the reinforcement learning algorithm is a PPO algorithm.
6. The method for planning the grabbing motion of the picking mechanical arm according to claim 1, wherein in step 3 the fruits to be picked at the current viewing angle are modeled by combining the off-line planned trajectory with the on-line planning, and an energy cost function is constructed with the minimum sum of the joint angle changes of the mechanical arm as the optimization index; the problem is modeled as a traveling salesman problem, and the optimal picking sequence in which the mechanical arm reaches the pre-picking points in turn is planned.
7. The method for planning the grabbing motion of a picking arm of claim 6, wherein in step 3, the energy cost function of the on-line planning process at the current view angle can be regarded as a constant.
8. The picking mechanical arm grabbing motion planning method according to claim 1, wherein in step 3 the secondary identification uses the binocular camera2 at the near viewing angle to identify the fruit stalks after the pre-picking point is reached: the fruits are first identified by yolov to obtain the region of interest of the fruit stalks; the lightweight instance segmentation network yolact is then applied to the region of interest to obtain the position of the fruit stalks and the target picking point.
9. The method for planning the grabbing motion of the picking mechanical arm according to claim 1, wherein in step 3 the on-line planning specifically controls the mechanical arm to pick by using the moveit framework in the ROS robot operating system; during on-line planning, path planning of the mechanical arm is performed with a bidirectional RRT, and the inverse kinematics of the mechanical arm is solved with the bionics-optimization BioIK algorithm.
10. The picking mechanical arm grabbing motion planning method of claim 1, wherein in step 3 the on-line planning of the picking process includes shearing the fruit stalks with the end effector of the mechanical arm; the end of the mechanical arm is equipped with a shearing-and-clamping integrated end effector that shears the fruit stems as it clamps them, and a cylindrical collecting device is arranged below the end effector; when the clamping jaws release, the fruit falls into the collecting device, so that the mechanical arm does not need to return to place the fruit and the fruits are collected by the end effector during picking.
CN202210884969.4A 2022-07-26 2022-07-26 Grabbing motion planning method for picking mechanical arm Pending CN115139315A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210884969.4A CN115139315A (en) 2022-07-26 2022-07-26 Grabbing motion planning method for picking mechanical arm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210884969.4A CN115139315A (en) 2022-07-26 2022-07-26 Grabbing motion planning method for picking mechanical arm

Publications (1)

Publication Number Publication Date
CN115139315A true CN115139315A (en) 2022-10-04

Family

ID=83413440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210884969.4A Pending CN115139315A (en) 2022-07-26 2022-07-26 Grabbing motion planning method for picking mechanical arm

Country Status (1)

Country Link
CN (1) CN115139315A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117617002A (en) * 2024-01-04 2024-03-01 太原理工大学 Method for automatically identifying tomatoes and intelligently harvesting tomatoes


Similar Documents

Publication Publication Date Title
CN111602517B (en) Distributed visual active perception method for string-type fruits and application of distributed visual active perception method
CN110202583B (en) Humanoid manipulator control system based on deep learning and control method thereof
Åstrand et al. An agricultural mobile robot with vision-based perception for mechanical weed control
WO2023056670A1 (en) Mechanical arm autonomous mobile grabbing method under complex illumination conditions based on visual-tactile fusion
Van Henten et al. An autonomous robot for harvesting cucumbers in greenhouses
CN111496770A (en) Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method
Edan Design of an autonomous agricultural robot
Zhaoxin et al. Design a robot system for tomato picking based on yolo v5
CN111260649B (en) Close-range mechanical arm sensing and calibrating method
Li et al. Design of a lightweight robotic arm for kiwifruit pollination
CN113597874B (en) Weeding robot and weeding path planning method, device and medium thereof
CN115299245B (en) Control method and control system of intelligent fruit picking robot
CN114260895A (en) Method and system for determining grabbing obstacle avoidance direction of mechanical arm of picking machine
Zhou et al. Design, development, and field evaluation of a rubber tapping robot
CN115139315A (en) Grabbing motion planning method for picking mechanical arm
CN114830915A (en) Litchi vision picking robot based on laser radar navigation and implementation method thereof
Anh et al. Developing robotic system for harvesting pineapples
Li et al. Identification of the operating position and orientation of a robotic kiwifruit pollinator
Yang et al. Vision based fruit recognition and positioning technology for harvesting robots
Peng et al. Research progress of urban dual-arm humanoid grape harvesting robot
Rajendran et al. Towards autonomous selective harvesting: A review of robot perception, robot design, motion planning and control
Li et al. Design of multifunctional seedbed planting robot based on MobileNetV2-SSD
Chen et al. Design and implementation of an artificial intelligence of things-based autonomous mobile robot system for pitaya harvesting
Libin et al. Overview of research on agricultural robot in China
Ji et al. A Comprehensive Review of the Research of the “Eye–Brain–Hand” Harvesting System in Smart Agriculture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination