CN117617002A - Method for automatically identifying tomatoes and intelligently harvesting tomatoes - Google Patents

Method for automatically identifying tomatoes and intelligently harvesting tomatoes

Info

Publication number
CN117617002A
Authority
CN
China
Prior art keywords
tomatoes
mechanical arm
tomato
fruit
end effector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410008288.0A
Other languages
Chinese (zh)
Inventor
高亚鹏
高烁
李宇晗
李海芳
阎东军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology filed Critical Taiyuan University of Technology
Priority to CN202410008288.0A
Publication of CN117617002A
Pending legal-status Current


Landscapes

  • Manipulator (AREA)

Abstract

The invention belongs to the field of agricultural picking robot research and particularly relates to a method for automatically identifying and intelligently harvesting tomatoes, which comprises the following steps: constructing a data set; constructing a tomato identification network model YOLOv8s-tool; designing a two-stage trajectory planning method for the mechanical arm; realizing a control strategy for the end effector; and setting up an online tomato grading system, thereby completing the identification, grasping, picking and grading of tomatoes. The invention is designed and developed around visual perception, mechanical arm operation and gripper grasping. It improves the identification accuracy for small-target fruits and occluded fruits, greatly increases the probability of successful planning through the two-stage mechanical-arm motion planning process, and finally the end effector collects the picked tomatoes online, graded by fruit diameter, thereby achieving rapid non-destructive picking and correctly graded harvesting of greenhouse tomatoes.

Description

Method for automatically identifying tomatoes and intelligently harvesting tomatoes
Technical Field
The invention belongs to the technical field of agricultural picking robots, and particularly relates to a method for automatically identifying tomatoes and intelligently harvesting tomatoes.
Background
In recent years, the rise of mechanized and automated production has brought agricultural robots ever wider application and attention. By combining robotics with artificial intelligence algorithms, they are expected to replace or assist manual labor in agricultural production activities such as transplanting, spraying, pruning and harvesting, saving production costs and improving production efficiency.
At present, a great deal of research on agricultural picking robots has been reported worldwide, for example robots for picking fruits and vegetables such as strawberries, cherries, apples, cucumbers, eggplants and peppers. However, most studies focus on a single aspect of the robot system, such as target detection, grasping or motion planning; some only propose a prototype system, or neglect the complex unstructured environment of the field and run experiments only under simulated or laboratory conditions. Although this research has produced encouraging results, it has not reached commercialization, so picking operations cannot yet truly be taken over from manual labor.
Tomatoes are an important vegetable crop, and their cultivation and picking require a great deal of manual labor; harvesting and picking alone account for about 40%-50% of the total production workload. Meanwhile, grading tomatoes by fruit diameter is important for marketing. Traditional manual grading is time-consuming and labor-intensive, is easily influenced by subjective factors, and inevitably produces grading errors caused by human factors. Rapid, accurate and complete harvesting of greenhouse tomatoes therefore remains a challenge for robots.
Disclosure of Invention
Aiming at the technical problems that traditional manual grading is time-consuming and labor-intensive, easily influenced by subjective factors and prone to grading errors caused by human factors, the invention provides a method for automatically identifying and intelligently harvesting tomatoes. It realizes intelligent coordinated control of the key actions of visual perception of target fruits, motion of the picking mechanical arm, grasping by the end effector and graded harvesting of fruits, thereby completing rapid non-destructive picking and correctly graded harvesting of tomatoes.
In order to solve the technical problems, the invention adopts the following technical scheme:
a method for automatic identification and intelligent harvesting of tomatoes, comprising the steps of:
s1, acquiring a tomato data set required by training, preprocessing the data set, and constructing the data set;
s2, constructing a tomato recognition algorithm model Yolov 8S-tool with double functions of fruit detection and instance segmentation based on a Yolov8S architecture;
s3, designing a two-stage track planning method of the mechanical arm based on an RRT-connect algorithm and an RRT algorithm;
s4, realizing a control strategy of the end effector by using a CH340 module, a Raspberry Pi PicoW control panel and an STM32 control panel;
s5, setting an online tomato grading system, and grading and collecting the picked tomatoes.
The tomato data set required for training in the step S1 comprises two data sets:
Data set I: a laboratory-simulated tomato dataset comprising 895 RGB images with a resolution of 1920 × 1080; Data set II: a field tomato dataset comprising 277 RGB images with a resolution of 1920 × 1080.
The method for preprocessing the tomato image dataset in S1 is as follows: the original dataset formed by the LSTD and FTD is augmented using brightness changes, saturation changes, contrast changes, rotation, Gaussian noise and salt-and-pepper noise, finally yielding a dataset of 9376 images, which is divided into a training set, a validation set and a test set at a ratio of 8:1:1; 7500 images are used for training the model, 937 for validation and 939 for testing.
The network model YOLOv8s-tool in S2 is modified from the YOLOv8s architecture and comprises a shared encoding network and two decoding networks that carry different tasks: the encoding network comprises a backbone network and a neck network, with a small-target detection layer added in the neck network to improve the detection of small tomatoes in the image; the detection branch in the decoding network retains the YOLOv8 detection-head network and adds a small-target-area detection output branch, thereby producing the overall bounding box of a fruit region; the segmentation branch up-samples the bottom-layer features three times by deconvolution based on nearest-neighbor interpolation to obtain a two-dimensional feature map of the same size as the original image and the confidence of each pixel belonging to fruit or background, thereby achieving semantic segmentation of the visible fruit pixels; on this basis, the discrete segmented fruit regions are position-matched with the overall fruit bounding boxes, and if the center pixel coordinates of discrete segmented regions lie in the same box, those regions are considered to belong to the same fruit, thereby achieving instance segmentation of the visible pixel regions of each fruit.
The two-stage trajectory planning method of the mechanical arm designed in step S3 divides the motion of the mechanical arm into two stages: the first stage realizes PTP arc-interpolation motion of the mechanical arm based on the RRT-connect algorithm, and the second stage realizes PTP linear-interpolation motion of the mechanical arm based on the RRT algorithm.
The two-stage trajectory planning method of the mechanical arm designed in step S3 is as follows: O-XYZ is the base coordinate system of the mechanical arm and O is its origin; M, P and N are the origins of the end-effector coordinate system when the mechanical arm is in different poses, and N' is the projection of the fruit center N on the XOY plane. When the mechanical arm is in the initial pose, the origin of the end-effector coordinate system is at M; when the mechanical arm reaches the target pose of the first-stage trajectory planning, the origin of the end-effector coordinate system is at P; when the mechanical arm reaches the target pose of the second-stage trajectory planning, the origin of the end-effector coordinate system is at N, i.e. the three-dimensional coordinate of the tomato center pixel point.
The method for realizing PTP arc-interpolation motion of the mechanical arm based on the RRT-connect algorithm in the first stage is as follows:
The image acquisition point M is given before picking, that is, the coordinates of point M are known. The coordinates of the fruit center point N are obtained by the camera and converted into (x_N, y_N, z_N) relative to the base coordinate system. The distance between point P and point N in the Z direction is defined as D, and their distance in the Y direction of the base coordinate system is 12 cm. The calculation formula for the coordinates of point P is as follows:
where (x_P, y_P, z_P) are the coordinates of point P. After the coordinate value of point P is obtained, the mechanical arm performs PTP arc-interpolation motion from point M to point P based on the RRT-connect algorithm until the base joint rotates to an angle α with the Y axis, where α is calculated as: α = arctan(-x_N / y_N).
The method for realizing PTP linear-interpolation motion of the mechanical arm based on the RRT algorithm in the second stage is as follows: when picking a fruit, the mechanical arm moves in the plane formed by points N, O and N'. In this stage, PTP linear-interpolation motion is realized based on the RRT algorithm: the end effector only needs to translate linearly in the plane NON' so that the origin of its coordinate system coincides with the center of the identified tomato pose, and after the fingers envelop the tomato, fruit-stem separation is achieved by rotating and pulling with the sixth joint.
In step S4, the control strategy for the end effector, implemented with the CH340 module, the Raspberry Pi Pico W control board and the STM32 control board, is as follows: the MachCClaw node in ROS first attempts to open the CH340 serial port; if it opens successfully, a success message is printed at the terminal and the serial port name is displayed as /dev/ttyCH341USB0. By subscribing to the depth-camera node in ROS, the node receives and judges the data value sent by the depth camera: if the data is 1, the identified tomato is a large tomato (fruit diameter greater than or equal to 7.5 cm), the command sent is half-close, and the initial state of the gripper is fully open; the STM32 control board receives the grasping instruction and applies a 70 N enveloping force to the large tomato, preventing the gripper from grasping with excessive force and damaging the tomato, and the sixth joint of the mechanical arm rotates and pulls to separate the fruit from the stem. If the data is 0, the identified tomato is a small tomato (fruit diameter smaller than 7.5 cm) and the command sent is fully close; the initial state of the gripper is half-open, the STM32 control board receives the grasping instruction and applies a 40 N enveloping force to the small tomato, preventing the tomato from dropping due to insufficient grasping force, and the sixth joint of the mechanical arm rotates and pulls to separate the fruit from the stem.
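A minimal sketch of this control flow is given below, assuming a ROS 1 Python node with pyserial; the topic name /tomato_grade, the 115200 baud rate and the serial command strings are illustrative assumptions rather than values fixed by the invention.

```python
# Hedged sketch of the S4 gripper control flow (assumed node/topic names and command encoding).
import rospy
import serial
from std_msgs.msg import Int32

PORT = "/dev/ttyCH341USB0"   # CH340/CH341 serial device as reported at the terminal

def on_grade(msg, ser):
    if msg.data == 1:
        # large tomato (diameter >= 7.5 cm): gripper starts fully open,
        # command is half-close, enveloping force 70 N
        ser.write(b"HALF_CLOSE,70\n")
    else:
        # small tomato (diameter < 7.5 cm): gripper starts half-open,
        # command is full-close, enveloping force 40 N
        ser.write(b"FULL_CLOSE,40\n")

def main():
    rospy.init_node("mach_c_claw")                       # assumed node name
    ser = serial.Serial(PORT, baudrate=115200, timeout=1)  # try to open the CH340 serial port
    rospy.loginfo("serial port opened: %s", PORT)
    rospy.Subscriber("/tomato_grade", Int32, on_grade, callback_args=ser)
    rospy.spin()

if __name__ == "__main__":
    main()
```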
The tomato online grading system set up for the harvested tomatoes in step S5 works as follows: for large tomatoes with a fruit diameter greater than or equal to 7.5 cm, after the mechanical arm picks successfully it moves to the target_phase-1 pose set in the MoveIt! package, above the large-fruit collecting basket, the end effector loosens its fingers and the fruit falls into the basket; for small tomatoes with a fruit diameter smaller than 7.5 cm, after the mechanical arm picks successfully it moves to the target_phase-2 pose set in the MoveIt! package, above the small-fruit collecting basket, the end effector loosens its fingers and the fruit falls into the basket, thereby achieving graded collection.
Compared with the prior art, the invention has the beneficial effects that:
1. The improved model YOLOv8s-tool based on YOLOv8s is a multi-task network model with fruit detection and instance segmentation functions. By using a small-target detection layer and segmentation branches, global information in the structured image is effectively retained and combined with local geometric features, the computational information in the model is fully enriched, the training effect is improved and information loss during training is reduced, which well solves the problem of low recognition accuracy for small-target fruits and occluded fruits.
2. In the two-stage trajectory planning method of the mechanical arm provided by the invention, if the first-stage planning fails, the target position can still be planned through the second-stage planning as a supplement, so the probability of successful planning is greatly improved compared with single-stage trajectory planning.
3. Compared with single-stage trajectory planning, the two-stage trajectory planning method effectively prevents the mechanical arm from colliding with branches, leaves and fruits. The control strategy of the end effector is realized through the combination of the CH340 module, the Raspberry Pi Pico W board and the STM32 control board, achieving efficient grasping at low cost; the picked tomatoes are collected online and graded by fruit diameter, and the grading threshold can be set dynamically according to the tomato variety, realizing intelligent and accurate graded collection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It will be apparent to those skilled in the art from this disclosure that the drawings described below are merely exemplary and that other embodiments may be derived from the drawings provided without undue effort.
The structures, proportions, sizes, etc. shown in the present specification are shown only for the purposes of illustration and description, and are not intended to limit the scope of the invention, which is defined by the claims, so that any structural modifications, changes in proportions, or adjustments of sizes, which do not affect the efficacy or the achievement of the present invention, should fall within the scope of the invention.
FIG. 1 is a structural diagram of the YOLOv8s-tool model of the present invention;
FIG. 2 is a diagram of the perception results of the YOLOv8s-tool model of the present invention;
FIG. 3 is a diagram illustrating a two-stage trajectory planning process for a robotic arm according to the present invention;
FIG. 4 is a control flow diagram of an end effector in accordance with an implementation of the present invention;
fig. 5 is a graph of the results of an on-line grading test of tomatoes set in the invention;
fig. 6 is a plot of an orchard field application test of the present invention.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments, and these descriptions are only for further illustrating the features and advantages of the present invention, not limiting the claims of the present invention; all other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
This embodiment is implemented under the ROS system and provides a method for automatically identifying and intelligently harvesting tomatoes, comprising the following steps:
1. data preparation
The data samples of this embodiment comprise images from two tomato datasets:
Data set I: the laboratory-simulated tomato dataset (Laboratory simulation of tomato dataset, LSTD) contains 895 RGB images with a resolution of 1920 × 1080. Data set II: the field tomato dataset (Field tomato dataset, FTD) contains 277 RGB images with a resolution of 1920 × 1080. Together they give 1172 raw images.
Data enhancement is then performed: the original dataset formed by the LSTD and FTD is processed with brightness changes, saturation changes, contrast changes, rotation, Gaussian noise and salt-and-pepper noise, yielding a dataset of 9376 images. The dataset is divided into a training set, a validation set and a test set at a ratio of 8:1:1: 7500 images are used for training the model, 937 for validation and 939 for testing.
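The augmentation and the 8:1:1 split can be sketched as follows, assuming OpenCV and NumPy; the augmentation parameters are illustrative assumptions, only the set of operations and the split ratio follow the text.

```python
# Hedged sketch of the preprocessing step (illustrative parameters, operations per the text).
import random
import cv2
import numpy as np

def augment(img):
    """Return a randomly augmented copy of an RGB image (one operation per call)."""
    op = random.choice(["brightness", "saturation", "contrast", "rotate", "gauss", "salt_pepper"])
    if op == "brightness":
        return cv2.convertScaleAbs(img, alpha=1.0, beta=random.uniform(-40, 40))
    if op == "contrast":
        return cv2.convertScaleAbs(img, alpha=random.uniform(0.7, 1.3), beta=0)
    if op == "saturation":
        hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV).astype(np.float32)
        hsv[..., 1] = np.clip(hsv[..., 1] * random.uniform(0.7, 1.3), 0, 255)
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
    if op == "rotate":
        return cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
    if op == "gauss":
        noise = np.random.normal(0, 10, img.shape)
        return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    out = img.copy()                       # salt-and-pepper noise
    mask = np.random.rand(*img.shape[:2])
    out[mask < 0.01] = 0
    out[mask > 0.99] = 255
    return out

def split_8_1_1(paths, seed=0):
    """Shuffle image paths and split 8:1:1; for 9376 images this gives 7500/937/939."""
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_train, n_val = int(n * 0.8), int(n * 0.1)
    return paths[:n_train], paths[n_train:n_train + n_val], paths[n_train + n_val:]
```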
2. Model construction and training
The constructed network model YOLOv8s-tool is modified from the YOLOv8s architecture; the specific network structure is shown in fig. 1. It comprises a shared encoding network and two decoding networks that carry different tasks: the encoding network comprises a backbone network and a neck network, with a small-target detection layer added in the neck network to improve the detection of small tomatoes in the image; the detection branch in the decoding network retains the YOLOv8 detection-head network and adds a small-target-area detection output branch, thereby producing the overall bounding box of a fruit region. The segmentation branch up-samples the bottom-layer features three times by deconvolution based on nearest-neighbor interpolation to obtain a two-dimensional feature map of the same size as the original image and the confidence of each pixel belonging to fruit or background, thereby achieving semantic segmentation of the visible fruit pixels. On this basis, the discrete segmented fruit regions are position-matched with the overall fruit bounding boxes: if the center pixel coordinates of discrete segmented regions lie in the same box, those regions are considered to belong to the same fruit, thereby achieving instance segmentation of the visible pixel regions of each fruit. The YOLOv8s-tool model trained on the preprocessed dataset was compared with other YOLO-series models; the comparison results are shown in Table 1.
TABLE 1 Performance comparison of YOLOv5s, YOLOv7-tiny, YOLOv8s and YOLOv8s-tool
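The region-to-box matching rule described above (discrete segmented regions whose center pixels fall inside the same overall bounding box are treated as one fruit) can be sketched as follows; the (x1, y1, x2, y2) box format and the use of OpenCV connected components are assumptions, not details fixed by the patent.

```python
# Hedged sketch of grouping discrete mask regions into fruit instances by detection box.
import cv2
import numpy as np

def group_regions_by_box(fruit_mask, boxes):
    """fruit_mask: HxW uint8 semantic mask (1 = fruit pixel).
    boxes: list of whole-fruit boxes (x1, y1, x2, y2) from the detection branch.
    Returns one instance mask per box, built from the discrete regions whose
    center pixel falls inside that box."""
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(fruit_mask, connectivity=8)
    instances = [np.zeros_like(fruit_mask) for _ in boxes]
    for region_id in range(1, num):                # label 0 is background
        cx, cy = centroids[region_id]              # center pixel of this discrete region
        for k, (x1, y1, x2, y2) in enumerate(boxes):
            if x1 <= cx <= x2 and y1 <= cy <= y2:
                instances[k][labels == region_id] = 1   # same box -> same fruit
                break
    return instances
```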
3. Tomato perception
The trained YOLOv8s-tool model is used to recognize tomatoes in the greenhouse and obtain their three-dimensional coordinates and fruit diameters; the perception results are shown in fig. 2.
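The patent does not spell out how the three-dimensional coordinates and fruit diameter are derived from the perception output; one common pinhole-camera sketch, assuming an aligned depth image and known camera intrinsics fx, fy, cx, cy, is given below.

```python
# Hedged sketch: back-project a fruit mask and aligned depth to a 3-D center and diameter.
import numpy as np

def fruit_center_and_diameter(mask, depth_m, fx, fy, cx, cy):
    ys, xs = np.nonzero(mask)                 # pixels of one fruit instance
    u, v = xs.mean(), ys.mean()               # center pixel of the fruit
    z = float(np.median(depth_m[ys, xs]))     # robust fruit depth in meters
    x = (u - cx) * z / fx                     # pinhole back-projection
    y = (v - cy) * z / fy
    pixel_width = xs.max() - xs.min()         # visible width of the mask in pixels
    diameter = pixel_width * z / fx           # metric fruit-diameter estimate
    return (x, y, z), diameter
```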
4. Two-stage trajectory planning for robotic arms
The fruit position information obtained in the previous step is used for motion planning of the mechanical arm so that it reaches the picking pose.
As shown in fig. 3, O-XYZ is the base coordinate system of the mechanical arm and O is its origin; M, P and N are the origins of the end-effector coordinate system when the mechanical arm is in different poses, and N' is the projection of the fruit center N on the XOY plane. When the mechanical arm is in the initial pose, the origin of the end-effector coordinate system is at M; when the mechanical arm reaches the target pose of the first-stage trajectory planning, the origin of the end-effector coordinate system is at P; when the mechanical arm reaches the target pose of the second-stage trajectory planning, the origin of the end-effector coordinate system is at N, i.e. the three-dimensional coordinate of the tomato center pixel point.
The first stage: M → P
The image acquisition point M is given before picking, that is, the coordinates of point M are known, and the coordinates of the fruit center point N can be obtained by the camera and converted into (x_N, y_N, z_N) relative to the base coordinate system. To complete the motion planning of the whole path, the coordinates of point P must therefore be determined so that the two stages can use different motion methods. The distance between point P and point N in the Z direction is defined as D, and their distance in the Y direction of the base coordinate system is 12 cm. The calculation formula for the coordinates of point P is as follows:
where (x_P, y_P, z_P) are the coordinates of point P. After the coordinate value of point P is obtained, the mechanical arm performs PTP arc-interpolation motion from point M to point P based on the RRT-connect algorithm until the base joint rotates to an angle α with the Y axis, where α is calculated as: α = arctan(-x_N / y_N);
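The base-joint angle can be computed directly from the fruit coordinates expressed in the base frame, as in the brief sketch below; atan2 is used instead of a plain arctangent for quadrant safety, and the full formula for P (with the offset D in Z and 12 cm in Y) is deliberately not restated here.

```python
# Hedged sketch of the first-stage geometry: only the base-joint angle alpha is reproduced.
import math

def base_joint_angle(x_n, y_n):
    """Angle between the base joint and the Y axis so that the arm faces the fruit,
    alpha = arctan(-x_N / y_N) as given in the text."""
    return math.atan2(-x_n, y_n)

alpha = base_joint_angle(x_n=-0.15, y_n=0.60)   # example fruit at (-0.15, 0.60) m in the base frame
print(math.degrees(alpha))                       # ~14.0 degrees
```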
The second stage: P → N
When picking a fruit, the mechanical arm moves in the plane formed by points N, O and N'. In this stage, PTP linear-interpolation motion is realized based on the RRT algorithm: the end effector only needs to translate linearly in the plane NON' so that the origin of its coordinate system coincides with the center of the identified tomato pose, and after the fingers envelop the tomato, fruit-stem separation is achieved by rotating and pulling with the sixth joint.
5. End effector grasping
After the mechanical arm reaches the picking pose, the end effector grasps tomatoes of different fruit diameters with different forces. A driver supporting CAN-bus communication is built into the gripper and communicates directly with the industrial personal computer; the control flow of the end effector is shown in fig. 4. The STM32 control board applies a 70 N force to switch the end effector from fully open to half-closed to grasp large tomatoes with a fruit diameter greater than or equal to 7.5 cm, and a 40 N force to switch it from half-open to fully closed to grasp small tomatoes with a fruit diameter smaller than 7.5 cm.
6. Tomato fractionation and collection
In this embodiment, the tomatoes grasped by the end effector are collected in grades according to fruit diameter, as shown in fig. 5. For large tomatoes with a fruit diameter greater than or equal to 7.5 cm, after the mechanical arm picks successfully it moves to the target_phase-1 pose set in the MoveIt! package, above the large-fruit collecting basket, the end effector loosens its fingers and the fruit falls into the basket; for small tomatoes with a fruit diameter smaller than 7.5 cm, after the mechanical arm picks successfully it moves to the target_phase-2 pose set in the MoveIt! package, above the small-fruit collecting basket, the end effector loosens its fingers and the fruit falls into the basket, thereby achieving graded collection.
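A minimal sketch of this graded drop-off is given below, assuming the two poses are stored as named targets ("target_phase-1" and "target_phase-2") in the MoveIt! configuration, that the planning group is called "arm", and that a release() helper opens the gripper; apart from the pose names and the 7.5 cm threshold, these are assumptions.

```python
# Hedged sketch of the diameter-based drop-off using MoveIt named targets.
import moveit_commander

def drop_into_basket(group, diameter_cm, release):
    """Move above the large- or small-fruit basket and open the fingers."""
    target = "target_phase-1" if diameter_cm >= 7.5 else "target_phase-2"
    group.set_named_target(target)   # pose defined in the MoveIt! configuration
    group.go(wait=True)              # move above the corresponding collecting basket
    release()                        # end effector loosens its fingers, fruit drops
    group.stop()

# usage sketch (assumed group name and gripper helper):
# group = moveit_commander.MoveGroupCommander("arm")
# drop_into_basket(group, diameter_cm=8.1, release=open_gripper)
```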
7. Performance evaluation
Fig. 6 shows an application test performed on site in an orchard; the evaluation results are shown in Table 2. Using the YOLOv8s-tool model, 79 of the 82 visible fruits were correctly identified, an overall recognition accuracy of 96.34%. With the two-stage trajectory planning method of the mechanical arm adopted in this embodiment, 69 of the 79 correctly identified fruits were successfully picked, a picking success rate of 87.34%. Complete picking operations were performed on all 79 fruits with an overall time of 563 s, giving an average picking time of 8.159 s per successfully picked fruit; all 69 successfully picked fruits were correctly graded, a grading accuracy of 100%.
Table 2 Results of the orchard field application test
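The reported rates follow directly from the counts in Table 2; a quick arithmetic check:

```python
# Verification of the rates reported for the orchard field test.
print(79 / 82)    # 0.9634... -> 96.34 % recognition accuracy
print(69 / 79)    # 0.8734... -> 87.34 % picking success rate
print(563 / 69)   # 8.159...  -> 8.159 s average picking time per fruit
```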
The preferred embodiments of the present invention have been described in detail, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention, and the various changes are included in the scope of the present invention.

Claims (10)

1. A method for automatically identifying and intelligently harvesting tomatoes, characterized by comprising the following steps:
s1, acquiring a tomato data set required by training, preprocessing the data set, and constructing the data set;
s2, constructing a tomato recognition algorithm model Yolov 8S-tool with double functions of fruit detection and instance segmentation based on a Yolov8S architecture;
s3, designing a two-stage track planning method of the mechanical arm based on an RRT-connect algorithm and an RRT algorithm;
s4, realizing a control strategy of the end effector by using a CH340 module, a Raspberry Pi PicoW control panel and an STM32 control panel;
s5, setting an online tomato grading system, and grading and collecting the picked tomatoes.
2. A method for automatic identification and intelligent harvesting of tomatoes as claimed in claim 1, characterized in that: the tomato data set required for training in the step S1 comprises two data sets:
Data set I: a laboratory-simulated tomato dataset comprising 895 RGB images with a resolution of 1920 × 1080; Data set II: a field tomato dataset comprising 277 RGB images with a resolution of 1920 × 1080.
3. A method for automatic identification and intelligent harvesting of tomatoes as claimed in claim 1, characterized in that: the method for preprocessing the tomato image dataset in S1 is as follows: the original dataset formed by the LSTD and FTD is augmented using brightness changes, saturation changes, contrast changes, rotation, Gaussian noise and salt-and-pepper noise, finally yielding a dataset of 9376 images, which is divided into a training set, a validation set and a test set at a ratio of 8:1:1; 7500 images are used for training the model, 937 for validation and 939 for testing.
4. A method for automatic identification and intelligent harvesting of tomatoes as claimed in claim 1, characterized in that: the network model YOLOv8s-tool in S2 is modified from the YOLOv8s architecture and comprises a shared encoding network and two decoding networks that carry different tasks: the encoding network comprises a backbone network and a neck network, with a small-target detection layer added in the neck network to improve the detection of small tomatoes in the image; the detection branch in the decoding network retains the YOLOv8 detection-head network and adds a small-target-area detection output branch, thereby producing the overall bounding box of a fruit region; the segmentation branch up-samples the bottom-layer features three times by deconvolution based on nearest-neighbor interpolation to obtain a two-dimensional feature map of the same size as the original image and the confidence of each pixel belonging to fruit or background, thereby achieving semantic segmentation of the visible fruit pixels; on this basis, the discrete segmented fruit regions are position-matched with the overall fruit bounding boxes, and if the center pixel coordinates of discrete segmented regions lie in the same box, those regions are considered to belong to the same fruit, thereby achieving instance segmentation of the visible pixel regions of each fruit.
5. A method for automatic identification and intelligent harvesting of tomatoes as claimed in claim 1, characterized in that: the two-stage trajectory planning method of the mechanical arm designed in step S3 divides the motion of the mechanical arm into two stages, wherein the first stage realizes PTP arc-interpolation motion of the mechanical arm based on the RRT-connect algorithm, and the second stage realizes PTP linear-interpolation motion of the mechanical arm based on the RRT algorithm.
6. The method for automatic tomato identification and intelligent harvesting according to claim 5, wherein: the two-stage trajectory planning method of the mechanical arm designed in step S3 is as follows: O-XYZ is the base coordinate system of the mechanical arm and O is its origin; M, P and N are the origins of the end-effector coordinate system when the mechanical arm is in different poses, and N' is the projection of the fruit center N on the XOY plane; when the mechanical arm is in the initial pose, the origin of the end-effector coordinate system is at M; when the mechanical arm reaches the target pose of the first-stage trajectory planning, the origin of the end-effector coordinate system is at P; when the mechanical arm reaches the target pose of the second-stage trajectory planning, the origin of the end-effector coordinate system is at N, i.e. the three-dimensional coordinate of the tomato center pixel point.
7. The method for automatic tomato identification and intelligent harvesting according to claim 6, wherein the method for realizing PTP arc-interpolation motion of the mechanical arm based on the RRT-connect algorithm in the first stage is as follows:
the image acquisition point M is given before picking, that is, the coordinates of point M are known; the coordinates of the fruit center point N are obtained by the camera and converted into (x_N, y_N, z_N) relative to the base coordinate system; the distance between point P and point N in the Z direction is defined as D, and their distance in the Y direction of the base coordinate system is 12 cm; the calculation formula for the coordinates of point P is as follows:
where (x_P, y_P, z_P) are the coordinates of point P; after the coordinate value of point P is obtained, the mechanical arm performs PTP arc-interpolation motion from point M to point P based on the RRT-connect algorithm until the base joint rotates to an angle α with the Y axis, where α is calculated as: α = arctan(-x_N / y_N).
8. The method for automatic tomato identification and intelligent harvesting according to claim 6, wherein: the method for realizing PTP linear-interpolation motion of the mechanical arm based on the RRT algorithm in the second stage is as follows: when picking a fruit, the mechanical arm moves in the plane formed by points N, O and N'; in this stage, PTP linear-interpolation motion is realized based on the RRT algorithm: the end effector only needs to translate linearly in the plane NON' so that the origin of its coordinate system coincides with the center of the identified tomato pose, and after the fingers envelop the tomato, fruit-stem separation is achieved by rotating and pulling with the sixth joint.
9. A method for automatic identification and intelligent harvesting of tomatoes as claimed in claim 1, characterized in that: in step S4, the control strategy for the end effector, implemented with the CH340 module, the Raspberry Pi Pico W control board and the STM32 control board, is as follows: the MachCClaw node in ROS first attempts to open the CH340 serial port; if it opens successfully, a success message is printed at the terminal and the serial port name is displayed as /dev/ttyCH341USB0; by subscribing to the depth-camera node in ROS, the node receives and judges the data value sent by the depth camera: if the data is 1, the identified tomato is a large tomato (fruit diameter greater than or equal to 7.5 cm), the command sent is half-close, and the initial state of the gripper is fully open; the STM32 control board receives the grasping instruction and applies a 70 N enveloping force to the large tomato, preventing the gripper from grasping with excessive force and damaging the tomato, and the sixth joint of the mechanical arm rotates and pulls to separate the fruit from the stem; if the data is 0, the identified tomato is a small tomato (fruit diameter smaller than 7.5 cm) and the command sent is fully close; the initial state of the gripper is half-open, the STM32 control board receives the grasping instruction and applies a 40 N enveloping force to the small tomato, preventing the tomato from dropping due to insufficient grasping force, and the sixth joint of the mechanical arm rotates and pulls to separate the fruit from the stem.
10. A method for automatic identification and intelligent harvesting of tomatoes as claimed in claim 1, characterized in that: the tomato online grading system set up for the harvested tomatoes in step S5 works as follows: for large tomatoes with a fruit diameter greater than or equal to 7.5 cm, after the mechanical arm picks successfully it moves to the target_phase-1 pose set in the MoveIt! package, above the large-fruit collecting basket, the end effector loosens its fingers and the fruit falls into the basket; for small tomatoes with a fruit diameter smaller than 7.5 cm, after the mechanical arm picks successfully it moves to the target_phase-2 pose set in the MoveIt! package, above the small-fruit collecting basket, the end effector loosens its fingers and the fruit falls into the basket, thereby achieving graded collection.
CN202410008288.0A 2024-01-04 2024-01-04 Method for automatically identifying tomatoes and intelligently harvesting tomatoes Pending CN117617002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410008288.0A CN117617002A (en) 2024-01-04 2024-01-04 Method for automatically identifying tomatoes and intelligently harvesting tomatoes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410008288.0A CN117617002A (en) 2024-01-04 2024-01-04 Method for automatically identifying tomatoes and intelligently harvesting tomatoes

Publications (1)

Publication Number Publication Date
CN117617002A true CN117617002A (en) 2024-03-01

Family

ID=90027183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410008288.0A Pending CN117617002A (en) 2024-01-04 2024-01-04 Method for automatically identifying tomatoes and intelligently harvesting tomatoes

Country Status (1)

Country Link
CN (1) CN117617002A (en)


Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EA201101257A1 (en) * 2011-10-03 2013-04-30 Открытое Акционерное Общество "Казанский Жировой Комбинат" METHOD OF BIOCONTROL QUALITY OF KETCHUP AND SAUCES ON A TOMATO BASIS
CN104742127A (en) * 2015-04-08 2015-07-01 深圳市山龙科技有限公司 Robot control method and robot
JP2019097448A (en) * 2017-11-30 2019-06-24 株式会社デンソー Harvesting robot system
CN109407621A (en) * 2018-01-30 2019-03-01 武汉呵尔医疗科技发展有限公司 S type acceleration and deceleration motion control method in a kind of sampling mechanical arm interpolation
CN114022878A (en) * 2021-08-28 2022-02-08 山西农业大学 Improved YOLOv 5-based string-type tomato real-time detection method
CN113575111A (en) * 2021-09-01 2021-11-02 南京农业大学 Real-time identification positioning and intelligent picking device for greenhouse tomatoes
CN113808194A (en) * 2021-11-17 2021-12-17 季华实验室 Method and device for acquiring picking angle of cluster tomatoes, electronic equipment and storage medium
CN114175927A (en) * 2021-11-29 2022-03-15 季华实验室 Cherry tomato picking method and cherry tomato picking manipulator
CN114846998A (en) * 2022-05-27 2022-08-05 云南农业大学 Tomato picking method and system of binocular robot based on YOLOv4 algorithm
CN115424247A (en) * 2022-06-24 2022-12-02 中国农业科学院农业信息研究所 Greenhouse tomato identification and detection method adopting CBAM and octave convolution to improve YOLOV5
CN115139315A (en) * 2022-07-26 2022-10-04 大连理工大学 Grabbing motion planning method for picking mechanical arm
RU2796270C1 (en) * 2022-09-11 2023-05-22 Federal State Budgetary Educational Institution of Higher Education "Lomonosov Moscow State University" (MSU) Device for automated picking of tomatoes
CN115299245A (en) * 2022-09-13 2022-11-08 南昌工程学院 Control method and control system of intelligent fruit picking robot
CN116391506A (en) * 2022-09-22 2023-07-07 江苏大学 Tomato fruit high-speed collection system and method and tomato fruit picking machine
CN116030456A (en) * 2023-01-09 2023-04-28 吉林大学 Detection method for tomato maturity of string based on improved YOLOv5 network
CN115984704A (en) * 2023-02-10 2023-04-18 浙江理工大学 Plant and fruit detection algorithm of tomato picking robot
CN116524344A (en) * 2023-02-14 2023-08-01 山西农业大学 Tomato string picking point detection method based on RGB-D information fusion
CN116110042A (en) * 2023-02-15 2023-05-12 山西农业大学 Tomato detection method based on CBAM attention mechanism of YOLOv7
CN116363505A (en) * 2023-03-07 2023-06-30 中科合肥智慧农业协同创新研究院 Target picking method based on picking robot vision system
CN117152735A (en) * 2023-09-01 2023-12-01 安徽大学 Tomato maturity grading method based on improved yolov5s

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Haifang: "Apple detection and fruit diameter estimation method based on improved YOLOv4", Laser Journal, 28 February 2022 (2022-02-28), pages 58-65 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination