CN115529967B - Grape bud picking robot for wine brewing and bud picking method

Grape bud picking robot for wine brewing and bud picking method

Info

Publication number
CN115529967B
Authority
CN
China
Prior art keywords
main controller
mechanical arm
depth camera
bud
buds
Prior art date
Legal status
Active
Application number
CN202211368560.3A
Other languages
Chinese (zh)
Other versions
CN115529967A (en)
Inventor
苏宝峰 (Su Baofeng)
张士豪 (Zhang Shihao)
房玉林 (Fang Yulin)
宋育阳 (Song Yuyang)
Current Assignee
Northwest A&F University
Original Assignee
Northwest A&F University
Priority date
Filing date
Publication date
Application filed by Northwest A&F University filed Critical Northwest A&F University
Priority to CN202211368560.3A
Publication of CN115529967A
Application granted
Publication of CN115529967B


Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01G: HORTICULTURE; CULTIVATION OF VEGETABLES, FLOWERS, RICE, FRUIT, VINES, HOPS OR SEAWEED; FORESTRY; WATERING
    • A01G7/00: Botany in general
    • A01G7/06: Treatment of growing trees or plants, e.g. for preventing decay of wood, for tingeing flowers or wood, for prolonging the life of plants
    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01G: HORTICULTURE; CULTIVATION OF VEGETABLES, FLOWERS, RICE, FRUIT, VINES, HOPS OR SEAWEED; FORESTRY; WATERING
    • A01G17/00: Cultivation of hops, vines, fruit trees, or like trees
    • A01G17/02: Cultivation of hops or vines
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00: Manipulators not otherwise provided for
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00: Manipulators mounted on wheels or on carriages
    • B25J5/005: Manipulators mounted on wheels or on carriages mounted on endless tracks or belts

Landscapes

  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Botany (AREA)
  • Environmental Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Ecology (AREA)
  • Forests & Forestry (AREA)
  • Wood Science & Technology (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a wine grape bud picking robot and a bud picking method. The robot comprises a depth camera and a fixed bracket: the bottom of the depth camera is connected with the fixed bracket, the depth camera is bolted to the fixed bracket through two M3 fixing holes at the rear of the bracket, and the depth camera is electrically connected with a main controller through wires. A laser module is fixedly mounted in a circular through hole in the middle of the fixed bracket through M2 fixing holes; the bottom surface of the fixed bracket is connected with a mechanical arm, and the mechanical arm is bolted to the bracket through four M3 fixing holes at its end. The invention erases the auxiliary buds of wine grapes in spring using RGB-D sensing, deep learning and motion path planning; it locates the auxiliary buds efficiently and precisely while reducing the orchard pollution caused by bud picking, thereby improving the automation of bud picking and the overall yield of the orchard and reducing the labor intensity and labor cost of orchard managers.

Description

Grape bud picking robot for wine brewing and bud picking method
Technical Field
The invention belongs to the field of flower and fruit thinning of fruits and vegetables, and particularly relates to a wine grape bud picking robot and a bud picking method.
Background
Grape is one of the earliest fruit tree species cultivated in the world, with widely used fruit, long plant life and high economic benefit. Bud picking erases poorly developed buds, weakly grown buds and some buds growing too densely on the grape vines, finally retaining one bud in better growing condition. Twin buds, also called double buds, are the most common paired-bud trait during the grape germination period and are generally divided into a main bud and an auxiliary bud.
Existing grape bud picking devices fall into mechanical and pesticide-spraying types. The mechanical device (patent publication No. CN102165897A, patent right lapsed in 2015) uses a driving plate and a cutter disc to erase grape buds, but it does not use an optical method to distinguish main buds from auxiliary buds and only erases pointed and poorly developed buds, so its precision is poor; it relieves the labor intensity of orchard managers during the bud picking period, but its yield-increasing effect is poor because main and auxiliary buds are erased together. The spraying device inhibits bud growth by applying an inhibitor, usually from manual knapsack sprayers and only occasionally by machine; it suppresses the growth of main and auxiliary buds simultaneously, which is unfavorable for flower and fruit thinning to increase orchard yield, and it is only suitable for winter bud picking in certain areas. In the Guanzhong region winter buds are generally removed by pruning, so it cannot be applied there.
Therefore, neither of these two prior arts accurately distinguishes the main bud from the auxiliary bud, and both act over a large area, resulting in low bud picking precision that is unfavorable to subsequent grape yield.
In summary, the invention provides a grape bud picking robot for wine brewing and a bud picking method to solve these problems.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a wine grape bud picking robot and a bud picking method, which address the problems in the prior art that main buds and auxiliary buds are not accurately distinguished and the operation area is large, so that bud picking precision is low and subsequent grape yield increase is hindered.
The wine grape bud picking robot comprises a depth camera and a fixed bracket; the bottom of the depth camera is connected with the fixed bracket, the depth camera is bolted to the fixed bracket through two M3 fixing holes at the rear of the bracket, and the depth camera is electrically connected with a main controller through an electric wire;
A laser module is fixedly mounted in a circular through hole in the middle of the fixed bracket through M2 fixing holes; the bottom surface of the fixed bracket is connected with a mechanical arm, and the mechanical arm is bolted to the fixed bracket through four M3 fixing holes at its end; the laser module is connected with a GPIO interface at the end of the mechanical arm through a Dupont wire; the main controller is fixed on the inner side of the bottom of the mechanical arm; the bottom of the mechanical arm is connected with a crawler chassis, and the crawler chassis is bolted to the mechanical arm through four M4 fixing holes in its surface fixing plate.
Preferably, the depth camera is an Intel RealSense D435, and the laser module has an output power of 250 mW and a wavelength of 650 nm.
Preferably, the mechanical arm is a MyCobot-280, and the main controller is a Raspberry Pi 4B.
Preferably, the main controller is connected with the drive board of the mechanical arm through the 40-pin GPIO interface, and the target detection network used by the main controller is a YOLOv target detection network.
A wine grape bud picking method comprises the following steps:
S1, the crawler chassis moves forwards slowly, and the mechanical arm reads the motor encoder data and performs initialization calibration; after the calibration is finished, the main controller starts the depth camera; after the depth camera initializes its parameters, it operates the color sensor; after the main controller obtains the RGB image from the depth camera, it identifies twin buds in the RGB image;
S2, after the twin buds are detected, the main controller controls the crawler chassis to stop and records the two-dimensional coordinates of the twin-bud centroid in the camera coordinate system;
S3, the main controller controls the depth camera to acquire depth data at the two-dimensional coordinates of the twin-bud centroid in the camera coordinate system; the depth camera acquires the depth data at the current two-dimensional coordinates using its infrared laser emitter and infrared sensor and converts it into the depth coordinate of the twin-bud centroid, and the main controller records the three-dimensional coordinates of the twin-bud centroid in the world coordinate system at this moment;
S4, the main controller publishes the recorded three-dimensional coordinates of the twin-bud centroid in the world coordinate system to a ROS node; the coordinate transformation relation between the mechanical arm and the twin-bud centroid is determined through the RViz platform in ROS, and the relation is recorded in quaternion form;
S5, the quaternion expressed between the mechanical arm and the twin buds is loaded into the path planning model through the MoveIt plug-in of the RViz platform; the mechanical arm receives the generated path track and executes it;
S6, after the mechanical arm moves to the first designated point, the main controller controls the depth camera to operate the color sensor; after the main controller obtains the RGB image from the depth camera, it identifies auxiliary buds in the RGB image;
S7, after the auxiliary bud is detected, the main controller records the two-dimensional coordinates of the auxiliary-bud centroid in the camera coordinate system;
S8, the main controller controls the depth camera to acquire depth data at the two-dimensional coordinates of the auxiliary-bud centroid in the camera coordinate system; the depth camera acquires the depth data at the current two-dimensional coordinates using its infrared laser emitter and infrared sensor and converts it into the depth coordinate of the auxiliary-bud centroid, and the main controller records the three-dimensional coordinates of the auxiliary-bud centroid in the world coordinate system at this moment;
S9, the main controller publishes the recorded three-dimensional coordinates of the auxiliary-bud centroid in the world coordinate system to the ROS node; the coordinate transformation relation between the mechanical arm and the auxiliary-bud centroid is determined through the RViz platform in ROS, and the relation is recorded in quaternion form;
S10, the quaternion expressed between the mechanical arm and the auxiliary bud is loaded into the path planning model through the MoveIt plug-in of the RViz platform; the mechanical arm receives the generated path track and executes it;
S11, after the mechanical arm moves to the second designated point, the main controller controls the GPIO interface at the end of the mechanical arm in TTL mode, thereby controlling the laser module to emit laser and erase the auxiliary bud;
S12, after the auxiliary bud is erased, the main controller controls the depth camera to operate the color sensor; after the main controller obtains the RGB image from the depth camera, it identifies auxiliary buds in the RGB image; if no auxiliary bud is detected in 4 frames of images, the mechanical arm returns to the initial position and the main controller controls the crawler chassis to move forwards slowly; otherwise the above steps are repeated.
Preferably, in S1 the main controller controls the depth camera through a USB Type-C line.
Preferably, in S3 the intrinsic parameters of the depth camera are used to convert the image coordinate system into the camera coordinate system, and the centroid coordinates of the grape twin bud and auxiliary bud in the image coordinate system are registered.
Preferably, in S10 the mechanical arm reads its end position, judges whether the coordinates of the grape auxiliary bud lie within the workspace at this moment, performs inverse kinematics analysis, and substitutes the result into the A* algorithm for optimal path planning in MoveIt.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention runs a deep learning network on depth camera imagery, so grape twin buds can be identified rapidly; the twin-bud region is then analyzed to distinguish the main bud from the auxiliary bud, and the three-dimensional coordinates of the auxiliary bud in the real coordinate system are published to the mechanical arm. After the mechanical arm subscribes to the coordinates, the laser module carried at its end moves with the arm to the designated point and erases the auxiliary bud by emitting laser.
2. The invention can accurately identify the main buds and the auxiliary buds by using the RGB-D technology, effectively improves the identification and positioning precision of the auxiliary buds, and can identify and erase the grape auxiliary buds with different heights in a large range by using the mechanical arm.
3. Using the crawler chassis and the ROS operating system, path planning through the orchard can be performed, realizing automated bud erasing across the orchard, improving the automation of grape auxiliary bud erasing and reducing the labor intensity of vineyard managers.
4. The invention uses laser for bud picking, which reduces pollution and improves the accuracy of auxiliary bud erasing.
Drawings
FIG. 1 is a schematic perspective view of the wine grape bud picking robot of the invention;
FIG. 2 is a schematic side view of the wine grape bud picking robot of the invention;
FIG. 3 is a schematic rear view of the wine grape bud picking robot of the invention;
FIG. 4 is a schematic front view of the wine grape bud picking robot of the invention;
FIG. 5 is a technical roadmap of the wine grape bud picking method of the invention.
In the figure:
1. a depth camera; 2. a fixed bracket; 3. a laser module; 4. a mechanical arm; 5. a main controller; 6. a crawler chassis.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following examples are illustrative of the invention but are not intended to limit the scope of the invention.
As shown in figs. 1-5, the invention provides a wine grape bud picking robot and a bud picking method. The robot comprises a depth camera 1 and a fixed bracket 2; the bottom of the depth camera 1 is connected with the fixed bracket 2, the depth camera 1 is bolted to the fixed bracket 2 through two M3 fixing holes at the rear of the bracket 2, and the depth camera 1 is electrically connected with a main controller 5 through wires;
The laser module 3 is fixedly mounted in the circular through hole in the middle of the fixed bracket 2 through the M2 fixing holes; the bottom surface of the fixed bracket 2 is connected with the mechanical arm 4, and the mechanical arm 4 is bolted to the fixed bracket 2 through the four M3 fixing holes at its end; the laser module 3 is connected with the GPIO interface at the end of the mechanical arm 4 through a Dupont wire; the main controller 5 is fixed on the inner side of the bottom of the mechanical arm 4; the bottom of the mechanical arm 4 is connected with the crawler chassis 6, and the crawler chassis 6 is bolted to the mechanical arm 4 through the four M4 fixing holes in its surface fixing plate.
Referring to figs. 1, 2 and 4, the depth camera 1 is an Intel RealSense D435, and the laser module 3 has an output power of 250 mW and a wavelength of 650 nm.
Referring to figs. 1-4, the mechanical arm 4 is a MyCobot-280, and the main controller 5 is a Raspberry Pi 4B.
Referring to figs. 2-4, the main controller 5 is connected to the drive board of the mechanical arm 4 through the 40-pin GPIO interface, and the target detection network used by the main controller 5 is a YOLOv target detection network.
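For illustration only, the following is a minimal sketch of how such a detection stage could be wired together in Python, assuming an Intel RealSense color stream read through pyrealsense2 and a YOLO-series detector loaded through the ultralytics package; the weights file name and class labels are hypothetical, since the source does not specify the network version or training details.

```python
# Hedged sketch, not the patented implementation: grab one RGB frame from the
# D435 and run a YOLO-series detector on it to find twin-bud centroids (S2/S7).
import numpy as np
import pyrealsense2 as rs
from ultralytics import YOLO

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

model = YOLO("bud_detector.pt")  # hypothetical weights trained on twin-bud images

try:
    frames = pipeline.wait_for_frames()
    rgb = np.asanyarray(frames.get_color_frame().get_data())
    results = model(rgb)
    for box in results[0].boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # 2-D centroid in the image
        print(f"class={int(box.cls)} conf={float(box.conf):.2f} centroid=({u:.1f}, {v:.1f})")
finally:
    pipeline.stop()
```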
A wine grape bud picking method comprises the following steps:
S1, the crawler chassis 6 moves forwards slowly, and the mechanical arm 4 reads the motor encoder data and performs initialization calibration; after the calibration is finished, the main controller 5 starts the depth camera 1; after the depth camera 1 initializes its parameters, it operates the color sensor; after the main controller 5 obtains the RGB image from the depth camera 1, it runs the deep-learning-based target detection network to identify twin buds in the RGB image;
S2, after the twin buds are detected, the main controller 5 controls the crawler chassis 6 to stop and records the two-dimensional coordinates of the twin-bud centroid in the camera coordinate system;
S3, the main controller 5 controls the depth camera 1 to acquire depth data at the two-dimensional coordinates of the twin-bud centroid in the camera coordinate system; the depth camera 1 acquires the depth data at the current two-dimensional coordinates using its infrared laser emitter and infrared sensor, converts it into the depth coordinate of the twin-bud centroid by bilinear interpolation according to the correspondence of the reference scale inside the depth camera 1, and the main controller 5 records the three-dimensional coordinates of the twin-bud centroid in the world coordinate system at this moment (a deprojection sketch is given after these steps);
S4, the main controller 5 publishes the recorded three-dimensional coordinates of the twin-bud centroid in the world coordinate system to the ROS node; the coordinate transformation relation between the mechanical arm 4 and the twin-bud centroid is determined through the RViz platform in ROS and recorded in quaternion form;
S5, the quaternion expressed between the mechanical arm 4 and the twin buds is loaded into the path planning model through the MoveIt plug-in of the RViz platform; the mechanical arm 4 receives the generated path track and executes it;
S6, after the mechanical arm 4 moves to the first designated point, the main controller 5 controls the depth camera 1 to operate the color sensor; after the main controller 5 obtains the RGB image from the depth camera 1, it runs the deep-learning-based target detection network to identify auxiliary buds in the RGB image;
S7, after the auxiliary bud is detected, the main controller 5 records the two-dimensional coordinates of the auxiliary-bud centroid in the camera coordinate system;
S8, the main controller 5 controls the depth camera 1 to acquire depth data at the two-dimensional coordinates of the auxiliary-bud centroid in the camera coordinate system; the depth camera 1 acquires the depth data at the current two-dimensional coordinates using its infrared laser emitter and infrared sensor, converts it into the depth coordinate of the auxiliary-bud centroid by bilinear interpolation according to the correspondence of the reference scale inside the depth camera 1, and the main controller 5 records the three-dimensional coordinates of the auxiliary-bud centroid in the world coordinate system at this moment;
S9, the main controller 5 publishes the recorded three-dimensional coordinates of the auxiliary-bud centroid in the world coordinate system to the ROS node; the coordinate transformation relation between the mechanical arm 4 and the auxiliary-bud centroid is determined through the RViz platform in ROS and recorded in quaternion form;
S10, the quaternion expressed between the mechanical arm 4 and the auxiliary bud is loaded into the path planning model through the MoveIt plug-in of the RViz platform; the mechanical arm 4 receives the generated path track and executes it;
S11, after the mechanical arm 4 moves to the second designated point, the main controller 5 controls the GPIO interface at the end of the mechanical arm 4 in TTL mode, thereby controlling the laser module 3 to emit laser and erase the auxiliary bud;
S12, after the auxiliary bud is erased, the main controller 5 controls the depth camera 1 to operate the color sensor; after the main controller 5 obtains the RGB image from the depth camera 1, it runs the deep-learning-based target detection network to identify auxiliary buds in the RGB image; if no auxiliary bud is detected in 4 frames of images, the mechanical arm 4 returns to the initial position and the main controller 5 controls the crawler chassis 6 to move forwards slowly; otherwise the above steps are repeated.
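The deprojection in steps S3/S8 can be sketched as follows, assuming the depth frame has been aligned to the color stream (e.g. with rs.align) so that the detected centroid pixel indexes both images; the bilinear interpolation mirrors the description, and the function names are illustrative.

```python
# Hedged sketch of S3/S8: depth at the centroid pixel, bilinearly interpolated
# from the four neighbouring depth pixels, then deprojected into a 3-D point in
# the camera coordinate system using the camera intrinsics.
import math
import pyrealsense2 as rs

def depth_bilinear(depth_frame, u, v):
    """Bilinear interpolation of depth (in metres) at a sub-pixel location."""
    u0, v0 = int(math.floor(u)), int(math.floor(v))
    du, dv = u - u0, v - v0
    d00 = depth_frame.get_distance(u0, v0)
    d10 = depth_frame.get_distance(u0 + 1, v0)
    d01 = depth_frame.get_distance(u0, v0 + 1)
    d11 = depth_frame.get_distance(u0 + 1, v0 + 1)
    return (d00 * (1 - du) * (1 - dv) + d10 * du * (1 - dv)
            + d01 * (1 - du) * dv + d11 * du * dv)

def centroid_to_camera_xyz(depth_frame, u, v):
    """Deproject the (u, v) centroid into camera-frame XYZ coordinates."""
    intrin = depth_frame.profile.as_video_stream_profile().intrinsics
    return rs.rs2_deproject_pixel_to_point(intrin, [u, v], depth_bilinear(depth_frame, u, v))
```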
Referring to fig. 5, in S1 the main controller 5 controls the depth camera 1 through a USB Type-C line; by using the depth camera 1, the main bud and the auxiliary bud can be identified accurately, effectively improving the recognition and positioning accuracy of the auxiliary bud.
Referring to fig. 5, in S3 the intrinsic parameters of the depth camera 1 are used to convert the image coordinate system into the camera coordinate system, and the centroid coordinates of the grape twin bud and auxiliary bud in the image coordinate system are registered.
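Carrying the camera-frame point into the world coordinate system and recording the relation in quaternion form, as in S4/S9, might look like the ROS 1 sketch below; the frame names (camera_color_optical_frame, base_link) and the topic name are assumptions, not values given in the patent.

```python
# Hedged sketch of S4/S9: transform the bud centroid from the camera frame to
# the world/base frame with tf2 and publish it as a PoseStamped (quaternion).
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PoseStamped support for Buffer.transform
from geometry_msgs.msg import PoseStamped

rospy.init_node("bud_centroid_publisher")
tf_buffer = tf2_ros.Buffer()
tf2_ros.TransformListener(tf_buffer)
rospy.sleep(1.0)  # give the listener time to fill the buffer

cam_pose = PoseStamped()
cam_pose.header.frame_id = "camera_color_optical_frame"  # assumed camera frame
cam_pose.header.stamp = rospy.Time(0)  # use the latest available transform
# Illustrative camera-frame coordinates (metres) from the deprojection step:
cam_pose.pose.position.x, cam_pose.pose.position.y, cam_pose.pose.position.z = 0.05, -0.02, 0.42
cam_pose.pose.orientation.w = 1.0  # identity orientation for a point target

world_pose = tf_buffer.transform(cam_pose, "base_link", rospy.Duration(1.0))
pub = rospy.Publisher("/auxiliary_bud_pose", PoseStamped, queue_size=1, latch=True)
pub.publish(world_pose)
```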
Referring to fig. 5, in S10 the mechanical arm 4 reads its end position and judges whether the coordinates of the grape auxiliary bud lie within the workspace at this moment; if not, the mechanical arm 4 returns to the initial position; if so, inverse kinematics analysis is performed and the result is substituted into the A* algorithm for optimal path planning in MoveIt.
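A minimal MoveIt planning sketch for S5/S10 follows, assuming ROS Noetic's moveit_commander Python API and a planning group named arm_group (both assumptions); note that the A* search named above would have to be configured as a planner plugin, since stock MoveIt ships sampling-based OMPL planners by default.

```python
# Hedged sketch of S5/S10: plan to the bud pose; a failed plan implies the
# target lies outside the workspace, in which case the arm returns home.
import sys
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
arm = moveit_commander.MoveGroupCommander("arm_group")  # hypothetical group name

target = Pose()
target.position.x, target.position.y, target.position.z = 0.18, -0.02, 0.35  # illustrative
target.orientation.w = 1.0  # quaternion recorded in S4/S9; identity for brevity

arm.set_pose_target(target)
success, plan, _, _ = arm.plan()  # MoveIt runs IK internally; Noetic returns a 4-tuple
if success:
    arm.execute(plan, wait=True)  # follow the generated path track
else:
    arm.clear_pose_targets()
    arm.set_named_target("init")  # hypothetical named pose: restore initial position
    arm.go(wait=True)
```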
The specific working principle is as follows. As shown in figs. 1-5, when the wine grape bud picking robot and method are used, the crawler chassis 6 first moves forwards slowly, and the mechanical arm 4 reads the motor encoder data and performs initial calibration. After the calibration is completed, the main controller 5 starts the depth camera 1 through the USB Type-C line; after the depth camera 1 initializes its parameters, it operates the color sensor. After the main controller 5 obtains the RGB image from the depth camera 1, it runs the deep-learning-based target detection network and identifies twin buds in the RGB image. After the twin buds are detected, the main controller 5 controls the crawler chassis 6 to stop and records the two-dimensional coordinates of the twin-bud centroid in the camera coordinate system. The main controller 5 then controls the depth camera 1 to acquire depth data at those coordinates; the depth camera 1 acquires the depth data using its infrared laser emitter and infrared sensor, converts it into the depth coordinate of the twin-bud centroid by bilinear interpolation according to the correspondence of the reference scale inside the depth camera 1, and the main controller 5 records the three-dimensional coordinates of the twin-bud centroid in the world coordinate system at this moment. The main controller 5 publishes these coordinates to the ROS node; the coordinate transformation relation between the mechanical arm 4 and the twin-bud centroid is determined through the RViz platform in ROS and recorded in quaternion form. The quaternion is loaded into the path planning model through the MoveIt plug-in of the RViz platform, and the mechanical arm 4 receives the generated path track and executes it. After the mechanical arm 4 moves to the first designated point, the main controller 5 controls the depth camera 1 to operate the color sensor; after obtaining the RGB image, the main controller 5 runs the target detection network and identifies the auxiliary bud in the image. After the auxiliary bud is detected, the main controller 5 records the two-dimensional coordinates of the auxiliary-bud centroid in the camera coordinate system, controls the depth camera 1 to acquire depth data there in the same way, converts it into the depth coordinate of the auxiliary-bud centroid, and records the three-dimensional coordinates of the auxiliary-bud centroid in the world coordinate system. The main controller 5 publishes these coordinates to the ROS node; the transformation relation between the mechanical arm 4 and the auxiliary-bud centroid is determined through the RViz platform and recorded in quaternion form, the quaternion is loaded into the path planning model through the MoveIt plug-in, and the mechanical arm 4 executes the generated path track. After the mechanical arm 4 moves to the second designated point, the main controller 5 controls the GPIO interface at the end of the mechanical arm 4 in TTL mode, thereby controlling the laser module 3 to emit laser and erase the auxiliary bud. After the auxiliary bud is erased, the main controller 5 controls the depth camera 1 to operate the color sensor; after obtaining the RGB image, it runs the target detection network to identify auxiliary buds. If no auxiliary bud is detected in 4 frames of images, the mechanical arm 4 returns to the initial position and the main controller 5 controls the crawler chassis 6 to move forwards slowly; otherwise the above steps are repeated. This is the complete working cycle of the wine grape bud picking robot and bud picking method.
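The laser actuation in S11 reduces to toggling a TTL line. A hedged sketch using the pymycobot library is shown below; the serial port, baud rate, pin number and exposure time are illustrative assumptions, not values from the patent, and it is likewise an assumption that set_digital_output is the call used to drive the arm-end IO.

```python
# Hedged sketch of S11: pulse the 650 nm laser wired to the arm-end GPIO.
import time
from pymycobot.mycobot import MyCobot

mc = MyCobot("/dev/ttyAMA0", 1000000)  # assumed Raspberry Pi serial wiring

LASER_PIN = 23                       # hypothetical end-effector IO pin
mc.set_digital_output(LASER_PIN, 1)  # TTL high: laser on, erase the auxiliary bud
time.sleep(0.5)                      # assumed exposure time
mc.set_digital_output(LASER_PIN, 0)  # TTL low: laser off
```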
The embodiments of the present invention have been shown and described for purposes of illustration. It should be understood that the above embodiments are illustrative and are not to be construed as limiting the invention; variations, modifications, substitutions and alterations may be made by one of ordinary skill in the art without departing from the scope of the invention.

Claims (4)

1. A wine grape bud picking method based on a wine grape bud picking robot, the robot comprising a depth camera (1) and a fixed bracket (2), characterized in that: the bottom of the depth camera (1) is connected with the fixed bracket (2), the depth camera (1) is bolted to the fixed bracket (2) through two M3 fixing holes at the rear of the fixed bracket (2), and the depth camera (1) is electrically connected with a main controller (5) through an electric wire;
The laser module (3) is fixedly mounted in a circular through hole in the middle of the fixed bracket (2) through an M2 fixing hole; the bottom surface of the fixed bracket (2) is connected with the mechanical arm (4), and the mechanical arm (4) is bolted to the fixed bracket (2) through the four M3 fixing holes at its end; the laser module (3) is connected with a GPIO interface at the end of the mechanical arm (4) through a Dupont wire; the main controller (5) is fixed on the inner side of the bottom of the mechanical arm (4); the bottom of the mechanical arm (4) is connected with the crawler chassis (6), and the crawler chassis (6) is bolted to the mechanical arm (4) through the four M4 fixing holes in its surface fixing plate;
the wine grape bud picking method comprises the following steps:
S1, the crawler chassis (6) moves forwards slowly, and the mechanical arm (4) reads the motor encoder data and performs initialization calibration; after the calibration is finished, the main controller (5) starts the depth camera (1); after the depth camera (1) initializes its parameters, it operates the color sensor; after the main controller (5) obtains the RGB image from the depth camera (1), it runs the deep-learning-based target detection network to identify twin buds in the RGB image;
S2, after the twin buds are detected, the main controller (5) controls the crawler chassis (6) to stop and records the two-dimensional coordinates of the twin-bud centroid in the camera coordinate system;
S3, the main controller (5) controls the depth camera (1) to acquire depth data at the two-dimensional coordinates of the twin-bud centroid in the camera coordinate system; the depth camera (1) acquires the depth data at the current two-dimensional coordinates using its infrared laser emitter and infrared sensor, converts it into the depth coordinate of the twin-bud centroid by bilinear interpolation according to the correspondence of the reference scale inside the depth camera (1), and the main controller (5) records the three-dimensional coordinates of the twin-bud centroid in the world coordinate system at this moment;
S4, the main controller (5) publishes the recorded three-dimensional coordinates of the twin-bud centroid in the world coordinate system to the ROS node; the coordinate transformation relation between the mechanical arm (4) and the twin-bud centroid is determined through the RViz platform in ROS and recorded in quaternion form;
S5, the quaternion expressed between the mechanical arm (4) and the twin buds is loaded into the path planning model through the MoveIt plug-in of the RViz platform; the mechanical arm (4) receives the generated path track and executes it;
S6, after the mechanical arm (4) moves to the first designated point, the main controller (5) controls the depth camera (1) to operate the color sensor; after the main controller (5) obtains the RGB image from the depth camera (1), it runs the deep-learning-based target detection network to identify auxiliary buds in the RGB image;
S7, after the auxiliary bud is detected, the main controller (5) records the two-dimensional coordinates of the auxiliary-bud centroid in the camera coordinate system;
S8, the main controller (5) controls the depth camera (1) to acquire depth data at the two-dimensional coordinates of the auxiliary-bud centroid in the camera coordinate system; the depth camera (1) acquires the depth data at the current two-dimensional coordinates using its infrared laser emitter and infrared sensor, converts it into the depth coordinate of the auxiliary-bud centroid by bilinear interpolation according to the correspondence of the reference scale inside the depth camera (1), and the main controller (5) records the three-dimensional coordinates of the auxiliary-bud centroid in the world coordinate system at this moment;
S9, the main controller (5) publishes the recorded three-dimensional coordinates of the auxiliary-bud centroid in the world coordinate system to the ROS node; the coordinate transformation relation between the mechanical arm (4) and the auxiliary-bud centroid is determined through the RViz platform in ROS and recorded in quaternion form;
S10, the quaternion expressed between the mechanical arm (4) and the auxiliary bud is loaded into the path planning model through the MoveIt plug-in of the RViz platform; the mechanical arm (4) receives the generated path track and executes it;
S11, after the mechanical arm (4) moves to the second designated point, the main controller (5) controls the GPIO interface at the end of the mechanical arm (4) in TTL mode, thereby controlling the laser module (3) to emit laser and erase the auxiliary bud;
S12, after the auxiliary bud is erased, the main controller (5) controls the depth camera (1) to operate the color sensor; after the main controller (5) obtains the RGB image from the depth camera (1), it runs the deep-learning-based target detection network to identify auxiliary buds in the RGB image; if no auxiliary bud is detected in 4 frames of images, the mechanical arm (4) returns to the initial position and the main controller (5) controls the crawler chassis (6) to move forwards slowly; otherwise the above steps are repeated;
In S1, the main controller (5) controls the depth camera (1) through a USB Type-C line;
In S3, the intrinsic parameters of the depth camera (1) are used to convert the image coordinate system into the camera coordinate system, and the centroid coordinates of the grape twin bud and auxiliary bud in the image coordinate system are registered;
In S10, the mechanical arm (4) reads its end position and judges whether the coordinates of the grape auxiliary bud lie within the workspace at this moment; if not, the mechanical arm (4) returns to the initial position; if so, inverse kinematics analysis is performed and the result is substituted into the A* algorithm for optimal path planning in MoveIt.
2. The wine grape bud picking method based on the wine grape bud picking robot according to claim 1, characterized in that: the depth camera (1) is an Intel RealSense D435, and the laser module (3) is a laser module with an output power of 250 mW and a wavelength of 650 nm.
3. The wine grape bud picking method based on the wine grape bud picking robot according to claim 1, characterized in that: the mechanical arm (4) is a MyCobot-280, and the main controller (5) is a Raspberry Pi 4B.
4. The wine grape bud picking method based on the wine grape bud picking robot according to claim 1, characterized in that: the main controller (5) is connected with the drive board of the mechanical arm (4) through the 40-pin GPIO interface, and the target detection network used by the main controller (5) is a YOLOv target detection network.
CN202211368560.3A 2022-11-03 2022-11-03 Grape bud picking robot for wine brewing and bud picking method Active CN115529967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211368560.3A 2022-11-03 2022-11-03 Grape bud picking robot for wine brewing and bud picking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211368560.3A 2022-11-03 2022-11-03 Grape bud picking robot for wine brewing and bud picking method

Publications (2)

Publication Number Publication Date
CN115529967A (en) 2022-12-30
CN115529967B (en) 2024-06-21

Family

ID=84720570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211368560.3A Active CN115529967B (en) 2022-11-03 2022-11-03 Grape bud picking robot for wine brewing and bud picking method

Country Status (1)

Country Link
CN (1) CN115529967B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113597966A (en) * 2021-07-20 2021-11-05 山东农业大学 Intelligent robot and method for flower and fruit thinning of grapes based on image recognition
CN115067099A (en) * 2021-03-15 2022-09-20 西北农林科技大学 Apple flower thinning machine

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7854108B2 (en) * 2003-12-12 2010-12-21 Vision Robotics Corporation Agricultural robot system and method
FR2994057B1 * 2012-07-31 2015-04-03 Tiam Vine pruning robot comprising image capturing means using laser beam projection means
CN103444452A (en) * 2013-09-09 2013-12-18 镇江万山红遍农业园 Method for promoting grape fruitage and parent branch sprouting orderliness
WO2015059021A1 (en) * 2013-10-25 2015-04-30 Basf Se System and method for extracting buds from a stalk of a graminaceous plant
CA2996575A1 (en) * 2018-02-06 2019-08-06 Mary Elizabeth Ann Brooks Cannabis bud trimming tool cleaning device and methodology
BR112022005239A2 (en) * 2019-10-01 2022-06-14 Monsanto Technology Llc Cross-pollination via liquid-mediated pollen transfer to closed stigmas of flowers from recipient plants
CN111837678B (en) * 2020-06-24 2022-08-23 江苏大学 Tobacco flying topping robot
CN111837701A (en) * 2020-09-02 2020-10-30 山东农业大学 Flower thinning arm based on image recognition and use method thereof
CN112233121A (en) * 2020-10-16 2021-01-15 中国农业科学院农业资源与农业区划研究所 Fruit yield estimation method based on binocular space positioning and intelligent segmentation
CN114511849B (en) * 2021-12-30 2024-05-17 广西慧云信息技术有限公司 Grape thinning identification method based on graph attention network
CN114731840B (en) * 2022-04-07 2022-12-27 仲恺农业工程学院 Double-mechanical-arm tea picking robot based on machine vision
CN115082815B (en) * 2022-07-22 2023-04-07 山东大学 Tea bud picking point positioning method and device based on machine vision and picking system


Also Published As

Publication number Publication date
CN115529967A (en) 2022-12-30

Similar Documents

Publication Publication Date Title
CN111418349B (en) Intelligent fruit picking robot and method for realizing fruit picking
CN109848955B (en) Suspension type track agriculture intelligent inspection robot based on multidimensional sensor
CN110579987A (en) intelligent orchard information control system and method based on LORA communication
CN108633482A (en) A kind of fruit picking aircraft
CN114080905B (en) Picking method based on digital twins and cloud picking robot system
CN113207675B (en) Airflow vibration type facility crop automatic pollination device and method
CN109247153A (en) A kind of fertile mandarin orange branch pruning intelligent robot based on Internet of Things
Khort et al. Robotized platform for picking of strawberry berries
CN115529967B (en) Grape bud picking robot for wine brewing and bud picking method
CN111085982A (en) Orchard robot active vision detection system and detection method
KR20100106883A (en) Automatic pest recognition and control system and method
CN114190358A (en) Control method of bird repelling device, electronic device, and storage medium
JP2023158970A (en) Forestry management system and forestry management method
WO2023231408A1 (en) Automatic fruit harvesting apparatus mounted on unmanned aerial vehicle, and control method therefor
Chatzimichali et al. Design of an advanced prototype robot for white asparagus harvesting
CN108934613B (en) Automatic sprinkling irrigation system of solar photovoltaic vegetable planting greenhouse
CN111360782A (en) Multifunctional robot based on aerial rail type
CN216328365U (en) Material taking composite robot for ground vat fermentation process
CN212083995U (en) Based on wheeled multi-functional robot
CN114137974A (en) Intertillage control method, device and system and electronic equipment
CN216982615U (en) Tomato picking robot
CN111338357A (en) Based on wheeled multi-functional robot
CN115104588A (en) Automatic spraying machine who sprays liquid medicine of distinguishable leaf surface plant diseases and insect pests intelligence
CN112098369B (en) Method and device for detecting apple flowers with moldy heart disease based on diffuse reflection light
CN114568418B (en) Agricultural impurity removal system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant