CN115529967A - Bud picking robot and bud picking method for wine grapes

Bud picking robot and bud picking method for wine grapes

Info

Publication number
CN115529967A
Authority
CN
China
Prior art keywords
depth camera
main controller
bud
mechanical arm
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211368560.3A
Other languages
Chinese (zh)
Other versions
CN115529967B (en)
Inventor
苏宝峰
张士豪
房玉林
宋育阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwest A&F University
Original Assignee
Northwest A&F University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwest A&F University
Priority to CN202211368560.3A
Publication of CN115529967A
Application granted
Publication of CN115529967B
Legal status: Active (granted)

Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01G: HORTICULTURE; CULTIVATION OF VEGETABLES, FLOWERS, RICE, FRUIT, VINES, HOPS OR SEAWEED; FORESTRY; WATERING
    • A01G7/00: Botany in general
    • A01G7/06: Treatment of growing trees or plants, e.g. for preventing decay of wood, for tingeing flowers or wood, for prolonging the life of plants
    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01G: HORTICULTURE; CULTIVATION OF VEGETABLES, FLOWERS, RICE, FRUIT, VINES, HOPS OR SEAWEED; FORESTRY; WATERING
    • A01G17/00: Cultivation of hops, vines, fruit trees, or like trees
    • A01G17/02: Cultivation of hops or vines
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00: Manipulators not otherwise provided for
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00: Manipulators mounted on wheels or on carriages
    • B25J5/005: Manipulators mounted on wheels or on carriages mounted on endless tracks or belts

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Ecology (AREA)
  • Forests & Forestry (AREA)
  • Botany (AREA)
  • Environmental Sciences (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Wood Science & Technology (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a wine grape bud picking robot and bud picking method. The robot comprises a depth camera and a fixing bracket: the bottom of the depth camera is connected to the fixing bracket and bolted to it through two M3 fixing holes at the rear of the bracket, and the depth camera is electrically connected to a main controller by wire. A laser module is fixedly mounted in the circular through hole in the middle of the fixing bracket via M2 fixing holes; the bottom surface of the bracket is connected to a mechanical arm, which is bolted to the bracket through four M3 fixing holes at its end. By combining RGB-D sensing, deep learning, and motion path planning, the robot erases the secondary (auxiliary) buds of wine grapes in spring, achieving efficient, high-precision localization of those buds, reducing the orchard pollution caused by bud erasing, raising the degree of automation of bud erasing and the overall orchard yield, and lowering the labor intensity and labor cost of orchard managers.

Description

Bud picking robot and bud picking method for wine grapes
Technical Field
The invention belongs to the field of flower and fruit thinning for fruits and vegetables, and particularly relates to a bud picking robot and bud picking method for wine grapes.
Background
The grape is one of the earliest fruit trees cultivated in the world, with widely used fruit, long-lived plants, and high economic benefit. During spring bud management, poorly developed buds, buds growing in unsuitable positions, and some buds growing too densely are rubbed off, so that one well-growing bud is ultimately retained on each cane. The accompanying bud, also called the double bud, is the most common accompanying trait in the grape germination period and is usually divided into a main bud and a secondary (auxiliary) bud.
the existing grape bud picking device is divided into a mechanical type and a spraying type, wherein the mechanical type bud picking device and method (patent publication No. CN102165897A, which has been patented in 2015), uses a drive plate and a cutter disc to erase grape buds, but the method does not use an optical method to distinguish main buds and auxiliary buds, only can erase sharp buds and poorly developed buds, has poor precision, and can relieve the labor intensity of orchard managers during bud picking, but has poor yield increasing effect (the main buds and the auxiliary buds are erased). A spray type bud picking device mainly sprays inhibitory drugs to inhibit the growth of buds, usually sprays by manual carrying, and a small number of the buds are sprayed by machines, so that the device also has the problems that the growth of main buds and auxiliary buds is inhibited simultaneously, and the orchard flower thinning and fruit thinning are not facilitated to increase the yield.
Neither of these prior arts, therefore, accurately distinguishes main buds from secondary buds, and both act over a large area, resulting in low bud picking precision that is unfavorable for subsequent increases in grape yield.
To address these problems, the invention provides a wine grape bud picking robot and bud picking method.
Disclosure of Invention
To solve the above technical problems, the invention provides a wine grape bud picking robot and bud picking method, aimed at the prior-art problems that main buds and secondary buds are not accurately distinguished and that the operation area is large, which together lead to low bud picking precision and are unfavorable for subsequent grape yield increase.
The wine grape bud picking robot comprises a depth camera and a fixing bracket. The bottom of the depth camera is connected to the fixing bracket and bolted to it through two M3 fixing holes at the rear of the bracket, and the depth camera is electrically connected to the main controller by wire.
A laser module is fixedly mounted in the circular through hole in the middle of the fixing bracket via M2 fixing holes. The bottom surface of the bracket is connected to a mechanical arm, which is bolted to the bracket through four M3 fixing holes at its end. The laser module is connected to the GPIO interface at the end of the mechanical arm by DuPont wires. The main controller is fixed to the inner side of the arm base, and the base of the arm is connected to a crawler chassis, which is bolted to the arm through four M4 fixing holes in its surface mounting plate.
Preferably, the depth camera is an Intel RealSense D435, and the laser module has an output power of 250 mW and a wavelength of 650 nm.
Preferably, the mechanical arm is a MyCobot-280 and the main controller is a Raspberry Pi 4B.
Preferably, the main controller is connected to the drive board of the mechanical arm through the 40-pin GPIO interface, and the target detection network used by the main controller is the YOLOv5 target detection network.
A bud picking method for wine grapes comprises the following steps:
s1, a crawler-type chassis slowly moves forwards, a mechanical arm reads motor encoder data to perform initialization calibration, after the calibration is completed, a main controller controls a depth camera to start, after the depth camera performs parameter initialization, a color sensor is operated, and after the main controller obtains RGB images of the depth camera, gemmules in the RGB images are identified;
s2, after the gemmules are detected, the main controller controls the crawler-type chassis to stop running and records two-dimensional coordinates of the centroids of the gemmules in a camera coordinate system;
s3, the master controller controls the depth camera to acquire depth data of the two-dimensional coordinates of the centroid of the gemmule buds under a camera coordinate system, the depth camera acquires the depth data of the current two-dimensional coordinates by utilizing an infrared laser emitter and an infrared sensor of the depth camera and converts the depth data into the depth coordinates of the centroid of the gemmule buds, and the master controller records the three-dimensional coordinates of the centroid of the gemmule buds under the world coordinate system;
s4, the main controller issues three-dimensional coordinates of the recorded double-bud centroids in a world coordinate system to ROS nodes, determines a coordinate conversion relation between the mechanical arm and the double-bud centroids through a RViz platform in the ROS, and records the relation in a quaternion form;
s5, receiving a path track generated by a mechanical arm through a quaternion expressed between the mechanical arm and the gemmule through a Moveit plug-in the RViz platform, and executing according to the path track;
s6, after the mechanical arm moves to the first designated point, the main controller controls the depth camera to operate the color sensor, and after the main controller obtains the RGB image of the depth camera, the auxiliary bud in the RGB image is identified;
s7, after detecting the accessory bud, the main controller records a two-dimensional coordinate of the centroid of the accessory bud under a camera coordinate system;
s8, the master controller controls the depth camera to acquire depth data of the two-dimensional coordinates of the secondary bud centroid under a camera coordinate system, the depth camera acquires the depth data under the current two-dimensional coordinates by using an infrared laser transmitter and an infrared sensor of the depth camera and converts the depth data into the depth coordinates of the secondary bud centroid, and the master controller records the three-dimensional coordinates of the secondary bud centroid under the world coordinate system;
s9, the main controller issues the three-dimensional coordinates of the recorded secondary bud centroids in a world coordinate system to ROS nodes, determines the coordinate conversion relation between the mechanical arm and the secondary bud centroids through a RViz platform in the ROS, and records the relation in a quaternion form;
s10, receiving a path track generated by a mechanical arm through a quaternion expressed between the mechanical arm and an auxiliary bud through a Moveit plug-in the RViz platform, and executing according to the path track;
s11, after the mechanical arm moves to a second designated point, the main controller controls a GPIO interface at the tail end of the mechanical arm in a TTL mode, so that the laser module is controlled to emit laser, and the auxiliary buds are erased;
s12, after the subsidiary buds are completely erased, the main controller controls the depth camera to operate the color sensor, after the main controller obtains the RGB image of the depth camera, the subsidiary buds in the RGB image are identified, if the subsidiary buds are not detected in 4 frames of images, the mechanical arm restores to the initial position, the main controller controls the crawler-type chassis to slowly move forwards, and if the subsidiary buds are not detected in 4 frames of images, the steps are repeatedly executed.
Preferably, in S1 the main controller controls the depth camera through a USB Type-C cable.
Preferably, in S3 the intrinsic parameters of the depth camera are used to convert the image coordinate system into the camera coordinate system, registering the centroid coordinates of the grape double buds and secondary buds obtained in the image coordinate system.
Preferably, in S10 the mechanical arm reads the position of its own end, judges whether the grape secondary bud coordinates are then within its working space, performs inverse kinematics analysis, and substitutes the result into the A* algorithm for optimal path planning in MoveIt.
Compared with the prior art, the invention has the following beneficial effects:
1. Using the depth camera loaded with a deep learning network, the method rapidly identifies grape double bud regions and distinguishes main buds from secondary buds; the three-dimensional coordinates of the secondary bud in the real coordinate system are published to the mechanical arm, and once the arm subscribes to the coordinates, the laser module carried at the end of the arm moves with it to the designated point and erases the secondary bud by emitting laser light.
2. Using RGB-D sensing, the invention accurately identifies main buds and secondary buds, effectively improving the accuracy of secondary bud identification and localization, and the mechanical arm allows it to identify and erase grape secondary buds at different heights over a larger range.
3. Using the crawler chassis and the ROS operating system, the invention can plan trajectories through the orchard, realizing automatic bud erasing, raising the degree of automation of grape secondary bud removal, and reducing the labor intensity of orchard managers.
4. The invention erases buds with a laser, which reduces pollution loss and improves the precision of secondary bud erasing.
Drawings
FIG. 1 is a schematic perspective view of the wine grape bud picking robot of the present invention;
FIG. 2 is a schematic side view of the wine grape bud picking robot of the present invention;
FIG. 3 is a schematic rear view of the wine grape bud picking robot of the present invention;
FIG. 4 is a schematic front view of the wine grape bud picking robot of the present invention;
FIG. 5 is a technical route chart of the bud picking method for wine grapes of the present invention.
In the figure:
1. a depth camera; 2. fixing a bracket; 3. a laser module; 4. a mechanical arm; 5. a main controller; 6. a crawler-type chassis.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the drawings and examples. The following examples are intended to illustrate the invention but not to limit its scope.
As shown in FIGS. 1-5, the invention provides a wine grape bud picking robot and bud picking method. The robot comprises a depth camera 1 and a fixing bracket 2: the bottom of the depth camera 1 is connected to the fixing bracket 2 and bolted to it through two M3 fixing holes at the rear of the bracket 2, and the depth camera 1 is electrically connected to the main controller 5 by wire.
A laser module 3 is fixedly mounted in the circular through hole in the middle of the fixing bracket 2 via M2 fixing holes. The bottom surface of the bracket 2 is connected to the mechanical arm 4, which is bolted to the bracket 2 through four M3 fixing holes at its end. The laser module 3 is connected to the GPIO interface at the end of the mechanical arm 4 by DuPont wires. The main controller 5 is fixed to the inner side of the base of the arm 4, and the base of the arm 4 is connected to the crawler chassis 6, which is bolted to the arm 4 through four M4 fixing holes in its surface mounting plate.
Referring to FIGS. 1, 2 and 4, the depth camera 1 is an Intel RealSense D435, and the laser module 3 has an output power of 250 mW and a wavelength of 650 nm.
Referring to FIGS. 1-4, the mechanical arm 4 is a MyCobot-280 and the main controller 5 is a Raspberry Pi 4B.
Referring to FIGS. 2-4, the main controller 5 is connected to the drive board of the mechanical arm 4 through the 40-pin GPIO interface, and the target detection network used by the main controller 5 is the YOLOv5 target detection network, as sketched below.
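The sketch below shows one plausible way to run the YOLOv5 network named above on the main controller, assuming the ultralytics/yolov5 model loaded via torch.hub with a custom weights file; the path 'buds.pt', the confidence threshold, and the class names are hypothetical, not from the patent.

```python
# Minimal sketch of bud detection with YOLOv5 (assumptions: ultralytics/yolov5
# via torch.hub and a custom weights file trained on bud images).
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='buds.pt')
model.conf = 0.5                          # confidence threshold (assumed value)

def detect_buds(rgb_image):
    """Return (class_name, cx, cy) for each bud detected in an RGB frame."""
    results = model(rgb_image)            # inference on an HxWx3 numpy array
    detections = []
    for *box, conf, cls in results.xyxy[0].tolist():
        x1, y1, x2, y2 = box
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # box centre as the centroid
        detections.append((model.names[int(cls)], cx, cy))
    return detections
```

The box centre is used here as the two-dimensional centroid recorded in S2/S7; the subsequent depth lookup then lifts it to three dimensions.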
A bud picking method for wine grapes comprises the following steps:
s1, a crawler-type chassis 6 slowly moves forwards, a mechanical arm 4 reads motor encoder data to perform initialization calibration, after the calibration is completed, a main controller 5 controls a depth camera 1 to start, after the depth camera 1 performs parameter initialization, a color sensor is operated, after the main controller 5 obtains RGB images of the depth camera 1, a target detection network based on deep learning is operated, and double sprouts in the RGB images are identified;
s2, after the gemmules are detected, the main controller 5 controls the crawler-type chassis 6 to stop running, and two-dimensional coordinates of the centroids of the gemmules under a camera coordinate system are recorded;
s3, the main controller 5 controls the depth camera 1 to acquire depth data of the two-dimensional coordinates of the double-gemmule centroids in a camera coordinate system, the depth camera 1 acquires the depth data of the current two-dimensional coordinates by using an infrared laser emitter and an infrared sensor of the depth camera 1, the depth data are converted into the depth coordinates of the double-gemmule centroids by a bilinear interpolation means according to the corresponding relation of a reference scale in the depth camera 1, and the main controller 5 records the three-dimensional coordinates of the double-gemmule centroids in a world coordinate system;
s4, the main controller 5 issues the three-dimensional coordinates of the recorded double bud centroids in a world coordinate system to ROS nodes, determines the coordinate conversion relation between the mechanical arm 4 and the double bud centroids through a RViz platform in the ROS, and records the relation in a quaternion form;
s5, loading quaternion expressed between the mechanical arm 4 and the gemmules to a path planning model through a Moveit plug-in the RViz platform, receiving the generated path track by the mechanical arm 4, and executing according to the path track;
s6, after the mechanical arm 4 moves to a first designated point, the main controller 5 controls the depth camera 1 to operate the color sensor, and after the main controller 5 obtains the RGB image of the depth camera 1, the target detection network based on deep learning is operated to identify the accessory buds in the RGB image;
s7, after detecting the accessory bud, the main controller 5 records a two-dimensional coordinate of the centroid of the accessory bud under a camera coordinate system;
s8, the main controller 5 controls the depth camera 1 to acquire depth data of the two-dimensional coordinate of the secondary bud centroid under a camera coordinate system, the depth camera 1 acquires the depth data of the current two-dimensional coordinate by using an infrared laser emitter and an infrared sensor of the depth camera 1, the depth data are converted into the depth coordinate of the secondary bud centroid by a bilinear interpolation means according to the corresponding relation of a reference scale in the depth camera 1, and the main controller 5 records the three-dimensional coordinate of the secondary bud centroid under a world coordinate system at the moment;
s9, the main controller 5 issues the three-dimensional coordinates of the recorded secondary bud centroids in the world coordinate system to ROS nodes, determines the coordinate conversion relation between the mechanical arm 4 and the secondary bud centroids through a RViz platform in the ROS, and records the relation in a quaternion form;
s10, loading quaternions expressed between the mechanical arm 4 and the auxiliary buds to a path planning model through a Moveit plug-in the RViz platform, receiving the generated path track by the mechanical arm 4, and executing the path track;
s11, after the mechanical arm 4 moves to a second designated point, the main controller 5 controls a GPIO interface at the tail end of the mechanical arm 4 in a TTL mode, so that the laser module 3 is controlled to emit laser, and the auxiliary buds are erased;
s12, after the auxiliary buds are completely erased, the main controller 5 controls the depth camera 1 to operate the color sensor, after the main controller 5 obtains the RGB images of the depth camera 1, the target detection network based on deep learning is operated to identify the auxiliary buds in the RGB images, if the auxiliary buds are not detected in 4 frames of images, the mechanical arm 4 restores the initial position, the main controller 5 controls the crawler-type chassis 6 to slowly move forwards, and if the auxiliary buds are not detected in 4 frames of images, the steps are repeatedly executed.
Referring to FIG. 5, in S1 the main controller 5 controls the depth camera 1 through a USB Type-C cable; the depth camera 1 can accurately distinguish the main bud from the secondary bud, effectively improving the accuracy of secondary bud identification and localization.
Referring to FIG. 5, in S3 the intrinsic parameters of the depth camera 1 are used to convert the image coordinate system into the camera coordinate system, registering the centroid coordinates of the grape double buds and secondary buds obtained in the image coordinate system, as sketched below.
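A minimal sketch of the S3/S8 depth lookup under this intrinsics model, assuming pyrealsense2 with a depth frame aligned to the color stream: the depth map is bilinearly interpolated at the sub-pixel centroid and then deprojected into camera coordinates with the D435 intrinsics. Function and variable names are illustrative; the further transform into the world coordinate system (hand-eye calibration) is not shown.

```python
# Minimal sketch of S3/S8 (assumptions: pyrealsense2, depth aligned to color,
# centroid at least one pixel inside the image border).
import numpy as np
import pyrealsense2 as rs

def bilinear_depth(depth_image, u, v, depth_scale):
    """Bilinearly interpolate a uint16 depth map at sub-pixel (u, v), in metres."""
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    dx, dy = u - x0, v - y0
    d = depth_image[y0:y0 + 2, x0:x0 + 2].astype(np.float64)  # 2x2 neighbourhood
    w = np.array([[(1 - dx) * (1 - dy), dx * (1 - dy)],
                  [(1 - dx) * dy,       dx * dy]])
    return float((d * w).sum()) * depth_scale

def centroid_to_camera_xyz(depth_frame, depth_scale, u, v):
    """Lift a pixel centroid plus interpolated depth into camera-frame XYZ."""
    depth_image = np.asanyarray(depth_frame.get_data())
    z = bilinear_depth(depth_image, u, v, depth_scale)
    intrin = depth_frame.profile.as_video_stream_profile().get_intrinsics()
    # Image coordinates -> camera coordinates via the camera intrinsics.
    return rs.rs2_deproject_pixel_to_point(intrin, [u, v], z)
```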
Referring to FIG. 5, in S10 the mechanical arm 4 reads the position of its own end and judges whether the grape secondary bud coordinates lie within its working space; if not, the arm 4 returns to its initial position, and if so, inverse kinematics analysis is performed and the result is substituted into the A* algorithm for optimal path planning in MoveIt, as in the sketch below.
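A minimal sketch of this reach check plus MoveIt planning, assuming moveit_commander under ROS 1, a hypothetical planning group 'arm_group', a named 'init' pose, and a ~0.28 m spherical reach for the MyCobot-280; MoveIt performs the inverse kinematics and graph search internally, so the exact planner is configuration-dependent.

```python
# Minimal sketch of S5/S10 (assumptions: moveit_commander / ROS 1; group name,
# named pose, and reach radius are hypothetical).
import sys
import math
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

def in_workspace(x, y, z, reach=0.28):
    """Crude reach test against a spherical workspace (~280 mm, assumed)."""
    return math.sqrt(x * x + y * y + z * z) <= reach

def move_to_bud(group, x, y, z, qx, qy, qz, qw):
    """Plan and execute a trajectory to the bud pose recorded in quaternion form."""
    if not in_workspace(x, y, z):
        group.set_named_target('init')        # assumed named start pose
        group.go(wait=True)
        return False
    pose = Pose()
    pose.position.x, pose.position.y, pose.position.z = x, y, z
    pose.orientation.x, pose.orientation.y = qx, qy
    pose.orientation.z, pose.orientation.w = qz, qw
    group.set_pose_target(pose)
    ok = group.go(wait=True)                  # MoveIt plans (IK + search) and executes
    group.stop()
    group.clear_pose_targets()
    return ok

if __name__ == '__main__':
    rospy.init_node('bud_path_planner')
    moveit_commander.roscpp_initialize(sys.argv)
    arm = moveit_commander.MoveGroupCommander('arm_group')   # group name assumed
    move_to_bud(arm, 0.10, -0.05, 0.20, 0.0, 0.0, 0.0, 1.0)  # illustrative target
```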
The specific working principle is as follows. As shown in FIGS. 1-5, when the wine grape bud picking robot and method are used, the crawler chassis 6 first moves slowly forward while the mechanical arm 4 reads its motor encoder data for initialization calibration. After calibration, the main controller 5 starts the depth camera 1 through the USB Type-C cable; the camera initializes its parameters and runs its color sensor. After the main controller 5 obtains an RGB image from the depth camera 1, it runs the deep-learning-based target detection network and identifies the double buds in the image. Once a double bud is detected, the main controller 5 stops the crawler chassis 6, records the two-dimensional coordinates of the double bud centroid in the camera coordinate system, and directs the depth camera 1 to acquire depth data at those coordinates. The camera measures the depth at the current pixel with its infrared laser emitter and infrared sensor and converts it into the depth coordinate of the double bud centroid by bilinear interpolation according to the reference-scale correspondence inside the depth camera 1, and the main controller 5 records the resulting three-dimensional coordinates of the centroid in the world coordinate system. The main controller 5 publishes these coordinates to a ROS node, determines the coordinate transformation between the mechanical arm 4 and the double bud centroid through the RViz platform in ROS, and records the relation in quaternion form. The quaternion is loaded into the path planning model through the MoveIt plug-in of the RViz platform, and the mechanical arm 4 receives the generated path trajectory and executes it. After the mechanical arm 4 reaches the first designated point, the main controller 5 has the depth camera 1 run its color sensor and, after obtaining the RGB image, runs the target detection network to identify the secondary bud. Once the secondary bud is detected, the main controller 5 records the two-dimensional coordinates of the secondary bud centroid in the camera coordinate system and directs the depth camera 1 to acquire depth data there; the camera again converts the measured depth into the depth coordinate of the secondary bud centroid by bilinear interpolation, and the main controller 5 records the three-dimensional coordinates of the centroid in the world coordinate system at that moment. The main controller 5 publishes these coordinates to a ROS node, determines the coordinate transformation between the mechanical arm 4 and the secondary bud centroid through the RViz platform in ROS, and records the relation in quaternion form. The quaternion is loaded into the path planning model through the MoveIt plug-in of the RViz platform, and the mechanical arm 4 receives the generated path trajectory and executes it. After the mechanical arm 4 reaches the second designated point, the main controller 5 drives the GPIO interface at the end of the arm 4 at TTL level, firing the laser module 3 and erasing the secondary bud. After the secondary buds are fully erased, the main controller 5 has the depth camera 1 run its color sensor and, after obtaining the RGB image, runs the target detection network to identify any remaining secondary buds; if no secondary bud is detected in 4 consecutive frames, the mechanical arm 4 returns to its initial position and the main controller 5 drives the crawler chassis 6 slowly forward; otherwise, the above steps are repeated. These are the characteristics of the wine grape bud picking robot and bud picking method.
While embodiments of the invention have been shown and described above for purposes of illustration, it will be understood that they are illustrative rather than restrictive, and that those of ordinary skill in the art may make changes, modifications, substitutions, and alterations to the above embodiments without departing from the scope of the invention.

Claims (8)

1. A wine grape bud picking robot, comprising a depth camera (1) and a fixing bracket (2), characterized in that: the bottom of the depth camera (1) is connected to the fixing bracket (2) and bolted to it through two M3 fixing holes at the rear of the bracket (2), and the depth camera (1) is electrically connected to a main controller (5) by wire;
a laser module (3) is fixedly mounted in the circular through hole in the middle of the fixing bracket (2) via M2 fixing holes, the bottom surface of the fixing bracket (2) is connected to a mechanical arm (4), the mechanical arm (4) is bolted to the fixing bracket (2) through four M3 fixing holes at its end, the laser module (3) is connected to the GPIO interface at the end of the mechanical arm (4) by DuPont wires, the main controller (5) is fixed to the inner side of the base of the mechanical arm (4), the base of the mechanical arm (4) is connected to a crawler chassis (6), and the crawler chassis (6) is bolted to the mechanical arm (4) through four M4 fixing holes in its surface mounting plate.
2. The wine grape bud picking robot of claim 1, wherein: the depth camera (1) is an Intel RealSense D435, and the laser module (3) has an output power of 250 mW and a wavelength of 650 nm.
3. The wine grape bud picking robot of claim 1, wherein: the mechanical arm (4) is a MyCobot-280 and the main controller (5) is a Raspberry Pi 4B.
4. The wine grape bud picking robot of claim 1, wherein: the main controller (5) is connected to the drive board of the mechanical arm (4) through the 40-pin GPIO interface, and the target detection network used by the main controller (5) is the YOLOv5 target detection network.
5. A bud picking method for wine grapes, characterized by comprising the following steps:
s1, a crawler-type chassis (6) moves forwards slowly, a mechanical arm (4) reads motor encoder data to perform initialization calibration, after calibration is completed, a main controller (5) controls a depth camera (1) to start, after parameter initialization is performed on the depth camera (1), a color sensor is operated, after the main controller (5) obtains RGB images of the depth camera (1), a target detection network based on deep learning is operated, and gemmules in the RGB images are identified;
s2, after the gemmules are detected, the main controller (5) controls the crawler-type chassis (6) to stop running, and two-dimensional coordinates of the centroids of the gemmules under a camera coordinate system are recorded;
s3, the main controller (5) controls the depth camera (1) to acquire depth data of the two-dimensional coordinates of the double-bud centroid under a camera coordinate system, the depth camera (1) acquires the depth data under the current two-dimensional coordinates by using an infrared laser emitter and an infrared sensor of the depth camera (1), the depth data are converted into the depth coordinates of the double-bud centroid through a bilinear interpolation means according to the corresponding relation of a reference scale in the depth camera (1), and the main controller (5) records the three-dimensional coordinates of the double-bud centroid under a world coordinate system;
s4, the main controller (5) issues the three-dimensional coordinates of the recorded double bud centroids in a world coordinate system to ROS nodes, determines the coordinate conversion relation between the mechanical arm (4) and the double bud centroids through a RViz platform in the ROS, and records the relation in a quaternion form;
s5, loading quaternion expressed between the mechanical arm (4) and the gemmules to a path planning model through a Moveit plug-in the RViz platform, receiving the generated path track by the mechanical arm (4), and executing according to the path track;
s6, after the mechanical arm (4) moves to a first designated point, the main controller (5) controls the depth camera (1) to operate the color sensor, and after the main controller (5) obtains the RGB image of the depth camera (1), the target detection network based on deep learning is operated to recognize the auxiliary buds in the RGB image;
s7, after detecting the auxiliary buds, the main controller (5) records two-dimensional coordinates of the centers of the auxiliary buds in a camera coordinate system;
s8, the main controller (5) controls the depth camera (1) to acquire depth data of the two-dimensional coordinate of the secondary bud centroid under a camera coordinate system, the depth camera (1) acquires the depth data under the current two-dimensional coordinate by using an infrared laser emitter and an infrared sensor of the depth camera (1), the depth data are converted into the depth coordinate of the secondary bud centroid through a bilinear interpolation means according to the corresponding relation of a reference scale in the depth camera (1), and the main controller (5) records the three-dimensional coordinate of the secondary bud centroid under a world coordinate system at the moment;
s9, the main controller (5) issues the three-dimensional coordinates of the recorded secondary bud centroids under a world coordinate system to ROS nodes, determines the coordinate conversion relation between the mechanical arm (4) and the secondary bud centroids through a RViz platform in the ROS, and records the relation in a quaternion form;
s10, loading quaternions expressed between the mechanical arm (4) and the auxiliary bud to a path planning model through a Moveit plug-in the RViz platform, receiving the generated path track by the mechanical arm (4), and executing according to the path track;
s11, after the mechanical arm (4) moves to a second designated point, the main controller (5) controls a GPIO interface at the tail end of the mechanical arm (4) in a TTL mode, so that the laser module (3) is controlled to emit laser, and the auxiliary buds are erased;
s12, after the subsidiary buds are completely erased, the main controller (5) controls the depth camera (1) to operate the color sensor, the main controller (5) operates a target detection network based on deep learning after acquiring RGB images of the depth camera (1) to identify the subsidiary buds in the RGB images, if the subsidiary buds are not detected in 4 frames of images, the mechanical arm (4) restores to the initial position, the main controller (5) controls the crawler-type chassis (6) to slowly move forwards, and if the subsidiary buds are not detected, the steps are repeatedly executed.
6. The bud picking method for wine grapes according to claim 5, characterized in that: in S1, the main controller (5) controls the depth camera (1) through a USB Type-C cable.
7. The bud picking method for wine grapes according to claim 5, characterized in that: in S3, the intrinsic parameters of the depth camera (1) are used to convert the image coordinate system into the camera coordinate system, registering the centroid coordinates of the grape double buds and secondary buds obtained in the image coordinate system.
8. The bud picking method for wine grapes according to claim 5, characterized in that: in S10, the mechanical arm (4) reads the position of its own end and judges whether the grape secondary bud coordinates lie within the working space; if not, the mechanical arm (4) returns to its initial position, and if so, inverse kinematics analysis is performed and the result is substituted into the A* algorithm for optimal path planning in MoveIt.
CN202211368560.3A 2022-11-03 2022-11-03 Grape bud picking robot for wine brewing and bud picking method Active CN115529967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211368560.3A CN115529967B (en) 2022-11-03 2022-11-03 Grape bud picking robot for wine brewing and bud picking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211368560.3A CN115529967B (en) 2022-11-03 2022-11-03 Grape bud picking robot for wine brewing and bud picking method

Publications (2)

Publication Number Publication Date
CN115529967A true CN115529967A (en) 2022-12-30
CN115529967B CN115529967B (en) 2024-06-21

Family

ID=84720570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211368560.3A Active CN115529967B (en) 2022-11-03 2022-11-03 Grape bud picking robot for wine brewing and bud picking method

Country Status (1)

Country Link
CN (1) CN115529967B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118318624A (en) * 2024-05-20 2024-07-12 西北农林科技大学 Grape auxiliary bud erasing device and method

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060213167A1 (en) * 2003-12-12 2006-09-28 Harvey Koselka Agricultural robot system and method
FR2994057A1 (en) * 2012-07-31 2014-02-07 Tiam Wine grape pruning robot, has controlling unit connected with treatment unit to direct cutting unit on cutting points, and reading and recording unit reading and recording images to project laser beam on grapes and branches
CN103444452A (en) * 2013-09-09 2013-12-18 镇江万山红遍农业园 Method for promoting grape fruitage and parent branch sprouting orderliness
WO2015059021A1 (en) * 2013-10-25 2015-04-30 Basf Se System and method for extracting buds from a stalk of a graminaceous plant
CA2996575A1 (en) * 2018-02-06 2019-08-06 Mary Elizabeth Ann Brooks Cannabis bud trimming tool cleaning device and methodology
CN114501985A (en) * 2019-10-01 2022-05-13 孟山都技术公司 Cross-pollination by liquid-mediated delivery of pollen onto closed stigmas of flowers from recipient plants
WO2021258411A1 (en) * 2020-06-24 2021-12-30 江苏大学 Flying robot for top pruning tobacco
CN111837701A (en) * 2020-09-02 2020-10-30 山东农业大学 Flower thinning arm based on image recognition and use method thereof
CN112233121A (en) * 2020-10-16 2021-01-15 中国农业科学院农业资源与农业区划研究所 Fruit yield estimation method based on binocular space positioning and intelligent segmentation
CN115067099A (en) * 2021-03-15 2022-09-20 西北农林科技大学 Apple flower thinning machine
CN113597966A (en) * 2021-07-20 2021-11-05 山东农业大学 Intelligent robot and method for flower and fruit thinning of grapes based on image recognition
CN114511849A (en) * 2021-12-30 2022-05-17 广西慧云信息技术有限公司 Grape thinning identification method based on graph attention network
CN114731840A (en) * 2022-04-07 2022-07-12 仲恺农业工程学院 Double-mechanical-arm tea picking robot based on machine vision
CN115082815A (en) * 2022-07-22 2022-09-20 山东大学 Tea bud picking point positioning method and device based on machine vision and picking system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MA Qilin; JIANG Runli: "Pruning techniques for wine grapes in the growing season", Northwest Horticulture (Fruit Trees), no. 01, 10 February 2014 (2014-02-10), page 19 *
WEI Xiaofeng et al.: "Effects of simplified pruning on canopy parameters and fruit quality of wine grapes", Sino-Overseas Grapevine & Wine, 15 September 2016 (2016-09-15), pages 10-15 *

Also Published As

Publication number Publication date
CN115529967B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
CN115529967A (en) Bud picking robot and bud picking method for wine grapes
CN111830984B (en) Multi-machine cooperative car washing system and method based on unmanned car washing equipment
CN109328973A (en) A kind of intelligent system and its control method of tapping rubber of rubber tree
CN108550141A (en) A kind of movement wagon box automatic identification and localization method based on deep vision information
CN108633482A (en) A kind of fruit picking aircraft
CN114080905B (en) Picking method based on digital twins and cloud picking robot system
CN213290258U (en) Based on crawler-type multifunctional robot
CN112804452B (en) Intelligent phenotype collection trolley and collection method based on high-stalk crops
CN115299245B (en) Control method and control system of intelligent fruit picking robot
CN113207675A (en) Airflow vibration type facility crop automatic pollination device and method
CN110088703B (en) Method for navigating and self-positioning an autonomously traveling processing device
CN115553192A (en) Natural rubber tree tapping robot and using method thereof
CN111513428A (en) Robot three-dimensional vision system and method for sole and vamp scanning operation
CN116117807A (en) Chilli picking robot and control method
CN113906900B (en) Sugarcane harvester and method for adjusting position and posture of cutter head of sugarcane harvester based on multi-sensor fusion
CN112288751A (en) Automatic floor sweeping device and control algorithm
CN213731765U (en) Mobile robot with tracking function
CN218398132U (en) Indoor multifunctional operation robot of transformer substation
CN115880688A (en) Method for positioning and erasing subsidiary buds of wine grapes
CN113804190B (en) Fruit tree three-dimensional point cloud acquisition method and device
CN115892543A (en) Automatic take-off and landing control equipment for unmanned aerial vehicle
CN216982615U (en) Tomato picking robot
CN214628052U (en) Control system of plug seedling transplanting robot
CN114879690A (en) Scene parameter adjusting method and device, electronic equipment and storage medium
CN114942421A (en) Omnidirectional scanning multiline laser radar autonomous positioning device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant