CN110249793B - Robot tail end-depth camera configuration method for trellis grape harvesting and servo control method - Google Patents


Info

Publication number: CN110249793B
Application number: CN201910385289.6A
Authority: CN (China)
Prior art keywords: depth camera, range, horizontal, pose, cluster
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN110249793A
Inventors: 刘继展, 袁妍
Current Assignee: Jiangsu University
Original Assignee: Jiangsu University
Application filed by Jiangsu University
Priority to CN201910385289.6A
Publication of CN110249793A
Application granted
Publication of CN110249793B


Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D: HARVESTING; MOWING
    • A01D46/00: Picking of fruits, vegetables, hops, or the like; Devices for shaking trees or shrubs
    • A01D46/28: Vintaging machines, i.e. grape harvesting machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30181: Earth observation
    • G06T2207/30188: Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot end-depth camera configuration method and a servo control method for trellis grape harvesting, and relates to the field of agricultural robots. For robotic picking of trellis-cultivated grapes, a specific configuration relation based on the close-range pose, with the depth camera below and the fingers above, is provided. So that the picking robot can locate and pick target grapes in an unknown trellis grape environment, a distant-view pose-correction positioning method, a close-range cob detection method, and a continuous operation flow based on a depth camera are provided, completing distant-view pose correction, successive target locking, and hand-eye approach, identification, and picking. The method has a simplified algorithm and good real-time performance and stability.

Description

Robot tail end-depth camera configuration method for trellis grape harvesting and servo control method
Technical Field
The invention relates to the field of agricultural robots, in particular to a robot tail end-depth camera configuration method and a servo control method for trellis grape harvesting.
Background
Table grapes are mostly cultivated on horizontal trellises: the trellis surface is parallel to the ground, ventilation and light penetration are good, fruit quality is high, diseases and pests are few, and ground operations, including mechanized ones, are convenient. Robotic picking of table grapes mostly clamps and shears the cob, and the robot must autonomously complete one-by-one identification and continuous picking of the cobs in an actual horizontal trellis environment. Existing identification methods only segment and judge manually acquired images; they do not consider how, in an actually unknown trellis environment, the robot can autonomously lock a grape cluster target and autonomously order the targets so as to complete one-by-one identification and picking of the cobs.
Disclosure of Invention
The invention provides a robot end-depth camera configuration method and a servo control method for trellis grape harvesting, which enable a robot to autonomously identify and continuously pick cobs one by one in an actual horizontal trellis environment.
In order to solve the technical problems, the specific technical scheme adopted by the invention comprises the following steps:
A robot end-depth camera configuration method for harvesting trellis grapes comprises determination of the size parameters of the end effector, the configuration parameters between the end effector and the depth camera, and the close-range pose P0 of the depth camera relative to the fruit cluster.
Further, the size parameter of the end effector is the length Lf of the fingers; Lf is determined by the geometric relation of the picking pose P1 (the formula is rendered only as an image in the source), where C1 is the compensation for the vertical deviation of the fingers relative to the cluster when the fingers clamp the cob, and Rmax is the maximum value of the cluster width R.
Further, the configuration parameters between the end effector and the depth camera and the close-range pose P0 of the depth camera relative to the cluster are determined as follows: in the close-range pose P0, the lower end point G of the cluster lies on the vertical centerline of the depth camera's field of view, and the depth camera detects within a set detection range [D1, D3] at the maximum vertical viewing angle θ0 and a horizontal viewing angle; the horizontal viewing angle and the maximum vertical viewing angle θ0 satisfy geometric relations rendered only as images in the source, as is the formula for the vertical height of the depth camera relative to the cluster; the closest horizontal distance from the cluster to the depth camera is DH = D1 + C3, and the vertical distance of the fingers relative to the depth camera is Ha = Lf tan θ1. Here DH is the closest horizontal distance from the cluster to the depth camera, k1 is a safety factor, C2 is the compensation for the vertical deviation of the cob relative to the cluster, Rmin is the minimum value of the cluster width R, θ1 is the upper viewing angle the depth camera can reach when occluded by the fingers, Hk-max is the maximum value of the height space Hk between each hanging grape cluster and the horizontal trellis, C3 is the deviation between the nearest detection distance D1 and the nearest horizontal distance DH caused by the differing shapes of clusters, and L is the length of the cluster.
A servo control method for trellis grape harvesting, comprising: a distant-view pose-correction positioning method, a close-range pose positioning method, a close-range cob detection method, and a continuous operation flow.
Further, the distant-view pose-correction positioning method is: the depth camera, at height H1 above the ground, performs horizontal swing detection over its maximum detection range [D1, D2], maximum horizontal viewing angle, and maximum vertical viewing angle θ0; the grape trunks are searched for with a length-threshold vertical-line detection method; the mechanical arm drives the depth camera to swing and adjust until the trunks on both sides are symmetric in the depth camera's field of view; the depth camera reaches the distant-view corrected pose P2 and stops swinging.
Further, in the distant-view corrected pose P2, the depth camera is horizontal at height H1 above the ground, located on the centerline between the rows of trunks on both sides, with its viewing direction parallel to the trunk rows.
Further, the close-range pose positioning method is: the depth camera moves continuously forward from the distant-view corrected pose P2, searching within the detection range [D1, D3], a horizontal viewing-angle range, and a vertical viewing-angle range θ2; taking as reference the first depth point M appearing in the depth camera's field of view, the depth camera remains horizontal and moves to P3; at P3, the lowest point appearing in the depth camera's field of view is taken as the lower end point G of the cluster; the method for determining the close-range pose P0 of the depth camera relative to the cluster is then applied to position the depth camera at the close-range pose P0.
Further, the horizontal viewing-angle range and the vertical viewing-angle range θ2 satisfy geometric relations rendered only as images in the source. The depth camera stops at a distance D4 from the first depth point M; D4 must satisfy D4 + C4 ≤ D3, and θ2 and D4 must satisfy a further relation (also rendered only as an image in the source). Here k2 is the horizontal viewing-angle coefficient, W is the row spacing of the grape trunks, Hp-max is the maximum height of the horizontal trellis, HK-min is the minimum value of the height space Hk between each hanging grape cluster and the horizontal trellis, C4 is the difference between the depth values of the first depth point M and the lower end point G caused by their differing positions on the cluster, and k3 is the vertical-range coefficient between the first depth point M and the lower end point G.
Further, the close-range cob detection method is: in the close-range pose P0, the depth camera acquires point cloud data within the detection range [D1, D4], the maximum vertical viewing angle θ0, and the horizontal viewing angle, and searches for vertical straight lines A, B of length not less than L2; if a vertical straight line A, B lies at a horizontal distance De ≤ D0 relative to the lower end point G, and the vertical distance He of the lower end of the vertical straight line A, B relative to the lower end point G of the cluster satisfies He ≤ H0, the vertical straight line A, B is determined to be the cob.
Further, the continuous operation flow comprises:
step one, the picking robot drives into the horizontal trellis cultivation area;
step two, the distant-view pose-correction positioning method is applied to determine the distant-view corrected pose P2 of the depth camera, and the mechanical arm moves the depth camera to P2;
step three, the close-range pose positioning method is applied so that the depth camera locks the lower end point G1 of the 1st cluster, and the mechanical arm drives the depth camera to the close-range pose P0;
step four, the close-range cob detection method is applied to detect the 1st cob;
step five, the picking robot finishes picking the i-th cluster;
step six, the mechanical arm drives the depth camera back to the current distant-view corrected pose P2;
step seven, the depth camera performs horizontal swing detection over the maximum detection range [D1, D2] at the maximum viewing angles;
step eight, the mechanical arm drives the depth camera forward along the corrected pose P2 until point cloud data appears in the depth camera's field of view;
step nine, the close-range pose positioning method is applied and steps three to eight are repeated, continuously locking cluster targets, identifying cobs, and picking within the horizontal trellis;
step ten, when the depth camera can no longer find the trunks on both sides in step seven, the picking robot drives out.
The invention has the following beneficial effects: for robotic picking of trellis-cultivated grapes, based on a specific close-range-pose configuration with the depth camera below and the fingers above, the robot enters the trellis, corrects its distant-view pose, sequentially locks targets, and guides the hand-eye system to approach, identify, and pick, achieving continuous recognition and picking of grapes in an unknown trellis environment; the algorithm is simplified, and real-time performance and stability are good.
Drawings
FIG. 1 is a schematic view of the initial state of trellis viticulture and robot harvesting;
FIG. 2 is a schematic diagram of a finger-depth camera configuration of an end effector;
FIG. 3 is a close-up pose schematic view of the end effector;
FIG. 4 is a schematic top view of a depth camera-ear position relationship in a close-range pose;
FIG. 5 is a flow chart of a visual servo-controlled continuous operation for trellis grape harvesting;
FIG. 6 is a block diagram of the distant-view pose-correction positioning method;
FIG. 7 is a schematic view of the distant-view corrected pose;
FIG. 8 is a block diagram of a close-range pose positioning method;
FIG. 9 is a schematic view of a vertical view range of a close-range pose positioning process;
FIG. 10 is a block diagram of a method for detecting a close-range cob;
FIG. 11 is a schematic diagram of the effect of the detection of the short-range cob.
In the figures: 1. chassis; 2. mechanical arm; 3. depth camera; 4. end effector; 5. fruit cluster; 6. horizontal trellis; 7. leaf; 8. vine; 9. trunk; 10. cob; 11. finger; 12. ground.
Detailed Description
The technical solution of the present invention is further described in detail with reference to the accompanying drawings and specific embodiments.
As shown in FIG. 1 and FIG. 2, table grapes are cultivated on horizontal trellises 6 of height Hp of 1800-2000 mm; wires stretched between the side posts form a 0.4-0.45 m grid, most of the leaves 7 and vines 8 are held up by the grid, and the grape clusters 5 and cobs 10 are only slightly occluded. The length L and width R of a cluster 5 are 100-250 mm and 80-200 mm respectively, the height space Hk between each hanging grape cluster 5 and the horizontal trellis 6 is 30-120 mm, the spacing between adjacent hanging grape clusters 5 is 200-500 mm, and the lowest height of a cluster 5 above the ground is 1300-1600 mm.
As shown in fig. 1, the picking robot consists of a chassis 1, a mechanical arm 2, an end effector 4, and a depth camera 3; the mechanical arm 2 is mounted on the chassis 1, the end effector 4 is mounted on the wrist of the mechanical arm 2, and the depth camera 3 is mounted on the end effector 4.
The depth camera 3 obtains a three-dimensional depth point cloud of the objects in its field of view by actively emitting an infrared light source and receiving its diffuse reflection; each point consists of a horizontal angular coordinate, a vertical angular coordinate θ, and the object depth value at that angular coordinate (the coordinate symbols are rendered only as images in the source). The effective depth detection range of the depth camera 3 is [D1, D2], and the field of view of the depth camera 3 is the maximum horizontal viewing angle by the maximum vertical viewing angle θ0. Close-range detection by the depth camera 3 effectively avoids interference from illumination and the like, acquires more accurate and richer point cloud data of the cluster 5 and the thin cob 10, and is more favorable for identifying the cob 10.
As shown in fig. 2, to achieve close-range detection of the cluster 5 and the cob 10 by the depth camera 3, the depth camera 3 is mounted on the end effector 4 and moves with it; at the same time, the mounting of the depth camera 3, the size of the fingers 11 of the end effector 4, and the close-range pose of the depth camera 3 must satisfy:
(1) the cluster 5 and the cob 10 lie within the detection range of the depth camera 3, with the depth camera 3 close enough to them that it obtains, within its field of view, complete point cloud data of the cluster 5 and of a sufficient length of the cob 10;
(2) in the close-range pose, the end effector 4 does not interfere with the clusters 5 or the horizontal trellis 6;
(3) in the picking pose P1, the end effector 4 does not interfere with the clusters 5 or the horizontal trellis 6.
According to the above requirements, the robot end-depth camera configuration method for trellis grape harvesting comprises determination of the size parameters of the end effector 4, the configuration parameters between the end effector 4 and the depth camera 3, and the close-range pose P0 of the depth camera 3 relative to the cluster 5:
(1) As shown in fig. 2, constrained in clamp-and-shear picking of the cob by the height space Hk between each hanging grape cluster 5 and the horizontal trellis 6, the configuration places the fingers 11 on top and the depth camera 3 on the bottom, with the fingers 11 and the depth camera 3 kept horizontal. The size parameter of the end effector 4, namely the length Lf of the fingers 11, is determined by the geometric relation of the picking pose P1 as formula (1) (rendered only as an image in the source), where C1 is the compensation for the vertical deviation of the fingers 11 relative to the cluster 5 when the fingers 11 clamp the cob 10, taken as C1 = 20 mm in the embodiment, and Rmax is the maximum value of the width R of the cluster 5.
(2) As shown in fig. 4, in the close-range pose P0, the lower end point G of the cluster 5 lies on the vertical centerline of the field of view of the depth camera 3.
(3) As shown in fig. 4, in the close-range pose P0, the depth camera 3 detects within a set detection range [D1, D3] at the maximum vertical viewing angle θ0 and a horizontal viewing angle chosen so that the complete depth point cloud of the cluster 5 and cob 10 is obtained while redundant data in the field of view is reduced; the horizontal viewing angle is given by formula (2) (rendered only as an image in the source), where DH is the closest horizontal distance from the cluster 5 to the depth camera 3, and k1 is a safety factor, taken as k1 = 1.2 in the embodiment.
(4) As shown in fig. 3, the depth camera 3 obtains the complete depth point cloud of the cluster 5 and the cob 10 in the vertical direction when formula (3) is satisfied (rendered only as an image in the source), where C2 is the compensation for the vertical deviation of the cob 10 relative to the cluster 5, taken as C2 = 10 mm in the embodiment; Rmin is the minimum value of the width R of the cluster 5; θ1 is the upper viewing angle the depth camera 3 can reach when occluded by the fingers 11; and Hk-max is the maximum value of the height space Hk between each hanging grape cluster 5 and the horizontal trellis 6.
(5) As shown in fig. 3, in the close-range pose P0, the vertical height Hb of the depth camera 3 relative to the cluster 5, taking the lower end point G as reference, is determined by formula (4) (rendered only as an image in the source). In formula (4), the closest horizontal distance DH from the cluster 5 to the depth camera 3 is determined by:
DH = D1 + C3 (5)
where D1 is the nearest detection distance of the depth camera 3, and C3 accounts for the deviation between the nearest detection distance D1 and the nearest horizontal distance DH caused by the differing shapes of clusters 5; in the embodiment C3 = 5 mm.
(6) As shown in fig. 2, in the close-range pose P0, the vertical distance Ha of the fingers 11 from the depth camera 3 is determined, together with formulas (1) and (2), by:
Ha = Lf tan θ1 (6)
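Relations (5) and (6) can be checked numerically. In the sketch below (Python), D1 = 200 mm and C3 = 5 mm are the embodiment values, while the finger length Lf and the occlusion angle θ1 are purely illustrative assumptions, since their defining formulas appear only as images in the source:

```python
import math

def nearest_horizontal_distance(d1_mm, c3_mm):
    """Formula (5): D_H = D1 + C3."""
    return d1_mm + c3_mm

def finger_camera_vertical_distance(lf_mm, theta1_deg):
    """Formula (6): H_a = L_f * tan(theta_1)."""
    return lf_mm * math.tan(math.radians(theta1_deg))

# Embodiment values: D1 = 200 mm (nearest detection distance),
# C3 = 5 mm (shape-induced deviation).  L_f = 150 mm and
# theta_1 = 15 deg are illustrative assumptions only.
D_H = nearest_horizontal_distance(200.0, 5.0)       # 205.0 mm
H_a = finger_camera_vertical_distance(150.0, 15.0)  # about 40 mm
```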
as shown in fig. 5, the visual servo control method for trellis grape harvesting includes a continuous operation process, a long-range view pose positioning method, a short-range view pose positioning method, and a short-range view cob detection method.
Wherein the continuous operation flow comprises the following steps:
(1) The picking robot drives into the cultivation area of the horizontal trellis 6;
(2) The distant-view pose-correction positioning method is applied; if the trunks 9 on both sides are found, the distant-view corrected pose P2 of the depth camera 3 can be determined, and the mechanical arm 2 moves the depth camera 3 to P2;
(3) The close-range pose positioning method is applied so that the depth camera 3 locks the lower end point G1 of the 1st cluster 5, and the mechanical arm 2 drives the depth camera 3 to the close-range pose P0;
(4) The 1st cob 10 is detected by the close-range cob detection method;
(5) The picking robot finishes picking the i-th cluster 5;
(6) The mechanical arm 2 drives the depth camera 3 back to the distant-view corrected pose P2;
(7) The depth camera 3 performs horizontal swing detection over the maximum detection range [D1, D2] at the maximum viewing angles; if the trunks 9 on both sides are found, the picking robot is still within the cultivation area of the horizontal trellis 6;
(8) The mechanical arm 2 drives the depth camera 3 forward along the corrected pose P2; if no point cloud data appears in the field of view of the depth camera 3, the chassis 1 is started to carry the depth camera 3 further forward along P2 until point cloud data appears in its field of view;
(9) The close-range pose positioning method is applied and steps (3) to (8) are repeated, continuously locking cluster 5 targets, identifying cobs 10, and picking within the horizontal trellis 6;
(10) When the depth camera 3 cannot find the trunks 9 on both sides in step (7), the picking robot has reached the boundary of the cultivation area of the horizontal trellis 6 and drives out.
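The flow of steps (1) to (10) can be summarized as a control loop. The sketch below is a non-authoritative outline: `robot` is a hypothetical interface, and every method name is an assumption standing in for the corresponding subsystem:

```python
def harvest_trellis_row(robot):
    """Sketch of the continuous operation flow (steps 1-10).
    `robot` is a hypothetical duck-typed interface; all method
    names are assumptions, not part of the patent."""
    robot.drive_into_trellis_area()                  # step (1)
    if not robot.find_trunks_and_assume_pose_P2():   # step (2)
        return
    while True:
        robot.lock_cluster_lower_endpoint_G()        # step (3): close-range pose P0
        robot.detect_cob()                           # step (4)
        robot.pick_current_cluster()                 # step (5)
        robot.return_to_pose_P2()                    # step (6)
        if not robot.trunks_visible_on_both_sides(): # steps (7)/(10): row boundary
            break
        robot.advance_until_point_cloud_appears()    # step (8)
        # step (9): loop back to close-range pose positioning
    robot.drive_out()
```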
As shown in fig. 6, the distant-view pose-correction positioning method comprises:
(1) The depth camera 3, at height H1 above the ground, performs horizontal swing detection over its maximum detection range [D1, D2], maximum horizontal viewing angle, and maximum vertical viewing angle θ0; in the embodiment H1 = 1400 mm, D1 = 200 mm, D2 = 1500 mm, and θ0 = 57° (the maximum horizontal viewing angle is rendered only as an image in the source);
(2) The grape trunks 9 are searched for with a length-threshold vertical-line detection method; in the embodiment the length threshold requires straight lines of length L1 ≥ 200 mm;
(3) The depth camera 3 is driven by the mechanical arm 2 to swing and adjust until the trunks 9 on both sides are symmetric in the field of view of the depth camera 3;
(4) The depth camera 3 reaches the distant-view corrected pose P2 and stops swinging.
As shown in fig. 7, in the distant-view corrected pose P2, the depth camera 3 is horizontal at height H1 above the ground, located on the centerline between the rows of trunks 9 on both sides, with its viewing direction parallel to the trunk rows.
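The length-threshold vertical-line rule and the trunk-symmetry criterion that define the corrected pose P2 can be sketched as follows; this is a simplified illustration in which the clustering by x, the tolerances, and the planar (x, y) millimetre coordinates are assumptions, and only the threshold L1 ≥ 200 mm comes from the embodiment:

```python
def _cluster_by_x(points_mm, x_tol_mm):
    """Group (x, y) points into runs of nearly equal x (sorted sweep).
    A simplification of whatever clustering the real system uses."""
    out, cur = [], []
    for x, y in sorted(points_mm):
        if cur and x - cur[-1][0] > x_tol_mm:
            out.append(cur)
            cur = []
        cur.append((x, y))
    if cur:
        out.append(cur)
    return out

def vertical_runs(points_mm, x_tol_mm=15.0, min_len_mm=200.0):
    """Length-threshold vertical-line rule: keep runs whose vertical
    extent is at least min_len_mm (L1 >= 200 mm in the embodiment).
    Returns the mean x of each accepted run (candidate trunk)."""
    runs = []
    for pts in _cluster_by_x(points_mm, x_tol_mm):
        ys = [y for _, y in pts]
        if max(ys) - min(ys) >= min_len_mm:
            runs.append(sum(x for x, _ in pts) / len(pts))
    return runs

def trunks_symmetric(trunk_xs, tol_mm=30.0):
    """Pose P2 criterion: the nearest trunk on each side of the view
    centreline (x = 0) sits at roughly equal |x|."""
    left = [x for x in trunk_xs if x < 0]
    right = [x for x in trunk_xs if x > 0]
    if not left or not right:
        return False
    return abs(abs(max(left)) - min(right)) <= tol_mm
```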
As shown in fig. 8, the close-range pose positioning method comprises:
(1) The depth camera 3 moves continuously forward from the distant-view corrected pose P2, searching within the cob detection range [D1, D3], a horizontal viewing-angle range, and a vertical viewing-angle range θ2; in the embodiment D3 = 700 mm. The horizontal viewing-angle range ensures that the depth camera 3, in the corrected pose P2, obtains only information lying between the trunks 9 on the two sides (fig. 7); it is given by formula (7) (rendered only as an image in the source), where k2 is the horizontal viewing-angle coefficient, taken as 0.8 in the embodiment, and W is the row spacing of the grape trunks 9.
The vertical viewing-angle range θ2 ensures that the first depth point M obtained by the depth camera 3 in the corrected pose P2 belongs to a cluster 5 (fig. 9); it is given by formula (8) (rendered only as an image in the source), where Hp-max is the maximum height of the horizontal trellis 6, and HK-min is the minimum value of the height space Hk between each hanging grape cluster 5 and the horizontal trellis 6.
(2) As shown in fig. 9, taking as reference the first depth point M appearing in the field of view of the depth camera 3, the depth camera 3 remains horizontal and moves to P3, stopping at a distance D4 from the first depth point M. For the lower end point G of the cluster 5 to fall within the detection range [D1, D3], D4 must satisfy:
D4 + C4 ≤ D3 (9)
where C4 is the difference between the depth values of the first depth point M and the lower end point G caused by their differing positions on the cluster 5; in the embodiment C4 = 120 mm.
At the same time, while remaining horizontal and moving to P3, the depth camera 3 must ensure that the lower end point G of the cluster 5 enters the vertical viewing angle θ2, so that the depth camera 3 can obtain point cloud data of the lower end point G; this is expressed by formula (10) (rendered only as an image in the source), where k3 is the vertical-range coefficient between the first depth point M and the lower end point G; in the embodiment k3 = 0.9.
(3) At P3, the lowest point appearing in the field of view of the depth camera 3 is taken as the lower end point G of the cluster 5;
(4) Taking the lower end point G as reference, the method for determining the close-range pose P0 of the depth camera 3 relative to the cluster 5 is applied to position the depth camera 3 at the close-range pose P0.
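The stop-distance constraint (9) is easy to check numerically; with the embodiment values D3 = 700 mm and C4 = 120 mm it bounds D4 by 580 mm. A minimal sketch (Python; the function names are illustrative):

```python
def max_stop_distance(d3_mm, c4_mm):
    """Rearranged formula (9): the stop distance D4 from the first
    depth point M must satisfy D4 + C4 <= D3, i.e. D4 <= D3 - C4,
    so the lower end point G of the cluster stays inside the
    detection range [D1, D3]."""
    return d3_mm - c4_mm

def stop_distance_ok(d4_mm, d3_mm=700.0, c4_mm=120.0):
    """Formula (9) with the embodiment values D3 = 700 mm,
    C4 = 120 mm."""
    return d4_mm + c4_mm <= d3_mm
```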
As shown in fig. 10 and 11, the close-range cob detection method comprises:
(1) In the close-range pose P0, the depth camera 3 acquires point cloud data within the detection range [D1, D4], the maximum vertical viewing angle θ0, and the horizontal viewing angle, and searches for vertical straight lines A, B of length not less than L2; in the embodiment L2 = 10 mm;
(2) If a vertical straight line A, B lies at a horizontal distance De ≤ D0 relative to the lower end point G, and the vertical distance He of the lower end of the vertical straight line A, B relative to the lower end point G satisfies He ≤ H0, the vertical straight line A, B is determined to be the cob 10; in the embodiment D0 = 20 mm and H0 = 260 mm.
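The two-part cob test above can be sketched as a predicate; the planar (x, y) coordinate frame and the function name are assumptions, while the thresholds L2 = 10 mm, D0 = 20 mm, and H0 = 260 mm are the embodiment values:

```python
def is_cob(line_bottom_mm, g_mm, line_len_mm,
           l2_mm=10.0, d0_mm=20.0, h0_mm=260.0):
    """Close-range cob test with the embodiment thresholds: a detected
    vertical line is accepted as the cob if it is at least L2 long,
    its horizontal distance De from the cluster's lower end point G is
    at most D0, and the vertical distance He of its lower end from G
    is at most H0.  Points are (x, y) in millimetres; the coordinate
    frame is an illustrative assumption."""
    xg, yg = g_mm
    xl, yl = line_bottom_mm
    de = abs(xl - xg)   # horizontal offset from G
    he = abs(yl - yg)   # vertical offset of the line's lower end from G
    return line_len_mm >= l2_mm and de <= d0_mm and he <= h0_mm
```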
The present invention can be implemented as described above. Other variations and modifications that those skilled in the art may make without departing from the spirit and scope of the invention are intended to fall within the scope of the invention.

Claims (8)

1. A robot end-depth camera configuration method for harvesting trellis grapes, characterized by comprising determination of the size parameters of the end effector (4), the configuration parameters between the end effector (4) and the depth camera (3), and the close-range pose P0 of the depth camera (3) relative to the cluster (5);
the configuration parameters between the end effector (4) and the depth camera (3) and the close-range pose P0 of the depth camera (3) relative to the cluster (5) are determined as follows: in the close-range pose P0, the lower end point G of the cluster (5) lies on the vertical centerline of the field of view of the depth camera (3), and the depth camera (3) detects within a set detection range [D1, D3] at the maximum vertical viewing angle θ0 and a horizontal viewing angle; the horizontal viewing angle and the maximum vertical viewing angle θ0 satisfy geometric relations rendered only as images in the source, as is the formula for the vertical height of the depth camera (3) relative to the cluster (5); the closest horizontal distance from the cluster (5) to the depth camera (3) is DH = D1 + C3, and the vertical distance of the finger (11) relative to the depth camera (3) is Ha = Lf tan θ1; wherein DH is the closest horizontal distance from the cluster (5) to the depth camera (3), k1 is a safety factor, C2 is the compensation for the vertical deviation of the cob (10) relative to the cluster (5), Rmin is the minimum value of the width R of the cluster (5), θ1 is the upper viewing angle the depth camera (3) can reach when occluded by the finger (11), Hk-max is the maximum value of the height space Hk between each hanging grape cluster (5) and the horizontal trellis (6), C3 is the deviation between the nearest detection distance D1 and the nearest horizontal distance DH caused by the differing shapes of clusters (5), L is the length of the cluster (5), and Rmax is the maximum value of the width R of the cluster (5);
the lower end point G is acquired as follows: the depth camera (3) moves continuously forward from the distant-view corrected pose P2, searching within the detection range [D1, D3], a horizontal viewing-angle range, and a vertical viewing-angle range θ2; taking as reference the first depth point M appearing in the field of view of the depth camera (3), the depth camera (3) remains horizontal and moves to P3; at P3, the lowest point appearing in the field of view of the depth camera (3) is taken as the lower end point G of the cluster (5); in the distant-view corrected pose P2, the depth camera (3) is horizontal at height H1 above the ground, located on the centerline between the rows of trunks (9) on both sides, with its viewing direction parallel to the trunk rows.
2. The robot end-depth camera configuration method for trellis grape harvesting of claim 1, characterized in that the size parameter of the end effector (4) is the length Lf of the finger (11); Lf is determined by the geometric relation of the picking pose P1 (rendered only as an image in the source), where C1 is the compensation for the vertical deviation of the finger (11) relative to the cluster (5) when the finger (11) clamps the cob (10).
3. A servo control method for trellis grape harvesting, characterized by comprising: a distant-view pose-correction positioning method, a close-range pose positioning method, a close-range cob detection method, and a continuous operation flow;
in the distant-view corrected pose P2, the depth camera (3) is horizontal at height H1 above the ground, located on the centerline between the rows of trunks (9) on both sides, with its viewing direction parallel to the trunk rows.
4. The servo control method for trellis grape harvesting of claim 3, characterized in that the distant-view pose-correction positioning method is: the depth camera (3), at height H1 above the ground, performs horizontal swing detection over its maximum detection range [D1, D2], maximum horizontal viewing angle, and maximum vertical viewing angle θ0; the grape trunks (9) are searched for with a length-threshold vertical-line detection method; the depth camera (3) is driven by the mechanical arm (2) to swing and adjust until the trunks (9) on both sides are symmetric in the field of view of the depth camera (3); the depth camera (3) reaches the distant-view corrected pose P2 and stops swinging.
5. The servo control method for trellis grape harvesting of claim 3, characterized in that the close-range pose positioning method is: the depth camera (3) continuously moves forward from the long-range corrected pose P2, searching and detecting within the detection range [D1, D3], horizontal viewing angle range
[symbol image FDA0003000122240000026]
and vertical viewing angle range θ2; taking the first depth point M appearing in the field of view of the depth camera (3) as reference, the depth camera (3) is kept horizontal and moved to P3; at P3, the lowest point appearing in the field of view of the depth camera (3) is taken as the lower endpoint G of the cluster (5); applying the close-range pose P0 of the depth camera relative to the cluster (5), the depth camera (3) is positioned at the close-range pose P0.
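A minimal sketch of the two quantities that drive the close-range pose positioning of claim 5: the first depth point M is taken here as the nearest point of the first non-empty frame seen while the camera advances, and the lower endpoint G as the lowest point in view at P3. The frame representation (N×3 arrays of x lateral, y height, z depth) and the function names are assumptions for illustration only:

```python
import numpy as np

def first_depth_point(frames):
    """Return the nearest point of the first frame in which any depth data
    appears while the camera advances: the reference point M of claim 5."""
    for cloud in frames:
        if len(cloud):
            return cloud[np.argmin(cloud[:, 2])]   # smallest depth value
    return None

def cluster_lower_endpoint(cloud):
    """Lowest point in the field of view at pose P3, taken as the lower
    endpoint G of the hanging cluster."""
    return cloud[np.argmin(cloud[:, 1])]           # smallest height value
```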
6. The servo control method for trellis grape harvesting of claim 5, characterized in that the horizontal viewing angle range
[symbol image FDA0003000122240000021]
and the vertical viewing angle range θ2 satisfy
[formula image FDA0003000122240000022]
[formula image FDA0003000122240000023]
the distance of the depth camera (3) from the first depth point M is D4, where D4 must satisfy D4 + C4 ≤ D3, and θ2 and D4 must satisfy
[formula image FDA0003000122240000024]
where k2 is the horizontal viewing angle coefficient, W is the row spacing of the grape trunks (9), Lp-max is the maximum height of the horizontal canopy frame (6), HK-min is the minimum of the height clearance Hk between each hanging grape cluster (5) and the horizontal canopy frame (6), C4 is the difference in depth value between the first depth point M and the lower endpoint G of the cluster (5) caused by their different positions within the cluster (5), and k3 is the vertical range coefficient between the first depth point M and the lower endpoint G.
7. The servo control method for trellis grape harvesting of claim 3, characterized in that the close-range cob detection method is: at the close-range pose P0, the depth camera (3) acquires point cloud data within the detection range [D1, D4], maximum vertical viewing angle θ0 and horizontal viewing angle
[symbol image FDA0003000122240000027]
and searches for vertical straight lines A, B of length not less than L2; if a vertical straight line A, B lies at a horizontal distance De ≤ D0 relative to the lower endpoint G, and the vertical distance He of the lower end of that line relative to the lower endpoint G of the cluster (5) satisfies He ≤ H0, the vertical straight line A, B is determined to be the cob (10).
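The three tests of claim 7 (line length ≥ L2, horizontal distance De ≤ D0 from G, vertical distance He ≤ H0 from G) can be sketched as a filter over candidate vertical segments. Each segment is represented here as a (lower endpoint, upper endpoint) pair of xyz points; the representation and the numeric threshold values are illustrative assumptions, not values from the patent:

```python
import numpy as np

def detect_cob(lines, G, L2=0.04, D0=0.08, H0=0.05):
    """Return the vertical line segments accepted as the cob: vertical
    extent >= L2, horizontal offset of the lower end from the cluster's
    lower endpoint G within D0, and vertical offset within H0."""
    cobs = []
    for lower, upper in lines:
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        length = upper[1] - lower[1]                       # vertical extent
        De = np.hypot(lower[0] - G[0], lower[2] - G[2])    # horizontal distance
        He = abs(lower[1] - G[1])                          # vertical distance
        if length >= L2 and De <= D0 and He <= H0:
            cobs.append((lower, upper))
    return cobs
```

A segment just above and beside G passes all three tests; a segment half a metre to the side is rejected by the De threshold.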
8. The servo control method for trellis grape harvesting according to any one of claims 4-7, characterized in that the continuous operation flow is:
step one, the picking robot drives into the cultivation area of the horizontal canopy frame (6);
step two, the long-range corrected pose P2 of the depth camera (3) is determined by the long-range pose correction positioning method, and the mechanical arm (2) sends the depth camera (3) to the long-range corrected pose P2;
step three, the close-range pose positioning method is applied so that the depth camera (3) locks the lower endpoint G1 of the 1st cluster (5), and the mechanical arm (2) drives the depth camera (3) to the close-range pose P0;
step four, the 1st cob (10) is detected by the close-range cob detection method;
step five, the picking robot finishes picking the i-th cluster (5);
step six, the mechanical arm (2) drives the depth camera (3) back to the current long-range corrected pose P2;
step seven, the depth camera (3) performs horizontal swing detection over the maximum detection range [D1, D2] and maximum viewing angle;
step eight, the mechanical arm (2) drives the depth camera (3) forward along the corrected pose P2 until point cloud data appear in the field of view of the depth camera (3);
step nine, the close-range pose positioning method is applied, and steps three to eight are repeated, continuously locking cluster (5) targets, identifying cobs (10) and picking within the horizontal canopy frame (6);
step ten, when the depth camera (3) can no longer find trunks (9) on either side in step seven, the picking robot pulls out of the row.
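The ten steps of the continuous operation flow amount to a small state machine: an entry phase, a main picking loop (lock cluster, detect cob, pick, return to P2, sweep, advance), and an exit condition when the sweep finds no trunks. The sketch below encodes that loop; phase names and the boolean detection flags are our own abstractions of the claim text:

```python
from enum import Enum, auto

class Phase(Enum):
    ENTER_ROW = auto()        # step one
    LONG_RANGE_POSE = auto()  # step two
    LOCK_CLUSTER = auto()     # step three
    DETECT_COB = auto()       # step four
    PICK = auto()             # step five
    RETURN_TO_P2 = auto()     # step six
    SWEEP = auto()            # step seven
    ADVANCE = auto()          # step eight
    EXIT_ROW = auto()         # step ten

def next_phase(phase, trunks_found=True, cloud_seen=True):
    """One-step transition for the claim-8 flow; the flags stand in for
    the depth camera's detection results at the current phase."""
    if phase is Phase.SWEEP:
        # Step ten: no trunks on either side -> pull out of the row.
        return Phase.ADVANCE if trunks_found else Phase.EXIT_ROW
    if phase is Phase.ADVANCE and not cloud_seen:
        return Phase.ADVANCE  # keep moving until point cloud data appear
    table = {
        Phase.ENTER_ROW: Phase.LONG_RANGE_POSE,
        Phase.LONG_RANGE_POSE: Phase.LOCK_CLUSTER,
        Phase.LOCK_CLUSTER: Phase.DETECT_COB,
        Phase.DETECT_COB: Phase.PICK,
        Phase.PICK: Phase.RETURN_TO_P2,
        Phase.RETURN_TO_P2: Phase.SWEEP,
        Phase.ADVANCE: Phase.LOCK_CLUSTER,   # step nine: repeat from step three
    }
    return table[phase]
```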
CN201910385289.6A 2019-05-09 2019-05-09 Robot tail end-depth camera configuration method for trellis grape harvesting and servo control method Active CN110249793B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910385289.6A CN110249793B (en) 2019-05-09 2019-05-09 Robot tail end-depth camera configuration method for trellis grape harvesting and servo control method


Publications (2)

Publication Number Publication Date
CN110249793A CN110249793A (en) 2019-09-20
CN110249793B true CN110249793B (en) 2021-06-18

Family

ID=67914483


Country Status (1)

Country Link
CN (1) CN110249793B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062988B (en) * 2019-11-29 2024-02-13 佛山科学技术学院 Grape pose estimation method based on local point cloud
CN117765085B (en) * 2024-02-22 2024-06-07 华南农业大学 Tea bud leaf picking far and near alternate positioning method

Citations (7)

Publication number Priority date Publication date Assignee Title
US6025790A (en) * 1997-08-04 2000-02-15 Fuji Jukogyo Kabushiki Kaisha Position recognizing system of autonomous running vehicle
JP2006344131A (en) * 2005-06-10 2006-12-21 Seiko Epson Corp Image processor, electronic equipment, program, information medium and image processing method
CN102194233A (en) * 2011-06-28 2011-09-21 中国农业大学 Method for extracting leading line in orchard
CN106954426A (en) * 2017-03-23 2017-07-18 江苏大学 A kind of robot based on close shot depth transducer approaches positioning picking method in real time
CN107750643A (en) * 2017-10-25 2018-03-06 重庆工商大学 The vision system of strawberry picking robot
CN108908344A (en) * 2018-08-17 2018-11-30 云南电网有限责任公司昆明供电局 A kind of crusing robot mechanical arm tail end space-location method
CN108901366A (en) * 2018-06-19 2018-11-30 华中农业大学 A kind of Incorporate citrus picking method


Non-Patent Citations (1)

Title
Design of a far-near combined vision system for a strawberry picking robot; Zhang Man; China Master's Theses Full-text Database, Agricultural Science and Technology; 2019-01-15 (No. 01); abstract, pp. 9-48 *


Similar Documents

Publication Publication Date Title
Lehnert et al. Autonomous sweet pepper harvesting for protected cropping systems
Zhao et al. A review of key techniques of vision-based control for harvesting robot
CN112715162B (en) System for intelligent string type fruit of picking
Foglia et al. Agricultural robot for radicchio harvesting
KR100784830B1 (en) Harvesting robot system for bench cultivation type strawberry
CN110249793B (en) Robot tail end-depth camera configuration method for trellis grape harvesting and servo control method
Edan Design of an autonomous agricultural robot
US11477942B2 (en) Robotic fruit harvesting machine with fruit-pair picking and hybrid motorized-pneumatic robot arms
JP2012055207A (en) System and plant for cultivating plant, harvesting device, and method for cultivating plant
CN112802099A (en) Picking method suitable for string-shaped fruits
Rajendra et al. Machine vision algorithm for robots to harvest strawberries in tabletop culture greenhouses
CN114260895A (en) Method and system for determining grabbing obstacle avoidance direction of mechanical arm of picking machine
US20230189713A1 (en) Fruit picking robotic installation on platforms
Fujinaga et al. Development and evaluation of a tomato fruit suction cutting device
Jin et al. Far-near combined positioning of picking-point based on depth data features for horizontal-trellis cultivated grape
Feng et al. Design and test of harvesting robot for table-top cultivated strawberry
Rajendran et al. Towards autonomous selective harvesting: A review of robot perception, robot design, motion planning and control
Park et al. Human-centered approach for an efficient cucumber harvesting robot system: Harvest ordering, visual servoing, and end-effector
CN116058176A (en) Fruit and vegetable picking mechanical arm control system based on double-phase combined positioning
CN113063349B (en) Rubber tree cutting point detection system and detection method
Liu et al. History and present situations of robotic harvesting technology: a review
Burks et al. Opportunity of robotics in precision horticulture.
CN116034732B (en) Fuzzy picking method for string tomatoes
CN113924861A (en) Automatic harvesting system for greenhouse vegetable cultivation
Milburn et al. Computer-Vision Based Real Time Waypoint Generation for Autonomous Vineyard Navigation with Quadruped Robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant