CN114946403A - Tea picking robot based on calibration-free visual servo and tea picking control method thereof - Google Patents
- Publication number
- CN114946403A (application number CN202210799879.5A)
- Authority
- CN
- China
- Prior art keywords
- picking
- tea
- uncalibrated
- visual servo
- servo control
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES; A01—AGRICULTURE; A01D—HARVESTING; MOWING
- A01D46/30—Robotic devices for individually picking crops
- A01D46/04—Picking of fruits, vegetables, hops, or the like — of tea
- B—PERFORMING OPERATIONS; B25—HAND TOOLS; B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J9/003—Programme-controlled manipulators having parallel kinematics
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control; multi-sensor controlled systems; sensor fusion
- B25J9/1697—Vision controlled systems
Abstract
The invention discloses a tea picking robot based on uncalibrated visual servoing and a tea picking control method for it. The robot comprises a camera, a picking hand, a vision controller and a Delta parallel mechanism. The vision controller obtains uncalibrated visual servo control information through an uncalibrated visual servo control model from the tender-shoot image captured by the camera, and the Delta parallel mechanism controls the picking hand to pick tea according to that control information. The technical scheme of the invention shortens the convergence time of the picking calculation, improves working efficiency, and enables the tea picking robot to complete the picking work accurately.
Description
Technical Field
The invention belongs to the technical field of tea picking, and particularly relates to a tea picking robot based on calibration-free visual servoing and a tea picking control method thereof.
Background
At present, tea picking robots are controlled by machine vision: image information is acquired, picking coordinate points are computed by an image-processing algorithm in an upper computer and sent to a lower computer, and the picking hand then moves. In this mode the robot's work splits into two stages, image recognition and trajectory execution, but both precision and efficiency are insufficient. Visual servoing addresses this: image feature information obtained from a vision sensor mounted on the robot is used as feedback to drive the picking hand toward the target position, saving the time spent reading and transmitting coordinate points and improving precision. By the type of feedback information, visual servoing divides into three modes: position-based (PBVS), image-based (IBVS) and hybrid (HBVS). PBVS forms a closed-loop control system in 3D Cartesian space; it depends heavily on precise calibration and an accurate geometric model of the vision sensor, its calibration is difficult, and because the image feature signal lies outside the control loop the target may leave the field of view. IBVS forms the closed loop in the 2D image space and designs the feedback control law from an error signal defined on image features. HBVS, also known as 2.5D visual servoing, works in both 3D and 2D space; it is computationally expensive, and insufficient computational accuracy degrades system performance. Compared with the other two methods, IBVS offers high precision and low design difficulty.
Traditional visual servoing depends heavily on accurate calibration of the camera, the robot and the hand-eye relationship, and image errors often arise during calibration. In the traditional method the state of the picking hand is determined by two things: the image information acquired by the camera in the tea picker, and the internal parameters fixed by system calibration. The latter is easily disturbed by external factors such as the environment, making positioning inaccurate. Conventional visual servoing therefore requires recalibration, which raises calibration and maintenance costs beyond the cost-control requirements of a tea picking robot, and also increases the computational load.
Disclosure of Invention
The invention aims to solve the technical problem of providing a tea picking robot based on calibration-free visual servoing and a tea picking control method thereof, which shorten the convergence time of the picking calculation, improve working efficiency, and enable the tea picking robot to complete the picking work accurately.
In order to achieve the purpose, the invention adopts the following technical scheme:
a tea-picking robot based on uncalibrated visual servoing comprises: a camera, a picking hand, a vision controller and a Delta parallel mechanism, wherein,
the visual controller is used for obtaining uncalibrated visual servo control information through an uncalibrated visual servo control model according to the tender shoot image obtained by the camera;
and the Delta parallel mechanism is used for controlling the picking hands to pick tea according to the uncalibrated visual servo control information.
Preferably, the vision controller includes:
the acquisition module is used for acquiring a tender shoot image shot by the camera;
and the servo control module is used for comparing the characteristics of the tender shoot image with the characteristics of the target expected image through an uncalibrated visual servo task function and an extreme learning machine algorithm based on genetic optimization to obtain uncalibrated visual servo control information.
Preferably, the Delta parallel mechanism comprises:
the calculation module is used for obtaining a picking track according to the uncalibrated visual servo control information and the Jacobian matrix; the Jacobian matrix is a relational matrix of the motion speed of the picking hand and the rotating speed of the motor;
and the picking module is used for controlling the picking hands to pick tea leaves according to the picking track.
Preferably, the Delta parallel mechanism has 3 degrees of freedom (3-DOF), and its moving platform translates in space without rotation.
The invention also provides a tea-picking control method based on the uncalibrated visual servo tea-picking robot, which comprises the following steps:
step S1, obtaining uncalibrated visual servo control information through an uncalibrated visual servo control model according to the tender shoot image obtained by the camera through a visual controller;
and step S2, controlling the picking hands to pick tea leaves through a Delta parallel mechanism according to the uncalibrated visual servo control information.
Preferably, step S1 includes:
acquiring a tender shoot image shot by a camera;
and comparing the characteristics of the tender shoot image with the characteristics of the target expected image through an uncalibrated visual servo task function and an extreme learning machine algorithm based on genetic optimization to obtain uncalibrated visual servo control information.
Preferably, step S2 includes:
obtaining a picking track according to the uncalibrated visual servo control information and the Jacobian matrix; the Jacobian matrix is a relational matrix of the motion speed of the picking hand and the rotating speed of the motor;
and controlling the picking hands to pick tea leaves according to the picking track.
Preferably, the Delta parallel mechanism has 3 degrees of freedom (3-DOF), and its moving platform translates in space without rotation.
In the method, the vision controller obtains visual servo control information through an uncalibrated visual servo task function and an extreme learning machine algorithm based on genetic optimization, and the Delta parallel mechanism controls the picking hand to pick tea leaves according to that information. This technical scheme shortens the convergence time of the picking calculation, improves working efficiency, and enables the tea picking robot to complete the picking work accurately.
Drawings
FIG. 1 is a schematic structural diagram of a tea picking robot based on uncalibrated visual servoing according to the invention;
FIG. 2(a) is a simplified model diagram of the mechanism of the Delta parallel mechanism;
FIG. 2(b) is a schematic diagram of a mechanism single branch model of a Delta parallel mechanism;
FIG. 3 is a block diagram of a vision controller;
FIG. 4 is a schematic view of a pinhole model;
FIG. 5 is a flow chart of the GA-ELM algorithm.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments will be described clearly and completely below with reference to the drawings; it is obvious that the described embodiments are some, but not all, of the embodiments of the present invention. The components of the embodiments generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Example 1:
as shown in fig. 1, the present invention provides a tea-picking robot based on uncalibrated visual servoing, comprising: a camera, a picking hand, a vision controller and a Delta parallel mechanism, wherein,
the visual controller is used for obtaining uncalibrated visual servo control information through an uncalibrated visual servo control model according to the tender shoot image obtained by the camera; and the Delta parallel mechanism is used for controlling the picking hands to pick tea leaves according to the uncalibrated visual servo control information.
As an implementation of this embodiment, the vision controller includes:
the acquisition module is used for acquiring a tender shoot image shot by the camera;
and the servo control module is used for comparing the characteristics of the tender shoot image with the characteristics of the target expected image through an uncalibrated visual servo task function and an extreme learning machine algorithm based on genetic optimization to obtain uncalibrated visual servo control information.
As an implementation manner of the embodiment of the invention, the Delta parallel mechanism comprises:
the calculation module is used for obtaining a picking track according to the uncalibrated visual servo control information and the Jacobian matrix; the Jacobian matrix is a relational matrix of the motion speed of the picking hand and the rotating speed of the motor;
and the picking module is used for controlling the picking hands to pick tea leaves according to the picking track.
The working flow of picking the leaves by the tea picking robot is as follows:
(1) The tea picking robot moves to the initial position of the picking work under the guidance of the camera, and the vision controller prepares for work; after receiving the tender-shoot image, the robot begins closed-loop operation.
(2) The camera captures a tender-shoot image within the working range and transmits it to the PC of the tea picking robot. The PC both obtains the coordinates and feature points of the tender shoots and stores the image as the target, to be compared with subsequently captured images.
(3) Guided by the vision controller based on calibration-free visual servoing, the Delta parallel mechanism moves the picking hand, which carries the camera, toward the target point while images are captured; comparison via the task function continues until the picking hand reaches the working point. After the picking hand cuts the tender-shoot picking point, the gas flow in the negative-pressure suction pipe draws the shoot into the collection box. The identified picking points are worked through one by one along the picking trajectory.
(4) When the images of the space compare without error, the tea picking robot moves to the next tea ridge under the guidance of the vision unit; step (2) is repeated until the task on the current ridge is finished, and then step (1) is carried out again.
The process from capturing an image with the camera to finishing the picking of the identified targets is defined herein as one working cycle; the movement of the picking hand from one picking point to the next is defined as one picking cycle.
Further, the Delta parallel mechanism has 3 degrees of freedom, and its moving platform translates in space without rotation.
As shown in FIG. 2(a), the center point of the static platform of the Delta parallel mechanism is O and the center point of the movable platform is O'. The length of the active arm is |A_iB_i| = L_1 and that of the driven arm is |B_iC_i| = L_2; likewise |OA_i| = R and |C_iO'| = r, where i = 1, 2, 3. As shown in FIG. 2(b), taking O on the static platform as the origin, a rectangular coordinate system O-xyz is set up in space. In the O-xy plane the OA_i make included angles α_1 = 0, α_2 = 2π/3 and α_3 = 4π/3 with the x axis; the angle of each driver relative to the O-xy plane is denoted θ_i (i = 1, 2, 3), and the coordinates of O' are (x, y, z).
A closed-loop vector method is used, and a Delta mechanism kinematic position equation established based on a geometric method can be obtained according to the geometric relation of a Delta mechanism, and is shown in a formula (1):
let Δ R be R-R, then let
That is, the final expression for the motor rotation angle θ_i is:
Considering the mechanical characteristics and the environmental constraints of the tea picking workspace, the positive solution of θ_i is taken.
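The inverse-kinematics solution of formulas (1)–(3) can be sketched as follows. The link lengths below are illustrative placeholders, not values from the patent, and the branch choice corresponds to taking the positive solution of θ_i:

```python
import math

# Hypothetical geometry (metres); the patent does not disclose its link lengths.
L1, L2 = 0.20, 0.40      # active arm |A_iB_i|, driven arm |B_iC_i|
R, r = 0.15, 0.05        # base radius |OA_i|, platform radius |C_iO'|
ALPHA = (0.0, 2 * math.pi / 3, 4 * math.pi / 3)

def delta_ik(x, y, z):
    """Inverse kinematics: platform centre O' = (x, y, z) -> motor angles theta_i.

    theta_i is measured from the O-xy plane, positive downwards, so that in each
    arm's frame B_i = A_i + L1 * (cos t, 0, -sin t).
    """
    thetas = []
    dR = R - r
    for a_i in ALPHA:
        # rotate the target into the arm frame so the arm swings in the x-z plane
        xp = x * math.cos(a_i) + y * math.sin(a_i)
        yp = -x * math.sin(a_i) + y * math.cos(a_i)
        a = xp - dR                      # wrist position relative to the shoulder
        # |B_i C_i| = L2 reduces to: a*cos(t) - z*sin(t) = k
        k = (a * a + yp * yp + z * z + L1 * L1 - L2 * L2) / (2.0 * L1)
        rho = math.hypot(a, z)
        if abs(k) > rho:
            raise ValueError("target outside workspace")
        phi = math.atan2(z, a)
        t = -math.acos(k / rho) - phi    # branch with the elbow swung outwards
        thetas.append(t)
    return thetas
```

A centred target returns three equal angles by symmetry; each angle satisfies the closed-loop constraint |B_iC_i| = L2 by construction.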
The essential prerequisite of uncalibrated visual servoing is the establishment of a Jacobian matrix linking the motion speed of the picking hand with the rotation speed of the motors.
Let the kinematic equation of the Delta parallel mechanism be
P=f(θ) (4)
where P is the position of the mechanism end point and f maps the rotation angles of the axes to it. Differentiating both sides of formula (4) with respect to time t gives the mapping between the velocity of the mechanism end in the workspace and the rotation speed of each motor:

Ṗ = J(θ)θ̇ (5)

where Ṗ is the velocity vector of the mechanism end in the workspace, θ̇ is the driving-joint velocity vector, and J(θ) = ∂f/∂θ is the matrix of partial derivatives, i.e. the target velocity Jacobian — the Jacobian matrix of the picking-hand velocity.
Let D_i represent B_iC_i.
Through formulas (6) and (7), the end velocity required by the work is converted into the required speeds of the three rotating motors, completing accurate speed control.
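The velocity conversion of formulas (5)–(7) can be sketched numerically. The forward map `f` below is a smooth placeholder standing in for the Delta kinematics P = f(θ), since the patent's closed-form model is not reproduced in this excerpt:

```python
import numpy as np

def numeric_jacobian(f, theta, eps=1e-6):
    """Central-difference estimate of J(theta) = df/dtheta for P = f(theta)."""
    theta = np.asarray(theta, float)
    P0 = np.asarray(f(theta), float)
    J = np.zeros((P0.size, theta.size))
    for j in range(theta.size):
        d = np.zeros_like(theta)
        d[j] = eps
        J[:, j] = (np.asarray(f(theta + d), float)
                   - np.asarray(f(theta - d), float)) / (2.0 * eps)
    return J

# Placeholder forward map (any smooth 3-in/3-out function illustrates the idea).
def f(theta):
    t1, t2, t3 = theta
    return [np.cos(t1) + 0.5 * np.cos(t2),
            np.sin(t1) + 0.5 * np.sin(t3),
            -0.3 * (t1 + t2 + t3)]

theta = np.array([0.4, 0.9, 1.3])
J = numeric_jacobian(f, theta)
P_dot = np.array([0.02, -0.01, 0.05])   # required end-effector velocity
theta_dot = np.linalg.solve(J, P_dot)   # motor speed rule: theta_dot = J^{-1} P_dot
```

Inverting J(θ) in this way converts a desired end-effector speed into the three motor speeds, exactly the conversion the text describes.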
Further, the control target of the calibration-free visual servoing can be described by the task function, equation (8):
E(t) = f(m(t), a) − f* (8)
where f and f* are the current and desired states of the system respectively, m(t) is the image measurement, and a is a set of model parameters such as the camera focal length. The control objective is to minimize this task function.
In uncalibrated IBVS, servo performance depends on two parts: the estimation of the image interaction matrix and the choice of controller gain. The solution of the inverse of the image interaction matrix, which represents the mapping from 3D space to the 2D image plane, plays an important role in such systems [20-23]. To address the difficulty and singularity of inverting the interaction matrix, the embodiment of the invention adopts an intelligent neural-network convergence unit as the vision controller, in which an extreme learning machine based on genetic optimization (GA-ELM) is designed. With a fixed gain, the rate at which the feature error converges to zero and the working speed of the picking hand constrain each other: a large gain speeds up convergence but also speeds up the picking hand, risking violation of its limits, while a small gain slows convergence and reduces the computational efficiency of the system. The embodiment of the invention therefore defines a fuzzy-logic (FL) based controller with a variable gain λ_a instead of a fixed gain, whose inputs are the L_2 norms of the task function and of its derivative, so that an appropriate gain is obtained.
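The fuzzy-logic rules of the variable-gain unit are not spelled out in the text; the sketch below is a simple stand-in schedule that reproduces the stated behaviour (larger gain for larger ‖E‖₂, backed off when the error is already falling fast). The bounds `lam_min`/`lam_max` and shaping constant `k` are arbitrary illustration values:

```python
import math

def adaptive_gain(e_norm, e_norm_rate, lam_min=0.1, lam_max=1.0, k=2.0):
    """Variable gain lambda_a from ||E||_2 and d||E||_2/dt (a stand-in for the
    patent's fuzzy-logic unit): a large error far from the target gives a larger
    gain for fast convergence, while a quickly shrinking error backs the gain
    off so the picking hand stays within its speed limit."""
    lam = lam_min + (lam_max - lam_min) * (1.0 - math.exp(-k * e_norm))
    if e_norm_rate < 0:                         # error already decreasing
        lam /= (1.0 + min(-e_norm_rate, 5.0))   # damp to respect speed limits
    return lam
```

Any bounded, monotone schedule with these two inputs would serve the same role in the loop of fig. 3.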
As shown in fig. 3, the vision controller serves as the calibration-free servo control model, and a task function is constructed from the comparison between the desired target features and the current image features. After the tea-leaf image information is processed, it enters the servo loop. The image information passes through the neural-network intelligent approximation unit acting as the vision controller, which solves the inverse of the image interaction matrix. During a picking cycle, the L_2 norms of the image error and of its derivative (‖E‖_2 and d‖E‖_2/dt) are used as inputs to compute the proposed variable gain λ_a. The inverse matrix and the variable gain act together on the robot kinematic controller based on the fuzzy-logic (FL) unit, and the robot follows the constraint of the image field of view, yielding the Jacobian matrix of the Delta parallel mechanism and its inverse. Then, on one hand, the motor rotation angles drive the picking hand; on the other hand, the practical working constraint M(J) influences the variable gain λ_a. The camera moves with the picking hand, and the captured image is compared with the desired image so as to minimize the task function. The embodiment of the invention extracts features from the tea tender-shoot images in the form of point features.
FIG. 4 depicts the pinhole model; a three-dimensional spatial point is denoted P(X, Y, Z) in the camera coordinate system. The analytical derivation of the interaction matrix uses the point features of the pinhole model, where λ denotes the camera focal length.
In fig. 4, the relationship between the actual target point P(X, Y, Z) and its projection p(u, v) on the image plane can be expressed by equation (9):

u = λX/Z, v = λY/Z (9)
the relationship between the projection speed of the point features on the image plane and the picking hand speed is given by equation (10):
where L_s is the interaction matrix and Z is the depth of the point P in space, which is difficult to acquire. For this reason an estimate L̂_s is used in place of L_s, with the value of Z taken from the desired state f*. The formula above covers only a single point feature; it must be obtained separately for each feature point [24-26].
According to the controller design steps, the velocity is transmitted as the control signal to the servo system; once the task function is determined, the relation between the error and the velocity is expressed as equation (11):

Ė = L_s v_c (11)
Requiring the feature error of the task function to decrease exponentially gives equation (12):

Ė = −λE (12)
Combining formula (11) with formula (12) gives the final definition of the velocity, formula (13):

v_c = −λ L_s⁺ E (13)
where L_s⁺ is the pseudo-inverse of the interaction matrix, λ is the gain value, and v_c is the vector of linear and angular velocities of the camera in the reference coordinate system. By definition the interaction matrix is L_s ∈ R^{k×6}, so the output obtained by inverting it varies with the number of point features, whereas convergence with the estimate produces an output of fixed size. Moreover, inverting L_s analytically faces obstacles: not only the singularity of the matrix, but also the difficulty introduced by noise in the camera and in the feature image. The input of the convergence unit is the task function, and convergence yields the product of the inverse interaction matrix and the error vector. In this way, 6 outputs affecting only the linear and angular velocities are obtained, regardless of the number of feature points.
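Equations (10)–(13) can be sketched as follows. The point-feature interaction matrix is the classical IBVS form, and, as in the text, the hard-to-measure depths Z are taken from the desired configuration rather than measured online:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix L_s of one point feature (x, y), in normalized image
    coordinates, at depth Z (the classical IBVS form of equation (10))."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity command v_c = -lam * L_s^+ * E (equation (13)),
    stacking one 2x6 block per feature point and using desired-state depths."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(desired, depths)])
    E = (np.asarray(features, float) - np.asarray(desired, float)).ravel()
    return -lam * np.linalg.pinv(L) @ E

# Four illustrative point features, e.g. corners around a tender shoot.
desired = [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)]
depths = [0.5] * 4
current = [(u + 0.03, v - 0.02) for u, v in desired]
v_c = ibvs_velocity(current, desired, depths)   # 6-vector (vx, vy, vz, wx, wy, wz)
```

Whatever the number of feature points, the command is always the 6-vector of linear and angular velocity, matching the fixed-size output described above.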
The key points of the vision controller based on the genetic-optimization extreme learning machine (GA-ELM) algorithm are as follows: the ELM is a single-hidden-layer algorithm with a high learning speed, whose output error must be minimized after learning. Before learning, the input weights W, output weights β and bias b of the ELM are optimized by the GA, avoiding an improper choice of algorithm parameters and improving the output precision of the ELM algorithm. The flow is shown in figure 5:
1) initialize the GA-ELM parameters and encode the parameters of the ELM network;
2) obtain the optimal parameters in the GA from the initial population and the fitness evaluation;
3) after selection, crossover and mutation (S-C-M), obtain the optimal ELM parameters;
4) put the optimal parameters into learning, start the network learning, and fit the inverse of the interaction matrix.
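The four steps above can be sketched as follows. The patent's exact GA encoding, operators and fitness are not given in this excerpt; this is a minimal stand-in that evolves the ELM input weights W and bias b with training-error fitness, then solves the output weights β in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, W, b):
    """Extreme Learning Machine: fixed hidden layer, closed-form output
    weights beta = H^+ Y (the only solve the ELM performs)."""
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ Y
    return beta, H

def elm_error(X, Y, W, b):
    beta, H = elm_fit(X, Y, W, b)
    return np.mean((H @ beta - Y) ** 2)

def ga_elm(X, Y, n_hidden=20, pop=16, gens=30, sigma=0.3):
    """GA (selection-crossover-mutation) over the ELM input weights W and
    bias b, minimising training MSE before the final ELM solve."""
    d = X.shape[1]
    genes = [(rng.normal(size=(d, n_hidden)), rng.normal(size=n_hidden))
             for _ in range(pop)]
    for _ in range(gens):
        fit = [elm_error(X, Y, W, b) for W, b in genes]
        order = np.argsort(fit)
        parents = [genes[i] for i in order[:pop // 2]]            # selection
        children = []
        for i in range(pop // 2):
            Wa, ba = parents[i]
            Wb, bb = parents[(i + 1) % (pop // 2)]
            W = np.where(rng.random(Wa.shape) < 0.5, Wa, Wb)      # crossover
            b = np.where(rng.random(ba.shape) < 0.5, ba, bb)
            W = W + sigma * rng.normal(size=W.shape) * (rng.random(W.shape) < 0.1)
            b = b + sigma * rng.normal(size=b.shape) * (rng.random(b.shape) < 0.1)  # mutation
            children.append((W, b))
        genes = parents + children        # elitism: best half carried over
    fit = [elm_error(X, Y, W, b) for W, b in genes]
    W, b = genes[int(np.argmin(fit))]
    beta, _ = elm_fit(X, Y, W, b)
    return W, b, beta
```

In the patent the fitted network approximates the inverse-interaction-matrix mapping; here it is exercised on a generic regression target only to show the GA-then-ELM structure.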
Further, during actual work the motion of the picking hand, insufficient contrast between target and background, and occlusion of a tender shoot by other shoots can all cause loss of image features, so that the servo task cannot be completed. Moreover, the camera may pick up noise during transmission after image capture, and image processing carries errors, both of which affect system accuracy. The embodiment of the invention constructs the task function with a homography matrix; solving the homography depends on the feature points in the image, of which no fewer than 4 pairs are required. In the tea tender-shoot image the feature points are distinct. When more feature points are identified, the task function gives the system better robustness to image noise. The number of feature points does not affect the dimensions of the homography matrix or of the task function, and hence does not affect the real-time performance of the system.
The conventional uncalibrated visual servoing task function is defined as equation (14):
The systematic error vector is expressed as equation (15):
where h_0 is constructed by stacking the rows of the identity matrix I_{3×3}. In the embodiment of the invention this can be regarded as making the current camera frame F coincide with F*. To guarantee this, a constraint is defined so that a unique projective homography between the current and desired feature points is obtained in each iteration. It can be shown that e = 0 if and only if the rotation matrix R = I_{3×3} and the translation vector t = 0.
In the embodiment of the invention a constraint is defined on h_4, the last element of the homography matrix, and the Direct Linear Transformation (DLT) is introduced to estimate the homography matrix. First, the two images must be normalized separately, as follows:
the feature points are translated so that their centroids lie at the origin;
Based on the nature of homogeneous coordinates, the transformed pixel coordinates may be used to construct a system of homogeneous simultaneous equations, as in equation (16):

Ah = 0 (16)

where A is the coefficient matrix whose i-th row is built from the transformed pixel coordinates of the i-th correspondence; the purpose of the normalization operation described above is to prevent the coefficient matrix A from becoming ill-conditioned through image noise.
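The normalized DLT described above can be sketched as follows. This is the standard formulation; the scaling of the mean point distance to √2 is the usual Hartley normalization, which the text's translation step implies but does not fully specify:

```python
import numpy as np

def normalize(pts):
    """Translate the centroid to the origin and scale the mean distance
    to sqrt(2); returns the transformed points and the 3x3 transform T."""
    pts = np.asarray(pts, float)
    c = pts.mean(axis=0)
    d = np.mean(np.linalg.norm(pts - c, axis=1))
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    return (pts - c) * s, T

def dlt_homography(src, dst):
    """Estimate H with dst ~ H src (homogeneous) from >= 4 correspondences,
    using the normalized DLT: build A, solve A h = 0 by SVD, denormalize."""
    sn, Ts = normalize(src)
    dn, Td = normalize(dst)
    A = []
    for (x, y), (u, v) in zip(sn, dn):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    Hn = Vt[-1].reshape(3, 3)         # right singular vector of smallest value
    H = np.linalg.inv(Td) @ Hn @ Ts   # undo the normalizing transforms
    return H / H[2, 2]
```

With exact correspondences the estimate recovers the true homography up to scale; the normalization is what keeps A well-conditioned under image noise, as the text notes.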
Accordingly, the new task function is constructed as equation (17):
The systematic error vector is defined as:
where e = 0 if and only if the rotation matrix R = I_{3×3} and the translation vector t = 0.
The new task function fully utilizes the characteristic that the homography matrix has four degrees of freedom. The dimension of the task function is reduced, the state space of the visual servo system needing on-line estimation is simplified, and therefore the real-time performance of the servo system is further improved.
In conventional servo systems the convergence speed of the task function is regarded as the most important performance criterion, so the speed limit of the picking hand is not taken into account. The servo system should converge as fast as possible within the speed limit; to this end a variable gain λ_a is defined, derived from the norms of the image error and of its derivative, and fed cyclically into the FL-unit-based motion controller. Changes in λ_a affect the running speed of the picking hand, and important constraints such as the working boundary must also be considered in the trajectory planning of the Delta parallel mechanism. Therefore the constraint built into the designed calibration-free servo system is a limit on the motion of the picking hand; a total evaluation function allowing a stop at any intended position while avoiding singularities is defined as formula (19):
where J(θ) is the Jacobian matrix of the picking-hand velocity; once determined it enters the system loop, and this function also serves as the input of the FL unit.
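Formula (19) itself is not reproduced in this excerpt. As a hedged illustration of such an evaluation term, a common choice for M(J) is the Yoshikawa manipulability measure, which vanishes at singular configurations and can be used to scale the variable gain down near them (the patent's actual function may differ):

```python
import numpy as np

def manipulability(J):
    """Yoshikawa manipulability sqrt(det(J J^T)) -- an illustrative stand-in
    for the constraint term M(J); it drops to zero at singular poses."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def gain_scale(J, w_min=0.01):
    """Factor applied to the variable gain lambda_a: slow the picking hand
    down as the mechanism approaches a singular configuration."""
    w = manipulability(J)
    return min(1.0, w / w_min)
```

Feeding such a measure back into the gain keeps the picking hand both within its motion limits and away from singularities, as the evaluation function is described to do.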
Example 2:
the invention also provides a tea picking control method of the tea picking robot, which comprises the following steps:
step S1, obtaining uncalibrated visual servo control information through an uncalibrated visual servo control model according to the tender shoot image obtained by the camera through a visual controller;
and step S2, controlling the picking hands to pick tea leaves through a Delta parallel mechanism according to the uncalibrated visual servo control information.
As an implementation manner of this embodiment, step S1 includes:
acquiring a tender shoot image shot by a camera;
and comparing the characteristics of the tender shoot image with the characteristics of the target expected image through an uncalibrated visual servo task function and an extreme learning machine algorithm based on genetic optimization to obtain uncalibrated visual servo control information.
As an implementation manner of the embodiment of the present invention, step S2 includes:
obtaining a picking track according to the uncalibrated visual servo control information and the Jacobian matrix; the Jacobian matrix is a relational matrix of the motion speed of the picking hand and the rotating speed of the motor;
and controlling the picking hands to pick tea leaves according to the picking track.
Further, guided by the vision controller, the Delta parallel mechanism moves the picking hand, carrying the camera, toward the target point while pictures are taken; comparison via the task function continues until the picking hand reaches the working point, and after the picking hand cuts the tender-shoot picking point, the gas flow in the negative-pressure suction pipe draws the shoot into the collection box.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (8)
1. A tea-picking robot based on uncalibrated visual servoing is characterized by comprising: a camera, a picking hand, a vision controller and a Delta parallel mechanism, wherein,
the visual controller is used for obtaining uncalibrated visual servo control information through an uncalibrated visual servo control model according to the tender shoot image obtained by the camera;
and the Delta parallel mechanism is used for controlling the picking hands to pick tea according to the uncalibrated visual servo control information.
2. The tea-plucking robot based on uncalibrated visual servoing of claim 1, wherein the visual controller comprises:
the acquisition module is used for acquiring a tender shoot image shot by the camera;
and the servo control module is used for comparing the features of the tender shoot image with the features of the target expected image through the uncalibrated visual servo task function and a genetically optimized extreme learning machine algorithm, so as to obtain the uncalibrated visual servo control information.
3. The tea-plucking robot based on uncalibrated visual servoing of claim 2, wherein the Delta parallel mechanism comprises:
the calculation module is used for obtaining a picking trajectory according to the uncalibrated visual servo control information and a Jacobian matrix, the Jacobian matrix relating the motion speed of the picking hand to the rotating speeds of the motors;
and the picking module is used for controlling the picking hand to pick tea leaves along the picking trajectory.
4. The tea-plucking robot based on uncalibrated visual servoing of claim 3, wherein the Delta parallel mechanism has three degrees of freedom (3-DOF), and its moving platform is constrained to pure translation in space.
5. A tea picking control method for a tea picking robot based on uncalibrated visual servoing, characterized by comprising the following steps:
step S1, obtaining, by the vision controller, uncalibrated visual servo control information through the uncalibrated visual servo control model according to the tender shoot image captured by the camera;
and step S2, controlling, by the Delta parallel mechanism, the picking hand to pick tea leaves according to the uncalibrated visual servo control information.
6. The tea-plucking control method based on the uncalibrated vision servo tea-plucking robot of claim 5, wherein the step S1 comprises:
acquiring a tender shoot image shot by a camera;
and comparing the features of the tender shoot image with the features of the target expected image through the uncalibrated visual servo task function and a genetically optimized extreme learning machine algorithm, so as to obtain the uncalibrated visual servo control information.
7. The tea-plucking control method based on the uncalibrated vision servo tea-plucking robot of claim 6, wherein the step S2 comprises:
obtaining a picking trajectory according to the uncalibrated visual servo control information and a Jacobian matrix, the Jacobian matrix relating the motion speed of the picking hand to the rotating speeds of the motors;
and controlling the picking hand to pick tea leaves along the picking trajectory.
8. The tea-plucking control method based on the uncalibrated visual servo tea-plucking robot according to claim 7, wherein the Delta parallel mechanism has three degrees of freedom (3-DOF), and its moving platform is constrained to pure translation in space.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210799879.5A CN114946403A (en) | 2022-07-06 | 2022-07-06 | Tea picking robot based on calibration-free visual servo and tea picking control method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114946403A true CN114946403A (en) | 2022-08-30 |
Family
ID=82967926
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210799879.5A Pending CN114946403A (en) | 2022-07-06 | 2022-07-06 | Tea picking robot based on calibration-free visual servo and tea picking control method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114946403A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016193781A1 (en) * | 2015-05-29 | 2016-12-08 | Benemérita Universidad Autónoma De Puebla | Motion control system for a direct drive robot through visual servoing |
CN107443369A (en) * | 2017-06-25 | 2017-12-08 | 重庆市计量质量检测研究院 | A kind of robotic arm of the inverse identification of view-based access control model measurement model is without demarcation method of servo-controlling |
CN109848984A (en) * | 2018-12-29 | 2019-06-07 | 芜湖哈特机器人产业技术研究院有限公司 | A kind of visual servo method controlled based on SVM and ratio |
CN111428712A (en) * | 2020-03-19 | 2020-07-17 | 青岛农业大学 | Famous tea picking machine based on artificial intelligence recognition and recognition method for picking machine |
CN112099442A (en) * | 2020-09-11 | 2020-12-18 | 哈尔滨工程大学 | Parallel robot vision servo system and control method |
CN213991734U (en) * | 2020-05-12 | 2021-08-20 | 青岛科技大学 | Parallel type automatic famous tea picking robot |
CN114568126A (en) * | 2022-03-17 | 2022-06-03 | 南京信息工程大学 | Tea picking robot based on machine vision and working method |
Non-Patent Citations (2)
Title |
---|
彭明: "番茄串采摘机械手无标定视觉伺服控制方法研究", CNKI优秀硕士学位论文全文库(专辑:农业科技;信息科技), vol. 2022, no. 05, pages 13 - 30 * |
杨化林等: "基于时间与急动度最优的并联式采茶机器人轨迹规划混合策略", 机械工程学报, vol. 58, no. 9, pages 62 - 70 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109202912B (en) | Method for registering target contour point cloud based on monocular depth sensor and mechanical arm | |
CN110039542B (en) | Visual servo tracking control method with speed and direction control function and robot system | |
Stavnitzky et al. | Multiple camera model-based 3-D visual servo | |
Chaumette et al. | Visual servo control. II. Advanced approaches [Tutorial] | |
Mariottini et al. | Image-based visual servoing for nonholonomic mobile robots using epipolar geometry | |
CN108469823B (en) | Homography-based mobile robot formation following method | |
Zou et al. | An end-to-end calibration method for welding robot laser vision systems with deep reinforcement learning | |
CN112734823B (en) | Image-based visual servo jacobian matrix depth estimation method | |
CN111203880A (en) | Image vision servo control system and method based on data driving | |
CN113733088A (en) | Mechanical arm kinematics self-calibration method based on binocular vision | |
CN114067210A (en) | Mobile robot intelligent grabbing method based on monocular vision guidance | |
CN108469729B (en) | Human body target identification and following method based on RGB-D information | |
Conticelli et al. | Nonlinear controllability and stability analysis of adaptive image-based systems | |
CN114770461A (en) | Monocular vision-based mobile robot and automatic grabbing method thereof | |
CN109542094B (en) | Mobile robot vision stabilization control without desired images | |
CN115032984A (en) | Semi-autonomous navigation method and system for port logistics intelligent robot | |
CN114946403A (en) | Tea picking robot based on calibration-free visual servo and tea picking control method thereof | |
CN117021066A (en) | Robot vision servo motion control method based on deep reinforcement learning | |
CN114578817B (en) | Control method of intelligent carrier based on multi-sensor detection and multi-data fusion | |
CN114714347A (en) | Robot vision servo control system and method combining double arms with hand-eye camera | |
Vahrenkamp et al. | Planning and execution of grasping motions on a humanoid robot | |
Lei et al. | Multi-stage 3d pose estimation method of robot arm based on RGB image | |
CN114683271B (en) | Visual driving and controlling integrated control system of heterogeneous chip | |
Saeed et al. | Real time, dynamic target tracking using image motion | |
Fourmy et al. | Visually Guided Model Predictive Robot Control via 6D Object Pose Localization and Tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||