CN114946403A - Tea picking robot based on calibration-free visual servo and tea picking control method thereof - Google Patents
- Publication number: CN114946403A (Application CN202210799879.5A)
- Authority: CN (China)
- Prior art keywords: picking, tea, uncalibrated, visual servo, servo control
- Legal status: Pending (an assumption by the source database, not a legal conclusion)
Classifications
- A01D46/00 - Picking of fruits, vegetables, hops, or the like; devices for shaking trees or shrubs
- A01D46/04 - Picking of tea
- A01D46/30 - Robotic devices for individually picking crops
- B25J11/00 - Manipulators not otherwise provided for
- B25J9/003 - Programme-controlled manipulators having parallel kinematics
- B25J9/16 - Programme controls
- B25J9/1679 - Programme controls characterised by the tasks executed
- B25J9/1694 - Use of sensors other than normal servo feedback; perception control; multi-sensor controlled systems; sensor fusion
- B25J9/1697 - Vision controlled systems
Abstract
Description
Technical Field

The invention belongs to the technical field of tea picking, and in particular relates to a tea picking robot based on calibration-free visual servoing and a tea picking control method thereof.
Background Art

At present, tea picking robots are typically controlled by machine vision: image information is acquired, picking coordinates are computed by an image processing algorithm on the host computer, and the coordinates are then sent to the lower-level controller to drive the picking hand. This approach splits the robot's operation into two separate stages, image recognition and trajectory execution, and both accuracy and efficiency suffer as a result. Visual servoing addresses this: image features acquired by a vision sensor mounted on the robot serve directly as feedback that drives the picking hand toward the target, eliminating the time spent reading and transmitting coordinate points and improving accuracy. According to the type of feedback used, visual servoing falls into three modes: position-based (PBVS), image-based (IBVS) and hybrid (HBVS). PBVS forms a closed-loop control system in 3D Cartesian space; it depends heavily on precise calibration of the vision sensor and on an accurate geometric model, which is difficult to calibrate, and because the image feature signal lies outside the control loop, the target may leave the field of view. IBVS forms a closed-loop system in the 2D image space and designs the feedback control strategy around an error signal defined on image features. HBVS, also known as 2.5D visual servoing, combines 3D and 2D information; its computational load is large, and insufficient numerical accuracy degrades system performance. Compared with the other two methods, IBVS offers high accuracy and low design difficulty.

Traditional visual servoing relies heavily on accurate calibration of the camera, the robot and the hand-eye relationship, and image errors frequently arise during calibration. In the traditional method the state of the picking hand is affected by two factors: the image information acquired by the camera of the tea picking machine, and the internal parameters determined by the system through calibration. The latter is easily disturbed by external factors such as the environment, making positioning inaccurate. The traditional visual servo therefore needs repeated recalibration, raising calibration and maintenance costs, which conflicts with the cost-control requirements of a tea picking robot and also increases the computational load.
Summary of the Invention

The technical problem to be solved by the present invention is to provide a tea picking robot based on calibration-free visual servoing and a tea picking control method thereof, which shorten the convergence time of the picking computation, improve working efficiency, and enable the tea picking robot to complete the picking task accurately.
To achieve the above object, the present invention adopts the following technical scheme:

A tea picking robot based on calibration-free visual servoing, comprising a camera, a picking hand, a vision controller and a Delta parallel mechanism, wherein:

the vision controller is configured to derive calibration-free visual servo control information from the tender-shoot image acquired by the camera, via a calibration-free visual servo control model;

the Delta parallel mechanism is configured to control the picking hand to pick tea according to the calibration-free visual servo control information.
Preferably, the vision controller comprises:

an acquisition module, configured to acquire the tender-shoot image captured by the camera;

a servo control module, configured to compare the features of the tender-shoot image with the features of the desired target image through a calibration-free visual servo task function and a genetically optimized extreme learning machine (GA-ELM) algorithm, to obtain the calibration-free visual servo control information.
Preferably, the Delta parallel mechanism comprises:

a calculation module, configured to obtain the picking trajectory from the calibration-free visual servo control information and the Jacobian matrix, where the Jacobian matrix relates the velocity of the picking hand to the rotational speeds of the motors;

a picking module, configured to control the picking hand to pick tea along the picking trajectory.

Preferably, the Delta parallel mechanism has three degrees of freedom (3-DOF), and the motion of the moving platform of the Delta parallel mechanism is kept purely translational in space.
The present invention also provides a tea picking control method based on the calibration-free visual servo tea picking robot, comprising the following steps:

Step S1: the vision controller derives calibration-free visual servo control information from the tender-shoot image acquired by the camera, via the calibration-free visual servo control model;

Step S2: the Delta parallel mechanism controls the picking hand to pick tea according to the calibration-free visual servo control information.
Preferably, step S1 comprises:

acquiring the tender-shoot image captured by the camera;

comparing the features of the tender-shoot image with the features of the desired target image through the calibration-free visual servo task function and the genetically optimized extreme learning machine algorithm, to obtain the calibration-free visual servo control information.

Preferably, step S2 comprises:

obtaining the picking trajectory from the calibration-free visual servo control information and the Jacobian matrix, where the Jacobian matrix relates the velocity of the picking hand to the rotational speeds of the motors;

controlling the picking hand to pick tea along the picking trajectory.

Preferably, the Delta parallel mechanism has three degrees of freedom (3-DOF), and the motion of the moving platform of the Delta parallel mechanism is kept purely translational in space.
In the present invention, the vision controller obtains the visual servo control information through the calibration-free visual servo task function and the genetically optimized extreme learning machine algorithm, and the Delta parallel mechanism controls the picking hand to pick tea according to that information. The technical scheme of the present invention shortens the convergence time of the picking computation, improves working efficiency, and enables the tea picking robot to complete the picking task accurately.
Brief Description of the Drawings

Fig. 1 is a schematic structural diagram of the tea picking robot based on calibration-free visual servoing according to the present invention;

Fig. 2(a) is a schematic diagram of the simplified model of the Delta parallel mechanism;

Fig. 2(b) is a schematic diagram of the single-chain model of the Delta parallel mechanism;

Fig. 3 is a structural diagram of the vision controller;

Fig. 4 is a schematic diagram of the pinhole model;

Fig. 5 is a flowchart of the GA-ELM algorithm.
Detailed Description

To make the purposes, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations.
Embodiment 1:

As shown in Fig. 1, the present invention provides a tea picking robot based on calibration-free visual servoing, comprising a camera, a picking hand, a vision controller and a Delta parallel mechanism, wherein:

the vision controller is configured to derive calibration-free visual servo control information from the tender-shoot image acquired by the camera, via the calibration-free visual servo control model; and the Delta parallel mechanism is configured to control the picking hand to pick tea according to that information.
As one implementation of this embodiment, the vision controller comprises:

an acquisition module, configured to acquire the tender-shoot image captured by the camera;

a servo control module, configured to compare the features of the tender-shoot image with the features of the desired target image through the calibration-free visual servo task function and the genetically optimized extreme learning machine algorithm, to obtain the calibration-free visual servo control information.

As one implementation of this embodiment, the Delta parallel mechanism comprises:

a calculation module, configured to obtain the picking trajectory from the calibration-free visual servo control information and the Jacobian matrix, where the Jacobian matrix relates the velocity of the picking hand to the rotational speeds of the motors;

a picking module, configured to control the picking hand to pick tea along the picking trajectory.
The workflow of the tea picking robot is as follows:

(1) Guided by the camera, the robot moves to the initial position of the picking task and the vision controller gets ready; once a tender-shoot image is received, the robot begins closed-loop operation.

(2) The camera captures an image of the tender shoots within the working range and transmits it to the robot's PC. The PC extracts the coordinates and feature points of the shoots, and also retains the shoot image as the target for comparison with subsequently captured images.

(3) Guided by the calibration-free visual servo controller, the Delta parallel mechanism moves the picking hand, carrying the camera, toward the target point while continuously capturing images and comparing them via the task function until the working point is reached. After the picking hand cuts the shoot at the picking point, the airflow in the negative-pressure suction tube draws the bud into the collection box. Following this picking trajectory, the detected picking points are serviced one by one.

(4) Once the image comparison within the current workspace is complete, the robot moves to the next tea ridge under the guidance of the "eye" unit and repeats step (2) until the task in the current ridge is finished, after which it returns to step (1).

The process of capturing an image and picking every target detected in it is herein defined as one work cycle; the motion of the picking hand from one picking point to the next is defined as one picking period.
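The loop structure of steps (1)-(4) and the work-cycle/picking-period bookkeeping can be sketched as follows. The function names `detect_shoots` and `pick` are hypothetical stand-ins for the robot's real detection and servo routines, not part of the patent:

```python
# Minimal sketch of one work cycle as described above; hypothetical API.

def run_work_cycle(detect_shoots, pick):
    """One work cycle: capture once, then service every detected target.

    detect_shoots() -> list of picking points found in one captured image.
    pick(point)     -> performs one picking period (move, cut, collect).
    Returns the number of picking periods executed.
    """
    targets = detect_shoots()          # step (2): capture and extract targets
    periods = 0
    for point in targets:              # step (3): service points one by one
        pick(point)                    # one picking period per target
        periods += 1
    return periods                     # cycle complete -> move to next ridge

# Simulated usage: three shoots detected in one image.
picked = []
n = run_work_cycle(lambda: ["bud_a", "bud_b", "bud_c"], picked.append)
```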
Further, the Delta parallel mechanism has three degrees of freedom (3-DOF), and the motion of the moving platform is kept purely translational in space.

As shown in Fig. 2(a), the center point of the static platform of the Delta parallel mechanism is O and the center point of the moving platform is O'. The length of the active arm is |AiBi| = L1, that of the passive arm is |BiCi| = L2, and likewise |OAi| = R and |CiO'| = r, where i = 1, 2, 3. As shown in Fig. 2(b), O on the static platform is the origin of a spatial rectangular coordinate system O-xyz; in the O-xy plane, the angles between OAi and the x-axis are α1 = 0, α2 = 2π/3 and α3 = 4π/3, the angle between each active arm (driver) and the O-xy plane is θi (i = 1, 2, 3), and O' has the coordinates (x, y, z).

By the closed-loop vector method, from the geometric relationships of the Delta mechanism, the kinematic position equation of the Delta mechanism established by the geometric method is obtained as formula (1):

(x·cosαi + y·sinαi + ΔR − L1·cosθi)² + (y·cosαi − x·sinαi)² + (z + L1·sinθi)² = L2²,  i = 1, 2, 3  (1)

Let ΔR = r − R, and further let

Ei = −2·L1·z,  Fi = 2·L1·(x·cosαi + y·sinαi + ΔR),  Gi = (x·cosαi + y·sinαi + ΔR)² + (y·cosαi − x·sinαi)² + z² + L1² − L2²  (2)

so that Ei·sinθi + Fi·cosθi = Gi. The final expression of the motor rotation angle θi is then:

θi = 2·arctan[(Ei ± √(Ei² + Fi² − Gi²)) / (Fi + Gi)]  (3)

Considering the characteristics of the mechanism and the environmental constraints of the actual tea picking workspace, θi takes the positive solution.
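As a concrete illustration, the per-chain inverse kinematics above can be sketched in Python. The geometry values (R, r, L1, L2) and the target point are illustrative assumptions, not the patent's actual dimensions:

```python
import math

def delta_ik(x, y, z, R=0.20, r=0.05, L1=0.30, L2=0.60):
    """Return the three motor angles theta_i (radians, measured below the
    base plane) for a moving-platform position (x, y, z) with z < 0.
    Raises ValueError if the point is outside the workspace."""
    dR = r - R
    thetas = []
    for i in range(3):
        a = 2.0 * math.pi * i / 3.0            # alpha_i = 0, 2pi/3, 4pi/3
        # coordinates of C_i relative to A_i, expressed in the arm's plane
        u = x * math.cos(a) + y * math.sin(a) + dR
        v = -x * math.sin(a) + y * math.cos(a)
        w = z
        # constraint |B_i C_i| = L2 reduces to u*cos(t) - w*sin(t) = k
        k = (u * u + v * v + w * w + L1 * L1 - L2 * L2) / (2.0 * L1)
        rho = math.hypot(u, w)
        if abs(k) > rho:
            raise ValueError("target outside workspace")
        # u = rho*cos(phi), w = rho*sin(phi)  =>  rho*cos(t + phi) = k
        phi = math.atan2(w, u)
        theta = -math.acos(k / rho) - phi      # elbow-out, positive branch
        thetas.append(theta)
    return thetas
```

A quick self-check is to reconstruct Bi and Ci from a returned angle and verify that |BiCi| equals L2.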
A necessary element of calibration-free visual servoing is the establishment of the Jacobian matrix, which relates the velocity of the picking hand to the motor speeds.

Let the kinematic equation of the Delta parallel mechanism be

P = f(θ)  (4)

where P is the mapping between the position of the mechanism's end point and the rotation angles of the axes. Differentiating both sides of formula (4) with respect to time t gives the mapping between the end-point velocity in the workspace and the speeds of the motors:

Ṗ = J(θ)·θ̇  (5)

where Ṗ is the velocity vector of the mechanism's end point in the workspace, θ̇ is the driving-joint velocity vector, and J(θ) is a matrix of partial derivatives, i.e., the target velocity Jacobian, which is the velocity Jacobian of the picking hand.

Let Di denote the vector BiCi. Through formulas (6) and (7), the end-point velocity law required by the task is converted into the speed law required by the three rotary motors, thereby achieving precise speed control.
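In practice, the conversion in formulas (5)-(7) amounts to solving the linear system J(θ)·θ̇ = Ṗ for the motor speeds. A minimal sketch with an illustrative (assumed) Jacobian, not one derived from the patent's geometry:

```python
def solve3(J, p_dot):
    """Solve the 3x3 linear system J @ theta_dot = p_dot by Gaussian
    elimination with partial pivoting; returns the motor-speed vector."""
    M = [row[:] + [b] for row, b in zip(J, p_dot)]   # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

# Illustrative Jacobian (not from the patent) and a desired end velocity:
J = [[0.9, 0.1, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 1.1]]
p_dot = [0.05, -0.02, 0.10]          # end-effector velocity (m/s)
theta_dot = solve3(J, p_dot)         # required motor speeds (rad/s)
```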
Further, the control objective of calibration-free visual servoing can be described by the task function of formula (8):

E(t) = f(m(t), a) − f*  (8)

where f and f* are the current state and the desired state of the system respectively, m(t) is the image measurement, and a is a set of related model parameters such as the camera focal length. The control objective is to minimize this task function.
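With point features stacked into a vector, the task function of formula (8) is simply the feature error between the current and desired images. A minimal sketch, in which the pixel values are made-up examples:

```python
def task_function(f_current, f_desired):
    """E(t) = f - f*: element-wise error between the current and desired
    image-feature vectors, plus its L2 norm (used later both as the
    convergence measure and as an input to the variable gain)."""
    E = [c - d for c, d in zip(f_current, f_desired)]
    norm = sum(e * e for e in E) ** 0.5
    return E, norm

# Example: two image points stacked as (u1, v1, u2, v2), in pixels.
E, n = task_function([320.0, 240.0, 400.0, 260.0],
                     [310.0, 235.0, 405.0, 255.0])
```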
In IBUVS (image-based uncalibrated visual servoing), the performance of the uncalibrated servo depends on two parts: the estimation of the image interaction matrix, and the choice of the controller gain. In particular, solving for the inverse of the image interaction matrix, which represents the mapping from 3D space to the 2D image plane, plays an important role in an IBUVS system [20-23]. Inverting the interaction matrix is difficult and subject to singularities. The embodiment of the present invention therefore adopts an intelligent neural-network convergence unit as the vision controller, in which a genetically optimized extreme learning machine (GA-ELM) is designed. A fixed gain makes the rate at which the feature error converges to zero and the working speed of the picking hand constrain each other: a large gain value increases the convergence rate but also the speed of the picking hand, risking violation of the constraints; a small gain slows convergence and reduces the computational efficiency of the system. The embodiment therefore defines a controller based on a fuzzy logic (FL) unit, which replaces the fixed gain with a variable gain λa whose inputs are the L2 norms of the task function and of its derivative, yielding a suitable gain at each step.
As shown in Fig. 3, the vision controller is used as the calibration-free servo control model, and the task function is constructed from the comparison between the desired target features and the current image features. After the tea-leaf image information is processed, it enters the servo loop. The image information passes through the neural-network intelligent approximation unit acting as the vision controller, which solves the inverse of the image interaction matrix. Within one picking period, the L2 norms of the image error and of its derivative (||E||2, d||E||2/dt) are taken as inputs to compute the proposed variable gain λa. The inverse matrix and the variable gain act together on the robot kinematic controller based on the fuzzy logic (FL) unit, and the robot obeys the field-of-view constraints of the image, from which the Jacobian matrix of the Delta parallel mechanism and its inverse are obtained. Thereafter, the motor rotation angles drive the motion of the picking hand on the one hand, while the actual working constraint M(J) influences the variable gain λa on the other. The camera moves with the picking hand, and the captured image is compared with the desired image so as to minimize the task function. The embodiment of the present invention extracts features from the tea-shoot image in the form of point features.
Fig. 4 depicts the pinhole model: a point in three-dimensional space is expressed as P(X, Y, Z) in the camera coordinate system. The analytical derivation of the interaction matrix uses the point feature of the pinhole model, where λ denotes the focal length of the camera.

For the point p(u, v) on the projection plane in Fig. 4, the relationship between the actual target point P(X, Y, Z) and its image coordinates can be expressed by formula (9):

u = λ·X/Z,  v = λ·Y/Z  (9)

The relationship between the projected velocity of the point feature on the image plane and the velocity of the picking hand is given by formula (10):

[u̇, v̇]ᵀ = Ls·vK,  Ls = [ −λ/Z  0  u/Z  u·v/λ  −(λ² + u²)/λ  v ;  0  −λ/Z  v/Z  (λ² + v²)/λ  −u·v/λ  −u ]  (10)

where Ls is the interaction matrix and Z is the depth of the point P in space, which is comparatively complex to obtain. For this reason an estimate is used in place of Ls, in which the Z value is taken from the desired configuration f*. The above covers only a single point feature and must be obtained separately for each feature point [24-26].
Following the controller design steps, the velocity is transmitted to the servo system as the control signal. Once the task function is determined, the relationship between the error and the velocity is expressed by formula (11):

Ė(t) = Ls·vK  (11)

Requiring the feature error of the task function to decrease exponentially gives formula (12):

Ė(t) = −λ·E(t)  (12)

Combining formulas (11) and (12) finally yields the defining expression (13) for the velocity:

vK = −λ·Ls⁺·E(t)  (13)

where Ls⁺ is the pseudo-inverse of the interaction matrix, λ is the gain value, and vK is the vector of the camera's linear and angular velocities in the reference frame. The interaction matrix is defined by Ls ∈ R^(k×6); the output of a directly computed inverse varies with the number of point features, whereas convergence through the approximation unit produces a fixed number of outputs. Moreover, the analytical computation of Ls⁺ faces several obstacles: not only the singularity of the matrix, but also noise from the camera and in the feature image make it more difficult. The input of the convergence unit is the task function, and convergence yields the product of the interaction-matrix inverse and the error vector. In this way, six outputs are obtained that are independent of the number of feature points and affect only the linear and angular velocities.
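The control law of formula (13) can be exercised in a small closed-loop simulation. The sketch below is restricted to the three translational columns of Ls (matching the 3-DOF translational Delta mechanism), uses the analytic least-squares pseudo-inverse instead of the patent's GA-ELM approximation of Ls⁺, and the focal length, gain, and scene geometry are all illustrative assumptions:

```python
import math

LAM_F = 500.0   # assumed camera focal length in pixels (illustrative)
GAIN = 0.5      # assumed servo gain lambda (illustrative)

def gauss3(A, b):
    """Solve a 3x3 system A x = b by Gaussian elimination with pivoting."""
    M = [A[i][:] + [b[i]] for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][j] * x[j] for j in range(r + 1, 3))) / M[r][r]
    return x

def project(P, c):
    """Pinhole projection u = f*X/Z, v = f*Y/Z (formula (9)) of a world
    point P seen from camera position c, camera axes parallel to the world."""
    X, Y, Z = P[0] - c[0], P[1] - c[1], P[2] - c[2]
    return (LAM_F * X / Z, LAM_F * Y / Z)

def control_step(points, c, feats_des):
    """One step of v = -gain * Ls+ * E (formula (13)), translation only.
    Returns the commanded camera velocity and the current error norm."""
    L, E = [], []
    for P, (ud, vd) in zip(points, feats_des):
        u, v = project(P, c)
        Z = P[2] - c[2]
        L.append([-LAM_F / Z, 0.0, u / Z])       # du/dt row (translation)
        L.append([0.0, -LAM_F / Z, v / Z])       # dv/dt row (translation)
        E.extend([u - ud, v - vd])
    # least-squares pseudo-inverse via the normal equations (L^T L) x = L^T E
    A = [[sum(Lk[i] * Lk[j] for Lk in L) for j in range(3)] for i in range(3)]
    b = [sum(Lk[i] * e for Lk, e in zip(L, E)) for i in range(3)]
    x = gauss3(A, b)
    return [-GAIN * xi for xi in x], math.sqrt(sum(e * e for e in E))

# Four shoot points (metres, world frame); desired view is from the origin.
pts = [(0.1, 0.1, 0.6), (-0.1, 0.1, 0.7), (0.1, -0.1, 0.65), (-0.1, -0.1, 0.6)]
desired = [project(P, (0.0, 0.0, 0.0)) for P in pts]
cam = [0.03, -0.02, 0.04]                        # initial camera offset
err0 = control_step(pts, cam, desired)[1]
for _ in range(60):                              # closed-loop servo iterations
    vel, err = control_step(pts, cam, desired)
    cam = [ci + 0.1 * vi for ci, vi in zip(cam, vel)]   # Euler step, dt = 0.1
```

Iterating the step drives the feature error toward zero, which is exactly the exponential decrease demanded by formula (12).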
The focus of the vision controller's genetically optimized extreme learning machine (GA-ELM) algorithm is as follows: the ELM, a fast-learning single-hidden-layer algorithm, must minimize its output error after learning. Before learning, the GA optimizes its input weights W, output weights β and biases b, avoiding a poor choice of algorithm parameters and helping the ELM improve its output accuracy. The procedure, shown in Fig. 5, is:

1) GA-ELM initializes its parameters and encodes the parameters of the ELM network;

2) through the initial population and fitness evaluation, the best parameters in the GA are obtained;

3) after selection, crossover and mutation (S-C-M), the optimal ELM parameters are obtained;

4) the optimal parameters are loaded, the network starts learning, and it fits the computation of the interaction-matrix inverse Ls⁺.
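The GA-ELM loop can be sketched as a toy example. Fitting sin(x) here stands in for fitting the product Ls⁺·E; the population sizes, mutation scale and hidden-layer width are assumptions, and (as one common ELM variant) the output weights β are solved by ridge least squares while the GA searches the hidden-layer parameters:

```python
import math, random

random.seed(7)
N_H = 8                                   # hidden nodes (assumed size)
XS = [-math.pi + i * 2 * math.pi / 39 for i in range(40)]
YS = [math.sin(x) for x in XS]            # toy target in place of Ls+ . E

def gauss(A, b):
    """Solve the square system A x = b by elimination with pivoting."""
    n = len(b)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def elm_mse(genome):
    """Solve the output weights beta by ridge least squares for one
    hidden-layer genome [(w_j, b_j)] and return the training MSE."""
    H = [[math.tanh(w * x + b) for (w, b) in genome] for x in XS]
    A = [[sum(H[k][i] * H[k][j] for k in range(len(XS)))
          + (1e-6 if i == j else 0.0) for j in range(N_H)] for i in range(N_H)]
    rhs = [sum(H[k][i] * YS[k] for k in range(len(XS))) for i in range(N_H)]
    beta = gauss(A, rhs)
    preds = [sum(b_ * h for b_, h in zip(beta, row)) for row in H]
    return sum((p - y) ** 2 for p, y in zip(preds, YS)) / len(YS)

def ga_elm(pop_size=12, gens=8):
    """Genetic search over the hidden-layer parameters: selection of the
    best half, uniform crossover, Gaussian mutation (the S-C-M loop)."""
    pop = [[(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(N_H)]
           for _ in range(pop_size)]
    first_mse = elm_mse(pop[0])
    for _ in range(gens):
        pop.sort(key=elm_mse)
        elite = pop[: pop_size // 2]                         # selection
        children = []
        while len(elite) + len(children) < pop_size:
            pa, pb = random.sample(elite, 2)
            child = [random.choice(pair) for pair in zip(pa, pb)]  # crossover
            child = [(w + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
                     for (w, b) in child]                    # mutation
            children.append(child)
        pop = elite + children
    best = min(pop, key=elm_mse)
    return first_mse, elm_mse(best)

mse0, mse_best = ga_elm()
```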
Further, in actual operation the motion of the picking hand, insufficient contrast between target and background, or occlusion of a shoot by other buds can all cause image features to be lost, making the servo task impossible to complete. In addition, the camera may introduce noise during transmission after the image is captured, producing image-processing errors that degrade system accuracy. To address this problem, the embodiment of the present invention constructs the task function with a homography matrix. Solving for the homography matrix depends on the feature points in the image, of which no fewer than 4 pairs are required. In an image of tea shoots the feature points are fairly distinct. As the number of identified feature points increases, the system gains more suitable robustness to image noise with the help of the task function. The number of feature points affects neither the dimensions of the homography matrix and the task function, nor the real-time performance of the system.
The task function of conventional uncalibrated visual servoing is defined as formula (14), where ĥi is the ith row of the estimated homography matrix.

The error vector of the system is expressed as formula (15), where h0 is constructed by stacking the rows of the identity matrix I3×3. The embodiment of the present invention can be regarded as making the current camera frame F coincide with the desired frame F*. To guarantee this, a constraint is defined so as to obtain a unique projective homography matrix between the current and desired feature points in each iteration. It can be shown that e = 0 if and only if the rotation matrix R = I3×3 and the position vector t = 0.
In the embodiment of the present invention, a constraint is defined on h4, the last element of the homography matrix, and the direct linear transform (DLT) is introduced to estimate the homography matrix. First, it is necessary to normalize the two images separately, as follows:

translate the feature points so that their centroid lies at the origin;

scale the feature points so that their average distance from the origin equals √2.

Based on the properties of homogeneous coordinates, the transformed pixel coordinates can be used to construct a homogeneous system of simultaneous equations, as in formula (16):

A·h = 0  (16)

where A is the coefficient matrix and ai is its ith row. The purpose of the above normalization is to prevent the coefficient matrix A from becoming ill-conditioned due to image noise.
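The two normalization steps can be sketched directly; the √2 target distance follows standard normalized-DLT practice, and the pixel coordinates below are made-up examples:

```python
import math

def normalize_points(pts):
    """Translate a point set so its centroid is at the origin, then scale it
    so the average distance to the origin is sqrt(2). Assumes a
    non-degenerate point set. Returns the transformed points and the
    (cx, cy, s) translation/scale parameters used."""
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    shifted = [(x - cx, y - cy) for x, y in pts]
    mean_d = sum(math.hypot(x, y) for x, y in shifted) / n
    s = math.sqrt(2) / mean_d
    return [(s * x, s * y) for x, y in shifted], (cx, cy, s)

# Example: four pixel-coordinate feature points from one image.
norm, params = normalize_points([(300, 200), (340, 210), (310, 260), (350, 250)])
```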
Therefore, from the ratios of the homography elements to h4, a new task function is constructed as formula (17), which reflects the four degrees of freedom of the homography matrix.

The error vector of the system is defined as formula (18), where e = 0 if and only if the rotation matrix R = I3×3 and the position vector t = 0.

This new task function takes full advantage of the fact that the homography matrix has four degrees of freedom. This not only reduces the dimension of the task function, but also simplifies the state space of the visual servo system that must be estimated online, thereby further improving the real-time performance of the servo system.
In a traditional servo system, the convergence speed of the task function is treated as the most important performance criterion, so the speed limit of the picking hand is not taken into account. To make the servo system converge as fast as possible within that limit, a variable gain λa is defined. It is computed from the image error and the norm of the error derivative, and is then fed back into the FL-unit-based system motion controller. Changing λa affects the running speed of the picking hand, and important constraints such as the working boundary must also be considered in the trajectory planning of the Delta parallel mechanism. A constraint is therefore established in the designed uncalibrated servo system: it restricts the motion of the picking hand, and it also serves as the overall evaluation function that allows the hand to stop at any moment and to avoid singularities. It is defined as equation (19):
where J(θ) is the velocity Jacobian matrix of the picking hand. Once determined, it enters the system loop, and this function also serves as an input to the FL unit.
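The interplay between the variable gain λa and the speed limit of the picking hand can be illustrated as follows. The exact gain schedule produced by the FL unit is not given in the text, so the exponential mapping and the limit-clipping rule below are illustrative stand-ins, not the patent's controller:

```python
import numpy as np

def variable_gain(err, err_dot, lam_min=0.1, lam_max=1.0, k=2.0):
    """Illustrative adaptive gain: large when the image error (and its
    rate of change) is large, tapering toward lam_min as both shrink."""
    x = np.linalg.norm(err) + np.linalg.norm(err_dot)
    return lam_min + (lam_max - lam_min) * (1.0 - np.exp(-k * x))

def joint_velocity_command(J, err, lam, limit):
    """Map the task-space correction -lam*err through the pseudo-inverse
    of the velocity Jacobian J, then scale the result uniformly so no
    joint exceeds the actuator speed limit (preserving direction)."""
    qdot = -lam * np.linalg.pinv(J) @ err
    scale = np.max(np.abs(qdot)) / limit
    if scale > 1.0:
        qdot = qdot / scale
    return qdot
```

Uniform scaling (rather than per-joint clipping) is used so that saturating the limit does not change the direction of motion of the picking hand.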
Example 2:
The present invention also provides a tea picking control method for a tea picking robot, comprising the following steps:
Step S1: the vision controller obtains uncalibrated visual servo control information from the shoot images captured by the camera, using the uncalibrated visual servo control model;
Step S2: the Delta parallel mechanism controls the picking hand to pick tea leaves according to the uncalibrated visual servo control information.
As one implementation of this embodiment, step S1 comprises:
acquiring the shoot image captured by the camera;
comparing the features of the shoot image with the features of the desired target image through the uncalibrated visual servo task function and a genetically optimized extreme learning machine algorithm, to obtain the uncalibrated visual servo control information.
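A genetically optimized extreme learning machine (ELM) is named above as the feature-comparison engine. The sketch below shows only the core ELM (random hidden layer, output weights solved in closed form by least squares); the genetic optimization of the hidden-layer weights described in the text is omitted, with a fixed random seed as a stand-in:

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random input weights, sigmoid
    hidden layer, output weights solved by least squares.  The genetic
    optimization of the random weights is not reproduced here."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)           # fixed random biases
        self.beta = None                             # learned output weights

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

Because only the output layer is trained, fitting reduces to one linear solve, which is what makes the ELM attractive for the real-time mapping from image features to control information.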
As one implementation of the embodiment of the present invention, step S2 comprises:
obtaining a picking trajectory according to the uncalibrated visual servo control information and the Jacobian matrix, where the Jacobian matrix relates the movement speed of the picking hand to the rotational speeds of the motors;
controlling the picking hand to pick tea leaves along the picking trajectory.
Further, guided by the vision controller, the Delta parallel mechanism moves the picking hand, which carries the camera, toward the target point while taking pictures and comparing them by means of the task function, until the working point is reached. After the picking hand shears the bud at the picking point, the flow of gas in the negative-pressure tube draws the leaf bud into the collection box.
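The cycle described above — move, capture, compare via the task function, then cut and vacuum — could be organized as the following loop. The camera, controller, and picker interfaces here are hypothetical placeholders for illustration, not the patent's actual hardware API:

```python
import numpy as np

def picking_cycle(camera, controller, picker, desired_features,
                  tol=2.0, max_iters=200):
    """Hedged sketch of the eye-in-hand picking cycle.  Assumed
    interfaces: camera.capture() returns the feature vector of the
    current shoot image; controller.step(err) commands the Delta
    mechanism one servo step; picker.cut() shears the bud at the
    picking point and picker.suction() draws it into the collection
    box via the negative-pressure tube."""
    for _ in range(max_iters):
        features = camera.capture()
        err = features - desired_features
        if np.linalg.norm(err) < tol:   # task function has converged
            picker.cut()
            picker.suction()
            return True
        controller.step(err)            # keep servoing toward the target
    return False                        # did not converge within budget
```

The loop stops servoing only when the image error falls below the tolerance, which is how "compare until the working point is reached" is realized without any camera calibration.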
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210799879.5A CN114946403A (en) | 2022-07-06 | 2022-07-06 | Tea picking robot based on calibration-free visual servo and tea picking control method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114946403A true CN114946403A (en) | 2022-08-30 |
Family
ID=82967926
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210799879.5A Pending CN114946403A (en) | 2022-07-06 | 2022-07-06 | Tea picking robot based on calibration-free visual servo and tea picking control method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114946403A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016193781A1 (en) * | 2015-05-29 | 2016-12-08 | Benemérita Universidad Autónoma De Puebla | Motion control system for a direct drive robot through visual servoing |
CN107443369A (en) * | 2017-06-25 | 2017-12-08 | 重庆市计量质量检测研究院 | A calibration-free servo control method for robotic arms based on inverse identification of visual measurement models |
CN109848984A (en) * | 2018-12-29 | 2019-06-07 | 芜湖哈特机器人产业技术研究院有限公司 | A kind of visual servo method controlled based on SVM and ratio |
CN111428712A (en) * | 2020-03-19 | 2020-07-17 | 青岛农业大学 | Famous tea picking machine based on artificial intelligence recognition and recognition method for picking machine |
CN112099442A (en) * | 2020-09-11 | 2020-12-18 | 哈尔滨工程大学 | Parallel robot vision servo system and control method |
CN213991734U (en) * | 2020-05-12 | 2021-08-20 | 青岛科技大学 | A parallel automatic famous tea picking robot |
CN114568126A (en) * | 2022-03-17 | 2022-06-03 | 南京信息工程大学 | Tea picking robot based on machine vision and working method |
Non-Patent Citations (2)
Title |
---|
Peng Ming: "Research on uncalibrated visual servo control method for a tomato-cluster picking manipulator", CNKI Outstanding Master's Theses Full-text Database (Agricultural Science and Technology; Information Technology), vol. 2022, no. 05, pages 13-30 *
Yang Hualin et al.: "Hybrid trajectory-planning strategy for a parallel tea-picking robot based on time and jerk optimality", Journal of Mechanical Engineering, vol. 58, no. 9, pages 62-70 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Kragic et al. | Survey on visual servoing for manipulation | |
Stavnitzky et al. | Multiple camera model-based 3-D visual servo | |
Chaumette et al. | Visual servoing and visual tracking | |
Chaumette et al. | Visual servo control. II. Advanced approaches [Tutorial] | |
Khadraoui et al. | Visual servoing in robotics scheme using a camera/laser-stripe sensor | |
Li et al. | Automated visual positioning and precision placement of a workpiece using deep learning | |
WO2024027647A1 (en) | Robot control method and system and computer program product | |
Fanello et al. | 3D stereo estimation and fully automated learning of eye-hand coordination in humanoid robots | |
Grigorescu et al. | Robust camera pose and scene structure analysis for service robotics | |
Fang et al. | A sampling-based motion planning method for active visual measurement with an industrial robot | |
CN118143929A (en) | A robot 3D vision-guided grasping method | |
Ruan et al. | Feature-based autonomous target recognition and grasping of industrial robots | |
CN113211433B (en) | Separated visual servo control method based on composite characteristics | |
Conticelli et al. | Nonlinear controllability and stability analysis of adaptive image-based systems | |
CN112621746A (en) | PID control method with dead zone and mechanical arm visual servo grabbing system | |
CN117218210A (en) | Binocular active vision semi-dense depth estimation method based on bionic eyes | |
CN118721200A (en) | Visual servo control method, device and storage medium for dual-arm collaborative robot | |
Elsheikh et al. | Practical design of a path following for a non-holonomic mobile robot based on a decentralized fuzzy logic controller and multiple cameras | |
CN113910218A (en) | Robot calibration method and device based on kinematics and deep neural network fusion | |
Taylor et al. | Grasping unknown objects with a humanoid robot | |
Xiao et al. | One-shot sim-to-real transfer policy for robotic assembly via reinforcement learning with visual demonstration | |
CN119540731A (en) | Edible mushroom bud detection and collection system, method, medium and device | |
CN114946403A (en) | Tea picking robot based on calibration-free visual servo and tea picking control method thereof | |
Allotta et al. | On the use of linear camera-object interaction models in visual servoing | |
CN118990489A (en) | Double-mechanical-arm cooperative carrying system based on deep reinforcement learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||