CN107363831A - Vision-based teleoperation robot control system and method - Google Patents

Info

Publication number
CN107363831A
CN107363831A (application CN201710428209.1A)
Authority
CN
China
Prior art keywords
master station, controller, bracket, sphere
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710428209.1A
Other languages
Chinese (zh)
Other versions
CN107363831B (en)
Inventor
王硕
席宝
鲁涛
蔡莹皓
刘乃军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN201710428209.1A
Publication of CN107363831A
Application granted
Publication of CN107363831B
Legal status: Active
Anticipated expiration

Classifications

    • B25J 9/1697 Vision controlled systems, under B25J 9/16 Programme controls (B25J 9/00 Programme-controlled manipulators)
    • B25J 13/06 Control stands, e.g. consoles, switchboards, under B25J 13/00 Controls for manipulators
    • B25J 9/1689 Teleoperation, under B25J 9/1679 Programme controls characterised by the tasks executed
    (Section B: Performing operations; transporting. B25J: Manipulators; chambers provided with manipulation devices.)

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a vision-based teleoperation robot control system and method. The control system includes a master station and a slave station. The master station includes a straight bracket, a cross bracket, a depth camera, and a master-station controller; the master-station controller is connected to the depth camera and determines, from the color images and corresponding depth images, the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction. The slave station includes a robotic arm, a mechanical gripper attached to the end of the arm, and an arm controller. The arm controller is connected to the master-station controller over a network and to the arm and the gripper; it controls the target pose of the arm according to the pose increment of the cross bracket and the opening and closing of the gripper according to the angle. The operator can therefore perform tasks naturally, while the equipment is simplified, the recognition of complex operator motions is improved, and the control precision is increased.

Description

Vision-based teleoperation robot control system and method

Technical Field

The invention relates to the technical field of teleoperated robot control, and in particular to a vision-based teleoperation robot control system and method.

Background

Although robotics has advanced considerably in recent years, robots still cannot independently complete some complex and dangerous tasks, such as handling nuclear waste, defusing explosives, and underwater or space exploration.

Robot teleoperation isolates humans from hazardous environments while still exploiting human problem-solving ability and experience; it therefore has strong practical value and broad application prospects.

Existing teleoperation methods fall into two categories: contact and non-contact. Contact methods typically use exoskeletons, data gloves, or inertial measurement to capture the operator's motion and then use that motion to control the robot. Because sensor devices must be worn on the operator's body, these methods restrict the operator and make the motion unnatural. Non-contact methods usually measure the operator's motion visually, so the operator moves naturally, which helps in completing more complex teleoperation tasks; such methods, however, place high demands on the accuracy of the visual measurement.

Among vision-based teleoperation systems, methods that use no dedicated markers are the most natural for the operator, but they place heavy demands on the recognition algorithm and can currently handle only simple tasks such as basic grasping. In contrast, vision methods that use dedicated markers rely on simple algorithms; with simple, effective markers and fast detection algorithms, they achieve high detection accuracy.

Summary of the Invention

To solve the above problems of the prior art, namely the complex external equipment required by contact teleoperation methods, the unnatural movements imposed on the operator during tasks, and the poor recognition of complex operator motions, the invention provides a vision-based teleoperation robot control system and method.

To achieve the above object, the invention provides the following scheme:

A vision-based teleoperation robot control system, the control system comprising a master station and a slave station, the master station being connected to the slave station through a network; wherein,

the master station comprises:

a straight bracket and a cross bracket, which are hand-held devices moved respectively by the motion of the operator's hands;

a depth camera for capturing color images and corresponding depth images of the cross bracket and the straight bracket as they move;

a master-station controller, connected to the depth camera and configured to determine, from the color images and corresponding depth images, the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction;

the slave station comprises a robotic arm, a mechanical gripper connected to the end of the robotic arm, and an arm controller; the arm controller is connected to the master-station controller through the network and receives the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction; the arm controller is connected to the robotic arm and the mechanical gripper respectively, controls the target pose of the robotic arm according to the pose increment of the cross bracket, and controls the opening and closing of the mechanical gripper according to the angle.

Optionally, a sphere is mounted at each of the three free ends of the cross bracket and one at its junction; a sphere is mounted at each end of the straight bracket; the six spheres all have different colors.

Optionally, the master station further comprises:

six classifiers, connected to the depth camera and the master-station controller, for classifying the six spheres in the color image by color;

the master-station controller is further configured to determine the centers of the six spheres by the centroid method and, from the center positions and the depth image, to determine the three-dimensional positions of the six spheres in the camera coordinate system for the current frame.

Optionally, the master station further comprises:

a Kalman filter, connected to the master-station controller, for predicting the position of each sphere in the next frame from its three-dimensional position in the camera coordinate system in the current frame;

the master-station controller is further configured to perform local detection, based on the predicted position of each sphere in the next frame and the color image captured by the depth camera, to determine the three-dimensional positions of the spheres.

Optionally, the master station further comprises a mean filter, connected to the controller, for filtering the pose increment of the cross bracket and sending the filtered pose increment to the arm controller over the network.

Optionally, the slave station further comprises a network camera for capturing images of the motion of the robotic arm and the gripper and of the work scene;

the master station further comprises a display connected to the master-station controller; the master-station controller is further configured to receive the images of the arm, the gripper, and the work scene captured by the network camera and send them to the display.

According to embodiments of the invention, the following technical effects are disclosed:

In the vision-based teleoperation robot control system of the invention, a depth camera captures color images and corresponding depth images of the cross bracket and the straight bracket as the operator's hands move them, and the master-station controller determines the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction. The arm controller can then control the target pose of the robotic arm according to the pose increment and the opening and closing of the gripper according to the angle. The operator can thus perform tasks naturally, while the equipment is simplified, the recognition of complex operator motions is improved, and the control precision is increased.

To achieve the above object, the invention further provides the following scheme:

A vision-based teleoperation robot control method, the control method comprising:

at the master station, capturing, with a depth camera, color images and corresponding depth images of the cross bracket and the straight bracket as the operator's hands move them;

determining, with the master-station controller and from the color images and corresponding depth images, the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction;

at the slave station, receiving, with the arm controller, the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction, controlling the target pose of the robotic arm according to the pose increment, and controlling the opening and closing of the gripper according to the angle.

Optionally, a sphere is mounted at each of the three free ends of the cross bracket and one at its junction; a sphere is mounted at each end of the straight bracket; the six spheres all have different colors.

Optionally, the control method further comprises:

classifying the six spheres in the color image by color with six classifiers; determining the centers of the six spheres with the master-station controller by the centroid method, and determining, from the center positions and the depth image, the three-dimensional positions of the six spheres in the camera coordinate system for the current frame.

Optionally, the control method further comprises:

predicting, with a Kalman filter, the position of each sphere in the next frame from its three-dimensional position in the camera coordinate system in the current frame; and performing local detection, with the master-station controller, based on the predicted positions and the color image captured by the depth camera, to determine the three-dimensional positions of the spheres.

According to embodiments of the invention, the following technical effects are disclosed:

In the vision-based teleoperation robot control method of the invention, a depth camera captures color images and corresponding depth images of the cross bracket and the straight bracket as the operator's hands move them, and the master-station controller determines the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction. The arm controller can then control the target pose of the robotic arm according to the pose increment and the opening and closing of the gripper according to the angle, so that the operator can perform tasks naturally while the equipment is simplified, the recognition of complex operator motions is improved, and the control precision is increased.

Brief Description of the Drawings

Fig. 1 is a schematic structural view of the vision-based teleoperation robot control system of the invention;

Fig. 2 illustrates the pose calculation method of an embodiment of the invention;

Fig. 3 is a flowchart of the stereo-vision software of an embodiment of the invention;

Fig. 4 is the first color sub-table of the color table of an embodiment of the invention;

Fig. 5 is the second color sub-table of the color table of an embodiment of the invention;

Fig. 6 is the third color sub-table of the color table of an embodiment of the invention.

Reference numerals:

1: operator; 2: straight bracket; 3: cross bracket; 4: depth camera; 5: control computer; 6: network; 7: arm controller; 8: mechanical gripper; 9: robotic arm; 10: network camera.

Detailed Description

Preferred embodiments of the invention are described below with reference to the drawings. Those skilled in the art will understand that these embodiments only explain the technical principles of the invention and are not intended to limit its scope of protection.

The invention provides a vision-based teleoperation robot control system and method. A depth camera captures color images and corresponding depth images of the cross bracket and the straight bracket as the operator's hands move them; the master-station controller determines the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction; the arm controller then controls the target pose of the robotic arm according to the pose increment and the opening and closing of the gripper according to the angle, so that the operator can perform tasks naturally while the equipment is simplified, recognition of complex operator motions is improved, and control precision is increased.

As shown in Fig. 1, the vision-based teleoperation robot control system of the invention includes a master station and a slave station, the master station being connected to the slave station through a network 6.

The master station includes a straight bracket 2, a cross bracket 3, a depth camera 4, and a master-station controller. The straight bracket 2 and the cross bracket 3 are hand-held devices moved respectively by the hands of the operator 1. The depth camera 4 captures color images and corresponding depth images of the two brackets as they move. The master-station controller is connected to the depth camera 4 and determines, from those images, the pose increment of the cross bracket 3 and the angle between the straight bracket 2 and the vertical direction.

The slave station includes a robotic arm 9, a mechanical gripper 8 connected to the end of the robotic arm 9, and an arm controller 7. The arm controller 7 is connected to the master-station controller through the network 6 and receives the pose increment of the cross bracket 3 and the angle between the straight bracket 2 and the vertical direction. The arm controller 7 is connected to the robotic arm 9 and the mechanical gripper 8 respectively; it controls the target pose of the robotic arm 9 according to the pose increment of the cross bracket 3 and the opening and closing of the mechanical gripper 8 according to the angle. The target pose of the robotic arm 9 is its initial pose plus the filtered pose increment. Specifically, when the angle between the straight bracket 2 and the vertical direction is 0° (the bracket is vertical), the gripper 8 closes; when the angle is 90° (the bracket is horizontal), the gripper 8 opens.
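
A minimal sketch of this angle-to-gripper mapping in Python. The patent fixes only the two extremes (0° closed, 90° open); the hysteresis thresholds below are illustrative assumptions added to avoid command chattering when the bracket sits near the middle of its range:

```python
def gripper_command(angle_deg, close_thresh=30.0, open_thresh=60.0, prev="open"):
    """Map the straight-bracket angle from vertical (degrees) to a gripper
    command. Only the extremes are specified by the embodiment
    (0 deg -> closed, 90 deg -> open); the dead band between the two
    assumed thresholds holds the previous command."""
    if angle_deg <= close_thresh:
        return "closed"
    if angle_deg >= open_thresh:
        return "open"
    return prev  # inside the dead band: keep the previous state
```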

Preferably, a sphere is mounted at each of the three free ends of the cross bracket 3 and one at its junction; a sphere is mounted at each end of the straight bracket 2; the six spheres all have different colors. For example, on the cross bracket 3 the sphere at the junction may be red and the spheres at the three ends blue, green, and yellow, while the spheres at the two ends of the straight bracket 2 may be purple and black; the invention is not limited to these colors.

Further, the master station also includes six classifiers, connected to the depth camera 4 and the master-station controller, for classifying the six spheres in the color image by color.

The master-station controller also determines the centers of the six spheres by the centroid method and, from the center positions and the depth image, determines the three-dimensional positions of the six spheres in the camera coordinate system for the current frame.

As shown in Fig. 2, the coordinates of the red sphere are (xr, yr, zr), of the blue sphere (xb, yb, zb), of the green sphere (xg, yg, zg), and of the yellow sphere (xy, yy, zy), where x, y, and z denote the three axes of the camera coordinate system.

Vector from the red sphere to the green sphere:

(x1, y1, z1) = (xg - xr, yg - yr, zg - zr)        formula (1);

Vector from the red sphere to the yellow sphere:

(x2, y2, z2) = (xy - xr, yy - yr, zy - zr)        formula (2);

The third vector, normal to the bracket plane, is the cross product of the two vectors above:

(x3, y3, z3) = (y1z2 - y2z1, x2z1 - x1z2, x1y2 - x2y1)        formula (3);

Normalizing these three vectors yields unit vectors i, j, and k, from which the roll angle R, pitch angle P, and yaw angle Y of the cross bracket 3 relative to the camera coordinate system are obtained. Combining the position of the red sphere on the cross bracket 3 with R, P, and Y gives the pose of the cross bracket: pose = (xr, yr, zr, R, P, Y)        formula (4).
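
The computation of formulas (1) to (4) can be sketched as follows. The assignment of the unit vectors to the bracket's axes and the ZYX Euler-angle extraction are assumptions, since the embodiment states only that R, P, and Y are recovered from i, j, and k:

```python
import numpy as np

def cross_bracket_pose(red, green, yellow):
    """Estimate the cross-bracket pose, formulas (1)-(4), from the 3-D
    camera-frame positions of the red (junction), green, and yellow
    spheres. Returns (x, y, z, R, P, Y), angles in radians."""
    red, green, yellow = map(np.asarray, (red, green, yellow))
    v1 = green - red                      # formula (1)
    v2 = yellow - red                     # formula (2)
    v3 = np.cross(v1, v2)                 # formula (3)
    i = v1 / np.linalg.norm(v1)
    k = v3 / np.linalg.norm(v3)
    j = np.cross(k, i)                    # complete a right-handed frame
    Rm = np.column_stack((i, j, k))       # rotation matrix of the bracket
    yaw = np.arctan2(Rm[1, 0], Rm[0, 0])  # ZYX Euler extraction (assumed)
    pitch = np.arcsin(-Rm[2, 0])
    roll = np.arctan2(Rm[2, 1], Rm[2, 2])
    return (*red, roll, pitch, yaw)       # formula (4)
```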

Further, the master station also includes a memory storing a color table; each classifier is connected to the memory, retrieves the color table, and locates the colored spheres by table lookup. The color table encodes the classification results of the six classifiers over all 256 × 256 × 256 colors. Specifically, it consists of three color sub-tables, each of size 256 × 256.

As shown in Figs. 4 to 6, this embodiment uses the YCbCr color space and assigns each sphere color a non-zero integer label, e.g. 1 for blue, 2 for green, 3 for yellow, and 4 for red; each of the three color sub-tables is 256 × 256. To classify a color (Y, Cb, Cr), the first table (Fig. 4) is indexed by its Cb and Cr values: if the entry is 0, the color is background; if it is non-zero, the second table (Fig. 5) and third table (Fig. 6) are consulted, and the color belongs to a sphere only if its Y component lies between the corresponding entries of the second and third tables, otherwise it is background. For example, for a color (x2, y2, z), if the entry at position (x2, y2) of the first table is 3, the color may be yellow; if the corresponding entries of the second and third tables are 84 and 229, then the color is yellow when 84 ≤ z ≤ 229, and background otherwise.
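
The three-sub-table lookup can be sketched as below. The tables here are stand-ins populated with a single illustrative yellow entry matching the example in the text; the specific (Cb, Cr) index used is an assumption, and in the patent each sub-table is a full 256 × 256 array:

```python
BACKGROUND = 0

def build_tables():
    """Build empty 256x256 sub-tables and fill one illustrative entry:
    (Cb, Cr) = (50, 60) may be yellow (label 3) when 84 <= Y <= 229."""
    label = [[0] * 256 for _ in range(256)]   # sub-table 1: class label by (Cb, Cr)
    y_min = [[0] * 256 for _ in range(256)]   # sub-table 2: lower Y bound
    y_max = [[0] * 256 for _ in range(256)]   # sub-table 3: upper Y bound
    label[50][60], y_min[50][60], y_max[50][60] = 3, 84, 229
    return label, y_min, y_max

def classify(y, cb, cr, tables):
    """Classify one YCbCr pixel by table lookup, as described above."""
    label, y_min, y_max = tables
    cls = label[cb][cr]
    if cls == BACKGROUND:
        return BACKGROUND                     # first table says background
    # Non-zero label: accept only if Y lies inside the stored bounds.
    return cls if y_min[cb][cr] <= y <= y_max[cb][cr] else BACKGROUND
```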

In addition, the master station also includes a Kalman filter (not shown), connected to the master-station controller, which predicts the position of each sphere in the next frame from its three-dimensional position in the camera coordinate system in the current frame; the master-station controller then performs local detection, based on the predicted positions and the color image captured by the depth camera, to determine the three-dimensional positions of the spheres.

Taking the detection of the spheres on the cross bracket as an example (Fig. 3): first, the master-station controller receives the color image from the depth camera and performs global detection for the four spheres of the cross bracket until their three-dimensional positions in the camera coordinate system for the current frame are determined; after successful detection, the positions of the four spheres in the current frame are sent to the Kalman filter. The Kalman filter predicts the position of each sphere in the next frame and returns it to the master-station controller, which can then perform local detection in a small window around the predicted position. If local detection succeeds, the state of the Kalman filter is updated; otherwise, detection falls back to global detection. The two spheres of the straight bracket are detected by the same procedure, which is not repeated here.
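
The global/local switching loop of Fig. 3 can be sketched as follows. `detect_global`, `detect_local`, and the filter object are hypothetical placeholders standing in for the color-table detector and Kalman tracker described above:

```python
def track_spheres(frames, detect_global, detect_local, kalman):
    """Yield per-frame sphere positions, alternating between a full-image
    search and a local search around the Kalman prediction, as in Fig. 3."""
    tracked = False
    for frame in frames:
        if not tracked:
            positions = detect_global(frame)            # search the whole image
            tracked = positions is not None
        else:
            predicted = kalman.predict()                # next-frame estimate
            positions = detect_local(frame, predicted)  # small window only
            tracked = positions is not None             # failure: global next frame
        if tracked:
            kalman.update(positions)                    # refine filter state
            yield positions
```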

Specifically, the state and measurement equations of the Kalman filter are:

X(k) = A X(k-1) + W(k-1),    Z(k) = H X(k) + V(k)        formula (5);

where X(k) is the state of the system at time k, A is the system matrix, Z(k) is the measurement of the system at time k, H is the observation matrix of the system, W(k) is the process noise, and V(k) is the measurement noise. In this embodiment, both the process noise and the measurement noise are white Gaussian noise.

Suppose the state X(k) contains only the position and velocity along x at time k (the y and z directions are included by simply extending X(k)); then X(k), A, and H are set as follows:

X(k) = [x(k) v(k)]T        formula (6);

A = [[1, T], [0, 1]], where T is the sampling period        formula (7);

Z(k) = [z(k)]T        formula (8);

H = [1 0]        formula (9);

The spatial positions x, y, and z of the six colored spheres on the two brackets are taken as the observations, which determines the observation matrix; the spatial positions x, y, z of the six spheres and the velocities vx, vy, vz along the corresponding directions are taken as the state variables, which determines the state equation of the Kalman filter.
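
A one-axis constant-velocity filter with the structure of formulas (5) to (9) can be sketched as below. The sampling period and the noise covariances Q and R are illustrative assumptions; the patent fixes only the forms of X, Z, A, and H:

```python
import numpy as np

def make_cv_kalman(dt):
    """Constant-velocity Kalman matrices for one axis:
    state X = [position, velocity], measurement Z = [position]."""
    A = np.array([[1.0, dt], [0.0, 1.0]])  # state transition, formula (7)
    H = np.array([[1.0, 0.0]])             # observe position only, formula (9)
    Q = 1e-3 * np.eye(2)                   # process noise covariance (assumed)
    R = np.array([[1e-2]])                 # measurement noise covariance (assumed)
    return A, H, Q, R

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle; returns the new state and covariance."""
    x_pred = A @ x                         # predict next-frame state
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)  # correct with the measurement
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new
```

In the embodiment this per-axis model is stacked over x, y, and z for each of the six spheres.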

Further, the master station also includes a mean filter, connected to the controller, for filtering the pose increment of the cross bracket; the filtered pose increment is sent over the network to the arm controller 7 to control the target pose of the robotic arm 9.
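
The mean filter can be sketched as a sliding-window average over the 6-D pose increment. The window length is an assumption; the patent states only that a mean filter smooths the increments before they are sent to the arm controller:

```python
from collections import deque

class PoseMeanFilter:
    """Sliding-window mean filter for the pose increment
    (x, y, z, R, P, Y). Componentwise averaging is adequate for the
    small angle increments produced frame to frame."""
    def __init__(self, window=5):
        self.buf = deque(maxlen=window)

    def filter(self, increment):
        self.buf.append(tuple(increment))
        n = len(self.buf)
        # Average each of the six components over the buffered frames.
        return tuple(sum(c) / n for c in zip(*self.buf))
```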

In addition, so that the operator can accurately control the motion of the robotic arm 9 and the gripper 8 and complete the task, the slave station also includes a network camera 10 that captures images of the motion and working scene of the robotic arm 9 and the gripper 8. The master station also includes a display connected to the master station controller; the master station controller also receives the images captured by the network camera and sends them to the display. In this embodiment, the master station controller and the display are integrated in a single control computer 5.

The working process of the vision-based teleoperation robot control system of the present invention (Figure 1) is as follows. At the master station, operator 1 holds the straight bracket 2 in one hand and the cross bracket 3 in the other and moves them: the cross bracket 3 controls the end pose of the robotic arm 9, and the straight bracket 2 controls the opening and closing of the gripper 8. The depth camera 4 simultaneously captures color images and the corresponding depth images of the two brackets in motion, so the three-dimensional position of any scene point in the camera coordinate system can be determined; by stereo vision, the spatial positions of the six spheres on the straight bracket 2 and the cross bracket 3 held by operator 1 are obtained. From these six sphere positions, the control computer 5 of the master station solves for the pose of the cross bracket and the angle between the straight bracket and the vertical direction. When the straight bracket is vertical, the gripper 8 closes; when it is horizontal, the gripper 8 opens. The pose of the cross bracket represents the pose of the operator's hand. At system start-up, the control computer first obtains the initial pose of operator 1's hand in this way, then continuously subtracts the initial pose from each subsequently measured pose to obtain the pose increment of the operator's hand (i.e., the pose increment of the cross bracket). The pose increment is passed through a mean filter to obtain the filtered pose increment. The angle of the straight bracket and the filtered pose increment are transmitted over network 6 to the robotic arm controller 7 of the slave station, which sends control signals to the robotic arm and the gripper, controlling the motion of the robotic arm 9 and the opening and closing of the gripper 8. Meanwhile, the network camera 10 of the slave station captures images of the robotic arm's motion and its scene, and transmits them back to the master station for display on the master station's monitor.

The vision-based teleoperation robot control system of the present invention extracts the pose and state of the operator's hand with one depth camera, one straight bracket and one cross bracket, so teleoperation is natural to operate and inexpensive. The Kalman filter and the color table reduce the amount of computation for color detection and satisfy the real-time requirements of the system. Sending images of the operation site back to the master station through the network camera intuitively shows the state of the robotic arm in its working scene, enhancing the sense of presence and supporting relatively complex tasks.

In addition, the present invention also provides a vision-based teleoperation robot control method. Specifically, the method includes:

at the master station, capturing, with the depth camera, the color images and corresponding depth images of the cross bracket and the straight bracket as they are moved by the operator's hands;

determining, with the master station controller, the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction from the color images and corresponding depth images;

at the slave station, receiving, with the robotic arm controller, the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction, controlling the target pose of the robotic arm according to the pose increment of the cross bracket, and controlling the opening and closing of the gripper according to the angle.

Determining the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction with the master station controller specifically includes:

Step 101: the operator holds the cross bracket and the straight bracket at the master station and moves them; the master station controller solves, by visual measurement, the initial pose of the cross bracket and the angle between the straight bracket and the vertical direction;

Step 102: the operator continues to move the two brackets; the master station controller continuously subtracts the initial pose from each subsequently measured hand pose to obtain the pose increment of the cross bracket, and then filters the pose increment with a mean filter.
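The mean filtering in Step 102 can be sketched as a moving average over the most recent pose increments; the window size is an assumed parameter, not given in the patent:

```python
from collections import deque

# Moving-average (mean) filter over the most recent pose increments.
# The window size is an assumed parameter; dim=6 matches the pose
# increment (x, y, z, R, P, Y).

class MeanFilter:
    def __init__(self, window=5, dim=6):
        self.buf = deque(maxlen=window)  # oldest samples drop automatically
        self.dim = dim

    def __call__(self, pose_inc):
        self.buf.append(list(pose_inc))
        n = len(self.buf)
        # component-wise mean over the samples currently in the window
        return [sum(p[i] for p in self.buf) / n for i in range(self.dim)]
```

Averaging over a short window smooths hand tremor and per-frame measurement jitter at the cost of a small lag, which is the usual trade-off for this kind of teleoperation filter.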

At the slave station, the robotic arm controller controls the target pose of the robotic arm according to the pose increment of the cross bracket, and the opening and closing of the gripper according to the angle. The target pose of the robotic arm is its initial pose plus the filtered pose increment. Specifically, when the angle between the straight bracket and the vertical direction is 0 (the bracket is vertical), the gripper closes; when the angle is 90° (the bracket is horizontal), the gripper opens.
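The angle-to-gripper mapping can be sketched as below. The patent only specifies the two extreme poses (0° closed, 90° open), so the thresholds and the hysteresis band between them are assumptions added here to avoid chattering near a single switching point:

```python
# Thresholded mapping from the straight bracket's angle to a gripper command.
# close_below / open_above are assumed thresholds; the band between them is
# a hysteresis zone in which the previous command is kept.

def gripper_command(angle_deg, close_below=30.0, open_above=60.0, last="open"):
    if angle_deg < close_below:      # near vertical: close the gripper
        return "closed"
    if angle_deg > open_above:       # near horizontal: open the gripper
        return "open"
    return last                      # inside the band: keep previous state
```

With hysteresis, small measurement noise around the boundary cannot toggle the gripper open and closed on consecutive frames.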

Optionally, a sphere is mounted at each of the three free ends of the cross bracket and one at its junction; a sphere is mounted at each end of the straight bracket, and the six spheres all have different colors. For example, the four spheres on the cross bracket 3 may be colored red at the junction and blue, green and yellow at the three ends, and the two spheres on the straight bracket 2 may be purple and black; but the invention is not limited to these colors.

The vision-based teleoperation robot control method of the present invention also includes: classifying the six spheres in the color image according to their colors with six classifiers; determining the centers of the six spheres with the master station controller by the centroid method; and determining the three-dimensional positions of the six spheres in the camera coordinate system from the center positions and the depth image.

As shown in Figure 2, the coordinates of the red sphere are (xr, yr, zr), those of the blue sphere (xb, yb, zb), those of the green sphere (xg, yg, zg), and those of the yellow sphere (xy, yy, zy).

Vector from the red sphere to the green sphere:

(x1, y1, z1) = (xg − xr, yg − yr, zg − zr) ---------- formula (1);

Vector from the red sphere to the yellow sphere:

(x2, y2, z2) = (xy − xr, yy − yr, zy − zr) ---------- formula (2);

A third vector, perpendicular to both, is obtained as the cross product of the two vectors above:

(x3, y3, z3) = (y1z2 − y2z1, x2z1 − x1z2, x1y2 − x2y1) ---------- formula (3);

Normalizing these three vectors yields the unit vectors i, j and k, from which the roll angle R, pitch angle P and yaw angle Y of the cross bracket 3 relative to the camera coordinate system are obtained. Combining the position of the red sphere on the cross bracket 3 with these angles gives the pose of the cross bracket = (xr, yr, zr, R, P, Y) ---------- formula (4).
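The computation of formulas (1)-(4) can be sketched as follows. The Euler-angle convention is not stated in the patent, so a ZYX (yaw-pitch-roll) extraction is assumed here; the second axis is also re-orthogonalized as k×i rather than taken directly from the red-yellow vector, a common safeguard against measurement noise:

```python
import math

# Sketch of formulas (1)-(4): build an orthogonal frame from the red, green
# and yellow sphere positions and extract roll/pitch/yaw. ZYX Euler order
# and the j = k x i re-orthogonalization are assumptions for illustration.

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def bracket_pose(red, green, yellow):
    i = unit(sub(green, red))          # formula (1), normalized
    v2 = sub(yellow, red)              # formula (2)
    k = unit(cross(i, v2))             # formula (3), normalized
    j = cross(k, i)                    # complete a right-handed frame
    # columns of R are the bracket axes expressed in camera coordinates
    R = [[i[0], j[0], k[0]],
         [i[1], j[1], k[1]],
         [i[2], j[2], k[2]]]
    yaw = math.atan2(R[1][0], R[0][0])
    pitch = math.atan2(-R[2][0], math.hypot(R[2][1], R[2][2]))
    roll = math.atan2(R[2][1], R[2][2])
    return (*red, roll, pitch, yaw)    # formula (4): (xr, yr, zr, R, P, Y)
```

For an unrotated bracket (green along x, yellow along y from the red sphere) the pose is all zeros, and rotating the bracket 90° about the camera's z axis shows up purely in the yaw angle.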

Further, the vision-based teleoperation robot control method of the present invention also includes loading a stored color table, from which each classifier locates the colored spheres by table look-up. The color table represents the classification results of the six classifiers over all 256×256×256 colors. Specifically, it consists of three color sub-tables, each of size 256×256.

As shown in Figures 4 to 6, this embodiment uses the YCbCr color space and numbers the sphere colors with non-zero integers, e.g. 1 for blue, 2 for green, 3 for yellow and 4 for red; the three color sub-tables each have size 256×256. To classify a color (Y, Cb, Cr), first look up the first table (Figure 4) with the Cb and Cr values: if the entry is 0, the color is background; if it is non-zero, look up the second table (Figure 5) and the third table (Figure 6) at the same position. If the Y component lies between the corresponding entries of the second and third tables, the pixel belongs to a sphere of that color; otherwise it is background. For example, for a color (x2, y2, z): if the entry at position (x2, y2) of the first table is 3, the pixel may be yellow; if the entries at the same position of the second and third tables are 84 and 229, then the color is yellow when 84 ≤ z ≤ 229, and background otherwise.
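The three-subtable look-up can be sketched as below; the table contents in the test are toy values for illustration (the real tables encode the trained color-classifier boundaries):

```python
# Sketch of the three-subtable YCbCr look-up classification.
# label_tab maps (Cb, Cr) to a sphere number (0 = background);
# ymin_tab / ymax_tab give the allowed Y range at the same (Cb, Cr).

def classify(y, cb, cr, label_tab, ymin_tab, ymax_tab):
    """Return the sphere number at color (Y, Cb, Cr), or 0 for background."""
    label = label_tab[cb][cr]
    if label == 0:
        return 0                                   # background by chroma alone
    if ymin_tab[cb][cr] <= y <= ymax_tab[cb][cr]:
        return label                               # Y falls in the allowed band
    return 0                                       # Y outside band: background
```

Because every pixel is classified with at most three array reads, this look-up is far cheaper than evaluating six classifiers per pixel, which is what makes real-time detection feasible.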

Further, the vision-based teleoperation robot control method of the present invention also includes: predicting, with the Kalman filter, the position of each sphere in the next frame from its three-dimensional position in the camera coordinate system in the current frame; and performing, with the master station controller, local detection in the color image captured by the depth camera around each predicted position to determine the sphere's three-dimensional position.

Take the detection of the spheres on the cross bracket as an example (Figure 3). First, global detection of the four spheres on the cross bracket is performed over the whole color image obtained from the depth camera until their positions in the current frame are determined; once detection succeeds, the four positions are sent to the Kalman filter. The Kalman filter predicts each sphere's position in the next frame, so that the master station controller can restrict detection to a small window around each predicted position. If local detection succeeds, the state of the Kalman filter is updated; if it fails, the system falls back to global detection. This achieves fast localization of the spheres and increases the detection speed. The two spheres on the straight bracket are detected with the same procedure.

The state equation of the Kalman filter is:

X(k) = A·X(k-1) + W(k), Z(k) = H·X(k) + V(k) ---------- formula (5);

where X(k) is the system state at time k, A is the state-transition matrix, Z(k) is the measurement at time k, H is the observation matrix, W(k) is the process noise, and V(k) is the measurement noise. In this embodiment, both the process noise and the measurement noise are modeled as Gaussian white noise.

Suppose the state X(k) contains only the position and velocity in the x direction at time k (the y and z directions are included simply by extending X(k) in the same way). Then X(k), A and H are set as follows:

X(k) = [x(k) v(k)]^T ---------- formula (6);

A = [[1, T], [0, 1]], where T is the interval between frames ---------- formula (7);

Z(k) = [z(k)]^T ---------- formula (8);

H = [1 0] ---------- formula (9);

The spatial coordinates x, y, z of the six colored spheres on the two brackets are taken as the observations, which determines the observation matrix; the spatial positions x, y, z of the six spheres together with the corresponding velocities vx, vy, vz form the state variables, which determines the state equation of the Kalman filter.

In addition, so that the operator can accurately control the motion of the robotic arm 9 and the gripper 8 and complete the task, the vision-based teleoperation robot control method of the present invention captures, with the network camera at the slave station, images of the motion and working scene of the robotic arm and the gripper, transmits them over the network back to the master station controller, and sends them through the master station controller to the display for the operator to observe. Preferably, the master station controller and the display are integrated in one control computer.

Compared with the prior art, the vision-based teleoperation robot control method of the present invention has the same beneficial effects as the vision-based teleoperation robot control system described above, which are not repeated here.

The technical solutions of the present invention have thus been described with reference to the preferred embodiments shown in the drawings; however, those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principles of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will all fall within the protection scope of the present invention.

Claims (10)

1. A vision-based teleoperation robot control system, characterized in that the control system comprises a master station and a slave station, the master station being connected with the slave station through a network; wherein the master station comprises: a straight bracket and a cross bracket, which are hand-held devices moved respectively by the operator's hand motions; a depth camera for capturing color images and corresponding depth images of the cross bracket and the straight bracket in motion; and a master station controller, connected with the depth camera, for determining the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction from the color images and corresponding depth images; and the slave station comprises a robotic arm, a gripper connected to the end of the robotic arm, and a robotic arm controller; the robotic arm controller is connected with the master station controller through the network for receiving the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction; the robotic arm controller is connected with the robotic arm and the gripper respectively, for controlling the target pose of the robotic arm according to the pose increment of the cross bracket and controlling the opening and closing of the gripper according to the angle.
2. The vision-based teleoperation robot control system according to claim 1, characterized in that a sphere is mounted at each of the three free ends of the cross bracket and one at the junction of the cross bracket; a sphere is mounted at each end of the straight bracket; and the six spheres all have different colors.

3. The vision-based teleoperation robot control system according to claim 2, characterized in that the master station further comprises: six classifiers, respectively connected with the depth camera and the master station controller, for classifying the six spheres in the color image according to their colors; the master station controller being further configured to determine the centers of the six spheres by the centroid method and to determine the three-dimensional positions of the six spheres in the camera coordinate system in the current frame from the center positions and the depth image.

4. The vision-based teleoperation robot control system according to claim 3, characterized in that the master station further comprises: a Kalman filter, connected with the master station controller, for predicting the position of each sphere in the next frame from its three-dimensional position in the camera coordinate system in the current frame; the master station controller being further configured to perform local detection, according to the predicted positions and the color image captured by the depth camera, to determine the three-dimensional positions of the spheres.

5. The vision-based teleoperation robot control system according to claim 1, characterized in that the master station further comprises a mean filter, connected with the controller, for filtering the pose increment of the cross bracket and sending the filtered pose increment over the network to the robotic arm controller.

6. The vision-based teleoperation robot control system according to any one of claims 1 to 5, characterized in that the slave station further comprises a network camera for capturing images of the motion and working scene of the robotic arm and the gripper; the master station further comprises a display connected with the master station controller; the master station controller being further configured to receive the images captured by the network camera and send them to the display.

7. A vision-based teleoperation robot control method, characterized in that the control method comprises: at the master station, capturing, with a depth camera, color images and corresponding depth images of the cross bracket and the straight bracket as they are moved by the operator's hands; determining, with the master station controller, the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction from the color images and corresponding depth images; and, at the slave station, receiving, with the robotic arm controller, the pose increment of the cross bracket and the angle between the straight bracket and the vertical direction, controlling the target pose of the robotic arm according to the pose increment of the cross bracket, and controlling the opening and closing of the gripper according to the angle.

8. The vision-based teleoperation robot control method according to claim 7, characterized in that a sphere is mounted at each of the three free ends of the cross bracket and one at the junction of the cross bracket; a sphere is mounted at each end of the straight bracket; and the six spheres all have different colors.

9. The vision-based teleoperation robot control method according to claim 8, characterized in that the control method further comprises: classifying the six spheres in the color image according to their colors with six classifiers; determining the centers of the six spheres with the master station controller by the centroid method; and determining the three-dimensional positions of the six spheres in the camera coordinate system in the current frame from the center positions and the depth image.

10. The vision-based teleoperation robot control method according to claim 9, characterized in that the control method further comprises: predicting, with the Kalman filter, the position of each sphere in the next frame from its three-dimensional position in the camera coordinate system in the current frame; and performing, with the master station controller, local detection, according to the predicted positions and the color image captured by the depth camera, to determine the three-dimensional positions of the spheres.
CN201710428209.1A 2017-06-08 2017-06-08 Teleoperation robot control system and method based on vision Active CN107363831B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710428209.1A CN107363831B (en) 2017-06-08 2017-06-08 Teleoperation robot control system and method based on vision


Publications (2)

Publication Number Publication Date
CN107363831A true CN107363831A (en) 2017-11-21
CN107363831B CN107363831B (en) 2020-01-10

Family

ID=60304837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710428209.1A Active CN107363831B (en) 2017-06-08 2017-06-08 Teleoperation robot control system and method based on vision

Country Status (1)

Country Link
CN (1) CN107363831B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110421558A (en) * 2019-06-21 2019-11-08 中国科学技术大学 Universal remote control system and method towards power distribution network Work robot
CN111633653A (en) * 2020-06-04 2020-09-08 上海机器人产业技术研究院有限公司 Mechanical arm control system and method based on visual positioning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102961811A (en) * 2012-11-07 2013-03-13 上海交通大学 Trachea intubating system and method based on remotely operated mechanical arm
CN103302668A (en) * 2013-05-22 2013-09-18 东南大学 Kinect-based space teleoperation robot control system and method thereof
CN104570731A (en) * 2014-12-04 2015-04-29 重庆邮电大学 Uncalibrated human-computer interaction control system and method based on Kinect
CN104589356A (en) * 2014-11-27 2015-05-06 北京工业大学 Dexterous hand teleoperation control method based on Kinect human hand motion capturing
CN106003076A (en) * 2016-06-22 2016-10-12 潘小胜 Powder spraying robot based on stereoscopic vision
US20160354927A1 (en) * 2014-02-04 2016-12-08 Microsoft Technology Licensing, Llc Controlling a robot in the presence of a moving object


Also Published As

Publication number Publication date
CN107363831B (en) 2020-01-10

Similar Documents

Publication Publication Date Title
WO2023056670A1 (en) Mechanical arm autonomous mobile grabbing method under complex illumination conditions based on visual-tactile fusion
US20210205986A1 (en) Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose
Qin et al. Anyteleop: A general vision-based dexterous robot arm-hand teleoperation system
US12131529B2 (en) Virtual teach and repeat mobile manipulation system
Mazhar et al. Towards real-time physical human-robot interaction using skeleton information and hand gestures
US9089971B2 (en) Information processing apparatus, control method thereof and storage medium
CN108908334A (en) A kind of intelligent grabbing system and method based on deep learning
CN108422435A (en) Remote monitoring and control system based on augmented reality
CN107030692B (en) A method and system for manipulator teleoperation based on perception enhancement
WO2016193781A1 (en) Motion control system for a direct drive robot through visual servoing
CN110744544B (en) Service robot vision grabbing method and service robot
Lippiello et al. 3D monocular robotic ball catching with an iterative trajectory estimation refinement
Yang et al. Real-time human-robot interaction in complex environment using kinect v2 image recognition
Han et al. Grasping control method of manipulator based on binocular vision combining target detection and trajectory planning
CN107363831B (en) Teleoperation robot control system and method based on vision
CN114299039B (en) Robot and collision detection device and method thereof
Kragic et al. Model based techniques for robotic servoing and grasping
Yang et al. Visual servoing control of baxter robot arms with obstacle avoidance using kinematic redundancy
Schnaubelt et al. Autonomous assistance for versatile grasping with rescue robots
Zhou et al. Visual servo control system of 2-DOF parallel robot
Lu et al. Human-robot collision detection based on the improved camshift algorithm and bounding box
Infantino et al. Visual control of a robotic hand
Bai et al. Kinect-based hand tracking for first-person-perspective robotic arm teleoperation
Stefańczyk et al. Localization of essential door features for mobile manipulation
Xu et al. Design of a human-robot interaction system for robot teleoperation based on digital twinning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant