CN107363831B - Teleoperation robot control system and method based on vision - Google Patents

Teleoperation robot control system and method based on vision

Info

Publication number
CN107363831B
CN107363831B
Authority
CN
China
Prior art keywords
sphere
controller
mechanical arm
support
vision
Prior art date
Legal status
Active
Application number
CN201710428209.1A
Other languages
Chinese (zh)
Other versions
CN107363831A (en)
Inventor
王硕
席宝
鲁涛
蔡莹皓
刘乃军
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201710428209.1A priority Critical patent/CN107363831B/en
Publication of CN107363831A publication Critical patent/CN107363831A/en
Application granted granted Critical
Publication of CN107363831B publication Critical patent/CN107363831B/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/06Control stands, e.g. consoles, switchboards
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1689Teleoperation

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a vision-based teleoperation robot control system and method. The control system comprises a master station and a slave station. The master station comprises a linear support, a cross support, a depth camera and a master station controller; the master station controller is connected with the depth camera and determines the pose increment of the cross support and the included angle between the linear support and the vertical direction from a color image and the corresponding depth image. The slave station comprises a mechanical arm, a mechanical gripper connected to the end of the mechanical arm, and a mechanical arm controller. The mechanical arm controller is connected with the master station controller through a network and, in turn, with the mechanical arm and the mechanical gripper; it controls the target pose of the mechanical arm according to the pose increment of the cross support and opens or closes the gripper according to the included angle. The operator can thus perform operation tasks naturally, the equipment structure is simplified, the recognition of complex operator actions is improved, and the control precision is increased.

Description

Teleoperation robot control system and method based on vision
Technical Field
The invention relates to the technical field of teleoperation robot control, in particular to a vision-based teleoperation robot control system and method.
Background
Despite great advances in robotics in recent years, robots still cannot independently perform complex and dangerous operation tasks such as handling nuclear waste, disposing of explosives, or underwater and space exploration.
Robot teleoperation technology isolates the human operator from the dangerous environment while exploiting human skill and experience to handle problems, and therefore has strong practical value and broad application prospects.
Existing teleoperation methods fall into two classes: contact and non-contact. Contact methods typically acquire the operator's motion with exoskeleton devices, data gloves or inertial measurement units and use that motion information to control the robot. Because sensor devices must be worn, the operator's motions are constrained and unnatural. Non-contact methods typically measure the operator's actions visually, so the operator can act naturally, which helps in completing more complex teleoperation tasks but places higher demands on the accuracy of the visual measurement.
Among vision-based teleoperation systems, approaches that use no specific marker are the most natural for the operator, but they place high demands on the recognition algorithm and can currently complete only simple tasks such as basic grasping. In contrast, visual methods that use a specific marker need only simple algorithms and, given a simple, effective marker and a fast, efficient detection algorithm, can achieve high detection accuracy.
Disclosure of Invention
In order to solve the problems of the prior art, namely that the external equipment required by contact teleoperation methods is structurally complex, that the operator cannot act naturally when performing an operation task, and that complex operator actions are poorly recognized, the invention provides a vision-based teleoperation robot control system and method.
In order to achieve the purpose, the invention provides the following scheme:
a vision-based teleoperation robot control system comprises a master station and a slave station, wherein the master station is connected with the slave station through a network; wherein the content of the first and second substances,
the master station includes:
the linear support and the cross support are handheld devices, respectively driven to move by the movement of the operator's two hands;
the depth camera is used for acquiring a color image and a corresponding depth image when the cross support and the linear support move;
the master station controller is connected with the depth camera and is used for determining the pose increment of the cross support and the included angle between the linear support and the vertical direction according to the color image and the corresponding depth image;
the slave station comprises a mechanical arm, a mechanical gripper connected to the end of the mechanical arm, and a mechanical arm controller; the mechanical arm controller is connected with the master station controller through a network and receives the pose increment of the cross support and the included angle between the linear support and the vertical direction; the mechanical arm controller is respectively connected with the mechanical arm and the mechanical gripper, and is used for controlling the target pose of the mechanical arm according to the pose increment of the cross support and controlling the mechanical gripper to open or close according to the included angle.
Optionally, a sphere is provided at each of the three end portions of the cross support and at the joint of the cross support; a sphere is provided at each of the two ends of the linear support; and the six spheres all differ in color.
Optionally, the master station further includes:
the six classifiers are respectively connected with the depth camera and the master station controller and are used for classifying pixels according to the colors of the six spheres in the color image;
and the master station controller is further used for determining the center positions of the six spheres by the gravity center method and determining, from the center positions and the depth image, the three-dimensional spatial positions of the six spheres in the current frame in the camera coordinate system.
Optionally, the master station further includes:
the Kalman filter is connected with the master station controller and is used for predicting the position of each sphere in the next frame from its three-dimensional spatial position in the current frame in the camera coordinate system;
and the master station controller is further used for performing local detection, based on the predicted position of each sphere in the next frame and the color image collected by the depth camera, to determine the three-dimensional spatial position of the sphere.
Optionally, the master station further includes a mean filter, connected to the master station controller, for filtering the pose increment of the cross support and sending the filtered pose increment to the mechanical arm controller through the network.
Optionally, the slave station further comprises a network camera for acquiring images of the motion of the mechanical arm and the mechanical gripper and of the working scene;
the master station further comprises a display connected with the master station controller; the master station controller is further used for receiving the images of the motion of the mechanical arm and the mechanical gripper and of the working scene collected by the network camera and sending them to the display.
The embodiments of the invention provide the following technical effects:
the teleoperation robot control system based on vision acquires a color image and a corresponding depth image which drive a cross support and a linear support to move when the hands of an operator move by arranging a depth camera, and determines the pose increment of the cross support and the included angle between the linear support and the vertical direction by a main station controller; the manipulator controller can control the target pose of the manipulator according to the pose increment of the cross support and control the mechanical gripper to open and close according to the included angle, so that an operator can naturally execute an operation task, the equipment structure is simplified, the recognition degree of complex actions of the operator is improved, and the control precision is improved.
To the same end, the invention further provides the following scheme:
a vision-based teleoperated robot control method, the control method comprising:
at the master station, acquiring, through a depth camera, a color image and the corresponding depth image while the operator's hands drive the cross support and the linear support to move;
determining, through a master station controller, the pose increment of the cross support and the included angle between the linear support and the vertical direction according to the color image and the corresponding depth image;
and, at the slave station, receiving the pose increment of the cross support and the included angle between the linear support and the vertical direction through a mechanical arm controller, controlling the target pose of the mechanical arm according to the pose increment of the cross support, and controlling the mechanical gripper to open and close according to the included angle.
Optionally, a sphere is provided at each of the three end portions of the cross support and at the joint of the cross support; a sphere is provided at each of the two ends of the linear support; and the six spheres all differ in color.
Optionally, the control method further includes:
performing classification and identification, through six classifiers, according to the colors of the six spheres in the color image; and determining, by the master station controller, the center positions of the six spheres by the gravity center method and determining, from the center positions and the depth image, the three-dimensional spatial positions of the six spheres in the current frame in the camera coordinate system.
Optionally, the control method further includes:
predicting, by a Kalman filter, the position of each sphere in the next frame from its three-dimensional spatial position in the current frame in the camera coordinate system; and performing local detection, by the master station controller, based on the predicted position of each sphere in the next frame and the color image collected by the depth camera, to determine the three-dimensional spatial position of the sphere.
The embodiments of the invention provide the following technical effects:
the teleoperation robot control method based on vision acquires a color image and a corresponding depth image which drive a cross support and a linear support to move when the hands of an operator move through a depth camera, and determines the pose increment of the cross support and the included angle between the linear support and the vertical direction through a main station controller; and the manipulator controller can control the target pose of the manipulator according to the pose increment of the cross support and control the opening and closing of the mechanical gripper according to the included angle, so that an operator can naturally execute an operation task, the equipment structure is simplified, the recognition degree of complex actions of the operator is improved, and the control precision is improved.
Drawings
FIG. 1 is a schematic diagram of the structure of a vision-based teleoperated robotic control system of the present invention;
FIG. 2 is a schematic diagram of the pose calculation method according to an embodiment of the invention;
FIG. 3 is a flow chart of the stereoscopic vision software according to the embodiment of the present invention;
FIG. 4 is the first color sub-table of the color table according to an embodiment of the present invention;
FIG. 5 is the second color sub-table of the color table according to an embodiment of the present invention;
FIG. 6 is the third color sub-table of the color table according to an embodiment of the present invention.
Description of the symbols:
operator-1, linear support-2, cross support-3, depth camera-4, control computer-5, network-6, mechanical arm controller-7, mechanical gripper-8, mechanical arm-9, network camera-10.
Detailed Description
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention.
The invention provides a vision-based teleoperation robot control system and method. A depth camera collects a color image and the corresponding depth image while the operator's hands drive the cross support and the linear support to move, and a master station controller determines the pose increment of the cross support and the included angle between the linear support and the vertical direction; the mechanical arm controller then controls the target pose of the mechanical arm according to the pose increment of the cross support and controls the opening and closing of the mechanical gripper according to the included angle. The operator can thus perform operation tasks naturally, the equipment structure is simplified, the recognition of complex operator actions is improved, and the control precision is increased.
As shown in fig. 1, the vision-based teleoperated robot control system of the present invention includes a master station and a slave station, and the master station is connected to the slave station through a network 6.
The master station comprises a linear support 2, a cross support 3, a depth camera 4 and a master station controller. The linear support 2 and the cross support 3 are handheld devices, driven to move respectively by the hands of the operator 1; the depth camera 4 acquires a color image and the corresponding depth image while the cross support 3 and the linear support 2 move; and the master station controller, connected with the depth camera 4, determines the pose increment of the cross support 3 and the included angle between the linear support 2 and the vertical direction from the color image and the corresponding depth image.
The slave station comprises a mechanical arm 9, a mechanical gripper 8 connected to the end of the mechanical arm 9, and a mechanical arm controller 7. The mechanical arm controller 7 is connected with the master station controller through the network 6 and receives the pose increment of the cross support 3 and the included angle between the linear support 2 and the vertical direction; it is respectively connected with the mechanical arm 9 and the mechanical gripper 8, controls the target pose of the mechanical arm 9 according to the pose increment of the cross support 3, and controls the opening and closing of the mechanical gripper 8 according to the included angle. The target pose of the mechanical arm 9 is its initial pose plus the filtered pose increment. Specifically, when the included angle between the linear support 2 and the vertical direction is 0 (that is, the linear support 2 is in the vertical posture), the mechanical gripper 8 closes; when the included angle is 90 degrees (that is, the linear support 2 is horizontal), the mechanical gripper 8 opens.
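By way of illustration only, this angle-to-gripper mapping can be realized with a simple threshold; the following minimal sketch assumes a 45-degree switching point and command names, since the embodiment only fixes the behaviour at 0 and 90 degrees:

```python
# Hypothetical sketch of the angle-to-gripper mapping. The embodiment only
# fixes the behaviour at the two extremes (0 degrees -> close, 90 degrees
# -> open); the 45-degree switching threshold and the command names are
# illustrative assumptions.

def gripper_command(angle_deg: float) -> str:
    """Map the angle between the linear support and the vertical
    direction to a gripper command."""
    return "close" if angle_deg < 45.0 else "open"
```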
Preferably, a sphere is provided at each of the three ends of the cross support 3 and at the joint of the cross support 3; a sphere is provided at each of the two ends of the linear support 2; and the six spheres all differ in color. For example, among the four spheres on the cross support 3, the sphere at the joint may be red and the spheres at the three ends blue, green and yellow, while the spheres at the two ends of the linear support 2 may be purple and black; the colors are, however, not limited thereto.
Further, the master station also comprises six classifiers, respectively connected with the depth camera 4 and the master station controller, for classifying pixels according to the colors of the six spheres in the color image;
and the master station controller is further used for determining the center positions of the six spheres by the gravity center method and determining, from the center positions and the depth image, the three-dimensional spatial positions of the six spheres in the current frame in the camera coordinate system.
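By way of illustration, a minimal sketch of the gravity center method and the depth back-projection follows; the pinhole intrinsics fx, fy, cx, cy and the millimetre depth units are assumptions, not details fixed by the embodiment:

```python
import numpy as np

# Illustrative sketch (not the patented implementation) of the gravity
# center method plus depth back-projection: the sphere's image centroid is
# the mean of the pixels classified as that sphere's color, and the 3-D
# position follows from the aligned depth image and assumed pinhole
# intrinsics fx, fy, cx, cy.

def sphere_position(mask: np.ndarray, depth: np.ndarray,
                    fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    vs, us = np.nonzero(mask)               # rows/cols of pixels labeled as this sphere
    if us.size == 0:
        raise ValueError("sphere not found in the classification mask")
    u, v = us.mean(), vs.mean()             # centroid = gravity center of the blob
    z = np.median(depth[vs, us]) / 1000.0   # metres, assuming the depth map is in mm
    x = (u - cx) * z / fx                   # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])              # position in the camera coordinate system
```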
As shown in FIG. 2, the coordinates of the red sphere are (x_r, y_r, z_r), those of the blue sphere (x_b, y_b, z_b), those of the green sphere (x_g, y_g, z_g) and those of the yellow sphere (x_y, y_y, z_y), where x, y and z denote the three axes of the camera coordinate system.
The vector from red to green is:
(x_1, y_1, z_1) = (x_g − x_r, y_g − y_r, z_g − z_r) — formula (1);
the vector from red to yellow is:
(x_2, y_2, z_2) = (x_y − x_r, y_y − y_r, z_y − z_r) — formula (2);
and a third vector, perpendicular to both, is their cross product:
(x_3, y_3, z_3) = (y_1 z_2 − y_2 z_1, x_2 z_1 − x_1 z_2, x_1 y_2 − x_2 y_1) — formula (3).
The three vectors are normalized to obtain unit vectors i, j and k, from which the roll angle R, the pitch angle P and the yaw angle Y of the cross support 3 relative to the camera coordinate system are obtained. Combining the position of the red sphere on the cross support 3 with R, P and Y gives the pose of the cross support:
(x_r, y_r, z_r, R, P, Y) — formula (4).
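A minimal sketch of formulas (1) to (4) is given below; the ZYX (yaw-pitch-roll) extraction is an assumption, since the embodiment does not state its Euler-angle convention:

```python
import numpy as np

# Minimal sketch of formulas (1)-(4). p_r, p_g, p_y are the 3-D positions
# of the red, green and yellow spheres in the camera coordinate system;
# the ZYX Euler-angle extraction below is an assumed convention.

def cross_support_pose(p_r, p_g, p_y):
    p_r, p_g, p_y = map(np.asarray, (p_r, p_g, p_y))
    v1 = p_g - p_r                               # red -> green, formula (1)
    v2 = p_y - p_r                               # red -> yellow, formula (2)
    v3 = np.cross(v1, v2)                        # normal vector, formula (3)
    i, j, k = (v / np.linalg.norm(v) for v in (v1, v2, v3))
    Rm = np.column_stack((i, j, k))              # rotation of the support w.r.t. camera
    R = np.arctan2(Rm[2, 1], Rm[2, 2])           # roll
    P = -np.arcsin(Rm[2, 0])                     # pitch
    Y = np.arctan2(Rm[1, 0], Rm[0, 0])           # yaw
    return (*p_r, R, P, Y)                       # pose (x_r, y_r, z_r, R, P, Y), formula (4)
```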
Furthermore, the master station also comprises a memory storing a color table; each classifier is connected with the memory, calls the color table and determines the positions of the colored spheres by table lookup. The color table represents the classification results of the six classifiers over all 256 × 256 × 256 colors; specifically, it consists of three color sub-tables, each of size 256 × 256.
As shown in FIGS. 4 to 6, the embodiment of the present invention uses the YCbCr color space, and the color of each sphere is numbered with a non-zero integer, e.g. 1 for blue, 2 for green, 3 for yellow and 4 for red. To classify a color (Y, Cb, Cr), the first sub-table (FIG. 4) is looked up at (Cb, Cr); if the stored value is 0, the color is background. Otherwise the second (FIG. 5) and third (FIG. 6) sub-tables are consulted at the same position: if the Y component of the color lies between the values stored in the second and third tables, the pixel belongs to the corresponding sphere's color; otherwise it is background. For example, for a color with (Cb, Cr, Y) = (x_2, y_2, z), if the first table holds 3 at (x_2, y_2), the color may be yellow; if the second and third tables hold 84 and 229 at that position, the color is yellow when 84 ≤ z ≤ 229 and background otherwise.
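A minimal sketch of this lookup is shown below, assuming the three sub-tables have already been generated offline from the six trained classifiers:

```python
import numpy as np

# Minimal sketch of the three-sub-table lookup, under the assumption that
# table1[cb, cr] holds the sphere label (0 = background) and that
# table2/table3 hold the lower and upper bounds of the Y component for
# that (Cb, Cr) cell.

def classify_pixel(y: int, cb: int, cr: int, table1: np.ndarray,
                   table2: np.ndarray, table3: np.ndarray) -> int:
    label = int(table1[cb, cr])        # first table: 0 means background
    if label == 0:
        return 0
    lo, hi = table2[cb, cr], table3[cb, cr]
    return label if lo <= y <= hi else 0   # e.g. label 3 (yellow) iff 84 <= Y <= 229
```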
In addition, the master station further comprises a Kalman filter (not shown in the figures), connected to the master station controller, for predicting the position of each sphere in the next frame from its three-dimensional spatial position in the current frame in the camera coordinate system; the master station controller is further used for performing local detection, based on the predicted position of each sphere in the next frame and the color image collected by the depth camera, to determine the three-dimensional spatial position of the sphere.
Take the detection of the spheres on the cross support as an example. As shown in FIG. 3, the master station controller first receives a color image from the depth camera and performs global detection over the whole image for the four spheres on the cross support until the three-dimensional spatial position of each sphere in the current frame in the camera coordinate system is determined; once detection succeeds, the positions of the four spheres in the current frame are sent to the Kalman filter. The Kalman filter predicts the position of each sphere in the next frame and returns it to the master station controller, which then performs local detection only in a small window around the predicted position. If the local detection succeeds, the state of the Kalman filter is updated; if it fails, the system switches back to global detection. The two spheres on the linear support are detected by the same process, which is not repeated here.
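The control flow of FIG. 3 can be summarized in the following sketch; detect_global, detect_local and the tracker object are assumed stand-ins for the detection and Kalman-filtering steps described above, each returning None on failure:

```python
# Control-flow sketch of the tracking scheme of FIG. 3: global detection
# initialises the track, the Kalman prediction restricts the next search
# to a small window, and a failed local detection falls back to a global
# search on the following frame.

def track_spheres(frames, tracker, detect_global, detect_local):
    tracked = False
    for color_img, depth_img in frames:
        if not tracked:
            positions = detect_global(color_img, depth_img)  # whole-image search
            if positions is not None:
                tracker.reset(positions)                     # initialise the filter
                tracked = True
            continue                                         # move on to the next frame
        predicted = tracker.predict()                        # per-sphere next-frame guess
        positions = detect_local(color_img, depth_img, predicted)
        if positions is not None:
            tracker.update(positions)                        # measurement update
        else:
            tracked = False                                  # fall back to global search
```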
Specifically, the state equation of the Kalman filter is:
X(k) = A X(k−1) + W(k), Z(k) = H X(k) + V(k) — formula (5);
where X(k) is the state of the system at time k, A is the system matrix, Z(k) is the measured value of the system at time k, H is the observation matrix of the system, W(k) is the noise of the control process and V(k) is the noise of the measurement process. In this embodiment, both noises are Gaussian white noise.
Assuming for simplicity that the state X(k) contains only the position and velocity in the x direction at time k (the y and z directions are handled by extending X(k) in the same way), X(k), A and H are set as:
X(k) = [x(k) v(k)]^T — formula (6);
A = [1 T; 0 1] — formula (7), where T is the sampling period;
Z(k) = [z(k)]^T — formula (8);
H = [1 0] — formula (9).
The spatial positions x, y and z of the six colored spheres on the two supports are taken as observed values, which determines the observation matrix; the spatial positions x, y and z of the six spheres and the velocities v_x, v_y and v_z in the corresponding directions are taken as state variables, which determines the state equation of the Kalman filter.
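A numerical sketch of formulas (5) to (9), extended to all three axes for one sphere, is given below; the sampling period and the noise covariances are assumed tuning values, not figures from the embodiment:

```python
import numpy as np

# Sketch of a constant-velocity Kalman filter for one sphere: state
# X = [x, y, z, vx, vy, vz]^T, measurement Z = [x, y, z]^T. T, Q and
# R_cov are illustrative assumptions.

T = 1.0 / 30.0                                   # e.g. a 30 fps depth camera
A = np.block([[np.eye(3), T * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])    # per-axis [1 T; 0 1], formula (7)
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # observe positions only, formula (9)
Q = 1e-4 * np.eye(6)                             # process noise (Gaussian white)
R_cov = 1e-3 * np.eye(3)                         # measurement noise (Gaussian white)

def kf_step(x, P, z):
    """One predict/update cycle: x is the state, P its covariance,
    z the measured 3-D position of the sphere."""
    x, P = A @ x, A @ P @ A.T + Q                        # prediction
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_cov)     # Kalman gain
    x = x + K @ (z - H @ x)                              # measurement update
    P = (np.eye(6) - K @ H) @ P
    return x, P
```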
Further, the master station also includes a mean filter, connected to the master station controller, for filtering the pose increment of the cross support and sending the filtered pose increment through the network to the mechanical arm controller 7, which uses it to control the target pose of the mechanical arm 9.
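For illustration, the mean filter can be realized as a moving average over a sliding window of pose increments; the window length below is an assumed tuning parameter:

```python
from collections import deque

import numpy as np

# Sketch of the mean filter as a moving average over the last few 6-DoF
# pose increments; the window length is an assumed tuning parameter.

class MeanFilter:
    def __init__(self, window_len: int = 5):
        self.window = deque(maxlen=window_len)

    def __call__(self, increment) -> np.ndarray:
        self.window.append(np.asarray(increment, dtype=float))
        return np.mean(self.window, axis=0)    # element-wise average over the window
```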
In addition, so that the operator can accurately control the actions of the mechanical arm 9 and the mechanical gripper 8 to complete the operation task, the slave station further comprises a network camera 10 for acquiring images of the motion of the mechanical arm 9 and the mechanical gripper 8 and of the working scene; the master station further comprises a display connected with the master station controller, which receives the images collected by the network camera and sends them to the display. In the present embodiment, the master station controller and the display are integrated in one control computer 5.
The working process of the vision-based teleoperation robot control system (FIG. 1) is as follows. At the master station, the operator 1 holds the linear support 2 in one hand and the cross support 3 in the other and moves them; the cross support 3 controls the end pose of the mechanical arm 9, and the linear support 2 controls the opening and closing of the mechanical gripper 8. The depth camera 4 simultaneously acquires a color image and the corresponding depth image while the two supports move, so the three-dimensional position of any scene point in the camera coordinate system can be determined, and the spatial positions of the six spheres on the supports held by the operator 1 are obtained by stereoscopic vision. The control computer 5 of the master station calculates the pose of the cross support and the included angle between the linear support and the vertical direction from the spatial positions of the six spheres. When the linear support is in the vertical posture the mechanical gripper 8 closes; when it is horizontal the mechanical gripper 8 opens. The pose of the cross support represents the pose of the operator's hand. At start-up, the control computer first obtains the initial hand pose of the operator 1 in this way, then continuously subtracts the initial pose from each subsequently measured pose to obtain the pose increment of the operator's hand (i.e. the pose increment of the cross support). The increment is smoothed by the mean filter, and the filtered increment together with the included angle is transmitted over the network 6 to the mechanical arm controller 7 of the slave station, which sends control signals to the mechanical arm and the gripper, controlling the motion of the mechanical arm 9 and the opening and closing of the mechanical gripper 8. Meanwhile, the network camera 10 of the slave station captures images of the arm's motion and of its scene and transmits them back to the master station, where they are shown on the display.
The vision-based teleoperation robot control system of the invention extracts the pose and state of the operator's hands with the depth camera, the linear support and the cross support, realizing teleoperation that is natural to perform and low in cost. The Kalman filter and the color table reduce the amount of computation needed for color detection and allow the real-time requirements of the system to be met. The network camera transmits images of the work site back to the master station, visually displaying the state of the mechanical arm in the working scene, which enhances telepresence and enables relatively complex work to be completed.
In addition, the invention also provides a teleoperation robot control method based on vision. Specifically, the method for controlling the teleoperation robot based on the vision comprises the following steps:
at the master station, acquiring, through a depth camera, a color image and the corresponding depth image while the operator's hands drive the cross support and the linear support to move;
determining, through a master station controller, the pose increment of the cross support and the included angle between the linear support and the vertical direction according to the color image and the corresponding depth image;
and, at the slave station, receiving the pose increment of the cross support and the included angle between the linear support and the vertical direction through a mechanical arm controller, controlling the target pose of the mechanical arm according to the pose increment of the cross support, and controlling the mechanical gripper to open and close according to the included angle.
Determining, by the master station controller, the pose increment of the cross support and the included angle between the linear support and the vertical direction according to the color image and the corresponding depth image specifically includes:
Step 101: the operator holds the cross support and the linear support and moves them at the master station, and the master station controller calculates the initial pose of the cross support and the included angle between the linear support and the vertical direction by visual measurement;
Step 102: the operator continues to move the cross support and the linear support at the master station, and the master station controller continuously subtracts the initial pose from each subsequently obtained hand pose to obtain the pose increment of the cross support, which is then filtered with the mean filter.
At the slave station, the mechanical arm controller controls the target pose of the mechanical arm according to the pose increment of the cross support and controls the mechanical gripper to open and close according to the included angle. The target pose of the mechanical arm is its initial pose plus the filtered pose increment. Specifically, when the included angle between the linear support and the vertical direction is 0 (that is, the linear support is in the vertical posture), the mechanical gripper closes; when the included angle is 90 degrees (that is, the linear support is horizontal), the mechanical gripper opens.
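The relations of steps 101-102 and the slave-side target pose reduce to simple vector arithmetic, as the sketch below shows; the UDP transport with a JSON payload is one assumed realization of the network link, which the embodiment leaves unspecified:

```python
import json
import socket

import numpy as np

# Sketch of the increment/target-pose arithmetic plus one assumed way of
# sending the result from master to slave station over the network.

def pose_increment(pose_now: np.ndarray, pose_init: np.ndarray) -> np.ndarray:
    return pose_now - pose_init            # 6-DoF difference from the initial pose

def arm_target(arm_init: np.ndarray, filtered_inc: np.ndarray) -> np.ndarray:
    return arm_init + filtered_inc         # target pose = initial pose + increment

def send_to_slave(sock: socket.socket, addr, inc: np.ndarray, angle: float) -> None:
    payload = {"increment": inc.tolist(), "angle": angle}
    sock.sendto(json.dumps(payload).encode(), addr)
```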
Optionally, a sphere is provided at each of the three end portions of the cross support and at the joint of the cross support; a sphere is provided at each of the two ends of the linear support; and the six spheres all differ in color. For example, among the four spheres on the cross support 3, the sphere at the joint may be red and the spheres at the three ends blue, green and yellow, while the spheres at the two ends of the linear support 2 may be purple and black; the colors are, however, not limited thereto.
The vision-based teleoperation robot control method of the invention further comprises: performing classification and identification, through six classifiers, according to the colors of the six spheres in the color image; and determining, by the master station controller, the center positions of the six spheres by the gravity center method and determining, from the center positions and the depth image, the three-dimensional spatial positions of the six spheres in the camera coordinate system.
As shown in FIG. 2, the coordinates of the red sphere are (x_r, y_r, z_r), those of the blue sphere (x_b, y_b, z_b), those of the green sphere (x_g, y_g, z_g) and those of the yellow sphere (x_y, y_y, z_y).
The vector from red to green is:
(x_1, y_1, z_1) = (x_g − x_r, y_g − y_r, z_g − z_r) — formula (1);
the vector from red to yellow is:
(x_2, y_2, z_2) = (x_y − x_r, y_y − y_r, z_y − z_r) — formula (2);
and a third vector, perpendicular to both, is their cross product:
(x_3, y_3, z_3) = (y_1 z_2 − y_2 z_1, x_2 z_1 − x_1 z_2, x_1 y_2 − x_2 y_1) — formula (3).
The three vectors are normalized to obtain unit vectors i, j and k, from which the roll angle R, the pitch angle P and the yaw angle Y of the cross support 3 relative to the camera coordinate system are obtained. Combining the position of the red sphere on the cross support 3 with R, P and Y gives the pose of the cross support:
(x_r, y_r, z_r, R, P, Y) — formula (4).
Furthermore, the vision-based teleoperation robot control method also comprises calling a stored color table, each classifier determining the positions of the colored spheres by table lookup. The color table represents the classification results of the six classifiers over all 256 × 256 × 256 colors; specifically, it consists of three color sub-tables, each of size 256 × 256.
As shown in FIGS. 4 to 6, the embodiment of the present invention uses the YCbCr color space, and the color of each sphere is numbered with a non-zero integer, e.g. 1 for blue, 2 for green, 3 for yellow and 4 for red. To classify a color (Y, Cb, Cr), the first sub-table (FIG. 4) is looked up at (Cb, Cr); if the stored value is 0, the color is background. Otherwise the second (FIG. 5) and third (FIG. 6) sub-tables are consulted at the same position: if the Y component of the color lies between the values stored in the second and third tables, the pixel belongs to the corresponding sphere's color; otherwise it is background. For example, for a color with (Cb, Cr, Y) = (x_2, y_2, z), if the first table holds 3 at (x_2, y_2), the color may be yellow; if the second and third tables hold 84 and 229 at that position, the color is yellow when 84 ≤ z ≤ 229 and background otherwise.
Further, the control method of the teleoperation robot based on vision of the invention also comprises the following steps: predicting the position of each sphere in the next frame according to the three-dimensional space position of each sphere in the current frame under the camera coordinate system by a Kalman filter; and the main station controller carries out local detection according to the predicted position of each sphere in the next frame and the color image collected by the depth camera to determine the three-dimensional space position of the sphere.
Take the detection of the spheres on the cross support as an example. As shown in FIG. 3, global detection is first performed over the color image obtained by the depth camera for the four spheres on the cross support until the position of each sphere in the current frame is determined; once detection succeeds, the positions of the four spheres in the current frame are sent to the Kalman filter. The Kalman filter predicts the position of each sphere in the next frame, so the master station controller can perform local detection only in a small window around the predicted position. If the local detection succeeds, the state of the Kalman filter is updated; if it fails, the system switches back to global detection. In this way the spheres are located quickly and the detection speed is increased. The two spheres on the linear support are detected by the same process.
The state equation of the Kalman filter is:
X(k) = A X(k−1) + W(k), Z(k) = H X(k) + V(k) — formula (5);
where X(k) is the state of the system at time k, A is the system matrix, Z(k) is the measured value of the system at time k, H is the observation matrix of the system, W(k) is the noise of the control process and V(k) is the noise of the measurement process. In this embodiment, both noises are Gaussian white noise.
Assuming for simplicity that the state X(k) contains only the position and velocity in the x direction at time k (the y and z directions are handled by extending X(k) in the same way), X(k), A and H are set as:
X(k) = [x(k) v(k)]^T — formula (6);
A = [1 T; 0 1] — formula (7), where T is the sampling period;
Z(k) = [z(k)]^T — formula (8);
H = [1 0] — formula (9).
The spatial coordinates x, y and z of the six colored spheres on the two supports are taken as observed values, which determines the observation matrix; the spatial positions x, y and z of the six spheres and the velocities v_x, v_y and v_z in the corresponding directions are taken as state variables, which determines the state equation of the Kalman filter.
In addition, so that the operator can accurately control the actions of the mechanical arm 9 and the mechanical gripper 8 to complete the operation task, the vision-based teleoperation robot control method collects images of the motion of the mechanical arm and the mechanical gripper and of the working scene through a network camera at the slave station, transmits them back to the master station controller of the master station through the network, and sends them through the master station controller to a display for the operator to observe. Preferably, the master station controller and the display are integrated in one control computer.
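A minimal OpenCV sketch of this feedback path follows; the RTSP stream URL is a placeholder assumption for the network camera 10:

```python
import cv2

# Minimal sketch of the visual-feedback path: grab frames from the slave
# station's network camera and show them on the master station display.
# The RTSP URL is a placeholder assumption.

cap = cv2.VideoCapture("rtsp://slave-station/stream")    # network camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("work scene", frame)                      # master station display
    if cv2.waitKey(1) & 0xFF == ord("q"):                # quit on 'q'
        break
cap.release()
cv2.destroyAllWindows()
```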
Compared with the prior art, the vision-based teleoperation robot control method has the same beneficial effects as the vision-based teleoperation robot control system described above, which are not repeated here.
The technical solutions of the present invention have thus been described with reference to the preferred embodiments shown in the drawings, but those skilled in the art will readily understand that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of the related technical features may be made without departing from the principle of the present invention, and the technical solutions after such changes or substitutions fall within the protection scope of the present invention.

Claims (8)

1. A teleoperation robot control system based on vision, characterized by comprising a master station and a slave station, wherein the master station is connected with the slave station through a network; and wherein
the master station includes:
the linear support and the cross support are handheld devices, respectively driven to move by the movement of the operator's two hands;
the depth camera is used for acquiring a color image and a corresponding depth image when the cross support and the linear support move;
the master station controller is connected with the depth camera and is used for determining the pose increment of the cross support and the included angle between the linear support and the vertical direction according to the color image and the corresponding depth image;
the slave station comprises a mechanical arm, a mechanical gripper connected to the end of the mechanical arm, and a mechanical arm controller; the mechanical arm controller is connected with the master station controller through a network and is used for receiving the pose increment of the cross support and the included angle between the linear support and the vertical direction; the mechanical arm controller is respectively connected with the mechanical arm and the mechanical gripper and is used for controlling the target pose of the mechanical arm according to the pose increment of the cross support and controlling the mechanical gripper to open or close according to the included angle;
a sphere is provided at each of the three end portions of the cross support and at the joint of the cross support; a sphere is provided at each of the two ends of the linear support; and the six spheres all differ in color.
2. The vision-based teleoperated robot control system of claim 1, wherein the master station further comprises:
the six classifiers are respectively connected with the depth camera and the master station controller and are used for classifying pixels according to the colors of the six spheres in the color image;
and the master station controller is further used for determining the center positions of the six spheres by the gravity center method and determining, from the center positions and the depth image, the three-dimensional spatial positions of the six spheres in the current frame in the camera coordinate system.
3. The vision-based teleoperated robot control system of claim 2, wherein the master station further comprises:
the Kalman filter is connected with the master station controller and is used for predicting the position of each sphere in the next frame from its three-dimensional spatial position in the current frame in the camera coordinate system;
and the master station controller is further used for performing local detection, based on the predicted position of each sphere in the next frame and the color image collected by the depth camera, to determine the three-dimensional spatial position of the sphere.
4. The vision-based teleoperated robot control system of claim 1, wherein the master station further comprises a mean filter, connected to the master station controller, for filtering the pose increment of the cross support and sending the filtered pose increment to the mechanical arm controller via the network.
5. The vision-based teleoperated robot control system of any one of claims 1-4, wherein the slave station further comprises a network camera for capturing images of the motion of the mechanical arm and the mechanical gripper and of the working scene;
the master station further comprises a display connected with the master station controller; the master station controller is further used for receiving the images of the motion of the mechanical arm and the mechanical gripper and of the working scene collected by the network camera and sending them to the display.
6. A vision-based teleoperated robot control method, the control method comprising:
at a master station, collecting, through a depth camera, a color image and the corresponding depth image while the operator's hands drive the cross support and the linear support to move;
determining, through a master station controller, the pose increment of the cross support and the included angle between the linear support and the vertical direction according to the color image and the corresponding depth image;
at a slave station, receiving the pose increment of the cross support and the included angle between the linear support and the vertical direction through a mechanical arm controller, controlling the target pose of the mechanical arm according to the pose increment of the cross support, and controlling the opening and closing of the mechanical gripper according to the included angle;
wherein a sphere is provided at each of the three end portions of the cross support and at the joint of the cross support; a sphere is provided at each of the two ends of the linear support; and the six spheres all differ in color.
7. The vision-based teleoperated robot control method of claim 6, further comprising:
performing classification and identification, through six classifiers, according to the colors of the six spheres in the color image; and determining, by the master station controller, the center positions of the six spheres by the gravity center method and determining, from the center positions and the depth image, the three-dimensional spatial positions of the six spheres in the current frame in the camera coordinate system.
8. The vision-based teleoperated robot control method of claim 7, further comprising:
predicting, by a Kalman filter, the position of each sphere in the next frame from its three-dimensional spatial position in the current frame in the camera coordinate system; and performing local detection, by the master station controller, based on the predicted position of each sphere in the next frame and the color image collected by the depth camera, to determine the three-dimensional spatial position of the sphere.
CN201710428209.1A 2017-06-08 2017-06-08 Teleoperation robot control system and method based on vision Active CN107363831B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710428209.1A CN107363831B (en) 2017-06-08 2017-06-08 Teleoperation robot control system and method based on vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710428209.1A CN107363831B (en) 2017-06-08 2017-06-08 Teleoperation robot control system and method based on vision

Publications (2)

Publication Number Publication Date
CN107363831A CN107363831A (en) 2017-11-21
CN107363831B (en) 2020-01-10

Family

ID=60304837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710428209.1A Active CN107363831B (en) 2017-06-08 2017-06-08 Teleoperation robot control system and method based on vision

Country Status (1)

Country Link
CN (1) CN107363831B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110421558B (en) * 2019-06-21 2023-04-28 中国科学技术大学 Universal teleoperation system and method for power distribution network operation robot
CN111633653A (en) * 2020-06-04 2020-09-08 上海机器人产业技术研究院有限公司 Mechanical arm control system and method based on visual positioning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102961811A (en) * 2012-11-07 2013-03-13 上海交通大学 Trachea intubating system and method based on remotely operated mechanical arm
CN103302668A (en) * 2013-05-22 2013-09-18 东南大学 Kinect-based space teleoperation robot control system and method thereof
CN104570731A (en) * 2014-12-04 2015-04-29 重庆邮电大学 Uncalibrated human-computer interaction control system and method based on Kinect
CN104589356A (en) * 2014-11-27 2015-05-06 北京工业大学 Dexterous hand teleoperation control method based on Kinect human hand motion capturing
CN106003076A (en) * 2016-06-22 2016-10-12 潘小胜 Powder spraying robot based on stereoscopic vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9452531B2 (en) * 2014-02-04 2016-09-27 Microsoft Technology Licensing, Llc Controlling a robot in the presence of a moving object


Also Published As

Publication number Publication date
CN107363831A (en) 2017-11-21

Similar Documents

Publication Publication Date Title
US20210205986A1 (en) Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose
CN113696186B (en) Mechanical arm autonomous moving and grabbing method based on visual-touch fusion under complex illumination condition
CN107914272B (en) Method for grabbing target object by seven-degree-of-freedom mechanical arm assembly
CN109955254B (en) Mobile robot control system and teleoperation control method for robot end pose
CN112634318B (en) Teleoperation system and method for underwater maintenance robot
CN110744544B (en) Service robot vision grabbing method and service robot
Melchiorre et al. Collison avoidance using point cloud data fusion from multiple depth sensors: a practical approach
CN108202316A (en) A kind of crusing robot and control method of automatic switch cabinet door
Yang et al. Real-time human-robot interaction in complex environment using kinect v2 image recognition
CN107363831B (en) Teleoperation robot control system and method based on vision
Leitner et al. Transferring spatial perception between robots operating in a shared workspace
Han et al. Grasping control method of manipulator based on binocular vision combining target detection and trajectory planning
Yang et al. Visual servoing control of baxter robot arms with obstacle avoidance using kinematic redundancy
Quesada et al. Holo-SpoK: Affordance-aware augmented reality control of legged manipulators
Sugimoto et al. Half-diminished reality image using three rgb-d sensors for remote control robots
Kragic et al. Model based techniques for robotic servoing and grasping
WO2020179416A1 (en) Robot control device, robot control method, and robot control program
Makita et al. Offline direct teaching for a robotic manipulator in the computational space
Cho et al. Development of VR visualization system including deep learning architecture for improving teleoperability
CN115194774A (en) Binocular vision-based control method for double-mechanical-arm gripping system
Taylor et al. Hybrid position-based visual servoing with online calibration for a humanoid robot
CN116867611A (en) Fusion static large-view-field high-fidelity movable sensor for robot platform
Walęcki et al. Control system of a service robot's active head exemplified on visual servoing
Xu et al. Design of a human-robot interaction system for robot teleoperation based on digital twinning
Nishida et al. Development of Pilot Assistance System with Stereo Vision for Robot Manipulation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant