CN107363831A - Vision-based teleoperation robot control system and method - Google Patents
Vision-based teleoperation robot control system and method
- Publication number
- CN107363831A (application number CN201710428209.1A)
- Authority
- CN
- China
- Prior art keywords
- sphere
- mechanical arm
- controller
- support
- vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1689—Teleoperation
- B25J13/00—Controls for manipulators
- B25J13/06—Control stands, e.g. consoles, switchboards
Abstract
The present invention relates to a vision-based teleoperation robot control system and method. The control system comprises a master station and a slave station. The master station comprises a linear support, a cross support, a depth camera and a master station controller; the master station controller is connected with the depth camera and determines the pose increment of the cross support and the included angle between the linear support and the vertical direction from a color image and a corresponding depth image. The slave station comprises a mechanical arm, a mechanical gripper connected to the end of the mechanical arm, and a mechanical arm controller; the mechanical arm controller is connected with the master station controller through a network, and is connected with the mechanical arm and the mechanical gripper respectively, controlling the target pose of the mechanical arm according to the pose increment of the cross support and the opening and closing of the mechanical gripper according to the included angle. The operator can thus perform operation tasks naturally; at the same time, the equipment structure is simplified, the resolution of the operator's compound actions is improved, and the control accuracy is improved.
Description
Technical Field
The invention relates to the technical field of teleoperation robot control, and in particular to a vision-based teleoperation robot control system and method.
Background
Despite great advances in robotics in recent years, robots are still unable to independently perform complex, dangerous operational tasks such as nuclear waste handling, explosive disposal, and underwater and space exploration.
At present, robot teleoperation technology can isolate humans from dangerous operating environments while exploiting human skill and experience in solving problems, and therefore has strong practical value and broad application prospects.
Existing teleoperation methods generally fall into two types: contact and non-contact. Contact teleoperation methods typically use exoskeleton equipment, data gloves, inertial measurement and similar means to acquire the motion of an operator, and then use this motion information to control the robot. Because sensor devices must be worn on the operator, these methods restrict the operating motions and make them unnatural. Non-contact teleoperation methods generally measure the operator's actions visually, so the operator can move naturally during operation, which helps the operator complete more complex teleoperation tasks but places higher demands on the accuracy of the visual measurement.
In vision-based teleoperation systems, approaches that do not use specific markers are the most natural for the operator. However, such approaches place high demands on the recognition algorithm, and at present can only accomplish simple operation tasks such as basic grasping. In contrast, visual methods using specific markers have simple algorithms, and can achieve high detection accuracy given a simple, effective marker design and a fast, efficient detection algorithm.
Disclosure of Invention
In order to solve the problems of the prior art, namely that the external equipment required by contact teleoperation methods is structurally complex, that the operator cannot act naturally when performing an operation task, and that complex operator actions are poorly recognized, the invention provides a vision-based teleoperation robot control system and method.
In order to achieve the purpose, the invention provides the following scheme:
a vision-based teleoperation robot control system comprises a master station and a slave station, wherein the master station is connected with the slave station through a network; wherein,
the master station includes:
the linear support and the cross support, which are handheld devices respectively moved by the operator's hands;
the depth camera, which acquires a color image and a corresponding depth image while the cross support and the linear support move;
the master station controller, which is connected with the depth camera and determines, from the color image and the corresponding depth image, the pose increment of the cross support and the included angle between the linear support and the vertical direction;
the slave station comprises a mechanical arm, a mechanical gripper connected to the end of the mechanical arm, and a mechanical arm controller; the mechanical arm controller is connected with the master station controller through the network and receives the pose increment of the cross support and the included angle between the linear support and the vertical direction; the mechanical arm controller is connected with the mechanical arm and the mechanical gripper respectively, controls the target pose of the mechanical arm according to the pose increment of the cross support, and controls the mechanical gripper to open or close according to the included angle.
Optionally, a sphere is provided at each of three ends of the cross support and at the junction of the cross support; a sphere is provided at each of the two ends of the linear support, and the six spheres all have different colors.
Optionally, the master station further includes:
the six classifiers are respectively connected with the depth camera and the master station controller, and are used for classifying and identifying the six spheres in the color image according to their colors;
and the master station controller is also used for determining the sphere-center positions of the six spheres by the center-of-gravity method and for determining the three-dimensional positions of the six spheres in the camera coordinate system in the current frame from the sphere-center positions and the depth image.
Optionally, the master station further includes:
the Kalman filter is connected with the master station controller and used for predicting the position of each sphere in the next frame according to the three-dimensional position of each sphere in the camera coordinate system in the current frame;
and the master station controller is also used for performing local detection according to the predicted position of each sphere in the next frame and the color image collected by the depth camera to determine the three-dimensional position of the sphere.
Optionally, the master station further includes a mean filter, connected to the master station controller, for filtering the pose increment of the cross support and sending the filtered pose increment to the mechanical arm controller through the network.
Optionally, the slave station further comprises a network camera for acquiring images of the motion of the mechanical arm and the mechanical gripper and of the working scene;
the master station also comprises a display connected with the master station controller; the master station controller is also used for receiving the images, collected by the network camera, of the motion of the mechanical arm and the mechanical gripper and of the working scene, and sending them to the display.
Embodiments of the invention disclose the following technical effects:
by providing a depth camera, the vision-based teleoperation robot control system acquires a color image and a corresponding depth image of the cross support and the linear support as they are moved by the operator's hands, and the master station controller determines the pose increment of the cross support and the included angle between the linear support and the vertical direction; the mechanical arm controller can then control the target pose of the mechanical arm according to the pose increment of the cross support and control the opening and closing of the mechanical gripper according to the included angle, so that the operator can perform operation tasks naturally, the equipment structure is simplified, complex operator actions are recognized more reliably, and the control accuracy is improved.
In order to achieve the purpose, the invention provides the following scheme:
a vision-based teleoperated robot control method, the control method comprising:
at the master station, acquiring, through a depth camera, a color image and a corresponding depth image of the cross support and the linear support as they are moved by the operator's hands;
determining, through the master station controller, the pose increment of the cross support and the included angle between the linear support and the vertical direction from the color image and the corresponding depth image;
and, at the slave station, receiving, through the mechanical arm controller, the pose increment of the cross support and the included angle between the linear support and the vertical direction, controlling the target pose of the mechanical arm according to the pose increment of the cross support, and controlling the mechanical gripper to open and close according to the included angle.
Optionally, a sphere is provided at each of three ends of the cross support and at the junction of the cross support; a sphere is provided at each of the two ends of the linear support, and the six spheres all have different colors.
Optionally, the control method further includes:
classifying and identifying the six spheres in the color image according to their colors, through six classifiers; and determining, by the master station controller, the sphere-center positions of the six spheres using the center-of-gravity method, and determining the three-dimensional positions of the six spheres in the camera coordinate system in the current frame from the sphere-center positions and the depth image.
Optionally, the control method further includes:
predicting, by a Kalman filter, the position of each sphere in the next frame from its three-dimensional position in the camera coordinate system in the current frame; and performing, by the master station controller, local detection according to the predicted position of each sphere in the next frame and the color image collected by the depth camera, to determine the three-dimensional position of the sphere.
Embodiments of the invention disclose the following technical effects:
by means of a depth camera, the vision-based teleoperation robot control method acquires a color image and a corresponding depth image of the cross support and the linear support as they are moved by the operator's hands, and the master station controller determines the pose increment of the cross support and the included angle between the linear support and the vertical direction; the mechanical arm controller can then control the target pose of the mechanical arm according to the pose increment of the cross support and control the opening and closing of the mechanical gripper according to the included angle, so that the operator can perform operation tasks naturally, the equipment structure is simplified, complex operator actions are recognized more reliably, and the control accuracy is improved.
Drawings
FIG. 1 is a schematic diagram of the structure of a vision-based teleoperated robotic control system of the present invention;
FIG. 2 is a schematic diagram of the pose calculation method according to an embodiment of the invention;
FIG. 3 is a flow chart of the stereoscopic vision software according to the embodiment of the present invention;
FIG. 4 is the first color sub-table of the color table according to an embodiment of the present invention;
FIG. 5 is the second color sub-table of the color table according to an embodiment of the present invention;
FIG. 6 is the third color sub-table of the color table according to an embodiment of the present invention.
Description of the symbols:
the system comprises an operator-1, a linear support-2, a cross support-3, a depth camera-4, a control computer-5, a network-6, a mechanical arm controller-7, a mechanical gripper-8, a mechanical arm-9 and a network camera-10.
Detailed Description
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention.
The invention provides a vision-based teleoperation robot control system and method. A depth camera is provided to collect a color image and a corresponding depth image of the cross support and the linear support as they are moved by the operator's hands, and a master station controller determines the pose increment of the cross support and the included angle between the linear support and the vertical direction; the mechanical arm controller can control the target pose of the mechanical arm according to the pose increment of the cross support and control the opening and closing of the mechanical gripper according to the included angle, so that the operator can perform operation tasks naturally while the equipment structure is simplified, complex operator actions are recognized more reliably, and the control accuracy is improved.
As shown in fig. 1, the vision-based teleoperated robot control system of the present invention includes a master station and a slave station, and the master station is connected to the slave station through a network 6.
The master station comprises a linear support 2, a cross support 3, a depth camera 4 and a master station controller. The linear support 2 and the cross support 3 are handheld devices moved by the hands of an operator 1; the depth camera 4 acquires a color image and a corresponding depth image while the cross support 3 and the linear support 2 move; and the master station controller, connected with the depth camera 4, determines the pose increment of the cross support 3 and the included angle between the linear support 2 and the vertical direction from the color image and the corresponding depth image.
The slave station comprises a mechanical arm 9, a mechanical gripper 8 connected to the end of the mechanical arm 9, and a mechanical arm controller 7. The mechanical arm controller 7 is connected with the master station controller through the network 6 and receives the pose increment of the cross support 3 and the included angle between the linear support 2 and the vertical direction; it is connected with the mechanical arm 9 and the mechanical gripper 8 respectively, controls the target pose of the mechanical arm 9 according to the pose increment of the cross support 3, and controls the opening and closing of the mechanical gripper 8 according to the included angle. The target pose of the mechanical arm 9 is its initial pose plus the filtered pose increment. Specifically, when the included angle between the linear support 2 and the vertical direction is 0° (i.e., the linear support 2 is vertical), the mechanical gripper 8 closes; when the included angle is 90° (i.e., the linear support 2 is horizontal), the mechanical gripper 8 opens.
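This control rule can be summarized in a short sketch. The following Python fragment is illustrative only: the dead-band thresholds and all names are assumptions made for the sketch, since the patent specifies only the two nominal angles of 0° and 90°.

```python
import numpy as np

# Illustrative thresholds (assumed): the patent names only the nominal
# angles 0 deg (close) and 90 deg (open); a tolerance band is added here.
CLOSE_BELOW_DEG = 10.0
OPEN_ABOVE_DEG = 80.0

def gripper_command(angle_deg):
    """Map the included angle between the linear support and the vertical
    direction to a gripper action."""
    if angle_deg <= CLOSE_BELOW_DEG:
        return "close"          # linear support near vertical
    if angle_deg >= OPEN_ABOVE_DEG:
        return "open"           # linear support near horizontal
    return None                 # in between: keep the current state

def target_pose(initial_pose, filtered_increment):
    """Target pose of the arm = its initial pose plus the filtered
    pose increment (x, y, z, R, P, Y)."""
    return np.asarray(initial_pose) + np.asarray(filtered_increment)
```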
Preferably, a sphere is provided at each of the three ends of the cross support 3 and at its junction; a sphere is provided at each of the two ends of the linear support 2, and the six spheres all have different colors. For example, among the four spheres on the cross support 3, the sphere at the junction may be red and the spheres at the three ends blue, green and yellow, while the spheres at the two ends of the linear support 2 may be purple and black; the invention is not limited to these colors.
Further, the master station further comprises six classifiers which are respectively connected with the depth camera 4 and the master station controller and used for carrying out classification and identification according to the colors of six spheres in the color image;
and the master station controller is also used for determining the sphere-center positions of the six spheres by the center-of-gravity method and for determining the three-dimensional positions of the six spheres in the camera coordinate system in the current frame from the sphere-center positions and the depth image.
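A minimal sketch of this step, assuming a binary mask produced by one color classifier and pinhole intrinsics fx, fy, cx, cy for the depth camera (calibration values the patent does not give):

```python
import numpy as np

def sphere_position_3d(mask, depth, fx, fy, cx, cy):
    """Center-of-gravity method: average the pixel coordinates of all
    pixels classified as one sphere's color, then back-project the
    centroid to camera coordinates using the depth image."""
    vs, us = np.nonzero(mask)              # rows (v) and columns (u) of sphere pixels
    if us.size == 0:
        return None                        # sphere not visible in this frame
    u, v = us.mean(), vs.mean()            # image centroid of the color blob
    z = float(depth[int(round(v)), int(round(u))])  # depth at the centroid
    x = (u - cx) * z / fx                  # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```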
As shown in FIG. 2, the coordinates of the red sphere are (xr, yr, zr), the coordinates of the blue sphere are (xb, yb, zb), the coordinates of the green sphere are (xg, yg, zg), and the coordinates of the yellow sphere are (xy, yy, zy), where x, y and z denote the three axes of the camera coordinate system.
The vector from red to green is:
(x1, y1, z1) = (xg - xr, yg - yr, zg - zr) - formula (1);
the vector from red to yellow is:
(x2, y2, z2) = (xy - xr, yy - yr, zy - zr) - formula (2);
and a third vector, perpendicular to both, is their cross product:
(x3, y3, z3) = (y1z2 - y2z1, x2z1 - x1z2, x1y2 - x2y1) - formula (3).
Normalizing these three vectors yields the unit vectors i, j and k, from which the roll angle R, pitch angle P and yaw angle Y of the cross support 3 relative to the camera coordinate system are obtained. Combining the position of the red sphere on the cross support 3 with R, P and Y gives the pose of the cross support:
(xr, yr, zr, R, P, Y) - formula (4).
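Formulas (1)-(4) can be sketched as below. Two details go beyond the text and are assumptions: since the red-to-green and red-to-yellow vectors need not be exactly orthogonal, the sketch re-orthogonalizes j = k × i, and it extracts the Euler angles with a Z-Y-X convention, which the patent does not specify.

```python
import numpy as np

def cross_support_pose(p_red, p_green, p_yellow):
    """Pose of the cross support from the red, green and yellow sphere
    positions (NumPy arrays in the camera frame), per formulas (1)-(4)."""
    v1 = p_green - p_red                   # formula (1): red -> green
    v2 = p_yellow - p_red                  # formula (2): red -> yellow
    v3 = np.cross(v1, v2)                  # formula (3): perpendicular to both
    i = v1 / np.linalg.norm(v1)
    k = v3 / np.linalg.norm(v3)
    j = np.cross(k, i)                     # re-orthogonalized (assumption)
    Rm = np.column_stack((i, j, k))        # support axes expressed in the camera frame
    pitch = -np.arcsin(Rm[2, 0])           # Z-Y-X Euler angles (assumed convention)
    roll = np.arctan2(Rm[2, 1], Rm[2, 2])
    yaw = np.arctan2(Rm[1, 0], Rm[0, 0])
    return np.concatenate([p_red, [roll, pitch, yaw]])   # formula (4)
```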
Furthermore, the master station also comprises a memory in which a color table is stored. Each classifier is connected with the memory, calls the color table, and determines the positions of the colored spheres by table lookup. The color table represents the classification results of the six classifiers for all 256 × 256 × 256 colors; specifically, it consists of three color sub-tables, each of size 256 × 256.
As shown in FIGs. 4 to 6, the embodiment of the present invention uses the YCbCr color space and numbers the color of each sphere with a non-zero integer, for example 1 for blue, 2 for green, 3 for yellow and 4 for red; each of the three color sub-tables is 256 × 256 in size. When a color (Y, Cb, Cr) needs to be classified, the first table (FIG. 4) is looked up at position (Cb, Cr). If the entry is 0, the color is background. If it is non-zero, the second table (FIG. 5) and the third table (FIG. 6) are looked up at the same position, and the color belongs to the corresponding sphere only if its Y component lies between the two entries; otherwise it is background. For example, if the entry of the first table at (Cb, Cr) is 3, indicating that the color may be yellow, and the corresponding entries of the second and third tables are 84 and 229, then the color is yellow if 84 ≤ Y ≤ 229, and background otherwise.
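A minimal sketch of the three-table lookup, under the assumption that the first table stores the sphere label indexed by (Cb, Cr) and the second and third tables store the lower and upper admissible Y values; in practice the table contents would be filled in offline from the six trained classifiers.

```python
import numpy as np

table1 = np.zeros((256, 256), dtype=np.uint8)       # label at (Cb, Cr): 0 = background
table2 = np.zeros((256, 256), dtype=np.uint8)       # lower admissible Y value
table3 = np.full((256, 256), 255, dtype=np.uint8)   # upper admissible Y value

def classify(y, cb, cr):
    """Return the sphere label (1 blue, 2 green, 3 yellow, 4 red, ...) for a
    YCbCr pixel, or 0 for background."""
    label = table1[cb, cr]
    if label == 0:
        return 0                                     # background
    return label if table2[cb, cr] <= y <= table3[cb, cr] else 0
```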
In addition, the master station further comprises a Kalman filter (not shown in the figures), connected to the master station controller, for predicting the position of each sphere in the next frame from its three-dimensional position in the camera coordinate system in the current frame; the master station controller is further used for performing local detection according to the predicted position of each sphere in the next frame and the color image collected by the depth camera, so as to determine the three-dimensional position of the sphere.
Taking the detection of the spheres on the cross support as an example, as shown in FIG. 3: first, the master station controller receives a color image from the depth camera and performs global detection for the 4 spheres on the cross support until the three-dimensional position of each sphere in the camera coordinate system in the current frame is determined; after detection succeeds, the positions of the 4 spheres in the current frame are sent to the Kalman filter. The Kalman filter predicts the position of each sphere in the next frame and returns it to the master station controller, so the controller only needs to perform local detection in a small range around the predicted position. If local detection succeeds, the state of the Kalman filter is updated; if it fails, the system switches back to global detection. The 2 spheres on the linear support are detected by the same process, which is not repeated here.
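The alternation between global and local detection can be sketched as follows; detect_global, detect_local and the per-sphere Kalman-filter object are assumed helpers, and the window size is a placeholder.

```python
def track_sphere(frames, kf, detect_global, detect_local, window=40):
    """Detect one colored sphere frame by frame: global search until the
    sphere is found, then local search in a window around the Kalman
    prediction, falling back to global search whenever the track is lost."""
    have_track = False
    for color, depth in frames:                # stream from the depth camera
        if have_track:
            predicted = kf.predict()           # predicted position in this frame
            pos = detect_local(color, depth, predicted, window)
        else:
            pos = detect_global(color, depth)
        if pos is None:
            have_track = False                 # lost: switch back to global detection
        else:
            kf.update(pos)                     # measurement update of the filter state
            have_track = True
        yield pos
```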
Specifically, the state equation of the Kalman filter is:
X(k+1) = A·X(k) + W(k), Z(k) = H·X(k) + V(k) - formula (5);
where X(k) is the state of the system at time k, A is the state transition (system) matrix, Z(k) is the measured value of the system at time k, H is the observation matrix of the system, W(k) represents the process noise and V(k) represents the measurement noise. In this embodiment, both the process noise and the measurement noise are Gaussian white noise.
Assuming that the state X(k) comprises only the position and velocity in the x direction at time k (the y and z directions are handled by extending X(k) in the same way), X(k), A and H are set as:
X(k) = [x(k) v(k)]^T - formula (6);
A = [1 T; 0 1], where T is the sampling period between frames - formula (7);
Z(k) = [z(k)] - formula (8);
H = [1 0] - formula (9).
The spatial positions x, y and z of the six colored spheres on the cross support and the linear support are taken as the observed values, which determines the observation matrix; the spatial positions x, y and z of the six spheres and the corresponding velocities vx, vy and vz are taken as the state variables, which determines the state equation of the Kalman filter.
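Written out per axis, formulas (5)-(9) give the following sketch; the frame period T and the noise covariances Q and R are assumed values, as the patent gives none.

```python
import numpy as np

T = 1.0 / 30.0                     # frame period (assumed 30 fps depth camera)
A = np.array([[1.0, T],
              [0.0, 1.0]])         # formula (7): constant-velocity transition
H = np.array([[1.0, 0.0]])         # formula (9): only the position is measured
Q = 1e-4 * np.eye(2)               # covariance of W(k), assumed
R = 1e-3 * np.eye(1)               # covariance of V(k), assumed

def kf_step(x, P, z):
    """One predict/update cycle for one axis, x = [position, velocity]."""
    x_pred = A @ x                                           # predict the state
    P_pred = A @ P @ A.T + Q                                 # predict the covariance
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)                    # measurement update
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# e.g. x, P = kf_step(np.array([0.5, 0.0]), np.eye(2), np.array([0.52]))
```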
Further, the master station also includes a mean filter, connected to the master station controller, which filters the pose increment of the cross support and sends the filtered pose increment through the network to the mechanical arm controller 7 for controlling the target pose of the mechanical arm 9.
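A sliding-window mean filter such as the following would serve; the window length is an assumption.

```python
from collections import deque
import numpy as np

class MeanFilter:
    """Sliding-window mean over the 6-D pose increment (x, y, z, R, P, Y)."""
    def __init__(self, window=5):          # window length assumed; not given in the patent
        self.buf = deque(maxlen=window)

    def __call__(self, increment):
        self.buf.append(np.asarray(increment, dtype=float))
        return np.mean(list(self.buf), axis=0)   # averaged pose increment
```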
In addition, in order to enable the operator to accurately control the actions of the mechanical arm 9 and the mechanical gripper 8 so as to complete the operation task, the slave station further comprises a network camera 10 for acquiring images of the motion of the mechanical arm 9 and the mechanical gripper 8 and of the working scene; the master station also comprises a display connected with the master station controller; the master station controller is further used for receiving the images, collected by the network camera, of the motion of the mechanical arm and the mechanical gripper and of the working scene, and sending them to the display. In the present embodiment, the master station controller and the display are integrated in one control computer 5.
The working process of the vision-based teleoperation robot control system (as shown in FIG. 1) is as follows: at the master station, the operator 1 holds the linear support 2 in one hand and the cross support 3 in the other and moves them; the cross support 3 controls the end pose of the mechanical arm 9, and the linear support 2 controls the opening and closing of the mechanical gripper 8. The depth camera 4 simultaneously obtains a color image and a corresponding depth image while the linear support 2 and the cross support 3 move, so the three-dimensional position of any point in the scene in the camera coordinate system can be determined, and the spatial positions of the six spheres on the supports held by the operator 1 are obtained by the stereoscopic vision method. The control computer 5 of the master station calculates the pose of the cross support and the included angle between the linear support and the vertical direction from the spatial positions of the six spheres; when the linear support is vertical the mechanical gripper 8 closes, and when it is horizontal the mechanical gripper 8 opens. The pose of the cross support represents the pose of the operator's hand: when the system starts, the control computer first obtains the initial hand pose of the operator 1 in this way, and then continuously subtracts the initial pose from each subsequently obtained pose to obtain the pose increment of the operator's hand (i.e., the pose increment of the cross support), which is then filtered by the mean filter. The included angle between the linear support and the vertical direction and the filtered pose increment are transmitted over the network 6 to the mechanical arm controller 7 of the slave station, which sends control signals to the mechanical arm and the mechanical gripper, controlling the motion of the mechanical arm 9 and the opening and closing of the mechanical gripper 8. Meanwhile, the network camera 10 of the slave station collects images of the motion of the mechanical arm and of its working scene, and transmits them back to the master station for display on the master station's display.
The vision-based teleoperation robot control system extracts the hand pose and state of the operator through the depth camera, the linear support and the cross support, thereby realizing teleoperation with natural operation, low cost and other advantages; the Kalman filter and the color table reduce the amount of computation during color detection and meet the real-time requirements of the system; and the network camera transmits images of the operation site back to the master station, visually displaying the state of the mechanical arm in the working scene, enhancing telepresence and enabling relatively complex work to be completed.
In addition, the invention also provides a teleoperation robot control method based on vision. Specifically, the method for controlling the teleoperation robot based on the vision comprises the following steps:
at the master station, acquiring, through a depth camera, a color image and a corresponding depth image of the cross support and the linear support as they are moved by the operator's hands;
determining, through the master station controller, the pose increment of the cross support and the included angle between the linear support and the vertical direction from the color image and the corresponding depth image;
and, at the slave station, receiving, through the mechanical arm controller, the pose increment of the cross support and the included angle between the linear support and the vertical direction, controlling the target pose of the mechanical arm according to the pose increment of the cross support, and controlling the mechanical gripper to open and close according to the included angle.
The determination, by the master station controller, of the pose increment of the cross support and the included angle between the linear support and the vertical direction from the color image and the corresponding depth image specifically includes:
Step 101: the operator moves the handheld cross support and linear support at the master station, and the master station controller calculates the initial pose of the cross support and the included angle between the linear support and the vertical direction by visual measurement;
Step 102: the operator continues to move the cross support and the linear support at the master station; the master station controller continuously subtracts the initial pose from each subsequently obtained hand pose of the operator to obtain the pose increment of the cross support, which is then filtered with the mean filter.
At the slave station, the mechanical arm controller controls the target pose of the mechanical arm according to the pose increment of the cross support, and controls the mechanical gripper to open and close according to the included angle. The target pose of the mechanical arm is its initial pose plus the filtered pose increment. Specifically, when the included angle between the linear support and the vertical direction is 0° (i.e., the linear support is vertical), the mechanical gripper closes; when the included angle is 90° (i.e., the linear support is horizontal), the mechanical gripper opens.
Optionally, a sphere is provided at each of the three ends of the cross support and at the junction of the cross support; a sphere is provided at each of the two ends of the linear support, and the six spheres all have different colors. For example, among the four spheres on the cross support 3, the sphere at the junction may be red and the spheres at the three ends blue, green and yellow, while the spheres at the two ends of the linear support 2 may be purple and black; the invention is not limited to these colors.
The vision-based teleoperation robot control method of the invention further comprises: classifying and identifying the six spheres in the color image according to their colors, through six classifiers; and determining, by the master station controller, the sphere-center positions of the six spheres using the center-of-gravity method, and determining the three-dimensional positions of the six spheres in the camera coordinate system from the sphere-center positions and the depth image.
As shown in FIG. 2, the coordinates of the red sphere are (xr, yr, zr), the coordinates of the blue sphere are (xb, yb, zb), the coordinates of the green sphere are (xg, yg, zg), and the coordinates of the yellow sphere are (xy, yy, zy).
The vector from red to green is:
(x1, y1, z1) = (xg - xr, yg - yr, zg - zr) - formula (1);
the vector from red to yellow is:
(x2, y2, z2) = (xy - xr, yy - yr, zy - zr) - formula (2);
and a third vector, perpendicular to both, is their cross product:
(x3, y3, z3) = (y1z2 - y2z1, x2z1 - x1z2, x1y2 - x2y1) - formula (3).
Normalizing these three vectors yields the unit vectors i, j and k, from which the roll angle R, pitch angle P and yaw angle Y of the cross support 3 relative to the camera coordinate system are obtained. Combining the position of the red sphere on the cross support 3 with R, P and Y gives the pose of the cross support:
(xr, yr, zr, R, P, Y) - formula (4).
Furthermore, the vision-based teleoperation robot control method also comprises calling the stored color table; each classifier determines the positions of the colored spheres by table lookup. The color table represents the classification results of the six classifiers for all 256 × 256 × 256 colors; specifically, it consists of three color sub-tables, each of size 256 × 256.
As shown in FIGs. 4 to 6, the embodiment of the present invention uses the YCbCr color space and numbers the color of each sphere with a non-zero integer, for example 1 for blue, 2 for green, 3 for yellow and 4 for red; each of the three color sub-tables is 256 × 256 in size. When a color (Y, Cb, Cr) needs to be classified, the first table (FIG. 4) is looked up at position (Cb, Cr). If the entry is 0, the color is background. If it is non-zero, the second table (FIG. 5) and the third table (FIG. 6) are looked up at the same position, and the color belongs to the corresponding sphere only if its Y component lies between the two entries; otherwise it is background. For example, if the entry of the first table at (Cb, Cr) is 3, indicating that the color may be yellow, and the corresponding entries of the second and third tables are 84 and 229, then the color is yellow if 84 ≤ Y ≤ 229, and background otherwise.
Further, the control method of the teleoperation robot based on vision of the invention also comprises the following steps: predicting the position of each sphere in the next frame according to the three-dimensional space position of each sphere in the current frame under the camera coordinate system by a Kalman filter; and the main station controller carries out local detection according to the predicted position of each sphere in the next frame and the color image collected by the depth camera to determine the three-dimensional space position of the sphere.
Taking the detection of the spheres on the cross support as an example, as shown in FIG. 3: first, global detection is performed for the 4 spheres on the cross support in the color image obtained by the depth camera until the position of each sphere in the current frame is determined, and after detection succeeds the positions of the 4 spheres in the current frame are sent to the Kalman filter; the Kalman filter predicts the position of each sphere in the next frame, so the master station controller only needs to perform local detection in a small range around the predicted position; if local detection succeeds the state of the Kalman filter is updated, and if it fails the system switches back to global detection, so the spheres are located quickly and the detection speed is increased. The 2 spheres on the linear support are detected by the same process.
Wherein the state equation of the Kalman filter is:
X(k+1) = A·X(k) + W(k), Z(k) = H·X(k) + V(k) - formula (5);
where X(k) is the state of the system at time k, A is the state transition (system) matrix, Z(k) is the measured value of the system at time k, H is the observation matrix of the system, W(k) represents the process noise and V(k) represents the measurement noise. In this embodiment, both the process noise and the measurement noise are Gaussian white noise.
Assuming that the state X(k) comprises only the position and velocity in the x direction at time k (the y and z directions are handled by extending X(k) in the same way), X(k), A and H are set as:
X(k) = [x(k) v(k)]^T - formula (6);
A = [1 T; 0 1], where T is the sampling period between frames - formula (7);
Z(k) = [z(k)] - formula (8);
H = [1 0] - formula (9).
The spatial coordinates x, y and z of the six colored spheres on the cross support and the linear support are taken as the observed values, which determines the observation matrix; the spatial positions x, y and z of the six spheres and the corresponding velocities vx, vy and vz are taken as the state variables, which determines the state equation of the Kalman filter.
In addition, in order to enable the operator to accurately control the actions of the mechanical arm 9 and the mechanical gripper 8 so as to complete the operation task, the vision-based teleoperation robot control method collects, at the slave station, images of the motion and working scene of the mechanical arm and the mechanical gripper through a network camera, transmits them over the network to the master station controller, and sends them through the master station controller to the display for the operator to observe. Preferably, the master station controller and the display are integrated in one control computer.
Compared with the prior art, the control method of the teleoperation robot based on the vision has the same beneficial effects as the teleoperation robot control system based on the vision, and the description is omitted.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
Claims (10)
1. A teleoperation robot control system based on vision is characterized by comprising a master station and a slave station, wherein the master station is connected with the slave station through a network; wherein,
the master station includes:
the linear support and the cross support are handheld devices, and the cross support and the linear support are respectively driven to move by the hand movement of an operator;
the depth camera is used for acquiring a color image and a corresponding depth image when the cross support and the linear support move;
the master station controller is connected with the depth camera and is used for determining the pose increment of the cross support and the included angle between the linear support and the vertical direction according to the color image and the corresponding depth image;
the slave station comprises a mechanical arm, a mechanical gripper and a mechanical arm controller, wherein the mechanical gripper is connected to the end of the mechanical arm; the mechanical arm controller is connected with the master station controller through the network and is used for receiving the pose increment of the cross support and the included angle between the linear support and the vertical direction; the mechanical arm controller is connected with the mechanical arm and the mechanical gripper respectively, and is used for controlling the target pose of the mechanical arm according to the pose increment of the cross support and controlling the mechanical gripper to open or close according to the included angle.
2. The vision-based teleoperated robot control system of claim 1, wherein a sphere is provided at each of three ends of the cross support and at the junction of the cross support; a sphere is provided at each of the two ends of the linear support, and the six spheres all have different colors.
3. The vision-based teleoperated robot control system of claim 2, wherein the master station further comprises:
the six classifiers are respectively connected with the depth camera and the master station controller and are used for classifying and identifying the six spheres in the color image according to their colors;
and the master station controller is also used for determining the sphere-center positions of the six spheres by the center-of-gravity method and determining the three-dimensional positions of the six spheres in the camera coordinate system in the current frame from the sphere-center positions and the depth image.
4. The vision-based teleoperated robot control system of claim 3, wherein the master station further comprises:
the Kalman filter is connected with the master station controller and is used for predicting the position of each sphere in the next frame according to the three-dimensional position of each sphere in the camera coordinate system in the current frame;
and the master station controller is also used for performing local detection according to the predicted position of each sphere in the next frame and the color image collected by the depth camera to determine the three-dimensional position of the sphere.
5. The vision-based teleoperated robot control system of claim 1, wherein the master station further comprises a mean filter connected to the master station controller for filtering the pose increment of the cross support and sending the filtered pose increment to the mechanical arm controller via the network.
6. The vision-based teleoperated robot control system of any one of claims 1-5, wherein the slave station further comprises a network camera for acquiring images of the motion of the mechanical arm and the mechanical gripper and of the working scene;
the master station also comprises a display connected with the master station controller; the master station controller is also used for receiving the images, collected by the network camera, of the motion of the mechanical arm and the mechanical gripper and of the working scene, and sending them to the display.
7. A vision-based teleoperated robot control method, the control method comprising:
acquiring, at the master station through a depth camera, a color image and a corresponding depth image of a cross support and a linear support as they are moved by the operator's hands;
determining, through a master station controller, the pose increment of the cross support and the included angle between the linear support and the vertical direction from the color image and the corresponding depth image;
and, at the slave station, receiving, through a mechanical arm controller, the pose increment of the cross support and the included angle between the linear support and the vertical direction, controlling the target pose of the mechanical arm according to the pose increment of the cross support, and controlling the mechanical gripper to open and close according to the included angle.
8. The vision-based teleoperated robot control method of claim 7, wherein a sphere is provided at each of three ends of the cross support and at the junction of the cross support; a sphere is provided at each of the two ends of the linear support, and the six spheres all have different colors.
9. The vision-based teleoperated robot control method of claim 8, further comprising:
classifying and identifying the six spheres in the color image according to their colors, through six classifiers; and determining, by the master station controller, the sphere-center positions of the six spheres using the center-of-gravity method, and determining the three-dimensional positions of the six spheres in the camera coordinate system in the current frame from the sphere-center positions and the depth image.
10. The vision-based teleoperated robot control method of claim 9, further comprising:
predicting, by a Kalman filter, the position of each sphere in the next frame from its three-dimensional position in the camera coordinate system in the current frame; and performing, by the master station controller, local detection according to the predicted position of each sphere in the next frame and the color image collected by the depth camera, to determine the three-dimensional position of the sphere.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710428209.1A CN107363831B (en) | 2017-06-08 | 2017-06-08 | Teleoperation robot control system and method based on vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710428209.1A CN107363831B (en) | 2017-06-08 | 2017-06-08 | Teleoperation robot control system and method based on vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107363831A true CN107363831A (en) | 2017-11-21 |
CN107363831B CN107363831B (en) | 2020-01-10 |
Family
ID=60304837
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710428209.1A Active CN107363831B (en) | 2017-06-08 | 2017-06-08 | Teleoperation robot control system and method based on vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107363831B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110421558A (en) * | 2019-06-21 | 2019-11-08 | 中国科学技术大学 | Universal remote control system and method towards power distribution network Work robot |
CN111633653A (en) * | 2020-06-04 | 2020-09-08 | 上海机器人产业技术研究院有限公司 | Mechanical arm control system and method based on visual positioning |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102961811A (en) * | 2012-11-07 | 2013-03-13 | 上海交通大学 | Trachea intubating system and method based on remotely operated mechanical arm |
CN103302668A (en) * | 2013-05-22 | 2013-09-18 | 东南大学 | Kinect-based space teleoperation robot control system and method thereof |
CN104570731A (en) * | 2014-12-04 | 2015-04-29 | 重庆邮电大学 | Uncalibrated human-computer interaction control system and method based on Kinect |
CN104589356A (en) * | 2014-11-27 | 2015-05-06 | 北京工业大学 | Dexterous hand teleoperation control method based on Kinect human hand motion capturing |
CN106003076A (en) * | 2016-06-22 | 2016-10-12 | 潘小胜 | Powder spraying robot based on stereoscopic vision |
US20160354927A1 (en) * | 2014-02-04 | 2016-12-08 | Microsoft Technology Licensing, Llc | Controlling a robot in the presence of a moving object |
Also Published As
Publication number | Publication date |
---|---|
CN107363831B (en) | 2020-01-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |