CN109955254B - Mobile robot control system and teleoperation control method for robot end pose - Google Patents

Mobile robot control system and teleoperation control method for robot end pose

Info

Publication number
CN109955254B
CN109955254B (application CN201910363155.4A)
Authority
CN
China
Prior art keywords
pose
virtual
mechanical arm
robot
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910363155.4A
Other languages
Chinese (zh)
Other versions
CN109955254A (en)
Inventor
纪鹏
马凤英
李敏
王斌鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanke Huazhi (Shandong) robot intelligent technology Co.,Ltd.
Original Assignee
Qilu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology
Priority to CN201910363155.4A
Publication of CN109955254A
Priority to PCT/CN2020/087846 (WO2020221311A1)
Priority to KR1020207030337A (KR102379245B1)
Application granted
Publication of CN109955254B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1679: Programme controls characterised by the tasks executed
    • B25J9/1689: Teleoperation

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The control method establishes a driving relationship between the operator's gesture pose and the end pose of the multi-degree-of-freedom mechanical arm, realizing continuous control of the arm's end pose. At the same time, the process of the virtual mechanical arm's end following the virtual gesture model is displayed in the head-mounted virtual display, making the control process more intuitive. This solves two problems of existing mobile reconnaissance robots with vehicle-mounted multi-degree-of-freedom reconnaissance systems: the control mode is complex, and the end pose of the reconnaissance system cannot be controlled intuitively.

Description

Mobile robot control system and teleoperation control method for robot end pose
Technical Field
The disclosure relates to the technical field of remote control of mobile robots, and in particular to a mobile robot control system and a teleoperation control method for the end pose of a robot.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
A mobile reconnaissance robot consists of a mobile robot body and a vehicle-mounted reconnaissance system, and can execute various combat missions such as close-range battlefield reconnaissance and monitoring, sneak assault, fixed-point clearing, nuclear and biochemical decontamination, and counter-terrorism and explosive disposal. A traditional vehicle-mounted reconnaissance system generally comprises a camera and a two-degree-of-freedom pan-tilt, and is typically controlled by mapping the pitch and yaw angles of a joystick to the pitch and yaw of the pan-tilt. A mobile reconnaissance robot carrying a multi-degree-of-freedom reconnaissance system, by contrast, generally comprises a multi-degree-of-freedom mechanical arm with a reconnaissance camera fixed to its end. The robot end pose is the position and attitude of the robot's end effector in a designated coordinate system; since the end effector of the mobile reconnaissance robot is a camera, the robot's end pose is determined by the end pose of the multi-degree-of-freedom mechanical arm. End-pose control of such an arm usually relies on buttons, or a joystick combined with buttons, and the operator must memorize the correspondence between each button and each joint of the vehicle-mounted arm, so this mode of operation is complex and unintuitive.
In recent years, gesture-based control of the end pose of vehicle-mounted multi-degree-of-freedom reconnaissance systems has appeared. A common approach is to wear data gloves or inertial sensors; this offers high recognition rates and good stability, but it can only control the attitude, not the position, of the reconnaissance system's end, and the input devices are expensive and inconvenient to wear. The other approach is vision-based, and divides into control based on image classification and control based on image processing. The former generally recognizes the type of gesture with a vision sensor and a pattern-recognition method, and then commands discrete motions of the end pose (move up, move down, and so on) according to the gesture type; its drawback is that it cannot quickly and accurately realize continuous control of the end pose. The latter generally tracks the motion trajectory of the gesture with a vision sensor and image processing, and then controls the end position according to the trajectory's position information; its drawback is that it cannot control the end attitude.
Disclosure of Invention
Aiming at mobile robots carrying a multi-degree-of-freedom mechanical arm, the invention provides a mobile robot control system based on wearable binocular vision and a teleoperation control method for the robot end pose. Through freely "wearing" and "detaching" the virtual arm, the method realizes continuous control of both the end position and the end attitude of the multi-degree-of-freedom mechanical arm, and thereby solves the problems that the control mode of existing vehicle-mounted multi-degree-of-freedom reconnaissance systems is complex and their end pose cannot be controlled intuitively.
In order to achieve the purpose, the following technical scheme is adopted in the disclosure:
one or more embodiments provide a mobile robot control system comprising a master-end wearable teleoperation control device and a slave-end robot that communicate wirelessly; the master-end wearable teleoperation control device is worn by the operator and is used for sending control instructions and receiving data collected by the slave-end robot;
the master-end wearable teleoperation control device includes a wearable binocular camera device, a head-mounted virtual display, a teleoperation controller, and master-end wireless communication equipment. The teleoperation controller is connected to the wearable binocular camera device, the head-mounted virtual display, and the master-end wireless communication equipment. The wearable binocular camera device is used for collecting images of the operator's gestures; the head-mounted virtual display is used for displaying images captured by the slave-end robot as well as a virtual model of the slave-end robot's mechanical arm and a virtual model of the operator's gestures.
Mounting the wearable binocular camera device and the head-mounted virtual display on the operator's head enables the collection of dual-view images and the simultaneous display of the virtual models and the collected reconnaissance images. This gives the operator a sense of being on the scene, realizes intuitive remote control of the slave-end robot, and, because the devices are wearable, frees the operator's hands and lightens the operator's burden.
One or more embodiments provide a teleoperation control method for a robot end pose based on the mobile robot control system, which is characterized by comprising the following steps:
step 1, setting a traction hand shape and a detaching hand shape;
step 2, constructing a virtual mechanical arm and a virtual gesture model and displaying them at the front end of the view volume of the head-mounted virtual display;
step 3, collecting dual-view images from the binocular camera;
step 4, detecting with a gesture detection algorithm whether the operator's gesture is present in the dual-view images; if yes, executing the next step, otherwise executing step 3;
step 5, recognizing the hand shape of the gesture with a hand-shape recognition algorithm and judging whether the traction hand shape appears; if yes, executing the next step, otherwise executing step 3;
step 6, processing the captured dual-view images to solve the pose P_H of the traction gesture in the coordinate system of the wearable binocular camera device, translating P_H into a pose description P_V in the screen coordinate system of the head-mounted virtual display, and using the transformed pose P_V to drive the virtual gesture model in the view volume of the head-mounted virtual display;
step 7, judging whether the difference between the pose P_V of the virtual gesture model and the end pose P_M of the virtual mechanical arm N6 is smaller than a preset threshold; if yes, executing the next step, otherwise executing step 3;
step 8, making the pose of the multi-degree-of-freedom mechanical arm follow the operator's traction hand-shape pose;
step 9, judging whether the detaching hand shape appears; if yes, stopping the multi-degree-of-freedom mechanical arm from following the operator's traction hand-shape pose and executing step 3; otherwise executing step 8.
In the above teleoperation control method for the robot end pose, when the traction hand shape is detected, continuous control of the end pose of the multi-degree-of-freedom mechanical arm is realized by establishing a driving relationship between the operator's gesture pose and the arm's end pose; when the detaching hand shape is detected, the arm is "detached" and its pose stops following the operator's traction hand-shape pose. At the same time, the following and detaching processes of the virtual mechanical arm's end and the virtual gesture model are displayed in the head-mounted virtual display, making the control process more intuitive. Starting and stopping control of the slave-end robot's multi-degree-of-freedom mechanical arm via dedicated gestures is simple and reliable.
Compared with the prior art, the beneficial effect of this disclosure is:
(1) The present disclosure mounts the wearable binocular camera device and the head-mounted virtual display on the operator's head, enabling the collection of dual-view images and the simultaneous display of the virtual models and the collected reconnaissance images. This gives the operator a sense of being on the scene, realizes intuitive remote control of the slave-end robot, frees the operator's hands, and lightens the operator's burden.
(2) The teleoperation control method for the robot end pose links the operator's gesture pose, the pose of the virtual gesture model, the end pose of the virtual mechanical arm, and the end pose of the multi-degree-of-freedom mechanical arm, and realizes continuous control of the arm's end pose by establishing a driving relationship between the operator's gesture pose and the arm's end pose. At the same time, the process of the virtual mechanical arm's end following the virtual gesture model is displayed in the head-mounted virtual display, making the control process more intuitive. Starting and stopping control of the slave-end robot's multi-degree-of-freedom mechanical arm via dedicated gestures is simple and reliable.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
Fig. 1 is a schematic illustration of virtual wear of embodiment 2 of the present disclosure;
FIG. 2 is a schematic illustration of a virtual detachment of embodiment 2 of the present disclosure;
fig. 3 is a flowchart of a control method of embodiment 2 of the present disclosure;
wherein: n1, a mobile robot body, N2, a multi-degree-of-freedom mechanical arm, N3, a detection camera, N4, video glasses, N5, a binocular camera, N6 and a virtual mechanical arm.
Detailed description of embodiments:
the present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
Note that the terminology used herein is for describing particular embodiments only and is not intended to limit the exemplary embodiments of this application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and the terms "comprises" and/or "comprising" specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise. The embodiments and the features of the embodiments in the present disclosure may be combined with each other without conflict. The disclosure is described in detail below with reference to the accompanying drawings and embodiments.
Robots can be classified into several types according to their end effectors. An end effector is fixed at the end of the robot's arm to execute a corresponding task; end effectors include dexterous hands, grippers, cameras, and the like. The end effector of a reconnaissance robot is a reconnaissance camera, and this embodiment is described using a reconnaissance robot as an example; however, the method for continuously controlling the robot end pose is not limited to reconnaissance robots and is applicable to the control of robots in general.
Example 1
In the technical solution disclosed in one or more embodiments, as shown in Figs. 1 and 2, a mobile robot control system comprises a master-end wearable teleoperation control device and a slave-end robot that communicate wirelessly. The master-end wearable teleoperation control device is worn by the operator and is used for sending control instructions and receiving data collected by the slave-end robot.
The master-end wearable teleoperation control device includes a wearable binocular camera device, a head-mounted virtual display, a teleoperation controller, and master-end wireless communication equipment. The teleoperation controller is connected to the wearable binocular camera device, the head-mounted virtual display, and the master-end wireless communication equipment. The wearable binocular camera device is used for collecting images of the operator's gestures; the head-mounted virtual display is used for displaying images captured by the slave-end robot as well as a virtual model of the slave-end robot's mechanical arm and a virtual model of the operator's gestures. The binocular camera device is configured to acquire dual-view images.
The teleoperation controller may be a wearable computer. The wearable computer acquires the dual-view gesture images captured by the wearable binocular camera device in real time, calculates the pose information of the operator's gestures from those images, and displays a virtual gesture model at the front end of the perspective view volume of the video glasses in real time according to that pose information.
the wearable binocular camera device can be a binocular camera N5, the binocular camera N5 be used for acquiring the binocular vision images of the gestures of the operator. The operator uses the gesture pose in the visual field range of the binocular camera N5 to realize the control of the tail end pose of the vehicle-mounted multi-freedom reconnaissance system.
The head-mounted virtual display may be video glasses N4, used to display the reconnaissance image captured by the slave-end robot's reconnaissance camera N3 and to display the virtual model of the multi-degree-of-freedom mechanical arm N2 and the virtual model of the operator's gestures. The reconnaissance image may be located at the rear end of the see-through view volume of the video glasses, while the two virtual models are located at its front end. This embodiment uses a perspective view volume, though other view volumes may be adopted; a perspective view volume is the view volume of a perspective projection, shaped like a pyramid with its apex and base cut off, i.e., a frustum.
The slave-end robot comprises a mobile robot body N1, a multi-degree-of-freedom mechanical arm N2, a reconnaissance camera N3, slave-end wireless communication equipment, and a vehicle-mounted controller; the vehicle-mounted controller is connected to the mobile robot body N1, the multi-degree-of-freedom mechanical arm N2, the reconnaissance camera N3, and the slave-end wireless communication equipment. The reconnaissance camera N3 is mounted at the end of the multi-degree-of-freedom mechanical arm N2 to collect reconnaissance data. The mobile robot body N1 further comprises a vehicle-body drive motor set and a motor driver, with the motor driver connected to the vehicle-mounted controller and the drive motor set. The mobile robot body N1 moves its position under the control of the master-end wearable teleoperation control device, received through the vehicle-mounted controller: the vehicle-mounted controller sends the control command to the motor driver, which drives the corresponding motors of the motor set to move the slave-end robot.
The vehicle-mounted multi-degree-of-freedom mechanical arm N2 executes corresponding actions under the control of the master-end wearable teleoperation control device. It comprises a link mechanism, a mechanical-arm driver, and a mechanical-arm drive motor set. The vehicle-mounted controller sends the control command to the mechanical-arm driver, which drives the corresponding motors of the drive motor set to move the link mechanism in angle and position, thereby changing the joint angle of each joint of the multi-degree-of-freedom mechanical arm N2.
The virtual model of the slave-end robot's mechanical arm is a virtual model of the multi-degree-of-freedom mechanical arm N2, which may be a virtual mechanical arm N6 drawn according to the D-H parameters of the multi-degree-of-freedom mechanical arm N2.
Example 2
This embodiment provides a teleoperation control method for the robot end pose based on the mobile robot control system of embodiment 1. As shown in Figs. 1 to 3, the method can realize continuous control of the position and attitude of the end of a multi-degree-of-freedom mechanical arm through the motion of a gesture, and includes the following steps:
step 1, setting a traction hand shape and a detachable hand shape;
the traction hand type means that when the operator is detected to be the hand type, the pose of the virtual gesture model is kept coincident with the terminal pose of the virtual mechanical arm in the video glasses, and the operator can drive the position and the posture (namely the pose) of the virtual gesture model in the video glasses N4 through the pose of the gesture, so that the virtual gesture model can continuously control the terminal pose of the virtual mechanical arm N6 in real time.
When the gesture changes into the detaching hand shape, the virtual gesture model no longer moves with the operator's gesture, and the operator's gesture no longer continuously controls the virtual mechanical arm N6 in real time.
The traction hand shape and the detaching hand shape can be any hand shapes and can be set as needed. The traction hand shape may be one that represents a Cartesian coordinate system: the ring finger and little finger are bent, while the thumb, index finger, and middle finger are extended and mutually perpendicular, forming a Cartesian coordinate frame. The detaching hand shape may be a one-handed fist.
Before step 1, the method may further comprise initialization and establishing a wireless connection:
initializing the teleoperation controller and the slave-end robot;
establishing a wireless communication channel between the teleoperation controller and the slave-end robot;
step 2: constructing a virtual mechanical arm and a virtual gesture model and displaying the virtual mechanical arm and the virtual gesture model at the front end of a scene body of the head-mounted virtual display;
the method for constructing the virtual mechanical arm and displaying the virtual mechanical arm at the front end of the view body of the head-mounted virtual display in the step 2 specifically comprises the following steps:
21) reading joint angle information of each joint of the multi-degree-of-freedom mechanical arm of the slave robot;
the action of the multi-degree-of-freedom mechanical arm is controlled by the vehicle-mounted controller, and the mechanical arm driver drives the corresponding motor of the mechanical arm driving motor set to realize the movement of the angle and the position of the link mechanism, so that the joint angle information of each joint of the multi-degree-of-freedom mechanical arm N2 is changed. The joint angle information of each joint of the multi-degree-of-freedom mechanical arm can be directly read by a vehicle-mounted controller.
22) The teleoperation controller calculates the D-H parameters of the multi-degree-of-freedom mechanical arm according to the collected joint angle information;
23) constructing a virtual mechanical arm according to the D-H parameters of the multi-degree-of-freedom mechanical arm and displaying it at the front end of the view volume of the head-mounted virtual display.
The angle of each joint of the virtual mechanical arm N6 is controlled by the received joint angle information. The base coordinate system of the virtual mechanical arm N6 is described in the screen coordinate system of the video glasses N4, the end coordinate system of the virtual mechanical arm N6 is denoted (O_M - X_M - Y_M - Z_M), and the end pose of the virtual mechanical arm N6 is denoted P_M, comprising position information and attitude information.
the method for constructing the virtual gesture model may specifically be:
(1) establishing a three-dimensional virtual gesture model of the traction hand shape offline using 3D modeling software;
(2) loading the three-dimensional virtual gesture model and rendering it in real time at the front end of the head-mounted virtual display's view volume; its position and attitude in the view volume are driven by the position and attitude of the operator's traction hand.
To help the operator act purposefully and accurately, the video glasses N4 may also display the reconnaissance environment around the robot's end. Specifically, the reconnaissance image acquired by the reconnaissance camera N3 may be displayed in the view volume of the video glasses N4. The method may therefore further include displaying the image captured by the slave-end robot on the head-mounted virtual display, as follows: acquire the reconnaissance image from the slave-end robot's end; the teleoperation controller receives the reconnaissance image and displays it in real time at the rear end of the view volume of the head-mounted virtual display.
Step 3, acquiring the dual-view images of the binocular camera N5; the operator's hand-shape information is collected by the binocular camera N5, and the dual-view image comprises the images of both the left and right views.
Step 4, detecting with a gesture detection algorithm whether the operator's gesture is present in the dual-view images; if so, executing the next step, otherwise executing step 3. Step 5 is executed as soon as the operator's gesture appears in the dual-view images.
The gesture detection algorithm may specifically be one based on a skin-tone threshold.
Step 5, recognizing the hand shape of the gesture with a hand-shape recognition algorithm and judging whether the traction hand shape appears; if so, executing the next step, otherwise executing step 3. The hand-shape recognition algorithm may specifically be one based on deep learning.
When the traction hand shape is detected in the dual-view images, control of the multi-degree-of-freedom mechanical arm N2 through the traction of the operator's hand shape begins. If the traction hand shape does not appear, the binocular camera N5 re-acquires the operator's hand-shape information.
Step 6, processing the captured dual-view images to solve the pose P_H of the traction gesture in the coordinate system of the wearable binocular camera device, translating P_H into a pose description P_V in the screen coordinate system of the head-mounted virtual display, and using the transformed pose P_V to drive the virtual gesture model in the view volume of the head-mounted virtual display;
solving pose P of traction gesture in wearable binocular camera coordinate systemHThe DeepPrior + + algorithm can be adopted, and can realize the estimation of the potential pose under the stereoscopic vision.
Solving the pose P_H of the traction gesture in the wearable binocular camera coordinate system may also adopt the following steps:
(1) The traction gesture pose P_H comprises position information and attitude information; the position information is solved directly from the gesture detection results in the left and right views using the parallax principle.
(2) The attitude information of the traction gesture pose P_H is obtained with a method based on regression learning, specifically through the following steps:
(2.1) First, acquire dual-view gesture images and a corresponding attitude data set. A handheld three-axis attitude sensor can be rotated around each of its three axes in front of the dual-view camera while the dual-view gesture detection result images corresponding to each output of the attitude sensor are collected. The two gesture images and the one frame of attitude data acquired at the same instant serve as an input sample and an output sample, respectively; the collected dual-view gesture images and the corresponding attitude data form the input-sample training set and the output-sample set.
(2.2) Fit the mapping relationship between the dual-view gesture images and the attitude data using a regression learning method.
(2.3) Through the above two steps, the attitude information of the traction gesture can be solved directly from the dual-view gesture images.
Step 6 may establish a correspondence between the operator's traction gesture and the virtual gesture model to convert the pose P_H into the pose P_V. The correspondence may specifically be a direct proportion: the position information of the traction gesture pose P_H in the wearable binocular camera coordinate system is proportional to the position information of P_V, and the attitude information of P_H is likewise proportional to the attitude information of P_V.
The pose P_H of the traction gesture is described in the coordinate system of the binocular camera N5. The palm center of the traction gesture may be designated as the origin, with the coordinate system at the palm denoted (O_H - X_H - Y_H - Z_H): the direction pointed by the middle finger is the X axis, the direction pointed by the thumb is the Y axis, and the direction pointed by the index finger is the Z axis. The position of P_H is described by the offset of the palm origin O_H relative to the origin of the binocular camera N5 coordinate system, and the attitude of P_H is described by the rotation of the traction gesture's X_H, Y_H, and Z_H axes relative to the axes of the binocular camera N5 coordinate system.
The pose P_V of the virtual traction gesture is described in the screen coordinate system of the video glasses N4. The palm center of the virtual traction gesture may be designated as the origin, with the coordinate system at the palm denoted (O_V - X_V - Y_V - Z_V): the direction pointed by the middle finger of the virtual traction gesture is the X axis, the direction pointed by the thumb is the Y axis, and the direction pointed by the index finger is the Z axis. The position of P_V is described by the offset of the virtual palm origin O_V relative to the origin of the screen coordinate system of the video glasses N4, and the attitude of P_V is described by the rotation of the virtual traction gesture's X_V, Y_V, and Z_V axes relative to the axes of the screen coordinate system of the video glasses N4.
Then, the transformed pose P_V drives the virtual gesture model in the view volume of the head-mounted virtual display, so that the virtual gesture model starts to move with the movement of the operator's gesture.
The driving method is specifically as follows: after the virtual gesture model is loaded into the head-mounted virtual display, the position information required for real-time rendering in the view volume is assigned directly from the position information of the pose P_V, and the attitude information required for real-time rendering in the view volume is assigned directly from the attitude information of the pose P_V.
Because the position and attitude of the virtual gesture model are assigned in real time directly from the position and attitude information of P_V, the pose of the virtual gesture model in the view volume is identical to P_V; the pose of the virtual gesture model can therefore be understood as being driven by P_V.
Step 7, judging whether the difference between the pose P_V of the virtual gesture model and the end pose P_M of the virtual mechanical arm N6 is smaller than a preset threshold; if so, executing the next step; otherwise, executing step 3;
Step 7 realizes the "wearing" process, in which the pose P_V of the virtual gesture model rapidly approaches the end pose P_M of the virtual mechanical arm N6. Through step 6, the operator moves the traction gesture so that the virtual gesture model also moves, until the pose P_V of the virtual gesture model approaches the end pose P_M of the virtual mechanical arm N6.
The specific implementation of step 7 is as follows: the operator observes the relative relationship between the virtual traction gesture pose P_V and the end pose P_M of the virtual mechanical arm N6 in the perspective view volume of the video glasses N4, and continuously moves the traction gesture pose P_H so that the difference between P_V and P_M decreases. The difference between the two poses is described by the following formula:

d = |P_V - P_M|

When the difference d between the virtual traction gesture pose P_V and the end pose P_M of the virtual mechanical arm N6 is smaller than the preset threshold, the end of the virtual mechanical arm N6 is considered to coincide with the virtual traction gesture; visually, the end of the virtual mechanical arm N6 is "virtually worn" on the virtual traction gesture. During this process, the teleoperation controller executes steps 3-7 repeatedly. Once the wearing process is finished, the multi-degree-of-freedom mechanical arm N2 can be pulled.
Step 8, enabling the pose of the multi-degree-of-freedom mechanical arm to change along with the traction hand pose of an operator;
the step 8 specifically comprises the following steps:
Set the end pose P_M of the virtual mechanical arm equal to the pose P_V of the virtual gesture model and solve for the corresponding joint angle values of the virtual mechanical arm N6. Specifically, a robot inverse kinematics algorithm solves in real time for the joint angle values of the virtual mechanical arm N6 at which its end pose P_M equals the pose P_V of the virtual gesture model.
The solved joint angle values of the virtual mechanical arm are converted into control instructions and transmitted to the slave-end robot, so that the joint angle of each joint of the multi-degree-of-freedom mechanical arm equals the corresponding joint angle value of the virtual mechanical arm.
Specifically: the teleoperation controller converts each joint angle of the virtual mechanical arm N6 into a control command and sends it to the slave-end robot through the wireless communication channel; the vehicle-mounted controller of the slave-end robot reads the received control command and converts it into motor drive commands, and the mechanical-arm driver then drives each joint motor of the drive motor set of the multi-degree-of-freedom mechanical arm N2 to rotate until each joint angle of the multi-degree-of-freedom mechanical arm N2 matches that of the virtual mechanical arm N6. The pose of the multi-degree-of-freedom mechanical arm N2 thereby follows the change of the operator's gesture pose.
So that the operator can adjust the arm's position and attitude more intuitively during operation, the pose change of the virtual mechanical arm N6 can be displayed in real time in the video glasses N4.
Step 8 further comprises: redrawing the virtual mechanical arm N6 in the view volume according to the solved joint angle values of the virtual mechanical arm N6. The virtual mechanical arm N6 is redrawn in the perspective view volume of the video glasses N4 according to the joint angle values obtained by the robot inverse kinematics algorithm, so that the end pose of the virtual mechanical arm N6 always matches the end pose of the virtual traction gesture.
Step 9, judging whether a detachable hand shape appears, if so, stopping changing the pose of the multi-freedom-degree mechanical arm along with the pose of the traction hand shape of the operator, and executing the step 3; otherwise, step 8 is performed. The method comprises the steps of judging whether the gesture of an operator is changed into a detaching gesture in real time in the traction process, wherein the detaching gesture can be set to be in a left hand fist making state, if the gesture of the operator is changed into the detaching gesture, the pose of the tail end of the N6 virtual mechanical arm is not controlled by the operator any more, and the tail end of the N6 virtual mechanical arm can be vividly considered to be detached from the gesture of the operator at the moment. The detachment gesture can be any gesture, can be a one-hand gesture, and can also be a two-hand gesture. The towing process is now complete, and may end or may execute other commands.
Example 3
This embodiment provides an electronic device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor; when executed by the processor, the computer instructions perform the steps of the teleoperation control method described above.
Example 4
This embodiment provides a computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the steps of the teleoperation control method described above.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Although the present disclosure has been described with reference to specific embodiments, it should be understood that the scope of the present disclosure is not limited thereto, and those skilled in the art will appreciate that various modifications and changes can be made without departing from the spirit and scope of the present disclosure.

Claims (10)

1. A mobile robot control system, characterized in that: the system comprises a master-end wearable teleoperation control device and a slave-end robot that communicate wirelessly; the master-end wearable teleoperation control device is worn by the operator and is used for sending control instructions and receiving data collected by the slave-end robot;
the master-end wearable teleoperation control device comprises a wearable binocular camera device, a head-mounted virtual display, a teleoperation controller, and master-end wireless communication equipment, wherein the teleoperation controller is respectively connected with the wearable binocular camera device, the head-mounted virtual display, and the master-end wireless communication equipment; the wearable binocular camera device is used for acquiring images of the operator's gestures; and the head-mounted virtual display is used for displaying images captured by the slave-end robot and for displaying a virtual model of the slave-end robot's mechanical arm and a virtual model of the operator's gestures;
the teleoperation controller is configured to:
process the captured dual-view images to solve the pose P_H of the traction gesture in the coordinate system of the wearable binocular camera device; translate P_H into a pose description P_V in the screen coordinate system of the head-mounted virtual display; use the transformed pose P_V to drive the virtual gesture model in the view volume of the head-mounted virtual display; judge whether the difference between the pose P_V of the virtual gesture model and the end pose P_M of the virtual mechanical arm N6 is smaller than a preset threshold; and, if so, make the pose of the multi-degree-of-freedom mechanical arm follow the operator's traction hand-shape pose.
2. The mobile robot control system according to claim 1, characterized in that: the slave-end robot comprises a mobile robot body, a multi-degree-of-freedom mechanical arm, a reconnaissance camera, wireless communication equipment, and a vehicle-mounted controller, wherein the vehicle-mounted controller is respectively connected with the mobile robot body N1, the multi-degree-of-freedom mechanical arm N2, the reconnaissance camera N3, and the wireless communication equipment; the mobile robot body moves its position under the control of the master-end wearable teleoperation control device; the vehicle-mounted multi-degree-of-freedom mechanical arm executes corresponding actions under the control of the master-end wearable teleoperation control device; and the virtual model of the slave-end robot's mechanical arm is the virtual model of the multi-degree-of-freedom mechanical arm.
3. The mobile robot control system according to claim 1, wherein: the mobile robot body further comprises a vehicle body driving motor set and a motor driver, and the motor driver is connected with the vehicle-mounted controller and the driving motor set respectively.
4. The teleoperation control method of the robot end pose of the mobile robot control system according to any one of claims 1 to 3, characterized by comprising the steps of:
step 1, setting a traction hand shape and a detaching hand shape;
step 2, constructing a virtual mechanical arm and a virtual gesture model and displaying them at the front end of the view volume of the head-mounted virtual display;
step 3, collecting dual-view images from the binocular camera;
step 4, detecting with a gesture detection algorithm whether the operator's gesture is present in the dual-view images; if yes, executing the next step, otherwise executing step 3;
step 5, recognizing the hand shape of the gesture with a hand-shape recognition algorithm and judging whether the traction hand shape appears; if yes, executing the next step, otherwise executing step 3;
step 6, processing the captured dual-view images to solve the pose P_H of the traction gesture in the coordinate system of the wearable binocular camera device, translating P_H into a pose description P_V in the screen coordinate system of the head-mounted virtual display, and using the transformed pose P_V to drive the virtual gesture model in the view volume of the head-mounted virtual display;
step 7, judging whether the difference between the pose P_V of the virtual gesture model and the end pose P_M of the virtual mechanical arm N6 is smaller than a preset threshold; if yes, executing the next step; otherwise, executing step 3;
step 8, making the pose of the multi-degree-of-freedom mechanical arm follow the operator's traction hand-shape pose;
step 9, judging whether the detaching hand shape appears; if yes, stopping the multi-degree-of-freedom mechanical arm from following the operator's traction hand-shape pose and executing step 3; otherwise, executing step 8.
5. The teleoperation control method of the robot end pose of the mobile robot control system according to claim 4, characterized in that: making the pose of the multi-degree-of-freedom mechanical arm follow the operator's traction hand-shape pose in step 8 comprises the following steps:
setting the end pose P_M of the virtual mechanical arm equal to the pose P_V of the virtual gesture model and solving for the corresponding joint angle values of each joint of the virtual mechanical arm;
converting the solved joint angle values of the virtual mechanical arm into control instructions and transmitting them to the slave-end robot, so that the joint angle of each joint of the multi-degree-of-freedom mechanical arm equals the corresponding joint angle value of the virtual mechanical arm;
and/or
the step 8 further comprises: and redrawing the virtual mechanical arm N6 in the view volume according to the solved joint angle value corresponding to the virtual mechanical arm N6.
6. The teleoperation control method of the robot end pose of the mobile robot control system according to claim 4, characterized in that: the position information of the traction gesture pose P_H in the wearable binocular camera coordinate system is proportional to the position information of the pose P_V, and the attitude information of P_H is likewise proportional to the attitude information of P_V.
7. The teleoperation control method of the robot end pose of the mobile robot control system according to claim 4, characterized in that: constructing the virtual mechanical arm and displaying it at the front end of the view volume of the head-mounted virtual display in step 2 specifically comprises:
reading joint angle information of each joint of the multi-degree-of-freedom mechanical arm of the slave robot;
the teleoperation controller calculates the D-H parameters of the multi-degree-of-freedom mechanical arm according to the collected joint angle information;
and constructing a virtual mechanical arm according to the D-H parameters of the multi-degree-of-freedom mechanical arm and displaying it at the front end of the view volume of the head-mounted virtual display.
8. The teleoperation control method of the robot end pose of the mobile robot control system according to claim 4, characterized in that: before step 3, the method further comprises displaying the image captured by the slave-end robot on the head-mounted virtual display:
acquiring a reconnaissance image from the slave-end robot's end;
the teleoperation controller receiving the reconnaissance image and displaying it in real time at the rear end of the view volume of the head-mounted virtual display.
9. An electronic device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of any one of claims 4 to 8.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the steps of the method of any one of claims 4 to 8.
CN201910363155.4A 2019-04-30 2019-04-30 Mobile robot control system and teleoperation control method for robot end pose Active CN109955254B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201910363155.4A CN109955254B (en) 2019-04-30 2019-04-30 Mobile robot control system and teleoperation control method for robot end pose
PCT/CN2020/087846 WO2020221311A1 (en) 2019-04-30 2020-04-29 Wearable device-based mobile robot control system and control method
KR1020207030337A KR102379245B1 (en) 2019-04-30 2020-04-29 Wearable device-based mobile robot control system and control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910363155.4A CN109955254B (en) 2019-04-30 2019-04-30 Mobile robot control system and teleoperation control method for robot end pose

Publications (2)

Publication Number Publication Date
CN109955254A (en) 2019-07-02
CN109955254B (en) 2020-10-09

Family

ID=67026942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910363155.4A Active CN109955254B (en) 2019-04-30 2019-04-30 Mobile robot control system and teleoperation control method for robot end pose

Country Status (1)

Country Link
CN (1) CN109955254B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020221311A1 (en) * 2019-04-30 2020-11-05 齐鲁工业大学 Wearable device-based mobile robot control system and control method
CN110413113A (en) * 2019-07-18 2019-11-05 华勤通讯技术有限公司 A kind of on-vehicle machines people and exchange method
CN110394803A (en) * 2019-08-14 2019-11-01 纳博特南京科技有限公司 A kind of robot control system
CN110815258B (en) * 2019-10-30 2023-03-31 华南理工大学 Robot teleoperation system and method based on electromagnetic force feedback and augmented reality
CN111476909B (en) * 2020-03-04 2021-02-02 哈尔滨工业大学 Teleoperation control method and teleoperation control system for compensating time delay based on virtual reality
KR20230003003A (en) * 2020-07-01 2023-01-05 베이징 서제리 테크놀로지 씨오., 엘티디. Master-slave movement control method, robot system, equipment and storage medium
CN112405530B (en) * 2020-11-06 2022-01-11 齐鲁工业大学 Robot vision tracking control system and control method based on wearable vision
CN112650120A (en) * 2020-12-22 2021-04-13 华中科技大学同济医学院附属协和医院 Robot remote control system, method and storage medium
CN113146612A (en) * 2021-01-05 2021-07-23 上海大学 Virtual-real combination and man-machine interaction underwater remote control robot manipulator operation system and method
CN113822251B (en) * 2021-11-23 2022-02-08 齐鲁工业大学 Ground reconnaissance robot gesture control system and control method based on binocular vision
CN114713421B (en) * 2022-05-05 2023-03-24 罗海华 Control method and system for remote control spraying
CN114683288B (en) * 2022-05-07 2023-05-30 法奥意威(苏州)机器人系统有限公司 Robot display and control method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3214835A1 (en) * 2016-02-16 2017-09-06 Ricoh Company, Ltd. Information terminal, recording medium, communication control method, and communication system
CN108453742A (en) * 2018-04-24 2018-08-28 南京理工大学 Robot man-machine interactive system based on Kinect and method
CN108638069A (en) * 2018-05-18 2018-10-12 南昌大学 A kind of mechanical arm tail end precise motion control method
CN108828996A (en) * 2018-05-31 2018-11-16 四川文理学院 A kind of the mechanical arm remote control system and method for view-based access control model information
CN109219856A (en) * 2016-03-24 2019-01-15 宝利根 T·R 有限公司 For the mankind and robot cooperated system and method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104057450B (en) * 2014-06-20 2016-09-07 哈尔滨工业大学深圳研究生院 A kind of higher-dimension motion arm teleoperation method for service robot
JP6654066B2 (en) * 2016-03-11 2020-02-26 ソニー・オリンパスメディカルソリューションズ株式会社 Medical observation device
CN109933097A (en) * 2016-11-21 2019-06-25 清华大学深圳研究生院 A kind of robot for space remote control system based on three-dimension gesture
US10486311B2 (en) * 2017-02-20 2019-11-26 Flir Detection, Inc. Robotic gripper camera
CN108044625B (en) * 2017-12-18 2019-08-30 中南大学 A kind of robot arm control method based on the virtual gesture fusion of more Leapmotion
CN208713510U (en) * 2018-08-01 2019-04-09 珠海市有兴精工机械有限公司 CNC processing elasticity crawl gripper
CN109571403B (en) * 2018-12-12 2021-12-03 杭州申昊科技股份有限公司 Intelligent inspection robot for magnetic track trace navigation and navigation method thereof
CN109514521B (en) * 2018-12-18 2020-06-26 合肥工业大学 Servo operation system and method for human hand cooperation dexterous hand based on multi-information fusion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3214835A1 (en) * 2016-02-16 2017-09-06 Ricoh Company, Ltd. Information terminal, recording medium, communication control method, and communication system
CN109219856A (en) * 2016-03-24 2019-01-15 宝利根 T·R 有限公司 For the mankind and robot cooperated system and method
CN108453742A (en) * 2018-04-24 2018-08-28 南京理工大学 Robot man-machine interactive system based on Kinect and method
CN108638069A (en) * 2018-05-18 2018-10-12 南昌大学 A kind of mechanical arm tail end precise motion control method
CN108828996A (en) * 2018-05-31 2018-11-16 四川文理学院 A kind of the mechanical arm remote control system and method for view-based access control model information

Also Published As

Publication number Publication date
CN109955254A (en) 2019-07-02

Similar Documents

Publication Publication Date Title
CN109955254B (en) Mobile robot control system and teleoperation control method for robot end pose
WO2020221311A1 (en) Wearable device-based mobile robot control system and control method
CN109164829B (en) Flying mechanical arm system based on force feedback device and VR sensing and control method
Krupke et al. Comparison of multimodal heading and pointing gestures for co-located mixed reality human-robot interaction
CN110039545B (en) Robot remote control system and control method based on wearable equipment
US10384348B2 (en) Robot apparatus, method for controlling the same, and computer program
CN108127669A (en) A kind of robot teaching system and implementation based on action fusion
CN102814814B (en) Kinect-based man-machine interaction method for two-arm robot
CN111459277B (en) Mechanical arm teleoperation system based on mixed reality and interactive interface construction method
CN112634318B (en) Teleoperation system and method for underwater maintenance robot
CN107968915A (en) Underwater robot camera pan-tilt real-time control system and its method
CN106444810A (en) Unmanned plane mechanical arm aerial operation system with help of virtual reality, and control method for unmanned plane mechanical arm aerial operation system
CN113183133B (en) Gesture interaction method, system, device and medium for multi-degree-of-freedom robot
CN113021357A (en) Master-slave underwater double-arm robot convenient to move
CN111590567B (en) Space manipulator teleoperation planning method based on Omega handle
CN108828996A (en) A kind of the mechanical arm remote control system and method for view-based access control model information
JP2020196060A (en) Teaching method
CN112405530B (en) Robot vision tracking control system and control method based on wearable vision
CN205983222U (en) Unmanned aerial vehicle machine carries hardware connection structure of first visual angle nacelle device
CN110695990A (en) Mechanical arm control system based on Kinect gesture recognition
CN207888651U (en) A kind of robot teaching system based on action fusion
CN107363831B (en) Teleoperation robot control system and method based on vision
Chu et al. Hands-free assistive manipulator using augmented reality and tongue drive system
CN114714358A (en) Method and system for teleoperation of mechanical arm based on gesture protocol
CN112959342B (en) Remote operation method for grabbing operation of aircraft mechanical arm based on operator intention identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210406

Address after: 203-d, Shanke Zhongchuang space, 19 Keyuan Road, Lixia District, Jinan City, Shandong Province

Patentee after: Shanke Huazhi (Shandong) robot intelligent technology Co.,Ltd.

Address before: 250353 University Road, Changqing District, Ji'nan, Shandong Province, No. 3501

Patentee before: Qilu University of Technology
