WO2020221311A1 - Mobile robot control system and control method based on wearable devices - Google Patents
Mobile robot control system and control method based on wearable devices
- Publication number
- WO2020221311A1 (PCT/CN2020/087846)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- hand
- control
- robot
- virtual
- degree
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/006—Controls for manipulators by means of a wireless system for controlling one or several manipulators
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1671—Programme controls characterised by programming, planning systems for manipulators characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/002—Specific input/output arrangements not covered by G06F3/01 - G06F3/16
- G06F3/005—Input arrangements through a video camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Definitions
- the present disclosure relates to the technical field of remote control of mobile robots, and in particular to a mobile robot control system and control method based on wearable devices.
- a mobile reconnaissance robot is usually composed of a mobile robot body and a vehicle-mounted reconnaissance system. It can perform various combat tasks such as battlefield approach reconnaissance and surveillance, stealth raids, fixed-point clearance, nuclear, biological and chemical processing, and anti-terrorism and EOD.
- the traditional vehicle-mounted reconnaissance system is generally composed of a camera and a two-degree-of-freedom gimbal, and its control method generally realizes pitch and yaw control of the gimbal through the pitch and yaw angle information of the joystick.
- the reconnaissance system is generally composed of a multi-degree-of-freedom manipulator and a reconnaissance camera, where the reconnaissance camera is fixedly connected to the end of the multi-degree-of-freedom manipulator.
- the end pose of the robot refers to the position and posture of the end effector of the robot in the specified coordinate system.
- the end effector of the mobile reconnaissance robot is a camera.
- the end pose of the reconnaissance robot is determined by the end pose of the multi-degree-of-freedom manipulator, which has many degrees of freedom.
- the end pose control of the robotic arm usually uses buttons, or a joystick combined with buttons. The operator needs to memorize the correspondence between each button and each joint of the vehicle-mounted multi-degree-of-freedom manipulator, so this operation method is very complicated and not intuitive.
- gestures can also be used to control the end pose of a vehicle-mounted multi-degree-of-freedom reconnaissance system.
- a more common gesture control method is to use data gloves or inertial elements. Its advantages are a high recognition rate and good stability, but it cannot control the end position of the vehicle-mounted multi-degree-of-freedom reconnaissance system, only the attitude; in addition, the input devices are expensive and inconvenient to wear.
- the other gesture control method is based on vision. This control method can be divided into control methods based on image classification and control methods based on image processing. The former generally analyzes the type of gesture through visual sensors combined with pattern recognition methods;
- its disadvantage is that it cannot quickly and accurately realize continuous control of the end pose of the vehicle-mounted multi-degree-of-freedom reconnaissance system;
- the latter generally analyzes the motion trajectory of gestures through visual sensors combined with image processing methods, and then realizes position control of the end of the vehicle-mounted multi-degree-of-freedom reconnaissance system based on the position information of the trajectory;
- its disadvantage is that it cannot realize control of the end attitude of the vehicle-mounted multi-degree-of-freedom reconnaissance system.
- the traditional remote control system of a mobile robot is usually implemented by a control box or console equipped with a joystick and buttons.
- the buttons of the control box are relatively complicated, and the operator needs to memorize the correspondence between each button and the mobile robot or the vehicle-mounted manipulator, so the control method is very unintuitive.
- mobile robots and vehicle-mounted reconnaissance systems cannot get rid of their dependence on joysticks, and joysticks need the support of a control box and related hardware devices; the controllers of traditional mobile reconnaissance robots are therefore generally large, which makes them inconvenient to carry and transport.
- gesture is one of the more natural means of human communication. For special forces in particular, sign language is a necessary means of communicating with teammates and conveying instructions; when it is inconvenient to communicate by voice, gestures are almost the only means of communication and command among special forces.
- human-computer interaction teleoperation based on human gestures mainly adopts worn data gloves or inertial elements. Their advantages are a high recognition rate and good stability, but the input devices are expensive and inconvenient to wear. Therefore, for fully armed soldiers, improving the portability and intuitiveness of the human-machine interactive teleoperation control system of ground armed reconnaissance robots is a very urgent need.
- the present disclosure proposes a mobile robot control system and control method based on wearable devices for mobile robots equipped with multi-degree-of-freedom manipulators. Through virtual wearing and detaching, it realizes continuous control of the end position and posture of the multi-degree-of-freedom manipulator, solving the problems that the control methods of existing vehicle-mounted multi-degree-of-freedom reconnaissance systems of mobile reconnaissance robots are complicated and that the end pose of the vehicle-mounted multi-degree-of-freedom reconnaissance system cannot be controlled intuitively.
- the first aspect of the present disclosure provides a mobile robot control system based on a wearable device, including a master-end wearable teleoperation control device and a slave-end robot.
- the master-end wearable teleoperation control device and the slave-end robot communicate wirelessly;
- the master-end wearable teleoperation control device is worn on the operator and is used to send control instructions and receive data collected by the slave robot;
- the master-end wearable teleoperation control device includes a wearable binocular camera device, a head-mounted virtual display, a teleoperation controller and a master-end wireless communication device.
- the teleoperation controller is connected to the wearable binocular camera device, the head-mounted virtual display and the master-end wireless communication device, respectively. The wearable binocular camera device is used to collect images of the operator's gestures; the head-mounted virtual display is used to display the images taken by the slave robot, the virtual model of the slave robot's manipulator, and the virtual model of the operator's gestures.
- the operator's head carries a wearable binocular camera device and a head-mounted virtual display, which enables dual-view image collection.
- the head-mounted virtual display can simultaneously present the virtual models and the collected reconnaissance images, giving the operator a sense of being physically present in the remote environment and enabling intuitive control of the remote slave robot.
- the wearable devices free the operator's hands and reduce the operator's burden.
- a second aspect of the present disclosure provides a teleoperation control method for the end pose of a robot based on the above-mentioned mobile robot control system, which includes the following steps:
- Step 101 Set the traction hand type and the release hand type
- Step 102 Construct a virtual robot arm and a virtual gesture model, and display the virtual robot arm and the virtual gesture model on the front end of the visual body of the head-mounted virtual display;
- Step 103 Collect dual-view images of the binocular camera
- Step 104 Use a gesture detection algorithm to detect and determine whether there is an operator's gesture in the dual-view image, if yes, go to step 105, otherwise go to step 103;
- Step 105 Use a hand shape recognition algorithm to perform hand shape recognition on the gesture, and determine whether a traction hand shape appears, if yes, go to step 106, otherwise go to step 103;
- Step 106 Process the captured dual-view images, calculate the pose P_H of the traction gesture in the coordinate system of the wearable binocular camera device, and convert the pose P_H into the pose description P_V in the screen coordinate system of the head-mounted virtual display; use the transformed pose P_V to drive the virtual gesture model in the view volume of the head-mounted virtual display;
- Step 107 Determine whether the difference between the pose P_V of the virtual gesture model and the end pose P_M of the virtual manipulator N6 is less than a preset threshold; if yes, go to step 108, otherwise go to step 103;
- Step 108 Make the pose of the multi-degree-of-freedom manipulator follow the change of the operator's traction hand pose
- Step 109 Determine whether the release hand type appears. If it does, the pose of the multi-degree-of-freedom manipulator stops following the change of the operator's traction hand pose, and step 103 is executed; otherwise, step 108 is executed. (A minimal sketch of this control loop follows.)
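- a minimal sketch of the wear/detach control loop of steps 101-109 follows. All helper callables (capture_stereo_pair, detect_gesture, recognize_hand_shape, estimate_gesture_pose, camera_to_screen, pose_distance, drive_virtual_arm) are hypothetical placeholders for the algorithms named in this disclosure and are injected as arguments; this is an illustrative sketch, not the patent's implementation.

```python
# Sketch of the wearing/detaching teleoperation loop (steps 103-109).
# Every helper passed in is a hypothetical stand-in for the disclosure's
# gesture detection, hand-shape recognition and pose estimation algorithms.

POSE_THRESHOLD = 0.05  # preset threshold of step 107 (assumed value and units)

def teleoperation_loop(capture_stereo_pair, detect_gesture, recognize_hand_shape,
                       estimate_gesture_pose, camera_to_screen, pose_distance,
                       virtual_arm, drive_virtual_arm):
    attached = False  # becomes True once the virtual "wearing" of step 107 succeeds
    while True:
        left, right = capture_stereo_pair()               # step 103
        gesture = detect_gesture(left, right)             # step 104
        if gesture is None:
            continue
        shape = recognize_hand_shape(gesture)             # step 105
        if not attached:
            if shape != "traction":                       # wait for traction hand type
                continue
            p_h = estimate_gesture_pose(left, right)      # step 106: camera frame
            p_v = camera_to_screen(p_h)                   # screen frame of the display
            if pose_distance(p_v, virtual_arm.end_pose) < POSE_THRESHOLD:
                attached = True                           # step 107: virtually worn
        elif shape == "release":                          # step 109: detach hand type
            attached = False
        else:                                             # step 108: follow the hand
            p_v = camera_to_screen(estimate_gesture_pose(left, right))
            drive_virtual_arm(virtual_arm, p_v)           # arm tracks the gesture pose
```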
- in the teleoperation control method for the end pose of the robot provided in the second aspect of the present disclosure, when the traction hand type is detected, a driving relationship between the operator's gesture pose and the end pose of the multi-degree-of-freedom manipulator is established, realizing continuous control of the end pose of the multi-degree-of-freedom manipulator; when the release hand type is detected, detachment is performed so that the pose of the multi-degree-of-freedom manipulator stops following the change of the operator's traction hand pose.
- the wearing process and the detaching process between the end of the virtual manipulator and the virtual gesture model are displayed in the head-mounted virtual display, making the control process more intuitive.
- the third aspect of the present disclosure provides a control method based on the above-mentioned mobile robot control system, which separately collects the actions of the operator's left and right hands, controls the movement of the mobile robot body through the actions of one hand, and controls the motion of the on-board multi-degree-of-freedom manipulator of the mobile robot through the actions of the other hand. The method includes the following steps:
- Step 201 Collect images within the shooting range of the wearable device of the operator;
- Step 202 Determine whether there is a hand area in the collected image; if not, go to step 201; otherwise, preprocess the collected image to obtain a hand patch;
- Step 203 Use the left-right hand discrimination algorithm to determine whether the obtained hand patch is a left-hand patch or a right-hand patch, so as to determine whether the movement is of the left hand or the right hand;
- Step 204 Control the movement of the vehicle body of the mobile robot by the movement of one of the hands, and control the movement of the on-board multi-degree-of-freedom manipulator of the mobile robot by the movement of the other hand, and then perform step 201.
- the control method proposed in the third aspect of the present disclosure uses the operator's different hands to control different moving parts of the slave robot, respectively controlling the movement of the vehicle body and the action of the end of the multi-degree-of-freedom manipulator. Separate control by the left and right hands allows the slave robot to execute commanded actions more accurately and reduces its error rate.
- a fourth aspect of the present disclosure provides a control method based on the above-mentioned mobile robot control system.
- the method for controlling wide-range movement of the robot's on-board multi-degree-of-freedom manipulator includes the following steps:
- Step 301 Set the engaging hand type and the corresponding gesture actions;
- the engaging hand type can be set so that the end of the vehicle-mounted multi-degree-of-freedom manipulator waits at its current position for the next command;
- Step 302 Collect images within the shooting range of the wearable device of the operator;
- Step 303 Determine whether there is a hand area in the collected image; if not, go to step 302; otherwise, preprocess the collected image to obtain a hand patch, and go to the next step;
- Step 304 Use a hand shape recognition algorithm to perform hand shape recognition on the preprocessed hand patch to obtain hand shape information;
- Step 305 Determine whether the obtained hand shape information is the engaging hand type. If it is, the end of the vehicle-mounted multi-degree-of-freedom manipulator continues to execute the control instruction corresponding to the hand shape that immediately preceded the engaging hand type, and step 302 is executed; otherwise, go to the next step.
- Step 306 Perform a corresponding action according to the corresponding hand shape, and perform step 302.
- the control method of the fourth aspect of the present disclosure can realize incremental, continuous and precise control of the end position of the reconnaissance system by setting the engaging hand type, and the control better matches human operating habits.
- a wearable binocular camera device and a head-mounted virtual display are set on the operator's head, which enables dual-view image collection; the head-mounted virtual display can simultaneously present the virtual models and the collected reconnaissance images, giving the operator an immersive feeling and enabling intuitive control of the remote slave robot, while the wearable devices free the operator's hands and reduce the operator's burden.
- the control process is: pose of the operator's gesture → pose of the virtual gesture model → end pose of the virtual manipulator → end pose of the multi-degree-of-freedom manipulator.
- the control method proposed in the third aspect of the present disclosure uses the operator's different hands to control different moving parts of the slave robot. Separate control by the left and right hands allows the slave robot to perform commanded actions more accurately. Different gesture modalities, hand shape recognition and motion trajectory recognition, are set for controlling the different parts, which again distinguishes whether the mobile robot body or the vehicle-mounted multi-degree-of-freedom manipulator is being controlled, reducing the misoperation rate of the slave robot.
- the operator's two hand movements are different types of movements, which avoids causing confusion for the operator.
- the control logic is simple, easy to remember and easy to operate.
- the control method proposed in the fourth aspect of the present disclosure turns the area within the field of view of the operator's ear-hook cameras into a virtual touch screen area by setting the engaging hand type, freeing the operator from dependence on a physical controller. Compared with using gesture types only to achieve discrete control of robot actions, the present disclosure can achieve incremental, continuous and precise control of the end position of the reconnaissance system, and the control better matches human operating habits.
- the present disclosure can achieve precise control of the detection direction of the mobile reconnaissance robot vehicle-mounted multi-degree-of-freedom manipulator.
- FIG. 1 is a schematic diagram of virtual wear in Embodiment 2 of the present disclosure
- FIG. 2 is a schematic diagram of virtual detachment of Embodiment 2 of the present disclosure
- Figure 4 is a schematic structural diagram of a system according to one or more embodiments.
- Fig. 5 is a block diagram of a wearable remote operation control device at the master end in Embodiment 1 of the present disclosure
- FIG. 6 is a block diagram of the slave robot in Embodiment 1 of the present disclosure.
- FIG. 7 is a flowchart of the method of Embodiment 3 of the present disclosure.
- FIG. 8 is a flowchart of the method of Embodiment 4 of the present disclosure.
- FIG. 9 is a schematic diagram of a gesture used by an operator to control the movement of the mobile robot body in Embodiment 4 of the present disclosure.
- FIG. 10 is a schematic diagram of a gesture of the operator controlling the end of the vehicle-mounted multi-degree-of-freedom manipulator to turn left in Embodiment 4 of the present disclosure
- FIG. 11 is a schematic diagram of a gesture of the operator controlling the end of the vehicle-mounted multi-degree-of-freedom manipulator to turn right in Embodiment 4 of the disclosure;
- FIG. 12 is a schematic diagram of a gesture of an operator controlling the end of the vehicle-mounted multi-degree-of-freedom manipulator to move up in Embodiment 4 of the present disclosure
- FIG. 13 is a schematic diagram of a gesture of an operator controlling the end of the vehicle-mounted multi-degree-of-freedom manipulator to move down in Embodiment 4 of the present disclosure
- FIG. 14 is a schematic diagram of a gesture of the operator controlling the end of the vehicle-mounted multi-degree-of-freedom manipulator to tilt up in Embodiment 4 of the present disclosure
- FIG. 15 is a schematic diagram of the gesture of the operator controlling the end of the vehicle-mounted multi-degree-of-freedom manipulator to bend down in Embodiment 4 of the present disclosure
- N1, mobile robot body; N2, multi-degree-of-freedom manipulator; N3, reconnaissance camera; N4, video glasses; N5, binocular camera; N6, virtual manipulator;
- 100, master-end wearable teleoperation control device; 101, teleoperation controller; 102, left wearable vision device; 103, right wearable vision device; 104, head-mounted virtual display; 105, wireless audio prompt device; 106, wireless data transmission device; 107, wireless image transmission device;
- 200, slave robot; 201, vehicle-mounted controller; 202, mobile robot body; 203, linkage mechanism; 204, weapon device; 205, laser ranging sensor; 206, hand-eye monitoring camera; 207, reconnaissance camera; 208, lidar; 209, slave-end wireless data transmission device; 210, slave-end wireless image transmission device; 211, motor driver; 212, robotic arm driver; 213, vehicle body drive motor group; 214, robotic arm drive motor group.
- robots can be divided into many categories according to their different end effectors.
- the end effector is fixed on the end of the robot arm to perform corresponding tasks; end effectors include dexterous hands, grippers, cameras, etc. The end effector of the reconnaissance robot is a camera.
- this embodiment takes a reconnaissance robot as an example for description, but the continuous control method for the end pose of the robot in the present disclosure is not limited to reconnaissance robots and is applicable to the control of all robots.
- a mobile robot control system includes a master-end wearable teleoperation control device and a slave-end robot.
- the master-end wearable teleoperation control device and the slave robot communicate wirelessly; the master-end wearable teleoperation control device is worn on the operator and is used to send control instructions and receive data collected by the slave robot;
- the master-end wearable teleoperation control device includes a wearable binocular camera device, a head-mounted virtual display, a teleoperation controller and a master-end wireless communication device.
- the teleoperation controller is connected to the wearable binocular camera device, the head-mounted virtual display and the master-end wireless communication device, respectively. The wearable binocular camera device is used to collect images of the operator's gestures; the head-mounted virtual display is used to display the images taken by the slave robot, the virtual model of the slave robot's manipulator, and the virtual model of the operator's gestures. The binocular camera device enables the collection of dual-view images.
- the teleoperation controller may be a wearable computer that collects, in real time, the dual-view images of gestures taken by the wearable binocular camera device, calculates the pose information of the operator's gesture from those dual-view images, and, according to the gesture pose information, displays a virtual gesture model in real time at the front end of the perspective view volume of the video glasses;
- the wearable binocular camera device may be a binocular camera N5, and the binocular camera N5 is used to collect dual-view images of the operator's gesture.
- the operator uses the gesture pose within the field of view of the binocular camera N5 to control the end pose of the vehicle-mounted multi-degree-of-freedom reconnaissance system.
- the head-mounted virtual display can be video glasses N4, used to display the reconnaissance images taken by the reconnaissance camera N3 of the slave robot, the virtual model of the multi-degree-of-freedom manipulator N2, and the virtual model of the operator's gestures. The reconnaissance image can be located at the rear end of the perspective view volume of the video glasses, while the virtual model of the multi-degree-of-freedom manipulator N2 and the virtual model of the operator's gestures are located at the front end of the perspective view volume; this embodiment adopts a perspective view volume display, but other view volumes can also be used.
- the perspective view volume is the view volume formed by perspective projection.
- the perspective projection view volume is similar to a pyramid with its top cut off, that is, a frustum. Its characteristic is that near objects appear large and far objects appear small.
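- for reference, the "near large, far small" property follows from the standard pinhole perspective projection (a textbook relation, not a formula from this disclosure): a point $(X, Y, Z)$ in camera coordinates, with focal length $f$, projects to

$$x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z},$$

so the projected size of an object shrinks in proportion to its depth $Z$ within the frustum-shaped view volume.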
- the slave robot includes a mobile robot body N1, a multi-degree-of-freedom manipulator N2, a reconnaissance camera N3, a slave-end wireless communication device, and a vehicle-mounted controller; the vehicle-mounted controller is connected to the mobile robot body N1, the multi-degree-of-freedom manipulator N2, the reconnaissance camera N3 and the slave-end wireless communication device.
- the reconnaissance camera N3 is installed at the end of the multi-degree-of-freedom manipulator N2 for collecting reconnaissance data.
- the mobile robot body N1 also includes a vehicle body drive motor group and a motor driver; the motor driver is connected to the vehicle-mounted controller on one side and the drive motor group on the other.
- the mobile robot body N1 is controlled by the master-end wearable teleoperation control device, through the vehicle-mounted controller, to move its position.
- the vehicle-mounted controller sends the control command to the motor driver, and the motor driver controls the corresponding motors of the drive motor group to realize the movement of the slave-end robot.
- the multi-degree-of-freedom manipulator N2 receives the control of the master-end wearable teleoperation control device and executes corresponding actions.
- the multi-degree-of-freedom manipulator N2 includes a linkage mechanism, a mechanical arm driver and a mechanical arm drive motor group.
- the on-board controller sends the control command to the robotic arm driver, and the robotic arm driver drives the corresponding motors of the robotic arm drive motor group to realize the movement of the linkage mechanism's angles and positions, thereby changing the joint angle information of each joint of the multi-degree-of-freedom manipulator N2.
- the virtual model of the robot arm of the slave robot is the virtual model of the multi-degree-of-freedom robot arm N2.
- the virtual model of the multi-degree-of-freedom manipulator N2 may be a virtual manipulator N6 drawn according to the D-H parameters of the multi-degree-of-freedom manipulator N2.
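- a minimal sketch of rendering a virtual manipulator from D-H parameters follows: standard Denavit-Hartenberg forward kinematics that yields the base-frame position of each joint for drawing the arm. The D-H constants below are illustrative assumptions, not the parameters of the manipulator N2.

```python
# Forward kinematics from a Denavit-Hartenberg table, for drawing a virtual arm.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform from link frame i-1 to link frame i (standard D-H)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def joint_positions(joint_angles, dh_table):
    """Base-frame position of each joint; these points are what gets rendered."""
    T = np.eye(4)
    points = [T[:3, 3].copy()]
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
        points.append(T[:3, 3].copy())
    return points

# Example: a made-up 3-joint arm, (d, a, alpha) per joint.
dh_table = [(0.10, 0.00, np.pi / 2), (0.00, 0.30, 0.0), (0.00, 0.25, 0.0)]
print(joint_positions([0.0, np.pi / 4, -np.pi / 4], dh_table))
```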
- a robot remote control system based on a wearable device includes a master-end wearable teleoperation control device 100 and a slave-end robot 200 that are wirelessly connected; the master-end wearable teleoperation control device 100 is worn on the operator and is used to send control instructions and receive data collected by the slave robot 200;
- the master-end wearable teleoperation control device 100 includes a wearable binocular camera device, a head-mounted virtual display 104, a master-end wireless communication device and a teleoperation controller 101; the wearable binocular camera device, the head-mounted virtual display 104 and the master-end wireless communication device are respectively connected to the teleoperation controller. The wearable binocular camera device is worn on the operator's head to collect the operator's actions, and the teleoperation controller 101 generates control instructions according to the corresponding actions and sends them to the slave robot 200.
- the wearable binocular camera device includes a left wearable vision device 102 and a right wearable vision device 103, worn on the left and right sides of the operator's head; they capture the image in front of the operator to collect the motion information of the operator's hands.
- the hand motion information may include position information and hand shape information of the hand in the image.
- the left wearable vision device and the right wearable vision device may specifically be ear-hook cameras.
- the head-mounted virtual display 104 can display pictures taken by the surveillance camera carried by the slave robot 200; the teleoperation controller receives the picture information taken by the slave robot 200, and controls the head-mounted virtual display 104 to display the pictures taken on-site.
- the virtual display 104 may specifically be video glasses.
- the master-end wireless communication device realizes wireless transmission through wireless transmission modules, and can be divided into a wireless data transmission device 106 for transmitting data and a wireless image transmission device 107 for transmitting image and video data. It realizes information transmission between the master-end wearable teleoperation control device 100 and the slave robot 200, and is specifically used to send control instructions to the slave robot 200, receive sensor data sent back by the slave robot 200, and receive image data sent back by the slave robot 200.
- the wireless transmission module may include an image transmission station for transmitting image data and a data transmission station for transmitting control commands, such as a 5.8GHz wireless image transmission station and a 433MHz wireless data transmission station. If the remote control distance is short, a WIFI communication module can be used to realize image transmission and control command transmission at the same time.
- the master-end wearable teleoperation control device 100 may also include a wireless audio prompt device 105, which is connected to the teleoperation controller 101 and is used to prompt the operator of the control instruction to be executed.
- the slave robot 200 may be specifically a ground-armed reconnaissance robot for performing reconnaissance tasks, including a mobile robot body and a vehicle-mounted multi-degree-of-freedom manipulator.
- the mobile robot car body may include a mobile robot body 202, a car body drive motor group 213, a motor driver 211, a reconnaissance camera 207, a lidar 208, a slave-end wireless communication device and a vehicle-mounted controller 201.
- the slave-end wireless communication device includes a slave-end wireless data transmission device 209 and a slave-end wireless image transmission device 210 for transmitting data and images, respectively. The slave robot can move under the control of the master-end wearable teleoperation control device 100 and is used to replace the operator in entering dangerous areas to perform combat tasks.
- the motor driver 211, the vehicle body drive motor group 213, and the mobile robot body 202 are connected in sequence.
- the motor driver 211 is used to control the vehicle body drive motor group 213 according to the control instructions sent by the master.
- the vehicle body drive motor group 213 is connected to the mobile robot body 202 to realize the movement of the slave-end robot 200.
- the vehicle body driving motor group 213 includes at least a left motor and a right motor.
- the left motor and the right motor can rotate in the same direction, and can control the robot to move forward and backward.
- the left motor and the right motor can rotate in opposite directions, and can control the robot to turn left or right.
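- an illustrative mapping from motion commands to left/right motor directions, as just described (command names and speed values are assumptions):

```python
# Differential drive: equal-direction wheel rotation drives forward/backward,
# opposite-direction rotation turns the robot in place. Values are illustrative.
def wheel_speeds(command, speed=1.0):
    table = {
        "forward":  ( speed,  speed),   # both motors in the same direction
        "backward": (-speed, -speed),
        "left":     (-speed,  speed),   # opposite directions: turn left
        "right":    ( speed, -speed),   # opposite directions: turn right
        "stop":     (0.0, 0.0),
    }
    return table[command]               # (left_motor, right_motor)

print(wheel_speeds("left"))  # (-1.0, 1.0)
```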
- the lidar 208 is used to measure the obstacle information around the ground-armed reconnaissance robot at the slave end.
- the lidar 208 is connected to the onboard controller 201.
- the onboard controller 201 receives the measured obstacle information and transmits the obstacle information to the master terminal.
- the remote operation controller 101 can display obstacle information on the head-mounted virtual display 104 of the master terminal.
- the structure of the slave wireless communication device and the master wireless communication device can be the same, and the same wireless transmission module can be selected.
- the reconnaissance camera 207 is used to photograph battlefield environment information and can be directly set on the vehicle body.
- the reconnaissance camera 207 is connected to the vehicle controller 201 and is used to transmit the collected environmental images to the remote operation controller of the master terminal.
- the vehicle-mounted multi-degree-of-freedom manipulator includes a link mechanism 203, a manipulator drive motor group 214, a manipulator driver 212, a laser ranging sensor 205, a hand-eye monitoring camera 206 and a weapon device 204.
- the end of the link mechanism 203 is fixed with a hand-eye monitoring camera 206.
- the link mechanism 203, the robotic arm drive motor group 214 and the robotic arm driver 212 are sequentially connected.
- the link mechanism 203 is composed of at least two links.
- the robotic arm driver 212 receives the control information sent by the master end and, according to it, controls the robotic arm drive motor group 214 to work, thereby driving the linkage mechanism 203 to move to the position the operator wants; the hand-eye monitoring camera 206 provided at the end of the linkage mechanism 203 captures image information of the target of interest.
- the laser ranging sensor 205 and the weapon device 204 are used, together with the robotic arm, for reconnaissance and strike missions, and both can be set at the end of the linkage mechanism 203; the laser ranging sensor 205 is used to measure the distance to the target to be struck.
- the reconnaissance camera 207 and the hand-eye monitoring camera 206 are set to collect different images.
- the reconnaissance camera 207 collects environmental data, realizing collection of environmental images along the path as the slave robot 200 moves.
- the hand-eye monitoring camera 206 is used by the operator for image acquisition of key areas or regions of interest; the setting of the two cameras realizes image acquisition of the robot's work site without blind spots.
- the vehicle-mounted controller 201 can control and collect the data of the lidar 208, the laser ranging sensor 205, the reconnaissance camera 207 and the hand-eye monitoring camera 206, and send it wirelessly to the master-end teleoperation device; it can also receive the master end's control instructions through the slave-end wireless communication device, according to which the motor driver 211 or the robotic arm driver 212 controls the corresponding vehicle body drive motor group 213 or robotic arm drive motor group 214.
- this embodiment provides a teleoperation control method for the end pose of a robot based on the mobile robot control system described in Embodiment 1, as shown in Figures 1 to 3, specifically an end pose teleoperation control method for a multi-degree-of-freedom manipulator. The method can realize continuous control of the position and posture of the end of the robotic arm through the movement of gestures, and includes the following steps:
- Step 101 Set the traction hand type and the release hand type
- the traction hand type means that, when this hand type is detected, the pose of the virtual gesture model is kept coincident with the end pose of the virtual manipulator in the video glasses; the operator can then drive the position and posture (i.e., pose) of the virtual gesture model in the video glasses N4 through the pose of the gesture, and the virtual gesture model can perform real-time continuous control of the end pose of the virtual manipulator N6.
- the release hand type means that, when it is detected, the virtual gesture model no longer follows the operator's gesture movement, and the operator's gesture can no longer perform real-time continuous control of the virtual manipulator N6.
- the traction hand type and the release hand type can be any hand type, and can be set according to the needs.
- the traction hand type can be a hand type representing a Cartesian coordinate system: the ring finger and little finger are in a curved state, while the thumb, index finger and middle finger are in a straight state and perpendicular to each other, forming a Cartesian coordinate system;
- the release hand type can be a one-handed fist.
- steps of initialization and establishing a wireless connection may also be included.
- Step 102 Construct a virtual robotic arm and a virtual gesture model and display them on the front end of the visual body of the head-mounted virtual display;
- in step 102, the method of constructing a virtual robotic arm and displaying it on the front end of the view volume of the head-mounted virtual display is specifically as follows:
- the action of the multi-degree-of-freedom manipulator is controlled by the on-board controller; the robotic arm driver drives the corresponding motors of the robotic arm drive motor group to realize the movement of the angles and positions of the linkage mechanism, thereby changing the joint angle information of each joint of the multi-degree-of-freedom manipulator N2.
- the joint angle information of each joint of the multi-degree-of-freedom manipulator can be directly read by the on-board controller.
- the teleoperation controller calculates the D-H parameters of the multi-degree-of-freedom manipulator according to the collected joint angle information
- the angle of each joint of the virtual manipulator N6 is controlled by the received joint angle information; the base coordinate system of the virtual manipulator N6 is described in the screen coordinate system of the video glasses N4; the end coordinate system of the virtual manipulator N6 is denoted (O_M-X_M-Y_M-Z_M), and the end pose of the virtual manipulator N6 is denoted P_M, including position information and posture information;
- the construction method of the virtual gesture model can be specifically as follows:
- the reconnaissance environment information of the slave robot can also be displayed in the video glasses N4.
- the reconnaissance images collected by the reconnaissance camera N3 can be displayed in the view volume of the video glasses N4. The method may also include a step of displaying the image taken by the slave robot on the head-mounted virtual display, specifically as follows: collect the reconnaissance image from the slave robot; the teleoperation controller receives the reconnaissance image and displays it in real time at the rear end of the view volume of the head-mounted virtual display.
- Step 103 Collect dual-view images of the binocular camera N5; collect the hand shape information of the operator through the binocular camera N5.
- the dual-view image includes images of left and right views.
- Step 104 Use a gesture detection algorithm to determine whether there is an operator's gesture in the dual-view images. If yes, proceed to the next step; otherwise, return to step 103. As soon as the operator's gesture appears in the dual-view images, go to step 105.
- the gesture detection algorithm may specifically be a gesture detection algorithm based on a skin color threshold.
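- a minimal sketch of such a skin-color-threshold gesture detector, using OpenCV with commonly used YCrCb skin bounds (the threshold values are conventional defaults, not values taken from this disclosure):

```python
# Skin-color-threshold gesture detection: threshold in YCrCb space, clean up
# the mask, and return the bounding box of the largest skin-colored region.
import cv2
import numpy as np

def detect_hand_region(bgr_image, min_area=1000):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array([0, 133, 77]), np.array([255, 173, 127]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hands = [c for c in contours if cv2.contourArea(c) > min_area]
    if not hands:
        return None                                       # no gesture in this view
    return cv2.boundingRect(max(hands, key=cv2.contourArea))  # (x, y, w, h)
```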
- Step 105 Use a hand type recognition algorithm to perform hand type recognition on the gesture, and determine whether there is a traction hand type, if yes, go to the next step, otherwise go to step 103; the hand type recognition algorithm is specifically a hand type recognition algorithm based on deep learning.
- when the traction hand shape is detected in the dual-view images, the operator intends to control the multi-degree-of-freedom manipulator N2 by hand traction. If there is no traction hand shape, step 103 is performed again to collect the operator's hand shape information through the binocular camera N5.
- Step 106 Process the captured dual-view images, calculate the pose P_H of the traction gesture in the coordinate system of the wearable binocular camera device, and convert the pose P_H into the pose description P_V in the screen coordinate system of the head-mounted virtual display; use the transformed pose P_V to drive the virtual gesture model in the view volume of the head-mounted virtual display;
- the DeepPrior++ algorithm can be used to solve the pose P_H of the traction gesture in the coordinate system of the wearable binocular camera device.
- the DeepPrior++ algorithm can realize estimation of the gesture pose under stereo vision.
- solving the pose P_H of the traction gesture in the coordinate system of the wearable binocular camera device can also adopt the following steps:
- the traction gesture pose P_H includes position information and posture information.
- the position information is solved directly by using the gesture detection results in the left and right views and the parallax principle (see the triangulation relation below);
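- under the usual rectified-stereo assumption (focal length $f$, baseline $B$, disparity $d = x_l - x_r$ between the left- and right-view detections), the parallax principle gives the textbook triangulation

$$Z = \frac{f\,B}{d}, \qquad X = \frac{x_l\,Z}{f}, \qquad Y = \frac{y_l\,Z}{f},$$

which recovers the position of the gesture in the camera coordinate system.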
- the posture information of the traction gesture pose P_H is solved using a method based on regression learning, as follows:
- a hand-held three-axis attitude sensor can be rotated around its three axes in front of the binocular camera, and the dual-view gesture detection result images corresponding to each output of the attitude sensor are collected.
- the two frames of gesture images and the frame of attitude data acquired at the same moment are used as an input sample and an output sample, respectively.
- the collected dual-view gesture images and the corresponding attitude data are used as the input sample training set and the output sample set, respectively; once trained, the regression model solves the posture information of the traction gesture directly from the dual-view gesture images.
- in step 106, the correspondence between the operator's traction gesture and the virtual gesture model may first be established, and the pose P_H converted into the pose P_V through this correspondence.
- the specific correspondence can be a proportional relationship: the position information of the pose P_H of the operator's traction gesture in the coordinate system of the wearable binocular camera device is proportional to the position information of the pose P_V, and the posture information of the pose P_H is likewise proportional to the posture information of the pose P_V (written out below).
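- written out with assumed scale factors $k_p$ and $k_\theta$ (the disclosure does not fix their values), the proportional correspondence is

$$\mathbf{p}_V = k_p\,\mathbf{p}_H, \qquad \boldsymbol{\theta}_V = k_\theta\,\boldsymbol{\theta}_H,$$

where $\mathbf{p}$ and $\boldsymbol{\theta}$ denote the position and posture components of the respective poses.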
- the pose P_H of the traction gesture is described in the coordinate system of the binocular camera N5. The palm center of the traction gesture can be specified as the origin, with the palm-origin coordinate system of the traction gesture denoted (O_H-X_H-Y_H-Z_H); the direction pointed by the middle finger of the traction gesture is the X-axis direction, the direction pointed by the thumb is the Y-axis direction, and the direction pointed by the index finger is the Z-axis direction.
- the position information of the pose P_H is described by the offset of the palm origin O_H relative to the origin of the coordinate system of the binocular camera N5.
- the posture information of the pose P_H is described by the rotations of the X_H, Y_H and Z_H axes of the traction gesture's coordinate system relative to the axes of the coordinate system of the binocular camera N5.
- the pose P_V of the virtual traction gesture is described in the screen coordinate system of the video glasses N4. The palm center of the virtual traction gesture can be specified as the origin, with the palm-origin coordinate system of the virtual traction gesture denoted (O_V-X_V-Y_V-Z_V); the direction of the middle finger of the virtual traction gesture is the X-axis direction, the direction of the thumb is the Y-axis direction, and the direction of the index finger is the Z-axis direction. The position information of the pose P_V is described by the offset of the palm origin O_V relative to the origin of the screen coordinate system of the video glasses N4.
- the posture information of the pose P_V is described by the rotations of the X_V, Y_V and Z_V axes of the virtual traction gesture's coordinate system relative to the axes of the screen coordinate system of the video glasses N4.
- the transformed pose P_V is used to drive the virtual gesture model in the view volume of the head-mounted virtual display, so that the virtual gesture model begins to follow the movement of the operator's gesture.
- the driving method is specifically: the position information required for real-time rendering of the three-dimensional virtual gesture model loaded into the view volume of the head-mounted virtual display is directly assigned from the position information of the pose P_V, and the posture information required for real-time rendering in the view volume is directly assigned from the posture information of the pose P_V.
- Step 107 Determine whether the difference between the pose P_V of the virtual gesture model and the end pose P_M of the virtual manipulator N6 is less than a preset threshold; if yes, proceed to the next step; otherwise, return to step 103;
- step 107 realizes the wearing process: the pose P_V of the virtual gesture model approaches the end pose P_M of the virtual manipulator N6. Through step 106, the operator moves the traction gesture so that the virtual gesture model also moves, until the pose P_V of the virtual gesture model approaches the end pose P_M of the virtual manipulator N6.
- the specific implementation process of step 107 is as follows:
- the operator observes, in the perspective view volume of the video glasses N4, the relative relationship between the pose P_V of the virtual traction gesture and the end pose P_M of the virtual manipulator N6, and continuously moves the traction gesture pose P_H so that the difference between the pose P_V of the virtual traction gesture in the perspective view volume of the video glasses N4 and the end pose P_M of the virtual manipulator N6 decreases continuously. The difference between the two poses is described by a distance formula:
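- the formula itself does not survive in this text; one plausible reconstruction, treating the difference as a weighted sum of position and posture errors compared against the preset threshold $\varepsilon$ of step 107 ($\lambda$ being an assumed weighting factor), is

$$\Delta P = \left\lVert \mathbf{p}_V - \mathbf{p}_M \right\rVert_2 + \lambda\,\left\lVert \boldsymbol{\theta}_V - \boldsymbol{\theta}_M \right\rVert_2 < \varepsilon.$$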
- when the difference is less than the preset threshold, the virtual gesture model can be considered to be virtually worn on the end of the virtual manipulator N6.
- the wearing process is implemented by the teleoperation controller executing steps 103-107 repeatedly.
- once the wearing process is completed, the multi-degree-of-freedom manipulator N2 can be pulled.
- Step 108 Make the pose of the multi-degree-of-freedom manipulator follow the change of the operator's traction hand pose
- the steps of step 108 are specifically:
- the joint angle values of the virtual manipulator N6 corresponding to the virtual gesture pose are solved, and the corresponding joint angle values are converted into control instructions and transmitted to the slave robot, so that the joint angles of the joints of the multi-degree-of-freedom manipulator are equal to the joint angles of the virtual manipulator.
- the teleoperation controller converts the joint angles of the virtual manipulator N6 into control instructions and sends them to the slave robot through the wireless communication channel, so that the joint angles of the multi-degree-of-freedom manipulator N2 and those of the virtual manipulator N6 are the same; in this way, the pose of the multi-degree-of-freedom manipulator N2 follows the change of the operator's gesture pose.
- the pose of the virtual manipulator is adjusted accordingly, and the position and posture changes of the virtual manipulator N6 can be displayed in the video glasses N4 in real time.
- step 108 further includes: redrawing the virtual manipulator N6 in the view volume according to the calculated joint angle values of the virtual manipulator N6.
- the virtual manipulator N6 is redrawn in the perspective view volume of the video glasses N4 according to its joint angle values, so that the end pose of the virtual manipulator N6 always remains coincident with the pose of the virtual traction gesture.
- Step 109 Determine whether the release hand type appears. If it does, the pose of the multi-degree-of-freedom manipulator stops following the change of the operator's traction hand pose, and step 103 is executed; otherwise, step 108 is executed.
- the detachment gesture can be set to a left-hand fist state. If the operator's gesture becomes the detachment gesture, the end pose of the virtual manipulator N6 is no longer controlled by the operator; visually, the end of the virtual manipulator N6 can be considered to have virtually detached from the operator's gesture.
- the release gesture can be any hand shape, one-handed or two-handed. At this point, the traction process is over, and other commands can be executed.
- this embodiment provides a control method based on the robot control system described in Embodiment 1, which separately collects the actions of the operator's left and right hands, controls the movement of the mobile robot body through the actions of one hand, and controls the motion of the on-board multi-degree-of-freedom manipulator of the mobile robot through the actions of the other hand.
- the teleoperation controller can collect the images taken by the wearable binocular camera device and analyze whether the operator's left and right hands, their hand shapes, and their position coordinates are present in the image. When the operator's left and right hands are detected, corresponding control commands can be generated from the hand shape and position coordinates and sent to the slave robot 200 through the wireless communication device to control the movement of the slave robot 200 and the motion of the vehicle-mounted multi-degree-of-freedom manipulator.
- the name of a control command can also be fed back to the operator by the wireless audio prompt device before the control command is issued. In addition, the teleoperation controller can process the sensor data and monitoring images sent back by the slave robot 200 and received through the wireless communication device, and display them on the head-mounted virtual display 104.
- Step 201 Collect images within the shooting range of the wearable device of the operator;
- the wearable binocular camera device can specifically be wearable cameras set on the operator's head to collect images around the operator.
- the operator needs to place the corresponding hand within the camera range and make the corresponding actions, according to the control to be performed.
- left and right cameras can be set up, collecting the left image and the right image respectively.
- an image stitching method can be used to cut off the overlapping parts of the two images and stitch them into a wide-field-of-view image (an illustrative sketch follows).
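- an illustrative stitch of the two views under the simplifying assumption of a fixed, known overlap; a real system would estimate the overlap, e.g. by feature matching:

```python
# Stitch left/right views into one wide image by cropping an assumed overlap.
import numpy as np

def stitch_views(left, right, overlap_px=80):
    """left, right: HxWx3 arrays from the left and right cameras."""
    return np.hstack([left[:, :-overlap_px], right])  # drop the duplicated strip

left = np.zeros((480, 640, 3), dtype=np.uint8)
right = np.zeros((480, 640, 3), dtype=np.uint8)
print(stitch_views(left, right).shape)  # (480, 1200, 3)
```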
- Step 202 Determine whether there is a hand area in the collected image; if not, go to step 201; otherwise, preprocess the collected image to obtain a hand patch;
- the method of judging whether there is a hand in the collected image can use a gesture detection algorithm, and the gesture detection algorithm can specifically use a gesture detection algorithm based on skin color.
- the specific method of preprocessing the collected image to obtain the hand patch is: if the presence of a hand is detected, a gesture segmentation algorithm is used to eliminate the background in the area containing the hand, and scale normalization is then used to normalize the image containing the hand into hand patches of the same size.
- Step 203 Use the left-right hand discrimination algorithm to determine whether the obtained hand patch is a left-hand patch or a right-hand patch, so as to determine whether the movement is of the left hand or the right hand;
- the method of judging left or right hand with the left-right hand discrimination algorithm can be specifically as follows: the left-right hand discrimination is a binary classification problem, which can be solved by a classifier such as a convolutional neural network (a minimal sketch follows).
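- a minimal sketch of such a binary left/right-hand classifier as a small convolutional neural network in PyTorch; the architecture, input size and label convention are illustrative assumptions:

```python
# Left/right-hand discrimination as binary classification with a small CNN.
import torch
import torch.nn as nn

class LeftRightNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # two classes: left, right

    def forward(self, x):          # x: (N, 1, 64, 64) normalized hand patches
        return self.classifier(self.features(x).flatten(1))

logits = LeftRightNet()(torch.randn(1, 1, 64, 64))
print(logits.argmax(dim=1))        # 0 = left hand, 1 = right hand (assumed labels)
```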
- Step 204 Control the movement of the vehicle body of the mobile robot by the movement of one of the hands, and control the movement of the on-board multi-degree-of-freedom manipulator of the mobile robot by the movement of the other hand, and then perform step 201.
- gestures and finger movement trajectories can be used to set which part of the slave robot 200 each hand controls; this embodiment is set so that gestures control the movement of the mobile robot body and finger movement trajectories control the movement of the vehicle-mounted multi-degree-of-freedom manipulator.
- in step 204, one of the hands is used to control the movement of the mobile robot body; the specific steps are as follows:
- Step 2041 Set the correspondence between the motion control instructions of the slave robot 200 and the hand shape information;
- the hand shape information is the gesture made by the operator, which may include a fist, a scissors hand, an OK gesture, etc.; the motion control instructions include forward, backward, turn left, turn right, and turn around.
- the specific correspondence can be set according to specific needs, generating a corresponding correspondence table (an illustrative table follows).
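- an illustrative correspondence table in Python; the specific hand-shape-to-command pairings below are assumptions, since the disclosure leaves them configurable:

```python
# Step 2041: correspondence between hand shapes and car-body motion commands.
HAND_SHAPE_TO_COMMAND = {
    "fist":     "forward",
    "scissors": "backward",
    "ok":       "turn_left",
    "palm":     "turn_right",   # "palm" is an assumed additional shape
}

def command_for(hand_shape):
    return HAND_SHAPE_TO_COMMAND.get(hand_shape)  # None means no motion command
```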
- Step 2042 When the recognized hand patch belongs to the hand set to control the movement of the mobile robot body, use a hand shape recognition algorithm on the hand patch to obtain hand shape information;
- different hands of the operator are used to control different moving parts of the slave robot 200, and separate control by the left and right hands makes the slave robot 200 perform commanded actions more accurately. First, it is judged whether the hand is the left or the right; this distinction determines whether the mobile robot body or the vehicle-mounted multi-degree-of-freedom manipulator is being controlled. Then, hand shape recognition and motion trajectory recognition again distinguish between controlling the mobile robot body and the vehicle-mounted multi-degree-of-freedom manipulator, which reduces the misoperation rate.
- the operator's two hand movements are different types of movements, which avoids causing confusion for the operator.
- the control logic is simple, easy to remember and easy to operate.
- either hand can be set to control the movement of the mobile robot body; in this embodiment, the left hand is selected.
- since the left hand is set to control the movement of the mobile robot body, the movement of the vehicle-mounted multi-degree-of-freedom manipulator is controlled by the right hand.
- whether gestures or finger movement trajectories are used to control each part of the slave robot 200 can also be set; in this embodiment, gesture control is set for the mobile robot body.
- Step 2043 Generate a motion control instruction of the slave robot 200 according to the correspondence between motion control instructions and hand shape information and the hand shape information obtained by recognition, and send the motion control instruction to the slave robot 200; the slave robot 200 executes the corresponding action according to the control instruction.
- Step 2043 also includes the following steps: setting the motion name corresponding to the motion control instruction, and after generating the motion control instruction of the slave robot 200, sending the motion name corresponding to the motion control instruction to the wireless audio prompt device, and the wireless audio prompt device broadcasts Actions to be performed by the slave robot 200.
- the controller can determine whether the action to be performed is correct according to the broadcast.
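This sketch assumes stand-in callables send_to_robot and announce for the wireless link and the audio prompt device; the patent specifies the behavior, not an API:

```python
# Action names spoken by the wireless audio prompt device (illustrative).
COMMAND_NAMES = {
    "forward": "moving forward", "backward": "moving backward",
    "turn_left": "turning left", "turn_right": "turning right",
}

def dispatch_body_command(hand_shape, gesture_to_command, send_to_robot, announce):
    """Generate, send, and announce one body motion command."""
    command = gesture_to_command.get(hand_shape)
    if command is None:
        return                                      # unknown hand shape: no instruction
    send_to_robot(command)                          # to slave robot 200 over wireless
    announce(COMMAND_NAMES.get(command, command))   # broadcast the action name
```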
- In step 204, the motion of the other hand controls the action of the mobile robot's on-board multi-degree-of-freedom manipulator.
- The specific steps are:
- Step 204-1 When the recognized hand crop is the hand set to control the on-board multi-degree-of-freedom manipulator, use a fingertip positioning algorithm to analyze the motion trajectory of any fingertip in the image;
- Step 204-2 Generate a position tracking instruction from the motion trajectory and send it to the slave robot 200;
- Step 204-3 The slave robot 200 generates the position coordinates of the specific action from the position tracking instruction, and the end of the link mechanism 203 passes through the position coordinates in sequence, tracking the motion trajectory of the operator's fingertip.
- The fingertip positioning algorithm analyzes the trajectory of any fingertip in the image; a fingertip positioning algorithm based on contour curvature or one based on convex hull analysis may be used, as sketched below.
- The position coordinates may be defined with the base of the link mechanism 203 as the origin.
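A minimal convex-hull-based fingertip locator, assuming OpenCV and a binary (uint8) hand mask as input; this is one plausible realization of the convex-hull variant, not the patent's exact algorithm:

```python
import cv2
import numpy as np

def fingertip_from_mask(mask: np.ndarray):
    """Return the (x, y) hull point farthest from the hand centroid, or None."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)       # largest blob = the hand
    m = cv2.moments(contour)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # hand centroid
    hull = cv2.convexHull(contour).reshape(-1, 2)
    d = np.hypot(hull[:, 0] - cx, hull[:, 1] - cy)
    tip = hull[d.argmax()]                             # farthest hull vertex ~ fingertip
    return int(tip[0]), int(tip[1])
```

Collecting the returned point frame by frame yields the fingertip trajectory from which the position tracking instruction is generated.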
- This embodiment proposes another control method based on the wearable-device-based robot remote control system described in Embodiment 1.
- The method differs from the method in Embodiment 3 in that left and right hands need not be distinguished for control.
- Instead, this embodiment sets different hand shapes to control the movement of the mobile robot body and the on-board multi-degree-of-freedom manipulator, which makes it possible to command actions beyond the imaging range of the operator's wearable binocular camera device.
- When the vehicle-mounted multi-degree-of-freedom manipulator is controlled by recognizing finger movement trajectories, the trajectory of the operator's hand must lie entirely within the camera range of the wearable binocular camera device, so the motion range of the manipulator is restricted.
- This embodiment can therefore realize movement of the vehicle-mounted multi-degree-of-freedom manipulator over a wider range than Embodiment 3.
- For example, the mobile robot keeps moving forward until the operator's gesture changes to the stop hand shape, whereupon it stops. That is, when the mobile robot body is moving forward, backward, turning left, or turning right, the stop hand shape is used to halt it.
- The movement of the robot body is a latched movement: it continues until a stop signal appears.
- For the end of the robotic arm, the stop hand shape need not be set.
- The pitch angle and the upward, downward, leftward, and rightward movement distances of the arm end are realized by following the pitch hand shape or the end-traction hand shape; once the pitch hand shape or end-traction hand shape changes to the engaging hand shape, or the gesture leaves the camera range, the arm end naturally stops, so no stop hand shape is added for controlling the arm end. A sketch of the latched body motion follows.
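In this sketch the hand-shape labels are illustrative; the text fixes only the semantics that a body motion persists until the stop hand shape arrives:

```python
class BodyMotionLatch:
    """Re-issues the last body command every cycle until the stop hand appears."""

    MOTIONS = ("forward", "backward", "turn_left", "turn_right")

    def __init__(self):
        self.current = None            # latched body command, or None if stopped

    def update(self, hand_shape):
        if hand_shape == "stop":       # the stop hand shape releases the latch
            self.current = None
        elif hand_shape in self.MOTIONS:
            self.current = hand_shape  # latch the new motion
        return self.current            # any other shape keeps the motion going
```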
- A method for controlling wide-range movement of the robot's on-board multi-degree-of-freedom manipulator includes the following steps:
- Step 301 Set the engaging hand shape and the corresponding gesture actions.
- The engaging hand shape can be defined so that the end of the vehicle-mounted multi-degree-of-freedom manipulator waits at its current position for the next command;
- Step 302 Collect images within the shooting range of the operator's wearable device;
- Step 303 Determine whether there is a hand region in the collected image; if not, go to step 302; otherwise, preprocess the collected image to obtain a hand crop and go to the next step;
- Step 304 Apply the hand shape recognition algorithm to the preprocessed hand crop to obtain hand shape information;
- Step 305 Determine whether the obtained hand shape information is the engaging hand shape. If it is, the end of the vehicle-mounted multi-degree-of-freedom manipulator continuously executes the control instructions corresponding to the hand shapes before and after the engaging hand shape, and step 302 is executed; otherwise, go to the next step.
- Step 306 Perform the action corresponding to the recognized hand shape, and perform step 302. A control-flow sketch of this loop appears below.
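The sketch uses capture_image, find_hand_crop, recognize_hand_shape, and execute as stand-in callables; only the step structure follows the text:

```python
ENGAGING = "H1"  # below, the empty hand shape doubles as the engaging shape

def control_loop(capture_image, find_hand_crop, recognize_hand_shape, execute):
    """Steps 302-306 of the wide-range manipulator control method."""
    while True:
        image = capture_image()             # step 302: grab a frame
        crop = find_hand_crop(image)        # step 303: hand region, or None
        if crop is None:
            continue
        shape = recognize_hand_shape(crop)  # step 304: hand shape label
        if shape == ENGAGING:               # step 305: the manipulator end waits
            continue                        # at its current position
        execute(shape)                      # step 306: act, then back to step 302
```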
- Before step 301, the following steps are also included:
- Step 3001 The teleoperation controller and the slave robot 200 perform initialization operations.
- Step 3002 A wireless communication channel is established between the teleoperation controller and the slave robot 200.
- The slave robot 200 collects reconnaissance images from the robot camera and sends them to the teleoperation controller through the wireless communication channel.
- The teleoperation controller receives the reconnaissance images through the wireless communication device and displays them in real time on the video glasses worn by the operator.
- Different hand shapes are set to correspond to different actions of the slave robot 200, and these correspondences are used for control.
- The operator can configure them according to need.
- The actions of the slave robot 200 include end actions of the vehicle-mounted multi-degree-of-freedom manipulator and movements of the mobile robot body.
- Movement of the mobile robot body includes stopping, moving forward, retreating, turning left, turning right, etc. The correspondence in this embodiment between each hand shape, its gesture, and its control instruction can be as shown in Figure 9:
- the empty hand shape H1 corresponds to no control instruction, and the slave robot 200 remains stationary; the forward hand shape H2 corresponds to the forward instruction, and the mobile robot body starts motor driver 211 to move forward;
- in the same way, the left hand shape H3, right hand shape H4, backward hand shape H5, and stop hand shape H6 correspond to the left-turn, right-turn, backward, and stop actions of the mobile robot body respectively.
- A corresponding lookup table can be established (one reconstruction appears below), and the operator can change the table of gestures and control instructions according to his own habits.
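One reconstruction of that table from the prose above (Figure 9 itself is not reproduced in this text):

Hand shape | Control instruction | Action of the mobile robot body
---|---|---
H1 empty hand | none | remain stationary
H2 forward hand | forward | motor driver 211 drives forward
H3 left hand | turn left | body turns left
H4 right hand | turn right | body turns right
H5 backward hand | backward | body retreats
H6 stop hand | stop | body stops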
- Step 303 and step 304 may be performed in the same way as the method described in Embodiment 2.
- In step 305, the end of the on-board multi-degree-of-freedom manipulator continuously executes the control instructions corresponding to the hand shape before and the hand shape after the engaging hand shape, specifically as follows:
- after performing the action corresponding to the previous hand shape, the end of the vehicle-mounted multi-degree-of-freedom manipulator stops at the current position;
- steps 302 to 304 are then executed, the hand shape following the engaging hand shape is detected, and the manipulator end moves from the current position to perform the action corresponding to that following hand shape.
- The empty hand shape H1 is taken as an example of the engaging gesture.
- Movements of the end-traction hand shape H8, combined with this engaging hand shape, correspond to upward, downward, leftward, and rightward movement of the end of the vehicle-mounted multi-degree-of-freedom manipulator.
- For example, the teleoperation controller sequentially sends to the vehicle-mounted controller 201 control instructions to move the manipulator end up by a distance of K2*U1, stop at the current position, and move up from the current position by a further distance of K2*U1.
- K2 is the displacement coefficient, used to adjust the proportional relationship between the vertical movement distance of the end-traction hand H8 and the vertical movement distance of the manipulator end position.
- If the leftward movement distance L1 of the end-traction hand H8 exceeds the preset threshold, the operator can pose the empty hand H1, move the hand back within the field of view of the ear-hook camera, and then once again pose the end-traction hand H8 and move it left by a further distance L2, where L1 and L2 are the moving distances. The total leftward travel of the operator's end-traction hand H8 within the ear-hook camera view is then L1 + L2,
- and the corresponding total leftward deflection angle of the multi-degree-of-freedom robotic arm is K1*(L1+L2)/r.
- That is, the teleoperation controller sends to the vehicle-mounted controller 201 control instructions to deflect the manipulator end left by the angle K1*L1/r, stop at the current position, and then deflect left from the current position by a further angle K1*L2/r.
- K1 is the deflection coefficient, used to adjust the proportional relationship between the left-right movement distance of the end-traction hand H8 and the left-right deflection angle of the vehicle-mounted multi-degree-of-freedom manipulator. A small numeric sketch of these mappings follows.
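The sketch uses made-up values for K1, K2, and r; the text defines only the proportional relationships, not the constants:

```python
K1 = 1.5   # deflection coefficient (hypothetical value)
K2 = 2.0   # displacement coefficient (hypothetical value)
r = 0.40   # rotation radius of the manipulator end about its base, in meters

def deflection_angle(hand_dx: float) -> float:
    """Left/right travel of the end-traction hand H8 (m) -> deflection angle (rad)."""
    return K1 * hand_dx / r

def vertical_displacement(hand_dy: float) -> float:
    """Up/down travel of the end-traction hand H8 (m) -> end displacement (m)."""
    return K2 * hand_dy

# A 0.10 m leftward hand stroke commands K1*0.10/r = 0.375 rad of deflection;
# a 0.05 m upward stroke commands K2*0.05 = 0.10 m of upward end travel.
print(deflection_angle(0.10), vertical_displacement(0.05))
```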
- This embodiment proposes a method for controlling wide-range movement of the multi-degree-of-freedom manipulator carried on the robot vehicle.
- The area within the field of view of the operator's ear-hook camera is turned into a virtual touch-screen area, which frees the operator from physical controls.
- The present invention can thereby achieve incremental, continuous, precise control of the end position of the reconnaissance system, with control that better matches human operating habits.
- In step 306, the action corresponding to the recognized hand shape is performed.
- The hand shapes can be set according to personal habits or agreed conventions, and the correspondence between hand shapes and actions is set accordingly.
- The actions mainly include forward, backward, left turn, right turn, and stop. In this embodiment the procedure can be specifically as follows:
- when the empty hand shape H1 is recognized, the teleoperation controller does not issue a control instruction to the slave robot 200, and then continues to perform step 302;
- when the stop hand shape H6 is recognized, the teleoperation controller sends a stop control command through the wireless communication device to stop the mobile reconnaissance robot from moving, and then executes step 302;
- The actions of the slave robot's multi-degree-of-freedom manipulator mainly include deflection by a certain angle and movement up, down, left, and right. In this embodiment this can be specifically as follows:
- when the pitch hand shape H7 is recognized, the teleoperation controller sends, through the wireless communication device, control instructions that keep the pitch angle of the end of the vehicle-mounted multi-degree-of-freedom manipulator consistent with the pitch angle of the pitch hand H7 until the operator poses another hand shape, and then goes to step 302. By measuring the rotation angle of a specific hand shape such as the pitch hand H7 in the image, the invention achieves precise control of the reconnaissance direction of the manipulator end relative to the horizontal plane: the reconnaissance pitch angle of the camera equals the pitch angle of the pitch hand.
- When the operator's hand shape is the end-traction hand shape H8, as shown in Figures 10, 11, 12 and 13, the displacement distance and displacement direction of the hand are further detected.
- If the displacement exceeds the preset threshold when the end-traction hand H8 moves left or right, the teleoperation controller sends, through the wireless communication device, control commands that deflect the end of the multi-degree-of-freedom manipulator to the left or right by K1*L/r or K1*R/r respectively.
- The deflection coefficient K1 is used to adjust the proportional relationship between the left-right movement distance of the end-traction hand H8 and the left-right deflection angle of the vehicle-mounted multi-degree-of-freedom manipulator.
- L and R are the leftward and rightward movement distances of the end-traction hand H8, and r is the radius of rotation of the end of the vehicle-mounted multi-degree-of-freedom manipulator around its base.
- If the end-traction hand H8 moves up or down, the teleoperation controller sends, through the wireless communication device, control commands that move the end position of the multi-degree-of-freedom manipulator up or down by K2*U or K2*D respectively.
- K2 is the displacement coefficient, used to adjust the proportional relationship between the up-down movement distance of the end-traction hand H8 and the up-down movement distance of the manipulator end position.
- U and D are the upward and downward movement distances of the end-traction hand H8 respectively.
- Otherwise, the teleoperation controller sends a control command through the wireless communication device to stop the vehicle-mounted multi-degree-of-freedom reconnaissance system from moving up and down, and step 302 is executed; when the end-traction hand H8 changes to the engaging hand shape, step 305 is executed. A dispatch sketch for this branch follows.
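The sketch reuses the illustrative K1, K2, and r values from the earlier sketch; send and the command tuples are stand-ins, not the patent's protocol:

```python
def manipulator_step(shape, pitch_angle, dx, dy, threshold, send,
                     K1=1.5, K2=2.0, r=0.40):
    """Map one recognized hand shape (plus measured hand motion) to an arm command."""
    if shape == "H7":                       # pitch hand: follow its pitch angle
        send(("set_pitch", pitch_angle))
    elif shape == "H8":                     # end-traction hand: scaled motion
        if abs(dx) > threshold:
            send(("deflect", K1 * dx / r))  # left/right deflection angle
        elif abs(dy) > threshold:
            send(("translate_z", K2 * dy))  # up/down end displacement
        else:
            send(("hold",))                 # below threshold: stop the motion
```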
- The hand shapes in this embodiment are only examples; the specific hand shapes can be set according to need.
- An electronic device includes a memory, a processor, and computer instructions stored in the memory and executable on the processor; when the computer instructions are executed by the processor, the steps of the method described in Embodiment 2, 3, or 4 are completed.
- A computer-readable storage medium stores computer instructions which, when executed by a processor, complete the steps of the method described in Embodiment 2, 3, or 4.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Multimedia (AREA)
- Automation & Control Theory (AREA)
- Computer Networks & Wireless Communication (AREA)
- Manipulator (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims (14)
- A wearable-device-based mobile robot control system, characterized by comprising a master-side wearable teleoperation control device and a slave robot, the master-side wearable teleoperation control device and the slave robot communicating wirelessly; the master-side wearable teleoperation control device is worn on the operator and is used to send control instructions and to receive data collected by the slave robot; the master-side wearable teleoperation control device comprises a wearable binocular camera device, a head-mounted virtual display, a teleoperation controller and a master-side wireless communication device, the teleoperation controller being connected to the wearable binocular camera device, the head-mounted virtual display and the master-side wireless communication device respectively; the wearable binocular camera device is used to collect images of the operator's gestures, and the head-mounted virtual display is used to display images captured by the slave robot as well as a virtual model of the slave robot's manipulator and a virtual model of the operator's gestures.
- The wearable-device-based mobile robot control system of claim 1, characterized in that the slave robot comprises a mobile robot body, a multi-degree-of-freedom manipulator, a reconnaissance camera, a wireless communication device and a vehicle-mounted controller, the vehicle-mounted controller being connected to the mobile robot body N1, the multi-degree-of-freedom manipulator N2, the reconnaissance camera N3 and the wireless communication device respectively; the mobile robot body moves in position under the control of the master-side wearable teleoperation control device, and the vehicle-mounted multi-degree-of-freedom manipulator executes corresponding actions under the control of the master-side wearable teleoperation control device; the virtual model of the slave robot's manipulator is a virtual model of the multi-degree-of-freedom manipulator.
- A control method based on the control system of claims 1-2, characterized by comprising the following steps: Step 101: set a traction hand shape and a release hand shape; Step 102: construct a virtual manipulator and a virtual gesture model and display them at the front of the viewing frustum of the head-mounted virtual display; Step 103: collect dual-view images from the binocular camera; Step 104: use a gesture detection algorithm to judge whether an operator gesture is present in the dual-view images; if so, execute the next step, otherwise execute step 103; Step 105: use a hand shape recognition algorithm to recognize the gesture and judge whether the traction hand shape appears; if so, execute the next step, otherwise execute step 103; Step 106: process the captured dual-view images and solve for the pose P_H of the traction gesture in the coordinate system of the wearable binocular camera device, convert the pose P_H into a pose description P_V in the screen coordinate system of the head-mounted virtual display, and use the converted pose P_V to drive the virtual gesture model in the viewing frustum of the head-mounted virtual display; Step 107: judge whether the difference between the pose P_V of the virtual gesture model and the end pose P_M of the virtual manipulator N6 is smaller than a preset threshold; if so, execute the next step, otherwise execute step 103; Step 108: make the pose of the multi-degree-of-freedom manipulator follow the changes of the operator's traction hand pose; Step 109: judge whether the release hand shape appears; if so, the pose of the multi-degree-of-freedom manipulator stops following the operator's traction hand pose, and step 103 is executed; otherwise, step 108 is executed.
- The control method of claim 3, characterized in that step 108, making the pose of the multi-degree-of-freedom manipulator follow the changes of the operator's traction hand pose, specifically comprises: setting the value of the virtual manipulator end pose P_M equal to the pose P_V of the virtual gesture model and solving for the corresponding joint angle values of the virtual manipulator; converting the solved joint angle values of the virtual manipulator into control instructions transmitted to the slave robot, so that the joint angles of the multi-degree-of-freedom manipulator equal the joint angle values of the virtual manipulator; or/and step 108 further comprises: redrawing the virtual manipulator N6 in the viewing frustum according to the solved joint angle values corresponding to the virtual manipulator N6.
- The control method of claim 3, characterized in that the position information of the pose P_H of the traction gesture in the coordinate system of the wearable binocular camera device is in direct proportion to the position information of the pose P_V, and the orientation information of the pose P_H is likewise in direct proportion to the orientation information of the pose P_V.
- The control method of claim 3, characterized in that the method in step 102 of constructing the virtual manipulator and displaying it at the front of the viewing frustum of the head-mounted virtual display specifically comprises: reading the joint angle information of each joint of the slave robot's multi-degree-of-freedom manipulator; the teleoperation controller calculating the D-H parameters of the multi-degree-of-freedom manipulator from the collected joint angle information; and constructing the virtual manipulator from the D-H parameters of the multi-degree-of-freedom manipulator and displaying it at the front of the viewing frustum of the head-mounted virtual display.
- The control method of claim 3, characterized by further comprising, before step 103, the step of displaying images captured by the slave robot on the head-mounted virtual display: collecting reconnaissance images at the slave robot; the teleoperation controller receiving the reconnaissance images and displaying them in real time at the rear of the viewing frustum of the head-mounted virtual display.
- A control method based on the control system of any one of claims 1-2, characterized in that the motions of the operator's left hand and right hand are collected separately, the movement of the mobile robot body is controlled by the motion of one hand, and the action of the mobile robot's vehicle-mounted multi-degree-of-freedom manipulator is controlled by the motion of the other hand, comprising the following steps: Step 201: collect images within the shooting range of the operator's wearable device; Step 202: judge whether there is a hand region in the collected images; if not, execute step 201; otherwise preprocess the collected images to obtain a hand crop; Step 203: use a left/right-hand discrimination algorithm to judge whether the obtained hand crop is a left-hand crop or a right-hand crop, thereby determining whether the moving hand is the left or the right hand; Step 204: control the movement of the mobile robot body by the motion of one hand and the action of the vehicle-mounted multi-degree-of-freedom manipulator by the motion of the other hand, then execute step 201.
- The control method of claim 8, characterized in that controlling the movement of the mobile robot body by the motion of one hand in step 204 specifically comprises: setting the correspondence between the motion control instructions of the slave robot and hand shape information; when the recognized hand crop belongs to the hand set to control the movement of the mobile robot body, applying the hand shape recognition algorithm to the hand crop to obtain hand shape information; generating a motion control instruction of the slave robot from the correspondence between motion control instructions and hand shape information together with the recognized hand shape information, and sending the motion control instruction to the slave robot, which executes the corresponding action according to the control instruction.
- The control method of claim 8, characterized in that controlling the action of the vehicle-mounted multi-degree-of-freedom manipulator by the motion of the other hand in step 204 specifically comprises: when the recognized hand crop belongs to the hand set to control the action of the vehicle-mounted multi-degree-of-freedom manipulator, using a fingertip positioning algorithm to analyze the motion trajectory of any fingertip in the image; generating a position tracking instruction from the motion trajectory and sending the position tracking instruction to the slave robot; the slave robot generating the position coordinates of the specific action from the position tracking instruction, the end of the link mechanism passing through the position coordinates in sequence to track the motion trajectory of the operator's fingertip.
- A control method based on the control system of any one of claims 1-2, characterized in that the method for controlling wide-range movement of the robot's vehicle-mounted multi-degree-of-freedom manipulator comprises the following steps: Step 301: set the engaging hand shape and corresponding gesture actions, with different hand shapes corresponding to different actions of the slave robot; the engaging hand shape can be set so that the end of the vehicle-mounted multi-degree-of-freedom manipulator waits at its current position for the next instruction; Step 302: collect images within the shooting range of the operator's wearable device; Step 303: judge whether there is a hand region in the collected images; if not, execute step 302; otherwise preprocess the collected images to obtain a hand crop and execute the next step; Step 304: apply the hand shape recognition algorithm to the preprocessed hand crop to obtain hand shape information; Step 305: judge whether the obtained hand shape information is the engaging hand shape; if so, the end of the vehicle-mounted multi-degree-of-freedom manipulator continuously executes the actions of the control instructions corresponding to the hand shapes before and after the engaging hand shape, and step 302 is executed; otherwise execute the next step; Step 306: execute the corresponding action according to the corresponding hand shape and execute step 302.
- The control method of claim 11, characterized in that in step 305 the end of the vehicle-mounted multi-degree-of-freedom manipulator continuously executing the actions of the control instructions corresponding to the hand shapes before and after the engaging hand shape specifically comprises: after the manipulator end performs the action corresponding to the previous hand shape, it stops at the current position; steps 302 to 304 are executed, the hand shape following the engaging hand shape is detected, and the manipulator end moves from the current position to perform the action corresponding to that following hand shape.
- An electronic device, characterized by comprising a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, the steps of the method of any one of claims 3-12 are completed.
- A computer-readable storage medium, characterized by being used to store computer instructions which, when executed by a processor, complete the steps of the method of any one of claims 3-12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020207030337A KR102379245B1 (ko) | 2019-04-30 | 2020-04-29 | 웨어러블 디바이스 기반의 이동 로봇 제어 시스템 및 제어 방법 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910363168.1A CN110039545B (zh) | 2019-04-30 | 2019-04-30 | 一种基于可穿戴设备的机器人远程控制系统及控制方法 |
CN201910363155.4A CN109955254B (zh) | 2019-04-30 | 2019-04-30 | 移动机器人控制系统及机器人末端位姿的遥操作控制方法 |
CN201910363155.4 | 2019-04-30 | ||
CN201910363168.1 | 2019-04-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020221311A1 true WO2020221311A1 (zh) | 2020-11-05 |
Family
ID=73028793
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/087846 WO2020221311A1 (zh) | 2019-04-30 | 2020-04-29 | 基于可穿戴设备的移动机器人控制系统及控制方法 |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102379245B1 (zh) |
WO (1) | WO2020221311A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102522142B1 (ko) * | 2021-07-05 | 2023-04-17 | 주식회사 피앤씨솔루션 | 양손 제스처를 이용해 조작 신호를 입력하는 착용형 증강현실 장치 및 양손 제스처를 이용한 착용형 증강현실 장치의 조작 방법 |
KR102532351B1 (ko) * | 2021-08-05 | 2023-05-15 | 서울대학교병원 | 헤드셋 기반의 비접촉 손동작 인식 기술을 활용한 수술 로봇 제어 시스템 |
KR102549631B1 (ko) * | 2022-07-21 | 2023-07-03 | 주식회사 포탈301 | 신체 일부의 자세 및 장치의 기울기를 이용한 실시간 작업 장치 및 카메라 제어 방법 및 장치 |
KR102525661B1 (ko) * | 2023-01-18 | 2023-04-24 | 박장준 | 작업 장치의 원격 제어를 위한 실시간 훈련 방법 및 장치 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011110621A (ja) * | 2009-11-24 | 2011-06-09 | Toyota Industries Corp | ロボットの教示データを作成する方法およびロボット教示システム |
CN107921639B (zh) * | 2015-08-25 | 2021-09-21 | 川崎重工业株式会社 | 多个机器人系统间的信息共享系统及信息共享方法 |
- 2020
- 2020-04-29 WO PCT/CN2020/087846 patent/WO2020221311A1/zh active Application Filing
- 2020-04-29 KR KR1020207030337A patent/KR102379245B1/ko active IP Right Grant
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103279191A (zh) * | 2013-06-18 | 2013-09-04 | 北京科技大学 | 一种基于手势识别技术的3d虚拟交互方法及系统 |
CN103398702A (zh) * | 2013-08-05 | 2013-11-20 | 青岛海通机器人系统有限公司 | 一种移动机器人远程操控装置及其操控技术 |
CN104057450A (zh) * | 2014-06-20 | 2014-09-24 | 哈尔滨工业大学深圳研究生院 | 一种针对服务机器人的高维操作臂遥操作方法 |
JP2016107379A (ja) * | 2014-12-08 | 2016-06-20 | ファナック株式会社 | 拡張現実対応ディスプレイを備えたロボットシステム |
CN109955254A (zh) * | 2019-04-30 | 2019-07-02 | 齐鲁工业大学 | 移动机器人控制系统及机器人末端位姿的遥操作控制方法 |
CN110039545A (zh) * | 2019-04-30 | 2019-07-23 | 齐鲁工业大学 | 一种基于可穿戴设备的机器人远程控制系统及控制方法 |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114578720B (zh) * | 2020-12-01 | 2023-11-07 | 合肥欣奕华智能机器股份有限公司 | 控制方法及控制系统 |
CN114578720A (zh) * | 2020-12-01 | 2022-06-03 | 合肥欣奕华智能机器股份有限公司 | 控制方法及控制系统 |
CN114643576A (zh) * | 2020-12-17 | 2022-06-21 | 中国科学院沈阳自动化研究所 | 一种基于虚拟力引导的人机协同目标抓取方法 |
CN114643576B (zh) * | 2020-12-17 | 2023-06-20 | 中国科学院沈阳自动化研究所 | 一种基于虚拟力引导的人机协同目标抓取方法 |
CN114683272B (zh) * | 2020-12-31 | 2023-09-12 | 国网智能科技股份有限公司 | 变电站巡检机器人的增稳控制方法、控制器及机器人 |
CN114683272A (zh) * | 2020-12-31 | 2022-07-01 | 国网智能科技股份有限公司 | 变电站巡检机器人的增稳控制方法、控制器及机器人 |
CN113229941A (zh) * | 2021-03-08 | 2021-08-10 | 上海交通大学 | 基于增强现实的介入机器人无接触遥操系统及标定方法 |
CN113099204B (zh) * | 2021-04-13 | 2022-12-13 | 北京航空航天大学青岛研究院 | 一种基于vr头戴显示设备的远程实景增强现实方法 |
CN113099204A (zh) * | 2021-04-13 | 2021-07-09 | 北京航空航天大学青岛研究院 | 一种基于vr头戴显示设备的远程实景增强现实方法 |
CN113218249A (zh) * | 2021-05-30 | 2021-08-06 | 中国人民解放军火箭军工程大学 | 跟随式遥操作战车及控制方法 |
CN113218249B (zh) * | 2021-05-30 | 2023-09-26 | 中国人民解放军火箭军工程大学 | 跟随式遥操作战车及控制方法 |
US20230037237A1 (en) * | 2021-07-19 | 2023-02-02 | Colorado School Of Mines | Gesture-controlled robotic feedback |
CN113768630A (zh) * | 2021-08-06 | 2021-12-10 | 武汉中科医疗科技工业技术研究院有限公司 | 主手夹持机构、主手控制台、手术机器人及主从对齐方法 |
CN113741785A (zh) * | 2021-08-27 | 2021-12-03 | 深圳Tcl新技术有限公司 | 指令确定方法、装置、存储介质及电子设备 |
CN114378823A (zh) * | 2022-01-20 | 2022-04-22 | 深圳市优必选科技股份有限公司 | 一种机器人动作控制方法、装置、可读存储介质及机器人 |
CN114378823B (zh) * | 2022-01-20 | 2023-12-15 | 深圳市优必选科技股份有限公司 | 一种机器人动作控制方法、装置、可读存储介质及机器人 |
CN114770583A (zh) * | 2022-04-29 | 2022-07-22 | 大连工业大学 | 一种基于vr的智能装配系统 |
CN115157261A (zh) * | 2022-07-27 | 2022-10-11 | 清华大学深圳国际研究生院 | 基于混合现实的柔性机械臂遥操作人机交互装置及方法 |
CN116052500A (zh) * | 2023-01-31 | 2023-05-02 | 苏州安全精灵智能科技有限公司 | 口罩防护体验方法、电子设备、系统及可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
KR20200140834A (ko) | 2020-12-16 |
KR102379245B1 (ko) | 2022-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020221311A1 (zh) | 基于可穿戴设备的移动机器人控制系统及控制方法 | |
CN109955254B (zh) | 移动机器人控制系统及机器人末端位姿的遥操作控制方法 | |
CN110039545B (zh) | 一种基于可穿戴设备的机器人远程控制系统及控制方法 | |
WO2023056670A1 (zh) | 复杂光照条件下基于视触融合的机械臂自主移动抓取方法 | |
Krupke et al. | Comparison of multimodal heading and pointing gestures for co-located mixed reality human-robot interaction | |
CN109164829B (zh) | 一种基于力反馈装置和vr感知的飞行机械臂系统及控制方法 | |
CN114080583B (zh) | 视觉教导和重复移动操纵系统 | |
US20200055195A1 (en) | Systems and Methods for Remotely Controlling a Robotic Device | |
KR101762638B1 (ko) | 최소 침습 수술 시스템에서 손 제스처 제어를 위한 방법 및 장치 | |
KR101789064B1 (ko) | 원격조종 최소 침습 종속 수술 기구의 손 제어를 위한 방법 및 시스템 | |
KR101785360B1 (ko) | 최소 침습 수술 시스템에서 손 존재 검출을 위한 방법 및 시스템 | |
US8155787B2 (en) | Intelligent interface device for grasping of an object by a manipulating robot and method of implementing this device | |
CN109983510A (zh) | 机器人控制系统、机械控制系统、机器人控制方法、机械控制方法和记录介质 | |
CN114728417A (zh) | 由远程操作员触发的机器人自主对象学习 | |
CN106313049A (zh) | 一种仿人机械臂体感控制系统及控制方法 | |
CN111459277B (zh) | 基于混合现实的机械臂遥操作系统及交互界面构建方法 | |
CN113183133B (zh) | 面向多自由度机器人的手势交互方法、系统、装置及介质 | |
CN113021357A (zh) | 一种便于移动的主从水下双臂机器人 | |
CN112828916A (zh) | 冗余机械臂遥操作组合交互装置及冗余机械臂遥操作系统 | |
CN108062102A (zh) | 一种手势控制具有辅助避障功能的移动机器人遥操作系统 | |
US20240149458A1 (en) | Robot remote operation control device, robot remote operation control system, robot remote operation control method, and program | |
CN115958575A (zh) | 类人灵巧操作移动机器人 | |
CN112959342B (zh) | 基于操作员意图识别的飞行机械臂抓取作业遥操作方法 | |
Bai et al. | Kinect-based hand tracking for first-person-perspective robotic arm teleoperation | |
KR101956900B1 (ko) | 최소 침습 수술 시스템에서 손 존재 검출을 위한 방법 및 시스템 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 20207030337 Country of ref document: KR Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20798282 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20798282 Country of ref document: EP Kind code of ref document: A1 |
|