CN115157261A - Flexible mechanical arm teleoperation man-machine interaction device and method based on mixed reality

Publication number: CN115157261A
Authority: CN (China)
Prior art keywords: flexible mechanical arm, control
Legal status: Pending (status is an assumption, not a legal conclusion)
Application number: CN202210894084.2A
Other languages: Chinese (zh)
Inventors: 梁斌, 陈蓉卉, 王学谦, 朱晓俊, 陈章
Current Assignee: Shenzhen International Graduate School of Tsinghua University
Original Assignee: Shenzhen International Graduate School of Tsinghua University
Application filed by Shenzhen International Graduate School of Tsinghua University
Priority claimed from application CN202210894084.2A
Published as CN115157261A

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/10: characterised by positioning means for manipulator elements
    • B25J9/104: positioning means with cables, chains or ribbons
    • B25J9/16: Programme controls
    • B25J9/1602: characterised by the control system, structure, architecture
    • B25J9/1656: programming, planning systems for manipulators
    • B25J9/1664: motion, path, trajectory planning
    • B25J9/1666: Avoiding collision or forbidden zones
    • B25J9/1694: use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems
    • B25J13/00: Controls for manipulators
    • B25J13/08: Controls by means of sensing devices, e.g. viewing or touching devices
    • B25J18/00: Arms
    • B25J18/06: Arms flexible

Abstract

The invention discloses a mixed-reality-based teleoperation human-machine interaction device and method for a flexible mechanical arm. The teleoperation human-machine interaction device comprises a mixed reality device and a control module; the mixed reality device is connected to the control module, and the control module is in communication connection with the flexible mechanical arm. The control module performs SLAM real-time localization and mapping, and presents the mapping result and the pose of the flexible arm in the operation scene to the operator through the mixed reality device. The mixed reality device captures the operator's movements and recognizes the control mode selected by the operator and the manual control instructions issued; the control module receives and parses the instructions, performs motion control and path-finding and obstacle-avoidance operations on the flexible arm, and converts the manual control instructions into flexible-arm control instructions, thereby controlling the motion of the flexible arm. The invention can improve the real-time performance, accuracy, safety and user-friendliness of teleoperation human-machine interaction control of the flexible mechanical arm.

Description

Flexible mechanical arm teleoperation man-machine interaction device and method based on mixed reality
Technical Field
The invention relates to the technical field of teleoperation robots, in particular to a device and a method for teleoperation human-computer interaction of a flexible mechanical arm based on mixed reality.
Background
Today, robot technology no longer serves only the needs of manufacturing: robots have moved out of the factory into wider fields including medicine, aerospace and underwater navigation, assisting or even replacing humans in repetitive, heavy or dangerous tasks. Demand is also growing for robots that work in narrow, complex spaces, for example earthquake-rubble rescue, cabin drilling, nuclear power station pipeline maintenance, satellite solar-panel maintenance, medical auxiliary equipment such as endoscopes, and welding in hazardous narrow tubes. Such narrow, complex spaces typically share the following characteristics: the space is small compared with an ordinary indoor environment; obstacles are distributed in a complex way, viewing angles are limited, and the internal layout is completely unknown; the space is relatively isolated, closed or hazardous; and entrances and exits are limited, making direct human access difficult. A mechanical arm operating in such a narrow space therefore requires greater flexibility, intelligence, accuracy and safety. At the same time, because the internal environment is unknown and complex, the robot's intelligence and adaptability are limited and fully autonomous decision-making is difficult, so a human operator must sometimes assist the arm through interactive navigation control.
Because the environmental space information is completely unknown, the mechanical arm must first localize itself after entering, acquiring environmental information through sensors mounted on the arm. For user-friendliness, the collected raw data should be uniformly processed and presented to the operator in a relatively intuitive form to support decision-making and to guide the arm's next movement and operation, without requiring the operator to undergo complex professional training.
An ultra-redundant flexible mechanical arm is a mechanical arm with no fewer than 16 degrees of freedom; compared with a traditional industrial arm of limited degrees of freedom it offers better bending capability and flexibility, and it is therefore widely applied in fields such as nuclear facilities, underwater operation, medical surgery and aerospace. In a rope-driven ultra-redundant flexible arm, the motors are mounted together on the base and the joints are rotated by adjusting rope lengths, which makes the arm body lighter, lets it operate in complex narrow spaces, and allows it to avoid obstacles flexibly. At the same time, because of its very high degree of freedom and complex configuration, the arm's spatial perception is weak, and autonomous environment perception and accurate localization and trajectory planning of each joint are difficult to realize. Consequently, many control systems for ultra-redundant flexible arms require human intervention through teleoperation human-machine interaction.
Traditional mechanical-arm teleoperation generally adopts bilateral teleoperation with a master-slave control structure. The master end receives feedback from the slave end: beyond basic information such as position and velocity, this includes visual feedback, force feedback and so on. Conventional visual feedback is typically based on various cameras and 2D displays, providing the operator with flat visual information. The technology has developed over half a century and is applied in hazardous-material handling, remote surgery, underwater robots, space robots, mobile robots and other fields. However, surveys show that traditional teleoperation interaction has many drawbacks, such as limited viewing angle, large error, degraded image frame rate, and the inconvenience of switching attention among multiple cameras; in addition, traditional teleoperation usually issues commands through a computer, joystick or similar device, an interaction means that is not direct.
The academic community has therefore introduced the concept of immersive interaction, i.e., letting the operator interact and make decisions in a simulated environment built from the remote scene. Experiments show that a 3D interactive interface provides stronger immersion than a 2D interface and effectively helps the operator understand the environment and make decisions. Visual servoing based on Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) combined with a head-mounted display (HMD) is widely believed to have good prospects. Such immersive interaction can effectively relieve the degradation of spatial perception caused by the high complexity of the ultra-redundant flexible arm, while providing the operator with an intuitive 3D operating environment.
In 2020, the Andrés Martín-Barrio team from the Centre for Automation and Robotics (CAR, UPM-CSIC) in Spain proposed a mixed-reality-based human-machine interaction system design for ultra-redundant mechanical arms applied to inspection operations. The ultra-redundant arm enters spaces that are unreachable or dangerous for humans and performs operations in their place; a 3D virtual environment model and the real-time arm pose are provided to the operator through a mixed reality device, and the operator can change the pose of the arm's end through gestures and other interaction means, realizing master-slave bilateral position-position control. After adjusting the target pose in virtual space, the operator presses a button to confirm that the control instruction is sent to the remote arm. This solution, however, has the following disadvantages. (1) The virtual environment model is constructed in advance with a Kinect depth camera and the RTAB-Map mapping method and only then applied to arm inspection; in real application scenarios the working environment of an ultra-redundant flexible arm generally cannot be observed and mapped in advance, so greater real-time performance and adaptability are required. (2) To prevent collisions caused by mis-operation, arm motion is controlled by pressing a button to confirm each command, which has poor real-time performance, copes badly with sudden situations, and introduces redundant operations. (3) Only the pose of the arm's end can be operated, which is quite limiting and demands high operating accuracy from the operator.
Da Sun et al. propose a novel teleoperation system based on mixed reality, in which interactive agents and fuzzy-logic algorithms are designed to reduce the likelihood of accidents caused by operator mishandling. The system places an RGB-D camera at a fixed position in the experimental environment to transmit and reconstruct point-cloud data in real time, and the operator observes and operates through a mixed reality interface. However, the working space of that system is simple: with the RGB-D camera fixed, the viewing angle is fixed, the point-cloud model fed back in real time carries limited information about occluded regions, and the lack of a first-person view from the arm makes emergencies hard to handle. The solution is therefore suitable only for simpler environments.
Disclosure of Invention
To remedy the deficiencies of the prior art, the invention provides a mixed-reality-based teleoperation human-machine interaction device and method for a flexible mechanical arm, aiming to solve the problems of poor real-time performance, accuracy, safety and user-friendliness in teleoperation human-machine interaction control.
The technical problem of the invention is solved by the following technical scheme:
The invention discloses a mixed-reality-based teleoperation human-machine interaction device for a flexible mechanical arm, comprising a mixed reality device and a control module. The mixed reality device is connected to the control module, and the control module is in communication connection with the flexible mechanical arm. The control module performs SLAM real-time localization and mapping, and presents the mapping result and the pose of the flexible arm in the operation scene to the operator through the mixed reality device. The mixed reality device captures the operator's movements and recognizes the selected control mode and the issued manual control instructions; the control module receives and parses the instructions, performs motion control and path-finding and obstacle-avoidance operations on the flexible arm, and converts the manual control instructions into flexible-arm control instructions, thereby controlling the motion of the flexible arm.
In some embodiments, the following technical scheme is also included:
the mixed reality equipment is head-mounted equipment which is used for synchronously displaying the interactive virtual environment and the flexible arm model, carrying out head tracking, gesture recognition and voice recognition on an operator, recognizing a control mode selected by the operator and sending a manual control instruction.
The control module comprises an interaction and model control platform and a data processing and hardware control platform, in communication connection with each other. The interaction and model control platform comprises an interactive virtual model, a motion control and path-finding/obstacle-avoidance planning algorithm module, and an interactive interface; it constructs the interactive virtual environment and the flexible-arm motion model, performs real-time obstacle avoidance and whole-arm pose planning for the flexible arm, and transmits the mapping result and the arm's pose data in the scene to the mixed reality device. The data processing and hardware control platform comprises a data acquisition module, a SLAM algorithm module and a hardware control system; it acquires environmental data, performs SLAM real-time localization and mapping, and handles data communication with the flexible arm.
The data acquisition module comprises an RGB-D depth camera, and the RGB-D depth camera is arranged at the tail end of the flexible mechanical arm; and the control module carries out SLAM real-time positioning and mapping based on the data acquired by the RGB-D depth camera.
The data acquisition module further comprises a coded disc, and the coded disc is arranged at each joint of the flexible mechanical arm and used for reading the joint angle of the flexible mechanical arm.
The interactive interface comprises an interactive panel and a target end interactive entity and is used for providing a man-machine interaction interface.
The flexible mechanical arm adopts a super-redundancy design.
The invention also discloses a flexible mechanical arm teleoperation human-computer interaction system based on mixed reality, which comprises the flexible mechanical arm teleoperation human-computer interaction device based on mixed reality and a teleoperated flexible mechanical arm.
In some embodiments, the flexible mechanical arm is driven by a rope, each joint of the flexible mechanical arm has two degrees of freedom, adjacent joints are perpendicular to each other, and the flexible mechanical arm is connected through each joint to form an integral ultra-redundant flexible mechanical arm.
The invention also discloses a mixed-reality-based teleoperation human-machine interaction method for a flexible mechanical arm, comprising the following steps: S1, collect environmental data; S2, perform SLAM real-time localization and mapping while the flexible arm moves through the space; S3, present the mapping result and the arm's pose in the operation scene to the operator through the mixed reality device; S4, receive the control mode selected by the operator in the mixed reality device according to the real-time reconstructed local map and the arm's state, together with the issued manual control instructions; S5, parse the control mode and the manual control instructions, perform motion control and path-finding and obstacle-avoidance operations on the flexible arm, convert the manual control instructions into flexible-arm control instructions, and control the arm's motion through the hardware control system, so that automatic obstacle avoidance and whole-arm pose planning are realized while the arm moves.
In some embodiments, further comprising: and repeating the steps S1 to S5 until the teleoperation task is finished.
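The repeated S1-S5 cycle can be sketched as a simple control loop; all function and step names below are illustrative placeholders, not interfaces disclosed by the patent.

```python
def teleop_task(task_finished):
    """Hedged sketch of the claimed method: repeat steps S1-S5 until the
    teleoperation task is finished. Step bodies are placeholders; a real
    system would call SLAM, rendering and planning modules here."""
    trace = []
    while not task_finished(trace):
        trace.append("S1:collect_environment_data")   # RGB-D frames, joint encoders
        trace.append("S2:slam_localize_and_map")      # real-time SLAM
        trace.append("S3:render_to_mixed_reality")    # map + arm pose to the operator
        trace.append("S4:receive_mode_and_commands")  # control mode, manual commands
        trace.append("S5:plan_and_execute_motion")    # obstacle avoidance, pose planning
    return trace
```

Calling `teleop_task(lambda t: len(t) >= 5)` runs exactly one full S1-S5 pass before the termination predicate stops the loop.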
In some embodiments, in step S5, performing the path-finding and obstacle-avoidance operation on the flexible mechanical arm comprises the following steps:
S5.1, construct the action points of the artificial potential field on the flexible mechanical arm;
S5.2, construct the artificial potential field;
S5.3, adopt the flexible arm's maximum-power whole-arm collaborative planning strategy: within each planning period, move in the direction in which the sum of the forces exerted on all action points in the potential field is greatest, realizing multi-degree-of-freedom whole-arm pose planning.
Compared with the prior art, the invention has the advantages that:
the invention provides a flexible mechanical arm teleoperation man-machine interaction device based on mixed reality, which carries out SLAM real-time positioning and mapping on the space where a flexible mechanical arm is located through a control module, visually presents mapping results and the pose of the flexible mechanical arm in an operation scene to an operator through mixed reality equipment, acquires the dynamic state of the operator through the mixed reality equipment, identifies a control mode selected by the operator and a sent artificial control instruction, converts the artificial control instruction into a flexible mechanical arm control instruction through the control module, guides the tail end or each joint of the flexible mechanical arm to move according to a real-time reconstructed local map and the state of the flexible mechanical arm, flexibly adjusts the pose of the flexible mechanical arm, guides the movement and operation of the flexible mechanical arm, carries out movement control and path finding and obstacle avoiding operation in the movement process of the flexible mechanical arm, realizes automatic obstacle avoiding and whole arm pose planning, and accordingly improves the real-time, accuracy, safety and user friendliness of the flexible mechanical arm teleoperation man-machine interaction control.
Drawings
FIG. 1 is a schematic diagram of a hybrid reality-based flexible manipulator teleoperation human-computer interaction device in an embodiment of the invention;
FIG. 2 is a schematic view of a flexible robotic arm in an embodiment of the invention;
fig. 3 is a flow chart of the operation of the flexible manipulator teleoperation human-computer interaction device based on mixed reality in the embodiment of the invention.
Fig. 4 is a system information flow diagram of the data acquisition module and the SLAM algorithm module in the embodiment of the present invention.
Fig. 5 is a system information flow diagram of the SLAM algorithm module and the interactive virtual model in the embodiment of the present invention.
Fig. 6 is a schematic diagram (in a practical environment) of environment scanning by the flexible mechanical arm through the RGB-D camera in the embodiment of the present invention.
Fig. 7 is a schematic diagram of reconstructing an environmental point cloud model in real time (in a virtual environment) when an RGB-D camera scans the environment according to an embodiment of the present invention.
FIG. 8 is a schematic diagram of an end-expected pose of an interactive entity planning end of a mobile target end during operator interaction in an embodiment of the invention (in a virtual environment).
Fig. 9 is a schematic diagram (in a virtual environment) of a flexible mechanical arm moving to a target pose according to a calculation result of a road finding and obstacle avoidance plan according to the embodiment of the present invention.
Fig. 10 is a schematic diagram of the obstacle avoidance result of the flexible mechanical arm in the actual environment (in the actual environment) in the embodiment of the present invention.
FIG. 11 is a schematic diagram of an improved artificial potential field in an embodiment of the invention.
FIG. 12 is a schematic diagram of an interactive panel in an embodiment of the present invention.
FIG. 13 is a diagram of target end interactable entities in an embodiment of the invention.
Fig. 14 is a schematic view of a target distal end anti-shake control handle in an embodiment of the present invention.
Detailed Description
The invention will be further described with reference to the drawings and preferred embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms of orientation such as left, right, up, down, top and bottom in the present embodiment are only relative concepts to each other or are referred to the normal use state of the product, and should not be considered as limiting.
An embodiment of the invention provides a mixed-reality-based teleoperation human-machine interaction device for a flexible mechanical arm, comprising a mixed reality device and a control module. The mixed reality device is connected to the control module, and the control module is in communication connection with the flexible arm. The control module performs SLAM (Simultaneous Localization and Mapping): while mapping the environment it localizes each joint of the flexible arm, and it presents the mapping result and the arm's pose in the operation scene to the operator through the mixed reality device. The mixed reality device captures the operator's movements and recognizes the selected control mode and the issued manual control instructions; the control module receives and parses the instructions, performs motion control and path-finding and obstacle-avoidance operations on the flexible arm, and converts the manual control instructions into flexible-arm control instructions, thereby controlling the motion of the arm.
Specifically, the mixed reality device (mixed reality interface) is a head-mounted device used to synchronously display the interactive virtual environment and the flexible-arm model, to perform head tracking, gesture recognition and voice recognition on the operator, and to recognize the control mode selected by the operator and the manual control instructions issued.
Specifically, the control module comprises an interaction and model control platform and a data processing and hardware control platform, in communication connection with each other. The interaction and model control platform comprises an interactive virtual model, a motion control and path-finding/obstacle-avoidance planning algorithm module, and an interactive interface; it constructs the interactive virtual environment and the flexible-arm motion model, performs real-time obstacle avoidance and whole-arm pose planning for the flexible arm, and transmits the mapping result and the arm's pose data in the scene to the mixed reality device. The data processing and hardware control platform comprises a data acquisition module, a SLAM algorithm module and a hardware control system; it acquires environmental data, performs SLAM real-time localization and mapping, and handles data communication with the flexible arm.
The data acquisition module comprises an RGB-D depth camera, and the RGB-D depth camera is arranged at the tail end of the flexible mechanical arm. And the control module carries out SLAM real-time positioning and mapping based on the data acquired by the RGB-D depth camera.
Furthermore, the data acquisition module also comprises a code wheel which is arranged at each joint of the flexible mechanical arm and used for reading the joint angle of the flexible mechanical arm.
The interactive interface contains three components: the interactive panel, the target-end interactable entity, and the target-end anti-shake control handle, which together provide the human-machine interaction interface.
After the SLAM algorithm module produces results in real time, the interaction and model control platform subscribes to the camera trajectory and point-cloud data of the data processing and hardware control platform. The camera trajectory is the series of camera poses localized by the SLAM algorithm module at each sampling point; the current-stage SLAM map provides the dense point-cloud data of the local map, including RGB data and depth data. After data-type conversion and coordinate-system conversion, both are expressed in Unity coordinates. These data are used to calibrate the subsequent joint localization and autonomous obstacle-avoidance stages for the flexible arm, and are rendered as a colored point cloud forming the virtual-environment obstacle model.
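As an illustration of the coordinate-system conversion step, the sketch below maps points from a ROS-style frame (right-handed, x forward, z up) into Unity's left-handed, y-up frame. The axis mapping shown is one common convention used by ROS-Unity bridges; the patent does not specify the exact transform, so treat this as an assumption.

```python
def ros_to_unity_point(p):
    """Convert one (x, y, z) point from a ROS right-handed frame to Unity's
    left-handed frame. A common convention: (x_u, y_u, z_u) = (-y_r, z_r, x_r)."""
    x, y, z = p
    return (-y, z, x)

def ros_to_unity_cloud(points):
    """Convert a whole point cloud (iterable of (x, y, z) tuples)."""
    return [ros_to_unity_point(p) for p in points]
```

For example, the ROS point (1, 2, 3) becomes (-2, 3, 1) in Unity coordinates under this convention.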
In some embodiments, the flexible mechanical arm is rope-driven and of an ultra-redundant design.
In another embodiment, the present invention provides a hybrid reality-based flexible manipulator teleoperation human-computer interaction system, which includes the hybrid reality-based flexible manipulator teleoperation human-computer interaction device and a teleoperated flexible manipulator.
Specifically, the flexible mechanical arm is driven by a rope, each joint of the flexible mechanical arm has two degrees of freedom, adjacent joints are perpendicular to each other, and the joints are connected to form the integrated ultra-redundant flexible mechanical arm.
In another embodiment, the invention provides a hybrid reality-based flexible manipulator teleoperation human-computer interaction method, which comprises the following steps:
s1, collecting environmental data.
S2, perform SLAM real-time localization and mapping while the flexible mechanical arm moves through the space. The SLAM algorithm module is based on the RTAB-Map method.
And S3, presenting the mapping result and the pose of the flexible mechanical arm in the operation scene to an operator through mixed reality equipment.
And S4, receiving a control mode selected by an operator in the mixed reality equipment according to the real-time reconstructed local map and the state of the flexible mechanical arm and sending a manual control instruction.
S5, analyzing the control mode and the artificial control instruction, performing motion control and path finding and obstacle avoiding operation on the flexible mechanical arm, converting the artificial control instruction into a flexible mechanical arm control instruction, and controlling the flexible mechanical arm to move through a hardware control system, so that automatic obstacle avoiding and whole arm pose planning are realized in the moving process of the flexible mechanical arm.
Performing the path-finding and obstacle-avoidance operation on the flexible mechanical arm comprises the following steps:
S5.1, construct the action points of the artificial potential field on the flexible mechanical arm;
S5.2, construct the artificial potential field;
S5.3, adopt the flexible arm's maximum-power whole-arm collaborative planning strategy: within each planning period, move in the direction in which the sum of the forces exerted on all action points in the potential field is greatest, realizing multi-degree-of-freedom whole-arm pose planning.
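Steps S5.1-S5.3 can be illustrated with a minimal artificial-potential-field sketch. The gains, the influence radius and the linear/inverse-square force models below are generic textbook choices, not the parameters of the patented planner (the patent's improved potential field is shown in fig. 11).

```python
import math

def attractive_force(point, goal, k_att=1.0):
    """Attractive component: pulls an action point toward the goal."""
    return [k_att * (g - p) for p, g in zip(point, goal)]

def repulsive_force(point, obstacle, k_rep=1.0, rho0=0.5):
    """Repulsive component: pushes an action point away from an obstacle
    that lies inside the influence radius rho0."""
    d = math.dist(point, obstacle)
    if d >= rho0 or d < 1e-9:
        return [0.0, 0.0, 0.0]
    mag = k_rep * (1.0 / d - 1.0 / rho0) / (d * d)
    return [mag * (p - o) / d for p, o in zip(point, obstacle)]

def planning_direction(action_points, goal, obstacles):
    """Maximum-power strategy (S5.3): step along the direction in which the
    summed force over all action points (S5.1) in the field (S5.2) is greatest."""
    total = [0.0, 0.0, 0.0]
    for pt in action_points:
        force = attractive_force(pt, goal)
        for ob in obstacles:
            force = [f + r for f, r in zip(force, repulsive_force(pt, ob))]
        total = [t + f for t, f in zip(total, force)]
    norm = math.hypot(*total)
    return [t / norm for t in total] if norm > 1e-9 else total
```

With a single action point at the origin, a goal on the +x axis and no obstacles, the planner returns the unit direction toward the goal, as expected.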
In some embodiments, further comprising: and repeating the steps S1 to S5 until the teleoperation task is finished.
Specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
As shown in fig. 1, the mixed reality-based flexible mechanical arm teleoperation human-computer interaction device of this embodiment includes a control module (a computer PC terminal) and a mixed reality device; the object controlled interactively by the teleoperation human-computer interaction device is a rope-driven super-redundant flexible mechanical arm.
The flexible mechanical arm 1 operates in a remote, complex environment, shown schematically in fig. 2; it has 16 joints and 32 degrees of freedom. It is driven by 12 ropes and 12 drive motors: every 3 ropes drive one arm segment, and each arm segment comprises 4 joints and arm rods. Each joint has two orthogonal axes forming a universal joint with two degrees of freedom in pitch and yaw; the whole model is a series of universal joints connected in sequence. Each arm segment thus has 2 equivalent degrees of freedom, and the overall model has 8 equivalent degrees of freedom. This structure gives the model flexible three-dimensional motion capability. An RGB-D depth camera 2 (Intel RealSense D435) is installed at the tip of the flexible mechanical arm 1 for collecting environment data; fig. 6 shows the flexible mechanical arm 1 scanning the environment of the environmental obstacle 3 with the RGB-D depth camera 2. Code discs are installed at all joints of the flexible mechanical arm so that the joint angles can be read.
The mixed reality device (mixed reality interface) may be a head-mounted device (HoloLens) for synchronously displaying the interactive virtual environment and the flexible arm model, performing head tracking, gesture recognition, voice recognition and the like on the operator, and recognizing the control mode the operator selects and the manual control instruction issued. HoloLens is a mixed reality head-mounted display developed by Microsoft; this embodiment uses the Microsoft HoloLens 2. The HoloLens can scan and acquire environment information in real time, capture the operator's displacement, track hands and eyes, and provide voice command service; it is responsive and convenient to operate.
The computer PC end comprises an interaction and model control platform (Unity 3D) and a data processing and hardware control platform (the ROS robot operating system). The data processing and hardware control platform comprises a data acquisition module, an SLAM algorithm module and a hardware control system; the data acquisition module comprises an RGB-D depth camera and code discs, and the PC end performs SLAM real-time localization and mapping based on the data acquired by the RGB-D depth camera. In Unity 3D, the model comprises two parts, an interactive virtual model and a motion control and path-finding and obstacle-avoidance planning algorithm module: the three-dimensional modeling engine builds the flexible mechanical arm motion model and the reconstructed environment point cloud map, providing the interface for human-computer interaction; the motion control and path-finding and obstacle-avoidance planning algorithm module is written in C# scripts to realize real-time obstacle avoidance and whole-arm pose planning of the flexible mechanical arm. Fig. 7 shows the real-time reconstructed environment point cloud model in the virtual environment while the RGB-D depth camera scans the environment, specifically comprising: the flexible mechanical arm virtual model 4, the RGB-D depth camera virtual model 5 and the environment point cloud model 6. FIG. 8 shows the operator moving the target-end interactable entity to plan the expected tip pose during interaction, including the operator gesture-tracking virtual model 7; fig. 9 shows the flexible mechanical arm moving to reach the target pose according to the result of the path-finding and obstacle-avoidance planning; fig. 10 shows the obstacle avoidance result of the flexible mechanical arm in an actual environment.
In addition, Unity 3D has an interactive interface with the HoloLens, facilitating data transfer and presentation. The ROS robot operating system realizes communication among the multiple nodes of the system: nodes such as the RGB-D depth camera, the SLAM algorithm module and the flexible arm hardware control system are connected in series and communicate with the Unity 3D model through WebSocket, maintaining efficient and normal operation of the teleoperation human-computer interaction device. The SLAM algorithm module is based on the RTAB-Map method. The system information flow in this embodiment is constructed as follows (i.e., the information transfer paths):
(1) The data acquisition module performs information interaction with the SLAM algorithm module. This interaction is established inside the ROS robot operating system. The RGB-D depth camera acquisition node and the SLAM algorithm node are run in the ROS robot operating system in sequence; when the SLAM node runs, the interface to the RGB-D depth camera data opens automatically, constructing an information stream between the two nodes. A system information flow diagram of the data acquisition module and the SLAM algorithm module is shown in fig. 4.
(2) The SLAM algorithm module performs information interaction with the interactive virtual model. This interaction is established between the ROS robot operating system end and the Unity 3D end. This embodiment selects WebSocket (a protocol for full-duplex communication over a single TCP connection) for communication. The WebSocket server end is constructed in the SLAM algorithm module, and the client end is constructed in Unity 3D. The server provides two kinds of message data from the SLAM node to the client, namely the camera path /rtabmap/mapPath and the point cloud data /rtabmap/cloud_map, completing one-way information transmission. The system information flow diagram of the SLAM algorithm module and the interactive virtual model is shown in FIG. 5.
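The one-way server-to-client transfer of the two topics could be framed as JSON text messages over the WebSocket. The topic names come from the text, but this envelope format is an assumption for illustration; the patent does not specify how /rtabmap/mapPath and /rtabmap/cloud_map are serialized.

```python
import json

# Hypothetical JSON framing for the one-way WebSocket link between the
# SLAM node (server) and the Unity 3D client.

def frame_message(topic, payload):
    """Server side: wrap one message from a subscribed topic as a text frame."""
    assert topic in ("/rtabmap/mapPath", "/rtabmap/cloud_map")
    return json.dumps({"topic": topic, "data": payload})

def parse_message(text):
    """Client side: recover topic and payload from a received frame."""
    msg = json.loads(text)
    return msg["topic"], msg["data"]
```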
(3) The interactive virtual model performs information interaction with the motion control and path-finding and obstacle-avoidance planning algorithm module. This interaction is established inside the Unity 3D end and is implemented simply through scripts.
(4) The HoloLens performs information interaction with the interactive virtual model. This interaction is established between Unity 3D on the PC end and the HoloLens device. Unity 3D provides the MRTK (Mixed Reality Toolkit) for interacting with the Microsoft HoloLens. With the HoloLens and the PC end connected to the same local area network, information can be transmitted through the TCP/IP protocol.
The interactive interface of the present embodiment comprises three components: the interactive panel, the target tail end interactive entity and the target tail end anti-shake control handle.
(1) Interactive panel
The overall interactive panel is shown in FIG. 12, and comprises a main panel and a mode sub-panel.
The main panel contains mode options and a Reset button. The mode options provide the operator with function selection, and the Reset button provides the operator with an arm reset function. After a specific mode is selected, the corresponding mode sub-panel pops up, containing the control preconditions, variables and the like of that mode, giving the operator a larger operation space.
The mode sub-panel includes Anti-ShakeHandle and BaseLock options, representing the anti-shake control handle mode and the base-lock mode, respectively.
(2) Target end interactable entities
As shown in fig. 13, a collision volume and an interactable property are set on the target-end interactable entity in Unity 3D; after putting on the HoloLens glasses, the operator can change the entity's pose by grabbing it.
When the target point is too far from the flexible mechanical arm, the weight of the attractive field in pose planning drops; moreover, limited by the arm's movement speed and the planning step length of the path-finding and obstacle-avoidance, larger control errors are easily introduced. Therefore, Unity 3D imposes an upper limit v on the movement speed of the target-end interactable entity and an upper limit s on its distance from the arm tip, so that the operator is constrained to guide the flexible mechanical arm cooperatively in several small steps.
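The two limits can be enforced by clamping the target entity's pose each frame. This is a minimal sketch; the names v_max/s_max correspond to the limits v and s above, and the function and its clamping order are assumptions, not the patent's C# implementation.

```python
import math

# Hypothetical per-frame clamp for the target-end interactable entity:
# cap its speed at v_max and keep it within s_max of the arm tip.

def clamp_target(new_pos, prev_pos, tip_pos, v_max, s_max, dt):
    """Return new_pos pulled back so both limits hold."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, k): return tuple(x * k for x in a)
    def norm(a): return math.sqrt(sum(x * x for x in a))

    # Speed limit: cap the displacement in one period at v_max * dt.
    step = sub(new_pos, prev_pos)
    if norm(step) > v_max * dt:
        step = scale(step, v_max * dt / norm(step))
    pos = add(prev_pos, step)

    # Distance limit: keep the target within s_max of the arm tip.
    offset = sub(pos, tip_pos)
    if norm(offset) > s_max:
        pos = add(tip_pos, scale(offset, s_max / norm(offset)))
    return pos
```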
(3) Target end anti-shake control handle
Because this embodiment provides no haptic feedback and the operator interacts only with the visual virtual model, hand tremor has a larger effect than it would with haptic feedback, introducing unnecessary offsets and errors into the flexible mechanical arm pose planning. Especially for narrow, precise control of the tip position, switching to the target-end anti-shake control handle can greatly reduce the influence of hand tremor.
As shown in fig. 14, the target-end anti-shake control handle is formed by three mutually perpendicular handles extending one unit length along the x, y and z axes of the target coordinate system. On entering the anti-shake mode, the target-end interactable entity loses its interactable property and the three handles gain it. When the operator drags one of the handles, the whole target-end anti-shake control handle moves linearly along that axis, eliminating the deviations in other directions caused by the operator's hand tremor. In addition, under anti-shake control the target's motion is limited, just as for the target-end interactable entity, by the speed upper limit v and the distance upper limit s from the arm tip, and the operator likewise guides the flexible mechanical arm in several small steps.
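The anti-shake constraint amounts to projecting the operator's hand displacement onto the dragged handle's axis and discarding the off-axis tremor. A minimal sketch, with the axis represented as a unit vector (an assumed representation):

```python
# Hypothetical anti-shake projection: keep only the component of the
# hand displacement along the dragged handle's unit axis.

def project_on_axis(displacement, axis):
    """Project a 3-vector displacement onto a unit axis vector."""
    dot = sum(d * a for d, a in zip(displacement, axis))
    return tuple(dot * a for a in axis)
```

Dragging the x-axis handle with a slightly shaky hand thus moves the target strictly along x.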
The operation flow of the mixed reality-based flexible mechanical arm teleoperation human-computer interaction device of this embodiment is shown in fig. 3. In each operation period, the flexible mechanical arm collects environment data through the RGB-D depth camera and sends it, together with the joint angle data read by the code discs, to the SLAM algorithm module in real time; the SLAM algorithm module runs the SLAM algorithm to localize and build the map, and synchronously displays the environment and the flexible mechanical arm model in the HoloLens; the operator generates a manual control instruction through the HoloLens according to the currently built map and the mechanical arm pose; after the PC end receives the instruction, the motion control and path-finding and obstacle-avoidance planning algorithm module performs motion control and path-finding and obstacle-avoidance operations on the mechanical arm, and the hardware control system converts the manual control instruction into a mechanical arm control instruction to control the motion of the mechanical arm; the motion of the mechanical arm carries the sensor along and updates the environment data, so the system forms a closed loop.
The operation process of the man-machine interaction device for teleoperation of the flexible mechanical arm of the embodiment comprises the following steps:
(1) Starting a task;
(2) Opening the HoloLens and the computer;
(3) Starting an RGB-D depth camera node, an RTabmap node and a mechanical arm control node in the ROS, and starting a Unity 3D running environment in a computer;
(4) Starting the flexible mechanical arm to finish the pose calibration of the flexible mechanical arm and the virtual mechanical arm model;
(5) The flexible mechanical arm collects environmental data, feeds the environmental data back to a PC (personal computer) end of the computer, builds an SLAM (simultaneous localization and mapping) image and transmits the image to the HoloLens for imaging;
(6) Collecting a control mode and gesture voice data selected by an operator at a mixed reality interface, generating a control instruction, sending the control instruction to the flexible mechanical arm after passing through a motion control and path-finding and obstacle-avoiding planning algorithm module, and controlling the flexible mechanical arm to move;
(7) Repeating the above (5) to (6);
(8) Until the teleoperation task is finished.
In step (5), after the SLAM algorithm module computes a result in real time, Unity 3D subscribes through WebSocket to the camera trajectory and point cloud data in ROS. The camera trajectory comprises the series of poses obtained by the SLAM algorithm module localizing the camera at each sampling point; the current-stage SLAM mapping yields the dense point cloud data of the local map, comprising RGB data and depth data. After data type conversion and coordinate system conversion, both kinds of data are processed into Unity coordinates. The data are used to calibrate the subsequent flexible mechanical arm joint localization and autonomous obstacle avoidance links, and are drawn as a colored point cloud in the Unity editor by the Mesh tool of Unity 3D, forming the virtual environmental obstacle model.
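The coordinate system conversion mentioned above must bridge ROS's right-handed, z-up frame and Unity's left-handed, y-up frame. One common point mapping is sketched below; the patent does not give the exact transform it uses, so this particular axis assignment is an assumption.

```python
# One common ROS -> Unity point conversion (assumed, for illustration):
# ROS is right-handed with z up; Unity is left-handed with y up.

def ros_to_unity(p):
    """Map a ROS (x, y, z) point into Unity coordinates."""
    x, y, z = p
    return (-y, z, x)   # Unity x = -ROS y, Unity y = ROS z, Unity z = ROS x
```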
In this embodiment, the obstacle-avoidance algorithm improves on the artificial potential field method with a virtual guide pipeline: because a single planning period is short, the guide potential field of the virtual guide pipeline is removed for the actual application scene and the potential field distribution is improved, which reduces the computation load and markedly improves the real-time performance and flexibility of the system. In step (6), this embodiment considers the flexible mechanical arm to lie in two kinds of artificial potential fields: a repulsive potential field U_rep pushing it away from obstacles and an attractive potential field U_att pointing toward the target pose. The schematic diagram is shown in fig. 11. The path-finding and obstacle-avoidance operation on the flexible mechanical arm comprises the following specific steps:
(1) Constructing the point of application of an artificial potential field to a flexible manipulator
The repulsive potential field U_rep acts on the whole flexible mechanical arm so that no part of the arm collides with an obstacle. However, because a single planning period is short, setting too many sampling points causes redundant computation. The action points of U_rep are therefore taken as the joint points P_i (i = 1, 2, ..., n) of the flexible mechanical arm.
The attractive potential field U_att acts on the tip of the flexible mechanical arm to guide it to the correct target pose. First, so that the tip reaches the correct position, the attractive field generated by the target point P_Goal acts on the end joint point P_End. In addition, so that the tip reaches the correct orientation, two virtual target points P_Goal^x and P_Goal^y are constructed at distance L along the x and y axes of the target coordinate system, and two virtual action points P_End^x and P_End^y are constructed at distance L along the x and y axes of the tip coordinate system. The attractive field generated by P_Goal^x then acts on P_End^x, and the attractive field generated by P_Goal^y acts on P_End^y.
The set of points consisting of the points of action of the two artificial potential fields on the flexible manipulator is denoted Q.
(2) Constructing artificial potential fields
The forces generated by the repulsive potential field U_rep and the attractive potential field U_att at an action point P are denoted F_rep and F_att, respectively. The improved artificial potential field exerts the following forces at the action point:

F_att(P) = k_att (P_Goal - P),                        if ||P_Goal - P|| <= d_1
F_att(P) = k_att d_1 (P_Goal - P) / ||P_Goal - P||,   otherwise

F_rep(P) = k_rep (1/ρ(P) - 1/d_2) (1/ρ(P)^2) (P - P_0)/ρ(P),   if ρ(P) <= d_2
F_rep(P) = 0,                                                   otherwise

with ρ(P) = ||P - P_0||, wherein k_att, k_rep, d_1, d_2 > 0 are manually set fixed parameters, P_Goal is the target point position vector, P is the action point position vector, and P_0 is the position vector of the point in the reconstructed point cloud closest to the action point.
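The two force laws can be sketched directly. Since the patent's exact expressions are published as images, this follows the standard improved artificial potential field form with the parameters named in the text (k_att, k_rep, d_1, d_2); treat it as an assumption, not the patent's verbatim formulas.

```python
import math

# Sketch of the attractive and repulsive force laws of an improved
# artificial potential field (assumed standard form).

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _scale(a, k): return tuple(x * k for x in a)
def _norm(a): return math.sqrt(sum(x * x for x in a))

def attractive_force(p, p_goal, k_att, d1):
    """Pull the action point toward the target; saturate beyond distance d1."""
    diff = _sub(p_goal, p)
    d = _norm(diff)
    if d <= d1:
        return _scale(diff, k_att)
    return _scale(diff, k_att * d1 / d)   # bounded pull at long range

def repulsive_force(p, p_obs, k_rep, d2):
    """Push the action point away from the nearest point cloud point p_obs;
    zero outside the influence radius d2."""
    diff = _sub(p, p_obs)
    rho = _norm(diff)
    if rho > d2 or rho == 0.0:
        return (0.0, 0.0, 0.0)
    mag = k_rep * (1.0 / rho - 1.0 / d2) / (rho ** 2)
    return _scale(diff, mag / rho)        # direction (p - p_obs)/rho
```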
(3) Flexible mechanical arm maximum power whole arm collaborative planning strategy
In order to realize multi-degree-of-freedom whole-arm pose planning, this embodiment adopts the maximum-power whole-arm collaborative planning strategy, namely moving in the direction in which the total work done by the potential field forces at the action points within a unit planning period is maximum.
For the whole-arm pose planning problem of the flexible mechanical arm, the state variable comprises the joint angles θ_i and φ_i (the pitch and yaw of each joint), the base displacement (d_x, d_y, d_z) and the base rotation Euler angles (α, β, γ), expressed as the m = (2n + 6)-dimensional state variable

x = (θ_1, φ_1, ..., θ_n, φ_n, d_x, d_y, d_z, α, β, γ)^T,

where n represents the number of flexible mechanical arm joints and m represents the dimension of the state variable x.
The specific principle is as follows:
For a single action point P_i ∈ Q, the force acting on it within a unit planning period is

F_i = F_att(P_i) + F_rep(P_i).

The work done at the action point P_i within the planning period is then

w_i = F_i^T dP_i,

where dP_i represents the displacement of P_i. The total work done at all action points within the unit planning period is therefore

W = Σ_{P_i ∈ Q} F_i^T dP_i,

wherein T is the matrix transpose symbol. Taking the partial derivative of the total work with respect to the state variable x gives

∂W/∂x = Σ_{P_i ∈ Q} F_i^T (∂P_i/∂x),

wherein ∂P_i/∂x, i.e. the Jacobian matrix of the action point P_i, can be expressed as J_i. The total work of the action points within the unit planning period can thus be organized as

W = Σ_{P_i ∈ Q} F_i^T J_i Δx,

where Δx represents the amount of change of the state variable x within the unit period.
Because planning is strictly confined to a short planning period, the step length must be strictly limited to prevent problems such as collision or loss of control caused by larger errors.
In summary, the problem can be summarized as a convex optimization problem, expressed as follows:

maximize over Δx:   W(Δx) = Σ_{P_i ∈ Q} F_i^T J_i Δx
subject to:         ||J_i Δx|| ≤ a,  ||J_i Δx|| ≤ b_i,  |x_i| ≤ c_i,  and other constraints,

wherein a represents the upper limit on the displacement of any action point within a unit planning period; b_i represents the single-step upper limit for action point P_i, positively correlated with its shortest distance to the obstacle point cloud and its distance to the target point; and c_i represents the limit of the state variable x_i, related only to the mechanical arm model. In addition, further constraints enable complex functions, such as locking the pose of the flexible mechanical arm tip, locking the pose of the base, and so forth.
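One cheap way to approximate the maximum-work step is to move along the work gradient g = Σ_i J_i^T F_i and then shrink the step until the per-point displacement caps and a per-component bound hold. This greedy scaling is a simplified surrogate of the convex optimization (the patent does not spell out its solver); all names here are illustrative.

```python
# Greedy surrogate of the maximum-work step (assumption, not the patent's
# convex solver): follow g = sum_i J_i^T F_i, scaled to respect the caps.

def _matvec(m, v):                        # m v
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def _matvec_t(m, v):                      # m^T v
    return [sum(m[i][j] * v[i] for i in range(len(m))) for j in range(len(m[0]))]

def max_work_step(forces, jacobians, a, b, dx_max):
    """forces: list of 3-vectors F_i; jacobians: list of 3 x m matrices J_i;
    a: global cap on any action point's displacement; b: per-point caps b_i;
    dx_max: per-component cap on the state change."""
    m = len(jacobians[0][0])
    g = [0.0] * m
    for f, jac in zip(forces, jacobians):  # accumulate the work gradient
        for j, gj in enumerate(_matvec_t(jac, f)):
            g[j] += gj
    # Scale the gradient so every component obeys |dx_j| <= dx_max.
    biggest = max(abs(gj) for gj in g) or 1.0
    dx = [gj * dx_max / biggest for gj in g]
    # Shrink until every action point displacement obeys min(a, b_i).
    for jac, b_i in zip(jacobians, b):
        dp = _matvec(jac, dx)
        dist = sum(x * x for x in dp) ** 0.5
        cap = min(a, b_i)
        if dist > cap:
            s = cap / dist
            dx = [x * s for x in dx]
    return dx
```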
In some embodiments, the functions provided by the invention include: (1) supporting the operator in controlling the flexible mechanical arm through the mixed reality interface, adjusting the pose of the tip and of each joint; (2) supporting manual planning of a complete guide trajectory in a known obstacle environment, with the flexible mechanical arm moving along the planned trajectory while avoiding obstacles autonomously throughout; (3) supporting whole-course autonomous obstacle avoidance during real-time single-point operation, i.e., when the operator controls the displacement of a single point of the mechanical arm, the rest of the arm also avoids obstacles autonomously, ensuring safety; (4) collecting environment data, performing SLAM mapping, updating in real time the distribution of obstacles around the flexible mechanical arm and the pose of each of its joints, presenting them to the operator in real time through the mixed reality interface, where the operator observes the real-time scene and issues manual control instructions, realizing teleoperation; (5) designing proper interactive agents to support more refined human-computer interactive control in specific scenes.
The embodiment of the invention adopts Mixed Reality (MR) technology to realize human-computer interactive navigation control of the super-redundant flexible mechanical arm in a narrow, complex environment. Environment data are acquired through the depth camera mounted at the tip of the flexible mechanical arm, 3D real-time Simultaneous Localization and Mapping (SLAM) is performed for the space where the arm is located, and the mapping result and the arm pose in the operation scene are presented visually to the remote operator through the mixed reality interface (MR Interface). Wearing the head-mounted display (HMD), the operator observes from a third-person perspective, selects different control modes, and guides the motion of the tip or of each joint according to the real-time reconstructed local map and the arm state, flexibly adjusting the arm pose and guiding its motion and operation, while automatic obstacle avoidance and whole-arm pose planning are realized during the motion. In this process the operator perceives the flexible mechanical arm and its environment through immersive vision and gives feedback and control, so the human-computer interaction control forms a closed loop and interactive navigation of the flexible mechanical arm in a complex space is realized. The embodiment of the invention can improve the real-time performance, accuracy, safety and user-friendliness of teleoperated human-computer interaction control.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several equivalent substitutions or obvious modifications can be made without departing from the spirit of the invention, and all such substitutions or modifications are considered to fall within the scope of the invention.

Claims (12)

1. A mixed reality-based flexible mechanical arm teleoperation human-computer interaction device, characterized by comprising: a mixed reality device and a control module, the mixed reality device being connected to the control module and the control module being in communication connection with the flexible mechanical arm, wherein the control module performs SLAM real-time localization and mapping and presents the mapping result and the pose of the flexible mechanical arm in the operation scene to the operator through the mixed reality device; the mixed reality device captures the dynamics of the operator, recognizing the control mode the operator selects and the manual control instruction the operator issues; and the control module receives and parses the instruction, performs motion control and path-finding and obstacle-avoidance operations on the flexible mechanical arm, and converts the manual control instruction into a flexible mechanical arm control instruction, thereby performing motion control on the flexible mechanical arm.
2. The device according to claim 1, wherein the mixed reality device is a head-mounted device, and the head-mounted device is used for synchronously displaying the interactive virtual environment and the flexible arm model, performing head tracking, gesture recognition and voice recognition on the operator, and recognizing the control mode selected by the operator and the issued human control command.
3. The hybrid reality-based flexible manipulator teleoperation human-computer interaction device of claim 1, wherein the control module comprises an interaction and model control platform and a data processing and hardware control platform, and the interaction and model control platform and the data processing and hardware control platform are in communication connection; the interaction and model control platform comprises an interaction virtual model, a motion control and path finding and obstacle avoiding planning algorithm module and an interaction interface, and is used for constructing an interaction virtual environment and a flexible arm motion model, carrying out real-time obstacle avoiding and whole arm pose planning on the flexible mechanical arm, and transmitting a mapping result and pose data of the flexible mechanical arm under a scene to the mixed reality equipment; the data processing and hardware control platform comprises a data acquisition module, an SLAM algorithm module and a hardware control system, and is used for acquiring environmental data, positioning and drawing in real time by the SLAM and carrying out data communication on the flexible mechanical arm.
4. The mixed reality-based flexible robotic arm teleoperational human-computer interaction device of claim 3, wherein the data acquisition module comprises an RGB-D depth camera disposed at the end of the flexible robotic arm; and the control module carries out SLAM real-time positioning and mapping based on the data acquired by the RGB-D depth camera.
5. The mixed reality-based teleoperational human-computer interaction device of claim 4, wherein the data acquisition module further comprises a code wheel disposed at each joint of the flexible manipulator for reading joint angles of the flexible manipulator.
6. The hybrid reality-based flexible robotic arm teleoperational human-computer interaction device of claim 3, wherein the interactive interface comprises an interaction panel, a target end interactable entity for providing a human-computer interaction interface.
7. The mixed reality-based flexible robotic arm teleoperational human-computer interaction device of any one of claims 1-6, wherein the flexible robotic arm is of a super-redundant design.
8. A mixed reality-based flexible manipulator teleoperation human-computer interaction system is characterized by comprising the mixed reality-based flexible manipulator teleoperation human-computer interaction device according to any one of claims 1 to 7 and a teleoperated flexible manipulator.
9. The teleoperational human-computer interaction system of claim 8, wherein the flexible robotic arm is driven by a rope, each joint of the flexible robotic arm has two degrees of freedom, adjacent joints are perpendicular to each other, and the flexible robotic arm is connected by each joint to form an integrated super-redundant flexible robotic arm.
10. A man-machine interaction method for teleoperation of a flexible mechanical arm based on mixed reality is characterized in that the man-machine interaction device for teleoperation of the flexible mechanical arm based on mixed reality according to any one of claims 1-7 is adopted for man-machine interaction, and the method comprises the following steps:
s1, collecting environmental data;
s2, performing SLAM real-time positioning and mapping when the flexible mechanical arm moves in the space;
s3, presenting the mapping result and the pose of the flexible mechanical arm in the operation scene to an operator through mixed reality equipment;
s4, receiving a control mode selected by an operator in the mixed reality equipment according to the real-time reconstructed local map and the state of the flexible mechanical arm and a sent artificial control instruction;
s5, analyzing the control mode and the artificial control instruction, performing motion control and path finding and obstacle avoiding operation on the flexible mechanical arm, converting the artificial control instruction into a flexible mechanical arm control instruction, and controlling the flexible mechanical arm to move through a hardware control system, so that automatic obstacle avoiding and whole arm pose planning are realized in the moving process of the flexible mechanical arm.
11. The hybrid reality-based flexible robotic arm teleoperational human-computer interaction method of claim 10, further comprising: and repeating the steps S1 to S5 until the teleoperation task is finished.
12. The hybrid-reality-based flexible manipulator teleoperation human-computer interaction method as claimed in claim 10 or 11, wherein the step S5 of performing the road finding and obstacle avoidance operation on the flexible manipulator comprises the steps of:
s5.1, constructing an action point of an artificial potential field on the flexible mechanical arm;
s5.2, constructing an artificial potential field;
and S5.3, adopting the flexible mechanical arm maximum-power whole-arm collaborative planning strategy, namely moving in the direction in which the total work done by the potential field forces at the action points within a unit planning period is maximum, realizing multi-degree-of-freedom whole-arm pose planning.
CN202210894084.2A 2022-07-27 2022-07-27 Flexible mechanical arm teleoperation man-machine interaction device and method based on mixed reality Pending CN115157261A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210894084.2A CN115157261A (en) 2022-07-27 2022-07-27 Flexible mechanical arm teleoperation man-machine interaction device and method based on mixed reality


Publications (1)

Publication Number Publication Date
CN115157261A true CN115157261A (en) 2022-10-11

Family

ID=83497718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210894084.2A Pending CN115157261A (en) 2022-07-27 2022-07-27 Flexible mechanical arm teleoperation man-machine interaction device and method based on mixed reality

Country Status (1)

Country Link
CN (1) CN115157261A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116214532A (en) * 2023-05-10 2023-06-06 河海大学 Autonomous obstacle avoidance grabbing system and grabbing method for submarine cable mechanical arm


Similar Documents

Publication Publication Date Title
Rakita et al. An autonomous dynamic camera method for effective remote teleoperation
Krupke et al. Comparison of multimodal heading and pointing gestures for co-located mixed reality human-robot interaction
US20200055195A1 (en) Systems and Methods for Remotely Controlling a Robotic Device
US20040189675A1 (en) Augmented reality system and method
CN112634318B (en) Teleoperation system and method for underwater maintenance robot
CN110834330B (en) Flexible mechanical arm teleoperation man-machine interaction terminal and method
CN111438673B (en) High-altitude operation teleoperation method and system based on stereoscopic vision and gesture control
CN111716365B (en) Immersive remote interaction system and method based on natural walking
Naceri et al. Towards a virtual reality interface for remote robotic teleoperation
CN110039547A (en) A kind of human-computer interaction terminal and method of flexible mechanical arm remote operating
CN112828916B (en) Remote operation combined interaction device for redundant mechanical arm and remote operation system for redundant mechanical arm
Szczurek et al. Multimodal multi-user mixed reality human–robot interface for remote operations in hazardous environments
GB2598345A (en) Remote operation of robotic systems
CN115469576A (en) Teleoperation system based on human-mechanical arm heterogeneous motion space hybrid mapping
CN115157261A (en) Flexible mechanical arm teleoperation man-machine interaction device and method based on mixed reality
CN112894820A (en) Flexible mechanical arm remote operation man-machine interaction device and system
Mallem et al. Computer-assisted visual perception in teleoperated robotics
CN110539315B (en) Construction robot based on virtual reality control
Senft et al. A Method For Automated Drone Viewpoints to Support Remote Robot Manipulation
CN116197899A (en) Active robot teleoperation system based on VR
Pryor et al. A Virtual Reality Planning Environment for High-Risk, High-Latency Teleoperation
Fernando et al. Effectiveness of Spatial Coherent Remote Drive Experience with a Telexistence Backhoe for Construction Sites.
Chen et al. A 3D Mixed Reality Interface for Human-Robot Teaming
Fournier et al. Immersive virtual environment for mobile platform remote operation and exploration
Yang et al. A web-based 3d virtual robot remote control system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination