CN110815258A - Robot teleoperation system and method based on electromagnetic force feedback and augmented reality - Google Patents

Robot teleoperation system and method based on electromagnetic force feedback and augmented reality

Info

Publication number
CN110815258A
CN110815258A (application CN201911046808.2A)
Authority
CN
China
Prior art keywords: robot, coordinate system, operator, text, electromagnetic force
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911046808.2A
Other languages
Chinese (zh)
Other versions
CN110815258B (en)
Inventor
Guanglong Du (杜广龙)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201911046808.2A priority Critical patent/CN110815258B/en
Publication of CN110815258A publication Critical patent/CN110815258A/en
Application granted granted Critical
Publication of CN110815258B publication Critical patent/CN110815258B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 - Controls for manipulators
    • B25J13/003 - Controls for manipulators by means of an audio-responsive input
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1628 - Programme controls characterised by the control loop
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016 - Input arrangements with force or tactile feedback as computer generated output to the user
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 - Execution procedure of a spoken command
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a robot teleoperation system and method based on electromagnetic force feedback and augmented reality. The system comprises a natural control module and a natural feedback module. After fusing an operator's gesture text and voice text, the natural control module extracts a robot control instruction through an inference method to guide the virtual robot's motion, and the remote real robot reproduces the virtual robot's movement from data sent over the Internet. The natural feedback module includes electromagnetic force feedback, which lets the operator feel the forces acting on the robot, and visual feedback, which lets the operator observe the virtual robot from any direction. Teleoperating a robot with this system, the operator can sense the robot's force state in real time and watch the robot execute its task, which gives strong immersion and improves operating efficiency and accuracy. The system is suitable for non-professional operators, is general-purpose and easy to operate, and has a wide range of applications.

Description

Robot teleoperation system and method based on electromagnetic force feedback and augmented reality
Technical Field
The invention belongs to the field of robot control, and particularly relates to a robot teleoperation system and method based on electromagnetic force feedback and augmented reality.
Background
Teleoperation allows a robot to work in harsh environments that humans cannot enter. However, conventional methods generally observe the robot only through video or a 3D model and lack force feedback, so they suffer from a limited field of view, unfriendly interaction, weak immersion, and low teleoperation efficiency. With the arrival of the Industry 4.0 era, robots are applied ever more widely and used ever more frequently. On the one hand, operating the robot should become more convenient and easier to master, so that the operator can concentrate on the task and work more efficiently; on the other hand, natural and friendly interaction should let the operator feel the robot's force feedback in real time and adjust the operation accordingly, making the operation more accurate and reliable.
Existing teleoperation methods fall into two main categories: contact and non-contact. In contact methods, the operator holds a device, such as a mouse, keyboard, data glove, or exoskeleton, to control the remote robot. Examples include controlling a snake-like surgical robot (P. Berthet-Rayne, K. Leibrandt, et al., "Inverse Kinematics Control Methods for Redundant Snakelike Robot Teleoperation During Minimally Invasive Surgery," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2501-2508, 2018); controlling a remote robotic arm with a force-feedback joystick called the Phantom device (Xiaonong Xu, Aiguo Song, et al., "Visual-Haptic Aid Teleoperation Based on 3-D Environment Modeling and Updating," IEEE Transactions on Industrial Electronics, vol. 63, no. 10, pp. 6419-6428, 2016); and attaching multiple sensors to the hand and arm so that the measured pose of the human arm controls the motion of the robotic arm (S. Fani, S. Ciotti, et al., "Simplifying Telerobotics: Wearability and Teleimpedance Improves Human-Robot Interactions in Teleoperation," IEEE Robotics & Automation Magazine, vol. 25, no. 1, pp. 77-88, 2018). However, these methods interact inefficiently and unnaturally, require professional knowledge and experience, confine the motion space of the human hand to the movable space of the device, and give the operator a limited viewing angle. In non-contact methods, the measuring equipment acquires the position and posture of the human body indirectly, without touching it. Examples include attaching physical markers to the human body and obtaining the position and posture of the hand from camera images to control a robotic arm (J. Kofman, X. Wu, et al., "Teleoperation of a Robot Manipulator Using a Vision-Based Human-Robot Interface," IEEE Transactions on Industrial Electronics, vol. 52, no. 5, pp. 1206-1219, 2005); and obtaining the position and posture of the hand with a Leap Motion sensor, so the robot is controlled without physical markers (Guanglong Du, Ping Zhang, and Xin Liu, "Markerless Human-Manipulator Interface Using Leap Motion With Interval Kalman Filter and Improved Particle Filter," IEEE Transactions on Industrial Informatics, vol. 12, no. 2, pp. 694-704, 2016). However, these methods lack force feedback, so immersion and operating accuracy are insufficient; the motion space of the human hand is limited by the measurement space of the device; and the problem of a limited viewing angle remains.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a robot teleoperation system and method based on electromagnetic force feedback and augmented reality. The system comprises a natural control module and a natural feedback module. After fusing an operator's gesture text and voice text, the natural control module extracts a robot control instruction through an inference method to guide the virtual robot's motion, and the remote real robot reproduces the virtual robot's movement from data sent over the Internet. The natural feedback module includes electromagnetic force feedback, which lets the operator feel the forces acting on the robot, and visual feedback, which lets the operator observe the virtual robot from any direction.
The purpose of the invention is realized by at least one of the following technical solutions.
A robot teleoperation system based on electromagnetic force feedback and augmented reality comprises: a natural control module and a natural feedback module;
the natural control module comprises a movable operation platform, a voice acquisition module, a virtual robot and a remote real robot; after fusing the operator's gesture text, acquired through the movable operation platform, with the voice text, acquired through the voice acquisition module, the natural control module extracts a robot control instruction through an inference method to guide the virtual robot's motion; the virtual robot receives the control instruction and moves accordingly, the motion data is sent to the remote real robot over the Internet, and the remote real robot receives the data and reproduces the virtual robot's movement;
the natural feedback module comprises an electromagnetic force feedback module and a visual feedback module; the electromagnetic force feedback module lets the operator feel the forces acting on the robot, and the visual feedback module lets the operator observe the virtual robot from any direction.
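The patent leaves the control-to-robot transport unspecified beyond "sent through the Internet". The following is a minimal sketch of such a link, assuming a JSON-over-TCP wire format and a hypothetical get_joint_angles() interface to the virtual robot; none of these names come from the patent.

```python
# Minimal sketch of the control-side link that streams the virtual robot's
# motion to the remote real robot. The JSON-over-TCP transport and the field
# names (timestamp, joint_angles) are illustrative assumptions; the patent
# only states that motion data is sent over the Internet.
import json
import socket
import time

def stream_virtual_robot_motion(host: str, port: int, get_joint_angles):
    """Send the virtual robot's joint angles to the remote real robot."""
    with socket.create_connection((host, port)) as conn:
        while True:
            message = {
                "timestamp": time.time(),
                "joint_angles": get_joint_angles(),  # 6 values for a 6-DOF arm
            }
            conn.sendall((json.dumps(message) + "\n").encode("utf-8"))
            time.sleep(0.02)  # ~50 Hz update rate (assumed)
```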
Further, the movable operation platform comprises a tracking platform, a mobile robot, a checkerboard picture, motion sensors and an electromagnet. The electromagnet and two motion sensors are fixed on the tracking platform: the electromagnet is placed at the center of the platform, and the two motion sensors are fixed symmetrically on either side of it, each mounted at the end of a connecting rod and angled 45 degrees downward to enlarge the operating space of the operator's hands. The working space of a single motion sensor is a cone with a cone angle of 89.5 degrees, a height of 550 mm and a base radius of 550 mm; the sensors measure the position and direction of the palm, from which the operator's gesture text is obtained by the corresponding algorithm. The electromagnet generates an electromagnetic field to provide electromagnetic force feedback. The tracking platform is fixed at the end of the mobile robot's six-degree-of-freedom manipulator, and the mobile robot moves the tracking platform, the sensors on it, and the electromagnet through space. The checkerboard picture is pasted on the power box of the mobile robot and is used to locate the mobile robot's position in space.
Further, the voice acquisition module collects the operator's voice with a microphone array built into a Kinect camera and converts it into text form to obtain the voice text.
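As a rough illustration of this step, the sketch below uses the open-source SpeechRecognition package as a stand-in for the Kinect microphone array and the Microsoft Speech SDK named in the embodiment; the choice of recognition engine is an assumption.

```python
# Minimal sketch of the speech-text acquisition step, using the open-source
# SpeechRecognition package as a stand-in for the Microsoft Speech SDK and
# Kinect microphone array named in the embodiment.
import speech_recognition as sr

def capture_voice_text() -> str:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:          # Kinect array exposed as a microphone
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    # recognize_google performs online speech-to-text; any engine works here
    return recognizer.recognize_google(audio)
```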
Further, the electromagnetic force feedback module comprises a coil and a permanent magnet. The coil is cylindrical, with an iron core at its center around which multiple layers of copper wire are wound to generate the electromagnetic field. The coil is fixed at the center of the tracking platform, and the permanent magnet is worn on the operator's hand, so that the operator feels the forces acting on the robot. A PID controller is integrated with the coil to reduce adverse effects between the coil and the permanent magnet, and the permanent magnet is placed on the back of the hand so as not to interfere with the operator's movements.
Further, the visual feedback module comprises AR glasses, which let the operator view the robot's motion from any direction and display real-time video of the remote real robot performing its task.
Further, in the teleoperation system, the world coordinate system is defined as $X_WY_WZ_W$; according to the D-H model of the robot, the base coordinate system of the mobile robot's manipulator is defined as $X_BY_BZ_B$; the coordinate system of the robot end effector is defined as $X_EY_EZ_E$; the coordinate system of the Kinect camera in the voice acquisition module is defined as $X_KY_KZ_K$, where $Z_K$ is the optical axis of the Kinect and $X_K$ runs along the long side of the Kinect; the coordinate system of the AR (augmented reality) glasses worn by the operator is defined as $X_GY_GZ_G$; the coordinate system of the hand is defined as $X_HY_HZ_H$, where $Y_H$ is perpendicular to the plane of the palm and points toward the back of the hand, and $X_H$ is collinear with the line from the center of the palm to the middle finger; the coordinate system of the motion sensor is defined as $X_LY_LZ_L$, where $X_L$ and $Z_L$ run along the long and short sides of the motion sensor, respectively; the checkerboard picture is fixed on the mobile robot, and its coordinate system is defined as $X_IY_IZ_I$, used to locate the position of the mobile robot in the coordinate system of the Kinect in the voice acquisition module; the robot has a calibration box whose coordinate system is defined as $X_CY_CZ_C$, used to calibrate the relationship between the virtual robot and the mobile robot. According to the relationships among these coordinate systems, the position and direction of the operator's hand measured in the motion-sensor coordinate system are converted into coordinate values in the world coordinate system to control the virtual robot.
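To make the frame chain concrete, here is a minimal sketch of mapping a hand point from the motion-sensor frame into the world frame by composing homogeneous transforms; the individual 4x4 matrices would come from the calibration described above and are placeholders here.

```python
# Minimal sketch of the frame chain: a hand point measured in the motion-sensor
# frame (X_L Y_L Z_L) is mapped into the world frame (X_W Y_W Z_W) by composing
# homogeneous transforms. The concrete 4x4 matrices come from calibration and
# are illustrative placeholders here.
import numpy as np

def homogeneous(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def hand_position_in_world(p_hand_sensor, T_sensor_to_platform,
                           T_platform_to_base, T_base_to_world):
    """Chain sensor -> platform -> robot base -> world and apply it to a point."""
    T_sensor_to_world = T_base_to_world @ T_platform_to_base @ T_sensor_to_platform
    p = np.append(np.asarray(p_hand_sensor, dtype=float), 1.0)  # homogeneous point
    return (T_sensor_to_world @ p)[:3]
```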
Further, the motion sensor measures 6 parameters, comprising 3 rotation-angle components and 3 position components of the hand coordinate system relative to the motion-sensor coordinate system; an interval Kalman filter (IKF) is used to eliminate the measurement error of the measured hand position;
the rotation matrix $M_{H2W}$ from the hand coordinate system to the world coordinate system is the direction-cosine matrix

$$M_{H2W}=\begin{bmatrix}\cos\theta_{xx}&\cos\theta_{xy}&\cos\theta_{xz}\\ \cos\theta_{yx}&\cos\theta_{yy}&\cos\theta_{yz}\\ \cos\theta_{zx}&\cos\theta_{zy}&\cos\theta_{zz}\end{bmatrix},$$

where $\theta_{ij}$ represents the angle between the positive direction of the $i$-axis of the hand coordinate system and the positive direction of the $j$-axis of the world coordinate system;
the position state at time $k$ is defined as $x_k=[p_{x,k},V_{x,k},A_{x,k},p_{y,k},V_{y,k},A_{y,k},p_{z,k},V_{z,k},A_{z,k}]$, where $p_{x,k},p_{y,k},p_{z,k}$ represent the components of the palm center in the world coordinate system, $V_{x,k},V_{y,k},V_{z,k}$ represent the velocity components of the human hand along each axis of the world coordinate system, and $A_{x,k},A_{y,k},A_{z,k}$ are the acceleration components measured in the hand coordinate system; the value of $x_k$ is estimated from the noisy measurements by the IKF;
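For illustration, the sketch below implements the point-estimate core of such a position filter on one axis with a constant-acceleration model; the interval Kalman filter of the patent additionally propagates interval bounds, which this standard-KF sketch omits, and all noise values are assumptions.

```python
# Minimal sketch of the position filter on one axis: a constant-acceleration
# Kalman filter over the state [p, V, A]. The patent's interval Kalman filter
# (IKF) also propagates interval bounds on uncertain model parameters; this
# standard KF shows only the point-estimate core.
import numpy as np

dt = 0.01                                   # sampling time (assumed)
F = np.array([[1, dt, 0.5 * dt**2],         # state transition for [p, V, A]
              [0, 1, dt],
              [0, 0, 1]])
H = np.array([[1.0, 0.0, 0.0]])             # only position is measured
Q = 1e-4 * np.eye(3)                        # process-noise covariance (assumed)
R = np.array([[1e-2]])                      # measurement-noise covariance (assumed)

def kf_step(x, P, z):
    """One predict/update cycle; x is [p, V, A], z is the measured position."""
    x = F @ x                                # predict state
    P = F @ P @ F.T + Q                      # predict covariance
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()        # correct with the measurement
    P = (np.eye(3) - K @ H) @ P
    return x, P
```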
the motion sensor detects the hand orientation in the motion-sensor coordinate system as a roll angle $\phi$, a pitch angle $\theta$ and a yaw angle $\psi$; the measured Euler angles are then converted into a quaternion by the factored quaternion algorithm (FQA), and an improved particle filter (IPF) is adopted to reduce the measurement error of the measured hand orientation; the approximate posterior density at time $t_k$ is defined as

$$\hat p\left(x_{t_k}\mid z_{t_{1:k}}\right)=\frac{1}{N}\sum_{i=1}^{N}\delta\left(x_{t_k}-x_{t_k}^{(i)}\right),$$

where $x_{t_k}$ is the state at time $t_k$, $N$ is the number of samples, $x_{t_k}^{(i)}$ is the $i$-th particle at time $t_k$, and $\delta(x)$ is the Dirac delta function;
the state particles $\hat x_{t_k}^{(i)}$ are approximated with an ensemble Kalman filter; given a set of initial state particles $\{\hat x_{t_0}^{(i)}\}_{i=1}^{N}$, the ensemble forecast is

$$\hat x_{t_k}^{(i)}=f\left(\hat x_{t_{k-1}}^{(i)}\right)+w_{i,k-1},\qquad w_{i,k-1}\sim N(0,Q_{k-1}),$$

where $w_k$ represents the model error and $Q_{k-1}$ its covariance; the orientation of each particle has 4 states $(q_0,q_1,q_2,q_3)$, represented by a unit quaternion satisfying

$$q_0^2+q_1^2+q_2^2+q_3^2=1,$$

where $q_0,q_1,q_2,q_3$ represent the 4 quaternion components; the quaternion of each particle at time $t_{k+1}$ follows the discrete quaternion kinematics

$$q_{t_{k+1}}=q_{t_k}+\frac{t}{2}\,\Omega(\omega_k)\,q_{t_k},\qquad \Omega(\omega_k)=\begin{bmatrix}0&-\omega_{x,k}&-\omega_{y,k}&-\omega_{z,k}\\ \omega_{x,k}&0&\omega_{z,k}&-\omega_{y,k}\\ \omega_{y,k}&-\omega_{z,k}&0&\omega_{x,k}\\ \omega_{z,k}&\omega_{y,k}&-\omega_{x,k}&0\end{bmatrix},$$
where $\omega_{axis,k}$ represents the angular-velocity components, $axis\in(x,y,z)$, and $t$ is the sampling time; the IPF estimates the velocity and position for the orientation of each particle, and assigning each particle a weight based on the accumulated difference between the position estimated by the IKF and the position calculated for the $i$-th particle reduces the error in computing the acceleration of the object in the world coordinate system; the position difference is defined as

$$D_s^{(i)}=\sum_{k=1}^{M_s}\left\|p_{t_k}^{(i)}-\hat p_{t_k}^{(i)}\right\|,$$

where $D_s^{(i)}$ is the accumulated position difference of the $i$-th particle in the $s$-th orientation iteration, $M_s=\Delta T_s/t$, $p_{t_k}^{(i)}$ is the position state of the $i$-th orientation particle at time $t_k$, and $\hat p_{t_k}^{(i)}$ is the position of the $i$-th particle on each axis of the world coordinate system predicted by the IKF at time $k$;

the filtered position and direction data of the human hand are expressed as the text "hand position $P=(p_{x,k},p_{y,k},p_{z,k})$ and direction $D=(\phi,\theta,\psi)$", which is the gesture text.
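As a concrete illustration of the orientation-particle prediction step, the sketch below propagates a unit quaternion one step with the discrete kinematics reconstructed above and renormalizes it; the sign convention of the rate matrix is an assumption.

```python
# Minimal sketch of the orientation-particle prediction: each particle's unit
# quaternion is advanced one step with the measured angular velocity, following
# q_{k+1} = q_k + (t/2) * Omega(w_k) * q_k as reconstructed above, then
# renormalized so that q0^2 + q1^2 + q2^2 + q3^2 = 1 still holds.
import numpy as np

def omega_matrix(w):
    """4x4 angular-rate matrix Omega(w) for w = (wx, wy, wz)."""
    wx, wy, wz = w
    return np.array([[0, -wx, -wy, -wz],
                     [wx,  0,  wz, -wy],
                     [wy, -wz,  0,  wx],
                     [wz,  wy, -wx,  0]])

def propagate_quaternion(q, w, t):
    """Advance unit quaternion q by angular velocity w over sampling time t."""
    q_next = q + 0.5 * t * omega_matrix(w) @ q
    return q_next / np.linalg.norm(q_next)   # restore the unit-norm constraint
```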
Further, the operator's gesture text and voice text are fused by splicing the gesture text after the voice text; the robot control instruction is then extracted through the inference method and used for robot control, specifically as follows:

the control instruction is described by four attributes $(C_{opt},C_{dir},C_{val},C_{unit})$, where $C_{opt}$ represents the operation type, $C_{dir}$ the motion direction, $C_{val}$ the motion value, and $C_{unit}$ the unit of the motion value; when the operator controls the robot using voice and gestures, the gesture indicates the direction of the robot's motion, so the gesture text is represented as a direction vector. For example, if the operator points in a direction O and says "move 10 mm in this direction", the gesture text may be represented as "Direction: O" or "Direction: [x, y, z]", the fused text is "Move 10 mm in this direction: O (or [x, y, z])", and the extracted control instruction is $(C_{opt}=\text{MOVE},\ C_{dir}=O\ (\text{or }[x,y,z]),\ C_{val}=10,\ C_{unit}=\text{mm})$.
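A minimal sketch of this extraction step follows; the regular expression and command vocabulary are illustrative assumptions, since the patent only names the four attributes and gives the one example above.

```python
# Minimal sketch of extracting a control instruction (C_opt, C_dir, C_val,
# C_unit) from the fused text. The regular expression and the verb/unit
# vocabulary are illustrative assumptions, not the patent's inference method.
import re

FUSED_PATTERN = re.compile(
    r"(?P<opt>move|rotate)\s+(?P<val>\d+(?:\.\d+)?)\s*(?P<unit>mm|cm|deg)\s+"
    r"in this direction:\s*(?P<dir>.+)",
    re.IGNORECASE)

def extract_instruction(fused_text: str):
    m = FUSED_PATTERN.search(fused_text)
    if m is None:
        return None
    return {"C_opt": m.group("opt").upper(),
            "C_dir": m.group("dir").strip(),
            "C_val": float(m.group("val")),
            "C_unit": m.group("unit").lower()}

# e.g. extract_instruction("Move 10 mm in this direction: [0.1, 0.2, 0.97]")
# -> {'C_opt': 'MOVE', 'C_dir': '[0.1, 0.2, 0.97]', 'C_val': 10.0, 'C_unit': 'mm'}
```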
Further, the electromagnetic force feedback is realized as follows:
the current and displacement of the coil are estimated from the desired force using a backpropagation neural network (BPNN); the BPNN comprises an input layer, two hidden layers whose numbers of nodes can be adjusted dynamically, and an output layer, with 6 input parameters and 4 target output parameters; the input layer has 6 nodes for the input parameters, namely the hand position estimate $P(p_x,p_y,p_z)$ and the force from the environment $f_e(f_{e,x},f_{e,y},f_{e,z})$; the output layer has 4 nodes, corresponding to the current $I$ and the displacement $D(d_x,d_y,d_z)$; the training and testing data sets of the model both use the data format $(p_x,p_y,p_z,f_{e,x},f_{e,y},f_{e,z},I,d_x,d_y,d_z)$, and the data are randomly assigned, with 70% used for training and the remainder for testing;
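For illustration, here is a minimal BPNN of this shape in PyTorch, using the 6-14-8-4 layer sizes reported later in the embodiment; the activation function, optimizer and learning rate are assumptions.

```python
# Minimal sketch of the force-to-actuation regressor: a backpropagation neural
# network with 6 inputs (hand position p and desired force f_e), two hidden
# layers, and 4 outputs (coil current I and displacement d). The 6-14-8-4 sizes
# follow the embodiment; activations and training hyperparameters are assumed.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(6, 14), nn.Sigmoid(),   # input: (p_x, p_y, p_z, f_ex, f_ey, f_ez)
    nn.Linear(14, 8), nn.Sigmoid(),
    nn.Linear(8, 4),                  # output: (I, d_x, d_y, d_z)
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

def train_step(inputs: torch.Tensor, targets: torch.Tensor) -> float:
    """One gradient step on a batch of (6,) -> (4,) samples (70/30 split)."""
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```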
when data is collected, the inputs to the PID controller are the desired force $f_e$ and the hand position; the current $I$ and the displacements $d_x,d_y,d_z$ are adjusted dynamically so that the coil generates an appropriate force that the operator can feel; the current is adjusted so that the measured force $f_h$ generated by the coil is as close as possible to the given desired force $f_e$, and the deviation of the two forces should satisfy $|f_e-f_h|\le e$, where $e$ is a manually set deviation threshold.
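The following sketch shows such a closed loop: a PID controller drives the measured coil force toward the desired force until the deviation threshold is met. The gains and the hardware interfaces (measure_force, apply_current) are hypothetical stand-ins.

```python
# Minimal sketch of the closed-loop current adjustment: a PID controller drives
# the measured coil force f_h toward the desired force f_e until |f_e - f_h| <= e.
# Gains and the coil/sensor interfaces are illustrative assumptions.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float) -> float:
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def regulate_current(pid, f_e, measure_force, apply_current, current, e=0.05):
    """Adjust the coil current until the force deviation is within threshold e."""
    f_h = measure_force()
    while abs(f_e - f_h) > e:
        current += pid.update(f_e - f_h)   # correct current from force error
        apply_current(current)
        f_h = measure_force()
    return current
```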
The robot teleoperation method based on electromagnetic force feedback and augmented reality comprises the following steps:
S1, acquiring the operator's gesture text through the motion sensors on the operation platform;
S2, acquiring the operator's voice text through the voice acquisition module;
S3, processing the fused text, specifically as follows:
splicing the gesture text after the voice text to fuse the gesture text and the voice text, and extracting the robot control instruction through the inference method for robot control, specifically: the control instruction is described by four attributes $(C_{opt},C_{dir},C_{val},C_{unit})$, where $C_{opt}$ represents the operation type, $C_{dir}$ the motion direction, $C_{val}$ the motion value, and $C_{unit}$ the unit of the motion value; when the operator controls the robot using voice and gestures, the gesture indicates the direction of the robot's motion, so the gesture text is represented as a direction vector;
S4, realizing electromagnetic force feedback through the electromagnetic force feedback module;
S5, realizing visual feedback through the visual feedback module; a sketch tying steps S1-S5 together follows.
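A minimal sketch of one pass through steps S1-S5; the six callables stand for module interfaces the patent does not spell out and can be bound to the earlier sketches or to hypothetical stand-ins.

```python
# Minimal sketch of the overall teleoperation loop (steps S1-S5). All six
# callables are assumed module interfaces, not APIs defined by the patent.
def teleoperation_cycle(capture_gesture_text, capture_voice_text,
                        extract_instruction, drive_virtual_robot,
                        render_force_feedback, update_ar_view):
    gesture_text = capture_gesture_text()      # S1: motion sensors -> gesture text
    voice_text = capture_voice_text()          # S2: microphone array -> voice text
    fused = f"{voice_text}: {gesture_text}"    # S3: splice gesture text after voice text
    instruction = extract_instruction(fused)   #     infer (C_opt, C_dir, C_val, C_unit)
    if instruction is not None:
        drive_virtual_robot(instruction)       #     virtual robot moves; real robot copies
    render_force_feedback()                    # S4: electromagnetic force feedback
    update_ar_view()                           # S5: AR glasses visual feedback
```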
Compared with the prior art, the invention has the following advantages:
1. The invention provides non-contact force feedback: the operator can feel the robot's force feedback while interacting naturally, which gives stronger immersion.
2. The operator can guide the virtual robot's motion by hand, so the interaction is more intuitive and more efficient.
3. The operator can observe the virtual robot's motion from any angle, so the visual information obtained is more complete and the interaction more reliable.
Drawings
Fig. 1 is a structural diagram of a robot teleoperation system based on electromagnetic force feedback and augmented reality in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a coordinate system provided in an embodiment of the present invention;
FIG. 3 is a schematic diagram of closed loop control of force in an embodiment of the present invention;
fig. 4 is a flowchart of a robot teleoperation method based on electromagnetic force feedback and augmented reality according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Example:
As shown in Fig. 1, a robot teleoperation system based on electromagnetic force feedback and augmented reality comprises a natural control module and a natural feedback module;
the natural control module comprises a movable operation platform, a voice acquisition module, a virtual robot and a remote real robot; after fusing the operator's gesture text, acquired through the movable operation platform, with the voice text, acquired through the voice acquisition module, the natural control module extracts a robot control instruction through an inference method to guide the virtual robot's motion; the virtual robot receives the control instruction and moves accordingly, the motion data is sent to the remote real robot over the Internet, and the remote real robot receives the data and reproduces the virtual robot's movement;
the natural feedback module comprises an electromagnetic force feedback module and a visual feedback module; the electromagnetic force feedback module lets the operator feel the forces acting on the robot, and the visual feedback module lets the operator observe the virtual robot from any direction.
The movable operation platform comprises a tracking platform, a mobile robot, a checkerboard picture, motion sensors and an electromagnet. The electromagnet and two motion sensors are fixed on the tracking platform: the electromagnet is placed at the center of the platform, and the two motion sensors are fixed symmetrically on either side of it, each mounted at the end of a connecting rod and angled 45 degrees downward to enlarge the operating space of the operator's hands. The working space of a single motion sensor is a cone with a cone angle of 89.5 degrees, a height of 550 mm and a base radius of 550 mm; the sensors measure the position and direction of the palm, from which the operator's gesture text is obtained by the corresponding algorithm. The electromagnet generates an electromagnetic field to provide electromagnetic force feedback. The tracking platform is fixed at the end of the mobile robot's six-degree-of-freedom manipulator, and the mobile robot moves the tracking platform, the sensors on it, and the electromagnet through space. The checkerboard picture is pasted on the power box of the mobile robot and is used to locate the mobile robot's position in space.
In this embodiment, the voice acquisition module collects the operator's voice with the microphone array built into a Kinect camera; the voice is recognized by the Microsoft Speech SDK (Software Development Kit) and converted into text form to obtain the voice text.
The electromagnetic force feedback module comprises a coil and a permanent magnet. The coil is cylindrical, with an iron core at its center around which multiple layers of copper wire are wound to generate the electromagnetic field. The coil is fixed at the center of the tracking platform, and the permanent magnet is worn on the operator's hand, so that the operator feels the forces acting on the robot. A PID controller is integrated with the coil to reduce adverse effects between the coil and the permanent magnet, and the permanent magnet is placed on the back of the hand so as not to interfere with the operator's movements.
The visual feedback module comprises AR glasses, which let the operator view the robot's motion from any direction and display real-time video of the remote real robot performing its task.
In the teleoperation system, as shown in Fig. 2, the world coordinate system is defined as $X_WY_WZ_W$; according to the D-H model of the robot, the base coordinate system of the mobile robot's manipulator is defined as $X_BY_BZ_B$; the coordinate system of the robot end effector is defined as $X_EY_EZ_E$; the coordinate system of the Kinect camera in the voice acquisition module is defined as $X_KY_KZ_K$, where $Z_K$ is the optical axis of the Kinect and $X_K$ runs along the long side of the Kinect; the coordinate system of the AR (augmented reality) glasses worn by the operator is defined as $X_GY_GZ_G$; the coordinate system of the hand is defined as $X_HY_HZ_H$, where $Y_H$ is perpendicular to the plane of the palm and points toward the back of the hand, and $X_H$ is collinear with the line from the center of the palm to the middle finger; the coordinate system of the motion sensor is defined as $X_LY_LZ_L$, where $X_L$ and $Z_L$ run along the long and short sides of the motion sensor, respectively; the checkerboard picture is fixed on the mobile robot, and its coordinate system is defined as $X_IY_IZ_I$, with the origin at the upper-left corner of the checkerboard, $Z_I$ perpendicular to the checkerboard plane and $X_I$ along the short edge of the checkerboard picture; it is used to locate the position of the mobile robot in the Kinect coordinate system. The robot has a calibration box whose coordinate system is defined as $X_CY_CZ_C$ and is used to calibrate the relationship between the virtual robot and the mobile robot. According to the relationships among these coordinate systems, the position and direction of the operator's hand measured in the motion-sensor coordinate system are converted into coordinate values in the world coordinate system to control the virtual robot.
The motion sensor measures 6 parameters, comprising 3 rotation-angle components and 3 position components of the hand coordinate system relative to the motion-sensor coordinate system; an interval Kalman filter (IKF) is used to eliminate the measurement error of the measured hand position;
the rotation matrix $M_{H2W}$ from the hand coordinate system to the world coordinate system is the direction-cosine matrix

$$M_{H2W}=\begin{bmatrix}\cos\theta_{xx}&\cos\theta_{xy}&\cos\theta_{xz}\\ \cos\theta_{yx}&\cos\theta_{yy}&\cos\theta_{yz}\\ \cos\theta_{zx}&\cos\theta_{zy}&\cos\theta_{zz}\end{bmatrix},$$

where $\theta_{ij}$ represents the angle between the positive direction of the $i$-axis of the hand coordinate system and the positive direction of the $j$-axis of the world coordinate system;
the position state at time $k$ is defined as $x_k=[p_{x,k},V_{x,k},A_{x,k},p_{y,k},V_{y,k},A_{y,k},p_{z,k},V_{z,k},A_{z,k}]$, where $p_{x,k},p_{y,k},p_{z,k}$ represent the components of the palm center in the world coordinate system, $V_{x,k},V_{y,k},V_{z,k}$ represent the velocity components of the human hand along each axis of the world coordinate system, and $A_{x,k},A_{y,k},A_{z,k}$ are the acceleration components measured in the hand coordinate system; the value of $x_k$ is estimated from the noisy measurements by the IKF;
the motion sensor detects the hand orientation in the motion-sensor coordinate system as a roll angle $\phi$, a pitch angle $\theta$ and a yaw angle $\psi$; the measured Euler angles are then converted into a quaternion by the factored quaternion algorithm (FQA), and an improved particle filter (IPF) is adopted to reduce the measurement error of the measured hand orientation; the approximate posterior density at time $t_k$ is defined as

$$\hat p\left(x_{t_k}\mid z_{t_{1:k}}\right)=\frac{1}{N}\sum_{i=1}^{N}\delta\left(x_{t_k}-x_{t_k}^{(i)}\right),$$

where $x_{t_k}$ is the state at time $t_k$, $N$ is the number of samples, $x_{t_k}^{(i)}$ is the $i$-th particle at time $t_k$, and $\delta(x)$ is the Dirac delta function;
the state particles $\hat x_{t_k}^{(i)}$ are approximated with an ensemble Kalman filter; given a set of initial state particles $\{\hat x_{t_0}^{(i)}\}_{i=1}^{N}$, the ensemble forecast is

$$\hat x_{t_k}^{(i)}=f\left(\hat x_{t_{k-1}}^{(i)}\right)+w_{i,k-1},\qquad w_{i,k-1}\sim N(0,Q_{k-1}),$$

where $w_k$ represents the model error and $Q_{k-1}$ its covariance; the orientation of each particle has 4 states $(q_0,q_1,q_2,q_3)$, represented by a unit quaternion satisfying

$$q_0^2+q_1^2+q_2^2+q_3^2=1,$$

where $q_0,q_1,q_2,q_3$ represent the 4 quaternion components; the quaternion of each particle at time $t_{k+1}$ follows the discrete quaternion kinematics

$$q_{t_{k+1}}=q_{t_k}+\frac{t}{2}\,\Omega(\omega_k)\,q_{t_k},\qquad \Omega(\omega_k)=\begin{bmatrix}0&-\omega_{x,k}&-\omega_{y,k}&-\omega_{z,k}\\ \omega_{x,k}&0&\omega_{z,k}&-\omega_{y,k}\\ \omega_{y,k}&-\omega_{z,k}&0&\omega_{x,k}\\ \omega_{z,k}&\omega_{y,k}&-\omega_{x,k}&0\end{bmatrix},$$
where $\omega_{axis,k}$ represents the angular-velocity components, $axis\in(x,y,z)$, and $t$ is the sampling time; the IPF estimates the velocity and position for the orientation of each particle, and assigning each particle a weight based on the accumulated difference between the position estimated by the IKF and the position calculated for the $i$-th particle reduces the error in computing the acceleration of the object in the world coordinate system; the position difference is defined as

$$D_s^{(i)}=\sum_{k=1}^{M_s}\left\|p_{t_k}^{(i)}-\hat p_{t_k}^{(i)}\right\|,$$

where $D_s^{(i)}$ is the accumulated position difference of the $i$-th particle in the $s$-th orientation iteration, $M_s=\Delta T_s/t$, $p_{t_k}^{(i)}$ is the position state of the $i$-th orientation particle at time $t_k$, and $\hat p_{t_k}^{(i)}$ is the position of the $i$-th particle on each axis of the world coordinate system predicted by the IKF at time $k$;

the filtered position and direction data of the human hand are expressed as the text "hand position $P=(p_{x,k},p_{y,k},p_{z,k})$ and direction $D=(\phi,\theta,\psi)$", which is the gesture text.
The operator's gesture text and voice text are fused by splicing the gesture text after the voice text; the robot control instruction is then extracted through the inference method and used for robot control, specifically as follows:

the control instruction is described by four attributes $(C_{opt},C_{dir},C_{val},C_{unit})$, where $C_{opt}$ represents the operation type, $C_{dir}$ the motion direction, $C_{val}$ the motion value, and $C_{unit}$ the unit of the motion value; when the operator controls the robot using voice and gestures, the gesture indicates the direction of the robot's motion, so the gesture text is represented as a direction vector. For example, if the operator points in a direction O and says "move 10 mm in this direction", the gesture text may be represented as "Direction: O" or "Direction: [x, y, z]", the fused text is "Move 10 mm in this direction: O (or [x, y, z])", and the extracted control instruction is $(C_{opt}=\text{MOVE},\ C_{dir}=O\ (\text{or }[x,y,z]),\ C_{val}=10,\ C_{unit}=\text{mm})$.
The electromagnetic force feedback is realized as follows:
the current and displacement of the coil are estimated from the desired force using a backpropagation neural network (BPNN); the BPNN comprises an input layer, two hidden layers whose numbers of nodes can be adjusted dynamically, and an output layer, with 6 input parameters and 4 target output parameters; the input layer has 6 nodes for the input parameters, namely the hand position estimate $P(p_x,p_y,p_z)$ and the force from the environment $f_e(f_{e,x},f_{e,y},f_{e,z})$; the output layer has 4 nodes, corresponding to the current $I$ and the displacement $D(d_x,d_y,d_z)$; the training and testing data sets of the model both use the data format $(p_x,p_y,p_z,f_{e,x},f_{e,y},f_{e,z},I,d_x,d_y,d_z)$, and the data are randomly assigned, with 70% used for training and the remainder for testing;
in this embodiment, after comparing the performance of different BPNN structures, a 6-14-8-4 network, which converges well, is adopted.
As shown in FIG. 3, the input to the PID is the desired force f when the data is collectedeAnd hand position, currents I and dx,dy,dzDynamically adjusted to cause the coil to generate an appropriate force that can be felt by an operator; adjusting the current to produce a measured force f of the coilhShould be as equal as possible to a given desired force fe, the deviation of the two forces should be satisfied: l fe-fhE is less than or equal to e, and e is a deviation threshold value set manually.
As shown in fig. 4, a method for teleoperation of a robot based on electromagnetic force feedback and augmented reality includes the following steps:
S1, acquiring the operator's gesture text through the motion sensors on the operation platform;
S2, acquiring the operator's voice text through the voice acquisition module;
S3, processing the fused text, specifically as follows:
splicing the gesture text after the voice text to fuse the gesture text and the voice text, and extracting the robot control instruction through the inference method for robot control, specifically: the control instruction is described by four attributes $(C_{opt},C_{dir},C_{val},C_{unit})$, where $C_{opt}$ represents the operation type, $C_{dir}$ the motion direction, $C_{val}$ the motion value, and $C_{unit}$ the unit of the motion value; when the operator controls the robot using voice and gestures, the gesture indicates the direction of the robot's motion, so the gesture text is represented as a direction vector;
S4, realizing electromagnetic force feedback through the electromagnetic force feedback module;
S5, realizing visual feedback through the visual feedback module.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (10)

1. A robot teleoperation system based on electromagnetic force feedback and augmented reality, characterized by comprising: a natural control module and a natural feedback module;
the natural control module comprises a movable operation platform, a voice acquisition module, a virtual robot and a remote real robot; after fusing the operator's gesture text, acquired through the movable operation platform, with the voice text, acquired through the voice acquisition module, the natural control module extracts a robot control instruction through an inference method to guide the virtual robot's motion; the virtual robot receives the control instruction and moves accordingly, the motion data is sent to the remote real robot over the Internet, and the remote real robot receives the data and reproduces the virtual robot's movement;
the natural feedback module comprises an electromagnetic force feedback module and a visual feedback module; the electromagnetic force feedback module lets the operator feel the forces acting on the robot, and the visual feedback module lets the operator observe the virtual robot from any direction.
2. The robot teleoperation system based on electromagnetic force feedback and augmented reality of claim 1, wherein the movable operation platform comprises a tracking platform, a mobile robot, a checkerboard picture, motion sensors and an electromagnet; the electromagnet and two motion sensors are fixed on the tracking platform, the electromagnet being placed at the center of the platform and the two motion sensors being fixed symmetrically on either side of it, each mounted at the end of a connecting rod and angled 45 degrees downward to enlarge the operating space of the operator's hands; the working space of a single motion sensor is a cone with a cone angle of 89.5 degrees, a height of 550 mm and a base radius of 550 mm, and the sensor measures the position and direction of the palm, from which the operator's gesture text is obtained by the corresponding algorithm; the electromagnet generates an electromagnetic field to provide electromagnetic force feedback; the tracking platform is fixed at the end of the mobile robot's six-degree-of-freedom manipulator, and the mobile robot moves the tracking platform, the sensors on it and the electromagnet through space; the checkerboard picture is pasted on the power box of the mobile robot and is used to locate the mobile robot's position in space.
3. The robot teleoperation system based on electromagnetic force feedback and augmented reality of claim 1, wherein the voice acquisition module collects the operator's voice with a microphone array built into a Kinect camera and converts it into text form to obtain the voice text.
4. The robot teleoperation system based on electromagnetic force feedback and augmented reality of claim 1, wherein the electromagnetic force feedback module comprises a coil and a permanent magnet; the coil is cylindrical, with an iron core at its center around which multiple layers of copper wire are wound to generate the electromagnetic field; the coil is fixed at the center of the tracking platform, and the permanent magnet is worn on the operator's hand so that the operator feels the forces acting on the robot; a PID controller is integrated with the coil to reduce adverse effects between the coil and the permanent magnet, and the permanent magnet is placed on the back of the hand so as not to interfere with the operator's movements.
5. The robot teleoperation system based on electromagnetic force feedback and augmented reality of claim 1, wherein the visual feedback module comprises AR glasses, which let the operator view the robot's motion from any direction and display real-time video of the remote real robot performing its task.
6. The robot teleoperation system based on electromagnetic force feedback and augmented reality according to claim 1, wherein in the teleoperation system the world coordinate system is defined as $X_WY_WZ_W$; according to the D-H model of the robot, the base coordinate system of the mobile robot's manipulator is defined as $X_BY_BZ_B$; the coordinate system of the robot end effector is defined as $X_EY_EZ_E$; the coordinate system of the Kinect camera in the voice acquisition module is defined as $X_KY_KZ_K$, where $Z_K$ is the optical axis of the Kinect and $X_K$ runs along the long side of the Kinect; the coordinate system of the AR (augmented reality) glasses worn by the operator is defined as $X_GY_GZ_G$; the coordinate system of the hand is defined as $X_HY_HZ_H$, where $Y_H$ is perpendicular to the plane of the palm and points toward the back of the hand, and $X_H$ is collinear with the line from the center of the palm to the middle finger; the coordinate system of the motion sensor is defined as $X_LY_LZ_L$, where $X_L$ and $Z_L$ run along the long and short sides of the motion sensor, respectively; the checkerboard picture is fixed on the mobile robot, and its coordinate system is defined as $X_IY_IZ_I$, with the origin at the upper-left corner of the checkerboard, $Z_I$ perpendicular to the checkerboard plane and $X_I$ along the short edge of the checkerboard picture, and is used to locate the position of the mobile robot in the coordinate system of the Kinect of the voice acquisition module; the robot has a calibration box whose coordinate system is defined as $X_CY_CZ_C$ and is used to calibrate the relationship between the virtual robot and the mobile robot; according to the relationships among the above coordinate systems, the position and direction of the operator's hand measured in the motion-sensor coordinate system are converted into coordinate values in the world coordinate system to control the virtual robot.
7. The robot teleoperation system based on electromagnetic force feedback and augmented reality according to claim 2, wherein the motion sensor measures 6 parameters, comprising 3 rotation-angle components and 3 position components of the hand coordinate system relative to the motion-sensor coordinate system, and an interval Kalman filter (IKF) is used to eliminate the measurement error of the measured hand position;
the rotation matrix $M_{H2W}$ from the hand coordinate system to the world coordinate system is the direction-cosine matrix

$$M_{H2W}=\begin{bmatrix}\cos\theta_{xx}&\cos\theta_{xy}&\cos\theta_{xz}\\ \cos\theta_{yx}&\cos\theta_{yy}&\cos\theta_{yz}\\ \cos\theta_{zx}&\cos\theta_{zy}&\cos\theta_{zz}\end{bmatrix},$$

where $\theta_{ij}$ represents the angle between the positive direction of the $i$-axis of the hand coordinate system and the positive direction of the $j$-axis of the world coordinate system;
the position state at time $k$ is defined as $x_k=[p_{x,k},V_{x,k},A_{x,k},p_{y,k},V_{y,k},A_{y,k},p_{z,k},V_{z,k},A_{z,k}]$, where $p_{x,k},p_{y,k},p_{z,k}$ represent the components of the palm center in the world coordinate system, $V_{x,k},V_{y,k},V_{z,k}$ represent the velocity components of the human hand along each axis of the world coordinate system, and $A_{x,k},A_{y,k},A_{z,k}$ are the acceleration components measured in the hand coordinate system; the value of $x_k$ is estimated from the noisy measurements by the IKF;
the motion sensor detects the hand orientation in the motion-sensor coordinate system as a roll angle $\phi$, a pitch angle $\theta$ and a yaw angle $\psi$; the measured Euler angles are then converted into a quaternion by the factored quaternion algorithm (FQA), and an improved particle filter (IPF) is adopted to reduce the measurement error of the measured hand orientation; the approximate posterior density at time $t_k$ is defined as

$$\hat p\left(x_{t_k}\mid z_{t_{1:k}}\right)=\frac{1}{N}\sum_{i=1}^{N}\delta\left(x_{t_k}-x_{t_k}^{(i)}\right),$$

where $x_{t_k}$ is the state at time $t_k$, $N$ is the number of samples, $x_{t_k}^{(i)}$ is the $i$-th particle at time $t_k$, and $\delta(x)$ is the Dirac delta function;
the state particles $\hat x_{t_k}^{(i)}$ are approximated with an ensemble Kalman filter; given a set of initial state particles $\{\hat x_{t_0}^{(i)}\}_{i=1}^{N}$, the ensemble forecast is

$$\hat x_{t_k}^{(i)}=f\left(\hat x_{t_{k-1}}^{(i)}\right)+w_{i,k-1},\qquad w_{i,k-1}\sim N(0,Q_{k-1}),$$

where $w_k$ represents the model error and $Q_{k-1}$ its covariance; the orientation of each particle has 4 states $(q_0,q_1,q_2,q_3)$, represented by a unit quaternion satisfying

$$q_0^2+q_1^2+q_2^2+q_3^2=1,$$

where $q_0,q_1,q_2,q_3$ represent the 4 quaternion components; the quaternion of each particle at time $t_{k+1}$ follows the discrete quaternion kinematics

$$q_{t_{k+1}}=q_{t_k}+\frac{t}{2}\,\Omega(\omega_k)\,q_{t_k},\qquad \Omega(\omega_k)=\begin{bmatrix}0&-\omega_{x,k}&-\omega_{y,k}&-\omega_{z,k}\\ \omega_{x,k}&0&\omega_{z,k}&-\omega_{y,k}\\ \omega_{y,k}&-\omega_{z,k}&0&\omega_{x,k}\\ \omega_{z,k}&\omega_{y,k}&-\omega_{x,k}&0\end{bmatrix},$$
where $\omega_{axis,k}$ represents the angular-velocity components, $axis\in(x,y,z)$, and $t$ is the sampling time; the IPF estimates the velocity and position for the orientation of each particle, and each particle is assigned a weight based on the accumulated difference between the position estimated by the IKF and the position calculated for the $i$-th particle, which reduces the error in computing the acceleration of the object in the world coordinate system; the position difference is defined as

$$D_s^{(i)}=\sum_{k=1}^{M_s}\left\|p_{t_k}^{(i)}-\hat p_{t_k}^{(i)}\right\|,$$

where $D_s^{(i)}$ is the accumulated position difference of the $i$-th particle in the $s$-th orientation iteration, $M_s=\Delta T_s/t$, $p_{t_k}^{(i)}$ is the directly calculated position of the $i$-th orientation particle at time $t_k$ in the world coordinate system, and $\hat p_{t_k}^{(i)}$ is the position of the $i$-th particle on each axis of the world coordinate system predicted by the IKF at time $k$;

the filtered position and direction data of the human hand are expressed as the text "hand position $P=(p_{x,k},p_{y,k},p_{z,k})$ and direction $D=(\phi,\theta,\psi)$", which is the gesture text.
8. The robot teleoperation system based on electromagnetic force feedback and augmented reality of claim 1, wherein the operator's gesture text and voice text are fused by splicing the gesture text after the voice text; the robot control instruction is then extracted through the inference method and used for robot control, specifically as follows:

the control instruction is described by four attributes $(C_{opt},C_{dir},C_{val},C_{unit})$, where $C_{opt}$ represents the operation type, $C_{dir}$ the motion direction, $C_{val}$ the motion value, and $C_{unit}$ the unit of the motion value; when the operator controls the robot using voice and gestures, the gesture indicates the direction of the robot's motion, so the gesture text is represented as a direction vector.
9. The robot teleoperation system based on electromagnetic force feedback and augmented reality according to claim 4, wherein the electromagnetic force feedback is realized as follows:
the current and displacement of the coil are estimated from the desired force using a backpropagation neural network (BPNN); the BPNN comprises an input layer, two hidden layers whose numbers of nodes can be adjusted dynamically, and an output layer, with 6 input parameters and 4 target output parameters; the input layer has 6 nodes for the input parameters, namely the hand position estimate $P(p_x,p_y,p_z)$ and the force from the environment $f_e(f_{e,x},f_{e,y},f_{e,z})$; the output layer has 4 nodes, corresponding to the current $I$ and the displacement $D(d_x,d_y,d_z)$; the training and testing data sets of the model both use the data format $(p_x,p_y,p_z,f_{e,x},f_{e,y},f_{e,z},I,d_x,d_y,d_z)$, and the data are randomly assigned, with 70% used for training and the rest for testing;
when data is collected, the inputs to the PID controller are the desired force $f_e$ and the hand position; the current $I$ and the displacements $d_x,d_y,d_z$ are adjusted dynamically so that the coil generates an appropriate force that the operator can feel; the current is adjusted so that the measured force $f_h$ generated by the coil is as close as possible to the given desired force $f_e$, and the deviation of the two forces should satisfy $|f_e-f_h|\le e$, where $e$ is a manually set deviation threshold.
10. The robot teleoperation method based on electromagnetic force feedback and augmented reality is characterized by comprising the following steps of:
S1, acquiring the operator's gesture text through the motion sensors on the operation platform;
S2, acquiring the operator's voice text through the voice acquisition module;
S3, processing the fused text, specifically as follows:
splicing the gesture text after the voice text to fuse the gesture text and the voice text, and extracting the robot control instruction through the inference method for robot control, specifically: the control instruction is described by four attributes $(C_{opt},C_{dir},C_{val},C_{unit})$, where $C_{opt}$ represents the operation type, $C_{dir}$ the motion direction, $C_{val}$ the motion value, and $C_{unit}$ the unit of the motion value; when the operator controls the robot using voice and gestures, the gesture indicates the direction of the robot's motion, so the gesture text is represented as a direction vector;
S4, realizing electromagnetic force feedback through the electromagnetic force feedback module;
S5, realizing visual feedback through the visual feedback module.
CN201911046808.2A 2019-10-30 2019-10-30 Robot teleoperation system and method based on electromagnetic force feedback and augmented reality Active CN110815258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911046808.2A CN110815258B (en) 2019-10-30 2019-10-30 Robot teleoperation system and method based on electromagnetic force feedback and augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911046808.2A CN110815258B (en) 2019-10-30 2019-10-30 Robot teleoperation system and method based on electromagnetic force feedback and augmented reality

Publications (2)

Publication Number Publication Date
CN110815258A true CN110815258A (en) 2020-02-21
CN110815258B CN110815258B (en) 2023-03-31

Family

ID=69551554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911046808.2A Active CN110815258B (en) 2019-10-30 2019-10-30 Robot teleoperation system and method based on electromagnetic force feedback and augmented reality

Country Status (1)

Country Link
CN (1) CN110815258B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111438499A (en) * 2020-03-30 2020-07-24 华南理工大学 5G + industrial AR-based assembly method using constraint-free force feedback
CN111459452A (en) * 2020-03-31 2020-07-28 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111459274A (en) * 2020-03-30 2020-07-28 华南理工大学 5G + AR-based remote operation method for unstructured environment
CN111459451A (en) * 2020-03-31 2020-07-28 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111459454A (en) * 2020-03-31 2020-07-28 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111724487A (en) * 2020-06-19 2020-09-29 广东浪潮大数据研究有限公司 Flow field data visualization method, device, equipment and storage medium
CN112667139A (en) * 2020-12-11 2021-04-16 深圳市越疆科技有限公司 Robot operation method, device, equipment and storage medium based on augmented reality
CN113313346A (en) * 2021-04-19 2021-08-27 贵州电网有限责任公司 Visual implementation method of artificial intelligence scheduling operation based on AR glasses
CN114310903A (en) * 2022-01-19 2022-04-12 梅蓉 Manipulator control method and system based on bilateral teleoperation
CN116476100A (en) * 2023-06-19 2023-07-25 兰州空间技术物理研究所 Remote operation system of multi-branch space robot

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003062775A (en) * 2001-08-24 2003-03-05 Japan Science & Technology Corp Teaching system for human hand type robot
CN105291138A (en) * 2015-11-26 2016-02-03 华南理工大学 Visual feedback platform improving virtual reality immersion degree
CN106095109A (en) * 2016-06-20 2016-11-09 华南理工大学 The method carrying out robot on-line teaching based on gesture and voice
CN107030692A (en) * 2017-03-28 2017-08-11 浙江大学 One kind is based on the enhanced manipulator teleoperation method of perception and system
CN107351058A (en) * 2017-06-08 2017-11-17 华南理工大学 Robot teaching method based on augmented reality
CN107430437A (en) * 2015-02-13 2017-12-01 厉动公司 The system and method that real crawl experience is created in virtual reality/augmented reality environment
CN108161882A (en) * 2017-12-08 2018-06-15 华南理工大学 A kind of robot teaching reproducting method and device based on augmented reality
CN108406725A (en) * 2018-02-09 2018-08-17 华南理工大学 Force feedback man-machine interactive system and method based on electromagnetic theory and mobile tracking
CN109521868A (en) * 2018-09-18 2019-03-26 华南理工大学 A kind of dummy assembly method interacted based on augmented reality and movement
CN109955254A (en) * 2019-04-30 2019-07-02 齐鲁工业大学 The remote operating control method of Mobile Robot Control System and robot end's pose

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003062775A (en) * 2001-08-24 2003-03-05 Japan Science & Technology Corp Teaching system for human hand type robot
CN107430437A (en) * 2015-02-13 2017-12-01 厉动公司 The system and method that real crawl experience is created in virtual reality/augmented reality environment
CN105291138A (en) * 2015-11-26 2016-02-03 华南理工大学 Visual feedback platform improving virtual reality immersion degree
CN106095109A (en) * 2016-06-20 2016-11-09 华南理工大学 The method carrying out robot on-line teaching based on gesture and voice
CN107030692A (en) * 2017-03-28 2017-08-11 浙江大学 One kind is based on the enhanced manipulator teleoperation method of perception and system
CN107351058A (en) * 2017-06-08 2017-11-17 华南理工大学 Robot teaching method based on augmented reality
CN108161882A (en) * 2017-12-08 2018-06-15 华南理工大学 A kind of robot teaching reproducting method and device based on augmented reality
CN108406725A (en) * 2018-02-09 2018-08-17 华南理工大学 Force feedback man-machine interactive system and method based on electromagnetic theory and mobile tracking
CN109521868A (en) * 2018-09-18 2019-03-26 华南理工大学 A kind of dummy assembly method interacted based on augmented reality and movement
CN109955254A (en) * 2019-04-30 2019-07-02 齐鲁工业大学 The remote operating control method of Mobile Robot Control System and robot end's pose

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
何子平 (He Ziping), "具有力反馈功能的人机交互技术及系统研究" [Research on Human-Computer Interaction Technology and Systems with Force Feedback], China Excellent Master's and Doctoral Dissertations Full-text Database (Master's), Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111438499A (en) * 2020-03-30 2020-07-24 华南理工大学 5G + industrial AR-based assembly method using constraint-free force feedback
CN111459274A (en) * 2020-03-30 2020-07-28 华南理工大学 5G + AR-based remote operation method for unstructured environment
CN111459452A (en) * 2020-03-31 2020-07-28 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111459451A (en) * 2020-03-31 2020-07-28 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111459454A (en) * 2020-03-31 2020-07-28 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111459454B (en) * 2020-03-31 2021-08-20 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111724487A (en) * 2020-06-19 2020-09-29 广东浪潮大数据研究有限公司 Flow field data visualization method, device, equipment and storage medium
CN111724487B (en) * 2020-06-19 2023-05-16 广东浪潮大数据研究有限公司 Flow field data visualization method, device, equipment and storage medium
CN112667139A (en) * 2020-12-11 2021-04-16 深圳市越疆科技有限公司 Robot operation method, device, equipment and storage medium based on augmented reality
CN113313346A (en) * 2021-04-19 2021-08-27 贵州电网有限责任公司 Visual implementation method of artificial intelligence scheduling operation based on AR glasses
CN114310903A (en) * 2022-01-19 2022-04-12 梅蓉 Manipulator control method and system based on bilateral teleoperation
CN116476100A (en) * 2023-06-19 2023-07-25 兰州空间技术物理研究所 Remote operation system of multi-branch space robot

Also Published As

Publication number Publication date
CN110815258B (en) 2023-03-31

Similar Documents

Publication Publication Date Title
CN110815258B (en) Robot teleoperation system and method based on electromagnetic force feedback and augmented reality
Du et al. Markerless human–manipulator interface using leap motion with interval Kalman filter and improved particle filter
Hajiloo et al. Robust online model predictive control for a constrained image-based visual servoing
WO2021143294A1 (en) Sensor calibration method and apparatus, data measurement method and apparatus, device, and storage medium
CN105242533B (en) A kind of change admittance remote operating control method for merging multi information
Xu et al. Visual-haptic aid teleoperation based on 3-D environment modeling and updating
JP2021000678A (en) Control system and control method
JP7117237B2 (en) ROBOT CONTROL DEVICE, ROBOT SYSTEM AND ROBOT CONTROL METHOD
CN102814814A (en) Kinect-based man-machine interaction method for two-arm robot
Melchiorre et al. Collison avoidance using point cloud data fusion from multiple depth sensors: a practical approach
Kamali et al. Real-time motion planning for robotic teleoperation using dynamic-goal deep reinforcement learning
CN113103230A (en) Human-computer interaction system and method based on remote operation of treatment robot
US11422625B2 (en) Proxy controller suit with optional dual range kinematics
Chen et al. A human–robot interface for mobile manipulator
Lambrecht et al. Markerless gesture-based motion control and programming of industrial robots
Palmieri et al. Human arm motion tracking by kinect sensor using kalman filter for collaborative robotics
Li et al. Neural learning and kalman filtering enhanced teaching by demonstration for a baxter robot
CN110794969B (en) Natural man-machine interaction method for non-contact force feedback
WO2021171353A1 (en) Control device, control method, and recording medium
Du et al. A gesture-and speech-guided robot teleoperation method based on mobile interaction with unrestricted force feedback
Nandikolla et al. Teleoperation Robot Control of a Hybrid EEG‐Based BCI Arm Manipulator Using ROS
Lopez et al. Taichi algorithm: Human-like arm data generation applied on non-anthropomorphic robotic manipulators for demonstration
Du et al. A novel natural mobile human-machine interaction method with augmented reality
Grasshoff et al. 7dof hand and arm tracking for teleoperation of anthropomorphic robots
Griffin Shared control for dexterous telemanipulation with haptic feedback

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant