CN110815258B - Robot teleoperation system and method based on electromagnetic force feedback and augmented reality - Google Patents


Publication number
CN110815258B
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911046808.2A
Other languages
Chinese (zh)
Other versions
CN110815258A
Inventor
杜广龙 (Du Guanglong)
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Application filed by South China University of Technology (SCUT)
Priority to CN201911046808.2A
Publication of CN110815258A
Application granted
Publication of CN110815258B
Status: Active

Classifications

    • B25J13/003 — Controls for manipulators by means of an audio-responsive input
    • B25J9/1602 — Programme controls characterised by the control system, structure, architecture
    • B25J9/1628 — Programme controls characterised by the control loop
    • B25J9/1694 — Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control; multi-sensor controlled systems; sensor fusion
    • B25J9/1697 — Vision controlled systems
    • G06F3/016 — Input arrangements with force or tactile feedback as computer generated output to the user
    • G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26 — Speech to text systems
    • G10L2015/223 — Execution procedure of a spoken command
    • Y02P90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention provides a robot teleoperation system and method based on electromagnetic force feedback and augmented reality. The system includes a natural control module and a natural feedback module. After the gesture text and the voice text of an operator are fused, the natural control module extracts a robot control instruction through an inference method to guide the virtual robot to move, and the remote real robot copies the movement of the virtual robot based on data sent over the Internet; the natural feedback module includes electromagnetic force feedback, which allows the operator to feel the forces acting on the robot, and visual feedback, which allows the operator to observe the virtual robot from any direction. Using this system to teleoperate a robot, the operator can sense the forces on the robot in real time and observe the robot performing its task, which provides strong immersion and improves operation efficiency and accuracy; the system is suitable for non-professional operators, is universal and easy to operate, and has a wide range of applications.

Description

Robot teleoperation system and method based on electromagnetic force feedback and augmented reality
Technical Field
The invention belongs to the field of robot control, and particularly relates to a robot teleoperation system and method based on electromagnetic force feedback and augmented reality.
Background
Teleoperation allows a robot to work in harsh environments that are inaccessible to humans. However, conventional methods generally observe the robot through video or a 3D model and lack force feedback, and they suffer from disadvantages such as a limited field of view, unfriendly interaction, lack of immersion, and low teleoperation efficiency. With the advent of Industry 4.0, robots are applied in more fields and used more frequently. On the one hand, operating a robot should become more convenient and easier to master, so that operators can concentrate on the task itself and operation efficiency is improved; on the other hand, natural and friendly interaction should be provided, so that operators can feel real-time force feedback from the robot, adjust their operation in real time, and operate more accurately and reliably.
Existing teleoperation methods can be divided into two main categories: contact and non-contact. In contact methods, the operator holds a device, such as a mouse, keyboard, data glove, or exoskeleton, to control the remote robot. For example, a snake-like robot has been operated using a hand-held controller (P. Berthet-Rayne, K. Leibrandt, et al., "Inverse Kinematics Control Methods for Redundant Snake-like Robot Teleoperation During Minimally Invasive Surgery," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2501-2508, 2018); a remote robotic arm has been controlled using a force-feedback joystick known as the "Phantom" device (Xiaonong Xu, Aiguo Song, et al., "Visual-Haptic Aid Teleoperation Based on 3-D Environment Modeling and Updating," IEEE Transactions on Industrial Electronics, vol. 63, no. 10, pp. 6419-6428, 2016); and multiple sensors have been attached to the hand and arm so that the measured pose of the human arm controls the motion of the robotic arm (S. Fani, S. Ciotti, et al., "Simplifying Telerobotics: Wearability and Teleimpedance Improves Human-Robot Interactions in Teleoperation," IEEE Robotics & Automation Magazine, vol. 25, no. 1, pp. 77-88, 2018). However, these methods are inefficient and unnatural in interaction, require professional operating knowledge and experience, confine the movement space of the human hand to the movable space of the apparatus, and give the operator a limited viewing angle. In non-contact methods, the measuring equipment does not touch the human body directly, and the position and posture of the human body are obtained indirectly. For example, physical markers are attached to the human body and the position and posture of the human hand are obtained from images taken by a camera, thereby controlling a robot arm (J. Kofman, X. Wu, et al., "Teleoperation of a Robot Manipulator Using a Vision-Based Human-Robot Interface," IEEE Transactions on Industrial Electronics, vol. 52, no. 5, pp. 1206-1219, 2005); or the position and posture of the human hand are obtained by a Leap Motion sensor, so the robot can be controlled without physical markers (Guanglong Du, Ping Zhang, and Xin Liu, "Markerless Human-Manipulator Interface Using Leap Motion With Interval Kalman Filter and Improved Particle Filter," IEEE Transactions on Industrial Informatics, vol. 12, no. 2, pp. 694-704, 2016). However, these methods lack force feedback, which limits the immersion and accuracy of the operation; the movement space of the human hand is still limited by the measurement space of the apparatus, and the problem of a limited viewing angle remains.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a robot teleoperation system and a method based on electromagnetic force feedback and augmented reality, wherein the system comprises a natural control module and a natural feedback module. After the gesture text and the voice text of an operator are fused, a natural control module extracts a robot control instruction through an inference method to guide the virtual robot to move, and the remote real robot copies the movement of the virtual robot based on data sent through the Internet; the natural feedback module includes electromagnetic force feedback that allows an operator to feel the force of the robot and visual feedback that allows the operator to observe the virtual robot from any direction.
The purpose of the invention is realized by at least one of the following technical solutions.
Teleoperation system of robot based on electromagnetic force feedback and augmented reality includes: the natural control module and the natural feedback module;
the natural control module comprises a movable operation platform, a voice acquisition module, a virtual robot and a remote real robot; the natural control module is used for extracting a robot control instruction to guide the virtual robot to move through an inference method after the gesture text and the voice text of the operator acquired through the movable operation platform and the voice acquisition module are fused, the virtual robot receives the robot control instruction and moves according to the instruction, the movement data is sent to the remote real robot through the Internet, and the remote real robot receives the data and copies the movement of the virtual robot;
the natural feedback module comprises an electromagnetic force feedback module and a visual feedback module; the electromagnetic force feedback module is used for enabling an operator to feel the strength of the robot, and the visual feedback module is used for enabling the operator to observe the virtual robot from any direction.
Further, the movable operation platform comprises a tracking platform, a mobile robot, a checkerboard picture, a motion sensor and an electromagnet; an electromagnet and two motion sensors are fixed on the tracking platform, wherein the electromagnet is placed in the center of the platform, the two motion sensors are symmetrically fixed on two sides of the electromagnet and are respectively installed at the tail end of a connecting rod and face downwards at an angle of 45 degrees for expanding the operation space of the hands of an operator; the working space of a single motion sensor is a cone with a cone angle of 89.5 degrees, a height of 550 millimeters and a bottom radius of 550 millimeters, and is used for measuring the position and the direction of a palm and obtaining a gesture text of an operator through a corresponding algorithm; the electromagnet is used for generating an electromagnetic field to provide electromagnetic force feedback; the tracking platform is fixed at the tail end of a six-degree-of-freedom mechanical arm of the mobile robot, and the mobile robot is used for enabling the tracking platform, a sensor on the platform and an electromagnet to move in space; a checkerboard picture is pasted on a power box of the mobile robot and used for positioning the position of the mobile robot in space.
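As a rough illustration of the conical workspace described above, the following sketch tests whether a palm position lies inside one motion sensor's cone. The frame convention (apex at the sensor origin, axis along -Z for a downward-facing sensor) is an assumption for illustration, not taken from the patent.

```python
import math

def in_workspace(p, cone_angle_deg=89.5, height=550.0):
    """Return True if point p (mm, sensor frame) lies inside a downward
    cone with the given full cone angle and height; assumed convention:
    apex at the origin, axis along -Z."""
    x, y, z = p
    depth = -z                                   # distance along the cone axis
    if not (0.0 <= depth <= height):
        return False
    radial = math.hypot(x, y)                    # distance from the axis
    half_angle = math.radians(cone_angle_deg / 2.0)
    return radial <= depth * math.tan(half_angle)

inside = in_workspace((50.0, 0.0, -300.0))       # near the axis, mid-depth
outside = in_workspace((400.0, 0.0, -300.0))     # too far off-axis
```

Note that a full angle of 89.5 degrees gives a bottom radius of roughly 545 mm at a height of 550 mm, consistent with the dimensions stated in the text.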
Further, the voice acquisition module adopts a microphone array built in the Kinect camera to collect voice of the operator, and converts the voice of the operator into a text form to obtain a voice text.
Further, the electromagnetic force feedback module comprises a coil and a permanent magnet; the coil is cylindrical, the center of the coil is an iron core, and a plurality of layers of copper wires are wound around the iron core and used for generating an electromagnetic field; the coil is fixed at the center of the tracking platform, and the permanent magnet is worn on the hand of an operator, so that the operator feels the stress of the robot; a PID controller is integrated in the coil to reduce the adverse effects of the coil and the permanent magnet, which is placed on the back of the human hand to avoid interfering with the operation of the operator.
Further, the visual feedback module comprises AR glasses, which enable the operator to view the robot motion from any direction and display real-time video of the remote real robot performing the task.
Further, in the teleoperation system, a world coordinate system is defined as X_W Y_W Z_W. According to the robot D-H model, the base coordinate system of the mechanical arm of the mobile robot is defined as X_B Y_B Z_B; the coordinate system of the robot end effector is defined as X_E Y_E Z_E; the coordinate system of the Kinect camera in the voice acquisition module is defined as X_K Y_K Z_K, where Z_K is the optical axis of the Kinect and X_K is along its long side; the coordinate system of the AR (Augmented Reality) glasses worn by the operator is defined as X_G Y_G Z_G; the coordinate system of the hand is defined as X_H Y_H Z_H, with Y_H perpendicular to the plane of the palm and pointing towards the back of the hand, and X_H collinear with the line from the center of the palm to the middle finger; the coordinate system of the motion sensor is defined as X_L Y_L Z_L, with X_L and Z_L along the long and short sides of the motion sensor, respectively. The checkerboard picture is fixed on the mobile robot and its coordinate system is defined as X_I Y_I Z_I; it is used for locating the mobile robot in the Kinect camera coordinate system. The robot has a calibration box whose coordinate system is defined as X_C Y_C Z_C, used for calibrating the relationship between the virtual robot and the mobile robot. According to the relationships among the above coordinate systems, the position and direction of the operator's hand measured in the motion sensor coordinate system are converted into coordinate values in the world coordinate system for controlling the virtual robot.
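The conversion from the sensor frame to the world frame is an ordinary rigid-body transform. The following minimal sketch applies one rotation plus translation to a measured hand position; the specific rotation and offset values are illustrative stand-ins for the full calibrated chain of frames, not calibration data from the patent.

```python
import math

def rot_z(theta):
    """Rotation about Z by theta (radians), as a 3x3 row-major list."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def transform(R, t, p):
    """Apply the rigid transform (R, t) to point p, i.e. R @ p + t."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# Hand position measured in the motion-sensor frame L (values illustrative).
p_hand_L = [0.10, 0.05, 0.30]

# Hypothetical sensor-to-world transform: sensor yawed 90 degrees relative
# to the world frame, with its origin at (0.50, 0.20, 1.00) in world space.
R_L2W = rot_z(math.pi / 2)
t_L2W = [0.50, 0.20, 1.00]

p_hand_W = transform(R_L2W, t_L2W, p_hand_L)
```

In the real system this single transform would be replaced by the composition of the calibrated sensor-platform, platform-robot, and robot-world transforms.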
Further, the motion sensor obtains 6 parameters through measurement, wherein the parameters comprise 3 rotation angle components and 3 position components of a hand coordinate system relative to a motion sensor coordinate system, and an Interval Kalman Filter (IKF) is used for eliminating measurement errors of the measured hand position;
The rotation matrix $M_{H2W}$ from the hand coordinate system to the world coordinate system is:

$$M_{H2W}=\begin{bmatrix}\cos\alpha_{xx}&\cos\alpha_{yx}&\cos\alpha_{zx}\\\cos\alpha_{xy}&\cos\alpha_{yy}&\cos\alpha_{zy}\\\cos\alpha_{xz}&\cos\alpha_{yz}&\cos\alpha_{zz}\end{bmatrix}$$

where $\alpha_{ij}$, $i,j\in\{x,y,z\}$, represents the angle between the positive direction of the $i$-axis of the hand coordinate system and the positive direction of the $j$-axis of the world coordinate system;
the position state at time $k$ is defined as $x_k=[p_{x,k},V_{x,k},A_{x,k},p_{y,k},V_{y,k},A_{y,k},p_{z,k},V_{z,k},A_{z,k}]$, where $p_{x,k},p_{y,k},p_{z,k}$ are the components of the palm center in the world coordinate system, $V_{x,k},V_{y,k},V_{z,k}$ are the velocity components of the human hand along each axis of the world coordinate system, and $A_{x,k},A_{y,k},A_{z,k}$ are the acceleration components measured in the hand coordinate system; the value of $x_k$ is estimated from noisy measurements by the IKF;
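As a toy illustration of this position filtering, the sketch below runs a plain 1-D constant-acceleration Kalman filter (the state [p, V, A] on one axis; not the interval variant the text specifies) over noisy palm-position samples. All gains and noise values are illustrative assumptions.

```python
def kalman_1d(zs, dt=0.01, q=1e-3, r=1e-2):
    """Smooth a list of position measurements zs with a [p, v, a]
    constant-acceleration Kalman filter; returns filtered positions."""
    x = [zs[0], 0.0, 0.0]                         # state [p, v, a]
    P = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    F = [[1.0, dt, 0.5 * dt * dt],                # transition matrix
         [0.0, 1.0, dt],
         [0.0, 0.0, 1.0]]
    out = []
    for z in zs:
        # Predict: x = F x,  P = F P F^T + Q (Q diagonal with value q)
        x = [sum(F[i][k] * x[k] for k in range(3)) for i in range(3)]
        FP = [[sum(F[i][k] * P[k][j] for k in range(3)) for j in range(3)]
              for i in range(3)]
        P = [[sum(FP[i][k] * F[j][k] for k in range(3)) + (q if i == j else 0.0)
              for j in range(3)] for i in range(3)]
        # Update with position measurement z (H = [1, 0, 0])
        S = P[0][0] + r
        K = [P[i][0] / S for i in range(3)]
        y = z - x[0]
        x = [x[i] + K[i] * y for i in range(3)]
        P = [[P[i][j] - K[i] * P[0][j] for j in range(3)] for i in range(3)]
        out.append(x[0])
    return out

smoothed = kalman_1d([1.0] * 50)   # a stationary palm held at p = 1.0
```

An interval Kalman filter would additionally propagate interval bounds on the state and covariance; this sketch shows only the point-valued core.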
the motion sensor detects the direction of the hand in the motion sensor coordinate system, comprising the roll angle $\phi$, pitch angle $\theta$ and yaw angle $\psi$; the measured Euler angles are then converted into a quaternion by the Factored Quaternion Algorithm (FQA), and an Improved Particle Filter (IPF) is used to reduce the measurement error of the hand direction; the approximate posterior density at time $t_k$ is defined as follows:

$$p\left(x_{t_k}\mid z_{1:t_k}\right)\approx\sum_{i=1}^{N}w_{t_k}^{i}\,\delta\!\left(x_{t_k}-x_{t_k}^{i}\right)$$

where $w_{t_k}^{i}$ is the weight of the $i$-th particle at time $t_k$, $N$ is the number of samples, $x_{t_k}^{i}$ is the state of the $i$-th particle at time $t_k$, and $\delta(x)$ is the Dirac delta function;
an ensemble Kalman filter is used to approximate the probability density function of the state particles $x_{t_k}^{i}$; given a group of initial state particles $x_{t_0}^{i}$, the ensemble prediction $\hat{x}_{k}^{i}$ is:

$$\hat{x}_{k}^{i}=f\left(x_{k-1}^{i}\right)+w_{k},\qquad w_{k}\sim\mathcal{N}\left(0,Q_{k-1}\right)$$
where $w_k$ represents the model error and $Q_{k-1}$ is the covariance of the model error; each particle's direction has 4 states $q^{i}=\left(q_{0}^{i},q_{1}^{i},q_{2}^{i},q_{3}^{i}\right)$, represented by a unit quaternion satisfying the following condition:

$$\left(q_{0}^{i}\right)^{2}+\left(q_{1}^{i}\right)^{2}+\left(q_{2}^{i}\right)^{2}+\left(q_{3}^{i}\right)^{2}=1$$

where $q_{0}^{i},q_{1}^{i},q_{2}^{i},q_{3}^{i}$ represent the 4 basic quaternion components; the quaternion components of each particle at time $t_{k+1}$ are defined as follows:

$$q^{i}\left(t_{k+1}\right)=q^{i}\left(t_{k}\right)+\frac{t}{2}\,\Omega\left(\omega_{k}\right)q^{i}\left(t_{k}\right)$$
where $\Omega(\omega_k)$ is the skew-symmetric matrix formed from the angular velocity components $\omega_{axis,k}$, $axis\in(x,y,z)$, and $t$ is the sampling time; the IPF estimates the velocity and position for the direction of each particle, and assigning a weight to each particle based on the accumulated difference between the position estimated by the IKF and the position calculated for the $i$-th particle reduces the error in computing the acceleration of the object in the world coordinate system; the position difference is defined as follows:
$$\Delta P_{s}^{i}=\sum_{k=1}^{M_{s}}\left\|P^{i}\left(t_{k}\right)-\hat{P}_{k}\right\|$$

where $\Delta P_{s}^{i}$ is the accumulated position difference of the $i$-th particle in the $s$-th direction iteration, $M_{s}=\Delta T_{s}/t$, $P^{i}(t_{k})$ is the position state of the $i$-th oriented particle at time $t_{k}$, and $\hat{P}_{k}$ is the position of the $i$-th particle on each axis of the world coordinate system predicted by the IKF at time $k$;
the position and direction data of the human hand obtained by the filtering are represented as the text "hand position $P=(p_{x,k},p_{y,k},p_{z,k})$, direction $D=(\phi,\theta,\psi)$", yielding the gesture text.
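The Euler-to-quaternion conversion mentioned above can be sketched as follows. This uses the standard ZYX (roll-pitch-yaw) closed-form conversion as a stand-in; it is not the patent's exact FQA derivation, which factors the quaternion from accelerometer and magnetometer readings.

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert (roll phi, pitch theta, yaw psi) in radians to a unit
    quaternion (w, x, y, z), ZYX convention."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return (w, x, y, z)

q = euler_to_quaternion(0.0, 0.0, math.pi / 2)   # pure 90-degree yaw
```

A pure yaw of 90 degrees yields (cos 45°, 0, 0, sin 45°), a unit quaternion as required by the normalization condition above.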
Further, the gesture text and the voice text of the operator are fused by splicing the gesture text after the voice text; the robot control instruction is then extracted by the inference method and used for robot control, specifically as follows:
A control instruction is described by four attributes $(C_{opt},C_{dir},C_{val},C_{unit})$, where $C_{opt}$ represents the operation type, $C_{dir}$ the motion direction, $C_{val}$ the motion value, and $C_{unit}$ the unit of the motion value. When the operator controls the robot using voice and gestures, the gesture indicates the direction of robot motion, so the gesture text is represented as a direction vector. For example, if the operator points in a direction O and says "Move 10mm in this direction", the gesture text may be represented as "Direction O" or "Direction: [x, y, z]", the fused text is "Move 10mm in this direction O (or [x, y, z])", and the extracted control instruction is $(C_{opt}=\mathrm{MOVE},\ C_{dir}=O\ \mathrm{or}\ [x,y,z],\ C_{val}=10,\ C_{unit}=\mathrm{mm})$.
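A minimal sketch of this extraction step is shown below. The regex vocabulary (only "move"/"rotate" and a few units) and the bracketed direction format are illustrative assumptions, not the patent's actual inference method.

```python
import re

def extract_instruction(fused_text):
    """Parse a fused text like 'Move 10mm in this direction [0.0, 0.7, 0.7]'
    into the four-attribute instruction (C_opt, C_dir, C_val, C_unit)."""
    m = re.search(r"(?i)\b(move|rotate)\b\s+(\d+(?:\.\d+)?)\s*(mm|cm|deg)",
                  fused_text)
    if not m:
        return None                               # no recognized command
    d = re.search(r"\[([^\]]+)\]", fused_text)    # gesture direction vector
    direction = [float(v) for v in d.group(1).split(",")] if d else None
    return {"C_opt": m.group(1).upper(),
            "C_val": float(m.group(2)),
            "C_unit": m.group(3),
            "C_dir": direction}

cmd = extract_instruction("Move 10mm in this direction [0.0, 0.7, 0.7]")
```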
Further, the electromagnetic force feedback is realized as follows:
A Back Propagation Neural Network (BPNN) is used to estimate the coil current and displacement from the expected force. The BPNN comprises an input layer, two hidden layers with dynamically adjustable numbers of nodes, and an output layer; the model has 6 input parameters and 4 target output parameters. The input layer has 6 nodes for the input parameters, namely the hand position estimate $P(p_x,p_y,p_z)$ and the force from the environment $f_e(f_{e,x},f_{e,y},f_{e,z})$; the output layer has 4 nodes corresponding to the present current $I$ and the displacement $D(d_x,d_y,d_z)$. The data format of both the training and testing data sets is $(p_x,p_y,p_z,f_{e,x},f_{e,y},f_{e,z},I,d_x,d_y,d_z)$; the data are randomly assigned, with 70% used for training and the remainder for testing;
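The shape of such a network (6 inputs, two hidden layers, 4 outputs) can be sketched as a forward pass. The hidden widths, random weights, and absence of biases here are placeholders; a trained BPNN would additionally run backpropagation on the 70% training split.

```python
import math, random

random.seed(0)

def layer(n_in, n_out):
    """A random (n_out x n_in) weight matrix; stand-in for trained weights."""
    return [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
            for _ in range(n_out)]

def forward(x, weights):
    """Forward pass: sigmoid hidden layers, linear output layer, no biases."""
    for i, W in enumerate(weights):
        x = [sum(w * v for w, v in zip(row, x)) for row in W]
        if i < len(weights) - 1:
            x = [1.0 / (1.0 + math.exp(-v)) for v in x]
    return x

# 6 -> 8 -> 8 -> 4: inputs (p_x, p_y, p_z, f_ex, f_ey, f_ez),
# outputs (I, d_x, d_y, d_z); hidden widths 8 are arbitrary placeholders
# for the "dynamically adjustable" node counts in the text.
net = [layer(6, 8), layer(8, 8), layer(8, 4)]
y = forward([0.1, 0.2, 0.3, 0.5, 0.0, -0.5], net)
```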
When data is collected, the inputs to the PID controller are the desired force $f_e$ and the hand position; the current $I$ and the displacement $d_x,d_y,d_z$ are dynamically adjusted so that the coil generates an appropriate force that can be felt by the operator. The current is adjusted so that the measured force $f_h$ is as close as possible to the given desired force $f_e$; the deviation of the two forces should satisfy $|f_e-f_h|\le e$, where $e$ is a manually set deviation threshold.
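The force-tracking loop can be sketched as follows. This uses a PI controller (the derivative term of the full PID is omitted for simplicity), an idealized linear coil model f_h = k_coil * I, and illustrative gains; all of these are assumptions, not the patent's actual controller parameters.

```python
def track_force(f_e, e=0.01, kp=0.1, ki=5.0, dt=0.01,
                k_coil=2.0, max_steps=500):
    """Adjust the coil current until |f_e - f_h| <= e; returns the final
    measured force and the number of steps taken."""
    integral, f_h = 0.0, 0.0
    for step in range(max_steps):
        err = f_e - f_h
        if abs(err) <= e:                  # within the deviation threshold
            return f_h, step
        integral += ki * err * dt          # integral action removes bias
        current = kp * err + integral      # PI control law
        f_h = k_coil * current             # idealized static coil response
    return f_h, max_steps

f_h, steps = track_force(1.5)              # desired force 1.5 N (illustrative)
```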
The teleoperation method of the robot based on electromagnetic force feedback and augmented reality comprises the following steps:
s1, acquiring a gesture text of an operator through a motion sensor on an operation platform;
s2, obtaining the voice text of an operator through a voice acquisition module;
s3, processing the fusion text, which specifically comprises the following steps:
splicing the gesture text after the voice text to realize the fusion of the gesture text and the voice text; robot control instructions are extracted through an inference method and used for robot control, specifically as follows:
a control instruction is described by four attributes $(C_{opt},C_{dir},C_{val},C_{unit})$: $C_{opt}$ represents the operation type, $C_{dir}$ the motion direction, $C_{val}$ the motion value, and $C_{unit}$ the unit of the motion value; when the operator controls the robot using voice and gestures, the gesture indicates the direction of robot motion, so the gesture text is represented as a direction vector;
s4, electromagnetic force feedback is achieved through an electromagnetic force feedback module;
and S5, realizing visual feedback through a visual feedback module.
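Steps S1-S5 can be summarized as an end-to-end flow. Every function name and return value below is a hypothetical stand-in wired together to show the sequence, not the patent's API; the feedback steps S4 and S5 are left as comments since they drive hardware.

```python
def acquire_gesture_text():
    """S1: gesture text from the motion sensors (illustrative value)."""
    return "Direction: [0.0, 0.0, 1.0]"

def acquire_voice_text():
    """S2: voice text from the microphone array (illustrative value)."""
    return "Move 10mm in this direction"

def fuse(voice_text, gesture_text):
    """S3: the gesture text is spliced after the voice text."""
    return voice_text + " " + gesture_text

def teleoperate_once():
    fused = fuse(acquire_voice_text(), acquire_gesture_text())
    # In the real system the inference method extracts the instruction
    # from `fused`; here the result is written out directly.
    command = {"C_opt": "MOVE", "C_dir": [0.0, 0.0, 1.0],
               "C_val": 10.0, "C_unit": "mm"}
    # S4: drive the electromagnetic force feedback module.
    # S5: update the AR visual feedback.
    return fused, command

fused, command = teleoperate_once()
```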
Compared with the prior art, the invention has the advantages that:
1. the invention provides non-contact force feedback, and an operator can feel the force feedback of the robot while performing natural interaction, so that the robot has stronger immersion.
2. The operator can guide the virtual robot to move by hands, the interaction mode is more visual, and the efficiency is higher.
3. The operator can observe the motion condition of the virtual robot from any angle, the obtained visual information is more sufficient, and the interaction is more reliable.
Drawings
Fig. 1 is a structural diagram of a robot teleoperation system based on electromagnetic force feedback and augmented reality in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a coordinate system provided in an embodiment of the present invention;
FIG. 3 is a schematic diagram of the closed loop control of force in an embodiment of the present invention;
fig. 4 is a flowchart of a teleoperation method of a robot based on electromagnetic force feedback and augmented reality in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The embodiment is as follows:
as shown in fig. 1, a teleoperation system for a robot based on electromagnetic force feedback and augmented reality includes: the natural control module and the natural feedback module;
the natural control module comprises a movable operation platform, a voice acquisition module, a virtual robot and a remote real robot; the natural control module is used for extracting a robot control instruction to guide the virtual robot to move through an inference method after fusing a gesture text and a voice text of an operator acquired through the movable operation platform and the voice acquisition module, the virtual robot receives the robot control instruction and moves according to the instruction, the movement data is sent to the remote real robot through the Internet, and the remote real robot receives the data and copies the movement of the virtual robot;
the natural feedback module comprises an electromagnetic force feedback module and a visual feedback module; the electromagnetic force feedback module is used for enabling an operator to feel the force of the robot, and the visual feedback module is used for enabling the operator to observe the virtual robot from any direction.
The movable operation platform comprises a tracking platform, a mobile robot, checkerboard pictures, a motion sensor and an electromagnet; an electromagnet and two motion sensors are fixed on the tracking platform, wherein the electromagnet is placed in the center of the platform, the two motion sensors are symmetrically fixed on two sides of the electromagnet and are respectively installed at the tail end of a connecting rod and face downwards at an angle of 45 degrees for expanding the operation space of the hands of an operator; the working space of a single motion sensor is a cone with a cone angle of 89.5 degrees, a height of 550 millimeters and a bottom radius of 550 millimeters, and is used for measuring the position and the direction of a palm and obtaining a gesture text of an operator through a corresponding algorithm; the electromagnet is used for generating an electromagnetic field to provide electromagnetic force feedback; the tracking platform is fixed at the tail end of a six-degree-of-freedom mechanical arm of the mobile robot, and the mobile robot is used for enabling the tracking platform, a sensor on the platform and an electromagnet to move in space; a checkerboard picture is pasted on a power box of the mobile robot and used for positioning the position of the mobile robot in space.
In this embodiment, the voice acquisition module collects the voice of the operator by using a microphone array built in the Kinect camera, and the voice of the operator is recognized by a Microsoft voice SDK (Software Development Kit) and converted into a text form to obtain a voice text.
The electromagnetic force feedback module comprises a coil and a permanent magnet; the coil is cylindrical, the center of the coil is an iron core, and a plurality of layers of copper wires are wound around the iron core and used for generating an electromagnetic field; the coil is fixed at the center of the tracking platform, and the permanent magnet is worn on the hand of an operator, so that the operator feels the stress of the robot; a PID controller is integrated in the coil to reduce the adverse effects of the coil and the permanent magnet, which is placed on the back of the human hand to avoid interfering with the operation of the operator.
The visual feedback module comprises AR glasses, which enable the operator to view the robot motion from any direction and display real-time video of the remote real robot performing the task.
In the teleoperation system, as shown in fig. 2, a world coordinate system X_W Y_W Z_W is defined. According to the robot D-H model, the base coordinate system of the mechanical arm of the mobile robot is defined as X_B Y_B Z_B; the coordinate system of the robot end effector is defined as X_E Y_E Z_E; the coordinate system of the Kinect camera in the voice acquisition module is defined as X_K Y_K Z_K, where Z_K is the optical axis of the Kinect and X_K is along its long side; the coordinate system of the AR (Augmented Reality) glasses worn by the operator is defined as X_G Y_G Z_G; the coordinate system of the hand is defined as X_H Y_H Z_H, with Y_H perpendicular to the plane of the palm and pointing towards the back of the hand, and X_H collinear with the line from the center of the palm to the middle finger; the coordinate system of the motion sensor is defined as X_L Y_L Z_L, with X_L and Z_L along the long and short sides of the motion sensor, respectively. The checkerboard picture is fixed on the mobile robot and its coordinate system is defined as X_I Y_I Z_I; the upper left corner of the checkerboard is the origin, Z_I is perpendicular to the plane of the checkerboard, and X_I is along the short edge of the checkerboard picture; it is used for locating the mobile robot in the Kinect coordinate system. The robot has a calibration box whose coordinate system is defined as X_C Y_C Z_C, used for calibrating the relationship between the virtual robot and the mobile robot. According to the relationships among the above coordinate systems, the position and direction of the operator's hand measured in the motion sensor coordinate system are converted into coordinate values in the world coordinate system for controlling the virtual robot.
The motion sensor obtains 6 parameters through measurement, wherein the parameters comprise 3 rotation angle components and 3 position components of a hand coordinate system relative to a motion sensor coordinate system, and measurement errors of the measured hand position are eliminated by using an Interval Kalman Filter (IKF);
The rotation matrix $M_{H2W}$ from the hand coordinate system to the world coordinate system is:

$$M_{H2W}=\begin{bmatrix}\cos\alpha_{xx}&\cos\alpha_{yx}&\cos\alpha_{zx}\\\cos\alpha_{xy}&\cos\alpha_{yy}&\cos\alpha_{zy}\\\cos\alpha_{xz}&\cos\alpha_{yz}&\cos\alpha_{zz}\end{bmatrix}$$

where $\alpha_{ij}$, $i,j\in\{x,y,z\}$, represents the angle between the positive direction of the $i$-axis of the hand coordinate system and the positive direction of the $j$-axis of the world coordinate system;
the position state at time k is defined as follows: x_k = [p_{x,k}, V_{x,k}, A_{x,k}, p_{y,k}, V_{y,k}, A_{y,k}, p_{z,k}, V_{z,k}, A_{z,k}], where p_{x,k}, p_{y,k}, p_{z,k} represent the components of the palm center in the world coordinate system, V_{x,k}, V_{y,k}, V_{z,k} represent the velocity components of the human hand along each axis of the world coordinate system, and A_{x,k}, A_{y,k}, A_{z,k} are the acceleration components measured in the hand coordinate system; the value of x_k is estimated from the noisy measurements by the IKF;
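The nine-dimensional state above is a standard constant-acceleration model, three states per axis. As a hedged stand-in for the interval Kalman filter — this is a plain, non-interval Kalman filter on one axis, with assumed noise covariances — the predict/update cycle looks like:

```python
import numpy as np

dt = 0.01
# Constant-acceleration model for one axis of x_k = [p, V, A]
F = np.array([[1, dt, 0.5 * dt**2],
              [0, 1,  dt         ],
              [0, 0,  1          ]])
H = np.array([[1.0, 0.0, 0.0]])        # only the position is measured
Q = 1e-5 * np.eye(3)                   # process-noise covariance (assumed)
R = np.array([[1e-2]])                 # measurement-noise covariance (assumed)

x = np.zeros(3)                        # state estimate
P = np.eye(3)                          # estimate covariance

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the noisy position measurement z
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
truth = 0.5                            # hand held still at p = 0.5 m
for _ in range(200):
    z = np.array([truth + 0.1 * rng.standard_normal()])
    x, P = kf_step(x, P, z)
print(round(x[0], 2))
```

After 200 noisy measurements the position estimate settles near the true value; the interval variant additionally propagates bounds on uncertain model parameters, which this sketch omits.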
the motion sensor detects the direction of the hand in the motion sensor coordinate system, comprising a roll angle φ, a pitch angle θ and a yaw angle ψ; the measured Euler angles are then converted into a quaternion by the Factored Quaternion Algorithm (FQA), and an Improved Particle Filter (IPF) is adopted to reduce the measurement error of the measured hand direction. The approximate posterior density at time t_k is defined as follows:

$$p(x_k \mid z_{1:k}) \approx \sum_{i=1}^{N} \omega_{i,k}\,\delta(x_k - x_{i,k})$$

where x_{i,k} is the i-th sample at time t_k, N is the number of samples, ω_{i,k} is the weight of the i-th particle at time t_k, and δ(·) is the Dirac delta function;
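The Euler-to-quaternion conversion that feeds the particle filter can be sketched as follows; note this is the generic roll-pitch-yaw composition formula, not the specific factored decomposition used by the FQA:

```python
import math

def euler_to_quaternion(phi, theta, psi):
    """Convert roll (phi), pitch (theta), yaw (psi) to a unit quaternion (w, x, y, z)."""
    cr, sr = math.cos(phi / 2), math.sin(phi / 2)
    cp, sp = math.cos(theta / 2), math.sin(theta / 2)
    cy, sy = math.cos(psi / 2), math.sin(psi / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return (w, x, y, z)

q = euler_to_quaternion(0.1, 0.2, 0.3)
norm = sum(c * c for c in q)
print(round(norm, 6))   # a unit quaternion: the squared norm is 1.0
```

A zero rotation maps to the identity quaternion (1, 0, 0, 0), and any input yields unit norm, matching the unit-quaternion condition used for the particle states below.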
The state particles x_{i,k} are approximated using an ensemble Kalman filter; given the probability density function of a set of initial state particles {x_{i,0}}, i = 1, …, N, the ensemble prediction is:

$$\hat{x}_{i,k}^{-} = f(x_{i,k-1}) + w_k, \qquad w_k \sim N(0, Q_{k-1})$$
where w_k represents the model error and Q_{k-1} represents the covariance of the model error. The direction of each particle has 4 states, q_{i,k} = [q_0, q_1, q_2, q_3]^T, represented by a unit quaternion satisfying the following condition:

$$q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1$$

where q_0, q_1, q_2, q_3 represent the 4 quaternion components; the quaternion components of each particle at time t_{k+1} are defined as follows:

$$q(t_{k+1}) = \left( I_4 + \frac{t}{2}\,\Omega(\omega_k) \right) q(t_k)$$

where Ω(ω_k) denotes the 4×4 skew-symmetric matrix formed from the angular velocity;
in the formula, ω_{axis,k} represents the angular velocity component, axis ∈ (x, y, z), and t is the sampling time. The IPF estimates a velocity and a position for the direction of each particle; assigning each particle a weight according to the accumulated difference between the position estimated by the IKF and the computed position of the i-th particle reduces the error in calculating the acceleration of the object in the world coordinate system. The position difference is defined as follows:

$$\Delta p_{i,s} = \sum_{k=1}^{M_s} \left\| \hat{p}_{i,k} - \bar{p}_{k} \right\|$$

where Δp_{i,s} is the accumulated position difference of the i-th particle in the s-th direction iteration, M_s = ΔT_s / t, p̂_{i,k} is the directly computed position of the i-th directional particle at time t_k in the world coordinate system, and p̄_k is the position on each axis of the world coordinate system of the i-th particle predicted by the IKF at time k;
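The weighting rule — smaller accumulated position difference, larger particle weight — can be illustrated with synthetic tracks. The exponential weight function and the toy per-particle drift model below are assumptions for illustration, not the patent's exact formulas:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 5, 20                      # particles, steps per direction iteration
ikf_pos = np.cumsum(rng.normal(0, 0.01, (M, 3)), axis=0)   # reference track from the IKF

# Each particle's directly computed track: the reference plus a particle-specific drift
drift = rng.normal(0, 0.02, (N, 1, 3))
particle_pos = ikf_pos[None, :, :] + drift * np.arange(M)[None, :, None]

# Accumulated position difference of each particle over the iteration
delta = np.linalg.norm(particle_pos - ikf_pos[None, :, :], axis=2).sum(axis=1)

# Smaller accumulated difference -> larger weight (exponential weighting assumed)
w = np.exp(-delta)
w /= w.sum()
print(w.argmax() == delta.argmin())   # the best-matching particle gets the top weight
```

Because the weighting is a monotonically decreasing function of the accumulated difference, the particle closest to the IKF track always receives the largest normalized weight.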
the hand position and direction data obtained by the above filtering are expressed as the text "hand position P = (p_{x,k}, p_{y,k}, p_{z,k}), direction D = (φ, θ, ψ)", yielding the gesture text.
To fuse the operator's gesture text and voice text, the gesture text is spliced behind the voice text; robot control instructions are then extracted by an inference method and used for robot control, specifically as follows:
Control instructions are described by four attributes (C_opt, C_dir, C_val, C_unit): C_opt represents the operation type, C_dir the motion direction, C_val the motion value, and C_unit the unit of the motion value. When the operator controls the robot using voice and gestures, the gesture indicates the direction of the robot's motion, so the gesture text is represented as a direction vector. For example, if the operator points in a direction O and says "move 10 mm in this direction", the gesture text may be represented as "Direction: O" or "Direction: [x, y, z]", the fused text is "Move 10 mm in this direction O (or [x, y, z])", and the control instruction is extracted as (C_opt = MOVE, C_dir = O (or [x, y, z]), C_val = 10, C_unit = mm).
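A minimal sketch of the instruction-extraction step, assuming a hypothetical fused-text grammar — the regular expressions and the `extract_instruction` helper are illustrative, not the patent's inference method:

```python
import re

def extract_instruction(fused_text):
    """Parse a fused voice+gesture text into (C_opt, C_dir, C_val, C_unit).
    Hypothetical grammar: '<verb> <value><unit> ... [x, y, z]'."""
    m = re.search(r"(?i)\b(move|rotate)\b.*?(\d+(?:\.\d+)?)\s*(mm|cm|deg)", fused_text)
    d = re.search(r"\[([^\]]+)\]", fused_text)
    if not m:
        return None
    c_opt = m.group(1).upper()                    # operation type
    c_val = float(m.group(2))                     # motion value
    c_unit = m.group(3)                           # unit of the motion value
    c_dir = [float(v) for v in d.group(1).split(",")] if d else None  # direction vector
    return (c_opt, c_dir, c_val, c_unit)

cmd = extract_instruction("Move 10mm in this direction [0, 0, 1]")
print(cmd)   # ('MOVE', [0.0, 0.0, 1.0], 10.0, 'mm')
```

The gesture text contributes only the direction vector, while the voice text carries the verb, value and unit, which is why the fusion is a simple concatenation followed by attribute extraction.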
The electromagnetic force feedback is realized as follows:
A Back Propagation Neural Network (BPNN), a type of artificial neural network, is used to estimate the current and displacement of the coil from the expected force. The BPNN comprises an input layer, two hidden layers with dynamically adjustable numbers of nodes, and an output layer; the model has 6 input parameters and 4 target output parameters. The input layer has 6 nodes, assigned the hand position estimate P(p_x, p_y, p_z) and the force from the environment f_e(f_{e,x}, f_{e,y}, f_{e,z}); the output layer has 4 nodes, corresponding to the current I and the displacement D(d_x, d_y, d_z). The data format of both the training and testing data sets is (p_x, p_y, p_z, f_{e,x}, f_{e,y}, f_{e,z}, I, d_x, d_y, d_z); the data are randomly assigned, with 70% used for training and the remainder for testing;
in this example, after comparing the performance of different BPNN structures, a 6-14-8-4 neural network is used that provides convergence.
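A forward pass through the 6-14-8-4 structure can be sketched with NumPy. Random weights stand in for the trained parameters, and the sigmoid hidden activations with a linear output layer are assumptions about the BPNN's internals:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [6, 14, 8, 4]            # the 6-14-8-4 structure from the description

# Random weights and zero biases stand in for trained parameters
Ws = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    """Forward pass: sigmoid hidden layers, linear output (I, d_x, d_y, d_z)."""
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = 1.0 / (1.0 + np.exp(-(h @ W + b)))   # sigmoid activation
    return h @ Ws[-1] + bs[-1]

# Input: hand position estimate (p_x, p_y, p_z) and environment force (f_ex, f_ey, f_ez)
x = np.array([0.1, 0.2, 0.3, 1.0, 0.0, -0.5])
y = forward(x)
print(y.shape)   # (4,) -> current I and displacement (d_x, d_y, d_z)
```

For training, the 70/30 split mentioned above would be applied to the (p, f_e, I, d) tuples before backpropagation; that loop is omitted here for brevity.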
As shown in FIG. 3, when collecting the data, the inputs to the PID are the desired force f_e and the hand position; the current I and the displacements d_x, d_y, d_z are dynamically adjusted so that the coil generates an appropriate force that can be felt by the operator. The current is adjusted so that the measured force f_h produced by the coil is as close as possible to the given desired force f_e; the deviation between the two forces should satisfy |f_e − f_h| ≤ e, where e is a manually set deviation threshold.
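The current-adjustment loop can be illustrated with a toy linear force model. The gain k, the PI gains, and the simulated measurement f_h = k·I are assumptions purely for the sketch; in the real system f_h is measured at the coil:

```python
# Toy model: the coil force felt at the magnet grows linearly with current (f_h = k * I)
k = 0.8           # assumed force-per-ampere gain
f_e = 2.0         # desired force (N)
eps = 0.01        # manually set deviation threshold e

Kp, Ki = 0.5, 0.1
I, integral = 0.0, 0.0
for _ in range(200):
    f_h = k * I                    # "measured" coil force (simulated here)
    err = f_e - f_h
    if abs(err) <= eps:            # stop once |f_e - f_h| <= e
        break
    integral += err
    I += Kp * err + Ki * integral  # PI update of the coil current

print(abs(f_e - k * I) <= eps)     # the loop converges within the threshold
```

With these gains the closed-loop error decays geometrically, so the deviation criterion |f_e − f_h| ≤ e is met well within the iteration budget.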
As shown in fig. 4, a method for teleoperation of a robot based on electromagnetic force feedback and augmented reality includes the following steps:
S1, acquiring a gesture text of an operator through a motion sensor on an operation platform;
S2, obtaining the voice text of the operator through a voice acquisition module;
S3, processing the fused text, which specifically comprises the following steps:
splicing the gesture text behind the voice text to realize the fusion of the gesture text and the voice text; robot control instructions are extracted through an inference method and used for robot control, and the method specifically comprises the following steps:
Control instructions are described by four attributes (C_opt, C_dir, C_val, C_unit): C_opt represents the operation type, C_dir the motion direction, C_val the motion value, and C_unit the unit of the motion value; when the operator controls the robot using voice and gestures, the gesture indicates the direction of the robot's motion, so the gesture text is represented as a direction vector;
S4, realizing electromagnetic force feedback through an electromagnetic force feedback module;
S5, realizing visual feedback through a visual feedback module.
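Steps S1-S5 can be strung together as one control cycle. Everything below — the stub classes and method names — is hypothetical glue, intended only to show the data flow between the modules:

```python
class MotionSensor:
    def read_gesture_text(self):
        return "Direction: [0, 0, 1]"          # S1: gesture text (stub)

class Microphone:
    def read_voice_text(self):
        return "Move 10mm in this direction"   # S2: voice text (stub)

class Fuser:
    def extract(self, fused_text):
        # S3: in the real system, an inference method parses the fused text
        return {"C_opt": "MOVE", "C_val": 10, "C_unit": "mm", "text": fused_text}

class EMFeedback:
    def render(self, instruction):
        pass                                   # S4: drive the coil current (stub)

class ARDisplay:
    def update(self, instruction):
        pass                                   # S5: refresh the AR view (stub)

def teleoperate(motion_sensor, microphone, fuser, em_feedback, ar_display):
    """One hypothetical control cycle following steps S1-S5."""
    gesture_text = motion_sensor.read_gesture_text()
    voice_text = microphone.read_voice_text()
    instruction = fuser.extract(voice_text + " " + gesture_text)
    em_feedback.render(instruction)
    ar_display.update(instruction)
    return instruction

cmd = teleoperate(MotionSensor(), Microphone(), Fuser(), EMFeedback(), ARDisplay())
print(cmd["C_opt"])   # MOVE
```

The cycle returns the extracted instruction so the caller (e.g. the virtual-robot controller) can forward the motion data to the remote real robot.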
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such modifications are intended to be included in the scope of the present invention.

Claims (3)

1. Teleoperation system of robot based on electromagnetic force feedback and augmented reality, its characterized in that includes: the natural control module and the natural feedback module;
the natural control module comprises a movable operation platform, a voice acquisition module, a virtual robot and a remote real robot; the natural control module is used for extracting a robot control instruction to guide the virtual robot to move through an inference method after fusing a gesture text and a voice text of an operator acquired through the movable operation platform and the voice acquisition module, the virtual robot receives the robot control instruction and moves according to the instruction, the movement data is sent to the remote real robot through the Internet, and the remote real robot receives the data and copies the movement of the virtual robot;
the natural feedback module comprises an electromagnetic force feedback module and a visual feedback module; the electromagnetic force feedback module is used for enabling an operator to feel the force of the robot, and the visual feedback module is used for enabling the operator to observe the virtual robot from any direction;
the visual feedback module comprises AR glasses; real-time video for the operator to observe the robot motion from any direction and to display remote real robot performance tasks;
in a remote operating system, a world coordinate system X_W Y_W Z_W is defined; according to the D-H model of the robot, the base coordinate system of the mobile robot's mechanical arm is defined as X_B Y_B Z_B; the coordinate system of the robot end effector is defined as X_E Y_E Z_E; the coordinate system of the Kinect camera in the voice acquisition module is defined as X_K Y_K Z_K, where Z_K is the optical axis of the Kinect and X_K is along its long side; the coordinate system of the AR glasses worn by the operator is defined as X_G Y_G Z_G; the coordinate system of the hand is defined as X_H Y_H Z_H, where Y_H is perpendicular to the plane of the palm and points towards the back of the hand, and X_H is collinear with the line from the center of the palm to the middle finger; the coordinate system of the motion sensor is defined as X_L Y_L Z_L, where X_L and Z_L run along the long and short sides of the motion sensor, respectively; the checkerboard picture is fixed on the mobile robot, and its coordinate system is defined as X_I Y_I Z_I, with the upper-left corner of the checkerboard as the origin, Z_I perpendicular to the plane of the checkerboard, and X_I along the short edge of the checkerboard picture; it is used for locating the position of the mobile robot in the coordinate system of the Kinect in the voice acquisition module; the robot has a calibration box whose coordinate system is defined as X_C Y_C Z_C, used for calibrating the relationship between the virtual robot and the mobile robot; according to the relationships among the above coordinate systems, the position and direction of the operator's hand measured in the motion sensor coordinate system are converted into coordinate values in the world coordinate system for controlling the virtual robot; the electromagnetic force feedback module comprises a coil and a permanent magnet; the coil is cylindrical with an iron core at its center, around which multiple layers of copper wire are wound for generating an electromagnetic field; the coil is fixed at the center of the tracking platform, and the permanent magnet is worn on the operator's hand, so that the operator feels the force on the robot; the coil is integrated with a PID controller for reducing the adverse effect between the coil and the permanent magnet, and the permanent magnet is placed on the back of the hand to avoid interfering with the operator's manipulation; the motion sensor obtains 6 parameters through measurement, comprising 3 rotation angle components and 3 position components of the hand coordinate system relative to the motion sensor coordinate system, and the measurement errors of the measured hand position are eliminated using an Interval Kalman Filter (IKF);
the rotation matrix M_H2W from the hand coordinate system to the world coordinate system is as follows:

$$M_{H2W}=\begin{bmatrix}\cos\alpha_{xx}&\cos\alpha_{yx}&\cos\alpha_{zx}\\\cos\alpha_{xy}&\cos\alpha_{yy}&\cos\alpha_{zy}\\\cos\alpha_{xz}&\cos\alpha_{yz}&\cos\alpha_{zz}\end{bmatrix}$$

where $\alpha_{ij}$ represents the angle between the positive direction of the $i$-axis of the hand coordinate system and the positive direction of the $j$-axis of the world coordinate system;
the position state at time k is defined as follows: x_k = [p_{x,k}, V_{x,k}, A_{x,k}, p_{y,k}, V_{y,k}, A_{y,k}, p_{z,k}, V_{z,k}, A_{z,k}], where p_{x,k}, p_{y,k}, p_{z,k} represent the components of the palm center in the world coordinate system, V_{x,k}, V_{y,k}, V_{z,k} represent the velocity components of the human hand along each axis of the world coordinate system, and A_{x,k}, A_{y,k}, A_{z,k} are the acceleration components measured in the hand coordinate system; the value of x_k is estimated by the IKF from the noisy measurements;
the motion sensor detects the direction of the hand in the motion sensor coordinate system, comprising a roll angle φ, a pitch angle θ and a yaw angle ψ; the measured Euler angles are then converted into a quaternion by the Factored Quaternion Algorithm (FQA), and an Improved Particle Filter (IPF) is adopted to reduce the measurement error of the measured hand direction; the approximate posterior density at time t_k is defined as follows:

$$p(x_k \mid z_{1:k}) \approx \sum_{i=1}^{N} \omega_{i,k}\,\delta(x_k - x_{i,k})$$

where x_{i,k} is the i-th sample at time t_k, N is the number of samples, ω_{i,k} is the weight of the i-th particle at time t_k, and δ(·) is the Dirac delta function;
the state particles x_{i,k} are approximated using an ensemble Kalman filter; given the probability density function of a set of initial state particles {x_{i,0}}, i = 1, …, N, the ensemble prediction is:

$$\hat{x}_{i,k}^{-} = f(x_{i,k-1}) + w_k, \qquad w_k \sim N(0, Q_{k-1})$$
where w_k represents the model error and Q_{k-1} represents the covariance of the model error; the direction of each particle has 4 states, q_{i,k} = [q_0, q_1, q_2, q_3]^T, represented by a unit quaternion satisfying the following condition:

$$q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1$$

where q_0, q_1, q_2, q_3 represent the 4 quaternion components; the quaternion components of each particle at time t_{k+1} are defined as follows:

$$q(t_{k+1}) = \left( I_4 + \frac{t}{2}\,\Omega(\omega_k) \right) q(t_k)$$

where Ω(ω_k) denotes the 4×4 skew-symmetric matrix formed from the angular velocity;
in the formula, ω_{axis,k} represents the angular velocity component, axis ∈ (x, y, z), and t is the sampling time; the IPF estimates a velocity and a position for the direction of each particle; assigning each particle a weight according to the accumulated difference between the position estimated by the IKF and the computed position of the i-th particle reduces the error in calculating the acceleration of the object in the world coordinate system; the position difference is defined as follows:

$$\Delta p_{i,s} = \sum_{k=1}^{M_s} \left\| \hat{p}_{i,k} - \bar{p}_{k} \right\|$$

where Δp_{i,s} is the accumulated position difference of the i-th particle in the s-th direction iteration, M_s = ΔT_s / t, p̂_{i,k} is the directly computed position of the i-th directional particle at time t_k in the world coordinate system, and p̄_k is the position on each axis of the world coordinate system of the i-th particle predicted by the IKF at time k;
the hand position and direction data obtained by the above filtering are expressed as the text "hand position P = (p_{x,k}, p_{y,k}, p_{z,k}), direction D = (φ, θ, ψ)", yielding the gesture text; the robot teleoperation system based on electromagnetic force feedback and augmented reality performs the following steps:
S1, acquiring a gesture text of an operator through a motion sensor on an operation platform;
S2, obtaining the voice text of the operator through a voice acquisition module;
S3, processing the fused text, which specifically comprises the following steps:
splicing the gesture text behind the voice text to realize the fusion of the gesture text and the voice text; the robot control instruction is extracted through an inference method and is used for robot control, and the method specifically comprises the following steps:
control instructions are described by four attributes (C_opt, C_dir, C_val, C_unit): C_opt represents the operation type, C_dir the motion direction, C_val the motion value, and C_unit the unit of the motion value; when the operator controls the robot using voice and gestures, the gesture indicates the direction of the robot's motion, so the gesture text is represented as a direction vector;
S4, realizing electromagnetic force feedback through an electromagnetic force feedback module;
S5, realizing visual feedback through a visual feedback module.
2. The electromagnetic force feedback and augmented reality based robot teleoperation system of claim 1, wherein the movable operation platform comprises a tracking platform, a mobile robot, a checkerboard picture, motion sensors, and an electromagnet; the electromagnet and two motion sensors are fixed on the tracking platform, with the electromagnet placed at the center of the platform and the two motion sensors symmetrically fixed on both sides of the electromagnet, each installed at the end of a connecting rod and facing downwards at an angle of 45 degrees to expand the operating space of the operator's hands; the working space of a single motion sensor is a cone with a cone angle of 89.5 degrees, a height of 550 millimeters, and a base radius of 550 millimeters, used for measuring the position and direction of the palm and obtaining the operator's gesture text through the corresponding algorithms; the electromagnet is used for generating an electromagnetic field to provide electromagnetic force feedback; the tracking platform is fixed at the end of a six-degree-of-freedom mechanical arm of the mobile robot, and the mobile robot is used for moving the tracking platform, the sensors on the platform, and the electromagnet in space; the checkerboard picture is pasted on the power box of the mobile robot for locating the position of the mobile robot in space; the LM (Leap Motion) is a motion-sensing controller;
to fuse the operator's gesture text and voice text, the gesture text is spliced behind the voice text; robot control instructions are then extracted by the inference method and used for robot control, specifically as follows:
control instructions are described by four attributes (C_opt, C_dir, C_val, C_unit): C_opt represents the operation type, C_dir the motion direction, C_val the motion value, and C_unit the unit of the motion value; when the operator controls the robot using voice and gestures, the gestures indicate the direction of the robot's motion, so the gesture text is represented as a direction vector; the electromagnetic force feedback is realized as follows:
a Back Propagation Neural Network (BPNN), a type of artificial neural network, is used to estimate the current and displacement of the coil from the expected force; the BPNN comprises an input layer, two hidden layers with dynamically adjustable numbers of nodes, and an output layer; the model has 6 input parameters and 4 target output parameters; the input layer has 6 nodes, assigned the hand position estimate P(p_x, p_y, p_z) and the force from the environment f_e(f_{e,x}, f_{e,y}, f_{e,z}); the output layer has 4 nodes, corresponding to the current I and the displacement D(d_x, d_y, d_z); the data format of both the training and testing data sets is (p_x, p_y, p_z, f_{e,x}, f_{e,y}, f_{e,z}, I, d_x, d_y, d_z); the data of the data set are randomly assigned, with 70% used for training and the rest for testing;
when the data are collected, the inputs to the PID are the desired force f_e and the hand position; the current I and the displacements d_x, d_y, d_z are dynamically adjusted so that the coil generates an appropriate force that can be felt by the operator; the current is adjusted so that the measured force f_h is as close as possible to the given desired force f_e, and the deviation between the two forces should satisfy |f_e − f_h| ≤ e, where e is a manually set deviation threshold.
3. The teleoperation system for the robot based on the electromagnetic force feedback and the augmented reality as claimed in claim 1, wherein the voice collecting module collects the voice of the operator using a microphone array built in a Kinect camera and converts the voice of the operator into a text form to obtain a voice text.
CN201911046808.2A 2019-10-30 2019-10-30 Robot teleoperation system and method based on electromagnetic force feedback and augmented reality Active CN110815258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911046808.2A CN110815258B (en) 2019-10-30 2019-10-30 Robot teleoperation system and method based on electromagnetic force feedback and augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911046808.2A CN110815258B (en) 2019-10-30 2019-10-30 Robot teleoperation system and method based on electromagnetic force feedback and augmented reality

Publications (2)

Publication Number Publication Date
CN110815258A CN110815258A (en) 2020-02-21
CN110815258B true CN110815258B (en) 2023-03-31

Family

ID=69551554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911046808.2A Active CN110815258B (en) 2019-10-30 2019-10-30 Robot teleoperation system and method based on electromagnetic force feedback and augmented reality

Country Status (1)

Country Link
CN (1) CN110815258B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111459274B (en) * 2020-03-30 2021-09-21 华南理工大学 5G + AR-based remote operation method for unstructured environment
CN111438499A (en) * 2020-03-30 2020-07-24 华南理工大学 5G + industrial AR-based assembly method using constraint-free force feedback
CN111459451A (en) * 2020-03-31 2020-07-28 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111459454B (en) * 2020-03-31 2021-08-20 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111459452B (en) * 2020-03-31 2023-07-18 北京市商汤科技开发有限公司 Driving method, device and equipment of interaction object and storage medium
CN111724487B (en) * 2020-06-19 2023-05-16 广东浪潮大数据研究有限公司 Flow field data visualization method, device, equipment and storage medium
CN113313346A (en) * 2021-04-19 2021-08-27 贵州电网有限责任公司 Visual implementation method of artificial intelligence scheduling operation based on AR glasses
CN114310903A (en) * 2022-01-19 2022-04-12 梅蓉 Manipulator control method and system based on bilateral teleoperation
CN116476100A (en) * 2023-06-19 2023-07-25 兰州空间技术物理研究所 Remote operation system of multi-branch space robot

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3884249B2 (en) * 2001-08-24 2007-02-21 独立行政法人科学技術振興機構 Teaching system for humanoid hand robot
US9696795B2 (en) * 2015-02-13 2017-07-04 Leap Motion, Inc. Systems and methods of creating a realistic grab experience in virtual reality/augmented reality environments
CN105291138B (en) * 2015-11-26 2017-10-20 华南理工大学 It is a kind of to strengthen the visual feedback platform of virtual reality immersion sense
CN106095109B (en) * 2016-06-20 2019-05-14 华南理工大学 The method for carrying out robot on-line teaching based on gesture and voice
CN107030692B (en) * 2017-03-28 2020-01-07 浙江大学 Manipulator teleoperation method and system based on perception enhancement
CN107351058A (en) * 2017-06-08 2017-11-17 华南理工大学 Robot teaching method based on augmented reality
CN108161882B (en) * 2017-12-08 2021-06-08 华南理工大学 Robot teaching reproduction method and device based on augmented reality
CN108406725A (en) * 2018-02-09 2018-08-17 华南理工大学 Force feedback man-machine interactive system and method based on electromagnetic theory and mobile tracking
CN109521868B (en) * 2018-09-18 2021-11-19 华南理工大学 Virtual assembly method based on augmented reality and mobile interaction
CN109955254B (en) * 2019-04-30 2020-10-09 齐鲁工业大学 Mobile robot control system and teleoperation control method for robot end pose

Also Published As

Publication number Publication date
CN110815258A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN110815258B (en) Robot teleoperation system and method based on electromagnetic force feedback and augmented reality
Du et al. Markerless human–manipulator interface using leap motion with interval Kalman filter and improved particle filter
US20210205986A1 (en) Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose
Hajiloo et al. Robust online model predictive control for a constrained image-based visual servoing
Xu et al. Visual-haptic aid teleoperation based on 3-D environment modeling and updating
WO2021143294A1 (en) Sensor calibration method and apparatus, data measurement method and apparatus, device, and storage medium
JP2021000678A (en) Control system and control method
Kamali et al. Real-time motion planning for robotic teleoperation using dynamic-goal deep reinforcement learning
Melchiorre et al. Collison avoidance using point cloud data fusion from multiple depth sensors: a practical approach
Liang et al. An augmented discrete-time approach for human-robot collaboration
Chen et al. A human–robot interface for mobile manipulator
Lepora et al. Pose-based tactile servoing: Controlled soft touch using deep learning
CN113103230A (en) Human-computer interaction system and method based on remote operation of treatment robot
Li et al. Neural learning and kalman filtering enhanced teaching by demonstration for a baxter robot
CN110794969B (en) Natural man-machine interaction method for non-contact force feedback
Lambrecht et al. Markerless gesture-based motion control and programming of industrial robots
Palmieri et al. Human arm motion tracking by kinect sensor using kalman filter for collaborative robotics
Zhao et al. A novel accurate positioning method for object pose estimation in robotic manipulation based on vision and tactile sensors
Parga et al. Tele-manipulation of robot arm with smartphone
Nandikolla et al. Teleoperation robot control of a hybrid eeg-based bci arm manipulator using ros
WO2021171353A1 (en) Control device, control method, and recording medium
Du et al. A novel natural mobile human-machine interaction method with augmented reality
Parga et al. Smartphone-based human machine interface with application to remote control of robot arm
Grasshoff et al. 7dof hand and arm tracking for teleoperation of anthropomorphic robots
Lopez et al. Taichi algorithm: human-like arm data generation applied on non-anthropomorphic robotic manipulators for demonstration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant