WO2021068334A1 - Drive-control integrated control system - Google Patents

Drive-control integrated control system

Info

Publication number
WO2021068334A1
WO2021068334A1 (application PCT/CN2019/117615)
Authority
WO
WIPO (PCT)
Prior art keywords
module
force
joint
robot
control
Prior art date
Application number
PCT/CN2019/117615
Other languages
English (en)
Chinese (zh)
Inventor
冯伟
吴新宇
张艳辉
陈清朋
徐天添
马跃
孙建铨
王大帅
郭师峰
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2021068334A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls

Definitions

  • the invention belongs to the field of collaborative robots, and in particular relates to a drive-control integrated control system.
  • collaborative robots do not require a separate isolation space and can cooperate with humans to complete production tasks at close range. For example, on a 3C-product assembly line, humans can complete complex assembly tasks while collaborative robots quickly and accurately pick and place parts. This collaborative division of labor greatly improves production efficiency and reduces production costs. Achieving this goal requires a safe human-robot interaction environment, which places requirements on the control of collaborative robots far higher than those on traditional robots in terms of accuracy and flexibility.
  • industrial robots generally adopt a distributed control mode of "central motion controller + multiple servo drives", which is convenient in layout and simple in application.
  • most traditional industrial robots work in position control mode.
  • each joint uses its driver to achieve precise position-loop PID control, and receives the commands of the motion controller through the bus.
  • this mode has simple control algorithms, a small amount of computation, and a modest data communication volume.
  • collaborative robots, however, must implement complex feedforward control, compliance control and other algorithms.
  • the distributed architecture has a limited signal transmission rate and synchronization-mechanism problems; its real-time performance and responsiveness are difficult to meet the requirements of collaborative robots.
  • the modeling is complicated, the estimation accuracy is not high, and parameters that cannot be modeled cannot be identified;
  • the active compliance force-control information of existing robots is mainly obtained through force/torque sensors on the joints, but such sensors are large and unsuitable for collaborative robots, while smaller force/torque sensors are expensive. Fourth, the collision-detection capability is limited, and the safety protection strategies are not suitable for use on collaborative robots.
  • to address these problems, the present invention provides an integrated intelligent drive-control control system that can identify parameters that cannot be modeled, can perform active compliance control without sensors, and has good collision-detection capabilities and safety protection strategies suitable for use on collaborative robots.
  • the technical solution of the present invention is to provide a drive-control integrated control system for controlling a collaborative robot (6).
  • the drive-control integrated control system includes: a control module (1), an intelligent dynamic parameter identification module (2), a sensorless active compliance control module (3), a force feedback human-machine cooperation anti-collision control module (4) and a multi-axis drive module (5);
  • the control module (1) controls the multi-axis drive module (5) in real time according to preset instructions, or/and according to the signals fed back from the state of the collaborative robot (6) by the intelligent dynamic parameter identification module (2), the sensorless active compliance control module (3) and the force feedback human-machine cooperation anti-collision control module (4), so that the multi-axis drive module (5) controls the motion of the collaborative robot (6). The intelligent dynamic parameter identification module (2) is used to feed back the mechanical parameter identification signal of the collaborative robot (6) to the control module (1) according to the motion state of the collaborative robot (6);
  • the sensorless active compliance control module (3) is used to feed back, according to the motion state of the collaborative robot (6), its position, force and environment signals to the control module (1);
  • the intelligent dynamic parameter identification module (2) includes a nominal model (21) based on the Lagrangian dynamic model, used to: obtain, according to the motion state of the collaborative robot (6), the movement speed of any point on each link; calculate the kinetic energy of each link during the movement and the total kinetic energy of the collaborative robot's movement; calculate the potential energy of each link during the movement and the total potential energy of the collaborative robot relative to the reference potential-energy surface; construct the Lagrangian function of the collaborative robot from its total kinetic and potential energies; and perform derivative operations on the Lagrangian function to obtain the nominal dynamic equation of the collaborative robot. An actual dynamic model (22) is used to obtain the actual dynamic equation of the collaborative robot according to preset parameters. A parameter identification neural network (23) is used to set the collaborative robot to the torque working mode, selecting a smooth torque curve within the range from the minimum to the maximum joint torque as the input of the collaborative robot.
  • the nominal dynamic equation is:

    $D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau$

    where $D(q) \in R^{n \times n}$ is a symmetric positive definite inertia matrix; $C(q,\dot{q}) \in R^{n \times n}$ is the Coriolis force and centrifugal force matrix; $G(q) \in R^{n \times 1}$ is the center of gravity term matrix; $q$ is the mechanical joint angular displacement vector, $\dot{q}$ is the angular velocity vector and $\ddot{q}$ is the angular acceleration vector of the manipulator; $\tau \in R^{n}$ is the control torque vector of each joint of the manipulator.
  • in the actual dynamic equation, $F(\dot{q})$ represents the friction of joint movement.
  • the sensorless active compliance control module (3) includes a position loop (31), a force loop (32) and a force position hybrid control law output module (33). The position loop (31) includes an end position input terminal (311), a position selection matrix (312) and a position control law module (313); the end position input terminal (311) is used to input the end position signal to the position selection matrix (312), and the end position signal, after being processed by the position selection matrix (312) and the position control law module (313), is input to the force position hybrid control law output module (33). The force loop (32) includes an end force input terminal (321), a force selection matrix (322), a force control law module (323) and a joint torque estimation module (324) based on motor current.
  • the end force input terminal (321) is used to input the end force signal to the force selection matrix (322).
  • the end force signal, after being processed by the force selection matrix (322) and the force control law module (323), is input to the force position hybrid control law output module (33), and the joint torque estimation module (324) feeds back the real-time current of the joint motor (35) to the end force input terminal (321);
  • the force position hybrid control law output module (33) inputs the force position hybrid control law output signal (G) to the joint motor (35).
  • the sensorless active compliance control module (3) also includes a robot kinematics model (314).
  • the robot kinematics model (314) feeds back the joint angle and angular velocity of the collaborative robot (6) to the end position input terminal (311).
  • the sensorless active compliance control module (3) further includes a compensation module (34), and the compensation module (34) is interposed between the force position hybrid control law output module (33) and the joint motor (35).
  • the force feedback human-machine cooperation anti-collision detection module (4) includes: the dynamic equation establishment module (41), which is used to establish the link coordinate system by the D-H parameter method on a predetermined robot platform and to establish the robot dynamics equation according to the Lagrangian dynamics formula;
  • the collision detection operator and disturbance observer establishment module (42), which, according to the robot dynamics equation and the momentum equation, constructs a collision detection operator based on the robot's energy conservation and a disturbance observer based on the change of generalized momentum;
  • the data analysis module (43), which, based on real-time feedback of the robot system current, determines the relationship between each joint torque and the collision force, gives a solution method for the robot Jacobian matrix, and analyzes its effectiveness in detecting collisions;
  • the safety protection strategy formulation module (44), which, based on the detection results of the collision detection model, formulates different safety protection strategies for different collision situations in combination with actual working conditions;
  • the simulation verification and optimization module (45), which verifies the effectiveness of the robot collision detection on the ADAMS-Simulink joint simulation platform.
  • the force feedback human-machine cooperation anti-collision detection module (4) also includes: a monocular dual-view stereo matching module (47), used to construct an SVS-based monocular dual-view stereo matching model and to optimize geometric constraints on the loss function; through the left-and-right view synthesis process and dual-view stereo matching, accurate estimation of the depth of the detection target in a monocular image is realized. A convolution feature extraction module (48) uses the ResNet model to extract deep convolutional features from the RGB images collected by a monocular camera. A human bone key point processing module (49), according to prior knowledge of human bone joint geometry and the correlations between joints, optimizes the design of a dual-branch deep convolutional neural network structure and realizes synchronized processing of joint points and their joint relations.
  • one branch uses the combination of a probabilistic heat map and offsets to perform human bone key point regression.
  • the other branch detects the joint association information of multiple people in the image and forms human bone sequence data through bipartite graph matching; a human skeleton image data processing module (410) is used to reconstruct the human skeleton image data set in combination with the characteristics of industrial human-machine collaboration scenes and to perform joint point data annotation, combined with manual adjustment, to obtain a posture data set for industrial collaboration scenes.
  • the invention has the advantages of identifying parameters that cannot be modeled, performing active compliance control without sensors, and having good anti-collision detection capabilities and safety protection strategies, which are suitable for use on collaborative robots.
  • Figure 1 is a schematic block diagram of an embodiment of the method of the present invention.
  • FIG. 2 is a block diagram of the intelligent dynamic parameter identification module of the present invention.
  • FIG. 3 is a block diagram of the sensorless active compliance control module of the present invention.
  • FIG. 4 is a schematic block diagram of the force feedback human-machine cooperation anti-collision control module of the present invention.
  • Fig. 5 is a schematic structural diagram of another expression form of Fig. 4.
  • Figure 1 discloses an intelligent drive and control integrated control system for multi-axis collaborative industrial robots, including: a control module 1, an intelligent dynamic parameter identification module 2, a sensorless active compliance control module 3, a force feedback human-machine cooperation anti-collision control module 4 and a multi-axis drive module 5.
  • the control module 1 controls the multi-axis drive module 5 in real time according to preset instructions, or/and according to the signals fed back from the state of the collaborative robot 6 by the intelligent dynamic parameter identification module 2, the sensorless active compliance control module 3 and the force feedback human-machine cooperation anti-collision control module 4.
  • the intelligent dynamic parameter identification module 2 provides the mechanical parameter identification signal of the collaborative robot 6 to the control module 1 in time according to the motion state of the collaborative robot 6.
  • the sensorless active compliance control module 3 provides the position signal, force signal and environment signal of the collaborative robot 6 to the control module 1 in time according to the motion state of the collaborative robot 6.
  • the force feedback human-machine cooperation anti-collision control module 4 provides the control module 1 with a safety status signal of the collaborative robot 6 in time according to the motion state of the collaborative robot 6.
  • the multi-axis drive module 5 controls the movement of the collaborative robot 6 in real time according to the instructions of the control module 1.
  • the mains input interface 7 in FIG. 1 provides power for the control module 1 and the multi-axis drive module 5. Since the control module 1 requires low-voltage direct current, a power adapter 8 is provided between the mains input interface 7 and the control module 1.
  • the intelligent dynamic parameter identification module 2 of the present invention includes the nominal model 21 based on the Lagrangian dynamic model.
  • the nominal model 21 is used to: calculate the movement speed of any point on each link of the collaborative robot; calculate the kinetic energy of each link during the movement and the total kinetic energy of the entire collaborative robot's movement; calculate the potential energy of each link during the movement and the total potential energy of the collaborative robot relative to the reference potential-energy surface; construct the Lagrangian function of the collaborative robot system from the total kinetic energy and total potential energy obtained above; and perform derivative operations on this Lagrangian function to obtain the nominal dynamic equation of the collaborative robot system.
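As a concrete illustration of the Lagrangian procedure above, the nominal equation $D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau$ can be evaluated in closed form for a planar two-link arm. The sketch below is not part of the patent; all masses, lengths, centre-of-mass offsets and inertias are illustrative assumptions.

```python
import numpy as np

def nominal_dynamics(q, dq, ddq,
                     m=(1.0, 1.0), l=(0.5, 0.5), lc=(0.25, 0.25),
                     I=(0.02, 0.02), g=9.81):
    """Nominal torque tau = D(q)qdd + C(q,qd)qd + G(q) for a planar 2-link arm.

    Masses m, link lengths l, COM offsets lc and inertias I are illustrative
    values, not parameters from the patent.
    """
    m1, m2 = m
    l1 = l[0]
    lc1, lc2 = lc
    I1, I2 = I
    c2, s2 = np.cos(q[1]), np.sin(q[1])

    # symmetric positive definite inertia matrix D(q)
    d11 = m1 * lc1**2 + m2 * (l1**2 + lc2**2 + 2 * l1 * lc2 * c2) + I1 + I2
    d12 = m2 * (lc2**2 + l1 * lc2 * c2) + I2
    d22 = m2 * lc2**2 + I2
    D = np.array([[d11, d12], [d12, d22]])

    # Coriolis and centrifugal force matrix C(q, qd)
    h = -m2 * l1 * lc2 * s2
    C = np.array([[h * dq[1], h * (dq[0] + dq[1])],
                  [-h * dq[0], 0.0]])

    # gravity (centre-of-gravity) term G(q)
    G = np.array([(m1 * lc1 + m2 * l1) * g * np.cos(q[0])
                  + m2 * lc2 * g * np.cos(q[0] + q[1]),
                  m2 * lc2 * g * np.cos(q[0] + q[1])])

    return D @ ddq + C @ dq + G
```

With the arm pointing straight up the gravity term vanishes; with the arm horizontal the base joint carries the full gravity torque.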
  • the actual dynamics model 22 is established on the basis of the nominal model: starting from the actual use of the collaborative robot system, preset parameters that are difficult to model are added to obtain the actual dynamic equation of the collaborative robot.
  • the neural network training sample acquisition module 23 acquires neural network training sample data: the collaborative robot is set to the torque working mode, a smooth torque curve within the range from the minimum to the maximum joint torque is selected as the input of the collaborative robot, and the encoder of each joint is used to obtain the angular displacement, angular velocity and angular acceleration of each joint. Within a sampling period T the sampling time is set to t, and N sets of data including torque, angular displacement, angular velocity and angular acceleration are taken as one training sample.
  • the parameter identification neural network training module 24 uses the torque τ(k) in the sample data to obtain a theoretical output value through the nominal model; the torque τ(k) and the actual output value in the sample are input to the parameter identification neural network to obtain an output correction value; the theoretical output value is combined with the output correction value to obtain the identification output value; the difference between the actual output value and the identification output value gives the output error; the output error is used to establish the loss function of the parameter identification neural network; an optimization strategy of self-learning evolution is adopted; and the neural network is trained to complete the correction of the dynamic model.
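The training loop of module 24 can be sketched with a small numpy network that learns the correction between the nominal-model output and the actual output. The data, network size and plain-gradient-descent optimizer below are illustrative assumptions; the patent's self-learning evolutionary strategy is not specified, so ordinary backpropagation stands in for it here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: torque inputs tau(k), nominal-model predictions, and
# actual outputs containing an unmodeled friction-like residual.
tau = rng.uniform(-1.0, 1.0, size=(200, 1))
y_nominal = 2.0 * tau                          # stand-in nominal model
y_actual = 2.0 * tau + 0.3 * np.tanh(3 * tau)  # nominal + unmodeled effect

# One-hidden-layer correction network trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

target = y_actual - y_nominal   # correction the network must identify
losses = []
for _ in range(500):
    h, r = forward(tau)
    err = r - target            # output error drives the loss function
    losses.append(float(np.mean(err**2)))
    # backpropagation through the two layers
    gW2 = h.T @ err / len(tau); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = tau.T @ dh / len(tau); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# identification output = theoretical (nominal) output + learned correction
_, correction = forward(tau)
y_identified = y_nominal + correction
```

After training, the identified output tracks the actual output more closely than the uncorrected nominal model does.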
  • the nominal dynamic equation is:

    $D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau$

  • $D(q) \in R^{n \times n}$ is a symmetric positive definite inertia matrix
  • $C(q,\dot{q}) \in R^{n \times n}$ is the Coriolis force and centrifugal force matrix
  • $G(q) \in R^{n \times 1}$ is the center of gravity term matrix
  • $q$ is the mechanical joint angular displacement vector
  • $\dot{q}$ is the angular velocity vector of the robotic arm and $\ddot{q}$ is the angular acceleration vector of the manipulator
  • $\tau \in R^{n}$ is the control torque vector of each joint of the manipulator.
  • the actual dynamic equation is:

    $D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) + F(\dot{q}) + \tau_d = \tau$

  • $F(\dot{q})$ represents the friction of joint movement and $\tau_d$ represents the disturbance in the motion of the robotic arm.
  • the disturbance includes load changes, modeling errors or/and electrical interference.
  • the parameters that are difficult to model include friction parameters, clearance parameters, or/and deformation parameters of the collaborative robot.
  • FIG. 3 shows the sensorless active compliance control module 3, which includes a position loop 31 and a force loop 32.
  • the position loop 31 includes an end position input terminal 311, a position selection matrix 312, and a position control law module 313.
  • the end position input terminal 311 is used to input an end position signal to the position selection matrix 312, and the end position signal is input to the force position hybrid control law output module according to the position signal processed by the position selection matrix 312 and the position control law module 313 33;
  • the force loop 32 includes an end force input terminal 321, a force selection matrix 322, a force control law module 323, and a joint torque estimation module 324 based on motor current.
  • the end force input terminal 321 is used to input an end force signal to the force selection matrix 322; the end force signal, after being processed by the force selection matrix 322 and the force control law module 323, is input to the force position hybrid control law output module 33.
  • the joint torque estimation module 324 feeds the real-time current of the joint motor 35 back to the end force input terminal 321; the force position hybrid control law output module 33 inputs the force position hybrid control law output signal G to the joint motor 35, and the joint motor 35 drives the collaborative robot 6 through the transmission mechanism 36.
  • the present invention further includes a robot kinematics model 314 that feeds back the joint angle and angular velocity of the collaborative robot 6 to the end position input terminal 311.
  • the present invention further includes a compensation module 34 interposed between the force-position hybrid control law output module 33 and the joint motor 35.
  • the position selection matrix 312 and the force selection matrix 322 are combined into one, which is called the compliance selection matrix S.
  • the expression of the compliance selection matrix S is a diagonal 0-1 matrix, $S = \mathrm{diag}(s_1, s_2, \ldots, s_n)$ with $s_i \in \{0, 1\}$: $s_i = 1$ selects position control along the i-th task direction and $s_i = 0$ selects force control along it.
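A minimal sketch of how the compliance selection matrix S combines the position and force control laws into the hybrid output, assuming a diagonal 0-1 selection and simple PD/P control laws (the gains and signature are illustrative, not from the patent):

```python
import numpy as np

def hybrid_control(s, x_err, dx_err, f_err, kp=100.0, kd=20.0, kf=5.0):
    """Force/position hybrid law using the compliance selection matrix.

    s is the diagonal of S: s_i = 1 puts task direction i under the position
    control law, s_i = 0 under the force control law.  Gains are illustrative.
    """
    S = np.diag(np.asarray(s, dtype=float))
    I = np.eye(len(s))
    u_pos = kp * np.asarray(x_err) + kd * np.asarray(dx_err)  # position law (PD)
    u_force = kf * np.asarray(f_err)                          # force law (P)
    return S @ u_pos + (I - S) @ u_force                      # hybrid output
```

For example, with s = [1, 1, 0] the first two directions are position-controlled and the third is force-controlled.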
  • the structure of the joint torque estimation module 324 based on motor current is as follows. The complete robot dynamics equation is Equation 1:

    $M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau$

  • $M \in R^{n \times n}$ is the joint space inertia matrix
  • $C \in R^{n \times n}$ is the Coriolis force and centripetal force calculation matrix
  • $g \in R^{n \times 1}$ is the gravity term vector
  • $q \in R^{n \times 1}$ is the driving joint angle vector
  • $\tau \in R^{n \times 1}$ is the driving joint torque
  • $N \in R^{n \times n}$ is the diagonal matrix of the reduction ratio of each joint, and $J_m$ is the inertia of the motor rotor. In the derivation process, the friction term at the motor rotor is substituted into the relationship between the motor torque and the joint torque, and the joint torque estimation module based on motor current is obtained, giving the force detection output.
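The current-based estimate can be sketched as follows, assuming the motor torque is proportional to current through a torque constant Kt and scaled through the reduction-ratio matrix N; rotor-inertia and friction compensation are omitted for brevity, and the function signature and constants are illustrative assumptions, not the patent's exact formula:

```python
import numpy as np

def estimate_external_torque(i_motor, Kt, N, tau_model):
    """Estimate external joint torque from motor current (sketch).

    tau_drive = N * Kt * i   (reduction ratio N, torque constant Kt, current i)
    tau_ext   = tau_drive - tau_model, where tau_model is the rigid-body torque
    M(q)qdd + C(q,qd)qd + g(q) predicted by the dynamics model.
    """
    tau_drive = np.diag(np.asarray(N, dtype=float)) @ (
        np.asarray(Kt, dtype=float) * np.asarray(i_motor, dtype=float))
    return tau_drive - np.asarray(tau_model, dtype=float)
```

Any nonzero residual indicates torque not explained by the rigid-body model, i.e. a candidate contact force.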
  • the force feedback human-machine cooperation anti-collision detection module 4 includes:
  • the dynamic equation establishment module 41 is used to establish the linkage coordinate system by the D-H parameter method on the predetermined robot platform, and establish the robot dynamic equation according to the Lagrangian dynamic formula;
  • the collision detection operator and disturbance observer establishment module 42, according to the robot dynamics equation and the momentum equation, constructs the collision detection operator based on the robot's energy conservation and the disturbance observer based on the change of generalized momentum;
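A disturbance observer based on the change of generalized momentum, as in module 42, is commonly realized as a first-order residual filter on the generalized momentum $p = M(q)\dot{q}$. The discrete sketch below is a minimal illustration under assumed notation (gain Ko, $\beta = g(q) - C(q,\dot{q})^{T}\dot{q}$, initial momentum zero); it is not the patent's exact construction.

```python
import numpy as np

def momentum_observer(tau_log, p_log, beta_log, Ko, dt):
    """Discrete disturbance observer based on generalized momentum change.

    p = M(q)qd is the generalized momentum, beta = g(q) - C(q, qd)^T qd
    (notation assumed here).  The residual r tracks the external collision
    torque with first-order dynamics set by the gain Ko; p(0) = 0 is assumed.
    """
    n = len(p_log[0])
    r = np.zeros(n)
    integral = np.zeros(n)            # running integral of (tau - beta + r)
    residuals = []
    for tau, p, beta in zip(tau_log, p_log, beta_log):
        integral += (tau - beta + r) * dt
        r = Ko * (p - integral)       # residual converges to external torque
        residuals.append(r.copy())
    return residuals
```

In a simple scenario where a constant external torque drives a unit-inertia joint with zero commanded torque, the residual converges to that external torque with time constant 1/Ko.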
  • the data analysis module 43 determines the relationship between the torque of each joint and the collision force, and provides a method for solving the Jacobian matrix of the robot, and analyzes its effectiveness in detecting collisions;
  • the safety protection strategy formulation module 44 based on the detection results of the collision detection model, formulates different safety protection strategies for different collision situations and in combination with actual working conditions;
  • Simulation verification and optimization module 45 based on the ADAMS-Simulink joint simulation platform, performs simulation verification and optimization on the effectiveness of the robot collision detection operator and the rationality of the safety protection strategy;
  • the actual effect verification module 46 verifies and evaluates the actual effect of the obstacle avoidance protection safety strategy based on force feedback.
  • the safety protection strategy includes: 1. stopping after collision, that is, after the robot control system detects the collision signal, the control system immediately turns off the servo drive enable; or 2. after the collision, the robot control system switches the control mode, converting position mode to torque mode; or 3. after the collision, the robot changes its original motion trajectory and leaves the collision area.
  • the present invention also includes:
  • the monocular dual-view stereo matching module 47 is used to construct an SVS-based monocular dual-view stereo matching model and to optimize geometric constraints on the loss function, achieving accurate estimation of the depth of detection targets in monocular images through the left-and-right view synthesis process and dual-view stereo matching;
  • the convolution feature extraction module 48 uses the ResNet model to extract deep convolution features based on the RGB images collected by the monocular camera;
  • the human bone key point processing module 49 optimizes the structure design of the dual-branch deep convolutional neural network based on the prior geometric knowledge of the human bone joints and the correlation between the joints, and realizes the synchronization processing of the joint points and the joint correlation.
  • one branch uses the combination of a probabilistic heat map and offsets to perform human bone key point regression; the other branch detects the joint association information of multiple people in the image, and human bone sequence data is formed through bipartite graph matching;
  • the human skeleton image data processing module 410, based on the Microsoft COCO data set, reconstructs the human skeleton image data set in combination with the characteristics of industrial human-machine collaboration scenes, uses Shanghai Jiao Tong University's open-source AlphaPose for joint point data annotation, and combines manual adjustment to obtain a pose data set oriented to industrial collaboration scenes.
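The heat-map-plus-offset regression used by one branch of module 49 can be illustrated by the decoding step that recovers a joint location from the network outputs. The shapes, stride and channel layout below are assumptions for this sketch, not details from the patent:

```python
import numpy as np

def decode_keypoint(heatmap, offsets, stride=4):
    """Recover one body-joint location from a probabilistic heat map plus a
    2-channel sub-pixel offset field.

    heatmap: (H, W); offsets: (2, H, W) holding the x- then y-offset; stride
    maps feature-map cells back to image pixels.
    """
    iy, ix = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    dx, dy = offsets[0, iy, ix], offsets[1, iy, ix]
    x = ix * stride + dx                 # sub-pixel image x coordinate
    y = iy * stride + dy                 # sub-pixel image y coordinate
    return x, y, float(heatmap[iy, ix])  # location and confidence
```

The heat map provides a coarse cell-level peak and confidence; the offset channels refine it to sub-pixel image coordinates.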

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The present invention relates to an intelligent drive-control integrated control system, comprising: a control module (1), an intelligent dynamic parameter identification module (2), a sensorless active compliance control module (3), a force feedback human-machine collaboration anti-collision control module (4) and a multi-axis drive module (5). The control module (1) is configured to perform real-time control of the multi-axis drive module (5); the intelligent dynamic parameter identification module (2) is configured to feed back, according to the motion state of a collaborative robot (6), a mechanical parameter identification signal of the collaborative robot (6) to the control module (1) in time; the sensorless active compliance control module (3) is configured to feed back, according to the motion state of the collaborative robot (6), a position signal, a force signal and an environment signal of the collaborative robot (6) to the control module (1) in time; the force feedback human-machine collaboration anti-collision control module (4) is configured to feed back, according to the motion state of the collaborative robot (6), a safety status signal of the collaborative robot (6) to the control module (1) in time; and the multi-axis drive module (5) is configured to control the motion of the collaborative robot in real time according to an instruction of the control module. The control capability of the robot can thus be improved.
PCT/CN2019/117615 2019-10-12 2019-11-12 Drive-control integrated control system WO2021068334A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019110895 2019-10-12
CNPCT/CN2019/110895 2019-10-12

Publications (1)

Publication Number Publication Date
WO2021068334A1 true WO2021068334A1 (fr) 2021-04-15

Family

ID=75436984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117615 WO2021068334A1 (fr) 2019-10-12 2019-11-12 Drive-control integrated control system

Country Status (1)

Country Link
WO (1) WO2021068334A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101332604A (zh) * 2008-06-20 2008-12-31 哈尔滨工业大学 Control method for a human-robot interaction mechanical arm
US20090106005A1 (en) * 2007-10-23 2009-04-23 Kabushiki Kaisha Toshiba Simulation reproducing apparatus
US20160096271A1 (en) * 2014-10-06 2016-04-07 The Johns Hopkins University Active vibration damping device
JP2016215303A (ja) * 2015-05-19 2016-12-22 キヤノン株式会社 Robot system, robot system control method, and monitoring console
CN108582070A (zh) * 2018-04-17 2018-09-28 上海达野智能科技有限公司 Robot collision detection system and method, storage medium, and operating system


Similar Documents

Publication Publication Date Title
CN111015649B (zh) Drive-control integrated control system
Wang et al. A hybrid visual servo controller for robust grasping by wheeled mobile robots
US9862090B2 (en) Surrogate: a body-dexterous mobile manipulation robot with a tracked base
CN108656112B (zh) Zero-force control experiment system for a mechanical arm oriented to direct teaching
CN110825076B (zh) Semi-autonomous control method for mobile robot formation navigation based on line of sight and force feedback
Luo et al. Real time human motion imitation of anthropomorphic dual arm robot based on Cartesian impedance control
WO2022252221A1 (fr) Mobile robot queue system, path planning method and following method
CN113829343A Real-time multi-task multi-person human-robot interaction system based on environment perception
CN112621746A PID control method with dead zone and visual-servo grasping system for a mechanical arm
Guo et al. A small opening workspace control strategy for redundant manipulator based on RCM method
Adjigble et al. An assisted telemanipulation approach: combining autonomous grasp planning with haptic cues
CN115122325A Robust visual servo control method for an anthropomorphic manipulator with field-of-view constraints
Ye et al. Velocity decomposition based planning algorithm for grasping moving object
Jia et al. Human/robot interaction for human support system by using a mobile manipulator
Siradjuddin et al. A real-time model based visual servoing application for a differential drive mobile robot using beaglebone black embedded system
WO2021068334A1 (fr) Drive-control integrated control system
CN116100565A Immersive real-time teleoperation platform based on an exoskeleton robot
Wang et al. A visual servoing system for interactive human-robot object transfer
Abadianzadeh et al. Visual servoing control of robot manipulator in 3D space using fuzzy hybrid controller
CN112077841B Multi-joint linkage method and system for improving the manipulation accuracy of a robot arm
Huang et al. Vision guided dual arms robotic system with DSP and FPGA integrated system structure
Hentout et al. A telerobotic human/robot interface for mobile manipulators: A study of human operator performance
Anderson et al. Coordinated control and range imaging for mobile manipulation
TWI309597B (fr)
Sawalmeh et al. A surveillance 3D hand-tracking-based Tele-operated UGV

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19948710

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19948710

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.02.2023)
