WO2023123911A1 - Robot collision detection method and apparatus, electronic device, and storage medium - Google Patents

Robot collision detection method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023123911A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
joint
hidden layer
input data
hidden
Prior art date
Application number
PCT/CN2022/100144
Other languages
English (en)
French (fr)
Inventor
冯长柱
Original Assignee
达闼科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 达闼科技(北京)有限公司
Publication of WO2023123911A1

Links

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls

Definitions

  • the embodiments of the present invention relate to the field of robotics, and in particular to a robot collision detection method and apparatus, an electronic device, and a storage medium.
  • in order to protect personal safety, a collision detection function has been added: when the robot unexpectedly hits a person or a surrounding object, it can detect the collision and immediately stop or enter a compliant mode, so as not to hurt people or damage surrounding objects.
  • Collision detection algorithms commonly used in the industry are generally established based on mechanical models such as Newton-Euler equations or Euler-Lagrange equations.
  • the equations are relatively complicated: a reference coordinate system must first be established for each joint and the kinematic parameters of each joint computed, such as linear velocity, rotational velocity, position, attitude, rotational acceleration and linear acceleration, before the force and moment of each joint can be calculated. The regression of the model parameters is affected by the manufacturing differences of each actuator; for example, the nonlinearity of current/torque, the friction of the rotating shaft, assembly accuracy errors, and actuator velocity and position acquisition errors all affect the final parameter identification results and lead to false triggering of collision detection during use. Moreover, the excitation dances used for the regression all need to be carefully choreographed specific movements.
  • the purpose of the embodiments of the present invention is to provide a robot collision detection method, device, electronic equipment, and storage medium.
  • by using a deep learning model for collision detection, the above defects caused by using a mechanical model for collision detection are avoided.
  • an embodiment of the present invention provides a robot collision detection method, including:
  • Embodiments of the present invention also provide a robot collision detection device, including:
  • the collection module is used to collect the kinematic parameters and measured torque of each joint of the robot;
  • a prediction module configured to input the kinematic parameters into a pre-trained deep network model to obtain the predicted torque of each joint;
  • a judging module configured to judge whether the robot collides according to the difference between the measured torque and the predicted torque of each joint.
  • Embodiments of the present invention also provide an electronic device, including:
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the robot collision detection method as described above.
  • the embodiment of the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the robot collision detection method described above.
  • Embodiments of the present invention also provide a computer program which, when executed by a processor, implements the robot collision detection method described above.
  • the embodiments of the present invention collect the kinematic parameters and measured torque of each joint of the robot; input the kinematic parameters into a pre-trained deep network model to obtain the predicted torque of each joint; and judge, according to the difference between the measured torque and the predicted torque of each joint, whether the robot has collided.
  • compared with traditional collision detection based on a mechanical model, this solution has the following advantages:
  • the deep network model is learned from and driven by data features, which avoids establishing a complex mechanical model, and it is applicable to serial manipulators or robots of various types, sizes, and forms;
  • the model has wide applicability and is simple and easy to understand;
  • the data feature learning based on the deep network model can adapt to the data features caused by manufacturing differences and data acquisition errors, and there are no special requirements on the training dances; ordinary dance moves can be used for learning and training, eliminating the need to design excitation dances;
  • the deep network model can be pre-trained, and then iteratively trained for each robot, which can save the training process of initial parameters and facilitate rapid deployment;
  • the iterative learning of the deep network model gives it relatively strong generalization ability. For example, when the robot changes physically, such as when the size of a certain joint changes, only simple iterative training on the basis of the original parameters is needed before it can be used again; likewise, if a certain dance falsely triggers collision detection, simple iterative training on that dance solves the problem, whereas a traditional mechanical model would need to be modified or manually re-tuned, which is very time-consuming and labor-intensive.
  • Fig. 1 is a specific flowchart of a robot collision detection method according to a first embodiment of the present invention;
  • Fig. 2 is a flowchart of the process of constructing an encoding network according to the first embodiment of the present invention;
  • Fig. 3 is a schematic structural diagram of the encoding network according to the first embodiment of the present invention;
  • Fig. 4 is a flowchart of the process of constructing a decoding network according to the first embodiment of the present invention;
  • Fig. 5 is a schematic structural diagram of the decoding network according to the first embodiment of the present invention;
  • Fig. 6 is a schematic structural diagram of a robot collision detection apparatus according to a second embodiment of the present invention;
  • Fig. 7 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
  • the first embodiment of the present invention relates to a robot collision detection method, which is suitable for collision detection of industrial mechanical arms or robots during motion (such as dancing, construction work, etc.).
  • the robot collision detection method includes the following steps:
  • Step 101 Collect kinematic parameters and measured torques of each joint of the robot.
  • the kinematic parameters and measured torque of each joint on the robot body are collected, and the robot may be a serial robot.
  • taking the trunk joints of a certain dancing robot as an example, these trunk joints include, from bottom to top, Kneel (joint No. 1), Trunk_yaw (joint No. 2), Trunk_pitch (joint No. 3) and Trunk_roll (joint No. 4).
  • while the robot dances, the kinematic parameters and measured torque of each trunk joint of the robot are collected at a frequency of 200 Hz, and the kinematic parameters and measured torque from one sampling are used as one group of data for the subsequent collision detection process.
  • the kinematic parameters at least include: position (pos), velocity (vel) and acceleration (acc) of the joint.
  • the measured torque (effort) within a group can be regarded as the actual torque produced under the kinematic parameters within that group.
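  • for concreteness, one such sample group might be laid out as below; this is an illustrative sketch only, and the field names and numeric values are assumptions rather than anything specified by the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class JointSample:
    pos: float     # joint position
    vel: float     # joint velocity
    acc: float     # joint acceleration
    effort: float  # measured torque at the same sampling instant

# one group of data = one 200 Hz sampling across all joints, e.g. the
# four trunk joints Kneel, Trunk_yaw, Trunk_pitch, Trunk_roll
group: List[JointSample] = [
    JointSample(pos=0.12, vel=0.80, acc=-1.50, effort=3.2),  # joint 1
    JointSample(pos=0.05, vel=0.10, acc=0.30, effort=1.1),   # joint 2
    JointSample(pos=-0.40, vel=0.00, acc=0.00, effort=0.6),  # joint 3
    JointSample(pos=0.90, vel=-0.20, acc=0.40, effort=2.0),  # joint 4
]
```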
  • Step 102 Input the kinematic parameters into the pre-trained deep network model to obtain the predicted torque of each joint.
  • the input of the pre-trained deep network model is the kinematic parameters of each joint of the robot collected at any sampling instant, and the output is the predicted torque of the corresponding joints obtained by model prediction based on the input kinematic parameters.
  • this embodiment does not limit the network structure or the construction process of the deep network model.
  • the deep network model can be constructed using a model framework of an encoder network-decoder network (Encoder-Decoder).
  • the role of the encoding network (Encoder) is to extract features from the input kinematic parameters of each joint to obtain a learning vector (C) covering the kinematic parameters of all joints; the role of the decoding network (Decoder) is to perform feature conversion on the learning vector (C) output by the Encoder to obtain the predicted torque (effort) produced by each joint under the input kinematic parameters.
  • the process of constructing the encoding network includes:
  • Step 201 Create an input layer and a plurality of first hidden layers in one-to-one correspondence with the joints of the robot; each first hidden layer is used to receive from the input layer the first input data of the kinematic parameters of the corresponding joint, and to generate first output data based on the first input data.
  • the encoding network includes an input layer and multiple first hidden layers (Hidden layer 1, Hidden layer 2, ..., Hidden layer n); the number of first hidden layers equals the number of joints contained in the robot (joint 1, joint 2, ..., joint n); the first hidden layers correspond to the joints one by one, that is, to the kinematic parameters of each joint (Input joint 1 pos/vel/acc, Input joint 2 pos/vel/acc, ..., Input joint n pos/vel/acc).
  • Each first hidden layer is used to receive first input data of kinematic parameters corresponding to joints, and calculate first output data based on the first input data.
  • for example, the first hidden layer Hidden layer i receives the first input data of the kinematic parameters Input joint i pos/vel/acc of the corresponding joint i, and computes the first output data (Output joint i) based on that first input data.
  • i belongs to any integer in [1,n].
  • Step 202 Sort the first hidden layers in a first order to form a first sequence.
  • the ordering of the first hidden layers is not limited in this embodiment; for example, they can be sorted as in Fig. 3: Hidden layer 1, Hidden layer 2, ..., Hidden layer n. Since the first hidden layers correspond to the joints one by one, the order of the first hidden layers corresponds to the order of the joints (the kinematic parameters of the joints).
  • Step 203 For every two adjacent first hidden layers in the first sequence, superimpose the first input data and the first output data of the earlier first hidden layer onto the first input data of the later first hidden layer, as the updated first input data of the later first hidden layer.
  • the first output data of each first hidden layer other than the first in the first sequence is generated based on the updated first input data of that hidden layer, and the first output data of the last first hidden layer in the first sequence is the output data of the encoding network.
  • when the input layer receives data, it receives the kinematic parameters of each joint in serial order (this order is the first order in which the first hidden layers are arranged), so it also delivers the first input data to the first hidden layers in this order: the first input data corresponding to each joint is sent sequentially to the first hidden layer of that joint, and the input order is the first order of the first hidden layers. In this way, for every two adjacent first hidden layers in the first sequence, the earlier first hidden layer receives its first input data and generates its first output data before the later one does.
  • when the later first hidden layer computes its first output data, it can therefore use not only the first input data it receives itself but also the first input data and first output data of the earlier first hidden layer. That is, the first input data and first output data of the earlier first hidden layer are superimposed onto the first input data of the later first hidden layer as its updated first input data. The later first hidden layer can then generate its first output data based on the updated first input data, and that first output data covers the kinematic parameter information of the joints corresponding to both hidden layers.
  • proceeding in this way, every non-first first hidden layer can update its first input data based on the first input data and first output data of the previous first hidden layer, and use the updated first input data to compute its own first output data.
  • the first output data of the last first hidden layer covers the kinematic parameter information of the joints corresponding to all first hidden layers; using it as the output data of the encoding network reflects the encoding network's learning result over the kinematic parameters of all joints.
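  • a minimal PyTorch sketch of this encoder chaining is shown below. It is an illustration, not the patent's implementation: layer sizes such as hidden_dim are assumptions, and the superposition is realized here as concatenation because the patent text does not fix the operation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """One first hidden layer per joint; each non-first layer also
    receives the previous layer's input and output superimposed."""
    def __init__(self, n_joints: int, feat_dim: int = 3, hidden_dim: int = 32):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(feat_dim, hidden_dim) if i == 0
             else nn.Linear(2 * feat_dim + hidden_dim, hidden_dim)
             for i in range(n_joints)])

    def forward(self, joint_feats):
        # joint_feats: (batch, n_joints, 3) holding pos/vel/acc per joint
        prev_in = prev_out = None
        for i, layer in enumerate(self.layers):
            x = joint_feats[:, i, :]              # first input data of joint i+1
            if i > 0:                             # superimpose previous layer's
                x = torch.cat([x, prev_in, prev_out], dim=-1)  # input and output
            out = torch.relu(layer(x))            # first output data
            prev_in, prev_out = joint_feats[:, i, :], out
        return out                                # learning vector C
```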
  • the process of constructing the decoding network includes:
  • Step 204 Create an output layer and a plurality of second hidden layers in one-to-one correspondence with the joints of the robot; each second hidden layer is used to receive the output data of the encoding network as the second input data of that second hidden layer, and to output, through the output layer, the second output data generated based on the second input data, obtaining the predicted torque of the corresponding joint.
  • the decoding network includes an output layer and multiple second hidden layers (Hidden layer 1, Hidden layer 2, ..., Hidden layer n); the number of second hidden layers equals the number of joints contained in the robot (joint 1, joint 2, ..., joint n); the second hidden layers correspond to the joints one by one.
  • each second hidden layer is used to receive the output data of the encoding network (Encoder output) as the second input data of that second hidden layer, and the second output data generated based on the second input data is output through the output layer to obtain the predicted torque of the corresponding joint (Output joint 1 effort, Output joint 2 effort, ..., Output joint n effort).
  • for example, the second hidden layer Hidden layer i receives the output data of the encoding network as its second input data, and computes based on it the second output data of joint i under the current kinematic parameters; the second output data is format-converted by the output layer to output the predicted torque.
  • i belongs to any integer in [1,n].
  • Step 205 Sort the second hidden layers in a second order to form a second sequence.
  • the ordering of the second hidden layers is not limited in this embodiment; for example, they can be sorted as in Fig. 5: Hidden layer n, ..., Hidden layer 2, Hidden layer 1. Since the second hidden layers correspond to the joints one by one, the order of the second hidden layers corresponds to the order of the joints (the predicted torques of the joints).
  • Step 206 For every two adjacent second hidden layers in the second sequence, superimpose the second output data of the earlier second hidden layer onto the second input data of the later second hidden layer, as the updated second input data of the later second hidden layer.
  • the second output data of the non-first second hidden layer in the second sequence is generated based on the updated second input data of the second hidden layer.
  • when the output layer outputs data, it outputs the predicted torque of each joint in serial order (this order is the second order in which the second hidden layers are arranged), so it also receives the second output data from the second hidden layer of each joint sequentially in this serial order; the output order is the second order of the second hidden layers and is also the order in which the second hidden layers generate their second output data. In this way, for every two adjacent second hidden layers in the second sequence, the earlier second hidden layer receives its second input data and generates its second output data before the later one does.
  • when the later second hidden layer computes its second output data, it can therefore use not only the second input data it receives itself but also the second output data of the earlier second hidden layer. That is, the second output data of the earlier second hidden layer is superimposed onto the second input data of the later second hidden layer as its updated second input data. The later second hidden layer can then generate its second output data based on the updated second input data, and that second output data covers the predicted-torque information, under the current kinematic parameters, of the joints corresponding to both hidden layers.
  • proceeding in this way, every non-first second hidden layer can update its second input data based on the second output data of the previous second hidden layer, and use the updated second input data to compute its own second output data.
  • in terms of the correspondence with the joints of the robot, the second order is the reverse of the first order.
  • for example, when the first order is Hidden layer 1, Hidden layer 2, ..., Hidden layer n as shown in Fig. 3, the second order is Hidden layer n, ..., Hidden layer 2, Hidden layer 1 as shown in Fig. 5.
  • the advantage of this processing is that, in the output data of the encoding network, the first output data of the joints computed earlier retains a smaller proportion of those joints' kinematic parameter information than the first output data of the joints computed later.
  • the second output data corresponding to the first output data of the later-computed joints can therefore be predicted first based on the output data of the encoding network, and the predicted torques of those joints obtained with good accuracy.
  • then, when computing the predicted torques of subsequent joints, the output data of the encoding network can be superimposed with the previously obtained second output data of a joint, for example by subtracting the previously obtained second output data from the output data of the encoding network; this increases the proportion of the kinematic parameter information of the joint currently being computed in the updated second input data, so the predicted torque of that joint is likewise obtained accurately.
  • by analogy, the kinematic parameters of a joint processed earlier in the encoding network yield that joint's predicted torque later in the decoding network, so that the processing of each joint's information is deployed symmetrically across the encoding and decoding networks.
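  • a companion sketch of the decoder follows, under the same caveats as the encoder sketch above; subtraction is used for the superposition only because the text mentions subtracting the previous second output data from the encoder output as one option.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """One second hidden layer per joint, consumed in reverse joint
    order; each later layer's input is the encoder vector C adjusted
    by the previous layer's second output data."""
    def __init__(self, n_joints: int, hidden_dim: int = 32):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(n_joints)])
        self.out = nn.Linear(hidden_dim, 1)  # output layer: one torque per joint

    def forward(self, c):
        # c: (batch, hidden_dim), the encoder's learning vector
        efforts = [None] * len(self.layers)
        x = c
        for i in reversed(range(len(self.layers))):  # joint n first, joint 1 last
            h = torch.relu(self.layers[i](x))        # second output data
            efforts[i] = self.out(h)                 # predicted torque of joint i+1
            x = c - h                                # updated second input data
        return torch.cat(efforts, dim=-1)            # (batch, n_joints) efforts
```

  • composing the two, Decoder(Encoder(joint_feats)) yields the per-joint predicted torques that are compared against the measured torques in step 103.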
  • in addition, when jointly training the encoding and decoding networks, more than a thousand sample data containing the kinematic parameters of each joint can be collected at a frequency of 200 Hz while the robot performs two dances lasting more than 30 seconds in total.
  • Step 103 According to the difference between the measured torque and the predicted torque of each joint, determine whether the robot has collided.
  • when the robot has not collided, the measured torque of each joint should differ little from the predicted torque; when a collision occurs, the gap is large. Based on this, whether the robot has collided can be judged from the difference between the measured torque and the predicted torque of each joint.
  • This step 103 can be realized through the following steps.
  • Step 1 Determine whether the difference between the measured torque and the predicted torque of each joint is greater than a preset threshold.
  • the preset threshold is the boundary value for evaluating the collision of the robot, and different preset thresholds can be set for each joint.
  • the difference here refers to the absolute value of the difference between the measured torque and the predicted torque of each joint.
  • Step 2 If the difference of any joint is greater than the preset threshold, it is determined that the robot has collided.
  • Step 3 If the differences of all joints are not greater than the preset threshold, it is determined that the robot has not collided.
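  • the threshold rule of steps 1 to 3 amounts to the following check; this is a sketch in which the threshold values are illustrative tuning choices, not values given by the patent.

```python
import numpy as np

def detect_collision(measured, predicted, thresholds):
    """Collision if any joint's |measured - predicted| torque difference
    exceeds that joint's preset threshold."""
    diff = np.abs(np.asarray(measured) - np.asarray(predicted))
    return bool(np.any(diff > np.asarray(thresholds)))

# usage with per-joint thresholds (illustrative numbers)
collided = detect_collision(measured=[1.2, 0.4, 2.0, 0.1],
                            predicted=[1.0, 0.5, 1.8, 0.1],
                            thresholds=[0.5, 0.5, 0.5, 0.5])
```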
  • compared with the related art, the embodiment of the present invention predicts the torque of each joint of the robot by introducing a deep network model, and judges whether the robot has collided according to the difference between the measured torque and the predicted torque of each joint.
  • relevant experimental data show that the deep network model in this embodiment can accurately learn the data features of the inverse dynamics and achieve very good verification results with very few computing resources; these results show that collision detection based on the deep network model is more accurate than collision detection based on a mechanical model.
  • the second embodiment of the present invention relates to a robot collision detection device, which can be used to implement the robot collision detection method in the above method embodiment.
  • the robot collision detection device includes:
  • the collection module 301 is used to collect the kinematic parameters and measured torque of each joint of the robot;
  • a prediction module 302 configured to input the kinematic parameters into a pre-trained deep network model to obtain the predicted torque of each joint;
  • the judging module 303 is configured to judge whether the robot collides according to the difference between the measured torque and the predicted torque of each joint.
  • the deep network model is constructed using a model framework of encoding network-decoding network.
  • the above-mentioned robot collision detection device also includes:
  • an encoding network construction module, used to create an input layer and a plurality of first hidden layers in one-to-one correspondence with the joints of the robot, each first hidden layer being used to receive from the input layer the first input data of the kinematic parameters of the corresponding joint and to generate first output data based on the first input data; to sort the first hidden layers in a first order to form a first sequence; and, for every two adjacent first hidden layers in the first sequence, to superimpose the first input data and first output data of the earlier first hidden layer onto the first input data of the later first hidden layer as the updated first input data of the later first hidden layer;
  • wherein the first output data of each first hidden layer other than the first in the first sequence is generated based on the updated first input data of that hidden layer, and the first output data of the last first hidden layer in the first sequence is the output data of the encoding network.
  • the above-mentioned robot collision detection device also includes:
  • a decoding network construction module, used to create an output layer and a plurality of second hidden layers in one-to-one correspondence with the joints of the robot, each second hidden layer being used to receive the output data of the encoding network as the second input data of that second hidden layer and to output, through the output layer, the second output data generated based on the second input data, obtaining the predicted torque of the corresponding joint; to sort the second hidden layers in a second order to form a second sequence; and, for every two adjacent second hidden layers in the second sequence, to superimpose the second output data of the earlier second hidden layer onto the second input data of the later second hidden layer as the updated second input data of the later second hidden layer;
  • the second output data of the non-first second hidden layer in the second sequence is generated based on the updated second input data of the second hidden layer.
  • the second order is the reverse order of the first order.
  • the kinematic parameters at least include: the position, velocity and acceleration of the joint.
  • the judging module 303 is configured to judge whether the difference between the measured torque of each joint and the predicted torque is greater than a preset threshold; if the difference of any of the joints is greater than the preset threshold, it is determined that the robot has collided; if the difference of all the joints is not greater than the preset threshold, it is determined that the robot has not collided.
  • the above robot collision detection device further includes: a control module, configured to control the robot to stop moving or enter a compliant mode after the judging module determines that the robot has collided.
  • the robot is a serial robot.
  • compared with the related art, the embodiment of the present invention predicts the torque of each joint of the robot by introducing a deep network model and judges whether the robot has collided according to the difference between the measured torque and the predicted torque of each joint, which achieves a good collision detection effect.
  • the third embodiment of the present invention relates to an electronic device, as shown in Fig. 7, including at least one processor 402 and a memory 401 communicatively connected to the at least one processor 402; the memory 401 stores instructions executable by the at least one processor 402, and the instructions are executed by the at least one processor 402 so that the at least one processor 402 can perform any one of the above method embodiments.
  • the memory 401 and the processor 402 are connected by a bus, and the bus may include any number of interconnected buses and bridges, and the bus connects one or more processors 402 and various circuits of the memory 401 together.
  • the bus may also connect together various other circuits such as peripherals, voltage regulators, and power management circuits, all of which are well known in the art and therefore will not be further described herein.
  • the bus interface provides an interface between the bus and the transceiver.
  • a transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing means for communicating with various other devices over a transmission medium.
  • the data processed by the processor 402 is transmitted on the wireless medium through the antenna, and further, the antenna also receives the data and transmits the data to the processor 402 .
  • Processor 402 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfacing, voltage regulation, power management, and other control functions. And the memory 401 may be used to store data used by the processor 402 when performing operations.
  • a fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program.
  • when the computer program is executed by a processor, any one of the above method embodiments is implemented.
  • a fifth embodiment of the present invention relates to a computer program.
  • when the computer program is executed by a processor, any one of the above method embodiments is implemented.
  • the program is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

Embodiments of the present invention relate to the field of robotics and disclose a robot collision detection method and apparatus, an electronic device, and a storage medium. Kinematic parameters and measured torques of each joint of a robot are collected; the kinematic parameters are input into a pre-trained deep network model to obtain the predicted torque of each joint; and whether the robot has collided is judged according to the difference between the measured torque and the predicted torque of each joint. By using a deep learning model for collision detection, this solution avoids the many defects caused by using a mechanical model for collision detection.

Description

Robot collision detection method and apparatus, electronic device, and storage medium
This application is based on, and claims priority to, the Chinese patent application with application number 202111674606.X filed on December 31, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present invention relate to the field of robotics, and in particular to a robot collision detection method and apparatus, an electronic device, and a storage medium.
Background
At present, in order to protect personal safety, a collision detection function has been added in the field of industrial manipulators and robots: when the robot unexpectedly hits a person or a surrounding object, it can detect the collision and immediately stop or enter a compliant mode, so as not to hurt people or damage surrounding objects.
Collision detection algorithms commonly used in the industry are generally built on mechanical models such as the Newton-Euler equations or the Euler-Lagrange equations. The equations are relatively complicated: a reference coordinate system must first be established for each joint and the kinematic parameters of each joint computed, such as linear velocity, rotational velocity, position, attitude, rotational acceleration and linear acceleration, before the force and moment of each joint can be calculated. The regression of the model parameters is affected by the manufacturing differences of each actuator; for example, the nonlinearity of current/torque, the friction of the rotating shaft, assembly accuracy errors, and actuator velocity and position acquisition errors all affect the final parameter identification results and lead to false triggering of collision detection during use. Moreover, the excitation dances used for the regression all need to be carefully choreographed specific movements.
Technical Solution
The purpose of the embodiments of the present invention is to provide a robot collision detection method and apparatus, an electronic device, and a storage medium that, by using a deep learning model for collision detection, avoid the above defects caused by using a mechanical model for collision detection.
To solve the above technical problem, an embodiment of the present invention provides a robot collision detection method, including:
collecting kinematic parameters and measured torques of each joint of a robot;
inputting the kinematic parameters into a pre-trained deep network model to obtain the predicted torque of each joint;
judging, according to the difference between the measured torque and the predicted torque of each joint, whether the robot has collided.
An embodiment of the present invention also provides a robot collision detection apparatus, including:
a collection module configured to collect kinematic parameters and measured torques of each joint of a robot;
a prediction module configured to input the kinematic parameters into a pre-trained deep network model to obtain the predicted torque of each joint;
a judging module configured to judge, according to the difference between the measured torque and the predicted torque of each joint, whether the robot has collided.
An embodiment of the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the robot collision detection method described above.
An embodiment of the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the robot collision detection method described above.
An embodiment of the present invention also provides a computer program which, when executed by a processor, implements the robot collision detection method described above.
Compared with the prior art, embodiments of the present invention collect the kinematic parameters and measured torques of each joint of the robot, input the kinematic parameters into a pre-trained deep network model to obtain the predicted torque of each joint, and judge whether the robot has collided according to the difference between the measured torque and the predicted torque of each joint. Compared with traditional collision detection based on a mechanical model, this solution has the following advantages:
1. The deep network model is learned from and driven by data features, which avoids establishing a complex mechanical model; it is applicable to serial manipulators or robots of various models, sizes and forms, so the model has wide applicability and is simple and easy to understand.
2. Data feature learning based on the deep network model can adapt to data features caused by manufacturing differences and data acquisition errors, and there are no special requirements on the training dances; ordinary dance movements can be used for learning and training, eliminating the need to design excitation dances.
3. The deep network model can be pre-trained and then iteratively trained for each individual robot, which saves the training of initial parameters and facilitates rapid deployment.
4. The iterative learning of the deep network model gives it relatively strong generalization ability. For example, when the robot changes physically, such as when the size of a certain joint changes, only simple iterative training on the basis of the original parameters is needed before it can be used again; likewise, if a certain dance falsely triggers collision detection, simple iterative training on that dance solves the problem, whereas a traditional mechanical model would need to be modified or manually re-tuned, which is very time-consuming and labor-intensive.
Brief Description of the Drawings
One or more embodiments are exemplarily illustrated by the figures in the corresponding drawings; these exemplary illustrations do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures in the drawings do not constitute a limitation of scale.
Fig. 1 is a specific flowchart of a robot collision detection method according to a first embodiment of the present invention;
Fig. 2 is a flowchart of the process of constructing an encoding network in the first embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the encoding network in the first embodiment of the present invention;
Fig. 4 is a flowchart of the process of constructing a decoding network in the first embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the decoding network in the first embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a robot collision detection apparatus according to a second embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Embodiments of the Invention
To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the drawings. However, those of ordinary skill in the art will understand that many technical details are given in the embodiments to help the reader better understand this application, and that the technical solutions claimed in this application can be realized even without these technical details and with various changes and modifications based on the following embodiments.
A first embodiment of the present invention relates to a robot collision detection method, which is suitable for collision detection of industrial manipulators or robots during motion (such as dancing or construction work). As shown in Fig. 1, the robot collision detection method includes the following steps.
Step 101: collect kinematic parameters and measured torques of each joint of the robot.
During the motion of the robot, the kinematic parameters and measured torques of each joint on the robot body are collected; the robot may be a serial robot. For example, taking the trunk joints of a certain dancing robot as an example, the trunk joints include, from bottom to top, Kneel (joint No. 1), Trunk_yaw (joint No. 2), Trunk_pitch (joint No. 3) and Trunk_roll (joint No. 4). While the robot dances, the kinematic parameters and measured torques of each trunk joint are collected at a frequency of 200 Hz, and the kinematic parameters and measured torques from one sampling are used as one group of data for the subsequent collision detection process.
The kinematic parameters include at least the position (pos), velocity (vel) and acceleration (acc) of the joint. The measured torque (effort) within a group can be regarded as the actual torque produced under the kinematic parameters within that group.
Step 102: input the kinematic parameters into a pre-trained deep network model to obtain the predicted torque of each joint.
The input of the pre-trained deep network model is the kinematic parameters of each joint of the robot collected at any sampling instant, and the output is the predicted torque of the corresponding joints obtained by model prediction based on the input kinematic parameters.
This embodiment does not limit the network structure or construction process of the deep network model. For example, the deep network model may be built on an encoder-decoder (Encoder-Decoder) model framework, where the role of the encoding network (Encoder) is to extract features from the input kinematic parameters of each joint to obtain a learning vector (C) covering the kinematic parameters of all joints, and the role of the decoding network (Decoder) is to perform feature conversion on the learning vector (C) output by the Encoder to obtain the predicted torque (effort) produced by each joint under the input kinematic parameters.
The construction processes of the Encoder network and the Decoder network are described separately below.
As shown in Fig. 2, the process of constructing the encoding network provided by this embodiment includes:
Step 201: create an input layer and multiple first hidden layers in one-to-one correspondence with the joints of the robot; each first hidden layer is used to receive from the input layer the first input data of the kinematic parameters of the corresponding joint, and to generate first output data based on the first input data.
As shown in Fig. 3, which is a schematic structural diagram of the encoding network in this embodiment, the encoding network includes an input layer and multiple first hidden layers (Hidden layer 1, Hidden layer 2, ..., Hidden layer n). The number of first hidden layers equals the number of joints contained in the robot (joint 1, joint 2, ..., joint n); the first hidden layers correspond to the joints one by one, that is, to the kinematic parameters of each joint (Input joint 1 pos/vel/acc, Input joint 2 pos/vel/acc, ..., Input joint n pos/vel/acc). Each first hidden layer receives the first input data of the kinematic parameters of its corresponding joint and computes first output data from it. For example, the first hidden layer Hidden layer i receives the first input data of the kinematic parameters Input joint i pos/vel/acc of the corresponding joint i and computes the first output data (Output joint i) from it, where i is any integer in [1, n].
Step 202: sort the first hidden layers in a first order to form a first sequence.
This embodiment does not limit the ordering of the first hidden layers; for example, they may be sorted as in Fig. 3: Hidden layer 1, Hidden layer 2, ..., Hidden layer n. Since the first hidden layers correspond to the joints one by one, the order of the first hidden layers corresponds to the order of the joints (the kinematic parameters of the joints).
Step 203: for every two adjacent first hidden layers in the first sequence, superimpose the first input data and first output data of the earlier first hidden layer onto the first input data of the later first hidden layer, as the updated first input data of the later first hidden layer.
The first output data of each first hidden layer other than the first in the first sequence is generated from the updated first input data of that hidden layer, and the first output data of the last first hidden layer in the first sequence is the output data of the encoding network.
When the input layer receives data, it receives the kinematic parameters of each joint in serial order (this order is the first order in which the first hidden layers are arranged), so the first input data corresponding to each joint is also delivered sequentially, in this serial order, to the first hidden layer of that joint; the input order is the first order of the first hidden layers. Thus, for every two adjacent first hidden layers in the first sequence, the earlier first hidden layer receives its first input data and generates its first output data before the later one. On this basis, when the later first hidden layer computes its first output data, it can use not only the first input data it receives itself but also the first input data and first output data of the earlier first hidden layer: these are superimposed onto the first input data of the later first hidden layer as its updated first input data. The later first hidden layer then generates its first output data from the updated first input data, and that output covers the kinematic parameter information of the joints corresponding to both hidden layers.
Proceeding in this way, every non-first first hidden layer in the first sequence updates its first input data based on the first input data and first output data of the previous first hidden layer, and uses the updated first input data to compute its own first output data. The first output data of the last first hidden layer in the first sequence covers the kinematic parameter information of the joints of all first hidden layers; using it as the output data of the encoding network reflects the encoding network's learning result over the kinematic parameters of all joints.
As shown in Fig. 4, the process of constructing the decoding network provided by this embodiment includes:
Step 204: create an output layer and multiple second hidden layers in one-to-one correspondence with the joints of the robot; each second hidden layer is used to receive the output data of the encoding network as the second input data of that second hidden layer, and to output, through the output layer, the second output data generated from the second input data, obtaining the predicted torque of the corresponding joint.
As shown in Fig. 5, which is a schematic structural diagram of the decoding network in this embodiment, the decoding network includes an output layer and multiple second hidden layers (Hidden layer 1, Hidden layer 2, ..., Hidden layer n). The number of second hidden layers equals the number of joints contained in the robot (joint 1, joint 2, ..., joint n); the second hidden layers correspond to the joints one by one. Each second hidden layer receives the output data of the encoding network (Encoder output) as its second input data, and the second output data generated from that second input data is output through the output layer to obtain the predicted torque of the corresponding joint (Output joint 1 effort, Output joint 2 effort, ..., Output joint n effort). For example, the second hidden layer Hidden layer i receives the output data of the encoding network as its second input data and computes from it the second output data of joint i under the current kinematic parameters; the second output data is format-converted by the output layer to output the predicted torque, where i is any integer in [1, n].
Step 205: sort the second hidden layers in a second order to form a second sequence.
This embodiment does not limit the ordering of the second hidden layers; for example, they may be sorted as in Fig. 5: Hidden layer n, ..., Hidden layer 2, Hidden layer 1. Since the second hidden layers correspond to the joints one by one, the order of the second hidden layers corresponds to the order of the joints (the predicted torques of the joints).
Step 206: for every two adjacent second hidden layers in the second sequence, superimpose the second output data of the earlier second hidden layer onto the second input data of the later second hidden layer, as the updated second input data of the later second hidden layer.
The second output data of each second hidden layer other than the first in the second sequence is generated from the updated second input data of that hidden layer.
When the output layer outputs data, it outputs the predicted torque of each joint in serial order (this order is the second order in which the second hidden layers are arranged), so it also receives the second output data from the second hidden layers sequentially in this serial order; the output order is the second order of the second hidden layers and is also the order in which the second hidden layers generate their second output data. Thus, for every two adjacent second hidden layers in the second sequence, the earlier second hidden layer receives its second input data and generates its second output data before the later one. On this basis, when the later second hidden layer computes its second output data, it can use not only the second input data it receives itself but also the second output data of the earlier second hidden layer: the second output data of the earlier second hidden layer is superimposed onto the second input data of the later second hidden layer as its updated second input data. The later second hidden layer then generates its second output data from the updated second input data, and that output covers the predicted-torque information, under the current kinematic parameters, of the joints corresponding to both hidden layers.
Proceeding in this way, every non-first second hidden layer in the second sequence updates its second input data based on the second output data of the previous second hidden layer, and uses the updated second input data to compute its own second output data.
In terms of the correspondence with the joints of the robot, the second order is the reverse of the first order. For example, when the first order is Hidden layer 1, Hidden layer 2, ..., Hidden layer n as shown in Fig. 3, the second order is Hidden layer n, ..., Hidden layer 2, Hidden layer 1 as shown in Fig. 5. The advantage of this is that, in the output data of the encoding network, the first output data of the joints computed earlier retains a smaller proportion of those joints' kinematic parameter information than the first output data of the joints computed later; the second output data corresponding to the first output data of the later-computed joints can therefore be predicted first from the encoder output, yielding accurate predicted torques for those joints. Then, when computing the predicted torques of subsequent joints, the encoder output can be superimposed with the previously obtained second output data of a joint, for example by subtracting the previously obtained second output data from the encoder output; this increases the proportion of the kinematic parameter information of the joint currently being computed in the updated second input data, so the predicted torque of that joint is also obtained accurately. By analogy, the kinematic parameters of a joint processed earlier in the encoding network yield that joint's predicted torque later in the decoding network, so that the processing of each joint's information is deployed symmetrically across the encoding and decoding networks.
In addition, when jointly training the encoding network and the decoding network, more than a thousand sample data containing the kinematic parameters of each joint can be collected at a frequency of 200 Hz while the robot performs two dances lasting more than 30 seconds in total; each sample contains one group of kinematic parameters (pos/vel/acc) and the measured torque (effort) of each trunk joint of the robot. Simulation and verification programs are written under the PyTorch framework, and the collected samples are divided into two groups: one for training the Encoder-Decoder model and one for verifying it.
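As a concrete illustration of this training setup (a sketch only: the model below is a simplified stand-in for the Encoder-Decoder network described above, and details such as n_joints, the 80/20 split and the epoch count are assumptions not fixed by the patent), the data split and training loop under PyTorch could look like this:

```python
import torch
import torch.nn as nn

n_joints = 4  # e.g. the four trunk joints Kneel, Trunk_yaw, Trunk_pitch, Trunk_roll

# stand-in model; in the embodiment this would be the Encoder-Decoder network
model = nn.Sequential(nn.Linear(n_joints * 3, 64), nn.ReLU(),
                      nn.Linear(64, n_joints))

# placeholders for the recorded dance data: rows of pos/vel/acc per joint
# sampled at 200 Hz, and the corresponding measured torques
samples = torch.randn(1000, n_joints * 3)
efforts = torch.randn(1000, n_joints)

split = int(0.8 * len(samples))          # one group to train, one to verify
train_x, val_x = samples[:split], samples[split:]
train_y, val_y = efforts[:split], efforts[split:]

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(train_x), train_y)  # regress predicted vs measured torque
    loss.backward()
    opt.step()

with torch.no_grad():
    val_loss = loss_fn(model(val_x), val_y)  # verification on the held-out group
```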
Step 103: judge, according to the difference between the measured torque and the predicted torque of each joint, whether the robot has collided.
When the robot has not collided, the measured torque of each joint should differ little from the predicted torque; when a collision occurs, the gap is large. Based on this, whether the robot has collided can be judged from the size of the difference between the measured torque and the predicted torque of each joint.
Step 103 can be realized through the following steps.
Step 1: judge whether the difference between the measured torque and the predicted torque of each joint is greater than a preset threshold.
The preset threshold is the boundary value for deciding that the robot has collided, and a different preset threshold can be set for each joint. The difference here refers to the absolute value of the difference between the measured torque and the predicted torque of each joint.
Step 2: if the difference of any joint is greater than the preset threshold, determine that the robot has collided.
To ensure detection accuracy, it is specified that as long as the difference between the measured torque and the predicted torque of even one joint exceeds the preset threshold, the robot is determined to have collided.
Step 3: if the differences of all joints are not greater than the preset threshold, determine that the robot has not collided.
Only when the differences between the measured torques and the predicted torques of all joints are not greater than the preset threshold is the robot determined not to have collided.
In addition, to avoid hurting people or damaging surrounding objects, after the robot is determined to have collided, it can be controlled to stop moving immediately or to enter a compliant mode. Compared with the related art, this embodiment of the present invention predicts the torque of each joint of the robot by introducing a deep network model and judges whether the robot has collided according to the difference between the measured torque and the predicted torque of each joint. Relevant experimental data show that the deep network model in this embodiment can accurately learn the data features of the inverse dynamics and achieve very good verification results with very few computing resources; these results show that collision detection based on the deep network model is more accurate than collision detection based on a mechanical model.
A second embodiment of the present invention relates to a robot collision detection apparatus, which can be used to perform the robot collision detection method in the above method embodiment. As shown in Fig. 6, the robot collision detection apparatus includes:
a collection module 301 configured to collect kinematic parameters and measured torques of each joint of a robot;
a prediction module 302 configured to input the kinematic parameters into a pre-trained deep network model to obtain the predicted torque of each joint;
a judging module 303 configured to judge, according to the difference between the measured torque and the predicted torque of each joint, whether the robot has collided.
The deep network model is built on an encoding network-decoding network model framework.
The above robot collision detection apparatus further includes:
an encoding network construction module configured to create an input layer and multiple first hidden layers in one-to-one correspondence with the joints of the robot, each first hidden layer being used to receive from the input layer the first input data of the kinematic parameters of the corresponding joint and to generate first output data based on the first input data; to sort the first hidden layers in a first order to form a first sequence; and, for every two adjacent first hidden layers in the first sequence, to superimpose the first input data and first output data of the earlier first hidden layer onto the first input data of the later first hidden layer as the updated first input data of the later first hidden layer;
wherein the first output data of each first hidden layer other than the first in the first sequence is generated from the updated first input data of that hidden layer, and the first output data of the last first hidden layer in the first sequence is the output data of the encoding network.
The above robot collision detection apparatus further includes:
a decoding network construction module configured to create an output layer and multiple second hidden layers in one-to-one correspondence with the joints of the robot, each second hidden layer being used to receive the output data of the encoding network as the second input data of that second hidden layer and to output, through the output layer, the second output data generated from the second input data, obtaining the predicted torque of the corresponding joint; to sort the second hidden layers in a second order to form a second sequence; and, for every two adjacent second hidden layers in the second sequence, to superimpose the second output data of the earlier second hidden layer onto the second input data of the later second hidden layer as the updated second input data of the later second hidden layer;
wherein the second output data of each second hidden layer other than the first in the second sequence is generated from the updated second input data of that hidden layer.
In terms of the correspondence with the joints of the robot, the second order is the reverse of the first order.
The kinematic parameters include at least the position, velocity and acceleration of the joint.
The judging module 303 is configured to judge whether the difference between the measured torque and the predicted torque of each joint is greater than a preset threshold; if the difference of any joint is greater than the preset threshold, determine that the robot has collided; and if the differences of all joints are not greater than the preset threshold, determine that the robot has not collided.
The above robot collision detection apparatus further includes a control module configured to control the robot to stop moving or to enter a compliant mode after the judging module determines that the robot has collided.
The robot is a serial robot.
Compared with the related art, this embodiment of the present invention predicts the torque of each joint of the robot by introducing a deep network model and judges whether the robot has collided according to the difference between the measured torque and the predicted torque of each joint, which achieves a good collision detection effect.
A third embodiment of the present invention relates to an electronic device, as shown in Fig. 7, including at least one processor 402 and a memory 401 communicatively connected to the at least one processor 402; the memory 401 stores instructions executable by the at least one processor 402, and the instructions are executed by the at least one processor 402 so that the at least one processor 402 can perform any one of the above method embodiments.
The memory 401 and the processor 402 are connected by a bus; the bus may include any number of interconnected buses and bridges and connects one or more processors 402 and the various circuits of the memory 401 together. The bus may also connect various other circuits, such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be one element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices over a transmission medium. Data processed by the processor 402 is transmitted over a wireless medium through an antenna; the antenna also receives data and transmits it to the processor 402.
The processor 402 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management and other control functions, while the memory 401 may be used to store data used by the processor 402 when performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program which, when executed by a processor, implements any one of the above method embodiments.
A fifth embodiment of the present invention relates to a computer program which, when executed by a processor, implements any one of the above method embodiments.
That is, those skilled in the art will understand that all or part of the steps of the methods in the above embodiments can be completed by instructing relevant hardware through a program; the program is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of this application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
Those of ordinary skill in the art will understand that the above embodiments are specific examples of implementing the present invention, and that in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present invention.

Claims (13)

  1. A robot collision detection method, characterized by comprising:
    collecting kinematic parameters and measured torques of each joint of a robot;
    inputting the kinematic parameters into a pre-trained deep network model to obtain a predicted torque of each joint;
    judging, according to a difference between the measured torque and the predicted torque of each joint, whether the robot has collided.
  2. The method according to claim 1, characterized in that the deep network model is built on an encoding network-decoding network model framework.
  3. The method according to claim 2, characterized in that the process of constructing the encoding network comprises:
    creating an input layer, and a plurality of first hidden layers in one-to-one correspondence with the joints of the robot, each first hidden layer being used to receive from the input layer first input data of the kinematic parameters of the corresponding joint and to generate first output data based on the first input data;
    sorting the first hidden layers in a first order to form a first sequence;
    for every two adjacent first hidden layers in the first sequence, superimposing the first input data and the first output data of the earlier first hidden layer onto the first input data of the later first hidden layer as updated first input data of the later first hidden layer;
    wherein the first output data of each first hidden layer other than the first in the first sequence is generated based on the updated first input data of that first hidden layer, and the first output data of the last first hidden layer in the first sequence is output data of the encoding network.
  4. The method according to claim 3, characterized in that the process of constructing the decoding network comprises:
    creating an output layer, and a plurality of second hidden layers in one-to-one correspondence with the joints of the robot, each second hidden layer being used to receive the output data of the encoding network as second input data of that second hidden layer and to output, through the output layer, second output data generated based on the second input data, obtaining the predicted torque of the corresponding joint;
    sorting the second hidden layers in a second order to form a second sequence;
    for every two adjacent second hidden layers in the second sequence, superimposing the second output data of the earlier second hidden layer onto the second input data of the later second hidden layer as updated second input data of the later second hidden layer;
    wherein the second output data of each second hidden layer other than the first in the second sequence is generated based on the updated second input data of that second hidden layer.
  5. The method according to claim 4, characterized in that, in terms of the correspondence with the joints of the robot, the second order is the reverse of the first order.
  6. The method according to any one of claims 1-5, characterized in that the kinematic parameters comprise at least: position, velocity and acceleration of a joint.
  7. The method according to any one of claims 1-5, characterized in that the judging, according to the difference between the measured torque and the predicted torque of each joint, whether the robot has collided comprises:
    judging whether the difference between the measured torque and the predicted torque of each joint is greater than a preset threshold;
    if the difference of any joint is greater than the preset threshold, determining that the robot has collided;
    if the differences of all joints are not greater than the preset threshold, determining that the robot has not collided.
  8. The method according to claim 7, characterized by further comprising, after the determining that the robot has collided:
    controlling the robot to stop moving or to enter a compliant mode.
  9. The method according to any one of claims 1-8, characterized in that the robot is a serial robot.
  10. A robot collision detection apparatus, characterized by comprising:
    a collection module configured to collect kinematic parameters and measured torques of each joint of a robot;
    a prediction module configured to input the kinematic parameters into a pre-trained deep network model to obtain a predicted torque of each joint;
    a judging module configured to judge, according to a difference between the measured torque and the predicted torque of each joint, whether the robot has collided.
  11. An electronic device, characterized by comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the robot collision detection method according to any one of claims 1 to 9.
  12. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the robot collision detection method according to any one of claims 1 to 9.
  13. A computer program, characterized in that the computer program, when executed by a processor, implements the robot collision detection method according to any one of claims 1 to 9.
PCT/CN2022/100144 2021-12-31 2022-06-21 Robot collision detection method and apparatus, electronic device, and storage medium WO2023123911A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111674606.XA CN114310895B (zh) 2021-12-31 2021-12-31 Robot collision detection method and apparatus, electronic device, and storage medium
CN202111674606.X 2021-12-31

Publications (1)

Publication Number Publication Date
WO2023123911A1 true WO2023123911A1 (zh) 2023-07-06

Family

ID=81020158

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/100144 WO2023123911A1 (zh) 2021-12-31 2022-06-21 Robot collision detection method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN114310895B (zh)
WO (1) WO2023123911A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114310895B (zh) * 2021-12-31 2022-12-06 达闼科技(北京)有限公司 Robot collision detection method and apparatus, electronic device, and storage medium
CN115389077B (zh) * 2022-08-26 2024-04-12 法奥意威(苏州)机器人系统有限公司 Collision detection method and apparatus, control device, and readable storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103192413A (zh) * 2012-01-06 2013-07-10 沈阳新松机器人自动化股份有限公司 Sensorless robot collision detection protection device and method
JP2014018941A (ja) * 2012-07-23 2014-02-03 Daihen Corp Control device and control method
CN104985598A (zh) * 2015-06-24 2015-10-21 南京埃斯顿机器人工程有限公司 Industrial robot collision detection method
CN110480678A (zh) * 2019-07-19 2019-11-22 南京埃斯顿机器人工程有限公司 Industrial robot collision detection method
US20200338735A1 * 2019-04-28 2020-10-29 Xi'an Jiaotong University Sensorless Collision Detection Method Of Robotic Arm Based On Motor Current
CN111872936A (zh) * 2020-07-17 2020-11-03 清华大学 Neural-network-based robot collision detection system and method
CN112247992A (zh) * 2020-11-02 2021-01-22 中国科学院深圳先进技术研究院 Robot feedforward torque compensation method
WO2021086091A1 (ko) * 2019-10-30 2021-05-06 주식회사 뉴로메카 Method and system for detecting collision of a robot manipulator using an artificial neural network
CN113021340A (zh) * 2021-03-17 2021-06-25 华中科技大学鄂州工业技术研究院 Robot control method, apparatus, device and computer-readable storage medium
CN114310895A (zh) * 2021-12-31 2022-04-12 达闼科技(北京)有限公司 Robot collision detection method and apparatus, electronic device, and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6586079B2 (ja) * 2014-03-28 2019-10-02 ソニー株式会社 Arm device and program
CN107253196B (zh) * 2017-08-01 2021-05-04 中科新松有限公司 Manipulator collision detection method, apparatus, device and storage medium
CN111712356A (zh) * 2018-02-23 2020-09-25 Abb瑞士股份有限公司 Robot system and method of operation
CN108582070A (zh) * 2018-04-17 2018-09-28 上海达野智能科技有限公司 Robot collision detection system and method, storage medium, and operating system
CN109079856A (zh) * 2018-10-30 2018-12-25 珠海格力智能装备有限公司 Robot collision detection method and apparatus
CN109732599B (zh) * 2018-12-29 2020-11-03 深圳市越疆科技有限公司 Robot collision detection method and apparatus, storage medium, and robot
CN112757345A (zh) * 2021-01-27 2021-05-07 上海节卡机器人科技有限公司 Collaborative robot collision detection method and apparatus, medium, and electronic device


Also Published As

Publication number Publication date
CN114310895A (zh) 2022-04-12
CN114310895B (zh) 2022-12-06

Similar Documents

Publication Publication Date Title
US11331800B2 (en) Adaptive predictor apparatus and methods
WO2023123911A1 (zh) 机器人碰撞检测方法、装置、电子设备及存储介质
CN108873768B (zh) 任务执行系统及方法、学习装置及方法、以及记录介质
US20170285584A1 (en) Machine learning device that performs learning using simulation result, machine system, manufacturing system, and machine learning method
KR102139513B1 (ko) 인공지능 vils 기반의 자율주행 제어 장치 및 방법
CN110516389B (zh) 行为控制策略的学习方法、装置、设备及存储介质
CN112847336B (zh) 动作学习方法、装置、存储介质及电子设备
JP6911798B2 (ja) ロボットの動作制御装置
CN109940619A (zh) 一种轨迹规划方法、电子设备及存储介质
US11971709B2 (en) Learning device, control device, learning method, and recording medium
CN111204476A (zh) 一种基于强化学习的视触融合精细操作方法
US20220339787A1 (en) Carrying out an application using at least one robot
JP2003271975A (ja) 平面抽出方法、その装置、そのプログラム、その記録媒体及び平面抽出装置搭載型ロボット装置
US11203116B2 (en) System and method for predicting robotic tasks with deep learning
CN113874844A (zh) 情景感知装置的仿真方法、装置和系统
CN113111678B (zh) 一种用户的肢体节点的位置确定方法、装置、介质及系统
CN116968024A (zh) 获取用于生成形封闭抓取位姿的控制策略的方法、计算设备和介质
KR20230093191A (ko) 오차 종류별 관절 인식 방법, 서버
CN117295589B (zh) 在训练和细化机器人控制策略中使用模仿学习的系统和方法
US20220148119A1 (en) Computer-readable recording medium storing operation control program, operation control method, and operation control apparatus
Chen et al. Robot control in human environment using deep reinforcement learning and convolutional neural network
Doshi et al. Collision detection in legged locomotion using supervised learning
CN114800525B (zh) 机器人碰撞检测方法、系统、计算机及可读存储介质
Konrad et al. GP-net: Flexible Viewpoint Grasp Proposal
Crnokić et al. Fusion of infrared sensors and camera for mobile robot navigation system-simulation scenario

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22913200

Country of ref document: EP

Kind code of ref document: A1