CN111906778B - Robot safety control method and device based on multiple perceptions - Google Patents


Info

Publication number
CN111906778B
Authority
CN
China
Prior art keywords
robot
obstacle
collision
motion
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010590912.4A
Other languages
Chinese (zh)
Other versions
CN111906778A (en
Inventor
郎需林
刘培超
刘主福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yuejiang Technology Co Ltd
Original Assignee
Shenzhen Yuejiang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yuejiang Technology Co Ltd filed Critical Shenzhen Yuejiang Technology Co Ltd
Priority to CN202010590912.4A priority Critical patent/CN111906778B/en
Publication of CN111906778A publication Critical patent/CN111906778A/en
Application granted granted Critical
Publication of CN111906778B publication Critical patent/CN111906778B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1674Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676Avoiding collision or forbidden zones
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/06Safety devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a robot safety control method based on multiple perceptions, which comprises the following steps: when obstacle movement information sent by a 3D vision device is received, generating a first control strategy for the robot to execute according to the obstacle movement information and the robot movement information so as to avoid the obstacle; when proximity information sent by the proximity electronic skin is received, generating a second control strategy for the robot to execute according to an artificial potential field method so as to avoid an obstacle; and when the force feedback information sent by the tactile electronic skin is received, generating a third control strategy for the robot to execute according to the force feedback information and the current feedback information of the mechanical arm so as to reduce collision force between the robot and the obstacle. The safety control method of the robot can effectively improve the safety of the robot and avoid the damage to human bodies. In addition, the invention also discloses a robot safety control device based on multiple perceptions.

Description

Robot safety control method and device based on multiple perceptions
Technical Field
The invention relates to the field of robots, in particular to a robot safety control method and device based on multiple perceptions.
Background
Robots are products that integrate control theory, mechatronics, computer science, materials science and bionics. They can accept human commands, run pre-programmed computer programs, or act according to principles established by artificial intelligence techniques, in order to assist or replace human work.
In actual use, a person may be required to work in conjunction with a robot to accomplish a task. When a person and a robot work cooperatively, the robot must be ensured to have sufficient safety, either by avoiding collision between the robot and the human body, or by taking corresponding protective measures when a collision is detected, so that personal safety is guaranteed.
To this end, existing robots employ current-loop-based contact collision detection to trigger a stop. However, this detection method relies on the current change caused by contact between the robot and the person, so by the time the stop is triggered the collision has already occurred, and in some situations the collision injures the human body. Moreover, once a collision is detected, the whole robot system must undergo a safe shutdown, and restarting after shutdown costs considerable time and labor.
Disclosure of Invention
The invention mainly aims to provide a robot safety control method based on multiple perceptions, which aims to solve the safety problem existing in the existing robots.
In order to achieve the above object, the present invention provides a robot safety control method based on multiple perceptions, the method comprising: when obstacle movement information sent by a 3D vision device is received, generating a first control strategy for the robot to execute according to the obstacle movement information and the robot movement information so as to avoid the obstacle, wherein the movement information comprises a movement speed and a movement track; when proximity information sent by the proximity electronic skin is received, generating a second control strategy for the robot to execute according to an artificial potential field method so as to avoid the obstacle; and when force feedback information sent by the tactile electronic skin is received, generating a third control strategy for the robot to execute according to the force feedback information and the current feedback information of the mechanical arm so as to reduce the collision force between the robot and the obstacle.
Preferably, when receiving the obstacle movement information sent by the 3D vision device, before the step of generating the first control strategy for the robot to execute according to the obstacle movement information and the robot movement information, the method further comprises: real-time tracking is carried out on the obstacle through a 3D vision device so as to acquire real-time motion information of the obstacle, and a first motion model is built according to the motion information; acquiring real-time motion information of the robot through a controller of the robot, and establishing a second motion model according to the motion information; according to the first motion model and the second motion model, collision exercise is carried out on the robot and the obstacle, and a collision exercise result is obtained; and determining whether to generate the first control strategy according to the collision exercise result.
Preferably, the generating the first control strategy for the robot to execute according to the obstacle movement information and the robot movement information includes: generating an autonomous obstacle avoidance path during the collision exercise, and combining the autonomous obstacle avoidance path with the current path of the robot; and if the autonomous obstacle avoidance path cannot be generated, controlling the robot to run at a reduced speed.
Preferably, the generating the third control strategy for the robot to execute according to the force feedback information and the current feedback information of the mechanical arm includes: establishing an impedance control model, wherein the impedance control model comprises collision force, collision parameters, preset positions, preset speeds and preset deceleration during collision; according to a pre-established collision test, acquiring collision parameters during collision, wherein the collision parameters comprise stiffness parameters, damping parameters and mass matrix parameters; and inputting collision force fed back by the tactile electronic skin into the impedance control model to obtain the preset position of the robot.
Preferably, the impedance control model is built according to the following formula: F = K*X + B*X' + M*X''; wherein F is the collision force; X, X' and X'' are respectively the preset position, the preset speed and the preset deceleration of the robot; K, B and M are respectively the stiffness parameter, the damping parameter and the mass matrix parameter of the obstacle. The deceleration is calculated according to the following formula: X'' = (F - K*X - B*X')/M; and the preset position of the robot is obtained by integrating the deceleration.
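As an illustration only (not the patent's implementation), the impedance relation above can be integrated numerically; the sketch below assumes scalar K, B and M (the patent allows a mass matrix) and simple Euler integration, with all parameter values chosen arbitrarily:

```python
# Hedged sketch of the scalar impedance model F = K*X + B*X' + M*X''.
# Parameter values are illustrative assumptions, not from the patent.

def impedance_step(F, x, v, K, B, M, dt):
    """One integration step: solve for the deceleration X'' and
    integrate twice to update the preset speed and preset position."""
    a = (F - K * x - B * v) / M  # X'' = (F - K*X - B*X') / M
    v = v + a * dt               # first integration  -> preset speed X'
    x = x + v * dt               # second integration -> preset position X
    return x, v, a

# A constant 10 N collision force fed back by the tactile electronic skin
# drives the joint backward, spring-like, toward the rest point F/K = 0.02 m.
x, v = 0.0, 0.0
for _ in range(100):             # 100 steps of 1 ms
    x, v, _ = impedance_step(F=10.0, x=x, v=v, K=500.0, B=50.0, M=2.0, dt=0.001)
# x is now the commanded backward displacement that buffers the collision
```

Updating the velocity before the position (semi-implicit Euler) keeps this spring-damper integration stable at small time steps.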
The invention also provides a robot safety control device based on multiple perception, which comprises: the first control module is used for generating a first control strategy for the robot to execute according to the obstacle motion information and the robot motion information when the obstacle motion information sent by the 3D vision device is received so as to avoid the obstacle, wherein the motion information comprises a motion speed and a motion track; the second control module is used for generating a second control strategy for the robot to execute according to an artificial potential field method when proximity information sent by the proximity electronic skin is received so as to avoid the obstacle; and the third control module is used for generating a third control strategy for the robot to execute according to the force feedback information and the current feedback information of the mechanical arm when receiving the force feedback information sent by the tactile electronic skin so as to reduce the collision force between the robot and the obstacle.
Preferably, the multiple perception based robot safety control device further comprises: the first model building module is used for tracking the obstacle in real time through the 3D vision device so as to acquire real-time motion information of the obstacle, and building a first motion model according to the motion information; the second model building module is used for acquiring real-time motion information of the robot through a controller of the robot and building a second motion model according to the motion information; the collision exercise module is used for performing collision exercise on the robot and the obstacle according to the first motion model and the second motion model, and acquiring a collision exercise result; and the judging module is used for determining whether to generate the first control strategy according to the collision exercise result.
Preferably, the first control module includes: the path generation unit is used for generating an autonomous obstacle avoidance path in the collision exercise process and combining the autonomous obstacle avoidance path with the current path of the robot; and the deceleration control unit is used for controlling the robot to run at a reduced speed when the autonomous obstacle avoidance path cannot be synthesized.
Preferably, the third control module includes: a third model building unit for building an impedance control model, wherein the impedance control model comprises a collision force, a collision parameter, a preset position, a preset speed and a preset deceleration during collision; the device comprises a collision parameter acquisition unit, a collision parameter analysis unit and a collision control unit, wherein the collision parameter acquisition unit is used for acquiring collision parameters during collision according to a pre-established collision test, and the collision parameters comprise a rigidity parameter, a damping parameter and a quality matrix parameter; and the preset position acquisition unit is used for inputting the collision force fed back by the tactile electronic skin into the impedance control model so as to acquire the preset position of the robot.
Preferably, the impedance control model is built according to the following formula: F = K*X + B*X' + M*X''; wherein F is the collision force; X, X' and X'' are respectively the preset position, the preset speed and the preset deceleration of the robot; K, B and M are respectively the stiffness parameter, the damping parameter and the mass matrix parameter of the obstacle. The deceleration is calculated according to the following formula: X'' = (F - K*X - B*X')/M; and the preset position of the robot is obtained by integrating the deceleration.
Compared with the prior art, the embodiments of the invention have the following beneficial effects. Firstly, the obstacle is tracked in real time by the 3D vision device to acquire its real-time motion information, which is combined with the real-time motion information of the robot to predict, from the current motion speeds and motion tracks, whether the robot will collide with the obstacle; if a collision is predicted, a first control strategy is generated for the robot to execute according to the real-time motion information of the robot and the obstacle, so that the robot avoids the obstacle and no collision occurs. Secondly, when the proximity signal is received, the obstacle is already relatively close to the robot and lies in the detection blind area of the 3D vision device, so its motion information can no longer be detected; a second control strategy is therefore generated by the artificial potential field method for the robot to execute, so that the robot avoids the obstacle and no collision occurs. Finally, when the tactile signal is received, the obstacle and the robot have already collided, and a third control strategy is generated for the robot to execute according to the force feedback information of the tactile electronic skin and the current feedback information of the mechanical arm, so that the collision force between the robot and the obstacle is reduced, a buffering effect on the obstacle is produced, and safety control of the robot is thereby realized.
According to the invention, a hierarchical safety control strategy is executed on the robot through 3D vision, electronic skin proximity and electronic skin touch, and pre-collision active obstacle avoidance and contact active buffering are realized, so that the purposes of 3D vision active obstacle avoidance, proximity emergency obstacle avoidance and collision buffering are achieved.
Drawings
FIG. 1 is a flow chart of a first embodiment of a multiple awareness based robot safety control method of the present invention;
FIG. 2 is a control schematic diagram of a robot safety control method based on multiple perceptions of the present invention;
FIG. 3 is a flow chart of a second embodiment of a robot safety control method based on multiple awareness according to the present invention;
FIG. 4 is a flow chart of a third embodiment of a robot safety control method based on multiple awareness according to the present invention;
FIG. 5 is a flowchart of a fifth embodiment of a robot safety control method based on multiple awareness according to the present invention;
FIG. 6 is a schematic diagram of a portion of the robot and operator that affects the protective spacing provided by the present invention;
fig. 7 is a functional block diagram of the safety control device for the robot based on multiple sensing according to the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below are exemplary and intended to illustrate the present invention and should not be construed as limiting the invention, and all other embodiments, based on the embodiments of the present invention, which may be obtained by persons of ordinary skill in the art without inventive effort, are within the scope of the present invention.
The invention provides a robot safety control method based on multiple perceptions, in an embodiment, referring to fig. 1, the robot safety control method comprises the following steps:
step S10, when obstacle movement information sent by a 3D vision device is received, generating a first control strategy for the robot to execute according to the obstacle movement information and the robot movement information so as to avoid the obstacle, wherein the movement information comprises a movement speed and a movement track;
in this embodiment, the robot is exemplified by an industrial robot, and the obstacle is exemplified by a human body, and the robot safety control method according to the present invention will be described. It should be noted that the robot is exemplified by an industrial robot and the obstacle is exemplified by a human body, which is merely exemplary and not limiting. In addition, the 3D vision device according to the present embodiment may be disposed on the robot, or may be disposed in an area where the robot is located, so as to obtain three-dimensional vision information of the area where the robot is located, including three-dimensional information of industrial robots, people, and other objects, through the 3D vision device.
When detecting that a moving body appears in the area where the robot is located, acquiring the motion information of the moving body, and combining the motion information of the moving body and the motion information of the robot to plan a path capable of avoiding the moving body so as to enable the robot to travel according to the path. It should be noted that, on the premise of knowing the motion trail of the moving body and the robot, the planning of the obstacle avoidance path can be easily realized by combining the environmental information of the area where the robot is located, namely: on the premise of avoiding the motion trail of the moving human body, other obstacles in the environment are avoided, so that a path which can avoid the moving human body and cannot collide with other obstacles in the environment is generated.
As for the human motion information, the 3D vision device can track the moving human body in real time to obtain motion information including the motion speed and the motion track; of course, the motion information may also be obtained in other manners. The robot motion information can be obtained through the robot controller, which can monitor the traveling speed and the traveling route of the robot. More specifically, the traveling speed of the robot is monitored in real time by a speed sensor, and the robot travels along a previously planned route, so the controller can obtain both the speed data fed back by the speed sensor and the previously planned motion path.
Step S20, when proximity information sent by the proximity electronic skin is received, generating a second control strategy for the robot to execute according to an artificial potential field method so as to avoid an obstacle;
the basic idea of path planning based on the artificial potential field method is to construct an artificial potential field in the area where the robot is located, wherein the potential field comprises a repulsive pole and an attractive pole, the area where the robot is not expected to enter is defined as the repulsive pole, and the area where the robot is expected to enter is defined as the attractive pole, so that the robot in the potential field is under the combined action of the target pose gravitational field and the repulsive field around the obstacle and advances towards the target.
When the distance between the moving human body and the robot is relatively close, the obstacle lies in the detection blind area of the 3D vision device and its movement information can no longer be detected, so the robot actively avoids the moving human body by means of the artificial potential field method. More specifically, an artificial potential field is constructed in the area where the robot is located, the movement area of the moving human body is defined as a repulsive pole in the potential field, and the area outside that movement area is defined as an attractive pole, so that under the combined action of the attractive and repulsive poles the robot moves toward the area outside the movement area of the moving human body, thereby avoiding the moving human body and preventing a collision between them.
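A minimal two-dimensional sketch of the artificial potential field idea described above (the potential functions, gains and geometry are illustrative assumptions, not taken from the patent):

```python
import math

def apf_step(pos, goal, obstacle, k_att=1.0, k_rep=100.0, rho0=1.0, step=0.01):
    """Take one small step along the net force of an attractive pole (the
    goal) and a repulsive pole (the obstacle's movement area)."""
    # Attractive force pulls the robot toward the goal pose.
    f = [k_att * (g - p) for p, g in zip(pos, goal)]
    # Repulsive force acts only inside the influence radius rho0.
    d = math.dist(pos, obstacle)
    if 0.0 < d < rho0:
        gain = k_rep * (1.0 / d - 1.0 / rho0) / d ** 3
        f = [fi + gain * (p - o) for fi, p, o in zip(f, pos, obstacle)]
    norm = math.hypot(*f) or 1.0  # normalize so each step has a fixed length
    return [p + step * fi / norm for p, fi in zip(pos, f)]

# The robot is steered from (0, 0) to (2, 0) while a person stands near (1, 0.1);
# the repulsive pole deflects the path around the person's position.
pos = [0.0, 0.0]
for _ in range(500):
    pos = apf_step(pos, goal=[2.0, 0.0], obstacle=[1.0, 0.1])
```

With a fixed step length the robot skirts the boundary of the obstacle's influence radius rather than entering it, which is the emergency-avoidance behavior the proximity electronic skin is meant to trigger.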
And step S30, when receiving the force feedback information sent by the tactile electronic skin, generating a third control strategy for the robot to execute according to the force feedback information and the current feedback information of the mechanical arm so as to reduce the collision force between the robot and the obstacle.
When the force feedback information sent by the tactile electronic skin is received, the robot has collided with the moving human body. At this moment, the backward displacement of the robot relative to the moving human body can be calculated from the collision force reported by the tactile electronic skin, so as to buffer the moving human body and reduce the injury caused by the collision. It will be appreciated that when a moving human body collides with the robot, controlling the robot to move backward relative to the body buffers the impact, similar to the cushioning of a spring.
Referring to fig. 2, the invention performs a hierarchical safety control strategy on a robot through 3D vision, electronic skin proximity and electronic skin touch to realize pre-collision active obstacle avoidance and contact active buffering, thereby achieving the purposes of 3D vision active obstacle avoidance, proximity emergency obstacle avoidance and collision buffering.
In an embodiment, referring to fig. 3, when receiving the obstacle movement information sent by the 3D vision device, before the step of generating the first control strategy for the robot to execute according to the obstacle movement information and the robot movement information, the method further includes:
step S40, real-time tracking is carried out on the obstacle through the 3D vision device so as to acquire real-time motion information of the obstacle, and a first motion model is established according to the motion information;
step S50, acquiring real-time motion information of the robot through a controller of the robot, and establishing a second motion model according to the motion information;
step S60, performing collision exercise on the robot and the obstacle according to the first motion model and the second motion model, and obtaining a collision exercise result;
step S70, determining whether to generate a first control strategy according to the result of collision exercise.
In this embodiment, the motion information of the moving human body and of the robot is acquired through the 3D vision device and the robot controller respectively; a first motion model is built from the motion information of the moving human body, a second motion model is built from the motion information of the robot, and finally a scene-state exercise model is built from the first and second motion models. In the scene-state exercise model, the motion of the robot and of the moving human body can be simulated, and from the simulated motion it can be judged whether the robot, if it keeps running in its current state, will collide with the moving human body.
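The collision exercise described above can be sketched as rolling both motion models forward over a short horizon and checking the predicted clearance; the constant-velocity models, the 2 s horizon and the 0.3 m safety radius below are assumptions for illustration, not values from the patent:

```python
import math

def predict_collision(robot_pos, robot_vel, obst_pos, obst_vel,
                      horizon=2.0, dt=0.05, safe_radius=0.3):
    """Simulate the first motion model (obstacle) and the second motion
    model (robot) and return the first time they come within the safety
    radius, or None if no collision is predicted over the horizon."""
    for i in range(int(horizon / dt) + 1):
        t = i * dt
        r = [p + v * t for p, v in zip(robot_pos, robot_vel)]
        o = [p + v * t for p, v in zip(obst_pos, obst_vel)]
        if math.dist(r, o) < safe_radius:
            return t  # collision predicted: generate the first control strategy
    return None       # no collision predicted: keep the current path

# Robot moving +x at 0.5 m/s, person 2 m away approaching head-on at 1.6 m/s.
t_hit = predict_collision([0.0, 0.0], [0.5, 0.0], [2.0, 0.0], [-1.6, 0.0])
```

The return value plays the role of the collision exercise result: a time means the first control strategy should be generated, None means the robot may continue on its planned path.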
When the industrial robot works, a safety distance can be set based on the 3D vision device. When an overhaul or maintenance person, i.e. a moving human body in the area where the robot is located, enters the safety work area, the person can be warned and the robot can be controlled to avoid the person. That is, outside the set safety distance the robot and the moving human body cannot collide, and there is no need to control the robot to avoid the person or to run at a reduced speed. However, since the distance between the moving human body and the robot varies, the safety distance set based on the 3D vision device varies dynamically; for specific values, reference can be made to the protective spacing in the national standard.
More specifically, the protective spacing mentioned above can be obtained according to the following formula:
S_P(t_0) = S_h + S_r + S_s + C + Z_d + Z_r    (1)
wherein S_P(t_0) is the protective spacing at time t_0; t_0 is the present or current time; S_h is the contribution of the obstacle's change of position to the protective spacing; S_r is the contribution of the robot system's reaction time to the protective spacing; S_s is the contribution of the robot system's stopping distance to the protective spacing; C is the intrusion distance defined by ISO 13855, i.e. the distance a part of the body can intrude into the sensing field before being detected; Z_d is the position uncertainty of the obstacle in the collaborative workspace, caused by the measurement error of the sensing device; and Z_r is the position uncertainty of the robot system, caused by the accuracy of the robot position measurement system. S_P(t_0) allows the protective spacing to be calculated dynamically as the speed of the application changes, and also allows a fixed protective spacing to be calculated from worst-case values.
Formula (1) applies to all combinations of objects in the collaborative workspace, such as the operator and the moving robot parts. For example, the robot part currently closest to the operator may be moving away from the operator while another part of the robot is moving closer.
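Since formula (1) is a plain sum of the contributions just defined, its evaluation is direct; the numeric values below are illustrative assumptions only, not figures from the patent or any standard:

```python
def protective_spacing(S_h, S_r, S_s, C, Z_d, Z_r):
    """Formula (1): S_P(t0) = S_h + S_r + S_s + C + Z_d + Z_r (all in metres)."""
    return S_h + S_r + S_s + C + Z_d + Z_r

# Illustrative contributions: operator motion, robot reaction, stopping
# distance, intrusion distance, and the two position uncertainties.
S_p = protective_spacing(S_h=0.8, S_r=0.1, S_s=0.3, C=0.2, Z_d=0.05, Z_r=0.05)
# S_p is the minimum spacing the 3D vision device must enforce at time t0
```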
The contribution S_h of the operator's change of position to the protective spacing is expressed as formula (2):
S_h = ∫_{t_0}^{t_0 + T_r + T_s} V_h(t) dt    (2)
wherein T_r is the reaction time of the robot system, including the time to detect the operator's position, process the signal and trigger the stop, but excluding the robot stopping time T_s; the robot stopping time runs from the issuing of the stop command until the robot has stopped; T_s is not a constant but a function of the robot configuration, planned motion, speed, end effector and load; V_h is the directed speed of the operator toward the robot's direction of motion in the collaborative workspace, which may be positive or negative depending on whether the spacing is increasing or decreasing; and t is the integration variable in formulas (2), (4) and (6).
S_h represents the contribution of the person's motion, over the period from the current time until the robot has stopped, to the spacing. Here V_h is a function of time and may vary as the person's speed or direction changes; when designing the system, the value of V_h that minimizes the spacing should be considered. If the person's speed is not monitored, the system design should assume a speed of 1.6 m/s in the direction that minimizes the spacing. According to ISO 13855 and clause 4.4.2.3 of IEC/TS 62046:2008, a value of V_h other than 1.6 m/s may be used, depending on the risk assessment.
Using the estimated speed of the person (1.6 m/s), a constant value of S_h can be estimated from formula (2), giving formula (3):
S_h = 1.6 * (T_r + T_s)    (3)
Similarly, the contribution S_r of the robot system's reaction time to the protective spacing is expressed as formula (4):
S_r = ∫_{t_0}^{t_0 + T_r} V_r(t) dt    (4)
wherein V_r is the directed speed of the robot toward the operator in the collaborative workspace, which may be positive or negative depending on whether the spacing is increasing or decreasing. S_r represents the contribution to the spacing of the robot's motion from the moment the person enters the sensing field until the control system triggers the stop. Here V_r is a function of time and may vary as the robot's speed or direction changes; when designing the system, the value of V_r that minimizes the spacing should be considered.
If the robot speed is not monitored, the system design should assume V_r to be the robot's maximum speed. If the robot speed is monitored, the current speed of the robot may be used, but the robot's acceleration capability should be taken into account so as to minimize the spacing. If a safety-rated speed limit is in effect, it may be used in the system design for the robot parts to which it applies. If the safety-rated speed limit monitors only the Cartesian speed of the robot tool end point and not the other parts, those parts may still pose a hazard to the operator; for this reason it may also be necessary to monitor the robot joint speeds with a safety-rated speed limit.
A constant value of S_r is given by formula (5):
S_r = V_r * T_r    (5)
the portion of the robot stop device that affects the protective pitch can be expressed as equation (6):
S_s = ∫_{t_0 + T_r}^{t_0 + T_r + T_s} V_s(t) dt    (6)
wherein V_s is the speed of the robot during stopping, i.e. during the process from the triggering of the stop command until the robot has stopped. S_s represents the contribution of the robot's motion during stopping to the protective spacing. Here V_s is a function of time and may vary as the robot's speed or direction changes; when designing the system, the value of V_s that minimizes the spacing should be considered.
If the robot speed is not monitored, the system design should take the integral to be the robot's stopping distance, applied in the direction that minimizes the spacing. If the robot speed is monitored, that speed can be used to calculate the robot's stopping distance, likewise applied in the direction that minimizes the spacing. S_s is preferably obtained from ISO 10218-1:2011.
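Under the worst-case constant assumptions above, formulas (3) and (5), together with a measured stopping distance S_s, give a fixed protective spacing via formula (1); all numeric values here are illustrative assumptions:

```python
# Worst-case constant evaluation of formula (1) using formulas (3) and (5).
T_r = 0.1   # reaction time of the robot system [s] (assumed)
T_s = 0.3   # robot stopping time [s] (assumed)
V_r = 1.0   # robot speed toward the operator [m/s] (assumed maximum)
S_s = 0.25  # stopping distance, e.g. from the robot datasheet [m] (assumed)
C, Z_d, Z_r = 0.2, 0.05, 0.05  # intrusion distance and uncertainties [m]

S_h = 1.6 * (T_r + T_s)                # formula (3): operator assumed at 1.6 m/s
S_r = V_r * T_r                        # formula (5): robot motion during reaction
S_p = S_h + S_r + S_s + C + Z_d + Z_r  # formula (1): fixed protective spacing
```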
The various contributions to the protective spacing are illustrated in fig. 6, which graphically represents the parts of the robot's and the operator's motion that affect the protective spacing. The speed V_h of the operator toward the robot is positive, while the speeds of the robot toward the operator (V_r, V_s) are negative.
In another embodiment, referring to fig. 4, the step of generating a first control strategy for the robot to execute according to the obstacle movement information and the robot movement information includes:
step S11, generating an autonomous obstacle avoidance path in the collision exercise process, and combining the autonomous obstacle avoidance path with the current path of the robot;
and step S12, if the autonomous obstacle avoidance path cannot be generated, controlling the robot to run at a reduced speed.
In the scene state exercise model, the motions of the robot and the moving human body are simulated, so that whether the robot will collide with the moving human body can be judged through the virtual model. If the robot, moving according to its current motion state, would collide with the moving human body, a new motion path is generated in the scene state exercise model so as to avoid the moving human body. Of course, obstacles other than the moving human body in the area where the robot is located are also taken into account, so that the robot is prevented from colliding with them as well.
If an autonomous obstacle avoidance path capable of avoiding the moving body cannot be generated in the scene state exercise model, the robot can be controlled to decelerate while simultaneously issuing an alarm to warn the moving body; after receiving the alarm signal, the moving body can turn back and move away from the area where the robot is located. Before the moving body turns back, the robot travels at a preset speed to ensure that it does not collide with the moving body.
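Steps S11 and S12 can be sketched as a small decision routine. This is only an illustration: `plan_avoidance_path`, `alarm`, the returned dictionaries, and the combination of paths by simple concatenation are hypothetical names and simplifications, not the patent's implementation, and a real controller would act on the robot hardware rather than return values.

```python
def first_control_strategy(plan_avoidance_path, current_path, slow_speed, alarm):
    """Step S11: try to synthesize an avoidance path in the exercise model
    and combine it with the current path (shown here as concatenation).
    Step S12: if no path exists, decelerate and raise the alarm."""
    avoidance = plan_avoidance_path()
    if avoidance is not None:
        return {"action": "follow_path", "path": current_path + avoidance}
    alarm()  # warn the moving body so it can turn back
    return {"action": "decelerate", "speed": slow_speed}
```

Usage: `first_control_strategy(lambda: None, waypoints, 0.05, sound_alarm)` would fall through to the deceleration branch when the planner finds no collision-free path.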
In yet another embodiment, referring to fig. 5, the step of generating a third control strategy for the robot to execute according to the force feedback information and the current feedback information of the mechanical arm includes:
step S31, an impedance control model is established, wherein the impedance control model comprises collision force, collision parameters, preset positions, preset speeds and preset deceleration during collision;
step S32, acquiring collision parameters during collision according to a pre-established collision test, wherein the collision parameters comprise stiffness parameters, damping parameters and mass matrix parameters;
step S33, the collision force fed back by the touch electronic skin is input to the impedance control model to obtain the preset position of the robot.
It should be noted that integrating the preset deceleration yields the preset speed, and integrating the preset speed yields the preset position, so the three variables are related to one another. After the robot collides with the moving body, in order to cushion the moving body, the robot can be controlled to move backwards by a preset distance relative to the moving body so as to reach the preset position. At the preset position, the impact injury produced when the moving body collides with the robot is minimized; the preset position is calculated from the collision force generated when the robot collides with the moving body.
It can be understood that at the moment the moving body collides with the robot, a collision force is generated. To prevent this force from continuing to impact the moving body, this embodiment controls the robot through the impedance control model to move to the preset position so as to cushion the moving body. In the established impedance control model, the collision parameters are obtained in advance through a collision test and are constant. The collision force can be obtained from the tactile electronic skin, or calculated from the current feedback information of the mechanical arm. The variables in the impedance control model are the preset position, the preset speed and the preset deceleration; therefore, by inputting the collision force generated during the collision into the impedance control model, the preset position of the robot can be obtained, and the robot can be controlled to move to it.
In yet another embodiment, the impedance control model proposed by the present invention is built up according to the following formula:
F=K*X+B*X’+M*X”;
wherein F is the collision force;
wherein X, X' and X'' are respectively the preset position, the preset speed and the preset deceleration of the robot;
k, B, M are respectively the rigidity parameter, damping parameter and mass matrix parameter of the obstacle;
the deceleration is calculated according to the following formula:
X”=(F-K*X-B*X’)/M;
and integrating the deceleration twice to obtain the preset position of the robot.
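The computation above — solving the impedance equation F = K·X + B·X' + M·X'' for the acceleration term and integrating to obtain the retreat position — can be sketched numerically. This is a semi-implicit Euler integration under assumed parameter values, not the patent's controller; `force_fn` stands in for the collision force fed back by the tactile electronic skin.

```python
def impedance_retreat(force_fn, K, B, M, dt=0.001, steps=2000):
    """Integrate X'' = (F - K*X - B*X') / M forward in time.
    X is the backward displacement of the contact point; once the
    collision force settles, X approaches the spring equilibrium F/K."""
    x, v = 0.0, 0.0
    for i in range(steps):
        f = force_fn(i * dt)          # measured collision force at time t
        a = (f - K * x - B * v) / M   # impedance equation solved for X''
        v += a * dt                   # integrate deceleration -> speed
        x += v * dt                   # integrate speed -> position
    return x
```

For a constant 10 N force with K = 100 N/m, the retreat position settles near 10/100 = 0.1 m, as expected from the spring term.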
Based on the above-mentioned robot safety control method based on multiple perceptions, the invention also provides a robot safety control device based on multiple perceptions, see fig. 7, the robot safety control device comprises:
the first control module 10 is configured to generate a first control strategy for the robot to execute according to the obstacle movement information and the robot movement information when the obstacle movement information sent by the 3D vision device is received, so as to avoid the obstacle, wherein the movement information includes a movement speed and a movement track;
a second control module 20, configured to generate a second control strategy for the robot to execute according to an artificial potential field method to avoid the obstacle when proximity information sent by the proximity electronic skin is received;
and the third control module 30 is configured to generate a third control strategy for the robot to execute according to the force feedback information and the current feedback information of the mechanical arm when receiving the force feedback information sent by the tactile electronic skin, so as to reduce the collision force between the robot and the obstacle.
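The artificial potential field method used by the second control module can be illustrated with the classic repulsive-potential gradient; the influence radius `d0` and gain `eta` below are illustrative parameters, not values from the patent, and a real implementation would work in the robot's joint or Cartesian space rather than this 2D toy.

```python
import math

def apf_repulsive_velocity(robot_pos, obstacle_pos, d0=0.3, eta=0.05):
    """Repulsive term of an artificial potential field: active only inside
    the influence radius d0, growing steeply as the obstacle approaches
    the proximity electronic skin; zero outside d0."""
    dx = robot_pos[0] - obstacle_pos[0]
    dy = robot_pos[1] - obstacle_pos[1]
    d = math.hypot(dx, dy)
    if d >= d0 or d == 0.0:
        return (0.0, 0.0)
    gain = eta * (1.0 / d - 1.0 / d0) / (d * d)  # gradient magnitude factor
    return (gain * dx, gain * dy)                # points away from obstacle
```

Usage: the resulting vector can be added to the robot's commanded velocity so that, as proximity information reports a shrinking distance, the evasive component dominates.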
In an embodiment, the robot safety control device based on multiple perceptions provided by the invention further comprises:
a first model building module 40, configured to track the obstacle in real time through the 3D vision device, so as to obtain real-time motion information of the obstacle, and build a first motion model according to the motion information;
a second model building module 50, configured to obtain real-time motion information of the robot through a controller of the robot, and build a second motion model according to the motion information;
the collision exercise module 60 is configured to perform collision exercise on the robot and the obstacle according to the first motion model and the second motion model, and obtain a result of the collision exercise;
a judging module 70, configured to determine whether to generate the first control strategy according to the result of the collision exercise.
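The collision exercise performed by modules 40-70 amounts to stepping the two motion models through common time samples and checking the predicted clearance. The following is a minimal sketch with point models and an assumed safety distance; a real system would use the robot's link geometry and the first/second motion models built from the 3D vision device and the controller.

```python
def rehearse_collision(robot_traj, obstacle_traj, safe_dist=0.2):
    """Step predicted robot and obstacle positions through the same time
    samples; report True if any pair comes within safe_dist, in which
    case the first control strategy should be generated."""
    for (rx, ry), (ox, oy) in zip(robot_traj, obstacle_traj):
        if ((rx - ox) ** 2 + (ry - oy) ** 2) ** 0.5 < safe_dist:
            return True   # predicted collision in the exercise model
    return False          # current path is predicted safe
```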
In another embodiment, the first control module 10 according to the present invention comprises:
a path generating unit 11, configured to generate an autonomous obstacle avoidance path during a collision exercise process, and synthesize the autonomous obstacle avoidance path with a current path of the robot;
and the deceleration control unit 12 is used for controlling the robot to run at a reduced speed when the autonomous obstacle avoidance path cannot be generated.
In yet another embodiment, the third control module 30 according to the present invention includes:
a third model establishing unit 31 for establishing an impedance control model including a collision force at the time of collision, a collision parameter, a preset position, a preset speed, and a preset deceleration;
a collision parameter acquisition unit 32 for acquiring collision parameters at the time of collision, including a stiffness parameter, a damping parameter, and a mass matrix parameter, according to a pre-established collision test;
the preset position acquiring unit 33 is configured to input the collision force fed back by the tactile electronic skin to the impedance control model to acquire the preset position of the robot.
In yet another embodiment, the impedance control model is built up as follows:
F=K*X+B*X’+M*X”;
wherein F is the collision force;
wherein X, X' and X'' are respectively the preset position, the preset speed and the preset deceleration of the robot;
k, B, M are respectively the rigidity parameter, damping parameter and mass matrix parameter of the obstacle;
the deceleration is calculated according to the following formula:
X”=(F-K*X-B*X’)/M;
and integrating the deceleration twice to obtain the preset position of the robot.
Based on the above-mentioned robot safety control method based on multiple perceptions, the invention also provides a robot safety control system based on multiple perceptions, which comprises:
a memory for storing a computer program;
a processor, configured to implement the steps of the multiple perception based robot safety control method in the foregoing embodiments when executing a computer program, where the robot safety control method at least includes the following steps:
step S10, when obstacle movement information sent by a 3D vision device is received, generating a first control strategy for the robot to execute according to the obstacle movement information and the robot movement information so as to avoid the obstacle, wherein the movement information comprises movement speed and movement track;
step S20, when proximity information sent by the proximity electronic skin is received, generating a second control strategy for the robot to execute according to an artificial potential field method so as to avoid an obstacle;
and step S30, when receiving the force feedback information sent by the tactile electronic skin, generating a third control strategy for the robot to execute according to the force feedback information and the current feedback information of the mechanical arm so as to reduce the collision force between the robot and the obstacle.
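Taken together, steps S10-S30 route each sensing channel to its own strategy. The dispatcher below is a schematic illustration only; the message kinds and handler signatures are hypothetical names, not interfaces defined by the patent.

```python
def dispatch(message, first_strategy, second_strategy, third_strategy):
    """Route an incoming sensor message to the matching control strategy:
    3D vision -> first (S10), proximity skin -> second (S20),
    tactile skin -> third (S30)."""
    handlers = {
        "3d_vision_motion": first_strategy,
        "proximity_skin": second_strategy,
        "tactile_force": third_strategy,
    }
    kind, payload = message
    return handlers[kind](payload)
```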
Based on the above-mentioned robot safety control method based on multiple perceptions, the invention also provides a computer readable storage medium storing a computer program which when executed by a processor implements the steps of the robot safety control method based on multiple perceptions in the above-mentioned embodiments, the robot safety control method at least comprises the following steps:
step S10, when obstacle movement information sent by a 3D vision device is received, generating a first control strategy for the robot to execute according to the obstacle movement information and the robot movement information so as to avoid the obstacle, wherein the movement information comprises movement speed and movement track;
step S20, when proximity information sent by the proximity electronic skin is received, generating a second control strategy for the robot to execute according to an artificial potential field method so as to avoid an obstacle;
and step S30, when receiving the force feedback information sent by the tactile electronic skin, generating a third control strategy for the robot to execute according to the force feedback information and the current feedback information of the mechanical arm so as to reduce the collision force between the robot and the obstacle.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied, in essence or in the part contributing to the prior art, in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The above description covers only preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structural or process transformation made using the contents of the present specification and drawings, or any direct or indirect application of the invention in other related technical fields, is likewise included within the scope of patent protection of the invention.

Claims (10)

1. A robot safety control method based on multiple perceptions, characterized by comprising the following steps:
generating a first control strategy for the robot to execute according to the obstacle movement information and the robot movement information when the obstacle movement information sent by the 3D vision device is received so as to avoid the obstacle, wherein the movement information comprises a movement speed and a movement track; the generating a first control strategy for the robot to execute according to the obstacle movement information and the robot movement information comprises the following steps: combining the obstacle movement information and the robot movement information to plan a path capable of avoiding the obstacle so as to enable the robot to travel according to the path;
when proximity information sent by the proximity electronic skin is received, generating a second control strategy for the robot to execute according to an artificial potential field method so as to avoid the obstacle;
when force feedback information sent by the tactile electronic skin is received, a third control strategy for the robot to execute is generated according to the force feedback information and the current feedback information of the mechanical arm so as to control the robot to move backwards to a preset position relative to the obstacle, and collision force between the robot and the obstacle is reduced, wherein the preset position is obtained through calculation according to the collision force generated when the robot collides with the obstacle.
2. The robot safety control method according to claim 1, further comprising, upon receiving the obstacle movement information transmitted from the 3D vision apparatus, before the step of generating the first control strategy for the robot to execute, based on the obstacle movement information and the robot movement information:
real-time tracking is carried out on the obstacle through a 3D vision device so as to acquire real-time motion information of the obstacle, and a first motion model is built according to the motion information;
acquiring real-time motion information of the robot through a controller of the robot, and establishing a second motion model according to the motion information;
according to the first motion model and the second motion model, collision exercise is carried out on the robot and the obstacle, and a collision exercise result is obtained;
and determining whether to generate the first control strategy according to the collision exercise result.
3. The robot safety control method according to claim 2, wherein the generating a first control strategy for the robot to execute based on the obstacle movement information and the robot movement information comprises:
generating an autonomous obstacle avoidance path in the collision exercise process, and superposing the autonomous obstacle avoidance path with the current path of the robot;
and if the autonomous obstacle avoidance path cannot be generated, controlling the robot to run at a reduced speed.
4. The robot safety control method according to claim 1, wherein generating a third control strategy for the robot to execute according to the force feedback information and the current feedback information of the robot arm comprises:
establishing an impedance control model, wherein the impedance control model comprises collision force, collision parameters, preset positions, preset speeds and preset deceleration during collision;
according to a pre-established collision test, acquiring collision parameters during collision, wherein the collision parameters comprise stiffness parameters, damping parameters and mass matrix parameters;
and inputting collision force fed back by the tactile electronic skin into the impedance control model to obtain the preset position of the robot.
5. The robot safety control method according to claim 4, wherein the impedance control model is established according to the following formula:
F=K*X+B*X’+M*X”;
wherein F is the collision force;
wherein X, X' and X'' are respectively the preset position, the preset speed and the preset deceleration of the robot;
k, B, M are respectively the rigidity parameter, damping parameter and mass matrix parameter of the obstacle;
the deceleration is calculated according to the following formula:
X”=(F-K*X-B*X’)/M;
and integrating the deceleration twice to obtain the preset position of the robot.
6. A robot safety control device based on multiple perceptions, comprising:
the first control module is used for generating a first control strategy for the robot to execute according to the obstacle motion information and the robot motion information when the obstacle motion information sent by the 3D vision device is received so as to avoid the obstacle, wherein the motion information comprises a motion speed and a motion track; the generating a first control strategy for the robot to execute according to the obstacle movement information and the robot movement information comprises the following steps: combining the obstacle movement information and the robot movement information to plan a path capable of avoiding the obstacle so as to enable the robot to travel according to the path;
the second control module is used for generating a second control strategy for the robot to execute according to an artificial potential field method when proximity information sent by the proximity electronic skin is received so as to avoid the obstacle;
and the third control module is used for generating a third control strategy for the robot to execute according to the force feedback information and the current feedback information of the mechanical arm when receiving the force feedback information sent by the tactile electronic skin so as to control the robot to move backwards to a preset position relative to the obstacle and reduce the collision force between the robot and the obstacle, wherein the preset position is obtained by calculation according to the collision force generated when the robot collides with the obstacle.
7. The robot safety control device of claim 6, further comprising:
the first model building module is used for tracking the obstacle in real time through the 3D vision device so as to acquire real-time motion information of the obstacle, and building a first motion model according to the motion information;
the second model building module is used for acquiring real-time motion information of the robot through a controller of the robot and building a second motion model according to the motion information;
the collision exercise module is used for performing collision exercise on the robot and the obstacle according to the first motion model and the second motion model, and acquiring a collision exercise result;
and the judging module is used for determining whether to generate the first control strategy according to the collision exercise result.
8. The robot safety control device of claim 7, wherein the first control module comprises:
the path generation unit is used for generating an autonomous obstacle avoidance path in the collision exercise process and combining the autonomous obstacle avoidance path with the current path of the robot;
and the deceleration control unit is used for controlling the robot to run in a decelerating mode when the autonomous obstacle avoidance path cannot be generated.
9. The robot safety control device of claim 6, wherein the third control module comprises:
a third model building unit for building an impedance control model, wherein the impedance control model comprises a collision force, a collision parameter, a preset position, a preset speed and a preset deceleration during collision;
the device comprises a collision parameter acquisition unit, a collision parameter analysis unit and a collision control unit, wherein the collision parameter acquisition unit is used for acquiring collision parameters during collision according to a pre-established collision test, and the collision parameters comprise a rigidity parameter, a damping parameter and a quality matrix parameter;
and the preset position acquisition unit is used for inputting the collision force fed back by the tactile electronic skin into the impedance control model so as to acquire the preset position of the robot.
10. The robot safety control device of claim 9, wherein the impedance control model is established according to the following formula:
F=K*X+B*X’+M*X”;
wherein F is the collision force;
wherein X, X' and X'' are respectively the preset position, the preset speed and the preset deceleration of the robot;
k, B, M are respectively the rigidity parameter, damping parameter and mass matrix parameter of the obstacle;
the deceleration is calculated according to the following formula:
X”=(F-K*X-B*X’)/M;
and integrating the deceleration twice to obtain the preset position of the robot.
CN202010590912.4A 2020-06-24 2020-06-24 Robot safety control method and device based on multiple perceptions Active CN111906778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010590912.4A CN111906778B (en) 2020-06-24 2020-06-24 Robot safety control method and device based on multiple perceptions

Publications (2)

Publication Number Publication Date
CN111906778A CN111906778A (en) 2020-11-10
CN111906778B true CN111906778B (en) 2023-04-28

Family

ID=73226613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010590912.4A Active CN111906778B (en) 2020-06-24 2020-06-24 Robot safety control method and device based on multiple perceptions

Country Status (1)

Country Link
CN (1) CN111906778B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112917475B (en) * 2021-01-27 2022-05-20 哈尔滨工程大学 Safe nursing control method for assisting eating by eating assisting robot based on multiple perceptions
CN113183147B (en) * 2021-03-30 2022-08-23 苏州大学 Large-area coverage electronic skin system with remote proximity sense
CN113021359B (en) * 2021-05-27 2021-10-29 深圳市越疆科技有限公司 Mechanical arm control method, device, equipment, system, storage medium and mechanical arm
CN113721515A (en) * 2021-08-30 2021-11-30 太原理工大学 Active safety device of mechanical arm and safety control method thereof
CN114770559B (en) * 2022-05-27 2022-12-13 中迪机器人(盐城)有限公司 Fetching control system and method of robot
CN115229772B (en) * 2022-08-23 2023-07-18 深圳市越疆科技股份有限公司 Robot, control method, control device, control equipment, storage medium and mechanical arm thereof
CN116749196B (en) * 2023-07-26 2024-06-18 睿尔曼智能科技(北京)有限公司 Multi-axis mechanical arm collision detection system and method and mechanical arm
CN117067199B (en) * 2023-07-26 2024-06-14 睿尔曼智能科技(北京)有限公司 Mechanical arm electronic skin, mechanical arm and collision detection system thereof

Citations (4)

Publication number Priority date Publication date Assignee Title
JPH06278081A (en) * 1993-03-26 1994-10-04 Nec Corp Robot arm provided with collision prevention function
CN107677296A (en) * 2017-09-25 2018-02-09 合肥工业大学 A kind of Grazing condition is close to touch-pressure sensation sensor
CN109048926A (en) * 2018-10-24 2018-12-21 河北工业大学 A kind of intelligent robot obstacle avoidance system and method based on stereoscopic vision
CN110125936A (en) * 2019-05-15 2019-08-16 清华大学深圳研究生院 A kind of the Shared control method and ground experiment verifying system of robot for space

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN104764481B (en) * 2015-04-08 2017-01-25 合肥工业大学 Full-compliancy capacitance and resistance dual mode proximate sense transducer
TWI615691B (en) * 2016-11-24 2018-02-21 財團法人資訊工業策進會 Anti-collision system and anti-collision method
EP3587042A1 (en) * 2018-06-25 2020-01-01 Siemens Aktiengesellschaft Method, apparatus and system for determining a trajectory of a robot's end effector
US11407111B2 (en) * 2018-06-27 2022-08-09 Abb Schweiz Ag Method and system to generate a 3D model for a robot scene
CN109163824A (en) * 2018-10-10 2019-01-08 北京理工大学 A kind of flexible electronic skin with tactile and close feel bimodulus perceptional function
CN109910011A (en) * 2019-03-29 2019-06-21 齐鲁工业大学 A kind of mechanical arm barrier-avoiding method and mechanical arm based on multisensor

Non-Patent Citations (1)

Title
Jiang Zhihong. Impedance Control. In: Fundamentals of Robotics. 2018. *

Also Published As

Publication number Publication date
CN111906778A (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN111906778B (en) Robot safety control method and device based on multiple perceptions
US20210053226A1 (en) Safe operation of machinery using potential occupancy envelopes
JP5283622B2 (en) Monitoring method and apparatus using camera for preventing collision of machine
JP2017516670A (en) Humanoid robot with collision avoidance and orbit return capability
Kumar et al. Speed and separation monitoring using on-robot time-of-flight laser-ranging sensor arrays
CN112476438B (en) Mechanical arm obstacle avoidance method and device, mechanical arm and robot
CN112706158B (en) Industrial man-machine interaction system and method based on vision and inertial navigation positioning
Smith et al. A predictor for operator input for time-delayed teleoperation
US20210379762A1 (en) Motion planning and task execution using potential occupancy envelopes
US11602852B2 (en) Context-sensitive safety monitoring of collaborative work environments
JP7243979B2 (en) Robot interference determination device, robot interference determination method, robot control device, robot control system, human motion prediction device, and human motion prediction method
Weitschat et al. Safe and efficient human-robot collaboration part I: Estimation of human arm motions
US11919173B2 (en) Motion planning and task execution using potential occupancy envelopes
CN113021359A (en) Mechanical arm control method, device, equipment, system, storage medium and mechanical arm
Lim et al. Internet-based teleoperation of a mobile robot with force-reflection
JP2004364396A (en) Controller and control method for motor
US20230173682A1 (en) Context-sensitive safety monitoring of collaborative work environments
Zhang et al. Gesture-based human-robot interface for dual-robot with hybrid sensors
Morato et al. Safe human robot interaction by using exteroceptive sensing based human modeling
CN110549375A (en) protective door anti-collision method and system for mechanical arm
Ostermann et al. Freed from fences-Safeguarding industrial robots with ultrasound
JPH02188809A (en) Controller for avoiding obstacle of traveling object
Lu et al. Human-robot collision detection based on the improved camshift algorithm and bounding box
Roennau et al. Adaptation of a six-legged walking robot to its local environment
US20240165806A1 (en) Motion planning and task execution using potential occupancy envelopes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant