AU2021101646A4 - Man-machine cooperative safe operation method based on cooperative trajectory evaluation - Google Patents
- Publication number: AU2021101646A4
- Authority: AU (Australia)
- Prior art keywords: man, trajectory, worker, cooperative, conv2d
- Legal status: Ceased (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classifications: G05B19/19 (numerical positioning/contouring control); G05B19/4061 (avoiding collision or forbidden zones); G06N3/047 (probabilistic or stochastic neural networks); G06N7/01 (probabilistic graphical models); B25J9/1664 (manipulator motion, path, and trajectory planning); G05B2219/39082 (real-time collision avoidance); G05B2219/39391 (visual servoing); G05B2219/40202 (human-robot coexistence); G05B2219/40475 (moving obstacles, dynamic environment); G05B2219/40519 (motion and trajectory planning); G05B2219/40543 (identification and location of components and objects); G06T2207/10024 (color image); G06T2207/20081 (training/learning); G06T2207/20084 (artificial neural networks); G06T2207/30241 (trajectory); G06V40/10 (human or animal bodies and body parts)
Abstract
The invention relates to a man-machine cooperative safe operation method based on cooperative trajectory evaluation, which includes the following steps. Step 1: detecting the positions of a man and of a tool held by the man, and the type of the tool, through an improved YOLO target detection algorithm, and then implementing cooperative intention recognition in combination with the action skeleton of the man and the tool held by the man. Step 2: constructing a minimum safety distance between the man and the end space of an industrial robot through a multi-vision field, and controlling the movement of the robot by speed and separation monitoring (SSM). Step 3: predicting the worker trajectory by a hidden Markov model, monitoring and updating the actual worker trajectory in real time, determining the deviation between the position of the actual worker trajectory and the position of the predicted worker trajectory, and, when a threshold is exceeded, performing, by a manipulator, active obstacle avoidance through a dynamic programming graph algorithm. According to the invention, the man's trajectory can be tracked in real time and his intention determined, so that the safety of the man is protected, while manufacturing and operating costs remain low.
[FIG. 1: framework of the improved YOLO target detection network. A (batch_size, 416, 416, 3) input passes through a Darknet-53 backbone — Conv2D 32×3×3 followed by residual blocks of 64, 128, 256, 512, and 1024 channels — producing 52×52, 26×26, and 13×13 feature maps; Conv2D blocks, up-sampling, and concatenation then yield detection outputs of shapes (batch_size, 52, 52, 75), (batch_size, 26, 26, 75), and (batch_size, 13, 13, 75).]

FIG. 1

[FIG. 2: schematic of speed and separation monitoring (SSM) — robot speed plotted against the man-machine separation distance.]

FIG. 2
Description
The present invention belongs to the field of man-machine integration technologies, and relates to a man-machine cooperative safe operation method, in particular to a man-machine cooperative safe operation method based on cooperative trajectory evaluation.
At present, industrial robots are widely applied in intelligent manufacturing because of their ability to improve productivity and product quality. However, full automation is still difficult to achieve in many production processes. In practice, a traditional industrial robot is often isolated from its surroundings by a fence because of its high running speed, high power, and lack of environmental awareness, so many complicated tasks that the robot cannot accomplish are still completed by skilled human workers, which seriously limits production efficiency.
With the revision of the International Organization for Standardization (ISO) standards governing industrial robots operated in factories, it is now possible for an industrial robot to share the same workspace with humans. However, the industrial robot still needs to be capable of interacting with a man in complex and atypical environments as tasks are progressively transferred from man to machine.
The man is responsible for situation evaluation and for complex tasks that are difficult for the robot to accomplish, while the robot replaces the worker in executing simple repetitive tasks and handling heavy parts. Cooperation thus gives full play to their respective advantages, improving work efficiency, saving factory space, providing a safe operation method, and increasing the flexibility of production lines.
However, for man-machine cooperation, the most important problems are how to track the man's trajectory in real time, determine his intention, protect his safety, and keep the retrofit cost low. How to construct a safe operating environment for a man and an industrial robot system, and how to develop a man-machine cooperative system, have therefore become urgent technical problems for those skilled in the field of factory automation.
SUMMARY

The present invention aims to overcome the defects in the prior art, and provides a man-machine cooperative safe operation method based on cooperative trajectory evaluation. The practical problems are solved by the present invention through the following technical solutions.

A man-machine cooperative safe operation method based on cooperative trajectory evaluation includes the following steps:

step 1: constructing a scene cognition method based on an RGB image, detecting the positions of a man and of a tool held by the man and the type of the tool through an improved YOLO target detection algorithm, and then implementing cooperative intention recognition in combination with the action skeleton of the man and the tool held by the man;

step 2: according to the positions of the man and the tool held by the man obtained in step 1, constructing a minimum safety distance between the man and the end space of an industrial robot through a multi-vision field, and controlling the movement of the robot by speed and separation monitoring (SSM); and

step 3: constructing the data trajectory of a worker, predicting the worker trajectory by a hidden Markov model, determining the intention in combination with the positions of the man and the tool held by the man and the type of the tool detected in step 1, and, when the intention recognition rate exceeds a threshold, performing man-machine interaction; monitoring and updating the actual worker trajectory in real time during the man-machine interaction, determining the deviation between the position of the actual worker trajectory and the position of the predicted worker trajectory, and, when a threshold is exceeded, performing, by a manipulator, active obstacle avoidance through an improved RRT random tree or a dynamic programming graph algorithm.
Moreover, a specific method of detecting the positions of the man and the tool held by the man and the type of the tool in step 1 includes: collecting RGB images of the man and the tool held by the man by using a plurality of hand-eye monocular cameras, generating a trained full convolution neural network detection model of the man and the tool held by the man, inputting the whole picture into the trained model, and detecting and obtaining the position frames of the man and the tool held by the man, and the category that the tool belongs to, by calculation through the full convolution neural network.

Moreover, a specific method of step 2 includes: monitoring the distance between the man and the machine and the movement speed of the industrial robot through the multi-vision field in real time, and dividing the workspace into a safety region, a shared cooperative region, and a danger region according to the distance between the man and the machine, wherein the industrial robot may move freely according to its program in the safety region; when entering the shared cooperative region, the industrial robot is subjected to speed and separation monitoring; and when entering the danger region, the industrial robot is stopped immediately; meanwhile, as the distance between the man and the machine shortens, the robot is controlled to slow down.
Moreover, step 3 includes the specific steps of:

(1) monitoring and updating the actual worker trajectory in real time, collecting a position image of the actual worker trajectory, and then processing the image to obtain the position information of the actual worker trajectory;

(2) predicting the worker trajectory according to the historical movement data of the worker and the hidden Markov model by a machine learning method, to obtain the position information of the predicted worker trajectory; and

(3) when the deviation between the position information of the actual worker trajectory and the position information of the predicted worker trajectory is less than a preset threshold deviation, determining the intention in combination with the positions of the man and the tool held by the man and the type of the tool detected in step 1, and, when the intention recognition rate exceeds the threshold, performing the man-machine interaction; and when the deviation between the position information of the actual worker trajectory and the position information of the predicted worker trajectory exceeds the preset threshold deviation, constructing, by a cooperative manipulator, an optimal obstacle avoidance trajectory based on reinforcement learning, and performing, by an industrial manipulator, double-arm obstacle avoidance according to the improved RRT random tree or the dynamic programming graph algorithm.
Moreover, a specific method of step (2) in step 3 includes: obtaining the historical movement data of the worker through behavior monitoring by the machine learning method, establishing a mathematical model based on time and position information, obtaining the relationship between the movement time and the movement position of the worker through the convolution neural network, and constructing the data trajectory of the worker, so as to predict the worker trajectory and thus obtain the position information of the predicted worker trajectory.
The present invention has the following advantages and beneficial effects. According to the present invention, the scene cognition method based on the RGB image is constructed first; cooperative intention recognition is implemented in combination with recognition of the action skeleton of the man and of the small target tool held by the man; the trajectory is then monitored in real time during cooperation and compared with the predicted trajectory path in real time; when the safety threshold is exceeded, active obstacle avoidance is performed by the manipulator through the dynamic programming graph algorithm; and a man-multi-machine cooperative safe operation method is thus constructed. According to the present invention, the man's trajectory can be tracked in real time and his intention determined, so that the safety of the man is protected, and manufacturing and operating costs are low.
FIG. 1 is a framework diagram of an improved YOLO target detection algorithm
according to the present invention.
FIG. 2 is a schematic diagram of speed and separation monitoring (SSM) according to
the present invention.
FIG. 3 is a predicted anti-collision flow chart of a worker trajectory according to the
present invention.
FIG. 4 is a schematic diagram of a predicted worker trajectory and a predicted robot
trajectory according to the present invention.
The embodiments of the present invention are further described in detail hereinafter with
reference to the accompanying drawings.
A man-machine cooperative safe operation method based on cooperative track evaluation
includes the following steps.
In step 1, a scene cognition method is constructed based on an RGB image, positions of a
man and a tool held by the man and a type of the tool are detected through an improved
YOLO target detection algorithm, and then cooperative intention recognition is implemented
in combination with an action skeleton of the man and the tool held by the man.
A specific method of detecting the positions of the man and the tool held by the man and
the type of the tool in step 1 includes: collecting RGB images of the man and the tool held by
the man by using a plurality of hand-eye monocular cameras, generating a trained full
convolution neural network detection model of the man and the tool held by the man,
inputting a whole picture into the trained full convolution neural network detection model of
the man and the tool held by the man, and detecting and obtaining position frames of the man
and the tool held by the man and a category that the tool belongs to by calculating through the
full convolution neural network.
Because YOLO does not extract and classify candidate regions, the time required for processing a picture is greatly reduced. Admittedly, this comes at the expense of some detection accuracy, but the YOLO algorithm makes real-time processing of a video stream a reality.
As shown in FIG. 1, the YOLO algorithm includes the specific steps of: collecting pictures, making labels, building a data set, inputting a 416*416 image into the trained model, obtaining 52*52, 26*26, and 13*13 feature maps through the Darknet-53 residual network and continuous down-sampling, and then obtaining 52*52*75, 26*26*75, and 13*13*75 outputs by up-sampling, fusion, and convolution, which are used to detect small, medium, and large objects respectively.
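The three output shapes above follow directly from the down-sampling strides of the network. The following sketch (not the patent's code) assumes the common YOLOv3 configuration of 3 anchors per scale and 20 object classes, which gives the 75-channel heads shown in FIG. 1:

```python
# Illustrative sketch: how the three YOLO output shapes in FIG. 1 follow
# from the input size. Assumes 3 anchors per scale and 20 classes
# (VOC-style), so each grid cell predicts
# 3 * (20 classes + 4 box coordinates + 1 objectness) = 75 values.

def yolo_head_shapes(input_size=416, num_anchors=3, num_classes=20):
    """Return the (grid, grid, channels) shape of each detection head."""
    channels = num_anchors * (num_classes + 5)   # 3 * 25 = 75
    strides = (8, 16, 32)                        # cumulative down-sampling
    return [(input_size // s, input_size // s, channels) for s in strides]

print(yolo_head_shapes())
# [(52, 52, 75), (26, 26, 75), (13, 13, 75)]
```

Changing `input_size` or `num_classes` reproduces the head shapes of other YOLO configurations under the same rule.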
In this embodiment, YOLO-series algorithms are used to detect the positions of the hand of the man and of the end of the robot, and the type of the robot. Specifically, the YOLOv3 algorithm and the YOLOv4 algorithm may be adopted. The YOLOv3 algorithm represented the highest level in the world in the field of target recognition and positioning at the time. The YOLOv4 algorithm is improved at both the data level and the network design level, which reduces the workload of the user; the YOLOv4 algorithm can be trained well on a single GPU, so that only the training cost is increased, while accuracy can be improved without affecting the inference speed. The YOLOv4 algorithm also applies a series of data augmentations, such as adjustment of brightness, contrast, and hue, random scaling, cropping, flipping, rotation, and the like. Regularization of the network and an increased training difficulty may prevent over-fitting and improve training accuracy. The loss function (CIoU) is redefined, and three geometric factors, namely the overlapping area, the distance between center points, and the aspect ratio, are considered, thus solving the problems that a gradient cannot be computed when the intersection over union IoU = 0, and that the same IoU cannot reflect the actual situation. The YOLOv5 algorithm is similar to the YOLOv4 algorithm as a whole, but is more suitable for engineering application.
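The CIoU loss named above is a standard published formulation, and a minimal sketch of it (boxes given as corner coordinates; this is not the patent's own code) shows how the three geometric factors combine:

```python
import math

def ciou_loss(box_a, box_b):
    """CIoU loss between two boxes given as (x1, y1, x2, y2).

    Combines the three geometric factors named in the text: overlap
    area (IoU), normalized distance between box centers, and
    aspect-ratio consistency. Returns 1 - CIoU, so identical boxes
    give 0, and the loss stays informative even when IoU = 0.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Overlap area -> IoU
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)

    # Squared center distance, normalized by the enclosing box diagonal
    rho2 = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 + ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2

    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (
        math.atan((bx2 - bx1) / (by2 - by1)) - math.atan((ax2 - ax1) / (ay2 - ay1))
    ) ** 2
    alpha = v / (1 - iou + v + 1e-9)

    return 1 - (iou - rho2 / c2 - alpha * v)

print(ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> 0.0
```

Note that for two disjoint boxes (IoU = 0) the center-distance term still produces a nonzero gradient, which is exactly the deficiency of plain IoU that the text describes.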
In step 2, according to the positions of the man and the tool held by the man obtained in step 1, a minimum safety distance between the man and the end space of the industrial robot is constructed through a multi-vision field, and the movement of the robot is controlled by speed and separation monitoring (SSM).
A specific method of step 2 includes: monitoring the distance between the man and the machine and the movement speed of the industrial robot through the multi-vision field in real time, and dividing the workspace into a safety region, a shared cooperative region, and a danger region according to the distance between the man and the machine, wherein the industrial robot may move freely according to its program in the safety region; when entering the shared cooperative region, the industrial robot is subjected to speed and separation monitoring; and when entering the danger region, the industrial robot is stopped immediately; meanwhile, as the distance between the man and the machine shortens, the robot is controlled to slow down.
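The three-region rule can be sketched as a simple distance-to-speed mapping. The region radii and maximum speed below are illustrative assumptions, not values from the patent:

```python
# Illustrative sketch (region radii and v_max are assumed): a
# speed-and-separation-monitoring rule mapping the measured man-machine
# distance to a robot speed command, following the three regions above.

def ssm_speed(distance_m, v_max=1.0, danger=0.5, shared=2.0):
    """Robot speed command (m/s) as a function of separation distance (m)."""
    if distance_m <= danger:
        return 0.0                       # danger region: immediate stop
    if distance_m < shared:
        # shared cooperative region: slow down as the man approaches
        return v_max * (distance_m - danger) / (shared - danger)
    return v_max                         # safety region: move freely

print(ssm_speed(3.0))   # 1.0  (safety region)
print(ssm_speed(1.25))  # 0.5  (shared region, half speed)
print(ssm_speed(0.3))   # 0.0  (danger region)
```

The linear ramp is one possible choice; any monotone function that reaches zero at the danger boundary satisfies the "shorter distance, slower robot" requirement stated in the text.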
In this embodiment, as shown in FIG. 2, in the man-machine cooperative space, speed and separation monitoring means that the speed of the robot is directly related to the safety distance between the robot and the worker: whenever the distance becomes smaller, the robot should move more slowly. According to the distance between the man and the machine, the workspace may be divided into three regions, i.e., the safety region, the shared cooperative region, and the danger region. The robot may move freely according to its program in the safety region; when entering the shared cooperative region, the robot is subjected to speed and separation monitoring; and when entering the danger region, the robot is stopped immediately.

In step 3, the data trajectory of the worker is constructed, the worker trajectory is predicted by a hidden Markov model, the intention is determined in combination with the positions of the man and the tool held by the man and the type of the tool detected in step 1, and, when the intention recognition rate exceeds a threshold, man-machine interaction is performed. The actual worker trajectory is monitored and updated in real time during the man-machine interaction, the deviation between the position of the actual worker trajectory and the position of the predicted worker trajectory is determined, and, when a threshold is exceeded, active obstacle avoidance is performed by a manipulator through an improved RRT random tree or a dynamic programming graph algorithm.

As shown in FIG. 3, step 3 includes the specific steps of:

(1) monitoring and updating the actual worker trajectory in real time, collecting a position image of the actual worker trajectory, and then processing the image to obtain the position information of the actual worker trajectory;

(2) predicting the worker trajectory according to the historical movement data of the worker and the hidden Markov model by a machine learning method, to obtain the position information of the predicted worker trajectory; and

(3) when the deviation between the position information of the actual worker trajectory and the position information of the predicted worker trajectory is less than the preset threshold deviation, determining the intention in combination with the positions of the man and the tool held by the man and the type of the tool detected in step 1, and, when the intention recognition rate exceeds the threshold, performing the man-machine interaction; and when the deviation between the position information of the actual worker trajectory and the position information of the predicted worker trajectory exceeds the preset threshold deviation, constructing, by a cooperative manipulator, an optimal obstacle avoidance trajectory based on reinforcement learning, and performing, by an industrial manipulator, double-arm obstacle avoidance according to the improved RRT random tree or the dynamic programming graph algorithm.
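The decision in step (3) reduces to comparing the actual and predicted trajectories against a preset threshold deviation. A minimal sketch, with an assumed threshold value and trajectories as lists of (x, y) points:

```python
import math

# Illustrative sketch (threshold value and trajectory data are assumed):
# the step (3) decision rule -- compare each actual worker position with
# the predicted one, then choose between interaction and avoidance.

def max_deviation(actual, predicted):
    """Largest Euclidean distance between paired (x, y) trajectory points."""
    return max(math.dist(a, p) for a, p in zip(actual, predicted))

def cooperation_mode(actual, predicted, threshold=0.2):
    """'interact' while the worker follows the prediction, else 'avoid'."""
    return "interact" if max_deviation(actual, predicted) < threshold else "avoid"

predicted = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]
on_track  = [(0.0, 0.01), (0.1, 0.02), (0.21, 0.0)]
off_track = [(0.0, 0.0), (0.1, 0.3), (0.2, 0.5)]

print(cooperation_mode(on_track, predicted))   # interact
print(cooperation_mode(off_track, predicted))  # avoid
```

In the full method, the "avoid" branch would hand the current worker position to the improved RRT or dynamic-programming planner rather than simply return a label.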
A specific method of step (2) in step 3 includes: obtaining the historical movement data of the worker through behavior monitoring by the machine learning method, establishing a mathematical model based on time and position information, obtaining the relationship between the movement time and the movement position of the worker through the convolution neural network, and constructing the data trajectory of the worker, so as to predict the worker trajectory and thus obtain the position information of the predicted worker trajectory.
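The time-position model can be illustrated with a deliberately simplified stand-in: the patent learns the time-to-position relationship with a convolution neural network, but a least-squares fit over assumed historical data shows the same idea of predicting a future worker position from (time, position) pairs:

```python
import numpy as np

# Minimal sketch of the time-position model described above. The patent
# uses a convolution neural network; a least-squares polynomial fit
# stands in for it here, purely to illustrate predicting a worker
# position from historical (time, position) data.

# Hypothetical historical data: worker moving at about 0.5 m/s along x
t_hist = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # seconds
x_hist = np.array([0.0, 0.26, 0.49, 0.76, 1.0])  # metres

coeffs = np.polyfit(t_hist, x_hist, deg=1)     # fit x(t) = a*t + b
predict = np.poly1d(coeffs)

x_pred = predict(3.0)                          # predicted position at t = 3 s
print(round(float(x_pred), 2))                 # 1.5
```

The predicted positions produced this way are what step (3) compares against the actual trajectory to compute the deviation.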
In this embodiment, aiming at the uncertainty of the man in speed and separation control, trajectories are collected in advance for common tasks in the scene by a trajectory sampling method; when the trajectories conform to a Gaussian distribution, the sampled trajectory and the executed trajectory are distinguished, man-machine cooperative control is performed using the speed and separation control, and a good effect is achieved.
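Under the Gaussian assumption above, a set of sampled trajectories yields a per-waypoint mean and standard deviation, and an executed trajectory can be distinguished when it leaves that band. The data, band width, and function names below are assumptions for illustration, not the patent's parameters:

```python
import numpy as np

# Illustrative sketch (demo data and the 3-sigma band are assumptions):
# sampled trajectories for a common task give a per-waypoint mean and
# standard deviation; an executed trajectory is "distinguished" from the
# sampled ones when any waypoint leaves the band.

rng = np.random.default_rng(0)
# 20 sampled demonstrations of a 10-waypoint trajectory, small Gaussian noise
sampled = 0.5 * np.arange(10) + rng.normal(0.0, 0.02, size=(20, 10))

mean = sampled.mean(axis=0)
std = sampled.std(axis=0)

def within_band(executed, k=3.0):
    """True if every waypoint stays within mean +/- k*std of the samples."""
    return bool(np.all(np.abs(executed - mean) <= k * std))

print(within_band(mean))          # True: the mean trajectory itself
print(within_band(mean + 1.0))    # False: a 1 m offset leaves the band
```

A trajectory flagged as outside the band would then be treated as a deviation and handled by the speed and separation control.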
In this embodiment, as shown in FIG. 4, behavior prediction of the worker refers to predicting the movement of the worker from the worker's past movement data. According to this method, the robot trajectory is planned in a space extended along the time axis. Since the probability distribution of the predicted worker position is given at each time step, a state of the industrial robot with a low collision probability may be determined at each time step.
In addition, a target position is set in the space with the time axis, which allows the robot trajectory that reaches the target position at an assumed target time to be calculated. The predicted worker trajectory is updated in each measurement period of the sensor, so that the optimal robot trajectory is updated with each sensor measurement.
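The per-time-step selection can be sketched in one dimension: each time step carries a Gaussian over the predicted worker position, and among candidate robot states the planner keeps the one with the lowest collision risk, approximated here by the Gaussian density at the robot's position. All numbers are assumptions for illustration:

```python
import math

# Illustrative sketch (all values assumed): at each time step the
# predicted worker position is a Gaussian; among candidate robot states
# keep the one whose collision probability -- approximated by the
# Gaussian density at the robot's position -- is lowest.

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def safest_state(candidates, worker_mu, worker_sigma):
    """Candidate robot position with the lowest predicted collision risk."""
    return min(candidates, key=lambda x: gauss_pdf(x, worker_mu, worker_sigma))

# One predicted Gaussian per time step: (mean, std) of the worker position
worker_prediction = [(0.0, 0.1), (0.2, 0.15), (0.4, 0.2)]
candidates = [0.0, 0.5, 1.0]

plan = [safest_state(candidates, mu, s) for mu, s in worker_prediction]
print(plan)  # [1.0, 1.0, 1.0] -- keep the state farthest from the worker
```

A real planner would also weigh progress toward the target position against this risk; the sketch isolates only the "low collision probability at each time step" criterion named in the text.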
In this embodiment, the present invention achieves a scene target recognition accuracy of more than 90%, an intention recognition accuracy of more than 85%, and a real-time rate of no less than 22 fps.
It should be appreciated by those skilled in this art that the embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, the embodiments of the present application may take the form of complete hardware embodiments, complete software embodiments, or software-hardware combined embodiments. Moreover, the embodiments of the present application may take the form of a computer program product embodied on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) in which computer usable program codes are included.

The present application is described with reference to the flow charts and/or block diagrams of the method, apparatus (system), and computer program products according to the embodiments of the present disclosure. It should be appreciated that each flow and/or block in the flow charts and/or block diagrams, and combinations of the flows and/or blocks in the flow charts and/or block diagrams, may be implemented by computer program instructions. These computer program instructions may be provided to a general purpose computer, a special purpose computer, an embedded processor, or a processor of another programmable data processing apparatus to produce a machine, so that the instructions executed by the computer or the processor of the other programmable data processing apparatus generate a device for implementing the functions specified in one or more flows of the flow chart and/or in one or more blocks of the block diagram.

These computer program instructions may also be stored in a computer readable memory that can guide the computer or other programmable data processing apparatus to work in a given manner, so that the instructions stored in the computer readable memory produce a product including an instruction device that implements the functions specified in one or more flows of the flow chart and/or in one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, so that a series of operating steps are executed on the computer or other programmable data processing apparatus to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable data processing apparatus provide steps for implementing the functions specified in one or more flows of the flow chart and/or in one or more blocks of the block diagram.

It will be understood that the term "comprise" and any of its derivatives (e.g. comprises, comprising) as used in this specification is to be taken to be inclusive of the features to which it refers, and is not meant to exclude the presence of any additional features unless otherwise stated or implied.

The reference to any prior art in this specification is not, and should not be taken as, an acknowledgement or any form of suggestion that such prior art forms part of the common general knowledge.
Claims (5)
1. A man-machine cooperative safe operation method based on cooperative trajectory
evaluation, comprising the following steps of:
step 1: constructing a scene cognition method based on an RGB image, detecting positions
of a man and a tool held by the man and a type of the tool through an improved YOLO target
detection algorithm, and then implementing cooperative intention recognition in combination
with an action skeleton of the man and the tool held by the man;
step 2: according to the positions of the man and the tool held by the man obtained in
step 1, constructing a minimum safety distance between the man and an end space of an
industrial robot through a multi-vision field, and controlling a movement of the robot by
speed and separation monitoring (SSM); and
step 3: constructing a data trajectory of a worker, predicting a worker trajectory by a
hidden Markov model, determining an intention in combination with the positions of the man
and the tool held by the man and the type of the tool detected in step 1, and when an intention
recognition rate exceeds a threshold, performing man-machine interaction; monitoring and
updating an actual worker trajectory in real time during the man-machine interaction,
determining a threshold deviation between a position of the actual worker trajectory and a
position of a predicted worker trajectory, and when a threshold is exceeded, performing, by a
manipulator, active obstacle avoidance through an improved RRT random tree or a dynamic
programming graph algorithm.
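The two-threshold decision logic of step 3 can be sketched as follows. This is a minimal illustration only: the Euclidean distance metric, the threshold values, and all function and parameter names are assumptions for the sketch, not part of the claimed method.

```python
import math

def step3_decision(actual_pos, predicted_pos, intention_rate,
                   deviation_threshold=0.15, intention_threshold=0.8):
    """Decide between interaction and obstacle avoidance (illustrative only).

    actual_pos / predicted_pos: (x, y, z) positions on the actual and
    predicted worker trajectories.
    intention_rate: confidence of the recognized cooperative intention.
    Threshold values are hypothetical, not taken from the patent.
    """
    deviation = math.dist(actual_pos, predicted_pos)
    if deviation > deviation_threshold:
        # Predicted trajectory no longer matches reality: avoid actively.
        return "active_obstacle_avoidance"
    if intention_rate > intention_threshold:
        # Trajectories agree and the intention is confident: cooperate.
        return "man_machine_interaction"
    return "monitor"
```

For example, a 5 cm deviation with a confident intention keeps the interaction running, while a 50 cm deviation triggers active obstacle avoidance regardless of the intention score.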
2. The man-machine cooperative safe operation method based on cooperative trajectory
evaluation according to claim 1, wherein a specific method of detecting the positions of the
man and the tool held by the man and the type of the tool in step 1 comprises: collecting RGB
images of the man and the tool held by the man by using a plurality of hand-eye monocular
cameras, generating a trained full convolution neural network detection model of the man and
the tool held by the man, inputting a whole picture into the trained full convolution neural
network detection model of the man and the tool held by the man, and detecting and obtaining
position frames of the man and the tool held by the man and a category that the tool belongs
to by calculating through the full convolution neural network.
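The post-processing of the detector's output described in claim 2 can be sketched as a confidence filter that separates person boxes from tool boxes. The dictionary layout of a raw detection and the confidence threshold are illustrative assumptions; the patent does not specify the network's output format.

```python
def filter_detections(raw_detections, conf_threshold=0.5):
    """Keep detections above a confidence threshold and split them into
    person position frames and tool position frames with categories.

    raw_detections: list of dicts like
        {"box": (x1, y1, x2, y2), "label": "person" | "wrench" | ...,
         "score": float}
    This layout and the threshold are assumptions for the sketch.
    """
    persons, tools = [], []
    for det in raw_detections:
        if det["score"] < conf_threshold:
            continue  # Discard low-confidence detections.
        if det["label"] == "person":
            persons.append(det["box"])
        else:
            tools.append((det["box"], det["label"]))
    return persons, tools
```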
3. The man-machine cooperative safe operation method based on cooperative trajectory
evaluation according to claim 1 or 2, wherein step 2 comprises the specific steps of:
monitoring a distance between the man and the machine and a movement speed of the
industrial robot through the multi-vision field in real time, and dividing a workspace into a
safety region, a shared cooperative region, and a danger region according to the distance
between the man and the machine, wherein the industrial robot is capable of freely moving
according to a program in the safety region, when entering the shared cooperative region, the
industrial robot is subjected to speed and separation monitoring, and when entering the
danger region, the industrial robot is stopped immediately; and meanwhile, when the distance
between the man and the machine is shortened, controlling the robot to slow down.
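The three-region workspace of claim 3 can be sketched as a mapping from the man-machine distance to a robot speed scaling factor. The region radii and the linear slow-down law below are assumed values in the spirit of speed and separation monitoring; the patent does not fix them.

```python
def ssm_speed_factor(distance, danger_radius=0.3, coop_radius=1.2):
    """Map the man-machine distance (metres) to a speed scaling factor.

    distance < danger_radius      -> 0.0 (immediate stop)
    danger_radius..coop_radius    -> linear slow-down (shared region)
    distance >= coop_radius       -> 1.0 (free programmed motion)
    The radii and the linear law are illustrative assumptions.
    """
    if distance < danger_radius:
        return 0.0
    if distance >= coop_radius:
        return 1.0
    # Shared cooperative region: speed shrinks as the distance shortens.
    return (distance - danger_radius) / (coop_radius - danger_radius)
```

Under these assumed radii, a worker 0.75 m from the robot halves its speed, while anything inside 0.3 m stops it outright.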
4. The man-machine cooperative safe operation method based on cooperative trajectory
evaluation according to claim 1 or 2, wherein the step 3 comprises the specific steps of:
(1) monitoring and updating the actual worker trajectory in real time, collecting a
position image of the actual worker trajectory, and then processing the image to obtain
position information of the actual worker trajectory;
(2) predicting the worker trajectory according to historical movement data of the worker
and the hidden Markov model by a machine learning method to obtain position information of
the predicted worker trajectory; and
(3) when the threshold deviation between the position information of the actual worker
trajectory and the position information of the predicted worker trajectory is less than a preset
threshold deviation, determining the intention in combination with the positions of the man
and the tool held by the man and the type of the tool detected in step 1, and when the intention
recognition rate exceeds the threshold, performing the man-machine interaction; and when the
threshold deviation between the position information of the actual worker trajectory and the
position information of the predicted worker trajectory exceeds the preset threshold deviation,
constructing, by a cooperative manipulator, an optimal obstacle avoidance trajectory based on
reinforcement learning, and performing, by an industrial manipulator, double-arm obstacle
avoidance according to the improved RRT random tree or the dynamic programming graph
algorithm.
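Claim 4 invokes an improved RRT for double-arm obstacle avoidance. As background, the textbook (unimproved) 2D RRT that such variants build on can be sketched as below; the step size, bounds, and goal tolerance are illustrative assumptions, and the patent's improvement is not reproduced here.

```python
import math
import random

def rrt_plan(start, goal, is_free, step=0.5, goal_tol=0.5,
             bounds=((0.0, 10.0), (0.0, 10.0)), max_iters=5000, seed=0):
    """Plain 2D RRT baseline (the patent claims an *improved* RRT).

    is_free: callable(point) -> bool, True if the point is collision-free.
    Returns a list of waypoints from start to goal, or None on failure.
    """
    rng = random.Random(seed)
    nodes = [start]
    parents = {0: None}
    for _ in range(max_iters):
        sample = (rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        # Nearest existing tree node to the random sample.
        i_near = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer one fixed step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        parents[len(nodes)] = i_near
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            # Walk back up the tree to reconstruct the path.
            path, i = [goal], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None
```

In free space the planner returns a waypoint list from start to goal; a real collaborative cell would supply an `is_free` predicate built from the monitored worker positions.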
5. The man-machine cooperative safe operation method based on cooperative trajectory
evaluation according to claim 4, wherein a specific method of step (2) in step 3 comprises:
obtaining the historical movement data of the worker through behavior monitoring by the
machine learning method, establishing a mathematical model based on time and position
information, obtaining a relationship between a movement time and a movement position of
the worker through the convolution neural network, and constructing the data trajectory of the
worker, so as to predict the worker trajectory, thus obtaining the position information of the
predicted worker trajectory.
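The time-position model of claim 5 can be sketched, in heavily simplified form, as a first-order Markov predictor over discretized worker positions. This is a simplification of the claimed hidden Markov model: states are taken as directly observed grid cells rather than hidden states, and the grid discretization is an assumption for the sketch.

```python
from collections import Counter, defaultdict

def predict_next_cell(history):
    """Predict the worker's next grid cell from a discretized position
    history using first-order transition counts.

    history: sequence of hashable cells, e.g. [(0, 0), (0, 1), ...].
    Illustrative simplification of the claimed hidden Markov model.
    """
    transitions = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        transitions[prev][nxt] += 1
    current = history[-1]
    if not transitions[current]:
        return current  # No data for this cell: assume the worker stays.
    # Most frequently observed successor of the current cell.
    return transitions[current].most_common(1)[0][0]
```

A worker who has repeatedly moved from cell (0, 0) to cell (0, 1) is predicted to do so again; a cell never seen before predicts no movement.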
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2021101646A AU2021101646A4 (en) | 2021-03-30 | 2021-03-30 | Man-machine cooperative safe operation method based on cooperative trajectory evaluation |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2021101646A4 true AU2021101646A4 (en) | 2021-05-20 |
Family
ID=75911143
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2021101646A Ceased AU2021101646A4 (en) | 2021-03-30 | 2021-03-30 | Man-machine cooperative safe operation method based on cooperative trajectory evaluation |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU2021101646A4 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113342047A (en) * | 2021-06-23 | 2021-09-03 | 大连大学 | Unmanned aerial vehicle path planning method for improving artificial potential field method based on obstacle position prediction in unknown environment |
CN113342047B (en) * | 2021-06-23 | 2023-10-17 | 大连大学 | Unmanned aerial vehicle path planning method based on obstacle position prediction improved artificial potential field method in unknown environment |
CN113256724A (en) * | 2021-07-07 | 2021-08-13 | 上海影创信息科技有限公司 | Handle inside-out vision 6-degree-of-freedom positioning method and system |
CN113822375A (en) * | 2021-11-08 | 2021-12-21 | 北京工业大学 | Improved traffic image target detection method |
CN113822375B (en) * | 2021-11-08 | 2024-04-26 | 北京工业大学 | Improved traffic image target detection method |
CN116061187A (en) * | 2023-03-07 | 2023-05-05 | 睿尔曼智能科技(江苏)有限公司 | Method for identifying, positioning and grabbing goods on goods shelves by composite robot |
CN116061187B (en) * | 2023-03-07 | 2023-06-16 | 睿尔曼智能科技(江苏)有限公司 | Method for identifying, positioning and grabbing goods on goods shelves by composite robot |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2021101646A4 (en) | Man-machine cooperative safe operation method based on cooperative trajectory evaluation | |
EP3746855B1 (en) | Path planning in mobile robots | |
Dong et al. | Real-time avoidance strategy of dynamic obstacles via half model-free detection and tracking with 2d lidar for mobile robots | |
Mišeikis et al. | Multi 3D camera mapping for predictive and reflexive robot manipulator trajectory estimation | |
Merkt et al. | Robust shared autonomy for mobile manipulation with continuous scene monitoring | |
Schepp et al. | Sara: A tool for safe human-robot coexistence and collaboration through reachability analysis | |
Escobar-Naranjo et al. | Applications of Artificial Intelligence Techniques for trajectories optimization in robotics mobile platforms | |
Liu et al. | A mixed perception-based human-robot collaborative maintenance approach driven by augmented reality and online deep reinforcement learning | |
US11478932B2 (en) | Handling assembly comprising a handling device for carrying out at least one work step, method, and computer program | |
Teke et al. | Real-time and robust collaborative robot motion control with Microsoft Kinect® v2 | |
Nasti et al. | Obstacle avoidance during robot navigation in dynamic environment using fuzzy controller | |
WO2023034047A1 (en) | Machine learning-based environment fail-safes through multiple camera views | |
Kang et al. | Safety monitoring for human robot collaborative workspaces | |
Feng et al. | Human-robot integration for pose estimation and semi-autonomous navigation on unstructured construction sites | |
Yi et al. | Safety-aware human-centric collaborative assembly | |
Pushp et al. | Cognitive decision making for navigation assistance based on intent recognition | |
CN113510699A (en) | Mechanical arm motion trajectory planning method based on improved ant colony optimization algorithm | |
Trinh et al. | Safe and flexible planning of collaborative assembly processes using behavior trees and computer vision | |
Kidiraliev et al. | Using optical sensors for industrial robot-human interactions in a Gazebo environment | |
Kaiser | A framework for the generation of robot controllers from examples | |
Du et al. | Robotic manufacturing systems: A survey on technologies to improve the cognitive level in HRI | |
Wang | A pursuit evasion game approach to obstacle avoidance | |
Borsoi | Hands tracking and fuzzy speed control to improve human-robot collaboration | |
Nattharith | Motor schema-based control of mobile robot navigation | |
Müller et al. | Situation-based identification of probable loss scenarios of industrial mobile robots |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGI | Letters patent sealed or granted (innovation patent) | ||
MK22 | Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry |