CN114757293A - Man-machine co-fusion risk early warning method and system based on action recognition and man-machine distance - Google Patents


Info

Publication number
CN114757293A
Authority
CN
China
Prior art keywords
human
distance
machine
operator
action
Prior art date
Legal status
Pending
Application number
CN202210453368.8A
Other languages
Chinese (zh)
Inventor
周乐来
魏崇熠
李贻斌
宋锐
田新诚
荣学文
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202210453368.8A
Publication of CN114757293A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1674Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676Avoiding collision or forbidden zones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities

Abstract

The invention relates to a human-machine co-fusion risk early warning method and system based on action recognition and human-machine distance. The method comprises the following steps: recognizing, based on a constructed action recognition model and acquired operator motion information, the operator's current action state as walking, observing, or working; obtaining the minimum distance between the operator and the robot from their acquired position information; and generating risk prompt instructions for the different action states according to the operator's current action state and the obtained minimum distance. The action posture of the human during human-machine collaboration is recognized by the constructed model; the robot grades the risk according to the human's action and the human-machine distance, and is controlled to execute the safety instruction corresponding to that risk level.

Description

Man-machine co-fusion risk early warning method and system based on action recognition and man-machine distance
Technical Field
The invention relates to the technical field of man-machine cooperation, in particular to a man-machine co-fusion risk early warning method and system based on action recognition and man-machine distance.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Industrial robots can replace humans in part of their repetitive, mechanical, and dangerous work. Because an industrial robot contains moving parts, its workspace is kept separate from the human workspace, usually by railings or fences, and under normal circumstances workers are not allowed to enter the robot's workspace, so as to avoid accidents.
In some scenarios that require humans and robots to work together, the robot must dynamically adjust its pre-planned task to human behavior according to the human's operating state, so that more personalized and more complex tasks can be completed. Human learning ability, flexibility, versatility, and analysis and decision-making abilities compensate for the robot's shortcomings, while the robot compensates for human limitations in accuracy and repeatability, thereby realizing human-machine collaboration.
When a human and a robot share the same workspace during collaboration, the robot's actions must not endanger the human. Current risk assessment methods for the human-machine collaboration process, however, concentrate on the human-machine distance and the robot's movement speed without considering the human's action intention and state. The robot therefore cannot execute safety commands according to the human's actions; instead it actively decelerates or stops working for long periods, greatly reducing the efficiency of human-machine collaboration.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides a human-computer co-fusion risk early warning method and system based on motion recognition and human-computer distance.
In order to achieve the purpose, the invention adopts the following technical scheme:
The invention provides a human-machine co-fusion risk early warning method based on action recognition and human-machine distance, which comprises the following steps:
recognizing, based on the constructed action recognition model and the acquired operator motion information, the operator's current action state as walking, observing, or working;
obtaining the minimum distance between the operator and the robot from the acquired position information of the operator and the robot;
and generating risk prompt instructions for the different action states according to the operator's current action state and the obtained minimum distance.
The construction process of the motion recognition model comprises the following steps:
acquiring, through motion capture equipment, skeleton information of an operator working cooperatively with the robot in the same workspace;
extracting motion features from the skeleton information to construct feature vectors, which are input to a convolutional neural network in feature-matrix form for feature extraction, learning, and classification;
and saving the network model with the best training result as the action recognition model.
The skeleton information includes three-dimensional position information of fourteen joint points (the left and right shoulders, elbows, wrists, hip joints, knees, and ankles, plus an upper-body reference point at the neck and a lower-body reference point at the waist), together with acceleration information from sensors on the left and right upper arms and thighs.
The motion features include joint point distance features, joint angle features, and acceleration features.
Obtaining the minimum distance between the operator and the robot from the acquired position information comprises:
the human body and the robot arm are each enclosed in a spherical bounding box whose center is the mean of the coordinates of the respective joint points and whose radius is the maximum distance from that center to any joint point; the A-level minimum distance between the operator and the robot is the distance between the two sphere centers minus the sum of the two radii.
Each trunk segment of the human body and each link of the robot arm is treated as a cylinder in space, whose height is the distance between two adjacent joint points of the human skeleton or the robot arm and whose radius is a set value; the B-level minimum distance between the operator and the robot is the minimum distance between the central axis segments of two cylinders minus the sum of their radii.
If the A-level minimum distance is not smaller than a preset first safety distance threshold, it is taken as the actual minimum distance and the next-level distance calculation is not performed;
and if the A-level minimum distance is smaller than the preset first safety distance threshold, the B-level minimum distance is taken as the actual human-machine minimum distance.
Generating risk prompt instructions for the different action states according to the operator's current action state and the obtained minimum distance comprises:
when the human action is walking: if the human-machine minimum distance is not greater than the first safety distance threshold, the risk level is danger; if it is greater than the first safety distance threshold but not greater than the second safety distance threshold, the risk level is caution; and if it is greater than the second safety distance threshold, the risk level is safe.
When the human action is observing: if the human-machine minimum distance is not greater than the first safety distance threshold, the risk level is caution; if it is greater than the first safety distance threshold, whether or not it exceeds the second safety distance threshold, the risk level is safe.
When the human action is working: if the human-machine minimum distance is not greater than the second safety distance threshold, whether or not it exceeds the first safety distance threshold, the risk level is danger; and if it is greater than the second safety distance threshold, the risk level is safe.
A second aspect of the present invention provides a system for implementing the above method, comprising:
a motion recognition module configured to: recognize, based on the constructed action recognition model and the acquired operator motion information, the operator's current action state as walking, observing, or working;
a distance calculation module configured to: obtain the minimum distance between the operator and the robot from the acquired position information of the operator and the robot;
a risk level determination module configured to: generate risk prompt instructions for the different action states according to the operator's current action state and the obtained minimum distance.
A third aspect of the invention provides a computer-readable storage medium.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps of the human-machine co-fusion risk early warning method based on action recognition and human-machine distance described above.
A fourth aspect of the invention provides a computer apparatus.
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the human-machine co-fusion risk early warning method based on action recognition and human-machine distance described above.
Compared with the prior art, the above technical solutions have the following beneficial effects:
1. In the human-machine collaboration process, the overlap between the operator's and the robot's workspaces is fully considered. The trained action recognition model recognizes the human action while the human-machine minimum distance is calculated from position information; the two together serve as the criterion for judging the risk level of the current human-machine co-fusion scene, so that the robot can make the corresponding early warning response for each risk level and danger is effectively avoided.
2. Combining human action recognition with the human-machine minimum distance improves the collaborative robot's perception of human intention. This raises human safety in the shared workspace, avoids the harm that a robot executing a preset program without perception capability can cause during collaboration, and reduces shutdowns caused by the robot being unable to judge the human's state.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and together with the description serve to explain, not to limit, the invention.
Fig. 1 is a schematic diagram of a human-machine co-fusion risk early warning process according to one or more embodiments of the present invention;
Fig. 2 is a schematic diagram of the human skeleton labeled with the skeleton information used for early warning according to one or more embodiments of the present invention;
Fig. 3(a)-(c) are software interface diagrams of a human-machine co-fusion risk early warning system according to one or more embodiments of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well unless the context clearly indicates otherwise, and it should be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
As described in the background art, current risk assessment methods for the human-machine collaboration process focus on the human-machine distance and the robot's movement speed and do not consider the human's behavioral intention and state. This may leave the robot unable to execute safety commands according to human behavior, so that it actively decelerates or stops working for long periods, greatly reducing human-machine collaboration efficiency.
The following embodiments therefore provide a human-machine co-fusion risk early warning method and system based on action recognition and human-machine distance. The action posture of the human during human-machine collaboration is recognized by the constructed model, and the robot grades the risk according to the human's action and the human-machine distance and is controlled to execute the safety instruction corresponding to that risk level.
Example one:
As shown in figs. 1-3, the human-machine co-fusion risk early warning method based on action recognition and human-machine distance includes the following steps:
recognizing, based on the constructed action recognition model and the acquired operator motion information, the operator's current action state as walking, observing, or working;
obtaining the minimum distance between the operator and the robot from the acquired position information of the operator and the robot;
and generating risk prompt instructions for the different action states according to the operator's current action state and the obtained minimum distance.
Specifically, the method is divided into two stages:
an offline stage, in which the action recognition model is constructed, and an online risk early warning stage, in which the constructed model is used to realize risk early warning.
In the offline stage, the collaborating operator wears IMU motion capture devices (e.g., Xsens MTw Awinda) and repeatedly performs the three actions that may occur in the human-machine collaboration scene: walking, squatting work, and standing observation. Through motion capture, the IMU devices construct a human skeleton model and provide useful skeleton information, as shown in fig. 2. The information includes the 3D spatial position information of fourteen joint points (the left and right shoulders, elbows, wrists, hip joints, knees, and ankles, plus the upper-body reference point (body_up) and the lower-body reference point (body_base)) and the acceleration information from the IMU sensors on the left and right upper arms and thighs.
To represent a specific behavior, the key information that best reflects its characteristics is extracted from the raw skeleton information and stored as feature information.
With reference to fig. 2, the feature information extracted in this embodiment falls into the following three categories:
Joint point distance features: six distances are selected, namely the left shoulder to left wrist distance L1, the right shoulder to right wrist distance L2, the left wrist to left hip distance L3, the right wrist to right hip distance L4, the left hip to left ankle distance L5, and the right hip to right ankle distance L6. The distance between two joint points is obtained from their three-dimensional coordinate positions by the following formula:
$$L = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}$$
Joint angle features: the angles considered during human motion are the angle between the upper arm and the forearm, the angle between the thigh and the shank, the angle between the upper arm and the torso formed by the forward/backward swing of the upper limb, the angle between the upper arm and the torso formed by the raising/lowering of the upper limb, and the angle between the thigh and the torso formed by the forward/backward swing of the lower limb. Since the human body is left-right symmetric, this yields ten angle feature values θ1~θ10. Each joint angle is obtained as the angle between the two vectors formed pairwise by three joint points, by the formula:
$$\theta = \arccos\frac{\vec{v}_1 \cdot \vec{v}_2}{\left|\vec{v}_1\right|\left|\vec{v}_2\right|}$$
Acceleration correlation features: six limb pairs are selected, namely the left and right upper arms, the left upper arm and the left thigh, the left upper arm and the right thigh, the right upper arm and the left thigh, the right upper arm and the right thigh, and the left and right thighs. This embodiment describes each of these features by the cosine of the angle between the two vectors describing the acceleration directions, where a1 and a2 are the acceleration vectors, giving six values ACR1~ACR6:
$$ACR = \frac{\vec{a}_1 \cdot \vec{a}_2}{\left|\vec{a}_1\right|\left|\vec{a}_2\right|}$$
The feature information thus contains 22 feature values in three categories, which are combined into a 22-dimensional vector T = {L1~L6, θ1~θ10, ACR1~ACR6} serving as the feature vector of one frame of the human action.
In this embodiment, the sensor acquisition frequency is set to 50 times per second, i.e., feature vectors are extracted from the human skeleton information 50 times per second. To recognize a complete continuous action, this embodiment acquires data continuously for 1 s, and all feature information obtained in that second represents the behavior and action of the human body. A 50 × 22 matrix can thus be constructed as the feature matrix of a particular continuous action.
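As an illustration of the feature pipeline above, the following is a minimal Python/NumPy sketch of how one frame's 22 feature values could be computed and stacked into the 50 × 22 feature matrix. The joint and limb names are assumptions for illustration, and the ten angle triples are left as an input because their exact definitions depend on the swing planes described above.

```python
import numpy as np

def joint_distance(p1, p2):
    # Euclidean distance between two 3D joint positions (features L1~L6)
    return np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float))

def joint_angle(a, b, c):
    # Angle at joint b between the vectors b->a and b->c (features theta1~theta10)
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def acc_correlation(a1, a2):
    # Cosine of the angle between two acceleration vectors (features ACR1~ACR6)
    a1, a2 = np.asarray(a1, float), np.asarray(a2, float)
    return a1 @ a2 / (np.linalg.norm(a1) * np.linalg.norm(a2))

def frame_feature_vector(joints, accels, angle_triples):
    # joints: joint name -> (x, y, z); accels: limb name -> 3-vector acceleration;
    # angle_triples: the ten joint triples defining theta1~theta10 (assumed input)
    dists = [joint_distance(joints[a], joints[b]) for a, b in [
        ("l_shoulder", "l_wrist"), ("r_shoulder", "r_wrist"),
        ("l_wrist", "l_hip"), ("r_wrist", "r_hip"),
        ("l_hip", "l_ankle"), ("r_hip", "r_ankle")]]
    angles = [joint_angle(joints[a], joints[b], joints[c])
              for a, b, c in angle_triples]
    acrs = [acc_correlation(accels[a], accels[b]) for a, b in [
        ("l_arm", "r_arm"), ("l_arm", "l_thigh"), ("l_arm", "r_thigh"),
        ("r_arm", "l_thigh"), ("r_arm", "r_thigh"), ("l_thigh", "r_thigh")]]
    return np.array(dists + angles + acrs)   # 22 values per frame

# One second at 50 Hz: stack 50 frame vectors into the 50 x 22 feature matrix
# feature_matrix = np.stack([frame_feature_vector(j, a, triples)
#                            for j, a in one_second_window])
```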
The operator repeats the three actions while the feature matrix of each action is recorded; these feature matrices form the training set for the deep learning network.
The training set covers the three actions (walking, squatting work, and standing observation), with 32 recordings collected per action, so the total input to the convolutional neural network has shape (3 × 32, 50, 22).
During training, the training set is divided into 6 batches that are fed to the convolutional neural network in each pass; the loss function is computed, gradients are obtained by backpropagation, and the network parameters are updated by gradient descent. Each batch therefore has size 16, and the number of training iterations (epochs) is set to 80.
The convolutional neural network comprises two convolutional layers: the first uses 16 filters (3 × 3) for feature extraction, and the second also uses 16 filters (3 × 3); each convolutional layer is followed by a ReLU activation layer and max-pooling.
A three-layer fully-connected network behind the convolutional layers classifies the extracted features, mapping the high-dimensional features to action classes via L(x) = Wx + b. Dropout is applied after each layer, randomly disconnecting neurons with probability 0.2 to prevent overfitting.
Finally, the trained convolutional neural network is saved for action recognition in the subsequent online risk early warning stage. The expected network outputs for the three action types are (0, 0, 1), (0, 1, 0), and (1, 0, 0), respectively.
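A minimal Keras sketch of this network, matching the architecture stated above (two 3 × 3 convolutions with 16 filters, ReLU and max-pooling, a three-layer fully-connected classifier with 0.2 dropout, and a three-way output). The hidden dense-layer widths and the categorical cross-entropy loss are not specified in the text and are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_action_recognition_model():
    # The 50 x 22 feature matrix is treated as a single-channel input
    model = models.Sequential([
        layers.Input(shape=(50, 22, 1)),
        layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),   # hidden widths are assumptions
        layers.Dropout(0.2),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(3, activation="softmax"),  # one-hot action classes
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(),  # plain gradient descent
                  loss="categorical_crossentropy",      # assumed loss function
                  metrics=["accuracy"])
    return model

# Training set of shape (96, 50, 22): 3 actions x 32 recordings each
# model = build_action_recognition_model()
# model.fit(X_train[..., None], y_onehot, batch_size=16, epochs=80)
# model.save("action_recognition_model.h5")
```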
With reference to fig. 1, the online risk early warning stage may be divided into action recognition, distance calculation, and risk level determination.
In the experimental environment of this embodiment, the collaborating operator wears the IMU suit, and IMU sensor devices are also mounted at specific positions on the robot arm. The arm works along a preset motion trajectory while sharing the workspace with the operator. In this human-machine co-fusion scene, the IMU devices perform motion capture and posture tracking of the human body and the arm, yielding the 3D spatial position information and motion information of both.
Action recognition: the complete human action feature information is acquired and recorded continuously for 1 s, and all data of that second are then passed to the convolutional neural network trained in the offline stage for action recognition, yielding the recognition result for the current human action.
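A short sketch of this online step, assuming the Keras model saved above: per-frame feature vectors are buffered into a 1 s (50-frame) window and classified once the window is full. The label order matching the one-hot outputs is an assumption.

```python
import numpy as np
from collections import deque

ACTIONS = ["working", "observing", "walking"]  # assumed order of one-hot outputs
window = deque(maxlen=50)                      # sliding 1 s window at 50 Hz

def on_new_frame(feature_vector, model):
    # Buffer the latest 22-dim feature vector; classify once 1 s is available
    window.append(feature_vector)
    if len(window) < 50:
        return None
    batch = np.stack(window)[None, ..., None]  # shape (1, 50, 22, 1)
    probs = model.predict(batch, verbose=0)[0]
    return ACTIONS[int(np.argmax(probs))]
```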
Meanwhile, the distance calculation process computes the human-machine minimum distance in real time, level by level, using the corresponding method at each level. The specific calculation process is as follows:
First, the human-machine distance is calculated at level A, which uses spherical bounding boxes. The 3D coordinate positions of all joint points of the human skeleton (or the robot arm) are summed and divided by the number of joint points to obtain the center of the sphere bounding the human body (or the arm). The Euclidean distance from each joint point to that center is then calculated, and the longest is taken as the bounding-sphere radius. With the centers and radii of the human and arm bounding spheres denoted (x1, y1, z1), R and (x2, y2, z2), r, the human-machine distance at this level is the distance between the two sphere centers minus both bounding-sphere radii:
$$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2} - R - r$$
The sphere-center coordinates and radius are computed by the mean-value method. Suppose the human body (or the robot arm) has m joint points in total, with positions:
$$\{(x_1, y_1, z_1), (x_2, y_2, z_2), \dots, (x_m, y_m, z_m)\}$$
the 3D spatial position coordinates (X, Y, Z) of the center of the sphere are calculated by the following formula:
$$X = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad Y = \frac{1}{m}\sum_{i=1}^{m} y_i, \qquad Z = \frac{1}{m}\sum_{i=1}^{m} z_i$$
The distance from the sphere center (X, Y, Z) to each joint point position (xi, yi, zi), 1 ≤ i ≤ m, is then calculated, and the maximum value is taken as the radius of the spherical bounding box.
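A minimal NumPy sketch of the level-A computation as described, assuming the joint positions are supplied as (m, 3) arrays:

```python
import numpy as np

def bounding_sphere(joints):
    # Center is the mean of all joint positions; radius is the largest
    # center-to-joint distance (the mean-value method described above)
    joints = np.asarray(joints, dtype=float)
    center = joints.mean(axis=0)
    radius = np.linalg.norm(joints - center, axis=1).max()
    return center, radius

def level_a_distance(human_joints, arm_joints):
    # Level-A distance: center-to-center distance minus both sphere radii
    c1, r1 = bounding_sphere(human_joints)
    c2, r2 = bounding_sphere(arm_joints)
    return np.linalg.norm(c1 - c2) - r1 - r2
```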
If the distance d calculated at level A is greater than or equal to the preset first safety distance threshold s1, it is taken as the human-machine minimum distance and the next-level distance calculation is not performed;
if the distance d calculated at level A is smaller than the preset first safety distance threshold s1, the level-B distance calculation is also performed, and the human-machine distance calculated at level B is taken as the human-machine minimum distance.
Level B uses cylindrical bounding boxes: each trunk segment of the human body and each link of the robot arm is treated as a cylinder in space, whose height is the distance between two adjacent joint points of the human skeleton or the robot arm and whose radius is preset according to the actual situation. The shortest distance between cylinders is then solved mathematically as the human-machine distance; the minimum distance between two cylinders is simplified to the minimum distance between their central axis segments minus the sum of their radii.
The minimum distances between the human-body cylinders and each cylinder bounding an arm link are calculated and denoted d1~dm; the smallest of them is the human-machine distance at this level:
$$d = \min(d_1, d_2, \dots, d_m)$$
The two-level calculation reduces the computational cost of the human-machine distance. If the level-B method were used directly, the distance between every trunk segment of the human body and every link of the robot arm would have to be calculated, and computing distances with cylindrical bounding boxes is far more complicated than with spherical ones. By first calculating the distance with the spherical bounding boxes, the level-B method is skipped whenever the human and machine are far apart and therefore safe, greatly reducing the amount of computation.
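The sketch below illustrates the level-B computation and the two-stage selection, reusing level_a_distance from the sketch above. The axis-to-axis distance uses the standard closest-points-between-two-segments algorithm; the segment data layout (endpoint pair plus radius) is an assumption.

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    # Minimum distance between segments p1-q1 and p2-q2 (the cylinder axes),
    # via the standard closest-points-between-two-segments algorithm
    p1, q1, p2, q2 = (np.asarray(v, dtype=float) for v in (p1, q1, p2, q2))
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    eps = 1e-12
    if a <= eps and e <= eps:                 # both segments degenerate to points
        return np.linalg.norm(r)
    if a <= eps:
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1 @ r
        if e <= eps:
            t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
        else:
            b = d1 @ d2
            denom = a * e - b * b
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def level_b_distance(human_segments, arm_segments):
    # Each segment is (endpoint_a, endpoint_b, radius); the B-level distance is
    # the smallest axis-to-axis distance minus the two cylinder radii
    return min(segment_distance(ha, hb, aa, ab) - hr - ar
               for ha, hb, hr in human_segments
               for aa, ab, ar in arm_segments)

def min_human_machine_distance(human_joints, arm_joints,
                               human_segments, arm_segments, s1):
    # Two-stage scheme: cheap sphere test first, cylinder refinement only if close
    d = level_a_distance(human_joints, arm_joints)
    return d if d >= s1 else level_b_distance(human_segments, arm_segments)
```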
The risk level of the human in the current human-machine co-fusion scene is then judged from the recognition result of the human action and the calculated human-machine minimum distance. The risk grading is shown in Table 1.
table 1: risk classification table
d≤s1 s1<d≤s2 d>s2
Walk Danger Caution Safe
Observation/inspection Caution Safe Safe
Work of squatting Danger Danger Safe
The specific judgment principle is as follows:
when the human body acts as walking, if the minimum distance between the human and the machine is smaller than a first safety distance threshold (d is less than or equal to s1), the risk level is Danger (Danger); when the minimum distance between the man-machine is between the first safety distance threshold and the second safety distance threshold (d1 is more than d and less than or equal to s2), the risk grade is Caution (warning); the minimum distance between the human and the machine is greater than a second safe distance threshold (s2 < d) and the risk level is Saf e (safe).
When the human body action is observation/inspection, if the minimum distance between the human machine and the human machine is smaller than a first safety distance threshold (d is less than or equal to s1), the risk level is Caution; the risk level is Safe when the minimum distance between the human and the machine is between the first and the second Safe distance threshold (s1 < d ≦ s2) and the minimum distance between the human and the machine is greater than the second Safe distance threshold (s2 < d).
When the human body acts as squatting work, if the minimum distance between the human machine and the human machine is smaller than a first safe distance threshold (d is less than or equal to d1) or is between the first safe distance threshold and a second safe distance threshold (s1 is larger than d and less than or equal to s2), the risk grade is Danger; the risk rating of the minimum distance between the human and the machine being greater than the second Safe distance threshold (s2 < d) is Safe.
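Expressed as code, the judgment principle reduces to a lookup in Table 1. The action labels and the example thresholds are illustrative assumptions:

```python
def risk_level(action, d, s1, s2):
    # Rows follow Table 1; columns are the bands d <= s1, s1 < d <= s2, d > s2
    table = {
        "walking":        ("Danger", "Caution", "Safe"),
        "observing":      ("Caution", "Safe",   "Safe"),
        "squatting_work": ("Danger", "Danger",  "Safe"),
    }
    band = 0 if d <= s1 else (1 if d <= s2 else 2)
    return table[action][band]

# e.g. an operator walking 0.8 m from the arm, with assumed s1 = 0.5 m, s2 = 1.0 m
# risk_level("walking", 0.8, 0.5, 1.0)  ->  "Caution"
```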
As shown in fig. 3(a), if the current risk level is judged to be Danger, a red alarm is displayed in the "risk level" field of the software interface, and the robot arm stops moving (speed 0).
As shown in fig. 3(b), if the current risk level is judged to be Caution, an orange early warning prompt is displayed in the "risk level" field of the software interface, and the movement speed of the robot arm is reduced.
As shown in fig. 3(c), if the current risk level is Safe, a green prompt is displayed in the "risk level" field of the software interface, and the robot arm operates at the normal speed V.
In the human-machine collaboration process, the overlap between the operator's and the robot's workspaces is fully considered. Wearable motion capture devices perform real-time motion capture and posture tracking of the operator and the robot so that the deep learning action recognition model can be constructed and trained. The trained model recognizes the human action while the human-machine minimum distance is calculated from position information; together they serve as the criterion for judging the risk level of the current human-machine co-fusion scene, after which the corresponding early warning and robot response are made for each risk level, effectively avoiding danger.
Although the above process takes place in an experimental scene, it applies to actual human-machine collaboration between an industrial robot and an operator. The robot's position information and joint information can be obtained from instruments it already carries, such as position and acceleration sensors. The operator's joint positions, velocities, and accelerations can be obtained by motion capture through work clothes with embedded sensors, or by existing image recognition techniques from on-site images (monitoring footage from a camera). Combined with the action recognition model constructed in this embodiment, the operator's walking, observing, or working state is obtained, and risk early warning is realized.
Combining human action recognition with the human-machine minimum distance improves the collaborative robot's perception of human intention, thereby raising human safety in the shared workspace and avoiding the harm that a robot executing a preset program without perception capability can cause during human-machine collaboration.
Example two:
the embodiment provides a system for implementing the method, which includes:
a motion recognition module configured to: with the operator and the robot located in the same workspace, recognize, based on the constructed action recognition model and the acquired operator motion information, the operator's current action state as walking, observing, or working;
a distance calculation module configured to: obtain the minimum distance between the operator and the robot from the acquired position information of the operator and the robot;
a risk level determination module configured to: generate risk prompt instructions for the different action states according to the operator's current action state and the obtained minimum distance.
The system recognizes the action posture of the human during human-machine collaboration according to the constructed model; the robot grades the risk according to the human's action and the human-machine distance and is controlled to execute the safety instruction corresponding to that risk level. Combining human action recognition with the human-machine minimum distance improves the collaborative robot's perception of human intention, thereby raising human safety in the shared workspace and avoiding the harm that a robot executing a preset program without perception capability can cause during collaboration.
Example three:
The present embodiment provides a computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps of the human-machine co-fusion risk early warning method based on action recognition and human-machine distance set forth in example one.
In the method executed by the computer program of this embodiment, the action posture of the human during human-machine collaboration is recognized according to the constructed model; the robot grades the risk according to the human's action and the human-machine distance and is controlled to execute the safety instruction corresponding to that risk level. Combining human action recognition with the human-machine minimum distance improves the collaborative robot's perception of human intention, thereby raising human safety in the shared workspace and avoiding the harm that a robot executing a preset program without perception capability can cause during collaboration.
Example four:
The present embodiment provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the human-machine co-fusion risk early warning method based on action recognition and human-machine distance set forth in example one.
In the method executed by the processor, the action posture of the human during human-machine collaboration is recognized according to the constructed model; the robot grades the risk according to the human's action and the human-machine distance and is controlled to execute the safety instruction corresponding to that risk level. Combining human action recognition with the human-machine minimum distance improves the collaborative robot's perception of human intention, thereby raising human safety in the shared workspace and avoiding the harm that a robot executing a preset program without perception capability can cause during collaboration.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium and executed by a computer to implement the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A human-machine co-fusion risk early warning method based on action recognition and human-machine distance, characterized by comprising the following steps:
recognizing, based on the constructed action recognition model and the acquired operator motion information, the operator's current action state as walking, observing, or working;
obtaining the minimum distance between the operator and the robot from the acquired position information of the operator and the robot;
and generating risk prompt instructions for the different action states according to the operator's current action state and the obtained minimum distance.
2. The human-machine co-fusion risk early warning method based on action recognition and human-machine distance according to claim 1, characterized in that the construction process of the action recognition model comprises:
acquiring, through motion capture equipment, skeleton information of an operator working cooperatively with the robot in the same workspace;
extracting motion features from the skeleton information to construct feature vectors, which are input to a convolutional neural network in feature-matrix form for feature extraction, learning, and classification;
and saving the network model with the best training result as the action recognition model.
3. The human-machine co-fusion risk early warning method based on action recognition and human-machine distance according to claim 2, characterized in that the skeleton information includes three-dimensional position information of fourteen joint points (the left and right shoulders, elbows, wrists, hip joints, knees, and ankles, plus an upper-body reference point at the neck and a lower-body reference point at the waist), together with acceleration information from sensors on the left and right upper arms and thighs.
4. The human-machine co-fusion risk early warning method based on action recognition and human-machine distance according to claim 2, characterized in that the motion features include joint point distance features, joint angle features, and acceleration features.
5. The human-machine co-fusion risk early warning method based on action recognition and human-machine distance according to claim 1, characterized in that obtaining the minimum distance between the operator and the robot from the acquired position information comprises:
the human body and the robot arm are each enclosed in a spherical bounding box whose center is the mean of the coordinates of the respective joint points and whose radius is the maximum distance from that center to any joint point; the A-level minimum distance between the operator and the robot is the distance between the two sphere centers minus the sum of the two radii;
each trunk segment of the human body and each link of the robot arm is treated as a cylinder in space, whose height is the distance between two adjacent joint points of the human skeleton or the robot arm and whose radius is a set value; the B-level minimum distance between the operator and the robot is the minimum distance between the central axis segments of two cylinders minus the sum of their radii.
6. The human-machine co-fusion risk early warning method based on action recognition and human-machine distance according to claim 5, characterized in that:
if the A-level minimum distance is not smaller than a preset first safety distance threshold, it is taken as the actual minimum distance and the next-level distance calculation is not performed;
and if the A-level minimum distance is smaller than the preset first safety distance threshold, the B-level minimum distance is taken as the actual human-machine minimum distance.
7. The human-machine co-fusion risk early warning method based on action recognition and human-machine distance according to claim 1, characterized in that generating risk prompt instructions for the different action states according to the operator's current action state and the obtained minimum distance comprises:
when the human action is walking: if the human-machine minimum distance is not greater than the first safety distance threshold, the risk level is danger; if it is greater than the first safety distance threshold but not greater than the second safety distance threshold, the risk level is caution; and if it is greater than the second safety distance threshold, the risk level is safe;
when the human action is observing: if the human-machine minimum distance is not greater than the first safety distance threshold, the risk level is caution; if it is greater than the first safety distance threshold, whether or not it exceeds the second safety distance threshold, the risk level is safe;
when the human action is working: if the human-machine minimum distance is not greater than the second safety distance threshold, whether or not it exceeds the first safety distance threshold, the risk level is danger; and if it is greater than the second safety distance threshold, the risk level is safe.
8. A human-machine co-fusion risk early warning system based on action recognition and human-machine distance, characterized by comprising:
a motion recognition module configured to: recognize, based on the constructed action recognition model and the acquired operator motion information, the operator's current action state as walking, observing, or working;
a distance calculation module configured to: obtain the minimum distance between the operator and the robot from the acquired position information of the operator and the robot;
a risk level determination module configured to: generate risk prompt instructions for the different action states according to the operator's current action state and the obtained minimum distance.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the human-machine co-fusion risk early warning method based on action recognition and human-machine distance according to any one of claims 1-7.
10. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the human-machine co-fusion risk early warning method based on action recognition and human-machine distance according to any one of claims 1-7.
CN202210453368.8A 2022-04-27 2022-04-27 Man-machine co-fusion risk early warning method and system based on action recognition and man-machine distance Pending CN114757293A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210453368.8A CN114757293A (en) 2022-04-27 2022-04-27 Man-machine co-fusion risk early warning method and system based on action recognition and man-machine distance

Publications (1)

Publication Number Publication Date
CN114757293A (en) 2022-07-15

Family

ID=82333518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210453368.8A Pending CN114757293A (en) 2022-04-27 2022-04-27 Man-machine co-fusion risk early warning method and system based on action recognition and man-machine distance

Country Status (1)

Country Link
CN (1) CN114757293A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116703161A (en) * 2023-06-13 2023-09-05 湖南工商大学 Prediction method and device for man-machine co-fusion risk, terminal equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108527370A (en) * 2018-04-16 2018-09-14 北京卫星环境工程研究所 The man-machine co-melting safety control system of view-based access control model
CN110561432A (en) * 2019-08-30 2019-12-13 广东省智能制造研究所 safety cooperation method and device based on man-machine co-fusion
CN110978064A (en) * 2019-12-11 2020-04-10 山东大学 Human body safety assessment method and system in human-computer cooperation
CN113219926A (en) * 2021-05-13 2021-08-06 中国计量大学 Human-machine co-fusion manufacturing unit safety risk assessment method based on digital twin system


Similar Documents

Publication Publication Date Title
CN108527370B (en) Human-computer co-fusion safety protection control system based on vision
Dröder et al. A machine learning-enhanced digital twin approach for human-robot-collaboration
CN110202583B (en) Humanoid manipulator control system based on deep learning and control method thereof
CN109483573A (en) Machine learning device, robot system and machine learning method
Li et al. An AR-assisted Deep Reinforcement Learning-based approach towards mutual-cognitive safe human-robot interaction
CN108838991A (en) It is a kind of from main classes people tow-armed robot and its to the tracking operating system of moving target
CN110147101A (en) A kind of end-to-end distributed robots formation air navigation aid based on deeply study
CN110421556A (en) A kind of method for planning track and even running method of redundancy both arms service robot Realtime collision free
CN110480657A (en) A kind of labyrinth environment space robot world remote control system
CN108846891B (en) Man-machine safety cooperation method based on three-dimensional skeleton detection
CN114029951B (en) Robot autonomous recognition intelligent grabbing method based on depth camera
CN114757293A (en) Man-machine co-fusion risk early warning method and system based on action recognition and man-machine distance
CN113219926A (en) Human-machine co-fusion manufacturing unit safety risk assessment method based on digital twin system
Cheng et al. Human-robot interaction method combining human pose estimation and motion intention recognition
Liu et al. A mixed perception-based human-robot collaborative maintenance approach driven by augmented reality and online deep reinforcement learning
Aracil et al. ROBTET: A new teleoperated system for live-line maintenance
Wakabayashi et al. Associative motion generation for humanoid robot reflecting human body movement
Chen et al. Dynamic gesture design and recognition for human-robot collaboration with convolutional neural networks
CN113221640B (en) Active early warning and safety monitoring system for live working
Hoecherl et al. Smartworkbench: Toward adaptive and transparent user assistance in industrial human-robot applications
Infantino et al. A cognitive architecture for robotic hand posture learning
Sigalas et al. Robust model-based 3d torso pose estimation in rgb-d sequences
Weiming et al. Real-time virtual UR5 robot imitation of human motion based on 3D camera
Gorkavyy et al. Modeling of Operator Poses in an Automated Control System for a Collaborative Robotic Process
CN110919650A (en) Low-delay grabbing teleoperation system based on SVM (support vector machine)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination