CN116703161B - Prediction method and device for man-machine co-fusion risk, terminal equipment and medium


Info

Publication number
CN116703161B
Authority
CN
China
Prior art keywords: machine, behavior, personnel, model, reconstruction
Prior art date
Legal status
Active
Application number
CN202310698021.4A
Other languages
Chinese (zh)
Other versions
CN116703161A (en)
Inventor
李沁
胡春华
张军号
郭潞
肖培
胡雨轩
Current Assignee
Hunan University of Technology
Original Assignee
Hunan University of Technology
Priority date
Filing date
Publication date
Application filed by Hunan University of Technology
Priority to CN202310698021.4A
Publication of CN116703161A
Application granted
Publication of CN116703161B
Legal status: Active
Anticipated expiration


Classifications

    • G06Q 10/0635: Risk analysis of enterprise or organisation activities
    • G06F 18/213: Feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
    • G06F 18/253: Fusion techniques of extracted features
    • G06N 3/042: Knowledge-based neural networks; logical representations of neural networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/08: Learning methods
    • G06Q 10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Development Economics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Educational Administration (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application belongs to the technical field of man-machine co-fusion and provides a method, a device, terminal equipment and a medium for predicting man-machine co-fusion risk. Normal behavior data are collected; feature extraction is performed on the normal behavior data by using a personnel behavior feature extraction model and a machine behavior feature extraction model corresponding to an operator, and the personnel behavior features and the machine behavior features are fused to obtain man-machine co-fusion features. According to the man-machine co-fusion features, a personnel behavior reconstruction model and a machine behavior reconstruction model are used to obtain personnel behavior reconstruction data and machine behavior reconstruction data. A reconstruction loss is calculated, and each model is updated with the reconstruction loss to obtain local model parameters. The local model parameters corresponding to all operators are aggregated to obtain global model parameters, and the man-machine co-fusion risk of an operator to be tested in the target working environment is predicted according to the global model parameters, the behavior data of the operator to be tested and the behavior data of the machine to be tested.

Description

Prediction method and device for man-machine co-fusion risk, terminal equipment and medium
Technical Field
The application belongs to the technical field of man-machine co-fusion, and particularly relates to a method, a device, terminal equipment and a medium for predicting man-machine co-fusion risk.
Background
Man-machine co-fusion means that an operator and a work machine share the same large workspace, perceive and understand each other, and interact naturally in tight coordination to jointly complete a systematic task. Since there is no physical barrier between the operator and the work machine in the man-machine co-fusion mode of operation, ensuring the safety of the operator during operation is a significant challenge.
Traditional methods for predicting man-machine co-fusion risk use the distance between the operator and the work machine as the prediction index, regardless of the states of the operator or the work machine. In reality, however, many scenarios, such as search and rescue and similar operations, require the operator to cooperate closely with the work machine or even to be in zero-distance contact with it; in such cases the accuracy of minimum-distance-based risk prediction is low, or the approach is no longer applicable at all.
Disclosure of Invention
The application provides a method, a device, terminal equipment and a medium for predicting man-machine co-fusion risk, which can solve the problem of low accuracy in traditional methods for predicting man-machine co-fusion risk.
In a first aspect, the present application provides a method for predicting a man-machine co-fusion risk, including:
Step 1, collecting personnel normal behavior data of N operators and machine normal behavior data of M machines in a target working environment;
Step 2 to step 4 are executed for each of the N operators, respectively:
Step 2, performing feature extraction on the personnel normal behavior data and the machine normal behavior data by using the pre-constructed personnel behavior feature extraction model and machine behavior feature extraction model corresponding to the operator, to obtain the operator's personnel behavior features and M machine behavior features, and fusing the personnel behavior features with each machine behavior feature to obtain the man-machine co-fusion features of the operator and each machine; the machine behavior features are in one-to-one correspondence with the machines, and the man-machine co-fusion features characterize the closeness of the relation between the operator and the machines;
Step 3, according to the man-machine co-fusion characteristics, respectively utilizing a pre-constructed personnel behavior reconstruction model and a machine behavior reconstruction model corresponding to the operators to obtain personnel behavior reconstruction data and machine behavior reconstruction data of the operators;
Step 4, calculating the reconstruction loss according to the personnel behavior reconstruction data and the machine behavior reconstruction data, and updating the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model according to the reconstruction loss, to obtain the local model parameters corresponding to the operator;
And 5, aggregating local model parameters corresponding to all operators to obtain global model parameters, and predicting the man-machine co-fusion risk of the operators to be tested in the target operation environment according to the global model parameters, the collected behavior data of the operators to be tested and the collected behavior data of the machines to be tested.
Optionally, the personnel behavior feature extraction model and the machine behavior feature extraction model each comprise the same number of feature extraction modules; each feature extraction module comprises a spatial graph convolution layer, a self-attention layer and a temporal convolution layer, the output end of the spatial graph convolution layer being connected with the input end of the self-attention layer, and the output end of the self-attention layer being connected with the input end of the temporal convolution layer.
Optionally, in step 2, fusing the personnel behavior features with each machine behavior feature to obtain the man-machine co-fusion features of the operator and each machine comprises:

for each of the M machines, obtaining the fused feature $f_{nm}^{(j)}$ through the calculation formula

$$f_{nm}^{(j)} = g\!\left(\operatorname{con}\!\left(w_s^{(j)} h_{s,n}^{(j)},\ w_a^{(j)} h_{a,n}^{(j)},\ w_t^{(j)} h_{t,n}^{(j)},\ w_s^{(j)} r_{s,m}^{(j)},\ w_a^{(j)} r_{a,m}^{(j)},\ w_t^{(j)} r_{t,m}^{(j)}\right)\right)$$

wherein $f_{nm}^{(j)}$ denotes the feature obtained by fusing the personnel behavior feature components in the $j$-th feature extraction module of the personnel behavior feature extraction model corresponding to the $n$-th operator with the machine behavior feature components of the $m$-th machine in the $j$-th feature extraction module of the machine behavior feature extraction model; $j=1,2,\dots,J$, where $J$ denotes the total number of feature extraction modules; $n=1,2,\dots,N$, where $N$ denotes the total number of operators; $m=1,2,\dots,M$, where $M$ denotes the total number of machines; $g(\cdot)$ denotes the attention pooling function; $w_s^{(j)}$, $w_a^{(j)}$ and $w_t^{(j)}$ denote the feature update parameters of the spatial graph convolution layer, the self-attention layer and the temporal convolution layer of the $j$-th feature extraction module, respectively; $h_{s,n}^{(j)}$, $h_{a,n}^{(j)}$ and $h_{t,n}^{(j)}$ denote the personnel behavior feature components corresponding to the spatial graph convolution layer, the self-attention layer and the temporal convolution layer in the $j$-th feature extraction module of the personnel behavior feature extraction model of the $n$-th operator, respectively; $r_{s,m}^{(j)}$, $r_{a,m}^{(j)}$ and $r_{t,m}^{(j)}$ denote the machine behavior feature components of the $m$-th machine corresponding to the spatial graph convolution layer, the self-attention layer and the temporal convolution layer in the $j$-th feature extraction module of the machine behavior feature extraction model, respectively; and $\operatorname{con}(\cdot)$ denotes the feature coupling operation;

and obtaining the man-machine co-fusion feature $F_{nm}$ by combining the fused features $f_{nm}^{(j)}$ of all $J$ feature extraction modules, wherein $F_{nm}$ denotes the man-machine co-fusion feature of the $n$-th operator and the $m$-th machine.
Optionally, the personnel behavior reconstruction model and the machine behavior reconstruction model each comprise the same number of reconstruction modules; each reconstruction module comprises a spatial graph convolution layer, a self-attention layer and a temporal deconvolution layer, the output end of the spatial graph convolution layer being connected with the input end of the self-attention layer, and the output end of the self-attention layer being connected with the input end of the temporal deconvolution layer.
Optionally, in step 3, obtaining the personnel behavior reconstruction data of the operator according to the man-machine co-fusion features by using the pre-constructed personnel behavior reconstruction model corresponding to the operator comprises:

obtaining the aggregated feature $\tilde F_n$ through the calculation formula

$$\tilde F_n = \phi\!\left(\sum_{m=1}^{M} c_{nm}\, F_{nm} + \tau\right)$$

wherein $\tilde F_n$ denotes the aggregated feature corresponding to the $n$-th operator, $c_{nm}$ denotes the distance between the $n$-th operator and the $m$-th machine, $\tau$ denotes the aggregation bias, and $\phi(\cdot)$ denotes the aggregation function;

and obtaining the personnel behavior reconstruction data from the aggregated feature by using the personnel behavior reconstruction model.
Optionally, step 4 includes:
Step 41, obtaining the reconstruction loss $L_n$ through the calculation formula

$$L_n = \alpha\,\bigl\|H_n - \hat H_n\bigr\|_2 + \beta \sum_{m=1}^{M} \bigl\|R_m - \hat R_m\bigr\|_2$$

wherein $L_n$ denotes the reconstruction loss corresponding to the $n$-th operator; $H_n$ denotes the personnel normal behavior data of the $n$-th operator; $R_m$ denotes the machine normal behavior data corresponding to the $m$-th machine; $\hat H_n$ denotes the personnel behavior reconstruction data of the $n$-th operator; $\hat R_m$ denotes the machine behavior reconstruction data of the $m$-th machine; $\alpha$ denotes the loss weight corresponding to the operator; $\beta$ denotes the loss weight corresponding to the machine; and $\|\cdot\|_2$ denotes the 2-norm;
step 42, respectively carrying out parameter updating on the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model corresponding to the nth operating personnel by using a gradient descent method to obtain a new personnel behavior feature extraction model, a new machine behavior feature extraction model, a new personnel behavior reconstruction model and a new machine behavior reconstruction model corresponding to the nth operating personnel;
Step 43, obtaining new personnel behavior reconstruction data and new machine behavior reconstruction data corresponding to the nth operating personnel according to the new personnel behavior feature extraction model, the new machine behavior feature extraction model, the new personnel behavior reconstruction model and the new machine behavior reconstruction model corresponding to the nth operating personnel;
Step 44, calculating a new reconstruction loss according to the new personnel behavior reconstruction data and the new machine behavior reconstruction data; if the new reconstruction loss is smaller than $L_n$, taking the parameters of the new personnel behavior feature extraction model, the new machine behavior feature extraction model, the new personnel behavior reconstruction model and the new machine behavior reconstruction model as the local model parameters corresponding to the $n$-th operator; otherwise, taking the new personnel behavior feature extraction model, the new machine behavior feature extraction model, the new personnel behavior reconstruction model and the new machine behavior reconstruction model as the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model in step 42, respectively, and executing step 42 again.
Optionally, in step 5, local model parameters corresponding to all operators are aggregated to obtain global model parameters, including:
obtaining the global model parameters $\theta_i^{\mathrm{global}}$, $i=1,\dots,4$, through the calculation formula

$$\theta_i^{\mathrm{global}} = \sum_{n=1}^{N} \lambda_n\, \theta_{i,n}$$

wherein, when $i=1$, $\theta_{1,n}$ denotes the model parameters of the personnel behavior feature extraction model corresponding to the $n$-th operator, so that $\theta_1^{\mathrm{global}}$ aggregates the model parameters of the personnel behavior feature extraction models of all operators;

when $i=2$, $\theta_{2,n}$ denotes the model parameters of the machine behavior feature extraction model corresponding to the $n$-th operator, so that $\theta_2^{\mathrm{global}}$ aggregates the model parameters of the machine behavior feature extraction models of all operators;

when $i=3$, $\theta_{3,n}$ denotes the model parameters of the personnel behavior reconstruction model corresponding to the $n$-th operator, so that $\theta_3^{\mathrm{global}}$ aggregates the model parameters of the personnel behavior reconstruction models of all operators;

when $i=4$, $\theta_{4,n}$ denotes the model parameters of the machine behavior reconstruction model corresponding to the $n$-th operator, so that $\theta_4^{\mathrm{global}}$ aggregates the model parameters of the machine behavior reconstruction models of all operators; and $\lambda_n$ denotes the importance degree of the $n$-th operator.
Optionally, in step 5, predicting the man-machine co-fusion risk of the operator to be tested in the target working environment according to the global model parameters, the collected behavior data of the operator to be tested and the collected behavior data of the machine to be tested, including:
Step 51, obtaining the corresponding global personnel behavior feature extraction model, global machine behavior feature extraction model, global personnel behavior reconstruction model and global machine behavior reconstruction model according to the global model parameters, and obtaining the reconstruction loss corresponding to the operator to be tested according to the behavior data of the operator to be tested and the behavior data of the machine to be tested;
step 52, if the reconstruction loss is greater than a preset threshold, predicting that the operator to be tested is at man-machine co-fusion risk in the target working environment; otherwise, predicting that the operator to be tested is not at man-machine co-fusion risk in the target working environment.
In a second aspect, the present application provides a prediction apparatus for man-machine co-fusion risk, including:
The data acquisition module is used for acquiring the personnel normal behavior data of N operators and the machine normal behavior data of M machines in the target operation environment;
The behavior feature fusion module is used for performing feature extraction on the personnel normal behavior data and the machine normal behavior data by using the pre-constructed personnel behavior feature extraction model and machine behavior feature extraction model corresponding to the operator, to obtain the operator's personnel behavior features and M machine behavior features, and fusing the personnel behavior features with each machine behavior feature to obtain the man-machine co-fusion features of the operator and each machine; the machine behavior features are in one-to-one correspondence with the machines, and the man-machine co-fusion features characterize the closeness of the relation between the operator and the machines;
The behavior reconstruction module is used for obtaining, according to the man-machine co-fusion features, the personnel behavior reconstruction data and the machine behavior reconstruction data of the operator by using the pre-constructed personnel behavior reconstruction model and machine behavior reconstruction model corresponding to the operator;
The parameter acquisition module is used for calculating the reconstruction loss according to the personnel behavior reconstruction data and the machine behavior reconstruction data, and updating the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model according to the reconstruction loss, to obtain the local model parameters corresponding to the operator;
The prediction module is used for aggregating local model parameters corresponding to all operators to obtain global model parameters, and predicting the man-machine co-fusion risk of the operators to be tested in the target operation environment according to the global model parameters, the collected behavior data of the operators to be tested and the collected behavior data of the machines to be tested.
In a third aspect, the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method for predicting risk of human-machine co-fusion described above when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program, which when executed by a processor, implements the above-mentioned method for predicting risk of human-machine co-fusion.
The scheme of the application has the following beneficial effects:
According to the prediction method for man-machine co-fusion risk provided by the application, a personnel behavior feature extraction model for extracting personnel behavior features from personnel normal behavior data and a machine behavior feature extraction model for extracting machine behavior features from machine normal behavior data are constructed, and the personnel behavior features and the machine behavior features are fused to obtain man-machine co-fusion features, so that the behavioral closeness between an operator and a machine can be described more accurately and the accuracy of man-machine co-fusion risk prediction is improved; a personnel behavior reconstruction model and a machine behavior reconstruction model are constructed to obtain personnel behavior reconstruction data and machine behavior reconstruction data, the reconstruction loss is calculated, and the parameters of the models are updated with the reconstruction loss, which improves the fitting degree of the models and thus the prediction accuracy; and the reconstruction loss is used as the index for measuring the man-machine co-fusion risk, which reflects the influence of behavior on the risk more accurately, so that the prediction accuracy is higher.
Other advantageous effects of the present application will be described in detail in the detailed description section which follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for predicting risk of human-machine co-fusion according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a feature extraction module according to an embodiment of the application;
FIG. 3 is a schematic diagram of a reconstruction module according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a prediction apparatus for man-machine co-fusion risk according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may, depending on the context, be interpreted as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may, depending on the context, be interpreted to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Aiming at the problem that existing methods for predicting man-machine co-fusion risk have low accuracy, the application provides a method, a device, terminal equipment and a medium for predicting man-machine co-fusion risk. A personnel behavior feature extraction model for extracting personnel behavior features from personnel normal behavior data and a machine behavior feature extraction model for extracting machine behavior features from machine normal behavior data are constructed, and the personnel behavior features and the machine behavior features are fused to obtain man-machine co-fusion features, so that the behavioral closeness between an operator and a machine can be described more accurately, improving the accuracy of man-machine co-fusion risk prediction. A personnel behavior reconstruction model and a machine behavior reconstruction model are constructed to obtain personnel behavior reconstruction data and machine behavior reconstruction data, the reconstruction loss is calculated, and the parameters of the models are updated with the reconstruction loss, which improves the fitting degree of the models and thereby the prediction accuracy. Finally, the reconstruction loss serves as the index for measuring the man-machine co-fusion risk, so the influence of behavior on the risk is reflected more accurately and the prediction accuracy is higher.
As shown in fig. 1, the method for predicting man-machine co-fusion risk provided by the application comprises the following steps:
step 1, collecting personnel normal behavior data of N operators and machine normal behavior data of M machines in a target working environment.
In an embodiment of the application, the personnel normal behavior data and the machine normal behavior data can be obtained through the 3D postural streams (3D Postural Stream, a sequence of body states in 3D space) of the target operator and of the machine, captured by a motion capture device.
Specifically, the personnel normal behavior data consist of the time-varying information flow of the 3D position information of the human skeleton key points of the operator in a normal working state over a past period, and the machine normal behavior data consist of the time-varying information flow of the 3D position information of the center points of the machine joints in a normal working state over a past period. The human skeleton key points comprise the left and right shoulders, left and right elbows, left and right wrists, left and right hip joints, left and right knees, left and right ankles, the neck, the spine center and the waist center.
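For illustration only, such postural streams can be organized as time-indexed coordinate arrays; the array shapes, frame count and all names below are assumptions of this sketch, not part of the published embodiment.

```python
import numpy as np

# Hypothetical layout of one operator's 3D postural stream:
# T frames x K skeleton keypoints x 3 coordinates (x, y, z).
SKELETON_KEYPOINTS = [
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
    "neck", "spine_center", "waist_center",
]

T = 300                      # assumed capture length (frames)
K = len(SKELETON_KEYPOINTS)  # 15 keypoints, matching the list above
personnel_stream = np.zeros((T, K, 3))   # personnel normal behavior data

# A machine's stream is analogous: one 3D point per joint center point.
M_JOINTS = 6                 # assumed joint count of one machine
machine_stream = np.zeros((T, M_JOINTS, 3))
```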
The normal behavior data are collected as training data for the feature extraction models, so that the models can accurately extract the behavior features of normal behavior data. For abnormal behavior data, the output of a trained model deviates more strongly from the output corresponding to normal behavior data, and the reconstruction effect in the subsequent steps is correspondingly poorer; this provides the theoretical support for using the reconstruction loss as an index for measuring the co-fusion risk.
In an embodiment of the present application, in order to establish the correspondence between operators and machines in the target working environment, the machines having a co-fusion relationship with an operator are determined by constructing a co-fusion relationship graph. The co-fusion relationship graph takes the operator as the center node and all machines as adjacent nodes; all machines having a co-fusion relationship with the operator are determined from the normal behavior data of the operator and of all machines, and the adjacent nodes corresponding to those machines are connected with the center node corresponding to the operator. The attribute of the center node is the behavior information of the operator, the attribute of an adjacent node is the behavior information of a machine, and the edge attribute represents the degree of co-fusion between the operator and a machine having a co-fusion relationship, the degree of co-fusion being represented by the distance between the nodes.
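A minimal sketch of such a co-fusion relationship graph, assuming networkx and a simple mean-distance criterion for deciding whether a co-fusion relationship exists; the actual criterion is not published, and all names here are illustrative.

```python
import networkx as nx
import numpy as np

def build_cofusion_graph(operator_stream, machine_streams, dist_threshold=2.0):
    """Operator as center node, machines as adjacent nodes; an edge is
    added for every machine in a co-fusion relationship, with the mean
    operator-machine distance stored as the edge attribute."""
    g = nx.Graph()
    g.add_node("operator", behavior=operator_stream)
    for m, stream in enumerate(machine_streams):
        g.add_node(f"machine_{m}", behavior=stream)
        # Assumed criterion: mean distance between the body center and
        # the machine center over the whole capture window.
        dist = float(np.linalg.norm(
            operator_stream.mean(axis=(0, 1)) - stream.mean(axis=(0, 1))))
        if dist < dist_threshold:   # co-fusion relationship assumed to hold
            g.add_edge("operator", f"machine_{m}", distance=dist)
    return g
```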
Step 2 to step 4 are executed for each of the N operators, respectively:
And 2, respectively carrying out feature extraction on the personnel normal behavior data and the machine normal behavior data of the operating personnel by utilizing a pre-constructed personnel behavior feature extraction model and a machine behavior feature extraction model corresponding to the operating personnel to obtain personnel behavior features and M machine behavior features of the operating personnel, and respectively fusing the personnel behavior features and each machine behavior feature to obtain the man-machine co-fusion features of the operating personnel and each machine.
The man-machine co-fusion features characterize the closeness of the relation between the operator and a machine: the larger the man-machine co-fusion feature (generally measured by the norm of the feature vector), the closer the relation between the operator and the machine.
As shown in fig. 2, the personnel behavior feature extraction model and the machine behavior feature extraction model each include the same number of feature extraction modules. Each feature extraction module includes a spatial graph convolution layer (21 in fig. 2), a self-attention layer (22 in fig. 2) and a temporal convolution layer (23 in fig. 2); the output of the spatial graph convolution layer is connected to the input of the self-attention layer, and the output of the self-attention layer is connected to the input of the temporal convolution layer. The feature extraction modules are connected in series.
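For concreteness, a minimal PyTorch sketch of one feature extraction module with the layer order of fig. 2 (spatial graph convolution, then self-attention, then temporal convolution). The layer sizes, the adjacency handling and the pooling over keypoints are assumptions; the patent does not publish the exact architecture.

```python
import torch
import torch.nn as nn

class FeatureExtractionModule(nn.Module):
    """Spatial graph conv -> self-attention -> temporal conv (cf. fig. 2)."""
    def __init__(self, in_dim, hid_dim, adj, heads=4):
        super().__init__()
        # adj: (K, K) normalized keypoint adjacency; hid_dim % heads == 0.
        self.register_buffer("adj", adj)
        self.gcn = nn.Linear(in_dim, hid_dim)   # graph convolution weight
        self.attn = nn.MultiheadAttention(hid_dim, heads, batch_first=True)
        self.tcn = nn.Conv1d(hid_dim, hid_dim, kernel_size=3, padding=1)

    def forward(self, x):                       # x: (T, K, in_dim)
        h_s = torch.relu(self.adj @ self.gcn(x))     # spatial feature component
        h_a, _ = self.attn(h_s, h_s, h_s)            # attention feature component
        # Temporal conv over the time axis, after pooling over keypoints.
        h = h_a.mean(dim=1).T.unsqueeze(0)           # (1, hid_dim, T)
        h_t = torch.relu(self.tcn(h)).squeeze(0).T   # (T, hid_dim)
        return h_s, h_a, h_t                         # the three components
```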
It should be noted that, in the embodiment of the present application, fusing behavior features means fusing the behavior feature components at corresponding positions of the personnel behavior feature extraction model and the machine behavior feature extraction model, e.g., fusing the personnel behavior feature components of the spatial graph convolution layer in the j-th feature extraction module of the personnel behavior feature extraction model with the machine behavior feature components of the spatial graph convolution layer in the j-th feature extraction module of the machine behavior feature extraction model.
And 3, respectively utilizing a pre-constructed personnel behavior reconstruction model and a machine behavior reconstruction model corresponding to the operators according to the man-machine co-fusion characteristics to obtain personnel behavior reconstruction data and machine behavior reconstruction data of the operators.
As shown in fig. 3, the personnel behavior reconstruction model and the machine behavior reconstruction model each include the same number of reconstruction modules. Each reconstruction module includes a spatial graph convolution layer (31 in fig. 3), a self-attention layer (32 in fig. 3) and a temporal deconvolution layer (33 in fig. 3); the output of the spatial graph convolution layer is connected to the input of the self-attention layer, and the output of the self-attention layer is connected to the input of the temporal deconvolution layer. In the embodiment of the application, the reconstruction modules are connected in series.
Illustratively, the process of obtaining machine behavior reconstruction data using the machine behavior reconstruction model is as follows:
Taking the machine behavior reconstruction model corresponding to the n-th operator as an example:

In the spatial graph convolution layer, the spatial structure information of the input man-machine co-fusion features (those of the n-th operator and the m-th machine) is extracted, the relative positions and motion relations between nodes are captured, and the spatial features are output, providing a structural basis for the subsequent layers.

In the self-attention layer, the key spatio-temporal features of the input man-machine co-fusion features are selected on the basis of the spatial features through an attention mechanism, and the attention features are output, improving the network's sensitivity to core information.

In the temporal deconvolution layer, the behavior reconstruction and prediction of the m-th machine are realized on the basis of the spatial features and the attention features, mapping the features back to behaviors and yielding the machine behavior reconstruction data.
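Analogously, a minimal sketch of one reconstruction module with the layer order of fig. 3, where the temporal deconvolution is realized with ConvTranspose1d; all sizes and names are again assumptions.

```python
import torch
import torch.nn as nn

class ReconstructionModule(nn.Module):
    """Spatial graph conv -> self-attention -> temporal deconv (cf. fig. 3)."""
    def __init__(self, in_dim, out_dim, adj, heads=4):
        super().__init__()
        self.register_buffer("adj", adj)      # (K, K) normalized adjacency
        self.gcn = nn.Linear(in_dim, in_dim)
        self.attn = nn.MultiheadAttention(in_dim, heads, batch_first=True)
        # Deconvolution maps features back toward behavior space.
        self.deconv = nn.ConvTranspose1d(in_dim, out_dim, kernel_size=3, padding=1)

    def forward(self, f):                     # f: (T, K, in_dim)
        h_s = torch.relu(self.adj @ self.gcn(f))   # spatial features
        h_a, _ = self.attn(h_s, h_s, h_s)          # attention features
        h = h_a.permute(1, 2, 0)                   # (K, in_dim, T)
        out = self.deconv(h).permute(2, 0, 1)      # (T, K, out_dim)
        return out                                 # reconstructed behavior
```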
And step 4, calculating the reconstruction loss according to the personnel behavior reconstruction data and the machine behavior reconstruction data, and updating the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model according to the reconstruction loss, to obtain the local model parameters corresponding to the operator.
And 5, aggregating local model parameters corresponding to all operators to obtain global model parameters, and predicting the man-machine co-fusion risk of the operators to be tested in the target operation environment according to the global model parameters, the collected behavior data of the operators to be tested and the collected behavior data of the machines to be tested.
The following exemplarily describes the fusion of the personnel behavior features with each machine behavior feature in step 2 (performing feature extraction on the personnel normal behavior data and the machine normal behavior data of the operator by using the pre-constructed personnel behavior feature extraction model and machine behavior feature extraction model corresponding to the operator, to obtain the personnel behavior features and M machine behavior features of the operator, and fusing the personnel behavior features with each machine behavior feature to obtain the man-machine co-fusion features of the operator and each machine).
Step 2.1, for each of the M machines, obtaining the fused feature $f_{nm}^{(j)}$ through the calculation formula

$$f_{nm}^{(j)} = g\!\left(\operatorname{con}\!\left(w_s^{(j)} h_{s,n}^{(j)},\ w_a^{(j)} h_{a,n}^{(j)},\ w_t^{(j)} h_{t,n}^{(j)},\ w_s^{(j)} r_{s,m}^{(j)},\ w_a^{(j)} r_{a,m}^{(j)},\ w_t^{(j)} r_{t,m}^{(j)}\right)\right)$$

wherein $f_{nm}^{(j)}$ denotes the feature obtained by fusing the personnel behavior feature components in the $j$-th feature extraction module of the personnel behavior feature extraction model corresponding to the $n$-th operator with the machine behavior feature components of the $m$-th machine in the $j$-th feature extraction module of the machine behavior feature extraction model; $j=1,2,\dots,J$, where $J$ denotes the total number of feature extraction modules; $n=1,2,\dots,N$, where $N$ denotes the total number of operators; $m=1,2,\dots,M$, where $M$ denotes the total number of machines; $g(\cdot)$ denotes the attention pooling function; $w_s^{(j)}$, $w_a^{(j)}$ and $w_t^{(j)}$ denote the feature update parameters of the spatial graph convolution layer, the self-attention layer and the temporal convolution layer of the $j$-th feature extraction module, respectively; $h_{s,n}^{(j)}$, $h_{a,n}^{(j)}$ and $h_{t,n}^{(j)}$ denote the personnel behavior feature components corresponding to the spatial graph convolution layer, the self-attention layer and the temporal convolution layer in the $j$-th feature extraction module of the personnel behavior feature extraction model of the $n$-th operator, respectively; $r_{s,m}^{(j)}$, $r_{a,m}^{(j)}$ and $r_{t,m}^{(j)}$ denote the machine behavior feature components of the $m$-th machine corresponding to the spatial graph convolution layer, the self-attention layer and the temporal convolution layer in the $j$-th feature extraction module of the machine behavior feature extraction model, respectively; and $\operatorname{con}(\cdot)$ denotes the feature coupling operation.
Step 2.2, obtaining the man-machine co-fusion feature $F_{nm}$ by combining the fused features $f_{nm}^{(j)}$ of all $J$ feature extraction modules, wherein $F_{nm}$ denotes the man-machine co-fusion feature of the $n$-th operator and the $m$-th machine.
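A sketch reading the symbol definitions above literally: each weighted personnel component is coupled with the corresponding weighted machine component via con(·) (taken here as concatenation), attention-pooled by g(·) (taken here as softmax-score pooling) into f_nm^(j), and the J per-module results are combined into F_nm (taken here as concatenation). The published formulas are images not reproduced in this text, so these concrete operator choices are assumptions.

```python
import torch

def attention_pool(z):
    """Assumed form of g(.): softmax-score pooling of (..., D) down to (D,)."""
    flat = z.reshape(-1, z.shape[-1])                 # (positions, D)
    scores = torch.softmax(flat.mean(dim=-1), dim=0)  # one score per position
    return (scores.unsqueeze(-1) * flat).sum(dim=0)   # (D,)

def fuse_module(w, h, r):
    """One module's fusion: w = (w_s, w_a, w_t) update parameters,
    h = personnel components, r = machine components; components are
    assumed to share the same leading (time, node) shape."""
    coupled = torch.cat([w_i * h_i for w_i, h_i in zip(w, h)] +
                        [w_i * r_i for w_i, r_i in zip(w, r)], dim=-1)  # con(.)
    return attention_pool(coupled)        # f_nm^(j)

def cofusion_feature(per_module_fused):
    """Assumed combination of the J per-module fused features into F_nm."""
    return torch.cat(per_module_fused, dim=-1)
```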
The following exemplarily describes obtaining the personnel behavior reconstruction data of the operator in step 3 (according to the man-machine co-fusion features, using the pre-constructed personnel behavior reconstruction model and machine behavior reconstruction model corresponding to the operator to obtain the personnel behavior reconstruction data and machine behavior reconstruction data of the operator).

Step 3.1, obtaining the aggregated feature $\tilde F_n$ through the calculation formula

$$\tilde F_n = \phi\!\left(\sum_{m=1}^{M} c_{nm}\, F_{nm} + \tau\right)$$

wherein $\tilde F_n$ denotes the aggregated feature corresponding to the $n$-th operator, $c_{nm}$ denotes the distance between the $n$-th operator and the $m$-th machine, $\tau$ denotes the aggregation bias, and $\phi(\cdot)$ denotes the aggregation function.

Step 3.2, obtaining the personnel behavior reconstruction data from the aggregated feature by using the personnel behavior reconstruction model.
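A sketch of step 3.1 under stated assumptions: the distances c_nm enter as multiplicative weights, τ as an additive bias, and φ(·) is taken as tanh; none of these choices is fixed by the published text.

```python
import torch

def aggregate_cofusion(F_n, c_n, tau=0.1):
    """Assumed form: F~_n = phi( sum_m c_nm * F_nm + tau ).
    F_n: list of M co-fusion features F_nm (1-D tensors of equal size);
    c_n: tensor of the M operator-machine distances c_nm."""
    stacked = torch.stack(F_n)                            # (M, D)
    agg = (c_n.unsqueeze(-1) * stacked).sum(dim=0) + tau  # weighted sum + bias
    return torch.tanh(agg)                                # phi(.) taken as tanh
```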
The following exemplarily describes the procedure of step 4 (calculating the reconstruction loss according to the personnel behavior reconstruction data and the machine behavior reconstruction data, and updating the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model according to the reconstruction loss, to obtain the local model parameters corresponding to the operator).
Step 41, obtaining the reconstruction loss $L_n$ through the calculation formula

$$L_n = \alpha\,\bigl\|H_n - \hat H_n\bigr\|_2 + \beta \sum_{m=1}^{M} \bigl\|R_m - \hat R_m\bigr\|_2$$

wherein $L_n$ denotes the reconstruction loss corresponding to the $n$-th operator, $H_n$ denotes the personnel normal behavior data of the $n$-th operator, $R_m$ denotes the machine normal behavior data corresponding to the $m$-th machine, $\hat H_n$ denotes the personnel behavior reconstruction data of the $n$-th operator, $\hat R_m$ denotes the machine behavior reconstruction data of the $m$-th machine, $\alpha$ denotes the loss weight corresponding to the operator, $\beta$ denotes the loss weight corresponding to the machine, and $\|\cdot\|_2$ denotes the 2-norm.
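A minimal sketch of the loss of step 41, written out directly from the symbol definitions; the summation over the M machines is an assumption consistent with the per-machine reconstruction data.

```python
import torch

def reconstruction_loss(H_n, H_n_hat, R, R_hat, alpha=1.0, beta=1.0):
    """L_n = alpha * ||H_n - H_n_hat||_2 + beta * sum_m ||R_m - R_m_hat||_2."""
    person_term = torch.linalg.vector_norm(H_n - H_n_hat)      # 2-norm
    machine_term = sum(torch.linalg.vector_norm(R_m - R_m_hat)
                       for R_m, R_m_hat in zip(R, R_hat))
    return alpha * person_term + beta * machine_term
```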
And 42, respectively carrying out parameter updating on the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model corresponding to the nth operating personnel by using a gradient descent method to obtain a new personnel behavior feature extraction model, a new machine behavior feature extraction model, a new personnel behavior reconstruction model and a new machine behavior reconstruction model corresponding to the nth operating personnel.
Step 42 will be exemplarily described below by taking an example of updating the person behavior feature extraction model corresponding to the nth operator by using the gradient descent method.
And step 42.1, taking the expression of the reconstruction loss as the loss function, and calculating the gradient of the loss function with respect to the personnel behavior feature extraction model parameters.
Step 42.2, updating the parameters according to the gradient.
Specifically, the parameters may be updated using a common iterative formula, for example

$$\theta \leftarrow \theta - \alpha\, \nabla_{\theta} L_n$$

wherein $\theta$ denotes the personnel behavior feature extraction model parameters, $\alpha$ denotes the learning rate, and $\nabla_{\theta} L_n$ denotes the gradient of the reconstruction loss with respect to the personnel behavior feature extraction model parameters.
Step 42.3, the learning rate is adjusted.
An adaptive adjustment strategy may be employed to adjust the learning rate.
And 42.4, if the updating times reach a preset threshold value, obtaining a new human behavior feature extraction model according to the updated human behavior feature extraction model parameters, otherwise, returning to the step 42.1.
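A sketch of the update loop of steps 42.1 to 42.4, assuming plain SGD with a step-decay learning-rate schedule standing in for the adaptive adjustment strategy; optimizer choice, schedule and the preset update count are all assumptions.

```python
import torch

def local_update(models, loss_fn, data, lr=1e-3, max_updates=100):
    """Gradient-descent update of one operator's four local models."""
    params = [p for m in models for p in m.parameters()]
    optimizer = torch.optim.SGD(params, lr=lr)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)
    for step in range(max_updates):           # preset update-count threshold
        optimizer.zero_grad()
        loss = loss_fn(models, data)          # reconstruction loss L_n
        loss.backward()                       # gradient of L_n w.r.t. parameters
        optimizer.step()                      # theta <- theta - lr * grad
        scheduler.step()                      # learning-rate adjustment
    return [m.state_dict() for m in models]   # candidate local model parameters
```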
And 43, obtaining new personnel behavior reconstruction data and new machine behavior reconstruction data corresponding to the nth operator according to the new personnel behavior feature extraction model, the new machine behavior feature extraction model, the new personnel behavior reconstruction model and the new machine behavior reconstruction model corresponding to the nth operator.
Step 44, calculating a new reconstruction loss according to the new personnel behavior reconstruction data and the new machine behavior reconstruction data; if the new reconstruction loss is smaller than $L_n$, taking the parameters of the new personnel behavior feature extraction model, the new machine behavior feature extraction model, the new personnel behavior reconstruction model and the new machine behavior reconstruction model as the local model parameters corresponding to the $n$-th operator; otherwise, taking the new personnel behavior feature extraction model, the new machine behavior feature extraction model, the new personnel behavior reconstruction model and the new machine behavior reconstruction model as the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model in step 42, respectively, and executing step 42 again.
The following illustrates the process of aggregating the local model parameters corresponding to all operators in step 5 (aggregating the local model parameters corresponding to all operators to obtain global model parameters, and predicting the man-machine co-fusion risk of the operators to be tested in the target operation environment according to the global model parameters, the collected behavior data of the operators to be tested and the collected behavior data of the machine to be tested).
Specifically, the global model parameters $\theta_i^{\mathrm{global}}$, $i=1,\dots,4$, are obtained through the calculation formula

$$\theta_i^{\mathrm{global}} = \sum_{n=1}^{N} \lambda_n\, \theta_{i,n}$$

wherein, when $i=1$, $\theta_{1,n}$ denotes the model parameters of the personnel behavior feature extraction model corresponding to the $n$-th operator, so that $\theta_1^{\mathrm{global}}$ aggregates the model parameters of the personnel behavior feature extraction models of all operators;

when $i=2$, $\theta_{2,n}$ denotes the model parameters of the machine behavior feature extraction model corresponding to the $n$-th operator, so that $\theta_2^{\mathrm{global}}$ aggregates the model parameters of the machine behavior feature extraction models of all operators;

when $i=3$, $\theta_{3,n}$ denotes the model parameters of the personnel behavior reconstruction model corresponding to the $n$-th operator, so that $\theta_3^{\mathrm{global}}$ aggregates the model parameters of the personnel behavior reconstruction models of all operators;

when $i=4$, $\theta_{4,n}$ denotes the model parameters of the machine behavior reconstruction model corresponding to the $n$-th operator, so that $\theta_4^{\mathrm{global}}$ aggregates the model parameters of the machine behavior reconstruction models of all operators; and $\lambda_n$ denotes the importance degree of the $n$-th operator.
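A sketch of this importance-weighted aggregation (in the style of federated averaging); normalizing the λ_n to sum to one is an assumption.

```python
import torch

def aggregate_parameters(local_states, lambdas):
    """Weighted average of N operators' state dicts for one model type i."""
    lam = torch.tensor(lambdas, dtype=torch.float32)
    lam = lam / lam.sum()            # assumed: importance weights sum to one
    global_state = {}
    for key in local_states[0]:
        global_state[key] = sum(l * s[key] for l, s in zip(lam, local_states))
    return global_state
```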
In the following, in step 5, a process of predicting the man-machine blending risk of the operator to be tested in the target operation environment according to the global model parameters, the collected behavior data of the operator to be tested and the collected behavior data of the machine to be tested is exemplarily described.
Step 51, obtaining the corresponding global personnel behavior feature extraction model, global machine behavior feature extraction model, global personnel behavior reconstruction model and global machine behavior reconstruction model according to the global model parameters, and obtaining the reconstruction loss corresponding to the operator to be tested according to the behavior data of the operator to be tested and the behavior data of the machine to be tested;
step 52, if the reconstruction loss is greater than a preset threshold, predicting that the operator to be tested is at man-machine co-fusion risk in the target working environment; otherwise, predicting that the operator to be tested is not at man-machine co-fusion risk in the target working environment.
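Finally, a sketch of the prediction rule of steps 51 and 52: run the data of the operator under test through the global models, compute the reconstruction loss, and compare it against the preset threshold. The threshold value and function names are assumptions.

```python
def predict_cofusion_risk(global_models, person_data, machine_data,
                          loss_fn, threshold=0.5):
    """Returns True if a man-machine co-fusion risk is predicted."""
    loss = loss_fn(global_models, (person_data, machine_data))
    return loss.item() > threshold   # larger loss: abnormal behavior, hence risk
```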
The following describes an exemplary device for predicting man-machine co-fusion risk.
As shown in fig. 4, the apparatus 400 for predicting a risk of human-machine co-fusion includes:
The data acquisition module 401 is used for acquiring the personnel normal behavior data of N operators and the machine normal behavior data of M machines in the target operation environment;
The behavior feature fusion module 402 is configured to perform feature extraction on normal behavior data of a worker and normal behavior data of a machine by using a pre-constructed worker behavior feature extraction model and a machine behavior feature extraction model corresponding to the worker, obtain worker behavior features and M machine behavior features of the worker, and fuse the worker behavior features and each machine behavior feature, respectively, so as to obtain a man-machine co-fusion feature of the worker and each machine; the machine behavior features are in one-to-one correspondence with the machines, and the man-machine co-fusion features are used for representing the degree of tightness of the relation between the operators and the machines;
the behavior reconstruction module 403 is configured to obtain, according to the man-machine co-fusion features, the personnel behavior reconstruction data and the machine behavior reconstruction data of the operator by using the pre-constructed personnel behavior reconstruction model and machine behavior reconstruction model corresponding to the operator;
The parameter obtaining module 404 is configured to calculate the reconstruction loss according to the personnel behavior reconstruction data and the machine behavior reconstruction data, and update the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model according to the reconstruction loss, to obtain the local model parameters corresponding to the operator;
The prediction module 405 is configured to aggregate local model parameters corresponding to all operators to obtain global model parameters, and predict a man-machine co-fusion risk of the operator to be tested in the target operation environment according to the global model parameters, the collected behavior data of the operator to be tested and the collected behavior data of the machine to be tested.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
As shown in fig. 5, an embodiment of the present application provides a terminal device. The terminal device D10 of this embodiment includes: at least one processor D100 (only one processor is shown in fig. 5), a memory D101, and a computer program D102 stored in the memory D101 and executable on the at least one processor D100; the processor D100 implements the steps in any of the various method embodiments described above when executing the computer program D102.
Specifically, when the processor D100 executes the computer program D102, the personnel normal behavior data of N operators and the machine normal behavior data of M machines in the target working environment are collected. For each operator, feature extraction is performed on the personnel normal behavior data and the machine normal behavior data by using the pre-constructed personnel behavior feature extraction model and machine behavior feature extraction model corresponding to the operator, obtaining the operator's personnel behavior features and M machine behavior features, and the personnel behavior features are fused with each machine behavior feature to obtain the man-machine co-fusion features of the operator and each machine. According to the man-machine co-fusion features, the pre-constructed personnel behavior reconstruction model and machine behavior reconstruction model corresponding to the operator are used to obtain the personnel behavior reconstruction data and machine behavior reconstruction data of the operator. The reconstruction loss is then calculated, and the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model are updated according to the reconstruction loss, to obtain the local model parameters corresponding to the operator. Finally, the local model parameters corresponding to all operators are aggregated to obtain the global model parameters, and the man-machine co-fusion risk of the operator to be tested in the target working environment is predicted according to the global model parameters, the collected behavior data of the operator to be tested and the collected behavior data of the machine to be tested. By constructing a personnel behavior feature extraction model for extracting personnel behavior features from personnel normal behavior data and a machine behavior feature extraction model for extracting machine behavior features from machine normal behavior data, and fusing the resulting features to obtain the man-machine co-fusion features, the behavioral closeness between operator and machine can be described more accurately, improving the accuracy of man-machine co-fusion risk prediction; by constructing the personnel behavior reconstruction model and the machine behavior reconstruction model to obtain reconstruction data, calculating the reconstruction loss and updating the model parameters with it, the fitting degree of the models can be improved, further improving the prediction accuracy; and using the reconstruction loss as the index for measuring man-machine co-fusion risk reflects the influence of behavior on the risk more accurately, so the prediction accuracy is higher.
The processor D100 may be a central processing unit (CPU); the processor D100 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory D101 may, in some embodiments, be an internal storage unit of the terminal device D10, for example a hard disk or a memory of the terminal device D10. In other embodiments, the memory D101 may also be an external storage device of the terminal device D10, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal device D10. Further, the memory D101 may include both an internal storage unit and an external storage device of the terminal device D10. The memory D101 is used for storing an operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of the computer program. The memory D101 may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the various method embodiments described above.
Embodiments of the present application also provide a computer program product which, when run on a terminal device, enables the terminal device to carry out the steps of the method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the prediction device/terminal equipment for man-machine co-fusion risk, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
While the foregoing is directed to the preferred embodiments of the present application, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present application, and such modifications and adaptations are intended to be comprehended within the scope of the present application.

Claims (9)

1. A method for predicting man-machine co-fusion risk, characterized by comprising the following steps:
Step 1, collecting normal personnel behavior data of N operators and normal machine behavior data of M machines in a target operation environment;
Steps 2 to 4 are executed for each of the N operators:
Step 2, respectively utilizing a pre-constructed personnel behavior feature extraction model and a pre-constructed machine behavior feature extraction model corresponding to the operator, performing feature extraction on the normal personnel behavior data and the normal machine behavior data of the operator to obtain the personnel behavior features of the operator and M machine behavior features, and respectively fusing the personnel behavior features with each machine behavior feature to obtain the man-machine co-fusion features of the operator and each machine; the machine behavior features are in one-to-one correspondence with the machines, and the man-machine co-fusion features are used for characterizing the closeness of the association between the operator and each machine; in step 2, respectively fusing the personnel behavior features with each machine behavior feature to obtain the man-machine co-fusion feature of the operator and each machine includes:
For each machine among the M machines, the fusion feature $F_{nm}^{j}$ is obtained through the calculation formula

$$F_{nm}^{j}=g\left(\operatorname{con}\left(w_{G}^{j}h_{G,n}^{j},\ w_{A}^{j}h_{A,n}^{j},\ w_{T}^{j}h_{T,n}^{j},\ w_{G}^{j}r_{G,m}^{j},\ w_{A}^{j}r_{A,m}^{j},\ w_{T}^{j}r_{T,m}^{j}\right)\right)$$

wherein $F_{nm}^{j}$ represents the fusion feature of the personnel behavior feature components in the j-th feature extraction module of the personnel behavior feature extraction model corresponding to the n-th operator and the machine behavior feature components of the m-th machine in the j-th feature extraction module of the machine behavior feature extraction model; j = 1, 2, ..., J, and J represents the total number of feature extraction modules; n = 1, 2, ..., N, and N represents the total number of operators; m = 1, 2, ..., M, and M represents the total number of machines; g(·) represents an attention pooling function; $w_{G}^{j}$, $w_{A}^{j}$ and $w_{T}^{j}$ respectively represent the spatial graph convolution layer feature update parameter, the self-attention layer feature update parameter and the temporal convolution layer feature update parameter of the j-th feature extraction module; $h_{G,n}^{j}$, $h_{A,n}^{j}$ and $h_{T,n}^{j}$ respectively represent the personnel behavior feature component corresponding to the spatial graph convolution layer, the personnel behavior feature component corresponding to the self-attention layer and the personnel behavior feature component corresponding to the temporal convolution layer in the j-th feature extraction module of the personnel behavior feature extraction model corresponding to the n-th operator; $r_{G,m}^{j}$, $r_{A,m}^{j}$ and $r_{T,m}^{j}$ respectively represent the machine behavior feature component of the m-th machine corresponding to the spatial graph convolution layer, the machine behavior feature component of the m-th machine corresponding to the self-attention layer and the machine behavior feature component of the m-th machine corresponding to the temporal convolution layer in the j-th feature extraction module of the machine behavior feature extraction model; and con(·) represents a feature coupling operation;

the man-machine co-fusion feature $F_{nm}$ is then obtained through the calculation formula

$$F_{nm}=\operatorname{con}\left(F_{nm}^{1},\ F_{nm}^{2},\ \ldots,\ F_{nm}^{J}\right)$$

wherein $F_{nm}$ represents the man-machine co-fusion feature of the n-th operator and the m-th machine;
Step 3, according to the man-machine co-fusion features, respectively utilizing a pre-constructed personnel behavior reconstruction model and a pre-constructed machine behavior reconstruction model corresponding to the operator to obtain personnel behavior reconstruction data and machine behavior reconstruction data of the operator;
Step 4, calculating a reconstruction loss according to the personnel behavior reconstruction data and the machine behavior reconstruction data, and updating the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model according to the reconstruction loss to obtain local model parameters corresponding to the operator;
Step 5, aggregating the local model parameters corresponding to all operators to obtain global model parameters, and predicting the man-machine co-fusion risk of an operator to be tested in the target operation environment according to the global model parameters, the collected behavior data of the operator to be tested and the collected behavior data of the machine to be tested.
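Since the formula images of claim 1 are not reproduced in this text, the following minimal sketch follows one consistent reading of the wherein-clause: the weighted personnel and machine feature components of a module are coupled by con(·) and pooled by g(·), and the per-module fusion features are coupled into F_nm. The choices of concatenation for con(·) and softmax pooling for g(·), and all names, are illustrative assumptions.

```python
import numpy as np

def con(*parts):
    # Feature coupling con(.): concatenation is one plausible realisation.
    return np.concatenate(parts)

def g(v):
    # Attention pooling g(.): softmax-weighted sum over the coupled vector.
    a = np.exp(v - v.max())
    return np.atleast_1d((a / a.sum()) @ v)

def module_fusion(w, h, r):
    # w: update parameters (w_G, w_A, w_T) of module j; h/r: the three
    # personnel/machine feature components (G, A, T) of module j.
    return g(con(w[0] * h[0], w[1] * h[1], w[2] * h[2],
                 w[0] * r[0], w[1] * r[1], w[2] * r[2]))

# Man-machine co-fusion feature F_nm coupled across the J modules.
rng = np.random.default_rng(1)
J, d = 3, 4
F_nm = con(*[module_fusion(rng.normal(size=3),
                           rng.normal(size=(3, d)),
                           rng.normal(size=(3, d)))
             for _ in range(J)])
print(F_nm.shape)  # (J,): one pooled value per feature extraction module
```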
2. The prediction method according to claim 1, wherein the personnel behavior feature extraction model and the machine behavior feature extraction model each comprise the same number of feature extraction modules, the feature extraction modules comprising a spatial graph convolution layer, a self-attention layer and a temporal convolution layer, an output end of the spatial graph convolution layer being connected with an input end of the self-attention layer, and an output end of the self-attention layer being connected with an input end of the temporal convolution layer.
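A sketch of one feature extraction module per claim 2 (spatial graph convolution, then self-attention, then temporal convolution), using generic PyTorch stand-ins since the claim does not fix the layer implementations; in particular, a linear layer substitutes for the spatial graph convolution here.

```python
import torch
import torch.nn as nn

class FeatureExtractionModule(nn.Module):
    """Spatial layer -> self-attention -> temporal convolution (claim 2).
    The spatial layer below is a generic linear stand-in."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.spatial = nn.Linear(dim, dim)   # stand-in for graph convolution
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) sequence of behavior features
        h = self.spatial(x)
        h, _ = self.attn(h, h, h)            # self-attention over time steps
        return self.temporal(h.transpose(1, 2)).transpose(1, 2)

features = FeatureExtractionModule()(torch.randn(2, 16, 64))
print(features.shape)  # torch.Size([2, 16, 64])
```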
3. The prediction method according to claim 2, wherein the personnel behavior reconstruction model and the machine behavior reconstruction model each comprise the same number of reconstruction modules, the reconstruction modules comprise a spatial graph convolution layer, a self-attention layer and a temporal deconvolution layer, an output end of the spatial graph convolution layer is connected with an input end of the self-attention layer, and an output end of the self-attention layer is connected with an input end of the temporal deconvolution layer;
In the step 3, obtaining the personnel behavior reconstruction data of the operator by utilizing the pre-constructed personnel behavior reconstruction model corresponding to the operator according to the man-machine co-fusion features includes:
By the calculation formula

$$A_{n}=\phi\left(\sum_{m=1}^{M}c_{nm}F_{nm}+\tau\right)$$

the aggregation feature $A_{n}$ is obtained; wherein $A_{n}$ represents the aggregation feature corresponding to the n-th operator, $c_{nm}$ represents the distance between the n-th operator and the m-th machine, τ represents the aggregation bias, and φ(·) represents the aggregation function;
and according to the aggregation feature, obtaining the personnel behavior reconstruction data by using the personnel behavior reconstruction model.
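Claim 3 defines the aggregation feature through a formula that is not reproduced in this text, in terms of the co-fusion features F_nm, the person-machine distances c_nm, the aggregation bias τ and the aggregation function φ(·). A minimal sketch under the assumption that φ is applied to a distance-weighted sum, with the inverse-distance weighting and all names being illustrative:

```python
import numpy as np

def aggregate(F, c, tau=0.1, phi=np.tanh):
    # F: (M, d) co-fusion features F_nm of operator n with each machine;
    # c: (M,) distances c_nm; tau: aggregation bias; phi: aggregation function.
    # Assumed form: nearer machines contribute more via inverse distance.
    w = 1.0 / (c + 1e-8)
    return phi((w[:, None] * F).sum(axis=0) + tau)

A_n = aggregate(np.ones((2, 4)), np.array([1.0, 2.0]))
print(A_n.shape)  # (4,)
```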
4. A prediction method according to claim 3, wherein the step 4 comprises:
Step 41, by the calculation formula

$$L_{n}=\alpha\left\|H_{n}-\hat{H}_{n}\right\|_{2}+\beta\sum_{m=1}^{M}\left\|R_{m}-\hat{R}_{m}\right\|_{2}$$

the reconstruction loss $L_{n}$ is obtained; wherein $L_{n}$ represents the reconstruction loss corresponding to the n-th operator, $H_{n}$ represents the normal personnel behavior data of the n-th operator, $R_{m}$ represents the normal machine behavior data corresponding to the m-th machine, $\hat{H}_{n}$ represents the personnel behavior reconstruction data of the n-th operator, $\hat{R}_{m}$ represents the machine behavior reconstruction data of the m-th machine, α represents the loss weight corresponding to the operator, β represents the loss weight corresponding to the machine, and $\left\|\cdot\right\|_{2}$ represents the 2-norm;
Step 42, respectively updating, by a gradient descent method, the parameters of the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model corresponding to the n-th operator, to obtain a new personnel behavior feature extraction model, a new machine behavior feature extraction model, a new personnel behavior reconstruction model and a new machine behavior reconstruction model corresponding to the n-th operator;
Step 43, obtaining new personnel behavior reconstruction data and new machine behavior reconstruction data corresponding to the n-th operator according to the new personnel behavior feature extraction model, the new machine behavior feature extraction model, the new personnel behavior reconstruction model and the new machine behavior reconstruction model corresponding to the n-th operator;
Step 44, calculating a new reconstruction loss according to the new personnel behavior reconstruction data and the new machine behavior reconstruction data; if the new reconstruction loss is smaller than $L_{n}$, taking the parameters of the new personnel behavior feature extraction model, the parameters of the new machine behavior feature extraction model, the parameters of the new personnel behavior reconstruction model and the parameters of the new machine behavior reconstruction model as the local model parameters corresponding to the n-th operator; otherwise, taking the new personnel behavior feature extraction model, the new machine behavior feature extraction model, the new personnel behavior reconstruction model and the new machine behavior reconstruction model respectively as the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model in step 42, and returning to step 42.
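A sketch of the loss of step 41 and the accept-or-iterate loop of steps 42 to 44; the loss follows the wherein-clause (person term weighted by α, machine terms by β, 2-norms), while `step_fn` stands in for one gradient-descent update, which the claim does not spell out. All names are illustrative.

```python
import numpy as np

def reconstruction_loss(H, H_hat, R, R_hat, alpha=0.5, beta=0.5):
    # L_n = alpha*||H_n - H_hat_n||_2 + beta*sum_m ||R_m - R_hat_m||_2
    person = np.linalg.norm(H - H_hat)
    machines = sum(np.linalg.norm(Rm - Rm_hat) for Rm, Rm_hat in zip(R, R_hat))
    return alpha * person + beta * machines

def local_update(params, step_fn, loss_fn, max_iters=100):
    # Steps 42-44: keep taking gradient-descent steps until the new loss
    # improves on the original L_n, then accept the parameters as local.
    base = loss_fn(params)
    for _ in range(max_iters):
        params = step_fn(params)
        if loss_fn(params) < base:
            break
    return params

H, H_hat = np.zeros(4), np.ones(4)
R, R_hat = [np.zeros(4)], [np.ones(4)]
print(reconstruction_loss(H, H_hat, R, R_hat))  # 0.5*2 + 0.5*2 = 2.0
```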
5. The prediction method according to claim 4, wherein the step 5 of aggregating the local model parameters corresponding to all operators to obtain global model parameters includes:
By the calculation formula

$$\theta_{i}=\sum_{n=1}^{N}\lambda_{n}\theta_{i}^{n},\quad i=1,2,3,4$$

the global model parameters $\theta_{i}$ are obtained; wherein,
when i = 1, $\theta_{1}^{n}$ represents the model parameters of the personnel behavior feature extraction model corresponding to the n-th operator, and $\theta_{1}$ represents the aggregated model parameters of the personnel behavior feature extraction models of all operators;
when i = 2, $\theta_{2}^{n}$ represents the model parameters of the machine behavior feature extraction model corresponding to the n-th operator, and $\theta_{2}$ represents the aggregated model parameters of the machine behavior feature extraction models of all operators;
when i = 3, $\theta_{3}^{n}$ represents the model parameters of the personnel behavior reconstruction model corresponding to the n-th operator, and $\theta_{3}$ represents the aggregated model parameters of the personnel behavior reconstruction models of all operators;
when i = 4, $\theta_{4}^{n}$ represents the model parameters of the machine behavior reconstruction model corresponding to the n-th operator, and $\theta_{4}$ represents the aggregated model parameters of the machine behavior reconstruction models of all operators; and $\lambda_{n}$ represents the importance level of the n-th operator.
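The aggregation formula of claim 5 is likewise not reproduced in the source text; a weighted sum of each operator's local parameters by the importance level λ_n is the natural reading of "aggregating local model parameters", sketched below with illustrative names:

```python
import numpy as np

def aggregate_global(local, lam):
    # local[n][i]: parameters of model i (i = 0..3: personnel/machine feature
    # extraction, personnel/machine behavior reconstruction) from operator n.
    # lam[n]: importance level of the n-th operator (assumed to sum to 1).
    return [sum(lam[n] * local[n][i] for n in range(len(local)))
            for i in range(4)]

local = [[np.ones(2) * (n + 1) for _ in range(4)] for n in range(3)]
globals_ = aggregate_global(local, lam=[0.5, 0.3, 0.2])
print(globals_[0])  # [1.7 1.7]
```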
6. The prediction method according to claim 5, wherein predicting the man-machine co-fusion risk of the operator to be tested in the target operation environment according to the global model parameters, the collected behavior data of the operator to be tested and the collected behavior data of the machine to be tested in step 5 comprises:
Step 51, obtaining a corresponding global personnel behavior feature extraction model, global machine behavior feature extraction model, global personnel behavior reconstruction model and global machine behavior reconstruction model according to the global model parameters, and obtaining the reconstruction loss corresponding to the operator to be tested according to the behavior data of the operator to be tested and the behavior data of the machine to be tested;
Step 52, if the reconstruction loss is greater than a preset threshold, predicting that the operator to be tested has a man-machine co-fusion risk in the target operation environment; otherwise, predicting that the operator to be tested has no man-machine co-fusion risk in the target operation environment.
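Steps 51 and 52 reduce to a threshold test on the reconstruction loss of the operator under test; a one-line sketch follows, where the threshold value is illustrative:

```python
def predict_risk(loss: float, threshold: float = 5.0) -> str:
    # Risk is flagged when the reconstruction loss exceeds the preset threshold.
    return "man-machine co-fusion risk" if loss > threshold else "no risk"

print(predict_risk(7.2))  # man-machine co-fusion risk
```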
7. A man-machine co-fusion risk prediction device, comprising:
a data acquisition module, used for collecting normal personnel behavior data of N operators and normal machine behavior data of M machines in a target operation environment;
a behavior feature fusion module, used for respectively utilizing a pre-constructed personnel behavior feature extraction model and a pre-constructed machine behavior feature extraction model corresponding to the operator to perform feature extraction on the normal personnel behavior data and the normal machine behavior data of the operator to obtain the personnel behavior features of the operator and M machine behavior features, and for respectively fusing the personnel behavior features with each machine behavior feature to obtain the man-machine co-fusion features of the operator and each machine; the machine behavior features are in one-to-one correspondence with the machines, and the man-machine co-fusion features are used for characterizing the closeness of the association between the operator and each machine; the behavior feature fusion module respectively fusing the personnel behavior features with each machine behavior feature to obtain the man-machine co-fusion feature of the operator and each machine includes:
For each machine among the M machines, the fusion feature $F_{nm}^{j}$ is obtained through the calculation formula

$$F_{nm}^{j}=g\left(\operatorname{con}\left(w_{G}^{j}h_{G,n}^{j},\ w_{A}^{j}h_{A,n}^{j},\ w_{T}^{j}h_{T,n}^{j},\ w_{G}^{j}r_{G,m}^{j},\ w_{A}^{j}r_{A,m}^{j},\ w_{T}^{j}r_{T,m}^{j}\right)\right)$$

wherein $F_{nm}^{j}$ represents the fusion feature of the personnel behavior feature components in the j-th feature extraction module of the personnel behavior feature extraction model corresponding to the n-th operator and the machine behavior feature components of the m-th machine in the j-th feature extraction module of the machine behavior feature extraction model; j = 1, 2, ..., J, and J represents the total number of feature extraction modules; n = 1, 2, ..., N, and N represents the total number of operators; m = 1, 2, ..., M, and M represents the total number of machines; g(·) represents an attention pooling function; $w_{G}^{j}$, $w_{A}^{j}$ and $w_{T}^{j}$ respectively represent the spatial graph convolution layer feature update parameter, the self-attention layer feature update parameter and the temporal convolution layer feature update parameter of the j-th feature extraction module; $h_{G,n}^{j}$, $h_{A,n}^{j}$ and $h_{T,n}^{j}$ respectively represent the personnel behavior feature component corresponding to the spatial graph convolution layer, the personnel behavior feature component corresponding to the self-attention layer and the personnel behavior feature component corresponding to the temporal convolution layer in the j-th feature extraction module of the personnel behavior feature extraction model corresponding to the n-th operator; $r_{G,m}^{j}$, $r_{A,m}^{j}$ and $r_{T,m}^{j}$ respectively represent the machine behavior feature component of the m-th machine corresponding to the spatial graph convolution layer, the machine behavior feature component of the m-th machine corresponding to the self-attention layer and the machine behavior feature component of the m-th machine corresponding to the temporal convolution layer in the j-th feature extraction module of the machine behavior feature extraction model; and con(·) represents a feature coupling operation;

the man-machine co-fusion feature $F_{nm}$ is then obtained through the calculation formula

$$F_{nm}=\operatorname{con}\left(F_{nm}^{1},\ F_{nm}^{2},\ \ldots,\ F_{nm}^{J}\right)$$

wherein $F_{nm}$ represents the man-machine co-fusion feature of the n-th operator and the m-th machine;
a reconstruction module, used for respectively utilizing a pre-constructed personnel behavior reconstruction model and a pre-constructed machine behavior reconstruction model corresponding to the operator according to the man-machine co-fusion features to obtain personnel behavior reconstruction data and machine behavior reconstruction data of the operator;
a parameter acquisition module, used for calculating a reconstruction loss according to the personnel behavior reconstruction data and the machine behavior reconstruction data, and for updating the personnel behavior feature extraction model, the machine behavior feature extraction model, the personnel behavior reconstruction model and the machine behavior reconstruction model according to the reconstruction loss to obtain local model parameters corresponding to the operator; and
a prediction module, used for aggregating the local model parameters corresponding to all operators to obtain global model parameters, and for predicting the man-machine co-fusion risk of an operator to be tested in the target operation environment according to the global model parameters, the collected behavior data of the operator to be tested and the collected behavior data of the machine to be tested.
8. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method for predicting man-machine co-fusion risk according to any one of claims 1 to 6 when executing the computer program.
9. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method for predicting man-machine co-fusion risk according to any one of claims 1 to 6.
CN202310698021.4A 2023-06-13 2023-06-13 Prediction method and device for man-machine co-fusion risk, terminal equipment and medium Active CN116703161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310698021.4A CN116703161B (en) 2023-06-13 2023-06-13 Prediction method and device for man-machine co-fusion risk, terminal equipment and medium

Publications (2)

Publication Number Publication Date
CN116703161A CN116703161A (en) 2023-09-05
CN116703161B true CN116703161B (en) 2024-05-28

Family

ID=87833587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310698021.4A Active CN116703161B (en) 2023-06-13 2023-06-13 Prediction method and device for man-machine co-fusion risk, terminal equipment and medium

Country Status (1)

Country Link
CN (1) CN116703161B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110561432A (en) * 2019-08-30 2019-12-13 广东省智能制造研究所 safety cooperation method and device based on man-machine co-fusion
CN112965081A (en) * 2021-02-05 2021-06-15 浙江大学 Simulated learning social navigation method based on feature map fused with pedestrian information
WO2021169209A1 (en) * 2020-02-27 2021-09-02 平安科技(深圳)有限公司 Method, apparatus and device for recognizing abnormal behavior on the basis of voice and image features
CN114367985A (en) * 2022-01-10 2022-04-19 冯玉溪 Intelligent manufacturing method and system based on man-machine co-fusion
CN114757293A (en) * 2022-04-27 2022-07-15 山东大学 Man-machine co-fusion risk early warning method and system based on action recognition and man-machine distance
CN115659275A (en) * 2022-10-18 2023-01-31 苏州市职业大学 Real-time accurate trajectory prediction method and system in unstructured human-computer interaction environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Three-dimensional complex human behavior recognition fusing interaction information and energy features; Wang Yongxiong; Zeng Yan; Li Xuan; Yin Zhong; Zhang Sunjie; Liu Li; Journal of Chinese Computer Systems; 39(08); pp. 1828-1834 *

Similar Documents

Publication Publication Date Title
CN110414526B (en) Training method, training device, server and storage medium for semantic segmentation network
CN109709934B (en) Fault diagnosis redundancy design method for flight control system
CN109859054A (en) Network community method for digging, device, computer equipment and storage medium
CN107967487A (en) A kind of colliding data fusion method based on evidence distance and uncertainty
CN107967489A (en) A kind of method for detecting abnormality and system
CN110111885B (en) Attribute prediction method, attribute prediction device, computer equipment and computer readable storage medium
EP4036796A1 (en) Automatic modeling method and apparatus for object detection model
CN110378942A (en) Barrier identification method, system, equipment and storage medium based on binocular camera
CN112036381B (en) Visual tracking method, video monitoring method and terminal equipment
CN112233200A (en) Dose determination method and device
CN109685805A (en) A kind of image partition method and device
CN116229066A (en) Portrait segmentation model training method and related device
US20180336494A1 (en) Translating sensor input into expertise
CN109671055A (en) Pulmonary nodule detection method and device
CN115690545B (en) Method and device for training target tracking model and target tracking
CN111539349A (en) Training method and device of gesture recognition model, gesture recognition method and device thereof
CN113887501A (en) Behavior recognition method and device, storage medium and electronic equipment
CN112949711B (en) Neural network model multiplexing training method and device for software defined satellites
CN111126264A (en) Image processing method, device, equipment and storage medium
CN110135428A (en) Image segmentation processing method and device
CN116703161B (en) Prediction method and device for man-machine co-fusion risk, terminal equipment and medium
CN113707322A (en) Training method and device of information prediction model, computer equipment and storage medium
US11619515B2 (en) Method and apparatus for processing positioning data, device, storage medium and vehicle
CN112966547A (en) Neural network-based gas field abnormal behavior recognition early warning method, system, terminal and storage medium
CN110378241A (en) Crop growthing state monitoring method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant