CN112256123B - Brain load-based control work efficiency analysis method, equipment and system


Info

Publication number
CN112256123B
Authority
CN
China
Prior art keywords
control
physiological
score
brain load
brain
Prior art date
Legal status
Active
Application number
CN202011021842.7A
Other languages
Chinese (zh)
Other versions
CN112256123A (en)
Inventor
李小俚
赵小川
顾恒
姚群力
丁兆环
张昊
柳传财
张予川
Current Assignee
Beijing Normal University
Original Assignee
Beijing Normal University
Priority date
Filing date
Publication date
Application filed by Beijing Normal University filed Critical Beijing Normal University
Priority to CN202011021842.7A priority Critical patent/CN112256123B/en
Publication of CN112256123A publication Critical patent/CN112256123A/en
Application granted granted Critical
Publication of CN112256123B publication Critical patent/CN112256123B/en


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7225 Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A61B2503/00 Evaluating a particular growth phase or type of persons or animals
    • A61B2503/20 Workers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06398 Performance of employee with respect to a job function
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V40/174 Facial expression recognition

Abstract

The present disclosure provides a brain load-based control ergonomics analysis method, device, and system. The method comprises: acquiring control behavior data and physiological information data generated when a control player controls a target object to execute a target task; determining, according to the control behavior data and the physiological information data, a vector value of a second physiological feature vector corresponding to a brain load evaluation index, the second physiological feature vector comprising a plurality of second physiological features that affect the brain load evaluation index; inputting the vector value of the second physiological feature vector into a preset brain load recognition model to obtain the score of the control player on the brain load evaluation index, the brain load recognition model reflecting the mapping relation between the second physiological feature vector and the score of the brain load evaluation index; obtaining a control score of the control player according to the score of the control player on the brain load evaluation index; and executing a set operation according to the control score.

Description

Brain load-based control work efficiency analysis method, equipment and system
Technical Field
The present disclosure relates to the technical field of automatic analysis of control ergonomics, and more particularly, to a method for analyzing control ergonomics based on a brain load, a device for analyzing control ergonomics based on a brain load, and a system for analyzing control ergonomics based on a brain load.
Background
Different operators achieve different levels of efficiency when controlling the same target object to execute the same target task. For example, when different operators fly the same type of unmanned aerial vehicle on the same target task, their performance differs: some complete the target task in a shorter time, and some maintain a better psychological state while executing it. Analyzing the control work efficiency that an operator exhibits when controlling a target object to execute a target task can serve as a basis for selecting operators for the target object, and also as a basis for evaluating the suitability between any operator and any motion control device. At present, control work efficiency is usually analyzed by organizing experts to manually score an operator's control of a target object executing a target task, with a higher score reflecting higher control work efficiency. Manual scoring consumes a large amount of manpower, and the scoring result depends heavily on subjective human factors, so it suffers from low accuracy and unfairness. An intelligent scheme for analyzing control work efficiency therefore needs to be provided.
Disclosure of Invention
It is an object of embodiments of the present disclosure to provide a new solution for analyzing control ergonomics.
According to a first aspect of the present disclosure, there is provided a brain load-based control ergonomics analysis method, comprising:
acquiring control behavior data and physiological information data generated by controlling a target object to execute a target task by a control player;
determining a vector value of a second physiological feature vector corresponding to a brain load evaluation index according to the control behavior data and the physiological information data, the second physiological feature vector comprising a plurality of second physiological features that affect the brain load evaluation index; inputting the vector value of the second physiological feature vector into a preset brain load recognition model to obtain the score of the control player on the brain load evaluation index, the brain load recognition model reflecting the mapping relation between the second physiological feature vector and the score of the brain load evaluation index;
and executing a set operation according to the score of the control player on the brain load evaluation index.
Optionally, the physiological information data includes an electroencephalogram signal; any physiological feature vector comprises electroencephalogram features;
the step of determining the vector value of any physiological feature vector comprises:
acquiring an electroencephalogram power spectrum of the electroencephalogram signal as a target electroencephalogram power spectrum;
determining a power spectrum classification corresponding to the target electroencephalogram power spectrum from a plurality of preset power spectrum classifications as a target power spectrum classification;
and determining the vector value of the corresponding physiological characteristic vector according to the brain rhythm corresponding to the target power spectrum classification.
Optionally, the method further includes a step of obtaining a power spectrum classification, including:
acquiring a reference brain electrical power spectrum of a plurality of reference brain electrical signals;
based on multiple clustering algorithms, clustering the multiple reference electroencephalogram power spectrums to obtain a clustering result corresponding to each clustering algorithm;
and obtaining a plurality of power spectrum classifications according to the clustering result corresponding to each clustering algorithm based on a consensus clustering algorithm, wherein each power spectrum classification comprises at least one reference electroencephalogram power spectrum.
Optionally, the method further includes:
determining a vector value of a depth feature vector according to the control behavior data and the physiological information data, based on a preset deep belief network;
determining a vector value of a splicing feature vector according to the vector value of the second physiological feature vector and the vector value of the depth feature vector; the splicing feature vector is obtained by splicing the second physiological feature vector and the depth feature vector;
the inputting the vector value of the second physiological feature vector into a preset brain load recognition model, and the obtaining of the score of the control player on the brain load evaluation index comprises:
and inputting the vector value of the splicing characteristic vector into the brain load identification model, and obtaining the score of the control player on the brain load evaluation index.
Optionally, the method further comprises a step of obtaining the brain load recognition model, including:
acquiring second training samples, wherein one second training sample corresponds to one tester, and one second training sample reflects the mapping relation between the vector value of the splicing feature vector corresponding to the tester and the known score of the brain load evaluation index;
and training a Gaussian kernel support vector machine according to the second training samples to obtain the brain load recognition model.
Optionally, the method further includes a step of obtaining the second physiological feature vector, including:
acquiring third training samples, wherein one third training sample corresponds to a tester, and one third training sample comprises control behavior data and physiological information data corresponding to the tester;
for each third training sample, determining the feature value of each preset physiological feature;
selecting, using a canonical correlation analysis algorithm, a set number of physiological features from the physiological features as the second physiological features, according to the feature values of each physiological feature of the third training samples;
and obtaining the second physiological characteristic vector according to the second physiological characteristic.
Optionally, the training the Gaussian kernel support vector machine according to the second training samples to obtain the brain load recognition model includes:
determining a brain load score prediction expression of each second training sample, with a second network parameter of the Gaussian kernel support vector machine as a variable, according to the vector value of the splicing feature vector of the second training sample;
constructing a second loss function according to the brain load score prediction expression of the second training sample and the score of the brain load evaluation index corresponding to the second training sample;
and determining the second network parameters according to the second loss function to obtain the brain load identification model.
Optionally, the determining the second network parameter according to the second loss function to obtain the brain load recognition model includes:
and determining the second network parameter according to the second loss function based on a Lagrange multiplier method to obtain the brain load identification model.
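By way of illustration only, the following Python sketch approximates such a Gaussian-kernel brain load recognition model with scikit-learn's RBF-kernel support vector regression, whose dual problem is likewise solved through Lagrange multipliers; the array shapes, feature dimension and score range are assumptions for the example, not values fixed by the present disclosure.

```python
# Illustrative sketch (not the patented implementation): a Gaussian (RBF) kernel
# support vector regressor mapping spliced feature vectors to brain load scores.
import numpy as np
from sklearn.svm import SVR

# second training samples: spliced feature vectors and known brain load scores (assumed shapes)
X_train = np.random.rand(200, 64)               # 200 testers x 64-dim spliced feature vector
y_train = np.random.uniform(0, 100, size=200)   # known scores on the brain load evaluation index

# RBF kernel SVR; its dual optimisation is solved internally via Lagrange multipliers
model = SVR(kernel="rbf", C=10.0, gamma="scale", epsilon=0.5)
model.fit(X_train, y_train)

# score a new control player from the vector value of the spliced feature vector
x_new = np.random.rand(1, 64)
brain_load_score = model.predict(x_new)[0]
```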
Optionally, the step of acquiring the physiological information data includes:
acquiring physiological information data provided by various physiological information acquisition devices, wherein the physiological information data provided by any physiological information acquisition device comprises at least one of physiological signal data and physiological image data.
Optionally, the acquiring the physiological information data provided by each physiological information acquisition device includes:
controlling each physiological information acquisition device to synchronously carry out respective acquisition operation;
and acquiring physiological information data output by the physiological information acquisition equipment through respective acquisition operation.
Optionally, each physiological information acquisition device includes at least one of an electroencephalogram acquisition device, an electrodermal acquisition device, an electrocardiograph acquisition device, an eye movement tracking device, a video acquisition device for acquiring facial expressions, and a voice acquisition device for acquiring voices;
the physiological information data provided by the electroencephalogram acquisition device includes at least one of an electroencephalogram signal and an electroencephalogram image; the physiological information data provided by the electrodermal acquisition device includes at least one of an electrodermal signal and an electrodermal image; the physiological information data provided by the electrocardiograph acquisition device includes at least one of an electrocardiograph signal and an electrocardiograph image; the physiological information data provided by the eye tracking device includes at least one of change data of an ocular feature and ocular image data; the physiological information data provided by the video acquisition device includes at least one of a facial video signal and change data of facial features; the physiological information data provided by the voice acquisition device includes at least one of a voice signal and a sound wave image.
Optionally, the set operation includes at least one of:
a first item, outputting the control score;
a second item, providing, according to the control score, a selection result indicating whether the control player is selected;
a third item, determining the control level of the control player according to the control score;
a fourth item, determining a control task to be executed by the control player according to the control score;
and a fifth item, selecting, according to the control scores obtained when the same control player controls the target object to execute the target task through different motion control devices, a control combination whose control score meets a set requirement, wherein one control combination comprises a matched control player and motion control device.
Optionally, the method further includes:
providing a setting entrance in response to an operation of setting an application scene;
acquiring an application scene input through the setting entrance, wherein the input application scene reflects an operation to be executed based on a control score;
and determining the operation content of the set operation according to the input application scene.
Optionally, the method further includes:
and providing a virtual scene corresponding to the target task, wherein the target object is a virtual object in the virtual scene.
Acquiring a control command generated by the control player through a control motion control device, and updating the virtual scene according to the control command;
and acquiring feedback data generated by the virtual scene, and sending the feedback data to the motion control device.
Optionally, the acquiring the control behavior data and the physiological information data generated when the control player controls the target object to execute the target task includes:
and acquiring control behavior data and physiological information data generated when the control player controls the target object to execute the target task in the virtual scene.
Optionally, the method includes:
providing a configuration interface in response to an operation to configure the target task;
acquiring configuration information for the target task input through the configuration interface;
and providing a virtual scene corresponding to the target task according to the configuration information.
According to a second aspect of the present disclosure, there is provided a brain load-based control ergonomics analysis device comprising at least one computing device and at least one storage device, wherein,
the at least one storage device is configured to store instructions for controlling the at least one computing device to perform the method according to the first aspect of the present disclosure.
According to a third aspect of the present disclosure, a brain load-based control ergonomics system is provided, the system comprising a task execution device, physiological information acquisition devices, and the control ergonomics analysis device of the second aspect of the present disclosure, wherein the task execution device and the physiological information acquisition devices are in communication connection with the control ergonomics analysis device.
Optionally, the task execution device includes a manipulated target object and a motion control device for manipulating the target object, and the target object is connected to the motion control device in a communication manner.
Optionally, the motion control device is a flight control device, and the target object controlled by the flight control device is an unmanned aerial vehicle.
The beneficial effects of the embodiments include the following. The vector value of the second physiological feature vector corresponding to the brain load evaluation index is obtained from the control behavior data and the physiological information data generated when the control player controls the target object to execute the target task. Based on the brain load recognition model, the score of the control player on the brain load evaluation index is given according to this vector value, and the control score of the control player is further determined from that score. Selection of control personnel for the target object, rating of control personnel, matching between control personnel and motion control devices, and the like can then be carried out according to the control score. The method of the embodiments completes the analysis of control work efficiency automatically, saving labor cost and time cost; in addition, the analysis performed according to the method of the embodiments greatly reduces the dependence on expert experience and improves the accuracy and effectiveness of the analysis.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic block diagram of a brain load-based control ergonomics system according to an embodiment;
FIG. 2 is a schematic diagram of the component architecture of a brain load-based control ergonomics system according to another embodiment;
FIG. 3 is a schematic diagram of the hardware configuration of a brain load-based control ergonomics analysis device according to another embodiment;
FIG. 4 is a flow diagram of a brain load-based control ergonomics analysis method according to an embodiment;
FIG. 5 is a schematic diagram of a structural equation model in accordance with one embodiment.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< System embodiment >
Figures 1 and 2 are schematic block diagrams of an alternative brain load-based control ergonomics system 100 to which the methods of embodiments of the present disclosure may be applied.
As shown in fig. 1, the manipulation ergonomics system 100 may include an electronic device 110, a task performance device 120 and physiological information collection devices 130.
The electronic device 110 may be a server or a terminal device, and is not limited herein.
The server may be, for example, a blade server, a rack server, or the like, and the server may also be a server cluster deployed in the cloud. The terminal device can be any device with data processing capability, such as a PC, a notebook computer, a tablet computer and the like.
The electronic device 110 may include a processor 1101, a memory 1102, an interface device 1103, a communication device 1104, a display device 1105, an input device 1106.
The memory 1102 is used to store computer instructions, and includes, for example, a ROM (read-only memory), a RAM (random access memory), and a nonvolatile memory such as a hard disk. The processor 1101 is used to execute a computer program, which may be written in the instruction set of architectures such as x86, Arm, RISC, MIPS, SSE, etc. The interface device 1103 includes various bus interfaces, for example, a serial bus interface (including a USB interface and the like), a parallel bus interface, and the like. The communication device 1104 is capable of wired or wireless communication, for example, using at least one of an RJ45 module, a WIFI module, a 2G to 6G mobile communication module, a Bluetooth module, a network adapter, and the like. The display device 1105 is, for example, a liquid crystal display, an LED display, a touch panel, or the like. The input device 1106 may include, for example, a touch screen, a keyboard, a mouse, etc.
In this embodiment, the memory 1102 of the electronic device 110 is configured to store computer instructions for controlling the processor 1101 to operate to implement a method of manipulating ergonomics according to any embodiment of the present disclosure. The skilled person can design the instructions according to the disclosed aspects of the present disclosure. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
Although a plurality of devices of the electronic apparatus 110 are shown in fig. 1, the present disclosure may only refer to some of the devices, for example, the electronic apparatus 110 only refers to the memory 1102, the processor 1101, the communication device 1104 and the like.
In one embodiment, as shown in fig. 1, the task performing device 120 may be a real environment-based performing device, the task performing device 120 includes a motion control apparatus 1201 and a target object 1202 communicatively connected to the motion control apparatus 1201, i.e. a target manipulation object, and a manipulation person may manipulate the target object 1202 to perform a target task through the motion control apparatus 1201. For example, the target object 1202 is a drone, and the motion control device 1201 is a flight control device for operating the drone. As another example, the target task includes completing at least one of a splay flight, a spin flight, a collective flight, and the like in a set environment. As another example, the set environment includes wind, rain, fog, and the like. Of course, the target object 1202 may also be other controlled objects, such as an unmanned vehicle, any type of robot, etc., and is not limited herein.
In this embodiment, the human operator may send a control command to the target object 1202 through the motion control device 1201, so that the target object 1202 acts according to the control command. In the process of controlling and executing the target task, the target object 1202 acquires motion state data and feeds the motion state data back to the motion control device 1201, so that an operator can make control judgment and the like.
The motion control device 1201 may include, for example, at least one of a remote control and a remote control handle.
The motion control device 1201 may include a processor, a memory, an interface device, an input device, a communication device, and the like. The memory may store computer instructions that, when executed by the processor, perform: an operation of transmitting a corresponding control command to the target object 1202 according to an operation of the input device by the operator; acquiring motion state data returned by a target object, and performing corresponding processing operation; and uploading the collected manipulation result data to the electronic device 110, etc., which will not be further described herein.
The target object 1202 may include a processor, memory, communication devices, power devices, sensors, and the like. The memory may store computer instructions that, when executed by the processor, perform: according to a control command sent by the motion control device 1201, a power device and the like of the control target object 1202 execute corresponding actions; acquiring data acquired by each sensor to form motion state data; and control the communication means to transmit the motion state data to the motion control means 1201 and the like.
In this embodiment, the task execution device 120 is communicatively connected to the electronic device 110 to upload the manipulation result data to the electronic device 110. This may be, for example, that the task performing device 120 is communicatively connected to the electronic device 110 via the motion control apparatus 1201. For another example, the motion control apparatus 1201 and the target object 1202 may be both communicatively connected to the electronic device 110, and this is not limited herein.
In another embodiment, as shown in fig. 2, the task performing device 120 may be a task performing device based on semi-physical simulation of a virtual environment, and the task performing device 120 may include a terminal device 1203 and a real motion control apparatus 1201, where the terminal device 1203 is configured to provide a virtual scene corresponding to a target task, that is, a simulation scene, and in this embodiment, the target object 1202 is a virtual object in the virtual scene. In this embodiment, the motion control apparatus 1201 is in communication connection with the terminal device 1203 to implement data and/or command interaction between the motion control apparatus 1201 and the virtual scene, so that an operator can operate and control the target object 1202 to execute a target task in the virtual scene through the motion control apparatus 1201.
In this embodiment, the terminal device 1203 may have a hardware structure similar to that of the electronic device 110, which is not described herein again, and the terminal device 1203 and the electronic device 110 may be physically separated devices or may be the same device, that is, the electronic device 110 may also provide the virtual environment, which is not limited herein.
In fig. 1, each physiological information collection device 130 is used to provide the physiological information data required by the electronic device in implementing the control ergonomics analysis method according to any of the embodiments. Each physiological information collection device 130 is communicatively connected to the electronic device 110 to upload the physiological information data it provides to the electronic device 110.
Each physiological information acquisition device 130 includes at least one of an electroencephalogram acquisition device 1301, an electrodermal acquisition device 1302, an electrocardiograph acquisition device 1303, a video acquisition device 1304 for acquiring facial expressions, an eye movement tracking device 1305, and a voice acquisition device 1306 for acquiring voices.
The physiological information data provided by the electroencephalogram acquisition device 1301 includes at least one of an electroencephalogram signal and an electroencephalogram image.
The physiological information data provided by the electrodermal acquisition device 1302 includes at least one of an electrodermal signal and an electrodermal image.
The electrocardiographic acquisition device 1303 provides physiological information data including at least one of electrocardiographic signals and electrocardiographic images.
The physiological information data provided by the video capture device 1304 may include at least one of facial feature variation data and facial image data.
The physiological information data provided by the eye tracking device 1305 may include at least one of change data of the ocular feature and ocular image data.
The physiological information data provided by the voice capture device 1306 may include at least one of voice signals and acoustic images.
Any physiological information acquisition device 130 may include a front-end acquisition device and a data processing circuit connected to it. The front-end acquisition device is configured to acquire raw data and may be, for example, an electrode device in contact with the control player. The data processing circuit is configured to perform corresponding preprocessing on the raw data, the preprocessing including at least one of signal amplification, filtering, denoising, and notch processing. The data processing circuit may be implemented by a basic circuit built from electronic components, by a processor executing instructions, or by a combination of the two, which is not limited herein.
The electronic device 110 and the task performing device 120, and the electronic device 110 and each physiological information collecting device 130 may be in communication connection in a wired or wireless manner, which is not limited herein.
In one embodiment, as shown in fig. 3, the present disclosure provides a brain load-based control ergonomics analysis device 140 comprising at least one computing device 1401 and at least one storage device 1402, wherein the at least one storage device 1402 is configured to store instructions for controlling the at least one computing device 1401 to perform the control ergonomics analysis method according to any embodiment of the present disclosure. The control ergonomics analysis device 140 may include at least one electronic device 110, and may further include the terminal device 1203, etc., which is not limited herein.
< method examples >
Fig. 4 is a flow diagram of a brain load-based control ergonomics analysis method according to one embodiment, which may be implemented, for example, by the control ergonomics analysis device 140 shown in fig. 3. In this embodiment, the control ergonomics analysis method is described by taking as an example the analysis of a target task executed by a control player through the task execution device, and the method may include the following steps S410 to S450:
step S410, acquiring control behavior data and physiological information data generated when a control player controls a target object to execute a target task.
In this embodiment, the control behavior data may be provided by the task execution device 120, or the task execution device 120 may provide the base data for calculating the control behavior data to the control ergonomics device 140, and the control ergonomics device 140 calculates the control behavior data according to the base data to obtain the control behavior data in the step S410.
The control behavior data may include data reflecting the control behavior of the control player on the task execution device 120 during the execution of the target task, and may further include a subjective evaluation result of the cognitive state of the control player after the execution of the target task. Wherein, the data reflecting the control behavior of the task execution device 120 in the process of executing the target task by the control player may include: a moving trajectory of the target object, an acceleration of the joystick, an angle of the joystick, and the like.
A subjective assessment scale for the brain load status of the control player after performing the target task may be as shown in Table 2 and/or Table 3 below. The control player can subjectively evaluate his or her own cognitive state according to the subjective evaluation scale.
TABLE 2
(Subjective brain load assessment scale; presented as an image in the original publication.)
TABLE 3
(Subjective brain load assessment scale; presented as an image in the original publication.)
The target object may be, for example, a drone or the like.
The target task comprises task content, a corresponding task environment and the like.
In one embodiment, as shown in fig. 1, the control player may control the target object in a real scene through the motion control device 1201, i.e., the target object is real with the task environment.
In another embodiment, as shown in fig. 2, the control player may manipulate the target object in a virtual scene provided by the terminal device 1203 through the motion control device 1201, that is, the target object and the task environment are both virtual. In this embodiment, in order to implement interaction of data and commands between the motion control device 1201 and the virtual scene, the method may further include the following steps S4011 to S4013:
step S4011, providing a virtual scene corresponding to the target task, wherein the target object is a virtual object in the virtual scene.
Step S4012 obtains a control command generated by the operator by operating the motion control apparatus 1201, and updates the virtual scene according to the control command.
In this step S4012, updating the virtual scene includes updating the task environment and the state of the target object, which includes the position and posture of the target object, and the like.
In step S4013, feedback data generated in the virtual scene is obtained, and the feedback data is sent to the motion control apparatus 1201.
The virtual scene includes all virtual things of the corresponding target task provided by the terminal device 1203, including virtual environments and virtual objects, etc.
In step S4013, the feedback data may be collected by a virtual sensor of the virtual object, and sent to the motion control apparatus 1201 by the terminal device 1203, so as to allow the control player to perform the control judgment. The feedback data may also be used for the device 140 to obtain at least part of the above-mentioned manipulation result data.
In this embodiment, the acquiring of the manipulation result data generated by the manipulation of the target object by the manipulation player in step S410 may include: and acquiring control result data generated by controlling the virtual object to execute the target task under the virtual scene by the control player.
In this embodiment, the method may further include the following steps S4021 to S4023:
step S4021, in response to the operation of configuring the target task, provides a configuration interface.
The device 140 may have a simulation application installed on it, and an interface of the simulation application may provide an entry for triggering the operation of configuring the target task; through this entry, a configuration person can access the configuration interface.
The configuration interface may include at least one of an input box, a checklist, and a drop-down list for a configuration person to configure the target task.
Step S4022, acquiring configuration information for the target task input through the configuration interface.
In step S4022, the configuration information input through the configuration interface may be acquired in response to an operation to complete configuration. The configuration information includes, for example, information reflecting the task content and task environment, and the like.
In step S4022, for example, the configurator may trigger the operation of completing the configuration through a key such as "confirm" or "submit" provided by the configuration interface.
Step S4023, providing a virtual scene corresponding to the target task according to the configuration information.
The virtual scene comprises a virtual object corresponding to the target task, a virtual environment and the like.
As can be seen from the above steps S4021 to S4023, the configurator can flexibly configure the target task through the configuration interface as needed, so as to provide virtual scenes corresponding to different target tasks through the device 140.
In the embodiment shown in fig. 2, the acquiring of the physiological information data generated by the player operating the target object to perform the target task in step S410 may include: and acquiring physiological information data generated by controlling the virtual object to execute the target task under the virtual scene by controlling the player.
The physiological information data reflect the cognitive ability of the control player with respect to the target task: the stronger the cognitive ability, the easier it is for the control player to complete the target task; the weaker the cognitive ability, the harder it is. The difficulty of completing the target task is reflected in corresponding reactions in the physiological state of the control player, such as heart rate reactions, electroencephalogram reactions, electrodermal reactions, facial expression reactions, eye movement reactions, voice reactions, and the like. Therefore, in this embodiment, scores of the evaluation indexes reflecting the cognitive ability of the control player with respect to the target task can be obtained based on the physiological information data.
The physiological information data is multidimensional data including a plurality of index data. The physiological information data may include at least one of information data reflecting a brain load condition, information data reflecting a nerve fatigue condition, and information data reflecting an emotion, for example.
Correspondingly, each evaluation index for evaluating the cognitive ability of the control player includes, for example: mental fatigue evaluation index, brain load index and emotion evaluation index. According to the physiological information data, a score corresponding to each evaluation index can be obtained.
The physiological information data may be provided by respective physiological information acquisition devices.
In this embodiment, the physiological information data provided by any physiological information acquisition device may include at least one of physiological signal data and physiological image data.
For example, each physiological information acquisition device includes an electroencephalogram acquisition device 1301 as shown in fig. 1, and the physiological information data provided by the electroencephalogram acquisition device 1301 may include at least one of an electroencephalogram signal (electrical signal) and an electroencephalogram image.
As another example, each physiological information acquisition device includes an electrodermal acquisition device 1302 as shown in fig. 1, and the physiological information data provided by the electrodermal acquisition device 1302 may include at least one of an electrodermal signal (electrical signal) and an electrodermal image.
For another example, each physiological information acquisition device includes an electrocardiograph device 1303 shown in fig. 1, and the physiological information data provided by the electrocardiograph device 1303 may include at least one of an electrocardiograph signal and an electrocardiograph image.
For another example, each physiological information acquisition apparatus includes a video acquisition apparatus 1304 as shown in fig. 1, and the physiological information data provided by the video acquisition apparatus 1304 includes at least one of change data of facial features and facial image data. The facial feature change data includes at least one of data on occurrence of an eye closing action, and data on occurrence of a yawning action, for example.
For another example, each physiological information collection device includes an eye tracking device 1305 as shown in fig. 1, the eye tracking device 1305 providing physiological information data including at least one of change data of an ocular feature and ocular image data. The change data of the ocular features include, for example, data on the occurrence of blinking motions, eye-closing motions, saccadic motions, and gazing motions.
As another example, each physiological information acquisition device includes a voice acquisition device 1306 shown in fig. 1, and the physiological information data provided by the voice acquisition device 1306 includes at least one of a voice signal and a sound wave image.
After the raw data is acquired by any physiological information acquisition device through the acquisition device at the front end, at least one of signal amplification, filtering, denoising and notch processing can be performed on the raw data, and the physiological information data is generated and provided for the device 140 so that the device 140 can obtain the physiological information data.
Since the physiological information data come from different physiological information acquisition devices, in order to give the evaluation of the control player's cognitive ability a common time reference across these data, in one embodiment, acquiring the physiological information data provided by each physiological information acquisition device may include: controlling each physiological information acquisition device to perform its acquisition operation synchronously; and acquiring the physiological information data generated by each physiological information acquisition device through the corresponding acquisition operation.
In this embodiment, for example, a unified clock reference may be set to trigger each physiological information acquisition device to synchronously start and end the corresponding acquisition operation, and the like.
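By way of illustration only, a minimal Python sketch of such a synchronized start under one shared trigger is given below; the device names and acquisition stubs are hypothetical placeholders rather than calls into any real device SDK.

```python
# Illustrative sketch: start several acquisition threads against one shared trigger
# so that all physiological information data share the same time reference.
import threading
import time

def acquire(device_name, start_event, records):
    start_event.wait()                              # every device waits for the common trigger
    t0 = time.monotonic()                           # shared clock reference after release
    # ... real device SDK calls would run here ...
    records[device_name] = {"start_offset_s": time.monotonic() - t0}

start_event = threading.Event()
records = {}
threads = [threading.Thread(target=acquire, args=(name, start_event, records))
           for name in ("eeg", "eda", "ecg", "eye_tracker", "camera", "audio")]
for t in threads:
    t.start()
start_event.set()                                   # unified trigger: all devices start together
for t in threads:
    t.join()
```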
Step S420, determining a vector value of a second physiological feature vector corresponding to the brain load evaluation index according to the control behavior data and the physiological information data.
Wherein the second physiological feature vector comprises a plurality of second physiological features that influence the brain load evaluation index.
The score of each evaluation index may reflect the cognitive ability of the operator with respect to the target task. Each evaluation index may be set in advance.
The vector value of the second physiological feature vector in this embodiment may be obtained through a corresponding convolution network.
The vector value of the second physiological feature vector reflects the feature value of each second physiological feature included in the vector.
Because brain rhythms differ between individuals, the brain rhythm of the control player can be analyzed to obtain the feature values of the electroencephalogram features. Accordingly, in the case where the physiological information data includes an electroencephalogram signal and the second physiological feature vector includes electroencephalogram features, the step of determining the vector value of the second physiological feature vector may include steps S4041 to S4043 as follows:
step S4041, acquiring an electroencephalogram power spectrum of the electroencephalogram signal as a target electroencephalogram power spectrum.
Step S4042, determining a power spectrum classification corresponding to the target electroencephalogram power spectrum from a plurality of preset power spectrum classifications as a target power spectrum classification.
Step S4043, determining a vector value of the second physiological characteristic vector according to the brain rhythm corresponding to the target power spectrum classification.
In one embodiment of the present disclosure, the method may further include the step of obtaining a power spectrum classification, including steps S4051 to S4053 as shown below:
step S4051, the electroencephalogram power spectrums of a plurality of reference electroencephalogram signals are obtained and used as the reference electroencephalogram power spectrums.
In this embodiment, a time-frequency conversion algorithm (e.g., a fast fourier transform algorithm) may be adopted to convert each reference electroencephalogram signal into a corresponding frequency signal, so as to obtain a reference electroencephalogram power spectrum corresponding to the reference electroencephalogram signal.
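By way of illustration only, the following Python sketch converts one reference electroencephalogram channel into a power spectrum with an FFT-based (Welch) method; the sampling rate, signal length and analysis band are assumptions for the example.

```python
# Illustrative sketch: reference EEG power spectrum via Welch's averaged FFT periodograms.
import numpy as np
from scipy.signal import welch

fs = 250                                  # sampling rate in Hz (assumed)
eeg = np.random.randn(60 * fs)            # one reference EEG channel, 60 s (placeholder data)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

# keep the band usually analysed for brain rhythms (e.g. 1-45 Hz)
band = (freqs >= 1) & (freqs <= 45)
reference_power_spectrum = psd[band]
```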
Step S4052, based on multiple clustering algorithms, clustering is respectively carried out on the multiple reference electroencephalogram power spectrums, and clustering results corresponding to each clustering algorithm are obtained.
In this embodiment, a plurality of clustering algorithms may be used to perform cluster analysis on the reference electroencephalogram power spectra, so as to comprehensively characterize the differences between the rhythms across the plurality of reference electroencephalogram power spectra.
Because clustering algorithms use random initialization, the clustering results obtained by different clustering algorithms on the same reference electroencephalogram power spectra may differ, and even repeated runs of the same clustering algorithm on the same reference electroencephalogram power spectra may produce different results.
Step S4053, based on the consensus clustering algorithm, obtaining a plurality of power spectrum classifications according to the clustering result corresponding to each clustering algorithm.
Wherein each power spectrum classification comprises at least one reference electroencephalogram power spectrum.
In this embodiment, based on the consensus clustering algorithm, a final clustering result of a plurality of reference electroencephalogram power spectrums can be obtained according to a clustering result corresponding to each clustering algorithm, and a plurality of power spectrum classifications can be obtained according to the final clustering result.
Consensus clustering is a general method for evaluating the stability and robustness of multiple runs of one or more clustering algorithms; it has a strong ability to integrate multiple clustering results and can provide better clustering results than any single clustering scheme.
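By way of illustration only, the following Python sketch combines several base clustering algorithms through a co-association matrix and derives the final power spectrum classifications from it; the number of clusters, the base algorithms and the placeholder data are assumptions for the example.

```python
# Illustrative sketch: consensus clustering of reference EEG power spectra via a
# co-association matrix built from several base clustering algorithms.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

spectra = np.random.rand(120, 45)                  # 120 reference power spectra (placeholder)

# step S4052: several base clustering algorithms, each giving its own partition
base_labels = [
    KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(spectra),
    AgglomerativeClustering(n_clusters=4).fit_predict(spectra),
    SpectralClustering(n_clusters=4, affinity="nearest_neighbors",
                       random_state=0).fit_predict(spectra),
]

# step S4053: co-association matrix = fraction of base partitions grouping i and j together
n = spectra.shape[0]
coassoc = np.zeros((n, n))
for labels in base_labels:
    coassoc += (labels[:, None] == labels[None, :]).astype(float)
coassoc /= len(base_labels)

# final consensus partition: hierarchical clustering of the co-association distances
dist = 1.0 - coassoc
np.fill_diagonal(dist, 0.0)
consensus = fcluster(linkage(squareform(dist, checks=False), method="average"),
                     t=4, criterion="maxclust")    # power spectrum classifications
```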
In this embodiment, the plurality of physiological features included in any physiological feature vector may be preset. For example, an expert may select some of the preset initial physiological features, based on experiments or specific requirements, to form the corresponding physiological feature vector; alternatively, a correlation analysis method may be used to screen, from the preset initial physiological features, those initial physiological features that are highly correlated with the cognitive state of the control player to form the corresponding physiological feature vector.
The physiological characteristics in this embodiment may include at least one of electroencephalogram characteristics, electrodermal characteristics, cardiac characteristics, eye movement characteristics, image characteristics, voice characteristics, and behavior characteristics.
The electroencephalogram features may include brain rhythm features and/or evaluation electroencephalogram features. The feature values of the brain rhythm features can be obtained by performing a wavelet transform on the electroencephalogram signal. In one example, the feature values of the evaluation electroencephalogram features can be obtained by comparing the results of four information entropies computed from the electroencephalogram signal; cross-brain-region features can be extracted by a spectral coherence estimation technique in the time-frequency domain; and whole-brain features can be extracted by a global synchronization estimation method across multi-channel electroencephalograms.
The eye movement characteristics may include at least one of eye movement characteristics reflecting blink time, eye movement characteristics reflecting blink rate, eye movement characteristics reflecting pupil diameter, eye movement characteristics reflecting gaze time, eye movement characteristics reflecting eye closure time, eye movement characteristics reflecting saccade velocity.
The electrodermal features may include time-domain electrodermal features, which may include amplitude means and/or variances of the electrodermal data, and/or frequency-domain electrodermal features, which may include Power Spectral Densities (PSDs) of sympathetic nervous system (EDASymp) bands.
The electrocardiographic features may include at least one of time domain electrocardiographic features, frequency domain electrocardiographic features, and frequency domain respiratory features. The time-domain electrocardiographic features may include at least one of mean Heart Rate (HR), Heart Rate Variability (HRV), and NN interval Standard Deviation (SDNN). The frequency domain cardiac electrical features may include a Power Spectral Density (PSD) of Low Frequency (LF) and/or High Frequency (HF) bands. The frequency domain respiratory characteristics may include a Power Spectral Density (PSD) of a primary respiratory frequency (DRF) band of 0-2 Hz and Respiratory Frequency (RF) bands spaced 0.5Hz apart.
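By way of illustration only, the following Python sketch computes the time-domain features (mean heart rate, SDNN) and the LF/HF power spectral densities mentioned above from a series of R-R intervals; the interval data, resampling rate and band limits are assumptions for the example.

```python
# Illustrative sketch: time- and frequency-domain ECG (heart rate variability) features
# computed from R-R intervals.
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

rr_s = np.random.normal(0.8, 0.05, size=300)       # R-R intervals in seconds (placeholder)
nn_ms = rr_s * 1000.0

mean_hr = 60.0 / rr_s.mean()                        # mean heart rate, HR (bpm)
sdnn = nn_ms.std(ddof=1)                            # SDNN (ms)

# resample the irregular R-R series evenly at 4 Hz before spectral analysis
t = np.cumsum(rr_s)
fs = 4.0
t_even = np.arange(t[0], t[-1], 1.0 / fs)
rr_even = interp1d(t, rr_s, kind="cubic")(t_even)

freqs, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)

def band_power(lo, hi):
    m = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[m], freqs[m])

lf_power = band_power(0.04, 0.15)                   # low-frequency (LF) band PSD
hf_power = band_power(0.15, 0.40)                   # high-frequency (HF) band PSD
```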
The speech features may include at least one of speech features reflecting overall clarity time, speech features reflecting overall dwell time, speech features reflecting overall dialog time, speech features reflecting number of pauses, speech features reflecting average dwell duration, speech features reflecting articulation rate, speech features reflecting clear articulation rate, speech features reflecting percentage of unsmooth articulation.
The image features may include image features reflecting a percent closed eye over a fixed time window (PERCLOS), image features reflecting an aspect ratio, image features reflecting a mouth ratio, image features reflecting a yawning number.
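By way of illustration only, a short Python sketch of the PERCLOS image feature is given below; the per-frame eye-closure flags are assumed to come from an upstream face and eye detector.

```python
# Illustrative sketch: PERCLOS = percentage of frames with closed eyes in a fixed time window.
import numpy as np

fps = 30                                            # video frame rate (assumed)
window_s = 60                                       # fixed time window in seconds (assumed)
eye_closed = np.random.rand(fps * window_s) < 0.1   # per-frame closure flags (placeholder)

perclos = eye_closed.mean() * 100.0                 # percent eye closure over the window
```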
The behavior feature may include at least one of a behavior feature reflecting a movement trajectory of the manipulation target object, a behavior feature reflecting an acceleration of the joystick, and a behavior feature reflecting an angle of the joystick.
Since it is difficult to estimate the importance of each initial physiological characteristic related to the cognitive state of the control player, in one embodiment of the present disclosure, a correlation analysis method may be used to screen out at least some initial physiological characteristics that are highly correlated with the cognitive state of the control player from preset initial physiological characteristics, so as to form a corresponding physiological characteristic vector.
Specifically, the method may further include a step of obtaining a second physiological feature vector, including steps S4061 to S4064 as follows:
step S4061, a third training sample is obtained.
One third training sample corresponds to one testing person, and one third training sample comprises control behavior data and physiological information data corresponding to the testing person.
Step S4062, for each third training sample, determining a preset feature value of each physiological feature.
Step S4063, selecting a set number of physiological characteristics from the physiological characteristics as second physiological characteristics, according to the characteristic value of each physiological characteristic of the third training samples, by using a canonical correlation analysis algorithm.
A canonical correlation analysis algorithm (CCA) can automatically learn physiological characteristics that best reflect common intrinsic processes.
For example, in the case where the initial physiological characteristics include a brain electrical characteristic and an electrocardiogram characteristic, a feature value X1 of each brain electrical characteristic and a feature value X2 of each electrocardiogram characteristic of the third training sample are determined.
X1 = [x1_1, x1_2, …, x1_L], X1 ∈ R^(U×L)

X2 = [x2_1, x2_2, …, x2_L], X2 ∈ R^(V×L)

where L is the number of third training samples, U is the data dimension of the electroencephalogram characteristics, and V is the data dimension of the electrocardiographic characteristics.
Using the canonical correlation analysis algorithm, the optimal weights W1* ∈ R^U and W2* ∈ R^V are found such that the canonical correlation of X1 and X2 is maximized:

(W1*, W2*) = argmax_(W1, W2) corr(W1^T·X1, W2^T·X2) = argmax_(W1, W2) (W1^T·X1·X2^T·W2) / sqrt((W1^T·X1·X1^T·W1)·(W2^T·X2·X2^T·W2))

The solution of CCA is a set of canonical variates W1* and W2*, where each Wi* spans a subspace in the i-th data space that maximizes the canonical correlation between the two variates.

The canonical correlation equations can be solved as a generalized eigenvalue problem:

[0, X1·X2^T; X2·X1^T, 0]·[W1; W2] = Λ·[X1·X1^T, 0; 0, X2·X2^T]·[W1; W2]

where Λ is a diagonal matrix formed by all the generalized eigenvalues.
According to the solved W1* and W2*, the second physiological characteristics of the corresponding physiological characteristic vector to be constructed are selected from the electroencephalogram characteristics and the electrocardiographic characteristics.
For another example, in the case where the initial physiological characteristics further include electrodermal characteristics, the second physiological characteristics of the corresponding physiological characteristic vector to be constructed may be reselected, based on the canonical correlation analysis algorithm, from the target physiological characteristics already selected from the electroencephalogram and electrocardiographic characteristics together with the electrodermal characteristics, according to the characteristic values of the selected second physiological characteristics and the characteristic values of each electrodermal characteristic of the third training samples.
Step S4064, according to the second physiological feature, a second physiological feature vector is obtained.
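As an illustration of the CCA-based screening described above, the following is a minimal sketch using scikit-learn's CCA (which expects sample-by-feature matrices, i.e., the transpose of the U×L / V×L layout above). The magnitude-based ranking of canonical weights, the synthetic data, and the variable names are illustrative assumptions rather than the exact selection rule of this embodiment.

```python
# Illustrative sketch: screening physiological features via CCA weights.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X1 = rng.normal(size=(100, 12))   # L = 100 samples, U = 12 EEG features
X2 = rng.normal(size=(100, 8))    # L = 100 samples, V = 8 ECG features

cca = CCA(n_components=2)
cca.fit(X1, X2)

# x_weights_ / y_weights_ hold the canonical weight vectors W1*, W2*.
w1 = np.abs(cca.x_weights_).sum(axis=1)   # importance score per EEG feature
w2 = np.abs(cca.y_weights_).sum(axis=1)   # importance score per ECG feature

k = 5  # set number of second physiological features to keep per modality
selected_eeg = np.argsort(w1)[::-1][:k]
selected_ecg = np.argsort(w2)[::-1][:k]
print("selected EEG feature indices:", selected_eeg)
print("selected ECG feature indices:", selected_ecg)
```

The retained indices would then define the second physiological feature vector used in the following steps.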
And step S430, inputting the vector value of the second physiological characteristic vector into a preset brain load identification model, and obtaining the score of the control player on the brain load evaluation index.
The brain load identification model reflects the mapping relation between the second physiological characteristic vector and the grade of the brain load evaluation index.
In one embodiment of the present disclosure, the method may further include steps S4035 to S4036 as shown below:
step S4035, based on the preset depth belief network, determining the vector value of the depth feature vector according to the control behavior data and the physiological information data.
In this embodiment, the control behavior data and the physiological information data obtained in step S410 may be directly input into a depth belief network trained in advance, and the output of the depth belief network may be used as a vector value of the depth feature vector.
Step S4036, determine a vector value of the stitching feature vector according to the vector value of the second physiological feature vector and the vector value of the depth feature vector.
And the splicing characteristic vector is obtained by splicing the second physiological characteristic vector and the depth characteristic vector.
Step S4033, inputting the vector value of the second physiological feature vector into the preset brain load recognition model to obtain the score of the control player on the brain load evaluation index, may further include:
and inputting the vector value of the spliced feature vector into the brain load recognition model to obtain the score of the control player on the brain load evaluation index.
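A minimal sketch of the splicing step, assuming the CCA-selected second physiological feature vector and the DBN depth feature vector are simple one-dimensional arrays; the values and names are illustrative.

```python
import numpy as np

cca_features = np.array([0.42, 1.10, -0.33, 0.07])  # second physiological feature vector (illustrative)
dbn_features = np.array([0.91, 0.15, 0.68])          # depth feature vector from the DBN top layer

# The spliced feature vector is the simple concatenation of the two parts.
spliced = np.concatenate([cca_features, dbn_features])
print(spliced.shape)  # (7,)
```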
In one embodiment of the present disclosure, the method may further include the step of obtaining a brain burden recognition model, including steps S530 to S540 as follows:
in step S530, a second training sample is obtained.
Step S540, training the Gaussian kernel support vector machine according to the second training samples to obtain the brain load recognition model.
One second training sample corresponds to one tester and reflects the mapping relationship between the vector value of the spliced feature vector corresponding to that tester and the score of the known brain load evaluation index.
In this embodiment, the vector value of the spliced feature vector of the second training sample may be obtained according to the control behavior data and the physiological information data generated by the corresponding tester performing the corresponding target task.
In this embodiment, in order to reduce the influence of the skill of the tester on the brain load assessment result, the tester needs to perform the corresponding experimental task before the formal experiment until the performance of the experimental task is stable. In order to reduce the influence of the individual capability difference of the testers on the brain load evaluation result, a titration process can be adopted to determine the task difficulty parameter setting range in the process of executing the experiment task by the individuals.
In one example, the difficulty of the figure-eight orbit flight target task is standardized using the n-back task as the experimental task. Specifically, the target task is standardized according to the physiological information data generated during the n-back experimental task, and standardized parameter settings under different difficulty conditions are determined.
The n-back task is a standardized working memory and attention task with n incremental difficulty levels. The tester is asked to continuously monitor the stimuli (single letters) appearing on the screen and to press a button when the target stimulus appears. The setting of n is used to gradually change the workload. Under the 0-back condition, the tester reacts with the dominant hand to a single target stimulus (e.g., 'X') by pressing the button. Under the 1-back condition, a target is defined as any letter identical to the immediately preceding letter (i.e., one trial back). Under the 2-back and 3-back conditions, a target is defined as any letter identical to the letter 2 or 3 trials back, and so on.
Each tester completes at least one hour of training tasks (the n-back task and the figure-eight orbit flight task) every day, for 5 days. When the accuracy of the n-back task reaches 80%, training proceeds to the next difficulty level; when the task score of the flight task at the current difficulty reaches 80% of the total score, training likewise proceeds to the next difficulty level.
The task difficulty parameters of the tester are calibrated using the titration process. The tester performs the n-back task, and n is gradually increased until only 30% of the current task can be completed correctly; the task difficulty n at this point is recorded as N. The tester then performs the flight task; during the flight task, the difficulty is changed by changing the wind of the surrounding environment until the tester's flight-task score reaches only 30% of the total score, and the task parameter (the parameter representing the ambient wind) is recorded as lm. The task parameters and task scores of each experiment are recorded, and the final task parameter lm is obtained by averaging.
Standardized parameter setting: acquire the vector values of the second physiological feature vector from the physiological information data of the n-back experiment, establish a linear model, use the model parameters to fit the vector values of the second physiological feature vector of the physiological information data generated during the flight task, standardize the flight-task parameters, and determine the drone flight-task difficulty parameters equivalent to the standard n-back task difficulties (0-N).
Specifically, a first-order polynomial regression model may be respectively established for each tester, and a parameter of the normalized model is estimated by using a vector value of a second physiological feature vector obtained from physiological information data in an n-back experiment process, where the normalized model may be represented as:
Y3_i = β_0 + β_1·X3_i

where X3_i is the vector value of the second physiological feature vector, Y3_i is the standard output (0, 1, 2, …, N), and β_0 and β_1 are the parameters of the normalized model. Specifically, the parameters of the normalized model can be solved by minimizing the total squared error

E(β_0, β_1) = Σ_(i=1)^(n) (Y3_i − β_0 − β_1·X3_i)²

To minimize the total error, β_0 and β_1 should satisfy the conditions

∂E/∂β_0 = 0 and ∂E/∂β_1 = 0.
The trained model parameters are then used to fit the electrophysiological data of the flight training task to obtain the standard output corresponding to each task parameter lm. The task parameters lm whose outputs are integers (0, 1, 2, …) are selected as the difficulty-level parameters of the flight task.
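The normalization model above is an ordinary first-order least-squares fit; the following sketch, with illustrative data and the assumption that X3 is a scalar summary of the second physiological feature vector, shows the fit and the mapping of flight-task physiological data to standard outputs.

```python
import numpy as np

# Illustrative data: X3 = scalar physiological feature values measured during
# the n-back task, Y3 = the standard difficulty levels 0..N of those trials.
X3 = np.array([0.8, 1.3, 1.9, 2.6, 3.1])
Y3 = np.array([0, 1, 2, 3, 4])

# Fit Y3 = beta0 + beta1 * X3 by least squares (polyfit returns [beta1, beta0]).
beta1, beta0 = np.polyfit(X3, Y3, deg=1)

# Predict standard outputs for physiological values recorded during the flight
# task at different task parameters lm; parameters whose predicted output is
# (close to) an integer would be kept as difficulty-level parameters.
flight_X3 = np.array([0.9, 1.6, 2.4, 3.0])
standard_output = beta0 + beta1 * flight_X3
print(np.round(standard_output, 2))
```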
In order to improve the accuracy of the brain load recognition model while keeping it interpretable, depth features are extracted from the raw sensor data by a deep belief network in addition to the multi-modal features extracted by CCA; the two parts of features are jointly used as the input features of the brain load recognition model, and a Gaussian kernel support vector machine (FGSVM) is used as the classifier for recognizing the brain load condition.
The deep belief network is formed by stacking a Restricted Boltzmann Machine (RBM) and a Sigmoid belief network.
The DBN contains 3 stacked RBMs, with 3 hidden layers h_1, h_2, h_3 and an input vector X4 = h_0. RBM1 is trained with a contrastive divergence algorithm. For the second-layer network, the weight w_1 is frozen and RBM2 is trained. For the third-layer network, the weights w_1 and w_2 are frozen and the third-layer network RBM3 is trained. The mathematical model of the DBN is:

P(X4, h_1, h_2, …, h_n) = P(X4 | h_1)·P(h_1 | h_2)·…·P(h_(n−2) | h_(n−1))·P(h_(n−1) | h_n)
where P(h_(n−1) | h_n) can be determined by the RBM through the following two formulas:

P(h_j = 1 | v) = σ(b_j + Σ_i W_ij·v_i)

P(v_i = 1 | h) = σ(a_i + Σ_j W_ij·h_j)

where σ(·) is the sigmoid function, v denotes the lower (visible) layer, h the upper (hidden) layer, W the connection weights, and a, b the biases.
and training the RBM of the DBN by adopting a greedy training method. The RBM can construct features and reconstruct input. Therefore, we train the RBM using the contrast divergence algorithm. The contrast divergence method based on Gibbs sampling is as follows:
1) The physiological information data is input into RBM1.

2) The activation probability of the hidden layer is determined using the following equation:

P(h_j = 1 | X4) = σ(b_j + Σ_i W_ij·X4_i)

3) The activation probability of the input layer is determined using the following equation:

P(X4_i = 1 | h) = σ(a_i + Σ_j W_ij·h_j)

4) The edge weights are updated using the following equation:

W_ij = W_ij + α·(P(h_j = 1 | X4) − P(X4_i = 1 | h))
where α is the learning rate. After the first-layer RBM is trained, its weights are frozen, and the second- and third-layer RBMs are trained with the same contrastive divergence algorithm; the output of the previous layer is used as the input of the next layer's RBM. After the RBMs of all layers are trained, the depth feature vector is extracted from the top layer.
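For reference, a simplified numpy sketch of contrastive-divergence (CD-1) training for a single RBM layer is shown below. It uses the standard outer-product form of the CD-1 update rather than the abbreviated update written above; the data, layer sizes, and learning rate are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_visible, n_hidden, alpha = 16, 8, 0.05
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
a = np.zeros(n_visible)            # visible biases
b = np.zeros(n_hidden)             # hidden biases

X4 = (rng.random((200, n_visible)) > 0.5).astype(float)  # illustrative binary data

for epoch in range(10):
    for v0 in X4:
        # Positive phase: hidden activation probabilities given the data.
        p_h0 = sigmoid(b + v0 @ W)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # Negative phase (one Gibbs step): reconstruct visible, re-infer hidden.
        p_v1 = sigmoid(a + h0 @ W.T)
        p_h1 = sigmoid(b + p_v1 @ W)
        # CD-1 updates (outer-product form).
        W += alpha * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        a += alpha * (v0 - p_v1)
        b += alpha * (p_h0 - p_h1)
```

After this layer converges, its weights would be frozen and its hidden probabilities passed as input to the next RBM, as described above.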
In an embodiment of the present disclosure, training the Gaussian kernel support vector machine according to the second training samples to obtain the brain load recognition model may include:
determining a brain load score prediction expression of the second training sample by taking a second network parameter of the Gaussian kernel vector machine as a variable according to the vector value of the splicing feature vector of the second training sample;
constructing a second loss function according to the brain load score prediction expression of the second training sample and the score of the brain load evaluation index corresponding to the second training sample;
and determining a second network parameter according to the second loss function to obtain a brain load identification model.
Determining a second network parameter according to the second loss function, and obtaining a brain load recognition model comprises:
and determining a second network parameter according to a second loss function based on a Lagrange multiplier method to obtain a brain load identification model.
The support vector machine seeks a separating hyperplane that optimally classifies the data points into positive and negative classes. The separating hyperplane is given by:

W^T·X5 + w_0 = 0

where W is the coefficient vector, X5 is the vector value of the spliced feature vector, and w_0 is the bias.
The function g is defined as:

g(X5) = W^T·X5 + w_0

so that a data point is assigned to the positive or negative class according to the sign of g(X5). Finding the hyperplane that separates the data points with the maximum margin is then the optimization problem:

min_(W, w_0) (1/2)·‖W‖²  subject to  y_t·(W^T·X5_t + w_0) ≥ 1 for all training points t

where y_t ∈ {+1, −1} is the class label of training point X5_t.
The Lagrange multiplier method is used to solve the above problem, leading to the dual problem

max_α Σ_t α_t − (1/2)·Σ_t Σ_s α_t·α_s·y_t·y_s·⟨X5_t, X5_s⟩  subject to  Σ_t α_t·y_t = 0, α_t ≥ 0

and to the decision function

f(X5) = Σ_t α_t·y_t·⟨X5, X5_t⟩ + w_0

where α_t are the Lagrange multipliers and ⟨X5, X5_t⟩ is a scalar product. The scalar product can be replaced by the following Gaussian kernel function:
K(X5, X5_t) = exp(−‖X5 − X5_t‖² / (2σ²))

where σ is the kernel width, which can be set according to P, the number of predictors.
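As a hedged illustration of the classification stage, the sketch below trains scikit-learn's SVC with an RBF (Gaussian) kernel on spliced feature vectors; the dual problem with Lagrange multipliers is solved internally by the library, and the gamma='scale' kernel-width setting and the synthetic data are assumptions standing in for the kernel-width choice discussed above.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X5 = rng.normal(size=(120, 10))   # spliced feature vectors (illustrative)
y = (X5[:, 0] + 0.3 * rng.normal(size=120) > 0).astype(int)  # brain load labels (illustrative)

# RBF (Gaussian) kernel SVM; the dual optimization is handled by the library.
clf = SVC(kernel="rbf", gamma="scale", C=1.0)
clf.fit(X5, y)
print("training accuracy:", clf.score(X5, y))
```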
And step S440, obtaining the control score of the control player according to the score of the control player on the brain load evaluation index.
In one embodiment of the present disclosure, obtaining the manipulation score of the manipulation player according to the score of the manipulation player on the brain burden evaluation index includes:
and inputting the scores of the control players for the brain load evaluation indexes into a preset structural equation model to obtain the control scores of the control players.
In one embodiment of the present disclosure, at least one brain load evaluation index may be preset. The score for each evaluation index may be obtained according to the corresponding embodiment described above.
For example, a brain load endogenous evaluation index, a brain load exogenous evaluation index, and a brain load subjective evaluation index are preset. A score m1 of the brain load endogenous evaluation index is obtained from the physiological information data acquired by the electroencephalogram acquisition device, the electrodermal acquisition device, and the electrocardiographic acquisition device; a score m2 of the brain load exogenous evaluation index is obtained from the physiological information data acquired by the eye movement tracking device, the video acquisition device, and the voice acquisition device; and a score m3 of the brain load subjective evaluation index is obtained from the subjective evaluation of the brain load state given by the control player in the control behavior data.
The score of the control player for each brain load evaluation index is input into a preset structural equation model, and the control score of the control player can be obtained.
And step S450, executing set operation according to the control score.
In one embodiment, the operation of performing setting in step S450 may include a first operation of outputting the manipulation score.
Outputting the manipulation score may include: displaying the manipulation score on a display device of the device 140 or on a display device connected to the device 140.
Outputting the maneuver score may also include: and sending the control score to terminal equipment registered by a user customizing the control score or to a user account of the user customizing the control score.
The user is, for example, a manipulation-rated person, and the user may register device information of the terminal device with the device 140, so that the device 140 may send a manipulation score to the terminal device after obtaining the manipulation score of the manipulation player.
In the case of developing the control analysis application in accordance with the method of the present embodiment, a control rater may install a client of the application on a terminal device of the user, and obtain a control score of a control player by logging in a user account registered in the application.
The terminal device is, for example, a PC, a notebook computer, or a mobile phone, and is not limited herein.
In one embodiment, the operation of performing the setting in step S560 may include a second operation of providing a result of whether the manipulation player is selected according to the manipulation score. According to this embodiment, selection of operators can be realized. A score threshold may be set, and when the manipulation score is higher than or equal to the score threshold, the manipulation player is judged to be selected. In this embodiment, the operation of performing the setting may further include: outputting the selection result in any manner, including displaying, printing, transmitting, and the like.
In one embodiment, the operation of performing the setting in step S560 may include a third operation of determining the manipulation level of the manipulation player according to the manipulation score. Here, a comparison table reflecting the correspondence between the manipulation scores and the manipulation levels may be preset to determine the manipulation level of the corresponding manipulation player from the manipulation score for any manipulation player and the comparison table. In this embodiment, the operation of executing the setting may further include: the manipulation level is output in an arbitrary manner.
In one embodiment, the operation of performing the setting in step S560 may include a fourth operation of determining the manipulation task to be performed by the manipulation player according to the manipulation score. Here, a comparison table reflecting the correspondence between manipulation scores and manipulation tasks may be preset, so that the manipulation task to be executed by any manipulation player is determined from that player's manipulation score and the comparison table. In this embodiment, the operation of performing the setting may further include: outputting the manipulation task in any manner.
In one embodiment, the operation of performing the setting in step S560 may include a fifth operation of selecting, according to the manipulation scores obtained when the same manipulation player controls the target object to perform the target task through different motion control devices, a control combination whose manipulation score meets a setting requirement, where one control combination includes a matched manipulation player and motion control device. In this embodiment, the operation of performing the setting may further include: outputting the control combination in any manner.
In this embodiment, since the same manipulation player has different proficiency with different motion control devices, not only a control combination whose manipulation score satisfies the setting requirement but also the motion control device most suitable for the manipulation player can be obtained. In this example, the setting requirement is, for example, that the manipulation score is greater than or equal to a set value.
In one embodiment, the user may be allowed to select the operation to be performed in step S560, and thus, the method may further include: providing a setting entrance in response to an operation of setting an application scene; acquiring an application scene input through the setting entrance, wherein the application scene reflects an operation to be executed based on the control score; and determining the operation content of the set operation according to the input application scene.
For example, according to the input application scenario, the operation content of the operation determined to be set includes at least one of the above operations.
As can be seen from the above steps, the method of this embodiment determines the manipulation score of the control player according to the control behavior data and the physiological information data generated when the control player controls the target object to perform the target task, which can greatly save labor cost and time cost, greatly reduce the dependence on expert experience, and improve the accuracy and effectiveness of the analysis.
In addition, the control score can be used for relevant personnel to select the control personnel, grade the control personnel, and/or carry out matching setting between the control personnel and the motion control device.
In one embodiment of the present disclosure, the subjective rating scale for the mental fatigue state of the control player after performing the target task may be as shown in Table 1 below. The control player can subjectively evaluate his or her own cognitive state according to the subjective rating scale.
TABLE 1

Evaluation rating    Control player performance
1                    Refreshed and full of energy
2                    Very alert; reacts quickly
3                    Generally alert
4                    Tired; not fully alert
5                    Moderately tired; less engaged
6                    Extremely tired; difficulty concentrating
7                    Exhausted; unable to work effectively
The method may further include steps S610 and S620 as follows:
and step S610, obtaining the scores of the control players for the set mental fatigue evaluation indexes and the scores of the set emotion evaluation indexes according to the control behavior data and the physiological information data obtained in the step S410.
The score of each evaluation index may reflect the cognitive ability of the operator with respect to the target task. Each evaluation index may be set in advance.
In one embodiment of the present disclosure, obtaining the score of the control player for the set mental fatigue evaluation index and the score for the set emotional evaluation index from the control behavior data and the physiological information data may include steps S6031 to S6033 shown below:
step S6031, determining vector values of a first physiological feature vector corresponding to a mental fatigue evaluation index and vector values of a second physiological feature vector corresponding to a brain load evaluation index, which are preset, according to the control behavior data and the physiological information data.
The first physiological feature vector includes a plurality of first physiological features that affect the mental fatigue evaluation index. The second physiological feature vector includes a plurality of second physiological features that affect the brain burden evaluation index.
In this embodiment, the vector value of the first physiological feature vector and the vector value of the second physiological feature vector may be obtained through corresponding convolution networks.
For the vector value of the first physiological feature vector, the feature value for each first physiological feature included in the first physiological feature vector may be reflected.
Because brain rhythms differ between individuals, the brain rhythm of the control player can be analyzed to obtain the characteristic values of the electroencephalogram characteristics. Accordingly, in the case where the physiological information data includes an electroencephalogram signal and the first physiological feature vector includes electroencephalogram characteristics, the step of determining the vector value of either physiological feature vector may include steps S6041 to S6043 shown below:
step S6041, acquiring the electroencephalogram power spectrum of the electroencephalogram signal as a target electroencephalogram power spectrum.
Step S6042, determining a power spectrum classification corresponding to the target brain electrical power spectrum from a plurality of preset power spectrum classifications as a target power spectrum classification.
Step S6043, determining a vector value of the first physiological characteristic vector according to the brain rhythm corresponding to the target power spectrum classification.
Specifically, the foregoing steps S4041 to S4043 may be referred to, and are not described herein again.
In one embodiment of the present disclosure, the method may further include the step of obtaining a power spectrum classification, including steps S6051-S6053 as follows:
step S6051, acquiring electroencephalogram power spectrums of a plurality of reference electroencephalogram signals as reference electroencephalogram power spectrums.
Step S6052, based on multiple clustering algorithms, clustering is performed on the multiple reference electroencephalogram power spectrums respectively to obtain clustering results corresponding to each clustering algorithm.
Step S6053, based on the consensus clustering algorithm, a plurality of power spectrum classifications are obtained according to the clustering result corresponding to each clustering algorithm.
Specifically, the foregoing steps S4051 to S4053 may be referred to, and are not described herein again.
Specifically, the method may further include a step of obtaining a first physiological feature vector, including steps S6061 to S6064 as follows:
step S6061, a third training sample is obtained.
In step S6062, for each third training sample, a preset feature value of each physiological feature is determined.
Step S6063, selecting a set number of physiological features from the physiological features as target physiological features according to the feature values of the physiological features of the third training sample by using a canonical correlation analysis algorithm.
And step S6064, obtaining a target physiological characteristic vector according to the target physiological characteristics.
Specifically, reference may be made to the foregoing steps S4061 to S4064, which are not described herein again.
In an embodiment of the present disclosure, the first physiological feature included in the first physiological feature vector may be completely the same as, may be partially the same as, or may be completely different from the second physiological feature included in the second physiological feature vector, and is not limited herein.
And step S6032, inputting the vector value of the first physiological characteristic vector into a preset mental fatigue identification model, and obtaining the score of the control player on the set mental fatigue evaluation index.
The mental fatigue identification model can reflect the mapping relation between the first physiological characteristic vector and the score of the mental fatigue evaluation index.
In one embodiment of the present disclosure, the method may further include the step of obtaining a mental fatigue recognition model, including steps S510 to S520 as follows:
step S510, a first training sample is obtained.
One first training sample corresponds to one tester, and one first training sample reflects the mapping relation between the vector value of the first physiological characteristic vector corresponding to the tester and the score of the known mental fatigue evaluation index.
The score of the known mental fatigue evaluation index in the first training sample can be determined according to the subjective evaluation result of the corresponding tester on the mental fatigue state of the tester.
And S520, training the Gaussian model according to the first training sample to obtain a mental fatigue recognition model.
The Gaussian model has a rigorous statistical foundation and adapts well to complex problems. Its performance is competitive with widely used supervised learning methods such as ANN and SVM, and it is easy to implement while retaining good performance and flexible non-parametric inference, thereby overcoming some of the drawbacks of ANN and SVM.
In an embodiment of the present disclosure, training the gaussian model according to the first training sample to obtain the mental fatigue recognition model may include steps S521 to S523 as follows:
step S521, determining a mental fatigue score prediction expression of the first training sample by using the first network parameter of the gaussian model as a variable according to the vector value of the first physiological feature vector of the first training sample.
Step S522, a first loss function is constructed according to the mental fatigue score prediction expression of the first training sample and the score of the mental fatigue evaluation index of the first training sample.
Step S523, determining a first network parameter according to the first loss function, so as to obtain a mental fatigue identification model.
The Gaussian process is determined by its mean function m(x2) and kernel function k(x2, x2′), and can be expressed as:

f ~ GP(m(x2), k(x2, x2′))
a Gaussian (GP) model is a probabilistic model in function space, and GP can be considered as a process that defines a function distribution, with inferences made directly in function space. To identify mental fatigue states, a constant mean is used for modeling. The kernel function characterizes the correlation of different data points in the GP, and can be learned through training data. The kernel function used in this embodiment is a square exponential covariance function defined as follows:
k(x2, x2′) = σ²·exp(−(1/2)·(x2 − x2′)^T·P⁻¹·(x2 − x2′))

where x2 and x2′ are the vector values of the first physiological feature vectors of any two input first training samples, σ² is the signal variance, and the matrix P is an automatic relevance determination (ARD) diagonal matrix, P = diag(l_1², l_2², …, l_d²), where d is the dimension of the input space. In this prior model, the hyper-parameters are Θ = {σ², l_1, …, l_d}.
The data set is D = {(x2_i, y2_i)}, i = 1, 2, …, n. For a new data point x2* (the vector value of a first physiological feature vector), the latent function f(x2) is given a Gaussian prior, i.e., any finite set of function values has a multivariate Gaussian probability density. Let the hyper-parameters of the prior GP be Θ. The class label of the new data point can then be determined by computing its class probability:

p(y2* | x2*, D, Θ) = ∫ p(y2* | f*, Θ)·p(f* | x2*, D, Θ) df*

p(f* | D, x2*, Θ) = ∫ p(f, f* | D, x2*, Θ) df = ∫ p(f | D, Θ)·p(f* | f, x2*, Θ) df

f = [f_1, f_2, …, f_n]

p(f* | f, x2*, Θ) = p(f, f* | x2*, X2, Θ) / p(f | X2, Θ)

where the joint prior p(f, f* | x2*, X2, Θ) is a multivariate Gaussian over the function values at the training inputs X2 and the new input x2*.
Writing the dependence of f on X2 implicitly, the Gaussian prior over the function values can be expressed as:

p(f | X2, Θ) = N(μ, K)

where μ is the mean, which can generally be taken as 0, and K, with K_ij = k(x2_i, x2_j), is the covariance matrix of X2. The probability term p(y2_i | f_i, Θ) can be represented by a sigmoid of f_i, i.e., p(y2_i | f_i, Θ) = 1 / (1 + exp(−y2_i·f_i)).
It follows that p(Y2 | f, X2, Θ) cannot be assumed to be Gaussian; the non-Gaussian probability term makes the posterior non-Gaussian, so expectation propagation methods are typically used to approximate the non-Gaussian posterior with a Gaussian one.
A GP is determined entirely by the choice of the mean function m(x2) and the kernel function k(x2, x2′); the available data set is typically used to determine the nature of the Gaussian model, i.e., the values of the hyper-parameters. The hyper-parameter values can be determined by computing the probability of the data set. The log marginal probability is:

log p(Y2 | X2, Θ) = log ∫ p(Y2 | f, Θ)·p(f | X2, Θ) df
selection of the hyperparameter may be achieved by maximizing the log-marginal probability. In one embodiment of the present disclosure, the hyper-parameters may be optimized based on an adaptive pollen propagation algorithm.
The flower pollination algorithm is a swarm intelligence optimization algorithm based on the pollination mechanism of plants. Self-pollination occurs between physically close flowers and therefore corresponds to a local search process. Cross-pollination is in most cases long-distance pollination carried out by pollinators and therefore corresponds to a global search process. The actual process of flower pollination is quite complex; to keep the FPA simple, it is assumed that each plant has only one flower and each flower has only one pollen gamete, where each pollen gamete represents one solution of the problem. According to the characteristics of flower pollination, the algorithm is assumed to satisfy the following idealized rules:
1) In cross-pollination, pollen is spread by pollinators through Lévy flights; this process is mapped to the global search process.

2) Self-pollination is mapped to the local search process.

3) Flower constancy is regarded as the probability of reproduction, which is related to the similarity of the flowers involved in the pollination.

The switch between pollination modes is controlled by a switching probability p ∈ [0, 1]: when a random number r < p, self-pollination is performed; otherwise, cross-pollination is performed.
In the cross-pollination process, pollinators follow the Lévy flight rule and pollinate over relatively long flight paths; this process ensures the fittest pollination and reproduction, denoted by g*. The cross-pollination process is mathematically represented as:
x_i^(t+1) = x_i^t + L·(g* − x_i^t)

where x_i^t is the solution at the t-th iteration and g* is the current optimal solution found among all solutions of the current iteration. The parameter L is the step size and follows the Lévy distribution:

L ~ (λ·Γ(λ)·sin(πλ/2) / π)·(1 / s^(1+λ)),  s >> s_0 > 0

where λ = 1.5 is a constant and Γ(·) is the gamma function.
The self-pollination process can be expressed as:

x_i^(t+1) = x_i^t + ε·(x_j^t − x_k^t)

where x_j^t and x_k^t are different solutions in the same iteration. If x_j^t and x_k^t come from the same population and ε is drawn from a uniform distribution on [0, 1], this process becomes a local random walk. p = 0.8 is chosen as the switching probability between global and local search.
The flower pollination algorithm (FPA) has good performance, but inevitably suffers from a large amount of computation and a long convergence time. The key steps of the conventional FPA are the global search and the local search; an adaptive approach is therefore proposed to make the search process more robust.
For global search, the key step is the setting of the Lévy step L, which is defined as a function of λ. In the conventional algorithm, λ is generally regarded as a constant and is optimally set to 1.5. However, fixing the parameter in this way is not optimal for all problems, so an adaptive Lévy step size can be used to improve the overall performance of the FPA. The adaptive Lévy step-size factor is set as a function of the 2-norm distance ‖x_i^t − g*‖₂ between the solution currently being corrected, x_i^t, and the optimal solution g* of the current iteration. Since this 2-norm term is unbounded and may result in a very large Lévy step size, a projection matrix A is used to map the result into an acceptable range.
In this method, the adaptive Lévy step size is related to the distance between the current solution and the optimal solution: a large distance produces a large step for long-range global search, while a short distance produces a small, more precise move for accurate local refinement. For local search, the traditional FPA relies on local pollination rather than global pollination; here, another Lévy flight strategy is introduced for the local search, expressed as follows:
x_i^(t+1) = x_i^t + α·γ·L·(g*^t − x_i^t)

where x_i^(t+1) is the corrected solution, g*^t is the current optimal solution, γ is the local search step length (limited to a small range), α is a constant, and L is the Lévy step size.
The adaptive flower pollination process is as follows:

Step 1: Initialization.

1) Set the parameters, such as the population size n, the maximum number of iterations T, and the switching probability p;

2) Randomly generate an initial population and set t = 0.

Step 2: Find the optimal pollen g* of the initial population.
Step 3: Iterate while t < T. For each solution x_i^t, if a random number r < p, perform self-pollination (local search) according to the self-pollination update rule; otherwise, perform cross-pollination (global search) according to the cross-pollination update rule with the adaptive Lévy step. Evaluate each new solution, keep it if it improves on the old one, update the current optimal solution g*, and set t = t + 1. When t reaches T, output g* as the optimized hyper-parameters.
in the embodiment, the self-adaptive pollination algorithm (AFPA) is used for optimizing the hyper-parameter applied to the Gaussian model, so that the accuracy of the mental fatigue recognition model can be improved.
And step S6033, inputting the physiological information data into a preset emotion recognition model to obtain the score of the control player on the emotion evaluation index.
The emotion recognition model can reflect the mapping relation between the control behavior data and the physiological information data and the score of the emotion evaluation index.
In one embodiment of the present disclosure, the physiological information data includes an electroencephalogram signal; then, inputting the physiological information data into a preset emotion recognition model to obtain the score of the control player for the emotion evaluation index may include steps S6071 to S6073 as follows:
and step S6071, performing wavelet packet transformation processing on the electroencephalogram signals to obtain electroencephalogram time-frequency characteristics.
And step S6072, acquiring vector values of the electroencephalogram emotion feature vectors from the electroencephalogram time-frequency features based on a preset first depth convolution neural network.
And step S6073, based on a preset first classifier, obtaining the score of the control player on the emotion evaluation index according to the vector value of the electroencephalogram emotion feature vector.
Specifically, a wavelet packet transform module may perform wavelet packet transform processing on the electroencephalogram signal to obtain the electroencephalogram time-frequency characteristics. The wavelet packet transform module can be configured for a k-level wavelet packet decomposition (k = 6). The wavelet packet transform provides a finer decomposition of the high-frequency part of the signal, with neither redundancy nor omission, and enables better time-frequency localization analysis of the signal.
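A minimal sketch of the 6-level wavelet packet decomposition using the PyWavelets library is shown below; the wavelet family ('db4'), the synthetic EEG segment, and the band-energy feature are illustrative assumptions.

```python
import numpy as np
import pywt

fs = 256                                   # sampling rate in Hz (illustrative)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy EEG segment

# 6-level wavelet packet decomposition of the EEG signal.
wp = pywt.WaveletPacket(data=eeg, wavelet="db4", mode="symmetric", maxlevel=6)

# Collect the leaf-node coefficients (frequency-ordered) as time-frequency features.
leaves = wp.get_level(6, order="freq")
features = np.array([np.sum(node.data ** 2) for node in leaves])  # band energies
print(features.shape)   # (64,) - one energy value per terminal node
```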
In one embodiment, a convolutional neural network can be used for extracting vector values of electroencephalogram emotional feature vectors from electroencephalogram time-frequency features. The required feature extraction capability is realized with relatively low computational overhead through a specially designed lightweight convolutional neural network.
In one example, ResNet18 may be chosen as the base model of the convolutional neural network, since it balances accuracy against resource overhead better than other models. The improved network is named EsNet26, and its structure is shown in Table 4 below.
TABLE 4: EsNet26 network structure (table image not reproduced).
In one embodiment, the vector value of the electroencephalogram emotion feature vector is used as input, and the score of the control player on the emotion evaluation index is obtained based on a classifier of a Softmax function.
The Softmax-based classifier fully connects the feature vector output by the preceding fully-connected layer to the output nodes and obtains, through Softmax regression, an n-dimensional vector [p_1, p_2, …, p_n]^T, where the value of each dimension is the probability that the emotion category of the input electroencephalogram signal belongs to the corresponding category.
In one embodiment of the present disclosure, the physiological information data includes a facial video signal. Then, inputting the physiological information data into a preset emotion recognition model, and obtaining the score of the control player for the emotion evaluation index includes steps S6081 to S6085 as follows:
step S6081, a current video sampling interval is acquired.
In this embodiment, the current video sampling interval is specifically a sampling interval for sampling the face video signal this time to obtain the current frame video image, and represents the number of image frames spaced between the current frame video image and the previous frame video image. The previous frame video image is a video image obtained by sampling before the current frame video image.
The current video sampling interval may be a preset fixed value, may also be a random value meeting a preset condition, and may also be determined according to the expression similarity between the video images sampled by the corresponding previous two frames.
In one embodiment of the disclosure, when the expression similarity between two adjacent video images sampled before and after the current video image is less than or equal to a similarity threshold, the current video sampling interval is determined according to the expression similarity of the two previous video images.
Specifically, the current video sampling interval Num_skip may be determined from sim_ff, the expression similarity of the previous two frames of video images, together with Λ, the preset upper limit of the sampling interval, λ, the preset lower limit of the sampling interval, and θ_ff, the similarity threshold (formula image not reproduced).
When the expression similarity between two adjacent video images obtained by sampling before and at last of the current video image is greater than the similarity threshold, randomly generating a current video sampling interval; and the current video sampling interval is less than or equal to a preset maximum sampling interval and greater than or equal to a minimum sampling interval.
In one embodiment of the present disclosure, the method may further include the step of determining a similarity threshold, including:
acquiring a reference face video signal of a control player;
determining the expression similarity of every two adjacent video images in the reference face video signal;
and determining a similarity threshold according to the expression similarity of every two adjacent video images.
Because the same expression may be expressed in different ways or to different degrees by different users, the similarity threshold is set individually for each user. The similarity threshold θ_ff may be calculated from the expression feature vectors of adjacent frames in the reference face video signal: M(frame_j) denotes the vector value of the expression feature vector of the j-th frame video image in the reference face video signal, M(frame_(j+1)) denotes that of the (j+1)-th frame, L is the total number of image frames in the reference face video signal, and α is a preset parameter value; the threshold is obtained from the distances between M(frame_j) and M(frame_(j+1)) over the reference video, scaled by α (formula image not reproduced).
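The following sketch illustrates one way such a per-user threshold could be computed, assuming (as an illustrative form, not the exact formula of this embodiment) that θ_ff is the mean adjacent-frame feature distance over the reference video scaled by α.

```python
import numpy as np

def similarity_threshold(features, alpha=0.8):
    """Per-user similarity threshold from a reference face video.

    `features` is an (L, D) array of expression feature vectors M(frame_j)
    for the L frames of the reference video; the threshold is taken as the
    mean adjacent-frame feature distance scaled by alpha (an assumed form).
    """
    diffs = np.linalg.norm(features[1:] - features[:-1], axis=1)
    return alpha * diffs.mean()

rng = np.random.default_rng(0)
ref_features = rng.normal(size=(50, 128))   # 50 frames, 128-dim expression features
theta_ff = similarity_threshold(ref_features)
print("theta_ff:", theta_ff)
```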
Step S6082, sampling the face video signal based on the current video sampling interval, and obtaining a current frame video image.
In one embodiment of the present disclosure, step S6081 and step S6082 may be implemented by frame samplers.
In step S6083, the expression similarity between the current frame video image and the corresponding previous frame video image is determined.
The previous frame of video image is a frame of video image obtained by sampling the face video signal at the previous time.
In one embodiment of the present disclosure, determining the expression similarity between the current frame video image and the corresponding previous frame video image may include:
acquiring a vector value of an expression feature vector of a current frame video image;
and based on a preset convolution network, determining the expression similarity between the current frame video image and the previous frame video image according to the vector value of the expression feature vector of the current frame video image and the vector value of the pre-stored expression feature vector of the previous frame video image.
In this embodiment, when the emotion recognition result of the previous frame of video image is determined, the vector value of its expression feature vector is determined and buffered, so that it can be used directly when determining the expression similarity between the current frame video image and the corresponding previous frame video image.
In one embodiment of the present disclosure, the vector value of the expression feature vector of the current frame video image may be obtained by a feature extractor, and the expression similarity between the current frame video image and the previous frame video image may be determined by a fast switcher.
The feature extractor extracts the vector value of the expression feature vector from the current frame video image; the required feature extraction capability is achieved with relatively low computational overhead through a specially designed lightweight convolutional neural network. Recognizing basic expressions from only the face region of the user places high demands on the feature extractor. For this reason, a deep-learning-based network model is designed. To build a powerful feature extractor, ResNet18 is selected as the base model, since it balances accuracy against resource overhead better than other models. The modified network is named EsNet26, and its structure is shown in Table 5 below.
TABLE 5: EsNet26 feature extractor network structure (table image not reproduced).
The modifications to the base model mainly involve the following aspects:
(1) The 7x7 convolution kernel of the first layer is replaced with a 3x3 kernel, and downsampling is removed, to prevent the feature map size from shrinking too quickly and feature information from being lost in the shallow convolutions.
(2) The modified model significantly reduces computational overhead, but the modifications also reduce feature extraction performance. To compensate, the modified model deepens the convolutional network, extending it to 26 layers.
(3) Because the camera is fixed on the wearable device, the captured region is relatively fixed for the same control player, so the input picture in this scene does not need to be large; the input of the original ResNet18 model can therefore be reduced from 224x224 to 64x64.
The fast switcher automatically judges, at recognition time, whether the large amount of subsequent convolution computation is necessary; if it is unnecessary, the fast switcher bypasses that computation to speed up the operation of the system.
The convolutional network is the core part of the fast switcher and can be composed of 10 convolutional layers, 1 pooling layer, and a loss layer. The pooling layer outputs 128-dimensional feature vectors. The structure of the fast switcher's convolutional network is shown in Table 6 below, where conv1_x is a stack of 3 residual blocks.
TABLE 6: Fast switcher convolutional network structure (table image not reproduced).
The loss layer calculates the distance between the features of two adjacent frames of images and computes the corresponding loss. The loss layer of the convolutional network is mainly realized by a contrastive loss function, calculated as:

Loss = y·d² + (1 − y)·max(margin − d, 0)²
where y is the label of the input sample pair: if the input is a positive pair (the two frames show the same expression), y = 1; otherwise y = 0. d is the feature distance between the two adjacent frames (the smaller d is, the more similar the two frames are). margin is a hyper-parameter serving as a penalty term for negative samples: when the input is a positive pair, the squared feature distance is the loss; when the input is a negative pair, no loss is produced only when the feature distance is greater than margin, otherwise the smaller the feature distance, the larger the loss. During training, margin is set to 5 by default.
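The contrastive loss described above can be written directly; the sketch below is a plain Python/numpy version operating on a pair of feature vectors, with margin = 5 as stated and illustrative 128-dimensional features.

```python
import numpy as np

def contrastive_loss(feat_a, feat_b, y, margin=5.0):
    """Contrastive loss for one image pair.

    y = 1 when both frames show the same expression (positive pair), 0 otherwise;
    d is the Euclidean distance between the two feature vectors.
    """
    d = np.linalg.norm(feat_a - feat_b)
    return y * d ** 2 + (1 - y) * max(margin - d, 0.0) ** 2

rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=128), rng.normal(size=128)
print(contrastive_loss(f1, f2, y=1))   # positive pair: loss grows with distance
print(contrastive_loss(f1, f2, y=0))   # negative pair: loss only if d < margin
```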
In one example, for the current frame video image, after the first 7 convolution layers of the feature extractor have been computed, the intermediate result is obtained; at this point the subsequent computation of the extractor is suspended and the data stream enters the fast switcher. The fast switcher further extracts a vector value from the intermediate result and judges the expression similarity between the current frame video image and the previous frame video image, thereby determining whether the current frame should resume the suspended processing in the feature extractor or be directly assigned a final class label. The input of this component is the intermediate output of the 7th convolution layer of the feature extractor; the switcher also buffers the output features of the last frame that failed to trigger the fast switch, for comparison with subsequent frames.
Step S6084, determining the emotion recognition result of the current frame video image according to the expression similarity.
In one embodiment of the present disclosure, determining the emotion recognition result of the current frame video image according to the expression similarity includes:
taking the emotion recognition result of the previous frame of video image as the emotion recognition result of the current frame of video image under the condition that the expression similarity is smaller than or equal to the similarity threshold;
under the condition that the expression similarity is greater than a similarity threshold value, acquiring a vector value of a face emotion feature vector of the current frame video image based on a preset second deep convolutional neural network; and based on a preset second classifier, obtaining an emotion recognition result of the current frame video image according to the vector value of the face emotion characteristic vector of the current frame video image.
When the expression similarity is greater than the similarity threshold, the suspended processing in the feature extractor is resumed, i.e., the feature extractor continues to extract the vector value of the expression feature vector of the current frame video image, and the second classifier determines the emotion recognition result corresponding to the vector value of the expression feature vector finally output by the feature extractor.
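The control flow of the fast-switching recognition described above can be sketched as follows; all callables and the depth argument are hypothetical placeholders, and the sketch only illustrates the reuse-or-resume decision, not the actual network implementation.

```python
def recognize_frame(frame, prev_label, prev_features, feature_extractor,
                    fast_switcher, classifier, theta_ff):
    """Illustrative control flow of the fast-switching emotion recognition.

    All callables are placeholders: feature_extractor(frame, depth) returns
    intermediate or final features, fast_switcher(a, b) returns a similarity,
    and classifier(features) returns an emotion label.
    """
    shallow = feature_extractor(frame, depth=7)   # first 7 conv layers only
    sim = fast_switcher(shallow, prev_features)
    if sim <= theta_ff:
        # Expression barely changed: reuse the previous emotion label.
        return prev_label, prev_features
    deep = feature_extractor(frame, depth=None)   # resume the full network
    return classifier(deep), shallow              # buffer features of this fully processed frame
```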
In one embodiment of the present disclosure, the method further includes a step of training the convolutional network, including steps S550 to S560 as follows:
step S550, a third training sample is obtained.
One third training sample reflects the mapping relation between the vector values of the expression feature vectors corresponding to the two frames of facial images and the labels, and the labels reflect whether the two frames of facial images in the corresponding third sample belong to the same expression or not;
and step S560, training according to the vector values of the expression feature vectors of the two frames of facial images of the third training sample and the label of the third training sample to obtain a convolution network.
In an embodiment of the present disclosure, training the vector values of the expression feature vectors of the two frames of facial images of the third training sample and the label of the third training sample to obtain the convolutional network may include:
determining an expression similarity prediction expression of a third training sample by taking a third network parameter of the convolutional network as a variable according to vector values of expression feature vectors of two frames of facial images of the third training sample;
constructing a third loss function according to the expression similarity prediction expression of the third training sample and the label of the third training sample;
and determining a third network parameter according to the third loss function to obtain the convolutional network.
In this embodiment, the feature extractor and the fast switcher can share multiple convolution layers, and the two network models can be combined into one network with two sub-branches. The entire model can be trained jointly so that the convolution layers are shared, and the feature extractor and the fast switcher are obtained simultaneously after training is completed.
Step S6085, a score of the control player for the emotion evaluation index is obtained according to the emotion recognition results of the video images obtained by sampling the facial video signal.
In this embodiment, the emotion recognition results of all video images obtained by sampling the face video signal may be determined, and the score of the control player for the emotion assessment index may be determined according to the emotion recognition results of all video images.
In an embodiment of the present disclosure, obtaining the emotion score of the control player according to the emotion recognition result of the video image obtained by sampling the video information includes:
determining an emotion recognition result of the face video signal according to an emotion recognition result of a frame video image obtained by sampling the face video signal based on a voting method;
and obtaining the score of the control player for the emotion evaluation index according to the emotion recognition result of the face video signal.
In this embodiment, unnecessary convolution computation can be avoided by the fast switcher, and redundant video frames can be skipped directly by the frame sampler without missing emotion changes. The emotion recognition efficiency can thus be improved by the method of this embodiment.
In one embodiment of the present disclosure, the physiological information data includes an electroencephalogram signal and a facial video signal. Then, according to the method in the foregoing embodiment, the emotion recognition results determined according to the electroencephalogram signals and the emotion recognition results of all the video images obtained by sampling the facial video signals may be respectively obtained; and then, a score of the emotion evaluation index is obtained according to the emotion recognition results.
Specifically, the score of the emotion evaluation index may be obtained by a voting method over the emotion recognition result determined from the electroencephalogram signal and the emotion recognition results of all the video images obtained by sampling the facial video signal.
In this embodiment, the emotion recognition results of the two types of signals are combined through a voting method to obtain the score of the emotion evaluation index, so the emotion recognition accuracy can be improved.
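As a minimal sketch of this voting step, assuming each sampled video frame (and, when available, the electroencephalogram channel) has already been mapped to a categorical emotion label; the function name vote_emotion and the label strings are illustrative, not taken from the source:

```python
# Minimal majority-vote sketch over per-frame emotion labels, optionally adding
# the emotion label derived from the electroencephalogram signal as one more ballot.
from collections import Counter

def vote_emotion(frame_labels, eeg_label=None):
    ballots = list(frame_labels)
    if eeg_label is not None:
        ballots.append(eeg_label)
    # The most common label across all ballots is taken as the recognition result.
    return Counter(ballots).most_common(1)[0][0]

# Example: three frames read "calm", one reads "surprised", and the EEG channel reads "calm".
print(vote_emotion(["calm", "calm", "surprised", "calm"], eeg_label="calm"))  # -> calm
```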
In step S620, the control score of the control player is obtained based on the score of the control player for the mental fatigue evaluation index, the score for the emotion evaluation index, and the score for the brain load evaluation index, which are obtained in step S430.
In one embodiment of the present disclosure, obtaining a manipulation score of the manipulation player according to the score of the manipulation player for the mental fatigue evaluation index, the score for the brain load evaluation index, and the score for the emotion evaluation index includes:
and inputting the scores of the control players for the mental fatigue evaluation indexes, the scores of the brain load evaluation indexes and the scores of the emotion evaluation indexes into a preset structural equation model to obtain the control scores of the control players.
In one embodiment of the present disclosure, at least one mental fatigue evaluation index, at least one brain load evaluation index, and at least one emotion evaluation index may be preset. The score for each evaluation index may be obtained according to the corresponding embodiment described above.
For example, a mental fatigue endogenous evaluation index, a mental fatigue exogenous evaluation index and a mental fatigue subjective evaluation index may be preset as the mental fatigue evaluation indexes. A score f1 of the mental fatigue endogenous evaluation index is obtained according to the physiological information data acquired by the electroencephalogram acquisition device, the skin electricity acquisition device and the electrocardio acquisition device; a score f2 of the mental fatigue exogenous evaluation index is obtained according to the physiological information data collected by the eye movement tracking device, the video collecting device and the voice collecting device; and a score f3 of the mental fatigue subjective evaluation index is obtained according to the subjective evaluation of the control player on the mental fatigue state in the control behavior data.
Similarly, a brain load endogenous evaluation index, a brain load exogenous evaluation index and a brain load subjective evaluation index may be preset as the brain load evaluation indexes. A score m1 of the brain load endogenous evaluation index is obtained according to the physiological information data acquired by the electroencephalogram acquisition device, the skin electricity acquisition device and the electrocardio acquisition device; a score m2 of the brain load exogenous evaluation index is obtained according to the physiological information data collected by the eye movement tracking device, the video collecting device and the voice collecting device; and a score m3 of the brain load subjective evaluation index is obtained according to the subjective evaluation of the control player on the brain load state in the control behavior data.
For another example, an emotion endogenous evaluation index, an emotion exogenous evaluation index and an emotion subjective evaluation index may be preset as the emotion evaluation indexes. A score e1 of the emotion endogenous evaluation index is obtained according to the physiological information data acquired by the electroencephalogram acquisition device, the skin electricity acquisition device and the electrocardio acquisition device; a score e2 of the emotion exogenous evaluation index is obtained according to the physiological information data collected by the eye movement tracking device, the video collecting device and the voice collecting device; and a score e3 of the emotion subjective evaluation index is obtained according to the subjective evaluation of the control player on the emotional state in the control behavior data.
The score of the control player for each mental fatigue evaluation index, the score for each brain load evaluation index and the score for each emotion evaluation index are input into a preset structural equation model, and the control score of the control player can be obtained.
In one embodiment of the present disclosure, the method further includes a step of obtaining a structural equation model, including steps S570 to S580:
step S570, a fourth training sample is obtained.
Each fourth training sample corresponds to one tester, and one fourth training sample reflects the mapping relation between the scores of the corresponding tester for the mental fatigue evaluation indexes, the brain load evaluation indexes and the emotion evaluation indexes and the actual control score.
In this embodiment, for any one fourth training sample, the score of the corresponding tester for the mental fatigue evaluation index, the score of the corresponding tester for the brain load evaluation index, and the score of the corresponding tester for the emotion evaluation index may be obtained according to the control behavior data and the physiological information data generated when the corresponding tester executes the corresponding target task; and obtaining an actual control score according to control result data generated by corresponding testers executing corresponding target tasks.
In one example, the actual control score Y6 may be determined according to the task duration t and the task score s in the control result data.
And step S580, performing machine learning training according to the fourth training sample to obtain a structural equation model.
The structural equation model may be constructed as shown in fig. 5. As shown in fig. 5, the structural equation model contains 5 hidden variables: a mental fatigue score ζ1, a brain load score ζ2, an emotion score ζ3, a cognitive state X6, and a control score Y6. According to the structural equation model, the analytical expressions of the structural equation system and the measurement equation system can be written as follows:
f1 = w11 ζ1 + ε11
f2 = w21 ζ1 + ε21
f3 = w31 ζ1 + ε31
m1 = w41 ζ2 + ε41
m2 = w51 ζ2 + ε51
m3 = w61 ζ2 + ε61
e1 = w71 ζ3 + ε71
e2 = w81 ζ3 + ε81
e3 = w91 ζ3 + ε91
t = β1 Y6 + δ1
s = β2 Y6 + δ2
ζ1 = w12 X6 + ε12
ζ2 = w22 X6 + ε22
ζ3 = w32 X6 + ε32
Y6 = α X6 + ε
The structural equation model is trained with the fourth training samples, and the parameters in the structural equation model are estimated by the generalized least square method, so that the weight on each edge of the structural equation model can be determined; in this way, the degree to which each cognitive state influences the control score is quantified.
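To make the estimation step concrete, the toy sketch below simulates one latent score and fits the corresponding measurement weights by ordinary least squares; it only illustrates the shape of the measurement equations f1..f3 above. A full structural equation model would estimate all latent variables and edge weights jointly (for example by generalized least squares over the model-implied covariance matrix), so the simulated data, weight values and per-equation fitting here are assumptions for illustration.

```python
# Toy sketch: simulate a latent mental fatigue score and recover the measurement
# weights w11, w21, w31 of f1..f3 by least squares. Not the full joint SEM estimation.
import numpy as np

rng = np.random.default_rng(0)
n = 200
zeta1 = rng.normal(size=n)                   # latent mental fatigue score (simulated)
f1 = 0.8 * zeta1 + 0.1 * rng.normal(size=n)  # endogenous indicator
f2 = 0.5 * zeta1 + 0.1 * rng.normal(size=n)  # exogenous indicator
f3 = 0.3 * zeta1 + 0.1 * rng.normal(size=n)  # subjective indicator

for name, f in (("w11", f1), ("w21", f2), ("w31", f3)):
    # Least-squares estimate of the weight in f_i = w_i1 * zeta1 + error.
    w_hat = float(np.dot(zeta1, f) / np.dot(zeta1, zeta1))
    print(name, round(w_hat, 3))   # recovers roughly 0.8, 0.5, 0.3
```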
In step S620, the score of the control player for the mental fatigue evaluation index, the score for the brain load evaluation index, and the score for the emotion evaluation index are input into the structural equation model, and the control score of the control player can be obtained.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (6)

1. A brain load-based control work efficiency analysis method comprising:
acquiring control behavior data and physiological information data generated by a control player controlling a target object to execute a target task, wherein the control behavior data comprises data reflecting the control behavior of the control player on a task execution device in the process of executing the target task, and also comprises a subjective evaluation result of the control player on the self brain load state after the control player finishes executing the target task, and the control behavior data comprises: the moving track of the target object, the acceleration of the control rod and the angle of the control rod;
determining a vector value of a second physiological characteristic vector corresponding to a brain load evaluation index according to the control behavior data and the physiological information data; the second physiological feature vector comprises a plurality of second physiological features that affect the brain load evaluation index;
inputting the vector value of the second physiological characteristic vector into a preset brain load identification model to obtain the score of the control player on the brain load evaluation index; wherein the brain load identification model reflects a mapping relation between the second physiological characteristic vector and the score of the brain load evaluation index;
obtaining a control score of the control player according to the score of the control player on the brain load evaluation index, wherein the control score comprises the following steps:
presetting a brain load endogenous evaluation index, a brain load exogenous evaluation index and a brain load subjective evaluation index, and obtaining a score of the brain load endogenous evaluation index according to physiological information data acquired by electroencephalogram acquisition equipment, skin electricity acquisition equipment and electrocardiogram acquisition equipment; obtaining a score of the brain load exogenous evaluation index according to the physiological information data collected by the eye tracking equipment, the video collecting equipment and the voice collecting equipment; obtaining a score of the brain load subjective evaluation index according to the subjective evaluation of the control player on the brain load state in the control behavior data;
inputting the score of the control player for each brain load evaluation index into a preset structural equation model to obtain the control score of the control player;
executing the set operation according to the control score,
wherein the method further comprises:
determining a vector value of a depth feature vector according to the control behavior data and the physiological information data based on a preset depth belief network;
determining a vector value of a spliced feature vector according to the vector value of the second physiological feature vector and the vector value of the depth feature vector, wherein the spliced feature vector is obtained by splicing the second physiological feature vector and the depth feature vector;
the inputting the vector value of the second physiological feature vector into a preset brain load recognition model, and the obtaining of the score of the control player on the brain load evaluation index comprises:
inputting the vector value of the spliced feature vector into the brain load identification model to obtain the score of the control player on the brain load evaluation index,
the method further comprises the step of obtaining the brain load recognition model, comprising:
acquiring second training samples, wherein one second training sample corresponds to one tester, and one second training sample reflects the mapping relation between the vector value of the spliced feature vector corresponding to the tester and the known score of the brain load evaluation index;
training a Gaussian kernel vector machine according to the second training sample to obtain the brain load identification model,
wherein, the training the gaussian kernel vector machine according to the second training sample to obtain the brain load recognition model comprises:
determining a brain load score prediction expression of the second training sample by taking a second network parameter of the Gaussian kernel vector machine as a variable according to the vector value of the spliced feature vector of the second training sample;
constructing a second loss function according to the brain load score prediction expression of the second training sample and the score of the brain load evaluation index corresponding to the second training sample;
determining the second network parameters according to the second loss function to obtain the brain load identification model,
wherein the determining the second network parameter according to the second loss function to obtain the brain load recognition model comprises:
and determining the second network parameter according to the second loss function based on a Lagrange multiplier method to obtain the brain load identification model.
2. The method of claim 1, wherein the physiological information data comprises brain electrical signals; any physiological feature vector comprises electroencephalogram features;
the step of determining the vector value of any physiological feature vector comprises:
acquiring an electroencephalogram power spectrum of the electroencephalogram signal as a target electroencephalogram power spectrum;
determining a power spectrum classification corresponding to the target electroencephalogram power spectrum from a plurality of preset power spectrum classifications as a target power spectrum classification;
and determining the vector value of the corresponding physiological characteristic vector according to the brain rhythm corresponding to the target power spectrum classification.
3. The method of claim 2, wherein the method further comprises the step of obtaining a power spectrum classification comprising:
acquiring a reference brain electrical power spectrum of a plurality of reference brain electrical signals;
based on a plurality of clustering algorithms, clustering the plurality of reference electroencephalogram power spectrums respectively to obtain a clustering result corresponding to each clustering algorithm;
and obtaining a plurality of power spectrum classifications according to the clustering result corresponding to each clustering algorithm based on a consensus clustering algorithm, wherein each power spectrum classification comprises at least one reference electroencephalogram power spectrum.
4. The method according to claim 1, wherein the method further comprises the step of obtaining the second physiological feature vector, comprising:
acquiring third training samples, wherein one third training sample corresponds to one tester, and one third training sample comprises control behavior data and physiological information data corresponding to the tester;
for each third training sample, determining a preset characteristic value of each physiological characteristic;
selecting a set number of physiological characteristics from the physiological characteristics according to the characteristic values of the physiological characteristics of the third training sample by using a canonical correlation analysis algorithm to serve as the second physiological characteristics;
and obtaining the second physiological characteristic vector according to the second physiological characteristic.
5. A brain load-based control work efficiency analysis apparatus comprising at least one computing device and at least one storage device, wherein
the at least one storage device is configured to store instructions for controlling the at least one computing device to perform the method of any one of claims 1 to 4.
6. A brain load-based control work efficiency analysis system, wherein the system comprises a task execution device, physiological information collection devices, and the control work efficiency analysis apparatus of claim 5, wherein the task execution device and the physiological information collection devices are in communication with the control work efficiency analysis apparatus.
CN202011021842.7A 2020-09-25 2020-09-25 Brain load-based control work efficiency analysis method, equipment and system Active CN112256123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011021842.7A CN112256123B (en) 2020-09-25 2020-09-25 Brain load-based control work efficiency analysis method, equipment and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011021842.7A CN112256123B (en) 2020-09-25 2020-09-25 Brain load-based control work efficiency analysis method, equipment and system

Publications (2)

Publication Number Publication Date
CN112256123A CN112256123A (en) 2021-01-22
CN112256123B (en) 2022-08-23

Family

ID=74233094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011021842.7A Active CN112256123B (en) 2020-09-25 2020-09-25 Brain load-based control work efficiency analysis method, equipment and system

Country Status (1)

Country Link
CN (1) CN112256123B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836760A (en) * 2021-02-19 2021-05-25 清华大学 System and method for identifying performance of manual assembly task based on wearable equipment
CN113057654B (en) * 2021-03-10 2022-05-20 重庆邮电大学 Memory load detection and extraction system and method based on frequency coupling neural network model
CN113158925A (en) * 2021-04-27 2021-07-23 中国民用航空飞行学院 Method and system for predicting reading work efficiency of composite material maintenance manual
CN113627740A (en) * 2021-07-20 2021-11-09 东风汽车集团股份有限公司 Driving load evaluation model construction system and construction method
CN113712511B (en) * 2021-09-03 2023-05-30 湖北理工学院 Stable mode discrimination method for brain imaging fusion characteristics
CN114305452A (en) * 2021-12-15 2022-04-12 南京航空航天大学 Cross-task cognitive load identification method based on electroencephalogram and field adaptation
CN114343640B (en) * 2022-01-07 2023-10-13 北京师范大学 Attention assessment method and electronic equipment
CN114872028B (en) * 2022-04-13 2023-07-14 中国兵器工业计算机应用技术研究所 Method and equipment for training manipulation hands
CN116304643B (en) * 2023-05-18 2023-08-11 中国第一汽车股份有限公司 Mental load detection and model training method, device, equipment and storage medium
CN117455299A (en) * 2023-11-10 2024-01-26 中国民用航空飞行学院 Method and device for evaluating performance of fly-away training of simulator

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103827671A (en) * 2011-05-03 2014-05-28 联邦科学与工业研究组织 Method for detection of a neurological disease
CN109993424A (en) * 2019-03-26 2019-07-09 广东艾胜物联网科技有限公司 A kind of non-interfering formula load decomposition method based on width learning algorithm

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102274032A (en) * 2011-05-10 2011-12-14 北京师范大学 Driver fatigue detection system based on electroencephalographic (EEG) signals
CN109190479A (en) * 2018-08-04 2019-01-11 台州学院 A kind of video sequence expression recognition method based on interacting depth study
CN111598451B (en) * 2020-05-15 2021-10-08 中国兵器工业计算机应用技术研究所 Control work efficiency analysis method, device and system based on task execution capacity
CN111544015B (en) * 2020-05-15 2021-06-25 北京师范大学 Cognitive power-based control work efficiency analysis method, device and system
CN111553617B (en) * 2020-05-15 2021-12-21 北京师范大学 Control work efficiency analysis method, device and system based on cognitive power in virtual scene
CN111553618B (en) * 2020-05-15 2021-06-25 北京师范大学 Operation and control work efficiency analysis method, device and system
CN111598453B (en) * 2020-05-15 2021-08-24 中国兵器工业计算机应用技术研究所 Control work efficiency analysis method, device and system based on execution force in virtual scene

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103827671A (en) * 2011-05-03 2014-05-28 联邦科学与工业研究组织 Method for detection of a neurological disease
CN109993424A (en) * 2019-03-26 2019-07-09 广东艾胜物联网科技有限公司 A kind of non-interfering formula load decomposition method based on width learning algorithm

Also Published As

Publication number Publication date
CN112256123A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
CN112256123B (en) Brain load-based control work efficiency analysis method, equipment and system
CN112256124B (en) Emotion-based control work efficiency analysis method, equipment and system
CN112256122B (en) Control work efficiency analysis method, device and system based on mental fatigue
US11206450B2 (en) System, apparatus and method for providing services based on preferences
Vinola et al. A survey on human emotion recognition approaches, databases and applications
KR102281590B1 (en) System nad method of unsupervised training with weight sharing for the improvement in speech recognition and recording medium for performing the method
CN112200025B (en) Operation and control work efficiency analysis method, device and system
AU2009204001A1 (en) Rapid serial presentation communication systems and methods
CN108703824B (en) Bionic hand control system and control method based on myoelectricity bracelet
CN111553617B (en) Control work efficiency analysis method, device and system based on cognitive power in virtual scene
CN111553618B (en) Operation and control work efficiency analysis method, device and system
CN111598453B (en) Control work efficiency analysis method, device and system based on execution force in virtual scene
KR102206181B1 (en) Terminla and operating method thereof
KR20200126675A (en) Electronic device and Method for controlling the electronic device thereof
Lopez-Martinez et al. Detection of real-world driving-induced affective state using physiological signals and multi-view multi-task machine learning
Dalhoumi et al. Knowledge transfer for reducing calibration time in brain-computer interfacing
CN108175426B (en) Lie detection method based on deep recursion type conditional restricted Boltzmann machine
Bhamare et al. Deep neural networks for lie detection with attention on bio-signals
Xu et al. Accelerating reinforcement learning agent with eeg-based implicit human feedback
Tayarani et al. What an “ehm” leaks about you: mapping fillers into personality traits with quantum evolutionary feature selection algorithms
Rodriguez-Bermudez et al. Testing Brain—Computer Interfaces with Airplane Pilots under New Motor Imagery Tasks
Imah et al. A Comparative Analysis of Machine Learning Methods for Joint Attention Classification in Autism Spectrum Disorder Using Electroencephalography Brain Computer Interface.
Bashashati et al. Neural network conditional random fields for self-paced brain computer interfaces
Reddy et al. Brain Waves Computation using ML in Gaming Consoles
Ghosh et al. Motor imagery task classification using intelligent algorithm with prominent trial selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant