CN111553617A - Control ergonomics analysis method, device and system based on cognitive ability in a virtual scene - Google Patents

Control ergonomics analysis method, device and system based on cognitive ability in a virtual scene

Info

Publication number
CN111553617A
Authority
CN
China
Prior art keywords
control
data
score
physiological information
physiological
Prior art date
Legal status
Granted
Application number
CN202010414066.0A
Other languages
Chinese (zh)
Other versions
CN111553617B (en)
Inventor
李小俚
赵小川
丁兆环
张乾坤
刘华鹏
Current Assignee
Beijing Normal University
China North Computer Application Technology Research Institute
Original Assignee
Beijing Normal University
China North Computer Application Technology Research Institute
Priority date
Filing date
Publication date
Application filed by Beijing Normal University and China North Computer Application Technology Research Institute
Priority to CN202010414066.0A
Publication of CN111553617A
Application granted
Publication of CN111553617B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063: Operations research, analysis or management
    • G06Q10/0639: Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393: Score-carding, benchmarking or key performance indicator [KPI] analysis
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165: Evaluating the state of mind, e.g. depression, anxiety
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/04: Manufacturing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

The invention discloses a control ergonomics analysis method, device, and system based on cognitive ability in a virtual scene. The method comprises the following steps: acquiring physiological state data generated while a control player controls a target object to execute a target task in a virtual scene; obtaining a cognitive neuroergonomic score of the control player for the target task according to the physiological state data; obtaining a control score of the control player according to the cognitive neuroergonomic score; and executing a set operation according to the control score.

Description

Control ergonomics analysis method, device and system based on cognitive ability in a virtual scene
Technical Field
The invention relates to the technical field of automatic analysis of control ergonomics, and in particular to a control ergonomics analysis method, device, and system based on cognitive ability in a virtual scene.
Background
Different operators controlling the same target object to execute the same target task can achieve different levels of control ergonomics. For example, when different operators fly the same type of unmanned aerial vehicle on the same target task, their performance differs: some operators complete the target task in a short time, and some maintain a good psychological state while executing it. Analyzing the control ergonomics an operator exhibits while controlling a target object to execute a target task can serve as a basis for selecting operators for the target object, and also as a basis for evaluating the fit between a given operator and a given motion control device. At present, control ergonomics is usually analyzed by organizing experts to manually score an operator's control of the target object during the target task, so that the scoring result reflects the corresponding control ergonomics: the higher the score, the higher the control ergonomics. Manual scoring consumes a large amount of manpower, and the result depends excessively on subjective human factors, leading to problems of low accuracy and unfairness. An intelligent scheme for analyzing control ergonomics is therefore needed.
Disclosure of Invention
It is an object of embodiments of the present invention to provide a new solution for analyzing control ergonomics.
According to a first aspect of the invention, a control ergonomics analysis method based on cognitive ability in a virtual scene is provided, which comprises the following steps:
acquiring a control command generated by a control player through a control motion control device, and updating a virtual scene according to the control command;
acquiring feedback data generated by the virtual scene, and sending the feedback data to the motion control device;
acquiring physiological state data generated while the control player controls the target object to execute the target task in the virtual scene;
acquiring a cognitive neuroergonomic score of the control player for the target task according to the physiological state data;
obtaining a control score of the control player according to the cognitive neuroergonomic score;
and executing set operation according to the control score.
Optionally, the performing the setting operation includes at least one of:
a first item, outputting the control score;
a second item, providing, according to the control score, a selection result indicating whether the control player is selected;
a third item, determining the control level of the control player according to the control score;
and a fourth item, selecting, according to the control scores obtained when the same control player controls the target object to execute the target task through different motion control devices, a control combination whose control score meets the set requirement, wherein one control combination comprises a matched control player and motion control device.
Optionally, the method further comprises:
providing a setting entrance in response to an operation of setting an application scene;
acquiring an application scene input through the setting entrance, wherein the input application scene reflects an operation to be executed based on a control score;
and determining the operation content of the set operation according to the input application scene.
Optionally, the method comprises:
providing a configuration interface in response to an operation to configure the target task;
acquiring configuration information for the target task input through the configuration interface;
and providing the virtual scene corresponding to the target task according to the configuration information.
Optionally, the step of acquiring the physiological state data comprises:
acquiring physiological information data provided by each physiological information acquisition device, wherein the physiological information data provided by any physiological information acquisition device comprises at least one of physiological signal data and physiological image data;
obtaining the physiological state data according to the physiological information data, including:
obtaining, according to the physiological information data provided by each physiological information acquisition device, parameter values of parameters for evaluating the reflected physiological characteristic indexes;
determining, according to the parameter values, the control player's index data for the corresponding physiological characteristic indexes;
and generating the physiological state data comprising all of the determined index data.
Optionally, each physiological information acquisition device comprises an electroencephalogram acquisition device, and the physiological information data provided by the electroencephalogram acquisition device comprises at least one of an electroencephalogram signal and an electroencephalogram image; the providing of physiological information data by the electroencephalogram acquisition device comprises:
the electroencephalogram acquisition equipment filters corresponding noise signals of the acquired original electroencephalogram signals sequentially through a plurality of noise identification models to obtain denoised electroencephalogram signals;
and generating the provided physiological information data comprising the denoised electroencephalogram signal; and/or,
each physiological information acquisition device comprises an electromyography acquisition device, and the physiological information data provided by the electromyography acquisition device comprises at least one of an electromyographic signal and an electromyographic image; and/or,
each physiological information acquisition device comprises an electrocardiograph acquisition device, and the physiological information data provided by the electrocardiograph acquisition device comprises at least one of an electrocardiograph signal and an electrocardiograph image; and/or,
each physiological information acquisition device comprises a video acquisition device for acquiring facial actions, the physiological information data provided by the video acquisition device comprises at least one of change data of facial features and facial image data, and the providing of the physiological information data by the video acquisition device comprises:
acquiring a collected video image;
identifying a face region in the video image;
locating an eye position and a mouth position in the recognized face region;
determining a time point of blinking according to the gray level change of the eye position between every two adjacent video images;
determining the time point of the yawning action according to the gray scale change of the mouth position between every two adjacent video images;
and generating physiological information data provided by the video acquisition equipment according to the time point.
Optionally, each physiological information collecting device includes an electroencephalogram collecting device, and the providing of physiological information data by the electroencephalogram collecting device includes:
the electroencephalogram acquisition equipment filters corresponding noise signals of the acquired original electroencephalogram signals sequentially through a plurality of noise identification models to obtain denoised electroencephalogram signals;
and generating the provided physiological information data comprising the denoised electroencephalogram signal.
Optionally, the obtaining, according to the physiological state data, a cognitive neuroergonomic score of the control player for the target task includes:
and inputting the physiological state data into a preset cognitive neural work efficiency model to obtain a cognitive neural work efficiency score of the control player for the target task, wherein the cognitive neural work efficiency model reflects the mapping relation between any physiological state data and the cognitive neural work efficiency score.
According to a second aspect of the present invention, there is also provided a control ergonomics apparatus based on cognition in a virtual scene, the apparatus comprising at least one computing device and at least one memory device, wherein the at least one memory device is adapted to store instructions for controlling the at least one computing device to perform the method according to the first aspect of the present invention.
According to a third aspect of the present invention, there is also provided a control ergonomics system based on cognitive power in a virtual scene, wherein the system comprises a task execution device, physiological information acquisition devices, and a control ergonomics analysis device according to the second aspect of the present invention, wherein the task execution device and the physiological information acquisition devices are both in communication connection with the control ergonomics analysis device.
Optionally, the motion control device of the task execution device is a flight control device, and the target object controlled by the flight control device is an unmanned aerial vehicle in a virtual scene.
One beneficial effect of the embodiments of the present invention is that the method uses the physiological state data generated while the control player controls the target object to execute the target task to provide a cognitive neuroergonomic score representing the control player's cognitive competence for the target task, and further determines the control score of the control player at least according to the cognitive neuroergonomic score. According to the control score, the selection of operators for the target object, the rating of operators, and/or the matching between operators and motion control devices can be performed. The method of the embodiments completes the analysis of control ergonomics automatically, saving labor cost and time cost; moreover, the analysis greatly reduces the dependence on expert experience and improves the accuracy and effectiveness of the analysis.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic block diagram of a control ergonomics analysis system according to an embodiment;
FIG. 2 is a hardware configuration diagram of a control ergonomics analysis device according to an embodiment;
FIG. 3 is a flow diagram of a control ergonomics analysis method according to an embodiment;
FIG. 4 is a flow diagram of a control ergonomics analysis method according to another embodiment;
FIG. 5 is a schematic view of the structure of the electromyography acquisition device according to an embodiment.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
< System embodiment >
FIG. 1 is a schematic block diagram of a control ergonomics analysis system 100 to which the method of embodiments of the present invention may be applied.
As shown in fig. 1, the control ergonomics analysis system 100 may include an electronic device 110, a task execution device 120, and physiological information acquisition devices 130.
The electronic device 110 may be a server or a terminal device, and is not limited herein.
The server may be, for example, a blade server, a rack server, or the like, and the server may also be a server cluster deployed in the cloud. The terminal device can be any device with data processing capability, such as a PC, a notebook computer, a tablet computer and the like.
The electronic device 110 may include a processor 1101, a memory 1102, an interface device 1103, a communication device 1104, a display device 1105, an input device 1106.
The memory 1102 is used to store computer instructions, and includes, for example, a ROM (read-only memory), a RAM (random-access memory), and nonvolatile memory such as a hard disk. The processor 1101 is used to execute a computer program, which may be written in the instruction set of architectures such as x86, Arm, RISC, MIPS, SSE, etc. The interface device 1103 includes various bus interfaces, for example a serial bus interface (including a USB interface and the like), a parallel bus interface, and the like. The communication device 1104 is capable of wired or wireless communication, for example using at least one of an RJ45 module, a WiFi module, a 2G to 6G mobile communication module, a Bluetooth module, a network adapter, and the like. The display device 1105 is, for example, a liquid crystal display, an LED display, a touch panel, or the like. The input device 1106 may include, for example, a touch screen, a keyboard, a mouse, etc.
In this embodiment, the memory 1102 of the electronic device 110 is configured to store computer instructions for controlling the processor 1101 to operate to implement a method of ergonomics according to any of the embodiments of the present invention. The skilled person can design the instructions according to the disclosed solution. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
Although a plurality of devices of the electronic apparatus 110 are shown in fig. 1, the present invention may only relate to some of the devices, for example, the electronic apparatus 110 only relates to the memory 1102, the processor 1101, the communication device 1104 and the like.
In this embodiment, as shown in fig. 1, the task execution device 120 may be a task execution device based on semi-physical simulation of a virtual environment. The task execution device 120 may include a terminal device 1203 and a real motion control apparatus 1201, where the terminal device 1203 is configured to provide a virtual scene, that is, a simulated scene, corresponding to the target task; in this embodiment, the target object 1202 is a virtual object in the virtual scene. The motion control apparatus 1201 is communicatively connected to the terminal device 1203 to implement data and/or command interaction between the motion control apparatus 1201 and the virtual scene, so that an operator can control the target object 1202 through the motion control apparatus 1201 to execute the target task in the virtual scene.
In this embodiment, the terminal device 1203 may have a hardware structure similar to that of the electronic device 110, which is not described herein again, and the terminal device 1203 and the electronic device 110 may be physically separated devices or may be the same device, that is, the electronic device 110 may also provide the virtual environment, which is not limited herein.
In this embodiment, the operator may manipulate the target object in the virtual scene through the motion control device 1201 to perform the target task. For example, the target object in the virtual scene may be a drone, and the motion control device 1201 is a flight control device for controlling the drone. As another example, the target task includes completing at least one of a figure-of-eight flight, a spinning flight, a cluster flight, and the like in a set environment. As another example, the set environment includes wind, rain, fog, and the like. Of course, the target object 1202 may also be another controlled object, such as an unmanned vehicle or any type of robot, which is not limited herein.
The motion control device 1201 may include, for example, at least one of a remote control and a remote control handle.
The motion control device 1201 may include a processor, a memory, an interface device, an input device, a communication device, and the like. The memory may store computer instructions that, when executed by the processor, perform: an operation of sending a corresponding control command to the terminal device 1203 according to an operation of an operator on the input device; acquiring motion state data returned by a target object, and performing corresponding processing operation; and uploading the collected manipulation result data to the electronic device 110, etc., which will not be further described herein.
In this embodiment, the task execution device 120 is communicatively connected to the electronic device 110 to upload the manipulation result data to the electronic device 110. This may be, for example, that the task performing device 120 is communicatively connected to the electronic device 110 via the motion control apparatus 1201. For another example, the motion control device 1201 and the terminal device 1203 may be both communicatively connected to the electronic device 110, and the present invention is not limited thereto.
In fig. 1, each physiological information acquisition device 130 is used to provide the physiological information data required by the electronic device in implementing the control ergonomics analysis method according to any of the embodiments. Each physiological information acquisition device 130 is communicatively connected to the electronic device 110 to upload the physiological information data it provides to the electronic device 110.
Each physiological information acquisition device 130 includes at least one of an electroencephalogram acquisition device 1301, a myoelectricity acquisition device 1302, an electrocardiograph acquisition device 1303, and a video acquisition device 1304 for acquiring facial expressions.
The physiological information data provided by the electroencephalogram acquisition device 1301 includes at least one of an electroencephalogram signal and an electroencephalogram image.
The physiological information data provided by the electromyographic acquisition device 1302 includes at least one of an electromyographic signal and an electromyographic image.
The electrocardiographic acquisition device 1303 provides physiological information data including at least one of electrocardiographic signals and electrocardiographic images.
The physiological information data provided by the video capture device 1304 may include at least one of facial feature variation data and facial image data.
Any physiological information acquisition device 130 may include a front-end acquisition device and a data processing circuit connected to it. The front-end acquisition device is configured to acquire raw data and may be an electrode device in contact with the control player. The data processing circuit is configured to perform corresponding preprocessing on the raw data, where the preprocessing includes at least one of signal amplification, filtering, denoising, and notch processing. The data processing circuit may be implemented by a basic circuit built from electronic components, by a processor executing instructions, or by a combination of the two, which is not limited herein.
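As an illustration of this preprocessing chain, the following Python sketch mirrors the sequence of amplification, filtering, denoising, and notch processing, applied as a chain of noise identification models as described for the electroencephalogram acquisition device above. The patent discloses no implementation; the sampling rate, cut-off frequencies, and the three stand-in noise models below are assumptions made for exposition only.

```python
import numpy as np
from scipy import signal

FS = 250  # assumed sampling rate in Hz (not specified by the patent)

def highpass_drift(x):
    # filtering: 4th-order Butterworth high-pass at 0.5 Hz removes baseline drift
    b, a = signal.butter(4, 0.5, btype="highpass", fs=FS)
    return signal.filtfilt(b, a, x)

def notch_mains(x):
    # notch processing: suppress 50 Hz power-line interference
    b, a = signal.iirnotch(50.0, 30.0, fs=FS)
    return signal.filtfilt(b, a, x)

def clip_artifacts(x, limit_uv=100.0):
    # denoising stand-in: crude amplitude clipping of motion artifacts
    return np.clip(x, -limit_uv, limit_uv)

NOISE_MODELS = [highpass_drift, notch_mains, clip_artifacts]

def preprocess(raw_volts):
    # software analogue of the front-end amplification stage (volts -> microvolts)
    x = raw_volts * 1e6
    for model in NOISE_MODELS:  # models applied sequentially, as the summary describes
        x = model(x)
    return x
```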
The electronic device 110 and the task performing device 120, and the electronic device 110 and each physiological information collecting device 130 may be in communication connection in a wired or wireless manner, which is not limited herein.
In one embodiment, as shown in fig. 2, the present invention provides a control ergonomics analysis apparatus 140 comprising at least one computing device 1401 and at least one storage device 1402, wherein the at least one storage device 1402 is configured to store instructions for controlling the at least one computing device 1401 to perform a control ergonomics analysis method based on cognitive ability in a virtual scene according to any of the embodiments of the present invention. The control ergonomics analysis apparatus 140 may include at least one electronic device 110, and may further include a terminal device 1203, etc., which is not limited herein.
< method examples >
Fig. 3 is a flow diagram of a control ergonomics analysis method according to one embodiment, which may be implemented, for example, by the control ergonomics analysis apparatus 140 shown in fig. 2. In this embodiment, analyzing the control ergonomics of a user operating the task execution device is described as an example, and the method may include the following steps S310 to S340:
step S310, acquiring physiological status data generated when the target object is operated by the operator in the virtual scene to perform the target task.
The physiological state data reflects the cognitive ability of the control player with respect to the target task: the stronger the cognitive ability, the more easily the control player completes the target task; the weaker the cognitive ability, the harder it is for the control player to complete it. The difficulty in completing the target task produces corresponding reactions in the control player's physiological state, such as heart rate, electroencephalographic, electromyographic, and facial expression reactions. Therefore, in this embodiment, a cognitive neuroergonomic score reflecting the control player's cognitive ability for the target task may be obtained from the physiological state data: the higher the cognitive neuroergonomic score, the stronger the control player's ability to be competent for the target task.
The physiological state data is multidimensional data including a plurality of index data. The physiological state data may include at least one of index data reflecting attention situation, index data reflecting brain load situation, index data reflecting nerve fatigue situation, index data reflecting muscle fatigue degree, and index data reflecting mood control ability, for example.
Correspondingly, the physiological characteristic indexes for evaluating the cognitive neural ergonomics of the control player include, for example: at least one of an attention index, a brain load index, an executive ability index, a nerve fatigue index, a muscle fatigue index, and an emotion control index. According to the physiological state data, index data corresponding to each physiological characteristic index can be obtained.
The physiological state data can be determined according to physiological information data provided by various physiological information acquisition devices. Therefore, in one embodiment, the acquiring of the physiological status data generated by the player operating the target object to perform the target task in step S310 may include: acquiring physiological information data provided by each physiological information acquisition device; and obtaining the set physiological state data according to the physiological information data.
In this embodiment, the physiological information data provided by any physiological information acquisition device may include at least one of physiological signal data and physiological image data.
For example, each physiological information acquisition device includes an electroencephalogram acquisition device 1301 as shown in fig. 1, and the physiological information data provided by the electroencephalogram acquisition device 1301 may include at least one of an electroencephalogram signal (electrical signal) and an electroencephalogram image.
For another example, each physiological information acquisition apparatus includes an electromyography acquisition apparatus 1302 as shown in fig. 1, and the physiological information data provided by the electromyography acquisition apparatus 1302 may include at least one of an electromyography signal (electric signal) and an electromyography image.
For another example, each physiological information acquisition device includes an electrocardiograph acquisition device 1303 shown in fig. 1, and the physiological information data provided by the electrocardiograph acquisition device 1303 may include at least one of an electrocardiograph signal and an electrocardiograph image.
As another example, each physiological information acquisition device includes a video acquisition device 1304 as shown in fig. 1, and the physiological information data provided by the video acquisition device 1304 includes at least one of change data of facial features and facial image data. The change data of the facial feature includes, for example, at least one of data that a blink action occurs and data that a yawning action occurs.
After the raw data is acquired by any physiological information acquisition device through the acquisition device at the front end, at least one of signal amplification, filtering, denoising and notch processing can be performed on the raw data to generate physiological information data, and the physiological information data is provided for the device 140, so that the device 140 can obtain set physiological state data according to the physiological information data.
According to the natural relation between the physiological characteristic indexes and the physiological information data provided by the electroencephalogram acquisition device 1301, index data of the control player for psychological characteristic indexes such as the attention index, the brain load index, and the nerve fatigue index can be obtained. According to the physiological information data provided by the electromyography acquisition device 1302, index data for physical characteristic indexes such as the muscle fatigue degree index can be obtained. According to the physiological information data provided by the electrocardiograph acquisition device 1303, index data for psychological characteristic indexes such as the emotion control ability index can be obtained. According to the physiological information data provided by the video acquisition device 1304, index data for physical characteristic indexes such as the fatigue degree index can be obtained: the change in the control player's blink frequency and/or yawning frequency can be determined from the facial feature change data and/or facial image data provided by the video acquisition device 1304, and since abnormal changes in blink frequency and/or yawning frequency reflect the control player's fatigue degree, this index data may be a fatigue degree grade determined from those changes.
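As a hedged sketch of the blink-detection idea described above and in the optional claims (the patent does not disclose an implementation; the Haar cascades, thresholds, and function names below are assumptions for illustration), the following Python code flags a blink time point when the mean gray level of the detected eye region changes sharply between adjacent frames; yawning time points could be obtained analogously from the mouth region.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def blink_timestamps(video_path, diff_threshold=15.0):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata missing
    prev_eye_mean, frame_idx, blinks = None, 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)  # face region
        for (x, y, w, h) in faces[:1]:
            roi = gray[y:y + h, x:x + w]
            eyes = eye_cascade.detectMultiScale(roi)         # eye position
            for (ex, ey, ew, eh) in eyes[:1]:
                eye_mean = roi[ey:ey + eh, ex:ex + ew].mean()
                # sharp gray-level change between adjacent frames -> blink
                if prev_eye_mean is not None and \
                        abs(eye_mean - prev_eye_mean) > diff_threshold:
                    blinks.append(frame_idx / fps)  # time point in seconds
                prev_eye_mean = eye_mean
        frame_idx += 1
    cap.release()
    return blinks
```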
In this embodiment, the index data may be obtained according to physiological information data provided by each physiological information acquisition device, and the index data may be determined according to each parameter value obtained by calculation of the physiological information data, where the parameter value is a value of a parameter used for evaluating a corresponding physiological characteristic index. For example, the attention index and the like may be evaluated by the variance of the relative power of the electroencephalogram signal.
Therefore, in one embodiment, obtaining the set physiological state data according to the physiological information data may include: obtaining, according to the physiological information data, parameter values of parameters for evaluating the reflected physiological characteristic indexes; determining, according to the parameter values, the control player's index data for the corresponding physiological characteristic indexes; and generating the physiological state data comprising all of the determined index data.
For example, a variance parameter of the relative power of the brain electrical signal based on the power spectral density is used to evaluate an attention index or the like.
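A minimal sketch of this parameter, assuming a 250 Hz sampling rate and a theta band of 4-8 Hz (both assumptions; the patent does not specify them): the relative band power is computed per window from the Welch power spectral density, and its variance across windows serves as the attention parameter value.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz

def relative_band_power(segment, band=(4.0, 8.0)):
    # power spectral density via Welch's method
    freqs, psd = welch(segment, fs=FS, nperseg=FS * 2)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / psd.sum()  # relative power of the band

def attention_parameter(eeg, window_s=2):
    step = FS * window_s
    rel = [relative_band_power(eeg[i:i + step])
           for i in range(0, len(eeg) - step + 1, step)]
    # variance of relative power over time; an assumed reading is that
    # lower variance corresponds to steadier attention
    return np.var(rel)
```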
In one embodiment, the electroencephalogram, electromyogram, or electrocardiograph images may be used directly as part of the physiological state data to determine the cognitive neuroergonomic score of the control player.
Since the physiological information data come from different physiological information acquisition devices, in order to make the evaluation of the cognitive abilities of the control players have the same time reference according to the physiological information data, in one embodiment, the acquiring the physiological information data provided by each physiological information acquisition device may include: controlling each physiological information acquisition device to synchronously perform acquisition operation; and acquiring physiological information data generated by the physiological information acquisition equipment through corresponding acquisition operation.
In this embodiment, for example, a unified clock reference may be set to trigger each physiological information acquisition device to synchronously start and end the corresponding acquisition operation, and the like.
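One possible, purely illustrative realization of such a unified clock reference is a shared start event and a common time base, sketched below; store() and the per-device read_sample callables are hypothetical stand-ins, not disclosed by the patent.

```python
import threading
import time

start_event = threading.Event()
stop_event = threading.Event()
start_time = {}

def store(device, ts, sample):
    print(device, round(ts, 3), sample)  # stand-in for a real data sink

def acquisition_worker(device_name, read_sample):
    start_event.wait()                   # all devices start synchronously
    base = start_time["t0"]
    while not stop_event.is_set():
        sample = read_sample()
        store(device_name, time.monotonic() - base, sample)  # common time reference

# launch one worker per device, then fire the common start trigger:
# threading.Thread(target=acquisition_worker, args=("eeg", read_eeg)).start()
# start_time["t0"] = time.monotonic(); start_event.set()
```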
In this embodiment, as shown in fig. 1, the control player may manipulate the target object 1202 in a virtual scene provided by the terminal device 1203 through the motion control device 1201, that is, the target object and the task environment are both virtual. In this embodiment, in order to implement interaction of data and commands between the motion control device 1201 and the virtual scene, the method may further include the following steps S3011 to S3013:
step S3011, providing a virtual scene corresponding to the target task, where the target object is a virtual object in the virtual scene.
In step S3012, a control command generated by the operator by operating the motion control device 1201 is obtained, and the virtual scene is updated according to the control command.
In this step S3012, updating the virtual scene includes updating the task environment and the state of the target object, which includes the position and posture of the target object, and the like.
In step S3013, feedback data generated in the virtual scene is acquired and sent to the motion control apparatus 1201.
The virtual scene includes all virtual things of the corresponding target task provided by the terminal device 1203, including virtual environments and virtual objects, etc.
In step S3013, the feedback data is collected by the virtual sensor of the target object and sent to the motion control apparatus 1201 by the device 140, so that the control player can perform the control judgment. The feedback data may also be used for the device 140 to obtain at least part of the above-mentioned manipulation result data.
In this embodiment, the method may further include the following steps S3021 to S3023:
in step S3021, in response to an operation to configure the target task, a configuration interface is provided.
The device 140 may have a simulation application installed on it, and an interface of the simulation application may provide an entry for triggering the operation of configuring the target task, through which a configuration person can access the configuration interface.
The configuration interface may include at least one of an input box, a checklist, and a drop-down list for a configuration person to configure the target task.
Step S3022, configuration information for the target task input through the configuration interface is acquired.
In step S3022, the configuration information input through the configuration interface may be acquired in response to an operation to complete configuration. The configuration information includes, for example, information reflecting task contents and task environments, and the like.
In step S3022, the configuration personnel may trigger the operation of completing the configuration by, for example, pressing a key such as "confirm" or "submit" provided by the configuration interface.
Step S3023, providing a virtual scene corresponding to the target task according to the configuration information.
The virtual scene comprises a virtual object corresponding to the target task, a virtual environment and the like.
As can be seen from the above steps S3021 to S3023, the configuration person can flexibly configure the target task through the configuration interface as needed to be able to provide virtual scenes corresponding to different target tasks through the apparatus 140.
And step S320, acquiring cognitive neuro-ergonomics scores of the control players for the target task according to the physiological state data acquired in the step S310.
The cognitive neuroergonomic score reflects the cognitive ability of the player to perform the target task.
In one embodiment, the cognitive neuro-ergonomic score may be obtained in a manner analogous to that of the task ergonomics score described in the embodiment below, which is not repeated herein.
In another embodiment, the cognitive neuroergonomic score of the control player can also be obtained according to the physiological state data and the cognitive neuroergonomic model obtained by pre-training, that is, the physiological state data can be input into a preset cognitive neuroergonomic model to obtain the cognitive neuroergonomic score.
In this embodiment, the cognitive neural ergonomics model reflects a mapping relationship between arbitrary physiological state data and the cognitive neural ergonomics score.
The cognitive neuro-ergonomic model may be trained based on a deep convolutional neural network model, for example.
The deep convolutional neural network model comprises an input layer, an output layer, and a hidden layer located between them. The hidden layer may be composed of multiple neural network layers, for example of two types: convolutional layers and pooling layers. A convolutional layer is mainly responsible for extracting the main features of the preceding layer. A pooling layer, also called a feature mapping layer, is mainly responsible for mapping the features of the preceding layer onto a new plane.
In one embodiment, the method may further comprise the step of obtaining the cognitive neuroergonomic model, comprising: acquiring training samples, wherein each training sample corresponds to an operator, and each training sample comprises physiological state data of the corresponding operator and a known operation score of the corresponding operator; training a deep convolutional neural network model according to the training sample, and determining network parameters of the model; and acquiring the cognitive neural work efficiency model according to the network parameters.
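A minimal PyTorch sketch of such a model and its training step follows. The patent only names the input layer, convolutional layers, pooling layers, and output layer; the layer sizes, feature-vector length, loss function, and optimizer below are assumptions.

```python
import torch
import torch.nn as nn

class CognitiveErgonomicsNet(nn.Module):
    def __init__(self, n_features=64):  # assumed physiological-state vector length
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),  # extracts main features
            nn.ReLU(),
            nn.MaxPool1d(2),                             # feature mapping layer
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.out = nn.Linear(32 * (n_features // 4), 1)  # score output layer

    def forward(self, x):            # x: (batch, 1, n_features)
        return self.out(self.hidden(x).flatten(1))

def train(model, samples, scores, epochs=100):
    # samples: (N, 1, n_features) physiological state data;
    # scores: (N,) known control scores of the corresponding operators
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(samples).squeeze(1), scores)
        loss.backward()
        opt.step()
    return model

# e.g. model = train(CognitiveErgonomicsNet(), torch.randn(8, 1, 64), torch.rand(8))
```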
And step S330, obtaining the control score of the control player according to the cognitive nerve work efficiency score.
In this embodiment, the cognitive neuroergonomic score may be directly used as the control score of the control player.
In this embodiment, the control score of the control player may also be obtained by combining other scores of the control player.
For example, the other scores may include a task ergonomics score determined for task completion capabilities exhibited during the manipulation. For another example, the other scores may also include daily score scores, and the like.
In step S340, a set operation is performed according to the manipulation score obtained in step S330.
In one embodiment, the operation of performing setting in step S340 may include a first operation of outputting the manipulation score.
Outputting the control score may include: driving the display device of the apparatus 140, or a display device connected to the apparatus 140, to display the control score.
Outputting the maneuver score may also include: and sending the control score to terminal equipment registered by a user customizing the control score or to a user account of the user customizing the control score.
The user is, for example, a manipulation rater, and the user may register device information of the terminal device with the device 140, so that the device 140 may transmit a manipulation score to the terminal device after obtaining the manipulation score of the manipulation player.
In the case of developing the control analysis application in accordance with the method of the present embodiment, a control rater may install a client of the application on a terminal device of the user, and obtain a control score of a control player by logging in a user account registered in the application.
The terminal device is, for example, a PC, a notebook computer, or a mobile phone, and is not limited herein.
In one embodiment, the operation of performing setting in step S340 may include the second item: providing, according to the control score, a selection result indicating whether the control player is selected. According to this embodiment, the selection of operators can be realized. Here, a score threshold may be set, and in a case where the control score is higher than or equal to the score threshold, the control player is judged to be selected. In this embodiment, the operation of executing the setting may further include: outputting the selection result in an arbitrary manner. The arbitrary manner includes displaying, printing, transmitting, and the like.
In one embodiment, the operation of performing setting in step S340 may include the third item: determining the control level of the control player according to the control score. Here, a comparison table reflecting the correspondence between control scores and control levels may be preset, so that for any control player the corresponding control level is determined from that player's control score and the comparison table, as sketched below. In this embodiment, the operation of executing the setting may further include: outputting the control level in an arbitrary manner.
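A minimal sketch of such a comparison table; the score bands and level names are illustrative assumptions, not values from the patent.

```python
import bisect

LEVEL_TABLE = [(60, "C"), (75, "B"), (90, "A")]  # (lower score bound, control level)
BOUNDS = [bound for bound, _ in LEVEL_TABLE]

def control_level(score):
    # find the highest band whose lower bound the score reaches
    idx = bisect.bisect_right(BOUNDS, score) - 1
    return LEVEL_TABLE[idx][1] if idx >= 0 else "D"

# e.g. control_level(82.5) -> "B"
```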
In one embodiment, the operation of performing setting in step S340 may include the fourth item: selecting, according to the control scores obtained when the same control player controls the target object to execute the target task through different motion control devices, a control combination whose control score meets the set requirement, wherein one control combination comprises a matched control player and motion control device. In this embodiment, the operation of executing the setting may further include: outputting the control combination in an arbitrary manner.
In this embodiment, since the same control player has different proficiency with different motion control devices, not only can the control combination whose control score satisfies the set requirement be selected, but the motion control device best suited to the control player can also be identified. In this example, the set requirement is, for example, that the control score is greater than or equal to a set value.
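A sketch of this fourth operation under an assumed data layout (control scores keyed by player-device pair; the threshold value is illustrative):

```python
def best_combination(scores, threshold=80.0):
    """scores: dict mapping (player_id, device_id) -> control score."""
    # keep only the combinations meeting the set requirement
    qualifying = {combo: s for combo, s in scores.items() if s >= threshold}
    if not qualifying:
        return None
    # the best-matched player-device pair
    return max(qualifying, key=qualifying.get)

# e.g. best_combination({("p1", "ctrl-A"): 86.0, ("p1", "ctrl-B"): 74.5})
# -> ("p1", "ctrl-A")
```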
In one embodiment, the user may be allowed to select the operation to be performed in step S340, and thus, the method may further include: providing a setting entrance in response to an operation of setting an application scene; acquiring an application scene input through the setting entrance, wherein the application scene reflects an operation to be executed based on the control score; and determining the operation content of the set operation according to the input application scene.
For example, according to the input application scenario, the operation content of the operation determined to be set includes at least one of the above operations.
As can be seen from the above steps S310 to S340, the method of this embodiment may determine the control score of the control player according to the physiological state data generated while the control player controls the target object to execute the target task, which can greatly save labor cost and time cost, greatly reduce the dependence on expert experience, and improve the accuracy and effectiveness of the analysis. In addition, the control score can be used by relevant personnel to select operators, rate operators, and/or match operators with motion control devices.
As can also be seen from steps S310 to S340, in the method of this embodiment the analysis of the control player is completed in the virtual scene in a semi-physical simulation manner, without requiring a real control site or a real target object, so the cost of performing the control analysis is greatly reduced.
Fig. 4 is a flow diagram of a control ergonomics analysis method according to another embodiment, which may be implemented, for example, by the control ergonomics analysis apparatus 140 shown in fig. 2. In this embodiment, the method may include the following steps S410 to S460:
In step S410, control result data generated while the control player controls the target object to execute the target task in the virtual scene is acquired.
In this embodiment, the manipulation result data may be provided by the task execution device 120, or the task execution device 120 may provide the basic data for calculating the manipulation result data to the manipulation ergonomics device 140, and the manipulation result data is calculated by the manipulation ergonomics device 140 according to the basic data, so as to be acquired in this step S410.
The manipulation result data may include at least one of index data reflecting a manipulation proficiency level, index data reflecting a position control capability, index data reflecting a cluster manipulation information processing capability, index data reflecting a cluster manipulation performance, and index data reflecting a fault processing capability, for example.
The index data reflecting the manipulation proficiency may include, for example, at least one of a command reaction delay time, a manipulation delay time, a number of task stalls, and the like. The index data reflecting the manipulation proficiency may be determined by the motion control device 1201 shown in fig. 1.
The index data reflecting the position control ability may include at least one of an altitude deviation, a horizontal deviation, a heading deviation, a stability, and the like, for example. The index data reflecting the position control ability may be derived from each sensor of the target object.
The index data reflecting the cluster manipulation information processing capability may include, for example, at least one of a cluster information reporting speed, a cluster failure discovery speed, and the like. The index data reflecting the cluster manipulation information processing capability may be determined by the motion control device 1201 shown in fig. 1.
The index data reflecting the cluster control performance may include at least one of a parameter value reflecting the formation setting capability, a parameter value reflecting the route planning capability, and the like. The parameter value reflecting the formation setting capability may include, for example, a time elapsed for formation setting, a time length for the corresponding formation to complete the target task, and the like. The parameter values reflecting the route planning capability may include, for example, the time of use of the route planning, the length of time to complete the target task, and the like.
The index data reflecting the fault handling capability may include at least one of a parameter value reflecting the emergency response capability, a parameter value reflecting the correctness of the fault handling, and the like, for example. The parameter value reflecting the emergency response capability may include, for example, a response time of the emergency response, a result of whether the emergency treatment is successful, or the like. The parameter value reflecting the correctness of the failure processing may include, for example, the result of whether the failure processing was successful or not, and the like.
The target object may be, for example, a drone or the like.
The target task comprises task content, a corresponding task environment and the like.
In this embodiment, as shown in fig. 1, the control player controls the target object in the virtual scene through the motion control device 1201, that is, the target object and the task environment are both virtual.
And step S420, obtaining the task ergonomics score of the control player for the target task according to the control result data obtained in step S410.
The task ergonomics score reflects the ability of the operator to complete the target task.
In this embodiment, each evaluation index for evaluating the task work efficiency may be set to measure the completion ability of the control player for the target task through each evaluation index.
Each evaluation index may be set in advance. Alternatively, the evaluation indexes finally used may be formed by screening, from a preset initial evaluation index set, at least some indexes that have high correlation with the rating of control players, using a correlation analysis method.
Each evaluation index includes, for example: the index reflecting the operation proficiency, the index reflecting the position control capability, the index reflecting the cluster operation information processing capability, the index reflecting the cluster operation efficiency and the index reflecting the fault processing capability.
The index reflecting the position control capability may include at least one of an altitude deviation index, a horizontal deviation index, a heading deviation index, a stability index, and the like. The index reflecting the cluster manipulation information processing capability may, for example, further include at least one of a cluster information reporting speed index, a cluster failure discovery speed index, and the like. The index reflecting the cluster manipulation performance may further include at least one of a formation setting capability index, a route planning capability index, and the like. The index reflecting the fault handling capability may include at least one of an index reflecting the emergency response capability, an index reflecting the correctness of the fault handling, and the like.
In this embodiment, a comparison table reflecting a mapping relationship between the manipulation result data and the individual scores of the evaluation indexes may be preset, so as to obtain the individual scores of the evaluation indexes for the operator according to the manipulation result data and the comparison table obtained in step S410.
The control result data comprises index data corresponding to each evaluation index, and the comparison table can comprise the mapping relation between the data range of each index data and the single-item scores corresponding to the evaluation indexes, so that the single-item scores of the control players for the evaluation indexes can be obtained according to the control result data of any control player according to the comparison table.
The single score may be represented by a score of 1 to 10 or 1 to 100, or may be represented by a numerical value representing a grade of good, medium, or bad, and is not limited herein.
In this regard, in one embodiment, the obtaining of the task ergonomics score of the control player for the target task according to the control result data in the step S420 may include: obtaining the individual scores of the operator for the set evaluation indexes according to the control result data; and acquiring task work efficiency scores of the control players for the target task according to the individual scores and the respective weight coefficients of the evaluation indexes.
In this embodiment, the sum of the respective weight coefficients of the evaluation indexes is equal to 1. The weight coefficient of each evaluation index may be set in advance, or may be obtained by an analytic hierarchy process or the like, and is not limited herein.
In this embodiment, the task ergonomics score Q1 can be obtained by the following formula (1):
Q1 = Σ(i=1..M) wi × qi, formula (1);
In formula (1), i denotes the i-th evaluation index, wi is the weight coefficient of the i-th evaluation index, qi is the control player's individual score for the i-th evaluation index, and M is the total number of evaluation indexes.
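A worked sketch of formula (1), including an illustrative comparison-table lookup from index data ranges to single scores as described above (the ranges, weights, and index names are assumptions, not from the patent):

```python
RANGE_TABLE = {  # index -> list of (upper bound of data range, single score)
    "command_delay_s": [(0.5, 10), (1.0, 7), (2.0, 4), (float("inf"), 1)],
    "height_dev_m":    [(1.0, 10), (3.0, 7), (5.0, 4), (float("inf"), 1)],
}

def single_score(index_name, value):
    # map the index data to its single score via the comparison table
    for upper, score in RANGE_TABLE[index_name]:
        if value <= upper:
            return score

def task_ergonomics_score(index_data, weights):
    # formula (1): Q1 = sum over i of w_i * q_i, with the weights summing to 1
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * single_score(name, value)
               for name, value in index_data.items())

# e.g. task_ergonomics_score({"command_delay_s": 0.8, "height_dev_m": 2.1},
#                            {"command_delay_s": 0.4, "height_dev_m": 0.6})
# -> 0.4 * 7 + 0.6 * 7 = 7.0
```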
Step S430, acquiring physiological state data generated while the control player controls the target object to execute the target task in the virtual scene.
And step S440, acquiring cognitive neuroergonomic scores of the control players for the target task according to the physiological state data acquired in the step S430.
And step S450, obtaining the control score of the control player according to the task work efficiency score and the cognitive nerve work efficiency score.
In this embodiment, the sum of the task work efficiency score and the cognitive nerve work efficiency score may be directly used as the control score of the control player.
In this embodiment, different weighting coefficients may be set for the task ergonomics score and the cognitive neuroergonomics score according to needs, so as to obtain a control score Q of the control player according to the task ergonomics score, the cognitive neuroergonomics score, and the respective weighting coefficients, as shown in the following formula (2):
Q = α1×Q1 + α2×Q2, formula (2);
In formula (2), α1 is the weight coefficient of the task ergonomics score, Q1 is the task ergonomics score, Q2 is the cognitive neuroergonomic score, and α2 is the weight coefficient of the cognitive neuroergonomic score.
In this embodiment, the sum of the weight coefficients α1 and α2 equals 1. The weight coefficients α1 and α2 may be preset, and their values are directly related to the selection demand for control players, that is, to the screening preference between a control player's actual task completion capability and potential cognitive ability: if control players with strong actual task completion capability are to be screened preferentially, α1 may be set greater than α2; if potentially capable control players are preferred, α1 may be set less than α2.
In this embodiment, the task ergonomics score and the cognitive neuroergonomics score may be input to the support vector machine model, i.e., the classification model, to obtain the control score of the control player.
The step of obtaining the support vector machine model may comprise: obtaining a plurality of training samples, wherein each training sample corresponds to one control person, and each training sample comprises a task work efficiency score, a cognitive nerve work efficiency score and a corresponding control score of a corresponding control player; and training the model parameters of the support vector machine model by using the training samples, and further obtaining the support vector machine model.
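A minimal sketch of that training step, assuming scikit-learn's SVC as the support vector machine implementation; the training samples are synthetic placeholders rather than real operator records:

```python
# Sketch: learn a mapping from (task ergonomics score, cognitive
# neuroergonomics score) pairs to a control score class. Data are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))     # columns: [Q1, Q2] per operator
y = (X.sum(axis=1) > 10).astype(int)      # placeholder control score labels

model = SVC(kernel="rbf")
model.fit(X, y)

# Control score class for a new player's two ergonomics scores.
print(model.predict([[7.6, 8.2]]))
```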
Step S460, executing a set operation according to the manipulation score obtained through step S450.
As can be seen from the above steps S410 to S460, the method of this embodiment can determine the control score of a control player according to the control result data and the physiological state data generated when the control player controls the target object to execute the target task, which can greatly save labor cost and time cost, greatly reduce the dependence on expert experience, and improve the accuracy and effectiveness of the analysis.
In one embodiment, the method may select, from a manually set initial evaluation index set and according to a correlation analysis method, the indexes having a higher correlation with the rating of the operator, to form the finally used evaluation indexes, that is, the evaluation indexes available for the task ergonomics score determination in step S420, which is beneficial to improving the accuracy and effectiveness of the task ergonomics score. The method of this embodiment may be implemented by the control ergonomics analysis device 140 as shown in fig. 3.
In this embodiment, the method may further include steps S511 to S515 of obtaining each evaluation index available for use in step S420:
in step S511, a set initial evaluation index set is acquired.
Each index in the initial set of evaluation indices may be set by an expert. This initial evaluation index set includes, for example, all the indexes for task ergonomics mentioned above, and the like.
In this embodiment, the method may further include a step of obtaining the index set by the expert to form an initial evaluation index set, and the step may include, for example: providing a setting interface in response to an operation of setting an initial index; and acquiring the initial index input through the setting interface to form the initial evaluation index set.
Step S512, acquiring control result data generated when the authenticated operator controls the target object to execute the target task, as control result reference data.
For example, if the target object is a drone, an authenticated operator is an operator who is qualified to operate the drone and whose operation level is known, for example an operator of excellent level. Here, each level can be mapped to a corresponding numerical value for representation, so as to facilitate the correlation calculation.
Step S513, according to the control result reference data and the set scoring rule, obtaining a single score of each index in the initial evaluation index set by the authenticated operator as a single reference score.
The set scoring rule reflects the mapping relation between the data range of the index data of each index in the set and the corresponding single score.
In step S513, the index data corresponding to each index in the set may be extracted from the control result reference data, and then, according to the scoring rule, the single score of the authenticated operator for each index in the initial evaluation index set may be obtained.
According to step S513, a data set including the single scores of an authenticated operator and that operator's qualification level can be obtained.
In this embodiment, a plurality of authenticated operators can be organized to participate in the method so as to obtain a plurality of data sets; the finally used evaluation indexes can then be screened according to the degree of correlation between the single scores and the qualification grades of the plurality of authenticated operators, which improves the screening accuracy.
Step S514, obtaining a correlation value representing the correlation degree between each index in the initial evaluation index set and the control level according to the control level of the authenticated control personnel and the corresponding single reference score.
In step S514, the correlation value between each index in the initial evaluation index set and the control level may be obtained according to the data sets of one or more operators; for example, 40 indexes in the initial evaluation index set yield 40 correlation values.
In step S514, a screening threshold may be set according to a value range of the correlation value to screen each finally used evaluation index, that is, an index that makes the correlation value greater than or equal to the screening threshold is screened from the initial evaluation index set as each finally used evaluation index.
In this embodiment, the correlation value between any index in the initial evaluation index set and the control level may be obtained by any correlation algorithm, so that the correlation value indicates how closely the index and the control level are correlated; for example, the correlation value may be obtained by the Pearson correlation coefficient method, that is, the correlation value is expressed as a Pearson correlation coefficient.
And step S515, selecting each evaluation index from the initial evaluation index set according to the correlation value.
In this embodiment, taking the case where the correlation value is expressed by a Pearson correlation coefficient as an example, the value of the Pearson correlation coefficient ranges between -1 and 1, and the indexes whose correlation value has an absolute value greater than 0.5 may be selected as the finally used evaluation indexes.
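The screening of steps S514 and S515 can be sketched as follows, using SciPy's Pearson correlation; the operator levels and single reference scores are invented placeholders:

```python
# Sketch: keep only indexes whose single scores correlate with the operators'
# control levels (|Pearson r| > 0.5). All numbers are invented placeholders.
import numpy as np
from scipy.stats import pearsonr

levels = np.array([3, 3, 2, 2, 1, 3, 1, 2])   # qualification level per operator
single_scores = {                              # single reference scores per index
    "landing_accuracy": np.array([9, 8, 6, 5, 3, 9, 2, 6]),
    "reaction_jitter":  np.array([4, 7, 5, 6, 4, 5, 6, 5]),
}

selected = []
for name, scores in single_scores.items():
    r, _ = pearsonr(scores, levels)
    if abs(r) > 0.5:                           # screening threshold from the text
        selected.append(name)
print(selected)  # expected: ['landing_accuracy']
```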
In one embodiment, the method can obtain the respective weights of the finally used evaluation indexes through an analytic hierarchy process or the like, so as to improve the accuracy of the obtained task ergonomics score. The method of this embodiment may be implemented by the control ergonomics analysis device 140 as shown in fig. 3.
In this embodiment, the method further includes a step of obtaining respective weights of the evaluation indexes, including the following steps S521 to S524:
and step S521, providing a weight comparison interface.
In this embodiment, experts may be organized to give a comparison result of the importance of every two evaluation indexes. For example, if 20 evaluation indexes are selected in this embodiment, a 20 × 20 judgment matrix can be obtained from the comparison results given by the experts.
In this step S521, a weight comparison interface may be provided, which provides an input box for entering the comparison result of every two evaluation indexes. In addition, the weight comparison interface may also provide a scale table as shown in table 1 below, according to which the expert can give the corresponding comparison result in the input box, for acquisition in the following step S522.
TABLE 1 Scale Table
Scale        Meaning
1            The two indexes are equally important
3            One index is slightly more important than the other
5            One index is obviously more important than the other
7            One index is strongly more important than the other
9            One index is extremely more important than the other
2, 4, 6, 8   Intermediate values between the two adjacent judgments above
Reciprocals  If the scale of comparing index i with index j is aij, then the scale of comparing index j with index i is aji = 1/aij
In step S522, the comparison result of the importance of each two evaluation indexes input through the weight comparison interface is acquired.
Step S523, a determination matrix is generated according to the comparison result.
In step S523, the determination matrix generated from the comparison result is an M × M matrix.
Step S524, based on the hierarchical analysis algorithm, obtaining the respective weights of the evaluation indexes according to the determination matrix.
In step S524, based on the analytic hierarchy algorithm, the judgment matrix is subjected to a consistency check so that its consistency is within an acceptable range; if the judgment matrix fails the consistency check, the weight comparison interface is reloaded for the expert to adjust the comparison results, wherein in the reloaded weight comparison interface the previously input comparison results are pre-filled in the input boxes.
In this embodiment, the importance degree ranking and the corresponding weight value of each evaluation index may be obtained through a hierarchical analysis algorithm, and further, the respective weight of each evaluation index may be determined according to the importance degree ranking or the weight value.
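A sketch of the weight derivation under a standard analytic hierarchy process: the weights are taken as the normalized principal eigenvector of the judgment matrix, and the consistency ratio (CR) implements the check described in step S524. The 3 × 3 judgment matrix below is an invented example:

```python
# Sketch of the analytic hierarchy process: derive weights from a judgment
# matrix via its principal eigenvector and check consistency (CR < 0.1).
# The 3x3 matrix below is an invented example, not data from the patent.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                       # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                          # normalized weights, sum to 1

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)              # consistency index
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # random index table
CR = CI / RI                                      # consistency ratio
print(weights, CR)                                # CR < 0.1 means acceptable
```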
< electroencephalogram acquisition apparatus embodiment >
In order to obtain index data of physiological characteristic indexes such as the attention index, the brain load index, and the emotion control index according to the physiological signal data provided by the electroencephalogram acquisition device 1301, the electroencephalogram acquisition device 1301 in this embodiment adopts a broadband electroencephalogram acquisition device for acquiring electroencephalogram signals of 0.5-100 Hz. Electroencephalogram signals are characterized by high energy at low frequencies and low energy at high frequencies: the low-frequency components are easy to acquire, while the weak high-frequency components are easily disturbed during acquisition. If the device has poor noise suppression capability, the acquired broadband electroencephalogram can be submerged in noise. Therefore, the key to accurately acquiring the broadband components of scalp electroencephalogram is improving the signal-to-noise ratio of the electroencephalogram signal. To improve the signal-to-noise ratio, in addition to operations such as amplification and filtering performed by the hardware circuit, the noise introduced by the various interferences in electroencephalogram acquisition needs to be removed in software; to this end, this embodiment provides a processing method by which the electroencephalogram acquisition device 1301 provides physiological signal data.
In this embodiment, the processing method may include: the electroencephalogram acquisition device 1301 sequentially filters corresponding noise signals from the acquired original electroencephalogram signals through a plurality of noise identification models to obtain denoised electroencephalogram signals; and generating physiological signal data to be provided, wherein the physiological signal data comprises the denoised electroencephalogram signal.
In this embodiment, the above plurality of noise recognition models may include at least one of: a noise recognition model for removing electrooculogram (EOG) interference, a noise recognition model for removing electrocardiogram (ECG) interference, a noise recognition model for removing electromyogram (EMG) interference, and a noise recognition model for removing head movement interference.
The processing method of this embodiment may be implemented by a processor (e.g., MCU) of the brain electrical acquisition device 1301, i.e., the brain electrical acquisition device 1301 comprises a processor and a memory for storing program instructions for controlling the processor to execute the above processing method.
The physiological signal data provided by the electroencephalogram acquisition equipment can comprise the denoised electroencephalogram signal, and can also comprise an electroencephalogram image and the like.
In this embodiment, the original electroencephalogram signal is collected by a front-end collecting device of the electroencephalogram acquisition equipment; the front-end collecting device may be an electrode cap, which includes a cap body and a plurality of brain electrodes fixedly arranged on the cap body. Therefore, the original electroencephalogram signals of the control player during execution of the target task can be acquired by having the control player wear the electrode cap.
In this embodiment, because the electrode cap includes a plurality of groups of brain electrodes, multiple channels of original electroencephalogram signals can be collected through the electrode cap and then provided, where each channel corresponds to one group of brain electrodes, and different brain electrodes correspond to different brain positions.
One set of brain electrodes may include a recording electrode (or referred to as an active electrode), a reference electrode, and a ground electrode, different sets of brain electrodes may share the reference electrode and/or the ground electrode, and so on.
In this embodiment, since different positions of the brain are sensitive to different items, a recording electrode can be placed at a brain position that responds sensitively to a set physiological characteristic index, and the index data corresponding to that physiological characteristic index can be obtained from the electroencephalogram signal of the channel in which that electrode is located.
For example, a set of brain electrodes including recording electrodes located in the left dorsolateral prefrontal region (corresponding to the position of F3 in the 10-20 electrode placement system) is provided, and brain electrical signals of the channels in which the set of brain electrodes are located may be used to determine index data corresponding to an attention index.
The original electroencephalogram signal can be subjected to signal amplification, filtering, analog-to-digital conversion and the like before the above operation of sequentially filtering the corresponding noise signals through the plurality of noise recognition models is performed.
In one embodiment, the step of obtaining any of the above noise recognition models may include: collecting the noise signal to be filtered by that noise recognition model; extracting the signal characteristics of the noise signal; and training the noise recognition model according to the signal characteristics.
In the embodiment, the target electroencephalogram signal with the noise signal and the reference electroencephalogram signal without the noise signal can be acquired in a targeted manner according to the noise signal to be filtered, and the target electroencephalogram signal and the reference electroencephalogram signal are compared to obtain the noise signal to be filtered. When the noise identification model receives the input electroencephalogram signal, the components which accord with the signal characteristics in the input electroencephalogram signal can be identified, and the purpose of removing the noise signal from the input electroencephalogram signal is further achieved.
Taking the acquisition of a target electroencephalogram signal with noise generated by eye movement interference as an example, the target electroencephalogram signal can be collected by guiding a tester to blink and/or perform eye movements. For example, a guiding animation may be played on the test computer screen, in which one cue instructs the tester to blink and a moving block guides the tester to perform eye movements and the like, so that the target electroencephalogram signal can be acquired.
Taking the acquisition of a target electroencephalogram signal with head movement noise as an example, head movement artifacts are generally divided into nodding and shaking; the presented indicator can likewise guide the tester to rotate the head and complete the test for the corresponding head movement interference, so that the target electroencephalogram signal with head movement noise can be obtained. In addition, an acceleration sensor can be used for head movement detection, so that head movement features can be extracted and used in training the noise recognition model for removing head movement interference.
Taking the acquisition of a target electroencephalogram signal with muscle movement noise as an example: different facial muscles affect the electroencephalogram to different degrees, with the frontal electrodes sensitive to the facial muscles (corrugator and frontalis) and the temporal and central electrodes sensitive to the chewing muscles (mainly masseter and temporalis). When acquiring the target signal, voice stimulation can be used for guidance, and a voice stimulus of a set duration (for example, 100 ms) is used to induce the N100 component during muscle tensing and relaxing, so as to collect the target electroencephalogram signal with muscle movement noise.
In this embodiment, according to the test modes for the different interferences, a tester can be guided to complete the tests consecutively so as to improve testing efficiency. For example, a reference electroencephalogram signal without any interference may be collected first, followed by the target electroencephalogram signals with noise generated by eye movement interference, head movement interference, muscle movement interference, and the like.
In the training of the noise recognition models, in order for the trained models to achieve high accuracy even with few training samples, the extracted noise signal features may be learned in a transfer learning manner, for example by training with an Inception-series model. This can be done by converting the electroencephalogram signal into an image segment through a nonlinear-system method, and then learning the image segment with the Inception model, and so on.
Compared with denoising methods that remove noise from the electroencephalogram signal purely through mathematical calculation, the method of this embodiment, which connects a plurality of noise recognition models in series to remove the set noise signals that may exist in the electroencephalogram signal, can better improve the accuracy and effectiveness of noise removal.
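Conceptually, the serial chain can be sketched as below; the "models" here are simple stand-in callables, since the patent's trained noise recognition models are not published:

```python
# Sketch of the serial denoising chain: each stage is a trained noise
# recognition model that removes the noise component it identifies.
# The models below are stand-in callables, not the patent's actual models.
import numpy as np

def make_bandstop_stub(freq_hz, fs):
    """Stand-in 'model': removes a single sinusoidal component at freq_hz."""
    def model(signal):
        t = np.arange(len(signal)) / fs
        basis = np.sin(2 * np.pi * freq_hz * t)
        coef = np.dot(signal, basis) / np.dot(basis, basis)
        return signal - coef * basis
    return model

fs = 250  # sampling rate in Hz (assumed)
noise_models = [
    make_bandstop_stub(1.0, fs),    # stand-in for the EOG-interference model
    make_bandstop_stub(1.2, fs),    # stand-in for the ECG-interference model
    make_bandstop_stub(60.0, fs),   # stand-in for the EMG-interference model
]

raw_eeg = np.random.randn(fs * 10)  # 10 s of placeholder raw EEG
denoised = raw_eeg
for model in noise_models:          # apply the models in sequence
    denoised = model(denoised)
```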
< myoelectric Power collecting apparatus embodiment >
Fig. 5 is a schematic diagram illustrating a structural configuration of the myoelectricity collection device 1302 according to an embodiment.
As shown in fig. 5, the myoelectricity collecting device 1302 may include a myoelectricity collecting electrode 13021, a filtering and signal amplifying module 13022, an analog-to-digital conversion module 13023, a control module 13024 and a communication module 13025, which are connected in sequence. The myoelectricity collecting device 1302 may further include a power supply module 13026, and the power supply module 13026 provides an operating voltage for the analog-to-digital conversion module 13023 and the control module 13024.
The myoelectric collecting electrode 13021 is brought into contact with a test player to collect a raw myoelectric signal.
The filtering and signal amplifying module 13022 is used for performing analog filtering, signal amplification, notch preprocessing and the like on the bioelectric signals picked up by the myoelectricity collecting electrode 13021. The filtering and signal amplifying module 13022 may include an analog band-pass filter circuit, an op-amp amplifier circuit, and a notch preprocessing circuit.
The analog-to-digital conversion module 13023 may employ an integrated AD conversion chip.
The control module 13024 may employ an MCU chip or the like.
The communication module 13025 is used for transmitting the electromyographic signals output by the control module 13024 (i.e. the electromyographic signals provided by the electromyographic acquisition device 1302) to the device 140. The communication module 13025 may use a USB communication cable with a shielding layer for data transmission, for example.
< embodiment of electrocardiographic acquisition apparatus >
The electrocardiograph acquisition device 1303 may include an acquisition electrode, an analog filtering module, a signal amplification module, a notch processing module, an analog-to-digital conversion module, a control module, and a communication module, which are connected in sequence.
The cardiac pulsation frequency of the human body is 0.25-100 Hz, and the energy of the electrocardiosignal is concentrated in 0.25-35 Hz. The analog filtering module can therefore adopt a band-pass filtering circuit, performing high-pass filtering first and then second-order low-pass filtering, finally limiting the signal band to 0.1-106 Hz and filtering out other noise signals outside this bandwidth.
Because the electrocardiosignal is a weak bioelectric signal, with an amplitude range of 0.05-4 mV, post-processing analysis can only be carried out after it is amplified. The signal amplification module can be used to amplify the output signal of the analog filtering module, and may adopt an instrumentation amplifier chip for single-stage amplification.
In order to better eliminate the interference of the power frequency signal, 50Hz notch processing can be carried out through the notch processing module.
The communication module is configured to send the electrocardiographic signal output by the control module (i.e., the electrocardiographic signal provided by the electrocardiographic acquisition device 1303) to the device 140.
The signal processing flow of the electrocardiograph acquisition device may include the following steps: the differential signals output by the acquisition electrodes are processed in sequence by the analog filtering module, the signal amplification module, and the notch processing module; the processed analog bioelectric signal is input to the analog-to-digital conversion module to be converted into a digital signal; the analog-to-digital conversion module transmits the digital signal to the control module using a transmission protocol such as SPI for digital signal processing; and the digitally processed signal (namely the electrocardiosignal provided by the electrocardiograph acquisition device 1303) is transmitted to the device 140 by the communication module.
< video capture device embodiment >
For video capture, the video capture device 1304 may include a camera and an image processing apparatus connected to the camera.
Since the video file collected by the camera is composed of many frames of images, it can be split into individual video images; the image processing apparatus can split the collected video file into video images for image processing.
The image processing includes face detection, i.e., recognizing a face region from an image captured by a camera.
The face detection may be accomplished by scanning the image through a classifier. The classifier may be any existing face recognition classifier, for example, a classifier obtained by training with the Adaboost algorithm, and is not described herein again.
The image processing may further include: and aiming at the identified face area, positioning key points and extracting features.
Key point positioning includes locating the eyes: a distinguishing threshold can be determined according to the contrast between the eyes and the surrounding color, so that the position of the human eyes can be determined from the gray-value changes in the binarized image. When a blink occurs, the gray level of the eye region changes, and feature data representing the number of blinks can be extracted from this change.
Key point positioning may also include locating the mouth: after the eye positions are determined, the mouth position can be located according to a face model. When the tested control player yawns, the mouth opens and the gray value at the mouth position changes, so feature data representing the yawning frequency can be extracted from the change of the gray value at the mouth position.
The physiological information data provided by the video capture device 1304 may include at least one of the above characteristic data representing blink action and the above characteristic data representing yawning action.
In addition, the video capture device 1304 may perform only the image preprocessing of binarization and transmit the preprocessed images to the device 140, with the device 140 performing the above image processing operations; this is not limited herein.
In one embodiment, the video capture device 1304 providing physiological information data may include the steps of: acquiring a collected video image; identifying a face region in the video image; locating an eye position and a mouth position in the recognized face region; determining a time point of blinking according to the gray level change of the eye position between every two adjacent video images; determining the time point of the yawning action according to the gray scale change of the mouth position between every two adjacent video images; and generating physiological information data provided by the video acquisition equipment according to the time points.
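A rough sketch of this pipeline, assuming OpenCV's stock Haar cascade for face detection; the eye region is a fixed fraction of the face box, the gray-change threshold is an invented value, and the video path is a placeholder, all standing in for the patent's key-point positioning and binarization details:

```python
# Sketch: detect blink time points from gray-level changes in the eye region.
# Face detection uses OpenCV's stock Haar cascade; the eye region is a crude
# fixed fraction of the face box, a stand-in for real key-point positioning.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def region_mean(gray, box):
    x, y, w, h = box
    return float(gray[y:y + h, x:x + w].mean())

prev_eye = None
blink_times = []
cap = cv2.VideoCapture("session.mp4")  # placeholder video path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        continue
    x, y, w, h = faces[0]
    eye_box = (x + w // 5, y + h // 4, 3 * w // 5, h // 6)  # rough eye band
    m = region_mean(gray, eye_box)
    if prev_eye is not None and abs(m - prev_eye) > 8:      # threshold assumed
        blink_times.append(cap.get(cv2.CAP_PROP_POS_MSEC))  # blink time point
    prev_eye = m
cap.release()
print(blink_times)
```

The yawn time points can be obtained the same way by applying the gray-change test to a mouth region below the eye band.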
< example of extraction of index data >
In this embodiment, after the device 140 receives the physiological information data provided by each physiological information collecting device 130, the index data of the set physiological characteristic index is obtained according to the physiological information data.
For the electroencephalogram signals provided by the electroencephalogram acquisition device 1301, index data of physiological characteristic indexes such as attention indexes, brain load indexes and nerve fatigue indexes can be extracted.
1) Regarding extraction of index data for attention indexes from electroencephalogram signals:
attention refers to the ability of human mental activities to direct and concentrate on something, and it includes a series of complex neural processes such as sensory and perceptual information input, processing, integration, regulation and control, which are basic prerequisites for human learning and other activities.
In the process of controlling a player to control a target object to execute a target task, the electroencephalogram acquisition device 1301 can provide an electroencephalogram signal generated by the player in the process.
In this embodiment, the electroencephalogram signal corresponding to a first group of brain electrodes, provided by the electroencephalogram acquisition device 1301, may be recorded as the first electroencephalogram signal, from which index data for the attention index can be extracted, where the first group of brain electrodes includes a recording electrode located in the left dorsolateral prefrontal area (corresponding to position F3 in the 10-20 electrode placement system).
In this embodiment, the first electroencephalogram signal is composed of a plurality of sequentially arranged time series, and each time series may have the same time length, for example 10 s. The degree of attention concentration of the control player may be represented by the variance, across the time series, of the relative power Rp of the first electroencephalogram signal in the β_1 frequency band (13-20 Hz); that is, this variance value may be used as the index data for the attention index, where a greater Rp value represents a higher degree of attention concentration in the corresponding time series.
The relative power Rp of the β _1 band of any time sequence is: the ratio of the power density value of the beta _1 frequency band corresponding to the time sequence to the power density value of the full frequency band.
The power density value of any time series can be calculated as follows: the time series X(n) is divided into K sub-series X1(n) to XK(n), each sub-series overlapping its neighbor, and a Hamming window of equal length is applied to each sub-series to avoid spectral leakage in the final result; the power density spectrum of each sub-series is calculated; and the average of the power density spectra of all sub-series is taken as the power density value of the time series.
In this embodiment, a variance value of the Rp value may be obtained according to the Rp value of each time sequence of the operator in the whole operation process, so as to represent a fluctuation condition of attention of the operator in the process of completing the target task, and a smaller variance indicates a stronger attention control capability. Finally, the index data of the control player for the attention index can be determined according to the variance value and the set classification threshold.
Taking the division of attention control into three levels (excellent, good, and unqualified) as an example, the classification threshold reflects the mapping relationship between numerical ranges of the variance value and the levels, and the finally determined index data of the control player for the attention index is the level to which the player belongs, for example excellent. Different data identifiers may be used to represent the different levels, for example 100 for excellent, 101 for good, and 110 for unqualified, which is not limited herein.
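A sketch of the Rp and variance computation described above, using SciPy's Welch estimator (which performs the overlapping Hamming-windowed averaging the text describes) on random placeholder EEG:

```python
# Sketch: relative power of the beta_1 band (13-20 Hz) per 10 s time series,
# then the variance of Rp across series as the attention index datum.
# The EEG here is random placeholder data.
import numpy as np
from scipy.signal import welch

fs = 250                                  # sampling rate in Hz (assumed)
eeg = np.random.randn(fs * 60)            # 60 s placeholder first EEG signal
series = eeg.reshape(-1, fs * 10)         # split into 10 s time series

rp_values = []
for x in series:
    f, psd = welch(x, fs=fs, window="hamming", nperseg=fs * 2)
    band = (f >= 13) & (f <= 20)          # beta_1 band
    rp_values.append(psd[band].sum() / psd.sum())

attention_variance = np.var(rp_values)    # smaller variance = steadier attention
print(attention_variance)
```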
2) Regarding extraction of index data for a brain load index from an electroencephalogram signal:
a classification model for classifying the brain burden is established in advance, for example, the brain burden is classified into three levels, namely, high, middle and low, each level can be represented by a corresponding data identifier, that is, the extracted index data can be the corresponding brain burden level.
Because the electroencephalogram signals of the frontal lobe area are sensitive to brain load changes, the electroencephalogram signal corresponding to a second group of brain electrodes located at the frontal lobe, provided by the electroencephalogram acquisition device 1301, may be recorded as the second electroencephalogram signal, from which index data for the brain load index can be extracted.
In this embodiment, extracting the index data for the brain load index from the electroencephalogram signal may include: acquiring a vector value of the second electroencephalogram signal to a set characteristic vector reflecting the brain load state; and inputting the vector value into the classification model to obtain index data of the brain load index.
The feature vector is composed of a plurality of features reflecting brain load states, and the value of the second electroencephalogram signal to each feature in the plurality of features constitutes the vector value, wherein each feature corresponds to one parameter used for evaluating the brain load index.
The features in the feature vector are divided into two sets, a first feature set and a second feature set, wherein the first feature set comprises: recursive feature quantities obtained from each of a plurality of intrinsic mode functions (IMFs) of the electroencephalogram signal.
The plurality of intrinsic mode functions are the first set number of IMFs, for example the first 9 IMFs, obtained by ensemble empirical mode decomposition (EEMD) of the electroencephalogram signal.
The recursive feature quantities to be obtained from each function may include the recurrence rate, determinism, laminarity, and entropy, which are standard recurrence quantification analysis measures.
In the case where the plurality of intrinsic mode functions is 9 IMFs, with each IMF contributing the above recursive feature quantities, the first feature set includes 36 features.
The second feature set comprises: the relative powers of the electroencephalogram signal in the five bands 1-4 Hz, 4-8 Hz, 8-13 Hz, 13-30 Hz, and 30-45 Hz; that is, the second feature set may comprise 5 features.
In this embodiment, training samples may be constructed by referring to the above manner of extracting the vector value of the control player's second electroencephalogram signal for the set feature vector, so as to train the classification model. For each training sample, a corresponding label can be set according to the time the tester spent completing the test task: for example, the label of a sample with a short completion time is low brain load, the label of a sample with a long completion time is high brain load, and the label of a sample with an intermediate completion time is medium brain load. The classification model can then be trained from the labeled samples.
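A sketch of the classification stage, assuming the 41-dimensional feature vectors (36 recurrence features plus 5 relative band powers) have already been extracted; random placeholders stand in for the EEMD and recurrence computations, and the time-based labeling follows the rule of thumb above:

```python
# Sketch: train a brain load classifier on 41-dimensional feature vectors
# (36 recurrence features from 9 IMFs + 5 relative band powers). Feature
# extraction is stubbed with random placeholders; labels follow the text's
# rule of thumb (task completion time -> low / medium / high load).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
features = rng.normal(size=(150, 41))              # placeholder feature vectors
completion_time = rng.uniform(60, 600, 150)        # placeholder task times (s)
labels = np.digitize(completion_time, [240, 420])  # 0=low, 1=medium, 2=high

clf = SVC(kernel="rbf").fit(features, labels)
new_vector = rng.normal(size=(1, 41))              # vector value of a new 2nd EEG
print(clf.predict(new_vector))                     # predicted brain load level
```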
3) Extracting index data for the nerve fatigue index from the electroencephalogram signal:
Because the P3a component in the electroencephalogram signal can reflect the brain fatigue state, and P3a is easily induced at the frontal area, the electroencephalogram signal corresponding to the second group of brain electrodes located at the frontal area, provided by the electroencephalogram acquisition device 1301, may be recorded as the second electroencephalogram signal, from which index data for the nerve fatigue index can be extracted. The values of the index data include excellent, good, and unqualified.
In this embodiment, a reference value for the amplitude of the P3a component may be set, together with a set proportion higher than 50%, for example 75%. If the amplitudes of the P3a components in the second electroencephalogram signal are all greater than or equal to the set proportion of the reference value, this indicates that the operator did not develop nerve fatigue during the operation, and the index data of the nerve fatigue index is excellent. If the amplitude of the P3a component remains not lower than the set proportion of the reference value within a set time length after the operation starts, the index data is good; otherwise, it is unqualified.
The set time length may be determined according to the time limit of the target task; taking a 60-minute task as an example, it may be set to 30 minutes or the like.
In one embodiment, an auditory Oddball paradigm stimulus may be applied to the control player by the device 140 while the control player controls the target object to execute the target task, so as to induce the P3a component in the electroencephalogram signal.
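The grading rule above can be sketched as a small function, taking per-trial P3a amplitudes as input; the reference value and the example numbers are invented, while the 75% proportion and 30-minute window come from the text:

```python
# Sketch of the P3a-based nerve fatigue grading. Inputs are (time_s, amplitude)
# pairs for detected P3a components; the reference value is assumed given.
def grade_nerve_fatigue(p3a, reference, proportion=0.75, window_s=30 * 60):
    threshold = proportion * reference
    if all(amp >= threshold for _, amp in p3a):
        return "excellent"           # amplitude never drops below threshold
    if all(amp >= threshold for t, amp in p3a if t <= window_s):
        return "good"                # holds up within the set time length
    return "unqualified"

trials = [(300, 5.1), (1500, 4.9), (2400, 3.2)]    # placeholder (s, microvolts)
print(grade_nerve_fatigue(trials, reference=6.0))  # -> "good"
```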
For the myoelectric signal provided by the myoelectric acquisition device 1302, index data for a muscle fatigue index may be extracted.
Because the frequency information of the electromyographic signal (including the median frequency and the mean frequency) is negatively correlated with the fatigue degree of the tested player, electromyographic signals show different frequency components under different fatigue degrees. Therefore, the degree of muscle fatigue can be expressed by the parameter values of two parameters: the mean power frequency (MPF) and the median frequency (MF).
In this embodiment, the collected electromyographic signals may be analyzed by an FFT power spectrum scheme to obtain an average power frequency value and a median frequency value of the electromyographic signals.
In this embodiment, the average power frequency value and the median frequency value may still be used to divide the index data of the muscle fatigue index into several levels, which is not described herein again.
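A sketch of the MPF and MF computation from an FFT power spectrum, on random placeholder EMG data:

```python
# Sketch: mean power frequency (MPF) and median frequency (MF) of an EMG
# signal from its FFT power spectrum. The EMG is random placeholder data.
import numpy as np

fs = 1000                                  # EMG sampling rate in Hz (assumed)
emg = np.random.randn(fs * 5)              # 5 s placeholder EMG

spectrum = np.abs(np.fft.rfft(emg)) ** 2   # power spectrum
freqs = np.fft.rfftfreq(len(emg), d=1 / fs)

mpf = (freqs * spectrum).sum() / spectrum.sum()              # mean power frequency
cumulative = np.cumsum(spectrum)
mf = freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]  # median frequency
print(mpf, mf)  # both decrease as muscle fatigue develops
```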
For the electrocardiographic signals provided by the electrocardiographic acquisition device 1303, index data of emotion control capability indexes can be extracted.
Because the heart rate changes all the time, its curve is naturally a fluctuating line. Heart rate variability directly reflects the regulating effect of neurohumoral factors on the heartbeat. Analyzed from the perspective of neural regulation, heart rate variability essentially represents the competition and balance between vagal and sympathetic activity. From a time-domain analysis, the heart rate of a tested player changes rapidly with different task events, which in turn affects the standard deviation (SDNN) of all sinus RR intervals (NN intervals for short). Thus, emotional fluctuation can be quantitatively analyzed through SDNN, and SDNN is positively correlated with emotional fluctuation; that is, the greater the SDNN value, the more pronounced the emotional fluctuations.
Therefore, the SDNN value can be determined according to the electrocardiosignals to obtain index data of the emotion control ability index of the control player.
In this embodiment, the index data of the emotion control ability index may still be divided into several levels by using the SDNN value, and will not be described herein again.
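A sketch of the SDNN computation, assuming the R-peak times have already been detected from the electrocardiosignal (the peak times below are placeholder values):

```python
# Sketch: SDNN (standard deviation of NN/RR intervals, in ms) from detected
# R-peak times. The peak times below are placeholder values in seconds.
import numpy as np

r_peaks_s = np.array([0.00, 0.82, 1.66, 2.44, 3.30, 4.10, 4.95])
rr_ms = np.diff(r_peaks_s) * 1000.0   # RR intervals in milliseconds
sdnn = rr_ms.std(ddof=1)              # larger SDNN = stronger emotional fluctuation
print(rr_ms, sdnn)
```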
For the physiological information data provided by the video capture device 1304, index data for the nerve fatigue index may be extracted.
As described above, the physiological information data provided by the video capture device 1304 may include the time points at which blinks occur and the time points at which yawning occurs.
Whether the tested player is in a fatigue state can be judged from the change in the number of blinks and from the yawning behavior. For example, in a normal state the number of blinks per minute does not vary much, but when a person is fatigued the blink count may suddenly become smaller or larger; therefore, the change in the number of blinks within each set time period (for example, 1 minute) can be compared according to the time points at which blinks occur, and if the blink count is abnormal over several consecutive periods, the player is in a fatigue state. For another example, during normal operation there is essentially no yawning, but a fatigued control player yawns; the number of yawns within each set time length can be counted, and yawning over several consecutive periods indicates that the player is in a fatigue state.
In this embodiment, index data indicating whether the control player is in a fatigue state may be given according to the physiological information data provided by the video capture device 1304.
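The per-period counting rule can be sketched as follows; the normal blink range and the consecutive-period run length are illustrative assumptions, not values from the text:

```python
# Sketch: flag fatigue when the blink count per 1-minute period is abnormal
# for several consecutive periods. The normal range (10-25 blinks/min) and
# the run length (3 periods) are illustrative assumptions.
import numpy as np

def fatigued(blink_times_s, total_s, normal=(10, 25), run=3, period=60):
    counts, _ = np.histogram(blink_times_s,
                             bins=np.arange(0, total_s + 1, period))
    abnormal = (counts < normal[0]) | (counts > normal[1])
    streak = 0
    for a in abnormal:
        streak = streak + 1 if a else 0
        if streak >= run:
            return True
    return False

blinks = np.sort(np.random.default_rng(2).uniform(0, 300, 20))  # placeholder
print(fatigued(blinks, total_s=300))
```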
Each of the above embodiments focuses on differences from other embodiments, and the same or similar parts of different embodiments may be referred to and used with each other.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. A control work efficiency analysis method based on cognitive power in a virtual scene comprises the following steps:
acquiring a control command generated by a control player operating a motion control device, and updating a virtual scene according to the control command;
acquiring feedback data generated by the virtual scene, and sending the feedback data to the motion control device;
acquiring physiological state data generated by the control player controlling a target object to execute a target task in the virtual scene;
acquiring a cognitive neuroergonomic score of the control player for the target task according to the physiological state data;
obtaining a control score of the control player according to the cognitive neuroergonomic score;
and executing set operation according to the control score.
2. The method of claim 1, wherein the executing of the set operation comprises at least one of:
a first item outputting the manipulation score;
a second item, which provides a selection result whether the control player is selected or not according to the control score;
a third item, determining the control level of the control player according to the control score;
and fourthly, selecting a control combination which enables the control score to meet the set requirement according to the control score of the same control player for controlling the target object to execute the target task through different motion control devices, wherein one control combination comprises the control player and the motion control device which are matched.
3. The method of claim 1, wherein the method further comprises:
providing a setting entrance in response to an operation of setting an application scene;
acquiring an application scene input through the setting entrance, wherein the input application scene reflects an operation to be executed based on a control score;
and determining the operation content of the set operation according to the input application scene.
4. The method of claim 1, wherein the method comprises:
providing a configuration interface in response to an operation to configure the target task;
acquiring configuration information for the target task input through the configuration interface;
and providing the virtual scene corresponding to the target task according to the configuration information.
5. The method of claim 1, wherein the step of acquiring the physiological state data comprises:
acquiring physiological information data provided by each physiological information acquisition device, wherein the physiological information data provided by any physiological information acquisition device comprises at least one of physiological signal data and physiological image data;
obtaining the physiological state data according to the physiological information data, including:
obtaining parameter values for evaluating parameters of the reflected physiological characteristic indexes according to the physiological information data provided by the physiological information acquisition equipment;
according to the parameter value, determining index data of the control player for the corresponding physiological characteristic index;
and generating the physiological state data comprising all of the determined index data.
6. The method of claim 5, wherein each physiological information acquisition device comprises an electroencephalogram acquisition device providing physiological information data comprising at least one of an electroencephalogram signal and an electroencephalogram image; the brain electricity collection device provides physiological information data and comprises:
the electroencephalogram acquisition equipment filters corresponding noise signals of the acquired original electroencephalogram signals sequentially through a plurality of noise identification models to obtain denoised electroencephalogram signals;
generating the provided physiological information data comprising the denoised electroencephalogram signal; and/or,
each physiological information acquisition device comprises an electromyographic acquisition device, and the physiological information data provided by the electromyographic acquisition device comprises at least one of an electromyographic signal and an electromyographic image; and/or,
each physiological information acquisition device comprises an electrocardiograph acquisition device, and the physiological information data provided by the electrocardiograph acquisition device comprises at least one of an electrocardiosignal and an electrocardiographic image; and/or,
each physiological information acquisition device comprises a video acquisition device for acquiring facial actions, the physiological information data provided by the video acquisition device comprises at least one of change data of facial features and facial image data, and the providing of the physiological information data by the video acquisition device comprises:
acquiring a collected video image;
identifying a face region in the video image;
locating an eye position and a mouth position in the recognized face region;
determining a time point of blinking according to the gray level change of the eye position between every two adjacent video images;
determining the time point of the yawning action according to the gray scale change of the mouth position between every two adjacent video images;
and generating physiological information data provided by the video acquisition equipment according to the time point.
7. The method of claim 5, wherein the physiological information acquisition devices comprise brain electrical acquisition devices that provide physiological information data comprising:
the electroencephalogram acquisition equipment filters corresponding noise signals of the acquired original electroencephalogram signals sequentially through a plurality of noise identification models to obtain denoised electroencephalogram signals;
and generating the provided physiological information data to comprise the denoised electroencephalogram signal.
8. The method of claim 1, wherein the obtaining of the cognitive neuroergonomic score of the control player for the target task according to the physiological state data comprises:
and inputting the physiological state data into a preset cognitive neural work efficiency model to obtain a cognitive neural work efficiency score of the control player for the target task, wherein the cognitive neural work efficiency model reflects the mapping relation between any physiological state data and the cognitive neural work efficiency score.
9. A manipulation ergonomics apparatus based on cognition in a virtual scene, comprising at least one computing device and at least one storage device, wherein,
the at least one storage device is to store instructions to control the at least one computing device to perform the method of any of claims 1 to 8.
10. A control ergonomics analysis system based on cognition in a virtual scene, wherein the system comprises a task execution device, physiological information acquisition devices and the control ergonomics analysis device of claim 9, wherein the task execution device and the physiological information acquisition devices are in communication connection with the control ergonomics analysis device;
the motion control device of the task execution equipment is a flight control device, and a target object controlled by the flight control device is an unmanned aerial vehicle under a virtual scene.
CN202010414066.0A 2020-05-15 2020-05-15 Control work efficiency analysis method, device and system based on cognitive power in virtual scene Active CN111553617B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010414066.0A CN111553617B (en) 2020-05-15 2020-05-15 Control work efficiency analysis method, device and system based on cognitive power in virtual scene


Publications (2)

Publication Number Publication Date
CN111553617A true CN111553617A (en) 2020-08-18
CN111553617B CN111553617B (en) 2021-12-21

Family

ID=72004750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010414066.0A Active CN111553617B (en) 2020-05-15 2020-05-15 Control work efficiency analysis method, device and system based on cognitive power in virtual scene

Country Status (1)

Country Link
CN (1) CN111553617B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104207793A (en) * 2014-07-03 2014-12-17 中山大学 Gripping function evaluating and training system
CN108836347A (en) * 2018-05-10 2018-11-20 中国科学院宁波材料技术与工程研究所 Disturbances in patients with Parkinson disease recovery training method and system
CN109875580A (en) * 2019-03-01 2019-06-14 安徽工业大学 A kind of Cognitive efficiency can computation model
CN110522427A (en) * 2019-09-24 2019-12-03 中国人民解放军第四军医大学 Civilian pilot based on virtual reality executes the monitoring system and method for control force

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200025A (en) * 2020-09-25 2021-01-08 北京师范大学 Operation and control work efficiency analysis method, device and system
CN112256123A (en) * 2020-09-25 2021-01-22 北京师范大学 Brain load-based control work efficiency analysis method, equipment and system
CN112256122A (en) * 2020-09-25 2021-01-22 北京师范大学 Control work efficiency analysis method, device and system based on mental fatigue
CN112256124A (en) * 2020-09-25 2021-01-22 北京师范大学 Emotion-based control work efficiency analysis method, equipment and system
CN113158925A (en) * 2021-04-27 2021-07-23 中国民用航空飞行学院 Method and system for predicting reading work efficiency of composite material maintenance manual
CN114089627A (en) * 2021-10-08 2022-02-25 北京师范大学 Non-complete information game strategy optimization method based on double-depth Q network learning
CN114089627B (en) * 2021-10-08 2023-09-15 北京师范大学 Incomplete information game strategy optimization method based on double-depth Q network learning

Also Published As

Publication number Publication date
CN111553617B (en) 2021-12-21

Similar Documents

Publication Publication Date Title
CN111544015B (en) Cognitive power-based control work efficiency analysis method, device and system
CN111553617B (en) Control work efficiency analysis method, device and system based on cognitive power in virtual scene
CN111598453B (en) Control work efficiency analysis method, device and system based on execution force in virtual scene
CN111553618B (en) Operation and control work efficiency analysis method, device and system
CN111598451B (en) Control work efficiency analysis method, device and system based on task execution capacity
CN107224291B (en) Dispatcher capability test system
CN110070105B (en) Electroencephalogram emotion recognition method and system based on meta-learning example rapid screening
CN112353407B (en) Evaluation system and method based on active training of neurological rehabilitation
CN107951485A (en) Ambulatory ECG analysis method and apparatus based on artificial intelligence self study
CN112163518B (en) Emotion modeling method for emotion monitoring and adjusting system
CN110169770A (en) The fine granularity visualization system and method for mood brain electricity
Sarkar et al. Classification of cognitive load and expertise for adaptive simulation using deep multitask learning
CN109411042A (en) Ecg information processing method and electro cardio signal workstation
CN112256124B (en) Emotion-based control work efficiency analysis method, equipment and system
US20220249801A1 (en) Breathing meditation induction device combined with headphones for sensing brain wave signals, breathing meditation induction system for displaying and storing brain wave signals by using same, and system for brain wa ve signal management through middle manager
US20220051586A1 (en) System and method of generating control commands based on operator&#39;s bioelectrical data
CN104367306A (en) Physiological and psychological career evaluation system and implementation method
CN115376695A (en) Method, system and device for neuropsychological assessment and intervention based on augmented reality
EP3755226B1 (en) System and method for recognising and measuring affective states
CN113143273A (en) Intelligent detection system and method for attention state of learner in online video learning
CN113974627B (en) Emotion recognition method based on brain-computer generated confrontation
Zhao et al. Human-computer interaction for augmentative communication using a visual feedback system
Barreto et al. On the classification of mental tasks: a performance comparison of neural and statistical approaches
WO2024032728A1 (en) Method and apparatus for evaluating intelligent human-computer coordination system, and storage medium
CN110569968B (en) Method and system for evaluating entrepreneurship failure resilience based on electrophysiological signals

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant