CN115708918A - Emotion regulation ability training method, man-machine interaction method and related device

Emotion regulation ability training method, man-machine interaction method and related device

Info

Publication number
CN115708918A
CN115708918A (application CN202211448157.1A)
Authority
CN
China
Prior art keywords: user, target, MIT, training, tracking
Prior art date
Legal status
Pending
Application number
CN202211448157.1A
Other languages
Chinese (zh)
Inventor
黄艳
陶芸芸
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202211448157.1A priority Critical patent/CN115708918A/en
Priority to PCT/CN2022/137706 priority patent/WO2024103462A1/en
Publication of CN115708918A publication Critical patent/CN115708918A/en
Pending legal-status Critical Current

Classifications

    • A61B 5/16 — Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state
    • A61B 5/378 — Electroencephalography [EEG] using evoked responses to visual stimuli (A61B 5/369, A61B 5/377)
    • A61M 21/00 — Other devices or methods to cause a change in the state of consciousness; devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer


Abstract

The invention discloses an emotion regulation ability training method, a man-machine interaction method and a related device. The emotion regulation ability training method is applied to a man-machine interaction system comprising a multiple identity tracking (MIT) training system and a result evaluation system. The result evaluation system obtains the number of correctly recognized targets produced when the MIT training system performs target tracking selection detection on a user from the two dimensions of target position and target identity; the result evaluation system analyzes the number of correctly recognized targets to obtain the user's reaction accuracy in the target tracking selection detection; and the result evaluation system sends the reaction accuracy to the MIT training system, so that the MIT training system adjusts the tracking target moving speed of the user's next target tracking selection detection according to the reaction accuracy. The user's emotion regulation ability is thereby trained and improved.

Description

Emotion regulation ability training method, man-machine interaction method and related device
Technical Field
The invention relates to the technical field of brain science, in particular to an emotion regulating ability training method, a man-machine interaction method and a related device.
Background
The perception and expression of emotion are an important part of life, and emotion regulation provides an effective means of understanding and controlling emotion. Through emotion regulation, individuals can respond quickly and adaptively to changing social situations and achieve their goals. However, emotion regulation ability not only differs between individuals; a lack of it has also been found to be associated with a wide range of mental disorders, such as anxiety, depression and post-traumatic stress disorder. Improving emotion regulation ability therefore helps to address these problems.
There is therefore an urgent need for a technical solution that can train a user's emotion regulation ability.
Disclosure of Invention
The embodiments of the invention provide an emotion regulation ability training method, a man-machine interaction method and a related device, which are intended to solve the technical problem that the prior art cannot train emotion regulation ability.
In order to achieve the above object, an embodiment of the present invention provides an emotion regulation ability training method, where the method is applied to a human-computer interaction system, and the human-computer interaction system comprises a multi-identity tracking (MIT) training system and a result evaluation system; the method comprises the following steps:
the result evaluation system obtains the number of correctly recognized targets produced when the MIT training system performs target tracking selection detection on a user from the two dimensions of target position and target identity;
the result evaluation system analyzes the number of correctly recognized targets to obtain the user's reaction accuracy in the target tracking selection detection;
and the result evaluation system sends the reaction accuracy to the MIT training system, so that the MIT training system adjusts the tracking target moving speed of the user's next target tracking selection detection according to the reaction accuracy, thereby training the user's emotion regulation ability.
Optionally, the human-computer interaction system further includes: an emotion adjustment quantification system; the emotion adjusting quantification system is used for quantifying the emotion adjusting capacity of the user according to a preset emotion adjusting scale;
before the result evaluation system obtains the number of correctly recognized targets obtained when the MIT training system performs target tracking selection detection on a user from two dimensions of a target position and a target identity, the method further includes:
the result evaluation system acquires the user's initial emotion regulation ability characterization value, obtained by the emotion regulation quantification system detecting the user's emotion regulation ability from the two dimensions of cognitive re-evaluation and expression inhibition;
the result evaluation system determines an initial value of the MIT training system operation according to the initial emotion regulating capacity characteristic value of the user and sends the initial emotion regulating capacity characteristic value of the user to the MIT training system;
wherein the initial value of the MIT training system operation at least comprises: the number of initial tracking targets and the moving speed of the initial tracking targets.
Optionally, the obtaining, by the result evaluation system, the number of correctly recognized targets obtained by the MIT training system performing target tracking selection detection on the user from two dimensions, namely a target position and a target identity, specifically includes:
and the MIT training system carries out target tracking selection detection on the user from two dimensions of a target position and a target identity according to the initial value.
Optionally, the human-computer interaction system further includes: an electroencephalogram recording system;
before the MIT training system performs target tracking selection detection on the user, the method further comprises: the result evaluation system acquires first electroencephalogram data when the electroencephalogram recording system carries out emotional stimulation paradigm detection on the user;
after the MIT training system performs target tracking selection detection on a user, the method further comprises: the result evaluation system acquires second electroencephalogram data recorded by the electroencephalogram recording system when the emotional stimulation paradigm detection is carried out on the user;
wherein the first brain electrical data and the second brain electrical data both comprise: late positive potentials associated with mood regulation;
and the result evaluation system analyzes and processes the first electroencephalogram data and the second electroencephalogram data to obtain an electroencephalogram evaluation result.
Optionally, the MIT training system adjusting, according to the reaction accuracy, a tracking target moving speed when the user performs next target tracking selection detection, specifically includes:
when the reaction accuracy is greater than a first preset threshold, the MIT training system controls the tracking target moving speed to accelerate in the user's next target tracking selection detection;
and when the reaction accuracy is smaller than the first preset threshold, the MIT training system controls the tracking target moving speed to decelerate in the next target tracking selection detection.
Optionally, the method further comprises:
the result evaluation system acquires target identification time data obtained when the MIT training system respectively carries out target tracking selection detection on the user from two dimensions of the target position and the target identity;
and the result evaluation system analyzes the target identification time data to obtain the reaction time of the user in target tracking selection detection.
Optionally, when the reaction accuracy is greater than the first preset threshold, the MIT training system controlling the tracking target moving speed to accelerate in the user's next target tracking selection detection specifically includes:
determining whether the reaction time is greater than a second preset threshold;
when the reaction time is greater than the second preset threshold, the MIT training system controls the tracking target moving speed of the user's next target tracking selection detection to accelerate to a first preset moving speed;
when the reaction time is less than the second preset threshold, the MIT training system controls the tracking target moving speed of the next target tracking selection detection to accelerate to a second preset moving speed; wherein the first preset moving speed is less than the second preset moving speed.
Optionally, the method further comprises:
the result evaluation system acquires a training emotion regulating capacity characteristic value of the user, which is obtained by performing emotion regulating capacity detection on the user from two dimensions of cognitive re-evaluation and expression inhibition by the emotion regulating quantification system after the user is subjected to MIT training of the MIT training system;
and the result evaluation system determines the emotion regulating ability training effect of the user according to the initial emotion regulating ability characteristic value and the training emotion regulating ability characteristic value.
In order to achieve the above object, an embodiment of the present invention further provides a human-computer interaction method, where the method includes:
after a training trigger signal of a user is acquired, randomly selecting a first preset number of pictures from a preset picture library as a tracking target to form a first stimulation picture and displaying the first stimulation picture to the user; wherein the training trigger signal is used for indicating the training of the emotion regulating ability of the user;
after the first stimulation graph is displayed for a first preset time, randomly selecting a second preset number of tracking targets from the first stimulation graph as memory targets, marking the memory targets with preset marks, and obtaining and displaying a second stimulation graph to a user;
after the second stimulation image shows a second preset time, deleting the preset identification of the memory target, and covering both the tracking target and the memory target in the second stimulation image;
controlling the covered tracking target and the memory target in the second stimulation image to move according to a preset moving speed according to a preset rule, and showing a moving process to a user;
after the movement is stopped, marking a third preset number of covered memory targets in the second stimulation graph as test targets and displaying the test targets to a user so as to prompt the user to determine pictures corresponding to the test targets;
determining pictures corresponding to all test targets selected by a user based on the operation of the user;
determining whether the picture corresponding to each test target selected by the user is correct or not, and acquiring the number of correct target identifications to complete target tracking selection detection of the user;
determining the reaction accuracy of the target tracking selection detection of the user at this time according to the number of the correct target identifications;
and adjusting the preset moving speed of the next target tracking selection detection of the user based on the reaction accuracy until the number of times of the target tracking selection detection of the user is greater than or equal to a third preset threshold.
In order to achieve the above object, an embodiment of the present invention further provides a human-computer interaction system for emotion regulation ability training, where the system comprises a multi-identity tracking paradigm (MIT) training system and a result evaluation system;
the MIT training system is used for carrying out target tracking selection detection on a user from two dimensions of a target position and a target identity respectively to obtain the number of correct recognition targets;
the result evaluation system is used for analyzing the number of correctly identified targets to obtain the user's reaction accuracy in the target tracking selection detection and sending the reaction accuracy to the MIT training system;
and the MIT training system is used for adjusting the tracking target moving speed of the user in the next target tracking selection detection according to the reaction accuracy so as to train the emotion regulating capability of the user.
To achieve the above object, an embodiment of the present invention also provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps in the emotion adjusting ability training method as described above or the steps in the human-computer interaction method as described above.
According to the method, the target tracking selection detection is carried out on the user from two dimensions of the target position and the target identity information through the MIT training system so as to obtain the number of the correct identification targets, so that the result evaluation system determines the response accuracy of the target tracking selection detection according to the number of the correct identification targets, and the MIT training system adjusts the target moving speed of the user in the next target tracking selection detection according to the response accuracy. Through the scheme, the user can be trained from two aspects of attention and working memory, so that the training of the emotion adjusting capability of the user is realized, and the emotion adjusting capability of the user is effectively improved.
Drawings
FIG. 1 is a schematic system structure diagram of a human-computer interaction system according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for training emotion regulating ability according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an example of MIT training process provided by an embodiment of the present invention;
FIG. 4 is another flow chart of a method for training mood adjustment abilities according to an embodiment of the present invention;
fig. 5 is a flowchart of a human-computer interaction method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a method for training emotion regulating ability. Fig. 1 is a schematic diagram of a system structure of a human-computer interaction system in an embodiment of the present invention, and as shown in fig. 1, the human-computer interaction system includes: the system comprises a Multiple Identity Tracking (MIT) training system, a result evaluation system, an electroencephalogram recording system and an emotion adjusting quantification system.
Fig. 2 is a flowchart of an emotion adjusting ability training method provided in an embodiment of the present invention, and as shown in fig. 2, the emotion adjusting ability training method applied to the human-computer interaction system at least may include the following steps:
s201, the emotion adjusting quantification system detects the emotion adjusting capacity of the user from two dimensions of cognitive reevaluation and expression inhibition respectively to obtain an initial emotion adjusting capacity characteristic value of the user.
Specifically, the emotion adjustment quantification system may include a terminal device (e.g., a computer, a tablet computer, a mobile phone, etc.). When detecting the user's emotion regulation ability, the preset emotion regulation scale may be displayed to the user through the terminal device; the terminal device determines the user's selections from the user's operations on the preset emotion regulation scale and calculates a score from those selections, and this score is the user's initial emotion regulation ability characterization value.
In the embodiment of the invention, the preset emotion regulation scale may adopt the emotion regulation scale compiled by Gross, which contains 10 items in total, each scored on a 7-point scale; the higher the score, the more frequently the user uses the corresponding emotion regulation strategy. The emotion regulation scale covers two dimensions: cognitive re-evaluation and expression inhibition. The cognitive re-evaluation dimension is measured by 6 items and the expression inhibition dimension by 4 items.
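As an illustration of how such a scale score might be computed, the following is a minimal sketch in Python, assuming 7-point item responses split into the two dimensions; the item indices, function names and mean-based scoring are assumptions for illustration, not details given by the patent.

```python
# Illustrative sketch only: the item indices and mean-based scoring are
# assumptions, not specified by the patent. Responses are integers (1-7).
# "cognitive_reappraisal" and "expression_suppression" correspond to the
# cognitive re-evaluation and expression inhibition dimensions in the text.
from typing import Dict, List

REAPPRAISAL_ITEMS = [1, 3, 5, 7, 8, 10]   # hypothetical 6 cognitive re-evaluation items
SUPPRESSION_ITEMS = [2, 4, 6, 9]          # hypothetical 4 expression inhibition items

def score_emotion_regulation_scale(responses: Dict[int, int]) -> Dict[str, float]:
    """Return per-dimension mean scores; higher = more frequent strategy use."""
    def mean_of(items: List[int]) -> float:
        return sum(responses[i] for i in items) / len(items)

    return {
        "cognitive_reappraisal": mean_of(REAPPRAISAL_ITEMS),
        "expression_suppression": mean_of(SUPPRESSION_ITEMS),
    }

# A user answering 5 on every item scores 5.0 on both dimensions.
example_scores = score_emotion_regulation_scale({i: 5 for i in range(1, 11)})
```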
Through the above step S201, the emotion adjusting ability of the user can be quantified.
S202, the result evaluation system obtains the initial emotion adjusting capacity characteristic value of the user and determines the running initial value of the MIT training system according to the initial emotion adjusting capacity characteristic value.
Wherein, the initial value of the MIT training system operation at least comprises: the number of initial tracking targets and the moving speed of the initial tracking targets.
Specifically, a mapping relationship between the emotion adjusting capability representation value and an initial value of the MIT training system operation may be preset and stored, so that the result evaluation system determines the corresponding initial value of the MIT training system operation according to the pre-stored mapping relationship when acquiring the initial emotion adjusting capability representation value of the user.
For example: the emotion regulating capacity characterization value is in the interval of A1-A2, and the operation initial value of the MIT training system corresponding to the emotion regulating capacity characterization value is as follows: the number of the initial tracking targets is 8, and the moving speed of the initial tracking targets is 6s.
Through the steps S201-S202, the initial value of the MIT training system can be determined according to the initial emotion regulating capacity characterization value of the user, so that different MIT training schemes can be designed by the MIT training system for users with different initial emotion regulating capacities.
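A minimal sketch of such a pre-stored mapping is given below; the interval boundaries and parameter values are placeholders (the patent only gives the example of 8 targets and a 6 s movement), so they should be read as assumptions.

```python
# Illustrative mapping from an initial emotion regulation ability
# characterization value to MIT start-up parameters. The interval boundaries
# and parameter values below are assumptions; the patent only states that such
# a mapping is pre-stored.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MitInitialValues:
    n_tracking_targets: int        # number of initial tracking targets
    movement_duration_s: float     # how long the hidden targets move (e.g. 6 s)

# Each entry: (lower bound inclusive, upper bound exclusive) -> initial values.
VALUE_TO_INITIALS: List[Tuple[Tuple[float, float], MitInitialValues]] = [
    ((1.0, 3.0), MitInitialValues(n_tracking_targets=6, movement_duration_s=8.0)),
    ((3.0, 5.0), MitInitialValues(n_tracking_targets=8, movement_duration_s=6.0)),
    ((5.0, 7.1), MitInitialValues(n_tracking_targets=10, movement_duration_s=5.0)),
]

def initial_values_for(characterization_value: float) -> MitInitialValues:
    for (low, high), initials in VALUE_TO_INITIALS:
        if low <= characterization_value < high:
            return initials
    raise ValueError("characterization value outside the mapped range")
```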
And S203, the MIT training system operates according to the initial value, and performs target tracking selection detection on the user from two dimensions of a target position and a target identity respectively to obtain the number of correctly identified targets.
The MIT training system may also include corresponding terminal devices, such as: computers, tablet computers, and the like.
Specifically, the MIT training system determines the number of initial tracking targets and their moving speed from the initial values. After a task starts, a fixation point (for example a cross) appears in the center of the screen of the MIT training system's terminal device. After the fixation point has been shown for a preset time, a stimulus image appears that contains N randomly selected pictures, where N is the number of initial tracking targets; M of these N pictures are circled. The circles disappear after a preset time, all N pictures are then hidden under a preset-colored background, and the hidden pictures move at the initial tracking target moving speed. After moving for a preset time they stop and are shown to the user, who must pick out, from the N hidden pictures, the M pictures that were previously circled, thereby identifying the corresponding tracking targets.
The number of correctly identified targets is the number of tracking targets accurately identified by the target position and the identity information of the user.
For example, as shown in fig. 3, after a task starts, a fixation point appears in the center of the screen of the MIT training system's terminal device. After 800-1200 ms a stimulus image appears containing 8 randomly selected pictures, 4 of which are circled (the circles may be colored, e.g. red or yellow). The circles disappear after 2 s, the 8 pictures are then hidden behind gray circles and move interactively for 6 s, after which the presentation stops; the positions of the pictures selected by the user and the tracking targets at the selected positions are then determined.
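The trial structure described above can be summarized as an ordered event timeline. The sketch below hard-codes the example parameters from Fig. 3 (800-1200 ms fixation, 2 s cue circles, 6 s motion); it only models the sequence and timing, and a real system would render the events with a stimulus-presentation library.

```python
# Sketch of one MIT trial as an ordered event timeline, using the example
# parameters from Fig. 3. Only the sequence and timing are modeled; picture
# ids are drawn from a hypothetical library of 100 pictures.
import random

def build_mit_trial(n_targets: int = 8, n_memory: int = 4, motion_s: float = 6.0):
    fixation_s = random.randint(800, 1200) / 1000.0      # fixation cross, 800-1200 ms
    pictures = random.sample(range(100), n_targets)       # randomly selected picture ids
    memory_targets = random.sample(pictures, n_memory)    # the circled pictures to remember
    return [
        ("fixation", fixation_s),
        ("show_circled_targets", 2.0, pictures, memory_targets),  # circles visible for 2 s
        ("mask_and_move", motion_s),                               # pictures hidden, then move
        ("probe_and_respond", None, memory_targets),               # user picks the remembered items
    ]
```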
And S204, the result evaluation system acquires the number of correctly identified targets and analyzes it so as to obtain the user's reaction accuracy in the target tracking selection detection.
And the result evaluation system calculates the ratio of the number of the correctly identified targets to the number of the initially tracked targets, and takes the ratio as the reaction accuracy in the target tracking selection detection at this time.
S205, the result evaluation system sends the reaction accuracy to the MIT training system.
And S206, the MIT training system adjusts the tracking target moving speed of the user's next target tracking selection detection according to the reaction accuracy.
Specifically, under the condition that the reaction accuracy is greater than or equal to a first preset threshold, the MIT training system controls the moving speed of a tracked target to be accelerated when the user selects and detects the next target tracking; and under the condition that the reaction accuracy is smaller than a first preset threshold value, the MIT training system controls the tracking target moving speed to decelerate when the user selects and detects the target tracking next time.
It should be noted that, each time the MIT training system performs MIT training on the user, each MIT training is composed of a preset number of target tracking selection detections.
In the embodiment of the invention, when the MIT training is carried out on a user, the MIT training system adjusts the moving speed of the tracking target during the next target tracking selection detection according to the response accuracy during the last target tracking selection detection. When the last reaction accuracy is smaller than the first preset threshold, the speed can be reduced adaptively, and when the last reaction accuracy is larger than or equal to the first preset threshold, the speed can be increased properly, so that the user can be trained more appropriately, and the emotion adjusting capability of the user is further improved.
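A compact sketch of this adaptive rule is given below; the threshold value and speed step are assumed example numbers, since the patent only specifies the direction of the adjustment.

```python
# Illustrative adaptive-speed rule: accelerate when the last reaction accuracy
# reaches the threshold, decelerate otherwise. Threshold and step are assumed.
FIRST_THRESHOLD = 0.75   # assumed value for the "first preset threshold"
SPEED_STEP = 0.5         # assumed increment/decrement of the tracking target speed

def reaction_accuracy(n_correct: int, n_targets: int) -> float:
    return n_correct / n_targets

def adjust_speed_by_accuracy(current_speed: float, accuracy: float) -> float:
    if accuracy >= FIRST_THRESHOLD:
        return current_speed + SPEED_STEP          # user coped well: speed up
    return max(current_speed - SPEED_STEP, 0.0)    # otherwise slow down
```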
In some embodiments of the invention, the MIT training system may further obtain target identification time data of the user when performing target tracking selection detection on the user from two dimensions, namely, a target position and a target identity; the result evaluation system can analyze the target recognition time data to obtain the reaction time when the user performs the current target tracking selection detection.
On this basis, when the reaction accuracy is greater than the first preset threshold, the way in which the MIT training system controls the tracking target moving speed to accelerate in the user's next target tracking selection detection may specifically include:
determining whether the reaction time is greater than a second preset threshold value;
when the reaction time is greater than a second preset threshold value, the MIT training system controls the tracking target moving speed of the user in the next target tracking selection detection to accelerate to a first preset moving speed;
when the response time is less than a second preset threshold value, the MIT training system controls the tracking target moving speed of the user in the next target tracking selection detection to accelerate to a second preset moving speed;
the first preset moving speed is smaller than the second preset moving speed.
According to the scheme, the acceleration of the moving speed of the tracking target can be more accurate when the user selects and detects the next target tracking based on the reaction time of the user. Under the condition that the reaction accuracy is greater than the first preset threshold value but the reaction time is greater than the second preset threshold value, the reaction accuracy of the user is higher but the reaction speed is slower, and therefore, the acceleration is smaller so as to be suitable for the characteristics of the user. That is to say, through the scheme, the method and the device can be suitable for different users, and further effectively train the emotion adjusting ability of the users, so that the emotion adjusting ability of the users is improved.
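Combining the accuracy rule with this reaction-time refinement gives something like the sketch below, which extends the previous one; again, all numeric values (both thresholds and the two preset speeds) are assumed placeholders.

```python
# Sketch of the refined rule: when accuracy clears the first threshold, the
# amount of acceleration depends on how quickly the user responded. The two
# preset target speeds and both thresholds are assumed example values.
FIRST_THRESHOLD = 0.75        # reaction-accuracy threshold (assumed)
SECOND_THRESHOLD_S = 3.0      # reaction-time threshold in seconds (assumed)
FIRST_PRESET_SPEED = 1.2      # smaller acceleration target (assumed)
SECOND_PRESET_SPEED = 1.5     # larger acceleration target (assumed)
SPEED_STEP = 0.2              # deceleration step (assumed)

def next_target_speed(current_speed: float, accuracy: float, reaction_time_s: float) -> float:
    if accuracy > FIRST_THRESHOLD:
        if reaction_time_s > SECOND_THRESHOLD_S:
            return max(current_speed, FIRST_PRESET_SPEED)   # accurate but slow: smaller speed-up
        return max(current_speed, SECOND_PRESET_SPEED)      # accurate and fast: larger speed-up
    return max(current_speed - SPEED_STEP, 0.0)             # inaccurate: slow down
```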
And S207, the MIT training system performs next target tracking selection detection on the user from two dimensions of a target position and a target identity respectively according to the tracking target moving speed and the initial tracking target number during next target tracking selection detection to obtain the correct identification target number, and continues to execute the steps S204-S207 until the number of target tracking selection detection times is greater than the preset training times so as to train the emotion adjusting capability of the user.
The embodiment of the invention provides an emotion regulating ability training method, which is characterized in that an MIT (MIT information technology) training system is used for carrying out target tracking selection detection on a user from two dimensions of a target position and target identity information so as to obtain the number of correct identification targets, so that a result evaluation system determines the reaction accuracy of the target tracking selection detection according to the number of the correct identification targets, and the MIT training system adjusts the target moving speed of the user in the next target tracking selection detection according to the reaction accuracy. According to the scheme, the user can be trained from two aspects of attention and working memory so as to realize training of the emotion regulating capability of the user, and therefore the emotion regulating capability of the user is improved.
In some embodiments of the invention, after the user finishes MIT training through the MIT training system, the emotion regulation quantification system can detect the user's emotion regulation ability again from the two dimensions of cognitive re-evaluation and expression inhibition to obtain the user's training emotion regulation ability characterization value; and the result evaluation system determines the user's emotion regulation ability training effect according to the initial emotion regulation ability characterization value and the training emotion regulation ability characterization value.
The user's emotion regulation training effect can be represented by the difference between the initial emotion regulation ability characterization value and the training emotion regulation ability characterization value.
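In its simplest form this comparison is a pre/post difference, as in the short sketch below; the per-dimension dictionary layout and the sign convention (positive = improvement) are assumptions for illustration.

```python
# Minimal pre/post comparison: the training effect is expressed per dimension
# as the change in the characterization value after MIT training.
def training_effect(initial: dict, trained: dict) -> dict:
    return {dimension: trained[dimension] - initial[dimension] for dimension in initial}
```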
Through the scheme, the result evaluation system can evaluate the emotion regulating capacity training effect of the MIT training according to the emotion regulating capacity characteristic values before and after the MIT training after each MIT training, so that the target tracking selection detection of the MIT training system can be adaptively adjusted according to the emotion regulating capacity effect, and the result evaluation system is suitable for users of different types.
In addition, in some embodiments of the present invention, as shown in fig. 4, the method for training emotion regulating ability provided by the embodiments of the present invention may further include the following steps:
s401, before the MIT training system carries out target tracking selection detection on a user, an electroencephalogram recording system carries out emotion stimulation paradigm detection on the user to obtain first electroencephalogram data of the user;
s402, after the MIT training system performs target tracking selection detection on the user, the difficulty recording system performs emotional stimulation paradigm detection on the user again to obtain second electroencephalogram data of the user.
Wherein, first brain electricity data and second brain electricity data all include: late positive potentials associated with mood regulation.
And S403, analyzing by the result evaluation system according to the first electroencephalogram data and the second electroencephalogram data to obtain corresponding electroencephalogram evaluation results.
Through the scheme, the electroencephalogram recording system records the electroencephalogram data of the user before and after MIT training, and then evaluates the user based on the electroencephalogram data.
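One way such an evaluation might quantify the late positive potential (LPP) before and after training is to average epoched EEG over a fixed post-stimulus window, as sketched below with NumPy; the window, electrode choice and epoch layout are assumptions, since the patent only states that the recordings include an LPP component related to emotion regulation.

```python
# Illustrative LPP comparison. Assumes the EEG has already been epoched into
# arrays of shape (n_trials, n_samples) for an electrode of interest, sampled
# at fs Hz and time-locked to stimulus onset. The 400-800 ms window is a
# common choice for the late positive potential but is an assumption here.
import numpy as np

def mean_lpp_amplitude(epochs: np.ndarray, fs: float,
                       window_s: tuple = (0.4, 0.8)) -> float:
    """Mean amplitude in the assumed LPP window, averaged over trials and samples."""
    start, stop = int(window_s[0] * fs), int(window_s[1] * fs)
    return float(epochs[:, start:stop].mean())

def lpp_change(pre_epochs: np.ndarray, post_epochs: np.ndarray, fs: float) -> float:
    """One possible electroencephalogram evaluation index: post minus pre LPP amplitude."""
    return mean_lpp_amplitude(post_epochs, fs) - mean_lpp_amplitude(pre_epochs, fs)
```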
An embodiment of the present invention further provides a human-computer interaction system for training emotion adjustment capability of a user, as shown in fig. 1, the system includes: MIT training system, result evaluation system, brain electricity recording system, mood adjustment quantization system.
The MIT training system is used for carrying out target tracking selection detection on a user from two dimensions of a target position and a target identity respectively to obtain the number of correct recognition targets;
the result evaluation system is used for analyzing the number of correctly identified targets to obtain the user's reaction accuracy in the target tracking selection detection and sending the reaction accuracy to the MIT training system;
and the MIT training system is used for adjusting the tracking target moving speed of the user in the next target tracking selection detection according to the reaction accuracy so as to train the emotion regulating capability of the user.
The emotion adjusting quantification system is used for quantifying the emotion adjusting capacity of the user according to a preset emotion adjusting scale.
Before the result evaluation system acquires the number of correctly recognized targets obtained when the MIT training system performs target tracking selection detection on the user from the two dimensions of target position and target identity, the result evaluation system is further used for acquiring the user's initial emotion regulation ability characterization value, obtained by the emotion regulation quantification system detecting the user's emotion regulation ability from the two dimensions of cognitive re-evaluation and expression inhibition; the result evaluation system is further used for determining an initial value of the MIT training system operation according to the user's initial emotion regulation ability characterization value and sending the user's initial emotion regulation ability characterization value to the MIT training system; wherein the initial values of the MIT training system operation at least comprise: the number of initial tracking targets and the moving speed of the initial tracking targets.
The MIT training system is specifically used for carrying out target tracking selection detection on a user from two dimensions of a target position and a target identity according to the initial value.
The electroencephalogram recording system is used for performing emotional stimulation paradigm detection on the user and obtaining the corresponding electroencephalogram data.
Specifically, the electroencephalogram recording system is used for performing emotional stimulation paradigm detection on the user before the MIT training system performs target tracking selection detection on the user, to obtain first electroencephalogram data; and for performing emotional stimulation paradigm detection on the user again after the MIT training system performs target tracking selection detection on the user, to obtain second electroencephalogram data. Both the first electroencephalogram data and the second electroencephalogram data include: late positive potentials associated with emotion regulation.
The result evaluation system is also used for performing analysis processing according to the first electroencephalogram data and the second electroencephalogram data to obtain an electroencephalogram evaluation result.
The MIT training system is specifically used for controlling the tracking target moving speed to accelerate in the user's next target tracking selection detection when the reaction accuracy is greater than a first preset threshold, and for controlling the tracking target moving speed to decelerate in the next target tracking selection detection when the reaction accuracy is smaller than the first preset threshold.
The result evaluation system is also used for acquiring target identification time data obtained when the MIT training system respectively carries out target tracking selection detection on the user from two dimensions of the target position and the target identity; and the target identification time data is analyzed to obtain the reaction time of the user in target tracking selection detection.
The MIT training system is further specifically configured to determine whether the reaction time is greater than a second preset threshold; to control the tracking target moving speed of the user's next target tracking selection detection to accelerate to a first preset moving speed when the reaction time is greater than the second preset threshold; and to control the tracking target moving speed of the next target tracking selection detection to accelerate to a second preset moving speed when the reaction time is less than the second preset threshold; wherein the first preset moving speed is less than the second preset moving speed.
The result evaluation system is also used for obtaining a training emotion regulating capacity characteristic value of the user, which is obtained by performing emotion regulating capacity detection on the user from two dimensions of cognitive re-evaluation and expression inhibition by the emotion regulating quantification system after the user is subjected to MIT training of the MIT training system; and the training effect of the emotion regulating ability of the user is determined according to the initial emotion regulating ability characteristic value and the training emotion regulating ability characteristic value.
An embodiment of the present invention further provides a human-computer interaction method, as shown in fig. 5, the human-computer interaction method at least includes the following steps:
s501, after a training trigger signal of a user is acquired, randomly selecting a first preset number of pictures from a preset picture library as a tracking target, forming a first stimulation picture and displaying the first stimulation picture to the user.
Wherein the training trigger signal is used for indicating the emotion regulating ability training of the user.
The training trigger signal may be generated according to an operation instruction when the user operates the terminal device.
For example, as shown in fig. 3, after the training trigger signal is acquired, 8 pictures are randomly selected from a preset picture library as a tracking target, and a first stimulus picture is formed and displayed to the user.
S502, after the first stimulation graph is displayed for a first preset time, randomly selecting a second preset number of tracking targets from the first stimulation graph as memory targets, marking the memory targets with preset marks, and obtaining and displaying a second stimulation graph to a user.
For example, as shown in fig. 3, after the first stimulus diagram is presented to the user for 4s, 4 of the 8 tracked objects in the first stimulus diagram are selected as memory objects, the memory objects are marked with circles, and a second stimulus diagram is generated and presented to the user.
And S503, deleting the preset identification of the memory target after the second stimulation image shows the second preset time, and covering the tracking target and the memory target in the second stimulation image.
For example, as shown in fig. 3, after the second stimulus image is displayed for 2s, the circle of the memory object disappears, and then 8 pictures in the second stimulus image are covered and can be hidden in the gray circle background.
S504, controlling the covered tracking target and the memory target in the second stimulation map to move according to a preset moving speed according to a preset rule, and showing a moving process to a user.
For example, as shown in fig. 3, 8 pictures which are covered are dynamically and interactively moved for 6s, and the moving process is shown to the user.
The preset moving speed may be determined according to an initial emotion adjusting capability representation value obtained by detecting the emotion adjusting capability of the user by the emotion adjusting quantification system, which is specifically referred to the above embodiment and is not described herein again.
And S505, after the movement is stopped, marking a third preset number of covered memory targets in the second stimulus image, using the third preset number of covered memory targets as test targets and displaying the test targets to a user so as to prompt the user to determine pictures corresponding to the test targets.
For example, as shown in fig. 3, after 8 pictures stop moving, a test target is selected from the memory targets and marked to be displayed to the user, so as to prompt the user to determine a picture corresponding to the test target.
S506, determining pictures corresponding to the test targets selected by the user based on the operation of the user.
And S507, determining whether the picture corresponding to each test target selected by the user is correct, and acquiring the number of correct target identifications so as to complete target tracking selection detection on the user.
And S508, determining the reaction accuracy of the target tracking selection detection of the user at this time according to the correct target identification number.
S509, adjusting the preset moving speed of the next target tracking selection detection by the user based on the response accuracy until the number of target tracking selection detections by the user is greater than or equal to a third preset threshold.
Specifically, when the reaction accuracy is adjusted to be greater than or equal to a first preset threshold, controlling the user to accelerate the preset moving speed detected by the target tracking selection next time; and when the adjustment of the reaction accuracy rate is smaller than a first preset threshold value, controlling the user to decelerate the preset moving speed of the next target tracking selection detection.
In the embodiment of the invention, the process of determining the response accuracy is one-time target tracking selection detection, multiple times of target tracking selection detection are carried out on the user, and the moving speed of the tracking target detected by the next target tracking selection detection is adjusted according to the response accuracy of the previous target tracking selection detection, so that the emotion adjusting capability of the user is trained in a self-adaptive manner.
Furthermore, the reaction time of the user in the target tracking selection detection can be obtained. When the reaction accuracy is adjusted to be larger than or equal to a first preset threshold and the reaction time is larger than a second preset threshold, controlling the preset moving speed of the tracked target detected next time in the target tracking selection to accelerate to the first preset moving speed; when the reaction accuracy is adjusted to be larger than or equal to a first preset threshold and the reaction time is smaller than a second preset threshold, controlling the preset moving speed of the tracked target detected by next target tracking selection to accelerate to a second preset moving speed; the first preset moving speed is smaller than the second preset moving speed.
According to the scheme, based on the user's reaction time, the acceleration of the tracking target moving speed in the user's next target tracking selection detection can be made more precise. When the reaction accuracy is greater than the first preset threshold but the reaction time is greater than the second preset threshold, the user is accurate but responds slowly, so a smaller acceleration is used to suit the user's characteristics. That is, the scheme adapts to different users and thereby effectively trains and improves their emotion regulation ability. Through the man-machine interaction method, the user's attention and working memory are trained from the two dimensions of target position and target identity, so that emotion regulation ability training is realized and the user's emotion regulation ability is improved.
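Putting the pieces together, one complete adaptive training run might look like the loop sketched below, where run_detection() is a hypothetical stand-in for presenting a single target tracking selection detection and collecting the user's response, and update_speed() stands for an adjustment rule such as the one sketched earlier; none of these names come from the patent.

```python
# Sketch of the adaptive training loop: repeat target tracking selection
# detections, updating the movement speed from the previous detection's
# accuracy and reaction time, until the preset number of detections is reached.
# Both callables are hypothetical stand-ins:
#   run_detection(speed)  -> (n_correct, n_targets, reaction_time_s)
#   update_speed(speed, accuracy, reaction_time_s) -> new speed
from typing import Callable, List, Tuple

def train_user(initial_speed: float,
               n_detections: int,
               run_detection: Callable[[float], Tuple[int, int, float]],
               update_speed: Callable[[float, float, float], float]) -> List[Tuple[float, float, float]]:
    speed = initial_speed
    history = []
    for _ in range(n_detections):
        n_correct, n_targets, reaction_time_s = run_detection(speed)
        accuracy = n_correct / n_targets          # reaction accuracy of this detection
        history.append((speed, accuracy, reaction_time_s))
        speed = update_speed(speed, accuracy, reaction_time_s)
    return history
```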
Embodiments of the present invention also provide a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the method for training ability of mood adjustment or the steps of the method for human-computer interaction as described above.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system and media embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, where relevant, reference may be made to some descriptions of the method embodiments.
The system and the medium provided by the embodiment of the application correspond to the method one to one, so the system and the medium also have the beneficial technical effects similar to the corresponding method.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another like element in a process, method, article, or apparatus that comprises the element.
Of course, it will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by instructing relevant hardware (such as a processor, a controller, etc.) through a computer program, and the program can be stored in a computer readable storage medium, and when executed, the program can include the processes of the embodiments of the methods described above. The computer readable storage medium may be a memory, a magnetic disk, an optical disk, etc.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (11)

1. An emotion regulation ability training method, which is applied to a human-computer interaction system, wherein the human-computer interaction system comprises: a multi-identity tracking MIT training system and a result evaluation system; the method comprises the following steps:
the result evaluation system obtains the number of correctly recognized targets produced when the MIT training system performs target tracking selection detection on a user from the two dimensions of target position and target identity;
the result evaluation system analyzes the number of correctly recognized targets to obtain the user's reaction accuracy in the target tracking selection detection;
and the result evaluation system sends the reaction accuracy to the MIT training system, so that the MIT training system adjusts the tracking target moving speed of the user's next target tracking selection detection according to the reaction accuracy, to train the user's emotion regulation ability.
2. The method of claim 1, wherein the human-computer interaction system further comprises: an emotion adjustment quantification system;
before the result evaluation system obtains the number of correctly recognized targets obtained when the MIT training system performs target tracking selection detection on a user from two dimensions of a target position and a target identity, the method further comprises the following steps:
the result evaluation system acquires the user's initial emotion regulation ability characteristic value, obtained by the emotion regulation quantification system detecting the user's emotion regulation ability from the two dimensions of cognitive re-evaluation and expression inhibition;
the result evaluation system determines an initial value of the MIT training system operation according to the initial emotion regulating capacity characteristic value of the user and sends the initial emotion regulating capacity characteristic value of the user to the MIT training system;
wherein the initial values of the MIT training system operation at least comprise: the number of initial tracking targets and the moving speed of the initial tracking targets.
3. The method of claim 2, wherein the obtaining of the number of correctly recognized targets by the MIT training system through target tracking selection detection on the user from two dimensions of a target position and a target identity by the result evaluation system specifically comprises:
and the MIT training system carries out target tracking selection detection on the user from two dimensions of a target position and a target identity according to the initial value.
4. The method of claim 1, wherein the human-computer interaction system further comprises: an electroencephalogram recording system;
before the MIT training system performs target tracking selection detection on the user, the method further comprises: the result evaluation system acquires first electroencephalogram data when the electroencephalogram recording system carries out emotional stimulation paradigm detection on the user;
after the MIT training system performs target tracking selection detection on a user, the method further comprises: the result evaluation system acquires second electroencephalogram data recorded by the electroencephalogram recording system when the emotional stimulation paradigm detection is carried out on the user;
wherein the first brain electrical data and the second brain electrical data both comprise: late positive potentials associated with mood regulation;
and the result evaluation system analyzes and processes the first electroencephalogram data and the second electroencephalogram data to obtain an electroencephalogram evaluation result.
5. The method of claim 1, wherein the MIT training system adjusting the tracked target moving speed of the user for the next target tracking selection detection according to the reaction accuracy includes:
when the reaction accuracy is greater than a first preset threshold, the MIT training system controls the tracking target moving speed to accelerate in the user's next target tracking selection detection;
and when the reaction accuracy is smaller than the first preset threshold, the MIT training system controls the tracking target moving speed to decelerate in the next target tracking selection detection.
6. The method of claim 5, further comprising:
the result evaluation system acquires target identification time data obtained when the MIT training system respectively carries out target tracking selection detection on the user from two dimensions of the target position and the target identity;
and the result evaluation system analyzes the target identification time data to obtain the reaction time of the user in target tracking selection detection.
7. The method of claim 6, wherein, when the reaction accuracy is greater than the first preset threshold, the MIT training system controlling the tracking target moving speed to accelerate in the user's next target tracking selection detection specifically comprises:
determining whether the reaction time is greater than a second preset threshold;
when the reaction time is greater than the second preset threshold, the MIT training system controls the tracking target moving speed of the user's next target tracking selection detection to accelerate to a first preset moving speed;
when the reaction time is less than the second preset threshold, the MIT training system controls the tracking target moving speed of the next target tracking selection detection to accelerate to a second preset moving speed; wherein the first preset moving speed is less than the second preset moving speed.
8. The method of claim 2, further comprising:
the result evaluation system acquires a training emotion regulating capacity characteristic value of the user, which is obtained by performing emotion regulating capacity detection on the user from two dimensions of cognitive re-evaluation and expression inhibition by the emotion regulating quantification system after the user is subjected to MIT training of the MIT training system;
and the result evaluation system determines the emotion regulating ability training effect of the user according to the initial emotion regulating ability characteristic value and the training emotion regulating ability characteristic value.
9. A human-computer interaction method, characterized in that the method comprises:
after a training trigger signal of a user is acquired, randomly selecting a first preset number of pictures from a preset picture library as a tracking target to form a first stimulation picture and displaying the first stimulation picture to the user; wherein the training trigger signal is used for indicating the emotion regulating ability training of the user;
after the first stimulation graph is displayed for a first preset time, randomly selecting a second preset number of tracking targets from the first stimulation graph as memory targets, and marking the memory targets with preset marks to obtain a second stimulation graph and display the second stimulation graph to a user;
after the second stimulation image shows a second preset time, deleting the preset identification of the memory target, and covering both the tracking target and the memory target in the second stimulation image;
controlling the covered tracking target and the memory target in the second stimulation image to move according to a preset moving speed according to a preset rule, and showing a moving process to a user;
after the movement is stopped, marking a third preset number of covered memory targets in the second stimulation graph as test targets and displaying the test targets to a user so as to prompt the user to determine pictures corresponding to the test targets;
determining pictures corresponding to all test targets selected by a user based on the operation of the user;
determining whether the picture corresponding to each test target selected by the user is correct or not, and acquiring the number of correct target identifications to complete target tracking selection detection of the user;
determining the reaction accuracy of the user's current target tracking selection detection according to the number of correct target identifications;
and adjusting the preset moving speed of the next target tracking selection detection of the user based on the reaction accuracy until the number of times of the target tracking selection detection of the user is greater than or equal to a third preset threshold.
10. A human-computer interaction system for mood regulating ability training, the system comprising: the system comprises a multi-identity tracking paradigm MIT training system and a result evaluation system;
the MIT training system is used for carrying out target tracking selection detection on a user from two dimensions of a target position and a target identity respectively to obtain the number of correct recognition targets;
the result evaluation system is used for analyzing the number of correctly identified targets to obtain the user's reaction accuracy in the target tracking selection detection and sending the reaction accuracy to the MIT training system;
and the MIT training system is used for adjusting the tracking target moving speed of the user in the next target tracking selection detection according to the reaction accuracy so as to train the emotion regulating capability of the user.
11. A computer readable storage medium, characterized in that the computer readable storage medium stores one or more programs which are executable by one or more processors to implement the steps in the emotion regulation ability training method as recited in any one of claims 1 to 8, or the steps in the human-computer interaction method as recited in claim 9.
CN202211448157.1A 2022-11-18 2022-11-18 Emotion regulation ability training method, man-machine interaction method and related device Pending CN115708918A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211448157.1A CN115708918A (en) 2022-11-18 2022-11-18 Emotion regulation ability training method, man-machine interaction method and related device
PCT/CN2022/137706 WO2024103462A1 (en) 2022-11-18 2022-12-08 Emotion regulation capability training method, human-machine interaction method, and related apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211448157.1A CN115708918A (en) 2022-11-18 2022-11-18 Emotion regulation ability training method, man-machine interaction method and related device

Publications (1)

Publication Number Publication Date
CN115708918A true CN115708918A (en) 2023-02-24

Family

ID=85233716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211448157.1A Pending CN115708918A (en) 2022-11-18 2022-11-18 Emotion regulation ability training method, man-machine interaction method and related device

Country Status (2)

Country Link
CN (1) CN115708918A (en)
WO (1) WO2024103462A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8814359B1 (en) * 2007-10-01 2014-08-26 SimpleC, LLC Memory recollection training system and method of use thereof
CN106155308B (en) * 2016-06-22 2019-03-08 浙江工业大学 Eye movement tracking method and system based on recall and annotation
CN111260984B (en) * 2020-01-20 2022-03-01 北京津发科技股份有限公司 Multi-person cooperative cognitive ability training method and device and storage medium
CN112137627B (en) * 2020-09-10 2021-08-03 北京津发科技股份有限公司 Intelligent human factor evaluation and training method and system
CN112535479B (en) * 2020-12-04 2023-07-18 中国科学院深圳先进技术研究院 Method for determining emotion processing tendency and related products

Also Published As

Publication number Publication date
WO2024103462A1 (en) 2024-05-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination