WO2024103464A1 - Emotion training system - Google Patents
Emotion training system
- Publication number
- WO2024103464A1 (PCT/CN2022/137731)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- training
- tracking
- graphic
- emotion
- stimulation signal
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/113—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
Definitions
- the present invention relates to the field of medical device technology, and in particular to an emotion training system.
- Emotion is a general term for a series of subjective cognitive experiences. It is a person’s attitude towards objective things and the corresponding behavioral response. It is generally believed that emotion is a psychological activity mediated by individual desires and needs. It is an emotional state that affects our behavior and psychological changes. Long-term negative emotions are a high-risk factor for many mental illnesses, such as depression, anxiety, etc.
- the embodiment of the present invention provides an emotion training system to solve the problem in the prior art that the emotion-enhancing training process requires the guidance and participation of professionals, thereby achieving the flexibility of emotion training and reducing time and labor costs.
- an emotion training system comprising: a visual presentation system and a training analysis system; wherein the visual presentation system comprises a scale display module and a stimulus presentation module;
- the scale display module is used to display the preset emotion scale to the trainee before and after the training, and send the acquired pre-training scale data and post-training scale data to the training analysis system respectively;
- the stimulus presentation module is used to display each visual stimulus signal in the visual stimulus signal group to the trainee in sequence in response to detecting a training trigger instruction; wherein the visual stimulus signal group is used to instruct the trainee to perform upward eye movement training;
- the training analysis system includes a scale analysis module, which is used to determine the emotion training results based on the received pre-training scale data and post-training scale data, and output the emotion training results.
- the visual stimulation signal group includes a saccade stimulation signal group and/or a tracking stimulation signal group, wherein the saccade stimulation signal group represents the trainee performing an upward saccade task, and the tracking stimulation signal group represents the trainee performing an upward eye movement tracking task.
- each visual stimulation signal in the saccadic stimulation signal group includes a saccadic fixation point stimulation signal, a blank screen stimulation signal, and a saccadic pattern stimulation signal; wherein, the graphic position of the saccadic stimulation graphic in the saccadic pattern stimulation signal is higher than the center position of the fixation point in the saccadic fixation point stimulation signal, and the vertical angle between the graphic position and the position corresponding to the center position is greater than or equal to a first angle threshold, and the emotion level corresponding to the saccadic stimulation graphic is neutral or positive.
- the tracking stimulation signal group includes a first tracking stimulation signal group, the first tracking stimulation signal group includes a tracking fixation point stimulation signal and a first tracking stimulation signal, the first tracking stimulation signal represents that a first tracking stimulation graphic moves at a uniform speed along a preset upward trajectory, the vertical angle between the starting graphic position and the ending graphic position of the first tracking stimulation graphic is greater than or equal to a second angle threshold, and the emotion level corresponding to the first tracking stimulation graphic is negative, neutral or positive.
- the tracking stimulation signal group includes a second tracking stimulation signal group
- the second tracking stimulation signal group includes a tracking fixation point stimulation signal and a second tracking stimulation signal
- the second tracking stimulation signal represents that in the process of the first tracking stimulation graphic moving at a constant speed along a preset upward trajectory, based on a preset replacement duration, the first tracking stimulation graphic is replaced by the second tracking stimulation graphic moving at a constant speed along the preset upward trajectory, the vertical angle between the starting graphic position and the ending graphic position of the first tracking stimulation graphic is greater than or equal to a third angle threshold, the emotion level corresponding to the first tracking stimulation graphic is negative or neutral, and the emotion level corresponding to the second tracking stimulation graphic is higher than that of the first tracking stimulation graphic.
- the training analysis system further includes a behavior analysis device, which is used to collect training behavior data of the trainee during the emotional training process, and output training prompt information of the trainee generated based on the training behavior data; wherein the training prompt information represents the training status of the trainee during the emotional training process.
- the behavior analysis device includes a button device and/or an eye tracking device, and accordingly, the training behavior data includes button response data and/or eye movement data.
- the visual stimulation signal group includes a saccadic stimulation signal group
- the behavior analysis device includes a key device
- the saccadic stimulation graphic in the saccadic stimulation signal group is a face graphic
- the key response data represents the key response input by the trainee for the facial gender corresponding to the face graphic.
- the visual stimulation signal group includes a tracking stimulation signal group
- the tracking stimulation signal group includes a first tracking stimulation graphic group and a second tracking stimulation graphic group
- the behavior analysis device includes a key device
- the key response data represents the key response of the trainee to whether the first tracking stimulation graphic moving at a constant speed along a preset upward trajectory is replaced.
- the button device is used to determine the response parameters of the trainee based on the button response data, and generate training prompt information for the trainee based on the response parameters; wherein the response parameters include response accuracy and/or target response time.
- the eye tracking device is used to determine the trainee's concentration parameters based on the eye movement data, and generate training prompt information for the trainee based on the concentration parameters; wherein the concentration parameters include at least one of gaze duration, eye movement direction, eye movement angle, eye movement distance and eye movement speed.
- the visual presentation system further includes a selection module, which is used to, in response to detecting a selection instruction corresponding to a training selection control, add a training stimulus signal group corresponding to the training selection control to a visual stimulus signal group and generate a training trigger instruction; wherein the training selection control includes a saccade selection control and/or a tracking selection control, and the training stimulus signal group is a saccade stimulus signal group or a tracking stimulus signal group.
- the technical solution of the embodiment of the present invention is to set up a visual presentation system and a training analysis system, wherein the visual presentation system includes a scale display module and a stimulus presentation module, the scale display module is used to display a preset emotion scale to the trainee before and after the training, and send the acquired pre-training scale data and post-training scale data to the training analysis system respectively, the stimulus presentation module is used to display each visual stimulus signal in the visual stimulus signal group to the trainee in turn in response to detecting a training trigger instruction; wherein the visual stimulus signal group is used to instruct the trainee to do upward eye movement training, and the training analysis system includes a scale analysis module, which is used to determine the emotion training result based on the received pre-training scale data and post-training scale data, and output the emotion training result, thereby solving the problem in the prior art that the emotion training process needs the guidance and participation of professionals, realizing the flexibility of emotion training, and reducing time cost and labor cost.
- FIG1 is a schematic diagram of the structure of an emotion training system provided by an embodiment of the present invention.
- FIG. 2 is a schematic diagram of a position vertical angle provided by an embodiment of the present invention.
- FIG3 is a schematic diagram of a saccadic stimulation signal group provided by an embodiment of the present invention.
- FIG. 4 is a schematic diagram of a first tracking stimulation signal group provided by an embodiment of the present invention.
- FIG. 5 is a schematic diagram of a second tracking stimulation signal group provided by an embodiment of the present invention.
- FIG. 6 is a schematic diagram of the structure of another emotion training system provided by an embodiment of the present invention.
- FIG. 7 is a schematic diagram of the structure of another emotion training system provided by an embodiment of the present invention.
- FIG8 is a rendering of a display interface of a selection module provided by an embodiment of the present invention.
- FIG. 9 is a rendering of a display interface of another selection module provided by an embodiment of the present invention.
- FIG10 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present invention.
- FIG. 1 is a schematic diagram of the structure of an emotion training system provided by an embodiment of the present invention. This embodiment is applicable to the situation of training the emotions of the trainee.
- the system can be implemented in the form of software and/or hardware.
- the emotion training system includes: a visual presentation system 100 and a training analysis system 110; wherein the visual presentation system 100 includes a scale display module 101 and a stimulus presentation module 102; the scale display module 101 is used to display a preset emotion scale to the trainee before and after the training, and send the acquired pre-training scale data and post-training scale data to the training analysis system 110 respectively; the stimulus presentation module 102 is used to display each visual stimulus signal in the visual stimulus signal group to the trainee in turn in response to detecting a training trigger instruction; wherein the visual stimulus signal group is used to instruct the trainee to perform upward eye movement training; the training analysis system 110 includes a scale analysis module 111, which is used to determine the emotion training result based on the received pre-training scale data and post-training scale data, and output the emotion training result.
- the visual presentation system 100 may be composed of a host, a liquid crystal display with a high refresh rate (144 Hz), and psychological visual presentation software, wherein the psychological visual presentation software may be Matlab (Matrix Laboratory) together with the Psychtoolbox (Psychophysics Toolbox) toolkit; the visual stimulus signal may be presented to the trainee through the liquid crystal display using Psychtoolbox programming in Matlab, and the stimulus brightness may be accurately measured by a photometer (unit: cd/cm²).
- the scale display module 101 is used to display the preset emotion scale to the trainee before the trainee conducts emotion training, and send the pre-training scale data input by the trainee based on the preset emotion scale to the training analysis system 110; and after the trainee conducts emotion training, display the preset emotion scale to the trainee again, and send the post-training scale data input by the trainee based on the preset emotion scale to the training analysis system 110.
- the preset emotion scale may be one or more.
- the preset emotion scale includes but is not limited to the Positive and Negative Emotion Scale, the Hamilton Depression Rating Scale, the Hamilton Anxiety Rating Scale, and the Depression-Anxiety-Stress Scale (DASS-21), etc.
- the preset emotion scale used is not limited here.
- the pre-training scale data or the post-training scale data includes but is not limited to the score of each item in the preset emotion scale, the total score of all items in the preset emotion scale, the grade of each item in the preset emotion scale, and the total grade of all items in the preset emotion scale, etc.
- the visual stimulation signal group is set by programming with Psychtoolbox in Matlab, wherein the visual stimulation signal group includes a plurality of visual stimulation signals.
- the stimulation presentation module 102 is specifically used to display each visual stimulation signal to the trainee in sequence based on the presentation duration corresponding to each visual stimulation signal in the visual stimulation signal group in response to detecting the training trigger instruction.
- the presentation duration corresponding to each visual stimulation signal can be the same or different.
- the visual stimulation signal group includes a saccade stimulation signal group and/or a tracking stimulation signal group, wherein the saccade stimulation signal group represents the trainee performing an upward saccade task, and the tracking stimulation signal group represents the trainee performing an upward eye movement tracking task.
- the visual stimulation signal group includes multiple groups of saccade stimulation signal groups and/or multiple groups of tracking stimulation signal groups.
- the training time corresponding to the visual stimulation signal group can be 10 minutes or 15 minutes. The training time is not limited here.
- the emotion training results include but are not limited to whether the emotion level is improved, the difference in emotion levels, and the emotion improvement ratio.
- for example, if the emotion level corresponding to the pre-training scale data is 10 and the emotion level corresponding to the post-training scale data is 25, then the emotion level is improved, the difference in emotion levels is 15, and the emotion improvement ratio is 150%.
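As a minimal illustrative sketch (not the claimed implementation), the computation performed by the scale analysis module could look as follows in Matlab; the variable names, and the assumption that a higher scale score corresponds to a better emotion level, are taken only from the example above.

    % Illustrative sketch of the scale analysis step; variable names are assumptions.
    % A higher score is taken to mean a better emotion level, as in the example above.
    preScore  = 10;    % emotion level from the pre-training scale data
    postScore = 25;    % emotion level from the post-training scale data

    isImproved      = postScore > preScore;               % whether the emotion level improved
    levelDifference = postScore - preScore;               % difference in emotion levels (15 here)
    improvementRate = (postScore - preScore) / preScore;  % emotion improvement ratio (1.5 = 150%)

    fprintf('Improved: %d, difference: %d, ratio: %.0f%%\n', ...
        isImproved, levelDifference, improvementRate * 100);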
- outputting the emotion training results includes: sending the emotion training results to the visual presentation system 100, so that the visual presentation system 100 outputs the received emotion training results.
- outputting the emotion training result includes: sending the emotion training result to a mobile terminal.
- the emotion training result may be sent in the form of text message, email, phone call, etc.
- the technical solution of this embodiment is to set up a visual presentation system and a training analysis system, wherein the visual presentation system includes a scale display module and a stimulus presentation module, the scale display module is used to display a preset emotion scale to the trainee before and after the training, and send the acquired pre-training scale data and post-training scale data to the training analysis system respectively, the stimulus presentation module is used to display each visual stimulus signal in the visual stimulus signal group to the trainee in turn in response to detecting a training trigger instruction; wherein the visual stimulus signal group is used to instruct the trainee to do upward eye movement training, and the training analysis system includes a scale analysis module, which is used to determine the emotion training result based on the received pre-training scale data and post-training scale data, and output the emotion training result, thereby solving the problem in the prior art that the emotion-enhancing training process requires the guidance and participation of professionals, realizing the flexibility of emotion training, and reducing time and labor costs.
- the embodiment of the present invention further refines the "saccade stimulation signal group" and "tracking stimulation signal group" in the above embodiment.
- each visual stimulation signal in the saccade stimulation signal group includes a saccade fixation point stimulation signal, a blank screen stimulation signal, and a saccade pattern stimulation signal; wherein, the graphic position of the saccade stimulation graphic in the saccade pattern stimulation signal is higher than the center position of the fixation point in the saccade fixation point stimulation signal, and the vertical angle between the graphic position and the position corresponding to the center position is greater than or equal to a first angle threshold, and the emotion level corresponding to the saccade stimulation graphic is neutral or positive.
- the presentation durations of the saccadic fixation point stimulation signal, the blank screen stimulation signal, and the saccadic pattern stimulation signal may be the same or different.
- the eye-saccade stimulation graphics can be screened from a first emotion graphics library.
- the first emotion graphics library includes but is not limited to the Geneva emotion graphics library (GAPED), KDEF emotion graphics library, AKDEF emotion graphics library, Fer2013 emotion graphics library, RaFD emotion graphics library, etc.
- the first emotion graphics library used here is not limited.
- the graphic position can be used to represent the graphic center position of the eye-saccade stimulation graphic.
- the position vertical angle can be used to represent the vertical angle between a first line connecting the trainee's eyeball to the center position and a second line connecting the eyeball to the graphic position.
- the first angle threshold can be 12°, which is not limited here.
- FIG. 2 is a schematic diagram of a position vertical angle provided by an embodiment of the present invention. Specifically, the "eye" on the left side of FIG. 2 represents the eye position of the trainee during the emotion training process, "A" represents the first line between the eye and the center position, "B" represents the second line between the eye and the graphic position, and the marked angle between A and B represents the position vertical angle.
- the vertical distance between the graphic position and the center position must be greater than or equal to 6.38 cm.
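The relation between the position vertical angle and the vertical on-screen distance can be illustrated with simple trigonometry. In the sketch below, the 30 cm viewing distance is an assumption introduced only because it is consistent with the 6.38 cm figure quoted above; it is not a value stated by this embodiment.

    % Sketch of the angle-to-distance relation; the viewing distance is an assumed value.
    viewingDistanceCm   = 30;    % assumed eye-to-screen distance
    firstAngleThreshold = 12;    % degrees

    % Vertical on-screen offset needed for the angle between the line of sight to the
    % fixation center and the line of sight to the graphic to reach the threshold.
    minVerticalDistanceCm = viewingDistanceCm * tand(firstAngleThreshold);
    fprintf('Minimum vertical distance: %.2f cm\n', minVerticalDistanceCm);   % ~6.38 cm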
- the graphic position of the saccadic stimulation graphic can be directly above, to the upper left, or to the upper right of the gaze point.
- the graphic position of the saccadic stimulation graphic in the horizontal direction is not limited here.
- FIG3 is a schematic diagram of a saccadic stimulation signal group provided by an embodiment of the present invention.
- the saccadic stimulation signal group includes a 500ms saccadic fixation point stimulation signal, a 500ms blank screen stimulation signal, and a 1000ms saccadic pattern stimulation signal.
- FIG. 3 takes, as an example, the case where the graphic position of the saccadic stimulation pattern is directly above the fixation point. It can be understood that the graphic position of the saccadic stimulation pattern may lie anywhere in the upper display area where the position vertical angle is greater than or equal to the first angle threshold.
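A minimal Psychtoolbox sketch of one saccade trial with the FIG. 3 timings (500 ms fixation point, 500 ms blank screen, 1000 ms saccade graphic) is given below; the image file name, background color, and pixel offset are placeholder assumptions rather than values from the embodiment.

    % Minimal sketch of one saccade trial in Psychtoolbox; file name, colors and
    % pixel offset are placeholder assumptions.
    screenNumber = max(Screen('Screens'));
    [win, winRect] = Screen('OpenWindow', screenNumber, 128);   % mid-gray background
    [cx, cy] = RectCenter(winRect);

    % 500 ms saccade fixation point stimulation signal: a small dot at screen center
    Screen('FillOval', win, 0, CenterRectOnPoint([0 0 10 10], cx, cy));
    Screen('Flip', win);
    WaitSecs(0.5);

    % 500 ms blank screen stimulation signal
    Screen('Flip', win);
    WaitSecs(0.5);

    % 1000 ms saccade pattern stimulation signal, presented above the fixation point
    img = imread('neutral_face.png');           % placeholder neutral/positive graphic
    tex = Screen('MakeTexture', win, img);
    upOffsetPix = 300;                           % placeholder offset meeting the first angle threshold
    destRect = CenterRectOnPoint([0 0 size(img, 2) size(img, 1)], cx, cy - upOffsetPix);
    Screen('DrawTexture', win, tex, [], destRect);
    Screen('Flip', win);
    WaitSecs(1.0);

    Screen('CloseAll');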
- the tracking stimulation signal group includes a first tracking stimulation signal group, the first tracking stimulation signal group includes a tracking fixation point stimulation signal and a first tracking stimulation signal, the first tracking stimulation signal represents that the first tracking stimulation graphic moves at a uniform speed along a preset upward trajectory, the vertical angle between the starting graphic position and the ending graphic position of the first tracking stimulation graphic is greater than or equal to a second angle threshold, and the emotion level corresponding to the first tracking stimulation graphic is negative, neutral or positive.
- the first tracking stimulus pattern can be obtained by screening from the second emotion pattern library.
- the second emotion pattern library includes but is not limited to the Geneva emotion pattern library (GAPED), KDEF emotion pattern library, AKDEF emotion pattern library, Fer2013 emotion pattern library, RaFD emotion pattern library, etc.
- the second emotion pattern library used is not limited here.
- the first emotion pattern library and the second emotion pattern library can be the same or different.
- the starting graphic position can be used to represent the position of the graphic center of the first tracking stimulation graphic at the starting point
- the ending graphic position can be used to represent the position of the graphic center of the first tracking stimulation graphic at the ending point.
- the starting point can be the fixation point in the tracking fixation point stimulation signal, and the specific starting graphic position is not limited here.
- the vertical angle of the position can be used to represent the vertical angle between the first line connecting the trainee's eyeball and the starting graphic position and the second line connecting the eyeball and the ending graphic position.
- the second angle threshold can be 12°, and the second angle threshold is not limited here.
- the first angle threshold and the second angle threshold can be the same or different.
- the moving speed of the first tracking stimulation pattern is 3 cm/s.
- the termination graphic position of the first tracking stimulation graphic may be directly above, to the upper left, or to the upper right of the fixation point, etc.
- the graphic position of the first tracking stimulation graphic in the horizontal direction is not limited here.
- FIG4 is a schematic diagram of a first tracking stimulation signal group provided by an embodiment of the present invention.
- the first tracking stimulation signal group includes a 500 ms tracking fixation point stimulation signal and a 3000 ms first tracking stimulation signal.
- FIG. 4 takes, as an example, the case where the termination graphic position of the first tracking stimulation graphic is directly above the fixation point. It can be understood that the termination graphic position of the first tracking stimulation graphic may lie anywhere in the upper display area where the position vertical angle is greater than or equal to the second angle threshold.
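A rough sketch of the first tracking stimulation signal (a 500 ms fixation point followed by a graphic moving upward at a uniform 3 cm/s for 3000 ms) is shown below. It assumes an already opened Psychtoolbox window `win`, a stimulus texture `tex`, and a pixels-per-centimetre calibration value; these, and the graphic size, are assumptions carried over from the previous sketch.

    % Sketch of the 3000 ms first tracking stimulation signal; assumes an open
    % window 'win', a stimulus texture 'tex' and a display calibration.
    pixelsPerCm   = 38;     % assumed pixel density of the display
    speedCmPerSec = 3;      % uniform upward speed described in the embodiment
    trackDuration = 3.0;    % seconds
    [cx, cy] = RectCenter(Screen('Rect', win));

    % 500 ms tracking fixation point stimulation signal
    Screen('FillOval', win, 0, CenterRectOnPoint([0 0 10 10], cx, cy));
    Screen('Flip', win);
    WaitSecs(0.5);

    % Move the first tracking stimulation graphic upward at constant speed
    startTime = GetSecs;
    while GetSecs - startTime < trackDuration
        elapsed = GetSecs - startTime;
        yOffsetPix = elapsed * speedCmPerSec * pixelsPerCm;   % distance travelled so far
        destRect = CenterRectOnPoint([0 0 100 100], cx, cy - yOffsetPix);
        Screen('DrawTexture', win, tex, [], destRect);
        Screen('Flip', win);
    end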
- the tracking stimulation signal group includes a second tracking stimulation signal group
- the second tracking stimulation signal group includes a tracking fixation point stimulation signal and a second tracking stimulation signal
- the second tracking stimulation signal represents that in the process of the first tracking stimulation graphic moving at a constant speed along a preset upward trajectory, based on a preset replacement duration, the first tracking stimulation graphic is replaced by the second tracking stimulation graphic and moves at a constant speed along the preset upward trajectory, the vertical angle between the starting graphic position and the ending graphic position of the first tracking stimulation graphic is greater than or equal to a third angle threshold, the emotion level corresponding to the first tracking stimulation graphic is negative or neutral, and the emotion level corresponding to the second tracking stimulation graphic is higher than that of the first tracking stimulation graphic.
- the first tracking stimulus graphic can be obtained by screening from the second emotion graphic library.
- the second emotion graphic library includes but is not limited to the Geneva emotion graphic library (GAPED), KDEF emotion graphic library, AKDEF emotion graphic library, Fer2013 emotion graphic library, RaFD emotion graphic library, etc.
- the second emotion graphic library used here is not limited.
- the second tracking stimulus pattern can be obtained by screening from a third emotion pattern library.
- the third emotion pattern library includes but is not limited to the Geneva emotion pattern library (GAPED), KDEF emotion pattern library, AKDEF emotion pattern library, Fer2013 emotion pattern library, RaFD emotion pattern library, etc.
- the third emotion pattern library used is not limited here.
- the first emotion pattern library, the second emotion pattern library and the third emotion pattern library can be the same or different.
- the replacement time point of the second tracking stimulation pattern can be random; for example, the replacement may occur after the first tracking stimulation pattern has moved for 100 ms, or after it has moved for 500 ms.
- the preset replacement time length can be 100ms or 200ms, and the preset replacement time length is not limited here.
- FIG5 is a schematic diagram of a second tracking stimulation signal group provided by an embodiment of the present invention.
- the second tracking stimulation signal group includes a 500 ms tracking fixation point stimulation signal and a 3000 ms second tracking stimulation signal, wherein, within the second tracking stimulation signal, the second tracking stimulation graphic replaces the first tracking stimulation graphic for a preset replacement duration of 100 ms.
- FIG. 5 takes, as an example, the case where the termination graphic position of the first tracking stimulation graphic is directly above the fixation point. It can be understood that the termination graphic position of the first tracking stimulation graphic may lie anywhere in the upper display area where the position vertical angle is greater than or equal to the third angle threshold.
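The replacement behaviour of the second tracking stimulation signal can be sketched as a small change to the pursuit loop above: at a randomized onset, the texture of the first tracking graphic is swapped for the second (higher emotion level) graphic for the preset replacement duration. Whether the display then reverts to the first graphic is an interpretation on our part; the onset range, the textures `texFirst`/`texSecond`, and the remaining variables are assumptions carried over from the previous sketch.

    % Sketch of the replacement logic in the second tracking stimulation signal.
    replaceOnset    = 0.5 + rand * 1.5;   % randomized replacement time point (s); range is assumed
    replaceDuration = 0.1;                % preset replacement duration of 100 ms

    startTime = GetSecs;
    while GetSecs - startTime < trackDuration
        elapsed = GetSecs - startTime;
        yOffsetPix = elapsed * speedCmPerSec * pixelsPerCm;
        destRect = CenterRectOnPoint([0 0 100 100], cx, cy - yOffsetPix);
        if elapsed >= replaceOnset && elapsed < replaceOnset + replaceDuration
            % second tracking stimulation graphic (higher emotion level)
            Screen('DrawTexture', win, texSecond, [], destRect);
        else
            % first tracking stimulation graphic (negative or neutral)
            Screen('DrawTexture', win, texFirst, [], destRect);
        end
        Screen('Flip', win);
    end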
- the emotion training system further includes a forehead support, which is used to support the head of the trainee and keep the head of the trainee stationary during the emotion training.
- Figure 6 is a schematic diagram of the structure of another emotion training system provided by an embodiment of the present invention.
- the embodiment of the present invention further refines the training analysis system 110 in the above embodiment.
- the training analysis system 110 further includes a behavior analysis device 112, which is used to collect training behavior data of the trainee during the emotional training process, and output training prompt information of the trainee generated based on the training behavior data; wherein the training prompt information represents the training status of the trainee during the emotional training process.
- the behavior analysis device 112 is activated in response to detecting a training trigger instruction.
- the behavior analysis device 112 includes a key device and/or an eye tracking device, and accordingly, the training behavior data includes key response data and/or eye movement data.
- the key device may be a remote controller, a keyboard, a mouse or other device that can realize key functions.
- the key response data may be used to characterize the attribute data of the key behavior input by the trainee based on the preset key task.
- the key response data includes at least one key behavior signal and the signal response duration corresponding to each key behavior signal.
- the eye tracking device exemplarily includes a camera unit for collecting eye movement data of the trainee, wherein the eye movement data includes eye movement trajectory and gaze point position, etc.
- each saccadic stimulation signal group includes two graphic types of saccadic stimulation graphics
- the button response data represents the button response of the trainee to the graphic type corresponding to the saccadic stimulation graphics.
- the graphic types of the saccadic stimulation graphics include landscape graphics and face graphics
- the graphic types of the saccadic stimulation graphics include cartoon graphics and face graphics
- the graphic types of the saccadic stimulation graphics include landscape graphics and cartoon graphics, etc.
- the two graphic types are not limited here.
- the graphic types of the saccadic stimulation graphics include landscape graphics and face graphics
- the graphic type of the saccadic stimulation graphics in the current saccadic stimulation signal group is the landscape type
- the trainee can press the left button of the mouse
- the graphic type of the saccadic stimulation graphics in the current saccadic stimulation signal group is the face type
- the trainee can press the right button of the mouse.
- the visual stimulation signal group includes a saccadic stimulation signal group
- the behavior analysis device 112 includes a button device
- the saccadic stimulation graphic in the saccadic stimulation signal group is a face graphic
- the button response data represents the button response input by the trainee for the facial gender corresponding to the face graphic.
- the visual stimulation signal group includes a tracking stimulation signal group
- the tracking stimulation signal group includes a first tracking stimulation graphic group and a second tracking stimulation graphic group
- the behavior analysis device 112 includes a button device
- the button response data represents the trainee's button response to whether the first tracking stimulation graphic moving at a constant speed along a preset upward trajectory is replaced.
- training prompt information of the trainee generated based on the training behavior data is output, including: determining the training parameters of the trainee based on the training behavior data, generating training prompt information based on the comparison result between the training parameters and the preset parameter range, and outputting the training prompt information.
- the training parameters include response parameters determined based on the key response data and/or concentration parameters determined based on the eye movement data.
- the response parameters include response accuracy and/or target response duration
- the concentration parameters include at least one of gaze duration, eye movement direction, eye movement angle, eye movement distance, and eye movement speed.
- a button device is used to determine the trainee's response parameters based on the button response data, and generate training prompt information for the trainee based on the response parameters; wherein the response parameters include response accuracy and/or target response time.
- the key response data includes at least one key behavior signal and a signal response duration corresponding to each key behavior signal.
- the key response data includes at least one key behavior signal.
- for each key behavior signal, if the key classification corresponding to the key behavior signal is the same as the true classification, the key behavior signal is marked as "correct"; if the key classification corresponding to the key behavior signal is different from the true classification, the key behavior signal is marked as "wrong". The ratio between the number of "correct" key behavior signals and the number of signal groups corresponding to the visual stimulus signal group is used as the response accuracy.
- the key classification represents the classification of the graphic type, the classification of the face gender, or the replacement classification of the first tracking stimulus graphic.
- the key-pressing behavior signal is a key-pressing behavior signal corresponding to the eye-saccade stimulation signal group
- the trainee's key-pressing task includes distinguishing the gender of the face of the eye-saccade stimulation pattern
- the total number of key presses represents the number of signal groups corresponding to the eye-saccade stimulation signal group. If the key classification corresponding to the key-pressing behavior signal is male (left key), and the real classification corresponding to the key-pressing behavior signal is male (left key), then the key-pressing behavior signal is marked as "correct".
- the key-pressing behavior signal is a key-pressing behavior signal corresponding to the tracking stimulation signal group
- the trainee's key-pressing task includes determining whether the first tracking stimulation pattern is replaced; in this case, the total number of key presses represents the number of signal groups corresponding to the tracking stimulation signal group. If the key classification corresponding to the key-pressing behavior signal is replaced (key pressed), and the real classification corresponding to the key-pressing behavior signal is replaced (key pressed), then the key-pressing behavior signal is marked as "correct".
- the key response data includes a signal response duration corresponding to at least one key behavior signal
- the signal response duration can be used to characterize the time length between the appearance time of the eye saccade stimulation pattern in the eye saccade stimulation signal and the key pressing time, or the signal response duration can be used to characterize the time length between the appearance time of the second tracking stimulation pattern in the second tracking stimulation signal group and the key pressing time.
- determining the response parameter of the trainee includes: taking the mean value of the signal response duration in the key response data as the trainee's target response duration, or taking the mean value of the signal response duration corresponding to the "correct" key behavior signal in the key response data as the trainee's target response duration.
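A small Matlab sketch of the response parameter computation described above is given below; `keyCorrect`, `responseSec`, `numSignalGroups`, and the prompt thresholds are assumed inputs and placeholder values, not quantities defined by the embodiment.

    % Sketch of computing the response parameters from key response data.
    % keyCorrect(i) is 1 if the i-th key behavior signal is marked "correct", else 0;
    % responseSec(i) is the signal response duration of the i-th key behavior signal;
    % numSignalGroups is the number of signal groups in the visual stimulus signal group.
    responseAccuracy = sum(keyCorrect) / numSignalGroups;

    % Target response duration: mean signal response duration over the "correct" signals
    targetResponseSec = mean(responseSec(keyCorrect == 1));

    % Generate training prompt information from a preset parameter range (placeholder thresholds)
    if responseAccuracy < 0.8 || targetResponseSec > 1.0
        disp('The training status is poor, please pay attention');
    end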
- the eye tracking device is used to determine the trainee's concentration parameters based on the eye movement data, and generate training prompt information for the trainee based on the concentration parameters; wherein the concentration parameters include at least one of gaze duration, eye movement direction, eye movement angle, eye movement distance and eye movement speed.
- the eye movement data includes eye movement trajectory and fixation point position.
- Fixation duration can be used to indicate the duration of the trainee's eye movement trajectory remaining within the preset area where the fixation point is located during the presentation duration of the saccade fixation point stimulation signal or the tracking fixation point stimulation signal;
- eye movement direction can be used to indicate the trainee's eye movement direction corresponding to the saccade stimulation signal group or the tracking stimulation signal group;
- eye movement angle can be used to indicate the trainee's eye movement angle corresponding to the saccade stimulation signal group or the tracking stimulation signal group;
- eye movement distance can be used to indicate the trainee's eye movement distance corresponding to the saccade stimulation signal group; and eye movement speed can be used to indicate the trainee's eye movement speed corresponding to the tracking stimulation signal group.
- generating training prompt information for the trainee based on the concentration parameters includes: comparing the concentration parameters with a preset parameter range, and generating training prompt information for the trainee based on the comparison result.
- the preset parameter range corresponding to the fixation duration may be determined based on the presentation duration of the eye saccade fixation point stimulation signal or the tracking fixation point stimulation signal
- the preset parameter range corresponding to the eye movement angle may be determined based on the vertical angle of the position
- the preset parameter range corresponding to the eye movement speed may be determined based on the moving speed of the first tracking stimulation pattern.
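As a sketch only, the concentration parameters could be derived from raw gaze samples roughly as follows; the sample arrays `gazeXY` and `tSec`, the fixation area radius, and the preset parameter range values are assumptions, not part of the embodiment.

    % Sketch of deriving concentration parameters from eye movement samples.
    % gazeXY is an N-by-2 array of gaze positions (pixels), tSec an N-by-1 time vector,
    % and [cx cy] the fixation point position; all are assumed inputs.
    dx = diff(gazeXY(:, 1));  dy = diff(gazeXY(:, 2));  dt = diff(tSec);

    inFixArea   = sqrt(sum((gazeXY - [cx cy]).^2, 2)) < 50;    % within 50 px of the fixation point
    gazeSec     = sum(dt(inFixArea(1:end-1)));                 % gaze duration inside the fixation area
    moveDistPix = sum(sqrt(dx.^2 + dy.^2));                    % eye movement distance
    moveSpeed   = moveDistPix / (tSec(end) - tSec(1));         % mean eye movement speed (px/s)
    movedUpward = sum(dy) < 0;                                 % eye movement direction (screen y grows downward)

    % Compare against preset parameter ranges (placeholder values)
    if gazeSec < 0.4 || ~movedUpward
        disp('The training status is poor, please pay attention');
    end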
- if the training parameters satisfy the preset parameter range, the training state represented by the generated training prompt information is good; if the training parameters do not satisfy the preset parameter range, the training state represented by the generated training prompt information is poor.
- outputting the training prompt information includes: outputting the training prompt information when the training parameters do not meet the preset parameter range.
- the output training prompt information is the training prompt information indicating that the training state is poor.
- outputting the training prompt information includes: sending the training prompt information to the visual presentation system 100, so that the visual presentation system 100 outputs the received training prompt information.
- the information type of the training prompt information is text information or color information. Exemplarily, the text information is "The training status is poor, please pay attention", and the color information is red.
- the output form of the training prompt information includes but is not limited to a sound prompt or an indicator light prompt.
- the sound prompt may be a voice prompt, such as "The training status is poor, please pay attention"
- the sound prompt may also be a prompt tone, such as a tone with a higher output frequency
- the indicator light prompt may be an indicator light with a higher flashing frequency.
- the specific output form of the training prompt information is not limited here.
- the technical solution of this embodiment is to set up a behavior analysis device in the training analysis system to collect the training behavior data of the trainee during the emotional training process, and output the training prompt information of the trainee generated based on the training behavior data, wherein the training prompt information represents the training status of the trainee during the emotional training process, solves the problem that the trainee's concentration is reduced during the emotional training process and affects the training effect, realizes the purpose of real-time feedback on the training status during the training process, and improves the training effect of the emotional training system.
- Figure 7 is a schematic diagram of the structure of another emotion training system provided by an embodiment of the present invention. This embodiment further refines the visual presentation system 100 in the above embodiment.
- the visual presentation system 100 further includes a selection module 103, which is used to, in response to detecting a selection instruction corresponding to a training selection control, add a training stimulus signal group corresponding to the training selection control to the visual stimulus signal group and generate a training trigger instruction; wherein the training selection control includes a saccade selection control and/or a tracking selection control, and the training stimulus signal group is a saccade stimulus signal group or a tracking stimulus signal group.
- when the trainee clicks a training selection control, a selection instruction corresponding to the training selection control is generated.
- the visual stimulation signal group includes a saccade stimulation signal group or a pursuit stimulation signal group.
- FIG8 is a rendering of a display interface of a selection module provided in an embodiment of the present invention.
- the “saccade training task” and “tracking training task” in FIG8 represent training selection controls, respectively.
- the training selection control is a command control that responds to a single click by the trainee to trigger an operation.
- a selection instruction is generated based on the training selection control in the selected state.
- the trainee may input a selection operation based on the training selection control in the display interface of the selection module 103 in advance, and accordingly, the control state of the training selection control corresponding to the selection operation is set to the selected state.
- the visual stimulation signal group includes an eye saccade stimulation signal group and/or a tracking stimulation signal group.
- FIG9 is a rendering of the display interface of another selection module provided by an embodiment of the present invention.
- the “eye-saccade training task” and “tracking training task” in FIG9 represent training selection controls, respectively, and “start training” represents a training trigger control, wherein the training trigger control is a command control that responds to a single-click trigger operation by the trainee; the training selection control is a checkbox control that has two control states: a selected state and an unselected state.
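A minimal Matlab sketch of a FIG. 9 style selection interface is given below; the figure layout, control positions, and the `startTraining` callback are hypothetical and only illustrate how checkbox selection controls and a training trigger control could be wired together.

    % Sketch of a selection interface with two checkbox selection controls and a
    % training trigger control; layout values and the startTraining callback are placeholders.
    fig = figure('Name', 'Emotion training', 'MenuBar', 'none', 'NumberTitle', 'off');
    saccadeBox  = uicontrol(fig, 'Style', 'checkbox', 'String', 'Saccade training task', ...
                            'Position', [40 120 220 30]);
    trackingBox = uicontrol(fig, 'Style', 'checkbox', 'String', 'Tracking training task', ...
                            'Position', [40 80 220 30]);
    % "Start training" reads the selected controls, adds the corresponding stimulus
    % signal groups to the visual stimulus signal group and issues the trigger instruction.
    uicontrol(fig, 'Style', 'pushbutton', 'String', 'Start training', ...
              'Position', [40 30 220 30], ...
              'Callback', @(src, evt) startTraining(get(saccadeBox, 'Value'), ...
                                                    get(trackingBox, 'Value')));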
- the technical solution of this embodiment is to set a selection module in the visual presentation system, wherein the selection module is used to add the training stimulus signal group corresponding to the training selection control to the visual stimulus signal group in response to detecting a selection instruction corresponding to the training selection control, and generate a training trigger instruction, wherein the training selection control includes a saccade selection control and/or a tracking selection control, and the training stimulus signal group is a saccade stimulus signal group or a tracking stimulus signal group, thereby solving the problem that the emotion training system cannot customize the selection of the visual stimulus signal group, so that the trainee can make personalized selection of the visual stimulus signal group used for emotion training, further improving the flexibility and pertinence of the emotion training system.
- FIG. 10 is a schematic diagram of the structure of an electronic device provided in accordance with an embodiment of the present invention.
- the electronic device 10 may be configured as a functional device in the visual presentation system 100 or a functional device in the training analysis system 110 in accordance with an embodiment of the present invention.
- electronic device 10 is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
- the components, their connections and relationships, and their functions shown in the embodiments of the present invention are merely examples and are not intended to limit the implementation of the present invention described and/or claimed herein.
- the electronic device 10 includes at least one processor 11, and a memory connected to the at least one processor 11 in communication, such as a read-only memory (ROM) 12, a random access memory (RAM) 13, etc., wherein the memory stores a computer program executable by the at least one processor 11, and the processor 11 can perform various appropriate actions and processes according to the computer program stored in the read-only memory (ROM) 12 or the computer program loaded from the storage unit 18 to the random access memory (RAM) 13.
- in the RAM 13, various programs and data required for the operation of the electronic device 10 can also be stored.
- the processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14.
- An input/output (I/O) interface 15 is also connected to the bus 14.
- a number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16, such as a keyboard, a mouse, etc.; an output unit 17, such as various types of displays, speakers, etc.; a storage unit 18, such as a disk, an optical disk, etc.; and a communication unit 19, such as a network card, a modem, a wireless communication transceiver, etc.
- the communication unit 19 allows the electronic device 10 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
- the processor 11 may be a variety of general and/or special processing components with processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various special artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, etc.
- the processor 11 executes the various methods and processes described above, such as the emotion training method in the above embodiment.
- the emotion training method described above may be implemented as a computer program, which is tangibly contained in a computer-readable storage medium, such as the storage unit 18.
- part or all of the computer program may be loaded and/or installed on the electronic device 10 via the ROM 12 and/or the communication unit 19.
- Various implementations of the systems and techniques described above herein can be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips (SOCs), programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
- Various implementations can include: being implemented in one or more computer programs that can be executed and/or interpreted on a programmable system that includes at least one programmable processor, which can be a special purpose or general purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Veterinary Medicine (AREA)
- Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Biophysics (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Psychology (AREA)
- Pathology (AREA)
- Child & Adolescent Psychology (AREA)
- Anesthesiology (AREA)
- Hematology (AREA)
- Human Computer Interaction (AREA)
- Ophthalmology & Optometry (AREA)
- Acoustics & Sound (AREA)
- Developmental Disabilities (AREA)
- Educational Technology (AREA)
- Hospice & Palliative Care (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Rehabilitation Tools (AREA)
Abstract
Provided is an emotion training system, the system comprising: a visual presentation system (100) and a training analysis system (110), wherein a scale display module (101) in the visual presentation system (100) is configured, before training starts and after training ends, for displaying a preset emotion scale to a trainee, respectively, and sending the obtained scale data before training and scale data after training to the training analysis system (110), respectively; a stimulation presentation module (102) in the visual presentation system (100) is configured, in response to detecting a training trigger instruction, for sequentially displaying each visual stimulation signal in a visual stimulation signal set to the trainee, wherein the visual stimulation signal set is used for indicating the trainee to perform upward eye movement training; a scale analysis module (111) in the training analysis system (110) is configured for outputting an emotion training result determined on the basis of the received scale data before training and scale data after training. The emotion training system realizes the flexibility of emotion training, and reduces the time cost and the labor cost.
Description
[01]本发明涉及医疗器械技术领域,尤其涉及一种情绪训练系统。[01] The present invention relates to the field of medical device technology, and in particular to an emotion training system.
[02]情绪,是对一系列主观认知经验的通称,是人对客观事物的态度体验以及相应的行为反应,一般认为,情绪是以个体愿望和需要为中介的一种心理活动,是影响我们行为和心理变化的情感状态。长期的消极情绪是众多精神疾病的高风险因素,例如抑郁症、焦虑症等等。[02] Emotion is a general term for a series of subjective cognitive experiences. It is a person’s attitude towards objective things and the corresponding behavioral response. It is generally believed that emotion is a psychological activity mediated by individual desires and needs. It is an emotional state that affects our behavior and psychological changes. Long-term negative emotions are a high-risk factor for many mental illnesses, such as depression, anxiety, etc.
[03]现有技术中关于情绪的研究大多侧重于情绪识别,而缺乏针对提高情绪的研究内容。目前,提高情绪的训练方法主要采用心理教育和知识讲座等,需要有经验的专业人士参与,时间成本和人工成本均较高。[03] In the prior art, most of the research on emotions focuses on emotion recognition, but lacks research content on improving emotions. At present, the training methods for improving emotions mainly adopt psychological education and knowledge lectures, etc., which require the participation of experienced professionals and have high time and labor costs.
[05] Embodiments of the present invention provide an emotion training system to solve the prior-art problem that emotion-improving training requires the guidance and participation of professionals, thereby making emotion training flexible and reducing time and labor costs.
[06] According to one embodiment of the present invention, an emotion training system is provided, the system including a visual presentation system and a training analysis system, wherein the visual presentation system includes a scale display module and a stimulation presentation module.
[07] The scale display module is configured to display a preset emotion scale to the trainee before training starts and after training ends, and to send the acquired pre-training scale data and post-training scale data to the training analysis system, respectively.
[08] The stimulation presentation module is configured to display, in response to detecting a training trigger instruction, each visual stimulation signal in a visual stimulation signal group to the trainee in sequence, wherein the visual stimulation signal group instructs the trainee to perform upward eye movement training.
[09] The training analysis system includes a scale analysis module configured to determine an emotion training result based on the received pre-training scale data and post-training scale data, and to output the emotion training result.
[10] In an optional embodiment, the visual stimulation signal group includes a saccade stimulation signal group and/or a tracking stimulation signal group, wherein the saccade stimulation signal group indicates that the trainee performs an upward saccade task, and the tracking stimulation signal group indicates that the trainee performs an upward eye movement tracking task.
[11] In an optional embodiment, the visual stimulation signals in the saccade stimulation signal group include a saccade fixation point stimulation signal, a blank screen stimulation signal, and a saccade graphic stimulation signal, wherein the graphic position of the saccade stimulation graphic in the saccade graphic stimulation signal is higher than the center position of the fixation point in the saccade fixation point stimulation signal, the vertical position angle between the graphic position and the center position is greater than or equal to a first angle threshold, and the emotion level corresponding to the saccade stimulation graphic is neutral or positive.
[12] In an optional embodiment, the tracking stimulation signal group includes a first tracking stimulation signal group, which includes a tracking fixation point stimulation signal and a first tracking stimulation signal. The first tracking stimulation signal represents a first tracking stimulation graphic moving at a uniform speed along a preset upward trajectory; the vertical position angle between the starting graphic position and the ending graphic position of the first tracking stimulation graphic is greater than or equal to a second angle threshold, and the emotion level corresponding to the first tracking stimulation graphic is negative, neutral, or positive.
[13] In an optional embodiment, the tracking stimulation signal group includes a second tracking stimulation signal group, which includes a tracking fixation point stimulation signal and a second tracking stimulation signal. The second tracking stimulation signal represents that, while the first tracking stimulation graphic moves at a uniform speed along the preset upward trajectory, it is replaced, for a preset replacement duration, by a second tracking stimulation graphic that continues to move at a uniform speed along the preset upward trajectory; the vertical position angle between the starting graphic position and the ending graphic position of the first tracking stimulation graphic is greater than or equal to a third angle threshold, the emotion level corresponding to the first tracking stimulation graphic is negative or neutral, and the emotion level corresponding to the second tracking stimulation graphic is higher than that of the first tracking stimulation graphic.
[14] In an optional embodiment, the training analysis system further includes a behavior analysis device configured to collect training behavior data of the trainee during emotion training and to output training prompt information of the trainee generated based on the training behavior data, wherein the training prompt information represents the training state of the trainee during emotion training.
[15] In an optional embodiment, the behavior analysis device includes a key-press device and/or an eye tracking device, and accordingly the training behavior data includes key response data and/or eye movement data.
[16] In an optional embodiment, when the visual stimulation signal group includes a saccade stimulation signal group and the behavior analysis device includes a key-press device, the saccade stimulation graphic in the saccade stimulation signal group is a face graphic, and the key response data represents the key response entered by the trainee for the face gender corresponding to the face graphic.
[17] In an optional embodiment, when the visual stimulation signal group includes a tracking stimulation signal group, the tracking stimulation signal group includes a first tracking stimulation graphic group and a second tracking stimulation graphic group, and the behavior analysis device includes a key-press device, the key response data represents the key response entered by the trainee as to whether the first tracking stimulation graphic moving at a uniform speed along the preset upward trajectory has been replaced.
[18] In an optional embodiment, the key-press device is configured to determine response parameters of the trainee based on the key response data and to generate training prompt information for the trainee based on the response parameters, wherein the response parameters include a response accuracy and/or a target response duration.
[19] In an optional embodiment, the eye tracking device is configured to determine concentration parameters of the trainee based on the eye movement data and to generate training prompt information for the trainee based on the concentration parameters, wherein the concentration parameters include at least one of gaze duration, eye movement direction, eye movement angle, eye movement distance, and eye movement speed.
[20] In an optional embodiment, the visual presentation system further includes a selection module configured to, in response to detecting a selection instruction corresponding to a training selection control, add the training stimulation signal group corresponding to the training selection control to the visual stimulation signal group and generate a training trigger instruction, wherein the training selection control includes a saccade selection control and/or a tracking selection control, and the training stimulation signal group is a saccade stimulation signal group or a tracking stimulation signal group.
[21] In the technical solution of the embodiments of the present invention, a visual presentation system and a training analysis system are provided. The visual presentation system includes a scale display module and a stimulation presentation module: the scale display module displays a preset emotion scale to the trainee before training starts and after training ends and sends the acquired pre-training and post-training scale data to the training analysis system, and the stimulation presentation module displays each visual stimulation signal in a visual stimulation signal group to the trainee in sequence in response to detecting a training trigger instruction, the visual stimulation signal group instructing the trainee to perform upward eye movement training. The training analysis system includes a scale analysis module that determines an emotion training result based on the received pre-training and post-training scale data and outputs the result. This solves the prior-art problem that emotion-improving training requires the guidance and participation of professionals, makes emotion training flexible, and reduces time and labor costs.
[22] It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present invention, nor is it intended to limit the scope of the present invention. Other features of the present invention will become readily understood from the following description.
[23] To explain the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
[24] FIG. 1 is a schematic structural diagram of an emotion training system provided by an embodiment of the present invention;
[25] FIG. 2 is a schematic diagram of a vertical position angle provided by an embodiment of the present invention;
[26] FIG. 3 is a schematic diagram of a saccade stimulation signal group provided by an embodiment of the present invention;
[27] FIG. 4 is a schematic diagram of a first tracking stimulation signal group provided by an embodiment of the present invention;
[28] FIG. 5 is a schematic diagram of a second tracking stimulation signal group provided by an embodiment of the present invention;
[29] FIG. 6 is a schematic structural diagram of another emotion training system provided by an embodiment of the present invention;
[30] FIG. 7 is a schematic structural diagram of another emotion training system provided by an embodiment of the present invention;
[31] FIG. 8 is an illustration of a display interface of a selection module provided by an embodiment of the present invention;
[32] FIG. 9 is an illustration of a display interface of another selection module provided by an embodiment of the present invention;
[33] FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
[34] To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the scope of protection of the present invention.
[35] It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to such a process, method, product, or device.
[36] FIG. 1 is a schematic structural diagram of an emotion training system provided by an embodiment of the present invention. This embodiment is applicable to training the emotions of a trainee, and the system may be implemented in software and/or hardware.
[37] In this embodiment, the emotion training system includes a visual presentation system 100 and a training analysis system 110, wherein the visual presentation system 100 includes a scale display module 101 and a stimulation presentation module 102. The scale display module 101 is configured to display a preset emotion scale to the trainee before training starts and after training ends, and to send the acquired pre-training scale data and post-training scale data to the training analysis system 110, respectively. The stimulation presentation module 102 is configured to display, in response to detecting a training trigger instruction, each visual stimulation signal in a visual stimulation signal group to the trainee in sequence, wherein the visual stimulation signal group instructs the trainee to perform upward eye movement training. The training analysis system 110 includes a scale analysis module 111 configured to determine an emotion training result based on the received pre-training scale data and post-training scale data and to output the emotion training result.
[38] Exemplarily, the visual presentation system 100 may consist of a host computer, a high-refresh-rate (144 Hz) liquid crystal display, and psychological visual presentation software. The psychological visual presentation software may be Matlab together with the Psychtoolbox toolkit: visual stimulation signals are presented to the trainee on the liquid crystal display via Psychtoolbox programming in Matlab, and the stimulus luminance can be accurately measured with a photometer (in cd/cm²).
[39] Specifically, the scale display module 101 is configured to display the preset emotion scale to the trainee before the trainee performs emotion training and send the pre-training scale data entered by the trainee based on the preset emotion scale to the training analysis system 110, and to display the preset emotion scale to the trainee again after the trainee has performed emotion training and send the post-training scale data entered by the trainee based on the preset emotion scale to the training analysis system 110.
[40] Specifically, there may be one or more preset emotion scales. Exemplarily, the preset emotion scale includes, but is not limited to, the positive and negative affect scale, the Hamilton depression scale, the Hamilton anxiety scale, and the depression-anxiety-stress scale (DASS-21); the preset emotion scale used is not limited here.
[41] Exemplarily, the pre-training scale data or post-training scale data includes, but is not limited to, the score of each item in the preset emotion scale, the total score of all items in the preset emotion scale, the grade of each item in the preset emotion scale, and the overall grade of all items in the preset emotion scale.
[42] Specifically, the visual stimulation signal group is set by Psychtoolbox programming in Matlab, and the visual stimulation signal group contains a plurality of visual stimulation signals. In an optional embodiment, the stimulation presentation module 102 is specifically configured to display, in response to detecting a training trigger instruction, each visual stimulation signal to the trainee in sequence based on the presentation duration corresponding to that visual stimulation signal in the visual stimulation signal group. The presentation durations corresponding to the respective visual stimulation signals may be the same or different.
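For illustration only (this is not the patent's Psychtoolbox implementation), the following minimal Python sketch shows the per-signal timing logic described above; the signal names, the `render` callable, and the example durations are hypothetical placeholders.

```python
import time

def present_signals(signal_group, render):
    """Display each visual stimulation signal for its own presentation duration.

    signal_group: list of (name, duration_in_seconds) tuples; durations may differ per signal.
    render: callable that draws one signal (stands in for the actual display hardware).
    """
    for name, duration in signal_group:
        render(name)          # draw the current visual stimulation signal
        time.sleep(duration)  # hold it on screen for its presentation duration

# Hypothetical saccade stimulation signal group: fixation point, blank screen, saccade graphic.
saccade_group = [("fixation_point", 0.5), ("blank_screen", 0.5), ("saccade_graphic", 1.0)]
present_signals(saccade_group, render=print)
```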
[43] In an optional embodiment, the visual stimulation signal group includes a saccade stimulation signal group and/or a tracking stimulation signal group, wherein the saccade stimulation signal group indicates that the trainee performs an upward saccade task, and the tracking stimulation signal group indicates that the trainee performs an upward eye movement tracking task.
[44] Specifically, the visual stimulation signal group contains multiple saccade stimulation signal groups and/or multiple tracking stimulation signal groups. Exemplarily, the training duration corresponding to the visual stimulation signal group may be 10 minutes or 15 minutes; the training duration is not limited here.
[45] Exemplarily, the emotion training result includes, but is not limited to, whether the emotion level has improved, the emotion level difference, and the emotion improvement ratio. For example, if the emotion level corresponding to the pre-training scale data is 10 and the emotion level corresponding to the post-training scale data is 25, then the emotion level has improved, the emotion level difference is 15, and the emotion improvement ratio is 150%.
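As a minimal sketch of the result computation implied by the example above, assuming the scale data have already been reduced to a single emotion-level score per session; the function and variable names are illustrative only.

```python
def emotion_training_result(pre_level, post_level):
    """Derive the training result described above from pre- and post-training emotion levels."""
    improved = post_level > pre_level             # whether the emotion level improved
    difference = post_level - pre_level           # emotion level difference
    ratio = (post_level - pre_level) / pre_level  # emotion improvement ratio
    return improved, difference, ratio

# Example from the text: pre-training level 10, post-training level 25.
print(emotion_training_result(10, 25))  # (True, 15, 1.5), i.e. improved, +15, 150%
```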
[46] In an optional embodiment, outputting the emotion training result includes sending the emotion training result to the visual presentation system 100, so that the visual presentation system 100 outputs the received emotion training result.
[47] In another optional embodiment, outputting the emotion training result includes sending the emotion training result to a mobile terminal. Exemplarily, the emotion training result may be sent in forms including, but not limited to, a text message, an email, or a phone call.
[48] In the technical solution of this embodiment, a visual presentation system and a training analysis system are provided. The visual presentation system includes a scale display module and a stimulation presentation module: the scale display module displays a preset emotion scale to the trainee before training starts and after training ends and sends the acquired pre-training and post-training scale data to the training analysis system, and the stimulation presentation module displays each visual stimulation signal in a visual stimulation signal group to the trainee in sequence in response to detecting a training trigger instruction, the visual stimulation signal group instructing the trainee to perform upward eye movement training. The training analysis system includes a scale analysis module that determines an emotion training result based on the received pre-training and post-training scale data and outputs the result. This solves the prior-art problem that emotion-improving training requires the guidance and participation of professionals, makes emotion training flexible, and reduces time and labor costs.
[49] Based on the above embodiment, this embodiment of the present invention further refines the "saccade stimulation signal group" and the "tracking stimulation signal group" in the above embodiment.
[50] In an optional embodiment, the visual stimulation signals in the saccade stimulation signal group include a saccade fixation point stimulation signal, a blank screen stimulation signal, and a saccade graphic stimulation signal, wherein the graphic position of the saccade stimulation graphic in the saccade graphic stimulation signal is higher than the center position of the fixation point in the saccade fixation point stimulation signal, the vertical position angle between the graphic position and the center position is greater than or equal to a first angle threshold, and the emotion level corresponding to the saccade stimulation graphic is neutral or positive.
[51] Specifically, the presentation durations corresponding to the saccade fixation point stimulation signal, the blank screen stimulation signal, and the saccade graphic stimulation signal may be the same or different.
[52] Specifically, the saccade stimulation graphic may be selected from a first emotion graphic library. Exemplarily, the first emotion graphic library includes, but is not limited to, the Geneva emotion graphic library (GAPED), the KDEF emotion graphic library, the AKDEF emotion graphic library, the Fer2013 emotion graphic library, the RaFD emotion graphic library, and so on; the first emotion graphic library used is not limited here.
[53] Specifically, the graphic position may represent the position of the graphic center of the saccade stimulation graphic. The trainee keeps the head still during emotion training; the vertical position angle may represent the angle, in the vertical direction, between a first line from the trainee's eye to the center position and a second line from the eye to the graphic position. Exemplarily, the first angle threshold may be 12°; the first angle threshold is not limited here.
[54] FIG. 2 is a schematic diagram of a vertical position angle provided by an embodiment of the present invention. Specifically, the "eye" on the left of FIG. 2 represents the eye position of the trainee during emotion training, "A" represents the first line from the eye to the center position, "B" represents the second line from the eye to the graphic position, and "α" represents the vertical position angle.
[55] For example, if the distance between the trainee's eye and the visual presentation system 100 is 30 cm and the first angle threshold is 12°, then the vertical distance between the graphic position and the center position must be greater than or equal to 6.38 cm. Specifically, the graphic position of the saccade stimulation graphic may be directly above, above and to the left of, or above and to the right of the fixation point, and so on; the horizontal position of the saccade stimulation graphic is not limited here.
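The worked example above follows from simple trigonometry; the short sketch below reproduces it, assuming the viewing distance and angle threshold given in the text (function and parameter names are illustrative).

```python
import math

def min_vertical_offset(viewing_distance_cm, angle_threshold_deg):
    """Smallest vertical distance between the graphic position and the fixation center
    such that the vertical position angle reaches the angle threshold."""
    return viewing_distance_cm * math.tan(math.radians(angle_threshold_deg))

# Example from the text: 30 cm viewing distance, 12 degree first angle threshold.
print(round(min_vertical_offset(30, 12), 2))  # approximately 6.38 cm
```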
[56] FIG. 3 is a schematic diagram of a saccade stimulation signal group provided by an embodiment of the present invention. Specifically, the saccade stimulation signal group includes a 500 ms saccade fixation point stimulation signal, a 500 ms blank screen stimulation signal, and a 1000 ms saccade graphic stimulation signal. FIG. 3 takes the case where the saccade stimulation graphic is directly above the fixation point as an example; it can be understood that the graphic position of the saccade stimulation graphic may lie anywhere in the upper display area for which the vertical position angle is greater than or equal to the first angle threshold.
[57] In an optional embodiment, the tracking stimulation signal group includes a first tracking stimulation signal group, which includes a tracking fixation point stimulation signal and a first tracking stimulation signal. The first tracking stimulation signal represents a first tracking stimulation graphic moving at a uniform speed along a preset upward trajectory; the vertical position angle between the starting graphic position and the ending graphic position of the first tracking stimulation graphic is greater than or equal to a second angle threshold, and the emotion level corresponding to the first tracking stimulation graphic is negative, neutral, or positive.
[58] Specifically, the first tracking stimulation graphic may be selected from a second emotion graphic library. Exemplarily, the second emotion graphic library includes, but is not limited to, the Geneva emotion graphic library (GAPED), the KDEF emotion graphic library, the AKDEF emotion graphic library, the Fer2013 emotion graphic library, the RaFD emotion graphic library, and so on; the second emotion graphic library used is not limited here. The first emotion graphic library and the second emotion graphic library may be the same or different.
[59] Specifically, the starting graphic position may represent the position of the graphic center of the first tracking stimulation graphic at the starting point, and the ending graphic position may represent the position of the graphic center of the first tracking stimulation graphic at the ending point. Exemplarily, the starting point may be the fixation point in the tracking fixation point stimulation signal; the specific starting graphic position is not limited here.
[60] Specifically, the trainee keeps the head still during emotion training, and the vertical position angle may represent the angle, in the vertical direction, between a first line from the trainee's eye to the starting graphic position and a second line from the eye to the ending graphic position. Exemplarily, the second angle threshold may be 12°; the second angle threshold is not limited here. The first angle threshold and the second angle threshold may be the same or different.
[61] For example, if the distance between the starting graphic position and the ending graphic position of the first tracking stimulation graphic is 9 cm and the presentation duration of the first tracking stimulation signal is 3000 ms, then the moving speed of the first tracking stimulation graphic is 3 cm/s.
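A small sketch of the uniform-motion arithmetic in the example above; the function names are illustrative and the code is not part of the patent's implementation.

```python
def tracking_motion(distance_cm, duration_ms):
    """Uniform speed of the tracking stimulation graphic and its offset at a given time."""
    speed_cm_per_s = distance_cm / (duration_ms / 1000.0)

    def offset_at(t_ms):
        # vertical offset from the starting graphic position after t_ms of movement
        return speed_cm_per_s * (t_ms / 1000.0)

    return speed_cm_per_s, offset_at

speed, offset_at = tracking_motion(9, 3000)  # example from the text
print(speed)            # 3.0 cm/s
print(offset_at(1500))  # 4.5 cm, halfway along the preset upward trajectory
```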
[62] Exemplarily, the ending graphic position of the first tracking stimulation graphic may be directly above, above and to the left of, or above and to the right of the fixation point, and so on; the horizontal position of the first tracking stimulation graphic is not limited here.
[63] FIG. 4 is a schematic diagram of a first tracking stimulation signal group provided by an embodiment of the present invention. Specifically, the first tracking stimulation signal group includes a 500 ms tracking fixation point stimulation signal and a 3000 ms first tracking stimulation signal. FIG. 4 takes the case where the ending graphic position of the first tracking stimulation graphic is directly above the fixation point as an example; it can be understood that the ending graphic position may lie anywhere in the upper display area for which the vertical position angle is greater than or equal to the second angle threshold.
[64] In an optional embodiment, the tracking stimulation signal group includes a second tracking stimulation signal group, which includes a tracking fixation point stimulation signal and a second tracking stimulation signal. The second tracking stimulation signal represents that, while the first tracking stimulation graphic moves at a uniform speed along the preset upward trajectory, it is replaced, for a preset replacement duration, by a second tracking stimulation graphic that continues to move at a uniform speed along the preset upward trajectory; the vertical position angle between the starting graphic position and the ending graphic position of the first tracking stimulation graphic is greater than or equal to a third angle threshold, the emotion level corresponding to the first tracking stimulation graphic is negative or neutral, and the emotion level corresponding to the second tracking stimulation graphic is higher than that of the first tracking stimulation graphic.
[65] Specifically, the first tracking stimulation graphic may be selected from the second emotion graphic library, which, as above, includes but is not limited to the Geneva emotion graphic library (GAPED), the KDEF emotion graphic library, the AKDEF emotion graphic library, the Fer2013 emotion graphic library, the RaFD emotion graphic library, and so on; the second emotion graphic library used is not limited here.
[66] Specifically, the second tracking stimulation graphic may be selected from a third emotion graphic library. Exemplarily, the third emotion graphic library includes, but is not limited to, the Geneva emotion graphic library (GAPED), the KDEF emotion graphic library, the AKDEF emotion graphic library, the Fer2013 emotion graphic library, the RaFD emotion graphic library, and so on; the third emotion graphic library used is not limited here. The first, second, and third emotion graphic libraries may be the same or different.
[67] Specifically, the replacement time point of the second tracking stimulation graphic may be random; for example, the second tracking stimulation graphic may replace the first tracking stimulation graphic after the first tracking stimulation graphic has moved for 100 ms, or after it has moved for 500 ms. Exemplarily, the preset replacement duration may be 100 ms or 200 ms; the preset replacement duration is not limited here.
[68] Specifically, when the emotion level corresponding to the first tracking stimulation graphic is negative, the emotion level corresponding to the second tracking stimulation graphic is neutral or positive; when the emotion level corresponding to the first tracking stimulation graphic is neutral, the emotion level corresponding to the second tracking stimulation graphic is positive.
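The allowed replacement levels and the random replacement onset described in the two preceding paragraphs can be summarized as a simple lookup; this is a purely illustrative Python sketch, with the random onset modeled by `random.uniform` and all names hypothetical.

```python
import random

# Emotion levels of the second tracking stimulation graphic allowed for each first-graphic level:
# negative -> neutral or positive, neutral -> positive.
ALLOWED_REPLACEMENT = {"negative": ("neutral", "positive"), "neutral": ("positive",)}

def pick_replacement(first_level, trial_duration_ms=3000, replacement_duration_ms=100):
    """Choose a replacement emotion level and a random replacement onset within the trial."""
    second_level = random.choice(ALLOWED_REPLACEMENT[first_level])
    onset_ms = random.uniform(0, trial_duration_ms - replacement_duration_ms)
    return second_level, onset_ms

print(pick_replacement("negative"))
```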
[69] FIG. 5 is a schematic diagram of a second tracking stimulation signal group provided by an embodiment of the present invention. Specifically, the second tracking stimulation signal group includes a 500 ms tracking fixation point stimulation signal and a 3000 ms second tracking stimulation signal, wherein the second tracking stimulation signal includes a 100 ms preset replacement duration during which the second tracking stimulation graphic replaces the first tracking stimulation graphic. FIG. 5 takes the case where the ending graphic position of the first tracking stimulation graphic is directly above the fixation point as an example; it can be understood that the ending graphic position may lie anywhere in the upper display area for which the vertical position angle is greater than or equal to the second angle threshold.
[70] Based on the above embodiment, the emotion training system further includes a forehead rest configured to support the trainee's head; its function is to keep the trainee's head still during emotion training.
[71] FIG. 6 is a schematic structural diagram of another emotion training system provided by an embodiment of the present invention. This embodiment of the present invention further refines the training analysis system 110 in the above embodiment.
[72] As shown in FIG. 6, the training analysis system 110 further includes a behavior analysis device 112 configured to collect training behavior data of the trainee during emotion training and to output training prompt information of the trainee generated based on the training behavior data, wherein the training prompt information represents the training state of the trainee during emotion training.
[73] Specifically, the behavior analysis device 112 is started in response to detecting the training trigger instruction. In an optional embodiment, the behavior analysis device 112 includes a key-press device and/or an eye tracking device, and accordingly the training behavior data includes key response data and/or eye movement data.
[74] Exemplarily, the key-press device may be a remote control, a keyboard, a mouse, or another device that can implement a key-press function. Specifically, the key response data may represent attribute data of the key-press behavior entered by the trainee based on a preset key-press task. Exemplarily, the key response data includes at least one key-press behavior signal and the signal response duration corresponding to each key-press behavior signal.
[75] Exemplarily, the eye tracking device includes a camera unit configured to collect the trainee's eye movement data, which includes the eye movement trajectory and the gaze point position, among others.
[76] Based on the above embodiment, optionally, when the visual stimulation signal group includes at least two saccade stimulation signal groups and the behavior analysis device 112 includes a key-press device, each saccade stimulation signal group includes saccade stimulation graphics of two graphic types, and the key response data represents the key response entered by the trainee for the graphic type corresponding to the saccade stimulation graphic.
[77] Exemplarily, the two graphic types of the saccade stimulation graphics may be landscape graphics and face graphics, cartoon graphics and face graphics, or landscape graphics and cartoon graphics, and so on; the two graphic types are not limited here.
[78] Taking a mouse as the key-press device, and assuming the graphic types of the saccade stimulation graphics include landscape graphics and face graphics, the trainee may press the left mouse button when the graphic type of the saccade stimulation graphic in the current saccade stimulation signal group is the landscape type, and press the right mouse button when it is the face type.
[79] Based on the above embodiment, optionally, when the visual stimulation signal group includes a saccade stimulation signal group and the behavior analysis device 112 includes a key-press device, the saccade stimulation graphic in the saccade stimulation signal group is a face graphic, and the key response data represents the key response entered by the trainee for the face gender corresponding to the face graphic.
[80] Taking a mouse as the key-press device, the trainee may press the left mouse button when the face gender of the saccade stimulation graphic in the current saccade stimulation signal group is male, and press the right mouse button when it is female.
[81] Based on the above embodiment, optionally, when the visual stimulation signal group includes a tracking stimulation signal group, the tracking stimulation signal group includes a first tracking stimulation graphic group and a second tracking stimulation graphic group, and the behavior analysis device 112 includes a key-press device, the key response data represents the key response entered by the trainee as to whether the first tracking stimulation graphic moving at a uniform speed along the preset upward trajectory has been replaced.
[82] Taking a mouse as the key-press device, the trainee may press the left mouse button when the first tracking stimulation graphic in the current tracking stimulation signal group has been replaced.
[83] In an optional embodiment, outputting the training prompt information of the trainee generated based on the training behavior data includes: determining training parameters of the trainee based on the training behavior data, generating training prompt information based on the comparison result between the training parameters and a preset parameter range, and outputting the training prompt information.
[84] Specifically, the training parameters include response parameters determined based on the key-press behavior data and/or concentration parameters determined based on the eye movement data, wherein the response parameters include a response accuracy and/or a target response duration, and the concentration parameters include at least one of gaze duration, eye movement direction, eye movement angle, eye movement distance, and eye movement speed.
[85] Based on the above embodiment, optionally, the key-press device is configured to determine the trainee's response parameters based on the key response data and to generate training prompt information for the trainee based on the response parameters, wherein the response parameters include a response accuracy and/or a target response duration.
[86] Specifically, the key response data includes at least one key-press behavior signal and the signal response duration corresponding to each key-press behavior signal.
[87] In one embodiment, the key response data includes at least one key-press behavior signal. For each key-press behavior signal, if the key-press classification corresponding to that signal is the same as the true classification, the signal is marked "correct"; if the key-press classification differs from the true classification, the signal is marked "wrong". The ratio of the number of "correct" key-press behavior signals to the number of signal groups in the visual stimulation signal group is taken as the response accuracy. The key-press classification represents the classification of the graphic type, the classification of the face gender, or the replacement classification of the first tracking stimulation graphic.
[88] For example, if the key-press behavior signals correspond to the saccade stimulation signal groups and the trainee's key-press task is to distinguish the face gender of the saccade stimulation graphics, then the total number of key presses represents the number of saccade stimulation signal groups; if the key-press classification corresponding to a key-press behavior signal is male (left button) and the true classification is also male (left button), that signal is marked "correct". If the key-press behavior signals correspond to the tracking stimulation signal groups and the trainee's key-press task is to judge whether the first tracking stimulation graphic has been replaced, then the total number of key presses represents the number of tracking stimulation signal groups; if the key-press classification corresponding to a key-press behavior signal is "replaced" (key pressed) and the true classification is also "replaced" (key pressed), that signal is marked "correct".
[89] In one embodiment, the key response data includes the signal response duration corresponding to each key-press behavior signal. The signal response duration may represent the time between the appearance of the saccade stimulation graphic in the saccade graphic stimulation signal and the key press, or the time between the appearance of the second tracking stimulation graphic in the second tracking stimulation signal group and the key press. Specifically, determining the trainee's response parameters based on the key response data includes taking the mean of the signal response durations in the key response data as the trainee's target response duration, or taking the mean of the signal response durations corresponding to the "correct" key-press behavior signals as the trainee's target response duration.
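As a minimal sketch of the two response parameters described in the preceding paragraphs, assuming each key-press behavior signal carries the entered classification, the true classification, and a response time; the dictionary layout and names are hypothetical stand-ins for the key response data, not the patent's data format.

```python
def response_parameters(key_signals, num_signal_groups):
    """Compute response accuracy and target response duration from key response data.

    key_signals: list of dicts like {"pressed": "male", "truth": "male", "response_ms": 620.0}.
    num_signal_groups: number of stimulation signal groups presented.
    """
    correct = [s for s in key_signals if s["pressed"] == s["truth"]]  # signals marked "correct"
    accuracy = len(correct) / num_signal_groups
    # target response duration: mean response time of the "correct" key-press behavior signals
    target_response_ms = (sum(s["response_ms"] for s in correct) / len(correct)) if correct else None
    return accuracy, target_response_ms

signals = [{"pressed": "male", "truth": "male", "response_ms": 620.0},
           {"pressed": "female", "truth": "male", "response_ms": 710.0}]
print(response_parameters(signals, num_signal_groups=2))  # (0.5, 620.0)
```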
[90] In an optional embodiment, the eye tracking device is configured to determine the trainee's concentration parameters based on the eye movement data and to generate training prompt information for the trainee based on the concentration parameters, wherein the concentration parameters include at least one of gaze duration, eye movement direction, eye movement angle, eye movement distance, and eye movement speed.
[91] Specifically, the eye movement data includes the eye movement trajectory and the gaze point position. The gaze duration may represent how long, within the presentation duration of the saccade fixation point stimulation signal or the tracking fixation point stimulation signal, the trainee's eye movement trajectory stays within the preset region around the fixation point; the eye movement direction may represent the direction of the trainee's eye movement corresponding to the saccade stimulation signal group or the tracking stimulation signal group; the eye movement angle may represent the angle of the trainee's eye movement corresponding to the saccade stimulation signal group or the tracking stimulation signal group; the eye movement distance may represent the distance of the trainee's eye movement corresponding to the saccade stimulation signal group; and the eye movement speed may represent the speed of the trainee's eye movement corresponding to the tracking stimulation signal group.
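For illustration, the sketch below computes one concentration parameter, the gaze duration, from sampled gaze points. It assumes uniformly sampled eye movement data and a circular preset region around the fixation point; the sampling interval, region radius, and all names are assumptions made for this example only.

```python
import math

def gaze_duration_ms(gaze_points, center, radius_cm, sample_interval_ms):
    """Total time the gaze stayed within the preset region around the fixation point.

    gaze_points: list of (x, y) gaze positions sampled at a fixed interval.
    center: (x, y) fixation point position; radius_cm: size of the preset region.
    """
    inside = sum(1 for (x, y) in gaze_points
                 if math.hypot(x - center[0], y - center[1]) <= radius_cm)
    return inside * sample_interval_ms

samples = [(0.0, 0.0), (0.1, 0.2), (3.0, 4.0)]  # hypothetical gaze samples
print(gaze_duration_ms(samples, center=(0, 0), radius_cm=1.0, sample_interval_ms=4))  # 8 ms
```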
[92] Specifically, generating the trainee's training prompt information based on the concentration parameters includes comparing the concentration parameters with preset parameter ranges and generating the training prompt information based on the comparison result. The preset parameter range corresponding to the gaze duration may be determined based on the presentation duration of the saccade fixation point stimulation signal or the tracking fixation point stimulation signal, the preset parameter range corresponding to the eye movement angle may be determined based on the vertical position angle, and the preset parameter range corresponding to the eye movement speed may be determined based on the moving speed of the first tracking stimulation graphic.
[93] Specifically, if the training parameters fall within the preset parameter range, the generated training prompt information indicates a good training state; if the training parameters fall outside the preset parameter range, the generated training prompt information indicates a poor training state.
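A small sketch of the comparison-and-prompt step described above, assuming a single training parameter and a hypothetical preset range; the range values and the no-prompt convention for a good training state are illustrative assumptions.

```python
def training_prompt(parameter_value, preset_range):
    """Compare a training parameter with its preset range and generate prompt information."""
    low, high = preset_range
    if low <= parameter_value <= high:
        return None  # training state is good: no prompt is output
    return "Training state is poor, please pay attention"  # poor state: prompt is output

# Example: an eye movement angle compared against a hypothetical range
# derived from the vertical position angle.
print(training_prompt(8.0, (12.0, 20.0)))
```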
[94]在上述实施例的基础上,可选的,将训练提示信息进行输出,包括:在训练参数不满足预设参数范围的情况下,将训练提示信息进行输出。在本实施例中,输出的训练提示信息为表征训练状态较差的训练提示信息。这样设置的好处在于,避免在情绪训练过程中一直输出训练提示信息,影响到训练者的训练状态。[94] Based on the above embodiment, optionally, the training prompt information is output, including: when the training parameters do not meet the preset parameter range, the training prompt information is output. In this embodiment, the output training prompt information is the training prompt information indicating that the training state is poor. The advantage of such a setting is that the training prompt information is not outputted all the time during the emotion training process, which affects the training state of the trainee.
[95]在一个可选实施例中,将训练提示信息进行输出,包括:将训练提示信息发送给视觉呈现系统100,以使视觉呈现系统100将接收到的训练提示信息进行输出。在本实施例中,训练提示信息的信息类型为文字信息或颜色信息。示例性的,文字信息为“训练状态较差,请注意”,颜色信息为红色。[95] In an optional embodiment, outputting the training prompt information includes: sending the training prompt information to the visual presentation system 100, so that the visual presentation system 100 outputs the received training prompt information. In this embodiment, the information type of the training prompt information is text information or color information. Exemplarily, the text information is "The training status is poor, please pay attention", and the color information is red.
[96]在另一个可选实施例中,训练提示信息的输出形式包括但不限于声音提示或指示灯提示。其中,示例性的,声音提示可以是语音提示,如“训练状态较差,请注意”,声音提示还可以是提示音提示,如输出频率较高的提示音,指示灯提示可以是指示灯闪烁频率较高。此处对训练提示信息的具体输出形式不作限定。[96] In another optional embodiment, the output form of the training prompt information includes but is not limited to a sound prompt or an indicator light prompt. In which, for example, the sound prompt may be a voice prompt, such as "The training status is poor, please pay attention", the sound prompt may also be a prompt tone prompt, such as a prompt tone with a higher output frequency, and the indicator light prompt may be an indicator light with a higher flashing frequency. The specific output form of the training prompt information is not limited here.
[97]本实施例的技术方案,通过在训练分析系统中设置行为分析装置,用于采集训练者在进行情绪训练过程中的训练行为数据,并将基于训练行为数据生成的训练者的训练提示信息进行输出,其中,训练提示信息表征训练者在进行情绪训练过程中的训练状态,解决了情绪训练过程中训练者的专注度降低影响训练效果的问题,实现了训练过程中对训练状态进行实时反馈的目的,提高了情绪训练系统的训练效果。[97] The technical solution of this embodiment is to set up a behavior analysis device in the training analysis system to collect the training behavior data of the trainee during the emotional training process, and output the training prompt information of the trainee generated based on the training behavior data, wherein the training prompt information represents the training status of the trainee during the emotional training process, solves the problem that the trainee's concentration is reduced during the emotional training process and affects the training effect, realizes the purpose of real-time feedback on the training status during the training process, and improves the training effect of the emotional training system.
[98]图7为本发明一个实施例所提供的另一种情绪训练系统的结构示意图。本实施例对上述实施例中的视觉呈现系统100进行进一步细化。[98] Figure 7 is a schematic diagram of the structure of another emotion training system provided by an embodiment of the present invention. This embodiment further refines the visual presentation system 100 in the above embodiment.
[99]如图7所示,视觉呈现系统100还包括选择模块103,选择模块103,用于响应于检测到与训练选择控件对应的选择指令,将与训练选择控件对应的训练刺激信号组添加到视觉刺激信号组中,并生成训练触发指令;其中,训练选择控件包括眼跳选择控件和/或追踪选择控件,训练刺激信号组为眼跳刺激信号组或追踪刺激信号组。[99] As shown in FIG7 , the visual presentation system 100 further includes a selection module 103, which is used to, in response to detecting a selection instruction corresponding to a training selection control, add a training stimulus signal group corresponding to the training selection control to the visual stimulus signal group and generate a training trigger instruction; wherein the training selection control includes a saccade selection control and/or a tracking selection control, and the training stimulus signal group is a saccade stimulus signal group or a tracking stimulus signal group.
[100]在一个实施例中,响应于检测到训练者基于选择模块103的显示界面上的训练选择控件输入的触发操作,生成与训练选择控件对应的选择指令。其中,具体的,一次触发操作只能选择一个训练选择控件,相应的,在本实施例中,视觉刺激信号组包括眼跳刺激信号组或追踪刺激信号组。[100] In one embodiment, in response to detecting a trigger operation input by a trainee based on a training selection control on the display interface of the selection module 103, a selection instruction corresponding to the training selection control is generated. Specifically, only one training selection control can be selected in one trigger operation. Accordingly, in this embodiment, the visual stimulation signal group includes a saccade stimulation signal group or a pursuit stimulation signal group.
[101] FIG. 8 is a rendering of a display interface of a selection module provided by an embodiment of the present invention. Specifically, "saccade training task" and "tracking training task" in FIG. 8 each represent a training selection control. In this embodiment, the training selection controls are command controls that respond to a single-click trigger operation by the trainee.
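As an illustrative sketch only, the single-select behaviour of the command controls described in paragraphs [100]–[101] could be modelled as follows in Python; the class name, enum values, and the string used as a stand-in for the training trigger instruction are assumptions, not drawn from the embodiment.

```python
from enum import Enum, auto
from typing import List

class StimulusGroup(Enum):
    SACCADE = auto()    # saccade stimulation signal group
    TRACKING = auto()   # tracking stimulation signal group

class SingleSelectModule:
    """Command-control variant: one click on a training selection control both
    selects exactly one stimulation signal group and fires the trigger instruction."""

    def __init__(self) -> None:
        self.visual_stimulus_group: List[StimulusGroup] = []

    def on_control_clicked(self, control: StimulusGroup) -> str:
        # A single trigger operation can select only one training selection control,
        # so the visual stimulation signal group holds exactly one entry.
        self.visual_stimulus_group = [control]
        return "TRAINING_TRIGGER"  # stand-in for the training trigger instruction

if __name__ == "__main__":
    module = SingleSelectModule()
    print(module.on_control_clicked(StimulusGroup.SACCADE), module.visual_stimulus_group)
```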
[102] In another embodiment, in response to detecting a trigger operation input by the trainee on a training trigger control on the display interface of the selection module 103, a selection instruction is generated based on the training selection controls that are in the selected state. Specifically, before the trigger operation on the training trigger control, the trainee may first input selection operations on the training selection controls on the display interface of the selection module 103; the control state of each training selection control corresponding to a selection operation is then set to selected. In this embodiment, the visual stimulation signal group includes the saccade stimulation signal group and/or the tracking stimulation signal group.
[103] FIG. 9 is a rendering of the display interface of another selection module provided by an embodiment of the present invention. Specifically, "saccade training task" and "tracking training task" in FIG. 9 each represent a training selection control, and "start training" represents a training trigger control. The training trigger control is a command control that responds to a single-click trigger operation by the trainee; the training selection controls are checkbox controls with two control states, selected and unselected.
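For comparison, here is a minimal sketch of the checkbox-based embodiment of paragraphs [102]–[103], again with assumed names and an assumed placeholder for the training trigger instruction. Selections accumulate until the "start training" control is clicked, so the resulting visual stimulation signal group may contain one or both groups, matching the "and/or" wording above.

```python
from enum import Enum, auto
from typing import Dict, List, Optional

class StimulusGroup(Enum):
    SACCADE = auto()    # saccade stimulation signal group
    TRACKING = auto()   # tracking stimulation signal group

class CheckboxSelectModule:
    """Checkbox variant: training selection controls toggle between selected and
    unselected, and the 'start training' trigger control collects every selected
    group before firing the trigger instruction."""

    def __init__(self) -> None:
        self.selected: Dict[StimulusGroup, bool] = {g: False for g in StimulusGroup}
        self.visual_stimulus_group: List[StimulusGroup] = []

    def on_checkbox_toggled(self, control: StimulusGroup) -> None:
        self.selected[control] = not self.selected[control]

    def on_start_training_clicked(self) -> Optional[str]:
        chosen = [g for g, is_on in self.selected.items() if is_on]
        if not chosen:
            return None  # nothing selected, so no trigger instruction is generated
        self.visual_stimulus_group = chosen
        return "TRAINING_TRIGGER"  # stand-in for the training trigger instruction

if __name__ == "__main__":
    ui = CheckboxSelectModule()
    ui.on_checkbox_toggled(StimulusGroup.SACCADE)
    ui.on_checkbox_toggled(StimulusGroup.TRACKING)
    print(ui.on_start_training_clicked(), ui.visual_stimulus_group)
```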
[104] In the technical solution of this embodiment, a selection module is provided in the visual presentation system. The selection module is configured to, in response to detecting a selection instruction corresponding to a training selection control, add the training stimulation signal group corresponding to that control to the visual stimulation signal group and generate a training trigger instruction, where the training selection control includes a saccade selection control and/or a tracking selection control, and the training stimulation signal group is a saccade stimulation signal group or a tracking stimulation signal group. This addresses the problem that the emotion training system could not offer a customizable choice of visual stimulation signal groups, allows the trainee to personalize the visual stimulation signal group used for emotion training, and further improves the flexibility and pertinence of the emotion training system.
[105] FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention. The electronic device 10 may be configured as a functional device of the visual presentation system 100 or a functional device of the training analysis system 110 in the embodiments of the present invention.
[106] Specifically, the electronic device 10 is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The components shown in the embodiments of the present invention, their connections and relationships, and their functions are merely examples and are not intended to limit the implementations of the present invention described and/or claimed herein.
[107] As shown in FIG. 10, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor 11, and the processor 11 can perform various appropriate actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to one another via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
[108] A number of components of the electronic device 10 are connected to the I/O interface 15, including: an input unit 16, such as a keyboard or a mouse; an output unit 17, such as various types of displays and speakers; a storage unit 18, such as a magnetic disk or an optical disc; and a communication unit 19, such as a network card, a modem, or a wireless communication transceiver. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
[109] The processor 11 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, a digital signal processor (DSP), and any other appropriate processor, controller, or microcontroller. The processor 11 executes the methods and processes described above, such as the emotion training method of the above embodiments.
[110] In some embodiments, the emotion training method described above may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19.
[111] Various implementations of the systems and techniques described herein may be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
[112] It should be noted that, in the above embodiments of the emotion training system, the units, modules, and devices included are divided only according to functional logic; the division is not limited to the above, as long as the corresponding functions can be achieved. In addition, the specific names of the functional units are only for ease of distinguishing them from one another and are not intended to limit the protection scope of the present invention.
Claims (12)
- An emotion training system, comprising a visual presentation system and a training analysis system, wherein the visual presentation system comprises a scale display module and a stimulation presentation module; the scale display module is configured to display a preset emotion scale to a trainee before training starts and after training ends, and to send the acquired pre-training scale data and post-training scale data, respectively, to the training analysis system; the stimulation presentation module is configured to, in response to detecting a training trigger instruction, display each visual stimulation signal in a visual stimulation signal group to the trainee in sequence, wherein the visual stimulation signal group is used to instruct the trainee to perform upward eye movement training; and the training analysis system comprises a scale analysis module configured to determine an emotion training result based on the received pre-training scale data and post-training scale data and to output the emotion training result.
- The emotion training system according to claim 1, wherein the visual stimulation signal group comprises a saccade stimulation signal group and/or a tracking stimulation signal group, the saccade stimulation signal group indicating that the trainee performs an upward saccade task, and the tracking stimulation signal group indicating that the trainee performs an upward eye-movement tracking task.
- The emotion training system according to claim 2, wherein each visual stimulation signal in the saccade stimulation signal group comprises a saccade fixation point stimulation signal, a blank-screen stimulation signal, and a saccade graphic stimulation signal; wherein the graphic position of the saccade stimulation graphic in the saccade graphic stimulation signal is higher than the center position of the fixation point in the saccade fixation point stimulation signal, the vertical visual angle between the graphic position and the center position is greater than or equal to a first angle threshold, and the emotion level corresponding to the saccade stimulation graphic is neutral or positive.
- The emotion training system according to claim 2, wherein the tracking stimulation signal group comprises a first tracking stimulation signal group, the first tracking stimulation signal group comprises a tracking fixation point stimulation signal and a first tracking stimulation signal, the first tracking stimulation signal indicates that a first tracking stimulation graphic moves at a uniform speed along a preset upward trajectory, the vertical visual angle between the starting graphic position and the ending graphic position of the first tracking stimulation graphic is greater than or equal to a second angle threshold, and the emotion level corresponding to the first tracking stimulation graphic is negative, neutral, or positive.
- The emotion training system according to claim 2, wherein the tracking stimulation signal group comprises a second tracking stimulation signal group, the second tracking stimulation signal group comprises a tracking fixation point stimulation signal and a second tracking stimulation signal, and the second tracking stimulation signal indicates that, while a first tracking stimulation graphic moves at a uniform speed along a preset upward trajectory, the first tracking stimulation graphic is replaced, based on a preset replacement duration, by a second tracking stimulation graphic that continues to move at a uniform speed along the preset upward trajectory; the vertical visual angle between the starting graphic position and the ending graphic position of the first tracking stimulation graphic is greater than or equal to a third angle threshold, the emotion level corresponding to the first tracking stimulation graphic is negative or neutral, and the emotion level corresponding to the second tracking stimulation graphic is higher than that of the first tracking stimulation graphic.
- The emotion training system according to claim 2, wherein the training analysis system further comprises a behavior analysis device configured to collect training behavior data of the trainee during emotion training and to output training prompt information of the trainee generated based on the training behavior data, wherein the training prompt information characterizes the training state of the trainee during emotion training.
- The emotion training system according to claim 6, wherein the behavior analysis device comprises a key-press device and/or an eye-tracking device, and correspondingly the training behavior data comprises key-press response data and/or eye movement data.
- The emotion training system according to claim 7, wherein, when the visual stimulation signal group comprises the saccade stimulation signal group and the behavior analysis device comprises the key-press device, the saccade stimulation graphic in the saccade stimulation signal group is a face graphic, and the key-press response data represents the trainee's key-press response regarding the gender of the face corresponding to the face graphic.
- The emotion training system according to claim 8, wherein, when the visual stimulation signal group comprises the tracking stimulation signal group, the tracking stimulation signal group comprises a first tracking stimulation graphic group and a second tracking stimulation graphic group, and the behavior analysis device comprises the key-press device, the key-press response data represents the trainee's key-press response regarding whether the first tracking stimulation graphic moving at a uniform speed along the preset upward trajectory has been replaced.
- The emotion training system according to claim 8 or 9, wherein the key-press device is configured to determine response parameters of the trainee based on the key-press response data and to generate the training prompt information of the trainee based on the response parameters, wherein the response parameters comprise a response accuracy rate and/or a target response duration.
- The emotion training system according to claim 7, wherein the eye-tracking device is configured to determine concentration parameters of the trainee based on the eye movement data and to generate the training prompt information of the trainee based on the concentration parameters, wherein the concentration parameters comprise at least one of fixation duration, eye movement direction, eye movement angle, eye movement distance, and eye movement speed.
- The emotion training system according to claim 2, wherein the visual presentation system further comprises a selection module configured to, in response to detecting a selection instruction corresponding to a training selection control, add the training stimulation signal group corresponding to the training selection control to the visual stimulation signal group and generate a training trigger instruction, wherein the training selection control comprises a saccade selection control and/or a tracking selection control, and the training stimulation signal group is a saccade stimulation signal group or a tracking stimulation signal group.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211444503.9A (published as CN115671489A) | 2022-11-18 | 2022-11-18 | Emotion training system |
CN202211444503.9 | 2022-11-18 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024103464A1 | 2024-05-23 |
Family
ID=85053045
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/137731 (published as WO2024103464A1) | Emotion training system | 2022-11-18 | 2022-12-08 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115671489A (en) |
WO (1) | WO2024103464A1 (en) |
2022
- 2022-11-18: CN application CN202211444503.9A filed (publication CN115671489A; status: active, pending)
- 2022-12-08: PCT application PCT/CN2022/137731 filed (publication WO2024103464A1; status: unknown)
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220313083A1 (en) * | 2015-10-09 | 2022-10-06 | Senseye, Inc. | Cognitive, emotional, mental and psychological diagnostic engine via the eye |
US20170352283A1 (en) * | 2016-06-07 | 2017-12-07 | Cerekinetic, Inc. | Self-administered evaluation and training method to improve mental state |
CN106843500A (en) * | 2017-02-27 | 2017-06-13 | 南通大学 | Human-subject test rehabilitation training system based on the dynamic tracer technique of eye |
CN107519622A (en) * | 2017-08-21 | 2017-12-29 | 南通大学 | Spatial cognition rehabilitation training system and method based on virtual reality and the dynamic tracking of eye |
CN110302459A (en) * | 2019-08-09 | 2019-10-08 | 丹阳慧创医疗设备有限公司 | Emotion regulation and control training method, device, equipment and system |
CN112535479A (en) * | 2020-12-04 | 2021-03-23 | 中国科学院深圳先进技术研究院 | Method for determining emotional processing tendency and related product |
CN115206492A (en) * | 2021-04-12 | 2022-10-18 | 中国科学院深圳先进技术研究院 | Emotion recognition capability self-adaptive training method and device based on eye movement feedback |
CN113611395A (en) * | 2021-08-09 | 2021-11-05 | 江苏嘉纳宝医疗科技有限公司 | Mental and psychological illness user auxiliary training method based on virtual reality technology |
CN115249379A (en) * | 2021-11-05 | 2022-10-28 | 上海外国语大学 | Plane advertisement evaluation method based on event-related potential and eye movement tracking technology |
Also Published As
Publication number | Publication date |
---|---|
CN115671489A (en) | 2023-02-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180308376A1 (en) | Adaptive learning environment driven by real-time identification of engagement level | |
US20220293007A1 (en) | Computing technologies for diagnosis and therapy of language-related disorders | |
Kim et al. | Towards emotionally aware AI smart classroom: Current issues and directions for engineering and education | |
US10192456B2 (en) | Stimulating online discussion in interactive learning environments | |
TWI713000B (en) | Online learning assistance method, system, equipment and computer readable recording medium | |
US20190147760A1 (en) | Cognitive content customization | |
US20200135050A1 (en) | Internet of things public speaking coach | |
US12052299B2 (en) | System and method to improve video conferencing using presence metrics | |
US20220309948A1 (en) | Systems and methods to measure and enhance human engagement and cognition | |
JP7492294B2 (en) | Learning system, learning lecture delivery method, and program | |
CN116018789A (en) | Method, system and medium for context-based assessment of student attention in online learning | |
KR20220061384A (en) | Apparatus and method for detecting learners' participation in an untact online class | |
JP6777999B2 (en) | Programs, information processing methods, and server equipment | |
WO2024103464A1 (en) | Emotion training system | |
US11276420B2 (en) | Interaction system, apparatus, and non-transitory computer readable storage medium | |
JP7529135B2 (en) | Analytical device, analytical method, and program | |
Mehigan et al. | Engaging learners through emotion in Artificially Intelligent environments | |
KR102383457B1 (en) | Active artificial intelligence tutoring system that support teaching and learning and method for controlling the same | |
US20220343785A1 (en) | Learning support system | |
JP2022056108A (en) | Information processing device, information processing method, information processing program, and information processing system | |
MežA et al. | Towards automatic real-time estimation of observed learner’s attention using psychophysiological and affective signals: The touch-typing study case | |
CN116721569B (en) | Concentration training method, concentration training device, electronic equipment and medium | |
CN113469848B (en) | Teaching method, device, equipment and storage medium for mobile terminal | |
WO2024116280A1 (en) | Information processing device, determination method, and storage medium | |
US20240355225A1 (en) | Computing technologies for diagnosis and therapy of language-related disorders |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22965642; Country of ref document: EP; Kind code of ref document: A1 |