CN115671489A - Emotion training system - Google Patents
- Publication number
- CN115671489A CN115671489A CN202211444503.9A CN202211444503A CN115671489A CN 115671489 A CN115671489 A CN 115671489A CN 202211444503 A CN202211444503 A CN 202211444503A CN 115671489 A CN115671489 A CN 115671489A
- Authority
- CN
- China
- Prior art keywords
- training
- tracking
- stimulation
- emotion
- trainer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/113—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
Abstract
The invention discloses an emotion training system comprising a visual presentation system and a training analysis system. A scale display module in the visual presentation system displays a preset emotion scale to the trainer before training begins and again after training ends, and sends the resulting pre-training and post-training scale data to the training analysis system. A stimulation presentation module in the visual presentation system, in response to detecting a training trigger instruction, displays each visual stimulation signal in a visual stimulation signal group to the trainer in sequence; the visual stimulation signal group instructs the trainer to perform upward eye-movement training. A scale analysis module in the training analysis system outputs an emotion training result determined from the received pre-training and post-training scale data. The emotion training system provided by the invention can improve emotion level, makes emotion training flexible, and reduces time and labor costs.
Description
Technical Field
The invention relates to the technical field of medical instruments, in particular to an emotion training system.
Background
Emotion is a general term for a series of subjective cognitive experiences: a person's attitudinal experience of objective things and the corresponding behavioral responses. It is generally regarded as a psychological activity mediated by individual desires and needs, and as an affective state that influences behavior and psychological change. Long-term negative mood is a high-risk factor for numerous psychiatric disorders, such as depression and anxiety.
Most prior-art research on emotion focuses on emotion recognition; studies aimed at emotion improvement are lacking. Current training methods for improving emotion rely mainly on psychological education, lectures, and the like, which require the participation of experienced professionals and therefore carry high time and labor costs.
Disclosure of Invention
The embodiment of the invention provides an emotion training system, which aims to solve the prior-art problem that the emotion training process requires the guidance and participation of a professional, to make emotion training flexible, and to reduce time and labor costs.
According to an embodiment of the present invention, there is provided an emotion training system including: a visual presentation system and a training analysis system; wherein the visual presentation system comprises a scale display module and a stimulation presentation module;
the scale display module is used for respectively displaying a preset emotion scale to a trainer before training starts and after training ends, and respectively sending the obtained scale data before training and the obtained scale data after training to the training analysis system;
the stimulation presentation module is used for responding to the detected training trigger instruction and sequentially displaying each visual stimulation signal in the visual stimulation signal group to the trainer; wherein the visual stimulation signal group is used to instruct the trainer to perform upward eye-movement training;
the training and analyzing system comprises a scale analyzing module used for determining an emotion training result based on the received scale data before training and the scale data after training and outputting the emotion training result.
In an optional embodiment, the visual stimulation signal group comprises an eye jump stimulation signal group instructing the trainer to perform an upward eye jump task and/or a tracking stimulation signal group instructing the trainer to perform an upward eye-movement tracking task.
In an optional embodiment, each visual stimulation signal in the eye jump stimulation signal group comprises an eye jump fixation point stimulation signal, a blank screen stimulation signal and an eye jump pattern stimulation signal; the pattern position of the eye jump stimulation pattern in the eye jump pattern stimulation signal is higher than the central position of the fixation point in the eye jump fixation point stimulation signal, the position vertical angle corresponding to the pattern position and the central position is greater than or equal to a first angle threshold, and the emotion level corresponding to the eye jump stimulation pattern is neutral or positive.
In an optional embodiment, the set of tracking stimulation signals includes a first set of tracking stimulation signals including a tracking point of regard stimulation signal and a first tracking stimulation signal, the first tracking stimulation signal is characterized in that the first tracking stimulation pattern moves at a constant speed along a preset upward trajectory, a vertical angle of a position corresponding to a start pattern position and an end pattern position of the first tracking stimulation pattern is greater than or equal to a second angle threshold, and an emotion level corresponding to the first tracking stimulation pattern is negative, neutral or positive.
In an optional embodiment, the tracking stimulation signal group includes a second tracking stimulation signal group comprising a tracking fixation point stimulation signal and a second tracking stimulation signal. In the second tracking stimulation signal, while the first tracking stimulation pattern moves at a constant speed along the preset upward trajectory, it is replaced, after a preset replacement duration, by a second tracking stimulation pattern that continues to move at a constant speed along the same trajectory. The position vertical angle corresponding to the start pattern position and the end pattern position of the first tracking stimulation pattern is greater than or equal to a third angle threshold, the emotion level corresponding to the first tracking stimulation pattern is negative or neutral, and the emotion level corresponding to the second tracking stimulation pattern is higher than that of the first tracking stimulation pattern.
In an optional embodiment, the training analysis system further includes a behavior analysis device, and the behavior analysis device is configured to collect training behavior data of the trainer in an emotional training process, and output training prompt information of the trainer, which is generated based on the training behavior data; the training prompt information represents the training state of the trainer in the emotion training process.
In an alternative embodiment, the behavior analysis means comprises a key device and/or an eye-tracking device, and the training behavior data comprises key response data and/or eye movement data, respectively.
In an optional embodiment, when the visual stimulation signal group includes an eye jump stimulation signal group and the behavior analysis device includes a key device, the eye jump stimulation patterns in the eye jump stimulation signal group are face patterns, and the key response data characterize the key responses with which the trainer inputs the gender of the face corresponding to each face pattern.
In an optional embodiment, when the visual stimulation signal group includes a tracking stimulation signal group containing a first tracking stimulation signal group and a second tracking stimulation signal group, and the behavior analysis device includes a key device, the key response data characterize the key response input by the trainer indicating whether the first tracking stimulation pattern moving at a constant speed along the preset upward trajectory has been replaced.
In an optional embodiment, the key device is configured to determine a response parameter of the trainer based on the key response data, and generate training prompt information of the trainer based on the response parameter; wherein the response parameters comprise a response correct rate and/or a target response duration.
In an optional embodiment, the eye tracking device is configured to determine a concentration parameter of the trainer based on the eye movement data, and generate training prompt information of the trainer based on the concentration parameter; wherein the concentration parameter includes at least one of gaze duration, eye movement direction, eye movement angle, eye movement distance, and eye movement speed.
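As a simple illustration of how such concentration parameters might be derived from raw gaze samples (a Python sketch; the patent does not specify the eye-tracking device's computation, so the sample format and formulas here are assumptions, with y increasing upward):

```python
import math

def eye_movement_metrics(sample_a, sample_b):
    """Eye movement direction, distance and speed between two gaze
    samples, each given as (x_cm, y_cm, t_ms). Illustrative only."""
    (x0, y0, t0), (x1, y1, t1) = sample_a, sample_b
    dx, dy = x1 - x0, y1 - y0
    distance = math.hypot(dx, dy)            # cm
    speed = distance / ((t1 - t0) / 1000.0)  # cm/s
    direction = "up" if dy > 0 else ("down" if dy < 0 else "horizontal")
    return direction, distance, speed

# A purely upward 6 cm gaze movement over 2 s:
direction, distance, speed = eye_movement_metrics((0, 0, 0), (0, 6, 2000))
```

Such per-sample metrics could then be compared against thresholds to decide whether to generate training prompt information.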
In one optional embodiment, the visual presentation system further comprises a selection module for, in response to detecting a selection instruction corresponding to a training selection control, adding a set of training stimulation signals corresponding to the training selection control to the set of visual stimulation signals and generating a training trigger instruction; the training selection control comprises an eye jump selection control and/or a tracking selection control, and the training stimulation signal group is an eye jump stimulation signal group or a tracking stimulation signal group.
According to the technical scheme of the embodiment, a visual presentation system and a training analysis system are provided. The visual presentation system comprises a scale display module and a stimulation presentation module: the scale display module displays a preset emotion scale to the trainer before training begins and after training ends, and sends the obtained pre-training and post-training scale data, respectively, to the training analysis system; the stimulation presentation module, in response to detecting a training trigger instruction, displays each visual stimulation signal in the visual stimulation signal group to the trainer in sequence, the visual stimulation signal group instructing the trainer to perform upward eye-movement training. The training analysis system comprises a scale analysis module that determines an emotion training result based on the received pre-training and post-training scale data and outputs the result. This solves the prior-art problem that emotion training requires the guidance and participation of a professional, makes emotion training flexible, and reduces time and labor costs.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of an emotion training system according to an embodiment of the present invention;
FIG. 2 is a schematic view of a vertical angle position according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an eye jump stimulation signal set according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a first tracking stimulation signal set according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a second set of tracking stimulation signals, according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another emotion training system provided in an embodiment of the present invention;
FIG. 7 is a schematic diagram of another emotion training system provided in an embodiment of the present invention;
FIG. 8 is a diagram illustrating an effect of a display interface of a selection module according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating an effect of a display interface of another selection module according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a schematic structural diagram of an emotion training system according to an embodiment of the present invention. The system is applicable to training the emotion of a trainer and can be implemented in software and/or hardware.
In this embodiment, the emotion training system includes: a visual presentation system 100 and a training analysis system 110, wherein the visual presentation system 100 comprises a scale display module 101 and a stimulation presentation module 102. The scale display module 101 is used for displaying a preset emotion scale to the trainer before training starts and after training ends, and for sending the obtained pre-training scale data and post-training scale data, respectively, to the training analysis system 110. The stimulation presentation module 102 is configured to sequentially display, in response to detecting a training trigger instruction, each visual stimulation signal in the visual stimulation signal group to the trainer; the visual stimulation signal group is used to instruct the trainer to perform upward eye-movement training. The training analysis system 110 comprises a scale analysis module 111 for determining the emotion training result based on the received pre-training and post-training scale data, and outputting the emotion training result.
Illustratively, the visual presentation system 100 may consist of a host computer, a high-refresh-rate (e.g., 144 Hz) liquid crystal display, and psychology visual presentation software. The software may be Matlab together with the Psychtoolbox toolkit: stimulation signals are programmed with Psychtoolbox in Matlab and presented to the trainer through the liquid crystal display, and the stimulus brightness can be accurately measured with a photometer (unit: cd/cm²).
Specifically, the scale display module 101 is configured to display the preset emotion scale to the trainer before performing emotion training, send pre-training scale data input by the trainer based on the preset emotion scale to the training analysis system 110, display the preset emotion scale to the trainer again after performing emotion training by the trainer, and send post-training scale data input by the trainer based on the preset emotion scale to the training analysis system 110.
Specifically, one or more preset emotion scales may be provided. Exemplary preset emotion scales include, but are not limited to, the Positive and Negative Affect Scale, the Hamilton Depression Scale, the Hamilton Anxiety Scale, and the Depression-Anxiety-Stress Scale (DASS-21); the preset emotion scale employed is not limited here.
Exemplary pre-training or post-training scale data include, but are not limited to, the score of each item in the preset emotion scale, the total score of all items, the rank of each item, the total rank of all items, and the like.
Specifically, the visual stimulation signal group is set by programming Psychtoolbox in Matlab and comprises a plurality of visual stimulation signals. In an optional embodiment, the stimulation presentation module 102 is specifically configured to, in response to detecting the training trigger instruction, display the visual stimulation signals to the trainer in sequence, each for its corresponding presentation duration. The presentation durations corresponding to the visual stimulation signals may be the same or different.
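The sequential presentation described above can be sketched as follows (a Python illustration only; the patent's actual implementation uses Matlab/Psychtoolbox, and the signal names, dictionary layout, and display callback are assumptions):

```python
# Hypothetical sketch of per-signal sequential presentation; the real
# system renders stimuli with Psychtoolbox in Matlab. Signal names and
# durations below are placeholders based on the examples in the text.

def present_signals(signal_group, display_fn):
    """Show each visual stimulation signal for its own presentation duration."""
    for signal in signal_group:
        display_fn(signal["name"], signal["duration_ms"])

# An eye jump stimulation signal group: fixation point, blank screen, pattern.
eye_jump_group = [
    {"name": "fixation_point", "duration_ms": 500},
    {"name": "blank_screen", "duration_ms": 500},
    {"name": "eye_jump_pattern", "duration_ms": 1000},
]

shown = []  # record what was displayed, in order
present_signals(eye_jump_group, lambda name, ms: shown.append((name, ms)))
```

In a real deployment `display_fn` would draw to the screen and block for the signal's duration; here it simply records the presentation order.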
In an optional embodiment, the visual stimulation signal group comprises an eye jump stimulation signal group instructing the trainer to perform an upward eye jump task and/or a tracking stimulation signal group instructing the trainer to perform an upward eye-movement tracking task.
Specifically, the visual stimulation signal group includes a plurality of eye jump stimulation signal groups and/or a plurality of tracking stimulation signal groups. The training duration corresponding to the visual stimulation signal group may be, for example, 10 min or 15 min; the training duration is not limited here.
The emotion training result includes, but is not limited to, an increase or decrease in emotion level, an emotion level difference, and an emotion increase proportion. For example, if the emotion level corresponding to the pre-training scale data is 10 and the emotion level corresponding to the post-training scale data is 25, the emotion level has increased, the emotion level difference is 15, and the proportion of emotion increase is 150%.
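Using the numbers from the example above, the result computation reduces to the following (a minimal Python sketch; the function name and return format are assumptions, since the patent does not fix a scoring formula):

```python
def emotion_training_result(pre_level, post_level):
    """Summarize the change between the pre-training and post-training
    emotion levels: direction of change, level difference, and the
    increase expressed as a proportion of the pre-training level."""
    diff = post_level - pre_level
    ratio = diff / pre_level * 100.0  # percentage relative to pre-training level
    trend = "increased" if diff > 0 else ("decreased" if diff < 0 else "unchanged")
    return trend, diff, ratio

trend, diff, ratio = emotion_training_result(10, 25)  # "increased", 15, 150.0
```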
In an optional embodiment, the outputting the emotion training result comprises: the emotional training result is sent to the visual presentation system 100 so that the visual presentation system 100 outputs the received emotional training result.
In another optional embodiment, the outputting of the emotion training result comprises: and sending the emotion training result to the mobile terminal. The transmission form of the emotion training result includes, but is not limited to, short message, email, and telephone.
According to the technical scheme of this embodiment, a visual presentation system and a training analysis system are provided. The visual presentation system comprises a scale display module and a stimulation presentation module: the scale display module displays a preset emotion scale to the trainer before training starts and after training ends, and sends the obtained pre-training and post-training scale data, respectively, to the training analysis system; the stimulation presentation module, in response to detecting a training trigger instruction, sequentially displays each visual stimulation signal in the visual stimulation signal group to the trainer, the visual stimulation signal group instructing the trainer to perform upward eye-movement training. The training analysis system comprises a scale analysis module that determines an emotion training result based on the received pre-training and post-training scale data and outputs the result. This solves the prior-art problem that emotion training requires the guidance and participation of a professional, makes emotion training flexible, and reduces time and labor costs.
On the basis of the above embodiments, the embodiments of the present invention further refine the "eye jump stimulation signal group" and the "tracking stimulation signal group" in the above embodiments.
In an optional embodiment, each visual stimulation signal in the eye jump stimulation signal group comprises an eye jump fixation point stimulation signal, a blank screen stimulation signal and an eye jump pattern stimulation signal; the pattern position of the eye jump stimulation pattern in the eye jump pattern stimulation signal is higher than the central position of the fixation point in the eye jump fixation point stimulation signal, the position vertical angle corresponding to the pattern position and the central position is greater than or equal to a first angle threshold, and the emotion level corresponding to the eye jump stimulation pattern is neutral or positive.
Specifically, the presenting time lengths corresponding to the eye jump fixation point stimulation signal, the blank screen stimulation signal and the eye jump figure stimulation signal may be the same or different.
Specifically, the eye jump stimulation pattern may be selected from a first emotion pattern library. For example, the first emotion pattern library includes, but is not limited to, the Geneva emotion pattern library (GAPED), the KDEF emotion pattern library, the AKDEF emotion pattern library, the FER2013 emotion pattern library, the RaFD emotion pattern library, and the like; the first emotion pattern library is not limited here.
Specifically, the pattern position can be used for representing the pattern center position of the eye jump stimulation pattern. The head is kept stationary in the process of emotional training of the trainer, and the vertical angle of the position can be used for representing the angle of a first connecting line between the eyeball of the trainer and the central position and a second connecting line between the eyeball and the graphic position in the vertical direction. For example, the first angle threshold may be 12 °, and the first angle threshold is not limited herein.
FIG. 2 is a schematic diagram of a vertical angle position according to an embodiment of the present invention. Specifically, "eyes" on the left side in fig. 2 represent the positions of the eyeballs of the trainer during the mood training process, "a" represents a first line connecting the eyeballs with the center position, "B" represents a second line connecting the eyeballs with the figure position, and "α" represents a position vertical angle.
For example, assuming that the distance between the trainer's eyeball and the visual presentation system 100 is 30 cm and the first angle threshold is 12°, the vertical distance between the pattern position and the central position must be greater than or equal to about 6.38 cm. Specifically, the pattern position of the eye jump stimulation pattern may be directly above, above left, or above right of the fixation point; the pattern position of the eye jump stimulation pattern in the horizontal direction is not limited here.
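The geometry of Fig. 2 gives this minimum vertical offset directly (a Python sketch of the trigonometry; the function name is illustrative, and the eye is assumed level with the central position, as in the figure):

```python
import math

def min_vertical_offset(eye_distance_cm, angle_threshold_deg):
    """Smallest vertical distance between the pattern position and the
    central position at which the position vertical angle (alpha in
    Fig. 2) reaches the angle threshold."""
    return eye_distance_cm * math.tan(math.radians(angle_threshold_deg))

offset = min_vertical_offset(30, 12)  # ~6.38 cm, matching the worked example
```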
Fig. 3 is a schematic diagram of an eye jump stimulation signal group according to an embodiment of the invention. Specifically, the eye jump stimulation signal group includes a 500 ms eye jump fixation point stimulation signal, a 500 ms blank screen stimulation signal, and a 1000 ms eye jump pattern stimulation signal. Fig. 3 takes the case in which the pattern position of the eye jump stimulation pattern is directly above the fixation point as an example; it should be understood that the pattern position may lie anywhere in the upper display area whose position vertical angle is greater than or equal to the first angle threshold.
In an alternative embodiment, the set of tracking stimulation signals includes a first set of tracking stimulation signals including a tracking point of regard stimulation signal and a first tracking stimulation signal, the first tracking stimulation signal is characterized in that the first tracking stimulation pattern moves at a constant speed along a preset upward trajectory, a vertical angle of a position corresponding to a start pattern position and an end pattern position of the first tracking stimulation pattern is greater than or equal to a second angle threshold, and an emotion level corresponding to the first tracking stimulation pattern is negative, neutral or positive.
In particular, the first tracking stimulation pattern may be selected from a second emotion pattern library, which includes, but is not limited to, the Geneva emotion pattern library (GAPED), the KDEF emotion pattern library, the AKDEF emotion pattern library, the FER2013 emotion pattern library, the RaFD emotion pattern library, and the like; the second emotion pattern library is not limited here. The first emotion pattern library and the second emotion pattern library may be the same or different.
Specifically, the starting pattern position may be used to represent a position of a pattern center of the first tracking stimulation pattern at the starting point, and the ending pattern position may be used to represent a position of a pattern center of the first tracking stimulation pattern at the ending point. For example, the starting point may be a gaze point in the tracking gaze point stimulation signal, and the specific starting pattern position is not limited herein.
Specifically, the head is kept stationary during the emotional training of the trainer, and the vertical angle of the position can be used for representing the angle of a first connecting line between the eyeball of the trainer and the position of the initial graph and the angle of a second connecting line between the eyeball and the position of the final graph in the vertical direction. For example, the second angle threshold may be 12 °, and the second angle threshold is not limited herein. Specifically, the first angle threshold and the second angle threshold may be the same or different.
For example, assuming that the position distance between the start pattern position and the end pattern position of the first tracking stimulation pattern is 9cm, and the presentation time period of the first tracking stimulation signal is 3000ms, the moving speed of the first tracking stimulation pattern is 3cm/s.
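The constant moving speed in the example above follows directly from the position distance and the presentation duration; a minimal sketch (function name is hypothetical):

```python
def moving_speed_cm_per_s(distance_cm, duration_ms):
    # Constant speed of the tracking stimulation pattern: distance between
    # the start and end pattern positions divided by the presentation time.
    return distance_cm / (duration_ms / 1000.0)

# 9 cm traversed over a 3000 ms presentation -> 3 cm/s, as in the text
speed = moving_speed_cm_per_s(9.0, 3000)
```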
For example, the end pattern position of the first tracking stimulation pattern may be directly above, to the upper left of, or to the upper right of the gaze point, and the like. The position of the first tracking stimulation pattern in the horizontal direction is not limited herein.
Fig. 4 is a schematic diagram of a first tracking stimulation signal group according to an embodiment of the present invention. Specifically, the first tracking stimulation signal group includes a 500 ms tracking fixation point stimulation signal and a 3000 ms first tracking stimulation signal. Fig. 4 illustrates the case where the end pattern position of the first tracking stimulation pattern is directly above the gaze point; it can be understood that the end pattern position may be any position in the upper display region whose position vertical angle is greater than or equal to the second angle threshold.
In an optional embodiment, the tracking stimulation signal group includes a second tracking stimulation signal group, the second tracking stimulation signal group including a tracking fixation point stimulation signal and a second tracking stimulation signal. The second tracking stimulation signal is characterized in that, while the first tracking stimulation pattern moves at a constant speed along the preset upward trajectory, the first tracking stimulation pattern is replaced, for a preset replacement duration, by a second tracking stimulation pattern that continues to move at a constant speed along the preset upward trajectory. The position vertical angle between the start pattern position and the end pattern position of the first tracking stimulation pattern is greater than or equal to a third angle threshold, the emotion level corresponding to the first tracking stimulation pattern is negative or neutral, and the emotion level corresponding to the second tracking stimulation pattern is higher than that of the first tracking stimulation pattern.
Specifically, the first tracking stimulation pattern may be selected from the second emotion pattern library, for example, but not limited to, the Geneva Affective Picture Database (GAPED), the KDEF emotion pattern library, the AKDEF emotion pattern library, the FER2013 emotion pattern library, the RaFD emotion pattern library, and the like; the second emotion pattern library is not limited herein.
Specifically, the second tracking stimulation pattern may be selected from a third emotion pattern library, for example, but not limited to, the Geneva Affective Picture Database (GAPED), the KDEF emotion pattern library, the AKDEF emotion pattern library, the FER2013 emotion pattern library, the RaFD emotion pattern library, and the like; the third emotion pattern library is not limited herein. Specifically, the first emotion pattern library, the second emotion pattern library and the third emotion pattern library may be the same or different.
Specifically, the time point at which the second tracking stimulation pattern replaces the first tracking stimulation pattern may be random; for example, the replacement may occur after the first tracking stimulation pattern has moved for 100 ms, or after it has moved for 500 ms. The preset replacement duration may be, for example, 100 ms or 200 ms; the preset replacement duration is not limited herein.
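A random replacement onset can be drawn so that the full preset replacement duration still fits inside the tracking stimulation signal's presentation. This is an illustrative sketch only; the function name and default values are hypothetical:

```python
import random

def replacement_window(total_ms=3000, replace_ms=100, min_onset_ms=100):
    # Choose a random onset for the second tracking stimulation pattern so
    # that the whole preset replacement duration (replace_ms) ends before
    # the tracking stimulation signal's presentation (total_ms) is over.
    onset = random.randint(min_onset_ms, total_ms - replace_ms)
    return onset, onset + replace_ms
```

For example, with the defaults the replacement may begin anywhere from 100 ms to 2900 ms into the signal.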
Specifically, when the emotion level corresponding to the first tracking stimulus pattern is negative, the emotion level corresponding to the second tracking stimulus pattern is neutral or positive, and when the emotion level corresponding to the first tracking stimulus pattern is neutral, the emotion level corresponding to the second tracking stimulus pattern is positive.
Fig. 5 is a schematic diagram of a second tracking stimulation signal group according to an embodiment of the present invention. Specifically, the second tracking stimulation signal group includes a 500 ms tracking fixation point stimulation signal and a 3000 ms second tracking stimulation signal, wherein the preset replacement duration for which the second tracking stimulation pattern replaces the first tracking stimulation pattern is 100 ms. Fig. 5 illustrates the case where the end pattern position of the first tracking stimulation pattern is directly above the gaze point; it can be understood that the end pattern position may be any position in the upper display region whose position vertical angle is greater than or equal to the third angle threshold.
On the basis of the above embodiment, the emotion training system further comprises a forehead bracket for supporting the trainer's head, the function of which is to keep the trainer's head stationary during emotion training.
Fig. 6 is a schematic structural diagram of another emotion training system according to an embodiment of the present invention. The embodiment of the present invention further refines the training analysis system 110 in the above embodiment.
As shown in fig. 6, the training analysis system 110 further includes a behavior analysis device 112, where the behavior analysis device 112 is configured to collect training behavior data of the trainer in the process of performing emotional training, and output training prompt information of the trainer generated based on the training behavior data; the training prompt information represents the training state of the trainer in the emotion training process.
In particular, the behavior analysis device 112 is activated in response to detecting the training trigger instruction. In an alternative embodiment, the behavior analysis means 112 comprises a key device and/or an eye-tracking device, and the training behavior data comprises key response data and/or eye movement data, respectively.
The key device may be, for example, a remote controller, a keyboard, a mouse, or other devices capable of implementing key functions. Specifically, the keystroke response data can be used for representing attribute data of keystroke behaviors input by the trainer based on the preset keystroke tasks. Illustratively, the key response data includes at least one key behavior signal and a signal response time duration corresponding to each key behavior signal.
The eye tracking device includes a camera unit for collecting eye movement data of a trainer, where the eye movement data includes an eye movement track, a gazing point position, and the like.
On the basis of the above embodiment, optionally, when the visual stimulation signal group includes at least two eye jump stimulation signal groups, and the behavior analysis device 112 includes a key device, each eye jump stimulation signal group includes eye jump stimulation patterns of two pattern types, and the key response data represents the key response input by the trainer for the pattern type corresponding to the eye jump stimulation pattern.
Illustratively, the two pattern types of the eye jump stimulation patterns may be landscape patterns and face patterns, cartoon patterns and face patterns, or landscape patterns and cartoon patterns, and the like. The two pattern types are not limited herein.
Taking a key device as a mouse as an example, assuming that the graphic types of the eye jump stimulation graphics include a landscape graphic and a face graphic, when the graphic type of the eye jump stimulation graphics in the current eye jump stimulation signal group is the landscape type, the trainer can press a left key of the mouse, and when the graphic type of the eye jump stimulation graphics in the current eye jump stimulation signal group is the face type, the trainer can press a right key of the mouse.
On the basis of the above embodiment, optionally, when the visual stimulation signal group includes an eye jump stimulation signal group and the behavior analysis device 112 includes a key device, the eye jump stimulation patterns in the eye jump stimulation signal group are face patterns, and the key response data represents key responses input by the trainer for face gender corresponding to the face patterns.
Taking a button device as an example of a mouse, when the face gender of the eye jump stimulation pattern in the current eye jump stimulation signal group is male, the trainer can press a left button of the mouse, and when the face gender of the eye jump stimulation pattern in the current eye jump stimulation signal group is female, the trainer can press a right button of the mouse.
On the basis of the above embodiment, optionally, when the visual stimulation signal group includes a tracking stimulation signal group, the tracking stimulation signal group includes a first tracking stimulation pattern group and a second tracking stimulation pattern group, and the behavior analysis device 112 includes a key device, the key response data represents a key response input by the trainer as to whether the first tracking stimulation pattern moving at a constant speed along the preset upward trajectory has been replaced.
Taking the button device as an example of a mouse, in the case that the first tracking stimulation pattern in the current tracking stimulation signal group is replaced, the trainer may press the left button of the mouse.
In an alternative embodiment, outputting the training prompt information of the trainer generated based on the training behavior data includes: determining training parameters of the trainer based on the training behavior data, generating the training prompt information based on a comparison result between the training parameters and a preset parameter range, and outputting the training prompt information.
Specifically, the training parameters include response parameters determined based on the key press behavior data and/or concentration parameters determined based on the eye movement data. Wherein the response parameter includes a response accuracy and/or a target response duration, and the concentration parameter includes at least one of a fixation duration, an eye movement direction, an eye movement angle, an eye movement distance, and an eye movement speed.
On the basis of the above embodiment, optionally, the key device is configured to determine a response parameter of the trainer based on the key response data, and generate training prompt information of the trainer based on the response parameter; wherein, the response parameter comprises the response correct rate and/or the target response duration.
Specifically, the key response data includes at least one key behavior signal and signal response durations corresponding to the key behavior signals, respectively.
In one embodiment, the key response data includes at least one key behavior signal. For each key behavior signal, if the key classification corresponding to the key behavior signal is the same as the true classification, the key behavior signal is marked as "correct"; if the key classification is different from the true classification, the key behavior signal is marked as "wrong". The ratio between the number of "correct" key behavior signals and the number of signal groups corresponding to the visual stimulation signal groups is used as the response accuracy rate. The key classification represents a classification of the pattern type, a classification of the face gender, or a replacement classification of the first tracking stimulation pattern.
For example, assume that the key behavior signal corresponds to an eye jump stimulation signal group, the trainer's key press task includes distinguishing the face gender of the eye jump stimulation pattern, and the total number of key presses represents the number of signal groups corresponding to the eye jump stimulation signal groups. If the key classification corresponding to a key behavior signal is male (left button) and the actual classification is male (left button), the key behavior signal is marked as "correct". Assume instead that the key behavior signal corresponds to a tracking stimulation signal group and the trainer's key press task includes judging whether the first tracking stimulation pattern has been replaced; here the total number of key presses represents the number of signal groups corresponding to the tracking stimulation signal groups. The key behavior signal is marked as "correct" if the key classification corresponding to the key behavior signal is replaced (press) and the actual classification is replaced (press).
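The response accuracy rate described above can be sketched as follows (an illustrative, non-limiting example; function and label names are hypothetical). Note that the denominator is the number of presented signal groups, so missed key presses lower the accuracy:

```python
def response_accuracy(key_presses, true_labels, num_signal_groups):
    # key_presses / true_labels: parallel lists of key classifications
    # (e.g. "left" for male, "right" for female); the ratio of "correct"
    # presses to the number of presented signal groups is the accuracy.
    correct = sum(1 for k, t in zip(key_presses, true_labels) if k == t)
    return correct / num_signal_groups

# 3 presses recorded for 4 presented groups, 2 of them correct -> 0.5
acc = response_accuracy(["left", "right", "left"],
                        ["left", "right", "right"], 4)
```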
In one embodiment, the key response data includes signal response durations respectively corresponding to the at least one key behavior signal, and the signal response durations may be used to represent time lengths between the occurrence time and the key press time of the eye jump stimulation pattern in the eye jump stimulation signal, or the signal response durations may be used to represent time lengths between the occurrence time and the key press time of the second tracking stimulation pattern in the second tracking stimulation signal group. Specifically, determining the response parameters of the trainer based on the key response data includes: and taking the average value of the signal response time length in the key response data as the target response time length of the trainer, or taking the average value of the signal response time length corresponding to the 'correct' key behavior signal in the key response data as the target response time length of the trainer.
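Either variant of the target response duration (mean over all key behavior signals, or over "correct" ones only) can be sketched as below; the function name and the tuple encoding are hypothetical:

```python
def target_response_duration(key_signals, correct_only=True):
    # key_signals: list of (is_correct, response_ms) pairs; returns the
    # mean signal response duration, optionally restricted to signals
    # marked "correct", or None if no signal qualifies.
    durations = [ms for ok, ms in key_signals if ok or not correct_only]
    return sum(durations) / len(durations) if durations else None

# mean over the two "correct" presses: (420 + 380) / 2 = 400.0 ms
rt = target_response_duration([(True, 420), (False, 900), (True, 380)])
```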
In an optional embodiment, the eye tracking device is used for determining a concentration parameter of the trainer based on the eye movement data and generating training prompt information of the trainer based on the concentration parameter; wherein the concentration parameter includes at least one of gaze duration, eye movement direction, eye movement angle, eye movement distance, and eye movement speed.
Specifically, the eye movement data includes an eye movement trajectory and a gaze position. The gaze duration may be used to represent the length of time for which the trainer's eye movement trajectory remains within the preset area where the gaze point is located during the presentation of the eye jump fixation point stimulation signal or the tracking fixation point stimulation signal; the eye movement direction may be used to represent the direction of the trainer's eyeball movement corresponding to the eye jump stimulation signal group or the tracking stimulation signal group; the eye movement angle may be used to represent the angle of the trainer's eyeball movement corresponding to the eye jump stimulation signal group or the tracking stimulation signal group; the eye movement distance may be used to represent the distance of the trainer's eyeball movement corresponding to the eye jump stimulation signal group; and the eye movement speed may be used to represent the speed of the trainer's eyeball movement corresponding to the tracking stimulation signal group.
Specifically, generating the training prompt information of the trainer based on the concentration parameter includes: comparing the concentration parameter with a preset parameter range, and generating the training prompt information of the trainer based on the comparison result. The preset parameter range corresponding to the gaze duration may be determined based on the presentation duration of the eye jump fixation point stimulation signal or the tracking fixation point stimulation signal, the preset parameter range corresponding to the eye movement angle may be determined based on the position vertical angle, and the preset parameter range corresponding to the eye movement speed may be determined based on the moving speed of the first tracking stimulation pattern.
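The comparison against preset parameter ranges may be sketched as below (an illustrative example only; the parameter names, range values, and function name are hypothetical):

```python
def out_of_range_parameters(concentration_params, preset_ranges):
    # concentration_params: measured values, e.g. {"gaze_duration_ms": 320}
    # preset_ranges: {name: (low, high)} inclusive bounds; returns the
    # names of parameters outside their preset range, for which training
    # prompt information representing a poor training state is generated.
    return [name for name, value in concentration_params.items()
            if name in preset_ranges
            and not (preset_ranges[name][0] <= value <= preset_ranges[name][1])]

flagged = out_of_range_parameters(
    {"gaze_duration_ms": 320, "eye_angle_deg": 12.4},
    {"gaze_duration_ms": (400, 500), "eye_angle_deg": (12.0, 90.0)})
```

Here only the gaze duration falls outside its range, so only that parameter would trigger a prompt.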
Specifically, if the training parameter satisfies the preset parameter range, the generated training prompt information represents a good training state; if the training parameter does not satisfy the preset parameter range, the generated training prompt information represents a poor training state.
On the basis of the above embodiment, optionally, outputting the training prompt information includes: outputting the training prompt information in the case that the training parameter does not satisfy the preset parameter range. In this embodiment, the output training prompt information is the training prompt information representing a poor training state. The advantage of this arrangement is that it avoids continuously outputting training prompt information throughout emotion training, which would affect the trainer's training state.
In an optional embodiment, the outputting the training prompt information includes: the training prompt information is sent to the visual presentation system 100 so that the visual presentation system 100 outputs the received training prompt information. In this embodiment, the information type of the training prompt information is text information or color information. Illustratively, the text information is "poor training status, please note" and the color information is red.
In another alternative embodiment, the output form of the training prompt information includes, but is not limited to, an audible prompt or an indicator light prompt. For example, the sound prompt may be a voice prompt, such as "poor training status please note," the sound prompt may also be a warning tone, such as a warning tone with a higher output frequency, and the indicator light prompt may be a warning light with a higher flashing frequency. The specific output form of the training prompt is not limited here.
In the technical solution of this embodiment, a behavior analysis device is provided in the training analysis system to collect training behavior data of the trainer during emotion training and to output training prompt information of the trainer generated based on the training behavior data, wherein the training prompt information represents the training state of the trainer during emotion training. This solves the problem that a decrease in the trainer's concentration during emotion training affects the training effect, achieves real-time feedback on the training state during training, and improves the training effect of the emotion training system.
Fig. 7 is a schematic structural diagram of another emotion training system according to an embodiment of the present invention. The present embodiment further refines the visual presentation system 100 in the above-described embodiment.
As shown in fig. 7, the visual presentation system 100 further comprises a selection module 103, the selection module 103 being configured to, in response to detecting a selection instruction corresponding to a training selection control, add a set of training stimulus signals corresponding to the training selection control to the set of visual stimulus signals and generate a training trigger instruction; the training selection control comprises an eye jump selection control and/or a tracking selection control, and the training stimulation signal group is an eye jump stimulation signal group or a tracking stimulation signal group.
In one embodiment, in response to detecting a trigger operation input by the trainer based on a training selection control on the display interface of the selection module 103, a selection instruction corresponding to the training selection control is generated. Specifically, only one training selection control can be selected by one trigger operation, and correspondingly, in this embodiment, the visual stimulation signal group includes an eye jump stimulation signal group or a tracking stimulation signal group.
Fig. 8 is an effect diagram of a display interface of a selection module according to an embodiment of the present invention. Specifically, the "eye jump training task" and the "tracking training task" in Fig. 8 respectively represent training selection controls; in this embodiment, the training selection controls belong to command controls and are triggered in response to a single click by the trainer.
In another embodiment, in response to detecting a trigger operation input by the trainer based on the training trigger control on the display interface of the selection module 103, the selection instruction is generated based on the training selection control in the selected state. Specifically, before the trigger operation based on the training trigger control input, the trainer may input the selection operation based on the training selection control in the display interface of the selection module 103 in advance, and correspondingly, the control state of the training selection control corresponding to the selection operation is set to the selected state. In this embodiment, the set of visual stimulation signals comprises a set of eye jump stimulation signals and/or a set of tracking stimulation signals.
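The check-box style selection described above, where the visual stimulation signal group is assembled from whichever training selection controls are in the selected state, may be sketched as follows (function name, control names, and group labels are hypothetical):

```python
def build_visual_stimulus_group(selected_controls):
    # Map each training selection control in the selected state to its
    # training stimulation signal group; iterating over a fixed order
    # keeps the presentation sequence deterministic.
    groups = {"eye_jump": "eye_jump_stimulation_signal_group",
              "tracking": "tracking_stimulation_signal_group"}
    return [groups[name] for name in ("eye_jump", "tracking")
            if name in selected_controls]

# both check boxes selected before the "start training" trigger is pressed
visual_group = build_visual_stimulus_group({"eye_jump", "tracking"})
```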
Fig. 9 is an effect diagram of a display interface of another selection module according to an embodiment of the present invention, specifically, the "eye jump training task" and the "tracking training task" in fig. 9 respectively represent a training selection control, and the "start training" represents a training trigger control, where the training trigger control belongs to a command control and is triggered in response to a single click of a trainer; the training selection control belongs to a check box control, and two control states of a selected state and an unselected state exist.
In the technical solution of this embodiment, a selection module is provided in the visual presentation system; in response to detecting a selection instruction corresponding to a training selection control, the selection module adds the training stimulation signal group corresponding to the training selection control to the visual stimulation signal group and generates a training trigger instruction, wherein the training selection control includes an eye jump selection control and/or a tracking selection control, and the training stimulation signal group is an eye jump stimulation signal group or a tracking stimulation signal group. This solves the problem that the emotion training system cannot select the visual stimulation signal group in a user-defined manner, enables the trainer to make a personalized selection of the visual stimulation signal group used for emotion training, and further improves the flexibility and pertinence of the emotion training system.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, where the electronic device 10 may configure a functional device in the visual presentation system 100 or a functional device in the training analysis system 110 according to the embodiment of the present invention.
In particular, the electronic device 10 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The components shown in the embodiments of the present invention, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 10, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, and the like, wherein the memory stores a computer program executable by the at least one processor 11, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from a storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 11 performs the various methods and processes described above, such as the emotion training method in the above-described embodiment.
In some embodiments, the mood training methods described above may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
It should be noted that, in the embodiment of the emotion training system, the included units, modules and devices are only divided according to the functional logic, but are not limited to the above division, as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
Claims (12)
1. An emotion training system, comprising: a visual presentation system and a training analysis system; wherein the visual presentation system comprises a scale display module and a stimulation presenting module;
the scale display module is used for respectively displaying a preset emotion scale to a trainer before training begins and after training is finished, and respectively sending the obtained pre-training scale data and the obtained post-training scale data to the training analysis system;
the stimulation presenting module is used for responding to a detected training trigger instruction and sequentially displaying each visual stimulation signal in a visual stimulation signal group to the trainer; wherein the visual stimulation signal group is used to instruct the trainer to perform upward eye movement training;
the training analysis system comprises a scale analysis module used for determining an emotion training result based on the received pre-training scale data and post-training scale data and outputting the emotion training result.
2. The emotion training system of claim 1, wherein the set of visual stimulation signals includes a set of eye jump stimulation signals characterizing the trainer as performing an eye jump up task and/or a set of tracking stimulation signals characterizing the trainer as performing an eye movement up tracking task.
3. The emotion training system of claim 2, wherein each visual stimulation signal in the eye jump stimulation signal group comprises an eye jump fixation point stimulation signal, a blank screen stimulation signal, and an eye jump pattern stimulation signal; the pattern position of the eye jump stimulation pattern in the eye jump pattern stimulation signal is higher than the central position of the fixation point in the eye jump fixation point stimulation signal, the position vertical angle corresponding to the pattern position and the central position is greater than or equal to a first angle threshold, and the emotion level corresponding to the eye jump stimulation pattern is neutral or positive.
4. The emotion training system of claim 2, wherein the tracking stimulation signal group comprises a first tracking stimulation signal group, the first tracking stimulation signal group comprises a tracking point-of-regard stimulation signal and a first tracking stimulation signal, the first tracking stimulation signal is characterized in that a first tracking stimulation pattern moves at a constant speed along a preset upward track, a vertical angle of a position corresponding to a starting pattern position and an ending pattern position of the first tracking stimulation pattern is greater than or equal to a second angle threshold, and an emotion level corresponding to the first tracking stimulation pattern is negative, neutral or positive.
5. The emotion training system of claim 2, wherein the tracking stimulation signal group comprises a second tracking stimulation signal group, the second tracking stimulation signal group comprises a tracking fixation point stimulation signal and a second tracking stimulation signal, the second tracking stimulation signal is characterized in that, during the process in which the first tracking stimulation pattern moves along the preset upward track at a constant speed, the first tracking stimulation pattern is replaced, based on a preset replacement duration, by a second tracking stimulation pattern that moves along the preset upward track at a constant speed, a vertical angle of a position corresponding to a start pattern position and an end pattern position of the first tracking stimulation pattern is greater than or equal to a third angle threshold, an emotion level corresponding to the first tracking stimulation pattern is negative or neutral, and an emotion level corresponding to the second tracking stimulation pattern is higher than that of the first tracking stimulation pattern.
6. The emotion training system of claim 2, further comprising a behavior analysis device, wherein the behavior analysis device is configured to collect training behavior data of the trainer during emotion training, and output training prompt information of the trainer generated based on the training behavior data; the training prompt information represents the training state of the trainer in the emotion training process.
7. The emotion training system of claim 6, wherein the behavior analysis device comprises a key device and/or an eye movement tracking device, and the training behavior data comprise key response data and/or eye movement data, respectively.
8. The emotion training system of claim 7, wherein, when the set of visual stimulation signals includes a set of eye jump stimulation signals and the behavior analysis device includes a key device, the eye jump stimulation patterns in the set of eye jump stimulation signals are face patterns, and the key response data characterize the trainer's key input of the gender of the face corresponding to each face pattern.
9. The emotion training system of claim 8, wherein, when the set of visual stimulation signals includes a tracking stimulation signal group comprising a first tracking stimulation signal group and a second tracking stimulation signal group, and the behavior analysis device includes a key device, the key response data characterize whether the trainer produces a key response to the replacement of the first tracking stimulation pattern while it moves at a constant speed along the preset upward trajectory.
10. The emotion training system of claim 8 or 9, wherein the key device is configured to determine a response parameter of the trainer based on the key response data and to generate training prompt information for the trainer based on the response parameter; wherein the response parameter comprises a response accuracy rate and/or a target response duration.
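Not part of the patent text: a minimal Python sketch of how the response parameters of claim 10 could be derived from key response data. The event fields, the 0.8 threshold, and the prompt strings are illustrative assumptions, not from the patent.

```python
from statistics import mean


def response_parameters(key_events):
    """Compute a response accuracy rate and a mean response duration
    (cf. claim 10) from key response data.

    `key_events` is a hypothetical list of dicts with keys:
      'correct'    - bool, whether the key press matched the expected input
      'latency_ms' - float, time from stimulus onset to key press
    """
    if not key_events:
        return 0.0, 0.0
    accuracy = sum(e["correct"] for e in key_events) / len(key_events)
    duration = mean(e["latency_ms"] for e in key_events)
    return accuracy, duration


def training_prompt(accuracy, threshold=0.8):
    """Generate a simple training prompt from the accuracy rate
    (threshold and wording are illustrative only)."""
    return "on task" if accuracy >= threshold else "attention drifting"
```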
11. The emotion training system of claim 7, wherein the eye movement tracking device is configured to determine a concentration parameter of the trainer based on the eye movement data, and to generate training prompt information for the trainer based on the concentration parameter; wherein the concentration parameter includes at least one of gaze duration, eye movement direction, eye movement angle, eye movement distance, and eye movement speed.
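Not part of the patent text: a minimal Python sketch of deriving several of the concentration parameters listed in claim 11 (gaze duration, eye movement direction, distance, and speed) from raw gaze samples. The sample format and coordinate convention are assumptions for illustration.

```python
import math


def concentration_parameters(samples):
    """Derive concentration parameters (cf. claim 11) from eye movement data.

    `samples` is a hypothetical list of (timestamp_s, x_deg, y_deg) gaze
    samples in visual-angle coordinates; only the first and last samples
    are used for this simplified first-to-last summary.
    """
    t0, x0, y0 = samples[0]
    t1, x1, y1 = samples[-1]
    dx, dy = x1 - x0, y1 - y0
    gaze_duration = t1 - t0
    distance = math.hypot(dx, dy)                  # eye movement distance (deg)
    direction = math.degrees(math.atan2(dy, dx))   # eye movement direction (deg)
    speed = distance / gaze_duration if gaze_duration > 0 else 0.0
    return {
        "gaze_duration_s": gaze_duration,
        "direction_deg": direction,
        "distance_deg": distance,
        "speed_deg_s": speed,
    }
```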
12. The emotion training system of claim 2, wherein the visual presentation system further comprises a selection module configured to, in response to detecting a selection instruction corresponding to a training selection control, add the training stimulation signal group corresponding to the training selection control to the set of visual stimulation signals and generate a training trigger instruction; the training selection control comprises an eye jump selection control and/or a tracking selection control, and the training stimulation signal group is an eye jump stimulation signal group or a tracking stimulation signal group.
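Not part of the patent text: a minimal Python sketch of the selection module of claim 12, mapping a selected training control to its stimulation signal group and emitting a trigger. The control names, group names, and trigger format are illustrative assumptions.

```python
class SelectionModule:
    """Sketch of claim 12's selection module: a selection instruction for a
    training selection control adds the corresponding stimulation signal
    group to the visual stimulation set and yields a training trigger."""

    # Hypothetical mapping from selection controls to stimulation groups.
    CONTROL_TO_GROUP = {
        "eye_jump_control": "eye_jump_stimulation_group",
        "tracking_control": "tracking_stimulation_group",
    }

    def __init__(self):
        self.visual_stimulus_groups = []

    def on_selection(self, control):
        """Handle a selection instruction: register the stimulation group
        and return a training trigger instruction (format assumed)."""
        group = self.CONTROL_TO_GROUP[control]
        self.visual_stimulus_groups.append(group)
        return "trigger:" + group
```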
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211444503.9A CN115671489A (en) | 2022-11-18 | 2022-11-18 | Emotion training system |
PCT/CN2022/137731 WO2024103464A1 (en) | 2022-11-18 | 2022-12-08 | Emotion training system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211444503.9A CN115671489A (en) | 2022-11-18 | 2022-11-18 | Emotion training system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115671489A (en) | 2023-02-03
Family
ID=85053045
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211444503.9A Pending CN115671489A (en) | 2022-11-18 | 2022-11-18 | Emotion training system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115671489A (en) |
WO (1) | WO2024103464A1 (en) |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220313083A1 (en) * | 2015-10-09 | 2022-10-06 | Senseye, Inc. | Cognitive, emotional, mental and psychological diagnostic engine via the eye |
US20170352283A1 (en) * | 2016-06-07 | 2017-12-07 | Cerekinetic, Inc. | Self-administered evaluation and training method to improve mental state |
CN106843500B (en) * | 2017-02-27 | 2020-07-07 | 南通大学 | Cognitive level rehabilitation training system based on eye movement tracking technology |
CN107519622A (en) * | 2017-08-21 | 2017-12-29 | 南通大学 | Spatial cognition rehabilitation training system and method based on virtual reality and the dynamic tracking of eye |
CN110302459B (en) * | 2019-08-09 | 2022-05-13 | 丹阳慧创医疗设备有限公司 | Training method, device, equipment and system for emotion regulation and control |
CN112535479B (en) * | 2020-12-04 | 2023-07-18 | 中国科学院深圳先进技术研究院 | Method for determining emotion processing tendency and related products |
CN115206492A (en) * | 2021-04-12 | 2022-10-18 | 中国科学院深圳先进技术研究院 | Emotion recognition capability self-adaptive training method and device based on eye movement feedback |
CN113611395B (en) * | 2021-08-09 | 2024-05-31 | 江苏嘉纳宝医疗科技有限公司 | Mental illness user auxiliary training method based on virtual reality technology |
CN115249379A (en) * | 2021-11-05 | 2022-10-28 | 上海外国语大学 | Plane advertisement evaluation method based on event-related potential and eye movement tracking technology |
2022
- 2022-11-18: CN application CN202211444503.9A filed (publication CN115671489A, status: active, Pending)
- 2022-12-08: PCT application PCT/CN2022/137731 filed (publication WO2024103464A1, status: unknown)
Also Published As
Publication number | Publication date |
---|---|
WO2024103464A1 (en) | 2024-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11610500B2 (en) | Adaptive learning environment driven by real-time identification of engagement level | |
US11416756B2 (en) | Systems and methods for physiological sensing for purposes of detecting persons affective focus state for optimizing productivity and work quality | |
KR102262890B1 (en) | Reading ability improvement training apparatus for providing training service to improve reading ability in connection with reading ability diagnosis apparatus based on eye tracking and apparatus for providing service comprising the same | |
D'Mello et al. | Gaze tutor: A gaze-reactive intelligent tutoring system | |
AU2023206097A1 (en) | Computing technologies for diagnosis and therapy of language-related disorders | |
Ghergulescu et al. | A novel sensor-based methodology for learner's motivation analysis in game-based learning | |
CN110678935A (en) | Interactive adaptive learning and neurocognitive disorder diagnosis system applying face tracking and emotion detection and related methods thereof | |
JP7311637B2 (en) | Systems and methods for cognitive training and monitoring | |
KR20210019266A (en) | Apparatus and method for diagnosis of reading ability based on machine learning using eye tracking | |
WO2015027079A1 (en) | System and method for improving student learning by monitoring student cognitive state | |
US10188337B1 (en) | Automated correlation of neuropsychiatric test data | |
US20170039876A1 (en) | System and method for identifying learner engagement states | |
US20190228673A1 (en) | Method and system for evaluating and monitoring compliance using emotion detection | |
Ahuja et al. | An investigative study on the effects of pedagogical agents on intrinsic, extraneous and germane cognitive load: experimental findings with dyscalculia and non-dyscalculia learners | |
KR20220135846A (en) | Learner analysis and care system using emotional analysis technology | |
Mehta et al. | Inclusion of Children With Special Needs in the Educational System, Artificial Intelligence (AI) | |
US11373546B2 (en) | Data processing systems for processing and analyzing data regarding self-awareness and executive function | |
CN115671489A (en) | Emotion training system | |
Mehigan et al. | Engaging learners through emotion in Artificially Intelligent environments | |
KR102383457B1 (en) | Active artificial intelligence tutoring system that support teaching and learning and method for controlling the same | |
WO2023278217A1 (en) | Systems and methods for mental exercises and improved cognition | |
CN114119932A (en) | VR teaching method, apparatus, electronic device, storage medium and program product | |
Utami et al. | A Brief Study of The Use of Pattern Recognition in Online Learning: Recommendation for Assessing Teaching Skills Automatically Online Based | |
Zakharov | Affect recognition and support in intelligent tutoring systems | |
Al-Omair et al. | An Emotionally Adaptive Framework for E-Learning Systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||