CN115047969A - Cognitive function evaluation system based on eye movement and evaluation method thereof - Google Patents

Info

Publication number
CN115047969A
Authority
CN
China
Prior art keywords
function, data, cognitive, eye movement, attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210654152.8A
Other languages
Chinese (zh)
Inventor
万巧琴
张世芳
赵小燕
Current Assignee
Peking University
Original Assignee
Peking University
Priority date
Filing date
Publication date
Application filed by Peking University
Priority to CN202210654152.8A
Publication of CN115047969A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/40 - Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4076 - Diagnosing or monitoring particular conditions of the nervous system
    • A61B5/4088 - Diagnosing of monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 - Details of waveform analysis
    • A61B5/7264 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G06N20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]

Abstract

The invention discloses an eye-movement-based cognitive function evaluation system and an evaluation method thereof. The cognitive function evaluation system comprises an experimental paradigm module, a data acquisition module, a data processing module and an intelligent evaluation module, and evaluates cognitive function using eye-tracking technology; it is reliable, simple to operate, convenient to implement, and objective and accurate, overcoming the drawbacks of traditional cognitive assessment, which is time-consuming, labor-intensive and unstable in its results, and facilitating large-scale evaluation and popularization. The invention adopts multiple experimental paradigms and can evaluate cognitive function in each domain, remedying the limitation that existing eye-movement techniques evaluate only executive function and enabling comprehensive assessment of an individual's cognitive level. The invention uses machine learning to model the multidimensional results of the eye-movement experiments and intelligently outputs the cognitive assessment result, so as to evaluate cognitive function more accurately, conveniently and objectively.

Description

Cognitive function evaluation system based on eye movement and evaluation method thereof
Technical Field
The invention relates to the field of cognitive function assessment, in particular to a cognitive function assessment system based on eye movement and an assessment method thereof.
Background
Cognitive function is the mental process by which an individual understands and acquires knowledge, and involves a series of psychological and social behaviors such as learning, memory, language, thinking, mental state and emotion. Studies have shown that both the brain and the cognitive function of an individual change continuously with age. In the early stages of life, an individual's brain and cognitive level tend to improve, but both later decline to varying degrees as age increases. In particular, in the later period of the life cycle, cognitive function may decline rapidly owing to factors such as brain trauma and brain infection, affecting the individual's quality of life and lifespan. Evaluating the cognitive function level of each age group not only reflects an individual's current cognitive state, but also allows the development of cognitive level with age to be explored; it also aids the identification of abnormal states and therefore has very important value.
At present, cognitive function is mainly evaluated using scales, such as the Wechsler Intelligence Scale for Children, the Denver Developmental Screening Test and the McCarthy Scales of Children's Abilities for childhood, and the MMSE and MoCA scales for evaluating the cognitive level of the elderly. However, these scales are time-consuming and subject to many factors, including cultural differences, language, education, environment and the expertise of the evaluator. Owing to these limitations, there is no cognitive assessment method suitable for large-scale popularization in communities, so a more objective, reliable, simple and feasible cognitive assessment method is needed.
Eye tracking records characteristics such as a person's eyeball movement trajectory and pupil changes during the processing of visual information, and collects related data such as reaction duration, accuracy, saccade frequency and fixation time. These data are closely related to cognitive function and can reflect an individual's cognitive level, so eye-tracking technology is regarded as a new technique and method for evaluating cognitive function. An individual's cognitive level includes memory function, executive function, attention, visual space function and the like; however, existing eye-movement techniques mostly adopt generic task paradigms that are mainly used to evaluate executive function, and therefore cannot accurately and comprehensively reflect an individual's current cognitive level or subsequent changes in cognitive trajectory. In addition, although the indexes of current eye-movement techniques are relatively clear, they are independent of one another and cannot truly reflect the cognitive function of a given domain or the overall cognitive level; for example, when index A is abnormal and index B is normal, judging cognitive function to be declining or normal on the basis of A or B alone is one-sided. A more accurate and advanced algorithm is therefore needed to synthesize, from the weight and score of each index, a value reflecting domain-specific and overall cognitive function, and to classify the cognitive state.
Disclosure of Invention
To solve the above technical problems, the invention provides an eye-movement-based cognitive function evaluation system and an evaluation method thereof. Based on eye-movement evaluation covering memory function, executive function, attention, cognitive flexibility, visual space function and abstract function, the data collected by eye tracking are learned and trained with machine-learning methods such as decision-tree-based ensemble learning and neural networks, so as to realize automatic judgment of the cognitive function evaluation result and output of a conclusion. The method can rapidly evaluate an individual's cognitive function level and give an evaluation result and conclusion, and has good practical significance.
It is an object of the present invention to provide an eye movement-based cognitive function assessment system.
The cognitive function assessment system based on eye movement of the present invention comprises: the system comprises an experimental paradigm module, a data acquisition module, a data processing module and an intelligent evaluation module; wherein,
the experimental paradigm module includes a memory assessment experimental paradigm, an executive function and attention experimental paradigm, a cognitive flexibility experimental paradigm, a visual space function experimental paradigm and an abstract function experimental paradigm, and provides these experimental paradigms for the user to perform cognitive function assessment;
the data acquisition module adopts an eye tracker to collect eye movement data generated by a user in the process of executing an experimental paradigm through an eye movement tracking technology and synchronously sends the collected eye movement data to the data processing module; the corresponding collected eye movement data includes: memory assessment data, executive function and attention data, cognitive flexibility data, visual space function data and abstract function data;
the data processing module is used for preprocessing the eye movement data collected by the data acquisition module to obtain eye movement data characteristic parameters;
the intelligent evaluation module stores a machine learning SVR (support vector regression) model and a machine learning classification decision tree model; it uses the SVR regression model to perform cognitive function evaluation on the eye movement data characteristic parameters to obtain cognitive function scores, and uses the classification model to divide the cognitive function into classes, thereby realizing intelligent output of cognitive function scores and classification judgments.
The memory evaluation experimental paradigm comprises two experimental tasks: a memory glance task and a picture memory task. In the memory glance task, a central object and polygonal graphs are presented on the screen, the polygonal graphs appearing randomly on either side of the screen, and the display time of the polygonal graphs is set. The picture memory task adopts materials consisting of 5-10 groups of unrelated pictures, each group containing 5-20 pictures; in the task, one picture is randomly drawn from each group and presented in the center of the screen, and the drawn pictures are called picture memory target objects. After 4-10 min, a new group of pictures is presented in the center of the screen; some of these pictures are the same as the picture memory target objects, i.e. pictures that have appeared before, and some are different from the picture memory target objects, i.e. pictures that have not appeared before; the latter are called picture memory interferents.
The executive function and attention experimental paradigm comprises an executive function and attention forward glance task and an executive function and attention reverse glance task. During both tasks, a central object and polygonal graphs are presented on the screen, the polygonal graphs appearing randomly on either side of the screen, and the display time of the polygonal graphs is set to 2-5 s.
The cognitive flexibility experimental paradigm comprises a cognitive flexibility forward glance task and a cognitive flexibility reverse glance task. In the forward glance task, a graph of a set color, called the cognitive flexibility forward glance task prompt graph, is presented on the screen; polygonal graphs then appear randomly on the left and right sides of the screen and remain for 2-5 s. In the reverse glance task, a graph whose color differs from that of the forward glance task prompt graph, called the cognitive flexibility reverse glance task prompt graph, is presented on the screen; polygonal graphs then appear randomly on the left and right sides of the screen and remain for 2-5 s.
The visual space function experimental paradigm adopts a visual search paradigm: a picture or graph, called the visual space function target object, is presented at the top of the screen; at the same time, a group of pictures or graphs is presented at the bottom of the screen, containing one picture or graph that is exactly the same as the visual space function target object and 3-5 pictures or graphs that are similar to the target object but differ in one or more details; the latter are called visual space function interferents.
The abstract function experimental paradigm adopts a visual search paradigm: a group of 4-8 pictures is presented in the center of the screen, one of which differs in kind from the rest. For example, among four pictures of an apple, a banana, a Hami melon and a tomato, the tomato is a vegetable while the other three are fruits; that picture is called the abstract function target object, and the remaining pictures are called abstract function interferents.
When a user executes a memory glance task, the data acquisition module acquires memory glance data generated in the process that the user looks at the position of the polygonal figure after a central object disappears, wherein the memory glance data comprises memory glance peak speed, memory glance latency, memory glance accuracy, memory glance redundant glance incidence and memory glance error correction rate; when a user executes a picture memory task, a data acquisition module acquires picture memory data generated in the process of recalling and selecting a picture memory target object by the user, wherein the picture memory data comprises the number of eye movement fixation points, fixation time, correct memory number, correct memory average reaction time and error memory number; the memory glance data and the picture memory data are collectively referred to as memory assessment data.
When a user executes an execution function and attention forward glance task, a data acquisition module acquires execution function and attention forward glance data generated in the process that the user looks at the position of a polygonal figure after a central object disappears, wherein the execution function and attention forward glance data comprise a forward execution function and attention peak speed, a forward execution function and attention latency, a forward execution function and attention accuracy and a forward execution function and attention error correction rate; when a user executes an executive function and attention reverse glance task, a data acquisition module acquires executive function and attention reverse glance data generated in the process that the user looks at the opposite side of the position where the polygonal figure is located after a central object disappears, wherein the executive function and attention reverse glance data comprise a reverse executive function and attention peak speed, a reverse executive function and attention latency, a reverse executive function and attention accuracy rate and a reverse executive function and attention error correction rate; executive function and attention forward glance data and executive function and attention reverse glance data are collectively referred to as executive function and attention data.
When a user executes a cognitive flexibility forward scanning task, a data acquisition module acquires cognitive flexibility forward scanning data generated in the process that the user looks at the position of a polygonal figure, wherein the cognitive flexibility forward scanning data comprises a forward cognitive flexibility peak speed, a forward cognitive flexibility latent period, a forward cognitive flexibility correct rate and a forward cognitive flexibility error correction rate; when a user executes a cognitive flexibility reverse saccade task, a data acquisition module acquires cognitive flexibility reverse saccade data generated in the process that the user looks at the opposite side of the position of the polygonal figure, wherein the cognitive flexibility reverse saccade data comprises a reverse cognitive flexibility peak speed, a reverse cognitive flexibility latent period, a reverse cognitive flexibility correct rate and a reverse cognitive flexibility error correction rate; cognitive flexibility forward glance data and cognitive flexibility reverse glance data are collectively referred to as cognitive flexibility data.
When a user executes a visual search paradigm of a visual space function experiment paradigm, a data acquisition module acquires visual space function data generated in the process of selecting a picture or a graph which is completely the same as a visual space function target object by the user, wherein the visual space function data comprises visual space function total watching time, visual space function target object watching time, visual space function total watching times, visual space function target object watching times and visual space function accuracy.
When a user executes a visual search paradigm of the abstract function experiment paradigm, the data acquisition module acquires abstract function data generated by the user in the process of selecting an abstract function target, wherein the abstract function data comprises abstract function total watching duration, abstract function target watching duration, abstract function total watching times, abstract function target watching times and abstract function correct rate.
It is another object of the present invention to provide a method for eye movement-based assessment of cognitive function.
The invention relates to a cognitive function assessment method based on eye movement, which comprises the following steps:
1) designing an experimental paradigm module:
the experimental paradigm module includes five experimental paradigms: a memory assessment experimental paradigm, an executive function and attention experimental paradigm, a cognitive flexibility experimental paradigm, a visual space function experimental paradigm and an abstract function experimental paradigm, and provides these experimental paradigms for the user to perform cognitive function assessment;
2) the data acquisition module collects eye movement data through an eye movement tracking technology:
the data acquisition module adopts an eye tracker to collect eye movement data generated by a user in the process of executing an experimental paradigm through an eye movement tracking technology and synchronously sends the collected eye movement data to the data processing module; the corresponding collected eye movement data includes: memory assessment data, executive function and attention data, cognitive flexibility data, visual space function data and abstract function data;
a) memory assessment data was collected:
when a user executes a memory glance task, a data acquisition module acquires memory glance data generated in the process that the user looks at the position of a polygonal figure after a central object appearing in a screen disappears, wherein the memory glance data comprises memory glance peak speed, memory glance latency, memory glance accuracy, memory glance redundant glance occurrence rate and memory glance error correction rate;
when a user executes a picture memory task, a data acquisition module acquires picture memory data generated in the process of recalling and selecting a picture memory target object by the user, wherein the picture memory data comprises the number of eye movement fixation points, fixation time, correct memory number, correct memory average reaction time and error memory number;
b) collecting executive function and attention data:
when a user executes an execution function and attention forward glance task, a data acquisition module acquires execution function and attention forward glance data generated in the process that the user looks at the position of a polygonal figure after a central object appearing in a screen disappears, wherein the execution function and attention forward glance data comprise a forward execution function and attention peak speed, a forward execution function and attention latency, a forward execution function and attention accuracy rate and a forward execution function and attention error correction rate;
when a user executes an execution function and attention reverse saccade task, a data acquisition module acquires execution function and attention reverse saccade data generated in the process that the user looks at the opposite side of the position of the polygonal figure after a central object appearing in a screen disappears, wherein the execution function and attention reverse saccade data comprise a reverse execution function and attention peak speed, a reverse execution function and attention latency, a reverse execution function and attention accuracy rate and a reverse execution function and attention error correction rate;
c) collecting cognitive flexibility data:
when a user executes a cognitive flexibility forward scanning task, a data acquisition module acquires cognitive flexibility forward scanning data generated in the process that the user looks at the position of a polygonal figure, wherein the cognitive flexibility forward scanning data comprises a forward cognitive flexibility peak speed, a forward cognitive flexibility latent period, a forward cognitive flexibility correct rate and a forward cognitive flexibility error correction rate;
when a user executes a cognitive flexibility reverse glance task, a data acquisition module acquires cognitive flexibility reverse glance data generated in a process that the user looks at the opposite side of the position of a polygonal figure, wherein the cognitive flexibility reverse glance data comprises a reverse cognitive flexibility peak speed, a reverse cognitive flexibility latent period, a reverse cognitive flexibility correct rate and a reverse cognitive flexibility error correction rate;
d) collecting visual space function data:
when a user executes a visual search paradigm of a visual space function experiment paradigm, a data acquisition module acquires visual space function data generated by the user in the process of selecting a picture or a graph which is completely the same as a visual space function target object, wherein the visual space function data comprises visual space function total watching duration, visual space function target object watching duration, visual space function total watching times, visual space function target object watching times and visual space function accuracy;
e) and (3) abstract function data collection:
when a user executes a visual search paradigm of an abstract function experiment paradigm, a data acquisition module acquires abstract function data generated in the process of selecting an abstract function target object by the user, wherein the abstract function data comprises abstract function total watching duration, abstract function target object watching duration, abstract function total watching times, abstract function target object watching times and abstract function accuracy;
3) the data processing module carries out data preprocessing:
the data processing module preprocesses the eye movement data collected by the data acquisition module to obtain eye movement data characteristic parameters; the preprocessing comprises filling missing values, smoothing noisy data, smoothing or deleting outliers, deleting irrelevant data, and integrating and sorting the data;
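The preprocessing step above can be sketched as follows, assuming the eye movement data arrive as a numeric matrix of trials by metrics; the column layout and the mean-fill and 3-sigma-clip choices are illustrative assumptions, not specified by the text:

```python
import numpy as np

def preprocess(samples):
    """Preprocess raw eye movement feature vectors.

    Hypothetical layout: rows are trials, columns are metrics such as
    glance latency or peak velocity. Missing values are filled with the
    column mean; outliers are smoothed by clipping to mean +/- 3 sigma.
    """
    x = np.asarray(samples, dtype=float)
    # Fill missing values with the per-column mean (ignoring NaNs).
    col_mean = np.nanmean(x, axis=0)
    rows, cols = np.where(np.isnan(x))
    x[rows, cols] = col_mean[cols]
    # Smooth outliers by clipping each column to mean +/- 3 standard deviations.
    mu, sigma = x.mean(axis=0), x.std(axis=0)
    return np.clip(x, mu - 3 * sigma, mu + 3 * sigma)
```

The resulting matrix is what the text calls the eye movement data characteristic parameters, ready for the regression and classification models.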
4) assessing cognitive function scores using a regression model:
the intelligent evaluation module constructs a machine learning SVR regression model; the eye movement data of a large number of people, together with the cognitive function scores obtained from existing common cognitive scales, are used as the samples and labels for training the model; during training, GridSearch (grid search) is used to search the model parameters, and training is cycled over different parameter settings until the best regression result is obtained, yielding the trained machine learning SVR regression model; a genetic algorithm is then used for feature selection on the eye movement data characteristic parameters to obtain the optimal feature parameter subset, and the machine learning SVR regression model performs cognitive function evaluation on this feature subset to obtain cognitive function scores;
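Step 4) can be sketched with scikit-learn, which provides both an SVR implementation and a GridSearchCV parameter search; the synthetic features and scale scores below are placeholders for real eye movement data, and the parameter grid is an illustrative assumption:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))  # stand-in eye movement feature parameters
# Stand-in scale-derived cognitive scores used as regression labels.
y = X @ np.array([1.0, 0.5, 0.0, -0.5, 0.2]) + rng.normal(scale=0.1, size=60)

# Cyclic training over different model parameters via grid search,
# keeping the parameter setting with the best cross-validated fit.
grid = GridSearchCV(SVR(),
                    {"C": [0.1, 1.0, 10.0], "kernel": ["linear", "rbf"]},
                    cv=3)
grid.fit(X, y)
model = grid.best_estimator_  # the trained SVR regression model
score = model.predict(X[:1])  # predicted cognitive function score for one user
```

The genetic-algorithm feature selection the text describes would run before `fit`, reducing `X` to the optimal feature subset; it is omitted here for brevity.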
5) classifying cognitive function classification by using a classification model:
the intelligent evaluation module uses the scores of existing common cognitive scales as data labels, combines them with the eye movement data characteristic parameters as training data, performs the classification calculation with a decision tree, and establishes and trains a machine learning classification decision tree model. In building the model, all eye movement data characteristic parameters are used as features; starting from the root node, the information gain of each feature is calculated for the current node, the feature with the largest information gain is selected as the feature of that node, and child nodes are created according to the different values of that feature; each child node is expanded in the same way until the information gain becomes small or no features remain, finally yielding the machine learning classification decision tree model. The cognitive function scores are then classified by the model, and the expected cognitive function classification is output.
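The information-gain criterion used to choose each node's feature can be written out directly; the snippet below is a self-contained illustration (feature values discretized, labels hypothetical), not the system's actual training code:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Entropy of the labels minus the conditional entropy after
    splitting on a discrete feature: the node-selection criterion."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [lab for f, lab in zip(feature, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# A feature that perfectly separates "normal" from "declined" labels has
# the maximum possible gain; an uninformative feature has zero gain.
labels = ["normal", "normal", "declined", "declined"]
g_perfect = information_gain([0, 0, 1, 1], labels)  # 1.0 bit
g_useless = information_gain([0, 1, 0, 1], labels)  # 0.0 bits
```

Tree construction then recurses: the highest-gain feature becomes the node, and each distinct value of that feature spawns a child node over the matching subset.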
When the data scale grows greatly and the data are noisier, a gradient boosting decision tree (GBDT) method is adopted: multiple machine learning classification decision tree models are combined into an additive model (i.e. a linear combination of basis functions), and the residual produced during training is continuously reduced, giving an algorithm for classifying the data. The system implements GBDT with the open-source eXtreme Gradient Boosting (XGBoost) algorithm, which accelerates learning and improves the generality of the model.
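The additive-model idea of this step can be sketched with scikit-learn's GradientBoostingClassifier as a stand-in (the text names the XGBoost library, which exposes a very similar interface); the data are synthetic placeholders for eye movement features and cognitive classes:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 4))             # stand-in eye movement features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in cognitive classes

# Each boosting stage fits a small tree to the residuals of the current
# additive model, so the ensemble is a linear combination of trees whose
# training residual shrinks stage by stage.
clf = GradientBoostingClassifier(n_estimators=50, max_depth=2, random_state=0)
clf.fit(X, y)
pred = clf.predict(X)
```

Swapping in `xgboost.XGBClassifier` would keep the same `fit`/`predict` usage while adding the speed and regularization features the text attributes to XGBoost.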
In step 1), the memory evaluation experimental paradigm comprises two experimental tasks: a memory glance task and a picture memory task. In the memory glance task, a central object and polygonal graphs are presented on the screen, the polygonal graphs appearing randomly on either side of the screen, and the display time of the polygonal graphs is set. The picture memory task adopts materials consisting of 5-10 groups of unrelated pictures, each group containing 5-20 pictures; in the task, one picture is randomly drawn from each group and presented in the center of the screen, and the drawn pictures are called picture memory target objects. After 4-10 min, a new group of pictures is presented in the center of the screen; some of these pictures are the same as the picture memory target objects, i.e. pictures that have appeared before, and some are different from the picture memory target objects, i.e. pictures that have not appeared before; the latter are called picture memory interferents.
The executive function and attention experimental paradigm comprises an executive function and attention forward glance task and an executive function and attention reverse glance task. During both tasks, a central object and polygonal graphs are presented on the screen, the polygonal graphs appearing randomly on either side of the screen, and the display time of the polygonal graphs is set.
The cognitive flexibility experimental paradigm comprises a cognitive flexibility forward glance task and a cognitive flexibility reverse glance task. In the forward glance task, a graphic of a set color, called the cognitive flexibility forward glance task prompt graphic, is presented on the screen; a polygonal graphic then appears randomly on the left or right side of the screen and remains for 2-5 s. In the reverse glance task, a graphic whose color differs from that of the forward glance prompt graphic, called the cognitive flexibility reverse glance task prompt graphic, is presented on the screen; a polygonal graphic then appears randomly on the left or right side and remains for 2-5 s, and the eye movement data generated while the user looks at the side opposite the polygonal graphic's position are recorded.
The visual space function experimental paradigm adopts a visual search paradigm: a picture or graphic, called the visual space function target, is presented at the top of the screen. At the same time, a group of pictures or graphics is presented at the bottom of the screen; this group contains one picture or graphic completely identical to the visual space function target, while the remaining 3-5 pictures or graphics are similar to the target but differ from it in one or more respects. The latter are called visual space function interferents.
The abstract function experimental paradigm adopts a visual search paradigm: a group of 4-8 pictures is presented at the center of the screen, one of which differs in category from the rest. For example, among four pictures of an apple, a banana, a Hami melon, and a tomato, the tomato is a vegetable while the other three are fruits. The odd picture is called the abstract function target, and the remaining pictures are called abstract function interferents.
In step 2) b), the definitions of peak velocity and latency follow the textbook "Eye Tracking User Experience Optimization Operation Guide"; the accuracy rate is the proportion of trials in which the user correctly looks at the specified position, out of the total number of trials; the redundant glance occurrence rate is the proportion of trials in which the user mistakenly looks at the central object, out of the total number of trials.
In step 2), the error correction rate is, for the reverse glance tasks, the proportion of trials in which the user corrects the gaze after a first erroneous glance, out of the total number of trials.
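The three rate metrics defined above can be computed directly from per-trial records. The sketch below is illustrative only: the per-trial field names (`correct`, `looked_center`, `corrected`) are hypothetical, not taken from the patent.

```python
def saccade_metrics(trials):
    """Compute accuracy, redundant glance occurrence rate, and error correction rate.

    Each trial is a dict with (hypothetical) boolean fields:
      correct       - the user looked at the specified position
      looked_center - the user mistakenly looked back at the central object
      corrected     - after a first erroneous glance, the gaze was corrected
    """
    total = len(trials)
    # accuracy: correct trials / total trials
    accuracy = sum(t["correct"] for t in trials) / total
    # redundant glance rate: trials with a mistaken glance at the central object / total trials
    redundant_rate = sum(t["looked_center"] for t in trials) / total
    # error correction rate: trials corrected after a first erroneous glance / total trials
    correction_rate = sum(t["corrected"] for t in trials) / total
    return accuracy, redundant_rate, correction_rate


trials = [
    {"correct": True, "looked_center": False, "corrected": False},
    {"correct": False, "looked_center": True, "corrected": True},
    {"correct": True, "looked_center": False, "corrected": False},
    {"correct": False, "looked_center": False, "corrected": False},
]
acc, red, corr = saccade_metrics(trials)
```

Note that the patent defines the error correction rate against the total number of trials; if it were instead defined against the number of error trials, the denominator would change accordingly.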
The invention has the advantages that:
(1) the invention uses eye tracking technology to evaluate cognitive function; it has good reliability, is simple to operate, convenient to implement, objective, and accurate; it overcomes the drawbacks of traditional cognitive assessment, which is time-consuming, labor-intensive, and unstable in its results, and it is convenient for large-scale assessment and popularization;
(2) the invention adopts a variety of experimental paradigms and can evaluate cognitive function in each domain, overcoming the limitation of existing eye movement techniques that evaluate only executive function, and helping to comprehensively assess an individual's level of cognitive function;
(3) the invention uses machine learning methods to model the multidimensional results of the eye movement experiments and intelligently output the cognitive assessment result, so as to evaluate cognitive function more accurately, more conveniently, and more objectively.
Drawings
FIG. 1 is a block diagram of an eye movement-based cognitive function assessment system according to the present invention;
fig. 2 is a flowchart of an eye movement-based cognitive function assessment method according to the present invention.
Detailed Description
The invention will be further elucidated below by means of specific embodiments, with reference to the drawings.
As shown in fig. 1, the eye movement-based cognitive function assessment system of the present embodiment includes: the system comprises an experimental paradigm module, a data acquisition module, a data processing module and an intelligent evaluation module; wherein,
the experimental paradigm module includes: the system comprises a memory assessment experimental paradigm, an executive function and attention experimental paradigm, a cognitive flexibility experimental paradigm, a visual space function experimental paradigm and an abstract function experimental paradigm, wherein an experimental paradigm module provides an experimental paradigm for a user to perform cognitive function assessment;
the data acquisition module adopts an eye tracker to collect eye movement data generated by a user in the process of executing an experimental paradigm through an eye movement tracking technology and synchronously sends the collected eye movement data to the data processing module; the corresponding collected eye movement data includes: memory assessment data, executive function and attention data, cognitive flexibility data, visual space function data and abstract function data;
the data processing module preprocesses the eye movement data collected by the data acquisition module to obtain eye movement data characteristic parameters;
the intelligent evaluation module stores a machine learning support vector regression (SVR) model and a machine learning classification decision tree model; it uses the SVR regression model to evaluate cognitive function from the eye movement data characteristic parameters, obtaining a cognitive function score, and uses the classification model to assign a cognitive function classification, thereby intelligently outputting the cognitive function score and classification judgment.
The cognitive function assessment method based on eye movement comprises the following steps:
1) designing an experimental paradigm module:
the experimental paradigm module includes five: memory assessment experimental paradigm, executive function and attention experimental paradigm, cognitive flexibility experimental paradigm, visual space function experimental paradigm and abstract function experimental paradigm, the experimental paradigm module provides the experimental paradigm that is used for the user to carry out cognitive function assessment:
the memory assessment experimental paradigm comprises two experimental tasks: a memory glance task and a picture memory task. In the memory glance task, a central object and polygonal graphics are presented on the screen, with the polygonal graphics appearing randomly on either side; the display time of the polygonal graphics is set in advance. The picture memory task uses material consisting of 5 groups of unrelated pictures, each group containing 5 pictures. One picture is randomly drawn from each group and presented at the center of the screen; the drawn pictures are called picture memory targets. After 4-10 min, a new group of pictures is presented at the center of the screen; some are identical to the picture memory targets (pictures that have appeared before) and some differ from them (pictures that have not appeared before); the latter are called picture memory interferents;
the executive function and attention experimental paradigm comprises an executive function and attention forward glance task and an executive function and attention reverse glance task. During both tasks, a central object and polygonal graphics are presented on the screen; the polygonal graphics appear randomly on either side, and their display time is set in advance. After the central object disappears, the user looks at the position of the polygonal graphic (forward task) or at the side opposite that position (reverse task);
the cognitive flexibility experimental paradigm comprises a cognitive flexibility forward glance task and a cognitive flexibility reverse glance task. In the forward glance task, a graphic of a set color, called the cognitive flexibility forward glance task prompt graphic, is presented on the screen; a polygonal graphic then appears randomly on the left or right side of the screen and remains for 2 s. In the reverse glance task, a graphic whose color differs from that of the forward glance prompt graphic, called the cognitive flexibility reverse glance task prompt graphic, is presented on the screen; a polygonal graphic then appears randomly on the left or right side and remains for 2 s, and the eye movement data generated while the user looks at the side opposite the polygonal graphic's position are recorded;
the visual space function experimental paradigm adopts a visual search paradigm: a picture or graphic, called the visual space function target, is presented at the top of the screen. At the same time, a group of pictures or graphics is presented at the bottom of the screen; this group contains one picture or graphic completely identical to the visual space function target, while the remaining 3 pictures or graphics are similar to the target but differ from it in one or more respects; the latter are called visual space function interferents;
the abstract function experimental paradigm adopts a visual search paradigm: a group of 4 pictures is presented at the center of the screen, one of which differs in category from the rest. For example, among four pictures of an apple, a banana, a Hami melon, and a tomato, the tomato is a vegetable while the other three are fruits. The odd picture is called the abstract function target, and the remaining pictures are called abstract function interferents;
2) the data acquisition module collects eye movement data through an eye movement tracking technology:
the data acquisition module adopts an eye tracker to collect eye movement data generated by a user in the process of executing an experimental paradigm through an eye movement tracking technology and synchronously sends the collected eye movement data to the data processing module; the corresponding collected eye movement data includes: memory assessment data, executive function and attention data, cognitive flexibility data, visual space function data and abstract function data;
a) memory assessment data was collected:
when a user executes a memory glance task, a data acquisition module acquires memory glance data generated in the process that the user looks at the position of a polygonal graph after a central object presented in a screen disappears, wherein the memory glance data comprises memory glance peak speed, memory glance latency, memory glance accuracy, memory glance redundant glance occurrence rate and memory glance error correction rate;
when a user executes a picture memory task, a data acquisition module acquires picture memory data generated in the process of recalling and selecting a picture memory target object by the user, wherein the picture memory data comprises the number of eye movement fixation points, fixation time, correct memory number, correct memory average reaction time and error memory number;
b) collecting execution function and attention data:
when a user executes an executive function and attention forward panning task, a data acquisition module acquires executive function and attention forward panning data generated in the process that the user looks at the position of a polygonal graph after a central object appearing in a screen disappears, wherein the executive function and attention forward panning data comprise a forward executive function and attention peak speed, a forward executive function and attention latency, a forward executive function and attention accuracy and a forward executive function and attention error correction rate;
when a user executes an execution function and attention reverse saccade task, a data acquisition module acquires execution function and attention reverse saccade data generated in the process that the user looks at the opposite side of the position of the polygonal figure after a central object appearing in a screen disappears, wherein the execution function and attention reverse saccade data comprise a reverse execution function and attention peak speed, a reverse execution function and attention latency, a reverse execution function and attention accuracy rate and a reverse execution function and attention error correction rate;
c) collecting cognitive flexibility data:
when a user executes a cognitive flexibility forward scanning task, a data acquisition module acquires cognitive flexibility forward scanning data generated in the process that the user looks at the position of a polygonal figure, wherein the cognitive flexibility forward scanning data comprises a forward cognitive flexibility peak speed, a forward cognitive flexibility latent period, a forward cognitive flexibility correct rate and a forward cognitive flexibility error correction rate;
when a user executes a cognitive flexibility reverse saccade task, a data acquisition module acquires cognitive flexibility reverse saccade data generated in the process that the user looks at the opposite side of the position of the polygonal figure, wherein the cognitive flexibility reverse saccade data comprises a reverse cognitive flexibility peak speed, a reverse cognitive flexibility latent period, a reverse cognitive flexibility correct rate and a reverse cognitive flexibility error correction rate;
d) collecting visual space function data:
when a user executes a visual search paradigm of a visual space function experiment paradigm, a data acquisition module acquires visual space function data generated in the process of selecting a picture or a graph which is completely the same as a visual space function target object by the user, wherein the visual space function data comprises visual space function total watching duration, visual space function target object watching duration, visual space function total watching times, visual space function target object watching times and visual space function accuracy;
e) abstract function data collection:
when a user executes a visual search paradigm of an abstract function experiment paradigm, a data acquisition module acquires abstract function data generated in the process of selecting an abstract function target object by the user, wherein the abstract function data comprises abstract function total watching duration, abstract function target object watching duration, abstract function total watching times, abstract function target object watching times and abstract function accuracy;
3) the data processing module carries out data preprocessing:
the data processing module preprocesses the eye movement data collected by the data acquisition module to obtain eye movement data characteristic parameters; preprocessing includes filling missing values, smoothing noisy data, smoothing or deleting outliers, deleting irrelevant data, and integrating and sorting the data;
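The preprocessing steps listed above can be sketched with pandas. This is a minimal illustration, not the patent's implementation: the column names (`latency_ms`, `peak_velocity`), the median fill, the 3-sample rolling mean, and the 3-sigma clipping rule are all assumed choices.

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Fill missing values, smooth noisy data, and tame outliers (illustrative)."""
    df = df.copy()
    # fill missing values with each column's median
    df = df.fillna(df.median(numeric_only=True))
    # smooth noisy data with a centered 3-sample rolling mean
    df["peak_velocity"] = df["peak_velocity"].rolling(3, min_periods=1, center=True).mean()
    # clip outliers beyond 3 standard deviations from the column mean
    for col in ["latency_ms", "peak_velocity"]:
        mu, sigma = df[col].mean(), df[col].std()
        df[col] = df[col].clip(mu - 3 * sigma, mu + 3 * sigma)
    return df


df = pd.DataFrame({
    "latency_ms": [200.0, None, 220.0, 5000.0],
    "peak_velocity": [300.0, 310.0, None, 305.0],
})
out = preprocess(df)
```

Clipping (winsorizing) corresponds to "smoothing" an outlier; dropping the row would correspond to "deleting" it instead.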
4) assessing cognitive function scores using a regression model:
the intelligent evaluation module constructs a machine learning SVR regression model; the eye movement data and cognitive function score results of a large population are used as the samples and labels for training the model. During training, GridSearch (grid search) is used to search the model parameters, training cyclically over the different parameter settings until the best regression result is obtained; a genetic algorithm performs feature selection over the eye movement data characteristic parameters, yielding an optimal feature parameter subset and the trained SVR regression model. The SVR regression model then performs cognitive function evaluation on the feature parameter subset, producing the cognitive function score;
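The grid-searched SVR training described in step 4) can be sketched with scikit-learn. The data below are synthetic stand-ins for eye movement features and scale-derived scores, and the parameter grid is illustrative; the genetic-algorithm feature selection step is omitted here.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))                                  # 80 users x 5 eye movement features
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=80)   # stand-in cognitive scores

# grid search trains cyclically over the parameter combinations and keeps the best
param_grid = {"C": [0.1, 1, 10], "epsilon": [0.01, 0.1], "kernel": ["rbf", "linear"]}
search = GridSearchCV(SVR(), param_grid, cv=5)
search.fit(X, y)

model = search.best_estimator_    # the trained SVR regression model
scores = model.predict(X)         # predicted cognitive function scores
```

In the system described above, `X` would be the genetic-algorithm-selected feature parameter subset and `y` the scores obtained from commonly used cognitive function scales.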
5) classifying cognitive function classification by using a classification model:
the intelligent evaluation module adopts the scores of commonly used cognitive function scales as data labels and combines them with the eye movement data characteristic parameters as training data; it performs the classification computation with a decision tree, establishing and training a machine learning classification decision tree model. When establishing the model, all of the eye movement data characteristic parameters are used as features: starting from the root node, the information gain of every feature is computed for the node, the feature with the maximum information gain is selected as that node's feature, and child nodes are created according to the different values of that feature. Each child node generates new child nodes in the same manner until the information gain is small or no feature remains to be selected, finally yielding the machine learning classification decision tree model. The cognitive function scores are then classified by this model, which outputs the expected classification of cognitive function; the classification may be three-way, four-way, or finer, for example cognitive function equivalent to the same-age-group level or below the same-age-group level.
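The maximum-information-gain splitting described in step 5) is what scikit-learn's decision tree does with the entropy criterion, so the step can be sketched as follows. The features and labels are synthetic stand-ins; the two-way labels (0 = below same-age-group level, 1 = at level) are an illustrative simplification of the three-way or finer classification.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))                # stand-in eye movement feature parameters
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # stand-in scale-derived category labels

# criterion="entropy" selects, at each node, the split with maximum information gain;
# max_depth bounds the recursion, analogous to stopping when the gain becomes small
clf = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
clf.fit(X, y)
pred = clf.predict(X)                        # expected cognitive function classification
```

Stopping "when the information gain is small" can also be expressed directly via the `min_impurity_decrease` parameter instead of a depth bound.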
When the data scale grows substantially and the data contain more noise, a gradient boosting decision tree (GBDT) method is adopted: an additive model (i.e. a linear combination of basis functions) integrates a plurality of machine learning classification decision tree models, and the residual produced during training is continuously reduced, yielding an algorithm that classifies the data; the system implements GBDT with the open-source eXtreme Gradient Boosting (XGBoost) algorithm, which accelerates learning and improves model generality.
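The GBDT additive model can be sketched with scikit-learn's `GradientBoostingClassifier`, in which each new tree fits the residual (negative gradient) left by the current ensemble; the system's XGBoost implementation exposes an analogous fit/predict API with additional speed and regularization features. Data and labels here are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                # stand-in eye movement feature parameters
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # stand-in nonlinear category labels

gbdt = GradientBoostingClassifier(
    n_estimators=100,    # number of weak trees in the linear combination of basis functions
    learning_rate=0.1,   # shrinkage applied to each tree's residual correction
    max_depth=3,
)
gbdt.fit(X, y)
pred = gbdt.predict(X)
```

With XGBoost installed, `xgboost.XGBClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)` is a drop-in analogue of the estimator above.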
Finally, it is noted that the disclosed embodiments are intended to aid in further understanding of the invention, but those skilled in the art will appreciate that: various substitutions and modifications are possible without departing from the spirit and scope of this disclosure and the appended claims. Therefore, the invention should not be limited to the embodiments disclosed, but the scope of the invention is defined by the appended claims.

Claims (9)

1. An eye movement-based cognitive function assessment system, comprising: the system comprises an experimental paradigm module, a data acquisition module, a data processing module and an intelligent evaluation module; wherein,
the experimental paradigm module includes: the system comprises a memory assessment experimental paradigm, an executive function and attention experimental paradigm, a cognitive flexibility experimental paradigm, a visual space function experimental paradigm and an abstract function experimental paradigm, wherein an experimental paradigm module provides an experimental paradigm for a user to perform cognitive function assessment;
the data acquisition module adopts an eye tracker to collect eye movement data generated by a user in the process of executing an experimental paradigm through an eye movement tracking technology and synchronously sends the collected eye movement data to the data processing module; the corresponding collected eye movement data includes: memory assessment data, executive function and attention data, cognitive flexibility data, visual space function data and abstract function data;
the data processing module preprocesses the eye movement data collected by the data acquisition module to obtain eye movement data characteristic parameters;
the intelligent evaluation module stores a machine learning support vector regression (SVR) model and a machine learning classification decision tree model; it uses the SVR regression model to evaluate cognitive function from the eye movement data characteristic parameters, obtaining a cognitive function score, and uses the classification model to assign a cognitive function classification, thereby intelligently outputting the cognitive function score and classification judgment.
2. The eye-movement based cognitive function assessment system according to claim 1, wherein said memory assessment experimental paradigm comprises two experimental tasks: a memory glance task and a picture memory task; in the memory glance task, a central object and polygonal graphics are presented on the screen, with the polygonal graphics appearing randomly on either side of the screen, and the display time of the polygonal graphics is set in advance; the picture memory task uses material consisting of a plurality of groups of unrelated pictures, each group containing a plurality of pictures; one picture is randomly drawn from each group and presented at the center of the screen, the drawn pictures being called picture memory targets; after a set time, a new group of pictures is presented at the center of the screen, some of which are identical to the picture memory targets (pictures that have appeared before) and some of which differ from them (pictures that have not appeared before), the latter being called picture memory interferents.
3. The eye movement-based cognitive function assessment system according to claim 1, wherein said executive function and attention experimental paradigm comprises an executive function and attention forward glance task and an executive function and attention reverse glance task; during both tasks, a central object and polygonal graphics are presented on the screen, and the polygonal graphics appear randomly on either side of the screen.
4. The eye-movement based cognitive function assessment system according to claim 1, wherein said cognitive flexibility experimental paradigm comprises a cognitive flexibility forward glance task and a cognitive flexibility reverse glance task; in the forward glance task, a graphic of a set color, called the cognitive flexibility forward glance task prompt graphic, is presented on the screen, and a polygonal graphic then appears randomly on the left or right side of the screen; in the reverse glance task, a graphic whose color differs from that of the forward glance prompt graphic, called the cognitive flexibility reverse glance task prompt graphic, is presented on the screen, and a polygonal graphic then appears randomly on the left or right side of the screen.
5. The eye movement-based cognitive function assessment system according to claim 1, wherein said visual space function experimental paradigm adopts a visual search paradigm: a picture or graphic, called the visual space function target, is presented at the top of the screen; at the same time, a group of pictures or graphics is presented at the bottom of the screen, this group containing one picture or graphic completely identical to the visual space function target, while the remaining pictures or graphics are similar to the target but differ from it in one or more respects; the latter are called visual space function interferents.
6. The eye movement-based cognitive function assessment system according to claim 1, wherein said abstract function experimental paradigm adopts a visual search paradigm: a group of pictures is presented at the center of the screen, one of which differs in category from the rest; this picture is called the abstract function target, and the remaining pictures are called abstract function interferents.
7. A cognitive function assessment method of an eye movement-based cognitive function assessment system according to claim 1, wherein said cognitive function assessment method comprises the steps of:
1) designing an experimental paradigm module:
the experimental paradigm module comprises five paradigms: a memory assessment experimental paradigm, an executive function and attention experimental paradigm, a cognitive flexibility experimental paradigm, a visual space function experimental paradigm, and an abstract function experimental paradigm; the experimental paradigm module provides the experimental paradigms with which the user's cognitive function is assessed;
2) the data acquisition module collects eye movement data through an eye movement tracking technology:
the data acquisition module adopts an eye tracker to collect eye movement data generated by a user in the process of executing an experimental paradigm through an eye movement tracking technology and synchronously sends the collected eye movement data to the data processing module; the corresponding collected eye movement data includes: memory assessment data, executive function and attention data, cognitive flexibility data, visual space function data and abstract function data;
a) memory assessment data was collected:
when a user executes a memory glance task, a data acquisition module acquires memory glance data generated in the process that the user looks at the position of a polygonal figure after a central object appearing in a screen disappears, wherein the memory glance data comprises memory glance peak speed, memory glance latency, memory glance accuracy, memory glance redundant glance occurrence rate and memory glance error correction rate;
when a user executes a picture memory task, a data acquisition module acquires picture memory data generated in the process of recalling and selecting a picture memory target object by the user, wherein the picture memory data comprises the number of eye movement fixation points, fixation time, correct memory number, correct memory average reaction time and error memory number;
b) collecting executive function and attention data:
when a user executes an executive function and attention forward panning task, a data acquisition module acquires executive function and attention forward panning data generated in the process that the user looks at the position of a polygonal graph after a central object appearing in a screen disappears, wherein the executive function and attention forward panning data comprise a forward executive function and attention peak speed, a forward executive function and attention latency, a forward executive function and attention accuracy and a forward executive function and attention error correction rate;
when a user executes an execution function and attention reverse saccade task, a data acquisition module acquires execution function and attention reverse saccade data generated in the process that the user looks at the opposite side of the position of the polygonal figure after a central object appearing in a screen disappears, wherein the execution function and attention reverse saccade data comprise a reverse execution function and attention peak speed, a reverse execution function and attention latency, a reverse execution function and attention accuracy rate and a reverse execution function and attention error correction rate;
c) collecting cognitive flexibility data:
when a user executes a cognitive flexibility forward scanning task, a data acquisition module acquires cognitive flexibility forward scanning data generated in the process that the user looks at the position of a polygonal figure, wherein the cognitive flexibility forward scanning data comprises a forward cognitive flexibility peak speed, a forward cognitive flexibility latent period, a forward cognitive flexibility correct rate and a forward cognitive flexibility error correction rate;
when a user executes a cognitive flexibility reverse glance task, a data acquisition module acquires cognitive flexibility reverse glance data generated in a process that the user looks at the opposite side of the position of a polygonal figure, wherein the cognitive flexibility reverse glance data comprises a reverse cognitive flexibility peak speed, a reverse cognitive flexibility latent period, a reverse cognitive flexibility correct rate and a reverse cognitive flexibility error correction rate;
d) collecting visual space function data:
when a user executes a visual search paradigm of a visual space function experiment paradigm, a data acquisition module acquires visual space function data generated in the process of selecting a picture or a graph which is completely the same as a visual space function target object by the user, wherein the visual space function data comprises visual space function total watching duration, visual space function target object watching duration, visual space function total watching times, visual space function target object watching times and visual space function accuracy;
e) and (3) abstract function data collection:
when a user executes a visual search paradigm of an abstract function experiment paradigm, a data acquisition module acquires abstract function data generated in the process of selecting an abstract function target object by the user, wherein the abstract function data comprises abstract function total watching duration, abstract function target object watching duration, abstract function total watching times, abstract function target object watching times and abstract function accuracy;
3) the data processing module carries out data preprocessing:
the data processing module preprocesses the eye movement data collected by the data acquisition module to obtain eye movement data characteristic parameters;
4) assessing cognitive function scores using a regression model:
the intelligent evaluation module constructs a machine learning SVR regression model; the eye movement data of a large population and the cognitive function score results obtained with existing, commonly used cognitive function scales are used as the samples and labels for training the model. During training, grid search (GridSearch) is used to search the model parameters, training cyclically over the different parameter settings until the best regression result is obtained, yielding the trained SVR regression model; a genetic algorithm performs feature selection over the eye movement data characteristic parameters, yielding an optimal feature parameter subset. The SVR regression model then performs cognitive function evaluation on the feature parameter subset, producing the cognitive function score;
5) classifying the cognitive function grade using a classification model:
the intelligent evaluation module takes the scores of established cognitive function assessment scales as data labels and the eye movement data characteristic parameters as training data, performs the classification with a decision tree, and builds and trains a machine learning decision tree classification model; to build the model, all eye movement data characteristic parameters serve as candidate features; starting from the root node, the information gain of every feature is computed for the current node, the feature with the largest information gain is selected as that node's splitting feature, and child nodes are created according to the distinct values of that feature; each child node is expanded in the same way until the information gain becomes small or no feature remains to be selected, finally yielding the trained decision tree classification model; the cognitive function scores are then classified by this model, which outputs the predicted cognitive function class.
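The information-gain splitting rule described above is the classic ID3-style criterion; in scikit-learn it corresponds to a decision tree with `criterion="entropy"`. A hedged sketch on synthetic stand-in data (the real features and class labels are not reproduced here):

```python
# Sketch: entropy-based decision tree, i.e. each split picks the feature
# with the largest information gain, matching the construction in the claim.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))            # stand-in eye-movement features
y = (X[:, 0] + X[:, 2] > 0).astype(int)  # stand-in cognitive class label

clf = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
clf.fit(X, y)
pred = clf.predict(X)
train_acc = (pred == y).mean()
```

Limiting `max_depth` plays the role of the claim's stopping rule ("until the information gain is small or no feature can be selected").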
8. The cognitive function assessment method according to claim 7, wherein, when the data volume grows substantially and noisy data increase, a gradient boosting decision tree (GBDT) method is adopted: multiple decision tree classification models are combined in an additive model, and the residual produced during training is reduced iteratively to classify the data; the system implements GBDT with the open-source extreme gradient boosting (XGBoost) algorithm, which accelerates learning and improves model generality.
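The additive residual-fitting idea of GBDT can be sketched with scikit-learn's `GradientBoostingClassifier`; the claim names the open-source XGBoost library, which exposes an equivalent interface, but the sketch below avoids that extra dependency. Data and noise level are synthetic assumptions.

```python
# Sketch: an additive ensemble of shallow trees (GBDT), each stage fit to
# the residual of the previous stages, on noisy synthetic labels.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
noise = rng.random(200) < 0.1   # flip 10% of labels: the noisy-data setting
y[noise] = 1 - y[noise]

gbdt = GradientBoostingClassifier(
    n_estimators=100,   # number of trees in the additive model
    learning_rate=0.1,  # shrinkage applied to each residual fit
    max_depth=3,
    random_state=0,
)
gbdt.fit(X, y)
acc = gbdt.score(X, y)
```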
9. The cognitive function assessment method according to claim 7, wherein in step 3) the data preprocessing comprises filling missing values, smoothing noisy data, smoothing or deleting outliers, deleting irrelevant data, and data integration and sorting.
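A minimal pandas sketch of the preprocessing operations listed in claim 9, under assumed column names (`subject`, `fixation_ms`, `session_note` are placeholders, not from the patent):

```python
# Sketch: fill missing values, tame outliers, smooth noisy data, delete
# irrelevant columns, then integrate/sort -- the steps named in claim 9.
import pandas as pd

raw = pd.DataFrame({
    "subject": [2, 1, 3],
    "fixation_ms": [250.0, None, 5000.0],  # None = missing, 5000 = outlier
    "session_note": ["a", "b", "c"],       # irrelevant to the model
})

clean = raw.drop(columns=["session_note"])                     # delete irrelevant data
clean["fixation_ms"] = clean["fixation_ms"].fillna(clean["fixation_ms"].median())
low, high = clean["fixation_ms"].quantile([0.05, 0.95])
clean["fixation_ms"] = clean["fixation_ms"].clip(low, high)    # smooth/limit outliers
clean["fixation_ms"] = clean["fixation_ms"].rolling(2, min_periods=1).mean()  # smooth noise
clean = clean.sort_values("subject").reset_index(drop=True)    # integrate and sort
```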
CN202210654152.8A 2022-06-10 2022-06-10 Cognitive function evaluation system based on eye movement and evaluation method thereof Pending CN115047969A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210654152.8A CN115047969A (en) 2022-06-10 2022-06-10 Cognitive function evaluation system based on eye movement and evaluation method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210654152.8A CN115047969A (en) 2022-06-10 2022-06-10 Cognitive function evaluation system based on eye movement and evaluation method thereof

Publications (1)

Publication Number Publication Date
CN115047969A true CN115047969A (en) 2022-09-13

Family

ID=83161931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210654152.8A Pending CN115047969A (en) 2022-06-10 2022-06-10 Cognitive function evaluation system based on eye movement and evaluation method thereof

Country Status (1)

Country Link
CN (1) CN115047969A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116098587A (en) * 2023-01-13 2023-05-12 北京中科睿医信息科技有限公司 Cognition assessment method, device, equipment and medium based on eye movement
CN116098587B (en) * 2023-01-13 2023-10-10 北京中科睿医信息科技有限公司 Cognition assessment method, device, equipment and medium based on eye movement
CN117893466A (en) * 2023-12-06 2024-04-16 山东睿医医疗科技有限公司 Cognitive decline condition evaluation method, device, computer equipment and medium

Similar Documents

Publication Publication Date Title
Kübler et al. SubsMatch 2.0: Scanpath comparison and classification based on subsequence frequencies
CN115047969A (en) Cognitive function evaluation system based on eye movement and evaluation method thereof
JP6472621B2 (en) Classifier construction method, image classification method, and image classification apparatus
Fahimi et al. On metrics for measuring scanpath similarity
CN111134664B (en) Epileptic discharge identification method and system based on capsule network and storage medium
CN104463916B (en) Eye movement fixation point measurement method based on random walk
CN116226629B (en) Multi-model feature selection method and system based on feature contribution
Zhang et al. A human-in-the-loop deep learning paradigm for synergic visual evaluation in children
Salih et al. Prediction of student’s performance through educational data mining techniques
CN116740426A (en) Classification prediction system for functional magnetic resonance images
Degadwala et al. Improvements in Diagnosing Kawasaki Disease Using Machine Learning Algorithms
Lyford et al. Using machine learning to understand students’ gaze patterns on graphing tasks
Havugimana et al. Predicting cognitive load using parameter-optimized cnn from spatial-spectral representation of eeg recordings
CN115906002B (en) Learning input state evaluation method based on multi-granularity data fusion
KR101168596B1 (en) A quality test method of dermatoglyphic patterns analysis and program recording medium
CN116484290A (en) Depression recognition model construction method based on Stacking integration
AU2020101294A4 (en) Student’s physiological health behavioural prediction model using svm based machine learning algorithm
Neshov et al. Softvotingsleepnet: Majority vote of deep learning models for sleep stage classification from raw single eeg channel
Chaithanya et al. A Comprehensive Analysis: Classification Techniques for Educational Data mining
Hamoud et al. A prediction model based machine learning algorithms with feature selection approaches over imbalanced dataset
KR20220005945A (en) Method, system and non-transitory computer-readable recording medium for generating a data set on facial expressions
Chen et al. COCO-Search18: A dataset for predicting goal-directed attention control
Fuhl et al. Gaze-based Assessment of Expertise in Chess
Villar et al. A first approach to a fuzzy classification system for age estimation based on the pubic bone
CN112465152B (en) Online migration learning method suitable for emotional brain-computer interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination