CN111081374B - Autism auxiliary diagnosis device based on common attention paradigm - Google Patents


Info

Publication number
CN111081374B
CN111081374B (application CN201911291485.3A)
Authority
CN
China
Prior art keywords
indexes
module
index
data
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911291485.3A
Other languages
Chinese (zh)
Other versions
CN111081374A (en)
Inventor
彭琪钧
金雪梅
曹伟
朱绘霖
陈佳佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University filed Critical South China Normal University
Priority to CN201911291485.3A priority Critical patent/CN111081374B/en
Publication of CN111081374A publication Critical patent/CN111081374A/en
Application granted granted Critical
Publication of CN111081374B publication Critical patent/CN111081374B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with fixed number of clusters, e.g. K-means clustering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Probability & Statistics with Applications (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses an autism auxiliary diagnosis device based on a common attention paradigm, comprising a playing module, a data acquisition module, an index extraction module, an index identification module and a prediction module. The index extraction module extracts the fixation time ratio in the target area, the delay time of the first fixation on the experimenter's head in the video, and the longest common subsequence similarity. The index identification module uses a clustering algorithm to cluster the data sets of single indexes and of index combinations, and tests the clustered data to identify which indexes or index combinations serve as key performance indexes. The prediction module takes the key performance indexes as features of the prediction classification model, trains a classifier, and evaluates and predicts whether a tester is an ASD patient. The device can standardize the acquisition of eye movement data, extract key performance indexes as features of the classification model, and finally produce accurate diagnosis results.

Description

Autism auxiliary diagnosis device based on common attention paradigm
Technical Field
The invention belongs to the field of psychology and machine learning, and particularly relates to an autism auxiliary diagnosis device based on a common attention paradigm.
Background
Most studies of Autism Spectrum Disorder (ASD) indicate that a deficit in common attention ability has become an important criterion for determining whether a child has ASD. In practice, manual assessment of common attention ability has clear disadvantages: it requires a professional with rich clinical experience to run the experiment and judge from that experience, and this time-consuming, labor-intensive, non-quantitative and non-standardized approach limits the accuracy of the judgment.
Common attention refers to two persons jointly attending to a certain object or event (target) in order to share interest or behavior regarding that target. To quantify common attention performance, eye movement data are usually analysed so as to extract and identify performance indexes that reflect the difference in common attention ability between ASD and typically developing (TD) children, thereby assisting the evaluation and prediction of ASD. Indexes are divided into static and dynamic indexes, and how to extract them is a challenging problem. Moreover, a statistical significance test can only show that the means of the two groups differ significantly on a given performance index; it does not imply that the index can separate the two groups with high classification accuracy.
Therefore, how to distinguish ASD from TD more accurately on the basis of effective performance indexes is a problem of great research significance and practical value.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an autism auxiliary diagnosis device based on a common attention paradigm. The device can standardize the acquisition of eye movement data, extract key performance indexes as features of a classification model, and finally produce accurate diagnosis results.
The purpose of the invention is realized by the following technical scheme: an autism auxiliary diagnosis device based on a common attention paradigm includes:
the playing module is used for playing a pre-recorded video that comprises a plurality of target areas, so as to stimulate and induce common attention behaviors of the tested person during the experiment;
the data acquisition module is connected with the playing module and is used for acquiring the eye movement information of the tested persons, wherein the tested persons comprise individuals with ASD and typically developing (TD) individuals;
the index extraction module is used for extracting a plurality of static indexes and dynamic indexes to be screened;
the index identification module is used for clustering the single indexes and the data sets combined among the indexes by adopting a clustering algorithm according to the extracted indexes, and checking the clustered data according to the known ASD and TD labels in the data sets to identify which indexes or combination of the indexes are the key performance indexes;
and the prediction module is used for taking the key performance indexes identified in the index identification module as the characteristics of the prediction classification model, training a classifier by combining with the known ASD and TD labels in the data set, and evaluating and predicting whether the tester is an ASD patient.
Through the index identification module, the extracted indexes and their combinations are clustered, and the index combinations that can effectively distinguish ASD from TD are analysed to obtain the key performance indexes; a classifier trained on these key performance indexes enables more accurate diagnosis.
Preferably, the video played by the playing module comprises three fixed objects and a moving experimenter; during the experiment the experimenter turns his head toward the fixed objects in sequence and then points at the objects by hand, and the target areas comprise the experimenter's head and the three objects. The tested person attends to the corresponding object according to the direction of the experimenter's head turn, performing a common attention action.
Preferably, the data acquisition module comprises an eye tracker, the eye tracker is connected with the playing module and is used for synchronously acquiring the eye movement information of the tested person, and the eye movement information comprises time information and eye fixation coordinate information of a certain time point.
Preferably, the autism auxiliary diagnosis device comprises a data cleaning module for screening the eye movement information and processing missing values and abnormal values, so as to improve the precision of subsequent training.
Furthermore, in the data cleaning module, screening the eye movement information means retaining only the data of the experimental segments related to common attention and deleting the data of the rest segments in the middle of the experiment; missing values and abnormal values are processed with a mean-filling method.
Preferably, in the index extraction module, the static indexes are the fixation time ratio in the target area, the delay time of the first fixation on the experimenter's head in the video, and the delay times of fixating the three fixed objects; the dynamic index is the longest common subsequence similarity.
Furthermore, in the index extraction module, the fixation time ratio in the target area is extracted by:

fixation_time = T_AOI / T_total

wherein T_AOI denotes the cumulative fixation time in the target area and T_total denotes the total experiment time.
Furthermore, in the index extraction module, the delay time the_first_fixation of the first fixation on the experimenter's head in the video is extracted by:

the_first_fixation = T_head - T_start

wherein T_head denotes the time of the first fixation on the experimenter's head and T_start denotes the time at which the experiment started.
Furthermore, in the index extraction module, the delay times of fixating the three objects, the_latency_for_O_1, the_latency_for_O_2 and the_latency_for_O_3, are extracted by:

the_latency_for_O_i = T_O_i - T_i

wherein T_O_i denotes the time at which the tested person fixates the i-th object and T_i denotes the time at which the experimenter turns his head and looks at the i-th object.
Furthermore, in the index extraction module, the longest common subsequence similarity is extracted with the dynamic-programming LCS (Longest Common Subsequence) algorithm as follows:

Let X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n), wherein X denotes the eye movement trajectory sequence of the tested individual during the experiment, x_i denotes the actual area fixated by the eyes at a certain time point, Y denotes a logical sequence generated according to the length of the X sequence, and y_i denotes the logical area the eyes should fixate at a certain time point; the longest common subsequence of X and Y under the LCS algorithm is LCS(X, Y). The longest common subsequence similarity is calculated as follows:

LCS_similarity = len(LCS(X, Y)) / len(X)

wherein len(LCS(X, Y)) denotes the length of the longest common subsequence LCS(X, Y) and len(X) denotes the length of the X sequence, i.e. n.
Furthermore, in the index identification module, after the index of the delay time of the first fixation on the experimenter's head in the video is obtained, decimal scaling normalization is performed first:

the_first_fixation* = the_first_fixation / 10^k

wherein k is the smallest integer such that Max(|the_first_fixation*|) < 1.
Furthermore, in the index identification module, after the indexes of the delay times of fixating the three fixed objects are obtained, decimal scaling normalization is performed first:

the_latency_for_O_i* = the_latency_for_O_i / 10^k

wherein k is the smallest integer such that Max(|the_latency_for_O_i*|) < 1.
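Decimal scaling normalization, as applied to the latency indexes above, can be sketched as follows; the millisecond values are hypothetical.

```python
# Decimal scaling normalization: v* = v / 10**k, where k is the smallest
# integer such that max(|v*|) < 1. Latency values (in ms) are hypothetical.

def decimal_scale(values):
    k = 0
    while max(abs(v) for v in values) / 10 ** k >= 1:
        k += 1
    return [v / 10 ** k for v in values], k

latencies = [850.0, 1200.0, 430.0]
scaled, k = decimal_scale(latencies)
print(k)       # 4, because 1200 / 10**4 = 0.12 < 1 while 1200 / 10**3 = 1.2
print(scaled)  # [0.085, 0.12, 0.043]
```

This moves the decimal point just far enough that every normalized value lies in (-1, 1), putting the latency indexes on a scale comparable to the ratio-valued indexes.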
Preferably, in the index identification module, a data set of a single index and a data set of a combination of indexes are respectively clustered by using a K-means clustering algorithm, and difference significance chi-square test is performed on the clustered data to identify which indexes or combination of indexes are key performance indexes.
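Assuming the standard scikit-learn and SciPy APIs, the identification step above (K-means with K = 2, then a chi-square test of cluster membership against the known ASD/TD labels) might be sketched as follows; the index values and group sizes are synthetic placeholders, not the patent's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
# synthetic LCS-similarity values: 14 ASD-like and 16 TD-like subjects
index = np.concatenate([rng.normal(0.4, 0.1, 14),
                        rng.normal(0.8, 0.1, 16)])
labels = np.array([1] * 14 + [0] * 16)            # 1 = ASD, 0 = TD

# cluster the single index into two classes
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    index.reshape(-1, 1))

# independent fourfold table: rows = true label, columns = cluster
table = np.array([[np.sum((labels == g) & (clusters == c)) for c in (0, 1)]
                  for g in (1, 0)])
chi2, p, dof, expected = chi2_contingency(table)
print(p < 0.05)  # a significant index qualifies as a key performance index
```

The same procedure applies to index combinations by stacking several index columns before clustering.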
Preferably, in the prediction module, the classifier uses a logistic binary classification model, inputs characteristic index data of the testee, and outputs a risk probability value of suffering from the ASD.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the method uses video input and an eye tracker to collect eye movement data in a standardized manner, processes the eye movement data, extracts static and dynamic quantitative indexes, further uses a K-means clustering algorithm and chi-square test identification key performance indexes as the characteristics of a classification model, and finally generates a prediction result to assist in evaluation and diagnosis of ASD.
Drawings
FIG. 1 is a schematic diagram of the structure of the apparatus of the present invention;
fig. 2 is a flow chart of a method for implementing the apparatus of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Examples
As shown in fig. 1, the autism auxiliary diagnosis device based on the common attention paradigm of the embodiment includes a playing module, a data collecting module, a data cleaning module, an index extracting module, an index identifying module and a predicting module, which are connected in sequence, and after obtaining a classifier through training, the classifier is stored in the predicting module.
In this embodiment, the playing module comprises a video playing device that loads the externally recorded video through its interface and plays it when an experiment or test is conducted, so as to stimulate and induce common attention behaviors of the tested person. The video contains three stationary objects, placed on the left, middle and right of a table, and an experimenter who sits behind the table. After the video begins, the experimenter says "hello", then turns his head toward each object on the table in turn and points at it by hand. The tested person attends to each object according to the direction of the experimenter's head turn and pointing finger.
In this embodiment, the data acquisition module is used to acquire the eye movement information of the tested person and is implemented with a Tobii X3-120 eye tracker. The eye tracker is connected with the video playing device, and the coordinate position between the tester and the playing module is calibrated before use. During the test, while the tested person watches the video, the eye tracker records the eye movement trajectory information, including time information and the eye fixation coordinates at each time point. The subjects included individuals with ASD and typically developing (TD) individuals, each with a label. In this example there were 30 subjects: 14 ASD children and 16 TD children.
In this embodiment, the data cleaning module is used to filter the eye movement data and process missing and abnormal values. Filtering the eye movement data means retaining only the data of the experimental segments related to common attention and deleting the data of the rest segments in the middle of the experiment, removing useless data and reducing the amount of computation. A missing value is a gaze point whose coordinate information is absent; since the eye tracker records eye movement information every 8 ms, the eye movement trajectory can be regarded as continuous, and a missing value can be filled with the mean of the coordinate information before and after it. An abnormal value is a gaze point whose coordinates fall outside the screen or that is obviously a noise point; it can be deleted directly or processed with the mean-filling method.
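The mean-filling step described above can be sketched as follows; it assumes the gaps are interior (valid samples exist on both sides), and the pixel coordinates are hypothetical.

```python
# Mean filling of missing gaze samples: with ~8 ms sampling the trajectory
# is near-continuous, so a missing coordinate is replaced by the mean of
# its nearest valid neighbours. Assumes interior gaps only.

def fill_missing(points):
    filled = list(points)
    for i, p in enumerate(filled):
        if p is None:
            # nearest valid sample before (possibly already filled) and after
            prev = next(q for q in reversed(filled[:i]) if q is not None)
            nxt = next(q for q in points[i + 1:] if q is not None)
            filled[i] = ((prev[0] + nxt[0]) / 2, (prev[1] + nxt[1]) / 2)
    return filled

track = [(100, 200), None, (120, 240), (125, 245)]  # hypothetical pixel coords
print(fill_missing(track))  # [(100, 200), (110.0, 220.0), (120, 240), (125, 245)]
```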
In this embodiment, from the cleaned data the index extraction module selects two static indexes, the fixation time ratio in the target area (AOI, Area Of Interest) and the delay time of the first fixation on the experimenter's head in the video, and one dynamic index, the longest common subsequence similarity, as the objects of subsequent index identification. Of course, these 3 indexes are only an example; in practical applications other static and dynamic indexes may also be extracted. The extracted indexes can be combined in various ways, including 2-dimensional, 3-dimensional and even higher-dimensional combinations; the more indexes are extracted, the more accurate the key performance indexes or combinations obtained in the subsequent cluster analysis.
In the above technical solution, the fixation time ratio in the target area is calculated according to the following formula:

fixation_time = T_AOI / T_total

wherein T_AOI denotes the cumulative fixation time in the target area and T_total denotes the total experiment time.
In the above technical solution, the delay time of the first fixation on the experimenter's head in the video is calculated according to the following formula:

the_first_fixation = T_head - T_start

wherein T_head denotes the time of the first fixation on the experimenter's head and T_start denotes the time at which the experiment started.
In the above technical solution, the dynamic index can be extracted from the video, and the longest common subsequence similarity is calculated with the dynamic-programming LCS algorithm:

X = (x_1, x_2, ..., x_n), Y = (y_1, y_2, ..., y_n)

wherein X denotes the eye movement trajectory sequence of the tested individual during the experiment, x_i denotes the actual area fixated by the eyes at a certain time point, Y denotes a logical sequence generated according to the length of the X sequence, and y_i denotes the logical area the eyes should fixate at a certain time point; the longest common subsequence of X and Y under the LCS algorithm is LCS(X, Y). To eliminate the influence of the different X sequence lengths of different tested individuals, the longest common subsequence similarity is calculated according to the following formula:

LCS_similarity = len(LCS(X, Y)) / len(X)

wherein len(LCS(X, Y)) denotes the length of the longest common subsequence LCS(X, Y) and len(X) denotes the length of the X sequence, i.e. n.
In this embodiment, the index identification module performs decimal scaling normalization on the extracted indexes, clusters the single indexes and the data sets combined among the indexes by using a K-means clustering algorithm, and performs difference significance chi-square test on the clustered data according to the known ASD and TD tags in the data sets to identify which indexes or combinations of indexes are key performance indexes.
In the above technical solution, since the fixation time ratio and the LCS similarity are already in percentage form, only the first-fixation delay time needs to be normalized, calculated according to the following formula:

the_first_fixation* = the_first_fixation / 10^k

wherein k is the smallest integer such that Max(|the_first_fixation*|) < 1.
Specifically, the data sets of single indexes and of index combinations are clustered separately with K = 2, i.e. the data are clustered into two classes. According to the ASD and TD labels, one cluster after clustering can be regarded as a suspected-ASD class and the other as a suspected-TD class; a chi-square test is then performed on the resulting independent fourfold table. If P < 0.05, whether an individual suffers from ASD differs significantly between the two clusters formed by the index, so the index is regarded as a key performance index that can distinguish individuals with ASD from TD individuals with high classification accuracy. The P value is the probability that the observed variation is due to sampling error; by statistical convention, results with P < 0.05 are regarded as significantly different.
In this embodiment, the independent fourfold table after clustering with the LCS similarity index is as follows:

TABLE 1. Independent fourfold table clustered with the LCS similarity index

        Suspected ASD    Suspected TD
ASD     10               4
TD      2                14
As the table shows, whether an individual suffers from ASD differs significantly across the two clusters: individuals with ASD are more likely to be grouped into the suspected-ASD class, while TD individuals are more likely to be grouped into the suspected-TD class. Finally, the fixation time ratio, the LCS similarity, the combination of the fixation time ratio with the first-fixation delay, and the combination of the LCS similarity with the first-fixation delay are obtained as key performance indexes or index combinations, among which the LCS similarity gives the highest classification precision: 71.43% for ASD and 87.5% for TD.
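The chi-square test on the fourfold table reported above can be reproduced with SciPy (`scipy.stats.chi2_contingency` applies the Yates continuity correction by default for 2x2 tables):

```python
from scipy.stats import chi2_contingency

# rows: true group (ASD, TD); columns: (suspected ASD, suspected TD)
table = [[10, 4],
         [2, 14]]
chi2, p, dof, expected = chi2_contingency(table)
print(p < 0.05)  # True: cluster membership is significantly associated with diagnosis

# per-group agreement matches the classification precisions quoted above
print(round(10 / 14 * 100, 2))  # 71.43 (ASD)
print(round(14 / 16 * 100, 1))  # 87.5 (TD)
```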
In this embodiment, after the index identification module identifies the key performance indexes, their feature data are used as input and a classifier is trained in combination with the known ASD and TD labels in the data set to evaluate and predict whether a tester is an ASD patient. The classifier can use a logistic binary classification model: the tester's feature data are input into the prediction module, and the evaluation result is displayed as a risk probability value of suffering from ASD.
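Assuming scikit-learn's `LogisticRegression` as the logistic binary classification model, the prediction step might look like the sketch below; the training data are synthetic placeholders, not the patent's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# one key-performance-index feature per subject (e.g. LCS similarity)
X = np.concatenate([rng.normal(0.4, 0.1, 14),    # ASD-like subjects
                    rng.normal(0.8, 0.1, 16)]    # TD-like subjects
                   ).reshape(-1, 1)
y = np.array([1] * 14 + [0] * 16)                # 1 = ASD, 0 = TD

clf = LogisticRegression().fit(X, y)
risk = clf.predict_proba([[0.45]])[0, 1]         # P(ASD) for a new tester
print(0.0 <= risk <= 1.0)                        # the output is a probability
```

The second column of `predict_proba` is the probability of the positive (ASD) class, which is exactly the risk probability value the prediction module displays.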
By extracting and identifying performance indexes that reflect the difference in common attention ability between ASD and TD children, performing cluster analysis on the indexes to determine the key performance indexes that effectively distinguish ASD from TD, and building a classifier on the key performance index information, the invention achieves accurate identification results.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. Autism auxiliary diagnosis device based on a common attention paradigm, characterized by comprising:
the playing module is used for playing a pre-recorded video, the video comprises a plurality of target areas, and common attention behaviors of the tested person are stimulated and induced during the experiment; the video played by the playing module comprises three fixed objects and a moving experimenter, the experimenter turns his head toward the fixed objects in sequence during the experiment and then points at the objects by hand, and the target areas comprise the experimenter's head and the three objects;
the data acquisition module is connected with the playing module and is used for acquiring eye movement information of the tested persons, wherein the tested persons comprise individuals with ASD and typically developing (TD) individuals;
the index extraction module is used for extracting a plurality of static indexes and dynamic indexes to be screened; in the index extraction module, static indexes select the watching time ratio in a target area, the delay time of watching the head of an experimenter in a video for the first time and the delay time of watching three fixed objects, and dynamic indexes select the longest common subsequence similarity;
extracting the longest common subsequence similarity according to the dynamic-programming LCS algorithm as follows:

let X = (x_1, x_2, ..., x_n), Y = (y_1, y_2, ..., y_n), wherein X denotes the eye movement trajectory sequence of the tested individual during the experiment, x_i denotes the actual area fixated by the eyes at a certain time point, Y denotes a logical sequence generated according to the length of the X sequence, y_i denotes the logical area the eyes should fixate at a certain time point, and the longest common subsequence of X and Y under the LCS algorithm is LCS(X, Y); the longest common subsequence similarity is calculated as follows:

LCS_similarity = len(LCS(X, Y)) / len(X)

wherein len(LCS(X, Y)) denotes the length of the longest common subsequence LCS(X, Y) and len(X) denotes the length of the X sequence, i.e. n;
the index identification module is used for clustering the single indexes and the data sets combined among the indexes by adopting a clustering algorithm according to the extracted indexes, and checking the clustered data according to the known ASD and TD labels in the data sets to identify which indexes or combination of the indexes are used as key performance indexes;
the prediction module is used for taking the key performance indexes identified in the index identification module as the characteristics of a prediction classification model, training a classifier by combining with known ASD and TD labels in a data set, and evaluating and predicting whether a tester is an ASD patient;
in the index identification module, a single index and a data set combined among the indexes are respectively clustered by using a K-means clustering algorithm, and difference significance chi-square test is carried out on the clustered data so as to identify which indexes or combination of the indexes are used as key performance indexes;
in the prediction module, a classifier uses a logistic binary classification model, characteristic index data of a tester is input, and the output result is the risk probability value of suffering from ASD.
2. The autism auxiliary diagnosis device according to claim 1, wherein the data collection module comprises an eye tracker, the eye tracker is connected to the playing module, and is configured to synchronously collect eye movement information of the person to be tested, and the eye movement information comprises time information and eye fixation coordinate information at a certain time point.
3. The device for assisting in diagnosing autism according to claim 1, wherein the device for assisting in diagnosing autism comprises a data cleansing module for screening eye movement information and processing missing values and abnormal values;
screening the eye movement information means that only data of experimental segments related to common attention are screened, and data of rest segments in the middle of experiments are deleted; missing and outliers were processed using a mean-filling method.
4. The autism auxiliary diagnosis device according to claim 1, wherein the index extraction module extracts the fixation time ratio fixation_time in the target area by:

fixation_time = T_AOI / T_total

wherein T_AOI denotes the cumulative fixation time in the target area and T_total denotes the total experiment time.
5. The autism auxiliary diagnosis device according to claim 1, wherein the index extraction module extracts the delay time of the first fixation on the experimenter's head in the video by:

the_first_fixation = T_head - T_start

wherein T_head denotes the time of the first fixation on the experimenter's head and T_start denotes the time at which the experiment started.
6. The autism auxiliary diagnosis device according to claim 5, wherein the index identification module performs decimal scaling normalization after acquiring the index of the delay time of the first fixation on the experimenter's head in the video, by:

the_first_fixation* = the_first_fixation / 10^k

wherein k is the smallest integer such that Max(|the_first_fixation*|) < 1.
CN201911291485.3A 2019-12-16 2019-12-16 Autism auxiliary diagnosis device based on common attention paradigm Active CN111081374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911291485.3A CN111081374B (en) 2019-12-16 2019-12-16 Autism auxiliary diagnosis device based on common attention paradigm

Publications (2)

Publication Number Publication Date
CN111081374A CN111081374A (en) 2020-04-28
CN111081374B true CN111081374B (en) 2022-09-13

Family

ID=70314671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911291485.3A Active CN111081374B (en) 2019-12-16 2019-12-16 Autism auxiliary diagnosis device based on common attention paradigm

Country Status (1)

Country Link
CN (1) CN111081374B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613786B (en) * 2020-12-30 2022-05-06 武汉理工大学 Individualized and differentiated driving risk evaluation method based on driving safety event
CN114129164B (en) * 2021-11-05 2022-09-16 山东心法科技有限公司 Autism spectrum disorder risk detection system, detection terminal and cloud server

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069304A (en) * 2015-08-18 2015-11-18 广东顺德中山大学卡内基梅隆大学国际联合研究院 Machine learning-based method for evaluating and predicting ASD
CN107256332A (en) * 2017-05-24 2017-10-17 上海交通大学 The electric experimental evaluation system and method for brain based on eye movement data
CN109620259A (en) * 2018-12-04 2019-04-16 北京大学 Based on eye movement technique and machine learning to the system of autism children's automatic identification
CN109820524A (en) * 2019-03-22 2019-05-31 电子科技大学 The acquisition of self-closing disease eye movement characteristics and classification wearable system based on FPGA

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of visual search eye-movement patterns in identifying children with autism spectrum disorder; Wu Xueyuan et al.; Chinese Journal of Child Health Care; 30 September 2019; Vol. 27, No. 9; pp. 949-952, 978 *

Also Published As

Publication number Publication date
CN111081374A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
Tracy et al. Investigating voice as a biomarker: deep phenotyping methods for early detection of Parkinson's disease
Sun et al. Diagnosis and analysis of diabetic retinopathy based on electronic health records
US20140226888A1 (en) Method, Systems and Computer Program Products for Medical Brian Imaging Analysis
WO2010005656A2 (en) Brain condition assessment
CN111081374B (en) Autism auxiliary diagnosis device based on common attention paradigm
CN108742513A (en) Patients with cerebral apoplexy rehabilitation prediction technique and system
CN111833330B (en) Intelligent lung cancer detection method and system based on fusion of image and machine olfaction
CN116344050B (en) Evaluation method based on multidimensional health management model
Meyer Guidelines for reporting information in studies of diagnostic test accuracy: the STARD initiative
Ishikawa et al. Handwriting Features of Multiple Drawing Tests for Early Detection of Alzheimer's Disease: A Preliminary Result.
CN104361245B (en) Measurement data-processing system and method
CN113658697B (en) Psychological assessment system based on video fixation difference
CN114847875A (en) Method for identifying functional near infrared spectrum and infrared thermal image of hyperactivity
CN116484290A (en) Depression recognition model construction method based on Stacking integration
CN115497621A (en) Old person cognitive status evaluation system
Tripoliti et al. ProMiSi Architecture-A Tool for the Estimation of the Progression of Multiple Sclerosis Disease using MRI
CN113729708A (en) Lie evaluation method based on eye movement technology
CN113539486A (en) Health state identification system based on traditional Chinese medicine facial and tongue manifestation dynamic change
CN112603337A (en) Electroencephalogram signal identification method
CN109815615A (en) Chronic obstructive pulmonary disease recurrence prediction method, apparatus and computer equipment based on LightGBM model
CN115607159B (en) Depression state identification method and device based on eye movement sequence space-time characteristic analysis
CN113555118B (en) Method and device for predicting disease degree, electronic equipment and storage medium
Mahesh et al. Implementation of a Deep Neural Network-Based Framework for Actigraphy Analysis and Prediction of Schizophrenia
Saleena Analysis of machine learning and deep learning prediction models for sepsis and neonatal sepsis: A systematic review

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant