CN111081374A - Autism auxiliary diagnosis device based on common attention paradigm - Google Patents
- Publication number: CN111081374A (application CN201911291485.3A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Abstract
The invention discloses an autism auxiliary diagnosis device based on a common attention paradigm, comprising a playing module, a data acquisition module, an index extraction module, an index identification module and a prediction module. The index extraction module extracts the fixation time ratio within the target area, the latency of the first fixation on the experimenter's head in the video, and the longest common subsequence similarity. The index identification module applies a clustering algorithm to the data sets of single indexes and of index combinations and tests the clustered data to identify which indexes, or which combinations of indexes, serve as key performance indexes. The prediction module uses the key performance indexes as features of the prediction classification model, trains a classifier, and evaluates and predicts whether the subject is an ASD patient. The device can standardize the eye-movement data acquisition process, extract key performance indexes as the features of the classification model, and ultimately deliver accurate diagnosis results.
Description
Technical Field
The invention belongs to the field of psychology and machine learning, and particularly relates to an autism auxiliary diagnosis device based on a common attention paradigm.
Background
Most studies on Autism Spectrum Disorder (ASD) indicate that a deficit in common (joint) attention ability has become an important criterion for determining whether a child has ASD. In practice, manual assessment of common attention ability has clear drawbacks: it requires a professional with rich clinical experience to run the experiment and judge from that experience, and this time-consuming, labor-intensive, non-quantitative and non-standardized approach limits the accuracy of the judgment.
Common attention refers to two persons jointly attending to a certain object or event (target) in order to share interest in, or behavior toward, it. To quantify common attention performance, analysis of eye movement data is usually required, from which performance indexes reflecting the difference in common attention ability between ASD children and typically developing (TD) children are extracted and identified to assist ASD evaluation and prediction. Indexes divide into static indexes and dynamic indexes, and how to extract them is a challenging problem. A statistical significance test can only show that the means of the two groups differ significantly under a given performance index; it does not imply that dividing the two groups by that index yields high classification accuracy.
Therefore, how to realize more accurate ASD and TD differentiation according to effective performance indexes becomes a problem with great research significance and practical value.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an autism auxiliary diagnosis device based on the common attention paradigm that standardizes the eye-movement data acquisition process, extracts key performance indexes as the features of a classification model, and ultimately delivers accurate diagnosis results.
The purpose of the invention is realized by the following technical scheme: an autism auxiliary diagnosis device based on a common attention paradigm includes:
the playing module is used for playing a video recorded in advance, the video comprises a plurality of target areas, and the common attention behaviors of the tested personnel are stimulated and induced in the experiment process;
the data acquisition module is connected with the playing module and is used for acquiring eye movement information of tested persons, wherein the tested persons comprise individuals suffering from ASD and individuals suffering from TD;
the index extraction module is used for extracting a plurality of static indexes and dynamic indexes to be screened;
the index identification module is used for clustering the single indexes and the data sets combined among the indexes by adopting a clustering algorithm according to the extracted indexes, and checking the clustered data according to the known ASD and TD labels in the data sets to identify which indexes or combination of the indexes are the key performance indexes;
and the prediction module is used for taking the key performance indexes identified in the index identification module as the characteristics of the prediction classification model, training a classifier by combining with the known ASD and TD labels in the data set, and evaluating and predicting whether the tester is an ASD patient.
Through the index identification module, the extracted indexes and their combinations undergo cluster calculation and the index combinations that effectively distinguish ASD from TD are analyzed, yielding the key performance indexes; a classifier trained on these key performance indexes can realize more accurate diagnosis.
Preferably, the video played by the playing module comprises three fixed objects and a movable experimenter; during the experiment, the experimenter turns the head toward each fixed object in sequence and then points at it by hand, and the target areas comprise the experimenter's head and the three objects. The subject attends to the corresponding object according to the direction in which the experimenter turns the head, thereby performing a common attention action.
Preferably, the data acquisition module comprises an eye tracker, the eye tracker is connected with the playing module and is used for synchronously acquiring the eye movement information of the tested person, and the eye movement information comprises time information and eye fixation coordinate information of a certain time point.
Preferably, the autism auxiliary diagnosis device comprises a data cleaning module, and the data cleaning module is used for screening the eye movement information and processing the missing value and the abnormal value. So as to improve the precision of subsequent training.
Furthermore, in the data cleaning module, screening the eye movement information means keeping only the data of experiment segments related to common attention and deleting the data of rest segments in the middle of the experiment; missing values and outliers are processed with mean-filling methods.
Preferably, in the index extraction module, the static indexes selected are the fixation time ratio within the target area, the latency of the first fixation on the experimenter's head in the video, and the latencies of fixating the three fixed objects; the dynamic index selected is the longest common subsequence similarity.
Furthermore, in the index extraction module, the fixation time ratio within the target area is extracted as:

fixation_time_ratio = T_AOI / T_total

wherein T_AOI denotes the cumulative fixation time within the target area and T_total denotes the total experiment time.
Furthermore, in the index extraction module, the latency of the first fixation on the experimenter's head in the video, the_first_fixation, is extracted as:

the_first_fixation = T_head − T_start

wherein T_head denotes the time of the first fixation on the experimenter's head and T_start denotes the start time of the experiment.
Furthermore, in the index extraction module, the latencies of fixating the objects, the_latency_for_O1, the_latency_for_O2 and the_latency_for_O3, are extracted as:

the_latency_for_Oi = T_Oi − T_i

wherein T_Oi denotes the time at which the subject fixates the i-th object and T_i denotes the time at which the experimenter turns the head and looks at the i-th object.
Furthermore, in the index extraction module, the longest common subsequence similarity is extracted with the dynamic-programming LCS (Longest Common Subsequence) algorithm as follows:

Let X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n), wherein X denotes the subject's eye-movement trajectory sequence during the experiment, x_i denotes the area actually fixated at a given time point, Y denotes the logical sequence generated according to the length of the X sequence, and y_i denotes the area that should logically be fixated at that time point. The longest common subsequence of X and Y, LCS(X, Y), is obtained with the LCS algorithm, and the similarity is then calculated as:

LCS_similarity = len(LCS(X, Y)) / len(X)

wherein len(LCS(X, Y)) denotes the length of the longest common subsequence LCS(X, Y) and len(X) denotes the length of the X sequence, i.e. n.
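As an illustration only (not part of the claimed device), the LCS similarity defined above can be sketched in Python with the classic dynamic-programming table; the area labels in the example sequences are hypothetical:

```python
def lcs_length(x, y):
    """Classic dynamic-programming longest-common-subsequence length."""
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def lcs_similarity(x, y):
    """LCS_similarity = len(LCS(X, Y)) / len(X)."""
    return lcs_length(x, y) / len(x)

# X: areas actually fixated; Y: areas that should logically be fixated.
x = ["head", "O1", "O1", "head", "O2", "O3"]
y = ["head", "O1", "head", "O2", "head", "O3"]
print(lcs_similarity(x, y))  # 5 matching elements out of 6, i.e. 5/6
```

Dividing by len(X) normalizes away the different trajectory lengths of different subjects, as the patent notes.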
Furthermore, in the index identification module, after the index information of the latency of the first fixation on the experimenter's head in the video is obtained, decimal scaling normalization is performed first:

the_first_fixation* = the_first_fixation / 10^k

wherein k is the smallest integer such that Max(|the_first_fixation*|) < 1.
Furthermore, in the index identification module, after the index information of the latencies of fixating the three fixed objects is acquired, decimal scaling normalization is performed first:

the_latency_for_Oi* = the_latency_for_Oi / 10^k

wherein k is the smallest integer such that Max(|the_latency_for_Oi*|) < 1.
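A minimal sketch of this decimal scaling step, assuming hypothetical latency values in milliseconds:

```python
def decimal_scaling(values):
    """Decimal scaling normalization: divide by 10^k, where k is the
    smallest integer making every scaled magnitude less than 1."""
    max_abs = max(abs(v) for v in values)
    k = 0
    while max_abs / 10 ** k >= 1:
        k += 1
    return [v / 10 ** k for v in values], k

# Hypothetical first-fixation latencies in milliseconds.
latencies_ms = [120.0, 450.0, 890.0]
scaled, k = decimal_scaling(latencies_ms)
print(scaled, k)  # [0.12, 0.45, 0.89] 3
```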
Preferably, in the index identification module, a data set of a single index and a data set of a combination of indexes are respectively clustered by using a K-means clustering algorithm, and difference significance chi-square test is performed on the clustered data to identify which indexes or combination of indexes are key performance indexes.
Preferably, in the prediction module, the classifier uses a logistic binary classification model, inputs characteristic index data of the testee, and outputs a risk probability value of suffering from the ASD.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the method uses video input and an eye tracker to collect eye movement data in a standardized way, processes the eye movement data, extracts static and dynamic quantitative indexes, further uses a K-means clustering algorithm and chi-square test identification key performance indexes as the characteristics of a classification model, and finally generates a prediction result to assist the evaluation and diagnosis of the ASD.
Drawings
FIG. 1 is a schematic diagram of the structure of the apparatus of the present invention;
fig. 2 is a flow chart of a method for implementing the apparatus of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Examples
As shown in fig. 1, the autism auxiliary diagnosis device based on the common attention paradigm in this embodiment includes a playing module, a data collecting module, a data cleaning module, an index extracting module, an index identifying module and a predicting module, which are connected in sequence, and after obtaining a classifier through training, the classifier is stored in the predicting module, and in practical application, characteristic index data of a tester is input, so that a risk probability value of suffering from ASD can be obtained.
In this embodiment, the playing module includes a video playing device that loads the pre-recorded video through an external interface and plays it when an experiment or test is required, so as to stimulate and induce the subject's common attention behavior. The video includes three stationary objects, placed on the left, middle and right of a table, and an experimenter who sits behind the table. After the video begins, the experimenter says "hello", then turns the head sequentially toward each object on the table and points at it with the hand. The subject attends to the objects according to the direction in which the experimenter turns the head and points.
In this embodiment, the data acquisition module collects the subject's eye movement information and is implemented with a Tobii X3-120 eye tracker. The eye tracker is connected to the video playing device, and the coordinate mapping between the subject and the playing module is calibrated before use. During testing, while the subject watches the video, the eye tracker records the subject's eye-movement trajectory, including time information and the gaze coordinates at each time point. The subjects include individuals with ASD and individuals with typical development (TD), each with a label. In this example, 30 subjects were tested: 14 children with ASD and 16 typically developing children.
In this embodiment, the data cleansing module is configured to filter the eye movement data and process the missing values and the abnormal values. Specifically, screening the eye movement data refers to screening only data of experimental segments related to common attention, and deleting data of rest segments in the middle of an experiment, so that useless data are removed, and the calculation amount is reduced. The missing value refers to the missing of the coordinate information of a certain fixation point, because the eye movement information is recorded by the eye movement instrument every 8ms, the eye movement track can be considered to be continuous, and the missing value can be filled according to the mean value of the coordinate information before and after the missing value. The abnormal value is that the coordinate of a certain fixation point falls outside the screen or is obviously a noise point, and the abnormal value can be directly deleted or processed by adopting a mean filling method.
In this embodiment, from the cleaned data the index extraction module selects two static indexes, the fixation time ratio within the area of interest (AOI) and the latency of the first fixation on the experimenter's head in the video, and one dynamic index, the longest common subsequence similarity, as the objects of subsequent index identification. Of course, these 3 indexes are only an example; in practical applications, other static and dynamic indexes may also be extracted. The extracted indexes can be combined in various ways, including 2-dimensional, 3-dimensional and even higher-dimensional combinations; the more indexes are extracted, the more accurate the key performance indexes or combinations obtained in the subsequent cluster analysis.
In the above technical solution, the fixation time ratio within the target area is calculated according to the following formula:

fixation_time_ratio = T_AOI / T_total

wherein T_AOI denotes the cumulative fixation time within the target area and T_total denotes the total experiment time.
In the above technical solution, the latency of the first fixation on the experimenter's head in the video is calculated according to the following formula:

the_first_fixation = T_head − T_start

wherein T_head denotes the time of the first fixation on the experimenter's head and T_start denotes the start time of the experiment.
In the above technical solution, the dynamic index can be extracted from the video. The longest common subsequence similarity is calculated with the dynamic-programming LCS algorithm:

X = (x_1, x_2, ..., x_n), Y = (y_1, y_2, ..., y_n)

wherein X denotes the subject's eye-movement trajectory sequence during the experiment, x_i denotes the area actually fixated at a given time point, Y denotes the logical sequence generated according to the length of the X sequence, and y_i denotes the area that should logically be fixated at that time point; the longest common subsequence of X and Y obtained with the LCS algorithm is LCS(X, Y). To eliminate the influence of the different X sequence lengths of different subjects when computing the similarity, the longest common subsequence similarity is calculated according to the following formula:

LCS_similarity = len(LCS(X, Y)) / len(X)

wherein len(LCS(X, Y)) denotes the length of the longest common subsequence LCS(X, Y) and len(X) denotes the length of the X sequence, i.e. n.
In this embodiment, the index identification module performs decimal scaling normalization on the extracted indexes, clusters the single indexes and the data sets combined among the indexes by using a K-means clustering algorithm, and performs difference significance chi-square test on the clustered data according to the known ASD and TD tags in the data sets to identify which indexes or combinations of indexes are key performance indexes.
In the above technical solution, since the fixation time ratio and the LCS similarity are already in percentage form, only the first-fixation latency needs to be normalized, calculated according to the following formula:

the_first_fixation* = the_first_fixation / 10^k

wherein k is the smallest integer such that Max(|the_first_fixation*|) < 1.
Specifically, the data sets of single indexes and of index combinations are clustered separately with K = 2, i.e. the data are clustered into two classes. According to the ASD and TD labels, one cluster can be regarded as the suspected-ASD class and the other as the suspected-TD class. A chi-square test is then performed on the resulting independent 2×2 contingency table. If P < 0.05, whether an individual has ASD differs significantly between the two clusters produced by that index, so the index is regarded as a key performance index that can distinguish individuals with ASD from those with TD with high classification accuracy. The P value is the probability that the observed variation in the test arises from sampling error; by statistical convention, P < 0.05 is taken as significant.
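The cluster-then-test step can be illustrated with a self-contained sketch: a toy one-dimensional two-means in place of a full K-means library, and a plain Pearson chi-square statistic compared against the df = 1 critical value 3.841 at P = 0.05. The index values are hypothetical, while the 2×2 table matches the counts reported in this embodiment:

```python
def two_means_1d(values, iters=20):
    """Toy 1-D K-means with K = 2: assign each value to the nearer
    of two centers, then recompute the centers."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return [0 if abs(v - c0) <= abs(v - c1) else 1 for v in values]

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (no Yates correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    stat = 0.0
    for obs, row, col in ((a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)):
        exp = row * col / n
        stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical LCS similarities: a low-similarity and a high-similarity group.
labels = two_means_1d([0.21, 0.35, 0.30, 0.75, 0.81, 0.90])
print(labels)  # [0, 0, 0, 1, 1, 1]

# Rows = true label (ASD, TD); columns = cluster (suspected ASD, suspected TD).
stat = chi_square_2x2([[10, 4], [2, 14]])
print(round(stat, 2), stat > 3.841)  # 10.8 True -> significant at P < 0.05
```

With the embodiment's table the statistic clears the critical value comfortably, consistent with the patent's conclusion that LCS similarity is a key performance index.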
In this embodiment, the independent four-table after clustering by LCS similarity index is as follows:
TABLE 1 independent four-grid table clustered with LCS similarity index
| | Suspected ASD class | Suspected TD class |
| --- | --- | --- |
| ASD | 10 | 4 |
| TD | 2 | 14 |
As the table above shows, whether an individual falls in the suspected-ASD or suspected-TD cluster differs significantly by diagnosis: individuals with ASD are more likely to belong to the suspected-ASD class, and individuals with TD to the suspected-TD class. Finally, the fixation time ratio, the LCS similarity, the combination of fixation time ratio with first-fixation latency, and the combination of LCS similarity with first-fixation latency are identified as key performance indexes or index combinations; among them, the LCS similarity has the highest classification accuracy, 71.43% for ASD and 87.5% for TD.
In this embodiment, after the index identification module identifies the key performance index, the feature data of the key performance index is used as input, and a classifier is trained by combining known ASD and TD tags in the data set to evaluate and predict whether the tester is an ASD patient. The classifier can use a logistic binary classification model to input the data characteristics of the testers into a prediction module, and the evaluation result is displayed as the risk probability value of suffering from ASD.
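A minimal sketch of such a logistic binary classifier on a single key performance index, using plain gradient descent rather than any particular library; the training data are hypothetical:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Gradient-descent logistic regression on one feature
    (e.g. the LCS similarity); returns weight and bias."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def risk_probability(x, w, b):
    """Predicted probability that the subject belongs to the ASD class."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Hypothetical training data: low LCS similarity -> ASD (label 1).
sims = [0.2, 0.3, 0.35, 0.7, 0.8, 0.9]
labels = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(sims, labels)
print(risk_probability(0.25, w, b) > 0.5)  # True: low similarity, high ASD risk
print(risk_probability(0.85, w, b) < 0.5)  # True: high similarity, low ASD risk
```

The output of risk_probability corresponds to the risk probability value the prediction module displays.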
The method extracts and identifies performance indexes that reflect the difference in common attention ability between ASD and TD children, determines through cluster analysis which key performance indexes effectively distinguish ASD from TD, and builds a classifier on the key performance index information, thereby achieving accurate identification results.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. An autism auxiliary diagnosis device based on a common attention paradigm, comprising:
the playing module is used for playing a video recorded in advance, the video comprises a plurality of target areas, and the common attention behaviors of the tested personnel are stimulated and induced in the experiment process;
the data acquisition module is connected with the playing module and is used for acquiring eye movement information of tested persons, wherein the tested persons comprise individuals suffering from ASD and individuals suffering from TD;
the index extraction module is used for extracting a plurality of static indexes and dynamic indexes to be screened;
the index identification module is used for clustering the single indexes and the data sets combined among the indexes by adopting a clustering algorithm according to the extracted indexes, and checking the clustered data according to the known ASD and TD labels in the data sets to identify which indexes or combination of the indexes are the key performance indexes;
and the prediction module is used for taking the key performance indexes identified in the index identification module as the characteristics of the prediction classification model, training a classifier by combining with the known ASD and TD labels in the data set, and evaluating and predicting whether the tester is an ASD patient.
2. The device for assisting in diagnosing autism according to claim 1, wherein the playing module plays the video including three fixed objects and a movable experimenter, the experimenter turns the head to the fixed objects in sequence and then points to the objects with hands during the experiment, and the target area includes the head of the experimenter and the three objects.
3. The autism auxiliary diagnosis device according to claim 1, wherein the data collection module comprises an eye tracker, the eye tracker is connected to the playing module, and is configured to synchronously collect eye movement information of the person to be tested, and the eye movement information comprises time information and eye fixation coordinate information at a certain time point.
4. The device for assisting in diagnosing autism according to claim 1, wherein the device for assisting in diagnosing autism comprises a data cleansing module for screening eye movement information and processing missing values and abnormal values;
screening the eye movement information refers to screening only the data of the experimental segments related to the common attention and deleting the data of the rest segments in the middle of the experiment; missing values and outliers were processed using mean-filling methods.
5. The device for auxiliary diagnosis of autism according to claim 2, wherein in the index extraction module the static indexes selected are the fixation time ratio within the target area, the latency of the first fixation on the experimenter's head in the video, and the latencies of fixating the three fixed objects, and the dynamic index selected is the longest common subsequence similarity.
7. The device for auxiliary diagnosis of autism according to claim 5, wherein the index extraction module extracts the latency of the first fixation on the experimenter's head in the video as:

the_first_fixation = T_head − T_start

wherein T_head denotes the time of the first fixation on the experimenter's head and T_start denotes the start time of the experiment.
8. The device of claim 5, wherein the index extraction module extracts the longest-common-subsequence similarity according to the dynamic-programming LCS algorithm by:
setting X = (x1, x2, ..., xn) and Y = (y1, y2, ..., yn), wherein X represents the eye-movement track sequence of the tested individual during the experiment, xi represents the real area of eye fixation at a certain time point, Y represents a logical sequence generated according to the length of the X sequence, and yi represents the logical area at which the eye should gaze at that time point; the longest common subsequence of X and Y, LCS(X, Y), is obtained according to the LCS algorithm, and the longest-common-subsequence similarity is calculated as:
similarity = len(LCS(X, Y)) / len(X)
where len(LCS(X, Y)) denotes the length of the longest common subsequence LCS(X, Y), and len(X) denotes the length of the X sequence, i.e. n.
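The dynamic-programming LCS computation of claim 8 can be sketched as the standard O(n·m) table fill, with the similarity taken as len(LCS(X, Y)) / len(X). This is the textbook algorithm, not the patent's specific implementation.

```python
# Standard dynamic-programming LCS length, then the claim-8 similarity.
def lcs_length(X, Y):
    n, m = len(X), len(Y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1      # extend the match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def lcs_similarity(X, Y):
    """similarity = len(LCS(X, Y)) / len(X)."""
    return lcs_length(X, Y) / len(X)
```

For instance, with observed AOI sequence `["head", "toy", "toy", "head"]` and logical sequence `["head", "toy", "head", "head"]`, the LCS has length 3 and the similarity is 0.75.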
9. The device for assisting in diagnosing autism according to claim 7, wherein the index recognition module, after acquiring the index information of the delay time of the first fixation on the experimenter's head in the video, normalizes the index by decimal scaling:
the_first_fixation* = the_first_fixation / 10^k
where k is the smallest integer such that Max(|the_first_fixation*|) < 1.
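Decimal-scaling normalization as described in claim 9 can be sketched as repeatedly dividing by 10 until the largest magnitude falls below 1. Names are illustrative; the patent does not specify an implementation.

```python
# Decimal scaling: x* = x / 10^k, with k the smallest integer
# such that max(|x*|) < 1.
def decimal_scale(values):
    k = 0
    while max(abs(v) for v in values) / (10 ** k) >= 1:
        k += 1
    return [v / (10 ** k) for v in values], k
```

For example, `decimal_scale([250, 730, 41])` scales by 10^3, giving values around 0.25, 0.73, and 0.041.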
10. The autism auxiliary diagnosis device according to claim 1, wherein the index identification module is configured to cluster the data sets of individual indexes and of index combinations using the K-means clustering algorithm, and to perform a chi-square test on the clustered data to identify which indexes or index combinations are key indexes;
in the prediction module, the classifier uses a logistic-regression binary classification model: the characteristic index data of a subject are input, and the output is the risk probability value of the subject suffering from ASD.
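The prediction step of claim 10 maps the characteristic indexes to a probability via the logistic (sigmoid) function. The sketch below uses placeholder weights purely for illustration; the patent's trained model parameters are not disclosed here.

```python
# Logistic binary classifier sketch: features -> ASD risk probability.
# Weights and bias are hypothetical placeholders, not the patent's model.
import math

def asd_risk(features, weights, bias):
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid: probability in (0, 1)
```

For example, `asd_risk([0.3, 0.8], [1.2, -2.0], 0.1)` returns a probability strictly between 0 and 1; higher z (stronger weighted evidence) pushes the output toward 1.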
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911291485.3A CN111081374B (en) | 2019-12-16 | 2019-12-16 | Autism auxiliary diagnosis device based on common attention paradigm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111081374A true CN111081374A (en) | 2020-04-28 |
CN111081374B CN111081374B (en) | 2022-09-13 |
Family
ID=70314671
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911291485.3A Expired - Fee Related CN111081374B (en) | 2019-12-16 | 2019-12-16 | Autism auxiliary diagnosis device based on common attention paradigm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111081374B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105069304A (en) * | 2015-08-18 | 2015-11-18 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | Machine learning-based method for evaluating and predicting ASD |
CN107256332A (en) * | 2017-05-24 | 2017-10-17 | 上海交通大学 | The electric experimental evaluation system and method for brain based on eye movement data |
CN109620259A (en) * | 2018-12-04 | 2019-04-16 | 北京大学 | Based on eye movement technique and machine learning to the system of autism children's automatic identification |
CN109820524A (en) * | 2019-03-22 | 2019-05-31 | 电子科技大学 | The acquisition of self-closing disease eye movement characteristics and classification wearable system based on FPGA |
Non-Patent Citations (1)
Title |
---|
WU Xueyuan et al.: "Application of visual-search eye-movement patterns in identifying children with autism spectrum disorder", Chinese Journal of Child Health Care (《中国儿童保健杂志》) * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112613786A (en) * | 2020-12-30 | 2021-04-06 | 武汉理工大学 | Individualized and differentiated driving risk evaluation method based on driving safety event |
CN112613786B (en) * | 2020-12-30 | 2022-05-06 | 武汉理工大学 | Individualized and differentiated driving risk evaluation method based on driving safety event |
CN114129164A (en) * | 2021-11-05 | 2022-03-04 | 山东心法科技有限公司 | Autism spectrum disorder risk detection system, detection terminal and cloud server |
CN114129164B (en) * | 2021-11-05 | 2022-09-16 | 山东心法科技有限公司 | Autism spectrum disorder risk detection system, detection terminal and cloud server |
Also Published As
Publication number | Publication date |
---|---|
CN111081374B (en) | 2022-09-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106725532B (en) | Depression automatic evaluation system and method based on phonetic feature and machine learning | |
Tracy et al. | Investigating voice as a biomarker: deep phenotyping methods for early detection of Parkinson's disease | |
CN108742513A (en) | Patients with cerebral apoplexy rehabilitation prediction technique and system | |
WO2010005656A2 (en) | Brain condition assessment | |
CN111081374B (en) | Autism auxiliary diagnosis device based on common attention paradigm | |
CN116344050B (en) | Evaluation method based on multidimensional health management model | |
CN111833330B (en) | Intelligent lung cancer detection method and system based on fusion of image and machine olfaction | |
Meyer | Guidelines for reporting information in studies of diagnostic test accuracy: the STARD initiative | |
Ishikawa et al. | Handwriting features of multiple drawing tests for early detection of Alzheimer’s disease: a preliminary result | |
CN113539486A (en) | Health state identification system based on traditional Chinese medicine facial and tongue manifestation dynamic change | |
CN104361245B (en) | Measurement data-processing system and method | |
CN117315379B (en) | Deep learning-oriented medical image classification model fairness evaluation method and device | |
CN114847875A (en) | Method for identifying functional near infrared spectrum and infrared thermal image of hyperactivity | |
CN113796873A (en) | Wearable dynamic electrocardiosignal classification method and system | |
CN113658697A (en) | Psychological assessment system based on video fixation difference | |
CN117174291A (en) | Method and system for predicting risk of diabetes of pre-diabetes crowd | |
CN117116475A (en) | Method, system, terminal and storage medium for predicting risk of ischemic cerebral apoplexy | |
CN116484290A (en) | Depression recognition model construction method based on Stacking integration | |
Li et al. | Learning a cytometric deep phenotype embedding for automatic hematological malignancies classification | |
CN115497621A (en) | Old person cognitive status evaluation system | |
Tripoliti et al. | ProMiSi Architecture-A Tool for the Estimation of the Progression of Multiple Sclerosis Disease using MRI | |
CN113729708A (en) | Lie evaluation method based on eye movement technology | |
CN113425298A (en) | Method for analyzing depression degree by collecting data through wearable equipment | |
CN112603337A (en) | Electroencephalogram signal identification method | |
Mahesh et al. | Implementation of a Deep Neural Network-Based Framework for Actigraphy Analysis and Prediction of Schizophrenia |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20220913 |