WO2021109855A1 - Deep learning-based autism auxiliary assessment system and method - Google Patents
Deep learning-based autism auxiliary assessment system and method
Info
- Publication number
- WO2021109855A1 (PCT application PCT/CN2020/129160)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neural network
- layer
- classification result
- map
- autism
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/113—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
Definitions
- the present invention relates to the technical field of autism assessment, in particular to an auxiliary assessment system and method for autism based on deep learning.
- ASD: Autism Spectrum Disorder
- the diagnostic criteria for children with ASD mainly include the Diagnostic and Statistical Manual of Mental Disorders of the American Psychiatric Association and the diagnostic criteria for mental and behavioral disorders of the World Health Organization.
- current mainstream assessment and prediction methods are based on these diagnostic criteria and target three aspects: language communication impairments, social communication impairments, and repetitive stereotyped behaviors. Using the mainstream international diagnostic tools (the Autism Diagnostic Observation Schedule and the Autism Diagnostic Interview), a diagnosis of autism is made by combining questionnaires and interviews with the child's growth and development history, medical history, and mental examination. When direct or indirect observation of behavioral signs and symptoms is no longer applicable, it is of special significance to analyze, from the perspective of social cognition, the visual characteristics of ASD centered on social impairment, which is the core specific symptom of ASD.
- existing approaches have, for example, had autism testees watch pre-made videos while recording the corresponding facial images and the changes in facial temperature, heart rate, and breathing; used a visual camera to help judge the subject's response to language; collected, during tasks designed to elicit laughter, multi-channel audio-video multi-modal data from multiple RGB-D camera perspectives covering subjects, evaluators, and props; extracted physiological signals of the subjects in different emotional states, such as electroencephalographic, electromyographic, and electrooculographic signals, galvanic skin response signals, body temperature data, and respiratory rate; and even directly extracted expression features from the subject's facial responses to assist in judgment.
- a variety of available multi-modal signals and features have been tried to assist in the diagnosis of children with autism.
- in existing work, ASD eye movement data are used for classification to assist the diagnosis of autism: subjects typically watch static face pictures, or static pictures in visual-following tasks, while a desktop eye tracker or a glasses-type eye tracker acquires the observer's eye movement or gaze data.
- the resulting features are then classified with machine learning classification models such as a support vector machine (SVM) or through BP (back-propagation) neural network training.
- Face recognition and emotion perception impairments are the core problems underlying social deficits in children with ASD, and the facial emotion recognition deficits common in patients with autism spectrum disorder are the core cause of their social and communication disorders.
- the eye movement patterns of patients with autism spectrum disorder are significantly different from those of the typically developing population: patients with autism spectrum disorder pay less attention to the eye area, and avoidance of direct gaze may be one reason why patients with autism spectrum disorder develop facial emotion recognition deficits.
- current research on facial emotion recognition impairment in patients with autism spectrum disorder mainly relies on eye tracking technology. Studies have shown that children with ASD exhibit eye movement patterns in face recognition and emotion perception that differ from those of typically developing children, and this feature is present in ASD children with mild to severe symptoms.
- the specific face processing mode of patients with autism spectrum disorder, namely the way they select facial stimuli and extract information from them, is sub-optimal for the task of emotion recognition.
- the existing technical solutions mainly have the following problems:
- 1) the stimulus material of eye-movement-based solutions is too stereotyped and out of touch with real-life scenes: the stimulus materials provided are all static pictures, which cannot truly evaluate the emotion recognition ability of ASD children in real social communication and interaction;
- 2) most current autism spectrum disorder scales have applicable age ranges, so autistic patients often face questions that are too difficult or an age that falls outside the scope of the scale.
- the purpose of the present invention is to overcome the above-mentioned shortcomings of the prior art, and provide a deep learning-based auxiliary assessment system and method for autism, which combines eye movement technology and deep learning to predict and assess autism.
- an auxiliary evaluation system for autism based on deep learning includes a data acquisition and feature extraction unit, a first neural network, a second neural network, a third neural network, and a result output unit.
- the data acquisition and feature extraction unit and the result output unit each have a communication connection with the first neural network, the second neural network, and the third neural network, wherein: the data collection and feature extraction unit is used to collect eye movement data of the subject watching the video and to obtain the corresponding heat map, focus map, and scan path map; the heat map is used to characterize the dynamic changes of the fixation point over time and position, the focus map is used to characterize the dynamic changes of fixation position and time, and the scan path map continuously displays, point by point, the fixation point positions and the duration of each fixation; the first neural network is used to take the heat map as input and obtain a first classification result; the second neural network is used to take the focus map as input and obtain a second classification result; the third neural network is used to take the scan path map as input and obtain a third classification result; the result output unit aggregates the first classification result, the second classification result, and the third classification result to obtain the subject's autism detection result.
- the first neural network, the second neural network, and the third neural network have the same or different structures.
- the first neural network, the second neural network, and the third neural network have the same structure, including an input layer, a first convolutional layer, a second pooling layer, a third convolutional layer, a fourth pooling layer, a fifth convolutional layer, a sixth fully connected layer, a seventh fully connected layer, and an output layer.
- for the first neural network, the second neural network, and the third neural network, the activation function of the first convolutional layer, the second pooling layer, the third convolutional layer, the fourth pooling layer, the fifth convolutional layer, and the sixth fully connected layer is the ReLU nonlinear activation function
- the activation function of the seventh fully connected layer is the Softmax activation function
- the number of neurons in the output layer is 4, corresponding to the four categories of healthy, mild autism symptoms, moderate autism symptoms, and severe autism symptoms
- the result output unit uses a simple voting method to combine the first classification result, the second classification result, and the third classification result to give a final prediction result.
- an eye tracker is used to collect eye movement data for each subject in a non-invasive manner.
- the heat map displays the dynamic changes of the time and position of the fixation point with warm chromaticity
- the focus map displays the dynamic changes of the fixation position and time with the brightness
- an auxiliary evaluation method for autism based on deep learning includes the following steps:
- the heat map is used to characterize the dynamic changes of the time and position of the fixation point
- the focus map is used to characterize the dynamic changes of fixation position and time, and the scan path map continuously displays, point by point, the fixation point positions and the duration of each fixation;
- the heat map, the focus map, and the scan path map are respectively input to the trained first neural network, second neural network, and third neural network to obtain the first classification result, the second classification result, and the third classification result, respectively.
- the first classification result, the second classification result, and the third classification result are aggregated to obtain the subject's autism detection result.
- compared with the prior art, the present invention has the following advantages: starting from the facial emotion recognition deficit common in ASD patients, and aiming at the research goal of ASD eye movement screening, it adopts dynamic scenes with everyday verbal expression as the stimulus material rather than static picture stimuli.
- through the stimulus design of dynamic videos, the subject's emotion recognition responses and eye movement data in real social interaction are extracted, thereby improving the reliability of the auxiliary diagnosis; because the eye movement technology is non-invasive, the subject does not need to wear any device, and the stimulus material can be appropriately adjusted according to the subject's age, making the approach suitable for autistic patients of different ages and developmental levels, especially children from 6 to 18 months of age.
- Fig. 1 is a schematic diagram of a deep learning-based autism auxiliary assessment system according to an embodiment of the present invention
- Fig. 2 is a structure diagram of a neural network according to an embodiment of the present invention.
- Facial emotion recognition deficits are common in patients with autism spectrum disorder and are the core cause of their social and communication disorders.
- current research on facial emotion recognition impairment in patients with autism spectrum disorder mainly involves eye tracking technology. Under social and facial stimulation conditions, the eye movement patterns of patients with autism spectrum disorder are significantly different from those of the typically developing population: patients pay less attention to the eye area, and avoidance of direct gaze at faces may be one reason why they develop emotion recognition deficits. There is evidence that the high levels of arousal associated with direct gaze in patients with autism spectrum disorder are related to avoidance of direct gaze and to more severe impairments of social skills.
- the specific face processing mode of patients with autism spectrum disorder, namely the way they select facial stimuli and extract information from them, is sub-optimal for the task of emotion recognition.
- based on the association between eye movement data characteristics in face recognition and emotion perception tasks and autism spectrum disorder, the present invention provides an efficient, convenient, non-invasive, and low-cost eye-movement-based ASD auxiliary diagnosis method.
- the embodiment of the present invention includes the following steps: during data collection, the subject is guided to complete a video viewing task and asked to focus on what he or she hears and sees; while the subject watches the short videos, an eye tracker records his or her eye movement data; the obtained eye movement image data are then preprocessed; next, a convolutional neural network (CNN) from the deep learning toolbox is used to perform automatic feature extraction, and neural network classifiers are obtained through model training, finally achieving auxiliary diagnosis of ASD.
- the present invention combines eye tracking data with a deep learning algorithm to effectively extract the specific facial processing mode of autistic patients, thereby realizing auxiliary diagnosis for patients with mild to moderate autism.
- as shown in Fig. 1, the deep learning-based autism auxiliary assessment system includes a data collection and feature extraction unit 110, neural networks for classification training, and a result output unit 120, which are connected in sequence; the figure shows neural network 1, neural network 2, and neural network 3.
- the present invention mainly includes three processes, namely data collection and feature extraction, classifier training and result prediction, which will be introduced in detail below.
- the present invention uses an eye tracker that is convenient for young children and does not require a high level of professional expertise from doctors to collect eye movement data, with dynamic daily-life scenes as the stimulus material for classification.
- the dynamic video emotion stimulus is selected in the data collection process of the present invention.
- compared with static image stimuli, dynamic video stimuli with everyday verbal expression are closer to daily-life scenes and better able to truly assess the emotion recognition ability of ASD children, because the communication that occurs in real social interaction cannot always be well perceived; moreover, the stimulus materials can be dynamically adjusted according to the subjects' age groups so as to be suitable for children of different ages.
- the stimulus material in the data collection process is drawn from the Chinese Natural Emotional Audio-Visual Database (CHEAVD), which aims to provide a Chinese resource for the study of multimodal and multimedia interaction; CHEAVD provides videos composed of 20 movie clips.
- six typical and relatively complete emotional stimulation videos are selected for analysis. They are composed of three positive emotional videos and three negative emotional videos. The duration of the video is between 3 seconds and 9 seconds.
- the RED250 eye tracker from SMI of Germany was used to collect the eye movement data of each subject in a non-invasive manner.
- the device has been integrated into a 22-inch widescreen display panel with a resolution of 1280*1024 pixels.
- the sampling frequency is 60 Hz, and the accuracy is 0.4 degrees.
- the subject’s head movement freedom is 40*20 cm at a distance of 70 cm.
- the experiment design software Experiment Center of SMI company of Germany was used for online eye movement data recording, and the SMI data analysis software BeGaze was used for offline data analysis.
- before formally conducting the experimental task, the subjects are first trained to understand the task in this experiment. After they understand the task, a Samsung tablet ST800 is used to conduct a pre-experiment to ensure that the experimenter is familiar with the entire experimental procedure. The participant is then asked to sit about 60-80 cm from the test screen until the eye tracker can stably detect the participant's pupils. During the experiment, participants must not be interfered with, to avoid any attentional bias.
- a five-point eye movement calibration is performed first: the subject is asked to look, one by one, at the four corners of the screen and at a calibration point in the middle. The calibration is accepted only when the average error over all five calibration points does not exceed 1 degree of visual angle.
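- a minimal sketch of this acceptance criterion is given below; the per-point angular errors are assumed to be reported by the eye tracker, and the function name is a hypothetical illustration rather than part of the described system.

```python
# Hypothetical sketch: accept a five-point calibration only if the mean
# angular error across all five points is within 1 degree of visual angle.
def calibration_accepted(point_errors_deg, max_mean_error_deg=1.0):
    """point_errors_deg: angular errors (degrees) for the four corner points
    and the centre point, as reported by the eye tracker."""
    errors = list(point_errors_deg)
    if len(errors) != 5:
        raise ValueError("expected errors for exactly five calibration points")
    return sum(errors) / len(errors) <= max_mean_error_deg

# Example: mean error 0.78 degrees -> calibration accepted.
print(calibration_accepted([0.6, 0.8, 0.5, 0.9, 1.1]))  # True
```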
- the 20 test videos were played in random order. Participants were allowed to watch a video multiple times if they did not understand its content.
- the data analysis software SMI BeGaze processed the output to obtain the heat map, focus map, and scan path of each video clip that the subjects watched.
- the heat map, for example, displays the dynamic change of fixation time and position in warm hues, i.e. the closer a region's color is to the right side of the color bar in the data analysis software, the longer that region was fixated.
- the scan path map, for example, continuously displays, point by point, information such as the location of each fixation point and its fixation duration.
- the focus map, for example, displays the dynamic changes of gaze position and time through brightness.
- neural network training and learning are used to obtain a neural network classifier for predicting and evaluating autism.
- the embodiment of the present invention adopts a convolutional neural network and uses a design similar to the LeNet structure.
- as shown in Fig. 2, the structure of the entire neural network includes an input layer (input), three convolutional layers (conv1, conv2, conv3), two max pooling layers (max pooling1, max pooling2), two fully connected layers (fc1 and fc2), and an output layer (output).
- the first is the data input layer.
- the input images are heat maps, focus maps, and scan path maps obtained by analysis software of eye movement data, and the size of the input images is uniformly normalized to 1024*1024.
- the first layer is the convolutional layer
- the second layer is the maximum pooling layer
- the third layer is the convolutional layer
- the fourth layer is the maximum pooling layer
- the fifth layer is the convolutional layer
- the sixth and seventh layers are fully connected layers
- the eighth layer is the output layer.
- the activation function of the first six layers is the ReLU nonlinear activation function
- the activation function of the seventh layer is the Softmax activation function.
- the number of neurons in the output layer can be set to 4, corresponding to the four categories of healthy, mild autism symptoms, moderate autism symptoms, and severe autism symptoms.
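- a minimal, illustrative sketch of this LeNet-like structure is shown below, written in PyTorch as an implementation assumption. The layer sequence (three convolutions, two max pooling layers, two fully connected layers, four outputs), the ReLU and Softmax activations, and the 1024*1024 input size follow the description above, but channel counts, kernel sizes, and strides are not specified in the patent and are assumptions chosen only to keep the example runnable.

```python
# Sketch (not the patent's exact configuration): conv1 -> maxpool1 -> conv2
# -> maxpool2 -> conv3 -> fc1 -> fc2, ReLU on the first six layers, Softmax
# applied to the 4-class output.
import torch
import torch.nn as nn

class EyeMapCNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=7, stride=4, padding=3),   # conv1
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                        # max pooling1
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),  # conv2
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                        # max pooling2
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # conv3
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # fc1 (16x16 feature maps from a 1024x1024 input)
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),   # fc2
        )

    def forward(self, x):  # x: (N, 3, 1024, 1024) heat / focus / scan path map
        return self.classifier(self.features(x))  # raw logits

    def predict_proba(self, x):
        # Softmax on the final fully connected layer, as described in the text.
        return torch.softmax(self.forward(x), dim=1)

# Quick shape check with a dummy 1024x1024 input:
# EyeMapCNN()(torch.zeros(1, 3, 1024, 1024)).shape -> torch.Size([1, 4])
```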
- the trained neural networks are used for the auxiliary diagnosis of ASD. Specifically, the numbers of neurons in the input layer, hidden layers, and output layer and the sizes of the convolution kernels are set, and the weight matrices are initialized randomly, including the weight matrices from the input layer to the hidden layers, between hidden layers, and from the hidden layers to the output layer. The heat map, focus map, and scan path map of each video clip's eye movement data are then fed as input to the corresponding neural networks.
- the neural network is trained according to the forward and back propagation algorithms and the gradient descent method.
- the loss function used to optimize the weight matrices is the cross-entropy loss function.
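- as a companion to the network sketch above, the following is a hedged training-loop sketch following the forward propagation, back propagation, and gradient descent procedure with a cross-entropy loss described here; the optimizer choice, learning rate, and epoch count are assumptions, and `train_loader` is assumed to yield batches of (map, label) pairs for one of the three map types.

```python
import torch
import torch.nn as nn

def train_classifier(model, train_loader, epochs=30, lr=1e-3, device="cpu"):
    """Train one map-specific classifier (heat map, focus map, or scan path map)."""
    model.to(device)
    criterion = nn.CrossEntropyLoss()   # cross-entropy loss from the text
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for epoch in range(epochs):
        model.train()
        total_loss = 0.0
        for maps, labels in train_loader:          # maps: (N, 3, 1024, 1024)
            maps, labels = maps.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(maps), labels)  # forward propagation
            loss.backward()                        # back propagation
            optimizer.step()                       # gradient descent update
            total_loss += loss.item()
        print(f"epoch {epoch + 1}: mean loss {total_loss / max(len(train_loader), 1):.4f}")
    return model
```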
- a cross-validation method is used, and the classification performance is evaluated using the confusion matrix of the classification results, the receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC).
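- a minimal evaluation sketch for one cross-validation fold is given below. scikit-learn is an implementation assumption (the patent names only the confusion matrix, ROC curve, and AUC), and the one-vs-rest macro averaging used for the four-class AUC is likewise an assumption.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate_fold(y_true, y_prob):
    """y_true: (N,) labels in {0: healthy, 1: mild, 2: moderate, 3: severe};
    y_prob: (N, 4) softmax outputs of one classifier on the held-out fold."""
    y_pred = np.argmax(y_prob, axis=1)
    cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2, 3])
    auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
    return cm, auc
```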
- the three convolutional neural network classifiers take the heat map, focus map, and scan path map of the eye tracking data as input, and their outputs correspond to the four categories of healthy, mild autism symptoms, moderate autism symptoms, and severe autism symptoms. Then, according to the predicted outputs of the three classifiers, a simple voting method is used to combine them and give the final prediction result, which is still one of the above four categories.
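- below is an illustrative sketch of this simple voting step; the patent names only "simple voting", so the majority rule and the fall-back used when all three classifiers disagree are assumptions.

```python
from collections import Counter

# Class labels: 0 healthy, 1 mild, 2 moderate, 3 severe autism symptoms.
def simple_vote(pred_heat, pred_focus, pred_scanpath):
    votes = Counter([pred_heat, pred_focus, pred_scanpath])
    label, count = votes.most_common(1)[0]
    # Majority wins; if all three disagree, fall back to the heat map
    # classifier's prediction (an assumption, not stated in the patent).
    return label if count >= 2 else pred_heat

print(simple_vote(1, 1, 3))  # two networks predict "mild" -> final result 1 (mild)
```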
- the neural networks involved in the embodiments of the present invention can have the same or different structures; for example, more convolutional layers and fully connected layers can be used, either average pooling or max pooling can be used, and the classification results are not limited to the above four categories.
- the neural network and the result output unit can be implemented by software or hardware, such as a hardware processor or logic circuit.
- an AOI (area of interest) analysis measures the areas of interest that the eyes look at, usually including the eyes, nose, and mouth, and then counts the frequency and duration of fixations on these areas.
- compared with typically developing children, children with ASD show a relatively shorter fixation time and a smaller number of fixations within these AOIs; more specifically, children with ASD spend far more time looking at bodies and objects than looking at the eyes.
- the heat map analysis and focus map analysis of the eye movement data illustrate the same point.
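- an illustrative sketch of the AOI statistics mentioned above is shown below: it counts fixations and total fixation time falling inside rectangular areas of interest (eyes, nose, mouth). The AOI rectangles, screen coordinates, and fixation record format are assumptions for illustration, not values from the patent.

```python
def aoi_statistics(fixations, aois):
    """fixations: list of (x, y, duration_ms); aois: dict name -> (x0, y0, x1, y1)."""
    stats = {name: {"count": 0, "duration_ms": 0.0} for name in aois}
    for x, y, duration in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                stats[name]["count"] += 1
                stats[name]["duration_ms"] += duration
    return stats

# Example with hypothetical screen coordinates (pixels):
aois = {"eyes": (400, 200, 880, 320), "nose": (560, 320, 720, 440),
        "mouth": (520, 440, 760, 540)}
print(aoi_statistics([(600, 250, 180), (650, 480, 220)], aois))
# -> eyes: 1 fixation / 180 ms, nose: 0, mouth: 1 fixation / 220 ms
```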
- the present invention performs auxiliary diagnosis of ASD based on different eye movement patterns.
- aiming at the shortcomings of current ASD auxiliary diagnosis technology and the main problems in ASD screening assessment and clinical application, the present invention establishes a technical solution for the auxiliary diagnosis of ASD that is efficient, convenient, easy to implement and popularize, and suitable for young children.
- the present invention may be a system, a method and/or a computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present invention.
- the computer-readable storage medium may be a tangible device that holds and stores instructions used by the instruction execution device.
- the computer-readable storage medium may include, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing, for example.
- Computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, and mechanical encoding devices such as punch cards with instructions stored thereon.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- Surgery (AREA)
- Medical Informatics (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Psychiatry (AREA)
- Artificial Intelligence (AREA)
- Pathology (AREA)
- Child & Adolescent Psychology (AREA)
- Ophthalmology & Optometry (AREA)
- Hospice & Palliative Care (AREA)
- Educational Technology (AREA)
- Developmental Disabilities (AREA)
- Human Computer Interaction (AREA)
- Psychology (AREA)
- Social Psychology (AREA)
- Evolutionary Computation (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physiology (AREA)
- Signal Processing (AREA)
- Eye Examination Apparatus (AREA)
Abstract
Description
Claims (8)
- 1. A deep learning-based autism auxiliary assessment system, characterized in that it comprises a data acquisition and feature extraction unit, a first neural network, a second neural network, a third neural network, and a result output unit, the data acquisition and feature extraction unit and the result output unit each having a communication connection with the first neural network, the second neural network, and the third neural network, wherein: the data acquisition and feature extraction unit is used to collect eye movement data of a subject watching a dynamic video and to obtain a corresponding heat map, focus map, and scan path map, the heat map being used to characterize the dynamic changes of the fixation point over time and position, the focus map being used to characterize the dynamic changes of fixation position and time, and the scan path map continuously displaying, point by point, the fixation point positions and the duration of each fixation; the first neural network is used to take the heat map as input and obtain a first classification result; the second neural network is used to take the focus map as input and obtain a second classification result; the third neural network is used to take the scan path map as input and obtain a third classification result; and the result output unit aggregates the first classification result, the second classification result, and the third classification result to obtain the subject's autism detection result.
- 2. The system according to claim 1, characterized in that the first neural network, the second neural network, and the third neural network have the same or different structures.
- 3. The system according to claim 1, characterized in that the first neural network, the second neural network, and the third neural network have the same structure, comprising an input layer, a first convolutional layer, a second pooling layer, a third convolutional layer, a fourth pooling layer, a fifth convolutional layer, a sixth fully connected layer, a seventh fully connected layer, and an output layer.
- 4. The system according to claim 3, characterized in that, for the first neural network, the second neural network, and the third neural network, the activation function of the first convolutional layer, the second pooling layer, the third convolutional layer, the fourth pooling layer, the fifth convolutional layer, and the sixth fully connected layer is the ReLU nonlinear activation function, the activation function of the seventh fully connected layer is the Softmax activation function, and the number of neurons in the output layer is 4, corresponding to the four categories of healthy, mild autism symptoms, moderate autism symptoms, and severe autism symptoms.
- 5. The system according to claim 1, characterized in that the result output unit uses a simple voting method to combine the first classification result, the second classification result, and the third classification result to give a final prediction result.
- 6. The system according to claim 1, characterized in that an eye tracker is used to collect the eye movement data of each subject in a non-invasive manner.
- 7. The system according to claim 1, characterized in that the heat map displays the dynamic changes of fixation time and position in warm hues, and the focus map displays the dynamic changes of fixation position and time through brightness.
- 8. A deep learning-based autism auxiliary assessment method, comprising the following steps: collecting eye movement data of a subject watching a dynamic video to obtain a corresponding heat map, focus map, and scan path map, the heat map being used to characterize the dynamic changes of the fixation point over time and position, the focus map being used to characterize the dynamic changes of fixation position and time, and the scan path map continuously displaying, point by point, the fixation point positions and the duration of each fixation; inputting the heat map, the focus map, and the scan path map respectively into a trained first neural network, second neural network, and third neural network to obtain a first classification result, a second classification result, and a third classification result, respectively; and aggregating the first classification result, the second classification result, and the third classification result to obtain the subject's autism detection result.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911228792.7 | 2019-12-04 | ||
CN201911228792.7A CN112890815A (zh) | 2019-12-04 | 2019-12-04 | 一种基于深度学习的孤独症辅助评估系统和方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021109855A1 true WO2021109855A1 (zh) | 2021-06-10 |
Family
ID=76111103
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/129160 WO2021109855A1 (zh) | 2019-12-04 | 2020-11-16 | 一种基于深度学习的孤独症辅助评估系统和方法 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112890815A (zh) |
WO (1) | WO2021109855A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113658697B (zh) * | 2021-07-29 | 2023-01-31 | 北京科技大学 | 一种基于视频注视差异的心理测评系统 |
CN113784215B (zh) * | 2021-09-08 | 2023-07-25 | 天津智融创新科技发展有限公司 | 基于智能电视的性格特征的检测方法和装置 |
CN113946217B (zh) * | 2021-10-20 | 2022-04-22 | 北京科技大学 | 一种肠镜操作技能智能辅助评估系统 |
CN115990016B (zh) * | 2022-12-02 | 2024-04-19 | 天津大学 | 一种基于眼动特征的孤独特质程度检测装置 |
-
2019
- 2019-12-04 CN CN201911228792.7A patent/CN112890815A/zh active Pending
-
2020
- 2020-11-16 WO PCT/CN2020/129160 patent/WO2021109855A1/zh active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140364761A1 (en) * | 2012-01-05 | 2014-12-11 | University Court Pf The University Of Aberdeen | An apparatus and method for psychiatric evaluation |
CN107256332A (zh) * | 2017-05-24 | 2017-10-17 | 上海交通大学 | 基于眼动数据的脑电实验评估系统及方法 |
CN109157231A (zh) * | 2018-10-24 | 2019-01-08 | 阿呆科技(北京)有限公司 | 基于情绪刺激任务的便携式多通道抑郁倾向评估系统 |
CN109620259A (zh) * | 2018-12-04 | 2019-04-16 | 北京大学 | 基于眼动技术与机器学习对孤独症儿童自动识别的系统 |
CN109508755A (zh) * | 2019-01-22 | 2019-03-22 | 中国电子科技集团公司第五十四研究所 | 一种基于图像认知的心理测评方法 |
CN109820524A (zh) * | 2019-03-22 | 2019-05-31 | 电子科技大学 | 基于fpga的自闭症眼动特征采集与分类可穿戴系统 |
CN211862821U (zh) * | 2019-12-04 | 2020-11-06 | 中国科学院深圳先进技术研究院 | 一种基于深度学习的孤独症辅助评估系统 |
Also Published As
Publication number | Publication date |
---|---|
CN112890815A (zh) | 2021-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zunino et al. | Video gesture analysis for autism spectrum disorder detection | |
WO2021109855A1 (zh) | 一种基于深度学习的孤独症辅助评估系统和方法 | |
Vargas-Cuentas et al. | Developing an eye-tracking algorithm as a potential tool for early diagnosis of autism spectrum disorder in children | |
Chen et al. | Strabismus Recognition Using Eye‐Tracking Data and Convolutional Neural Networks | |
US8388529B2 (en) | Differential diagnosis of neuropsychiatric conditions | |
CN111326253A (zh) | 自闭症谱系障碍患者的多模态情感认知能力的评估方法 | |
He et al. | The characteristics of intelligence profile and eye gaze in facial emotion recognition in mild and moderate preschoolers with autism spectrum disorder | |
Heaton et al. | Reduced visual exploration when viewing photographic scenes in individuals with autism spectrum disorder. | |
CN211862821U (zh) | 一种基于深度学习的孤独症辅助评估系统 | |
PATEL | Methods in the study of clinical reasoning | |
Melo et al. | How doctors generate diagnostic hypotheses: a study of radiological diagnosis with functional magnetic resonance imaging | |
Tan et al. | Virtual classroom: An ADHD assessment and diagnosis system based on virtual reality | |
Fabiano et al. | Gaze-based classification of autism spectrum disorder | |
CN115517681A (zh) | Md患者情绪波动监测和情感障碍状态评估的方法和系统 | |
Zhang et al. | A human-in-the-loop deep learning paradigm for synergic visual evaluation in children | |
Huang et al. | Automatic recognition of schizophrenia from facial videos using 3D convolutional neural network | |
Cilia et al. | Eye-tracking dataset to support the research on autism spectrum disorder | |
Xia et al. | Dynamic viewing pattern analysis: towards large-scale screening of children with ASD in remote areas | |
Cheng et al. | Computer-aided autism spectrum disorder diagnosis with behavior signal processing | |
Zuo et al. | Deep Learning-based Eye-Tracking Analysis for Diagnosis of Alzheimer's Disease Using 3D Comprehensive Visual Stimuli | |
Fernández et al. | A convolutional neural network for gaze preference detection: A potential tool for diagnostics of autism spectrum disorder in children | |
Chen | Cognitive load measurement from eye activity: acquisition, efficacy, and real-time system design | |
Jeyarani et al. | Eye Tracking Biomarkers for Autism Spectrum Disorder Detection using Machine Learning and Deep Learning Techniques | |
Guo et al. | Design and application of facial expression analysis system in empathy ability of children with autism spectrum disorder | |
Zhou et al. | Gaze Patterns in Children with Autism Spectrum Disorder to Emotional Faces: Scanpath and Similarity |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20895924 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20895924 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 12.01.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20895924 Country of ref document: EP Kind code of ref document: A1 |