CN111128368A - Automatic autism spectrum disorder detection method and device based on video expression behavior analysis - Google Patents

Automatic autism spectrum disorder detection method and device based on video expression behavior analysis Download PDF

Info

Publication number
CN111128368A
CN111128368A (application CN201911053897.3A)
Authority
CN
China
Prior art keywords
face
expression
video
infant
autism spectrum
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911053897.3A
Other languages
Chinese (zh)
Other versions
CN111128368B (en)
Inventor
郑文明
唐传高
柯晓燕
仇娜娜
闫思蒙
宗源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201911053897.3A
Publication of CN111128368A
Application granted
Publication of CN111128368B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an automatic autism spectrum disorder detection method and device based on video expression behavior analysis, comprising the following steps. S1, video acquisition: acquire video data of the tested infant interacting with a caregiver during the still-face phase of the still-face paradigm. S2, face preprocessing: detect the infant's face position in the video, crop the face region, and correct and normalize faces in different poses. S3, expression behavior index feature extraction: extract texture features from the preprocessed face images, classify them to obtain face motion unit predictions, treat the predictions as an input signal, and compute statistics over the time dimension according to a window length to obtain segmented expression statistical features, which serve as expression behavior index features. S4, high-risk autism spectrum disorder risk prediction: use the expression behavior features to predict whether the infant is at high risk of autism spectrum disorder.

Description

Automatic autism spectrum disorder detection method and device based on video expression behavior analysis
Technical Field
The invention relates to an automatic screening system for high-risk autism spectrum disorder in infants within 24 months of age, and in particular to an automatic autism spectrum disorder detection device based on video expression behavior analysis.
Background
Autism spectrum disorder is a neurodevelopmental disorder characterized by social impairments, verbal and non-verbal communication impairments, and restricted, repetitive behaviors. Worldwide, an estimated 24.8 million people have autism. At present, physicians generally cannot determine whether a child has autism spectrum disorder until the child is 3-4 years old; for early intervention in infants with autism spectrum disorder, however, such a diagnosis comes too late. Large-scale automatic early screening of high-risk autism spectrum disorder in infants under 24 months of age is therefore an important problem that must be solved before early intervention is possible.
Currently, traditional screening of children at high risk of autism spectrum disorder relies on questionnaires and manual interviews by experienced physicians, who combine multiple autism-related assessment scales to judge whether a child is at high risk of autism spectrum disorder. Existing manual assessment scales for high-risk autism spectrum disorder include the infant communication and symbolic behavior development scale, the childhood autism assessment scale, and the autism children's behavior assessment scale. The existing manual assessment approach has the following technical bottlenecks:
(1) behavior observation of infants with manual scales depends on human evaluation, which is time-consuming and influenced by the evaluator's subjective judgment;
(2) the accuracy of manual scale assessment depends on the physician's experience, introducing diagnostic bias;
(3) the number of qualified physicians is limited, so the current assessment approach cannot scale to large-scale early screening of high-risk autism spectrum disorder in infants within 24 months of age.
Disclosure of Invention
To address this situation, the invention provides a system that automatically and objectively quantifies infant facial expression behavior under the still-face paradigm and automatically outputs an identification result for high-risk autism spectrum disorder in infants within 24 months of age.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
An automatic autism spectrum disorder detection method based on video expression behavior analysis, used for automatic screening of high-risk autism spectrum disorder within 24 months of age, comprising the following steps:
S1, video acquisition: acquiring video data of the infant's expression behavior during face-to-face interaction between the infant and a caregiver in the still-face phase of the still-face paradigm;
S2, face preprocessing: detecting the face position of the tested infant in the video, cropping the face region, and correcting faces in different poses;
S3, expression behavior index feature extraction: extracting texture features from the preprocessed face images, classifying the texture features to obtain face motion unit recognition results, treating these results as an input signal, and computing statistics over the time dimension according to a window length to obtain segmented expression statistical features, which are used as behavior index features; this step specifically comprises:
S3.1, extracting histogram of oriented gradients features from the face image;
S3.2, face motion unit classification: recognizing 18 classes of face motion unit results from the face;
S3.3, by analyzing the 18 face motion unit results for each frame of the video, expressing the result of each frame as an expression recognition result vector X_i = [y_i1, y_i2, …, y_ij, …, y_i18]^T, where i denotes the i-th frame of the video, y_ij ∈ {0, 1} is the recognition result of the j-th class of face motion unit in the i-th frame, and the video duration T is 60 seconds, i.e. 1500 frames in total;
S3.4, because the emotional response of an autism spectrum disorder infant to the caregiver's sudden expressionless face and absence of limb responses differs from that of a typically developing infant, the video of duration T is divided into n video segments according to a given window length, and the difference in expression results between the two infant groups is examined at the level of each segment. A statistical difference between the groups is found in the per-segment recognition results of the face motion unit AU6, which is related to positive emotion, and the face motion unit AU17, which is related to negative emotion: the group mean of the total occurrence duration of AU6 in the normally developing group is lower than that in the high-risk autism spectrum disorder group, while the group mean of the total occurrence duration of AU17 in the normally developing group is higher than that in the high-risk group. Based on this difference in the occurrence durations of these two AU results, the high-risk autism spectrum disorder group and the normally developing group can be distinguished;
S3.5, each segment has length S; the per-segment statistic of the j-th face motion unit result AU within the k-th window and the behavior feature matrix of the whole video are defined by formulas that appear as images in the original patent; concatenating the rows of the behavior feature matrix yields a vector V ∈ R^(1×18n).
S4, risk prediction of high-risk autism spectrum disorder: on the basis of the segment-statistical expression behavior index features, a support vector machine classifier is trained on the V vectors. A leave-one-subject-out cross-validation strategy is adopted during training, i.e. in each fold one subject's data is held out for testing and the remaining subjects' data are used for training. A polynomial kernel is used in the experiments:
K(x_i, x_j) = (γ·x_i^T x_j + r)^d,
where γ, r, and d are kernel parameters.
The optimal classifier parameters are determined through the cross-validation strategy. On this basis, all data from the two infant groups collected in this study are used to train a high-risk autism spectrum disorder detection model. For a new infant, an expression video is collected during the still-face phase of the still-face paradigm while the caregiver keeps a still face, the 18 classes of face motion unit results are recognized for each frame of the video, the segmented expression statistical features are computed as in steps S3.4 and S3.5, and the test features are finally input to the trained model to predict the risk of high-risk autism spectrum disorder.
S5, screening report: reporting the infant's expression recognition results over the 1-minute period and the risk coefficient for high-risk autism spectrum disorder.
The step S1 of video capture specifically includes:
S1.1, the tested infant is within 24 months of age and sits in an infant chair; the caregiver and the infant sit facing each other, with the caregiver's head at approximately the same height as the infant's head;
S1.2, the interaction has 2 stages: in the first 2 minutes, the caregiver engages the child through speech, facial expression, body movement and the like; in the later stage, the caregiver keeps an expressionless face and makes no speech or body movement, while the child's facial expressions and behavior in this stage are recorded for 60 seconds at a sampling frame rate of 25 frames/second.
The step S2 of face preprocessing specifically includes:
S2.1, performing face detection on each frame of the expression video;
S2.2, cropping the face image with the detected candidate box to obtain a face region with the background removed, and correcting the face image, specifically by locating facial feature points, correcting the face, and normalizing the face size to 112 × 112 pixels.
The 18 face motion unit results include AU1, AU2, AU4, AU5, AU6, AU7, AU9, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26, AU28, and AU45.
The invention further discloses an automatic autism spectrum disorder detection device based on video expression behavior analysis, comprising:
a video acquisition module, which acquires video data of the infant's expression behavior during face-to-face interaction between the infant and a caregiver in the still-face phase of the still-face paradigm;
a face preprocessing module, which detects the face position of the tested infant in the video, crops the face region, and corrects faces in different poses;
an expression behavior index feature extraction module, which extracts texture features from the preprocessed face images, classifies them to obtain face motion unit recognition results, treats these results as an input signal, computes statistics over the time dimension according to a window length to obtain segmented expression statistical features, and uses them as behavior index features;
a high-risk autism spectrum disorder risk prediction module, which predicts whether the infant is at high risk of autism spectrum disorder using the segmented expression statistical features and the support vector machine classifier; and
a screening report module, which reports the infant's expression recognition results over the 1-minute period and the risk coefficient for high-risk autism spectrum disorder.
The invention has the advantages that:
First, the risk of high-risk autism spectrum disorder can be automatically analyzed and judged from just 1 minute of infant facial expression behavior data recorded during the still-face phase of the still-face paradigm; the procedure is short, requires only inexpensive non-contact equipment, and is simple to operate, making it suitable for large-scale automatic screening for high-risk autism spectrum disorder;
Second, compared with manual assessment, the invention offers higher reliability through quantitative evaluation and higher efficiency through automatic evaluation.
Drawings
FIG. 1 is a block diagram of the framework of the method of the present invention;
FIG. 2 shows the acquisition scene arrangement of the method of the present invention;
FIG. 3 illustrates the segment-statistical expression behavior features of the method of the present invention;
FIG. 4 is a schematic illustration of high-risk autism spectrum disorder prediction in the method of the present invention.
Detailed Description
Manual observation of the facial behavior of high-risk autism spectrum disorder infants under the still-face paradigm shows that the duration and frequency of positive emotion differ significantly between autism spectrum disorder infants and typically developing infants during the still-face experiment. In addition, for automatic expression assessment, psychologists have developed the Facial Action Coding System (FACS), under which facial expressions are regarded as products of the deformation of facial action units. Based on these manual observations of infant emotion in autism and on the FACS coding system, the invention uses the results of automatic facial action unit analysis as behavior features for automatic autism spectrum disorder detection.
The specific implementation process of the invention is as follows:
S1, video acquisition: under the still-face paradigm, an emotional response video of an infant under 2 years of age is recorded while the caregiver keeps an expressionless face for 1 minute. The infant sits in an infant chair, the caregiver and the infant sit facing each other, and the caregiver's head is at approximately the same height as the infant's head. The interaction has 2 stages: in the first 2 minutes, the caregiver engages the child through speech, facial expression, body movement and the like; in the later stage, the caregiver keeps an expressionless face and makes no speech or body movement, while the child's facial expressions and behavior in this stage are recorded for 60 seconds;
the invention adopts a 1-path camera to capture the facial expression of the front face of the infant, and the camera facing the infant is positioned at the position about 40cm in front of the infant and is about 20cm higher than the top of the head of the infant when the infant sits. The acquisition scenario is shown in fig. 2.
The invention collects the facial expression video data of 37 high-risk autism spectrum disorder infants and 49 normal-development infants.
S2, face preprocessing: face detection, face feature point positioning, face correction and size normalization operations
Face detection: for the current frame of the video (from the second frame onward), the face position of the previous frame is used as prior information. The face detection box output by the MTCNN face detector (with the three stage detection thresholds set to 0.6, 0.7, and 0.8) is expanded by a factor of 1.5 to obtain a candidate face region, and face detection for the current frame is performed within this expanded region. If no face is detected, the MTCNN thresholds are reduced by 10%, from the coarse stage to the fine stage, until face detection succeeds. This strategy achieves efficient and more stable face detection performance.
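A minimal Python sketch of this tracking-by-detection strategy, assuming a generic detector callable that returns a single box or None (the patent itself specifies only the MTCNN detector, the 1.5x expansion, the 0.6/0.7/0.8 stage thresholds, and the 10% threshold reduction):

def expand_box(box, scale, frame_shape):
    # Expand an (x1, y1, x2, y2) box about its center by `scale`, clipped to the frame.
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = (x2 - x1) * scale, (y2 - y1) * scale
    img_h, img_w = frame_shape[:2]
    return (max(0, int(cx - w / 2)), max(0, int(cy - h / 2)),
            min(img_w, int(cx + w / 2)), min(img_h, int(cy + h / 2)))

def detect_with_backoff(detector, frame, prev_box,
                        base_thresholds=(0.6, 0.7, 0.8), decay=0.9, min_scale=0.5):
    # Search for the face inside the 1.5x-expanded previous-frame box, relaxing the
    # three MTCNN stage thresholds by 10% per retry; min_scale is an added safeguard.
    x1, y1, x2, y2 = expand_box(prev_box, 1.5, frame.shape)
    region = frame[y1:y2, x1:x2]
    scale = 1.0
    while scale >= min_scale:
        thresholds = [t * scale for t in base_thresholds]
        box = detector(region, thresholds)          # assumed to return (x1, y1, x2, y2) or None
        if box is not None:
            bx1, by1, bx2, by2 = box
            return (bx1 + x1, by1 + y1, bx2 + x1, by2 + y1)   # back to frame coordinates
        scale *= decay                               # relax thresholds by 10%
    return None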
Positioning face feature points: detecting 68 characteristic points of the human face by using a Constrained Local Models (CLMs) technology;
Face correction and size normalization: on the basis of the 68 facial feature points, the detected face is registered to a standard face template (112 × 112); the face is corrected and normalized to 112 × 112 pixels, which removes background interference outside the face.
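A hedged sketch of this alignment step using OpenCV; the patent does not specify the transform estimation method or the template coordinates, so the similarity-transform fit and the template68 points are assumptions:

import cv2
import numpy as np

def align_face(frame, landmarks68, template68, out_size=112):
    # Fit a similarity transform (rotation + scale + translation) from the detected
    # 68 landmarks to the 112x112 canonical template and warp the frame onto it.
    src = np.asarray(landmarks68, dtype=np.float32)
    dst = np.asarray(template68, dtype=np.float32)
    M, _ = cv2.estimateAffinePartial2D(src, dst)
    if M is None:                                    # estimation can fail on bad landmarks
        return None
    return cv2.warpAffine(frame, M, (out_size, out_size))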
S3, expression behavior index feature extraction:
extracting Histogram of Oriented Gradients (HOG) features from the normalized face image;
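A minimal sketch of the HOG extraction with scikit-image; the cell and block sizes are illustrative assumptions, since the patent does not specify them:

from skimage.feature import hog

def extract_hog(face_gray_112):
    # Histogram of Oriented Gradients descriptor for a 112x112 grayscale face crop.
    return hog(face_gray_112, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')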
According to the Facial Action Coding System (FACS) developed by psychologists, 18 facial motion units are analyzed in the children's facial expression videos: AU1, AU2, AU4, AU5, AU6, AU7, AU9, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26, AU28, and AU45. The face motion units are analyzed with the OpenFace open-source tool;
By analyzing the 18 face motion unit results for each frame of the video, the result of each frame is expressed as an expression recognition result vector X_i = [y_i1, y_i2, …, y_ij, …, y_i18]^T, where i denotes the i-th frame of the video, y_ij ∈ {0, 1} is the recognition result of the j-th class of face motion unit in the i-th frame, and the video duration T is 60 seconds, i.e. 1500 frames in total;
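One possible way to assemble the per-frame binary AU matrix from OpenFace output; the "AUxx_c" presence-column naming is an assumption about the OpenFace CSV format and should be checked against the version actually used:

import pandas as pd

# The 18 AUs listed in the patent, in order.
AUS = [1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 17, 20, 23, 25, 26, 28, 45]

def load_au_matrix(openface_csv):
    # Return a (num_frames x 18) matrix with X[i, j] = 1 if AU j is present in frame i.
    df = pd.read_csv(openface_csv)
    df.columns = [c.strip() for c in df.columns]     # OpenFace CSVs often pad column names
    cols = ["AU{:02d}_c".format(au) for au in AUS]
    return df[cols].to_numpy(dtype=int)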
The video of duration T is divided into n (n = 4) video segments according to a given window length, and the high-risk autism spectrum disorder group and the normally developing group are distinguished according to the difference in the occurrence durations of AU6 and AU17;
With each segment of length S, the per-segment statistic of the j-th AU within the k-th window and the behavior feature matrix of the whole video are computed (the defining formulas appear as images in the original patent); concatenating the rows of the behavior feature matrix yields a vector V ∈ R^(1×18n).
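The exact per-segment statistic is given only as an image in the patent; one plausible reading, counting per AU the frames in which it is active within each of the n = 4 windows and concatenating the rows of the resulting 18 x n matrix, is sketched below:

import numpy as np

def segment_features(X, n_segments=4):
    # X: (T x 18) binary per-frame AU matrix; T = 1500 frames for a 60 s video at 25 fps.
    T = X.shape[0]
    S = T // n_segments                                        # segment length, e.g. 375 frames
    F = np.stack([X[k * S:(k + 1) * S].sum(axis=0)             # (18,) active-frame counts in window k
                  for k in range(n_segments)], axis=1)         # F has shape 18 x n
    return F.reshape(1, -1)                                    # row-wise concatenation: V in R^(1 x 18n)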
S4, risk prediction of high-risk autism spectrum disorder:
On the basis of the segment-statistical expression behavior index features, a support vector machine classifier with a polynomial kernel is trained on the V vectors; a leave-one-subject-out cross-validation strategy is adopted during training, i.e. in each fold one subject's data is held out for testing and the remaining subjects' data are used for training. The experiments use a polynomial kernel:
K(x_i, x_j) = (γ·x_i^T x_j + r)^d,
where γ, r, and d are kernel parameters, with γ = 0.5, r = 0, and d = 3;
The optimal classifier parameters are determined through the cross-validation strategy. On this basis, all data from the two infant groups collected in this study are used to train a high-risk autism spectrum disorder detection model. For a new infant, an expression video is collected during the still-face phase of the still-face paradigm while the caregiver keeps a still face, the 18 classes of face motion unit results are recognized for each frame of the video, the segmented expression statistical features are computed as in step S3, and the test features are finally input to the trained model to predict the risk of high-risk autism spectrum disorder.
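A sketch of the classifier training and leave-one-subject-out evaluation with scikit-learn, using the kernel parameters stated above (gamma = 0.5, r = coef0 = 0, d = degree = 3); the function name and data layout are illustrative assumptions:

from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def loo_polynomial_svm(V, y):
    # V: (num_subjects x 18n) segment-statistic vectors, one row per infant;
    # y: labels (1 = high-risk ASD, 0 = typically developing).
    clf = SVC(kernel='poly', gamma=0.5, coef0=0.0, degree=3)
    # Each subject contributes one vector, so leave-one-out equals leave-one-subject-out.
    y_pred = cross_val_predict(clf, V, y, cv=LeaveOneOut())
    # A final model trained on all subjects, as described in the patent:
    final_model = SVC(kernel='poly', gamma=0.5, coef0=0.0, degree=3).fit(V, y)
    return y_pred, final_model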
S5, screening report: reporting the infant's expression recognition results over the 1-minute period and the risk coefficient for high-risk autism spectrum disorder.
The method achieves a classification accuracy of 88.37%, a sensitivity of 83.78%, and a specificity of 91.84%. The leave-one-subject-out cross-validation confusion matrix is shown in Table 1.
TABLE 1 Classification result confusion matrix (rendered as an image in the original patent; the counts below are those implied by the reported sensitivity and specificity for the 37 high-risk and 49 typically developing infants)

                                  Predicted high-risk    Predicted typically developing
High-risk ASD (n = 37)                    31                          6
Typically developing (n = 49)              4                         45
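For completeness, the reported metrics can be computed from the leave-one-subject-out predictions as follows (a small sketch; the helper name is illustrative):

from sklearn.metrics import confusion_matrix

def screening_metrics(y_true, y_pred):
    # Binary screening metrics: 1 = high-risk ASD, 0 = typically developing.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)      # proportion of high-risk infants correctly flagged
    specificity = tn / (tn + fp)      # proportion of typically developing infants correctly cleared
    return accuracy, sensitivity, specificity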
The invention further discloses an automatic autism spectrum disorder detection device based on video expression behavior analysis, comprising:
a video acquisition module, which acquires video data of the infant's expression behavior during face-to-face interaction between the infant and a caregiver in the still-face phase of the still-face paradigm;
a face preprocessing module, which detects the face position of the tested infant in the video, crops the face region, and corrects faces in different poses;
an expression behavior index feature extraction module, which extracts texture features from the preprocessed face images, classifies them to obtain face motion unit recognition results, treats these results as an input signal, computes statistics over the time dimension according to a window length to obtain segmented expression statistical features, and uses them as behavior index features;
a high-risk autism spectrum disorder risk prediction module, which predicts whether the infant is at high risk of autism spectrum disorder using the segmented expression statistical features and the support vector machine classifier; and
a screening report module, which reports the infant's expression recognition results over the 1-minute period and the risk coefficient for high-risk autism spectrum disorder.
The above embodiments are only intended to illustrate the technical solution of the present invention and not to limit the same, and a person skilled in the art can modify the technical solution of the present invention or substitute the same without departing from the spirit and scope of the present invention, and the scope of the present invention should be determined by the claims.

Claims (5)

1. An automatic autism spectrum disorder detection method based on video expression behavior analysis, used for automatic screening of high-risk autism spectrum disorder within 24 months of age, characterized by comprising the following steps:
S1, video acquisition: acquiring video data of the infant's expression behavior during face-to-face interaction between the infant and a caregiver in the still-face phase of the still-face paradigm;
S2, face preprocessing: detecting the face position of the tested infant in the video, cropping the face region, and correcting faces in different poses;
S3, expression behavior index feature extraction: extracting texture features from the preprocessed face images, classifying the texture features to obtain face motion unit recognition results, treating these results as an input signal, and computing statistics over the time dimension according to a window length to obtain segmented expression statistical features, which are used as behavior index features; this step specifically comprises:
S3.1, extracting histogram of oriented gradients features from the face image;
S3.2, face motion unit classification: recognizing 18 classes of face motion unit results from the face;
S3.3, by analyzing the 18 face motion unit results for each frame of the video, expressing the result of each frame as an expression recognition result vector X_i = [y_i1, y_i2, …, y_ij, …, y_i18]^T, where i denotes the i-th frame of the video, y_ij ∈ {0, 1} is the recognition result of the j-th class of face motion unit in the i-th frame, and the video duration T is 60 seconds, i.e. 1500 frames in total;
S3.4, because the emotional response of an autism spectrum disorder infant to the caregiver's sudden expressionless face and absence of limb responses differs from that of a typically developing infant, the video of duration T is divided into n video segments according to a given window length, and the difference in expression results between the two infant groups is examined at the level of each segment. A statistical difference between the groups is found in the per-segment recognition results of the face motion unit AU6, which is related to positive emotion, and the face motion unit AU17, which is related to negative emotion: the group mean of the total occurrence duration of AU6 in the normally developing group is lower than that in the high-risk autism spectrum disorder group, while the group mean of the total occurrence duration of AU17 in the normally developing group is higher than that in the high-risk group. Based on this difference in the occurrence durations of these two AU results, the high-risk autism spectrum disorder group and the normally developing group can be distinguished;
S3.5, each segment has length S; the per-segment statistic of the j-th face motion unit result AU within the k-th window and the behavior feature matrix of the whole video are defined by formulas that appear as images in the original patent; concatenating the rows of the behavior feature matrix yields a vector V ∈ R^(1×18n);
S4, risk prediction of high-risk autism spectrum disorder: on the basis of the segment-statistical expression behavior index features, a support vector machine classifier is trained on the V vectors; a leave-one-subject-out cross-validation strategy is adopted during training, i.e. in each fold one subject's data is held out for testing and the remaining subjects' data are used for training; a polynomial kernel is used in the experiments:
K(x_i, x_j) = (γ·x_i^T x_j + r)^d, where γ, r, and d are kernel parameters;
on this basis, all data from the two infant groups collected in this study are used to train a high-risk autism spectrum disorder detection model; for a new infant, an expression video is collected during the still-face phase of the still-face paradigm while the caregiver keeps a still face, the 18 classes of face motion unit results are recognized for each frame of the video, the segmented expression statistical features are computed as in steps S3.4 and S3.5, and the test features are finally input to the trained model to predict the risk of high-risk autism spectrum disorder;
S5, screening report: reporting the infant's expression recognition results over the 1-minute period and the risk coefficient for high-risk autism spectrum disorder.
2. The automatic autism spectrum disorder detection method based on video expression behavior analysis according to claim 1, characterized in that the video acquisition of step S1 specifically comprises:
S1.1, the tested infant is within 24 months of age and sits in an infant chair; the caregiver and the infant sit facing each other, with the caregiver's head at approximately the same height as the infant's head;
S1.2, the interaction has 2 stages: in the first 2 minutes, the caregiver engages the child through speech, facial expression, body movement and the like; in the later stage, the caregiver keeps an expressionless face and makes no speech or body movement, while the child's facial expressions and behavior in this stage are recorded for 60 seconds at a sampling frame rate of 25 frames/second.
3. The automatic autism spectrum disorder detection method based on video expression behavior analysis according to claim 1, characterized in that the face preprocessing of step S2 specifically comprises:
S2.1, performing face detection on each frame of the expression video;
S2.2, cropping the face image with the detected candidate box to obtain a face region with the background removed, and correcting the face image, specifically by locating facial feature points, correcting the face, and normalizing the face size to 112 × 112 pixels.
4. The method of claim 1, wherein the 18 face motion unit results include AU1, AU2, AU4, AU5, AU6, AU7, AU9, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26, AU28 and AU45.
5. An automatic autism spectrum disorder detection device based on video expression behavior analysis, characterized by comprising:
a video acquisition module, which acquires video data of the infant's expression behavior during face-to-face interaction between the infant and a caregiver in the still-face phase of the still-face paradigm;
a face preprocessing module, which detects the face position of the tested infant in the video, crops the face region, and corrects faces in different poses;
an expression behavior index feature extraction module, which extracts texture features from the preprocessed face images, classifies them to obtain face motion unit recognition results, treats these results as an input signal, computes statistics over the time dimension according to a window length to obtain segmented expression statistical features, and uses them as behavior index features;
a high-risk autism spectrum disorder risk prediction module, which predicts whether the infant is at high risk of autism spectrum disorder using the segmented expression statistical features and the support vector machine classifier; and
a screening report module, which reports the infant's expression recognition results over the 1-minute period and the risk coefficient for high-risk autism spectrum disorder.
CN201911053897.3A 2019-10-31 2019-10-31 Automatic autism spectrum disorder detection method and device based on video expression behavior analysis Active CN111128368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911053897.3A CN111128368B (en) 2019-10-31 2019-10-31 Automatic autism spectrum disorder detection method and device based on video expression behavior analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911053897.3A CN111128368B (en) 2019-10-31 2019-10-31 Automatic autism spectrum disorder detection method and device based on video expression behavior analysis

Publications (2)

Publication Number Publication Date
CN111128368A true CN111128368A (en) 2020-05-08
CN111128368B CN111128368B (en) 2023-04-07

Family

ID=70495584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911053897.3A Active CN111128368B (en) 2019-10-31 2019-10-31 Automatic autism spectrum disorder detection method and device based on video expression behavior analysis

Country Status (1)

Country Link
CN (1) CN111128368B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163512A (en) * 2020-09-25 2021-01-01 杨铠郗 Autism spectrum disorder face screening method based on machine learning
CN112597841A (en) * 2020-12-14 2021-04-02 之江实验室 Emotion analysis method based on door mechanism multi-mode fusion
CN116665310A (en) * 2023-07-28 2023-08-29 中日友好医院(中日友好临床医学研究所) Method and system for identifying and classifying tic disorder based on weak supervision learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279387A (en) * 2015-11-17 2016-01-27 东南大学 Execution function evaluating and training system for autism spectrum disorder children
CN109272259A (en) * 2018-11-08 2019-01-25 梁月竹 A kind of autism-spectrum disorder with children mood ability interfering system and method
CN110349667A (en) * 2019-07-05 2019-10-18 昆山杜克大学 The autism assessment system analyzed in conjunction with questionnaire and multi-modal normal form behavioral data
CN110349674A (en) * 2019-07-05 2019-10-18 昆山杜克大学 Autism-spectrum obstacle based on improper activity observation and analysis assesses apparatus and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279387A (en) * 2015-11-17 2016-01-27 东南大学 Execution function evaluating and training system for autism spectrum disorder children
CN109272259A (en) * 2018-11-08 2019-01-25 梁月竹 A kind of autism-spectrum disorder with children mood ability interfering system and method
CN110349667A (en) * 2019-07-05 2019-10-18 昆山杜克大学 The autism assessment system analyzed in conjunction with questionnaire and multi-modal normal form behavioral data
CN110349674A (en) * 2019-07-05 2019-10-18 昆山杜克大学 Autism-spectrum obstacle based on improper activity observation and analysis assesses apparatus and system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163512A (en) * 2020-09-25 2021-01-01 杨铠郗 Autism spectrum disorder face screening method based on machine learning
CN112597841A (en) * 2020-12-14 2021-04-02 之江实验室 Emotion analysis method based on door mechanism multi-mode fusion
CN116665310A (en) * 2023-07-28 2023-08-29 中日友好医院(中日友好临床医学研究所) Method and system for identifying and classifying tic disorder based on weak supervision learning
CN116665310B (en) * 2023-07-28 2023-11-03 中日友好医院(中日友好临床医学研究所) Method and system for identifying and classifying tic disorder based on weak supervision learning

Also Published As

Publication number Publication date
CN111128368B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111128368B (en) Automatic autism spectrum disorder detection method and device based on video expression behavior analysis
CN113069080B (en) Difficult airway assessment method and device based on artificial intelligence
O’Shea et al. Deep learning for EEG seizure detection in preterm infants
CN110598608B (en) Non-contact and contact cooperative psychological and physiological state intelligent monitoring system
CN113990494B (en) Tic disorder auxiliary screening system based on video data
CN107887032A (en) A kind of data processing method and device
Tang et al. Automatic identification of high-risk autism spectrum disorder: a feasibility study using video and audio data under the still-face paradigm
CN111513726B (en) System for evaluating AMS risk based on IHT dynamic performance
CN114358194A (en) Gesture tracking based detection method for abnormal limb behaviors of autism spectrum disorder
Ali et al. Video-based behavior understanding of children for objective diagnosis of autism
Zaki et al. The study of drunken abnormal human gait recognition using accelerometer and gyroscope sensors in mobile application
Mahmud et al. Sleep apnea event detection from sub-frame based feature variation in EEG signal using deep convolutional neural network
Hosseini et al. Convolution neural network for pain intensity assessment from facial expression
CN110718301A (en) Alzheimer disease auxiliary diagnosis device and method based on dynamic brain function network
US20240050006A1 (en) System and method for prediction and control of attention deficit hyperactivity (adhd) disorders
CN113642525A (en) Infant neural development assessment method and system based on skeletal points
CN116386845A (en) Schizophrenia diagnosis system based on convolutional neural network and facial dynamic video
CN115439920B (en) Consciousness state detection system and equipment based on emotional audio-visual stimulation and facial expression
CN114464319B (en) AMS susceptibility assessment system based on slow feature analysis and deep neural network
Abdullah et al. Parkinson’s Disease Symptom Detection using Hybrid Feature Extraction and Classification Model
Gamage et al. Academic depression detection using behavioral aspects for Sri Lankan university students
Weber et al. Deep transfer learning for video-based detection of newborn presence in incubator
CN110675953B (en) System for identifying psychotic patients using artificial intelligence and big data screening
Budarapu et al. Early Screening of Autism among Children Using Ensemble Classification Method
Narala et al. Prediction of Autism Spectrum Disorder Using Efficient Net

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant