CN114129164B - Autism spectrum disorder risk detection system, detection terminal and cloud server - Google Patents
- Publication number: CN114129164B (application No. CN202111303594.XA)
- Authority: CN (China)
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- A61B5/16: Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state (A: Human necessities; A61: Medical or veterinary science; hygiene; A61B: Diagnosis; surgery; identification; A61B5/00: Measuring for diagnostic purposes; identification of persons)
- A61B5/165: Evaluating the state of mind, e.g. depression, anxiety
- A61B2503/06: Children, e.g. for attention deficit diagnosis (A61B2503/00: Evaluating a particular growth phase or type of persons or animals)
Abstract
The application discloses an autism spectrum disorder risk detection system, a detection terminal and a cloud server, which address the inability of existing autism spectrum disorder detection methods to distinguish between disorder subtypes. The system comprises at least one risk judgment module and a plurality of risk detection modules. The plurality of risk detection modules includes: an attention dysfunction risk detection module; an intention subject detection dysfunction risk detection module; and a basic emotion recognition dysfunction risk detection module. The risk judgment module is used for determining whether the tested person suffers from autism spectrum disorder, and the specific disorder type, according to the risk detection results of the risk detection modules. Through the reasonable arrangement of multiple detection modules, the application achieves subtype detection of autism spectrum disorder in the tested person; it is an effective auxiliary detection means for autism and can be widely applied in clinical scenarios of autism diagnosis and treatment.
Description
Technical Field
The application relates to the field of auxiliary diagnosis medical treatment of autism, in particular to an autism spectrum disorder risk detection system, a detection terminal and a cloud server.
Background
Autism is a developmental disorder whose major symptom is social impairment. Since no drug therapy with a definite therapeutic effect on autism has been found, the principal means of ameliorating autism are early diagnosis and educational intervention.
The existing autism detection methods mainly include the following:
Brain-computer interface technology: a specific electroencephalographic (EEG) activity pattern is identified through EEG data acquisition and analysis, from which the risk of autism spectrum disorder is inferred; see, for example, the Chinese invention patent "Virtual immersion type autism children treatment system based on brain electrical signals and an eye tracker", publication No. CN113082448A.
Eye-tracking technology: an eye tracker records the tested person's eye-movement trajectory pattern under specific image stimuli or visual-task stimuli, from which the risk of autism spectrum disorder is inferred; see, for example, the Chinese invention patent "Method and system for detecting observation and reading ability based on eye movement data", publication No. CN112137576A.
Mobile phone usage behavior: features such as telephone and app usage, environmental sounds and movement are fused as multi-modal data, from which the tested person's risk of autism spectrum disorder is inferred; see, for example, the Chinese invention patent "Method, device and system for evaluating autism spectrum disorder", publication No. CN113194816A.
Other methods infer the risk of autism spectrum disorder from other visual behavior data or from questionnaires filled in manually online; see, for example, the Chinese utility model patent "A children autism detector", publication No. CN213821432U.
Autism spectrum disorders are not a single class of disease but a group of mental disorders whose subtypes differ greatly from one another. These large differences mean that the symptoms, causes and interventions of the subtypes cannot be handled uniformly along a single technical route; each subtype should be handled according to its own characteristics.
However, the existing autism detection methods cannot perform subtype detection of autism spectrum disorder and therefore cannot meet the clinical requirements of autism detection.
Disclosure of Invention
The embodiments of the application provide an autism spectrum disorder risk detection system, a detection terminal and a cloud server, aiming to solve the technical problem that the existing autism detection methods cannot perform subtype detection of autism spectrum disorder and cannot meet the clinical requirements of autism detection.
In a first aspect, an embodiment of the present application provides an autism spectrum disorder risk detection system, comprising at least one risk judgment module and a plurality of risk detection modules. The plurality of risk detection modules includes: an attention dysfunction risk detection module M1, configured to collect a first reaction image while the tested person watches a first test image, to determine a first fixation point set of the tested person from the first reaction image, and to determine the attention dysfunction risk detection result of the tested person from the coincidence degree of the first fixation point set and the first test image; an intention subject detection dysfunction risk detection module M2, configured to collect a second reaction image while the tested person watches a second test image, to determine a second fixation point set of the tested person from the second reaction image, and to determine the intention subject detection dysfunction risk detection result of the tested person from the coincidence degree of the second fixation point set and the image motion track in the second test image; and a basic emotion recognition dysfunction risk detection module M6, configured to collect a third reaction image while the tested person watches a third test image, to determine the facial data features and limb data features of the tested person while watching the third test image from the third reaction image, and to determine the basic emotion recognition dysfunction risk detection result of the tested person from those features based on an image preprocessing program and a convolutional neural network model. The risk judgment module is used for determining whether the tested person suffers from autism spectrum disorder, and the specific disorder type, according to the risk detection results of the risk detection modules.
In the embodiment of the application, a plurality of different risk detection modules are provided, and the tested person is tested with different test contents, so that a more comprehensive test result is obtained. The risk detection modules adopted by the system include at least the attention dysfunction risk detection module, the intention subject detection dysfunction risk detection module and the basic emotion recognition dysfunction risk detection module; by comprehensively considering the results of the different modules, subtype detection of autism spectrum disorder can be achieved. These modules do not require complex cognitive abilities: a subject can be tested as long as he or she has normal visual and limb abilities, so the system is suitable for test subjects of all ages, including infants.
In one embodiment of the present application, the first test image comprises a light spot flickering image; the determining the first gaze point set of the subject according to the first reaction image includes: determining a plurality of feature points of the head image of the tested person, and determining the first fixation point set of the tested person according to the position features of the feature points; the plurality of feature points include: head vertex, left frontal angle vertex, right frontal angle vertex, chin vertex, medial left canthus, medial right canthus, lateral left canthus, lateral right canthus, left pupillary point, right pupillary point.
In the embodiment of the application, the positions of the feature points of the tested person's head are collected to establish the head pose of the tested person and the positional relationship between the eyes and each head feature point; from these, the gaze line of the tested person is determined, and the fixation point is determined from the gaze line. Whether the attention function of the tested person is normal is then detected from the coincidence degree between the tested person's fixation points and the flickering light-spot positions of the first test image.
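As a rough illustration of how a fixation estimate might be derived from the head feature points, the sketch below uses only the pupil and inner-canthus positions; the patent itself feeds all ten points to an SVM model, and every name and the linear offset rule here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class HeadFeatures:
    # (x, y) image coordinates for a subset of the ten named landmarks
    left_pupil: tuple
    right_pupil: tuple
    inner_left_canthus: tuple
    inner_right_canthus: tuple

def gaze_offset(f: HeadFeatures) -> tuple:
    """Average pupil displacement relative to the inner eye corners,
    a crude proxy for gaze direction (not the patent's SVM mapping)."""
    dx = ((f.left_pupil[0] - f.inner_left_canthus[0]) +
          (f.right_pupil[0] - f.inner_right_canthus[0])) / 2
    dy = ((f.left_pupil[1] - f.inner_left_canthus[1]) +
          (f.right_pupil[1] - f.inner_right_canthus[1])) / 2
    return (dx, dy)
```

A larger offset magnitude corresponds to gaze directed further from the head's forward axis; a trained regressor would map this (together with head pose) to screen coordinates.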
In one embodiment of the present application, the second test image shows the movement of four groups of test patterns toward the tested person; the four groups of test patterns are, respectively, a triangle, an autonomous vehicle, an animal and a human.
In an embodiment of the present application, the system further includes an intention subject intention reasoning dysfunction risk detection module M3, configured to ask the tested person to judge, while viewing the second test image, whether each of the four groups of test patterns intends to actively approach him or her, and to determine the intention subject intention reasoning dysfunction risk detection result of the tested person from the tested person's answers.
In the embodiment of the present application, the second test image uses four different patterns (triangle, autonomous vehicle, animal and human) and shows their movement toward the tested person. If the intention reasoning function for intention-class subjects is sound, the tested person can distinguish whether each pattern has the intent to actively approach, and will judge that the triangle and the autonomous vehicle have no such intent while the animal and the human do. Whether the tested person's intention reasoning function is impaired can therefore be judged from the correctness of the tested person's choices.
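The expected answer key implied above (triangles and autonomous vehicles lack active-approach intent; animals and humans have it) can be encoded for scoring. This is a sketch: the group names and the simple fraction-correct score are assumptions, not identifiers or a scoring rule from the patent.

```python
# Expected key per the text: True = "has intent to actively approach"
ANSWER_KEY = {
    "triangle": False,
    "autonomous_vehicle": False,
    "animal": True,
    "human": True,
}

def intent_reasoning_accuracy(answers: dict) -> float:
    """Fraction of the four pattern groups the tested person judged
    correctly; a missing answer counts as incorrect."""
    correct = sum(1 for group, truth in ANSWER_KEY.items()
                  if answers.get(group) == truth)
    return correct / len(ANSWER_KEY)
```

The M3 result would then be derived by comparing this accuracy with some threshold, which the text does not specify.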
In one embodiment of the present application, the system further comprises a common attention establishing dysfunction risk detection module M4 and a common attention semantic reasoning dysfunction risk detection module M5. The common attention establishing dysfunction risk detection module M4 is used for displaying a fourth test image, in which a character image faces a plurality of orientation points in sequence; for collecting a fourth reaction image while the tested person watches the fourth test image; for determining a fourth fixation point set of the tested person from the fourth reaction image; and for determining the common attention establishing dysfunction risk detection result of the tested person from the coincidence degree of the fourth fixation point set and the orientation points. The common attention semantic reasoning dysfunction risk detection module M5 is used for displaying different object discrimination test patterns and object names at the orientation points that the character image faces in sequence in the fourth test image; for requiring the tested person to judge whether each object discrimination test pattern corresponds to its object name; and for determining the common attention semantic reasoning dysfunction risk detection result of the tested person from the accuracy of the tested person's judgments.
In one embodiment of the present application, the system further comprises: and the complex emotion recognition dysfunction risk detection module M7 is used for acquiring a fifth reaction image when the tested person watches the fifth test image, determining the facial data characteristics and the limb data characteristics of the tested person when the tested person watches the fifth test image according to the fifth reaction image, and determining the complex emotion recognition dysfunction risk detection result of the tested person based on the image preprocessing program and the convolutional neural network model according to the facial data characteristics and the limb data characteristics.
In one embodiment of the application, the system selects the modules to start according to the age of the tested person. When the tested person is aged 0 to 3 years, the attention dysfunction risk detection module M1, the intention subject detection dysfunction risk detection module M2 and the basic emotion recognition dysfunction risk detection module M6 are started, with M1 and M2 started in sequence. When the tested person is over 3 years old, the attention dysfunction risk detection module M1, the intention subject detection dysfunction risk detection module M2, the intention subject intention reasoning dysfunction risk detection module M3, the common attention establishing dysfunction risk detection module M4, the common attention semantic reasoning dysfunction risk detection module M5, the basic emotion recognition dysfunction risk detection module M6 and the complex emotion recognition dysfunction risk detection module M7 are started, in that order.
In the embodiment of the application, the intention class subject intention reasoning functional disorder risk detection module M3, the common attention establishing functional disorder risk detection module M4, the common attention semantic reasoning functional disorder risk detection module M5 and the complex emotion recognition functional disorder risk detection module M7 have certain requirements on the cognitive level and the intelligence level of a person to be tested, and are not suitable for detecting infants aged 0 to 3 years. Therefore, only the attention function disorder risk detection module M1, the intention subject detection function disorder risk detection module M2, and the basic emotion recognition function disorder risk detection module M6 are activated when detecting infants between 0 and 3 years old.
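The age-based module selection described above can be encoded as a small helper. The boundary handling at exactly 3 years follows the "0 to 3 years" wording and is an assumption; module identifiers follow the M1 to M7 labels used in the text.

```python
def modules_for_age(age_years: float) -> list:
    """Modules to start, in start order, per the age rule:
    infants of 0-3 years get only M1, M2 and M6; over 3 years,
    all seven modules are started in sequence."""
    if age_years <= 3:
        return ["M1", "M2", "M6"]
    return ["M1", "M2", "M3", "M4", "M5", "M6", "M7"]
```

The returned list order matches the sequential start order stated in the embodiment.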
In one embodiment of the application, the system selects different test image databases according to different sexes of the tested person; the test image database comprises a fourth test image and a fifth test image; the system sets different risk judgment thresholds according to different sexes of the tested person.
In the embodiment of the application, different test image databases are used for tested persons of different genders, because performance videos of the same gender make it easier for the tested person to empathize, yielding a better test effect. Considering that the prevalence of autism differs between males and females, different thresholds are set for the test results of males and females to improve test accuracy.
In a second aspect, an embodiment of the present application provides an autism spectrum disorder risk detection terminal, where the terminal includes a display, an interaction unit, a camera, a processor and a transmitter. The display is used for displaying a test image, where the test image includes any one or more of the first, second, third, fourth and fifth test images, and for displaying test questions. The interaction unit is used for the tested person to select answers to the test questions. The camera is used for collecting a reaction image while the tested person watches the test image, where the reaction image includes any one or more of the first, second, third, fourth and fifth reaction images. The processor is configured to perform edge calculation, including: determining the first fixation point set of the tested person from the first reaction image; calculating the coincidence degree of the first fixation point set and the first test image; determining the second fixation point set of the tested person from the second reaction image; calculating the coincidence degree of the second fixation point set and the image motion track in the second test image; and determining, from the third reaction image, the facial data features and limb data features of the tested person while watching the third test image. The transmitter is used for transmitting the edge calculation results to the cloud server, where the edge calculation results are used to determine whether the tested person suffers from autism spectrum disorder and the specific disorder type.
In the embodiment of the application, the edge calculation content is placed at the terminal, so that the terminal only needs to output the data after feature extraction to the server, and does not need to upload the reaction video of the tested person, thereby greatly reducing the data volume sent by the terminal to the server, improving the overall transmission speed and shortening the time for waiting for the detection result.
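A minimal sketch of the payload the terminal might send after edge calculation: only the extracted features leave the terminal, never the raw reaction video. The JSON encoding and all field names are invented for illustration; the patent does not specify a wire format.

```python
import json

def build_edge_payload(degree_af, degree_it, face_features, limb_features):
    """Pack edge-calculation results for upload to the cloud server.
    degree_af: coincidence of first fixation point set vs. first test image.
    degree_it: coincidence of second fixation point set vs. motion track.
    face_features / limb_features: features extracted from the third
    reaction image (format assumed to be plain numeric lists here)."""
    return json.dumps({
        "degree_af": degree_af,
        "degree_it": degree_it,
        "face": face_features,
        "limb": limb_features,
    })
```

Because the payload carries a handful of scalars and feature vectors instead of video frames, the upload is orders of magnitude smaller, which is the transmission-speed benefit the paragraph above describes.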
In a third aspect, the embodiment of the application provides an autism spectrum disorder risk detection cloud server, which is characterized in that the cloud server comprises a receiver and a processor; the receiver is used for receiving the edge calculation result sent by the terminal; the edge calculation results include: the coincidence degree of the first fixation point set and the first test image, the coincidence degree of the second fixation point set and the image motion track in the second test image, and the facial data characteristics and the limb data characteristics of the tested person when watching the third test image; the processor is configured to execute the following instructions: determining the attention dysfunction risk detection result of the tested person according to the coincidence degree of the first fixation point set and the first test image; determining the detection result of the intention type main body detection dysfunction risk of the tested person according to the coincidence degree of the second fixation point set and the image motion track in the second test image; determining a basic emotion recognition dysfunction risk detection result of the tested person based on an image preprocessing program and a convolutional neural network model according to the facial data characteristics and the limb data characteristics of the tested person when the tested person watches the third test image; and determining whether the tested person suffers from autism spectrum disorder and the specific disorder type according to the attention dysfunction risk detection result, the intention main body detection dysfunction risk detection result and the basic emotion recognition dysfunction risk detection result.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a risk detection system for autism spectrum disorder according to an embodiment of the present disclosure;
fig. 2 is a block diagram of a risk detection system for autism spectrum disorder according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Autism spectrum disorder is a relatively common developmental disorder; however, it is not a single class of disease but a group of mental disorders. The subtypes differ greatly from one another, which means that the symptom expression, pathogenic causes and intervention methods of the various subtype disorders cannot be processed uniformly along the same technical path, but should be processed separately according to the characteristics of each subtype.
According to current national practice for the risk screening and diagnosis of autism spectrum disorder, questionnaire scale tools such as the PEP-3 (Psychoeducational Profile, Third Edition) are mainly adopted in China, and a guardian is generally required to observe the child for data collection and risk assessment. Moreover, questionnaire scale tools such as the PEP-3 are actuarial evaluation tools whose item design lacks any causal account of autism spectrum disorder, i.e. hierarchical inference of pathogenic causes. The questionnaire-type risk screening tools thus have limitations such as inaccurate human observation data, difficulty in directly observing young infants, and lack of hierarchical causal inference.
However, existing computer-based autism detection schemes share the limitation that they do not classify and diagnose the specific subtypes of autism. Research into an autism spectrum disorder risk detection system that can directly perform autism subtype diagnosis on tested children by computer is therefore of great significance for the early discovery and early intervention treatment of autism spectrum disorder in children.
To solve the above technical problems, the invention was formed under the guidance of theory of mind, combining empirical observation of clinical samples with computer artificial intelligence and causal-relationship analysis techniques. The invention plays different test images, directly records the reaction images while the tested person watches, and asks the tested person to make judgments and selections, thereby obtaining the autism spectrum disorder risk detection result.
During detection, the invention does not treat autism spectrum disorder as a single disorder type but further distinguishes, within the spectrum, Asperger syndrome, Williams syndrome, low-functioning autism and high-functioning autism; according to the respective judgment key points and pathogenic causes of these four subclasses, risk screening steps are set separately for risk assessment and auxiliary diagnosis.
Fig. 1 is a block diagram of a risk detection system for autism spectrum disorder according to an embodiment of the present disclosure.
It will be understood by those skilled in the art that the system blocks shown in fig. 1 do not constitute a limitation of the detection system, and in fact, the detection system may include more or less blocks than those shown, or employ different combinations of functional elements to carry out the same or similar functions of the blocks described above.
In one embodiment of the present application, as shown in fig. 1, the detection system comprises an attention dysfunction risk detection module M1, an intention subject detection dysfunction risk detection module M2, an intention subject intention reasoning dysfunction risk detection module M3, a common attention establishing dysfunction risk detection module M4, a common attention semantic reasoning dysfunction risk detection module M5, a basic emotion recognition dysfunction risk detection module M6, and a complex emotion recognition dysfunction risk detection module M7. The functional implementation of each module is specifically described below.
The M1 module is realized by the following steps:
s110: displaying the first test image.
In one embodiment of the present application, after pre-experiment tests on 15 patients already diagnosed with autism spectrum disorder and on ordinary children, and according to the pre-experiment results and observations, the first test image was selected as follows: outside a circular area of 20 cm diameter with the center of the display screen as the origin, a red spot and a yellow spot are shown, each with a luminance of 14 lm. The light spots flash around the circular area for one full circle, one flash every 5 seconds, 10 flashes in total; the flash-spot coordinate set is defined as Star{P1, ..., P10}.
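The flash timing just described (one flash every 5 seconds, 10 flashes in total for Star{P1, ..., P10}) can be sketched as a simple schedule generator; the choice of onset at t = 0 is an assumption.

```python
def flash_schedule(n_spots: int = 10, interval_s: float = 5.0) -> list:
    """Onset time in seconds of each flash point, assuming the first
    flash occurs at t = 0 and subsequent flashes follow every 5 s."""
    return [i * interval_s for i in range(n_spots)]
```

The key frames extracted from the first reaction image in S130 would be sampled at these onset times.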
S120: and acquiring a first reaction image when the tested person watches the first test image through the camera. The first reaction image mainly records head information of the tested person so as to analyze the gaze line and the gaze point of the tested person.
S130: and performing data processing on the first reaction image to determine a first fixation point set of the tested person.
In one embodiment of the present application, the data processing procedure is as follows. First, a plurality of key-frame images are extracted from the first reaction image, each key-frame image corresponding to one light-spot flash time in the first test image.
Then, for each key-frame image, the following feature points are extracted and located: head vertex P1, left frontal angle vertex P2, right frontal angle vertex P3, chin vertex P4, medial left canthus P5, medial right canthus P6, lateral left canthus P7, lateral right canthus P8, left pupil point P9, right pupil point P10. Each point location is determined in a two-dimensional image coordinate system and recorded as P(X-in, Y-in), where X is the image abscissa and Y the image ordinate.
The point locations are arranged into an input data set {P1, P2, P3, P4, P5, P6, P7, P8, P9, P10}; an output data set (X-out, Y-out), which constitutes the first fixation point set, is obtained through SVM model calculation. 200 pre-labelled samples collected in natural scenes serve as training and test data, of which 170 are training data and 30 are test data; the measured accuracy of the model is 70.2%.
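The 170/30 data split and an accuracy measure of the kind reported above can be sketched as follows. The split by slice order and the per-axis tolerance used to count a prediction as correct are assumptions; the patent states neither.

```python
def split_dataset(samples: list, n_train: int = 170):
    """Split the 200 pre-labelled samples into 170 training and
    30 test items (ordering/shuffling strategy is assumed)."""
    return samples[:n_train], samples[n_train:]

def hit_rate(predicted: list, truth: list, tol: float = 1.0) -> float:
    """Fraction of predicted fixation points within `tol` of the
    labelled point on both axes; `tol` stands in for the patent's
    unstated accuracy criterion."""
    hits = sum(1 for p, t in zip(predicted, truth)
               if abs(p[0] - t[0]) <= tol and abs(p[1] - t[1]) <= tol)
    return hits / len(truth)
```

Evaluating the trained SVM on the 30 held-out samples with such a measure would yield a figure comparable to the 70.2% reported.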
S140: determining the attention dysfunction risk detection result of the tested person according to the coincidence degree of the first fixation point set and the flash-spot coordinate set Star{P1, ..., P10} in the first test image.
In one embodiment of the present application, the attention dysfunction risk detection result is determined as follows:
If the first fixation point set is located outside the display screen, the attention dysfunction risk detection result is "suspected attention dysfunction or retest", defined as C1+; otherwise, the result is defined as C1-.
The result of the coincidence calculation between the first fixation point set and Star{P1, ..., P10} is defined as DegreeAF and compared with an adaptive threshold T(AF). If DegreeAF is below T(AF), the attention dysfunction risk detection result is "at risk", defined as C2+; otherwise, the result is "no risk", defined as C2-.
In one embodiment of the present application, to prevent the tested person from failing to complete the test normally because attention is disturbed by other factors such as distraction, if the test result is C1+, the test is performed again. Retesting may be repeated until the result is C1- and the normal test flow continues. If the result remains C1+ after multiple retests, the test can be stopped, or this step can be skipped and subsequent tests performed, with further diagnosis or supplementary testing carried out on the tested person in combination with the subsequent test results and the clinical condition.
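The C1/C2 decision rules and the retest loop described above can be sketched as follows. The cap on the number of retests is an assumption, since the text only says the test may be repeated multiple times before stopping or skipping the step.

```python
def attention_result(gaze_on_screen: bool, degree_af: float, t_af: float):
    """C1/C2 rules: C1+ if the fixation point set falls off-screen,
    C2+ if the coincidence DegreeAF is below the adaptive threshold T(AF)."""
    c1 = "C1-" if gaze_on_screen else "C1+"
    c2 = "C2+" if degree_af < t_af else "C2-"
    return c1, c2

def run_with_retest(run_once, max_retests: int = 3):
    """Repeat the M1 test while the result is C1+; after the retest
    cap is reached, return C1+ so the caller can stop or skip the step."""
    for _ in range(max_retests):
        c1, c2 = run_once()
        if c1 == "C1-":
            return c1, c2
    return "C1+", None
```

Here `run_once` stands for one full pass of S110 through S140; its signature is an assumption.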
In one embodiment of the present application, the M1 module is also used to detect whether a co-morbid disorder is present in the tested person. When the detection result is C1+, it is judged that the tested person may be at risk of visual disorder; when the detection result is C1+ or C2+, it is judged that the tested person may be at risk of reading disorder.
In clinical testing, the co-morbidity results for autism spectrum disorder have a considerable impact on the subtype diagnosis of autism and the selection of subsequent treatment. For example, when the tested person has a visual disorder, the test results may lose accuracy because of it, and further diagnosis may need to be performed by other auxiliary means. Detecting the co-morbid conditions of the tested person therefore has high clinical value for the accuracy of diagnosis and treatment of autism spectrum disorder.
The M2 module is realized by the following steps:
S210: And displaying the second test image.
In one embodiment of the present application, a pre-experiment was conducted on 15 patients already diagnosed with autism spectrum disorder and on ordinary children. The materials used in the pre-experiment were 225 kinds of animals, objects, figures, etc. The main purpose of the pre-experiment was to select the stimulus materials that best capture children's attention; the screening condition was set to a reaction time of less than 2 seconds. According to the test results and observations of the pre-experiment, the second test image content with the best verification effect is preferably selected: the motion process of the following four groups of test patterns moving toward the viewer end is displayed in turn in the center of the display screen: a red triangle and/or green triangle and/or yellow triangle; an autonomous car and/or an autonomous truck and/or an autonomous boat; a dog and/or a cat and/or a mouse; a person.
S220: and acquiring a second reaction image when the tested person watches the second test image through the camera. The second reaction image mainly records head information of the tested person so as to analyze the gaze line and the gaze point of the tested person.
S230: and performing data processing on the second reaction image to determine a second fixation point set of the tested person.
The manner of determining the second set of gaze points is the same as S130, and will not be described herein.
S240: and determining the attention dysfunction risk detection result of the tested person according to the coincidence degree of the motion tracks of the four groups of test patterns in the second fixation point set and the second test image.
In an embodiment of the present application, the result of calculating the coincidence degree of the second gaze point set with the motion trajectories of the four sets of test patterns is defined as DegreeIT, and DegreeIT is compared with an adaptive threshold T(IT). If DegreeIT is lower than T(IT), the intention-subject detection dysfunction risk detection result is at risk, defined as C3+; otherwise, the test result is no risk, defined as C3-.
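Unlike the static fixation check of the M1 module, DegreeIT compares gaze against a moving target. A hedged sketch (the frame-aligned sampling and the distance tolerance `tol` are assumptions; the patent does not specify how trajectory coincidence is computed) could be:

```python
import math

def trajectory_coincidence(gaze_track, pattern_track, tol=0.08):
    """Fraction of time-aligned gaze samples lying within `tol` of the
    moving test pattern's position at the same frame.

    gaze_track, pattern_track: equal-length lists of (x, y) positions in
    normalized screen coordinates, one sample per frame. A hypothetical
    stand-in for the patent's DegreeIT.
    """
    pairs = list(zip(gaze_track, pattern_track))
    if not pairs:
        return 0.0
    hits = sum(1 for g, p in pairs if math.dist(g, p) <= tol)
    return hits / len(pairs)
```

A tested person who smoothly pursues the approaching pattern would score near 1.0; scattered gaze yields a low DegreeIT, which is then compared against T(IT) as described above.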
The M3 module is realized by the following steps:
S310: The tested person is asked to judge whether the motion pattern displayed in the second test image "wants to actively get close to me".
In one embodiment of the present application, the tested person is asked to answer the above question for each of the four sets of test patterns in sequence, with options including "yes", "no", and "not sure". Each time, the tested person is required to make a selection after that group's test pattern motion images have been played, and the motion images of the next group of patterns are played only after the selection.
In another embodiment of the present application, after the four sets of test patterns have been played in sequence, the tested person is then asked to make a selection for the motion images of each of the four sets of test patterns.
S320: and determining the scoring condition of the tested person according to whether the selection of the tested person is correct or not.
In one embodiment of the application, the first set of test patterns is a red triangle and/or green triangle and/or yellow triangle; the second set is an autonomous car and/or an autonomous truck and/or an autonomous boat; the third set is a dog and/or a cat and/or a mouse; and the fourth set is a person. For the first and second groups, selecting "yes" or "not sure" counts as answering incorrectly; for the third and fourth groups, selecting "no" or "not sure" counts as answering incorrectly. A correctly answered question scores 1 point, and an incorrectly answered question scores 0 points. The scores of the four questions are collectively defined as C4-1; the full score of C4-1 is 4 points, and the lowest score is 0 points.
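The C4-1 scoring rule above can be sketched in a few lines (the group labels and the `CORRECT` answer-key dict are illustrative names, not from the patent):

```python
# Hypothetical answer key: groups 1-2 (triangles, vehicles) are not
# intentional agents, so the correct answer is "no"; groups 3-4
# (animals, person) are, so the correct answer is "yes".
CORRECT = {"triangles": "no", "vehicles": "no", "animals": "yes", "person": "yes"}

def score_c4_1(answers):
    """answers: dict mapping group name -> 'yes' | 'no' | 'not sure'.

    1 point per correct answer; 'not sure' never matches the key and so
    always scores 0, matching the rule in the text. Maximum score is 4.
    """
    return sum(1 for grp, ans in answers.items() if CORRECT.get(grp) == ans)
```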
The M4 module is realized by the following steps:
S410: And displaying the fourth test image. The fourth test image comprises a character image, and the head and the eyes of the character image move and/or swing to face a plurality of azimuth points in sequence.
In one embodiment of the present application, the character image is oriented in sequence toward three orientation points, which are defined as Loca1, Loca2, and Loca3, respectively.
In an embodiment of the application, in the fourth test image, when the character image sequentially faces three positions Loca1, Loca2 and Loca3, different objects are displayed on the positions where the character image faces respectively, so as to perform the detection step of the M5 module. For example, the presentation object is: loca1 is a cat, Loca2 is a book, and Loca3 is a stone.
S420: and acquiring a fourth reaction image when the tested person watches the fourth test image through the camera. The fourth reaction image mainly records head information of the tested person so as to analyze the gaze line and the gaze point of the tested person.
S430: and performing data processing on the fourth reaction image to determine a fourth fixation point set of the tested person.
The manner of determining the fourth gaze point set is the same as S130, and will not be described herein.
S440: And determining the coincidence degree of the fourth gaze point set with Loca1, Loca2 and Loca3, defined as DegreeCA.
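Because the character looks toward Loca1, Loca2 and Loca3 in sequence, one plausible way to compute DegreeCA (an assumption; the patent does not give the formula) is to average a per-phase hit rate over the three orientation windows:

```python
import math

def degree_ca(gaze_windows, locations, radius=0.1):
    """gaze_windows: list of gaze-point lists, one per orientation phase
    (Loca1, Loca2, Loca3). locations: the (x, y) position of each Loca.

    Returns the mean fraction of gaze samples within `radius` of the
    corresponding location across phases; a hypothetical stand-in for
    DegreeCA.
    """
    rates = []
    for window, loc in zip(gaze_windows, locations):
        if not window:
            rates.append(0.0)
            continue
        hits = sum(1 for g in window if math.dist(g, loc) <= radius)
        rates.append(hits / len(window))
    return sum(rates) / len(rates) if rates else 0.0
```

A child who follows the character's gaze to each location in turn scores near 1.0; failing to follow any shift lowers DegreeCA.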
The M5 module is realized by the following steps:
S510: The tested person is asked to judge the object names at Loca1 to Loca3. For example, the judgment task may be performed using a yes/no judgment question for each object.
S520: and determining the scoring condition of the tested person according to the judgment option selected by the tested person.
In one embodiment of the present application, questions that are answered correctly are scored as 1, and questions that are answered incorrectly are scored as 0. The scoring conditions of the three judgment problems are defined as C4-2, the full score of C4-2 is 3 points, and the lowest score is 0 point.
In one embodiment of the application, the total score result C4 is obtained by combining the C4-1 and C4-2 scores. If the combined score of C4-1 and C4-2 reaches the full 7 points, the total score result is C4-; otherwise, the total score result is C4+.
The M6 module is realized by the following steps:
in one embodiment of the present application, a set of "chinese adult basic emotion video library" is previously created and stored in the system before the M6 module is started. The establishment mode in the video library is as follows:
15 males and 15 females were photographed separately, with the actors required to perform the emotional experience of the following 6 scenarios: finger injury, frustration, abuse, reward, natural emotional expression in a scenic natural environment, and preparing to engage in athletic activities with others. In the first step, each actor was required to perform 10 emotional expression videos, forming 300 videos in total. In the second step, 50 scorers were recruited, 25 men and 25 women, all adults, to score the 300 videos using structured multiple-choice questions. In the third step, the scorers were asked to openly describe the video content, and the descriptions were recorded. In the fourth step, the scoring and evaluation data were processed, and materials with inter-scorer consistency higher than 90% were selected. In the fifth step, 10 children aged 3 (5 males and 5 females) were selected to evaluate the chosen materials, and the materials with inter-scorer consistency higher than 90% were retained, yielding the Chinese adult basic emotion video library for testing by the M6 module. The videos include not only facial information but also whole-body information such as limb movement and posture.
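The two-stage consistency screening described above can be sketched as follows (the data shapes and the modal-label definition of "consistency" are assumptions for illustration):

```python
def agreement(ratings):
    """Proportion of raters giving the modal label for one video."""
    counts = {}
    for r in ratings:
        counts[r] = counts.get(r, 0) + 1
    return max(counts.values()) / len(ratings)

def filter_library(videos, adult_ratings, child_ratings, cutoff=0.9):
    """Two-stage selection: keep videos whose adult inter-rater agreement
    exceeds `cutoff`, then keep those that also pass the child evaluation.

    adult_ratings / child_ratings: dicts mapping video id -> list of labels.
    """
    stage1 = [v for v in videos if agreement(adult_ratings[v]) > cutoff]
    return [v for v in stage1 if agreement(child_ratings[v]) > cutoff]
```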
The specific implementation steps of the M6 module are as follows:
S610: And displaying the third test image. The third test image is one or more sections of video in the Chinese adult basic emotion video library.
S620: and acquiring a third reaction image when the tested person watches the third test image through the camera. Wherein, the third reaction image is required to record the facial and limb movements of the tested person.
S630: and carrying out data processing on the third reaction image to obtain the facial data characteristics and the limb data characteristics of the tested person.
In one embodiment of the present application, the data processing procedure is as follows:
Label 3 regions of the tested person's face, marked Re1: forehead-eyebrow part, Re2: cheek-nasal wing part, Re3: chin-mouth part. Record the spatial feature faceSpaceB of the marked regions before the third test image is displayed, and record the change faceSpaceD of the spatial features of the marked regions during video playback while the third test image is played.
Mark the positions of the two hands, two elbows and two knees of the tested person's limbs, record the spatial feature BodySpaceB of the marked regions before the third test image is displayed, and record the change BodySpaceD of the marked regions during video playback while the third test image is played.
S640: and determining the similarity of the reaction of the tested person when watching the third test image and the watched basic emotion video based on the basic emotion likelihood judgment model SimiBEM.
In one embodiment of the present application, SimiBEM is a combined model of an image preprocessing program and a convolutional neural network model, trained using the "Chinese adult basic emotion video library" as training data. The image preprocessing program extracts the combined region of Re1 and Re2 in the training data and extracts the global topological spatial geometric features of Re1 and Re2, defined as GST, which is a set of three-dimensional coordinates. After extracting the GST of the training data set, the image preprocessing program feeds a group of pre-trained convolutional neural network models comprising three channels: channel 1 processes the image data of the first region, Re1, with a convolution kernel of 16 × 42; channel 2 processes the image data of the second region, Re2, with a convolution kernel of 18 × 40; channel 3 processes the GST data, taking the three-dimensional GST array as input. The three channels form the input data, and the output is the degree of similarity, expressed as a percentage, between the facial and limb activities of the tested person and the emotion type in the displayed video.
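To make the three-channel data flow concrete, here is a deliberately tiny stand-in: each channel is reduced to a scalar, the scalars are combined with fixed weights, and the result is squashed to a 0-100% similarity. The real SimiBEM uses trained convolutional channels (16 × 42 and 18 × 40 kernels plus the GST channel); everything below, including the weights, is a toy assumption that only illustrates the fusion structure:

```python
import math

def fuse_channels(re1_feat, re2_feat, gst_feat, weights=(0.4, 0.4, 0.2)):
    """Toy three-channel fusion: mean-pool each feature list, take a
    weighted sum, and map it through a logistic function to a percent
    similarity. Not the patent's model; a sketch of the data flow only.
    """
    chans = [sum(f) / len(f) for f in (re1_feat, re2_feat, gst_feat)]
    z = sum(w * c for w, c in zip(weights, chans))
    return 100.0 / (1.0 + math.exp(-z))  # similarity, in percent
```

With all-zero features the logistic midpoint gives 50%, and larger positive channel activations push the similarity toward 100%.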
S650: and determining the basic emotion recognition dysfunction risk of the tested person according to the similarity degree of the facial and limb activities of the tested person and the emotion types in the displayed video.
In one embodiment of the present application, the basic emotion recognition dysfunction score is defined as DegreeBER, where DegreeBER = 1 − similarity. A higher DegreeBER value indicates a greater risk of basic emotion recognition dysfunction. DegreeBER is compared with an adaptive threshold T(BER); if DegreeBER is higher than T(BER), the basic emotion recognition dysfunction detection result is at risk, defined as C5+; otherwise, the test result is no risk, defined as C5-.
The specific implementation steps of the M7 module are as follows:
S710: Displaying the fifth test image. The fifth test image is one or more sections of video in a "Chinese adult complex emotion video library".
S720: and acquiring a fifth reaction image when the tested person watches the fifth test image through the camera. Wherein, the fifth reaction image is required to record the facial and limb movements of the tested person.
S730: and carrying out data processing on the fifth reaction image to obtain the facial data characteristics and the limb data characteristics of the tested person.
The data processing procedure of S730 may refer to S630, which is not described herein.
S740: And determining, based on the complex emotion likelihood judgment model SimiCEM, the similarity between the reaction of the tested person watching the fifth test image and the watched complex emotion video.
In one embodiment of the present application, SimiCEM is a combined model of an image preprocessing program and a convolutional neural network model, trained using the "Chinese adult complex emotion video library" as training data. The image preprocessing program extracts the combined region of Re1 and Re2 in the training data and extracts the global topological spatial geometric features of Re1 and Re2, defined as GST, which is a set of three-dimensional coordinates. After extracting the GST of the training data set, the image preprocessing program feeds a group of pre-trained convolutional neural network models comprising three channels: channel 1 processes the image data of the first region, Re1, with a convolution kernel of 16 × 42; channel 2 processes the image data of the second region, Re2, with a convolution kernel of 18 × 40; channel 3 processes the GST data, taking the three-dimensional GST array as input. The three channels form the input data, and the output is the degree of similarity, expressed as a percentage, between the facial and limb activities of the tested person and the emotion type in the displayed video.
S750: and determining the risk of the complex emotion recognition dysfunction of the tested person according to the similarity degree of the activities of the face and the limbs of the tested person and the emotion types in the displayed video.
In one embodiment of the present application, the complex emotion recognition dysfunction score is defined as DegreeCER, where DegreeCER = 1 − similarity. A higher DegreeCER value indicates a greater risk of complex emotion recognition dysfunction. DegreeCER is compared with an adaptive threshold T(CER); if DegreeCER is higher than T(CER), the complex emotion recognition dysfunction detection result is at risk, defined as C6+; otherwise, the test result is no risk, defined as C6-.
In an embodiment of the present application, the detection system further includes a risk judgment module, configured to determine, according to the detection results, whether the tested person is at risk of autism spectrum disorder and which autism spectrum disorder risk exists. The embodiments of the present application classify autism spectrum disorders into four categories: Asperger syndrome, high functional autism, low functional autism and Williams syndrome. When the detection result is C1+, C2+, C3+, C4+ or C5+ and C6+, the risk judgment module judges that the tested person is at risk of Asperger syndrome; when the detection result is C1+, C2+, C3+, C4+ and C5+ and C6+, the risk judgment module judges that the tested person is at risk of high functional autism; when the detection result is C1+ or C2+ or C3+, the risk judgment module judges that the tested person is at risk of low functional autism; and when the detection result is C1+ or C2+ or C3+ and C4+, the risk judgment module judges that the tested person is at risk of Williams syndrome.
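The grouping of "and"/"or" in the rules above is ambiguous as written, so any implementation must pick a parenthesization. The sketch below is one hypothetical reading (the branch ordering and the exact grouping are assumptions, not the patent's definitive logic):

```python
def judge_subtype(results):
    """results: dict like {'C1': '+', 'C2': '-', ...} of module outcomes.

    Returns a subtype-risk label under one assumed parenthesization:
    'social' = any of C1/C2/C3 positive; 'emotion' = both C5 and C6 positive.
    """
    pos = {k for k, v in results.items() if v == "+"}
    social = bool({"C1", "C2", "C3"} & pos)
    emotion = {"C5", "C6"} <= pos
    if social and "C4" not in pos and emotion:
        return "Asperger syndrome risk"
    if {"C1", "C2", "C3", "C4"} <= pos and emotion:
        return "high functional autism risk"
    if social and "C4" in pos and not emotion:
        return "Williams syndrome risk"
    if social:
        return "low functional autism risk"
    return "no subtype risk identified"
```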
In an embodiment of the present application, a sequence relationship exists between the detection modules, and the specific sequence relationship is: the M1, M2, M3, M4 and M5 modules are in sequential relation, and the latter detection module is not started until the former module is completed. The M6 and M7 modules are in sequential order, and the M7 module is not started until the M6 module is completed.
In one embodiment of the present application, the modules actually to be activated are determined based on the age of the tested person. For example, when the tested person is aged 0 to 3 years, cognitive function, literacy, and logical judgment ability are not yet mature, so the M3, M4, M5 and M7 modules are not activated, and only the M1, M2 and M6 modules are activated. When the tested person is older than 3 years, all of the M1-M7 detection modules may be activated.
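The age gating reduces to a small lookup (the function name and the treatment of the age-3 boundary are assumptions; the text does not say which group an exactly-3-year-old falls into):

```python
def modules_for_age(age_years):
    """Hypothetical module gating per the described rule: under 3 years,
    only M1/M2/M6 run; otherwise all of M1-M7 may be activated."""
    if age_years < 3:
        return ["M1", "M2", "M6"]
    return ["M1", "M2", "M3", "M4", "M5", "M6", "M7"]
```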
In yet another embodiment of the present application, different test databases are enabled and different adaptive thresholds are used depending on the gender of the tested person. For example, when a male child is tested, the M6 and M7 modules start the male-specific database; otherwise the female-specific database is started. Meanwhile, when the adaptive thresholds of C2, C3, C5 and C6 are applied, different adaptive thresholds are adopted according to the corresponding gender. According to previous research results, the male threshold is higher than the female threshold, preferably 2.93 times the female threshold.
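The gender-dependent threshold adjustment can be expressed directly (the dict-based API and the idea of storing female thresholds as the base are assumptions; the 2.93 factor is from the text):

```python
MALE_FACTOR = 2.93  # per the described finding: male threshold = 2.93 x female

def adaptive_thresholds(gender, female_base):
    """female_base: dict like {'C2': t, 'C3': t, 'C5': t, 'C6': t} of
    female thresholds. Returns the thresholds to apply for `gender`
    (hypothetical interface)."""
    if gender == "male":
        return {k: v * MALE_FACTOR for k, v in female_base.items()}
    return dict(female_base)
```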
Based on the same inventive concept, the embodiment of the application also provides a detection terminal and a cloud server, which are used for forming a specific hardware structure of the detection system.
Fig. 2 is a block diagram of a risk detection system for autism spectrum disorder according to an embodiment of the present disclosure.
In one embodiment of the present application, the system employs a distributed deployment approach. As shown in fig. 2, the detection system provided in the embodiment of the present application includes a detection terminal and a cloud server, which cooperate with each other to implement the functions of each module of the system. The detection terminal is mainly used for displaying test images, collecting the reaction images of the tested person, performing edge calculation, and the like; the cloud server is mainly used for judging the starting conditions of the modules, starting the appropriate detection modules for testing, and judging the autism spectrum disorder risk of the tested person according to the edge calculation results of the detection terminal.
In one embodiment of the application, the detection terminal comprises a display, an interaction unit, a camera, a processor and a transmitter.
The display is used for displaying a test image, wherein the test image comprises any one or more of a first test image, a second test image, a third test image, a fourth test image and a fifth test image; and for displaying test problems. The image range displayed by the display is related to the detection module which is actually started by the detection system.
And the interaction unit is used for the tested person to select answers to the test questions. In an embodiment of the application, the interaction unit may be a touch screen, that is, the interaction unit and the display are integrated into one piece of hardware; the tested person directly taps the screen to interact with the hardware and inputs the selected option to the detection terminal. The interaction unit can also be any other means capable of realizing the interactive function, such as mouse clicks or voice input.
And the camera is used for acquiring a reaction image when the tested person watches the test image. The reaction image may be any one or more of the first reaction image, the second reaction image, the third reaction image, the fourth reaction image, and the fifth reaction image. The image range collected by the camera is related to the detection module actually started by the detection system.
A processor to perform an edge calculation. In one embodiment of the present application, when only the M1, M2, M6 modules are enabled, the edge calculation includes: determining a first fixation point set of the tested person according to the first reaction image, and calculating the contact ratio of the first fixation point set and the first test image; determining a second fixation point set of the tested person according to the second reaction image, and calculating the contact ratio of the second fixation point set and the image motion track in the second test image; and determining the facial data characteristics and the limb data characteristics of the tested person when the tested person watches the third test image according to the third reaction image.
In another embodiment of the present application, when all M1-M7 modules are launched, the edge calculation further comprises: calculating a C4-1 score according to the selected options of the tested person in the M3 module test; determining a fourth fixation point set of the tested person according to the fourth reaction image, and calculating the coincidence degree of the fourth fixation point set of the tested person and a plurality of orientation points of the character orientation in the fourth test image; calculating a C4-2 score according to judgment selection of a tested person in the M5 module test; and determining the facial data characteristics and the limb data characteristics of the tested person when the tested person watches the fifth test image according to the fifth reaction image.
The specific method of edge calculation refers to the description part of the present specification about the detection system, and is not described herein again.
A transmitter for transmitting an edge calculation result to a cloud server, the edge calculation result being used for determining whether the person under test is suffering from autism spectrum disorder and a specific disorder type.
In an embodiment of the present application, the detection terminal may take the form of a home computer, a home appliance, a home mobile terminal, or the like. This solves the problem in the traditional technology that external equipment such as electroencephalographs and eye trackers makes convenient, at-home testing difficult to realize, and breaks the traditional technology's limitation of risk assessment and auxiliary diagnosis to particular application scenarios.
In one embodiment of the application, a cloud server comprises a receiver and a processor.
The receiver is configured to receive data sent by a terminal, where the data includes an edge calculation result of the terminal.
And the processor is used for determining the risk detection result of each detection module and determining the risk of the autism spectrum disorder and the specific disorder type of the tested person.
The method for determining the detection result may refer to the description part of the detection system in this specification, and is not described herein again.
In an embodiment of the present application, the interaction unit of the detection terminal is further configured to set information such as age and gender of the person to be tested, and the interaction operation may be performed by the person to be tested, a guardian or a tester. After the information setting such as the age, the sex and the like of the tested person is completed, the detection terminal sends the information to the cloud server through the transmitter. After receiving the information, the cloud server selects a proper detection module to start detection according to the age information of the tested person, selects a corresponding test image database according to the gender information of the tested person, and starts a self-adaptive threshold value corresponding to the gender to judge the related obstacle risk.
How the cloud server performs the above adjustment according to the age and gender information of the person to be tested can refer to the description part of the detection system in this specification, and is not described herein again.
Those skilled in the art can understand that the system deployment manner shown in fig. 2 does not constitute a limitation on the detection terminal and the cloud server, and actually, the system deployment manner may perform different distribution arrangements on the functions of each module.
The autism spectrum disorder risk detection system, the detection terminal and the cloud server provided by the embodiment of the application at least have the following beneficial effects:
(1) The embodiment of the application can conveniently realize subtype detection of four subtypes of autism spectrum disorder: Asperger syndrome, Williams syndrome, low functional autism, high functional autism.
(2) Aiming at the limitation that the traditional technology does not distinguish the ages and genders of different tested persons, the embodiment of the application divides tested persons by age and gender, adaptively adjusts the thresholds for each age group according to clinical observation data, and configures different risk assessment and auxiliary diagnosis steps for different ages and genders;
(3) Aiming at the limitation that the traditional technology does not distinguish co-morbidity factors and is difficult to distinguish autism spectrum disorder from reading disorder, auditory disorder, visual disorder and the like, the embodiment of the application provides a cause inference system for distinguishing the pathogenic factors of different disorders;
(4) Aiming at the limitation that the traditional technology relies on external equipment such as electroencephalographs and eye trackers for data acquisition and analysis, making convenience and home use difficult to achieve, the embodiment of the application provides a technical implementation scheme and matched devices and systems based on the connection of a household mobile terminal, a household computer and a cloud server, solving the limitation that the traditional technology is difficult to use for risk assessment and auxiliary diagnosis in a home setting.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus and device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to the partial description of the method embodiments for relevant points.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (9)
1. An autism spectrum disorder risk detection system, comprising at least one risk assessment module and a plurality of risk detection modules;
the plurality of risk detection modules comprises:
the attention dysfunction risk detection module is used for acquiring a first reaction image when a tested person watches a first test image and determining a first fixation point set of the tested person according to the first reaction image; and for determining an attention dysfunction risk detection result of the tested person according to the coincidence degree of the first fixation point set and the first test image;
the intention main body detection functional disorder risk detection module is used for collecting a second reaction image when the tested person watches the second test image and determining a second fixation point set of the tested person according to the second reaction image; and determining the detection result of the intention type subject detection dysfunction risk of the tested person according to the coincidence degree of the second gaze point set and the image motion track in the second test image;
the basic emotion recognition dysfunction risk detection module is used for acquiring a third reaction image when the tested person watches a third test image, and determining facial data features and limb data features of the tested person when watching the third test image according to the third reaction image; and for determining a basic emotion recognition dysfunction risk detection result of the tested person according to the facial data features and the limb data features, based on an image preprocessing program and a convolutional neural network model;
the system also comprises a common attention establishing functional disorder risk detection module and a common attention semantic reasoning functional disorder risk detection module;
the common attention establishing functional disorder risk detection module is used for displaying a fourth test image, wherein the fourth test image comprises a character image that faces a plurality of azimuth points in sequence; for acquiring a fourth reaction image when the tested person watches the fourth test image; for determining a fourth fixation point set of the tested person according to the fourth reaction image; and for determining a common attention establishing dysfunction risk detection result of the tested person according to the coincidence degree of the fourth fixation point set and the plurality of azimuth points;
the common attention semantic reasoning functional disorder risk detection module is used for respectively displaying different object-distinguishing test patterns and object names at the plurality of azimuth points that the character image sequentially faces in the fourth test image; for asking the tested person to judge whether each object-distinguishing test pattern corresponds to its object name; and for determining a common attention semantic reasoning dysfunction risk detection result of the tested person according to the accuracy rate of the tested person's judgments;
the risk judging module is used for determining whether the tested person suffers from autism spectrum disorder and specific disorder types according to the risk detection results of the risk detecting modules.
2. The autism spectrum disorder risk detection system of claim 1, wherein the first test image comprises a flickering light-spot image;
the determining the first fixation point set of the tested person according to the first reaction image comprises:
determining a plurality of feature points of a head image of the tested person, and determining the first fixation point set of the tested person according to position features of the feature points;
the plurality of feature points comprise: the head vertex, left frontal angle vertex, right frontal angle vertex, chin vertex, medial left canthus, medial right canthus, lateral left canthus, lateral right canthus, left pupil point, and right pupil point.
3. The autism spectrum disorder risk detection system of claim 1, wherein the risk judgment module is further configured to determine whether the tested person is at risk of a comorbidity according to the attention dysfunction risk detection result; the comorbidities comprise visual disorder and reading disorder;
determining whether the tested person has a comorbidity according to the attention dysfunction risk detection result specifically comprises:
determining whether the tested person is at risk of visual disorder according to whether the first fixation point set falls within the video display area of the first test image; and
determining whether the tested person is at risk of reading disorder according to whether the first fixation point set falls within the video display area of the first test image and the coincidence degree of the first fixation point set with the first test image.
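As a rough sketch of how these two comorbidity checks could be realized, assuming fixation points are screen coordinates and both the display area and the stimulus are axis-aligned rectangles. The threshold values and the rule that reading-disorder risk is only assessed when vision appears intact are illustrative assumptions; the claim itself does not fix them.

```python
# Hypothetical illustration of the claim-3 comorbidity checks.
def fraction_in_rect(points, rect):
    """rect = (x0, y0, x1, y1); returns the fraction of points inside it."""
    x0, y0, x1, y1 = rect
    inside = sum(1 for (x, y) in points if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / len(points) if points else 0.0

def comorbidity_flags(fixations, display_rect, stimulus_rect,
                      visual_thresh=0.5, reading_thresh=0.3):
    on_display = fraction_in_rect(fixations, display_rect)   # gaze on screen?
    coincidence = fraction_in_rect(fixations, stimulus_rect)  # gaze on stimulus?
    visual_risk = on_display < visual_thresh
    # Only interpret low stimulus coincidence as reading-related
    # when the gaze does reach the display area at all.
    reading_risk = (not visual_risk) and coincidence < reading_thresh
    return {"visual_disorder_risk": visual_risk,
            "reading_disorder_risk": reading_risk}
```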
4. The autism spectrum disorder risk detection system of claim 1, wherein the second test image comprises four sets of test patterns moving toward the tested person;
the four sets of test patterns are respectively a triangle, an autonomous vehicle, an animal, and a human;
the system further comprises an intentional subject intention reasoning dysfunction risk detection module, which is used for asking the tested person, while watching the second test image, whether each of the four sets of test patterns intends to actively approach the tested person, and for determining an intentional subject intention reasoning dysfunction risk detection result of the tested person according to the tested person's answers.
5. The autism spectrum disorder risk detection system of claim 1, further comprising:
a complex emotion recognition dysfunction risk detection module, which is used for acquiring a fifth reaction image while the tested person watches a fifth test image, determining facial data features and limb data features of the tested person when watching the fifth test image according to the fifth reaction image, and determining a complex emotion recognition dysfunction risk detection result of the tested person based on an image preprocessing program and a convolutional neural network model according to the facial data features and the limb data features.
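The claim names only the two stages of this pipeline: an image preprocessing program and a convolutional neural network model. The following numpy sketch shows that shape in miniature: normalization, one convolution with ReLU, and a dense softmax head. The architecture, the (random, untrained) weights, and the emotion label set are all illustrative assumptions; the patented model is not disclosed at this level of detail.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # hypothetical label set

def preprocess(img):
    """Image preprocessing: scale pixel values to [0, 1] and zero-center."""
    img = img.astype(float) / 255.0
    return img - img.mean()

def conv2d(x, kernel):
    """Valid 2-D convolution followed by ReLU."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def predict_emotion(img, rng=np.random.default_rng(0)):
    x = preprocess(img)
    feat = conv2d(x, rng.standard_normal((3, 3)))          # conv + ReLU
    pooled = feat.reshape(-1)[:16]                          # crude pooling
    w = rng.standard_normal((len(EMOTIONS), pooled.size))   # dense layer
    logits = w @ pooled
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                    # softmax
    return EMOTIONS[int(np.argmax(probs))], probs
```

A production system would of course use a trained network; this sketch only makes the claimed two-stage structure concrete.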
6. The autism spectrum disorder risk detection system of claim 5, wherein the system selects which modules to activate according to the age of the tested person;
when the tested person is aged 0-3 years, the attention dysfunction risk detection module, the intentional subject detection dysfunction risk detection module, and the basic emotion recognition dysfunction risk detection module are activated; the attention dysfunction risk detection module and the intentional subject detection dysfunction risk detection module are activated in sequence;
when the tested person is older than 3 years, the attention dysfunction risk detection module, the intentional subject detection dysfunction risk detection module, the intentional subject intention reasoning dysfunction risk detection module, the joint attention establishment dysfunction risk detection module, the joint attention semantic reasoning dysfunction risk detection module, the basic emotion recognition dysfunction risk detection module, and the complex emotion recognition dysfunction risk detection module are activated; the first five of these modules are activated in sequence, and the basic emotion recognition dysfunction risk detection module and the complex emotion recognition dysfunction risk detection module are activated in sequence.
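The age-based selection above reduces to a simple branch. A minimal sketch, using shorthand identifiers for the modules named in the claims (the identifiers themselves are illustrative):

```python
# Modules for tested persons aged 0-3 years (claim 6).
BASE_MODULES = ["attention",
                "intentional_subject_detection",
                "basic_emotion_recognition"]

# Full module set for tested persons older than 3 years, in activation order.
EXTENDED_MODULES = ["attention",
                    "intentional_subject_detection",
                    "intentional_subject_intention_reasoning",
                    "joint_attention_establishment",
                    "joint_attention_semantic_reasoning",
                    "basic_emotion_recognition",
                    "complex_emotion_recognition"]

def select_modules(age_years):
    """Return the ordered list of risk detection modules to activate."""
    if age_years < 0:
        raise ValueError("age must be non-negative")
    return BASE_MODULES if age_years <= 3 else EXTENDED_MODULES
```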
7. The autism spectrum disorder risk detection system of claim 5, wherein the system selects different test image databases according to the gender of the tested person;
the test image database comprises the fourth test image and the fifth test image;
and the system sets different risk judgment thresholds according to the gender of the tested person.
8. An autism spectrum disorder risk detection terminal, characterized by comprising a display, an interaction unit, a camera, a processor, and a transmitter;
the display is used for displaying test images comprising a first test image, a second test image, a third test image, and a fourth test image; the fourth test image comprises a figure image that sequentially faces a plurality of azimuth points, and different object discrimination test patterns and object names are respectively displayed at the plurality of azimuth points toward which the figure image sequentially faces;
the display is further used for displaying a test question, which requires the tested person to judge whether each object discrimination test pattern corresponds to its object name;
the interaction unit is used for the tested person to select an answer to the test question;
the camera is used for acquiring reaction images while the tested person watches the test images, comprising a first reaction image while watching the first test image, a second reaction image while watching the second test image, a third reaction image while watching the third test image, and a fourth reaction image while watching the fourth test image;
the processor is configured to perform edge computation, the edge computation comprising:
determining a first fixation point set of the tested person according to the first reaction image, and calculating the coincidence degree of the first fixation point set with the first test image;
determining a second fixation point set of the tested person according to the second reaction image, and calculating the coincidence degree of the second fixation point set with the image motion trajectory in the second test image;
determining facial data features and limb data features of the tested person when watching the third test image according to the third reaction image;
determining a fourth fixation point set of the tested person according to the fourth reaction image, and calculating the coincidence degree of the fourth fixation point set with the plurality of azimuth points;
calculating the judgment accuracy of the tested person according to the tested person's judgments of whether each object discrimination test pattern corresponds to its object name;
the transmitter is used for transmitting the edge computation results to a cloud server, so that the cloud server can determine whether the tested person suffers from autism spectrum disorder and the specific disorder type.
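The claim does not define how the "coincidence degree" is computed on the terminal, so the following is one plausible edge-side sketch: the coincidence degree of a fixation point set with a motion trajectory (or a set of azimuth points) is taken as the fraction of fixation points lying within a distance tolerance of it. Both the tolerance value and this particular definition are assumptions for illustration.

```python
import math

# Hypothetical edge-computation step from claim 8: coincidence degree of a
# fixation point set with a trajectory given as a polyline's sample points.
def coincidence_degree(fixations, trajectory, tolerance=30.0):
    if not fixations:
        return 0.0

    def dist_to_trajectory(p):
        # Distance to the nearest sampled trajectory point.
        return min(math.hypot(p[0] - q[0], p[1] - q[1]) for q in trajectory)

    hits = sum(1 for p in fixations if dist_to_trajectory(p) <= tolerance)
    return hits / len(fixations)
```

The same function applies unchanged to the fourth fixation point set, with the azimuth points passed as `trajectory`; the resulting scalar is what the transmitter would send to the cloud server.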
9. An autism spectrum disorder risk detection cloud server, characterized by comprising a receiver and a processor;
the receiver is used for receiving the edge computation results sent by the autism spectrum disorder risk detection terminal of claim 8;
the processor is configured to execute the following instructions:
determining an attention dysfunction risk detection result of the tested person according to the coincidence degree of the first fixation point set with the first test image;
determining an intentional subject detection dysfunction risk detection result of the tested person according to the coincidence degree of the second fixation point set with the image motion trajectory in the second test image;
determining a basic emotion recognition dysfunction risk detection result of the tested person based on an image preprocessing program and a convolutional neural network model according to the facial data features and limb data features of the tested person when watching the third test image;
determining a joint attention establishment dysfunction risk detection result of the tested person according to the coincidence degree of the fourth fixation point set with the plurality of azimuth points;
determining a joint attention semantic reasoning dysfunction risk detection result of the tested person according to the judgment accuracy of the tested person;
and determining whether the tested person suffers from autism spectrum disorder, and the specific disorder type, according to the attention dysfunction risk detection result, the intentional subject detection dysfunction risk detection result, the basic emotion recognition dysfunction risk detection result, the joint attention establishment dysfunction risk detection result, and the joint attention semantic reasoning dysfunction risk detection result.
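The cloud-side final judgment can be pictured as simple per-module thresholding followed by aggregation: each module reports a risk score, scores above a per-module threshold flag that specific dysfunction, and any flag yields an overall autism-spectrum-disorder risk result with the flagged dysfunctions as the "specific disorder type". The threshold values and this aggregation rule are illustrative assumptions; the claim only states that the five results are combined.

```python
# Hypothetical per-module thresholds; scores are assumed to lie in [0, 1].
DEFAULT_THRESHOLDS = {
    "attention": 0.6,
    "intentional_subject_detection": 0.6,
    "basic_emotion_recognition": 0.5,
    "joint_attention_establishment": 0.6,
    "joint_attention_semantic_reasoning": 0.5,
}

def judge_risk(scores, thresholds=DEFAULT_THRESHOLDS):
    """Aggregate per-module risk scores into an overall judgment."""
    flagged = [name for name, s in scores.items()
               if s >= thresholds.get(name, 0.5)]
    return {"asd_risk": bool(flagged), "disorder_types": flagged}
```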
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111303594.XA CN114129164B (en) | 2021-11-05 | 2021-11-05 | Autism spectrum disorder risk detection system, detection terminal and cloud server |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114129164A CN114129164A (en) | 2022-03-04 |
CN114129164B true CN114129164B (en) | 2022-09-16 |
Family
ID=80392337
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111303594.XA Active CN114129164B (en) | 2021-11-05 | 2021-11-05 | Autism spectrum disorder risk detection system, detection terminal and cloud server |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114129164B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109820524A (en) * | 2019-03-22 | 2019-05-31 | University of Electronic Science and Technology of China | FPGA-based wearable system for autism eye-movement feature acquisition and classification
CN110313923A (en) * | 2019-07-05 | 2019-10-11 | Duke Kunshan University | Autism early screening system based on joint attention ability testing and audio-video behavior analysis
CN111081374A (en) * | 2019-12-16 | 2020-04-28 | South China Normal University | Autism auxiliary diagnosis device based on the joint attention paradigm
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2771863B1 (en) * | 2011-10-24 | 2020-07-08 | President and Fellows of Harvard College | Enhancing diagnosis of autism through artificial intelligence and mobile health technologies without compromising accuracy |
WO2016040673A2 (en) * | 2014-09-10 | 2016-03-17 | Oregon Health & Science University | Animation-based autism spectrum disorder assessment |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||