CN113688710A - Child autism training system and method thereof - Google Patents

Child autism training system and method thereof

Info

Publication number
CN113688710A
CN113688710A
Authority
CN
China
Prior art keywords
training
asd
image
human body
face
Prior art date
Legal status
Pending
Application number
CN202110943379.XA
Other languages
Chinese (zh)
Inventor
海洋
Current Assignee
Harbin Engineering University
Harbin Medical University
Original Assignee
Harbin Medical University
Priority date
Filing date
Publication date
Application filed by Harbin Medical University
Priority to CN202110943379.XA
Publication of CN113688710A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention discloses a childhood autism training system and method. The system comprises: an image acquisition module for acquiring a field training image; a face recognition module for performing face recognition on the field training image to obtain an ASD face image; a human body contour recognition module for performing human body contour recognition on the field training image according to the ASD face image to obtain ASD human body contour information; a human-computer interaction module for receiving a trainer's selection operation to provide a training task; and a rendering module for fusing and rendering the scene of the training task with the ASD human body contour information to generate a three-dimensional animation scene. The human-computer interaction module is also used for displaying the three-dimensional animation scene so that the trainer can complete training according to it. The invention achieves training by extracting and recognizing the ASD face and ASD body contour based on the particular characteristics of ASD children's faces, and takes the trainer's individual differences into account, thereby improving the training effect.

Description

Child autism training system and method thereof
Technical Field
The invention relates to the technical field of medical training, in particular to a training system and method for autism of children.
Background
ASD (Autism Spectrum Disorder), also known as autism, is a group of neurodevelopmental disorders characterized primarily by social impairment, language and communication difficulties, a narrow range of interests or activities, and repetitive stereotyped behavior. Autism has a great impact on patients' mental and behavioral health and seriously affects their lives. Without early detection and scientific intervention, the disability rate caused by autism is high.
Currently, the diagnosis of autism worldwide mainly relies on standardized manuals and scales, with behavioral analysis performed on children. Through long-term research and accumulation, Chinese experts have designed and developed the first standardized scale for Chinese autistic children: the Chinese Autism Diagnostic Scale (CADS). The general principles of CADS testing require testers to have working experience in standardized psychological measurement and professional knowledge of developmental-behavioral pediatrics, child developmental psychology, psychology, and related fields.
To popularize these professional standardized tests and reduce their labor cost, many scholars have devoted themselves to research on computer-aided autism screening and intervention. Among these efforts, some attempt to diagnose or intervene in children by presenting standardized test scenarios through display technology.
However, existing autism training systems typically use electronic equipment to play audio and video files, with the autistic children completing the training with the help of an assistant. This type of training does not take individual differences among trainers into account, so the training effect is poor.
Disclosure of Invention
In view of the above technical deficiencies, an object of the embodiments of the present invention is to provide a training system for childhood autism and a method thereof.
In order to achieve the above object, in a first aspect, an embodiment of the present invention provides a children autism training system, including:
the image acquisition module is used for acquiring a field training image;
the face recognition module is used for carrying out face recognition on the field training image to obtain an ASD face image;
the human body contour recognition module is used for carrying out human body contour recognition on the field training image according to the ASD human face image to obtain ASD human body contour information;
the human-computer interaction module is used for receiving the selection operation of a trainer to provide a training task;
the rendering module is used for fusing and rendering scenes of the training task and the ASD human body contour information to generate a three-dimensional animation scene;
the human-computer interaction module is also used for displaying the three-dimensional animation scene so that a trainer can complete training according to the three-dimensional animation scene.
In some embodiments of the present application, the face recognition module is specifically configured to:
acquiring a face sample image, wherein the face sample image comprises an ASD face sample and a normal face sample;
dividing the face sample image into a training set and a test set;
training a convolutional neural network model by adopting the training set;
testing the trained convolutional neural network by adopting the test set to obtain an ASD face detection model;
and inputting the field training image into the ASD face detection model to obtain an ASD face image.
In some embodiments of the present application, the human body contour recognition module is specifically configured to:
extracting an ASD human body contour image from the field training image by adopting the ASD face image;
and carrying out human body contour recognition on the ASD human body contour image by adopting a human body target detection recognition algorithm based on human body indexes to obtain the ASD human body contour information.
The human body target detection and identification algorithm based on the human body index specifically comprises the following steps:
the depth distance and index value of each pixel are obtained from the depth image; median filtering is applied to the depth image; a depth threshold range is set according to the depth distance; after the depth image is binarized, pixel values within the depth threshold range are assigned 255; and pixels whose index value is 1-6 within the set depth threshold range are judged to be human body pixels, i.e. the human body contour.
As a specific implementation manner of the present application, the human-computer interaction module specifically includes:
the input unit is used for receiving selection operation of a trainer to provide a training task;
the display unit is used for displaying the three-dimensional animation scene;
and the training unit is used for receiving the operation of the trainer on the three-dimensional animation scene and scoring according to the training condition of the trainer.
Further, in some preferred embodiments of the present application, the human-computer interaction module further includes:
the login unit is used for receiving login operation of a trainer to enter the children autism training system;
the query unit is used for receiving a query operation of a user and displaying the trainer's training score to the user according to the query operation; the user may be a trainer, a training assistant, or medical staff.
Further, in certain preferred embodiments of the present application, the system further comprises:
the communication unit is used for acquiring the training process and the score condition recorded by the training unit;
and the centralized monitoring platform is used for acquiring the training process and the score condition transmitted by the communication unit, so that a doctor can remotely guide the training process in time.
In a second aspect, an embodiment of the present invention further provides a training method for childhood autism, including:
collecting a field training image;
carrying out face recognition on the field training image to obtain an ASD face image;
carrying out human body contour recognition on the field training image according to the ASD face image to obtain ASD human body contour information;
receiving selection operation of a trainer to provide a training task;
performing fusion rendering on the scene of the training task and the ASD human body contour information to generate a three-dimensional animation scene;
and displaying the three-dimensional animation scene so that a trainer completes training according to the three-dimensional animation scene.
By implementing the embodiment of the invention, face recognition is first performed to obtain the ASD face, ASD human body contour information is obtained based on the ASD face, and the scene of the training task is then fused and rendered with the ASD human body contour information to generate a three-dimensional animation scene, so that the trainer can complete training according to it. ASD children generally have cognitive impairment and are characterized by social difficulties, stereotyped behaviors, and narrow interests, and their facial expressions are not rich. The scheme exploits precisely this particularity of ASD children's faces to extract and recognize the ASD face and ASD body contour for training, and takes the trainer's individual differences into account, so the training effect can be improved.
In addition, autistic children usually need the assistance of teachers or parents during actual training, so simple image acquisition and face recognition would yield multiple targets. By performing ASD face recognition, the embodiment of the invention eliminates the interference of teachers' or parents' faces with the rendering of subsequent scenes, ensuring that the rendered scenes are directed more specifically at the ASD child.
Drawings
To more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for the description of the embodiments or the prior art are briefly introduced below.
FIG. 1 is a block diagram of a child autism training system provided by an embodiment of the present invention;
FIG. 2 is a block diagram of the human-machine interaction module of FIG. 1;
fig. 3 is a flowchart of a childhood autism training method provided by an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a training system for children autism provided by the embodiment of the invention includes:
the image acquisition module 10 is used for acquiring a field training image; the field training image may include a trainer, an assistant (a teacher or a parent), background objects, and the like;
a face recognition module 20, configured to perform face recognition on the field training image to obtain an ASD face image;
the human body contour recognition module 30 is configured to perform human body contour recognition on the field training image according to the ASD human face image to obtain ASD human body contour information;
a human-computer interaction module 40 for receiving selection operation of a trainer to provide a training task;
the rendering module 50 is configured to perform fusion rendering on a scene of a training task and the ASD human body contour information to generate a three-dimensional animation scene;
the human-computer interaction module 40 is further configured to display the three-dimensional animation scene, so that a trainer completes training according to the three-dimensional animation scene;
and the centralized monitoring platform 60, which communicates with the human-computer interaction module 40 and is used for acquiring the training process and scoring conditions, so that a doctor can remotely guide and adjust the training process in time.
The face recognition module 20 is specifically configured to:
acquiring a face sample image which comprises an ASD face sample and a normal face sample; the face sample images may be obtained by downloading from the Internet through a search engine or by shooting with equipment such as a camera, and are not limited to these two acquisition modes;
preprocessing a face sample image, and converting a color image into a gray image;
dividing a face sample image into a training set and a test set;
training a preset convolutional neural network model by adopting a face sample image in a training set;
testing the trained convolutional neural network model with the face sample images in the test set; if the model's recognition accuracy on the test set is greater than a preset threshold, taking the trained convolutional neural network model as the ASD face detection model; otherwise, retraining the model until its recognition accuracy on the face sample images exceeds the threshold;
and inputting the field training image into the ASD face detection model to obtain an ASD face image.
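The description above leaves the network architecture, data split, and accuracy threshold unspecified. Purely as an illustration of the train-then-test loop it describes, the following is a minimal sketch in PyTorch; the faces/asd and faces/normal folder layout, the 64x64 greyscale input size, the 80/20 split, and the 0.90 accuracy threshold are assumptions rather than values taken from the patent.

```python
# Minimal sketch (not the patent's actual model): a binary ASD / normal
# face classifier trained and tested as described above. Dataset layout,
# architecture, and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Grayscale(),          # the description converts colour images to grey
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: faces/asd/*.jpg and faces/normal/*.jpg
full_set = datasets.ImageFolder("faces", transform=transform)
n_train = int(0.8 * len(full_set))
train_set, test_set = random_split(full_set, [n_train, len(full_set) - n_train])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)

class FaceCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)   # two classes: ASD, normal

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = FaceCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
ACCURACY_THRESHOLD = 0.90            # the "preset threshold"; the actual value is not given

for epoch in range(20):              # keep training until the test accuracy passes the threshold
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            predictions = model(images).argmax(dim=1)
            correct += (predictions == labels).sum().item()
            total += labels.numel()
    if correct / total > ACCURACY_THRESHOLD:
        break                        # use this model as the ASD face detection model
```

At inference time, the field training image (or face crops from it) would be passed through the trained model, and only detections classified as ASD faces would be kept for the subsequent contour extraction.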
The human body contour recognition module 30 is specifically configured to:
extracting an ASD human body contour image from the field training image by adopting the ASD face image;
and carrying out human body contour recognition on the ASD human body contour image by adopting a human body target detection recognition algorithm based on human body indexes to obtain the ASD human body contour information.
Specifically, the human body target detection and identification algorithm based on the human body index specifically comprises:
Firstly, the depth distance and the index value of each pixel are obtained from the depth image. In each pixel's value, the high 13 bits represent the pixel's depth information and the low 3 bits represent the pixel's index number. These low 3 bits carry the tracked human index and are interpreted as an integer value (i.e. 0-7) rather than as flag bits. Median filtering is applied to the obtained depth image to make it more stable. Then a depth threshold range is set according to the effective detection distance of 1 m to 3.5 m: the gray value of pixels outside this range is set to 0 and they are treated as background. The depth image is then binarized, with pixel values inside the depth threshold range set to 255 and the rest set to 0. Finally, pixels whose index value is 1-6 within the set depth threshold range are judged to be human body pixels, i.e. the human body contour.
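A minimal sketch of this depth-and-index segmentation is given below, assuming a Kinect-style 16-bit frame whose high 13 bits carry depth in millimetres and whose low 3 bits carry the body index, as described above; the function name, the median-filter kernel size, and the use of OpenCV and NumPy are assumptions for illustration only.

```python
# Minimal sketch of the human-index based contour extraction described above.
import cv2
import numpy as np

def extract_body_mask(raw_frame: np.ndarray) -> np.ndarray:
    """Return a binary mask (255 = human body pixel) from a 16-bit depth+index frame."""
    depth_mm = (raw_frame >> 3).astype(np.uint16)    # high 13 bits: depth distance
    body_index = (raw_frame & 0x7).astype(np.uint8)  # low 3 bits: index value 0-7

    # Median filtering to obtain a more stable depth image
    depth_mm = cv2.medianBlur(depth_mm, 5)

    # Depth threshold range from the effective detection distance: 1 m - 3.5 m
    in_range = (depth_mm >= 1000) & (depth_mm <= 3500)

    # Binarisation: pixels inside the depth range become 255, everything else 0
    binary = np.where(in_range, 255, 0).astype(np.uint8)

    # Pixels with index value 1-6 inside the depth threshold range are human
    # body pixels, i.e. the human contour region
    human = in_range & (body_index >= 1) & (body_index <= 6)
    mask = np.zeros_like(binary)
    mask[human] = 255
    return mask
```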
Further, as shown in fig. 2, the human-computer interaction module 40 includes:
a login unit 401, configured to receive a login operation of a trainer to enter the childhood autism training system;
an input unit 402 for receiving a selection operation of a trainer to provide a training task;
a display unit 403 for displaying the three-dimensional animation scene;
a training unit 404, configured to receive an operation of the trainer on the three-dimensional animation scene, and score according to a training situation of the trainer;
a query unit 405, configured to receive a query operation of a user and display the trainer's training score to the user according to the query operation; the user may be a trainer, a training assistant, or medical staff;
and the communication unit 406 is used for acquiring the training process and the score condition recorded by the training unit and transmitting the training process and the score condition to the centralized monitoring platform 60.
As can be seen from the above description, the childhood autism training system according to the embodiment of the invention first performs face recognition to obtain the ASD face, then obtains ASD human body contour information based on the ASD face, and then fuses and renders the scene of the training task with the ASD human body contour information to generate a three-dimensional animation scene, so that the trainer completes training according to it. ASD children generally have cognitive impairment and are characterized by social difficulties, stereotyped behaviors, and narrow interests, and their facial expressions are not rich. The scheme exploits precisely this particularity of ASD children's faces to extract and recognize the ASD face and ASD body contour for training, and takes the trainer's individual differences into account, so the training effect can be improved.
In addition, autistic children usually need the assistance of teachers or parents during actual training, so simple image acquisition and face recognition would yield multiple targets. By performing ASD face recognition, the embodiment of the invention eliminates the interference of teachers' or parents' faces with the rendering of subsequent scenes, ensuring that the rendered scenes are directed more specifically at the ASD child.
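The patent does not describe how the fusion rendering itself is performed. Purely to illustrate the idea, the sketch below composites the trainer's silhouette, cut out with a body mask such as the one produced by the previous sketch, onto a pre-rendered frame of the selected training-task scene; a real implementation would drive a 3D animation engine instead, and the function and variable names are assumptions.

```python
# Minimal 2D compositing stand-in for the fusion rendering step.
import cv2
import numpy as np

def fuse_into_scene(camera_frame: np.ndarray,
                    body_mask: np.ndarray,
                    scene_frame: np.ndarray) -> np.ndarray:
    """Overlay the masked trainer silhouette onto a rendered scene frame."""
    height, width = camera_frame.shape[:2]
    scene = cv2.resize(scene_frame, (width, height))
    mask3 = cv2.cvtColor(body_mask, cv2.COLOR_GRAY2BGR) // 255   # 0 or 1 per channel
    # Keep scene pixels where the mask is 0 and trainer pixels where it is 1
    return scene * (1 - mask3) + camera_frame * mask3
```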
Based on the same inventive concept, the invention provides a training method for children autism, as shown in fig. 3, comprising:
and S1, acquiring a field training image.
And S2, performing face recognition on the field training image to obtain an ASD face image.
Specifically, step S2 includes:
acquiring a face sample image, wherein the face sample image comprises an ASD face sample and a normal face sample;
dividing the face sample image into a training set and a test set;
training a convolutional neural network model by adopting the training set;
testing the trained convolutional neural network by adopting the test set to obtain an ASD face detection model;
and inputting the field training image into the ASD face detection model to obtain an ASD face image.
And S3, carrying out human body contour recognition on the field training image according to the ASD face image to obtain ASD human body contour information.
Specifically, step S3 includes:
extracting an ASD human body contour image from the field training image by adopting the ASD face image;
adopting a human body target detection and identification algorithm based on human body index to carry out human body contour identification on the ASD human body contour image to obtain the ASD human body contour information;
the human body target detection and identification algorithm based on the human body index specifically comprises the following steps:
the depth distance and index value of each pixel are obtained from the depth image; median filtering is applied to the depth image; a depth threshold range is set according to the depth distance; after the depth image is binarized, pixel values within the depth threshold range are assigned 255; and pixels whose index value is 1-6 within the set depth threshold range are judged to be human body pixels, i.e. the human body contour.
And S4, receiving the selection operation of the trainer to provide the training task.
And S5, performing fusion rendering on the scene of the training task and the ASD human body contour information to generate a three-dimensional animation scene.
And S6, displaying the three-dimensional animation scene so that the trainer completes training according to the three-dimensional animation scene.
And S7, transmitting the training process and the score condition to a centralized monitoring platform, so that the doctor can remotely guide the training process in time.
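Tying steps S1 to S6 together, the following is a minimal orchestration sketch under the same assumptions as the sketches above; the camera objects, the ASD face detector, and the scene frame are hypothetical stand-ins, and the upload to the centralized monitoring platform in step S7 is not shown.

```python
# Minimal session loop combining the earlier sketches (illustrative only).
import cv2

def run_training_session(face_detector, scene_frame, camera, depth_camera):
    # scene_frame is the rendered image of the training task selected in S4
    while True:
        ok, frame = camera.read()                 # S1: collect a field training image
        _, raw_depth = depth_camera.read()        # aligned raw depth + index frame
        if not ok:
            break
        asd_faces = face_detector(frame)          # S2: ASD face recognition
        if len(asd_faces) == 0:
            continue                              # no ASD child detected in this frame
        mask = extract_body_mask(raw_depth)       # S3: ASD body contour (see sketch above)
        fused = fuse_into_scene(frame, mask, scene_frame)   # S5: fusion rendering
        cv2.imshow("training scene", fused)       # S6: display for the trainer
        if cv2.waitKey(30) & 0xFF == 27:          # Esc ends the session
            break
```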
Further, in the method, the user can log in to the childhood autism training system to query the trainer's score.
It should be noted that, for a more detailed description of the method, please refer to the training system part, which is not described herein again.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two, and that the components and steps of the examples have been described above in general terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A child autism training system, comprising:
the image acquisition module is used for acquiring a field training image;
the face recognition module is used for carrying out face recognition on the field training image to obtain an ASD face image;
the human body contour recognition module is used for carrying out human body contour recognition on the field training image according to the ASD human face image to obtain ASD human body contour information;
the human-computer interaction module is used for receiving the selection operation of a trainer to provide a training task;
the rendering module is used for fusing and rendering scenes of the training task and the ASD human body contour information to generate a three-dimensional animation scene;
the human-computer interaction module is also used for displaying the three-dimensional animation scene so that a trainer can complete training according to the three-dimensional animation scene.
2. The childhood autism training system of claim 1, wherein the face recognition module is specifically configured to:
acquiring a face sample image, wherein the face sample image comprises an ASD face sample and a normal face sample;
dividing the face sample image into a training set and a test set;
training a convolutional neural network model by adopting the training set;
testing the trained convolutional neural network by adopting the test set to obtain an ASD face detection model;
and inputting the field training image into the ASD face detection model to obtain an ASD face image.
3. The childhood autism training system of claim 2, wherein the body contour recognition module is specifically configured to:
extracting an ASD human body contour image from the field training image by adopting the ASD face image;
and carrying out human body contour recognition on the ASD human body contour image by adopting a human body target detection recognition algorithm based on human body indexes to obtain the ASD human body contour information.
4. The childhood autism training system of claim 1, wherein the human target detection and identification algorithm based on human index is specifically:
the depth distance and index value of each pixel are obtained from the depth image; median filtering is applied to the depth image; a depth threshold range is set according to the depth distance; after the depth image is binarized, pixel values within the depth threshold range are assigned 255; and pixels whose index value is 1-6 within the set depth threshold range are judged to be human body pixels, i.e. the human body contour.
5. The childhood autism training system of claim 4, wherein said human-machine interaction module comprises:
the input unit is used for receiving selection operation of a trainer to provide a training task;
the display unit is used for displaying the three-dimensional animation scene;
and the training unit is used for receiving the operation of the trainer on the three-dimensional animation scene and scoring according to the training condition of the trainer.
6. The child autism training system of claim 5, wherein said human-machine interaction module further comprises:
the login unit is used for receiving login operation of a trainer to enter the children autism training system;
the query unit is used for receiving a query operation of a user and displaying the trainer's training score to the user according to the query operation; the user may be a trainer, a training assistant, or medical staff.
7. The childhood autism training system of claim 5, wherein said system further comprises:
the communication unit is used for acquiring the training process and the score condition recorded by the training unit;
and the centralized monitoring platform is used for acquiring the training process and the score condition transmitted by the communication unit, so that a doctor can remotely guide the training process in time.
8. A method of childhood autism training, comprising:
collecting a field training image;
carrying out face recognition on the field training image to obtain an ASD face image;
carrying out human body contour recognition on the field training image according to the ASD face image to obtain ASD human body contour information;
receiving selection operation of a trainer to provide a training task;
performing fusion rendering on the scene of the training task and the ASD human body contour information to generate a three-dimensional animation scene;
and displaying the three-dimensional animation scene so that a trainer completes training according to the three-dimensional animation scene.
9. The childhood autism training method of claim 8, wherein obtaining the ASD face image specifically comprises:
acquiring a face sample image, wherein the face sample image comprises an ASD face sample and a normal face sample;
dividing the face sample image into a training set and a test set;
training a convolutional neural network model by adopting the training set;
testing the trained convolutional neural network by adopting the test set to obtain an ASD face detection model;
and inputting the field training image into the ASD face detection model to obtain an ASD face image.
10. The childhood autism training method of claim 9, wherein obtaining the ASD body contour information specifically includes:
extracting an ASD human body contour image from the field training image by adopting the ASD face image;
adopting a human body target detection and identification algorithm based on human body index to carry out human body contour identification on the ASD human body contour image to obtain the ASD human body contour information;
the human body target detection and identification algorithm based on the human body index specifically comprises the following steps:
the depth distance and index value of each pixel are obtained from the depth image; median filtering is applied to the depth image; a depth threshold range is set according to the depth distance; after the depth image is binarized, pixel values within the depth threshold range are assigned 255; and pixels whose index value is 1-6 within the set depth threshold range are judged to be human body pixels, i.e. the human body contour.
CN202110943379.XA 2021-08-17 2021-08-17 Child autism training system and method thereof Pending CN113688710A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110943379.XA CN113688710A (en) 2021-08-17 2021-08-17 Child autism training system and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110943379.XA CN113688710A (en) 2021-08-17 2021-08-17 Child autism training system and method thereof

Publications (1)

Publication Number Publication Date
CN113688710A (en) 2021-11-23

Family

ID=78580293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110943379.XA Pending CN113688710A (en) 2021-08-17 2021-08-17 Child autism training system and method thereof

Country Status (1)

Country Link
CN (1) CN113688710A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114296556A (en) * 2021-12-31 2022-04-08 苏州欧普照明有限公司 Interactive display method, device and system based on human body posture
CN114392457A (en) * 2022-03-25 2022-04-26 北京无疆脑智科技有限公司 Information generation method, device, electronic equipment, storage medium and system
CN114758530A (en) * 2022-04-28 2022-07-15 浙江理工大学 Infant face capability training program and training method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109550233A (en) * 2018-11-15 2019-04-02 东南大学 Autism child attention training system based on augmented reality
CN111081371A (en) * 2019-11-27 2020-04-28 昆山杜克大学 Virtual reality-based early autism screening and evaluating system and method
CN111104897A (en) * 2019-12-18 2020-05-05 深圳市捷顺科技实业股份有限公司 Training method and device for child face recognition model and storage medium
CN112163512A (en) * 2020-09-25 2021-01-01 杨铠郗 Autism spectrum disorder face screening method based on machine learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109550233A (en) * 2018-11-15 2019-04-02 东南大学 Autism child attention training system based on augmented reality
CN111081371A (en) * 2019-11-27 2020-04-28 昆山杜克大学 Virtual reality-based early autism screening and evaluating system and method
CN111104897A (en) * 2019-12-18 2020-05-05 深圳市捷顺科技实业股份有限公司 Training method and device for child face recognition model and storage medium
CN112163512A (en) * 2020-09-25 2021-01-01 杨铠郗 Autism spectrum disorder face screening method based on machine learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
林强 et al.: 《行为识别与智能计算》 (Behavior Recognition and Intelligent Computing), Xidian University Press, 30 November 2016, pages 43-52 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114296556A (en) * 2021-12-31 2022-04-08 苏州欧普照明有限公司 Interactive display method, device and system based on human body posture
CN114392457A (en) * 2022-03-25 2022-04-26 北京无疆脑智科技有限公司 Information generation method, device, electronic equipment, storage medium and system
CN114758530A (en) * 2022-04-28 2022-07-15 浙江理工大学 Infant face capability training program and training method
CN114758530B (en) * 2022-04-28 2023-08-08 浙江理工大学 Infant face ability training program and training method

Similar Documents

Publication Publication Date Title
CN113688710A (en) Child autism training system and method thereof
CN107224291B (en) Dispatcher capability test system
CN107981858A (en) Electrocardiogram heartbeat automatic recognition classification method based on artificial intelligence
CN110970130B (en) Data processing device for attention deficit hyperactivity disorder
CN111544015B (en) Cognitive power-based control work efficiency analysis method, device and system
CN111598451B (en) Control work efficiency analysis method, device and system based on task execution capacity
CN111012367A (en) Intelligent identification system for mental diseases
CN114581823B (en) Virtual reality video emotion recognition method and system based on time sequence characteristics
CN111887867A (en) Method and system for analyzing character formation based on expression recognition and psychological test
Hahn et al. Thatcherization impacts the processing of own-race faces more so than other-race faces: An ERP study
CN112580552A (en) Method and device for analyzing behavior of rats
CN109620266A (en) The detection method and system of individual anxiety level
CN109276243A (en) Brain electricity psychological test method and terminal device
CN115659207A (en) Electroencephalogram emotion recognition method and system
CN116186561B (en) Running gesture recognition and correction method and system based on high-dimensional time sequence diagram network
CN115022617B (en) Video quality evaluation method based on electroencephalogram signal and space-time multi-scale combined network
CN113974589B (en) Multi-modal behavior paradigm evaluation optimization system and cognitive ability evaluation method
Zhao et al. A multimodal data driven rehabilitation strategy auxiliary feedback method: A case study
CN110353703B (en) Autism assessment device and system based on parrot tongue learning language model behavior analysis
CN113456075A (en) Concentration assessment training method based on eye movement tracking and brain wave monitoring technology
CN113611416A (en) Psychological scene assessment method and system based on virtual reality technology
Cacciatori et al. On Developing Facial Stress Analysis and Expression Recognition Platform
CN110781719A (en) Non-contact and contact cooperative mental state intelligent monitoring system
CN111724896A (en) Drug addiction evaluation system based on multi-stimulus image or video ERP
CN112419112B (en) Method and device for generating academic growth curve, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination