CN117351390A - BPPV eye-shake classification model modeling method - Google Patents


Info

Publication number
CN117351390A
Authority
CN
China
Prior art keywords: eye, shake, video, bppv, classification model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311259719.2A
Other languages
Chinese (zh)
Inventor
方志军
王卓然
吴沛霞
高永彬
王海玲
李文妍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Engineering Science
Eye and ENT Hospital of Fudan University
Original Assignee
Shanghai University of Engineering Science
Eye and ENT Hospital of Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Engineering Science and Eye and ENT Hospital of Fudan University
Priority to CN202311259719.2A
Publication of CN117351390A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/40: Scenes; scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements using pattern recognition or machine learning
    • G06V10/764: Arrangements using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Arrangements using pattern recognition or machine learning using neural networks

Abstract

The invention relates to a modeling method for a BPPV eye-shake classification model, which comprises the following steps: acquiring an eye-shake video; locating and marking the pupil area in the eye-shake video; calculating the optical flow between consecutive frames of the video marked with the pupil area to obtain a corresponding optical-flow frame sequence; and training the BPPV eye-shake classification model with the optical-flow frame sequence as input and the BPPV eye-shake type as label, obtaining a trained BPPV eye-shake classification model. Compared with the prior art, the method achieves fine-grained classification of compound eye shake and reduces the manual labeling workload.

Description

BPPV eye-shake classification model modeling method
Technical Field
The invention relates to the fields of deep learning and image processing, and in particular to a modeling method for a BPPV eye-shake classification model.
Background
Dizziness and vertigo are common clinical complaints and typical symptoms of many diseases; dizziness is among the most frequent symptoms encountered in clinical practice. Nystagmus is the most sensitive and specific sign of vestibular lesions in the clinical examination of vertigo. Unlike other diseases, vestibular disorders are difficult to diagnose because they lack typical signs and features. Measuring the nystagmus pattern from clinically collected eye-movement videos can provide a valuable reference for diagnosing dizziness and vertigo. However, this process still depends on experts and specialized examinations, and intelligent nystagmus pattern recognition for clinical diagnosis has not yet been realized.
Benign paroxysmal positional vertigo (Benign Paroxysmal Positional Vertigo, BPPV) is an extremely common vertigo disorder, with nystagmus as its main sign. BPPV, commonly known as otolithiasis, is a disease of the vestibular apparatus featuring transient paroxysmal vertigo accompanied by nystagmus triggered by changes in head position, and is one of the most common forms of vestibular vertigo. Left untreated, persistent BPPV can severely affect patients' daily lives, so characterizing the nystagmus of BPPV patients is critical to diagnosis; current related techniques typically acquire nystagmus information through positional tests.
Traditional detection methods can only provide doctors with low-precision pupil-tracking data, from which features are extracted manually and then fed into an algorithmic model for classification; this scheme is highly subjective and limited. In addition, nystagmus is multi-directional and the axial (torsional) rotation vector of the eyeball is difficult to extract; some features can only distinguish patients from healthy people, or classify simple nystagmus patterns, whereas clinical diagnosis generally involves compound nystagmus of multiple types whose specific types the related techniques cannot identify.
In summary, the eye-shake classification schemes in the related art can neither accurately extract eye-shake video features nor perform fine-grained classification of eye shake compounded from multiple types.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide a BPPV eye-shake classification model modeling method capable of achieving fine-grained classification of compound eye shake while reducing the manual labeling workload.
The aim of the invention can be achieved by the following technical scheme:
a BPPV eye-shake classification model modeling method, comprising: acquiring an eye shake video; positioning and marking pupil areas in the eye shake video; calculating the optical flow between the continuous frames according to the eye shake video marked with the pupil area, and obtaining a corresponding optical flow frame sequence; and training the BPPV eye-shake classification model by taking the optical flow frame sequence as input and the BPPV eye-shake type as a label to obtain the trained BPPV eye-shake classification model.
As a preferred technical solution, the acquiring the eye shake video includes: and acquiring eye shake video in an MPG format or an AVI format recorded by an infrared video eye movement recorder.
As a preferred technical solution, after the acquiring of the eye-shake video, the method further includes: removing invalid frames from the eye-shake video and compressing it to obtain the compressed eye-shake video;
the positioning and marking of the pupil area in the eye-shake video then comprises: positioning and marking the pupil area in the compressed eye-shake video.
As a more preferable technical solution, the removing of invalid frames from the eye-shake video and compressing it to obtain the compressed eye-shake video includes: identifying valid frames and invalid frames in the eye-shake video with an invalid frame identification model; and sequentially recombining the valid frames of the eye-shake video to obtain the compressed eye-shake video.
As a more preferable technical solution, before the identifying of valid and invalid frames in the eye-shake video with the invalid frame identification model, the method further includes: formulating invalid-frame removal as a binary classification task, and training the invalid frame identification model with invalid and valid frames collected in advance.
As a more preferable aspect, the invalid frame identification model includes:
the invalid frame identification network is used for classifying frames in the eye shake video to obtain valid frames and invalid frames;
and the pupil area calculation module is used for calculating the pupil area in the valid frames determined by the invalid frame identification network by adopting a Hough circle transform algorithm, and re-determining a valid frame as invalid when the pupil area is smaller than a preset area threshold.
As a preferred technical solution, the positioning and marking of the pupil area in the eye-shake video includes:
a pupil region positioning algorithm based on random ellipse fitting positions and marks the pupil region in the eye-shake video, comprising: converting video frames of the eye-shake video into a grayscale map; processing the grayscale map with Gaussian filtering; eliminating white spots in the pupil with a morphological opening operation; eliminating fine black interfering objects with a morphological closing operation; extracting edge features from the image to obtain a binarized image; randomly generating a number of candidate ellipses and evaluating the fitness of each; selecting the optimal ellipse fitting result according to the fitness evaluation; and determining the position and size of the pupil from the best ellipse fit and marking the pupil.
As a preferred technical solution, the calculating of the optical flow between consecutive frames from the eye-shake video marked with the pupil area to obtain a corresponding optical-flow frame sequence includes: calculating the optical flow between consecutive frames of the eye-shake video with the LiteFlowNet2 algorithm, and obtaining the corresponding optical-flow frame sequence.
As a preferred technical solution, the BPPV eye-shake classification model comprises an EfficientNet sub-model and a Bi-LSTM sub-model connected at the end of the EfficientNet sub-model.
As a preferred technical solution, the method further comprises: performing eye-shake classification by adopting the trained BPPV eye-shake classification model.
Compared with the prior art, the invention has the following beneficial effects:
1. Fine-grained classification of compound eye shake is realized: the BPPV eye-shake classification model constructed by the modeling method of the invention extracts an optical-flow sequence from the video with the LiteFlowNet2 algorithm, better capturing the motion information between consecutive frames of the eye-shake video and overcoming the subjectivity and limitations of traditional motion-trajectory features; meanwhile, the model is built from EfficientNet and Bi-LSTM, realizing fine-grained classification of compound eye shake.
2. The manual labeling workload is reduced: the BPPV eye-shake classification model constructed by the modeling method can mark pupils automatically, lightening the burden of manual labeling.
Drawings
FIG. 1 is a schematic flow chart of a modeling method of a BPPV eye-shake classification model in an embodiment of the invention;
FIG. 2 is a flow chart illustrating the process of invalid frame removal and compression of an eye-shake video according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating pupil area localization in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of extracting optical flow using LiteFlowNet2 according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a BPPV eye-shake classification model constructed based on EfficientNet in an embodiment of the invention;
fig. 6 is a schematic diagram of a BPPV eye-shake classification system according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic may be included in at least one implementation of the present application. In the description of the present application, it should be understood that the terms "upper," "lower," "left," "right," "top," "bottom," and the like indicate an orientation or a positional relationship based on that shown in the drawings, and are merely for convenience of description and simplification of the description, and do not indicate or imply that the apparatus or element in question must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present application. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may include one or more of the feature, either explicitly or implicitly. Moreover, the terms "first," "second," and the like, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein.
Fig. 1 is a flowchart of the BPPV eye-shake classification model modeling method provided in an embodiment of the application. The present application provides the method operation steps as described in the embodiments or the flowchart, but more or fewer steps may be included based on conventional or non-inventive labor. The step sequence listed in the embodiment is only one of many possible execution orders and does not represent the only one. The method may be implemented in software and/or hardware. Referring to fig. 1, the method may include:
step S110: acquiring an eye shake video;
step S120: positioning and marking pupil areas in the eye shake video;
step S130: calculating the optical flow between the continuous frames according to the eye shake video marked with the pupil area, and obtaining a corresponding optical flow frame sequence;
step S140: and training the BPPV eye-shake classification model by taking the optical flow frame sequence as input and the BPPV eye-shake type as a label to obtain a trained BPPV eye-shake classification model.
The following describes the steps S110 to S140 in detail:
Optionally, step S110 includes: acquiring an eye-shake video in MPG (Moving Picture Experts Group) format or AVI (Audio Video Interleave) format recorded by an infrared video eye-movement recorder.
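As an illustration of this acquisition step, the following minimal Python/OpenCV sketch loads an MPG or AVI recording into grayscale frames for the downstream steps; the file name and the grayscale conversion are illustrative assumptions, not requirements of the patent.

```python
# A minimal sketch: read an infrared nystagmus recording into grayscale frames.
# The file name is a placeholder.
import cv2

def load_frames(path="nystagmus_exam.avi"):
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video reached
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    return frames
```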
Optionally, after step S110, the method further includes: removing invalid frames in the eye shake video, compressing the eye shake video, and obtaining the compressed eye shake video;
At this time, step S120 includes: positioning and marking the pupil area in the compressed eye-shake video.
Optionally, the removing of invalid frames from the eye-shake video and compressing it to obtain the compressed eye-shake video includes:
identifying valid frames and invalid frames in the eye-shake video with the invalid frame identification model;
and sequentially recombining the valid frames of the eye-shake video to obtain the compressed eye-shake video.
The invalid frame identification model may adopt a convolutional neural network (CNN) model.
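The patent does not fix the CNN architecture, so the following PyTorch sketch is only a plausible minimal two-class (valid/invalid) frame classifier; the layer sizes, input format, and training recipe are assumptions.

```python
# A minimal sketch of a binary CNN frame classifier (valid vs. invalid frame).
# Architecture and hyperparameters are assumptions, not taken from the patent.
import torch
import torch.nn as nn

class InvalidFrameNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # logits: [invalid, valid]

    def forward(self, x):  # x: (B, 1, H, W) grayscale frames
        return self.classifier(self.features(x).flatten(1))

# Training follows the usual cross-entropy recipe on pre-labelled frames:
model = InvalidFrameNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```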
Optionally, before the invalid frame identification model is used to identify valid and invalid frames in the eye-shake video, the method further comprises: formulating invalid-frame removal as a binary classification task, and training the invalid frame identification model with invalid and valid frames collected in advance.
Optionally, the invalid frame identification model includes:
the invalid frame identification network is used for classifying frames in the eye shake video to obtain valid frames and invalid frames;
and the pupil area calculation module is used for calculating the pupil area in the valid frames determined by the invalid frame identification network by adopting a Hough circle transform algorithm, and re-determining a valid frame as invalid when the pupil area is smaller than the preset area threshold.
The flow of identifying valid frames and invalid frames in the eye-shake video by using the invalid frame identification model is shown in fig. 2.
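The second-stage check performed by the pupil area calculation module can be sketched with OpenCV's Hough circle transform as below; the Hough parameters and the area threshold are assumed values that would need tuning on real footage.

```python
# A minimal sketch of the pupil-area filter: a frame kept by the CNN is
# re-checked and dropped when the detected pupil area is below a threshold.
import cv2
import numpy as np

AREA_THRESHOLD = 500.0  # px^2; assumed value

def passes_pupil_area_check(gray_frame: np.ndarray) -> bool:
    blurred = cv2.medianBlur(gray_frame, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=30, minRadius=5, maxRadius=80)
    if circles is None:  # no pupil-like circle found
        return False
    radius = circles[0, 0, 2]  # radius of the strongest detected circle
    return np.pi * radius ** 2 >= AREA_THRESHOLD
```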
Optionally, step S120 includes:
a pupil region positioning algorithm based on random ellipse fitting positions and marks the pupil region in the eye-shake video, comprising:
converting video frames of the eye shake video into a gray scale map;
processing the gray map by Gaussian filtering;
the white spots generated by light reflection in the pupils are eliminated by adopting an opening operation in graphics;
fine black interfering objects such as eyelashes are eliminated by adopting a closing operation in graphics;
extracting edge features in the image to obtain a binarized image;
randomly generating a certain number of ellipses, and evaluating the fitness of each ellipse;
screening out an optimal ellipse fitting result according to the fitness evaluation result;
the position and size of the pupil are determined from the best ellipse fitting result and the pupil is marked; the marking result is shown in fig. 3.
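A compact sketch of this random-ellipse-fitting pipeline, following the preprocessing order listed above, is given below. The kernel sizes, candidate count, semi-axis ranges, and the edge-coverage fitness measure are illustrative assumptions; the patent does not fix them.

```python
# A minimal sketch of pupil localization by random ellipse fitting:
# grayscale -> Gaussian blur -> opening -> closing -> edges -> random search.
import cv2
import numpy as np

rng = np.random.default_rng(0)

def locate_pupil(frame_bgr, n_candidates=2000):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)   # remove white specular spots
    gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)  # remove eyelash fragments
    edges = cv2.Canny(gray, 50, 150)                        # binarized edge map

    h, w = edges.shape
    best, best_fit = None, -1.0
    for _ in range(n_candidates):
        cx, cy = rng.uniform(0, w), rng.uniform(0, h)       # random centre
        a, b = rng.uniform(8, 60, size=2)                   # random semi-axes
        theta = rng.uniform(0, 180)                         # random rotation
        pts = cv2.ellipse2Poly((int(cx), int(cy)), (int(a), int(b)),
                               int(theta), 0, 360, 10)
        inside = ((pts[:, 0] >= 0) & (pts[:, 0] < w) &
                  (pts[:, 1] >= 0) & (pts[:, 1] < h))
        if not inside.any():
            continue
        # fitness: fraction of sampled contour points landing on edge pixels
        fit = (edges[pts[inside, 1], pts[inside, 0]] > 0).mean()
        if fit > best_fit:
            best_fit, best = fit, ((cx, cy), (a, b), theta)
    return best  # centre, semi-axes, rotation of the best-fitting ellipse
```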
Optionally, as shown in fig. 4, step S130 includes: calculating the optical flow between consecutive frames of the eye-shake video marked with the pupil area by adopting the LiteFlowNet2 algorithm, and obtaining the corresponding optical-flow frame sequence. Extracting the optical-flow sequence with the LiteFlowNet2 algorithm better captures the motion information between consecutive frames of the eye-shake video, thereby overcoming the subjectivity and limitations of traditional motion-trajectory features.
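The patent specifies LiteFlowNet2 for this step. Because LiteFlowNet2 is distributed as research code rather than as a packaged library, the sketch below substitutes OpenCV's Farneback dense optical flow purely to illustrate the frame-pairing loop; a LiteFlowNet2 forward pass would replace the marked call.

```python
# A minimal sketch of building the optical-flow frame sequence from
# consecutive grayscale frames. Farneback is a stand-in for LiteFlowNet2.
import cv2

def optical_flow_sequence(frames):
    """frames: list of grayscale images -> list of (H, W, 2) flow fields."""
    flows = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        # stand-in for a LiteFlowNet2 forward pass on (prev, curr)
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            pyr_scale=0.5, levels=3,
                                            winsize=15, iterations=3,
                                            poly_n=5, poly_sigma=1.2, flags=0)
        flows.append(flow)
    return flows
```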
Optionally, as shown in fig. 5, the BPPV eye-shake classification model includes an EfficientNet sub-model and a Bi-LSTM sub-model connected to the end of the EfficientNet sub-model, where the Bi-LSTM sub-model better captures motion information between consecutive frames.
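A minimal PyTorch sketch of this pairing is given below: a per-frame EfficientNet encoder feeds a bidirectional LSTM over the optical-flow frame sequence. The backbone variant (B0), the two-channel flow input, the hidden size, and the use of the final LSTM state are assumptions; the patent fixes only the EfficientNet plus Bi-LSTM structure.

```python
# A minimal sketch of the EfficientNet + Bi-LSTM classifier over flow frames.
import torch
import torch.nn as nn
from torchvision import models

class BPPVClassifier(nn.Module):
    def __init__(self, num_classes, hidden=256):
        super().__init__()
        backbone = models.efficientnet_b0(weights=None)
        backbone.classifier = nn.Identity()  # expose 1280-d frame features
        # optical flow has 2 channels (dx, dy); widen the stem accordingly
        backbone.features[0][0] = nn.Conv2d(2, 32, kernel_size=3,
                                            stride=2, padding=1, bias=False)
        self.backbone = backbone
        self.lstm = nn.LSTM(1280, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, flow_seq):  # flow_seq: (B, T, 2, H, W)
        b, t = flow_seq.shape[:2]
        feats = self.backbone(flow_seq.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)          # (B, T, 2 * hidden)
        return self.head(out[:, -1])       # logits over nystagmus types
```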
Optionally, the BPPV eye-shake classification model modeling method further includes:
and performing eye-shake classification by adopting the trained BPPV eye-shake classification model.
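Tying the pieces together, the following usage sketch (reusing the BPPVClassifier sketch above) runs one training step on labelled optical-flow sequences and then reads off per-class confidences, mirroring the confidence output described for the diagnosis subsystem below; tensor shapes, the class count, and the labels are placeholders.

```python
# A minimal end-to-end usage sketch under the assumptions stated above.
import torch
import torch.nn as nn

model = BPPVClassifier(num_classes=4)  # BPPVClassifier: sketch above
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

flows = torch.randn(2, 16, 2, 224, 224)  # (batch, frames, dx/dy, H, W), placeholder
labels = torch.tensor([0, 3])            # placeholder nystagmus-type indices

logits = model(flows)                    # one training step
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

probs = torch.softmax(logits, dim=1)     # per-class confidence scores
```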
As shown in fig. 6, based on the same inventive concept, this embodiment further provides a BPPV eye-shake classification system, which includes: a video database storage subsystem, an intelligent labeling auxiliary marking subsystem, and a type intelligent diagnostic subsystem. The video database storage subsystem is connected to both the intelligent labeling auxiliary marking subsystem and the type intelligent diagnostic subsystem, and the intelligent labeling auxiliary marking subsystem is connected to the type intelligent diagnostic subsystem. Specifically:
video database storage subsystem: used for inputting the collected infrared eye-shake videos, removing invalid frames from the original eye-shake videos, and storing the processed videos;
intelligent labeling auxiliary marking subsystem: used for positioning the pupil region with the pupil region positioning algorithm based on random ellipse fitting and marking it on the original video; it can also apply custom labels to the eye-shake video and store the labeling information;
type intelligent diagnostic subsystem: used for extracting an optical-flow sequence from the BPPV eye-shake video with the LiteFlowNet2 algorithm, predicting the BPPV eye-shake type with the BPPV eye-shake classification model constructed by the above modeling method, and outputting the confidence of each class in the prediction result.
Optionally, the intelligent labeling auxiliary marking subsystem presets eye-shake change types covering the horizontal, vertical, and axial directions as well as intensity, to assist manual labeling.
Optionally, the video database storage subsystem comprises:
the data input unit is used for inputting infrared videos acquired by a hospital;
an invalid frame removing unit, configured to remove an invalid frame from an original video;
the data storage unit is used for storing the preprocessed video;
the data input unit, the invalid frame removing unit and the data storage unit are sequentially connected.
The intelligent labeling auxiliary marking subsystem comprises:
the pupil area positioning unit is used for positioning pupil areas through a pupil area positioning algorithm based on random ellipse fitting and marking the original video;
the marking unit is used for marking the eye shake video by a doctor and storing marking information;
a data storage unit;
the pupil area positioning unit and the labeling unit are connected in sequence, and the data storage unit is connected with the pupil area positioning unit.
The type intelligent diagnostic subsystem includes:
the optical flow extraction unit is used for extracting an optical flow sequence from the BPPV eye shake video through a LiteFlowNet2 algorithm;
the type prediction unit is used for extracting characteristics through a BPPV eye-shake classification model constructed based on EfficientNet, predicting the eye-shake type, and outputting the confidence degree of each type in the prediction result;
the optical flow extraction unit and the type prediction unit are connected in sequence. The pupil area positioning unit is connected with the optical flow extraction unit.
The BPPV eye-shake classification system has the following beneficial effects:
1. Compared with the prior art, the BPPV eye-shake classification system provides doctors with an intelligent auxiliary marking function, performs intelligent diagnosis of compound BPPV eye shake, and completes type recognition, greatly reducing doctors' workload.
2. The BPPV eye-shake classification system extracts an optical-flow sequence from the video with the LiteFlowNet2 algorithm, better capturing the motion information between consecutive frames of the eye-shake video and overcoming the subjectivity and limitations of traditional motion-trajectory features; the BPPV eye-shake classification model is built from EfficientNet and Bi-LSTM, realizing fine-grained classification of compound eye shake. The system is simple to operate, places low demands on hardware, and has high value in clinical application.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the invention is defined by the claims.

Claims (10)

1. A method for modeling a BPPV eye-shake classification model, the method comprising:
acquiring an eye shake video;
positioning and marking pupil areas in the eye shake video;
calculating the optical flow between the continuous frames according to the eye shake video marked with the pupil area, and obtaining a corresponding optical flow frame sequence;
and training the BPPV eye-shake classification model by taking the optical flow frame sequence as input and the BPPV eye-shake type as a label to obtain the trained BPPV eye-shake classification model.
2. The method for modeling a BPPV eye-shake classification model according to claim 1, wherein the acquiring an eye-shake video comprises:
and acquiring eye shake video in an MPG format or an AVI format recorded by an infrared video eye movement recorder.
3. The BPPV eye-shake classification model modeling method according to claim 1, wherein after the acquiring an eye-shake video, the method further comprises:
removing invalid frames in the eye shake video, compressing the eye shake video, and obtaining the compressed eye shake video;
the positioning and marking the pupil area in the eye shake video comprises the following steps:
and positioning and marking the pupil area in the compressed eye-shake video.
4. The BPPV eye-shake classification model modeling method according to claim 3, wherein the removing the invalid frame in the eye-shake video, compressing the eye-shake video, and obtaining the compressed eye-shake video comprises:
identifying valid frames and invalid frames in the eye-shake video by adopting an invalid frame identification model;
and sequentially recombining the valid frames of the eye-shake video to obtain the compressed eye-shake video.
5. The BPPV eye-shake classification model modeling method of claim 4, wherein prior to said identifying valid and invalid frames in the eye-shake video using an invalid frame identification model, the method further comprises:
formulating invalid-frame removal as a binary classification task, and training the invalid frame identification model with invalid and valid frames collected in advance.
6. The BPPV eye-shake classification model modeling method of claim 4, wherein the invalid frame identification model comprises:
the invalid frame identification network is used for classifying frames in the eye shake video to obtain valid frames and invalid frames;
and the pupil area calculation module is used for calculating the pupil area in the valid frames determined by the invalid frame identification network by adopting a Hough circle transform algorithm, and determining a valid frame as invalid when the pupil area is smaller than a preset area threshold.
7. The method of modeling a BPPV eye-shake classification model according to claim 1, wherein locating and marking pupil areas in the eye-shake video comprises:
pupil region positioning algorithm based on random ellipse fitting, positioning and marking pupil region in the eye shake video, comprising:
converting the video frame of the eye shake video into a gray scale map;
processing the gray map using gaussian filtering;
eliminating white spots in pupils by adopting an opening operation in graphics;
eliminating fine black interfering objects such as eyelashes by adopting a closing operation in graphics;
extracting edge features in the image to obtain a binarized image;
randomly generating a certain number of ellipses, and evaluating the fitness of each ellipse;
screening out an optimal ellipse fitting result according to the fitness evaluation result;
and determining the position and the size of the pupil through the best ellipse fitting result, and marking the pupil.
8. The BPPV eye-shake classification model modeling method according to claim 1, wherein calculating optical flow between successive frames from the eye-shake video marked with the pupil area, and acquiring a corresponding optical flow frame sequence, comprises:
and calculating the optical flow between continuous frames in the eye shake video by adopting a LiteFlowNet2 algorithm according to the eye shake video marked with the pupil area, and obtaining a corresponding optical flow frame sequence.
9. The method for modeling a BPPV eye-shake classification model according to claim 1, wherein the BPPV eye-shake classification model comprises an EfficientNet sub-model and a Bi-LSTM sub-model connected at the end of the EfficientNet sub-model.
10. A BPPV eye-shake classification model modeling method according to any of claims 1-9, characterised in that the method further comprises:
and performing eye shake classification by adopting the trained BPPV eye shake classification model.
CN117351390A (en): BPPV eye-shake classification model modeling method. Application CN202311259719.2A, filed 2023-09-26, priority date 2023-09-26. Status: Pending.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311259719.2A 2023-09-26 2023-09-26 BPPV eye-shake classification model modeling method (published as CN117351390A)


Publications (1)

Publication Number Publication Date
CN117351390A 2024-01-05

Family

ID=89365983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311259719.2A BPPV eye-shake classification model modeling method 2023-09-26 2023-09-26 (Pending, published as CN117351390A)

Country Status (1)

Country Link
CN (1) CN117351390A (en)

Similar Documents

Publication Publication Date Title
Zunino et al. Video gesture analysis for autism spectrum disorder detection
CN109472781B (en) Diabetic retinopathy detection system based on serial structure segmentation
KR102097742B1 (en) System for Searching medical image using artificial intelligence and Driving method thereof
US20100086215A1 (en) Automated Facial Action Coding System
CN112580552B (en) Murine behavior analysis method and device
CN110428908B (en) Eyelid motion function evaluation system based on artificial intelligence
CN111785363A (en) AI-guidance-based chronic disease auxiliary diagnosis system
Ali et al. Video-based behavior understanding of children for objective diagnosis of autism
CN105279380A (en) Facial expression analysis-based depression degree automatic evaluation system
CN112927187A (en) Method for automatically identifying and positioning focal cortical dysplasia epileptic focus
CN111685740B (en) Heart function parameter detection method and device
CN114358194A (en) Gesture tracking based detection method for abnormal limb behaviors of autism spectrum disorder
CN114565957A (en) Consciousness assessment method and system based on micro expression recognition
Zhou et al. Automatic microaneurysms detection based on multifeature fusion dictionary learning
Ahmedt-Aristizabal et al. Vision-based mouth motion analysis in epilepsy: A 3d perspective
CN111738992A (en) Lung focus region extraction method and device, electronic equipment and storage medium
Thomas et al. Artificial neural network for diagnosing autism spectrum disorder
CN113642525A (en) Infant neural development assessment method and system based on skeletal points
Pediaditis et al. Vision-based human motion analysis in epilepsy-methods and challenges
CN117351390A (en) BPPV eye-shake classification model modeling method
CN115439920B (en) Consciousness state detection system and equipment based on emotional audio-visual stimulation and facial expression
Arnold et al. Indistinct frame detection in colonoscopy videos
Valenzuela et al. A spatio-temporal hypomimic deep descriptor to discriminate parkinsonian patients
CN113627255A (en) Mouse behavior quantitative analysis method, device, equipment and readable storage medium
CN111354458A (en) Touch interactive motion user feature extraction method based on general drawing task and auxiliary disease detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination