CN114617529A - Eyeball dizziness data identification method and system for eye shade equipment - Google Patents

Eyeball dizziness data identification method and system for eye shade equipment

Info

Publication number
CN114617529A
CN114617529A (application CN202210511473.2A)
Authority
CN
China
Prior art keywords
eyeball
motion
dizziness
data set
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210511473.2A
Other languages
Chinese (zh)
Other versions
CN114617529B (en)
Inventor
邢娟丽
韩鹏
杨悦
屈寅弘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zehnit Medical Technology Co ltd
First Affiliated Hospital of Medical College of Xian Jiaotong University
Original Assignee
Shanghai Zehnit Medical Technology Co ltd
First Affiliated Hospital of Medical College of Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zehnit Medical Technology Co ltd, First Affiliated Hospital of Medical College of Xian Jiaotong University filed Critical Shanghai Zehnit Medical Technology Co ltd
Priority to CN202210511473.2A priority Critical patent/CN114617529B/en
Publication of CN114617529A publication Critical patent/CN114617529A/en
Application granted granted Critical
Publication of CN114617529B publication Critical patent/CN114617529B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 Arrangements specially adapted for eye photography
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0082 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4863 Measuring or inducing nystagmus
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Ophthalmology & Optometry (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Eye Examination Apparatus (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides an eyeball vertigo data identification method and system for an eye-mask device. The method comprises: acquiring a first eyeball image data set through an image acquisition device, the data set comprising eyeball images captured while different users experience vertigo; constructing an eyeball vertigo recognition model; inputting the first eyeball image data set into the model for training, thereby obtaining the model trained to a convergence state; obtaining first eyeball image information of a first user; and inputting the first eyeball image information into the trained model to obtain a first identification result and a second identification result, wherein the first identification result is slow-phase motion data and the second identification result is fast-phase motion data.

Description

Eyeball dizziness data identification method and system for eye shade equipment
Technical Field
The invention relates to the technical field of data processing, and in particular to an eyeball vertigo data identification method and system for an eye-mask device.
Background
Nystagmus is an ocular disorder characterized by involuntary, rhythmic, back-and-forth eye movements; sufferers typically experience vertigo, tinnitus, and related symptoms during an episode.
At present, nystagmus is assessed mainly by hospital medical staff, either through direct observation or by acquiring images with Frenzel goggles, from which the frequency, direction, and speed of the nystagmus are determined.
However, in implementing the technical solution of the present application, the inventors found at least the following problem with the above technology:
in the prior art, the analysis and identification of eye-movement data from nystagmus patients relies mainly on medical staff observing the eyes directly or reviewing collected eye-movement images; judgment and identification depend on the physician's experience, so nystagmus data cannot be identified intelligently and efficiently.
Disclosure of Invention
The embodiments of the present application provide an eyeball vertigo data identification method and system for an eye-mask device, which address the technical problem that, in the prior art, the analysis and identification of eye-movement data from nystagmus patients relies mainly on direct observation by medical staff or on collected eye-movement images, so that judgment depends on physician experience and nystagmus data cannot be identified intelligently and efficiently.
In view of the above problems, embodiments of the present application provide an eyeball dizziness data identification method and system for an eye mask device.
In a first aspect of the embodiments of the present application, there is provided an eyeball vertigo data identification method for an eye-mask device, the method being applied to an eye-mask device communicatively connected with an image acquisition device, the method comprising: acquiring a first eyeball image data set through the image acquisition device, the data set comprising eyeball images captured while different users experience vertigo; constructing an eyeball vertigo recognition model; inputting the first eyeball image data set into the model for training to obtain the model trained to a convergence state; obtaining first eyeball image information of a first user; and inputting the first eyeball image information into the model to obtain a first identification result and a second identification result, wherein the first identification result is slow-phase motion data and the second identification result is fast-phase motion data.
In a second aspect of the embodiments of the present application, there is provided an eyeball vertigo data recognition system for an eye-mask device, the system comprising: a first obtaining unit, configured to acquire a first eyeball image data set through an image acquisition device, the data set comprising eyeball images captured while different users experience vertigo; a first construction unit, configured to construct an eyeball vertigo recognition model; a first processing unit, configured to input the first eyeball image data set into the model for training and obtain the model trained to a convergence state; a second obtaining unit, configured to obtain first eyeball image information of a first user; and a second processing unit, configured to input the first eyeball image information into the model to obtain a first identification result and a second identification result, wherein the first identification result is slow-phase motion data and the second identification result is fast-phase motion data.
In a third aspect of the embodiments of the present application, there is provided an eyeball vertigo data recognition system for an eye-mask device, comprising: a processor coupled to a memory, the memory storing a program that, when executed by the processor, causes the system to perform the steps of the method according to the first aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the method according to the first aspect.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
according to the technical scheme, eyeball image data of a patient user when dizziness and nystagmus occur are collected through the image collecting device in the eyeshade equipment, then an eyeball dizziness recognition model is built, the eyeball image data is adopted to train the model, the trained eyeball dizziness recognition model is obtained, then eyeball image data of the user who is subjected to eyeball dizziness data recognition at present when the user happens dizziness and nystagmus are driven into the model, and a recognition result is obtained. According to the embodiment of the application, eyeball data when a large amount of eyeballs are collected through image collection equipment is acquired, abundant training data are obtained, then an eyeball dizziness recognition model is built and model training is carried out, accurate analysis and recognition can be carried out on the model according to eyeball image information, accurate nystagmus data recognition results are output, fast-phase motion and slow-phase motion information in the image information are recognized, accurate data bases are provided for recognition of dizziness, nystagmus type information and the like, interference of other factors in the subjective recognition process of doctors is avoided, and the technical effects of intellectualization and high-efficiency analysis and recognition of eyeball dizziness are achieved.
The above description is only an overview of the technical solutions of the present application. To make the technical means of the present application more clearly understood, and to make its objects, features, and advantages more apparent, a detailed description is given below.
Drawings
Fig. 1 is a schematic flowchart of an eyeball vertigo data identification method for an eye mask device according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an eyeball vertigo recognition model constructed in the eyeball vertigo data recognition method for an eyeshade device according to the embodiment of the application;
fig. 3 is a schematic flow chart illustrating a process of acquiring first eyeball image information in an eyeball vertigo data identification method for an eyeshade device according to an embodiment of the application;
FIG. 4 is a schematic structural diagram of an eyeball dizziness data identification system for an eye mask device according to an embodiment of the application;
fig. 5 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of reference numerals: the system comprises a first obtaining unit 11, a first constructing unit 12, a first processing unit 13, a second obtaining unit 14, a second processing unit 15, an electronic device 300, a memory 301, a processor 302, a communication interface 303 and a bus architecture 304.
Detailed Description
The embodiments of the present application provide an eyeball vertigo data identification method and system for an eye-mask device, which solve the technical problem that, in the prior art, the analysis and identification of eye-movement data from nystagmus patients relies mainly on direct observation by medical staff or on collected eye-movement images, so that judgment depends on physician experience and nystagmus data cannot be identified intelligently and efficiently.
Summary of the application
Nystagmus is the most common sign of vertigo-related disease: an ocular disorder characterized by involuntary, rhythmic, back-and-forth eye movements, with patients typically experiencing vertigo, tinnitus, and related symptoms during an episode. Its pathogenesis mainly involves disease of the visual or central nervous system caused by trauma or infection. Signals travel from the peripheral vestibular receptors through the vestibular nerve to the brainstem vestibular nuclei; fibers from the vestibular nuclei form the medial longitudinal fasciculus, innervating the bilateral oculomotor nuclei and in turn the extraocular muscles. This is the vestibulo-ocular reflex loop, and damage to any part of it can cause nystagmus. At present, nystagmus is assessed mainly by hospital medical staff through direct observation or by acquiring images with Frenzel goggles, from which the frequency, direction, and speed of the nystagmus are determined. In the prior art, therefore, judgment and identification depend on physician experience, and nystagmus data cannot be identified intelligently and efficiently.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
acquiring a first eyeball image data set through an image acquisition device, wherein the first eyeball image data set comprises eyeball image data sets when different users are dizzy; constructing an eyeball dizziness recognition model; inputting the first eyeball image data set into the eyeball dizziness recognition model for training to obtain the eyeball dizziness recognition model trained to be in a convergence state; obtaining first eyeball image information of a first user; and inputting the first eyeball image information into the eyeball dizziness identification model to obtain a first identification result and a second identification result, wherein the first identification result is slow-phase motion data, and the second identification result is fast-phase motion data.
Having described the basic principles of the present application, the following embodiments will be described clearly and completely with reference to the accompanying drawings. It should be understood that the described embodiments are only some, not all, of the embodiments of the present application, and that the present application is not limited to the exemplary embodiments described herein. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application. It should further be noted that, for convenience of description, only the parts relevant to the present application are shown in the drawings.
Example one
As shown in fig. 1, an embodiment of the present application provides an eyeball dizziness data identification method for an eyeshade apparatus, which is applied to an eyeshade apparatus that is communicatively connected with an image acquisition device, and the method includes:
s100: acquiring a first eyeball image data set through the image acquisition device, wherein the first eyeball image data set comprises eyeball image data sets when different users are dizzy;
specifically, in the prior art, for acquiring and analyzing eyeball motion image data of vertigo and nystagmus patients, visual observation, analysis and identification are generally performed by medical staff in hospitals, or image acquisition, identification and analysis are performed in a darkroom through a Frenzel mirror or an nystagmus identification chart, so as to process nystagmus image data and analyze and obtain information such as types corresponding to the nystagmus of the patients. However, in the above image analysis process, the medical staff mainly performs subjective judgment based on medical experience, and certain errors exist, so that the image information cannot be accurately processed and analyzed.
The eye-mask device in the embodiment of the present application is a device worn by patients who may experience vertigo and nystagmus. It is communicatively connected with an image acquisition device, so that the user's eye-movement information can be captured anywhere at any time for real-time analysis and identification of vertigo and nystagmus. Illustratively, the image acquisition device is arranged inside the eye-mask device and may be any camera device, or combination of devices, in the prior art capable of capturing image information; it can acquire frontal or full-angle image data of the user's eyeballs while the eye mask is worn.
S200: constructing an eyeball dizziness recognition model;
as shown in fig. 2, step S200 in the method provided in the embodiment of the present application includes:
s210: adding a convolution connection layer in a hidden layer after the input layer;
s220: adding a support vector machine sub-network after the convolutional link layer;
s230: and adding an output layer behind the support vector machine sub-network to construct an eyeball dizziness recognition model.
Specifically, the eyeball vertigo recognition model constructed in the embodiment of the present application is a neural network model comprising an input layer, hidden processing layers, and an output layer. The input layer receives the eyeball image data; the processing layers identify, analyze, and classify the data, and the corresponding output is produced according to the supervised-training setup. The processing layers comprise a number of connected neurons, reflecting the way the human brain judges and analyzes: each neuron encodes a factor influencing the identification and analysis of nystagmus in the eyeball image information, and the connections between neurons carry the weight of each such factor. On this basis, the eyeball images are analyzed, the recognition result of the image analysis is obtained, and recognition of the nystagmus images is completed.
To identify eyeball images accurately, the eyeball vertigo recognition model provided by the embodiment of the present application comprises a multilayer neural network. A convolutional connection layer, based on a convolutional neural network (CNN), is added to the hidden layers after the input layer. Using the image convolution features corresponding to the nystagmus types in the eyeball images of the training data, this layer performs feature extraction and recognition, determines the nystagmus type corresponding to the eye-movement features in the eyeball images, and thereby completes the recognition and analysis of the eyeball images.
A support vector machine (SVM) sub-network is added after the convolutional connection layer. After the convolutional connection layer extracts convolution features from the eyeball image information, yielding eyeball-motion image features of several dimensions, the SVM sub-network classifies according to these features and finally identifies which kind of vertigo and nystagmus the current eyeball image corresponds to. The multidimensional eyeball-motion image features include nystagmus movement speed, movement direction, movement frequency, and the like.
Based on these multidimensional eyeball-motion image features, the SVM sub-network classifies the vertigo-nystagmus category corresponding to the eyeball image information. If linear classification is possible, the categories are separated directly by a linear function; illustratively, the motion data in the eyeball image information can be classified into slow-phase motion data and fast-phase motion data. If linear classification is not possible, a kernel function in the SVM sub-network maps the motion-data features to a high-dimensional feature space, where an optimal separating hyperplane separates the nonlinear data and completes the classification.
After the SVM sub-network is constructed, an output layer is built to output the predicted recognition result. Thus, after eyeball image information is input to the input layer, the convolutional connection layer extracts nystagmus features such as movement speed, movement direction, and movement frequency by convolving with the motion-feature kernels; the SVM sub-network then classifies the multidimensional motion features and produces the output result, completing the identification and analysis of the eyeball image information. A model for intelligent and accurate identification and analysis of eyeball image information is thereby constructed, achieving the technical effect of accurate analysis and recognition.
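The CNN-feature-extraction-plus-SVM-classification pipeline described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the hand-written convolution, the mean-pooling, the gradient kernels, and the synthetic "eyeball images" are all assumptions made for demonstration.

```python
import numpy as np
from sklearn.svm import SVC

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D cross-correlation, standing in for a conv layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def extract_features(image, kernels):
    """Pool each kernel's response map into one scalar feature per kernel."""
    return np.array([conv2d_valid(image, k).mean() for k in kernels])

# Hypothetical motion kernels: horizontal and vertical intensity gradients.
kernels = [np.array([[-1.0, 0.0, 1.0]] * 3),
           np.array([[-1.0, 0.0, 1.0]] * 3).T]

rng = np.random.default_rng(0)
# Synthetic images: class 0 varies along columns, class 1 along rows.
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        img = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
        if label == 1:
            img = img.T
        img += 0.01 * rng.standard_normal((16, 16))
        X.append(extract_features(img, kernels))
        y.append(label)

clf = SVC(kernel="rbf")  # RBF kernel handles linearly inseparable features
clf.fit(X, y)
```

With cleanly separable gradient features like these, the SVM recovers the two classes; in the patent's setting, the features would instead come from the trained convolutional connection layer.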
S300: inputting the first eyeball image data set into the eyeball dizziness recognition model for training to obtain the eyeball dizziness recognition model trained to be in a convergence state;
specifically, in the embodiment of the application, eyeball images of a plurality of vertigo and nystagmus patients obtained based on historically acquired data are used as training data, a set of medical diagnosis results corresponding to the eyeball image data acquired based on the historical data is used as training data for outputting classification results, a constructed vertigo eyeball recognition model is supervised and trained, and when the accuracy of output information of the model reaches a preset threshold value or a convergence state, the model is trained. Optimally, because the data volume in the training data in the embodiment of the application is not massive training data, in order to obtain the most accurate output result, the model training is completed when the model is trained to be in a convergence state.
Step S300 in the method provided in the embodiment of the present application includes:
s310: obtaining a first motion characteristic and a second motion characteristic through the convolution connecting layer according to the eyeball image;
s320: constructing a first training data set and a second training data set according to the first motion characteristic and the second motion characteristic respectively;
s330: and training the support vector machine sub-network to a convergence state according to the first training data set and the second training data set, and obtaining the eyeball dizziness recognition model trained to the convergence state.
Specifically, convolution feature extraction is performed, via the convolutional connection layer, on the eyeball image data sets collected historically from a number of users experiencing vertigo and nystagmus, yielding a first motion feature and a second motion feature. The first and second motion features correspond respectively to the fast-phase and slow-phase motion features of nystagmus.
In the prior art, nystagmus and vertigo can be analyzed and diagnosed from the fast-phase and slow-phase motion features. For example, a predominance of fast-phase movements may indicate jerk nystagmus, also called vestibular nystagmus; if the eyeballs move in a fast phase toward the healthy side and then return, drifting in a slow phase toward the affected side, this may indicate spontaneous nystagmus. This diagnostic process is existing medical knowledge and is not part of the content provided by the embodiment of the present application; the embodiment of the present application accurately identifies and analyzes the fast-phase and slow-phase motion features from the eyeball image information, for analysis and reference.
After the first and second motion features are obtained, they are used as the classification-result information of the SVM sub-network. A first training data set and a second training data set are constructed from the motion-feature data corresponding, respectively, to the first and second motion features in the eyeball image data set, and the SVM sub-network's classifier is trained until convergence, completing the training of the eyeball vertigo recognition model.
Step S310 in the method provided in the embodiment of the present application includes:
s311: obtaining a first motion convolution kernel and a second motion convolution kernel;
s312: and performing feature separation on the first eyeball image data set through the convolution connecting layer according to the first motion convolution kernel and the second motion convolution kernel to obtain a first motion feature and a second motion feature.
Specifically, as described above, the first and second motion features correspond respectively to the fast-phase and slow-phase motion features, and the motion-feature data within them are obtained by convolution feature extraction through the convolutional connection layer.
Specifically, the convolution-based extraction proceeds as follows: a first motion convolution kernel and a second motion convolution kernel are set based on the image-feature data corresponding to different nystagmus types in historical eyeball image information; the convolutional connection layer then performs feature separation on the eyeball images of the first eyeball image data set using these two kernels, obtaining the first and second motion features, which contain fast-phase and slow-phase motion-feature data respectively.
In the embodiment of the present application, the CNN-based convolutional connection layer, with motion-feature convolution kernels constructed from historically collected eyeball images, extracts convolution features from the eyeball image information. The motion-feature data corresponding to the fast-phase and slow-phase motion features can thus be obtained accurately and supplied to the subsequent SVM sub-network for classification, achieving the technical effect of accurate extraction of eye-movement feature data.
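As an illustration of kernel-based separation of slow-phase and fast-phase motion, the sketch below applies a simple `[1, -1]` difference (velocity) kernel to a synthetic sawtooth gaze trace and thresholds on speed. The trace shape, the kernel, and the threshold are assumptions for demonstration, not the patent's learned convolution kernels.

```python
import numpy as np

def synth_nystagmus(n_beats=5, slow_len=20, fast_len=4, amp=1.0):
    """Synthetic sawtooth gaze trace: slow drift up, quick reset down."""
    trace, pos = [], 0.0
    for _ in range(n_beats):
        trace.extend(np.linspace(pos, pos + amp, slow_len))  # slow phase
        trace.extend(np.linspace(pos + amp, pos, fast_len))  # fast phase
    return np.array(trace)

def separate_phases(trace, speed_threshold):
    """Difference kernel gives per-sample velocity; threshold on |speed|."""
    velocity = np.convolve(trace, [1.0, -1.0], mode="valid")
    fast_mask = np.abs(velocity) > speed_threshold
    return velocity[fast_mask], velocity[~fast_mask]

trace = synth_nystagmus()
fast_v, slow_v = separate_phases(trace, speed_threshold=0.1)
```

The fast-phase velocities come out large and negative (the resets), while the slow-phase velocities are small and positive, mirroring the fast/slow asymmetry the convolution kernels are meant to separate.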
Step S320 in the method provided in the embodiment of the present application includes:
s321: respectively carrying out speed analysis and direction analysis on the first motion characteristic and the second motion characteristic to obtain a first motion speed, a first motion direction, a second motion speed and a second motion direction;
s322: constructing a first training data set according to the first movement direction and the first movement speed;
s323: and constructing a second training data set according to the second movement direction and the second movement speed.
Specifically, after the motion feature data in the first motion feature and the second motion feature are obtained, a support vector machine sub-network is adopted to classify the motion feature data of the eyeball image, so that the identification result of the fast-phase motion data and the identification result of the slow-phase motion data are obtained separately.
Specifically, speed analysis and direction analysis are performed on the first motion feature and the second motion feature obtained by the convolution connection layer, yielding the first motion speed, the first motion direction, the second motion speed, and the second motion direction. A first training data set is then constructed from the first motion direction and the first motion speed, with the identification result of the corresponding first motion feature as its classification prediction information; a second training data set is constructed from the second motion direction and the second motion speed, with the identification result of the corresponding second motion feature as its classification prediction information. The identification results corresponding to the two training data sets serve as classification training data for the support vector machine sub-network, so that, once trained to convergence, the sub-network can accurately classify the motion feature data extracted by convolution in the convolution connection layer and output the classification identification result, completing the analysis and identification of the eyeball image information.
According to the embodiment of the application, a support vector machine sub-network is additionally constructed to classify the eyeball motion feature data extracted by convolution in the convolution connection layer. On the basis of the obtained eyeball motion feature data, both linearly separable and non-linearly-separable data can be classified accurately to obtain the identification result, providing medical workers with a more accurate basis for identifying eyeball image data and achieving the technical effect of accurately classifying nystagmus motion feature data.
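The training of the support vector machine sub-network on (speed, direction) feature rows can be sketched with a minimal linear SVM trained by sub-gradient descent on the hinge loss. All data, labels, and hyperparameters here are hypothetical stand-ins; the embodiment does not specify the SVM formulation or kernel.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Minimal linear SVM via sub-gradient descent on the hinge loss
    (a stand-in for the support vector machine sub-network).
    X: (n, d) rows of [speed, direction]; y: labels in {-1, +1},
    with +1 = fast phase and -1 = slow phase."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w + b) < 1:        # hinge-loss violation
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                 # regularization only
                w = (1 - lr * lam) * w
    return w, b

def predict(w, b, X):
    return np.sign(X @ w + b)

# Hypothetical training sets: fast-phase beats are high-speed, slow-phase
# drifts are low-speed; direction is encoded as +1 / -1.
rng = np.random.default_rng(1)
speed_fast = rng.uniform(8, 12, 50)
speed_slow = rng.uniform(0.5, 2, 50)
X = np.column_stack([np.concatenate([speed_fast, speed_slow]),
                     rng.choice([-1.0, 1.0], 100)])
y = np.concatenate([np.ones(50), -np.ones(50)])

w, b = train_linear_svm(X, y)
acc = np.mean(predict(w, b, X) == y)
```

For feature data that are not linearly separable, a kernelized SVM would replace the linear decision function, as the embodiment's reference to non-linearly-separable data suggests.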
S400: obtaining first eyeball image information of a first user;
as shown in fig. 3, step S400 in the method provided in the embodiment of the present application includes:
s410: monitoring the eyeball state of the first user in real time to obtain eyeball state information in a normal state;
s420: obtaining a predetermined fluctuation threshold;
s430: judging whether the comparison of the first eyeball image information and the eyeball state information in the normal state exceeds the preset fluctuation threshold value or not;
s440: triggering a first acquisition instruction when the predetermined fluctuation threshold is exceeded;
s450: and acquiring and identifying first eyeball image information of a first user according to the first acquisition instruction.
Particularly, when no nystagmus symptom occurs, patients with dizziness and nystagmus move their eyes in the same way as normal people and can perform free eye movement. Eyeball image data of a patient in the normal state could therefore be acquired by mistake while the patient wears the eyeshade device, which would increase the calculation cost of the model or affect the accuracy of the model's output information. The monitoring described below ensures that only image data from patients actually exhibiting nystagmus are acquired.
Specifically, while the eyeshade device is worn, the eyeball state of the first user is monitored in real time to obtain eyeball state information in the normal state, which includes normal eyeball rotation and the like; normal movement and nystagmus movement differ in movement direction, movement speed, and so on. A predetermined fluctuation threshold is set based on the motion feature data in nystagmus moving images, and illustratively comprises: an eye movement direction threshold, an eye movement speed threshold, and an eye movement frequency threshold.
Within the fluctuation threshold, the eyeball of the first user can be considered to be in the normal state; if the fluctuation threshold is exceeded, the eyeball of the first user can be considered to be in tremor. At that moment a first acquisition instruction is triggered, image data of the first user's eyeball is acquired in real time, and the acquired first eyeball image information is identified with the eyeball dizziness recognition model.
According to the embodiment of the application, the fluctuation threshold is set from historical nystagmus motion feature data and normal eye-movement feature data, and eyeball image information is collected for recognition only when the eye movement of a user wearing the eyeshade device exceeds the fluctuation threshold. Obtaining the nystagmus recognition result in this way effectively saves the calculation cost of the model and avoids both the waste caused by recognizing normal images and the impact on model recognition precision.
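The threshold-triggered acquisition of steps S410-S450 can be sketched as follows; the baseline values, threshold values, and feature names are hypothetical.

```python
# Hypothetical normal-state baseline and allowed deviations (deg/s, Hz).
NORMAL_STATE = {"speed": 2.0, "frequency": 0.5}
FLUCTUATION_THRESHOLD = {"speed": 5.0, "frequency": 1.5}

def exceeds_threshold(observation, baseline=NORMAL_STATE,
                      threshold=FLUCTUATION_THRESHOLD):
    """Return True when any monitored eye-movement feature deviates from
    the normal-state baseline by more than its fluctuation threshold."""
    return any(abs(observation[k] - baseline[k]) > threshold[k]
               for k in threshold)

def monitor(observations):
    """Yield only the observations that trigger a first acquisition
    instruction, so normal-state frames are never stored or recognized."""
    for obs in observations:
        if exceeds_threshold(obs):
            yield obs  # trigger acquisition + model recognition

stream = [{"speed": 2.5, "frequency": 0.6},   # normal -> ignored
          {"speed": 9.0, "frequency": 3.0},   # nystagmus-like -> acquired
          {"speed": 1.8, "frequency": 0.4}]   # normal -> ignored
acquired = list(monitor(stream))              # only the second frame
```

Only frames passing this gate would be forwarded to the eyeball dizziness recognition model, which is the source of the claimed savings in calculation cost.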
S500: and inputting the first eyeball image information into the eyeball dizziness recognition model to obtain a first recognition result and a second recognition result, wherein the first recognition result is slow-phase motion data, and the second recognition result is fast-phase motion data.
Specifically, after the first eyeball image information is acquired based on the first acquisition instruction, it is input into the trained eyeball dizziness recognition model for recognition and analysis: the motion feature data are first extracted by convolution in the convolution connection layer, and the extracted motion feature data are then classified by the support vector machine sub-network to obtain the first recognition result and the second recognition result. The first recognition result is the slow-phase motion data of the nystagmus movement and the second recognition result is the fast-phase motion data, completing the analysis and identification of the eyeball image information.
After step S450 in the method provided in the embodiment of the present application, step S460 is further included, and step S460 includes:
s461: obtaining sleep information of the first user;
s462: acquiring an acquisition frequency according to the first acquisition instruction;
s463: acquiring a correlation function between the sleep information and the acquisition frequency according to the sleep information and the acquisition frequency;
s464: determining a first degree of association according to the association function;
s465: and when the first association degree exceeds a preset threshold value, first reminding information is obtained, and the first reminding information is used for reminding the first user of insufficient sleep.
Specifically, the sleep information of the first user is obtained through sensors arranged in the eyeshade device that detect head movement and heart rate. These sensors monitor whether the first user is in a sleep state, the sleep time, the sleep depth, and similar information, which is integrated to obtain the sleep information.
Symptoms such as dizziness and nystagmus of the first user affect the health of the user's nervous system and, in turn, the first user's sleep quality. During normal daytime eye use, eyeball image information is collected whenever nystagmus occurs, that is, whenever the eye movement features exceed the predetermined fluctuation threshold. The number of times the first acquisition instruction is triggered therefore corresponds to the number of nystagmus episodes of the first user, and the number and duration of nystagmus episodes are inversely related to the user's sleep quality. Based on this, an association function between the sleep information and the acquisition frequency is constructed: in the association function, the more frequently the first user's eyeball image information is acquired, the worse the sleep quality corresponding to the sleep information.
A first association degree, that is, the association degree for the first user's current acquisition frequency, is determined from the association function, and the sleep quality information of the first user is obtained from it. When the first association degree exceeds a predetermined threshold, first reminding information is obtained to remind the first user of insufficient sleep. For example, the predetermined threshold may be a threshold of the acquisition frequency corresponding to a sleep time of less than 7 hours, a threshold of sleep depth, and the like, but is not limited thereto.
According to the embodiment of the application, by constructing the association function between the frequency of eyeball image information acquisition and the quality of the user's sleep information, the corresponding sleep information can be obtained from the current user's acquisition frequency to judge whether the current user suffers from insufficient sleep and remind the user accordingly. This raises the user's attention to sleep quality and promotes the user's enthusiasm for treating dizziness and nystagmus.
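Steps S461-S465 can be sketched as below. The functional form of the association function and the thresholds are hypothetical; the embodiment only states that acquisition frequency and sleep quality are inversely related and gives "sleep time under 7 hours" as an example threshold.

```python
def association_degree(acquisitions_per_day, sleep_hours, sleep_depth):
    """Hypothetical association function: more nystagmus episodes
    (acquisitions) and poorer sleep give a larger association degree."""
    sleep_quality = sleep_hours / 8.0 * sleep_depth  # crude quality score
    return acquisitions_per_day / max(sleep_quality, 1e-6)

def sleep_reminder(acquisitions_per_day, sleep_hours, sleep_depth,
                   threshold=10.0):
    """Return the first reminding information when the first association
    degree exceeds the predetermined threshold, else None."""
    degree = association_degree(acquisitions_per_day, sleep_hours,
                                sleep_depth)
    if degree > threshold and sleep_hours < 7:
        return "Reminder: insufficient sleep detected"
    return None

# A user with frequent episodes and short, shallow sleep gets reminded;
# a user with few episodes and a full night's sleep does not.
msg = sleep_reminder(acquisitions_per_day=12, sleep_hours=5.0,
                     sleep_depth=0.6)
```

Any monotone function with this inverse relationship would satisfy the description; the ratio above is only one possible choice.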
In summary, the embodiment of the present application acquires a large amount of eyeball data during eyeball vertigo through the image acquisition device to obtain rich training data, then constructs an eyeball dizziness recognition model and performs model training, training the convolution connection layer and the support vector machine sub-network of the model separately so that the model can accurately analyze and recognize eyeball image information and output accurate nystagmus data classification results. The model recognizes the fast-phase and slow-phase motion information in the image information, provides an accurate data basis for identifying vertigo and nystagmus type information, and avoids the interference of other factors present in a doctor's subjective recognition. By starting to collect nystagmus image information only when nystagmus occurs to the user, the method saves calculation cost and improves nystagmus image recognition precision, achieving the technical effects of intelligently and efficiently analyzing and recognizing nystagmus.
Example two
Based on the same inventive concept as the eyeball vertigo data identification method for the eyeshade apparatus in the foregoing embodiment, as shown in fig. 4, the present embodiment provides an eyeball vertigo data identification system for the eyeshade apparatus, wherein the system comprises:
a first obtaining unit 11, where the first obtaining unit 11 is configured to obtain a first eyeball image data set by an image collecting device, where the first eyeball image data set includes eyeball image data sets when different users are dizzy;
a first construction unit 12, wherein the first construction unit 12 is used for constructing an eyeball dizziness recognition model;
a first processing unit 13, where the first processing unit 13 is configured to input the first eyeball image data set into the eyeball vertigo recognition model for training, and obtain the eyeball vertigo recognition model trained to a convergence state;
a second obtaining unit 14, wherein the second obtaining unit 14 is used for obtaining the first eyeball image information of the first user;
a second processing unit 15, where the second processing unit 15 is configured to input the first eyeball image information into the eyeball dizziness recognition model, and obtain a first recognition result and a second recognition result, where the first recognition result is slow-phase motion data, and the second recognition result is fast-phase motion data.
Further, the system further comprises:
a second construction unit for adding a convolution connection layer in an implied layer following the input layer;
a third building unit for adding a support vector machine sub-network after the convolutional connecting layer;
and the fourth construction unit is used for adding an output layer behind the support vector machine sub-network to construct an eyeball vertigo recognition model.
Further, the system further comprises:
a third processing unit, configured to obtain a first motion feature and a second motion feature through the convolution connection layer according to the eyeball image;
a fifth construction unit, configured to construct a first training data set and a second training data set according to the first motion feature and the second motion feature, respectively;
a fourth processing unit, configured to train the support vector machine sub-network to a convergence state according to the first training data set and the second training data set, and obtain the vertigo eyeball recognition model trained to the convergence state.
Further, the system further comprises:
a fifth processing unit, configured to perform speed analysis and direction analysis on the first motion characteristic and the second motion characteristic, respectively, to obtain a first motion speed, a first motion direction, a second motion speed, and a second motion direction;
a sixth construction unit for constructing a first training data set from the first movement direction and the first movement speed;
a seventh construction unit for constructing a second training data set from the second movement direction and the second movement speed.
Further, the system further comprises:
a third obtaining unit configured to obtain a first motion convolution kernel and a second motion convolution kernel;
a sixth processing unit, configured to perform feature separation on the first eyeball image data set through the convolution connection layer according to the first motion convolution kernel and the second motion convolution kernel, so as to obtain a first motion feature and a second motion feature.
Further, the system further comprises:
a fourth obtaining unit, configured to monitor an eyeball state of the first user in real time, and obtain eyeball state information in a normal state;
a fifth obtaining unit configured to obtain a predetermined fluctuation threshold;
a first judging unit configured to judge whether or not the first eyeball image information exceeds the predetermined fluctuation threshold value as compared with the eyeball state information in the normal state;
a seventh processing unit for triggering a first acquisition instruction when the predetermined fluctuation threshold is exceeded;
and the eighth processing unit is used for acquiring and identifying the first eyeball image information of the first user according to the first acquisition instruction.
Further, the system further comprises:
a sixth obtaining unit, configured to obtain sleep information of the first user;
a seventh obtaining unit, configured to obtain a collection frequency according to the first collection instruction;
a ninth processing unit, configured to obtain an association function between the sleep information and the acquisition frequency according to the sleep information and the acquisition frequency;
a tenth processing unit, configured to determine a first degree of association according to the association function;
an eighth obtaining unit, configured to obtain first reminding information when the first association degree exceeds a predetermined threshold, where the first reminding information is used to remind the first user that the first user has insufficient sleep.
Exemplary electronic device
The electronic device of the embodiment of the present application is described below with reference to fig. 5.
based on the same inventive concept as the eyeball vertigo data identification method for the eyeshade apparatus in the foregoing embodiment, the present embodiment also provides an eyeball vertigo data identification system for the eyeshade apparatus, which comprises: a processor coupled to a memory, the memory for storing a program that, when executed by the processor, causes the system to perform the steps of the method of embodiment one.
The electronic device 300 includes: processor 302, communication interface 303, memory 301. Optionally, the electronic device 300 may also include a bus architecture 304. Wherein, the communication interface 303, the processor 302 and the memory 301 may be connected to each other through a bus architecture 304; the bus architecture 304 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus architecture 304 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
Processor 302 may be a CPU, microprocessor, ASIC, or one or more integrated circuits for controlling the execution of programs in accordance with the teachings of the present application.
The communication interface 303 may be any device, such as a transceiver, for communicating with other devices or communication networks, such as an ethernet, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), a wired access network, and the like.
The memory 301 may be a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disk storage, optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may be self-contained and coupled to the processor through the bus architecture 304. The memory may also be integral to the processor.
The memory 301 is used for storing computer-executable instructions for executing the present application, and is controlled by the processor 302 to execute. The processor 302 is configured to execute the computer-executable instructions stored in the memory 301, so as to implement the eyeball vertigo data identification method for an eye-shield apparatus provided by the above-mentioned embodiment of the present application.
Optionally, the computer-executable instructions in this embodiment may also be referred to as application program codes, which is not specifically limited in this embodiment.
The embodiment of the application acquires a large amount of eyeball data during eyeball vertigo through the image acquisition device to obtain rich training data, then constructs an eyeball dizziness recognition model and performs model training, training the convolution connection layer and the support vector machine sub-network of the model separately so that the model can accurately analyze and recognize eyeball image information and output accurate nystagmus data classification results. The model recognizes the fast-phase and slow-phase motion information in the image information, provides an accurate data basis for identifying vertigo and nystagmus type information, and avoids the interference of other factors present in a doctor's subjective recognition. By starting to collect nystagmus image information only when nystagmus occurs to the user, the method saves calculation cost and improves nystagmus image recognition precision, achieving the technical effects of intelligently and efficiently analyzing and recognizing nystagmus.
Those of ordinary skill in the art will understand that: the various numbers of the first, second, etc. mentioned in this application are only used for the convenience of description and are not used to limit the scope of the embodiments of this application, nor to indicate the order of precedence. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one" means one or more. At least two means two or more. "at least one," "any," or similar expressions refer to any combination of these items, including any combination of item(s) or item(s). For example, at least one (one ) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
The various illustrative logical units and circuits described in this application may be implemented or operated upon by design of a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be disposed in a terminal. In the alternative, the processor and the storage medium may reside in different components within the terminal. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations.

Claims (10)

1. An eyeball dizziness data identification method for an eyeshade device, which is characterized by being applied to the eyeshade device which is in communication connection with an image acquisition device, and the method comprises the following steps:
acquiring a first eyeball image data set through the image acquisition device, wherein the first eyeball image data set comprises eyeball image data sets when different users are dizzy;
constructing an eyeball dizziness recognition model;
inputting the first eyeball image data set into the eyeball dizziness recognition model for training to obtain the eyeball dizziness recognition model trained to be in a convergence state;
obtaining first eyeball image information of a first user;
and inputting the first eyeball image information into the eyeball dizziness identification model to obtain a first identification result and a second identification result, wherein the first identification result is slow-phase motion data, and the second identification result is fast-phase motion data.
2. The method of claim 1, wherein the constructing an eye vertigo recognition model comprises:
adding a convolution connection layer in a hidden layer after the input layer;
adding a support vector machine sub-network after the convolution connection layer;
and adding an output layer behind the support vector machine sub-network to construct an eyeball vertigo recognition model.
3. The method of claim 2, wherein said inputting said first eye image data set into said eye vertigo recognition model for training to obtain said eye vertigo recognition model trained to a convergent state comprises:
obtaining a first motion characteristic and a second motion characteristic through the convolution connecting layer according to the eyeball image;
constructing a first training data set and a second training data set according to the first motion characteristic and the second motion characteristic respectively;
and training the support vector machine sub-network to a convergence state according to the first training data set and the second training data set, and obtaining the eyeball dizziness recognition model trained to the convergence state.
4. The method of claim 3, wherein constructing a first training data set and a second training data set from the first motion feature and the second motion feature, respectively, comprises:
respectively carrying out speed analysis and direction analysis on the first motion characteristic and the second motion characteristic to obtain a first motion speed, a first motion direction, a second motion speed and a second motion direction;
constructing a first training data set according to the first movement direction and the first movement speed;
and constructing a second training data set according to the second movement direction and the second movement speed.
5. The method of claim 3, wherein the obtaining a first motion characteristic and a second motion characteristic through the convolution connecting layer according to the eyeball image comprises:
obtaining a first motion convolution kernel and a second motion convolution kernel;
and performing feature separation on the first eyeball image data set through the convolution connecting layer according to the first motion convolution kernel and the second motion convolution kernel to obtain a first motion feature and a second motion feature.
6. The method of claim 1, wherein prior to obtaining the first eye image information of the first user, further comprising:
monitoring the eyeball state of the first user in real time to obtain eyeball state information in a normal state;
obtaining a predetermined fluctuation threshold;
judging whether the comparison of the first eyeball image information and the eyeball state information in the normal state exceeds the preset fluctuation threshold value or not;
triggering a first acquisition instruction when the predetermined fluctuation threshold is exceeded;
and acquiring and identifying the first eyeball image information of the first user according to the first acquisition instruction.
7. The method of claim 6, wherein after triggering a first acquisition instruction when the predetermined fluctuation threshold is exceeded, further comprising:
obtaining sleep information of the first user;
acquiring an acquisition frequency according to the first acquisition instruction;
acquiring a correlation function between the sleep information and the acquisition frequency according to the sleep information and the acquisition frequency;
determining a first degree of association according to the association function;
and when the first association degree exceeds a preset threshold value, first reminding information is obtained, and the first reminding information is used for reminding the first user of insufficient sleep.
8. An eyeball dizziness data recognition system for eye shade equipment, the system comprising:
a first obtaining unit, configured to acquire a first eyeball image data set through an image acquisition device, the first eyeball image data set comprising eyeball image data sets of different users experiencing dizziness;
a first construction unit, configured to construct an eyeball dizziness recognition model;
a first processing unit, configured to input the first eyeball image data set into the eyeball dizziness recognition model for training, and obtain the eyeball dizziness recognition model trained to a convergence state;
a second obtaining unit, configured to obtain first eyeball image information of a first user;
and a second processing unit, configured to input the first eyeball image information into the eyeball dizziness recognition model to obtain a first recognition result and a second recognition result, wherein the first recognition result is slow-phase motion data and the second recognition result is fast-phase motion data.
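The two outputs of claim 8 reflect the classical structure of nystagmus: a slow drift of the eye (slow phase) interrupted by quick corrective jumps (fast phase). Absent details of the trained model, the split can be illustrated with a conventional velocity-threshold rule over an eye-position trace; the sample rate, threshold, and trace below are assumptions, not the patented model.

```python
# Illustrative slow-/fast-phase separation: label each inter-sample interval
# of a horizontal eye-position trace by its angular velocity. Intervals above
# the velocity threshold are fast-phase (saccadic) motion; the rest are
# slow-phase drift.

def split_phases(positions, sample_rate_hz, velocity_threshold):
    """Partition inter-sample velocities (deg/s) into slow- and fast-phase lists."""
    slow, fast = [], []
    for i in range(1, len(positions)):
        velocity = (positions[i] - positions[i - 1]) * sample_rate_hz
        (fast if abs(velocity) > velocity_threshold else slow).append(velocity)
    return slow, fast

# Assumed 100 Hz trace (degrees): slow 0.1-deg/sample drift, then a 2-deg jump.
trace = [0.0, 0.1, 0.2, 0.3, 2.3, 2.4]
slow, fast = split_phases(trace, sample_rate_hz=100, velocity_threshold=100)
print(len(slow), len(fast))
```

The patented system would obtain these two data streams from the trained recognition model rather than from a fixed threshold, but the outputs have the same shape: one slow-phase series and one fast-phase series.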
9. An eyeball dizziness data recognition system for eye shade equipment, comprising: a processor coupled to a memory, the memory being configured to store a program which, when executed by the processor, causes the system to perform the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN202210511473.2A 2022-05-12 2022-05-12 Eyeball dizziness data identification method and system for eye shade equipment Active CN114617529B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210511473.2A CN114617529B (en) 2022-05-12 2022-05-12 Eyeball dizziness data identification method and system for eye shade equipment


Publications (2)

Publication Number Publication Date
CN114617529A true CN114617529A (en) 2022-06-14
CN114617529B CN114617529B (en) 2022-08-26

Family

ID=81905913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210511473.2A Active CN114617529B (en) 2022-05-12 2022-05-12 Eyeball dizziness data identification method and system for eye shade equipment

Country Status (1)

Country Link
CN (1) CN114617529B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1457013A (en) * 2003-06-18 2003-11-19 翁天祥 Video frequency electronystagmograph instrument and automatic generation of video frequency electronystagmograph
US20120179636A1 (en) * 2011-01-11 2012-07-12 The Royal Institution For The Advancement Of Learning / Mcgill University Method and System for Automatically Classifying and Identifying Vestibulo-Ocular Responses
US20180049663A1 (en) * 2014-06-17 2018-02-22 Seoul National University Hospital Apparatus for diagnosing and treating dizziness
WO2018164361A1 (en) * 2017-03-06 2018-09-13 순천향대학교 산학협력단 Nystagmus video test device and method using infrared camera
CN108937844A (en) * 2018-06-06 2018-12-07 苏州桑德欧声听觉技术有限公司 For manufacturing method, the mobile terminal of nystagmus test mobile terminal
CN110020597A (en) * 2019-02-27 2019-07-16 中国医学科学院北京协和医院 It is a kind of for the auxiliary eye method for processing video frequency examined of dizziness/dizziness and system
CN111191639A (en) * 2020-03-12 2020-05-22 上海志听医疗科技有限公司 Vertigo type identification method, device, medium and electronic equipment based on eye shake
KR20210140808A (en) * 2020-05-13 2021-11-23 삼육대학교산학협력단 A smart inspecting system, method and program for nystagmus using artificial intelligence
CN113947805A (en) * 2021-09-27 2022-01-18 华东师范大学 Eye shake type classification method based on video image
CN114091621A (en) * 2021-12-01 2022-02-25 上海市第六人民医院 BPPV eye shake signal labeling method


Also Published As

Publication number Publication date
CN114617529B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
EP3307165A1 (en) Method and system for assessing mental state
CN109887568A (en) Based on the health management system arranged of doctor's advice
US20200205709A1 (en) Mental state indicator
US11337639B1 (en) System for mental stress assessment
CN109119172B (en) Human behavior detection method based on bee colony algorithm
WO2020190648A1 (en) Method and system for measuring pupillary light reflex with a mobile phone
Lu et al. Automated strabismus detection for telemedicine applications
WO2019075522A1 (en) Risk indicator
Yadav et al. Computer‐aided diagnosis of cataract severity using retinal fundus images and deep learning
CN110755091A (en) Personal mental health monitoring system and method
CN117690583B (en) Internet of things-based rehabilitation and nursing interactive management system and method
Sharma et al. DepCap: a smart healthcare framework for EEG based depression detection using time-frequency response and deep neural network
Rescio et al. Ambient and wearable system for workers’ stress evaluation
US11779260B2 (en) Cognitive function evaluation method, cognitive function evaluation device, and non-transitory computer-readable recording medium in which cognitive function evaluation program is recorded
CN114617529B (en) Eyeball dizziness data identification method and system for eye shade equipment
CN116759075A (en) Psychological disorder inquiry method, device, equipment and medium
CN114470719A (en) Full-automatic posture correction training method and system
Vasu et al. A survey on bipolar disorder classification methodologies using machine learning
WO2021120078A1 (en) Seizure early-warning method and system
CN117224080B (en) Human body data monitoring method and device for big data
Faris Cataract Eye Detection Using Deep Learning Based Feature Extraction with Classification
WO2024095261A1 (en) System and method for diagnosis and treatment of various movement disorders and diseases of the eye
CN115762754A (en) Pain grade determination method and device, storage medium and terminal
Kowshik et al. Precise and Prompt Identification of Jaundice in Infants using AI
CN117224080A (en) Human body data monitoring method and device for big data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant