CN112869746B - Method and device for detecting muscle force of lifting eyelid - Google Patents

Method and device for detecting muscle force of lifting eyelid

Info

Publication number
CN112869746B
CN112869746B (application number CN202011249622.XA)
Authority
CN
China
Prior art keywords
eyelid
upper eyelid
muscle
height
face picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011249622.XA
Other languages
Chinese (zh)
Other versions
CN112869746A (en)
Inventor
熊柯
张帆
郑毅旭
郭学东
刘明迪
王陆权
覃楚渝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southern Hospital Southern Medical University
Original Assignee
Southern Hospital Southern Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southern Hospital Southern Medical University
Priority to CN202011249622.XA
Publication of CN112869746A
Application granted
Publication of CN112869746B
Active legal status (current)
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/22 Ergometry; Measuring muscular strength or the force of a muscular blow
    • A61B5/224 Measuring muscular strength
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/22 Ergometry; Measuring muscular strength or the force of a muscular blow
    • A61B5/221 Ergometry, e.g. by using bicycle type apparatus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for detecting the strength of the upper eyelid lifting muscle. The method comprises the following steps: inputting a face picture and transforming it to obtain a transformed face picture; taking the transformed face picture as the input of an upper eyelid detection model, which outputs the height h1 between the upper and lower eyelid margins and the movement height h2 of the eyebrow; and obtaining the upper eyelid lifting muscle force value P from h1 and h2. Through deep learning, the upper eyelid detection model automatically identifies the eyelids and the eyebrow, from which the distance h1 between the upper and lower eyelid margins and the height h2 of the up-and-down eyebrow movement are calculated, and the muscle force value P of the upper eyelid lifting muscle is finally computed accurately. After image training, the deep learning technique quickly and accurately identifies a person's eyes; the strength of the upper eyelid lifting muscle is then obtained by processing the data collected during eye movement with an algorithm, and the result is fast, objective, accurate and stable.

Description

Method and device for detecting muscle force of lifting eyelid
Technical Field
The invention relates to image processing technology, and in particular to a method and a device for detecting the strength of the upper eyelid lifting muscle.
Background
Ptosis is a common disease of the upper eyelid: in mild cases the drooping eyelid blocks part of the pupil, and in severe cases it covers the pupil completely, which affects appearance and visual function and can also cause congenital amblyopia in patients with ptosis. The disease refers specifically to an incomplete or complete loss of function of the levator palpebrae superioris (innervated by the oculomotor nerve) or of Müller's smooth muscle (innervated by the sympathetic nerves), so that the upper eyelid droops partially or totally. It can affect the patient's quality of life, so early detection is of great importance for timely intervention. Traditional clinical diagnosis relies on the subjective experience of the physician and requires close patient cooperation; if the patient is an infant or has a cognitive impairment, an accurate determination becomes even more difficult. The direct cause of the disease is insufficient strength of the upper eyelid lifting muscle.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide a method and a device for detecting the strength of the upper eyelid lifting muscle, so as to replace the traditional detection method and obtain the muscle strength rapidly, objectively and accurately.
To achieve this purpose, the technical solution of the invention is as follows:
in a first aspect, an embodiment of the present invention provides a method for detecting the strength of the upper eyelid lifting muscle, comprising:
inputting a face picture, and transforming the face picture to obtain a transformed face picture;
taking the transformed face picture as the input of an upper eyelid detection model, the upper eyelid detection model outputting the height h1 between the upper and lower eyelid margins and the movement height h2 of the eyebrow;
obtaining the upper eyelid lifting muscle force value P from the height h1 between the upper and lower eyelid margins and the eyebrow movement height h2.
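For orientation only, the following is a minimal end-to-end sketch of these three steps. It assumes an already trained upper_eyelid_model callable that returns h1 and h2 for one picture, approximates the transformation with a resize plus a pyramid downscale, and uses P = h1 - h2 as given later in the description and claims; all names are illustrative, not part of the patent.

```python
import cv2

def detect_levator_strength(face_bgr, upper_eyelid_model, size=(256, 256)):
    """End-to-end flow sketched from the three steps above.

    face_bgr: input face picture (BGR array).
    upper_eyelid_model: assumed callable returning (h1, h2) for one picture;
    in the patent this is a trained U-Net, sketched later in the description.
    """
    # Step 1: transform the face picture; the claims mention a pyramid scale
    # transformation, approximated here by a resize followed by cv2.pyrDown.
    transformed = cv2.pyrDown(cv2.resize(face_bgr, (size[0] * 2, size[1] * 2)))

    # Step 2: the detection model outputs h1 (distance between eyelid margins)
    # and h2 (eyebrow movement height).
    h1, h2 = upper_eyelid_model(transformed)

    # Step 3: the upper eyelid lifting muscle force value, P = h1 - h2.
    return h1 - h2
```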
In a second aspect, an embodiment of the present invention provides an apparatus for detecting muscle strength of an upper eyelid lifting muscle, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
In a third aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method described above.
Compared with the prior art, the invention has the beneficial effects that:
the upper eyelid detection model can automatically identify the eyelid and the eyebrow through deep learning, further calculate a distance h1 between the upper eyelid margin and the lower eyelid margin and a height h2 of the upper and lower eyebrow movement, and finally accurately calculate the muscle force value P of the upper eyelid. The invention can quickly and accurately identify the eyes of people after image training by applying the deep learning technology, then processes different data during eye movement by applying the algorithm to obtain the condition of improving the muscle strength of the eyelid, processes specific data obtained by a subsequent deep learning method by using a simple algorithm, overcomes the problems of complex image data processing and memory resource consumption, and obtains a quick, objective, accurate and stable result.
Drawings
Fig. 1 is a flowchart of a method for detecting the muscle force of the eyelid lifting muscle provided in embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of marking the upper and lower eyelid margins and the eyebrow center line after locating both eyes by the pupils, and with the eyes closed;
FIG. 3 is a picture of a ptosis patient compensating for eye opening with the frontalis muscle; here the size of the palpebral fissure is partly produced by frontalis contraction pulling on the skin, so it cannot truly reflect the strength of the upper eyelid lifting muscle;
FIG. 4 shows the true degree of ptosis when the frontalis muscle is immobilized (relaxed, exerting no force), revealing the obviously insufficient strength of the upper eyelid lifting muscle;
FIG. 5 is a schematic view of the height h1 of the upper and lower eyelid edges and the height h2 of the eyebrow movement;
FIGS. 6A-6C are diagrams of eyelid area initial positioning processing;
FIGS. 7a-7c are graphs showing the effect of noise reduction using Gabor;
FIG. 8 is a graph of the corresponding projections of FIGS. 7b-7c;
FIG. 9 is a flow chart for locating human eyes using deep learning;
FIG. 10 is a diagram of a neural network architecture;
fig. 11 is a schematic composition diagram of an apparatus for measuring the muscle strength of the eyelid lifting muscles according to embodiment 2 of the present invention.
Detailed Description
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1:
referring to fig. 1, the method for detecting the strength of the upper eyelid lifting muscle mainly comprises the following steps:
101. inputting a face picture, and transforming the face picture to obtain a transformed face picture;
102. the transformed human face picture is used as the input of an upper eyelid detection model, and the upper eyelid detection model outputs the height h1 of the edges of upper and lower eyelids and the movement height h2 of eyebrows;
103. the upper eyelid lifting muscle force value P is obtained according to the height h1 of the upper and lower eyelid edges and the height h2 of the movement of the eyebrows.
Patients with ptosis often over-contract the frontalis muscle and lift the eyebrows in order to raise the upper eyelid, as shown in figs. 2-5. In that case the distance h1 between the upper and lower eyelid margins cannot truly reflect the strength of the upper eyelid lifting muscle, because the frontalis muscle exerts an additional traction on the upper eyelid that enlarges h1, whereas the traditional method judges the muscle strength from the distance h1 alone, so the result deviates considerably. The inventors found through extensive study that the traction of the frontalis muscle on the upper eyelid can be represented by the height h2 of the up-and-down movement of the eyebrow; therefore, when measuring the strength of the upper eyelid lifting muscle, both the distance h1 between the upper and lower eyelid margins and the frontalis traction on the upper eyelid, represented by the eyebrow movement height h2, are required.
The upper eyelid detection model of this method automatically identifies the eyelids and the eyebrow through deep learning, then calculates the distance h1 between the upper and lower eyelid margins and the height h2 of the up-and-down eyebrow movement, and finally computes the muscle force value P of the upper eyelid lifting muscle accurately. After image training, the deep learning technique quickly and accurately identifies the person's eyes, and the strength of the upper eyelid lifting muscle is then obtained by processing the data collected during eye movement with an algorithm; the result is fast, objective, accurate and stable.
Specifically, the above upper eyelid lifting muscle force value is P = h1 - h2. In addition, to obtain a more accurate result, h1 and the eyebrow movement height h2 can be computed from pictures of the patient blinking several times, and the final strength P is calculated from the averages of h1 and h2.
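A minimal sketch of this calculation, assuming h1 and h2 have already been measured over several blinks; the function name and the sample values are illustrative only.

```python
from statistics import mean

def levator_strength(h1_values, h2_values):
    """Estimate the upper eyelid lifting muscle strength P = h1' - h2'.

    h1_values: distances between the upper and lower eyelid margins,
               measured over several blinks.
    h2_values: eyebrow movement heights over the same blinks.
    Averaging over repeated blinks, as the description suggests, smooths
    out per-frame measurement noise.
    """
    h1_mean = mean(h1_values)   # h1'
    h2_mean = mean(h2_values)   # h2'
    return h1_mean - h2_mean    # P = h1' - h2'

# Illustrative values only (same units as the measurements, e.g. pixels or mm)
print(levator_strength([9.8, 10.1, 9.9], [2.0, 2.2, 1.9]))
```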
To locate the human eyes, the traditional eye positioning method is implemented by a hand-crafted algorithm and generally comprises the following steps: pre-positioning of the approximate eyelid positions, noise removal, and precise positioning of the eyelids.
The method is roughly divided into the following steps:
(1) Initial positioning of the eyelid area
First, the image is eroded with a 3 x 3 operator and a Gabor transform is applied (fig. 6A). The upper half of the face is then projected after the Gabor transform to roughly obtain the horizontal coordinates of the two eyes (fig. 6B). Next, the maximum gray level of the two eye regions is found, and the eye regions are binarized using 0.92 times the minimum gray level as the threshold (fig. 6C). Finally, the binarized regions are detected.
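A rough sketch of this traditional pre-positioning pipeline is given below. The Gabor parameters, the band width around the eye row and the use of an inverted binary threshold are assumptions chosen for illustration, not values from the patent.

```python
import cv2
import numpy as np

def initial_eye_localization(gray_face):
    """Sketch of the traditional pre-positioning step on an 8-bit grayscale face."""
    # 1. Erode with a 3x3 operator to suppress small bright noise.
    eroded = cv2.erode(gray_face, np.ones((3, 3), np.uint8))

    # 2. Gabor filtering (kernel size and parameters are illustrative;
    #    a horizontally oriented kernel tends to emphasise eyelid edges).
    gabor = cv2.getGaborKernel((21, 21), 4.0, 0, 10.0, 0.5, 0)
    filtered = cv2.filter2D(eroded, -1, gabor)

    # 3. Project the upper half of the face to estimate the eyes' horizontal band;
    #    in this sketch the eyes are assumed to appear as a trough of the projection.
    upper = filtered[: gray_face.shape[0] // 2, :]
    row_proj = upper.sum(axis=1)
    eye_row = int(np.argmin(row_proj))

    # 4. Binarize a band around that row with 0.92 x the minimum gray value.
    band = gray_face[max(eye_row - 20, 0): eye_row + 20, :]
    thr = 0.92 * float(band.min())
    _, binary = cv2.threshold(band, thr, 255, cv2.THRESH_BINARY_INV)
    return eye_row, binary
```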
In this step, the choice of the gray threshold directly influences the final result. A neural network trained on a large amount of data locates the eyes in an image more accurately and greatly reduces the influence of the gray value on eye localization. The trained model also avoids the cumbersome and slow steps of the hand-crafted algorithm: the eyes can be predicted and located simply by feeding in the image, and thousands of images can be processed in tens of seconds. Deep learning is therefore a faster and more accurate way to locate the eyes.
(2) Noise removal
Gabor filtering is in fact already used in the initial positioning. After the face has been processed with the Gabor filter, the influence of noise is reduced, and the effect of illumination shadows is also attenuated.
Taking the right eye as an example, the images in fig. 7b and fig. 7c are projected to obtain the ordinate and abscissa of the eye; the projection curves are shown in fig. 8. The curve after Gabor processing is smoother, and the position of the trough is easier to determine.
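A short sketch of this projection-and-trough step, assuming the eye patch has already been Gabor-filtered as above; the smoother the projection curve, the more stable the argmin that locates the trough.

```python
import numpy as np

def eye_center_from_projections(eye_patch):
    """Locate an eye center as the troughs of row/column projections.

    eye_patch: grayscale patch around one eye (e.g. the Gabor-filtered
    right-eye region). The dark iris/eyelid zone shows up as a minimum in
    both projections, so the trough positions give the eye's coordinates.
    """
    row_proj = eye_patch.mean(axis=1)   # ordinate (y) profile
    col_proj = eye_patch.mean(axis=0)   # abscissa (x) profile
    y = int(np.argmin(row_proj))
    x = int(np.argmin(col_proj))
    return x, y
```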
Lighting and image noise are difficult problems that a hand-crafted algorithm must handle: they change the gray values of the image globally or locally and make the algorithmic positioning inaccurate. Deep learning does not need to consider this problem; once the model is trained, the eye positions are located simply by importing the picture, which is both faster and more accurate.
(3) Accurate positioning
The integral projection used in the second step is an improved projection method: instead of a plain projection over the region, a minimum neighborhood mean projection is used.
Of course, accurate positioning still requires some detailed work. For example, the inclination angle of the face can be estimated in advance so that the directionality of the Gabor transform can be exploited; the projection curve of the transformed face is then smoother and the eye position is easier to determine. This step is the fine positioning; many methods and details are involved, so a great deal of time and many steps are needed to achieve accurate positioning.
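One plausible reading of the "minimum neighborhood mean projection" is to average, for each column, a small window centered on that column's darkest pixel instead of summing the whole column; the sketch below assumes that interpretation and an illustrative window size, neither of which is specified in the patent.

```python
import numpy as np

def min_neighborhood_mean_projection(patch, k=5):
    """For each column, average a k-pixel window centered on the darkest
    pixel of that column, instead of summing the whole column.

    This keeps the projection sensitive to the dark eyelid/iris band while
    ignoring bright skin pixels that would dominate an ordinary integral
    projection. Both the interpretation and k are assumptions.
    """
    h, w = patch.shape
    half = k // 2
    proj = np.empty(w)
    for x in range(w):
        col = patch[:, x].astype(float)
        y_min = int(np.argmin(col))
        lo, hi = max(y_min - half, 0), min(y_min + half + 1, h)
        proj[x] = col[lo:hi].mean()
    return proj
```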
If a traditional algorithm is used for positioning, it works well when the face is in a fixed, frontal pose, but when the facial pose changes, accurately capturing the eye region becomes difficult. There are also methods that binarize the eyelid area, but under different illumination the brightness of the face changes, so the binarization threshold has only limited validity; in image processing, any operation that depends on a threshold is relatively troublesome to handle.
Therefore, in this embodiment, as shown in fig. 9, the eyes are located using deep learning. In the training stage, label information is first annotated on an existing face data set to form a training data set with a resolution of 256 × 256; the 256 × 256 training data set is then fed into a U-Net network, the network parameters are updated by stochastic gradient descent, and after multiple iterations a U-Net network model is obtained.
In this implementation, the key points comprise the two end points of the eyebrow and feature points around the upper and lower eyelids. The convolutional neural network structure is roughly as shown in fig. 10: a 256 × 256 × 3 image is input to the U-Net, where 3 denotes the three channels of the image; the convolution kernel size is 3 × 3; max pooling with a 2 × 2 window is used; all convolution kernels in the network diagram have stride 1 and the pooling has stride 2; every convolutional layer and fully connected layer is followed by an activation layer, and the activation function is softmax. The U-Net performs two tasks: face classification and regression of the positions of the 3 key points. The face classification task uses a cross-entropy loss function; to reduce sensitivity to abnormal samples and prevent gradient explosion, the key-point position regression task uses a smooth L1 loss function, so the loss function of the whole network is the weighted sum of the two loss functions.
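A minimal PyTorch sketch of this weighted two-task loss is shown below; the class name and the weight lambda_kp are assumptions, since the patent only specifies a weighted sum of a cross-entropy term and a smooth L1 term.

```python
import torch.nn as nn

class DualTaskLoss(nn.Module):
    """Weighted sum of a classification loss and a key-point regression loss.

    - face/background classification: cross-entropy
    - 3 key-point coordinate regression: smooth L1 (less sensitive to
      abnormal samples, helps prevent exploding gradients)
    The weight lambda_kp is an assumption; the patent only says "weighted sum".
    """
    def __init__(self, lambda_kp=1.0):
        super().__init__()
        self.cls_loss = nn.CrossEntropyLoss()
        self.kp_loss = nn.SmoothL1Loss()
        self.lambda_kp = lambda_kp

    def forward(self, cls_logits, cls_target, kp_pred, kp_target):
        # cls_logits: (N, 2)      face vs background
        # kp_pred/kp_target: (N, 3, 2)  x, y for upper eyelid, lower eyelid, eyebrow
        return self.cls_loss(cls_logits, cls_target) + \
               self.lambda_kp * self.kp_loss(kp_pred, kp_target)
```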
(1) Data set preparation
A number of face images are prepared, which may be taken from different angles and under different light intensities, and the eyes in these face images are then labeled. The more varied the data set, the stronger the generalization ability of the model obtained by deep-learning training.
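As a hedged illustration of what such a labeled sample could look like, the toy dataset class below pairs each 256 × 256 face crop with a face/background label and the 3 key points; the record layout, file paths and class name are assumptions, not part of the patent.

```python
import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image

class EyeKeypointDataset(Dataset):
    """Toy dataset: each sample is a 256 x 256 face crop, a face/background
    label, and the 3 key points (upper eyelid, lower eyelid, eyebrow).

    `samples` is a list of dicts such as (hypothetical layout):
      {"path": "faces/0001.png", "is_face": 1,
       "keypoints": [[120, 200], [120, 232], [120, 160]]}
    """
    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        rec = self.samples[i]
        img = Image.open(rec["path"]).convert("RGB").resize((256, 256))
        image = torch.from_numpy(np.asarray(img)).permute(2, 0, 1).float() / 255.0
        cls_label = torch.tensor(rec["is_face"], dtype=torch.long)
        kp_label = torch.tensor(rec["keypoints"], dtype=torch.float32)
        return image, cls_label, kp_label
```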
(2) Training and prediction
The network framework uses a U-Net. After the training parameters are set and a certain number of batches have been trained, a model is obtained. The model can then be used directly to predict on any face picture; the prediction yields an image containing only the data of the eye positions.
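A sketch of this training-and-prediction step under stated assumptions: the model is a U-Net-style network returning (classification logits, key-point coordinates), the criterion is a combined loss such as the DualTaskLoss sketched earlier, the dataset yields (image, class label, key points) tuples as in the toy EyeKeypointDataset above, and the batch size, learning rate, momentum and epoch count are illustrative.

```python
import torch
from torch.utils.data import DataLoader

def train_and_predict(model, train_set, criterion, face_tensor,
                      epochs=50, lr=1e-2):
    """Train with stochastic gradient descent, then predict one face picture.

    model:       assumed to return (cls_logits, kp_pred) for a batch of images
    train_set:   yields (image, cls_label, kp_label) tuples
    criterion:   combined loss, e.g. the DualTaskLoss sketched earlier
    face_tensor: a single (3, 256, 256) face picture to predict on
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loader = DataLoader(train_set, batch_size=16, shuffle=True)

    model.train()
    for _ in range(epochs):
        for images, cls_labels, kp_labels in loader:
            optimizer.zero_grad()
            cls_logits, kp_pred = model(images)
            loss = criterion(cls_logits, cls_labels, kp_pred, kp_labels)
            loss.backward()
            optimizer.step()

    model.eval()
    with torch.no_grad():                       # predict any face picture directly
        return model(face_tensor.unsqueeze(0))  # eye-region / key-point output
```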
(3) Obtaining specific data of the region of interest
From a group of face images, the height h1 between the upper and lower eyelid margins (measured along the vertical line through the pupil) and the eyebrow movement height h2 can be calculated from the person's blinking and the change of the eyelid area using the algorithm; the trained deep-learning model can also predict the size of the eye region under different conditions.
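A sketch of how h1 and h2 might be derived from the predicted key points of two frames (eyes open versus eyes closed); the frame pairing, the coordinate convention (y grows downward) and the pixel values are assumptions made for illustration.

```python
def eyelid_and_brow_heights(frame_open, frame_closed):
    """Compute h1 and h2 from key points predicted for two frames.

    Each frame is a dict of pixel coordinates (x, y), y growing downward,
    with keys 'upper_eyelid', 'lower_eyelid' and 'eyebrow', all taken on
    the vertical line through the pupil.
    h1: distance between eyelid margins in the eyes-open frame.
    h2: vertical travel of the eyebrow between the two frames, i.e. the
        frontalis traction that is subtracted from h1.
    """
    h1 = frame_open['lower_eyelid'][1] - frame_open['upper_eyelid'][1]
    h2 = abs(frame_open['eyebrow'][1] - frame_closed['eyebrow'][1])
    return h1, h2

# Illustrative coordinates only
h1, h2 = eyelid_and_brow_heights(
    {'upper_eyelid': (120, 200), 'lower_eyelid': (120, 232), 'eyebrow': (120, 160)},
    {'upper_eyelid': (120, 226), 'lower_eyelid': (120, 232), 'eyebrow': (120, 172)},
)
print(h1 - h2)   # P = h1 - h2, here in pixels
```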
(4) Predicting the muscle strength of the upper eyelid lifting muscle
Finally, these data are processed by the computer to obtain the specific and accurate strength of the upper eyelid lifting muscle, P = h1 - h2.
Therefore, compared with locating the human eyes with a hand-crafted algorithm, the deep learning method is not affected by lighting, so the eyes are located well and the resulting positions are more accurate.
To solve the problem of detecting the strength of the upper eyelid lifting muscle, the invention provides a positioning method that automatically identifies the human eyes based on deep learning: a neural network model trained on many samples automatically identifies the face, returns data such as the eye positions and areas, and finally yields the relevant information of the region of interest, thereby locating the eyes. The recognition is based on the whole image rather than on a particular area of the image, so positioning with the neural network model is more accurate without sacrificing speed. For ptosis, a hand-crafted algorithm may fail to locate the eyes, deviate, or be time-consuming, and the deep learning method is significant for handling these problems. The application also locates specific position points of the upper and lower eyelids and of the eyebrow, then uses a vertical-direction minimum/maximum pixel-point algorithm to find the position points on the vertical line through the pupil; once h1 and h2 are obtained, P = h1 - h2. In other words, the specific values produced by the deep learning stage are processed with a simple algorithm, which avoids complex image-data processing and memory consumption and provides an efficient and fast method for detecting the strength of the upper eyelid lifting muscle.
Example 2:
referring to fig. 11, the apparatus for detecting the muscle strength of the eyelid lifting muscle provided in this embodiment includes a processor 111, a memory 112, and a computer program 113 stored in the memory 112 and executable on the processor 111, such as a program for detecting the muscle strength of the eyelid lifting muscle. The processor 111, when executing the computer program 113, implements the steps of embodiment 1 described above, such as the steps shown in fig. 1.
Illustratively, the computer program 113 may be divided into one or more modules/units, which are stored in the memory 112 and executed by the processor 111 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution process of the computer program 113 in the apparatus for detecting upper eyelid muscle strength.
The device for detecting the muscle strength of the eyelid lifting muscles can be computing equipment such as a desktop computer, a notebook computer, a palm computer and a cloud server. The device for detecting the muscle strength of the eyelid lifting muscle can include, but is not limited to, a processor 111 and a memory 112. Those skilled in the art will appreciate that fig. 11 is merely an example of a device for detecting eyelid lifting muscle strength and does not constitute a limitation of the device for detecting eyelid lifting muscle strength, and may include more or less components than those shown, or some components in combination, or different components, for example, the device for detecting eyelid lifting muscle strength may further include input and output devices, network access devices, buses, etc.
The Processor 111 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 112 may be an internal storage unit of the device for detecting eyelid lifting muscle strength, such as a hard disk or a memory of the device. The memory 112 may also be an external storage device of the device for detecting eyelid lifting muscle strength, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the device. Further, the memory 112 may also include both an internal storage unit and an external storage device of the apparatus. The memory 112 is used to store the computer program and the other programs and data required by the device, and may also be used to temporarily store data that has been output or is to be output.
Example 3:
the present embodiment provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the steps of the method of embodiment 1.
The computer-readable medium can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Further, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
The above embodiments are only for illustrating the technical concept and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the contents of the present invention and implement the present invention accordingly, and not to limit the protection scope of the present invention accordingly. All equivalent changes or modifications made in accordance with the spirit of the present disclosure are intended to be covered by the scope of the present disclosure.

Claims (5)

1. A method for detecting the strength of the upper eyelid lifting muscle, comprising:
inputting a face picture, and transforming the face picture to obtain a transformed face picture;
taking the transformed face picture as the input of an upper eyelid detection model, the upper eyelid detection model outputting the height h1 between the upper and lower eyelid margins and the movement height h2 of the eyebrow;
obtaining the upper eyelid lifting muscle force value P from the height h1 between the upper and lower eyelid margins and the eyebrow movement height h2;
wherein the upper eyelid lifting muscle force value P = h1 - h2;
carrying out pyramid scale transformation on the face picture;
the upper eyelid detection model is a U-Net network model;
the U-Net network model is obtained by the following method:
collecting a face picture and preprocessing the face picture to obtain corresponding labeled data to form a training data set, wherein the labeled data comprises labeled data of eyelid and eyebrow regions;
inputting the training data set into a U-Net network, updating the parameters of the network by stochastic gradient descent, and iterating multiple times to obtain the U-Net network model;
the training data set comprises a face image, a background image and an image containing 3 key point information; wherein, the 3 key point information comprises coordinate position information of an upper eyelid, a lower eyelid and an eyebrow.
2. The method of detecting eyelid lifting muscle force of claim 1, wherein the training data set is a 256 x 256 resolution training data set;
the task of the U-Net is face classification, and 3, the position of a key point is regressed; for the face classification task, a cross entropy loss function is adopted as a loss function, a smooth L1 loss function is adopted as a key point position regression task, and the loss function of the whole network is the weighted sum of the two loss functions.
3. The method for detecting eyelid lifting muscle force according to claim 1, wherein the eyelid lifting muscle force value P = h1' - h2', where h1' is the mean of a plurality of h1 values and h2' is the mean of a plurality of h2 values.
4. An apparatus for detecting a muscle strength of an upper eyelid lifting muscle, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor when executing the computer program performs the steps of the method according to any one of claims 1 to 3.
5. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 3.
CN202011249622.XA 2020-11-10 2020-11-10 Method and device for detecting muscle force of lifting eyelid Active CN112869746B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011249622.XA CN112869746B (en) 2020-11-10 2020-11-10 Method and device for detecting muscle force of lifting eyelid

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011249622.XA CN112869746B (en) 2020-11-10 2020-11-10 Method and device for detecting muscle force of lifting eyelid

Publications (2)

Publication Number Publication Date
CN112869746A CN112869746A (en) 2021-06-01
CN112869746B (en) 2022-09-20

Family

ID=76043002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011249622.XA Active CN112869746B (en) 2020-11-10 2020-11-10 Method and device for detecting muscle force of lifting eyelid

Country Status (1)

Country Link
CN (1) CN112869746B (en)

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201341890Y (en) * 2008-12-22 2009-11-11 钟晖 Upper eyelid lifting muscle strength measuring meter
KR101538678B1 (en) * 2013-11-14 2015-07-22 최승우 Device for measuring the muscular strength of eyelid
CN204072124U (en) * 2014-09-10 2015-01-07 金陵科技学院 A kind of eyelid force tester
CN106618614B (en) * 2016-12-21 2024-04-02 上海交通大学医学院附属第九人民医院 Eyelid margin light reflection distance and device for measuring muscle strength of upper eyelid muscle
CN106919898A (en) * 2017-01-16 2017-07-04 北京龙杯信息技术有限公司 Feature modeling method in recognition of face
CN108229297B (en) * 2017-09-30 2020-06-05 深圳市商汤科技有限公司 Face recognition method and device, electronic equipment and computer storage medium
CN209421960U (en) * 2018-04-26 2019-09-24 姜艳华 Upper eyelid flesh muscular strength detection device
US10580133B2 (en) * 2018-05-30 2020-03-03 Viswesh Krishna Techniques for identifying blepharoptosis from an image
CN109086719A (en) * 2018-08-03 2018-12-25 北京字节跳动网络技术有限公司 Method and apparatus for output data
CN109508678B (en) * 2018-11-16 2021-03-30 广州市百果园信息技术有限公司 Training method of face detection model, and detection method and device of face key points
CN109685776B (en) * 2018-12-12 2021-01-19 华中科技大学 Pulmonary nodule detection method and system based on CT image
CN110210357B (en) * 2019-05-24 2021-03-23 浙江大学 Ptosis image measuring method based on static photo face recognition

Also Published As

Publication number Publication date
CN112869746A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
Zhu et al. Retinal vessel segmentation in colour fundus images using extreme learning machine
Radman et al. Automated segmentation of iris images acquired in an unconstrained environment using HOG-SVM and GrowCut
Yang et al. Exploiting ensemble learning for automatic cataract detection and grading
US9443132B2 (en) Device and method for classifying a condition based on image analysis
Saraydemir et al. Down syndrome diagnosis based on gabor wavelet transform
Barbosa et al. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier
Qiao et al. Application of SVM based on genetic algorithm in classification of cataract fundus images
Taie et al. CSO-based algorithm with support vector machine for brain tumor's disease diagnosis
Haase et al. Automated and objective action coding of facial expressions in patients with acute facial palsy
CN111222380B (en) Living body detection method and device and recognition model training method thereof
KR102162683B1 (en) Reading aid using atypical skin disease image data
Deng et al. A hierarchical model for automatic nuchal translucency detection from ultrasound images
Ribeiro et al. Handling inter-annotator agreement for automated skin lesion segmentation
Samant et al. Analysis of computational techniques for diabetes diagnosis using the combination of iris-based features and physiological parameters
JP2007293438A (en) Device for acquiring characteristic quantity
Song et al. Multiple facial image features-based recognition for the automatic diagnosis of turner syndrome
WO2020190648A1 (en) Method and system for measuring pupillary light reflex with a mobile phone
Mussi et al. A novel ear elements segmentation algorithm on depth map images
Chen et al. Hybrid facial image feature extraction and recognition for non-invasive chronic fatigue syndrome diagnosis
Sari et al. A study on algorithms of pupil diameter measurement
Li et al. Image understanding from experts' eyes by modeling perceptual skill of diagnostic reasoning processes
CN112869746B (en) Method and device for detecting muscle force of lifting eyelid
Jin et al. Simulated multimodal deep facial diagnosis
Hai et al. Real time burning image classification using support vector machine
US20220319707A1 (en) System, Method and Computer Readable Medium for Video-Based Facial Weakness Analysis for Detecting Neurological Deficits

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant