CN115953389B - Strabismus judging method and device based on face key point detection - Google Patents


Info

Publication number
CN115953389B
CN115953389B
Authority
CN
China
Prior art keywords
face
strabismus
eye
distance
key point
Prior art date
Legal status
Active
Application number
CN202310158984.5A
Other languages
Chinese (zh)
Other versions
CN115953389A
Inventor
谢伟浩 (Xie Weihao)
Current Assignee
Guangzhou Shijing Medical Software Co ltd
Original Assignee
Guangzhou Shijing Medical Software Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Shijing Medical Software Co ltd filed Critical Guangzhou Shijing Medical Software Co ltd
Priority to CN202310158984.5A priority Critical patent/CN115953389B/en
Publication of CN115953389A publication Critical patent/CN115953389A/en
Application granted granted Critical
Publication of CN115953389B publication Critical patent/CN115953389B/en



Abstract

The application provides a strabismus discrimination method and device based on face key point detection, wherein the method comprises the following steps: acquiring a plurality of face images of a subject and extracting face key points from each image; determining the left-eye and right-eye key points of each face image, obtaining a left-eye circumscribed rectangle and a right-eye circumscribed rectangle from those key points, and cropping each image with the two rectangles to obtain a left-eye image block and a right-eye image block; inputting the left-eye and right-eye image blocks into a pre-trained strabismus discrimination model, determining the score of each face image in each strabismus category based on the model's output, and thereby judging the strabismus type of the subject. By extracting face key points from the face image and cropping the left-eye and right-eye images based on them, the application, compared with the prior art, makes full use of the relative relationship between the face and the two eyes, improves the accuracy of strabismus screening and discrimination, obtains more complete eye information, and reduces the probability of misjudgment.

Description

Strabismus judging method and device based on face key point detection
Technical Field
The application relates to the field of strabismus discrimination, and in particular to a strabismus discrimination method and device based on face key point detection.
Background
Strabismus can appear at different ages, and different strabismus types tend to emerge at different age stages. The condition not only affects the appearance of the eyes but may also impair normal visual development. Strabismus is closely related to amblyopia, and the two sometimes co-occur: for example, a person with strabismus may have normal visual acuity, yet because only one eye fixates the target while the other deviates, the visual field is narrower than a normal person's, so amblyopia may also be present.
In the prior art, strabismus is mainly checked through the prism alternate cover test, but this method requires additional auxiliary equipment, relies on expert experience, takes a long time, and demands a high degree of cooperation from the subject. Although artificial-intelligence- and video-based strabismus screening methods exist, they depend heavily on accurate pupil identification and do not fully exploit the relative relationship between the face and the two eyes, so their accuracy in strabismus screening and discrimination is low.
Disclosure of Invention
The application provides a strabismus discrimination method and device based on face key point detection, aiming to solve the technical problem of how to improve the accuracy of strabismus discrimination.
To solve the above technical problem, an embodiment of the present application provides a strabismus discrimination method based on face key point detection, comprising:
acquiring a plurality of face images of a subject, and extracting face key points from each face image of the subject;
screening out face images in an open-eye state according to the distance information among the extracted face key points, and screening out, from the open-eye face images, face images whose eye offset relative to the camera lies within a preset range, to obtain a plurality of screened face images;
determining the left-eye key points and right-eye key points of each face image from the face key points, obtaining a left-eye circumscribed rectangle and a right-eye circumscribed rectangle based on the left-eye and right-eye key points respectively, and cropping with the two rectangles to obtain the left-eye image block and right-eye image block corresponding to each face image;
inputting the left-eye image block and right-eye image block of each face image into a pre-trained strabismus discrimination model, determining the score of each face image in each strabismus category based on the output of the model, and judging the strabismus type of the subject from the scores of all face images in each category;
wherein the screening according to the distance information among the extracted face key points comprises:
calculating distances d1, d2, d3 and d4 from the extracted face key points;
calculating a distance s1 from d1 and d2, and a distance s2 from d3 and d4; wherein s1 is the sum of d1 and d2, and s2 is the absolute value of the difference between d3 and d4;
sorting the face images in descending order of s1 and taking the top thirty as the face images in the open-eye state; then sorting the open-eye face images in ascending order of s2 and taking the top five as the screened face images.
As a preferred solution, the determining the score of each face image in each strabismus category based on the output of the strabismus discrimination model specifically includes:
based on the output of the strabismus judging model, respectively judging the score of each face image in each strabismus category through a preset classifying label; the classification labels comprise a left-eye internal strabismus label, a left-eye external strabismus label, a right-eye internal strabismus label, a right-eye external strabismus label, a left-eye upper strabismus label, a left-eye lower strabismus label, a right-eye upper strabismus label, a right-eye lower strabismus label and a normal label; each class label corresponds to a squint class.
Preferably, the training process of the strabismus discrimination model includes:
acquiring user sample data; the user sample data comprises left eye image blocks and right eye image blocks of a plurality of users and strabismus categories of the users;
training a basic classification model adopting deep learning through the user sample data until the basic classification model converges to obtain the strabismus discrimination model; the number of input channels of the basic classification model is 6.
Preferably, before the left eye image block and the right eye image block of each face image are input to the strabismus discrimination model trained in advance, the method further includes:
and scaling the sizes of all the left-eye image blocks and all the right-eye image blocks to 112x112, and performing channel dimension stitching on the right-eye image blocks and the corresponding left-eye image blocks after the right-eye image blocks are horizontally flipped, so as to obtain the preprocessed left-eye image blocks with the sizes of 112x112 and the channels of 6.
Correspondingly, an embodiment of the application provides a strabismus discrimination device based on face key point detection, comprising a key point extraction module, a screening module, an image cropping module and a judging module; wherein,
the key point extraction module is used for acquiring a plurality of face images of the subject and extracting face key points from the face images of each subject respectively;
the screening module is used for screening face images in an eye opening state according to the distance information among the extracted face key points; screening face images with eyes in a preset range relative to the offset of the camera from the face images in an eye opening state, and obtaining a plurality of face images subjected to screening treatment;
the image cropping module is configured to determine the left-eye and right-eye key points of each face image from the face key points, obtain a left-eye circumscribed rectangle and a right-eye circumscribed rectangle based on them respectively, and crop with the two rectangles to obtain the left-eye image block and right-eye image block corresponding to each face image;
the judging module is used for inputting the left eye image block and the right eye image block of each face image into a strabismus judging model trained in advance, respectively determining the score of each face image in each strabismus category based on the output of the strabismus judging model, and judging the strabismus type of the subject through the scores of each strabismus category of all face images.
The screening module screens face images in an eye opening state according to the distance information among the extracted face key points, screens face images with eyes in a preset range relative to the offset of the camera from the face images in the eye opening state, and obtains a plurality of face images subjected to screening processing, specifically:
the screening module calculates distances d1, d2, d3 and d4 according to the extracted face key points;
calculating a distance s1 from d1 and d2, and a distance s2 from d3 and d4; wherein s1 is the sum of d1 and d2, and s2 is the absolute value of the difference between d3 and d4;
sorting the face images in descending order of s1 and taking the top thirty as the face images in the open-eye state; then sorting the open-eye face images in ascending order of s2 and taking the top five as the screened face images.
As a preferred scheme, the judging module respectively determines the score of each face image in each strabismus category based on the output of the strabismus judging model, specifically:
the judging module is used for respectively judging the score of each face image in each strabismus category through a preset classification label based on the output of the strabismus judging model; the classification labels comprise a left-eye internal strabismus label, a left-eye external strabismus label, a right-eye internal strabismus label, a right-eye external strabismus label, a left-eye upper strabismus label, a left-eye lower strabismus label, a right-eye upper strabismus label, a right-eye lower strabismus label and a normal label; each class label corresponds to a squint class.
Preferably, the training process of the strabismus discrimination model includes:
acquiring user sample data; the user sample data comprises left eye image blocks and right eye image blocks of a plurality of users and strabismus categories of the users;
training a basic classification model adopting deep learning through the user sample data until the basic classification model converges to obtain the strabismus discrimination model; the number of input channels of the basic classification model is 6.
As a preferred solution, the strabismus discrimination device further includes a preprocessing module configured to, before the left-eye and right-eye image blocks of each face image are input into the pre-trained strabismus discrimination model, scale all the left-eye and right-eye image blocks to 112x112, horizontally flip each right-eye image block, and concatenate it with the corresponding left-eye image block along the channel dimension, obtaining preprocessed image blocks of size 112x112 with 6 channels.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
the embodiment of the application provides a squint discrimination method and device based on face key point detection, wherein the method comprises the following steps: acquiring a plurality of face images of a subject, and extracting face key points from the face images of each subject respectively; determining a left eye key point and a right eye key point of each face image from the face key points, further respectively obtaining a left eye external rectangle and a right eye external rectangle based on the left eye key point and the right eye key point, and respectively cutting the left eye external rectangle and the right eye external rectangle to obtain a left eye image block and a right eye image block corresponding to each face image; and inputting the left eye image block and the right eye image block of each face image into a strabismus judging model trained in advance, respectively determining the score of each face image in each strabismus category based on the output of the strabismus judging model, and judging the strabismus type of the subject through the scores of all the face images in each strabismus category. According to the embodiment of the application, the face key points are extracted from the face graph, the images of the left eye and the right eye are cut based on the face key points, and compared with the technical scheme based on pupil identification, the relative relationship between the face and the left eye and the right eye can be fully utilized, and the accuracy in strabismus screening and distinguishing is improved; and moreover, the left eye image block and the right eye image block corresponding to each face image are respectively obtained by cutting the left eye external rectangle and the right eye external rectangle, so that more complete eye information can be obtained, misjudgment caused by inaccurate pupil identification is avoided, and the overall identification effect is improved.
Further, the score of each face image in each strabismus category is judged through preset classification labels. The labels comprise nine types, each corresponding to one strabismus category, so the subject's strabismus type can be judged from the per-category scores; this avoids ambiguous discrimination results and improves the recognition effect and stability of the model.
Furthermore, a deep-learning basic classification model with 6 input channels is adopted and trained on user sample data to learn the mapping between image blocks and categories, optimizing the model's performance and further improving discrimination accuracy.
Further, face images in an open-eye state are screened out based on the distance information among the face key points, and from these, face images whose eye offset relative to the camera lies within a preset range are selected; this secondary screening yields more valid user samples and further improves the accuracy of strabismus discrimination.
Drawings
Fig. 1: a flow diagram of an embodiment of the strabismus discrimination method based on face key point detection provided by the application.
Fig. 2: a schematic diagram of an embodiment of the face key points provided by the application.
Fig. 3: a structural schematic diagram of an embodiment of the strabismus discrimination device based on face key point detection provided by the application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Embodiment one:
according to the literature, human stereo vision occurs mainly 3 to 4 months after birth and will develop substantially mature at 3 to 4 years of age and end at 9 years of age. According to the development progress of human stereoscopic vision, it is known that infants with strabismus phenomenon are more beneficial to stereoscopic vision recovery by receiving corresponding vision training or correction in the ages of 3 to 5, and in addition, the period of 3 to 5 is the period with highest incidence rate of child strabismus, if the optimal correction time is missed, as the age of strabismus person increases and the disease course time increases, pathological changes such as macula and retinal corresponding abnormality can be possibly occurred. Therefore, it is important to judge and correct strabismus and other conditions in time and to develop vision healthily.
Clinically, the prior art mainly checks whether a person has strabismus through the prism alternate cover test, but this method requires additional auxiliary equipment, relies on expert experience, takes a long time, and demands a high degree of cooperation from the subject. Although artificial-intelligence- and video-based strabismus screening methods exist, they depend heavily on accurate pupil identification and do not fully exploit the relative relationship between the face and the two eyes, so their accuracy in strabismus screening and discrimination is low.
Referring to fig. 1, fig. 1 is a flow diagram of an embodiment of the strabismus discrimination method based on face key point detection provided by the application. The method may be applied to subjects including, but not limited to, infants, and comprises steps S1 to S3; wherein,
step S1, a plurality of face images of a subject are obtained, and face key points are extracted from the face images of each subject respectively.
In this embodiment, a video may be recorded with a mobile phone camera (front or rear), a laptop camera, or the like. The recording contains the subject's face, lasts 10 s, and the subject must keep both eyes open throughout. From the recorded video, the first n images (preferably 5) meeting the requirements are screened out, specifically:
Frames are extracted from the captured video at fixed intervals: taking each second as a period, 10 frames are sampled at equal intervals per period, yielding an image sequence of length 100 and thus a plurality of face images of the subject.
All face images are then input into the face detection module provided by the dlib library to detect the subject's face, and the face information is fed into the existing PIPNet to obtain the face key points shown in figure 2. Each face image contains a plurality of face key points.
Before step S2, the method further includes: screening out face images in an open-eye state according to the distance information among the extracted face key points, and screening out, from them, face images whose eye offset relative to the camera lies within a preset range, to obtain a plurality of screened face images.
For example, taking fig. 2 as an example, face key points numbered 0 to 67 can be extracted by recognizing face information through PIPNet, and distances d1, d2, d3 and d4 are calculated from them. The distance d1 is the distance between face key points 38 and 40 of each face image, and d2 is the distance between key points 43 and 47; s1 is the sum of d1 and d2. The images are then sorted in descending order of s1, the top 30 face images are kept, and the remaining 70 are filtered out. Next, for each of the top 30 images, the distance d3 between key points 36 and 39 and the distance d4 between key points 42 and 45 are calculated to obtain s2, the absolute value of d3 - d4; the images are sorted in ascending order of s2, the top 5 are kept, and the remaining 25 are filtered out, giving the face images after secondary screening. This ensures complete eye information, avoids collecting face images and key points with closed eyes or eyes deviating too far from the camera, guarantees their validity, and improves the accuracy of the discrimination result.
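The two-stage screening just described can be sketched in plain Python; the landmark indices (38/40 and 43/47 for lid distances, 36/39 and 42/45 for eye widths) follow the 68-point layout of fig. 2, and in the real pipeline the coordinates would come from PIPNet:

```python
import math

def eye_metrics(pts):
    """pts: dict index -> (x, y) landmark coordinates, 68-point layout as in fig. 2."""
    d1 = math.dist(pts[38], pts[40])   # one eye's vertical lid opening
    d2 = math.dist(pts[43], pts[47])   # the other eye's vertical lid opening
    d3 = math.dist(pts[36], pts[39])   # one eye's horizontal width
    d4 = math.dist(pts[42], pts[45])   # the other eye's horizontal width
    return d1 + d2, abs(d3 - d4)       # s1 (eye openness), s2 (left/right asymmetry)

def two_stage_filter(frames, top_open=30, top_frontal=5):
    """frames: list of (frame_id, pts). Keep the `top_open` most open-eyed
    frames, then the `top_frontal` most symmetric (frontal) among those."""
    scored = [(fid, *eye_metrics(pts)) for fid, pts in frames]
    open_eyed = sorted(scored, key=lambda t: t[1], reverse=True)[:top_open]
    frontal = sorted(open_eyed, key=lambda t: t[2])[:top_frontal]
    return [fid for fid, _, _ in frontal]
```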
Step S2: determining the left-eye and right-eye key points of each face image from the face key points, obtaining a left-eye circumscribed rectangle and a right-eye circumscribed rectangle based on them respectively, and cropping with the two rectangles to obtain the left-eye image block and right-eye image block corresponding to each face image.
In this embodiment, the left-eye and right-eye key points of each secondarily screened face image are determined; the circumscribed rectangle of the left eye is obtained from face key points 36 to 41, and that of the right eye from face key points 42 to 47. Each rectangle is then enlarged by a factor of 1.2 (its length and width are each multiplied by 1.2), and an eye image block centred on the rectangle's centre point is cropped out, yielding the left-eye image block and right-eye image block of each face image.
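The circumscribed-rectangle cropping can be sketched as follows (a hedged illustration; the function name `eye_patch` and its `scale` parameter are chosen here, not taken from the patent):

```python
import numpy as np

def eye_patch(image, eye_pts, scale=1.2):
    """Crop a patch around the circumscribed rectangle of one eye's key points.

    image: HxWxC array; eye_pts: sequence of (x, y) landmarks
    (points 36-41 for one eye, 42-47 for the other, per fig. 2).
    The rectangle is enlarged by `scale` about its centre before cropping.
    """
    pts = np.asarray(eye_pts, dtype=float)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2          # centre of the rectangle
    w, h = (x1 - x0) * scale, (y1 - y0) * scale     # enlarged width and height
    left, top = int(round(cx - w / 2)), int(round(cy - h / 2))
    right, bottom = int(round(cx + w / 2)), int(round(cy + h / 2))
    left, top = max(left, 0), max(top, 0)           # clip to the image bounds
    return image[top:bottom, left:right]
```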
Further, before step S3, the method further includes: scaling all the left-eye and right-eye image blocks to 112x112, horizontally flipping each right-eye image block and then concatenating it with the corresponding left-eye image block along the channel dimension, to obtain preprocessed image blocks of size 112x112 with 6 channels.
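A sketch of this preprocessing step (the nearest-neighbour `nn_resize` is a stand-in for whatever resizing routine the real pipeline uses, e.g. cv2.resize):

```python
import numpy as np

def nn_resize(img, size=112):
    """Nearest-neighbour resize of an HxWxC image to size x size."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def preprocess(left_patch, right_patch, size=112):
    """Resize both eye patches to 112x112, mirror the right eye horizontally,
    then stack along the channel axis -> one 112x112x6 model input."""
    left = nn_resize(left_patch, size)
    right = nn_resize(right_patch, size)[:, ::-1]   # horizontal flip
    return np.concatenate([left, right], axis=-1)   # channels: 3 + 3 = 6
```

Flipping the right eye before stacking lets the model see both eyes in a shared orientation, which is one plausible rationale for the step described above.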
Step S3: inputting the left-eye image block and right-eye image block of each face image into the pre-trained strabismus discrimination model, determining the score of each face image in each strabismus category based on the model's output, and judging the strabismus type of the subject from the scores of all face images in each category.
As a preferred embodiment, the training process of the strabismus discrimination model includes:
acquiring user sample data, which is equivalent to a data set and comprises left-eye and right-eye image blocks of a plurality of users together with those users' strabismus categories. The image blocks may be obtained by manual cropping or in the manner described above; the strabismus category data come from the users' historical medical records or historical eye-examination data.
The basic classification model, which may adopt a resnet18 network structure, is trained with the user sample data of the plurality of users to learn the mapping between image blocks and categories until it converges, yielding the strabismus discrimination model; the number of input channels of the basic classification model is 6.
Further, after the preprocessed left-eye and right-eye image blocks of size 112x112 with 6 channels are input into the pre-trained strabismus discrimination model, the score of each face image in each strabismus category is determined based on the model's output, specifically:
based on the output of the strabismus judging model, respectively judging the score of each face image in each strabismus category through a preset classifying label; the classification labels comprise a left-eye internal strabismus label, a left-eye external strabismus label, a right-eye internal strabismus label, a right-eye external strabismus label, a left-eye upper strabismus label, a left-eye lower strabismus label, a right-eye upper strabismus label, a right-eye lower strabismus label and a normal label; each class label corresponds to a squint class.
Since the subject has 5 face images in total, the average score over the five images is taken for each strabismus category, and the category with the highest average is taken as the subject's strabismus type, i.e. left-eye internal strabismus, left-eye external strabismus, right-eye internal strabismus, right-eye external strabismus, left-eye upper strabismus, left-eye lower strabismus, right-eye upper strabismus, right-eye lower strabismus, or normal. In this embodiment the strabismus types are divided into these nine categories, whose discrimination results can meet the requirements of different application scenarios. Moreover, apart from a camera, applying this embodiment requires no additional auxiliary equipment (such as that used in the prism alternate cover test), and the strabismus type is judged automatically by the discrimination model without demanding a high degree of cooperation from the subject, so the method is highly applicable to subjects such as infants.
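The score-averaging decision rule can be sketched as follows (the order in which output indices map to the nine labels is an assumption made here for illustration):

```python
import numpy as np

# Assumed index-to-label order; the patent lists these nine labels but
# does not fix their output ordering.
LABELS = [
    "left-eye internal strabismus", "left-eye external strabismus",
    "right-eye internal strabismus", "right-eye external strabismus",
    "left-eye upper strabismus", "left-eye lower strabismus",
    "right-eye upper strabismus", "right-eye lower strabismus",
    "normal",
]

def decide(per_image_scores):
    """per_image_scores: (5, 9) array of model scores for the 5 kept frames.
    Average per category, then take the highest-scoring label."""
    mean = np.asarray(per_image_scores).mean(axis=0)
    return LABELS[int(mean.argmax())]
```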
Correspondingly, referring to fig. 3, an embodiment of the application provides a strabismus discrimination device based on face key point detection, comprising a key point extraction module 101, an image cropping module 102 and a judging module 103; wherein,
the key point extraction module 101 is configured to obtain a plurality of face images of a subject, and extract face key points from each face image of the subject;
the image cropping module 102 is configured to determine the left-eye and right-eye key points of each face image from the face key points, obtain a left-eye circumscribed rectangle and a right-eye circumscribed rectangle based on them respectively, and crop with the two rectangles to obtain the left-eye image block and right-eye image block corresponding to each face image;
the judging module 103 is configured to input the left eye image block and the right eye image block of each face image into a strabismus judging model trained in advance, determine the score of each face image in each strabismus category based on the output of the strabismus judging model, and judge the strabismus type of the subject according to the scores of each strabismus category of all face images.
Preferably, the determining module 103 determines the score of each face image in each strabismus category based on the output of the strabismus determining model, specifically:
the judging module 103 judges the score of each face image in each strabismus category through a preset classification label based on the output of the strabismus judging model; the classification labels comprise a left-eye internal strabismus label, a left-eye external strabismus label, a right-eye internal strabismus label, a right-eye external strabismus label, a left-eye upper strabismus label, a left-eye lower strabismus label, a right-eye upper strabismus label, a right-eye lower strabismus label and a normal label; each class label corresponds to a squint class.
Preferably, the training process of the strabismus discrimination model includes:
acquiring user sample data; the user sample data comprises left eye image blocks and right eye image blocks of a plurality of users and strabismus categories of the users;
training a basic classification model adopting deep learning through the user sample data until the basic classification model converges to obtain the strabismus discrimination model; the number of input channels of the basic classification model is 6.
As a preferred solution, the strabismus discrimination device further includes a preprocessing module configured to, before the left-eye and right-eye image blocks of each face image are input into the pre-trained strabismus discrimination model, scale all the left-eye and right-eye image blocks to 112x112, horizontally flip each right-eye image block, and concatenate it with the corresponding left-eye image block along the channel dimension, obtaining preprocessed image blocks of size 112x112 with 6 channels.
As a preferred solution, the strabismus discriminating device further includes a screening module configured to, before the left-eye and right-eye key points of each face image are determined from the face key points, screen out face images in an open-eye state according to the distance information between the extracted face key points, and then, from the open-eye face images, screen out face images whose eye offset relative to the camera lies within a preset range, obtaining a plurality of screened face images.
Compared with the prior art, the embodiments of the present application have the following beneficial effects:
the embodiments of the application provide a strabismus discrimination method and device based on face key point detection. The method includes: acquiring a plurality of face images of a subject, and extracting face key points from each face image; determining the left-eye and right-eye key points of each face image from the face key points, obtaining a left-eye bounding rectangle and a right-eye bounding rectangle based on those key points, and cropping the two rectangles to obtain the left-eye image block and right-eye image block corresponding to each face image; and inputting the left-eye and right-eye image blocks of each face image into a pre-trained strabismus discrimination model, determining the score of each face image in each strabismus category based on the model's output, and judging the strabismus type of the subject from the scores of all face images in each category. Because the face key points are extracted from the whole face image and the left-eye and right-eye images are cropped based on those key points, the relative relationship between the face and the two eyes can be fully exploited, improving accuracy in strabismus screening compared with schemes based only on pupil recognition. Moreover, cropping the left-eye and right-eye bounding rectangles yields more complete eye information, avoids misjudgment caused by inaccurate pupil recognition, and improves the overall recognition performance.
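The per-eye cropping step can be sketched as computing the axis-aligned bounding rectangle of one eye's key points and slicing the face image. This is a minimal sketch: the relative `margin` used to enlarge the rectangle is an assumption (the patent does not specify how tightly the rectangle fits the key points).

```python
import numpy as np

def eye_bounding_rect(points, margin=0.2):
    """Axis-aligned bounding rectangle of one eye's key points, expanded by a
    relative margin (margin value is an illustrative assumption)."""
    pts = np.asarray(points, dtype=float)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    mx, my = margin * (x1 - x0), margin * (y1 - y0)
    return int(x0 - mx), int(y0 - my), int(np.ceil(x1 + mx)), int(np.ceil(y1 + my))

def crop_eye(image, points, margin=0.2):
    """Crop the eye image block from the face image, clamped to image bounds."""
    x0, y0, x1, y1 = eye_bounding_rect(points, margin)
    h, w = image.shape[:2]
    x0, y0 = max(x0, 0), max(y0, 0)
    x1, y1 = min(x1, w), min(y1, h)
    return image[y0:y1, x0:x1]
```

Applying `crop_eye` once with the left-eye key points and once with the right-eye key points yields the two image blocks fed to the discrimination model.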
Further, the score of each face image in each strabismus category is determined through preset classification labels. There are nine classification labels, each corresponding to one strabismus category, so the strabismus type of the subject can be judged from the per-category scores; this avoids ambiguous judgment results and improves the recognition performance and stability of the model.
Furthermore, a deep-learning basic classification model with 6 input channels is trained on the user sample data to learn the mapping between eye image blocks and strabismus categories, optimizing model performance and further improving discrimination accuracy.
Further, face images in an open-eye state are screened out based on the distance information between the face key points, and from those, face images whose eye offset relative to the camera lies within a preset range are retained. This two-stage screening yields more reliable user samples and further improves the accuracy of strabismus discrimination.
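The two-stage screening described above, using the distances and rankings specified in the claims (s1 = d1 + d2 as an eyelid-openness measure, s2 = |d3 - d4| as a left/right eye-width asymmetry measure, top thirty then top five), can be sketched as follows. The key-point indices (38, 40, 43, 47, 36, 39, 42, 45) are taken from the claims and are assumed to follow a 68-point facial landmark layout.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) key points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def screen_images(images_kps, n_open=30, n_final=5):
    """Two-stage screening over (image_id, keypoints) pairs.
    s1 = d1 + d2: total eyelid opening of both eyes (larger = more open).
    s2 = |d3 - d4|: eye-width asymmetry, a proxy for off-camera head yaw."""
    scored = []
    for img_id, kp in images_kps:
        d1 = dist(kp[38], kp[40])   # left eye: upper to lower eyelid
        d2 = dist(kp[43], kp[47])   # right eye: upper to lower eyelid
        d3 = dist(kp[36], kp[39])   # left eye width
        d4 = dist(kp[42], kp[45])   # right eye width
        scored.append((img_id, d1 + d2, abs(d3 - d4)))
    open_eyes = sorted(scored, key=lambda t: -t[1])[:n_open]  # largest s1 first
    final = sorted(open_eyes, key=lambda t: t[2])[:n_final]   # smallest s2 first
    return [img_id for img_id, _, _ in final]
```

Images with closed eyes drop out in the first sort, and images where the subject looks away from the camera (unequal apparent eye widths) drop out in the second.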
The foregoing embodiments are provided to illustrate the general principles of the present application and are not to be construed as limiting its scope. Any modifications, equivalent substitutions, or improvements made by those skilled in the art without departing from the spirit and principles of the present application shall fall within the scope of the present application.

Claims (6)

1. A strabismus discrimination method based on face key point detection, characterized by comprising the following steps:
acquiring a plurality of face images of a subject, and extracting face key points from each face image;
screening out face images in an open-eye state according to the distance information between the extracted face key points, and screening out, from the open-eye face images, face images whose eye offset relative to a camera lies within a preset range, to obtain a plurality of screened face images;
determining left-eye key points and right-eye key points of each face image from the face key points, obtaining a left-eye bounding rectangle and a right-eye bounding rectangle based on the left-eye and right-eye key points respectively, and cropping the left-eye bounding rectangle and the right-eye bounding rectangle to obtain a left-eye image block and a right-eye image block corresponding to each face image;
inputting the left-eye image block and the right-eye image block of each face image into a pre-trained strabismus discrimination model, determining the score of each face image in each strabismus category based on the output of the model, and judging the strabismus type of the subject from the scores of all face images in each strabismus category;
wherein the screening out of face images in an open-eye state according to the distance information between the extracted face key points, and the screening out, from the open-eye face images, of face images whose eye offset relative to the camera lies within a preset range, to obtain a plurality of screened face images, comprises:
calculating distances d1, d2, d3 and d4 from the extracted face key points;
calculating a distance s1 from the distances d1 and d2, and a distance s2 from the distances d3 and d4; wherein the distance s1 is the sum of the distances d1 and d2, and the distance s2 is the absolute value of the difference between the distances d3 and d4; and wherein the distance d1 is the distance from the first face key point (38) to the second face key point (40) of each face image, the distance d2 is the distance from the third face key point (43) to the fourth face key point (47), the distance d3 is the distance from the fifth face key point (36) to the sixth face key point (39), and the distance d4 is the distance from the seventh face key point (42) to the eighth face key point (45);
sorting the face images in descending order of s1 and taking the top thirty as the face images in the open-eye state; and sorting the open-eye face images in ascending order of s2 and taking the top five as the plurality of screened face images;
wherein the determining, based on the output of the strabismus discrimination model, of the score of each face image in each strabismus category specifically comprises:
determining, based on the output of the strabismus discrimination model, the score of each face image in each strabismus category through preset classification labels; wherein the classification labels comprise a left-eye internal strabismus label, a left-eye external strabismus label, a right-eye internal strabismus label, a right-eye external strabismus label, a left-eye upper strabismus label, a left-eye lower strabismus label, a right-eye upper strabismus label, a right-eye lower strabismus label and a normal label, each classification label corresponding to one strabismus category.
2. The strabismus discrimination method based on face key point detection according to claim 1, wherein the training process of the strabismus discrimination model comprises:
acquiring user sample data; the user sample data comprises left eye image blocks and right eye image blocks of a plurality of users and strabismus categories of the users;
training a deep-learning-based basic classification model with the user sample data until the model converges, to obtain the strabismus discrimination model; wherein the number of input channels of the basic classification model is 6.
3. The strabismus discrimination method based on face key point detection according to claim 2, further comprising, before the inputting of the left-eye image block and the right-eye image block of each face image into the pre-trained strabismus discrimination model:
scaling all left-eye image blocks and all right-eye image blocks to 112x112, horizontally flipping each right-eye image block, and concatenating it with the corresponding left-eye image block along the channel dimension, to obtain preprocessed eye image blocks of size 112x112 with 6 channels.
4. A strabismus discrimination device based on face key point detection, characterized by comprising a key point extraction module, a screening module, an image cropping module and a judging module; wherein,
the key point extraction module is configured to acquire a plurality of face images of a subject and extract face key points from each face image;
the screening module is configured to screen out face images in an open-eye state according to the distance information between the extracted face key points, and to screen out, from the open-eye face images, face images whose eye offset relative to a camera lies within a preset range, obtaining a plurality of screened face images;
the image cropping module is configured to determine left-eye key points and right-eye key points of each face image from the face key points, obtain a left-eye bounding rectangle and a right-eye bounding rectangle based on the left-eye and right-eye key points respectively, and crop the left-eye bounding rectangle and the right-eye bounding rectangle to obtain a left-eye image block and a right-eye image block corresponding to each face image;
the judging module is configured to input the left-eye image block and the right-eye image block of each face image into a pre-trained strabismus discrimination model, determine the score of each face image in each strabismus category based on the output of the model, and judge the strabismus type of the subject from the scores of all face images in each strabismus category;
wherein the screening module screens out face images in an open-eye state according to the distance information between the extracted face key points, and screens out, from the open-eye face images, face images whose eye offset relative to the camera lies within a preset range, obtaining a plurality of screened face images, specifically as follows:
the screening module calculates distances d1, d2, d3 and d4 from the extracted face key points;
calculates a distance s1 from the distances d1 and d2, and a distance s2 from the distances d3 and d4; wherein the distance s1 is the sum of the distances d1 and d2, and the distance s2 is the absolute value of the difference between the distances d3 and d4; and wherein the distance d1 is the distance from the first face key point (38) to the second face key point (40) of each face image, the distance d2 is the distance from the third face key point (43) to the fourth face key point (47), the distance d3 is the distance from the fifth face key point (36) to the sixth face key point (39), and the distance d4 is the distance from the seventh face key point (42) to the eighth face key point (45);
sorts the face images in descending order of s1 and takes the top thirty as the face images in the open-eye state; and sorts the open-eye face images in ascending order of s2 and takes the top five as the plurality of screened face images;
wherein the judging module determines the score of each face image in each strabismus category based on the output of the strabismus discrimination model specifically as follows:
the judging module determines, based on the output of the strabismus discrimination model, the score of each face image in each strabismus category through preset classification labels; wherein the classification labels comprise a left-eye internal strabismus label, a left-eye external strabismus label, a right-eye internal strabismus label, a right-eye external strabismus label, a left-eye upper strabismus label, a left-eye lower strabismus label, a right-eye upper strabismus label, a right-eye lower strabismus label and a normal label, each classification label corresponding to one strabismus category.
5. The strabismus discrimination device based on face key point detection according to claim 4, wherein the training process of the strabismus discrimination model comprises:
acquiring user sample data; the user sample data comprises left eye image blocks and right eye image blocks of a plurality of users and strabismus categories of the users;
training a deep-learning-based basic classification model with the user sample data until the model converges, to obtain the strabismus discrimination model; wherein the number of input channels of the basic classification model is 6.
6. The strabismus discrimination device based on face key point detection according to claim 5, further comprising a preprocessing module configured to, before the left-eye image block and the right-eye image block of each face image are input into the pre-trained strabismus discrimination model, scale all left-eye image blocks and all right-eye image blocks to 112x112, horizontally flip each right-eye image block, and concatenate it with the corresponding left-eye image block along the channel dimension, to obtain preprocessed eye image blocks of size 112x112 with 6 channels.
CN202310158984.5A 2023-02-24 2023-02-24 Strabismus judging method and device based on face key point detection Active CN115953389B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310158984.5A CN115953389B (en) 2023-02-24 2023-02-24 Strabismus judging method and device based on face key point detection

Publications (2)

Publication Number Publication Date
CN115953389A CN115953389A (en) 2023-04-11
CN115953389B true CN115953389B (en) 2023-11-24

Family

ID=87289607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310158984.5A Active CN115953389B (en) 2023-02-24 2023-02-24 Strabismus judging method and device based on face key point detection

Country Status (1)

Country Link
CN (1) CN115953389B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006053666A (en) * 2004-08-10 2006-02-23 Olympus Corp Image processing program, image processing method, image processing apparatus, and recording medium
GB2496005A (en) * 2012-07-06 2013-05-01 Iriss Medical Technologies Ltd Method for detecting strabismus in images of the eyes
CN107690675A (en) * 2017-08-21 2018-02-13 美的集团股份有限公司 Control method, control device, Intelligent mirror and computer-readable recording medium
CN110826396A (en) * 2019-09-18 2020-02-21 云知声智能科技股份有限公司 Method and device for detecting eye state in video
CN112036311A (en) * 2020-08-31 2020-12-04 北京字节跳动网络技术有限公司 Image processing method and device based on eye state detection and storage medium
CN112419670A (en) * 2020-09-15 2021-02-26 深圳市点创科技有限公司 Method, device and medium for detecting fatigue driving of driver by fusing key point positioning and image classification
CN112949345A (en) * 2019-11-26 2021-06-11 北京四维图新科技股份有限公司 Fatigue monitoring method and system, automobile data recorder and intelligent cabin
CN113662506A (en) * 2021-09-26 2021-11-19 温州医科大学 Corneal surface morphology measuring method, device, medium and electronic equipment
CN114299587A (en) * 2021-12-30 2022-04-08 上海商汤临港智能科技有限公司 Eye state determination method and apparatus, electronic device, and storage medium
CN114495252A (en) * 2022-01-26 2022-05-13 上海商汤临港智能科技有限公司 Sight line detection method and device, electronic equipment and storage medium
CN114758384A (en) * 2022-03-29 2022-07-15 奇酷软件(深圳)有限公司 Face detection method, device, equipment and storage medium
CN115223231A (en) * 2021-04-15 2022-10-21 虹软科技股份有限公司 Sight direction detection method and device
CN115456974A (en) * 2022-08-31 2022-12-09 上海睿介机器人科技有限公司 Strabismus detection system, method, equipment and medium based on face key points

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701445A (en) * 2014-12-15 2016-06-22 爱信精机株式会社 determination apparatus and determination method
CN111539389B (en) * 2020-06-22 2020-10-27 腾讯科技(深圳)有限公司 Face anti-counterfeiting recognition method, device, equipment and storage medium

Non-Patent Citations (1)

Title
Research on fatigue driving detection based on face key points; Huang Jiacai et al.; Journal of Nanjing Institute of Technology (Natural Science Edition), No. 04, pp. 11-16 *


Similar Documents

Publication Publication Date Title
US10438052B2 (en) Systems and methods for facial property identification
CN109543526B (en) True and false facial paralysis recognition system based on depth difference characteristics
US6895103B2 (en) Method for automatically locating eyes in an image
CN102804208B (en) Individual model for visual search application automatic mining famous person
KR102058883B1 (en) Method of analyzing iris image and retina image for diagnosing diabetes and pre-symptom in artificial intelligence
CN110111316B (en) Method and system for identifying amblyopia based on eye images
KR20030046007A (en) Iris image processing and recognizing method for personal identification
Fuadah et al. Mobile cataract detection using optimal combination of statistical texture analysis
CN112232448B (en) Image classification method and device, electronic equipment and storage medium
CN110309813A (en) A kind of model training method, detection method, device, mobile end equipment and the server of the human eye state detection based on deep learning
CN111881838A (en) Dyskinesia assessment video analysis method and equipment with privacy protection function
KR20200137536A (en) Automatic analysis system for picture for cognitive ability test and recording medium for the same
US11295117B2 (en) Facial modelling and matching systems and methods
CN111738992A (en) Lung focus region extraction method and device, electronic equipment and storage medium
Taubert et al. Identity aftereffects, but not composite effects, are contingent on contrast polarity
CN115953389B (en) Strabismus judging method and device based on face key point detection
KR102240228B1 (en) Method and system for scoring drawing test results through object closure determination
JP2004303150A (en) Apparatus, method and program for face identification
CN112263220A (en) Endocrine disease intelligent diagnosis system
CN115170503B (en) Fundus image visual field classification method and device based on decision rule and deep neural network
JP7129058B2 (en) Medical image diagnosis support device, program, and medical image diagnosis support method
CN115713800A (en) Image classification method and device
CN110287795A (en) A kind of eye age detection method based on image analysis
CN115223232A (en) Eye health comprehensive management system
US20230346276A1 (en) System and method for detecting a health condition using eye images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant