CN115953389A - Strabismus discrimination method and device based on face key point detection - Google Patents


Info

Publication number
CN115953389A
Authority
CN
China
Prior art keywords
squint
eye
face
key point
strabismus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310158984.5A
Other languages
Chinese (zh)
Other versions
CN115953389B (en)
Inventor
谢伟浩 (Xie Weihao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shijing Medical Software Co ltd
Original Assignee
Guangzhou Shijing Medical Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shijing Medical Software Co ltd filed Critical Guangzhou Shijing Medical Software Co ltd
Priority to CN202310158984.5A priority Critical patent/CN115953389B/en
Publication of CN115953389A publication Critical patent/CN115953389A/en
Application granted granted Critical
Publication of CN115953389B publication Critical patent/CN115953389B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a strabismus discrimination method and device based on face key point detection. The method comprises the following steps: acquiring a plurality of face images of a subject and extracting face key points from each image; determining the left-eye and right-eye key points of each face image, obtaining a left-eye circumscribed rectangle and a right-eye circumscribed rectangle from them, and cropping each rectangle to obtain a left-eye image block and a right-eye image block for each image; inputting the left-eye and right-eye image blocks into a pre-trained strabismus discrimination model, determining the score of each face image in each strabismus category based on the model output, and judging the strabismus type of the subject from those scores. Compared with the prior art, the method makes full use of the relative relationship between the face and the two eyes, improves the accuracy of strabismus screening and discrimination, obtains more complete eye information, and reduces the probability of misjudgment.

Description

Strabismus discrimination method and device based on face key point detection
Technical Field
The invention relates to the field of strabismus judgment, in particular to a strabismus judgment method and device based on face key point detection.
Background
Strabismus can appear at different ages, and the type of strabismus tends to differ with age. The condition not only affects the appearance of the eyes but may also impair normal visual development. Strabismus and amblyopia are closely related and sometimes causally linked: for example, a person with strabismus may have normal visual acuity yet habitually fixate with only one eye while the other eye deviates, so that the binocular visual field is far narrower than a normal person's, and amblyopia may also develop.
In the prior art, strabismus is mainly checked by the prism alternate cover test. However, this method requires additional auxiliary equipment, depends on expert experience, is time-consuming, and demands a high degree of cooperation from the subject. Although strabismus screening methods based on artificial intelligence and video already exist, they depend heavily on the accuracy of pupil identification and do not fully exploit the relative relationship between the face and the two eyes, so their accuracy in strabismus screening and discrimination is low.
Disclosure of Invention
The invention provides a strabismus discrimination method and device based on face key point detection, aiming to solve the technical problem of improving the accuracy of strabismus discrimination.
In order to solve the above technical problem, an embodiment of the present invention provides a method for determining strabismus based on face keypoint detection, including:
acquiring a plurality of face images of a subject, and extracting face key points from each face image of the subject;
determining a left-eye key point and a right-eye key point of each face image from the face key points, obtaining a left-eye circumscribed rectangle and a right-eye circumscribed rectangle based on the left-eye and right-eye key points respectively, and cropping the two rectangles to obtain a left-eye image block and a right-eye image block corresponding to each face image;
and inputting the left-eye image block and the right-eye image block of each face image into a pre-trained strabismus discrimination model, determining the score of each face image in each strabismus category based on the output of the model, and judging the strabismus type of the subject according to the scores of all face images in each strabismus category.
As a preferred scheme, the determining the score of each face image in each strabismus category based on the output of the strabismus discrimination model specifically includes:
based on the output of the squint discrimination model, the score of each face image in each squint category is respectively judged through a preset classification label; the classification tags comprise a left eye inner squint tag, a left eye outer squint tag, a right eye inner squint tag, a right eye outer squint tag, a left eye upper squint tag, a left eye lower squint tag, a right eye upper squint tag, a right eye lower squint tag and a normal tag; each classification label corresponds to a squint category.
Preferably, the training process of the strabismus discrimination model includes:
acquiring user sample data; the user sample data comprises left eye image blocks and right eye image blocks of a plurality of users and the squint categories of the users;
training a basic classification model adopting deep learning through the user sample data until the basic classification model is converged to obtain the squint discrimination model; the number of input channels of the basic classification model is 6.
Preferably, before the inputting the left-eye image block and the right-eye image block of each face image into a pre-trained strabismus discrimination model, the method further includes:
scaling all left-eye image blocks and all right-eye image blocks to 112x112, horizontally flipping each right-eye image block and concatenating it with the corresponding left-eye image block along the channel dimension, to obtain preprocessed left-and-right-eye image blocks of size 112x112 with 6 channels.
Preferably, before determining the left-eye key point and the right-eye key point of each face image from the face key points, the method further includes:
and screening the face images in the eye opening state according to the extracted distance information between the key points of the face, screening the face images with the eyes in the preset range relative to the offset of the camera from the face images in the eye opening state, and obtaining the face images subjected to screening processing.
Correspondingly, an embodiment of the invention provides a strabismus discrimination device based on face key point detection, which comprises a key point extraction module, an image cutting module and a distinguishing module; wherein:
the key point extraction module is used for acquiring a plurality of face images of a subject and extracting face key points from each face image of the subject;
the image cutting module is used for determining a left eye key point and a right eye key point of each face image from the face key points, further obtaining a left eye external rectangle and a right eye external rectangle based on the left eye key point and the right eye key point respectively, and obtaining a left eye image block and a right eye image block corresponding to each face image by cutting the left eye external rectangle and the right eye external rectangle respectively;
and the distinguishing module is used for inputting the left eye image block and the right eye image block of each face image into a pre-trained squint distinguishing model, respectively determining the score of each face image in each squint category based on the output of the squint distinguishing model, and judging the squint type of the subject according to the scores of all the face images in each squint category.
As a preferred scheme, the determination module determines the score of each face image in each squint category based on the output of the squint determination model, specifically:
the judging module is used for respectively judging the scores of each face image in each squint category through a preset classification label based on the output of the squint judging model; the classification tags comprise a left eye inner squint tag, a left eye outer squint tag, a right eye inner squint tag, a right eye outer squint tag, a left eye upper squint tag, a left eye lower squint tag, a right eye upper squint tag, a right eye lower squint tag and a normal tag; each classification label corresponds to a squint category.
As a preferred scheme, the training process of the strabismus discrimination model comprises the following steps:
acquiring user sample data; the user sample data comprises left eye image blocks and right eye image blocks of a plurality of users and the squint categories of the users;
training a basic classification model adopting deep learning through the user sample data until the basic classification model converges to obtain the squint discrimination model; the number of input channels of the basic classification model is 6.
As a preferred scheme, the strabismus judging device further includes a preprocessing module, which is configured to scale all left-eye image blocks and all right-eye image blocks to 112x112 before the left-eye and right-eye image blocks of each face image are input into the pre-trained strabismus discrimination model, and to horizontally flip each right-eye image block and concatenate it with the corresponding left-eye image block along the channel dimension, obtaining preprocessed left-and-right-eye image blocks of size 112x112 with 6 channels.
As a preferable scheme, the strabismus judging device further comprises a screening module, wherein the screening module is used for screening out the face images in the eye opening state according to the extracted distance information between the face key points before the left eye key point and the right eye key point of each face image are determined from the face key points; and screening the face images with the eyes in a preset range relative to the offset of the camera from the face images in the eye opening state to obtain the face images subjected to screening processing.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a squint discrimination method and a device based on face key point detection, wherein the method comprises the following steps: acquiring a plurality of face images of a subject, and extracting face key points from each face image of the subject; determining a left eye key point and a right eye key point of each face image from the face key points, further obtaining a left eye external rectangle and a right eye external rectangle based on the left eye key point and the right eye key point respectively, and cutting the left eye external rectangle and the right eye external rectangle respectively to obtain a left eye image block and a right eye image block corresponding to each face image; and inputting the left eye image block and the right eye image block of each face image into a pre-trained squint discrimination model, respectively determining the score of each face image in each squint category based on the output of the squint discrimination model, and judging the squint type of the subject according to the scores of all the face images in each squint category. In the embodiment of the invention, the key points of the face are extracted from the face graph, and the images of the left eye and the right eye are cut based on the key points of the face, so that compared with the technical scheme based on pupil identification, the relative relationship between the face and the left eye and the right eye can be fully utilized, and the accuracy of strabismus screening and discrimination is improved; and the left eye external rectangle and the right eye external rectangle are respectively cut to obtain a left eye image block and a right eye image block corresponding to each face image, so that more complete eye information can be obtained, misjudgment caused by inaccurate pupil identification is avoided, and the overall identification effect is improved.
Furthermore, the scores of each face image in each squint category are respectively judged through preset classification labels, the preset classification labels comprise nine types, each type corresponds to one squint category, and then the squint type of the subject can be judged through the scores of each squint category, so that the condition that the judgment result is fuzzy can be avoided, and the identification effect and the stability of the model are improved.
Furthermore, a basic classification model for deep learning is adopted, the number of input channels of the model is 6, on the basis, user sample data is used for training, the mapping relation between image blocks and classes is learned, the performance of the model is optimized, and the accuracy of judgment can be further improved.
Further, based on the distance information between the key points of the human face, the human face images in the eye opening state are screened out, the human face images with the eyes in the preset range relative to the offset of the camera are screened out, more effective user samples are obtained through secondary screening, and the accuracy of squint judgment is further improved.
Drawings
FIG. 1: the invention provides a flow diagram of an embodiment of a strabismus judging method based on face key point detection.
FIG. 2: the invention provides a schematic diagram of an embodiment of a face key point.
FIG. 3: the invention provides a structural schematic diagram of an embodiment of an oblique vision judging device based on face key point detection.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
The first embodiment is as follows:
according to the relevant literature, human stereovision occurs mainly at 3 to 4 months after birth and will develop to a substantial maturity between 3 and 4 years of age and end at 9 years of age. According to the development process of human stereoscopic vision, it is known that the infants with strabismus are subjected to corresponding visual training or correction in the age of 3-5 years, which is more beneficial to the recovery of stereoscopic vision, and the infants in the age of 3-5 years are also the period with the highest incidence rate of strabismus of children, if the infants miss the optimal correction time, pathological changes such as macula lutea and retina corresponding abnormality can be inhibited along with the increase of the age of strabismus and the increase of the disease duration. Therefore, the method can be used for timely judging and correspondingly correcting situations such as strabismus and the like, and is very important for the healthy development of vision.
Clinically, the prior art mainly checks whether a person has strabismus by the prism alternate cover test, but this method requires additional auxiliary equipment, depends on expert experience, is time-consuming, and demands a high degree of cooperation from the subject. Although strabismus screening methods based on artificial intelligence and video already exist, they depend heavily on the accuracy of pupil identification and do not fully exploit the relative relationship between the face and the two eyes, so their accuracy in strabismus screening and discrimination is low.
In view of one or more of the above technical problems, please refer to fig. 1, which shows a strabismus discrimination method based on face key point detection according to an embodiment of the present invention. The method can be applied to subjects including but not limited to infants, and includes steps S1 to S3; wherein:
step S1, a plurality of face images of the testees are obtained, and face key points are respectively extracted from the face images of the testees.
In this embodiment, a video of the subject can be recorded by a mobile phone camera (front or rear), a notebook computer camera, or the like. The subject's face must be in frame throughout the recording, the recording time is 10 s, and the subject should keep their eyes open for the whole recording. From the captured video, the first n photographs (preferably 5) meeting the requirements are screened out, specifically:
and (3) according to a fixed interval, performing frame extraction on sequence frames of the shot video, taking each second as a time period, extracting 10 pictures from each time period at equal intervals, obtaining a picture sequence with the length of 100 in total, and obtaining a plurality of face images of the subject.
Then all the face images are input into the face detection module provided by the dlib library to detect the subject's face, and the detected face information is fed into the existing PIPNet to obtain the face key points shown in fig. 2. Each face image contains a plurality of face key points.
Before step S2, the method further includes: screening out the face images in an eye-open state according to the distance information between the extracted face key points, and then, from those eye-open images, selecting the face images whose eye offset relative to the camera lies within a preset range, to obtain the screened face images.
For example, taking fig. 2 as an example, PIPNet recognizes the face information and extracts the face key points numbered 1 to 67, from which the distances d1, d2, d3 and d4 are calculated. The distance d1 is the distance between face key point 38 and face key point 40 of each face image, and the distance d2 is the distance between face key point 43 and face key point 47; s1 is the sum of d1 and d2. The images are then sorted by s1 in descending order, the top 30 face images are kept, and the remaining 70 are filtered out. Next, for each of these 30 images, the distance d3 from face key point 36 to face key point 39 and the distance d4 from face key point 42 to face key point 45 are calculated, giving s2, the absolute value of d3-d4. The top 5 face images in ascending order of s2 are kept and the remaining 25 are discarded, yielding the twice-screened face images. This ensures complete eye information, avoids face images and key points in which the eyes are too far closed or deviate too far from the camera, guarantees the validity of the face images and key points, and improves the accuracy of the discrimination result.
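The two-stage screening above can be sketched as follows, assuming each image's landmarks are indexable by key-point number as in the scheme the text references. `select_frames` and the landmark container are illustrative names, not from the patent.

```python
import numpy as np

def dist(p, q):
    return float(np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))

def select_frames(landmarks_per_image, keep_open=30, keep_final=5):
    # Stage 1: eye-openness score s1 = d1 + d2; keep the `keep_open` largest.
    def s1(lm):
        d1 = dist(lm[38], lm[40])  # left-eye vertical opening
        d2 = dist(lm[43], lm[47])  # right-eye vertical opening
        return d1 + d2
    open_idx = sorted(range(len(landmarks_per_image)),
                      key=lambda i: s1(landmarks_per_image[i]), reverse=True)[:keep_open]
    # Stage 2: camera-offset proxy s2 = |d3 - d4|; keep the `keep_final` smallest.
    def s2(lm):
        d3 = dist(lm[36], lm[39])  # left-eye horizontal width
        d4 = dist(lm[42], lm[45])  # right-eye horizontal width
        return abs(d3 - d4)
    return sorted(open_idx, key=lambda i: s2(landmarks_per_image[i]))[:keep_final]
```

With 100 frames this yields the 5 twice-screened face images used in the later steps.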
Step S2: determine a left-eye key point and a right-eye key point of each face image from the face key points, obtain a left-eye circumscribed rectangle and a right-eye circumscribed rectangle based on the left-eye and right-eye key points respectively, and crop the two rectangles to obtain a left-eye image block and a right-eye image block corresponding to each face image.
In this embodiment, the left-eye and right-eye key points of each twice-screened face image are determined. The circumscribed rectangle of the left-eye key points is obtained from face key points 36 to 41, and the circumscribed rectangle of the right-eye key points from face key points 42 to 47. An eye image block expanded by a factor of 1.2 (the length and width of the rectangle are each scaled by 1.2) about the centre point of the circumscribed rectangle is then cropped out, giving the left-eye image block and the right-eye image block of each face image.
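A minimal sketch of this cropping step, assuming `image` is an H×W×C numpy array and `eye_points` holds the six eye landmarks; `crop_eye` is an assumed helper name, and the clamping to the image border is an added safeguard not spelled out in the text.

```python
import numpy as np

def crop_eye(image, eye_points, scale=1.2):
    """Crop the circumscribed rectangle of the eye landmarks, expanded by `scale`."""
    pts = np.asarray(eye_points, dtype=float)
    x0, y0 = pts.min(axis=0)  # circumscribed (bounding) rectangle corners
    x1, y1 = pts.max(axis=0)
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2          # centre of the rectangle
    w, h = (x1 - x0) * scale, (y1 - y0) * scale    # length and width scaled by 1.2
    left, top = max(int(round(cx - w / 2)), 0), max(int(round(cy - h / 2)), 0)
    right, bottom = int(round(cx + w / 2)), int(round(cy + h / 2))
    return image[top:bottom, left:right]
```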
Further, before step S3, the method further includes: scaling all left-eye image blocks and all right-eye image blocks to 112x112, horizontally flipping each right-eye image block and concatenating it with the corresponding left-eye image block along the channel dimension, to obtain preprocessed left-and-right-eye image blocks of size 112x112 with 6 channels.
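A hedged sketch of this preprocessing step with numpy; resizing to 112x112 is assumed to have been done beforehand (e.g. with `cv2.resize`), and `preprocess_pair` is an illustrative name.

```python
import numpy as np

def preprocess_pair(left, right):
    """Flip the right-eye block horizontally and stack it with the left-eye
    block along the channel axis, giving one 112x112x6 model input."""
    assert left.shape == right.shape == (112, 112, 3)
    right_flipped = right[:, ::-1, :]  # mirror so both eyes share orientation
    return np.concatenate([left, right_flipped], axis=2)  # H x W x 6
```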
Step S3: input the left-eye image block and the right-eye image block of each face image into a pre-trained strabismus discrimination model, determine the score of each face image in each strabismus category based on the output of the model, and judge the strabismus type of the subject according to the scores of all face images in each strabismus category.
As a preferred embodiment, the training process of the strabismus discriminant model includes:
acquiring user sample data, which is equivalent to a data set and comprises the left-eye and right-eye image blocks of a plurality of users together with each user's strabismus category; the left-eye and right-eye image blocks can be obtained by manual cropping or by the method described above, and the strabismus category labels come from the users' historical medical records or past eye-examination data.
Training a basic deep-learning classification model on the sample data of the plurality of users; the basic classification model can adopt a resnet18 network structure, learning the mapping relationship between image blocks and categories and extracting the correlations within the image blocks, until the model converges, which yields the strabismus discrimination model; the number of input channels of the basic classification model is 6.
Further, after the preprocessed left-and-right-eye image blocks of size 112x112 with 6 channels are input into the pre-trained strabismus discrimination model, the score of each face image in each strabismus category is determined based on the output of the model, specifically:
based on the output of the squint discrimination model, the score of each face image in each squint category is respectively judged through a preset classification label; the classification tags comprise a left eye inner squint tag, a left eye outer squint tag, a right eye inner squint tag, a right eye outer squint tag, a left eye upper squint tag, a left eye lower squint tag, a right eye upper squint tag, a right eye lower squint tag and a normal tag; each classification label corresponds to a squint category.
Because the subject has 5 face images in total, the mean score of the five face images is taken for each strabismus category, and the category with the highest mean score is taken as the subject's strabismus type, i.e. whether the subject has left-eye inward strabismus, left-eye outward strabismus, right-eye inward strabismus, right-eye outward strabismus, left-eye upward strabismus, left-eye downward strabismus, right-eye upward strabismus, right-eye downward strabismus, or is normal. In this embodiment, strabismus is divided into these nine categories, so the discrimination result can satisfy the requirements of different application scenarios. In addition, because no auxiliary equipment beyond a camera is needed (auxiliary equipment here refers to, for example, the apparatus for the prism alternate cover test), the strabismus type can be determined automatically by the strabismus discrimination model without a high degree of cooperation from the subject, so the method is highly applicable to subjects such as infants.
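The score-averaging decision above can be sketched as follows; the mapping from output index to label string is an illustrative assumption (the labels merely mirror the nine categories listed in the text).

```python
import numpy as np

LABELS = [
    "left eye inward strabismus", "left eye outward strabismus",
    "right eye inward strabismus", "right eye outward strabismus",
    "left eye upward strabismus", "left eye downward strabismus",
    "right eye upward strabismus", "right eye downward strabismus",
    "normal",
]

def decide_strabismus_type(scores) -> str:
    """scores: (num_images, 9) per-image category scores, e.g. shape (5, 9)."""
    mean = np.asarray(scores, dtype=float).mean(axis=0)  # average over the images
    return LABELS[int(mean.argmax())]                    # highest mean score wins
```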
Correspondingly, referring to fig. 3, an embodiment of the present invention provides a strabismus discrimination device based on face key point detection, including a key point extraction module 101, an image clipping module 102, and a determination module 103; wherein:
the key point extraction module 101 is configured to acquire a plurality of face images of a subject and extract face key points from each face image of the subject;
the image cropping module 102 is configured to determine a left-eye key point and a right-eye key point of each face image from the face key points, further obtain a left-eye circumscribed rectangle and a right-eye circumscribed rectangle based on the left-eye key point and the right-eye key point respectively, and crop the left-eye circumscribed rectangle and the right-eye circumscribed rectangle respectively to obtain a left-eye image block and a right-eye image block corresponding to each face image;
the judging module 103 is configured to input the left-eye image block and the right-eye image block of each face image into a pre-trained squint judging model, determine the score of each face image in each squint category based on the output of the squint judging model, and judge the squint type of the subject according to the scores of each squint category of all face images.
Preferably, the determination module 103 determines scores of each face image in each strabismus category based on the output of the strabismus determination model, specifically:
the judging module 103 judges the score of each face image in each squint category through a preset classification label based on the output of the squint judging model; the classification tags comprise a left eye inner squint tag, a left eye outer squint tag, a right eye inner squint tag, a right eye outer squint tag, a left eye upper squint tag, a left eye lower squint tag, a right eye upper squint tag, a right eye lower squint tag and a normal tag; each classification label corresponds to a squint category.
As a preferred scheme, the training process of the strabismus discrimination model comprises the following steps:
acquiring user sample data; the user sample data comprises left eye image blocks and right eye image blocks of a plurality of users and the squint categories of the users;
training a basic classification model adopting deep learning through the user sample data until the basic classification model converges to obtain the squint discrimination model; the number of input channels of the basic classification model is 6.
As a preferred scheme, the strabismus judging device further includes a preprocessing module, which is configured to scale all left-eye image blocks and all right-eye image blocks to 112x112 before the left-eye and right-eye image blocks of each face image are input into the pre-trained strabismus discrimination model, and to horizontally flip each right-eye image block and concatenate it with the corresponding left-eye image block along the channel dimension, obtaining preprocessed left-and-right-eye image blocks of size 112x112 with 6 channels.
As a preferable scheme, the strabismus judging device further comprises a screening module, wherein the screening module is used for screening out the face images in the eye opening state according to the extracted distance information between the face key points before the left eye key point and the right eye key point of each face image are determined from the face key points; and screening the face images with the eyes in a preset range relative to the offset of the camera from the face images in the eye opening state to obtain the face images subjected to screening processing.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a squint discrimination method and a device based on face key point detection, wherein the method comprises the following steps: acquiring a plurality of face images of subjects, and extracting face key points from the face image of each subject respectively; determining a left eye key point and a right eye key point of each face image from the face key points, further obtaining a left eye external rectangle and a right eye external rectangle based on the left eye key point and the right eye key point respectively, and cutting the left eye external rectangle and the right eye external rectangle respectively to obtain a left eye image block and a right eye image block corresponding to each face image; and inputting the left eye image block and the right eye image block of each face image into a pre-trained squint discrimination model, respectively determining the score of each face image in each squint category based on the output of the squint discrimination model, and judging the squint type of the subject according to the scores of all the face images in each squint category. In the embodiment of the invention, the face key points are extracted from the face graph, and the images of the left eye and the right eye are cut based on the face key points, so that compared with the technical scheme based on pupil identification, the relative relation between the face and the left eye and the right eye can be fully utilized, and the accuracy of strabismus screening and discrimination is improved; and the left eye external rectangle and the right eye external rectangle are respectively cut to obtain a left eye image block and a right eye image block corresponding to each face image, so that more complete eye information can be obtained, misjudgment caused by inaccurate pupil identification is avoided, and the overall identification effect is improved.
Furthermore, the score of each face image in each strabismus category is determined through preset classification labels. There are nine labels, each corresponding to one strabismus category, and the subject's strabismus type is then judged from the per-category scores. This avoids ambiguous judgment results and improves the recognition effect and stability of the model.
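The per-category scoring and final decision could be sketched as follows. The label names and the aggregation by mean score are illustrative assumptions; the patent only states that the subject's type is judged from the scores of all face images in each category.

```python
import numpy as np

# Nine hypothetical class labels: eight squint directions plus "normal".
# The actual label order would be fixed at training time.
LABELS = [
    "left_esotropia", "left_exotropia", "right_esotropia", "right_exotropia",
    "left_hypertropia", "left_hypotropia", "right_hypertropia",
    "right_hypotropia", "normal",
]

def decide_squint_type(scores_per_image):
    """Average the per-image class scores over all face images of one
    subject and return the label with the highest mean score."""
    mean_scores = np.mean(np.asarray(scores_per_image), axis=0)
    return LABELS[int(np.argmax(mean_scores))]
```

For example, with three face images whose score vectors all peak at the last class, the subject would be judged "normal".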
Furthermore, a deep-learning basic classification model with six input channels is adopted. The model is trained on user sample data to learn the mapping between image blocks and categories, which optimizes model performance and further improves judgment accuracy.
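The six-channel input arises from stacking the two three-channel eye patches: both are scaled to 112×112, the right-eye patch is horizontally flipped, and the two are concatenated along the channel axis. A minimal numpy sketch of this preprocessing (the nearest-neighbour resize is a stand-in for a library call such as an image-resizing routine):

```python
import numpy as np

def resize_nearest(img, size=112):
    """Nearest-neighbour resize to size x size (stand-in for a library call)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def make_six_channel_input(left_eye, right_eye):
    """Scale both eye patches to 112x112, horizontally flip the right-eye
    patch, and concatenate along the channel axis -> (112, 112, 6)."""
    l = resize_nearest(left_eye)
    r = resize_nearest(right_eye)[:, ::-1]  # horizontal flip
    return np.concatenate([l, r], axis=-1)
```

The resulting 112×112×6 tensor is what a classification backbone with `in_channels=6` would consume.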
Furthermore, based on the distance information between the face key points, face images in the eye-open state are first selected; from these, face images whose eye offset relative to the camera lies within a preset range are then selected. This two-stage screening yields more valid user samples and further improves the accuracy of the strabismus judgment.
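One common way to implement eye-open screening from key-point distances is an eye-aspect-ratio test; the sketch below combines it with the offset check. The EAR formulation, the 6-point contour ordering, and both thresholds are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def eye_aspect_ratio(eye_pts):
    """Ratio of vertical to horizontal key-point distances; small values
    indicate a closed eye. Assumes a 6-point eye contour ordered as
    (corner, two upper, corner, two lower)."""
    v1 = np.linalg.norm(eye_pts[1] - eye_pts[5])
    v2 = np.linalg.norm(eye_pts[2] - eye_pts[4])
    h = np.linalg.norm(eye_pts[0] - eye_pts[3])
    return (v1 + v2) / (2.0 * h)

def passes_screening(eye_pts, eye_center, image_center,
                     ear_thresh=0.2, max_offset=0.25):
    """Keep a frame only if the eye is open and the eye centre lies within
    a preset relative offset of the image centre (thresholds illustrative)."""
    if eye_aspect_ratio(eye_pts) < ear_thresh:
        return False  # eye closed: drop this face image
    offset = np.linalg.norm(np.asarray(eye_center) - np.asarray(image_center))
    return bool(offset <= max_offset * max(image_center))
```

Frames failing either test are discarded before the cropping and classification steps.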
The above-mentioned embodiments further explain the objects, technical solutions and advantages of the present invention in detail. It should be understood that they are only examples of the present invention and are not intended to limit its scope; any modifications, equivalent substitutions and improvements made within the spirit and principle of the invention shall fall within the protection scope of the invention.

Claims (10)

1. A strabismus discrimination method based on face key point detection is characterized by comprising the following steps:
acquiring a plurality of face images of a subject, and extracting face key points from each face image of the subject;
determining a left-eye key point and a right-eye key point of each face image from the face key points, obtaining a left-eye circumscribed rectangle and a right-eye circumscribed rectangle based on the left-eye key point and the right-eye key point respectively, and cropping the left-eye circumscribed rectangle and the right-eye circumscribed rectangle respectively to obtain a left-eye image block and a right-eye image block corresponding to each face image;
and inputting the left eye image block and the right eye image block of each face image into a pre-trained squint discrimination model, respectively determining the score of each face image in each squint category based on the output of the squint discrimination model, and judging the squint type of the subject according to the scores of all the face images in each squint category.
2. The method for strabismus discrimination based on human face keypoint detection as claimed in claim 1, wherein the score of each human face image in each strabismus category is determined based on the output of the strabismus discrimination model, specifically:
based on the output of the strabismus discrimination model, determining the score of each face image in each strabismus category through preset classification labels; the classification labels comprise a left-eye inward squint label, a left-eye outward squint label, a right-eye inward squint label, a right-eye outward squint label, a left-eye upward squint label, a left-eye downward squint label, a right-eye upward squint label, a right-eye downward squint label and a normal label; each classification label corresponds to one strabismus category.
3. The method for judging strabismus based on human face key point detection as claimed in claim 2, wherein the training process of the strabismus judging model comprises:
acquiring user sample data; the user sample data comprises left eye image blocks and right eye image blocks of a plurality of users and the squint categories of the users;
training a basic classification model adopting deep learning through the user sample data until the basic classification model converges to obtain the squint discrimination model; the number of input channels of the basic classification model is 6.
4. The method as claimed in claim 3, wherein before the step of inputting the left-eye image block and the right-eye image block of each face image into the pre-trained strabismus decision model, the method further comprises:
scaling all the left-eye image blocks and all the right-eye image blocks to 112x112, horizontally flipping each right-eye image block, and concatenating it with the corresponding left-eye image block along the channel dimension to obtain preprocessed left-and-right-eye image blocks of size 112x112 with 6 channels.
5. The method as claimed in any one of claims 1 to 4, wherein before determining the left-eye key point and the right-eye key point of each face image from the face key points, the method further comprises:
selecting, according to the extracted distance information between the face key points, the face images in the eye-open state; and selecting, from the face images in the eye-open state, the face images whose eye offset relative to the camera lies within a preset range, thereby obtaining the screened face images.
6. A strabismus discrimination device based on face key point detection, characterized by comprising a key point extraction module, an image cropping module and a discrimination module; wherein:
the key point extraction module is used for acquiring a plurality of face images of a subject and extracting face key points from each face image of the subject;
the image cropping module is used for determining a left-eye key point and a right-eye key point of each face image from the face key points, obtaining a left-eye circumscribed rectangle and a right-eye circumscribed rectangle based on the left-eye key point and the right-eye key point respectively, and cropping the left-eye circumscribed rectangle and the right-eye circumscribed rectangle respectively to obtain a left-eye image block and a right-eye image block corresponding to each face image;
and the discrimination module is used for inputting the left-eye image block and the right-eye image block of each face image into a pre-trained strabismus discrimination model, determining the score of each face image in each strabismus category based on the output of the strabismus discrimination model, and judging the strabismus type of the subject according to the scores of all the face images in each strabismus category.
7. The device for discriminating strabismus based on human face key point detection according to claim 6, wherein the discriminating module determines the score of each human face image in each strabismus category based on the output of the strabismus discriminating model, specifically:
the discrimination module is used for determining, based on the output of the strabismus discrimination model, the score of each face image in each strabismus category through preset classification labels; the classification labels comprise a left-eye inward squint label, a left-eye outward squint label, a right-eye inward squint label, a right-eye outward squint label, a left-eye upward squint label, a left-eye downward squint label, a right-eye upward squint label, a right-eye downward squint label and a normal label; each classification label corresponds to one strabismus category.
8. The device for judging strabismus based on human face key point detection as claimed in claim 7, wherein the training process of the strabismus judging model comprises:
acquiring user sample data; the user sample data comprises left eye image blocks and right eye image blocks of a plurality of users and the squint categories of the users;
training a basic classification model adopting deep learning through the user sample data until the basic classification model converges to obtain the squint discrimination model; the number of input channels of the basic classification model is 6.
9. The device according to claim 7, further comprising a preprocessing module, wherein the preprocessing module is configured to, before the left-eye image block and the right-eye image block of each face image are input into the pre-trained strabismus discrimination model, scale all left-eye image blocks and all right-eye image blocks to 112x112, horizontally flip each right-eye image block and concatenate it with the corresponding left-eye image block along the channel dimension, so as to obtain preprocessed left-and-right-eye image blocks of size 112x112 with 6 channels.
10. The device according to claim 7, further comprising a screening module, wherein the screening module is configured to, before the left-eye key point and the right-eye key point of each face image are determined from the face key points, select the face images in the eye-open state according to the extracted distance information between the face key points, and select, from the face images in the eye-open state, the face images whose eye offset relative to the camera lies within a preset range, thereby obtaining the screened face images.
CN202310158984.5A 2023-02-24 2023-02-24 Strabismus judging method and device based on face key point detection Active CN115953389B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310158984.5A CN115953389B (en) 2023-02-24 2023-02-24 Strabismus judging method and device based on face key point detection


Publications (2)

Publication Number Publication Date
CN115953389A true CN115953389A (en) 2023-04-11
CN115953389B CN115953389B (en) 2023-11-24

Family

ID=87289607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310158984.5A Active CN115953389B (en) 2023-02-24 2023-02-24 Strabismus judging method and device based on face key point detection

Country Status (1)

Country Link
CN (1) CN115953389B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006053666A (en) * 2004-08-10 2006-02-23 Olympus Corp Image processing program, image processing method, image processing apparatus, and recording medium
GB2496005A (en) * 2012-07-06 2013-05-01 Iriss Medical Technologies Ltd Method for detecting strabismus in images of the eyes
US20160171321A1 (en) * 2014-12-15 2016-06-16 Aisin Seiki Kabushiki Kaisha Determination apparatus and determination method
CN107690675A (en) * 2017-08-21 2018-02-13 美的集团股份有限公司 Control method, control device, Intelligent mirror and computer-readable recording medium
CN110826396A (en) * 2019-09-18 2020-02-21 云知声智能科技股份有限公司 Method and device for detecting eye state in video
CN112036311A (en) * 2020-08-31 2020-12-04 北京字节跳动网络技术有限公司 Image processing method and device based on eye state detection and storage medium
CN112419670A (en) * 2020-09-15 2021-02-26 深圳市点创科技有限公司 Method, device and medium for detecting fatigue driving of driver by fusing key point positioning and image classification
CN112949345A (en) * 2019-11-26 2021-06-11 北京四维图新科技股份有限公司 Fatigue monitoring method and system, automobile data recorder and intelligent cabin
CN113662506A (en) * 2021-09-26 2021-11-19 温州医科大学 Corneal surface morphology measuring method, device, medium and electronic equipment
CN114299587A (en) * 2021-12-30 2022-04-08 上海商汤临港智能科技有限公司 Eye state determination method and apparatus, electronic device, and storage medium
CN114495252A (en) * 2022-01-26 2022-05-13 上海商汤临港智能科技有限公司 Sight line detection method and device, electronic equipment and storage medium
CN114758384A (en) * 2022-03-29 2022-07-15 奇酷软件(深圳)有限公司 Face detection method, device, equipment and storage medium
US20220277596A1 (en) * 2020-06-22 2022-09-01 Tencent Technology (Shenzhen) Company Limited Face anti-spoofing recognition method and apparatus, device, and storage medium
CN115223231A (en) * 2021-04-15 2022-10-21 虹软科技股份有限公司 Sight direction detection method and device
CN115456974A (en) * 2022-08-31 2022-12-09 上海睿介机器人科技有限公司 Strabismus detection system, method, equipment and medium based on face key points


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄家才 等: "基于人脸关键点的疲劳驾驶检测研究", 南京工程学院学报(自然科学版), no. 04, pages 11 - 16 *

Also Published As

Publication number Publication date
CN115953389B (en) 2023-11-24

Similar Documents

Publication Publication Date Title
CN109377474B (en) Macular positioning method based on improved Faster R-CNN
CN109543526B (en) True and false facial paralysis recognition system based on depth difference characteristics
JP4888217B2 (en) Person attribute estimation device
US7620216B2 (en) Method of tracking a human eye in a video image
US20030016846A1 (en) Method for automatically locating eyes in an image
US20020114495A1 (en) Multi-mode digital image processing method for detecting eyes
Boehnen et al. A fast multi-modal approach to facial feature detection
CN103902958A (en) Method for face recognition
CN109299690B (en) Method capable of improving video real-time face recognition precision
US20240046632A1 (en) Image classification method, apparatus, and device
KR20200012355A (en) Online lecture monitoring method using constrained local model and Gabor wavelets-based face verification process
CN112232448B (en) Image classification method and device, electronic equipment and storage medium
Monwar et al. Pain recognition using artificial neural network
JP6448212B2 (en) Recognition device and recognition method
CN110176295A (en) A kind of real-time detecting method and its detection device of Gastrointestinal Endoscopes lower portion and lesion
CN113887386A (en) Fatigue detection method based on multi-feature fusion of deep learning and machine learning
WO2019189971A1 (en) Artificial intelligence analysis method of iris image and retinal image to diagnose diabetes and presymptom
Kumar et al. Detection of glaucoma using image processing techniques: a critique
JP2004303150A (en) Apparatus, method and program for face identification
CN115953389A (en) Strabismus discrimination method and device based on face key point detection
RU2175148C1 (en) Method for recognizing person identity
CN110751064B (en) Blink frequency analysis method and system based on image processing
CN113536947A (en) Face attribute analysis method and device
Gallo et al. WCE video segmentation using textons
CN110334698A (en) Glasses detection system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant