WO2020252969A1 - Eye key point labeling method and apparatus, and training method and apparatus for eye key point detection model - Google Patents
- Publication number
- WO2020252969A1 (PCT/CN2019/108077)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- eyelid
- eye
- points
- curve
- image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
Definitions
- the present invention relates to the technical field of intelligent detection, and in particular to an eye key point labeling method and apparatus, and a training method and apparatus for an eye key point detection model.
- the fatigue detection system can detect, based on a pre-trained eye key point detection model, the key points contained in an image.
- specifically, the key points of the upper and lower eyelids of the human eye are detected; then, based on these key points, the distance between the upper and lower eyelids is calculated as the opening and closing distance of the human eye; and finally, based on the opening and closing distances observed within a preset time length, it is determined whether the person in the image is in a fatigue state, realizing detection of the person's fatigue state.
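The fatigue logic described above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the key-point coordinates, the closed-eye threshold, and the 70% closed-frame ratio are all hypothetical choices.

```python
def eye_opening_distance(upper_points, lower_points):
    """Mean vertical distance between paired upper- and lower-eyelid key points."""
    assert len(upper_points) == len(lower_points)
    return sum(abs(u[1] - l[1]) for u, l in zip(upper_points, lower_points)) / len(upper_points)

def is_fatigued(distances, closed_threshold, closed_ratio=0.7):
    """Flag fatigue when the eye is nearly closed in most frames of a time window."""
    closed = sum(1 for d in distances if d < closed_threshold)
    return closed / len(distances) >= closed_ratio

# Hypothetical (x, y) key points for the upper and lower eyelids of one eye.
upper = [(10, 4), (15, 3), (20, 4)]
lower = [(10, 8), (15, 9), (20, 8)]
d = eye_opening_distance(upper, lower)      # mean of |4-8|, |3-9|, |4-8|
print(d)
print(is_fatigued([4.7, 0.5, 0.4, 0.3], closed_threshold=1.0))  # eye closed in 3 of 4 frames
```

A production system would track the distance frame by frame and compare the fraction of "closed" frames within the preset time length against the ratio threshold.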
- the beautifying camera can detect the eye key points of the upper and lower eyelids contained in the image based on the pre-trained eye key point detection model.
- the positions of the key points of the upper and lower eyelids are then used to scale the human eyes so as to beautify them.
- the above-mentioned pre-trained eye key point detection model is obtained by training based on sample images marked with human eye key points.
- the above-mentioned eye key points are generally marked manually by an annotator.
- however, marking standards for eye key points are not uniform across annotators.
- moreover, the semantic features of eye key points are not obvious, so the efficiency of manually marking eye key points is low.
- the present invention provides an eye key point marking method and apparatus and a detection model training method and apparatus, so as to mark eye key points with obvious semantic features and improve marking efficiency to a certain extent.
- the specific technical solution is as follows.
- an embodiment of the present invention provides a method for labeling key points of the eye.
- the method includes: obtaining face images and a marked eyelid curve corresponding to each face image, wherein each face image is marked with the marked eye corner points and the marked eyelid points of the upper and lower eyelids.
- each marked eyelid curve includes: a marked upper eyelid curve representing the upper eyelid and a marked lower eyelid curve representing the lower eyelid, both generated based on the corresponding marked eyelid points and marked eye corner points;
- for each marked eyelid curve, based on the principle of mathematical integration, the marked eyelid curve is integrated to determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve; multiple eyelid points to be used are determined from the marked upper eyelid curve and the marked lower eyelid curve respectively;
- based on the first curve length, the second curve length, the multiple eyelid points to be used and the preset number of equal division points, the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points are determined from the marked upper eyelid curve and the marked lower eyelid curve respectively;
- the marked eye corner points, equally divided upper eyelid points and equally divided lower eyelid points in each face image are determined as the eye key points corresponding to that face image.
- the step of obtaining the eyelid curve corresponding to each face image includes:
- the upper eyelid curve representing the upper eyelid is fitted based on the marked eye corner points in the face image, the marked eyelid points of the upper eyelid, and a preset curve fitting algorithm;
- the lower eyelid curve representing the lower eyelid is fitted based on the marked eye corner points in the face image, the marked eyelid points of the lower eyelid and the preset curve fitting algorithm, to obtain the eyelid curve corresponding to the face image.
- the method further includes: for each face image, based on the eye key points in the face image, cropping the image of the area where the eyes are located from the face image to obtain an eye image marked with the eye key points;
- the eye image and its corresponding calibration information are determined as training data of an eye key point detection model for detecting the eye key points of eyes in an image, wherein the calibration information includes the position information of the eye key points marked in the corresponding eye image.
- the step of determining the eye image and its corresponding calibration information as the training data of the eye key point detection model for detecting the eye key points of the eyes in the image includes:
- a measured eye opening and closing length corresponding to each face image is obtained, where the measured eye opening and closing length is a length determined based on the upper and lower eyelids of the eyes in a target three-dimensional face model corresponding to the face image;
- the target three-dimensional face model is a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model, and the upper and lower eyelids of the eyes in the target three-dimensional face model are constructed based on the marked eye corner points, equally divided upper eyelid points and equally divided lower eyelid points in the face image;
- the eye image and its corresponding calibration information are determined as training data of the eye key point detection model used to detect the upper and lower eyelids of eyes in an image, where the calibration information includes: the position information of the eye key points marked in the corresponding eye image, and the measured deviation corresponding to the face image corresponding to the eye image.
- the method further includes a process of marking the upper and lower eyelids of the eyes of the person's face in each face image, wherein for each face image the following steps are performed:
- the first face image includes the eyes of a person's face, and the first face image is one of the face images;
- if it is detected that the annotator has triggered the marking instruction for a specified eyelid in the first face image at least twice, a specified eyelid curve representing the specified eyelid is determined based on the position information of the marked points carried by the at least two triggered marking instructions and the preset curve fitting algorithm, wherein the specified eyelid is the upper eyelid or the lower eyelid of the eye in the first face image;
- the specified eyelid curve is displayed in the first face image, so that the annotator can check whether each marked point is an eyelid point or an eye corner point on the specified eyelid.
- an embodiment of the present invention provides a method for training an eye key point detection model, the method including:
- the training data includes eye images marked with the equally divided eyelid points of the upper and lower eyelids and the marked eye corner points, and calibration information corresponding to each eye image;
- the calibration information includes: the position information of the equally divided eyelid points and the marked eye corner points marked in the corresponding eye image;
- the equally divided eyelid points include: the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid of the eyes in the face image, determined based on the marked eyelid curve corresponding to the face image and the principle of mathematical integration;
- the marked eyelid curve includes: a marked upper eyelid curve representing the upper eyelid and a marked lower eyelid curve representing the lower eyelid, generated based on the marked eye corner points and the marked eyelid points of the upper and lower eyelids in the corresponding face image; each eye image is an image of the area where the eyes are located, cropped from the corresponding face image;
- the method further includes:
- the correction processing is: processing such that the ordinates in the position information of the marked eye corner points in the eye image are all the same;
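One common way to realize such correction processing is to rotate the key points so that the line through the two eye corners becomes horizontal, making their ordinates equal. The sketch below is an assumption about how this could be done; the function name and sample coordinates are hypothetical.

```python
import math

def align_eye_corners(points, left_corner, right_corner):
    """Rotate key points about the left corner so both eye corners share the same ordinate."""
    dx = right_corner[0] - left_corner[0]
    dy = right_corner[1] - left_corner[1]
    angle = -math.atan2(dy, dx)  # rotate by the negative of the corner-line angle
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    ox, oy = left_corner
    out = []
    for x, y in points:
        tx, ty = x - ox, y - oy
        out.append((ox + tx * cos_a - ty * sin_a, oy + tx * sin_a + ty * cos_a))
    return out

# Hypothetical eye corners whose ordinates initially differ.
left, right = (0.0, 0.0), (4.0, 3.0)
aligned = align_eye_corners([left, right], left, right)
print(aligned)  # both corners now have ordinate 0.0; the corner distance (5.0) is preserved
```

The same rotation would be applied to every eyelid key point so the whole eye is levelled, not just the corners.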
- the step of inputting the eye images and the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model to be trained, to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in an image, includes:
- the eye image includes a left eye image and a right eye image corresponding to the left eye image;
- the method further includes: mirroring the left-eye image, or the right-eye image corresponding to the left-eye image, to obtain a mirror image;
- the mirror image and the unmirrored image are spliced to obtain a spliced image, wherein if the left-eye image is mirrored, the unmirrored image is the right-eye image corresponding to the left-eye image; if the right-eye image corresponding to the left-eye image is mirrored, the unmirrored image is the left-eye image;
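The mirroring and splicing steps can be illustrated on row-major pixel lists. A minimal sketch with hypothetical tiny images, mirroring the right-eye image and splicing it to the unmirrored left-eye image:

```python
def mirror(image):
    """Flip a row-major 2D image left-right."""
    return [row[::-1] for row in image]

def splice(left_img, right_img):
    """Concatenate two equal-height images side by side."""
    return [l + r for l, r in zip(left_img, right_img)]

# Tiny hypothetical grayscale eye crops (2 rows x 3 columns each).
left_eye  = [[1, 2, 3],
             [4, 5, 6]]
right_eye = [[7, 8, 9],
             [10, 11, 12]]

spliced = splice(left_eye, mirror(right_eye))
print(spliced)  # [[1, 2, 3, 9, 8, 7], [4, 5, 6, 12, 11, 10]]
```

Mirroring one eye makes both eyes face the same direction, so one network pass over the spliced image can detect key points for both eyes at once.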
- the step of inputting each converted image and the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each converted image into the initial eye key point detection model to be trained, to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in an image, includes:
- the calibration information corresponding to each eye image further includes: a measured deviation corresponding to the eye image, where the measured deviation is: the ratio of the actual eye opening and closing length corresponding to the eye image to the measured eye opening and closing length;
- the measured eye opening and closing length is the length determined based on the upper and lower eyelids of the eyes in the target three-dimensional face model corresponding to the face image.
- the target three-dimensional face model is a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model.
- the step of inputting the eye images and the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model to be trained, to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in an image, includes:
- the eye images, together with the position information of the equally divided eyelid points and marked eye corner points and the measured deviation corresponding to each eye image included in its calibration information, are input into the initial eye key point detection model to train the eye key point detection model;
- the eye key point detection model is used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in an image, and to detect the measured deviation corresponding to the image.
- the method further includes:
- the to-be-detected image is input into the eye key point detection model, and the equally divided eyelid points and eye corner points of the upper and lower eyelids of the to-be-detected person’s eyes in the to-be-detected image are determined.
- the method further includes: using the Sobel algorithm to perform edge extraction on the image to be detected to obtain a grayscale edge map corresponding to the image to be detected;
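For illustration, here is a plain-Python Sobel gradient-magnitude pass over a small grayscale array; a real system would use an optimized library routine, and the step-edge image below is hypothetical.

```python
def sobel_edges(img):
    """Gradient magnitude of a grayscale image using 3x3 Sobel kernels (border left at 0)."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1] for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1] for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: dark left half, bright right half.
img = [[0, 0, 10, 10]] * 4
edges = sobel_edges(img)
print(edges[1])  # strong responses in the interior columns straddling the step
```

The resulting grayscale edge map is exactly the structure in which the eyelid curves and reference curves are drawn and scored in the following steps.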
- the eyelid curves of the upper and lower eyelids of the eyes of the person to be detected are determined as eyelid curves to be detected, and the eyelid curves to be detected are drawn in the grayscale edge map, wherein the eyelid curves to be detected include: an upper eyelid curve to be detected representing the upper eyelid of the person to be detected and a lower eyelid curve to be detected representing the lower eyelid of the person to be detected;
- each group of reference points includes points corresponding to the equally divided eyelid points in the eyelid curve to be detected ;
- for each group of reference points, the reference curve corresponding to that group is determined, and the reference curve corresponding to each group of reference points is drawn in the grayscale edge map, where the reference curve corresponding to each group of reference points includes: a reference upper eyelid curve representing the upper eyelid of the person to be detected and a reference lower eyelid curve representing the lower eyelid of the person to be detected;
- for each first upper eyelid curve, the sum of the pixel values of the pixel points corresponding to that curve is determined, wherein the first upper eyelid curves include: the reference upper eyelid curve corresponding to each group of reference points and the upper eyelid curve to be detected;
- the sum with the largest value is determined, and the first upper eyelid curve corresponding to the largest sum is determined as a target upper eyelid curve representing the upper eyelid of the person to be detected;
- for each first lower eyelid curve, the sum of the pixel values of the pixel points corresponding to that curve is determined, where the first lower eyelid curves include: the reference lower eyelid curve corresponding to each group of reference points and the lower eyelid curve to be detected;
- the sum with the largest value is determined, and the first lower eyelid curve corresponding to the largest sum is determined as a target lower eyelid curve representing the lower eyelid of the person to be detected;
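The selection among candidate curves can be sketched as follows: each candidate (the detected curve plus the perturbed reference curves) is scored by the sum of edge-map pixel values it passes through, and the largest sum wins. The edge map and curve coordinates below are hypothetical.

```python
def curve_pixel_sum(edge_map, curve_points):
    """Sum of edge-map pixel values under a curve's integer (x, y) samples."""
    return sum(edge_map[y][x] for x, y in curve_points)

def best_curve(edge_map, candidate_curves):
    """Pick the candidate whose pixels accumulate the largest edge response."""
    return max(candidate_curves, key=lambda c: curve_pixel_sum(edge_map, c))

# Hypothetical 4x4 grayscale edge map; row 1 carries the strong eyelid edge.
edge_map = [
    [0, 0, 0, 0],
    [9, 9, 9, 9],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
detected  = [(0, 2), (1, 2), (2, 2), (3, 2)]   # curve output by the detection model
reference = [(0, 1), (1, 1), (2, 1), (3, 1)]   # a perturbed reference curve
print(best_curve(edge_map, [detected, reference]))  # the reference curve wins (sum 36 vs 4)
```

The winning curve is what the following bullets call the target eyelid curve, from which the final equally divided eyelid points are recomputed.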
- the target eyelid curve is integrated to determine the third curve length of the target upper eyelid curve and the fourth curve length of the target lower eyelid curve in the target eyelid curve;
- multiple reference eyelid points are determined from the target upper eyelid curve and the target lower eyelid curve in the target eyelid curve;
- based on the third curve length, the fourth curve length, the multiple reference eyelid points and the preset number of equal division points, the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points are respectively determined from the target upper eyelid curve and the target lower eyelid curve.
- an embodiment of the present invention provides a device for marking eye key points.
- the device includes: a first obtaining module configured to obtain a face image and an eyelid curve corresponding to each face image.
- each face image is marked with the marked eye corner points and the marked eyelid points of the upper and lower eyelids, and each marked eyelid curve includes: a marked upper eyelid curve representing the upper eyelid and a marked lower eyelid curve representing the lower eyelid, generated based on the corresponding marked eyelid points and marked eye corner points;
- the first determining module is configured to, for each marked eyelid curve, integrate the marked eyelid curve based on the principle of mathematical integration, and determine the first curve length of the marked upper eyelid curve and the marked lower eyelid curve in the marked eyelid curve The second curve length; respectively determine a plurality of eyelid points to be used from the marked upper eyelid curve and the marked lower eyelid curve;
- the second determining module is configured to, for each marked eyelid curve, based on the first curve length of the marked upper eyelid curve, the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used, and the preset number of equal division points, respectively determine, from the marked upper eyelid curve and the marked lower eyelid curve, the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points;
- the third determining module is configured to determine the marked eye corner point, the equally divided upper eyelid point, and the equally divided lower eyelid point in each face image as the key eye points corresponding to each face image.
- an embodiment of the present invention provides a training device for an eye key point detection model, the device including:
- the second obtaining module is configured to obtain training data, where the training data includes eye images marked with the equally divided eyelid points of the upper and lower eyelids and the marked eye corner points, and calibration information corresponding to each eye image; the calibration information includes: the position information of the equally divided eyelid points and the marked eye corner points marked in the corresponding eye image.
- the equally divided eyelid points include: the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid of the eyes in the face image, determined based on the marked eyelid curve corresponding to the face image and the principle of mathematical integration;
- the marked eyelid curve includes: a marked upper eyelid curve representing the upper eyelid and a marked lower eyelid curve representing the lower eyelid, generated based on the marked eye corner points and the marked eyelid points of the upper and lower eyelids in the corresponding face image;
- the input module is configured to input the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model to be trained, to obtain an eye key point detection model for detecting the equally divided eyelid points and the eye corner points of the upper and lower eyelids of eyes in an image.
- the eye key point marking method and apparatus and the detection model training method and apparatus provided by the embodiments of the present invention can obtain face images and the marked eyelid curve corresponding to each face image, where each face image is marked with the marked eye corner points and the marked eyelid points of the upper and lower eyelids.
- each marked eyelid curve includes: a marked upper eyelid curve representing the upper eyelid and a marked lower eyelid curve representing the lower eyelid, generated based on the corresponding marked eyelid points and marked eye corner points; for each marked eyelid curve, based on the principle of mathematical integration, the marked eyelid curve is integrated to determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve; multiple eyelid points to be used are determined from the marked upper eyelid curve and the marked lower eyelid curve respectively; then, for each marked eyelid curve, based on the first curve length, the second curve length, the multiple eyelid points to be used, and the preset number of equal division points,
- the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points are respectively determined;
- the marked eye corner points, equally divided upper eyelid points and equally divided lower eyelid points in each face image are determined as the eye key points corresponding to each face image.
- the marked eyelid curve corresponding to a face image can be integrated to determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve; points are then densely selected on the marked upper eyelid curve and the marked lower eyelid curve to determine multiple eyelid points to be used from each.
- then, based on the first curve length, the eyelid points to be used on the marked upper eyelid curve, and the preset number of equal division points, the preset number of equal division points minus 1 equally divided upper eyelid points are determined from the marked upper eyelid curve; and based on the second curve length, the eyelid points to be used on the marked lower eyelid curve, and the preset number of equal division points, the preset number of equal division points minus 1 equally divided lower eyelid points are determined from the marked lower eyelid curve.
- this realizes semi-automatic marking of equally divided eyelid points with obvious semantic features on the upper and lower eyelids of the eyes in face images, thereby marking eye key points with obvious semantic features and improving marking efficiency to a certain extent.
- any product or method of the present invention does not necessarily need to achieve all the advantages described above at the same time.
- points are densely selected on the upper eyelid curve and the lower eyelid curve to determine multiple eyelid points to be used; then, based on the first curve length of the upper eyelid curve and the eyelid points to be used on it, the equally divided eyelid points are determined.
- the upper and lower eyelids are thus marked with equally divided eyelid points with obvious semantic features, which realizes marking eye key points with obvious semantic features and improves marking efficiency to a certain extent.
- the image of the area where the eyes are located is cropped from the face image to obtain an eye image marked with the eye key points.
- the eye image and its corresponding calibration information, including the position information of the eye key points in the corresponding eye image, are determined as training data of the eye key point detection model for detecting the eye key points of eyes in an image.
- because eye key points with obvious semantic features are used as training data, an eye key point detection model with high stability and detection accuracy can be trained.
- the measured deviation corresponding to each face image is determined.
- the measured deviation can characterize the difference between the measured distance between the upper and lower eyelids of the eye and the real distance between them; using the measured deviation as part of the training data of the eye key point detection model used to detect the upper and lower eyelids
- enables the trained eye key point detection model to output, for an image, the measured deviation between the measured distance between the upper and lower eyelids and the actual distance; the measured distance can then be corrected according to the measured deviation, which improves the accuracy of the measured distance between the upper and lower eyelids to a certain extent.
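The role of the measured deviation can be shown with a toy calculation: per the definition earlier in the document, it is the ratio of the actual opening and closing length to the measured one, and multiplying a measured length by the predicted deviation corrects it. The numeric values below are hypothetical.

```python
def measured_deviation(actual_length, measured_length):
    """Ratio of the actual opening/closing length to the model-measured length."""
    return actual_length / measured_length

def correct_measurement(measured_length, deviation):
    """Apply a (predicted) deviation to correct a measured opening/closing length."""
    return measured_length * deviation

# Hypothetical lengths: the 3D-model measurement overestimates the true opening.
dev = measured_deviation(actual_length=8.0, measured_length=10.0)
print(dev)                             # 0.8
print(correct_measurement(10.0, dev))  # 8.0, the corrected opening/closing length
```

During training the deviation is a known label; at inference time the model predicts it, and the predicted value plays the role of `dev` above.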
- the specified eyelid curve of the specified eyelid can be generated in real time from the points marked by the annotator and displayed, so that the annotator can check each marked point to determine whether it is an eyelid point or an eye corner point on the specified eyelid; to a certain extent, this efficiently ensures the accuracy of the eyelid points and eye corner points marked by the annotator and improves the annotator's marking efficiency.
- the initial eye key point detection model is trained to obtain an eye key point detection model used to detect the equally divided eyelid points and the eye corner points of the upper and lower eyelids of eyes in an image.
- the eye key point detection model can determine the eye key points with obvious semantic features from the image, and to a certain extent guarantee the stability and detection accuracy of the eye key point detection model.
- the key point detection model is trained to obtain the eye key point detection model used to detect the equally divided eyelid points and corner points of the upper and lower eyelids of the eyes in the image, which can shorten the training time to a certain extent.
- the left eye image and the right eye image determined from the image can first be subjected to correction, mirroring, and splicing processing, so that the eye key point detection model detects the eye key points of both human eyes in the processed image at the same time;
- that is, through a single detection pass, the eye key points of the upper and lower eyelids of both human eyes in the processed image can be detected, which simplifies the detection flow when using the eye key point detection model.
- FIG. 1 is a schematic flowchart of a method for marking eye key points according to an embodiment of the present invention
- 2A, 2B, 2C, and 2D are schematic diagrams of marking eyelid curves corresponding to face images
- FIG. 3 is a schematic flowchart of a method for training an eye key point detection model provided by an embodiment of the present invention
- FIG. 4 is a schematic diagram of a structure of an eye key point marking device provided by an embodiment of the present invention.
- FIG. 5 is a schematic structural diagram of a training device for an eye key point detection model provided by an embodiment of the present invention.
- the embodiment of the present invention discloses an eye key point labeling method and a training method and apparatus for an eye key point detection model, so as to mark eye key points with obvious semantic features and improve labeling efficiency to a certain extent.
- the embodiments of the present invention will be described in detail below.
- FIG. 1 is a schematic flowchart of a method for marking eye key points according to an embodiment of the present invention. The method is applied to an electronic device, which can be a device with strong computing and processing capabilities, such as a server.
- for ease of description, the electronic device that implements the eye key point marking method may be called the first electronic device. The method specifically includes the following steps:
- each marked eyelid curve includes: a marked upper eyelid curve representing the upper eyelid and a marked lower eyelid curve representing the lower eyelid, generated based on the corresponding marked eyelid points and marked eye corner points.
- the face image may include both eyes of the person's face, as shown in FIGS. 2B and 2D, or only one eye of the face, as shown in FIGS. 2A and 2C; both are possible.
- the face image when the face image includes two eyes of a person's face, the face image is marked with the marked corner points of the two eyes and the marked eyelid points of the upper and lower eyelids.
- the marked eyelid curve corresponding to the face image includes : The marked eyelid curve corresponding to the left eye, and the marked eyelid curve corresponding to the right eye.
- the labeled eyelid curve corresponding to the left eye includes: a labeled upper eyelid curve that represents the upper eyelid of the left eye and a labeled lower eyelid curve that represents the lower eyelid of the left eye, which is generated based on the labeled eyelid point and the labeled eye corner point corresponding to the left eye.
- the labeled eyelid curve corresponding to the right eye includes: a labeled upper eyelid curve that represents the upper eyelid of the right eye and a labeled lower eyelid curve that represents the lower eyelid of the right eye, which is generated based on the labeled eyelid points and the labeled eye corner points corresponding to the right eye.
- the marked eyelid curve corresponding to each face image may be generated, while the annotator is marking the upper and lower eyelids of the eyes in the face image, based on the marked eyelid points of the upper eyelid of the marked eye and a preset curve fitting algorithm, and based on the marked eyelid points of the lower eyelid of the marked eye and the preset curve fitting algorithm. In this case, when the first electronic device obtains the face image, it also obtains the marked eyelid curve corresponding to the face image.
- the step of obtaining the eyelid curve corresponding to each face image may include: for each face image, fitting the upper eyelid curve representing the upper eyelid based on the marked eye corner points, the marked eyelid points of the upper eyelid in the face image, and the preset curve fitting algorithm; and fitting the lower eyelid curve representing the lower eyelid based on the marked eye corner points, the marked eyelid points of the lower eyelid in the face image, and the preset curve fitting algorithm, to obtain the eyelid curve corresponding to the face image.
- after the first electronic device obtains the face image, the upper eyelid curve representing the upper eyelid of the eye can be fitted, as the marked upper eyelid curve, based on the marked eye corner points and the marked eyelid points of the upper eyelid contained in the face image and the preset curve fitting algorithm;
- and the lower eyelid curve representing the lower eyelid of the eye can be fitted, as the marked lower eyelid curve, based on the marked eye corner points and the marked eyelid points of the lower eyelid in the face image and the preset curve fitting algorithm.
- the preset curve fitting algorithm may be a cubic spline interpolation algorithm. It is understandable that each eye includes two marked eye corner points, which are the intersections of the marked upper eyelid curve representing the upper eyelid of the eye and the marked lower eyelid curve representing the lower eyelid of the eye.
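As a simplified stand-in for the cubic spline interpolation named above, the sketch below fits a parabola through two eye corner points and one upper-eyelid point; a real implementation would fit a cubic spline through all the marked points. All coordinates and names here are hypothetical.

```python
def fit_quadratic(p0, p1, p2):
    """Return f(x) for the parabola through three points (a simplified eyelid-curve fit)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    def f(x):
        # Lagrange form of the interpolating parabola.
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return f

# Hypothetical marked points: two eye corners and one upper-eyelid point between them.
left_corner, eyelid_point, right_corner = (0.0, 5.0), (5.0, 2.0), (10.0, 5.0)
upper = fit_quadratic(left_corner, eyelid_point, right_corner)
print(upper(0.0), upper(5.0), upper(10.0))  # passes through all three marked points
```

With more marked eyelid points per lid, a cubic spline gives a smoother curve with continuous curvature, which is why the patent names it as the preset algorithm.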
- S102 For each marked eyelid curve, based on the principle of mathematical integration, integrate the marked eyelid curve to determine the first curve length marked with the upper eyelid curve and the second curve length marked with the lower eyelid curve in the marked eyelid curve; respectively; From marking the upper eyelid curve and marking the lower eyelid curve, multiple eyelid points to be used are determined.
- the first electronic device may, based on the principle of mathematical integration, integrate the marked upper eyelid curve in the marked eyelid curve to determine its curve length as the first curve length, and integrate the marked lower eyelid curve to determine its curve length as the second curve length.
- the first electronic device densely samples points on the marked upper eyelid curve in the marked eyelid curve to determine multiple eyelid points to be used, for example by taking a preset number of points; likewise, for each marked eyelid curve, it densely samples points on the marked lower eyelid curve to determine a preset number of eyelid points to be used.
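- the integration and dense sampling can be sketched together: sampling the fitted curve y = curve(x) at many x values and accumulating segment lengths approximates the arc-length integral (the sample count and the straight-line demo curve below are illustrative assumptions):

```python
import numpy as np

def curve_length_and_samples(curve, x_start, x_end, n=1000):
    """Densely sample an eyelid curve y = curve(x) and accumulate
    segment lengths, approximating the arc-length integral."""
    xs = np.linspace(x_start, x_end, n)
    ys = np.asarray(curve(xs))
    seg = np.hypot(np.diff(xs), np.diff(ys))          # per-segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])     # cumulative length
    return cum[-1], np.column_stack([xs, ys]), cum

# demo with a straight "curve" y = x, whose length is known exactly
length, samples, cum_len = curve_length_and_samples(lambda x: x, 0.0, 2.0)
```

The returned cumulative lengths give, for each densely sampled point, its distance along the curve from the starting corner, which is what the equal-division step below needs.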
- S103 For each marked eyelid curve, based on the first curve length of its marked upper eyelid curve, the second curve length of its marked lower eyelid curve, the multiple eyelid points to be used, and a preset number of equal divisions, determine, on the marked upper eyelid curve, the preset number of equal divisions minus 1 equally divided upper eyelid points, and, on the marked lower eyelid curve, the preset number of equal divisions minus 1 equally divided lower eyelid points.
- based on the first curve length and the preset number of equal divisions, the distance between every two adjacent equally divided upper eyelid points to be marked is determined, where that distance equals the ratio of the first curve length to the preset number of equal divisions.
- the first electronic device may calculate, starting from a certain eye corner point, the distance along the curve between that corner point and each eyelid point to be used. When the distance between the corner point and a certain eyelid point to be used is an integer multiple of the distance between two adjacent equally divided upper eyelid points, that eyelid point to be used can be determined as an equally divided upper eyelid point; the integer multiple ranges from 1 to the preset number of equal divisions minus 1. Concretely, when the distance from the corner point to an eyelid point to be used equals the spacing between adjacent equally divided upper eyelid points, that point is determined as the first equally divided upper eyelid point; then, taking the first equally divided upper eyelid point as the starting position, the subsequent eyelid points to be used are traversed in order, and so on, until the preset number of equal divisions minus 1 equally divided upper eyelid points have been determined.
- the second curve length and the preset number of equal divisions are used to determine the distance between every two adjacent equally divided lower eyelid points to be marked, where that distance equals the ratio of the second curve length to the preset number of equal divisions.
- the first electronic device may likewise calculate, starting from a certain eye corner point, the distance along the curve between that corner point and each eyelid point to be used. When that distance is an integer multiple of the distance between two adjacent equally divided lower eyelid points, the eyelid point to be used can be determined as an equally divided lower eyelid point; the integer multiple ranges from 1 to the preset number of equal divisions minus 1. Concretely, when the distance from the corner point to an eyelid point to be used equals the spacing between adjacent equally divided lower eyelid points, that point is determined as the first equally divided lower eyelid point; then, taking it as the starting position, the subsequent eyelid points to be used are traversed, the point whose distance from the first equally divided lower eyelid point again equals that spacing is determined as the second equally divided lower eyelid point, and so on, until the preset number of equal divisions minus 1 equally divided lower eyelid points have been determined.
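- in practice, with densely sampled points and their cumulative arc lengths, the traversal above amounts to picking the sample nearest each target distance k·L/N from the corner (the flat demo "eyelid" below is a hypothetical example):

```python
import numpy as np

def equally_divided_points(samples, cum_len, n_div):
    """From densely sampled curve points, keep the n_div - 1 points whose
    arc-length distance from the starting corner is closest to
    k * (total_length / n_div), for k = 1 .. n_div - 1."""
    total = cum_len[-1]
    picked = []
    for k in range(1, n_div):
        target = k * total / n_div
        picked.append(samples[int(np.argmin(np.abs(cum_len - target)))])
    return np.array(picked)

# hypothetical flat "eyelid": 1001 points along y = 0, x from 0 to 10
xs = np.linspace(0.0, 10.0, 1001)
samples = np.column_stack([xs, np.zeros_like(xs)])
cum_len = xs - xs[0]                 # arc length equals x on a flat curve
points = equally_divided_points(samples, cum_len, 5)   # 4 division points
```

With 5 equal divisions of a length-10 curve, the 4 picked points sit at arc lengths 2, 4, 6, and 8 from the corner.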
- the above steps are performed both for the marked eyelid curve corresponding to the left eye and for the marked eyelid curve corresponding to the right eye.
- S104 Determine the marked eye corner points, equally divided upper eyelid points, and equally divided lower eyelid points in each face image as the eye key points corresponding to that face image.
- that is, the equally divided eyelid points of the upper and lower eyelids of the eyes contained in a face image are determined as the eye key points corresponding to that face image; these include the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid.
- the first electronic device may mark the equally divided eyelid points in the upper and lower eyelids of the eyes in the face image based on their determined position information, and save the face image marked with the equally divided eyelid points and the eye corner points.
- the labeled eyelid curve corresponding to each face image can also be saved. It is also possible to save, in the form of text, the position information of the equally divided eyelid points of the upper and lower eyelids and the marked eye corner points in the face image.
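- for instance, the per-image position information could be saved as structured text; the file name, field names, and coordinates below are purely illustrative assumptions (the disclosure only says "in the form of text"):

```python
import json

# hypothetical annotation record for one face image
record = {
    "image": "face_0001.png",
    "eye_corners": [[10, 50], [60, 52]],
    "upper_eyelid_points": [[22, 42], [35, 40], [48, 43]],
    "lower_eyelid_points": [[25, 58], [40, 60]],
}
text = json.dumps(record, indent=2)   # textual form of the annotation
# with open("face_0001.json", "w") as f: f.write(text)
```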
- in summary, the marked eyelid curve corresponding to a face image can be integrated to determine the first curve length of its marked upper eyelid curve and the second curve length of its marked lower eyelid curve; points are densely sampled on both curves to determine multiple eyelid points to be used; then, based on the first curve length, the eyelid points to be used on the marked upper eyelid curve, and the preset number of equal divisions, the preset number of equal divisions minus 1 equally divided upper eyelid points are determined from the marked upper eyelid curve, and, based on the second curve length, the eyelid points to be used on the marked lower eyelid curve, and the preset number of equal divisions, the preset number of equal divisions minus 1 equally divided lower eyelid points are determined from the marked lower eyelid curve. This semi-automatically marks, on the upper and lower eyelids of the eyes contained in the face image, equally divided eyelid points with clear semantic meaning, which realizes marking eye key points with clear semantic meaning and improves labeling efficiency to a certain extent.
- the method may further include:
- the eye images and their corresponding calibration information are determined as the training data of the eye key point detection model used to detect the eye key points of the eyes in an image, where the calibration information includes the position information of the eye key points in the corresponding eye image.
- after the first electronic device determines the eye key points in each face image, it can determine the area where the eye is located based on the positions of the eye key points in the face image, then crop the image of that area from the face image to obtain the eye image marked with the eye key points.
- the area where the eye is located may be the smallest rectangular area containing the eyes, or that smallest rectangular area extended outward by a preset number of pixels; both are acceptable. Extending outward by a preset number of pixels means expanding the smallest rectangular area containing the eyes by the preset number of pixels in each of the upward, downward, left, and right directions.
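- this cropping step can be sketched as follows (the image size, keypoint coordinates, and padding value are hypothetical; the disclosure only specifies a preset number of pixels in each direction):

```python
import numpy as np

def crop_eye_region(image, keypoints, pad=10):
    """Crop the smallest rectangle containing the eye keypoints,
    extended by `pad` pixels up, down, left, and right, and clipped
    to the image bounds."""
    pts = np.asarray(keypoints, dtype=int)
    h, w = image.shape[:2]
    x0 = max(pts[:, 0].min() - pad, 0)
    x1 = min(pts[:, 0].max() + pad, w)
    y0 = max(pts[:, 1].min() - pad, 0)
    y1 = min(pts[:, 1].max() + pad, h)
    return image[y0:y1, x0:x1]

face = np.zeros((100, 200), dtype=np.uint8)        # dummy face image
eye = crop_eye_region(face, [(50, 40), (80, 60)], pad=10)
```

Clipping to the image bounds handles eyes near the image border, where the padded rectangle would otherwise fall outside the face image.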
- Each face image has a corresponding relationship with the eye image cut out.
- after the eye images are determined, for each eye image, the position information of the eye key points marked in that eye image is determined as its calibration information, and the eye images marked with eye key points together with their corresponding calibration information are determined as the training data of the eye key point detection model used to detect eye key points in an image.
- the step of determining the eye images and their corresponding calibration information as the training data of the eye key point detection model may include: obtaining the real eye opening and closing length corresponding to each face image, and obtaining the measured eye opening and closing length corresponding to each face image, where the measured eye opening and closing length is a length determined based on the upper and lower eyelids of the target three-dimensional face model corresponding to the face image, and the target three-dimensional face model is a face model determined based on the facial feature points in the corresponding face image and a preset three-dimensional face model.
- the target three-dimensional face model includes the upper and lower eyelids of the eye, constructed based on the marked eye corner points, equally divided upper eyelid points, and equally divided lower eyelid points in the face image;
- the eye images and their corresponding calibration information are determined as the training data of the eye key point detection model used to detect the upper and lower eyelids of the eyes in an image, where the calibration information includes the position information of the eye key points marked in the corresponding eye image and the measured deviation corresponding to the face image corresponding to that eye image.
- when the face image is collected by a multi-image acquisition device system, a real three-dimensional face model of the person's face, including the upper and lower eyelids of the person's eyes, can be constructed based on the images of that face collected by the multiple image acquisition devices at the same moment. The distance between the upper and lower eyelids of the person's eyes in this real model can then be determined as the real eye opening and closing length corresponding to each image containing that person's face.
- the above real three-dimensional face model can be constructed with any current technique that reconstructs a three-dimensional face model of a person from multiple images containing that person's face.
- the process of determining the distance between the upper and lower eyelids of the person's eyes based on the center eyelid points of the upper and lower eyelids included in the real three-dimensional face model may be as follows. The first eye corner constraint and the second eye corner constraint fix the curve at the two eye corners; for example, the first parameter value can be 0 and the second value 1, with the second eye corner constraint expressed by formula (2). Based on the curve equation, the position information of each equally divided eyelid point, and the pose and intrinsic information of each image acquisition device, the reprojection error constraint corresponding to the equally divided eyelid points is constructed; this constraint is built from the distance between the projection, into the face image, of each equally divided eyelid space point in the real three-dimensional face model and that point's labeled position in the face image. Based on the order of the equally divided eyelid points in the face image, the order constraint can be constructed as formula (3). From the first eye corner constraint, the second eye corner constraint, the reprojection error constraint, and the order constraint, the eyelid space curve equation used to characterize the upper and lower eyelids of the eye is constructed; that is, the four constraints above jointly determine the curve, where:
- (x₀, y₀, z₀) represents the first three-dimensional position information; (x₁, y₁, z₁) represents the second three-dimensional position information; a₁, a₂, a₃, b₁, b₂, b₃, c₁, c₂, and c₃ are the coefficients to be solved; and t is the independent variable.
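- one plausible reading of the symbols above, sketched here as a hedged reconstruction since the disclosure does not reproduce the polynomial form of the curve equation, is a parametric cubic space curve anchored at the two eye corners:

```latex
% Assumed parametric form of the eyelid space curve, t \in [0, 1]
\begin{aligned}
x(t) &= a_1 t^3 + b_1 t^2 + c_1 t + x_0,\\
y(t) &= a_2 t^3 + b_2 t^2 + c_2 t + y_0,\\
z(t) &= a_3 t^3 + b_3 t^2 + c_3 t + z_0,
\end{aligned}
\qquad
\bigl(x(0), y(0), z(0)\bigr) = (x_0, y_0, z_0),
\quad
\bigl(x(1), y(1), z(1)\bigr) = (x_1, y_1, z_1).
```

Under this reading, evaluating the curve at t = 0 gives the first eye corner constraint and evaluating it at t = 1 gives the second, consistent with the parameter values 0 and 1 mentioned above.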
- Formula (3): 0 < t₁ < t₂ < … < tᵢ < … < t_M < 1. When the eyelid space curve equation characterizing the upper eyelid is determined, tᵢ represents the independent-variable value corresponding to the i-th equally divided eyelid point of the upper eyelid, and M represents the number of equally divided eyelid points of the upper eyelid; when the eyelid space curve equation characterizing the lower eyelid is determined, tᵢ represents the independent-variable value corresponding to the i-th equally divided eyelid point of the lower eyelid, and M represents the number of equally divided eyelid points of the lower eyelid.
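- the reprojection error term for one equally divided eyelid point can be sketched with a standard pinhole camera model (the intrinsics, pose, and point values below are toy assumptions; the disclosure does not specify the camera model):

```python
import numpy as np

def reprojection_error(point_3d, point_2d, K, R, t):
    """Project a 3D eyelid point through a pinhole camera with
    intrinsics K and pose (R, t), then return the distance to the
    labeled 2D position in the face image."""
    p_cam = R @ np.asarray(point_3d, dtype=float) + t
    uvw = K @ p_cam
    uv = uvw[:2] / uvw[2]                 # perspective division
    return float(np.linalg.norm(uv - np.asarray(point_2d, dtype=float)))

K = np.array([[100.0,   0.0, 50.0],
              [  0.0, 100.0, 50.0],
              [  0.0,   0.0,  1.0]])     # toy intrinsics
R, t = np.eye(3), np.zeros(3)            # camera at the origin
err = reprojection_error([0.0, 0.0, 1.0], [50.0, 50.0], K, R, t)
```

Summing such errors over all equally divided eyelid points and all cameras yields one term of the joint objective the four constraints describe.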
- the first electronic device can obtain the actual measured eye opening and closing length corresponding to each face image.
- the measured eye opening and closing length can be the length determined based on the upper and lower eyelids of the eyes in the target three-dimensional face model corresponding to the face image, and the target three-dimensional face model can be a face model determined, using 3DMM (3D Morphable Models) technology, from the facial feature points in the corresponding face image and a preset three-dimensional face model. The target three-dimensional face model includes the upper and lower eyelids of the eye, constructed based on the marked eye corner points, equally divided upper eyelid points, and equally divided lower eyelid points in the face image.
- the first electronic device calculates the ratio of the real eye opening and closing length corresponding to the face image to the measured eye opening and closing length as the measured deviation corresponding to the face image. The real and measured eye opening and closing lengths corresponding to the left eye and those corresponding to the right eye in the face image can be obtained separately; the measured deviation corresponding to the left eye is determined from the left eye's real and measured lengths, the measured deviation corresponding to the right eye from the right eye's, and together they are taken as the measured deviation corresponding to the face image.
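- the deviation computation itself is a simple ratio; the per-eye lengths below are hypothetical, and keeping the two per-eye deviations as a pair is an assumption, since the disclosure does not say how they are combined into the per-image value:

```python
def measured_deviation(real_length, measured_length):
    """Ratio of the real opening length (from the multi-camera 3D
    reconstruction) to the length measured on the fitted 3DMM model."""
    return real_length / measured_length

# hypothetical per-eye opening lengths
left_dev = measured_deviation(9.0, 10.0)
right_dev = measured_deviation(9.5, 10.0)
face_deviation = (left_dev, right_dev)   # per-image deviation as a pair
```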
- the first electronic device determines the eye images and their corresponding calibration information as the training data of the eye key point detection model for detecting the upper and lower eyelids of the eyes in an image, where the calibration information includes the position information of the eye key points marked in the corresponding eye image and the measured deviation corresponding to the face image corresponding to that eye image.
- the eye key point detection model trained on such eye images and calibration information can not only detect the equally divided eyelid points in the upper and lower eyelids of the eyes in an image, but also predict the measured deviation corresponding to the image; the measured eye opening and closing length corresponding to the image can then be corrected with this deviation to obtain a more accurate eye opening and closing length, which in turn improves the accuracy of other tasks that rely on it.
- the method may further include:
- a first face image is obtained, which contains the eyes of a person's face and is one of the aforementioned face images;
- if it is detected that the labeler has triggered the labeling instruction for a specified eyelid in the first face image at least twice, then, based on the position information of the label points carried by those labeling instructions and the preset curve fitting algorithm, the specified eyelid curve representing the specified eyelid is determined, where the specified eyelid is the upper eyelid or the lower eyelid of an eye in the first face image; the specified eyelid curve is displayed in the first face image, so that the labeler can check whether each label point is an eyelid point or an eye corner point on the specified eyelid.
- a process of labeling the upper and lower eyelids of the eyes of the person's face in each face image may also be included; this labeling process is one in which the labeler marks the eye corner points and the eyelid points of the upper and lower eyelids of the eyes in the face image.
- after detecting the face image labeling start instruction triggered by the labeler, the first electronic device can obtain the first face image, which is any one of the aforementioned face images and contains the eyes of the person's face.
- after the first electronic device displays the first face image, the labeler can mark the upper and lower eyelids of the eyes in it, and the first electronic device receives the labeling instructions triggered by the labeler for those eyelids, each of which carries the position information of the marked label point.
- based on the position information of the label point carried in each labeling instruction, the first electronic device displays a preset labeling icon at the corresponding position in the first face image. It can count, in real time, the number of labeling instructions the labeler has triggered for a specified eyelid; once the labeler has triggered at least two labeling instructions for the specified eyelid, that is, once the specified eyelid includes at least two label points, the first electronic device determines, based on the position information of those label points and the preset curve fitting algorithm, the specified eyelid curve representing the specified eyelid, and displays it in the first face image. The labeler can then observe whether the specified eyelid curve coincides with the specified eyelid in the first face image, that is, check whether each label point is an eyelid point or an eye corner point on the specified eyelid.
- the labeler can trigger a label point position modification instruction, which the first electronic device obtains; it carries the current position information of the label point to be modified and the target position information to modify it to. The first electronic device moves the label point from the position corresponding to the current position information to the position corresponding to the target position information, that is, it displays the preset labeling icon at the target position and deletes the one displayed at the current position.
- the first electronic device then determines a new specified eyelid curve characterizing the specified eyelid based on the new position information of the modified label point, the position information of the other label points, and the preset curve fitting algorithm, and displays it in the first face image, so that the labeler can continue checking whether each label point is an eyelid point or an eye corner point on the specified eyelid.
- at the moment the save instruction is triggered, the first electronic device saves the first face image together with the label points it contains and the position information of each label point; these label points may include the two eye corner points, the upper eyelid points of the upper eyelid, and the lower eyelid points of the lower eyelid.
- the numbers of upper eyelid points and lower eyelid points can be the same or different; for example, there can be 3 upper eyelid points and 4 lower eyelid points, and so on.
- the preset labeling icon can be a solid or hollow circle, or a solid or hollow image of another shape; all are acceptable.
- the labeling process can be executed on the first electronic device or on another electronic device different from it; both are possible. If it is performed on another electronic device, then after the labeler finishes labeling a face image, the face image with the eye corner points and the eyelid points of the upper and lower eyelids marked can be uploaded to the cloud, so that when the first electronic device performs the marking of the eye key points, it can obtain such face images from the cloud.
- the embodiments of the present invention can, to a certain extent, ensure the accuracy of the eyelid points and eye corner points marked by the labeler and improve the labeler's labeling efficiency.
- FIG. 2A, FIG. 2B, FIG. 2C, and FIG. 2D are schematic diagrams of eyelid curves corresponding to face images.
- the face image shown in FIG. 2A includes one eye, and the eye can be completely detected, and the marked eyelid curve corresponding to the face image may include the marked upper eyelid curve and the marked lower eyelid curve of the eye.
- the occluded position in FIG. 2A, FIG. 2B, FIG. 2C, and FIG. 2D is the position where the human face is located.
- the face image shown in FIG. 2B includes two eyes, and both eyes can be completely detected.
- the labeled eyelid curve corresponding to the face image may include the labeled upper eyelid curve and the labeled lower eyelid curve of the left eye, and the labeled upper eyelid curve and the labeled lower eyelid curve of the right eye.
- the right image in Figure 2B is a partial enlarged view of the position of the eye in the left image.
- the face image shown in Figure 2C includes an eye, and the inner corner of the eye is occluded.
- for the eyelid points and eye corner points of the upper and lower eyelids of partially occluded eyes in this type of face image, in one case, the labeler can directly mark, based on experience, the eyelid points and eye corner points at the occluded position of the occluded eye. In another case, when the face image is collected by a multi-image acquisition device system, a three-dimensional face model of the person corresponding to the face image can be reconstructed based on the other face images corresponding to it; the eye space points corresponding to the occluded position of the eye are then determined from the three-dimensional face model and reprojected into the face image to determine the eyelid points and/or eye corner points at the occluded position. The partially occluded eyes mentioned above may refer to eyes whose occluded area does not exceed a preset area.
- the right image in Figure 2C is a partial enlarged view of the position of the eye in the left image.
- the face image shown in FIG. 2D includes two eyes, of which one can be completely detected and one is partially occluded. In this case, the labeler can directly mark, based on experience, the eyelid points and eye corner points at the occluded position of the occluded eye; or, when the face image is collected by a multi-image acquisition device system, a three-dimensional face model of the corresponding person can be reconstructed from the other face images corresponding to the face image, the eye space points corresponding to the occluded position determined from that model, and those points reprojected into the face image to determine the eyelid points and/or eye corner points at the occluded position.
- the right image in Figure 2D is a partial enlarged view of the position of the left eye in the left image.
- the aforementioned face image and the other face images corresponding to it are all images collected by the multi-image acquisition device system at the same moment.
- the first electronic device can be set to require the labeler to label the label points of the upper and lower eyelids of the eyes separately; for example, the labeler can be instructed to label the label points of the upper eyelid first, during which the labeler cannot label the lower eyelid. The intersections of the upper and lower eyelids of an eye are the inner and outer corners of that eye, and the first electronic device ensures that the eyelid curves of the upper and lower eyelids each pass through the corresponding inner and outer eye corners.
- after the first electronic device detects that the label points of the upper eyelid have been marked, the marked eye corner points may be taken as the label points with the smallest and the largest horizontal-axis coordinates among the label points marked on the upper eyelid.
- FIG. 3 is a schematic flowchart of a method for training an eye key point detection model provided by an embodiment of the present invention.
- This method is applied to an electronic device with strong computing and processing capabilities, such as a server. For clarity of presentation, in the subsequent description the electronic device that implements the training method of the eye key point detection model is called the second electronic device.
- the method specifically includes the following steps:
- the training data is obtained; it includes eye images marked with the equally divided eyelid points of the upper and lower eyelids of the eyes and with the eye corner points, and the calibration information corresponding to each eye image, which includes the position information of the equally divided eyelid points and the marked eye corner points in that eye image. The equally divided eyelid points are determined, based on the marked eyelid curve corresponding to the face image and the principle of mathematical integration, as the points equally dividing the upper and lower eyelids of the eye in the face image; the marked eyelid curves are generated, based on the marked eye corner points and the marked eyelid points of the upper and lower eyelids in the corresponding face image, as the marked upper eyelid curve representing the upper eyelid and the marked lower eyelid curve representing the lower eyelid. Each eye image is an image of the area where the eye is located, cut out from the corresponding face image.
- the second electronic device may first obtain training data, where the training data may include multiple eye images and calibration information corresponding to each eye image.
- each eye image is marked with the equally divided eyelid points of the upper and lower eyelids of the eye and with the eye corner points; for the specific process of obtaining the equally divided eyelid points and the marked eye corner points, refer to the corresponding process in the above marking of eye key points, which is not repeated here.
- S302 Input the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the corresponding calibration information into the initial eye key point detection model to be trained, to obtain an eye key point detection model that detects the equally divided eyelid points of the upper and lower eyelids and the eye corner points of the eyes in an image.
- the initial eye key point detection model may be: a neural network model based on deep learning.
- the second electronic device inputs each eye image and the position information of the eye key points included in its calibration information into the initial eye key point detection model, where the eye key points include the equally divided eyelid points of the upper and lower eyelids and the eye corner points of the corresponding eye.
- for each eye image, the second electronic device uses the initial eye key point detection model to extract image features and, based on them, detects the eye key points in the eye image and their position information; the detected position information is then matched against the position information of the eye key points in the corresponding calibration information.
- if the matching is successful, it is determined that the initial eye key point detection model has converged, and the trained eye key point detection model is obtained; if the matching is unsuccessful, it is determined that the model has not converged, its parameters are adjusted, and the step of inputting the eye images and the position information of the equally divided eyelid points and marked eye corner points in the corresponding calibration information into the model is executed again, until the matching succeeds, at which point the initial eye key point detection model is determined to have converged and the trained eye key point detection model is obtained.
- there is a correspondence between each eye image and the detected position information of its eye key points, and between each eye image and its calibration information; correspondingly, the detected position information of the eye key points corresponds to the calibration information.
- the above process of matching the detected position information of the eye key points with the position information of the eye key points in the corresponding calibration information may be: using a preset loss function to calculate the loss value between the position information of each detected eye key point and the position information of the corresponding eye key point in the calibration information, and determining whether the loss value is less than a preset loss threshold; if the loss value is less than the preset loss threshold, and the number of times the loss value has been judged to be less than the threshold exceeds a predetermined count, or the ratio of such judgments to the total number of judgments exceeds a preset ratio threshold, the match is determined to be successful, the initial eye key point detection model is determined to have converged, and the trained eye key point detection model is obtained; if the loss value is not less than the preset loss threshold, the match is determined to be unsuccessful.
- the above process is only an example of determining the convergence of the initial eye key point detection model.
- the embodiment of the present invention may use any determination method that can characterize model convergence to determine whether the initial eye key point detection model has converged, and thereby obtain the trained eye key point detection model.
- the aforementioned preset loss function may be, for example, the smooth L1 loss (smoothed 1-norm loss), the wing loss, or the KL loss (that is, the KL divergence loss).
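Two of the named loss functions can be written directly in numpy; this sketch follows the standard definitions of smooth L1 and wing loss — the exact variants and hyperparameters used by the model are not specified in the text, so `beta`, `w`, and `eps` here are assumed defaults.

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Smooth L1 (smoothed 1-norm) loss: quadratic near zero, linear for large errors."""
    a = np.abs(x)
    return np.where(a < beta, 0.5 * a ** 2 / beta, a - 0.5 * beta)

def wing_loss(x, w=10.0, eps=2.0):
    """Wing loss: logarithmic near zero, linear for |x| >= w (Feng et al.)."""
    a = np.abs(x)
    c = w - w * np.log(1.0 + w / eps)   # constant that joins the two pieces at |x| = w
    return np.where(a < w, w * np.log(1.0 + a / eps), a - c)
```

Both losses grow gently near zero error, which keeps small landmark offsets from being swamped by outliers during key-point regression.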
- the above process of adjusting the parameters of the initial eye key point detection model makes the "gap" between the position information of the eye key points detected by the model during training and the position information of the eye key points in the corresponding calibration information smaller and smaller; the adjustment can use SGD (stochastic gradient descent), the SGDR (stochastic gradient descent with warm restarts) method, and other optimization strategies.
- the batch size during the training process can be 256, and the initial learning rate can be 0.04.
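As an illustration of the SGDR strategy mentioned above, here is a minimal cosine-annealing-with-warm-restarts learning-rate schedule, seeded with the stated initial learning rate of 0.04; the cycle length and the growth factor `t_mult` are assumed values, not figures from the text.

```python
import math

def sgdr_lr(step, base_lr=0.04, cycle_len=1000, t_mult=2, min_lr=0.0):
    """Cosine-annealed learning rate with warm restarts (the SGDR strategy).
    base_lr mirrors the initial learning rate of 0.04 given above; cycle_len
    and t_mult are assumed hyperparameters."""
    t, length = step, cycle_len
    while t >= length:          # locate the position inside the current restart cycle
        t -= length
        length *= t_mult        # each cycle is t_mult times longer than the last
    frac = t / length
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * frac))
```

The rate decays along a half cosine within each cycle and jumps back to `base_lr` at every restart, which helps the optimizer escape sharp minima late in training.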
- the more eye images the obtained training data includes, the higher the stability of the trained eye key point detection model, and the higher the accuracy of the detection results obtained based on the eye key point detection model.
- the eye key point detection model is trained to obtain the eye key point detection model used to detect the equally divided eyelid points and eye corner points in the upper and lower eyelids of the eyes in the image.
- the eye key point detection model can determine the eye key points with obvious semantic features from the image, and to a certain extent guarantee the stability and detection accuracy of the eye key point detection model.
- the method may further include:
- the correction processing is: processing the eye image so that the ordinates in the position information of the marked eye corner points are all the same;
- the S302 may include: inputting each corrected image and the calibration information corresponding to each corrected image (including the position information of the equally divided eyelid points and the marked eye corner points) into the initial eye key point detection model, to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points in the upper and lower eyelids of the eyes in an image.
- the person’s face in the face image may be tilted.
- before the initial eye key point detection model is trained with the eye image and its calibration information, the eye image can first be corrected to obtain a corrected image, that is, an image in which the ordinates in the position information of the marked eye corner points are all the same; the position information of the eye key points in each corrected image is then re-determined, and the calibration information corresponding to each corrected image is updated based on those eye key points, that is, the position information of the equally divided eyelid points and the marked eye corner points.
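The correction (making the ordinates of the two marked eye corner points equal) amounts to rotating the key points about the line joining the corners. A minimal numpy sketch, where `points` and the corner coordinates are hypothetical inputs:

```python
import numpy as np

def correct_points(points, left_corner, right_corner):
    """Rotate key points about the corners' midpoint so that both eye-corner
    ordinates become equal (the correction processing described above)."""
    lc = np.asarray(left_corner, dtype=float)
    rc = np.asarray(right_corner, dtype=float)
    angle = np.arctan2(rc[1] - lc[1], rc[0] - lc[0])   # tilt of the corner line
    c, s = np.cos(-angle), np.sin(-angle)              # rotate by the opposite angle
    rot = np.array([[c, -s], [s, c]])
    center = (lc + rc) / 2.0
    return (np.asarray(points, dtype=float) - center) @ rot.T + center
```

After the rotation the inter-corner line is horizontal, so all subsequent position information can be re-determined in the corrected coordinate frame.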
- the eye image includes a left eye image and a right eye image corresponding to the left eye image;
- the method may further include: mirroring the left eye image, or the right eye image corresponding to the left eye image, to obtain a mirrored image;
- the mirrored image and the unmirrored image are spliced to obtain a spliced image: if the left-eye image is mirrored, the unmirrored image is the right-eye image corresponding to the left-eye image; if the right-eye image corresponding to the left-eye image is mirrored, the unmirrored image is the left-eye image;
- the step of inputting each spliced image and the calibration information corresponding to each spliced image (including the position information of the equally divided eyelid points and the marked eye corner points) into the initial eye key point detection model, to train a model for detecting the equally divided eyelid points and eye corner points in the upper and lower eyelids of the eyes in an image, may include:
- the eye image includes: an image containing the left eye of the target person, which may be called a left eye image; and an image containing the right eye of the target person, which may be called a right eye image corresponding to the left eye image.
- the left-eye image or the right-eye image corresponding to the left-eye image may be mirrored to obtain a mirrored image, and then the mirrored image and the unmirrored image are spliced to obtain the spliced image corresponding to the eye image.
- Each stitched image and its corresponding calibration information are input into the initial eye key point detection model to train the initial eye key point detection model.
- mirroring the left-eye image or the right-eye image corresponding to the left-eye image makes the mirrored left-eye image resemble a right-eye image, or the mirrored right-eye image resemble a left-eye image, which to a certain extent shortens the training time.
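A minimal numpy sketch of the mirror-and-stitch step for spatial (left-right) splicing; grayscale H x W arrays and the `mirror` parameter are assumptions for illustration only:

```python
import numpy as np

def mirror_and_stitch(left_img, right_img, mirror="left"):
    """Mirror one eye image horizontally, then splice left-and-right along the
    width (spatial-dimension splicing). Grayscale H x W arrays are assumed."""
    if mirror == "left":
        mirrored, plain = left_img[:, ::-1], right_img
    else:
        mirrored, plain = right_img[:, ::-1], left_img
    return np.concatenate([mirrored, plain], axis=1)
```

Channel-dimension splicing would instead stack the two arrays along a new axis (e.g. `np.stack([mirrored, plain], axis=-1)`), matching the "superimposing" variant described below.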
- the left eye image and the right eye image determined from the image can be processed first, that is, subjected to the correction, mirroring, and stitching processing, so that the eye key point detection model detects the eye key points of both human eyes in the processed image at the same time; that is, through a single detection pass, the eye key points in the upper and lower eyelids of both human eyes in the processed image can be detected, which simplifies the process of detecting eye key points using the eye key point detection model.
- the above process of splicing the mirrored image and the unmirrored image to obtain the spliced image may be: splicing them in the spatial dimension or the channel dimension, where splicing in the spatial dimension can be: splicing the mirrored image and the unmirrored image left-and-right or up-and-down.
- left-and-right splicing can be: the right side of the mirrored image is spliced to the left side of the unmirrored image, or the left side of the mirrored image is spliced to the right side of the unmirrored image.
- up-and-down splicing can be: the upper side of the mirrored image is spliced to the lower side of the unmirrored image, or the lower side of the mirrored image is spliced to the upper side of the unmirrored image.
- splicing in the channel dimension may be: stacking the mirrored image and the unmirrored image front-to-back, that is, superimposing and splicing the mirrored image and the unmirrored image.
- the ordinate values of the eye corner points in the original image corresponding to the mirrored image, and of the eye corner points in the unmirrored image, can be adjusted to the same value during the previous normalization process.
- the original image corresponding to the mirrored image is: the image on which mirror processing is performed to obtain the mirrored image.
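Mapping a key point detected in the mirrored image back to its original image reduces to reflecting the abscissa about the image width. A small sketch; the `width - 1` convention assumes zero-based pixel coordinates:

```python
import numpy as np

def unmirror_points(points, width):
    """Map key points detected in a horizontally mirrored image back to the
    original image: a flip of a width-W image sends column x to W - 1 - x
    (zero-based pixel coordinates assumed); ordinates are unchanged."""
    pts = np.asarray(points, dtype=float).copy()
    pts[:, 0] = width - 1 - pts[:, 0]
    return pts
```

Applying the mapping twice is the identity, which is a quick sanity check that the reflection is self-inverse.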
- the calibration information corresponding to each eye image may further include: a measured deviation corresponding to the eye image, where the measured deviation is: the deviation between the actual eye opening and closing length corresponding to the eye image and the measured eye opening and closing length;
- the actually measured eye opening and closing length is: the length determined based on the upper and lower eyelids of the eyes in the target three-dimensional face model corresponding to the face image;
- the target three-dimensional face model is: the three-dimensional face model constructed based on the facial feature points in the corresponding face image;
- the target three-dimensional face model includes: the upper and lower eyelids of the eye constructed based on the marked eye corner points, the equally divided upper eyelid points, and the equally divided lower eyelid points in the face image;
- the S302 may include: inputting each eye image and the calibration information corresponding to it (including the position information of the equally divided eyelid points and the marked eye corner points, and the measured deviation corresponding to the eye image) into the initial eye key point detection model, and training to obtain the eye key point detection model, where the eye key point detection model is used to detect the equally divided eyelid points and eye corner points in the upper and lower eyelids of the eyes in an image, and to detect the measured deviation corresponding to the image.
- the position information of the equally divided eyelid points, the position information of the eye corner points, and the measured deviation corresponding to the eye image can be used as the calibration information corresponding to the eye image; the eye image and its calibration information are then used to train the initial eye key point detection model, that is, they are input into the initial eye key point detection model to train and obtain the eye key point detection model.
- the trained eye key point detection model can be used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, and can also detect the measured deviation corresponding to the image.
- the initial eye key point detection model includes: a feature extraction layer, a first feature classification layer, and a second feature classification layer.
- the feature extraction layer may refer to the layer used to extract image features of the image, such as the convolutional layer and the pooling layer;
- the first feature classification layer may refer to: a fully connected layer that detects the eye key points in the image and their position information based on the image features;
- the second feature classification layer may refer to: a fully connected layer used to detect the measured deviation corresponding to the image.
- the above process of inputting the eye image and the calibration information corresponding to the eye image into the initial eye key point detection model to train the eye key point detection model may be: first input each eye image to the feature extraction layer to extract the image features in the eye image; then input the image features to the first feature classification layer to determine the current position information of the eye key points in the eye image; and then match the current position information of the eye key points with the position information in the corresponding calibration information.
- the intermediate eye key point detection model can detect the position information of the eye key points in an image; it includes a trained feature extraction layer, a trained first feature classification layer, and an untrained second feature classification layer.
- each eye image is input to the feature extraction layer of the intermediate eye key point detection model to obtain the image features of the eye image; the image features are input to the first feature classification layer of the intermediate model to determine the position information of the eye key points in the eye image; based on that position information, the current measured deviation corresponding to the eye image is determined; the current measured deviation is input to the second feature classification layer of the intermediate model and matched with the measured deviation in the corresponding calibration information; if the matching is successful, it is determined that the intermediate eye key point detection model has converged, and the trained eye key point detection model is obtained, which includes the trained feature extraction layer, the trained first feature classification layer, and the now-trained second feature classification layer; if the matching fails, the parameters of the second feature classification layer of the intermediate model are adjusted, and the input step is executed again.
- the eye key point detection model obtained by this training can not only detect the eye key points in the image, but also detect the actual measured deviation corresponding to the image.
- training first obtains a model that can detect the position information of the eye key points in an image, which is used as the intermediate eye key point detection model, where the intermediate eye key point detection model includes a trained feature extraction layer, a trained first feature classification layer, and an untrained second feature classification layer.
- the feature extraction layer may refer to: the convolutional and pooling layers of the intermediate eye key point detection model used to extract features of the image; the first feature classification layer may refer to: the fully connected layer of the intermediate eye key point detection model used to detect the eye key points in the image and their position information; the above-mentioned second feature classification layer may refer to: the fully connected layer of the intermediate eye key point detection model used to detect the measured deviation corresponding to the image.
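The two-stage arrangement (a frozen feature extraction layer and first head, plus a trainable second head) can be sketched with toy numpy "layers". Everything here, from the layer sizes to the least-squares update, is illustrative rather than the patent's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoHeadModel:
    """Toy stand-in for the architecture above: a shared feature extraction
    layer, a first head for key-point positions, and a second head for the
    measured deviation; only the second head is updated in the second stage."""
    def __init__(self, d_in, d_feat, n_keypoints):
        self.backbone = rng.normal(size=(d_in, d_feat)) * 0.1   # "conv/pool" stand-in
        self.head_kp = rng.normal(size=(d_feat, n_keypoints)) * 0.1
        self.head_dev = rng.normal(size=(d_feat, 1)) * 0.1

    def features(self, x):
        return np.tanh(x @ self.backbone)

    def keypoints(self, x):
        return self.features(x) @ self.head_kp

    def deviation(self, x):
        return self.features(x) @ self.head_dev

    def train_head_dev(self, x, target, lr=0.1, steps=200):
        """Second stage: backbone and head_kp stay frozen; only head_dev moves."""
        for _ in range(steps):
            f = self.features(x)
            err = f @ self.head_dev - target
            self.head_dev -= lr * f.T @ err / len(x)   # least-squares gradient step
```

Freezing the shared layers during the second stage preserves the key-point accuracy already obtained while the deviation head is fitted on top of the same features.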
- the second electronic device obtains multiple other eye images, and obtains calibration information corresponding to each other eye image.
- the calibration information corresponding to the other eye image includes the actual measured deviation corresponding to the other eye image;
- the other eye images and the calibration information corresponding to each are input to the intermediate eye key point detection model; for each other eye image, its image features are extracted by the trained feature extraction layer, and the image features are input to the trained first feature classification layer to obtain the position information of the eye key points of that other eye image.
- the second electronic device determines the current measured deviation corresponding to each other eye image based on the position information of its eye key points, and inputs the current measured deviation into the untrained second feature classification layer of the intermediate eye key point detection model, where it is matched with the measured deviation in the corresponding calibration information; if the matching is successful, it is determined that the intermediate eye key point detection model has converged, and the trained eye key point detection model is obtained, which includes the above-mentioned trained feature extraction layer, the first feature classification layer, and the now-trained second feature classification layer; if the matching fails, the parameters of the second feature classification layer of the intermediate model are adjusted, and the step of inputting the other eye images and their corresponding calibration information into the intermediate model is executed again, until the matching succeeds, the intermediate eye key point detection model is confirmed to have converged, and the trained eye key point detection model is obtained.
- the eye key point detection model obtained by this training can not only detect the eye key points in the image, but also detect the actual measured deviation corresponding to the image.
- for the specific calculation of the current measured deviation, refer to the calculation of the measured deviation in the above-mentioned eye key point marking process, which will not be repeated here.
- the method may further include:
- the image to be detected includes the eyes of the person to be detected.
- the trained eye key point detection model can be used to detect the eye corner points of the person to be detected in the image to be detected and the equally divided eyelid points in the upper and lower eyelids, as well as the position information of the eye corner points and of the equally divided eyelid points in the image to be detected.
- the facial feature points of the face of the person to be detected can also be detected, where the facial feature points are used to characterize the face of the person to be detected.
- various parts of the face may include nose, lips, eyebrows, eyes, jaw, cheeks, ears, and forehead.
- the facial feature points of each part of the face can respectively include: feature points that characterize the position of the nose, such as the nose wings, nose bridge, and tip of the nose; feature points that characterize the position of the lips, such as the corners of the mouth and the outer edges of the lips; feature points that characterize the position of the eyebrows, such as the edges of the eyebrows; feature points that characterize the position of the eyes, such as eye corner feature points, orbital feature points, and pupil feature points; feature points that characterize the position of the mandible, such as feature points on the contour of the mandible, that is, on the contour of the chin; feature points that characterize the position of the ears, such as feature points on the contours of the ears; and feature points that characterize the position of the forehead, such as feature points on the contour of the forehead, such as the intersection of
- the area of the eyes is determined from the image to be detected, and the area of the eyes is cut out from the image to be detected, and the eye image corresponding to the image to be detected is obtained as the image of the eye to be detected.
- the eye image is input into the eye key point detection model obtained by the training, so as to improve the accuracy of the detected corner points in the eyes of the person to be detected and the equally divided eye corner points in the upper and lower eyelids to a certain extent.
- both the left eye image to be detected and the right eye image to be detected may be subjected to normalization processing, that is, processing so that the ordinate values in the position information of the two eye corner points in the left eye image to be detected are the same, and so that the ordinate values in the position information of the two eye corner points in the right eye image to be detected are the same; the corrected eye images are input to the trained eye key point detection model, which to a certain extent improves the accuracy of the detected eye corner points and equally divided eyelid points in the upper and lower eyelids, and to a certain extent lowers the detection difficulty for the trained eye key point detection model.
- the left-eye image to be detected or the right-eye image to be detected after the normalization processing is mirrored to obtain a mirrored eye image, and the mirrored eye image and the unmirrored eye image are stitched together to obtain a stitched eye image; the stitched eye image is input into the trained eye key point detection model, so that the model can detect, in one pass, the eye key points and their position information in the mirrored eye image as well as the eye key points and their position information in the unmirrored eye image; subsequently, the position information of the eye key points in the mirrored eye image is mirrored back to obtain their position information in the image before mirroring, and thereby the position information of the eye key points in the normalized left-eye image to be detected and in the right-eye image to be detected is obtained.
- the method may also include:
- based on the preset curve fitting algorithm, determine the eyelid curves of the upper and lower eyelids of the person to be detected as the eyelid curves to be detected, and draw the eyelid curves to be detected in the grayscale edge map, where the eyelid curve to be detected includes: the upper eyelid curve to be detected that characterizes the upper eyelid of the person to be detected and the lower eyelid curve to be detected that characterizes the lower eyelid of the person to be detected;
- each group of reference points includes points corresponding to the equally divided eyelid points in the eyelid curve to be detected;
- for each set of reference points, based on the eye corner points in the eyelid curve to be detected and the preset curve fitting algorithm, determine the reference curve corresponding to that set of reference points, and draw the reference curve corresponding to each set of reference points in the grayscale edge map, wherein the reference curve corresponding to each set of reference points includes: a reference upper eyelid curve representing the upper eyelid of the person to be detected and a reference lower eyelid curve representing the lower eyelid of the person to be detected;
- in the grayscale edge map, for each first upper eyelid curve, determine the sum of the pixel values of the pixel points corresponding to the first upper eyelid curve, where the first upper eyelid curves include: the reference upper eyelid curves corresponding to each set of reference points and the upper eyelid curve to be detected;
- in the grayscale edge map, for each first lower eyelid curve, determine the sum of the pixel values of the pixel points corresponding to the first lower eyelid curve, where the first lower eyelid curves include: the reference lower eyelid curves corresponding to each set of reference points and the lower eyelid curve to be detected;
- the target eyelid curves are integrated to determine the third curve length of the target upper eyelid curve and the fourth curve length of the target lower eyelid curve; multiple reference eyelid points are determined from the target upper eyelid curve and the target lower eyelid curve respectively;
- the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points are respectively determined.
- the position information of the eye corner points in the image to be detected is output by the trained eye key point detection model, and the edges in the image to be detected are extracted to obtain the grayscale edge map.
- the pixel value of the pixels corresponding to the upper and lower eyelids of the eyes in the grayscale edge map can be 255, and the pixel value of the pixels corresponding to other, non-edge parts can be 0, so as to indicate the positions of edges in the image to be detected, such as the upper and lower eyelids of the eye.
- the eyelid curves of the upper and lower eyelids of the person to be detected can be determined as the eyelid curves to be detected. It can be understood that there is a one-to-one correspondence between each pixel in the grayscale edge map and each pixel in the image to be detected; based on that correspondence, the determined eyelid curve to be detected is drawn in the grayscale edge map, and the eyelid curve to be detected drawn in the grayscale edge map can be used to determine the corresponding eye corner points and their position information, and the equally divided eyelid points and their position information.
- each set of reference points includes points corresponding to the equally divided eyelid points of the eyelid curve to be detected.
- at least one set of reference points may be determined respectively at the upper and lower positions of the eyelid curve to be detected in the grayscale edge map.
- the eyelid curve to be detected includes: the upper eyelid curve to be detected that represents the upper eyelid of the eye and the lower eyelid curve to be detected that represents the lower eyelid.
- the foregoing determination of at least one set of reference points at the upper and lower positions of the eyelid curve to be detected in the grayscale edge map may be: respectively determining at least one set of reference points at the upper and lower positions of the upper eyelid curve to be detected, and respectively determining at least one set of reference points at the upper and lower positions of the lower eyelid curve to be detected.
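Generating reference point sets by shifting the equally divided eyelid points up and down, then fitting a curve through each shifted set, might look like this; a quadratic polynomial fit stands in for the unspecified preset curve fitting algorithm, and the offset values are assumptions:

```python
import numpy as np

def offset_reference_curves(xs, ys, offsets=(-2.0, 2.0), degree=2):
    """Build reference point sets by shifting the equally divided eyelid points
    vertically by a few pixels, then fit a curve through each shifted set.
    A degree-2 polynomial stands in for the preset curve fitting algorithm."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    return [np.polyfit(xs, ys + dy, degree) for dy in offsets]
```

Each returned coefficient vector is one candidate reference curve (one above and one below the curve to be detected, for the default offsets).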
- the white curves in Fig. 4 respectively represent the position of the upper eyelid and the lower eyelid of the eye in the grayscale edge diagram
- the gray curve in Fig. 4 represents the upper eyelid curve to be detected and the lower eyelid curve to be detected in the eyelid curve to be detected.
- the white solid points on the gray curve indicate that the eyelid curve to be detected is equally divided into the eyelid point and the eye corner point.
- the eyelid points in the upper eyelid curve to be detected correspond to two sets of reference points, namely the white hollow points and the gray hollow points;
- the eyelid points in the lower eyelid curve to be detected correspond to two sets of reference points, namely the white hollow points and the gray hollow points.
- based on each set of reference points, the eye corner points in the eyelid curve to be detected, and the preset curve fitting algorithm, the reference curve corresponding to that set of reference points is determined, and the reference curve corresponding to each set of reference points is drawn in the grayscale edge map.
- the grayscale edge map for each first upper eyelid curve, determine the sum of the pixel values of the pixel points corresponding to the first upper eyelid curve, that is, determine the pixel points corresponding to each set of reference points corresponding to the reference upper eyelid curve The sum of pixel values, and the sum of pixel values of pixels corresponding to the upper eyelid curve to be detected.
- the pixel value of the pixel at the upper and lower eyelid positions of the eye in the grayscale edge map is 255.
- the larger the sum of the pixel values of the pixels corresponding to a first upper eyelid curve, the more closely that curve fits the upper eyelid of the eye in the grayscale edge map; accordingly, the first upper eyelid curve with the largest sum of pixel values is determined as the target upper eyelid curve of the upper eyelid of the person to be detected.
- in the grayscale edge map, for each first lower eyelid curve, determine the sum of the pixel values of the pixels corresponding to that curve; that is, determine the sum of the pixel values of the pixels corresponding to the reference lower eyelid curve of each set of reference points, and the sum of the pixel values of the pixels corresponding to the lower eyelid curve to be detected.
- the pixel value of the pixel at the upper and lower eyelid positions of the eye in the grayscale edge map is 255.
- similarly, the larger the sum of the pixel values, the more closely the first lower eyelid curve fits the lower eyelid of the eye in the grayscale edge map; accordingly, the first lower eyelid curve with the largest sum of pixel values is determined as the target lower eyelid curve of the lower eyelid of the person to be detected.
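Scoring candidate eyelid curves against the grayscale edge map, and picking the one with the largest pixel-value sum, can be sketched as follows; polynomial-coefficient curves and integer sampling of the curve are assumptions for illustration:

```python
import numpy as np

def curve_score(edge_map, coeffs, xs):
    """Sum the edge-map pixel values under a candidate eyelid curve; curves
    that lie on the 255-valued eyelid edge score highest."""
    ys = np.polyval(coeffs, xs)
    rows = np.clip(np.round(ys).astype(int), 0, edge_map.shape[0] - 1)
    cols = np.clip(np.round(xs).astype(int), 0, edge_map.shape[1] - 1)
    return int(edge_map[rows, cols].sum())

def best_eyelid_curve(edge_map, candidates, xs):
    """Pick the candidate (curve to be detected plus the reference curves)
    with the largest pixel-value sum, as described above."""
    return max(candidates, key=lambda c: curve_score(edge_map, c, xs))
```

Because eyelid pixels are 255 and everything else is 0, the pixel-value sum directly measures how much of a candidate curve lies on the actual eyelid edge.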
- the target eyelid curves are integrated to determine the third curve length of the target upper eyelid curve and the fourth curve length of the target lower eyelid curve; points are densely sampled from the target upper eyelid curve and the target lower eyelid curve respectively, for example, a preset number of points are taken out; that is, multiple reference eyelid points are determined from the target upper eyelid curve, and multiple reference eyelid points are determined from the target lower eyelid curve; further, based on the third curve length of the target upper eyelid curve, the multiple reference eyelid points in the target upper eyelid curve, and the preset number of equal division points, the preset number of equal division points minus 1 equally divided upper eyelid points are determined from the target upper eyelid curve; based on the fourth curve length of the target lower eyelid curve, the multiple reference eyelid points in the target lower eyelid curve, and the preset number of equal division points, the preset number of equal division points minus 1 equally divided lower eyelid points are determined from the target lower eyelid curve.
- for the process of determining the preset number of equal division points minus 1 equally divided upper eyelid points from the target upper eyelid curve, refer to the process of determining the preset number of equal division points minus 1 equally divided upper eyelid points from the marked upper eyelid curve based on the first curve length of the marked upper eyelid curve, the multiple eyelid points to be used, and the preset number of equal division points.
- the same applies to the process of determining the preset number of equal division points minus 1 equally divided lower eyelid points from the target lower eyelid curve.
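The integrate-then-divide step, approximating curve length as a sum of small segment lengths and then selecting points at equal arc-length intervals, might look like this numpy sketch; the dense sampling of the curve is assumed as input:

```python
import numpy as np

def equal_division_points(xs, ys, n_divisions):
    """Approximate the curve length as a sum of small segment lengths (a
    discrete stand-in for the integration above), then pick the
    n_divisions - 1 sample points closest to equal arc-length intervals."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    seg = np.hypot(np.diff(xs), np.diff(ys))
    cum = np.concatenate([[0.0], np.cumsum(seg)])          # arc length at each sample
    targets = cum[-1] * np.arange(1, n_divisions) / n_divisions
    idx = np.searchsorted(cum, targets)
    return np.column_stack([xs[idx], ys[idx]])
```

With the preset number of equal division points as `n_divisions`, the function returns the `n_divisions - 1` equally divided eyelid points along the curve.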
- FIG. 5 is a schematic structural diagram of an eye key point marking device provided in an embodiment of the present invention.
- the device includes:
- the first obtaining module 510 is configured to obtain a face image and a marked eyelid curve corresponding to each face image, wherein the face image is marked with marked corner points of the eyes and marked eyelid points of the upper and lower eyelids.
- a marked eyelid curve includes: a marked upper eyelid curve representing the upper eyelid and a marked lower eyelid curve representing the lower eyelid generated based on the corresponding marked eyelid points and marked eye corner points;
- the first determining module 520 is configured to, for each marked eyelid curve, integrate the marked eyelid curve based on the principle of mathematical integration, determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve in the marked eyelid curve, and respectively determine multiple eyelid points to be used from the marked upper eyelid curve and the marked lower eyelid curve;
- the second determination module 530 is configured to, for each marked eyelid curve, based on the first curve length of the marked upper eyelid curve, the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used, and the preset number of equal division points, respectively determine, from the marked upper eyelid curve and the marked lower eyelid curve, the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points;
- the third determination module 540 is configured to determine the marked eye corner point, equal division upper eyelid point, and equal division lower eyelid point in each face image as the key eye point corresponding to each face image.
- in this solution, the marked eyelid curve corresponding to a face image can be integrated to determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve; points are then densely sampled from the marked upper eyelid curve and the marked lower eyelid curve to obtain multiple eyelid points to be used. Based on the first curve length of the marked upper eyelid curve, the eyelid points to be used on the marked upper eyelid curve, and the preset number of equal division points, the preset number of equal division points minus 1 equally divided upper eyelid points are determined from the marked upper eyelid curve; and based on the second curve length of the marked lower eyelid curve, the eyelid points to be used on the marked lower eyelid curve, and the preset number of equal division points, the preset number of equal division points minus 1 equally divided lower eyelid points are determined from the marked lower eyelid curve. This semi-automatically marks equally divided eyelid points with clear semantic meaning on the upper and lower eyelids of the eyes contained in the face image, realizing the marking of eye key points with clear semantic meaning and improving marking efficiency to a certain extent.
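The equal-division step above (arc length by numerical integration, then the preset number of equal division points minus 1 points at equal arc-length intervals) can be sketched as follows. This is an illustrative implementation, not the patent's own code; it assumes the eyelid curve is a polynomial y = p(x), and the function name and sampling density are hypothetical:

```python
import numpy as np

def equal_division_points(coeffs, x_start, x_end, k, n_samples=1000):
    """Place k-1 points that divide the curve y = p(x) on [x_start, x_end]
    into k arcs of equal length, by numerically integrating the arc-length
    element sqrt(1 + p'(x)^2) (the 'mathematical integration' step)."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    xs = np.linspace(x_start, x_end, n_samples)
    # cumulative arc length along the curve via the trapezoidal rule
    ds = np.sqrt(1.0 + dp(xs) ** 2)
    s = np.concatenate(([0.0], np.cumsum((ds[1:] + ds[:-1]) / 2 * np.diff(xs))))
    total = s[-1]
    # target arc lengths: total/k, 2*total/k, ..., (k-1)*total/k
    targets = total * np.arange(1, k) / k
    # densely sampled points play the role of the 'eyelid points to be used'
    x_div = np.interp(targets, s, xs)
    return [(x, p(x)) for x in x_div]

# sanity check on a straight line y = x from x=0 to x=4, divided into 4 arcs
pts = equal_division_points([1.0, 0.0], 0.0, 4.0, 4)
```

For a straight line the three division points land at x = 1, 2, 3; for a fitted eyelid polynomial the same routine yields points spaced equally along the curve rather than along the x-axis.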
- the first obtaining module 510 is specifically configured to: fit the upper eyelid curve that characterizes the upper eyelid based on the marked eye corner points in the face image, the marked eyelid points of the upper eyelid, and a preset curve fitting algorithm; and fit the lower eyelid curve that characterizes the lower eyelid based on the marked eye corner points in the face image, the marked eyelid points of the lower eyelid, and the preset curve fitting algorithm, to obtain the marked eyelid curve corresponding to the face image.
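The patent does not fix a particular curve fitting algorithm; one common choice that fits the description is a least-squares polynomial through the two marked eye corner points and the marked eyelid points. A minimal sketch under that assumption (function name and degree are illustrative):

```python
import numpy as np

def fit_eyelid_curve(corner_pts, lid_pts, degree=2):
    """Fit one polynomial eyelid curve through the two marked eye corner
    points and the marked eyelid points (least-squares fit)."""
    pts = np.asarray(list(corner_pts) + list(lid_pts), dtype=float)
    return np.polyfit(pts[:, 0], pts[:, 1], degree)

# hypothetical upper-eyelid labels: corners at (0, 5) and (10, 5), apex (5, 2)
coeffs = fit_eyelid_curve([(0, 5), (10, 5)], [(5, 2)])
```

With three points and degree 2 the fit is exact; with more marked eyelid points the same call returns the least-squares parabola.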
- the device further includes: an interception module (not shown in the figure), configured to, after the marked eye corner points, equally divided upper eyelid points, and equally divided lower eyelid points in each face image are determined as the eye key points corresponding to each face image, for each face image, cut out the image of the area where the eyes are located from the face image based on the eye key points in the face image, to obtain an eye image marked with eye key points;
- a fourth determining module (not shown in the figure), configured to determine the eye images and their corresponding calibration information as training data of an eye key point detection model for detecting the eye key points of the eyes in an image, wherein the calibration information includes the position information of the eye key points in the corresponding eye image.
- the third determining module 540 is specifically configured to: obtain the actual eye opening-closing length corresponding to each face image; obtain the measured eye opening-closing length corresponding to each face image, wherein the measured eye opening-closing length is a length determined based on the upper and lower eyelids of the eyes in a target three-dimensional face model corresponding to the face image, and the target three-dimensional face model is a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model;
- the target three-dimensional face model includes: the upper and lower eyelids of the eyes, constructed based on the marked eye corner points, equally divided upper eyelid points, and equally divided lower eyelid points in the face image;
- for each face image, calculate the ratio of the actual eye opening-closing length corresponding to the face image to the measured eye opening-closing length, as the measured deviation corresponding to the face image; determine the eye images and their corresponding calibration information as training data of the eye key point detection model used to detect the equally divided eyelid points of the upper and lower eyelids of the eyes in an image, wherein the calibration information includes: the position information of the eye key points marked in the corresponding eye image, and the measured deviation corresponding to the face image corresponding to the eye image.
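The measured deviation is, per the text above, a simple per-image ratio. As a one-line sketch (function and variable names are illustrative, lengths would come from the labels and the fitted 3-D face model respectively):

```python
def measured_deviation(actual_open_len, measured_open_len):
    """Ratio of the actual eye opening-closing length to the length
    measured on the target 3-D face model; stored per image as a
    calibration target alongside the key-point positions."""
    return actual_open_len / measured_open_len

# e.g. actual length 8.0 px vs model-measured 10.0 px -> deviation 0.8
dev = measured_deviation(8.0, 10.0)
```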
- the device further includes: a labeling module (not shown in the figure), configured to, before the face images and the marked eyelid curve corresponding to each face image are obtained, label the upper and lower eyelids of the eyes of the person's face in each face image;
- the labeling module includes: an obtaining and displaying unit (not shown in the figure), configured to obtain and display a first face image, wherein the first face image contains the eyes of a person's face, and the first face image is one of the face images; a receiving unit (not shown in the figure), configured to receive annotation instructions triggered by an annotator for the upper and lower eyelids of the eyes in the first face image, wherein each annotation instruction carries the position information of a marked annotation point;
- a determining unit (not shown in the figure), configured to, if it is detected that the annotator has triggered the annotation instruction for a specified eyelid in the first face image at least twice, determine, based on the position information of the annotation points carried by the at least two annotation instructions and a preset curve fitting algorithm, a specified eyelid curve characterizing the specified eyelid, and display the specified eyelid curve in the first face image, so that the annotator can check whether the marked annotation points are eyelid points or eye corner points on the specified eyelid.
- an embodiment of the present invention provides an eye key point detection model training device, which includes:
- the second obtaining module 610 is configured to obtain training data, wherein the training data includes eye images marked with the equally divided eyelid points of the upper and lower eyelids of the eyes and the marked eye corner points, and the calibration information corresponding to each eye image;
- the calibration information includes: the position information of the equally divided eyelid points and marked eye corner points marked in the corresponding eye image, wherein the equally divided eyelid points include: the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid of the eyes in the face image, determined based on the marked eyelid curve corresponding to the face image and the principle of mathematical integration;
- the marked eyelid curve includes: the marked upper eyelid curve representing the upper eyelid and the marked lower eyelid curve representing the lower eyelid, generated based on the marked eye corner points of the eyes and the marked eyelid points of the upper and lower eyelids marked in the corresponding face image; each eye image is an image of the area where the eyes are located, cut out from the corresponding face image;
- the input module 620 is configured to input the eye images and the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each eye image into an initial eye key point detection model, to train and obtain an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image.
- the device further includes: a straightening module (not shown in the figure), configured to, before the eye images and the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each eye image are input into the initial eye key point detection model to train the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, perform, for each eye image, the straightening processing on the eye image to obtain a converted image, wherein the straightening processing is processing that makes the ordinates in the position information of the marked eye corner points in the eye image identical;
- an update module (not shown in the figure), configured to update, based on the position information of the equally divided eyelid points and marked eye corner points in each converted image, the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each converted image;
- the input module 620 is specifically configured to: input the converted images and the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each converted image into the initial eye key point detection model, to train and obtain an eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image.
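The straightening step can be realized as a rotation of the key points about one eye corner so that both eye corners end up on the same horizontal line (equal ordinates). A minimal sketch under that assumption (the patent does not prescribe the rotation pivot; choosing the left corner here is illustrative):

```python
import numpy as np

def straighten_points(points, corner_left, corner_right):
    """Rotate 2-D key points about the left eye corner so that both eye
    corners have the same ordinate (the 'straightening' processing)."""
    cl = np.asarray(corner_left, dtype=float)
    cr = np.asarray(corner_right, dtype=float)
    # angle that levels the line between the two eye corners
    theta = -np.arctan2(cr[1] - cl[1], cr[0] - cl[0])
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    pts = np.asarray(points, dtype=float) - cl
    return pts @ rot.T + cl

# eye corners at (0,0) and (4,4): after straightening both sit at y = 0
out = straighten_points([(0.0, 0.0), (4.0, 4.0)], (0.0, 0.0), (4.0, 4.0))
```

The same rotation would be applied to the image pixels (e.g. with an affine warp), after which the updated point coordinates replace the originals in the calibration information.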
- the eye images include left-eye images and right-eye images corresponding to the left-eye images; the device further includes: a mirroring module (not shown in the figure), configured to, before the converted images and the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each converted image are input into the initial eye key point detection model to train the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, perform mirroring on the left-eye image, or on the right-eye image corresponding to the left-eye image, to obtain a mirror image; and a stitching module (not shown in the figure), configured to stitch the mirror image and the unmirrored image to obtain a stitched image, wherein, if the left-eye image is mirrored, the unmirrored image is the right-eye image corresponding to the left-eye image; if the right-eye image corresponding to the left-eye image is mirrored, the unmirrored image is the left-eye image.
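The mirroring-and-stitching step above can be sketched with plain array operations; this is an illustrative NumPy version (function name and side-by-side layout are assumptions, the patent does not fix the stitching direction):

```python
import numpy as np

def mirror_and_splice(left_eye, right_eye, mirror_left=True):
    """Horizontally mirror one eye crop and concatenate it with the other,
    so a single stitched sample carries both eyes in the same orientation."""
    if mirror_left:
        left_eye = left_eye[:, ::-1]     # flip columns: horizontal mirror
    else:
        right_eye = right_eye[:, ::-1]
    return np.hstack([left_eye, right_eye])

# tiny 2x3 crops: mirror the left crop and splice it with the right crop
left = np.arange(6).reshape(2, 3)
right = np.zeros((2, 3), dtype=int)
stitched = mirror_and_splice(left, right)
```

Mirroring one eye makes left and right eyes share the same orientation, so the detector sees a consistent eyelid shape regardless of which eye the crop came from.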
- the calibration information corresponding to each eye image further includes: the measured deviation corresponding to the eye image, wherein the measured deviation is the ratio of the actual eye opening-closing length corresponding to the eye image to the measured eye opening-closing length;
- the measured eye opening-closing length is a length determined based on the upper and lower eyelids of the eyes in the target three-dimensional face model corresponding to the face image, and the target three-dimensional face model is a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model;
- the target three-dimensional face model includes: the upper and lower eyelids of the eyes, constructed based on the marked eye corner points, equally divided upper eyelid points, and equally divided lower eyelid points in the face image;
- the input module 620 is specifically configured to input the eye images, the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each eye image, and the measured deviation corresponding to each eye image, into the initial eye key point detection model, to train and obtain an eye key point detection model, wherein the eye key point detection model is used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, and to detect the measured deviation corresponding to the image.
- the device further includes: a third obtaining module (not shown in the figure), configured to, after the eye images and the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each eye image are input into the initial eye key point detection model to train and obtain the eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids in an image, obtain an image to be detected, wherein the image to be detected includes the eyes of a person to be detected; and a fifth determining module (not shown in the figure), configured to input the image to be detected into the eye key point detection model, and determine the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the image to be detected.
- the device further includes: an extraction module (not shown in the figure), configured to, after the image to be detected is input into the eye key point detection model and the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the image to be detected are determined, perform edge extraction on the image to be detected using the Sobel algorithm, to obtain a grayscale edge map corresponding to the image to be detected;
- a first determining and drawing module (not shown in the figure), configured to determine, based on the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in the image to be detected and a preset curve fitting algorithm, the eyelid curves of the upper and lower eyelids of the person to be detected as the eyelid curves to be detected, and draw the eyelid curves to be detected in the grayscale edge map, wherein the eyelid curves to be detected include: the upper eyelid curve to be detected, which characterizes the upper eyelid of the person to be detected, and the lower eyelid curve to be detected, which characterizes the lower eyelid of the person to be detected.
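The Sobel edge extraction named above is a standard operator; a minimal dependency-free sketch (the loop-based convolution and edge padding are implementation choices, not part of the patent):

```python
import numpy as np

def sobel_edges(gray):
    """Grayscale edge map via the Sobel operator: convolve with the two
    3x3 Sobel kernels and take the gradient magnitude at each pixel."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                              # vertical-gradient kernel
    h, w = gray.shape
    out = np.zeros((h, w))
    padded = np.pad(gray.astype(float), 1, mode='edge')
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx = np.sum(win * kx)          # horizontal gradient
            gy = np.sum(win * ky)          # vertical gradient
            out[i, j] = np.hypot(gx, gy)   # gradient magnitude
    return out
```

On the resulting map, pixels along true eyelid edges carry large values, which is why the candidate curve whose pixel-value sum is largest (as in the selection step of the claims) best matches the actual eyelid.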
- the foregoing device embodiment corresponds to the method embodiment, and has the same technical effect as the method embodiment.
- the device embodiment is obtained based on the method embodiment, and the specific description can be found in the method embodiment part, which will not be repeated here.
- the modules in the device of the embodiments may be distributed in the device of the embodiments according to the description of the embodiments, or may be located, with corresponding changes, in one or more devices different from that of the embodiments.
- the modules of the above embodiments may be combined into one module, or may be further divided into multiple sub-modules.
Claims (10)
- A method for marking eye key points, characterized by comprising:
obtaining face images and a marked eyelid curve corresponding to each face image, wherein each face image is marked with the marked eye corner points of the eyes contained therein and the marked eyelid points of the upper and lower eyelids, and each marked eyelid curve comprises: a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, generated based on the corresponding marked eyelid points and marked eye corner points;
for each marked eyelid curve, integrating the marked eyelid curve based on the principle of mathematical integration, and determining a first curve length of the marked upper eyelid curve and a second curve length of the marked lower eyelid curve in the marked eyelid curve; determining multiple eyelid points to be used from the marked upper eyelid curve and the marked lower eyelid curve respectively;
for each marked eyelid curve, based on the first curve length of the marked upper eyelid curve, the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used, and a preset number of equal division points, determining, from the marked upper eyelid curve and the marked lower eyelid curve respectively, the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points;
determining the marked eye corner points, equally divided upper eyelid points, and equally divided lower eyelid points in each face image as the eye key points corresponding to each face image.
- The method according to claim 1, characterized in that, after the step of determining the marked eye corner points, equally divided upper eyelid points, and equally divided lower eyelid points in each face image as the eye key points corresponding to each face image, the method further comprises:
for each face image, based on the eye key points in the face image, cutting out the image of the area where the eyes are located from the face image, to obtain an eye image marked with eye key points;
determining the eye images and their corresponding calibration information as training data of an eye key point detection model for detecting the eye key points of the eyes in an image, wherein the calibration information comprises the position information of the eye key points in the corresponding eye image.
- The method according to claim 2, characterized in that the step of determining the eye images and their corresponding calibration information as training data of the eye key point detection model for detecting the eye key points of the eyes in an image comprises:
obtaining the actual eye opening-closing length corresponding to each face image;
obtaining the measured eye opening-closing length corresponding to each face image, wherein the measured eye opening-closing length is a length determined based on the upper and lower eyelids of the eyes in a target three-dimensional face model corresponding to the face image, the target three-dimensional face model is a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model, and the target three-dimensional face model comprises: the upper and lower eyelids of the eyes, constructed based on the marked eye corner points, equally divided upper eyelid points, and equally divided lower eyelid points in the face image;
for each face image, calculating the ratio of the actual eye opening-closing length corresponding to the face image to the measured eye opening-closing length, as the measured deviation corresponding to the face image;
determining the eye images and their corresponding calibration information as training data of the eye key point detection model for detecting the equally divided eyelid points of the upper and lower eyelids of the eyes in an image, wherein the calibration information comprises: the position information of the eye key points marked in the corresponding eye image, and the measured deviation corresponding to the face image corresponding to the eye image.
- The method according to any one of claims 1 to 3, characterized in that, before the step of obtaining the face images and the marked eyelid curve corresponding to each face image, the method further comprises: a process of marking the upper and lower eyelids of the eyes of the person's face in each face image, wherein, for each face image, the following steps are performed to mark the upper and lower eyelids of the eyes of the person's face in the face image:
obtaining and displaying a first face image, wherein the first face image contains the eyes of a person's face, and the first face image is one of the face images;
receiving annotation instructions triggered by an annotator for the upper and lower eyelids of the eyes in the first face image, wherein each annotation instruction carries the position information of a marked annotation point;
if it is detected that the annotator has triggered the annotation instruction for a specified eyelid in the first face image at least twice, determining, based on the position information of the annotation points carried by the at least two annotation instructions triggered by the annotator and a preset curve fitting algorithm, a specified eyelid curve characterizing the specified eyelid, wherein the specified eyelid is: the upper eyelid and the lower eyelid of the eyes in the first face image;
displaying the specified eyelid curve in the first face image, so that the annotator checks whether the marked annotation points are eyelid points or eye corner points on the specified eyelid.
- A training method for an eye key point detection model, characterized in that the method comprises:
obtaining training data, wherein the training data comprises eye images marked with the equally divided eyelid points of the upper and lower eyelids of the eyes and the marked eye corner points, and calibration information corresponding to each eye image, the calibration information comprising: the position information of the equally divided eyelid points and marked eye corner points marked in the corresponding eye image, wherein the equally divided eyelid points comprise: the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid of the eyes in the face image, determined based on the marked eyelid curve corresponding to the face image and the principle of mathematical integration; the marked eyelid curve comprises: a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, generated based on the marked eye corner points of the eyes and the marked eyelid points of the upper and lower eyelids marked in the corresponding face image; each eye image is an image of the area where the eyes are located, cut out from the corresponding face image;
inputting the eye images and the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each eye image into an initial eye key point detection model, to train and obtain an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image.
- The method according to claim 5, characterized in that the calibration information corresponding to each eye image further comprises: the measured deviation corresponding to the eye image, the measured deviation being: the ratio of the actual eye opening-closing length corresponding to the eye image to the measured eye opening-closing length; the measured eye opening-closing length is a length determined based on the upper and lower eyelids of the eyes in a target three-dimensional face model corresponding to the face image, the target three-dimensional face model being: a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model, and comprising: the upper and lower eyelids of the eyes, constructed based on the marked eye corner points, equally divided upper eyelid points, and equally divided lower eyelid points in the face image;
the step of inputting the eye images and the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model, to train and obtain the eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, comprises:
inputting the eye images, the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each eye image, and the measured deviation corresponding to each eye image into the initial eye key point detection model, to train and obtain an eye key point detection model, wherein the eye key point detection model is used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, and to detect the measured deviation corresponding to the image.
- The method according to claim 5, characterized in that, after the step of inputting the eye images and the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model, to train and obtain the eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, the method further comprises:
obtaining an image to be detected, wherein the image to be detected includes the eyes of a person to be detected;
inputting the image to be detected into the eye key point detection model, and determining the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the image to be detected.
- The method according to claim 7, characterized in that, after the step of inputting the image to be detected into the eye key point detection model and determining the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the image to be detected, the method further comprises:
performing edge extraction on the image to be detected using the Sobel algorithm, to obtain a grayscale edge map corresponding to the image to be detected;
determining, based on the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in the image to be detected and a preset curve fitting algorithm, the eyelid curves of the upper and lower eyelids of the person to be detected as the eyelid curves to be detected, and drawing the eyelid curves to be detected in the grayscale edge map, wherein the eyelid curves to be detected comprise: an upper eyelid curve to be detected characterizing the upper eyelid of the person to be detected and a lower eyelid curve to be detected characterizing the lower eyelid of the person to be detected;
determining multiple groups of reference points in the grayscale edge map based on the equally divided eyelid points in the eyelid curves to be detected, wherein each group of reference points includes points corresponding to the equally divided eyelid points in the eyelid curves to be detected;
for each group of reference points, determining, based on the group of reference points, the eye corner points in the eyelid curves to be detected, and the preset curve fitting algorithm, the reference curves corresponding to the group of reference points, and drawing the reference curves corresponding to each group of reference points in the grayscale edge map, wherein the reference curves corresponding to each group of reference points comprise: a reference upper eyelid curve characterizing the upper eyelid of the person to be detected and a reference lower eyelid curve characterizing the lower eyelid of the person to be detected;
in the grayscale edge map, for each first upper eyelid curve, determining the sum of the pixel values of the pixel points corresponding to the first upper eyelid curve, wherein the first upper eyelid curves comprise: the reference upper eyelid curve corresponding to each group of reference points and the upper eyelid curve to be detected;
from the sums of the pixel values of the pixel points corresponding to the first upper eyelid curves, determining the largest sum, and determining the first upper eyelid curve corresponding to the largest sum as the target upper eyelid curve characterizing the upper eyelid of the person to be detected;
in the grayscale edge map, for each first lower eyelid curve, determining the sum of the pixel values of the pixel points corresponding to the first lower eyelid curve, wherein the first lower eyelid curves comprise: the reference lower eyelid curve corresponding to each group of reference points and the lower eyelid curve to be detected;
from the sums of the pixel values of the pixel points corresponding to the first lower eyelid curves, determining the largest sum, and determining the first lower eyelid curve corresponding to the largest sum as the target lower eyelid curve characterizing the lower eyelid of the person to be detected;
integrating the target eyelid curves based on the principle of mathematical integration, and determining a third curve length of the target upper eyelid curve and a fourth curve length of the target lower eyelid curve; determining multiple reference eyelid points from the target upper eyelid curve and the target lower eyelid curve respectively;
based on the third curve length of the target upper eyelid curve, the fourth curve length of the target lower eyelid curve, the multiple reference eyelid points, and the preset number of equal division points, determining, from the target upper eyelid curve and the target lower eyelid curve respectively, the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points.
- An eye key point labeling apparatus, characterized in that it comprises:

a first obtaining module, configured to obtain face images and a labeled eyelid curve corresponding to each face image, wherein each face image is labeled with labeled eye corner points of the eye it contains and labeled eyelid points of the upper and lower eyelids, and each labeled eyelid curve includes: a labeled upper eyelid curve characterizing the upper eyelid and a labeled lower eyelid curve characterizing the lower eyelid, both generated based on the corresponding labeled eyelid points and labeled eye corner points;

a first determining module, configured to, for each labeled eyelid curve, integrate that labeled eyelid curve based on the principle of mathematical integration to determine a first curve length of the labeled upper eyelid curve and a second curve length of the labeled lower eyelid curve, and to determine a plurality of eyelid points to be used from the labeled upper eyelid curve and the labeled lower eyelid curve respectively;

a second determining module, configured to, for each labeled eyelid curve, based on the first curve length of the labeled upper eyelid curve, the second curve length of the labeled lower eyelid curve, the plurality of eyelid points to be used, and a preset number of equal division points, determine, from the labeled upper eyelid curve and the labeled lower eyelid curve respectively, a number of equally divided upper eyelid points equal to the preset number of equal division points minus one, and the same number of equally divided lower eyelid points;

a third determining module, configured to determine the labeled eye corner points, the equally divided upper eyelid points, and the equally divided lower eyelid points in each face image as the eye key points corresponding to that face image.
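The integrate-and-divide procedure performed by the first and second determining modules can be sketched as: accumulate arc length numerically along the eyelid curve (L = ∫ sqrt(1 + y′(x)²) dx), then interpolate the x positions where the cumulative length reaches k/N of the total, for k = 1 … N−1. This is a minimal sketch assuming a polynomial eyelid curve and numpy; names are illustrative, not from the patent.

```python
import numpy as np

def equal_division_points(coeffs, x_start, x_end, n_parts, samples=1000):
    """Place (n_parts - 1) points at equal arc-length intervals on y = polyval(coeffs, x).

    Arc length is accumulated numerically over a dense sampling of the curve,
    approximating the integral of sqrt(1 + y'(x)^2) dx.
    Returns the list of (x, y) division points and the total curve length.
    """
    xs = np.linspace(x_start, x_end, samples)
    ys = np.polyval(coeffs, xs)
    # Cumulative arc length along the sampled curve.
    seg = np.hypot(np.diff(xs), np.diff(ys))
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    total = cum[-1]
    # Target arc lengths at k/n_parts of the total, k = 1 .. n_parts - 1.
    targets = total * np.arange(1, n_parts) / n_parts
    px = np.interp(targets, cum, xs)
    py = np.polyval(coeffs, px)
    return list(zip(px, py)), total
```

With a preset number of equal division points N, this yields the N−1 equally divided eyelid points per eyelid that the claims describe; the same routine is applied once to the upper eyelid curve and once to the lower eyelid curve.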
- A training apparatus for an eye key point detection model, characterized in that it comprises:

a second obtaining module, configured to obtain training data, wherein the training data includes eye images labeled with equally divided eyelid points on the upper and lower eyelids and with labeled eye corner points, as well as calibration information corresponding to each eye image; the calibration information includes: position information of the equally divided eyelid points and the labeled eye corner points labeled in the corresponding eye image; the equally divided eyelid points include: equally divided upper eyelid points on the upper eyelid and equally divided lower eyelid points on the lower eyelid of the eye in the face image, determined based on the labeled eyelid curve corresponding to the face image and the principle of mathematical integration; the labeled eyelid curve includes: a labeled upper eyelid curve characterizing the upper eyelid and a labeled lower eyelid curve characterizing the lower eyelid, generated based on the labeled eye corner points of the eye and the labeled eyelid points of the upper and lower eyelids in the corresponding face image; and each eye image is an image of the region where the eye is located, cropped from the corresponding face image;

an input module, configured to input the eye images, together with the position information of the equally divided eyelid points and the labeled eye corner points included in the calibration information corresponding to each eye image, into an initial eye key point detection model, so as to train an eye key point detection model for detecting the equally divided eyelid points on the upper and lower eyelids and the eye corner points of eyes in an image.
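The labeled eyelid curves referenced above are generated from the labeled eye corner points and eyelid points by a curve fitting algorithm. The claims do not fix a specific curve family, so the following is only a plausible minimal sketch using a least-squares quadratic fit with numpy; the function name is illustrative.

```python
import numpy as np

def fit_eyelid_curve(points):
    """Least-squares fit of y = a*x^2 + b*x + c through labeled eyelid and corner points.

    `points` is an iterable of (x, y) pixel coordinates; returns (a, b, c).
    """
    pts = np.asarray(points, dtype=float)
    return np.polyfit(pts[:, 0], pts[:, 1], deg=2)
```

An upper and a lower eyelid curve would each be fitted through the two labeled eye corner points plus the eyelid points labeled on that eyelid, since both curves share the corner points as endpoints.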
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910541988.5 | 2019-06-21 | ||
CN201910541988.5A CN110956071B (en) | 2019-06-21 | 2019-06-21 | Eye key point labeling and detection model training method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020252969A1 true WO2020252969A1 (en) | 2020-12-24 |
Family
ID=69975485
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/108077 WO2020252969A1 (en) | 2019-06-21 | 2019-09-26 | Eye key point labeling method and apparatus, and training method and apparatus for eye key point detection model |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110956071B (en) |
WO (1) | WO2020252969A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113221599B (en) * | 2020-01-21 | 2022-06-10 | 魔门塔(苏州)科技有限公司 | Eyelid curve construction method and device |
CN113516705B (en) * | 2020-04-10 | 2024-04-02 | 魔门塔(苏州)科技有限公司 | Calibration method and device for hand key points |
CN113743172B (en) * | 2020-05-29 | 2024-04-16 | 魔门塔(苏州)科技有限公司 | Personnel gazing position detection method and device |
CN113723214B (en) * | 2021-08-06 | 2023-10-13 | 武汉光庭信息技术股份有限公司 | Face key point labeling method, system, electronic equipment and storage medium |
CN113591815B (en) * | 2021-09-29 | 2021-12-21 | 北京万里红科技有限公司 | Method for generating canthus recognition model and method for recognizing canthus in eye image |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101583971A (en) * | 2006-12-04 | 2009-11-18 | 爱信精机株式会社 | Eye detecting device, eye detecting method, and program |
CN106203262A (en) * | 2016-06-27 | 2016-12-07 | 辽宁工程技术大学 | A kind of ocular form sorting technique based on eyelid curve similarity Yu ocular form index |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104091150B (en) * | 2014-06-26 | 2019-02-26 | 浙江捷尚视觉科技股份有限公司 | A kind of human eye state judgment method based on recurrence |
CN108229301B (en) * | 2017-11-03 | 2020-10-09 | 北京市商汤科技开发有限公司 | Eyelid line detection method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110956071A (en) | 2020-04-03 |
CN110956071B (en) | 2022-06-03 |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19933800; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19933800; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.08.2022) |