WO2020252969A1 - Eye key point labeling method and apparatus, and training method and apparatus for eye key point detection model


Info

Publication number
WO2020252969A1
Authority
WO
WIPO (PCT)
Prior art keywords
eyelid
eye
points
curve
image
Prior art date
Application number
PCT/CN2019/108077
Other languages
French (fr)
Chinese (zh)
Inventor
李源
杨燕丹
王晋玮
Original Assignee
初速度(苏州)科技有限公司
Application filed by 初速度(苏州)科技有限公司
Publication of WO2020252969A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G06V40/197 Matching; Classification

Definitions

  • The present invention relates to the technical field of intelligent detection, and in particular to an eye key point labeling method and apparatus and a training method and apparatus for an eye key point detection model.
  • For example, a fatigue detection system can detect, based on a pre-trained eye key point detection model, the eye key points of the upper and lower eyelids of the human eyes contained in an image.
  • Then, based on the key points of the upper and lower eyelids, the distance between the upper and lower eyelids is calculated as the opening and closing distance of the human eye, and based on the opening and closing distances observed within a preset time length it is determined whether the person corresponding to the image is in a fatigue state, thereby realizing fatigue detection.
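  • As a hedged illustration of this background workflow (not part of the claimed method), the following sketch assumes hypothetical upper and lower eyelid key points returned by some detector, a hypothetical pixel threshold and a hypothetical frame rate; it estimates the eye opening distance per frame and flags fatigue when the eye stays nearly closed for a preset duration.

```python
import numpy as np

def eye_opening_distance(upper_pts, lower_pts):
    """Mean vertical gap between matched upper- and lower-eyelid key points (pixels)."""
    upper = np.asarray(upper_pts, dtype=float)   # shape (N, 2): one (x, y) per key point
    lower = np.asarray(lower_pts, dtype=float)
    return float(np.mean(lower[:, 1] - upper[:, 1]))

def is_fatigued(opening_per_frame, closed_thresh_px=2.0, fps=30, closed_seconds=1.5):
    """Flag fatigue if the eye opening stays below a threshold for a preset time span."""
    closed = np.asarray(opening_per_frame) < closed_thresh_px
    need = int(fps * closed_seconds)             # number of consecutive "closed" frames required
    run = 0
    for c in closed:
        run = run + 1 if c else 0
        if run >= need:
            return True
    return False
```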
  • As another example, a beautifying camera can detect, based on a pre-trained eye key point detection model, the eye key points of the upper and lower eyelids contained in an image, and then use the positions of those key points to enlarge or shrink the eyes so as to beautify them.
  • The above-mentioned pre-trained eye key point detection model is obtained by training on sample images marked with human eye key points.
  • At present, these eye key points are generally marked manually by annotators.
  • However, different annotators apply non-uniform standards when marking eye key points, the semantic features of the eye key points are not obvious, and the efficiency of manually marking eye key points is low.
  • In view of this, the present invention provides an eye key point labeling method and apparatus and a detection model training method and apparatus, so as to label eye key points with obvious semantic features and improve labeling efficiency to a certain extent.
  • the specific technical solution is as follows.
  • In a first aspect, an embodiment of the present invention provides an eye key point labeling method.
  • The method includes: obtaining face images and a marked eyelid curve corresponding to each face image, wherein each face image is marked with marked eye corner points and marked eyelid points of the upper and lower eyelids, and each marked eyelid curve includes a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, both generated based on the corresponding marked eyelid points and marked eye corner points;
  • for each marked eyelid curve, integrating the marked eyelid curve based on the principle of mathematical integration to determine a first curve length of the marked upper eyelid curve and a second curve length of the marked lower eyelid curve, and determining multiple eyelid points to be used from the marked upper eyelid curve and the marked lower eyelid curve respectively;
  • for each marked eyelid curve, based on the first curve length of the marked upper eyelid curve, the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used and a preset number of equal division points, determining, from the marked upper eyelid curve and the marked lower eyelid curve respectively, the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points;
  • determining the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image as the eye key points corresponding to that face image.
  • Optionally, the step of obtaining the marked eyelid curve corresponding to each face image includes:
  • for each face image, fitting an upper eyelid curve characterizing the upper eyelid based on the marked eye corner points and the marked eyelid points of the upper eyelid in the face image and a preset curve fitting algorithm; and fitting a lower eyelid curve characterizing the lower eyelid based on the marked eye corner points and the marked eyelid points of the lower eyelid in the face image and the preset curve fitting algorithm, to obtain the marked eyelid curve corresponding to the face image.
  • Optionally, the method further includes: for each face image, based on the eye key points in the face image, cropping the image of the region where the eye is located from the face image to obtain an eye image marked with the eye key points;
  • determining the eye images and their corresponding calibration information as training data for an eye key point detection model used to detect eye key points of eyes in images, wherein the calibration information includes position information of the eye key points marked in the corresponding eye image.
  • Optionally, the step of determining the eye images and their corresponding calibration information as training data for the eye key point detection model used to detect eye key points of eyes in images includes:
  • obtaining a real eye opening and closing length and a measured eye opening and closing length corresponding to each face image, wherein the measured eye opening and closing length is a length determined based on the upper and lower eyelids of the eye in a target three-dimensional face model corresponding to the face image, the target three-dimensional face model is a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model, and the upper and lower eyelids of the eye in the target three-dimensional face model are constructed based on the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in the face image;
  • determining, for each face image, the ratio of the real eye opening and closing length to the measured eye opening and closing length as the measured deviation corresponding to the face image;
  • determining the eye images and their corresponding calibration information as training data for an eye key point detection model used to detect the upper and lower eyelids of eyes in images, wherein the calibration information includes the position information of the eye key points marked in the corresponding eye image and the measured deviation corresponding to the face image corresponding to the eye image.
  • Optionally, the method further includes a process of marking the upper and lower eyelids of the eyes of the person's face in each face image, wherein, for each face image, the following steps are performed:
  • obtaining a first face image, wherein the first face image contains an eye of a person's face and is one of the face images;
  • if it is detected that the annotator has triggered marking instructions for a specified eyelid in the first face image at least twice, determining, based on the position information of the marked points carried by the marking instructions triggered at least twice and a preset curve fitting algorithm, a specified eyelid curve characterizing the specified eyelid, wherein the specified eyelid is the upper eyelid or the lower eyelid of the eye in the first face image;
  • displaying the specified eyelid curve in the first face image, so that the annotator can check whether the marked points are eyelid points or eye corner points on the specified eyelid.
  • In a second aspect, an embodiment of the present invention provides a method for training an eye key point detection model, the method including:
  • obtaining training data, wherein the training data includes eye images marked with the equally divided eyelid points of the upper and lower eyelids of the eye and with the marked eye corner points, and calibration information corresponding to each eye image;
  • the calibration information includes the position information of the equally divided eyelid points and the marked eye corner points marked in the corresponding eye image, and the equally divided eyelid points include the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid of the eye in the face image, determined based on the marked eyelid curve corresponding to the face image and the principle of mathematical integration;
  • the marked eyelid curve includes a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, generated based on the marked eye corner points and the marked eyelid points of the upper and lower eyelids marked in the corresponding face image, and each eye image is an image, cropped from the corresponding face image, of the region where the eye is located;
  • inputting the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image into an initial eye key point detection model to be trained, to obtain an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images.
  • Optionally, the method further includes: performing correction processing on each eye image, wherein the correction processing is processing that makes the ordinates in the position information of the marked eye corner points in the eye image all the same.
  • Optionally, the step of inputting the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model to be trained, to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images, includes the following.
  • The eye image includes a left eye image and a right eye image corresponding to the left eye image;
  • the method further includes: mirroring the left eye image, or the right eye image corresponding to the left eye image, to obtain a mirror image;
  • splicing the mirror image and the image that was not mirrored to obtain a spliced image, wherein if the left eye image is mirrored, the image that was not mirrored is the right eye image corresponding to the left eye image, and if the right eye image corresponding to the left eye image is mirrored, the image that was not mirrored is the left eye image;
  • inputting each converted image (i.e., each corrected and spliced image) and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each converted image into the initial eye key point detection model to be trained, to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images (see the pre-processing sketch below).
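  • A minimal sketch of how the correction, mirroring and splicing described above might be implemented, assuming hypothetical inputs (an eye image with its two marked eye corner points, plus the corresponding right eye image); OpenCV is used for the rotation, and the resize-then-horizontal-concatenation layout is an assumption, since the exact splicing layout is not specified here.

```python
import cv2
import numpy as np

def correct_eye_image(eye_img, corner_left, corner_right):
    """Rotate the eye image so that both marked eye corner points share the same ordinate."""
    (x1, y1), (x2, y2) = corner_left, corner_right
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))       # tilt of the corner-to-corner line
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = eye_img.shape[:2]
    return cv2.warpAffine(eye_img, rot, (w, h))

def stitch_left_right(left_eye_img, right_eye_img, size=(64, 64)):
    """Mirror the right-eye image and splice it with the (unmirrored) left-eye image."""
    left = cv2.resize(left_eye_img, size)
    right = cv2.flip(cv2.resize(right_eye_img, size), 1)   # 1 = horizontal mirror
    return np.hstack([left, right])                        # spliced image fed to the model
```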
  • Optionally, the calibration information corresponding to each eye image further includes a measured deviation corresponding to the eye image, where the measured deviation is the ratio of the real eye opening and closing length corresponding to the eye image to the measured eye opening and closing length;
  • the measured eye opening and closing length is a length determined based on the upper and lower eyelids of the eye in the target three-dimensional face model corresponding to the face image, and the target three-dimensional face model is a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model.
  • Optionally, the step of inputting the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model to be trained, to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images, includes:
  • inputting the eye images, together with the position information of the equally divided eyelid points and the marked eye corner points and the measured deviation included in the calibration information corresponding to each eye image, into the initial eye key point detection model to train an eye key point detection model, wherein the eye key point detection model is used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in an image and to detect the measured deviation corresponding to the image.
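  • To make this training target concrete, here is a hedged PyTorch-style sketch; the patent does not disclose a network architecture, so this small CNN, its layer sizes, the number of key points and the loss weighting are all assumptions. The model regresses the key point coordinates (one x, y pair per point) together with one measured-deviation scalar from a single-channel eye image.

```python
import torch
import torch.nn as nn

class EyeKeyPointNet(nn.Module):
    def __init__(self, num_keypoints=22):           # e.g. eye corners + equally divided eyelid points
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.kp_head = nn.Linear(64, num_keypoints * 2)   # (x, y) per key point
        self.dev_head = nn.Linear(64, 1)                  # measured-deviation scalar

    def forward(self, x):
        feat = self.backbone(x)
        return self.kp_head(feat), self.dev_head(feat)

def training_loss(pred_kp, pred_dev, gt_kp, gt_dev, w_dev=0.1):
    # Joint loss: key point regression plus the measured-deviation regression term.
    return nn.functional.mse_loss(pred_kp, gt_kp) + w_dev * nn.functional.mse_loss(pred_dev, gt_dev)
```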
  • Optionally, the method further includes:
  • inputting an image to be detected into the eye key point detection model, and determining the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the image to be detected.
  • Optionally, the method further includes: performing edge extraction on the image to be detected using the Sobel algorithm to obtain a grayscale edge map corresponding to the image to be detected;
  • determining the eyelid curves of the upper and lower eyelids of the person to be detected as eyelid curves to be detected, and drawing the eyelid curves to be detected in the grayscale edge map, wherein the eyelid curves to be detected include an upper eyelid curve to be detected characterizing the upper eyelid of the person to be detected and a lower eyelid curve to be detected characterizing the lower eyelid of the person to be detected;
  • obtaining multiple groups of reference points, wherein each group of reference points includes points corresponding to the equally divided eyelid points in the eyelid curves to be detected;
  • for each group of reference points, determining the reference curve corresponding to the group of reference points and drawing it in the grayscale edge map, wherein the reference curve corresponding to each group of reference points includes a reference upper eyelid curve characterizing the upper eyelid of the person to be detected and a reference lower eyelid curve characterizing the lower eyelid of the person to be detected;
  • for each first upper eyelid curve, determining the sum of the pixel values of the pixel points corresponding to the first upper eyelid curve, wherein the first upper eyelid curves include the reference upper eyelid curve corresponding to each group of reference points and the upper eyelid curve to be detected;
  • determining the sum with the largest value, and determining the first upper eyelid curve corresponding to the largest sum as a target upper eyelid curve characterizing the upper eyelid of the person to be detected;
  • for each first lower eyelid curve, determining the sum of the pixel values of the pixel points corresponding to the first lower eyelid curve, wherein the first lower eyelid curves include the reference lower eyelid curve corresponding to each group of reference points and the lower eyelid curve to be detected;
  • determining the sum with the largest value, and determining the first lower eyelid curve corresponding to the largest sum as a target lower eyelid curve characterizing the lower eyelid of the person to be detected;
  • integrating the target eyelid curve to determine a third curve length of the target upper eyelid curve and a fourth curve length of the target lower eyelid curve in the target eyelid curve;
  • determining multiple reference eyelid points from the target upper eyelid curve and the target lower eyelid curve respectively;
  • based on the third curve length, the fourth curve length, the multiple reference eyelid points and the preset number of equal division points, determining, from the target upper eyelid curve and the target lower eyelid curve respectively, the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points. A sketch of this edge-based refinement follows below.
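  • A hedged sketch of the Sobel-based refinement step described above: it assumes the detected eyelid points are already available, draws the detected curve and each reference curve over the gradient-magnitude edge map, and keeps the candidate whose pixels accumulate the largest sum. The scheme for generating the groups of reference points is not specified above, so the small vertical offsets used here are an assumption.

```python
import cv2
import numpy as np

def edge_map(image_gray):
    """Grayscale edge map of the image to be detected, via the Sobel operator."""
    gx = cv2.Sobel(image_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(image_gray, cv2.CV_64F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)

def curve_pixels(points, num_samples=200):
    """Densely sample pixel coordinates along a polyline through the given eyelid points."""
    pts = np.asarray(points, dtype=float)
    t = np.linspace(0, 1, len(pts))
    ts = np.linspace(0, 1, num_samples)
    xs = np.interp(ts, t, pts[:, 0])
    ys = np.interp(ts, t, pts[:, 1])
    return np.round(np.stack([xs, ys], axis=1)).astype(int)

def refine_eyelid(detected_pts, edges, offsets=(-2, -1, 1, 2)):
    """Among the detected curve and vertically shifted reference curves, pick the one
    whose pixels sum to the largest value in the edge map."""
    h, w = edges.shape
    candidates = [np.asarray(detected_pts, dtype=float)]
    candidates += [np.asarray(detected_pts, dtype=float) + np.array([0.0, dy]) for dy in offsets]
    best, best_score = None, -np.inf
    for cand in candidates:
        pix = curve_pixels(cand)
        pix[:, 0] = np.clip(pix[:, 0], 0, w - 1)
        pix[:, 1] = np.clip(pix[:, 1], 0, h - 1)
        score = edges[pix[:, 1], pix[:, 0]].sum()
        if score > best_score:
            best, best_score = cand, score
    return best
```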
  • In a third aspect, an embodiment of the present invention provides an eye key point labeling apparatus.
  • The apparatus includes: a first obtaining module configured to obtain face images and a marked eyelid curve corresponding to each face image,
  • wherein each face image is marked with marked eye corner points and marked eyelid points of the upper and lower eyelids, and each marked eyelid curve includes a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, both generated based on the corresponding marked eyelid points and marked eye corner points;
  • a first determining module configured to, for each marked eyelid curve, integrate the marked eyelid curve based on the principle of mathematical integration, determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve in the marked eyelid curve, and determine multiple eyelid points to be used from the marked upper eyelid curve and the marked lower eyelid curve respectively;
  • a second determining module configured to, for each marked eyelid curve, based on the first curve length of the marked upper eyelid curve, the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used and a preset number of equal division points, determine, from the marked upper eyelid curve and the marked lower eyelid curve respectively, the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points;
  • a third determining module configured to determine the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image as the eye key points corresponding to each face image.
  • In a fourth aspect, an embodiment of the present invention provides a training apparatus for an eye key point detection model, the apparatus including:
  • a second obtaining module configured to obtain training data, wherein the training data includes eye images marked with the equally divided eyelid points of the upper and lower eyelids of the eye and with the marked eye corner points, and calibration information corresponding to each eye image, the calibration information including the position information of the equally divided eyelid points and the marked eye corner points marked in the corresponding eye image;
  • the equally divided eyelid points include the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid of the eye in the face image, determined based on the marked eyelid curve corresponding to the face image and the principle of mathematical integration;
  • the marked eyelid curve includes a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, generated based on the marked eye corner points and the marked eyelid points of the upper and lower eyelids marked in the corresponding face image;
  • an input module configured to input the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image into an initial eye key point detection model to be trained, to obtain an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images.
  • By applying the eye key point labeling method and apparatus and the detection model training method and apparatus provided by the embodiments of the present invention, face images and the marked eyelid curve corresponding to each face image can be obtained, where each face image is marked with marked eye corner points and marked eyelid points of the upper and lower eyelids, and each marked eyelid curve includes a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, both generated based on the corresponding marked eyelid points and marked eye corner points;
  • for each marked eyelid curve, the marked eyelid curve is integrated based on the principle of mathematical integration to determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve, and multiple eyelid points to be used are determined from the marked upper eyelid curve and the marked lower eyelid curve respectively;
  • for each marked eyelid curve, based on the first curve length of the marked upper eyelid curve, the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used and the preset number of equal division points, the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points are determined respectively;
  • the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image are determined as the eye key points corresponding to each face image.
  • In the embodiments of the present invention, the marked eyelid curve corresponding to a face image can be integrated to determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve, and points can be densely selected from the marked upper eyelid curve and the marked lower eyelid curve to determine multiple eyelid points to be used;
  • then the preset number of equal division points minus 1 equally divided upper eyelid points are determined from the marked upper eyelid curve based on the first curve length of the marked upper eyelid curve, the multiple eyelid points to be used on it and the preset number of equal division points,
  • and the preset number of equal division points minus 1 equally divided lower eyelid points are determined from the marked lower eyelid curve based on the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used on it and the preset number of equal division points.
  • In this way, equally divided eyelid points with obvious semantic features are marked semi-automatically on the upper and lower eyelids of the eyes contained in the face image, which realizes labeling eye key points with obvious semantic features and improves labeling efficiency to a certain extent.
  • any product or method of the present invention does not necessarily need to achieve all the advantages described above at the same time.
  • That is, points are densely selected from the upper eyelid curve and the lower eyelid curve to determine multiple eyelid points to be used, the equally divided eyelid points are then determined from these points based on the corresponding curve lengths and the preset number of equal division points, and the upper and lower eyelids are thereby marked with equally divided eyelid points that have obvious semantic features, which realizes labeling eye key points with obvious semantic features and improves labeling efficiency to a certain extent.
  • Furthermore, the image of the region where the eye is located can be cropped from the face image based on the eye key points to obtain an eye image marked with the eye key points,
  • and the eye image together with its corresponding calibration information, which includes the position information of the eye key points in the corresponding eye image, is determined as training data for an eye key point detection model used to detect the eye key points of eyes in images.
  • Because eye key points with obvious semantic features are used as the training data, an eye key point detection model with high stability and detection accuracy can be trained.
  • Furthermore, a measured deviation corresponding to each face image is determined; the measured deviation characterizes the difference between the measured distance between the upper and lower eyelids of the eye and the real distance between the upper and lower eyelids.
  • Using the measured deviation as part of the training data of the eye key point detection model for detecting the upper and lower eyelids enables the trained model to output, for an image, the measured deviation between the measured distance between the upper and lower eyelids and their real distance, so that the measured distance can be corrected according to the measured deviation, which improves the accuracy of the measured distance between the upper and lower eyelids to a certain extent.
  • Furthermore, the specified eyelid curve of a specified eyelid can be generated in real time according to the points marked by the annotator and displayed, so that the annotator can check whether each marked point is an eyelid point or an eye corner point on the specified eyelid; to a certain extent, this ensures the accuracy of the eyelid points and eye corner points marked by the annotator and improves the annotator's labeling efficiency.
  • Furthermore, an initial eye key point detection model is trained to obtain an eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images.
  • The eye key point detection model can determine eye key points with obvious semantic features from an image, which guarantees the stability and detection accuracy of the model to a certain extent.
  • Training the model to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids can also shorten the training time to a certain extent.
  • Furthermore, the left eye image and the right eye image determined from an image can first be subjected to correction, mirroring and splicing, so that the eye key point detection model detects the eye key points of both eyes in the processed image at the same time;
  • that is, through a single detection pass, the eye key points of the upper and lower eyelids of both eyes in the processed image can be detected, which simplifies the process of detecting eye key points with the eye key point detection model.
  • FIG. 1 is a schematic flowchart of a method for marking eye key points according to an embodiment of the present invention
  • FIG. 2A, FIG. 2B, FIG. 2C and FIG. 2D are schematic diagrams of marked eyelid curves corresponding to face images;
  • FIG. 3 is a schematic flowchart of a method for training an eye key point detection model provided by an embodiment of the present invention
  • FIG. 4 is a schematic diagram of a structure of an eye key point marking device provided by an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a training device for an eye key point detection model provided by an embodiment of the present invention.
  • The embodiments of the present invention disclose an eye key point labeling method and apparatus and a training method and apparatus for an eye key point detection model, so as to label eye key points with obvious semantic features and improve labeling efficiency to a certain extent.
  • the embodiments of the present invention will be described in detail below.
  • FIG. 1 is a schematic flowchart of a method for marking eye key points according to an embodiment of the present invention. The method is applied to an electronic device, which may be a device with strong computing and processing capabilities, such as a server.
  • For clarity of description, the electronic device that implements the eye key point labeling method may be referred to as the first electronic device. The method specifically includes the following steps:
  • S101: Obtain face images and the marked eyelid curve corresponding to each face image, wherein each face image is marked with marked eye corner points and marked eyelid points of the upper and lower eyelids, and each marked eyelid curve includes a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, both generated based on the corresponding marked eyelid points and marked eye corner points.
  • The face image may include two eyes of the person's face, as shown in FIG. 2B and FIG. 2D, or only one eye of the face, as shown in FIG. 2A and FIG. 2C; either is possible.
  • When the face image includes two eyes of a person's face, the face image is marked with the marked eye corner points of both eyes and the marked eyelid points of their upper and lower eyelids.
  • In this case, the marked eyelid curve corresponding to the face image includes the marked eyelid curve corresponding to the left eye and the marked eyelid curve corresponding to the right eye.
  • The marked eyelid curve corresponding to the left eye includes a marked upper eyelid curve characterizing the upper eyelid of the left eye and a marked lower eyelid curve characterizing the lower eyelid of the left eye, both generated based on the marked eyelid points and marked eye corner points corresponding to the left eye.
  • The marked eyelid curve corresponding to the right eye includes a marked upper eyelid curve characterizing the upper eyelid of the right eye and a marked lower eyelid curve characterizing the lower eyelid of the right eye, both generated based on the marked eyelid points and marked eye corner points corresponding to the right eye.
  • The marked eyelid curve corresponding to each face image may be generated, while the annotator is marking the upper and lower eyelids of the eyes in the face image, from the marked eyelid points of the upper eyelid of the marked eye and a preset curve fitting algorithm, and from the marked eyelid points of the lower eyelid of the marked eye and the preset curve fitting algorithm. In that case, when the first electronic device obtains the face image, it also obtains the marked eyelid curve corresponding to the face image.
  • In another implementation, the step of obtaining the marked eyelid curve corresponding to each face image may include: for each face image, fitting an upper eyelid curve characterizing the upper eyelid based on the marked eye corner points, the marked eyelid points of the upper eyelid in the face image and a preset curve fitting algorithm; and fitting a lower eyelid curve characterizing the lower eyelid based on the marked eye corner points, the marked eyelid points of the lower eyelid in the face image and the preset curve fitting algorithm, to obtain the marked eyelid curve corresponding to the face image.
  • That is, after the first electronic device obtains the face image, it can fit the upper eyelid curve characterizing the upper eyelid of the eye as the marked upper eyelid curve, based on the marked eye corner points and the marked eyelid points of the upper eyelid contained in the face image and the preset curve fitting algorithm;
  • and fit the lower eyelid curve characterizing the lower eyelid of the eye as the marked lower eyelid curve, based on the marked eye corner points and the marked eyelid points of the lower eyelid marked in the face image and the preset curve fitting algorithm.
  • The preset curve fitting algorithm may be a cubic spline interpolation algorithm. It is understandable that each eye includes two marked eye corner points, which are the intersections of the marked upper eyelid curve characterizing the upper eyelid of the eye and the marked lower eyelid curve characterizing the lower eyelid of the eye.
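  • As a minimal sketch of this fitting step, assuming hypothetical 2D point coordinates, SciPy's cubic spline interpolation can stand in for the preset curve fitting algorithm; parameterizing the curve over the horizontal coordinate is a simplification.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fit_eyelid_curve(corner_pts, eyelid_pts):
    """Fit a cubic spline through both eye corner points and the marked eyelid points.

    Returns a callable curve y = f(x) defined between the two corners.
    """
    pts = np.vstack([corner_pts, eyelid_pts]).astype(float)
    pts = pts[np.argsort(pts[:, 0])]          # sort by x so the spline is well defined
    return CubicSpline(pts[:, 0], pts[:, 1])

# Hypothetical usage: one curve for the upper eyelid, one for the lower eyelid.
corners = [(100.0, 120.0), (160.0, 118.0)]
upper_pts = [(115.0, 108.0), (130.0, 104.0), (145.0, 109.0)]
upper_curve = fit_eyelid_curve(corners, upper_pts)
```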
  • S102: For each marked eyelid curve, integrate the marked eyelid curve based on the principle of mathematical integration to determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve in the marked eyelid curve, and determine multiple eyelid points to be used from the marked upper eyelid curve and the marked lower eyelid curve respectively.
  • For each marked eyelid curve, the first electronic device may integrate the marked upper eyelid curve in the marked eyelid curve based on the principle of mathematical integration to determine the curve length of the marked upper eyelid curve as the first curve length, and integrate the marked lower eyelid curve in the marked eyelid curve to determine the curve length of the marked lower eyelid curve as the second curve length.
  • For each marked eyelid curve, the first electronic device densely selects points on the marked upper eyelid curve to determine multiple eyelid points to be used, for example by taking a preset number of points, that is, determining a preset number of eyelid points to be used.
  • Likewise, for each marked eyelid curve, the first electronic device densely selects points on the marked lower eyelid curve to determine multiple eyelid points to be used, for example by taking a preset number of points.
  • S103: For each marked eyelid curve, based on the first curve length of the marked upper eyelid curve, the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used and the preset number of equal division points, determine, from the marked upper eyelid curve and the marked lower eyelid curve respectively, the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points.
  • For the marked upper eyelid curve, the first electronic device can determine, based on the first curve length and the preset number of equal division points, the spacing between every two adjacent equally divided upper eyelid points that need to be marked, where this spacing is equal to the ratio of the first curve length to the preset number of equal division points.
  • In one implementation, starting from one eye corner, the first electronic device may calculate the curve distance between the eye corner point and each eyelid point to be used; when the distance between a certain eyelid point to be used and the eye corner point is an integer multiple of the spacing between adjacent equally divided upper eyelid points, that eyelid point to be used can be determined as an equally divided upper eyelid point.
  • The integer multiple ranges from 1 to the preset number of equal division points minus 1.
  • In another implementation, the first electronic device may calculate the curve distance between the eye corner point and each eyelid point to be used; when the distance between a certain eyelid point to be used and the eye corner point equals the spacing between adjacent equally divided upper eyelid points, that eyelid point to be used is determined as the first equally divided upper eyelid point; then, taking the first equally divided upper eyelid point as the starting position, the subsequent eyelid points to be used are traversed in turn to find the next point at the same spacing, and so on, until the preset number of equal division points minus 1 equally divided upper eyelid points are determined.
  • Similarly, for the marked lower eyelid curve, the second curve length and the preset number of equal division points are used to determine the spacing between every two adjacent equally divided lower eyelid points that need to be marked, where this spacing is equal to the ratio of the second curve length to the preset number of equal division points.
  • In one implementation, starting from one eye corner, the first electronic device may calculate the curve distance between the eye corner point and each eyelid point to be used; when the distance between a certain eyelid point to be used and the eye corner point is an integer multiple of the spacing between adjacent equally divided lower eyelid points, that eyelid point to be used can be determined as an equally divided lower eyelid point, where the integer multiple ranges from 1 to the preset number of equal division points minus 1.
  • In another implementation, when the distance between a certain eyelid point to be used and the eye corner point equals the spacing between adjacent equally divided lower eyelid points, that eyelid point to be used is determined as the first equally divided lower eyelid point; then, taking the first equally divided lower eyelid point as the starting position, the subsequent eyelid points to be used are traversed to find the point whose distance from the first equally divided lower eyelid point equals the spacing, which is determined as the second equally divided lower eyelid point, and so on, until the preset number of equal division points minus 1 equally divided lower eyelid points are determined.
  • When the face image includes two eyes, the above steps are performed for the marked eyelid curve corresponding to the left eye and for the marked eyelid curve corresponding to the right eye. A sketch combining steps S102 and S103 is given below.
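  • Putting steps S102 and S103 together, the following hedged sketch integrates the fitted curve to obtain its arc length, densely samples eyelid points to be used, and then keeps the samples whose cumulative arc length is closest to integer multiples of (curve length / preset number of equal division points); the fixed sampling density and the nearest-multiple selection rule are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def equally_divided_points(curve: CubicSpline, x_start, x_end, num_divisions=10, dense=2000):
    """Return num_divisions - 1 points that split the eyelid curve into equal arc lengths."""
    xs = np.linspace(x_start, x_end, dense)                  # densely selected points to be used
    ys = curve(xs)
    seg = np.hypot(np.diff(xs), np.diff(ys))                 # length of each small segment
    arc = np.concatenate([[0.0], np.cumsum(seg)])            # cumulative arc length (integration)
    total = arc[-1]                                          # first / second curve length
    step = total / num_divisions                             # spacing between adjacent division points
    targets = step * np.arange(1, num_divisions)             # integer multiples of the spacing
    idx = [int(np.argmin(np.abs(arc - t))) for t in targets] # nearest dense sample per multiple
    return np.stack([xs[idx], ys[idx]], axis=1)

# Hypothetical usage with the upper_curve fitted in the earlier sketch:
# upper_div = equally_divided_points(upper_curve, 100.0, 160.0, num_divisions=10)
```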
  • S104: Determine the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image as the eye key points corresponding to each face image.
  • For each face image, the marked eye corner points and the equally divided eyelid points of the upper and lower eyelids of the eyes contained in the face image are determined as the eye key points corresponding to the face image,
  • where the equally divided eyelid points of the upper and lower eyelids include the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid.
  • Subsequently, the first electronic device may, based on the determined position information of the equally divided eyelid points of the upper and lower eyelids in the face image, mark these equally divided eyelid points in the face image, and save the face image marked with the equally divided eyelid points of the upper and lower eyelids and the marked eye corner points.
  • The marked eyelid curve corresponding to each face image can also be saved, and the position information of the equally divided eyelid points of the upper and lower eyelids and the marked eye corner points in the face image can be saved in text form.
  • It can be seen that, in the embodiments of the present invention, the marked eyelid curve corresponding to a face image can be integrated to determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve, and points can be densely selected from the marked upper eyelid curve and the marked lower eyelid curve to determine multiple eyelid points to be used;
  • then the preset number of equal division points minus 1 equally divided upper eyelid points are determined from the marked upper eyelid curve based on the first curve length of the marked upper eyelid curve, the multiple eyelid points to be used on it and the preset number of equal division points,
  • and the preset number of equal division points minus 1 equally divided lower eyelid points are determined from the marked lower eyelid curve based on the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used on it and the preset number of equal division points.
  • In this way, equally divided eyelid points with obvious semantic features are marked semi-automatically on the upper and lower eyelids of the eyes contained in the face image, which realizes labeling eye key points with obvious semantic features and improves labeling efficiency to a certain extent.
  • In another embodiment of the present invention, the method may further include:
  • for each face image, based on the eye key points in the face image, cropping the image of the region where the eye is located from the face image to obtain an eye image marked with the eye key points; and determining the eye images and their corresponding calibration information as training data for an eye key point detection model used to detect the eye key points of eyes in images, where the calibration information includes the position information of the eye key points in the corresponding eye image.
  • After the first electronic device determines the eye key points in each face image, it can determine the region where the eye is located based on the positions of the eye key points in the face image, and then crop the image of that region from the face image to obtain an eye image marked with the eye key points.
  • The region where the eye is located may be the smallest rectangular region containing the eye, or a region obtained by extending that smallest rectangular region outward by a preset number of pixels; either is possible.
  • Extending outward by a preset number of pixels means expanding the smallest rectangular region containing the eye by the preset number of pixels in the upward, downward, leftward and rightward directions respectively.
  • Each face image has a corresponding relationship with the eye image cropped from it. A cropping sketch is given below.
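  • A hedged sketch of the cropping step: the bounding rectangle of the eye key points is expanded by a preset number of pixels on each side and clipped to the image borders; the padding value shown is an assumption.

```python
import numpy as np

def crop_eye_region(face_img, eye_keypoints, pad=10):
    """Crop the smallest rectangle containing the eye key points, expanded by `pad` pixels."""
    pts = np.asarray(eye_keypoints, dtype=int)
    h, w = face_img.shape[:2]
    x0 = max(pts[:, 0].min() - pad, 0)
    x1 = min(pts[:, 0].max() + pad, w - 1)
    y0 = max(pts[:, 1].min() - pad, 0)
    y1 = min(pts[:, 1].max() + pad, h - 1)
    eye_img = face_img[y0:y1 + 1, x0:x1 + 1].copy()
    # Key point positions expressed in the cropped image's coordinate system (calibration info).
    local_kps = pts - np.array([x0, y0])
    return eye_img, local_kps
```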
  • After the eye images are determined, for each eye image, the position information, in the eye image, of the eye key points marked in the eye image is determined as the calibration information corresponding to the eye image,
  • and the eye images marked with the eye key points, together with the calibration information corresponding to the eye images, are determined as the training data of the eye key point detection model used to detect the eye key points of eyes in images.
  • In another embodiment of the present invention, the step of determining the eye images and their corresponding calibration information as training data for the eye key point detection model used to detect eye key points of eyes in images may include: obtaining the real eye opening and closing length corresponding to each face image;
  • obtaining the measured eye opening and closing length corresponding to each face image, where the measured eye opening and closing length is a length determined based on the upper and lower eyelids of the eye in the target three-dimensional face model corresponding to the face image, and the target three-dimensional face model is a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model;
  • the upper and lower eyelids of the eye in the target three-dimensional face model are constructed based on the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in the face image;
  • determining the eye images and their corresponding calibration information as training data for an eye key point detection model used to detect the upper and lower eyelids of eyes in images, where the calibration information includes the position information of the eye key points marked in the corresponding eye image and the measured deviation corresponding to the face image corresponding to the eye image.
  • When the face image is collected by a multi-image acquisition device system, a real three-dimensional face model of the person's face can be constructed based on the images containing that person's face collected at the same moment by the multiple image acquisition devices of the system,
  • and the real three-dimensional face model of the person's face includes the upper and lower eyelids of the person's eyes.
  • Then, based on the upper and lower eyelids of the person's eyes in the real three-dimensional face model, the distance between the upper and lower eyelids can be determined as the real eye opening and closing length corresponding to each image containing the person's face, that is, the real eye opening and closing length corresponding to each face image containing that person's face.
  • The above-mentioned real three-dimensional face model can be constructed with any current technology that can reconstruct a three-dimensional face model of a person from multiple images containing that person's face.
  • The above-mentioned process of determining the distance between the upper and lower eyelids of the person's eyes, based on the center eyelid points of the upper and lower eyelids included in the real three-dimensional face model, may be as follows:
  • A first eye corner constraint and a second eye corner constraint are constructed for the eyelid space curve, where the first parameter value can be 0, the second parameter value can be 1, and the second eye corner constraint is expressed by formula (2);
  • based on the curve equation, the third position information of each equally divided eyelid point, and the pose information and internal reference information of each image acquisition device, the re-projection error constraints corresponding to the equally divided eyelid points are constructed, where these re-projection error constraints can be constructed based on the third and second position information of each equally divided eyelid point, i.e., the distance between the projection position, in the face image, of the equally divided eyelid space point in the real three-dimensional face model corresponding to each equally divided eyelid point and that point's position in the face image;
  • based on the order of the equally divided eyelid points in the face image, an order constraint can be constructed, expressed by formula (3);
  • based on the first eye corner constraint, the second eye corner constraint, the re-projection error constraints and the order constraint, the eyelid space curve equation used to characterize the upper and lower eyelids of the eye is constructed, that is, the above four constraints are solved jointly;
  • where (x0, y0, z0) represents the first three-dimensional position information, (x1, y1, z1) represents the second three-dimensional position information, a1, a2, a3, b1, b2, b3, c1, c2 and c3 are the coefficients to be solved, and t is the independent variable;
  • formula (3): 0 < t1 < t2 < … < ti < … < tM < 1;
  • when the eyelid space curve equation characterizing the upper eyelid is determined, ti corresponds to the i-th equally divided eyelid point of the upper eyelid (its third two-dimensional position information) and M represents the number of equally divided eyelid points of the upper eyelid; when the eyelid space curve equation characterizing the lower eyelid is determined, ti corresponds to the i-th equally divided eyelid point of the lower eyelid and M represents the number of equally divided eyelid points of the lower eyelid.
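  • The formulas referenced above are only partially reproduced in this text. As a hedged reconstruction, assuming each eyelid space curve is a parametric cubic in the independent variable t that runs from one eye corner (t = 0) to the other (t = 1), the constraints could take the following form; this is a sketch consistent with the coefficients a1 to c3 and the positions (x0, y0, z0) and (x1, y1, z1) named above, not the patent's verbatim formulas (1) and (2):

```latex
% Assumed parametric form of one eyelid space curve (t is the independent variable):
%   x(t) = a_1 t^3 + b_1 t^2 + c_1 t + x_0,
%   y(t) = a_2 t^3 + b_2 t^2 + c_2 t + y_0,
%   z(t) = a_3 t^3 + b_3 t^2 + c_3 t + z_0.
\begin{aligned}
&\text{(1) first eye corner constraint:}  && \bigl(x(0),\,y(0),\,z(0)\bigr) = (x_0,\,y_0,\,z_0),\\
&\text{(2) second eye corner constraint:} && \bigl(x(1),\,y(1),\,z(1)\bigr) = (x_1,\,y_1,\,z_1),\\
&\text{(3) order constraint:}             && 0 < t_1 < t_2 < \cdots < t_i < \cdots < t_M < 1.
\end{aligned}
```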
  • In addition, the first electronic device can obtain the measured eye opening and closing length corresponding to each face image.
  • The measured eye opening and closing length can be the length determined based on the upper and lower eyelids of the eye in the target three-dimensional face model corresponding to the face image,
  • where the target three-dimensional face model can be a face model determined, using 3DMM (3D Morphable Models) technology, based on the face feature points in the corresponding face image and a preset three-dimensional face model,
  • and the upper and lower eyelids of the eye in the target three-dimensional face model are constructed based on the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in the face image.
  • The first electronic device calculates, for each face image, the ratio of the real eye opening and closing length to the measured eye opening and closing length as the measured deviation corresponding to the face image; the measured deviation characterizes the difference between the measured and real distances between the upper and lower eyelids.
  • When the face image contains two eyes, the real eye opening and closing length and the measured eye opening and closing length corresponding to the left eye, and the real eye opening and closing length and the measured eye opening and closing length corresponding to the right eye, can be obtained respectively; the measured deviation corresponding to the left eye is determined based on the real and measured eye opening and closing lengths of the left eye, and the measured deviation corresponding to the right eye is determined based on the real and measured eye opening and closing lengths of the right eye;
  • both are taken as the measured deviation corresponding to the face image.
  • The first electronic device determines the eye images and their corresponding calibration information as training data for the eye key point detection model used to detect the upper and lower eyelids of eyes in images, where the calibration information includes the position information of the eye key points marked in the corresponding eye image and the measured deviation corresponding to the face image corresponding to the eye image.
  • The eye key point detection model trained on the eye images and their corresponding calibration information can then not only detect the equally divided eyelid points of the upper and lower eyelids of the eyes in an image, but also detect the measured deviation corresponding to the image;
  • further, the measured eye opening and closing length corresponding to the image can be corrected based on the measured deviation to obtain a more accurate eye opening and closing length, and when other tasks are performed with this more accurate length, the accuracy of their results can be improved.
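  • A small sketch of the measured-deviation bookkeeping described above, assuming the real and measured eye opening and closing lengths are already available as plain numbers:

```python
def measured_deviation(real_opening_len, measured_opening_len):
    """Ratio of the real eye opening/closing length to the measured one (the calibration label)."""
    return real_opening_len / measured_opening_len

def corrected_opening(measured_opening_len, predicted_deviation):
    """Correct a measured opening/closing length with the deviation predicted by the model."""
    return measured_opening_len * predicted_deviation

# Hypothetical usage: deviation = measured_deviation(6.2, 5.0) -> 1.24;
# later, corrected = corrected_opening(5.0, 1.24) -> 6.2.
```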
  • In another embodiment of the present invention, the method may further include:
  • obtaining a first face image, where the first face image contains an eye of the person's face and is one of the face images;
  • if it is detected that the annotator has triggered marking instructions for a specified eyelid in the first face image at least twice, determining, based on the position information of the marked points carried by the marking instructions triggered at least twice and the preset curve fitting algorithm, the specified eyelid curve characterizing the specified eyelid, where the specified eyelid is the upper eyelid or the lower eyelid of the eye in the first face image;
  • displaying the specified eyelid curve in the first face image, so that the annotator can check whether the marked points are eyelid points or eye corner points on the specified eyelid.
  • Before obtaining the face images and the marked eyelid curve corresponding to each face image, a process of labeling the upper and lower eyelids of the eyes of the person's face in each face image may also be included.
  • The labeling process is a process in which the annotator marks the eye corner points and the eyelid points of the upper and lower eyelids of the eyes of the person's face in the face image.
  • The first electronic device can obtain the first face image after detecting the face image labeling start instruction triggered by the annotator; the first face image is any one of the aforementioned face images,
  • and the first face image contains an eye of the person's face.
  • The first electronic device displays the first face image, the annotator can mark the upper and lower eyelids of the eye in the first face image, and the first electronic device receives the marking instructions triggered by the annotator for the upper and lower eyelids of the eye in the first face image,
  • each marking instruction carrying the position information of the marked point.
  • The first electronic device displays a preset labeling icon at the corresponding position in the first face image based on the position information of the marked point carried in the marking instruction; the first electronic device can count, in real time, the number of marking instructions triggered by the annotator for a specified eyelid in the first face image; if it detects that the annotator has triggered marking instructions for the specified eyelid at least twice, that is, when the specified eyelid includes at least two marked points, the first electronic device determines, based on the position information of the marked points carried by those marking instructions and the preset curve fitting algorithm, the specified eyelid curve characterizing the specified eyelid, and displays the specified eyelid curve in the first face image; the annotator can then observe the specified eyelid curve and check whether it coincides with the specified eyelid in the first face image, that is, whether the marked points are eyelid points or eye corner points on the specified eyelid.
  • If the annotator finds that a marked point is inaccurate, the annotator can trigger a marked point position modification instruction, and the first electronic device obtains the instruction, which carries the current position information of the marked point whose position is to be modified and the target position information it is to be moved to; the first electronic device moves the marked point from the position corresponding to the current position information to the position corresponding to the target position information, that is, it displays the preset labeling icon at the position corresponding to the target position information and deletes the preset labeling icon displayed at the position corresponding to the current position information.
  • Subsequently, the first electronic device determines a new specified eyelid curve characterizing the specified eyelid based on the new position information of the modified marked point in the specified eyelid, the position information of the other marked points and the preset curve fitting algorithm, and displays it in the first face image, so that the annotator can continue to check whether the marked points are eyelid points or eye corner points on the specified eyelid.
  • After the annotator triggers a save instruction, the first electronic device saves the first face image together with the marked points contained at the moment the save instruction is triggered and the position information of each marked point.
  • The marked points contained in the first face image at the moment the save instruction is triggered may include the two eye corner points, the upper eyelid points of the upper eyelid and the lower eyelid points of the lower eyelid.
  • The numbers of upper eyelid points and lower eyelid points can be the same or different; for example, the number of upper eyelid points can be 3 and the number of lower eyelid points can be 4.
  • The preset labeling icon can be a solid or hollow circle, or a solid or hollow image of another shape; any of these is possible.
  • The labeling process can be executed on the first electronic device or on another electronic device different from the first electronic device. If the labeling process is performed on another electronic device, the annotator can, after finishing labeling the face images, upload the face images marked with the eye corner points and the eyelid points of the upper and lower eyelids to the cloud, so that the first electronic device can obtain these face images from the cloud when it performs eye key point labeling.
  • In this way, the embodiments of the present invention can, to a certain extent, efficiently ensure the accuracy of the eyelid points and eye corner points marked by the annotator and improve the annotator's labeling efficiency.
  • FIG. 2A, FIG. 2B, FIG. 2C and FIG. 2D are schematic diagrams of marked eyelid curves corresponding to face images.
  • The face image shown in FIG. 2A includes one eye that is completely visible, and the marked eyelid curve corresponding to the face image may include the marked upper eyelid curve and the marked lower eyelid curve of that eye.
  • The blocked-out region in FIG. 2A, FIG. 2B, FIG. 2C and FIG. 2D is the position where the person's face is located.
  • the face image shown in FIG. 2B includes two eyes, and both eyes can be completely detected.
  • the labeled eyelid curves corresponding to the face image may include the labeled upper eyelid curve and the labeled lower eyelid curve of the left eye, and the labeled upper eyelid curve and the labeled lower eyelid curve of the right eye.
  • the right image in Figure 2B is a partial enlarged view of the position of the eye in the left image.
  • the face image shown in Figure 2C includes an eye, and the inner corner of the eye is occluded.
  • for the eyelid points and eye corner points of the upper and lower eyelids of the partially occluded eye in this type of face image, in one case the annotator can directly mark the eyelid points and eye corner points at the occluded position of the eye based on experience.
  • in another case, when the face image is an image collected by a multi-image acquisition device system, a three-dimensional face model of the person corresponding to the face image can be reconstructed based on the other face images corresponding to the face image; then, from the three-dimensional face model, the eye space points corresponding to the occluded eye positions in the face image are determined, and these eye space points are reprojected onto the face image to determine the eyelid points and/or eye corner points at the occluded position.
  • the above-mentioned partially blocked eyes may refer to eyes whose blocked area does not exceed a preset area.
  • the right image in Figure 2C is a partial enlarged view of the position of the eye in the left image.
  • the face image shown in FIG. 2D includes two eyes: one eye can be completely detected, and the other eye is partially occluded. In this case, the annotator can directly mark the eyelid points and eye corner points at the occluded position of the occluded eye based on experience; or, when the face image is an image collected by a multi-image acquisition device system, a three-dimensional face model of the person corresponding to the face image can be reconstructed based on the other face images corresponding to the face image, the eye space points corresponding to the occluded eye positions are determined from the three-dimensional face model, and these eye space points are reprojected onto the face image to determine the eyelid points and/or eye corner points at the occluded position.
  • the right image in Figure 2D is a partial enlarged view of the position of the left eye in the left image.
  • the aforementioned face image and other face images corresponding to the face image are all images collected by the multi-image collection device system, and are images collected at the same time.
  • in addition, the first electronic device can be configured to require the annotator to label the marking points of the upper eyelid and the lower eyelid of an eye separately; for example, the annotator may first be instructed to label the marking points of the upper eyelid, during which the annotator cannot label the lower eyelid.
  • the intersections of the upper and lower eyelids of an eye are the inner and outer eye corners of that eye; the first electronic device therefore ensures that the eyelid curves of the upper and lower eyelids each pass through the corresponding inner and outer eye corner points.
  • after the first electronic device detects that the marking points of the upper eyelid have been labeled, the marked eye corner points may be taken as the marked points with the smallest and the largest horizontal-axis coordinates among the marked points of the upper eyelid.
  • FIG. 3 is a schematic flowchart of a method for training an eye key point detection model provided by an embodiment of the present invention.
  • This method is applied to an electronic device, which can be a device with strong computing and processing capabilities, such as a server. For clarity of presentation, in the subsequent description the electronic device that implements the training method of the eye key point detection model is referred to as the second electronic device.
  • the method specifically includes the following steps:
  • S301: Obtain training data, where the training data includes eye images marked with the equally divided eyelid points of the upper and lower eyelids of the eyes and the marked eye corner points, and the calibration information corresponding to each eye image.
  • the calibration information includes: the position information of the equally divided eyelid points and the marked eye corner points marked in the corresponding eye image.
  • the equally divided eyelid points include: the points that equally divide the upper and lower eyelids of the eyes in the face image, determined based on the marked eyelid curve corresponding to the face image and the principle of mathematical integration.
  • the marked eyelid curves include: a marked upper eyelid curve representing the upper eyelid and a marked lower eyelid curve representing the lower eyelid, generated based on the marked eye corner points and the marked eyelid points of the upper and lower eyelids in the corresponding face image; each eye image is an image of the eye region cut out from the corresponding face image.
  • the second electronic device may first obtain training data, where the training data may include multiple eye images and calibration information corresponding to each eye image.
  • each eye image is marked with the equally divided eyelid points of the upper and lower eyelids of the eye and the eye corner points.
  • the specific process of obtaining the equally divided eyelid points and the marked eye corner points can refer to the corresponding process in the above-described eye key point marking procedure, and will not be repeated here; a sketch of the equal-division computation is given below.
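For concreteness, the following is a minimal sketch of how equally divided eyelid points could be obtained from a fitted eyelid curve by numerical integration of arc length, assuming the curve is represented by polynomial coefficients as in the earlier sketch; the sampling density and point count are assumptions.

```python
import numpy as np

def equally_divided_points(coeffs, x_corner1, x_corner2, n_divisions=10, samples=2000):
    """Split a fitted eyelid curve into n_divisions arcs of equal length.

    Returns the n_divisions - 1 interior points (the equally divided eyelid points);
    the two eye corner points themselves are not included.
    """
    xs = np.linspace(x_corner1, x_corner2, samples)      # dense sampling ("eyelid points to be used")
    ys = np.polyval(coeffs, xs)
    seg = np.hypot(np.diff(xs), np.diff(ys))              # arc-length elements
    arc = np.concatenate([[0.0], np.cumsum(seg)])         # cumulative curve length (numerical integration)
    targets = arc[-1] * np.arange(1, n_divisions) / n_divisions
    idx = np.searchsorted(arc, targets)
    return list(zip(xs[idx], ys[idx]))
```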
  • S302: Input the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model, to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image.
  • the initial eye key point detection model may be: a neural network model based on deep learning.
  • the second electronic device inputs the eye images and the position information of the eye key points included in the calibration information corresponding to each eye image into the initial eye key point detection model, where the eye key points include the equally divided eyelid points of the upper and lower eyelids of the corresponding eye and the eye corner points.
  • for each eye image, the second electronic device uses the initial eye key point detection model to extract the image features of the eye image and, based on the extracted image features, detects the eye key points in the eye image and their position information; the detected position information of the eye key points is then matched with the position information of the eye key points in the corresponding calibration information.
  • if the matching is successful, it is determined that the initial eye key point detection model has converged, and the trained eye key point detection model is obtained; if the matching is unsuccessful, it is determined that the initial eye key point detection model has not converged, the parameters of the initial eye key point detection model are adjusted, and the step of inputting the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the corresponding calibration information into the initial eye key point detection model is executed again, until the matching is successful, whereupon the initial eye key point detection model is determined to have converged and the trained eye key point detection model is obtained.
  • there is a correspondence between each eye image and the detected position information of its eye key points, and a correspondence between each eye image and its calibration information; correspondingly, the detected position information of the eye key points corresponds to the calibration information.
  • the above process of matching the detected position information of the eye key points with the position information of the eye key points in the corresponding calibration information may be: using a preset loss function, calculating the loss value between each piece of detected eye key point position information and the corresponding eye key point position information in the calibration information, and judging whether the loss value is less than a preset loss threshold; if the loss value is less than the preset loss threshold, and the number of times the loss value has been judged to be less than the preset loss threshold exceeds a predetermined number, or the ratio of such judgments to the total number of judgments exceeds a preset ratio threshold, the matching is determined to be successful, the initial eye key point detection model is determined to have converged, and the trained eye key point detection model is obtained; if the loss value is not less than the preset loss threshold, the matching is determined to be unsuccessful.
  • the above process is only one example of determining the convergence of the initial eye key point detection model; the embodiment of the present invention may use any determination method that can characterize model convergence to determine whether the initial eye key point detection model has converged, and thereby train the eye key point detection model.
  • the aforementioned preset loss function may be a loss function such as the smooth L1 loss (smooth 1-norm loss), the wing loss, or the KL loss (that is, the KL divergence loss), as sketched below.
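Minimal NumPy sketches of the smooth L1 loss and the wing loss mentioned above are given below; the hyper-parameters (beta, w, eps) are illustrative and not specified in the text.

```python
import numpy as np

def smooth_l1_loss(pred, target, beta=1.0):
    """Smooth L1 (Huber-style) loss, averaged over all coordinates."""
    diff = np.abs(pred - target)
    loss = np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
    return loss.mean()

def wing_loss(pred, target, w=10.0, eps=2.0):
    """Wing loss: logarithmic near zero error, linear for large errors."""
    diff = np.abs(pred - target)
    c = w - w * np.log(1.0 + w / eps)
    loss = np.where(diff < w, w * np.log(1.0 + diff / eps), diff - c)
    return loss.mean()
```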
  • the above process of adjusting the parameters of the initial eye key point detection model may be carried out so that the "gap" between the position information of the eye key points detected by the model during training and the position information of the eye key points in the corresponding calibration information becomes smaller and smaller; optimization strategies such as SGD (stochastic gradient descent), SGDR (stochastic gradient descent with warm restarts) and other methods may be used.
  • the batch size during the training process can be 256, and the initial learning rate can be 0.04; a configuration sketch under these settings is given below.
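A training-configuration sketch under these settings, assuming a PyTorch implementation with a stand-in network and dataset; the momentum value, restart period, output dimension and epoch count are assumptions, not part of the disclosure.

```python
import torch

# Stand-in network and dataset: 64x64 grayscale eye crops, 11 key points (22 coordinates).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 22))
train_dataset = torch.utils.data.TensorDataset(torch.randn(512, 1, 64, 64), torch.randn(512, 22))

loader = torch.utils.data.DataLoader(train_dataset, batch_size=256, shuffle=True)             # batch size 256
optimizer = torch.optim.SGD(model.parameters(), lr=0.04, momentum=0.9)                        # initial lr 0.04
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=10, T_mult=2) # SGDR
loss_fn = torch.nn.SmoothL1Loss()

for epoch in range(3):                         # illustrative number of epochs
    for images, targets in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()
```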
  • the more eye images the obtained training data includes, the higher the stability of the trained eye key point detection model, and the higher the accuracy of the detection results obtained based on the eye key point detection model.
  • training thus yields the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image.
  • the eye key point detection model can determine eye key points with obvious semantic features from an image, which to a certain extent guarantees the stability and detection accuracy of the eye key point detection model.
  • the method may further include:
  • for each eye image, performing correction processing on the eye image to obtain a corrected image, where the correction processing makes the ordinates in the position information of the marked eye corner points in the eye image all the same;
  • in this case, S302 may include: inputting each corrected image and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each corrected image into the initial eye key point detection model, to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image.
  • the person's face in a face image may be tilted, so before the initial eye key point detection model is trained using the eye images and their calibration information, each eye image can first be corrected to obtain a corrected image in which the ordinates in the position information of the marked eye corner points are all the same; the position information of the eye key points in each corrected image, that is, of the equally divided eyelid points and the marked eye corner points, is then re-determined, and the calibration information corresponding to each corrected image is updated accordingly. A sketch of such correction processing is given below.
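A minimal sketch of such correction processing, assuming OpenCV is available and that the eye key point coordinates are transformed together with the image; the function name and the choice of rotation center are illustrative.

```python
import cv2
import numpy as np

def correct_eye_image(eye_image, keypoints, corner_left, corner_right):
    """Rotate an eye crop so that both eye corner points end up with the same ordinate.

    keypoints: list of (x, y) eye key points, transformed together with the image.
    corner_left, corner_right: the two marked eye corner points (x, y).
    """
    (x1, y1), (x2, y2) = corner_left, corner_right
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))      # tilt of the inter-corner line
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    m = cv2.getRotationMatrix2D(center, angle, 1.0)       # rotation that removes the tilt
    h, w = eye_image.shape[:2]
    corrected = cv2.warpAffine(eye_image, m, (w, h))
    pts = np.hstack([np.asarray(keypoints, dtype=float), np.ones((len(keypoints), 1))])
    return corrected, pts @ m.T                           # re-derived key point positions
```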
  • the eye image includes a left eye image and a right eye image corresponding to the left eye image;
  • the method may further include: mirroring the left eye image or the right eye image corresponding to the left eye image to obtain a mirror image image;
  • splicing the mirror image and the unmirrored image to obtain a spliced image, where, if the left-eye image is mirrored, the unmirrored image is the right-eye image corresponding to the left-eye image, and if the right-eye image corresponding to the left-eye image is mirrored, the unmirrored image is the left-eye image;
  • in this case, the step of inputting each corrected image and the calibration information corresponding to each corrected image, including the position information of the equally divided eyelid points and the marked eye corner points, into the initial eye key point detection model to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image may include the following.
  • the eye images include: an image containing the left eye of the target person, which may be called the left-eye image, and an image containing the right eye of the target person, which may be called the right-eye image corresponding to the left-eye image.
  • the left-eye image or the right-eye image corresponding to the left-eye image may be mirrored to obtain a mirror image, and the mirror image and the unmirrored image are then spliced to obtain the spliced image corresponding to the pair of eye images.
  • each spliced image and its corresponding calibration information are input into the initial eye key point detection model to train the initial eye key point detection model.
  • mirroring the left-eye image or the right-eye image corresponding to the left-eye image makes the left-eye image resemble a mirrored right-eye image, or makes the right-eye image resemble a mirrored left-eye image, which to a certain extent shortens the training time.
  • the left-eye image and the right-eye image determined from an image can thus be processed first, that is, corrected, mirrored and stitched, so that the eye key point detection model detects the eye key points of the two human eyes in the processed image at the same time; through a single detection pass, the eye key points of the upper and lower eyelids of both human eyes in the processed image can be detected, which simplifies the detection process that uses the eye key point detection model.
  • the above-mentioned process of splicing the mirror image and the unmirrored image to obtain the spliced image may be: splicing the mirror image and the unmirrored image in the spatial dimension or the channel dimension, where the spatial dimension is
  • the splicing can be: splicing the mirror image and the unmirrored image left and right or up and down.
  • Left and right splicing can be: the right side of the mirror image is spliced with the left side of the image that is not mirrored, and the left side of the mirror image is spliced with the right side of the image that is not mirrored.
  • Top and bottom splicing may be: the upper side of the mirror image is spliced with the lower side of the image that is not mirrored, and the lower side of the mirror image is spliced with the upper side of the image that is not mirrored.
  • the splicing of the aforementioned channel dimensions may be: splicing the mirror image and the unmirrored image back and forth, that is, superimposing and splicing the mirror image and the unmirrored image.
  • the ordinate values of the eye corner points in the original image corresponding to the mirror image and the ordinate values of the eye corner points in the unmirrored image can be adjusted to the same value during the preceding correction process, where the original image corresponding to the mirror image is the image from which the mirror image is obtained by mirroring. A sketch of the mirroring and splicing is given below.
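A minimal sketch of the mirroring and splicing, assuming NumPy arrays of equal size for the two eye crops; the helper that maps key point abscissas back to the pre-mirror image is included because it is needed later when detection results on the mirrored half are un-mirrored.

```python
import numpy as np

def mirror_and_stitch(left_eye, right_eye, mode="spatial"):
    """Mirror the left-eye crop and splice it with the unmirrored right-eye crop.

    mode="spatial": place the two crops side by side (left-right splicing).
    mode="channel": superimpose them along the channel dimension.
    Both crops are assumed to be grayscale arrays of the same shape after correction.
    """
    mirrored = np.fliplr(left_eye)                                # horizontal mirroring
    if mode == "spatial":
        return np.concatenate([mirrored, right_eye], axis=1)      # left-right splicing
    return np.stack([mirrored, right_eye], axis=-1)               # channel-dimension splicing

def mirror_keypoints(keypoints, image_width):
    """Map key point abscissas detected on a mirrored crop back to the pre-mirror crop."""
    pts = np.asarray(keypoints, dtype=float).copy()
    pts[:, 0] = image_width - 1 - pts[:, 0]
    return pts
```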
  • the calibration information corresponding to each eye image may further include a measured deviation corresponding to the eye image, where the measured deviation is the ratio of the actual eye opening and closing length corresponding to the eye image to the measured eye opening and closing length.
  • the measured eye opening and closing length is a length determined based on the upper and lower eyelids of the eye in the target three-dimensional face model corresponding to the face image, where the target three-dimensional face model is a face model determined based on the facial feature points in the corresponding face image and a preset three-dimensional face model, and the target three-dimensional face model includes the upper and lower eyelids of the eye constructed based on the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in the face image.
  • in this case, S302 may include: inputting the eye images, the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image, and the measured deviation corresponding to each eye image into the initial eye key point detection model, to train an eye key point detection model, where the eye key point detection model is used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image and to detect the measured deviation corresponding to the image.
  • that is, the position information of the equally divided eyelid points and the marked eye corner points and the measured deviation corresponding to the eye image can together be used as the calibration information corresponding to the eye image; the initial eye key point detection model is then trained with the eye images and their corresponding calibration information, that is, the eye images and the corresponding calibration information are input into the initial eye key point detection model to obtain the eye key point detection model through training.
  • the eye key point detection model obtained in this way can be used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, and can also detect the measured deviation corresponding to the image; a sketch of the opening-length and deviation computation is given below.
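A small sketch of how the opening and closing length and the measured deviation could be computed, assuming the upper and lower equally divided eyelid points are paired one-to-one and the opening length is taken as their mean distance; this pairing and averaging scheme is an assumption, not part of the disclosure.

```python
import numpy as np

def eye_opening_length(upper_points, lower_points):
    """Mean distance between paired upper- and lower-eyelid equally divided points."""
    up = np.asarray(upper_points, dtype=float)
    low = np.asarray(lower_points, dtype=float)
    return float(np.mean(np.linalg.norm(up - low, axis=1)))

def measured_deviation(actual_length, measured_length):
    """Ratio of the actual opening/closing length to the 3D-model-derived (measured) length."""
    return actual_length / measured_length
```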
  • the initial eye key point detection model includes: a feature extraction layer, a first feature classification layer, and a second feature classification layer.
  • the feature extraction layer may refer to the layers used to extract the image features of an image, such as convolutional layers and pooling layers; the first feature classification layer may refer to a fully connected layer that detects the eye key points in the image and their position information based on the image features; the second feature classification layer may refer to a fully connected layer used to detect the measured deviation corresponding to the image.
  • the above process of inputting the eye images and the corresponding calibration information into the initial eye key point detection model to train the eye key point detection model may be: first inputting each eye image into the feature extraction layer to extract the image features of the eye image; then inputting the image features into the first feature classification layer to determine the current position information of the eye key points in the eye image; and then matching the current position information of the eye key points with the position information in the corresponding calibration information.
  • the resulting middle eye key point detection model can detect the position information of the eye key points in an image; it includes a trained feature extraction layer, a trained first feature classification layer, and an untrained second feature classification layer.
  • an eye image is input into the feature extraction layer of the middle eye key point detection model to obtain the image features of the eye image; the image features are input into the first feature classification layer of the middle eye key point detection model to determine the position information of the eye key points in the eye image; based on the position information of the eye key points in the eye image, the current measured deviation corresponding to the eye image is determined; the current measured deviation is input into the second feature classification layer of the middle eye key point detection model and matched with the measured deviation in the corresponding calibration information. If the matching is successful, it is determined that the middle eye key point detection model has converged, and the trained eye key point detection model is obtained, which includes the trained feature extraction layer, the trained first feature classification layer and the now-trained second feature classification layer; if the matching fails, the parameters of the second feature classification layer of the middle eye key point detection model are adjusted and the step of inputting the eye images is executed again.
  • the eye key point detection model obtained by this training can not only detect the eye key points in the image, but also detect the actual measured deviation corresponding to the image.
  • that is, the eye key point detection model that has been trained to detect the position information of the eye key points in an image is used as the middle eye key point detection model, where the middle eye key point detection model includes a trained feature extraction layer, a trained first feature classification layer, and an untrained second feature classification layer.
  • the feature extraction layer may refer to the convolutional layers and pooling layers of the middle eye key point detection model used to extract the features of an image; the first feature classification layer may refer to the fully connected layer of the middle eye key point detection model used to detect the eye key points in an image and their position information; the second feature classification layer may refer to the fully connected layer of the middle eye key point detection model used to detect the measured deviation corresponding to an image.
  • the second electronic device obtains multiple other eye images and the calibration information corresponding to each of them, where the calibration information corresponding to each of the other eye images includes the measured deviation corresponding to that image;
  • the other eye images and the calibration information corresponding to each of them are input into the middle eye key point detection model; for each of the other eye images, its image features are extracted by the trained feature extraction layer, and the image features are input into the trained first feature classification layer to obtain the position information of the eye key points of that eye image.
  • the second electronic device determines the current measured deviation corresponding to the other eye image based on the position information of its eye key points, inputs the current measured deviation into the untrained second feature classification layer of the middle eye key point detection model, and matches the current measured deviation with the measured deviation in the corresponding calibration information. If the matching is successful, it is determined that the middle eye key point detection model has converged and the trained eye key point detection model is obtained, which includes the above-mentioned trained feature extraction layer, the trained first feature classification layer and the trained second feature classification layer; if the matching fails, the parameters of the second feature classification layer of the middle eye key point detection model are adjusted, and the step of inputting the other eye images and the corresponding calibration information into the middle eye key point detection model is executed again, until the matching is successful, whereupon the middle eye key point detection model is determined to have converged and the trained eye key point detection model is obtained.
  • the eye key point detection model obtained by this training can not only detect the eye key points in the image, but also detect the actual measured deviation corresponding to the image.
  • the specific calculation process of the current measured deviation can refer to the calculation of the measured deviation in the above-described eye key point marking process, and will not be repeated here; a sketch of this two-stage training is given below.
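A sketch of the two-stage arrangement described above, assuming a PyTorch implementation; the network sizes and the choice to feed the key point output into the second head are assumptions, not part of the disclosure.

```python
import torch
import torch.nn as nn

class EyeKeyPointModel(nn.Module):
    """Stand-in three-part model: feature extraction layer plus two classification layers."""

    def __init__(self, num_keypoints=11):
        super().__init__()
        self.features = nn.Sequential(                    # feature extraction layer (conv + pooling)
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2), nn.Flatten())
        self.keypoint_head = nn.Linear(16 * 32 * 32, num_keypoints * 2)   # first feature classification layer
        self.deviation_head = nn.Linear(num_keypoints * 2, 1)             # second feature classification layer

    def forward(self, x):
        feats = self.features(x)
        keypoints = self.keypoint_head(feats)
        deviation = self.deviation_head(keypoints)        # deviation predicted from the key point output
        return keypoints, deviation

model = EyeKeyPointModel()

# Second stage: keep the already trained feature extractor and key point head fixed
# and update only the deviation head.
for p in model.features.parameters():
    p.requires_grad = False
for p in model.keypoint_head.parameters():
    p.requires_grad = False
optimizer = torch.optim.SGD(model.deviation_head.parameters(), lr=0.04, momentum=0.9)
```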
  • the method may further include:
  • obtaining an image to be detected, where the image to be detected includes the eyes of the person to be detected, and inputting the image to be detected into the trained eye key point detection model.
  • the trained eye key point detection model can be used to detect the eye corner points of the eyes of the person to be detected in the image to be detected and the equally divided eyelid points of the upper and lower eyelids, as well as the position information of the eye corner points and of the equally divided eyelid points in the image to be detected.
  • the facial feature points of the face of the person to be detected can also be detected, where the facial feature points are used to characterize the face of the person to be detected.
  • various parts of the face may include nose, lips, eyebrows, eyes, jaw, cheeks, ears, and forehead.
  • the facial feature points of each part of the face can respectively include: feature points in the face that characterize the position of the nose, such as nose wings, nose bridge, and tip of the nose; can also include feature points that characterize the position of the lips, such as lips The feature points of the corners of the mouth and the outer edges of the lips; it can also include the feature points that characterize the position of the eyebrows, such as the edge of the eyebrows; it can also include the feature points that characterize the location of the eyes, such as the corner of the eye feature points , Orbital feature points, pupil feature points, etc.; can also include feature points that characterize the position of the mandible, such as feature points on the contour of the mandible, that is, feature points on the contour of the chin, etc.; can also include characterizing the location of the ear The feature points of the position, such as the feature points on the contours of the ears, etc.; it can also include the feature points that characterize the position of the forehead, such as the feature points on the contour of the forehead, such as the intersection of
  • the eye regions are determined from the image to be detected, the eye regions are cut out from the image to be detected, and the eye images corresponding to the image to be detected are obtained as the eye images to be detected.
  • the eye image is input into the eye key point detection model obtained by the training, so as to improve the accuracy of the detected corner points in the eyes of the person to be detected and the equally divided eye corner points in the upper and lower eyelids to a certain extent.
  • both the left-eye image to be detected and the right-eye image to be detected may further be subjected to correction processing, so that the ordinate values in the position information of the two eye corner points in the left-eye image to be detected are the same, and the ordinate values in the position information of the two eye corner points in the right-eye image to be detected are the same; the corrected eye images are then input into the trained eye key point detection model, which to a certain extent improves the accuracy of the detected eye corner points and of the equally divided eyelid points of the upper and lower eyelids, and to a certain extent lowers the detection difficulty for the eye key point detection model obtained by the training.
  • furthermore, the corrected left-eye image to be detected or right-eye image to be detected may be mirrored to obtain a mirrored eye image, and the mirrored eye image and the unmirrored eye image are stitched to obtain a stitched eye image; the stitched eye image is input into the trained eye key point detection model, so that the model can detect, in one pass, the eye key points and their position information in the mirrored eye image and the eye key points and their position information in the unmirrored eye image; subsequently, the position information of the eye key points in the mirrored eye image is mirrored back to obtain the position information of those eye key points in the image before mirroring, and thereby the position information of the eye key points in the corrected left-eye image to be detected and in the right-eye image to be detected is obtained. A sketch of this inference pipeline is given below.
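A sketch of this one-pass inference pipeline, reusing the correct_eye_image, mirror_and_stitch and mirror_keypoints helpers sketched earlier; the model interface shown here (returning key points for the mirrored and unmirrored halves) is an assumption.

```python
def detect_both_eyes(model, left_eye_img, right_eye_img, left_corners, right_corners):
    """Single-pass detection of eye key points for both eyes of the person to be detected.

    model: the trained eye key point detection model, assumed here to accept a stitched
           crop and to return key points for the mirrored half and the unmirrored half.
    left_corners / right_corners: the two eye corner points of each crop, as [(x, y), (x, y)].
    """
    # 1. Correction: level each crop so its two eye corner points share one ordinate.
    left_eye_img, _ = correct_eye_image(left_eye_img, left_corners, left_corners[0], left_corners[1])
    right_eye_img, _ = correct_eye_image(right_eye_img, right_corners, right_corners[0], right_corners[1])

    # 2. Mirror one crop and splice the pair, matching the training-time preprocessing.
    stitched = mirror_and_stitch(left_eye_img, right_eye_img, mode="spatial")

    # 3. One forward pass yields key points for both halves.
    mirrored_pts, right_pts = model(stitched)

    # 4. Undo the mirroring for the left-eye key points.
    left_pts = mirror_keypoints(mirrored_pts, left_eye_img.shape[1])
    return left_pts, right_pts
```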
  • the method may also include:
  • based on the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in the image to be detected and the preset curve fitting algorithm, determining the eyelid curves of the upper and lower eyelids of the person to be detected as the eyelid curves to be detected, and drawing the eyelid curves to be detected in the grayscale edge map, where the eyelid curves to be detected include: the upper eyelid curve to be detected that characterizes the upper eyelid of the person to be detected and the lower eyelid curve to be detected that characterizes the lower eyelid of the person to be detected;
  • determining at least one set of reference points respectively at positions above and below the eyelid curve to be detected in the grayscale edge map, where each set of reference points includes points corresponding to the equally divided eyelid points in the eyelid curve to be detected;
  • based on each set of reference points, the eye corner points in the eyelid curve to be detected, and the preset curve fitting algorithm, determining the reference curve corresponding to that set of reference points, and drawing the reference curve corresponding to each set of reference points in the grayscale edge map, where the reference curves corresponding to each set of reference points include: a reference upper eyelid curve representing the upper eyelid of the person to be detected and a reference lower eyelid curve representing the lower eyelid of the person to be detected;
  • in the grayscale edge map, for each first upper eyelid curve, determining the sum of the pixel values of the pixels corresponding to that first upper eyelid curve, where the first upper eyelid curves include: the reference upper eyelid curve corresponding to each set of reference points and the upper eyelid curve to be detected;
  • in the grayscale edge map, for each first lower eyelid curve, determining the sum of the pixel values of the pixels corresponding to that first lower eyelid curve, where the first lower eyelid curves include: the reference lower eyelid curve corresponding to each set of reference points and the lower eyelid curve to be detected;
  • determining the first upper eyelid curve with the largest sum of pixel values as the target upper eyelid curve and the first lower eyelid curve with the largest sum of pixel values as the target lower eyelid curve; integrating the target eyelid curves to determine the third curve length of the target upper eyelid curve and the fourth curve length of the target lower eyelid curve; determining multiple reference eyelid points from the target upper eyelid curve and from the target lower eyelid curve respectively; and, based on these, determining the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points.
  • the position information of the eye corner points in the image to be detected is output by the eye key point detection model obtained by the training. When the Sobel algorithm is used to perform edge extraction on the image to be detected, all edges in the image are extracted; in the resulting grayscale edge map, the pixel value of the pixels corresponding to the upper and lower eyelids of the eyes can be 255, and the pixel values of the pixels at other positions can be 0, so as to indicate the edge locations in the image to be detected, such as the upper and lower eyelids of the eyes.
  • the eyelid curves of the upper and lower eyelids of the person to be detected can thus be determined as the eyelid curves to be detected. It can be understood that there is a one-to-one correspondence between the pixels in the grayscale edge map and the pixels in the image to be detected; based on this correspondence, the determined eyelid curves to be detected are drawn in the grayscale edge map, and the eyelid curves to be detected drawn in the grayscale edge map can be used to determine the corresponding eye corner points and their position information, and the equally divided eyelid points and their position information. A sketch of the edge extraction is given below.
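A minimal sketch of the Sobel edge extraction, assuming OpenCV is available; the binarization threshold is an assumption, chosen only so that eyelid edges come out near 255 and flat regions near 0.

```python
import cv2
import numpy as np

def grayscale_edge_map(image_to_detect):
    """Sobel edge extraction: edge pixels (including the eyelids) become 255, the rest 0."""
    gray = image_to_detect if image_to_detect.ndim == 2 else cv2.cvtColor(image_to_detect, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    threshold = magnitude.mean() + magnitude.std()        # illustrative binarization threshold
    return np.where(magnitude > threshold, 255, 0).astype(np.uint8)
```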
  • each set of reference points includes points corresponding to the equally divided eyelid points of the eyelid curve to be detected.
  • at least one set of reference points may be determined respectively at the upper and lower positions of the eyelid curve to be detected in the grayscale edge map.
  • the eyelid curve to be detected includes: the upper eyelid curve to be detected that represents the upper eyelid of the eye and the lower eyelid curve to be detected that represents the lower eyelid.
  • the foregoing determination of at least one set of reference points at the upper and lower positions of the eyelid curve to be detected in the grayscale edge map may be: respectively determining at least one set of reference points for the upper and lower positions of the upper eyelid curve to be detected, and for the lower At least one set of reference points are respectively determined at the upper and lower positions of the eyelid curve.
  • the white curves in Fig. 4 respectively represent the position of the upper eyelid and the lower eyelid of the eye in the grayscale edge diagram
  • the gray curve in Fig. 4 represents the upper eyelid curve to be detected and the lower eyelid curve to be detected in the eyelid curve to be detected.
  • the white solid points on the gray curves indicate the equally divided eyelid points and the eye corner points of the eyelid curves to be detected.
  • the eyelid points in the upper eyelid curve to be detected correspond to two sets of reference points, shown as white hollow dots and gray hollow dots, and the eyelid points in the lower eyelid curve to be detected likewise correspond to two sets of reference points, shown as white hollow dots and gray hollow dots.
  • based on each set of reference points, the eye corner points in the eyelid curve to be detected, and the preset curve fitting algorithm, the reference curve corresponding to that set of reference points is determined, and the reference curve corresponding to each set of reference points is drawn in the grayscale edge map.
  • in the grayscale edge map, for each first upper eyelid curve, the sum of the pixel values of the pixels corresponding to that first upper eyelid curve is determined, that is, the sum of the pixel values of the pixels corresponding to the reference upper eyelid curve of each set of reference points and the sum of the pixel values of the pixels corresponding to the upper eyelid curve to be detected are determined.
  • the pixel value of the pixel at the upper and lower eyelid positions of the eye in the grayscale edge map is 255.
  • the larger the sum of the pixel values of the pixels corresponding to a first upper eyelid curve, the more closely that first upper eyelid curve fits the upper eyelid of the eye in the grayscale edge map; accordingly, the first upper eyelid curve with the largest sum of pixel values is determined as the target upper eyelid curve of the upper eyelid of the person to be detected.
  • in the grayscale edge map, for each first lower eyelid curve, the sum of the pixel values of the pixels corresponding to that first lower eyelid curve is determined, that is, the sum of the pixel values of the pixels corresponding to the reference lower eyelid curve of each set of reference points and the sum of the pixel values of the pixels corresponding to the lower eyelid curve to be detected are determined.
  • the pixel value of the pixel at the upper and lower eyelid positions of the eye in the grayscale edge map is 255.
  • similarly, the larger the sum of the pixel values, the more closely the first lower eyelid curve fits the lower eyelid of the eye in the grayscale edge map; accordingly, the first lower eyelid curve with the largest sum of pixel values is determined as the target lower eyelid curve of the lower eyelid of the person to be detected. A sketch of this curve selection is given below.
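A sketch of scoring candidate eyelid curves by their pixel sums in the grayscale edge map and picking the best one, assuming each candidate is represented by polynomial coefficients as in the earlier sketches.

```python
import numpy as np

def curve_pixel_sum(edge_map, coeffs, x_start, x_end, samples=300):
    """Sum the edge-map pixel values along one candidate eyelid curve."""
    h, w = edge_map.shape[:2]
    xs = np.linspace(x_start, x_end, samples)
    ys = np.polyval(coeffs, xs)
    xi = np.clip(xs.round().astype(int), 0, w - 1)
    yi = np.clip(ys.round().astype(int), 0, h - 1)
    return int(edge_map[yi, xi].sum())

def select_target_curve(edge_map, candidate_coeffs, x_start, x_end):
    """Pick the candidate curve (to-be-detected or reference) with the largest pixel sum."""
    sums = [curve_pixel_sum(edge_map, c, x_start, x_end) for c in candidate_coeffs]
    return candidate_coeffs[int(np.argmax(sums))]
```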
  • the target eyelid curves are integrated to determine the third curve length of the target upper eyelid curve and the fourth curve length of the target lower eyelid curve; the target upper eyelid curve and the target lower eyelid curve are each densely sampled, for example by taking a preset number of points from each, that is, multiple reference eyelid points are determined from the target upper eyelid curve and multiple reference eyelid points are determined from the target lower eyelid curve; further, based on the third curve length of the target upper eyelid curve, the multiple reference eyelid points in the target upper eyelid curve, and the preset number of equal division points, the preset number of equal division points minus 1 equally divided upper eyelid points are determined from the target upper eyelid curve; and based on the fourth curve length of the target lower eyelid curve, the multiple reference eyelid points in the target lower eyelid curve, and the preset number of equal division points, the preset number of equal division points minus 1 equally divided lower eyelid points are determined from the target lower eyelid curve.
  • for the process of determining the preset number of equal division points minus 1 equally divided upper eyelid points from the target upper eyelid curve, reference can be made to the process of determining the preset number of equal division points minus 1 equally divided upper eyelid points from the marked upper eyelid curve based on the first curve length of the marked upper eyelid curve, the multiple eyelid points to be used, and the preset number of equal division points, which will not be repeated here.
  • similarly, the preset number of equal division points minus 1 equally divided lower eyelid points are determined from the target lower eyelid curve in the corresponding manner.
  • FIG. 5 is a schematic structural diagram of an eye key point marking device provided in an embodiment of the present invention.
  • the device includes:
  • the first obtaining module 510 is configured to obtain a face image and a marked eyelid curve corresponding to each face image, wherein the face image is marked with marked corner points of the eyes and marked eyelid points of the upper and lower eyelids.
  • a marked eyelid curve includes: a marked upper eyelid curve representing the upper eyelid and a marked lower eyelid curve representing the lower eyelid generated based on the corresponding marked eyelid points and marked eye corner points;
  • the first determining module 520 is configured to, for each marked eyelid curve, integrate the marked eyelid curve based on the principle of mathematical integration to determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve in the marked eyelid curve, and to determine multiple eyelid points to be used from the marked upper eyelid curve and the marked lower eyelid curve respectively;
  • the second determining module 530 is configured to, for each marked eyelid curve, based on the first curve length of the marked upper eyelid curve in the marked eyelid curve, the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used, and the preset number of equal division points, respectively determine, from the marked upper eyelid curve and the marked lower eyelid curve, the preset number of equal division points minus 1 equally divided upper eyelid points and the preset number of equal division points minus 1 equally divided lower eyelid points;
  • the third determination module 540 is configured to determine the marked eye corner point, equal division upper eyelid point, and equal division lower eyelid point in each face image as the key eye point corresponding to each face image.
  • with this arrangement, the marked eyelid curve corresponding to a face image can be integrated to determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve in the marked eyelid curve; the marked upper eyelid curve and the marked lower eyelid curve are densely sampled to determine multiple eyelid points to be used; then, based on the first curve length of the marked upper eyelid curve, the multiple eyelid points to be used on the marked upper eyelid curve, and the preset number of equal division points, the preset number of equal division points minus 1 equally divided upper eyelid points are determined from the marked upper eyelid curve, and, based on the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used on the marked lower eyelid curve, and the preset number of equal division points, the preset number of equal division points minus 1 equally divided lower eyelid points are determined from the marked lower eyelid curve. This semi-automatically marks, on the upper and lower eyelids of the eyes contained in the face image, equally divided eyelid points with obvious semantic features, thereby marking eye key points with obvious semantic features and improving the marking efficiency to a certain extent.
  • the first obtaining module 510 is specifically configured to: fit an upper eyelid curve that characterizes the upper eyelid based on the marked eye corner points in the face image, the marked eyelid points of the upper eyelid, and a preset curve fitting algorithm; and fit a lower eyelid curve that characterizes the lower eyelid based on the marked eye corner points in the face image, the marked eyelid points of the lower eyelid, and the preset curve fitting algorithm, to obtain the marked eyelid curve corresponding to the face image.
  • the device further includes: an interception module (not shown in the figure), configured to, after the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image are determined as the eye key points corresponding to each face image, cut out, for each face image and based on the eye key points in that face image, the eye region image from the face image to obtain an eye image marked with the eye key points;
  • a fourth determining module (not shown in the figure), configured to determine the eye images and their corresponding calibration information as the training data of the eye key point detection model used to detect the eye key points of the eyes in an image, where the calibration information includes the position information of the eye key points in the corresponding eye image.
  • the third determining module 540 is specifically configured to: obtain the actual eye opening and closing length corresponding to each face image; obtain the measured eye opening and closing length corresponding to each face image, where the measured eye opening and closing length is a length determined based on the upper and lower eyelids of the eyes in the target three-dimensional face model corresponding to the face image, the target three-dimensional face model is a face model determined based on the facial feature points in the corresponding face image and a preset three-dimensional face model, and the target three-dimensional face model includes the upper and lower eyelids of the eyes constructed based on the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in the face image; for each face image, calculate the ratio of the actual eye opening and closing length corresponding to the face image to the measured eye opening and closing length as the measured deviation corresponding to the face image; and determine the eye images and their corresponding calibration information as the training data of the eye key point detection model used to detect the upper and lower eyelids of the eyes in an image, where the calibration information includes: the position information of the eye key points marked in the corresponding eye image, and the measured deviation corresponding to the face image corresponding to the eye image.
  • the device further includes: a labeling module (not shown in the figure), configured to carry out the labeling before the face images and the marked eyelid curve corresponding to each face image are obtained.
  • the labeling module includes: an obtaining and displaying unit (not shown in the figure), configured to obtain and display a first face image, where the first face image contains the eyes of a person's face and is one of the face images; a receiving unit (not shown in the figure), configured to receive the annotator's labeling instructions for the first face image and the position information of the label points carried therein; and a determining unit (not shown in the figure), configured to, if it is detected that the annotator has triggered labeling instructions for the designated eyelid in the first face image at least twice, determine, based on the position information of the label points, the designated eyelid curve that characterizes the designated eyelid.
  • an embodiment of the present invention provides an eye key point detection model training device, which includes:
  • the second obtaining module 610 is configured to obtain training data, where the training data includes eye images marked with the equally divided eyelid points of the upper and lower eyelids of the eyes and the marked eye corner points, and the calibration information corresponding to each eye image.
  • the calibration information includes: the position information of the equally divided eyelid points and the marked eye corner points marked in the corresponding eye image; the equally divided eyelid points include: the points that equally divide the upper and lower eyelids of the eyes in the face image, determined based on the marked eyelid curve corresponding to the face image and the principle of mathematical integration; the marked eyelid curve includes: a marked upper eyelid curve representing the upper eyelid and a marked lower eyelid curve representing the lower eyelid, generated based on the marked eye corner points and the marked eyelid points of the upper and lower eyelids in the corresponding face image; each eye image is an image of the eye region cut out from the corresponding face image;
  • the input module 620 is configured to input the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model, to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image.
  • the device further includes: a straightening module (not shown in the figure), configured to, before the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the corresponding calibration information are input into the initial eye key point detection model to train the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, perform, for each eye image, the correction processing on the eye image to obtain a corrected image, where the correction processing makes the ordinates in the position information of the marked eye corner points in the eye image all the same;
  • an update module (not shown in the figure), configured to update, based on the position information of the equally divided eyelid points and the marked eye corner points in each corrected image, the calibration information corresponding to each corrected image, which includes the position information of the equally divided eyelid points and the marked eye corner points;
  • the input module 620 is specifically configured to: input each corrected image and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each corrected image into the initial eye key point detection model, so as to obtain, through training, an eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image.
  • the eye images include a left-eye image and a right-eye image corresponding to the left-eye image; the device further includes: a mirroring module (not shown in the figure), configured to, before each corrected image and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each corrected image are input into the initial eye key point detection model to train the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, mirror the left-eye image or the right-eye image corresponding to the left-eye image to obtain a mirror image; and a stitching module (not shown in the figure), configured to splice the mirror image and the unmirrored image to obtain a spliced image, where, if the left-eye image is mirrored, the unmirrored image is the right-eye image corresponding to the left-eye image, and if the right-eye image corresponding to the left-eye image is mirrored, the unmirrored image is the left-eye image.
  • the calibration information corresponding to each eye image may further include a measured deviation corresponding to the eye image, where the measured deviation is the ratio of the actual eye opening and closing length corresponding to the eye image to the measured eye opening and closing length; the measured eye opening and closing length is a length determined based on the upper and lower eyelids of the eyes in the target three-dimensional face model corresponding to the face image; the target three-dimensional face model is a face model determined based on the facial feature points in the corresponding face image and a preset three-dimensional face model; and the target three-dimensional face model includes the upper and lower eyelids of the eyes constructed based on the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in the face image;
  • the input module 620 is specifically configured to input the eye images, the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image, and the measured deviation corresponding to each eye image into the initial eye key point detection model, to train and obtain an eye key point detection model, where the eye key point detection model is used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image and to detect the measured deviation corresponding to the image.
  • the device further includes: a third obtaining module (not shown in the figure), configured to, after the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image are input into the initial eye key point detection model to train and obtain the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, obtain an image to be detected, where the image to be detected includes the eyes of the person to be detected; and a fifth determining module (not shown in the figure), configured to input the image to be detected into the eye key point detection model to determine the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the image to be detected.
  • the device further includes: an extraction module (not shown in the figure), configured to, after the image to be detected is input into the eye key point detection model to determine the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the image to be detected, use the Sobel algorithm to perform edge extraction on the image to be detected to obtain the grayscale edge map corresponding to the image to be detected;
  • a first determination and drawing module (not shown in the figure), configured to determine, based on the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in the image to be detected and a preset curve fitting algorithm, the eyelid curves of the upper and lower eyelids of the person to be detected as the eyelid curves to be detected, and to draw the eyelid curves to be detected in the grayscale edge map, where the eyelid curves to be detected include: the upper eyelid curve to be detected that characterizes the upper eyelid of the person to be detected and the lower eyelid curve to be detected that characterizes the lower eyelid of the person to be detected.
  • the foregoing device embodiments correspond to the method embodiments and have the same technical effects as the method embodiments.
  • the device embodiments are obtained based on the method embodiments; for specific descriptions, refer to the method embodiment parts, which will not be repeated here.
  • the modules in the devices of the embodiments may be distributed in the devices of the embodiments according to the description of the embodiments, or may be located, with corresponding changes, in one or more devices different from those of the embodiments.
  • the modules of the above-mentioned embodiments can be combined into one module or further divided into multiple sub-modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed in the embodiments of the present invention are an eye key point labeling method and apparatus, and a training method and apparatus for an eye key point detection model. The method comprises: obtaining a face image and eyelid labeling curves corresponding to the face image; for each eyelid labeling curve, integrating the eyelid labeling curve on the basis of a mathematical integral principle to determine a first curve length of an upper eyelid labeling curve and a second curve length of a lower eyelid labeling curve in the eyelid labeling curve; determining a plurality of eyelid points to be used from the upper eyelid labeling curve and the lower eyelid labeling curve; determining, on the basis of the first curve length of the upper eyelid labeling curve, the second curve length of the lower eyelid labeling curve, said eyelid points and a preset number of equally-divided points, equally-divided upper eyelid points and equally-divided lower eyelid points from the upper eyelid labeling curve and the lower eyelid labeling curve respectively, and then determining the eye key points corresponding to each face image on the basis of said upper and lower eyelid points in combination with labeled eye corner points. Therefore, eye key points having obvious semantic features are labeled, and the labeling efficiency is improved.

Description

Method and apparatus for labeling eye key points and for training an eye key point detection model

Technical Field
本发明涉及智能检测技术领域,具体而言,涉及一种眼部关键点的标注及其检测模型的训练方法和装置。The present invention relates to the technical field of intelligent detection, in particular to a method and device for labeling key points of the eye and its detection model.
Background Art
对图像中人眼的眼部关键点的检测在多个方面具有重要作用,例如:在疲劳检测过程中,疲劳检测系统可以基于预先训练的眼部关键点检测模型,从图像中检测出所包含的人眼的上下眼睑的眼部关键点,进而,基于上下眼睑的眼部关键点,计算人眼的上下眼睑之间距离,作为人眼的开闭距离,进而基于人眼的开闭距离在预设时长内的开闭距离,确定图像对应的人员是否处于疲劳状态,实现对人员的疲劳状态的检测。又例如:在美颜过程中,美颜相机可以基于预先训练的眼部关键点检测模型,从图像中检测出所包含的人眼的上下眼睑的眼部关键点,进而,基于图像中人眼的上下眼睑的眼部关键点的位置,对人眼进行放缩处理,以实现对人眼的美化。其中,上述预先训练的眼部关键点检测模型为:基于标注有人眼的眼部关键点的样本图像训练所得。The detection of the key points of the human eyes in the image plays an important role in many aspects. For example, in the fatigue detection process, the fatigue detection system can detect the key points contained in the image based on the pre-trained eye key point detection model. The key points of the upper and lower eyelids of the human eye, and then, based on the key points of the upper and lower eyelids, the distance between the upper and lower eyelids of the human eye is calculated as the opening and closing distance of the human eye, and then based on the opening and closing distance of the human eye in the prediction Set the opening and closing distance within the time length to determine whether the person corresponding to the image is in a fatigue state, and realize the detection of the fatigue state of the person. For another example: in the process of beautifying, the beautifying camera can detect the eye key points of the upper and lower eyelids contained in the image based on the pre-trained eye key point detection model. The positions of the key points of the upper and lower eyelids are used to scale and shrink the human eyes to beautify the human eyes. Wherein, the above-mentioned pre-trained eye key point detection model is obtained by training based on sample images marked with human eye key points.
可见,为了获得眼部关键点检测模型,对人眼的眼部关键点的标注至关重要。相关技术中,上述眼部关键点一般都是标注人员手动标注的,对于标注人员手动标注的眼部关键点而言,不同的标注人员对眼部关键点的标注标准并不统一,所标注的眼部关键点的语义特征不明显,且手动标注眼部关键点的效率低。It can be seen that in order to obtain an eye key point detection model, it is very important to label the eye key points of the human eye. In related technologies, the above-mentioned key eye points are generally manually marked by the annotator. For the key eye points manually marked by the annotator, the marking standards for the key points of the eye by different annotators are not uniform. The semantic features of the key points of the eye are not obvious, and the efficiency of manually marking the key points of the eye is low.
Summary of the Invention

The present invention provides an eye key point labeling method and apparatus and a training method and apparatus for an eye key point detection model, so as to label eye key points with relatively distinct semantic features and, to a certain extent, improve labeling efficiency. The specific technical solutions are as follows.
In a first aspect, an embodiment of the present invention provides an eye key point labeling method. The method includes: obtaining face images and a labeled eyelid curve corresponding to each face image, wherein each face image is labeled with the labeled eye corner points of the contained eye and the labeled eyelid points of its upper and lower eyelids, and each labeled eyelid curve includes: a labeled upper eyelid curve that characterizes the upper eyelid and a labeled lower eyelid curve that characterizes the lower eyelid, both generated from the corresponding labeled eyelid points and labeled eye corner points;

for each labeled eyelid curve, integrating the labeled eyelid curve based on the principle of mathematical integration to determine a first curve length of the labeled upper eyelid curve and a second curve length of the labeled lower eyelid curve in the labeled eyelid curve, and determining a plurality of to-be-used eyelid points from the labeled upper eyelid curve and the labeled lower eyelid curve respectively;
for each labeled eyelid curve, determining, from the labeled upper eyelid curve and the labeled lower eyelid curve of that labeled eyelid curve, based on the first curve length of the labeled upper eyelid curve, the second curve length of the labeled lower eyelid curve, the plurality of to-be-used eyelid points and a preset number of equal-division points, (the preset number of equal-division points minus 1) equally divided upper eyelid points and (the preset number of equal-division points minus 1) equally divided lower eyelid points respectively;

and determining the labeled eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image as the eye key points corresponding to that face image.
Optionally, the step of obtaining the eyelid curve corresponding to each face image includes:

for each face image, fitting an upper eyelid curve that characterizes the upper eyelid based on the labeled eye corner points and the labeled eyelid points of the upper eyelid in the face image and a preset curve fitting algorithm, and fitting a lower eyelid curve that characterizes the lower eyelid based on the labeled eye corner points, the labeled eyelid points of the lower eyelid and the preset curve fitting algorithm, so as to obtain the eyelid curve corresponding to the face image.
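As an illustration only, a minimal Python sketch of this fitting step is given below. The function name, the point ordering and the use of SciPy's CubicSpline are assumptions; the detailed description later mentions cubic spline interpolation as one possible preset curve fitting algorithm, which is why it is used here.

```python
# A minimal sketch (not the patent's implementation) of fitting an eyelid curve
# through two labeled eye-corner points and several labeled eyelid points with
# cubic-spline interpolation. Point names and ordering are assumptions.
import numpy as np
from scipy.interpolate import CubicSpline

def fit_eyelid_curve(corner_left, corner_right, eyelid_points):
    """Return a callable y = f(x) for one eyelid.

    corner_left, corner_right: (x, y) labeled eye-corner points.
    eyelid_points: list of (x, y) labeled eyelid points between the corners.
    Assumes a roughly horizontal eye with distinct x coordinates, so the eyelid
    can be expressed as a function of x.
    """
    pts = np.array([corner_left, *eyelid_points, corner_right], dtype=float)
    pts = pts[np.argsort(pts[:, 0])]        # CubicSpline needs strictly increasing x
    return CubicSpline(pts[:, 0], pts[:, 1])

# Example: two corner points and three labeled points on the upper eyelid.
upper = fit_eyelid_curve((10, 50), (90, 52), [(30, 38), (50, 34), (70, 37)])
xs = np.linspace(10, 90, 200)
ys = upper(xs)                              # densely sampled curve for drawing
```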
Optionally, after the step of determining the labeled eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image as the eye key points corresponding to each face image, the method further includes: for each face image, cropping the image of the region in which the eye is located from the face image based on the eye key points in that face image, to obtain an eye image labeled with eye key points;

and determining the eye images and their corresponding calibration information as training data for an eye key point detection model used to detect the eye key points of eyes in images, wherein the calibration information includes position information of the eye key points in the corresponding eye image.
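A minimal sketch of this cropping step follows, assuming the eye key points are (x, y) pixel coordinates; the margin around the key points is an illustrative assumption, not something specified by the text.

```python
# A minimal sketch of cropping the eye region from a face image with a small
# margin around the labeled eye key points; margin size is an assumption.
import numpy as np

def crop_eye_region(face_image, eye_key_points, margin=0.3):
    pts = np.asarray(eye_key_points, dtype=float)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    mx, my = (x1 - x0) * margin, (y1 - y0) * margin
    h, w = face_image.shape[:2]
    x0, x1 = int(max(x0 - mx, 0)), int(min(x1 + mx, w))
    y0, y1 = int(max(y0 - my, 0)), int(min(y1 + my, h))
    crop = face_image[y0:y1, x0:x1]
    shifted = pts - np.array([x0, y0])   # key points in the cropped image's frame
    return crop, shifted
```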
Optionally, the step of determining the eye images and their corresponding calibration information as training data for the eye key point detection model used to detect the eye key points of eyes in images includes:

obtaining a real eye opening-closing length corresponding to each face image;

obtaining a measured eye opening-closing length corresponding to each face image, wherein the measured eye opening-closing length is a length determined from the upper and lower eyelids of the eye in a target three-dimensional face model corresponding to the face image; the target three-dimensional face model is a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model, and includes the upper and lower eyelids of the eye constructed from the labeled eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in that face image;

for each face image, calculating the ratio of the real eye opening-closing length corresponding to the face image to the measured eye opening-closing length, as the measured deviation corresponding to the face image;

and determining the eye images and their corresponding calibration information as training data for an eye key point detection model used to detect the equally divided eyelid points of the upper and lower eyelids of eyes in images, wherein the calibration information includes the position information of the eye key points labeled in the corresponding eye image and the measured deviation corresponding to the face image from which that eye image was taken.
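As a small worked illustration of the ratio defined above, the following sketch computes the measured deviation and shows how a measured opening-closing length could later be corrected with it; the correction step is an inference from the stated definition (deviation = real / measured), and the variable names are assumptions.

```python
# A minimal sketch of the measured deviation: the ratio of the real eye
# opening-closing length to the length measured on the 3D face model.
def measured_deviation(real_opening, measured_opening):
    return real_opening / measured_opening

def corrected_opening(measured_opening, deviation):
    # Multiplying by the deviation recovers an estimate of the real opening,
    # since deviation = real / measured (assumption drawn from the definition).
    return measured_opening * deviation

dev = measured_deviation(real_opening=8.2, measured_opening=7.5)   # ~1.09
```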
Optionally, before the step of obtaining the face images and the labeled eyelid curve corresponding to each face image, the method further includes a process of labeling the upper and lower eyelids of the eyes of the person's face in each face image, wherein, for each face image, the following steps are performed to label the upper and lower eyelids of the eyes of the person's face in that face image:

obtaining and displaying a first face image, wherein the first face image contains the eyes of a person's face and is one of the face images;

receiving labeling instructions triggered by an annotator for the upper and lower eyelids of the eyes in the first face image, wherein each labeling instruction carries position information of the labeled point;

if it is detected that the annotator has triggered labeling instructions for a specified eyelid in the first face image at least twice, determining a specified eyelid curve that characterizes the specified eyelid based on the position information of the labeled points carried by the at least two labeling instructions triggered by the annotator and a preset curve fitting algorithm, wherein the specified eyelid is the upper eyelid and the lower eyelid of the eye in the first face image;

and displaying the specified eyelid curve in the first face image, so that the annotator can check whether the labeled points are eyelid points or eye corner points on the specified eyelid.
In a second aspect, an embodiment of the present invention provides a training method for an eye key point detection model. The method includes:

obtaining training data, wherein the training data includes eye images labeled with the equally divided eyelid points of the upper and lower eyelids of the eyes and with labeled eye corner points, and calibration information corresponding to each eye image; the calibration information includes position information of the equally divided eyelid points and the labeled eye corner points labeled in the corresponding eye image; the equally divided eyelid points include the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid of the eye in the face image, determined based on the labeled eyelid curve corresponding to the face image and the principle of mathematical integration; the labeled eyelid curve includes a labeled upper eyelid curve that characterizes the upper eyelid and a labeled lower eyelid curve that characterizes the lower eyelid, generated from the labeled eye corner points of the eye and the labeled eyelid points of the upper and lower eyelids labeled in the corresponding face image; and each eye image is an image of the region in which the eye is located, cropped from the corresponding face image;

and inputting the eye images, together with the position information of the equally divided eyelid points and the labeled eye corner points included in the calibration information corresponding to each eye image, into an initial eye key point detection model, so as to obtain, through training, an eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images.
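For illustration, a minimal PyTorch sketch of such key point regression training is shown below. The network architecture, the 64x32 grayscale input size, the number of key points and the use of an MSE loss are all assumptions for the example and are not the patent's model.

```python
# A minimal PyTorch sketch of training a key-point regression model on eye crops.
import torch
import torch.nn as nn

NUM_POINTS = 22   # e.g. 2 corners + 10 upper + 10 lower eyelid points (assumed)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 16, 128), nn.ReLU(),
    nn.Linear(128, NUM_POINTS * 2),          # (x, y) for every key point
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(images, keypoints):
    """images: (B, 1, 32, 64) tensors; keypoints: (B, NUM_POINTS*2) normalized coords."""
    optimizer.zero_grad()
    pred = model(images)
    loss = loss_fn(pred, keypoints)
    loss.backward()
    optimizer.step()
    return loss.item()
```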
Optionally, before the step of inputting the eye images and the position information of the equally divided eyelid points and labeled eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model so as to obtain, through training, the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images, the method further includes:

for each eye image, performing rectification processing on the eye image to obtain a rectified image, wherein the rectification processing is processing that makes the vertical coordinates in the position information of the labeled eye corner points in the eye image identical;

and updating, based on the position information of the equally divided eyelid points and the labeled eye corner points in each rectified image, the position information of the equally divided eyelid points and the labeled eye corner points included in the calibration information corresponding to that rectified image;

the step of inputting the eye images and the position information of the equally divided eyelid points and labeled eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model, so as to obtain through training the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images, then includes:

inputting each rectified image, together with the position information of the equally divided eyelid points and labeled eye corner points included in the calibration information corresponding to each rectified image, into the initial eye key point detection model, so as to obtain through training the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images.
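One way to realize the rectification described above is to rotate the eye image so that the two labeled eye corner points end up on the same horizontal line, applying the same transform to every labeled point. The rotation-based approach and OpenCV usage below are assumptions; the text only requires that the corner points' vertical coordinates become equal.

```python
# A minimal OpenCV sketch of rotating an eye image so that the two labeled
# eye-corner points share the same y coordinate; labeled points are rotated
# with the same affine matrix.
import cv2
import numpy as np

def rectify_eye_image(image, points, corner_a, corner_b):
    """points: (N, 2) labeled points; corner_a/corner_b: the two eye-corner points."""
    (xa, ya), (xb, yb) = corner_a, corner_b
    angle = np.degrees(np.arctan2(yb - ya, xb - xa))   # tilt of the eye axis
    center = ((xa + xb) / 2.0, (ya + yb) / 2.0)
    # With this angle, the corner-to-corner vector is mapped onto the horizontal axis.
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image.shape[:2]
    rectified = cv2.warpAffine(image, M, (w, h))
    pts = np.asarray(points, dtype=float)
    rotated_pts = pts @ M[:, :2].T + M[:, 2]           # same affine applied to points
    return rectified, rotated_pts
```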
Optionally, the eye images include a left-eye image and a right-eye image corresponding to the left-eye image;

before the step of inputting each rectified image and the position information of the equally divided eyelid points and labeled eye corner points included in its corresponding calibration information into the initial eye key point detection model so as to obtain through training the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images, the method further includes: performing mirroring processing on the left-eye image, or on the right-eye image corresponding to the left-eye image, to obtain a mirrored image;

and stitching the mirrored image and the image that was not mirrored to obtain a stitched image, wherein, if the left-eye image is mirrored, the image that is not mirrored is the right-eye image corresponding to the left-eye image, and if the right-eye image corresponding to the left-eye image is mirrored, the image that is not mirrored is the left-eye image;

the step of inputting each rectified image and the position information of the equally divided eyelid points and labeled eye corner points included in its corresponding calibration information into the initial eye key point detection model, so as to obtain through training the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images, then includes:

inputting each stitched image and the position information of the equally divided eyelid points and labeled eye corner points included in the calibration information corresponding to each stitched image into the initial eye key point detection model, so as to obtain through training the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images, wherein the calibration information includes the position information, after mirroring, of the equally divided eyelid points and labeled eye corner points in the mirrored image, and the position information of the equally divided eyelid points and labeled eye corner points in the image that was not mirrored.
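The sketch below illustrates the mirroring-and-stitching step: the left-eye crop is mirrored horizontally, its labeled x coordinates are mirrored with it, and the mirrored image is concatenated with the unmirrored right-eye crop. Which eye is mirrored, the side-by-side layout and the equal crop sizes are assumptions consistent with, but not dictated by, the text.

```python
# A minimal sketch of mirroring one eye crop and stitching it with the other,
# while keeping the labeled points in the stitched image's coordinate frame.
import numpy as np
import cv2

def mirror_and_stitch(left_eye, left_pts, right_eye, right_pts):
    """left_eye/right_eye: equally sized HxW crops; *_pts: (N, 2) labeled points."""
    h, w = left_eye.shape[:2]
    mirrored = cv2.flip(left_eye, 1)                 # flip around the vertical axis
    m_pts = np.asarray(left_pts, dtype=float).copy()
    m_pts[:, 0] = (w - 1) - m_pts[:, 0]              # mirror the x coordinates
    stitched = np.hstack([mirrored, right_eye])      # side-by-side stitching
    r_pts = np.asarray(right_pts, dtype=float).copy()
    r_pts[:, 0] += w                                 # shift into the stitched frame
    return stitched, np.vstack([m_pts, r_pts])
```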
Optionally, the calibration information corresponding to each eye image further includes a measured deviation corresponding to that eye image, where the measured deviation is the ratio of the real eye opening-closing length corresponding to the eye image to the measured eye opening-closing length; the measured eye opening-closing length is a length determined from the upper and lower eyelids of the eye in a target three-dimensional face model corresponding to the face image; the target three-dimensional face model is a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model, and includes the upper and lower eyelids of the eye constructed from the labeled eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in that face image;

the step of inputting the eye images and the position information of the equally divided eyelid points and labeled eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model, so as to obtain through training the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images, then includes:

inputting the eye images, the position information of the equally divided eyelid points and labeled eye corner points included in the calibration information corresponding to each eye image, and the measured deviation corresponding to each eye image into the initial eye key point detection model, so as to obtain an eye key point detection model through training, wherein the eye key point detection model is used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images and to detect the measured deviation corresponding to an image.
Optionally, after the step of inputting the eye images and the position information of the equally divided eyelid points and labeled eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model so as to obtain through training the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images, the method further includes:

obtaining a to-be-detected image, wherein the to-be-detected image contains the eyes of a person to be detected;

and inputting the to-be-detected image into the eye key point detection model to determine the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the to-be-detected image.
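A brief inference sketch is given below, reusing the names from the earlier minimal training sketch; the preprocessing (grayscale, normalized, 32x64) is an assumption, and the extra measured-deviation output described above is omitted for simplicity.

```python
# A minimal inference sketch for the key-point regression model sketched earlier.
import torch

def detect_eye_keypoints(model, eye_image_32x64):
    """eye_image_32x64: normalized grayscale numpy array of shape (32, 64)."""
    model.eval()
    with torch.no_grad():
        x = torch.as_tensor(eye_image_32x64, dtype=torch.float32)[None, None]
        out = model(x)[0]                   # (NUM_POINTS * 2,)
    return out.view(-1, 2).numpy()          # one (x, y) row per key point
```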
Optionally, after the step of inputting the to-be-detected image into the eye key point detection model to determine the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the to-be-detected image, the method further includes: performing edge extraction on the to-be-detected image by using the Sobel algorithm to obtain a grayscale edge map corresponding to the to-be-detected image;

determining, based on the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in the to-be-detected image and a preset curve fitting algorithm, the eyelid curves of the upper and lower eyelids of the person to be detected as to-be-detected eyelid curves, and drawing the to-be-detected eyelid curves in the grayscale edge map, wherein the to-be-detected eyelid curves include a to-be-detected upper eyelid curve that characterizes the upper eyelid of the person to be detected and a to-be-detected lower eyelid curve that characterizes the lower eyelid of the person to be detected;
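For the edge-extraction step, a minimal OpenCV Sobel sketch is shown below; the kernel size and the gradient-magnitude combination are assumptions, since the text only names the Sobel algorithm.

```python
# A minimal sketch of building a gray-scale edge map with the Sobel operator:
# horizontal and vertical gradients are combined into one magnitude image.
import cv2
import numpy as np

def sobel_edge_map(gray_image):
    gx = cv2.Sobel(gray_image, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_image, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```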
determining multiple groups of reference points in the grayscale edge map based on the equally divided eyelid points in the to-be-detected eyelid curves, wherein each group of reference points includes points corresponding to the equally divided eyelid points in the to-be-detected eyelid curves;

for each group of reference points, determining a reference curve corresponding to that group of reference points based on the group of reference points, the eye corner points in the to-be-detected eyelid curves and the preset curve fitting algorithm, and drawing the reference curve corresponding to each group of reference points in the grayscale edge map, wherein the reference curve corresponding to each group of reference points includes a reference upper eyelid curve that characterizes the upper eyelid of the person to be detected and a reference lower eyelid curve that characterizes the lower eyelid of the person to be detected;

in the grayscale edge map, determining, for each first upper eyelid curve, the sum of the pixel values of the pixel points corresponding to that first upper eyelid curve, wherein the first upper eyelid curves include the reference upper eyelid curves corresponding to the groups of reference points and the to-be-detected upper eyelid curve;

determining the largest sum among the sums of pixel values of the pixel points corresponding to the first upper eyelid curves, and determining the first upper eyelid curve corresponding to the largest sum as a target upper eyelid curve that characterizes the upper eyelid of the person to be detected;

in the grayscale edge map, determining, for each first lower eyelid curve, the sum of the pixel values of the pixel points corresponding to that first lower eyelid curve, wherein the first lower eyelid curves include the reference lower eyelid curves corresponding to the groups of reference points and the to-be-detected lower eyelid curve;

determining the largest sum among the sums of pixel values of the pixel points corresponding to the first lower eyelid curves, and determining the first lower eyelid curve corresponding to the largest sum as a target lower eyelid curve that characterizes the lower eyelid of the person to be detected;

integrating the target eyelid curves based on the principle of mathematical integration to determine a third curve length of the target upper eyelid curve and a fourth curve length of the target lower eyelid curve, and determining a plurality of reference eyelid points from the target upper eyelid curve and the target lower eyelid curve respectively;

and determining, from the target upper eyelid curve and the target lower eyelid curve, based on the third curve length of the target upper eyelid curve, the fourth curve length of the target lower eyelid curve, the plurality of reference eyelid points and the preset number of equal-division points, (the preset number of equal-division points minus 1) equally divided upper eyelid points and (the preset number of equal-division points minus 1) equally divided lower eyelid points respectively.
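The curve-selection rule above can be illustrated with the following sketch: every candidate eyelid curve is sampled in the gray-scale edge map and the candidate whose sampled pixel values sum to the largest value is kept. How the groups of reference points are generated is not shown here and is left open, as the text does not fix it; the candidate curves are assumed to be callables such as the fitted splines from the earlier sketch.

```python
# A minimal sketch of scoring candidate eyelid curves against the edge map and
# keeping the curve with the largest sum of edge-map pixel values.
import numpy as np

def curve_edge_score(edge_map, curve_fn, x_start, x_end, samples=200):
    """Sum the edge-map values under a curve y = curve_fn(x)."""
    h, w = edge_map.shape[:2]
    xs = np.linspace(x_start, x_end, samples)
    ys = curve_fn(xs)
    xi = np.clip(np.round(xs).astype(int), 0, w - 1)
    yi = np.clip(np.round(ys).astype(int), 0, h - 1)
    return int(edge_map[yi, xi].sum())

def select_best_curve(edge_map, candidate_curves, x_start, x_end):
    scores = [curve_edge_score(edge_map, c, x_start, x_end) for c in candidate_curves]
    return candidate_curves[int(np.argmax(scores))]
```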
In a third aspect, an embodiment of the present invention provides an eye key point labeling apparatus. The apparatus includes: a first obtaining module, configured to obtain face images and a labeled eyelid curve corresponding to each face image, wherein each face image is labeled with the labeled eye corner points of the contained eye and the labeled eyelid points of its upper and lower eyelids, and each labeled eyelid curve includes a labeled upper eyelid curve that characterizes the upper eyelid and a labeled lower eyelid curve that characterizes the lower eyelid, generated from the corresponding labeled eyelid points and labeled eye corner points;

a first determining module, configured to, for each labeled eyelid curve, integrate the labeled eyelid curve based on the principle of mathematical integration to determine the first curve length of the labeled upper eyelid curve and the second curve length of the labeled lower eyelid curve in the labeled eyelid curve, and to determine a plurality of to-be-used eyelid points from the labeled upper eyelid curve and the labeled lower eyelid curve respectively;

a second determining module, configured to, for each labeled eyelid curve, determine, from the labeled upper eyelid curve and the labeled lower eyelid curve of that labeled eyelid curve, based on the first curve length of the labeled upper eyelid curve, the second curve length of the labeled lower eyelid curve, the plurality of to-be-used eyelid points and the preset number of equal-division points, (the preset number of equal-division points minus 1) equally divided upper eyelid points and (the preset number of equal-division points minus 1) equally divided lower eyelid points respectively;

and a third determining module, configured to determine the labeled eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image as the eye key points corresponding to that face image.
In a fourth aspect, an embodiment of the present invention provides a training apparatus for an eye key point detection model. The apparatus includes:

a second obtaining module, configured to obtain training data, wherein the training data includes eye images labeled with the equally divided eyelid points of the upper and lower eyelids of the eyes and with labeled eye corner points, and calibration information corresponding to each eye image; the calibration information includes position information of the equally divided eyelid points and the labeled eye corner points labeled in the corresponding eye image; the equally divided eyelid points include the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid of the eye in the face image, determined based on the labeled eyelid curve corresponding to the face image and the principle of mathematical integration; the labeled eyelid curve includes a labeled upper eyelid curve that characterizes the upper eyelid and a labeled lower eyelid curve that characterizes the lower eyelid, generated from the labeled eye corner points of the eye and the labeled eyelid points of the upper and lower eyelids labeled in the corresponding face image; and each eye image is an image of the region in which the eye is located, cropped from the corresponding face image;

and an input module, configured to input the eye images, together with the position information of the equally divided eyelid points and the labeled eye corner points included in the calibration information corresponding to each eye image, into an initial eye key point detection model, so as to obtain, through training, an eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images.
It can be seen from the above that the eye key point labeling method and apparatus and the training method and apparatus for an eye key point detection model provided by the embodiments of the present invention can obtain face images and a labeled eyelid curve corresponding to each face image, wherein each face image is labeled with the labeled eye corner points of the contained eye and the labeled eyelid points of its upper and lower eyelids, and each labeled eyelid curve includes a labeled upper eyelid curve that characterizes the upper eyelid and a labeled lower eyelid curve that characterizes the lower eyelid, generated from the corresponding labeled eyelid points and labeled eye corner points; for each labeled eyelid curve, integrate the labeled eyelid curve based on the principle of mathematical integration to determine the first curve length of the labeled upper eyelid curve and the second curve length of the labeled lower eyelid curve, and determine a plurality of to-be-used eyelid points from the labeled upper eyelid curve and the labeled lower eyelid curve respectively; for each labeled eyelid curve, determine, from the labeled upper eyelid curve and the labeled lower eyelid curve, based on the first curve length, the second curve length, the plurality of to-be-used eyelid points and a preset number of equal-division points, (the preset number of equal-division points minus 1) equally divided upper eyelid points and (the preset number of equal-division points minus 1) equally divided lower eyelid points respectively; and determine the labeled eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image as the eye key points corresponding to that face image.

By applying the embodiments of the present invention, the labeled eyelid curve corresponding to a face image can be integrated to determine the first curve length of the labeled upper eyelid curve and the second curve length of the labeled lower eyelid curve, and points can be densely sampled on the labeled upper eyelid curve and the labeled lower eyelid curve to determine a plurality of to-be-used eyelid points; then, based on the first curve length, the to-be-used eyelid points on the labeled upper eyelid curve and the preset number of equal-division points, (the preset number of equal-division points minus 1) equally divided upper eyelid points are determined on the labeled upper eyelid curve, and, based on the second curve length, the to-be-used eyelid points on the labeled lower eyelid curve and the preset number of equal-division points, (the preset number of equal-division points minus 1) equally divided lower eyelid points are determined on the labeled lower eyelid curve. In this way, equally divided eyelid points with relatively distinct semantic features are labeled semi-automatically on the upper and lower eyelids of the eyes contained in the face image, so that eye key points with relatively distinct semantic features are obtained and labeling efficiency is improved to a certain extent. Of course, implementing any product or method of the present invention does not necessarily require achieving all of the advantages described above at the same time.
The innovative points of the embodiments of the present invention include:

1. The labeled eyelid curve corresponding to a face image can be integrated to determine the first curve length of the labeled upper eyelid curve and the second curve length of the labeled lower eyelid curve, and points can be densely sampled on both curves to obtain a plurality of to-be-used eyelid points; then, based on the first curve length, the to-be-used eyelid points on the labeled upper eyelid curve and the preset number of equal-division points, (the preset number of equal-division points minus 1) equally divided upper eyelid points are determined on the labeled upper eyelid curve, and, based on the second curve length, the to-be-used eyelid points on the labeled lower eyelid curve and the preset number of equal-division points, (the preset number of equal-division points minus 1) equally divided lower eyelid points are determined on the labeled lower eyelid curve. In this way, equally divided eyelid points with relatively distinct semantic features are labeled semi-automatically on the upper and lower eyelids of the eyes contained in the face image, eye key points with relatively distinct semantic features are obtained, and labeling efficiency is improved to a certain extent.

2. Using the labeled eye corner points, equally divided upper eyelid points and equally divided lower eyelid points in each face image, the image of the region in which the eye is located is cropped from the face image to obtain an eye image labeled with eye key points, and the eye images together with their corresponding calibration information, which includes the position information of the eye key points in the corresponding eye image, are determined as training data for the eye key point detection model used to detect the eye key points of eyes in images. Using eye key points with relatively distinct semantic features as training data makes it possible, to a certain extent, to train an eye key point detection model with better stability and higher detection accuracy.

3. Using the real eye opening-closing length and the measured eye opening-closing length corresponding to each face image, the measured deviation corresponding to each face image is determined; this deviation characterizes the gap between the measured distance between the upper and lower eyelids and the real distance between the upper and lower eyelids. Taking the measured deviation as part of the training data of the eye key point detection model used to detect the equally divided eyelid points of the upper and lower eyelids enables the trained model to output, for an image, the measured deviation between the measured and the real distance between the upper and lower eyelids, so that the measured distance can be corrected according to that deviation, which improves the accuracy of the measured distance between the upper and lower eyelids to a certain extent.

4. When the annotator labels the upper and lower eyelids of the eyes in a face image, a specified eyelid curve of the specified eyelid can be generated in real time from the points labeled by the annotator and displayed, so that the annotator can check whether the labeled points are eyelid points or eye corner points on the specified eyelid. To a certain extent this efficiently ensures the accuracy of the eyelid points or eye corner points labeled by the annotator and improves the annotator's labeling efficiency.

5. An initial eye key point detection model is trained with training data that includes eye images labeled with equally divided eyelid points and labeled eye corner points, together with the corresponding calibration information containing the position information of those points, so as to obtain an eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images. This allows the model to determine eye key points with relatively distinct semantic features from an image and, to a certain extent, ensures the stability and detection accuracy of the model.

6. Before the eye key point detection model is trained, the eye images are rectified, and the model is then trained based on the rectified eye images and their corresponding calibration information, which to a certain extent reduces the training burden and improves the detection accuracy of the trained model for the eye key points of eyes in images.

7. The left-eye image or the right-eye image is mirrored to obtain a mirrored image, and the mirrored image and the image that was not mirrored are stitched to obtain a stitched image; the stitched images and their corresponding calibration information can then be used to train the initial eye key point detection model to obtain the eye key point detection model used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of eyes in images, which shortens the training time to a certain extent. Subsequently, when the eye key point detection model is used to detect an image, the left-eye image and the right-eye image determined from the image can first be rectified, mirrored and stitched accordingly, so that the model detects the eye key points of both human eyes in the processed image at the same time; that is, the eye key points of the upper and lower eyelids of both eyes can be detected in a single detection pass, which simplifies the detection of eye key points with the eye key point detection model.

8. After the equally divided eyelid points and eye corner points of the upper and lower eyelids in an image have been determined with the trained eye key point detection model, edge extraction is performed on the image with the Sobel algorithm to obtain the grayscale edge map corresponding to the image. Then, using the equally divided eyelid points and eye corner points of the upper and lower eyelids, multiple groups of reference points corresponding to the upper eyelid and multiple groups of reference points corresponding to the lower eyelid are determined, and, for the upper and lower eyelids respectively, multiple reference curves characterizing the upper eyelid or the lower eyelid are determined based on the corresponding groups of reference points and the preset curve fitting algorithm. Using the sums of the pixel values, in the grayscale edge map, of the pixel points on the reference curves characterizing the upper eyelid and on the to-be-detected upper eyelid curve, the curve that best fits the upper eyelid in the grayscale edge map is determined as the target upper eyelid curve characterizing the upper eyelid; similarly, using the sums of the pixel values, in the grayscale edge map, of the pixel points on the reference curves characterizing the lower eyelid and on the to-be-detected lower eyelid curve, the curve that best fits the lower eyelid in the grayscale edge map is determined as the target lower eyelid curve characterizing the lower eyelid. This improves the detection accuracy of the upper and lower eyelids in the image to a certain extent.
Brief Description of the Drawings

In order to describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

Fig. 1 is a schematic flowchart of an eye key point labeling method provided by an embodiment of the present invention;

Figs. 2A, 2B, 2C and 2D are schematic diagrams of labeled eyelid curves corresponding to face images;

Fig. 3 is a schematic flowchart of a training method for an eye key point detection model provided by an embodiment of the present invention;

Fig. 4 is a schematic structural diagram of an eye key point labeling apparatus provided by an embodiment of the present invention;

Fig. 5 is a schematic structural diagram of a training apparatus for an eye key point detection model provided by an embodiment of the present invention.
Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

It should be noted that the terms "include" and "have" in the embodiments of the present invention and the drawings, and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product or device.

The embodiments of the present invention disclose an eye key point labeling method and apparatus and a training method and apparatus for an eye key point detection model, so as to label eye key points with relatively distinct semantic features and improve labeling efficiency to a certain extent. The embodiments of the present invention are described in detail below.
Fig. 1 is a schematic flowchart of an eye key point labeling method provided by an embodiment of the present invention. The method is applied to an electronic device, which may be a device with strong computing and processing capability, such as a server. For clarity of presentation, in the following description the electronic device implementing the eye key point labeling method is referred to as the first electronic device. The method specifically includes the following steps:

S101: obtaining face images and a labeled eyelid curve corresponding to each face image.

Each face image is labeled with the labeled eye corner points of the contained eye and the labeled eyelid points of its upper and lower eyelids, and each labeled eyelid curve includes a labeled upper eyelid curve that characterizes the upper eyelid and a labeled lower eyelid curve that characterizes the lower eyelid, generated from the corresponding labeled eyelid points and labeled eye corner points.

In the embodiments of the present invention, a face image may contain both eyes of a person's face, as shown in Figs. 2B and 2D, or only one eye of the face, as shown in Figs. 2A and 2C. When the face image contains both eyes of the person's face, the face image is labeled with the labeled eye corner points and the labeled eyelid points of the upper and lower eyelids of both eyes; in this case, the labeled eyelid curves corresponding to the face image include a labeled eyelid curve corresponding to the left eye and a labeled eyelid curve corresponding to the right eye. The labeled eyelid curve corresponding to the left eye includes a labeled upper eyelid curve that characterizes the upper eyelid of the left eye and a labeled lower eyelid curve that characterizes the lower eyelid of the left eye, generated from the labeled eyelid points and labeled eye corner points corresponding to the left eye; the labeled eyelid curve corresponding to the right eye is defined in the same way for the right eye.

In one implementation of the present invention, the labeled eyelid curve corresponding to each face image may be generated while the annotator labels the upper and lower eyelids of the eyes in the face image, based on the labeled eyelid points of the upper eyelid of the labeled eye and a preset curve fitting algorithm, and based on the labeled eyelid points of the lower eyelid of the labeled eye and the preset curve fitting algorithm. When the first electronic device obtains a face image, it thus also obtains the labeled eyelid curve corresponding to that face image.
In another implementation of the present invention, the step of obtaining the eyelid curve corresponding to each face image may include: for each face image, fitting an upper eyelid curve that characterizes the upper eyelid based on the labeled eye corner points and the labeled eyelid points of the upper eyelid in the face image and a preset curve fitting algorithm, and fitting a lower eyelid curve that characterizes the lower eyelid based on the labeled eye corner points, the labeled eyelid points of the lower eyelid and the preset curve fitting algorithm, so as to obtain the eyelid curve corresponding to the face image.

In this implementation, after the first electronic device obtains a face image, it can fit, based on the labeled eye corner points of the contained eye and the labeled eyelid points of the upper eyelid labeled in the face image and the preset curve fitting algorithm, an upper eyelid curve that characterizes the upper eyelid of that eye as the labeled upper eyelid curve; and it can fit, based on the labeled eye corner points of the contained eye and the labeled eyelid points of the lower eyelid labeled in the face image and the preset curve fitting algorithm, a lower eyelid curve that characterizes the lower eyelid of that eye as the labeled lower eyelid curve. The preset curve fitting algorithm may be a cubic spline interpolation algorithm. It can be understood that each eye includes two labeled eye corner points, which are the intersection points of the labeled upper eyelid curve characterizing the upper eyelid of the eye and the labeled lower eyelid curve characterizing the lower eyelid of the eye.
S102: for each labeled eyelid curve, integrating the labeled eyelid curve based on the principle of mathematical integration to determine the first curve length of the labeled upper eyelid curve and the second curve length of the labeled lower eyelid curve in the labeled eyelid curve, and determining a plurality of to-be-used eyelid points from the labeled upper eyelid curve and the labeled lower eyelid curve respectively.

In this step, for each labeled eyelid curve, the first electronic device may integrate the labeled eyelid curve based on the principle of mathematical integration, that is, integrate the labeled upper eyelid curve in the labeled eyelid curve to determine the curve length of the labeled upper eyelid curve as the first curve length, and integrate the labeled lower eyelid curve in the labeled eyelid curve to determine the curve length of the labeled lower eyelid curve as the second curve length.

Then, for each labeled eyelid curve, the first electronic device densely samples points on the labeled upper eyelid curve of that labeled eyelid curve to determine a plurality of to-be-used eyelid points, for example, by taking a preset number of points, that is, determining a preset number of to-be-used eyelid points.

Similarly, for each labeled eyelid curve, the first electronic device densely samples points on the labeled lower eyelid curve of that labeled eyelid curve to determine a plurality of to-be-used eyelid points, for example, by taking a preset number of points, that is, determining a preset number of to-be-used eyelid points.
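A minimal numerical sketch of step S102 follows, approximating the integral of the arc length by summing the lengths of many short segments on a densely sampled curve; the discretization is an illustrative choice, not the only way to integrate the curve.

```python
# A minimal sketch of computing an eyelid curve's arc length and keeping a dense
# set of points on the curve together with their cumulative arc lengths.
import numpy as np

def arc_length_and_dense_points(curve_fn, x_start, x_end, samples=1000):
    xs = np.linspace(x_start, x_end, samples)
    ys = curve_fn(xs)
    seg = np.hypot(np.diff(xs), np.diff(ys))          # lengths of the short segments
    cumulative = np.concatenate([[0.0], np.cumsum(seg)])
    return cumulative[-1], np.column_stack([xs, ys]), cumulative
```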
S103:针对每一标注眼睑曲线,基于该标注眼睑曲线中标注上眼睑曲线的第一曲线长度、标注下眼睑曲线的第二曲线长度、多个待利用眼睑点以及预设等分点数,从该标注眼睑曲线中标注上眼睑曲线和标注下眼睑曲线中,分别确定出预设等分点数减1个等分上眼睑点和预设等分点数减1个等分下眼睑点。S103: For each marked eyelid curve, based on the first curve length of the upper eyelid curve marked on the marked eyelid curve, the second curve length marked on the lower eyelid curve, multiple eyelid points to be used, and a preset number of equal division points, In marking the eyelid curve, marking the upper eyelid curve and marking the lower eyelid curve, respectively determine the preset number of equal points minus 1 equal upper eyelid point and the preset number of equal points minus 1 equal lower eyelid point.
In this step, after the first electronic device determines the multiple eyelid points to be used on the labeled upper eyelid curve and the multiple eyelid points to be used on the labeled lower eyelid curve of each labeled eyelid curve, it uses the first curve length and the preset equal-division number to determine the distance between every two adjacent equal-division upper-eyelid points that need to be labeled, where this distance is equal to the ratio of the first curve length to the preset equal-division number. Subsequently, the first electronic device may start from one eye corner point and compute the distance between that eye corner point and each eyelid point to be used; when the distance between the eye corner point and a certain eyelid point to be used is determined to be an integer multiple of the distance between two adjacent equal-division upper-eyelid points, that eyelid point to be used can be determined to be an equal-division upper-eyelid point. The integer multiple may range from 1 to the preset equal-division number minus 1. Alternatively, the electronic device may start from one eye corner point and compute the distance between that eye corner point and each eyelid point to be used; when the distance between a certain eyelid point to be used and the eye corner point is determined to equal the distance between two adjacent equal-division upper-eyelid points, that eyelid point to be used is determined to be the first equal-division upper-eyelid point; then, taking the first equal-division upper-eyelid point as the starting position, the eyelid points to be used after the first equal-division upper-eyelid point are traversed in order, and when the distance between a certain eyelid point to be used and the first equal-division upper-eyelid point is determined to equal the distance between two adjacent equal-division upper-eyelid points, that eyelid point to be used is determined to be the second equal-division upper-eyelid point, and so on, until the preset equal-division number minus one equal-division upper-eyelid points are determined.
Similarly, the second curve length and the preset equal-division number are used to determine the distance between every two adjacent equal-division lower-eyelid points that need to be labeled, where this distance is equal to the ratio of the second curve length to the preset equal-division number. Subsequently, the first electronic device may start from one eye corner point and compute the distance between that eye corner point and each eyelid point to be used; when the distance between the eye corner point and a certain eyelid point to be used is determined to be an integer multiple of the distance between two adjacent equal-division lower-eyelid points, that eyelid point to be used can be determined to be an equal-division lower-eyelid point. The integer multiple may range from 1 to the preset equal-division number minus 1. Alternatively, the electronic device may start from one eye corner point and compute the distance between that eye corner point and each eyelid point to be used; when the distance between a certain eyelid point to be used and the eye corner point is determined to equal the distance between two adjacent equal-division lower-eyelid points, that eyelid point to be used is determined to be the first equal-division lower-eyelid point; then, taking the first equal-division lower-eyelid point as the starting position, the eyelid points to be used after the first equal-division lower-eyelid point are traversed in order, and when the distance between a certain eyelid point to be used and the first equal-division lower-eyelid point is determined to equal the distance between two adjacent equal-division lower-eyelid points, that eyelid point to be used is determined to be the second equal-division lower-eyelid point, and so on, until the preset equal-division number minus one equal-division lower-eyelid points are determined.
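A possible way to realize S102 and S103 together, offered only as a sketch under the assumption that the eyelid curve is available as a fitted spline, is to densely sample the curve, accumulate arc length from one corner point, and keep the points whose arc-length distance is an integer multiple of (curve length / equal-division number); the function and variable names are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def equal_division_points(curve, x_start, x_end, num_divisions, num_samples=10000):
    """Densely sample eyelid points to be used on the fitted curve, then keep the
    num_divisions - 1 points whose arc-length distance from the corner point is an
    integer multiple of (curve length / num_divisions)."""
    xs = np.linspace(x_start, x_end, num_samples)
    ys = curve(xs)
    seg = np.hypot(np.diff(xs), np.diff(ys))
    arc = np.concatenate([[0.0], np.cumsum(seg)])   # arc length measured from the corner point
    total = arc[-1]
    targets = total * np.arange(1, num_divisions) / num_divisions
    idx = np.searchsorted(arc, targets)             # nearest densely sampled points
    return list(zip(xs[idx], ys[idx]))

# Hypothetical labeled upper eyelid curve; 10 equal divisions yield 9 equal-division points.
upper_curve = CubicSpline([100, 120, 140, 160, 180], [210, 195, 190, 196, 212])
points = equal_division_points(upper_curve, 100, 180, num_divisions=10)
```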
When a face image includes both a left eye and a right eye, the above steps are performed separately for the labeled eyelid curves corresponding to the left eye and for the labeled eyelid curves corresponding to the right eye.
S104: Determine the labeled eye corner points, the equal-division upper-eyelid points, and the equal-division lower-eyelid points in each face image as the eye key points corresponding to that face image.
After the first electronic device determines the equal-division eyelid points of the upper and lower eyelids of the eye contained in a face image, it determines the equal-division eyelid points of the upper and lower eyelids of that eye, together with the labeled eye corner points, as the eye key points corresponding to that face image. The equal-division eyelid points of the upper and lower eyelids of the eye include the equal-division upper-eyelid points of the upper eyelid and the equal-division lower-eyelid points of the lower eyelid.
Furthermore, the first electronic device may, based on the position information of the determined equal-division eyelid points of the upper and lower eyelids of the eye contained in the face image, mark each equal-division eyelid point on the upper and lower eyelids of the eye in the face image, and save the face image in which the equal-division eyelid points of the upper and lower eyelids of the eye and the labeled eye corner points are marked. In one case, the labeled eyelid curves corresponding to each face image may also be saved. The position information of the equal-division eyelid points of the upper and lower eyelids of the eye and of the labeled eye corner points in the face image may also be saved in text form.
By applying this embodiment of the present invention, the labeled eyelid curves corresponding to a face image can be integrated to determine the first curve length of the labeled upper eyelid curve and the second curve length of the labeled lower eyelid curve, and points can be densely sampled on the labeled upper eyelid curve and the labeled lower eyelid curve respectively, so as to determine multiple eyelid points to be used from each curve. Then, based on the first curve length, the multiple eyelid points to be used on the labeled upper eyelid curve, and the preset equal-division number, the preset equal-division number minus one equal-division upper-eyelid points are determined from the labeled upper eyelid curve; and based on the second curve length, the multiple eyelid points to be used on the labeled lower eyelid curve, and the preset equal-division number, the preset equal-division number minus one equal-division lower-eyelid points are determined from the labeled lower eyelid curve. In this way, equal-division eyelid points with relatively distinct semantic features are labeled semi-automatically on the upper and lower eyelids of the eyes contained in the face images, so that eye key points with relatively distinct semantic features are labeled and the labeling efficiency is improved to a certain extent.
In another embodiment of the present invention, after the step of determining the labeled eye corner points, the equal-division upper-eyelid points, and the equal-division lower-eyelid points in each face image as the eye key points corresponding to that face image, the method may further include:
for each face image, based on the eye key points in that face image, cutting out the image of the region where the eyes are located from the face image to obtain an eye image marked with the eye key points; and determining the eye images and their corresponding calibration information as training data for an eye key point detection model used to detect the eye key points of eyes in images, where the calibration information includes the position information of the eye key points in the corresponding eye image.
In this embodiment, after the first electronic device determines the eye key points in each face image, it may determine the region where the eyes are located based on the positions of the eye key points in the face image, and cut out the image of that region from the face image to obtain an eye image marked with the eye key points. The region where the eyes are located may be the smallest rectangular region containing the eyes, or a region obtained by expanding the smallest rectangular region containing the eyes outward by a preset number of pixels; either is acceptable. Expanding outward by a preset number of pixels means expanding by the preset number of pixels in each of the four directions (up, down, left, and right) of the smallest rectangular region containing the eyes. Each face image has a corresponding relationship with the eye image cut out from it.
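A minimal sketch of this cropping step, assuming the eye key points are pixel coordinates in the face image (the padding value and helper name are hypothetical):

```python
import numpy as np

def crop_eye_region(face_image, eye_keypoints, padding=10):
    """Cut out the region where the eye is located: the smallest rectangle
    containing all eye key points, expanded by `padding` pixels on each side."""
    pts = np.asarray(eye_keypoints, dtype=int)        # (N, 2) array of (x, y) key points
    h, w = face_image.shape[:2]
    x0 = max(pts[:, 0].min() - padding, 0)
    x1 = min(pts[:, 0].max() + padding, w - 1)
    y0 = max(pts[:, 1].min() - padding, 0)
    y1 = min(pts[:, 1].max() + padding, h - 1)
    crop = face_image[y0:y1 + 1, x0:x1 + 1]
    # Key point positions expressed in the cropped image's coordinate system.
    local_pts = pts - np.array([x0, y0])
    return crop, local_pts
```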
After the eye images are determined, for each eye image, the position information of the eye key points marked in that eye image within that eye image is determined as the calibration information corresponding to that eye image, and the eye image marked with the eye key points together with its corresponding calibration information is determined as training data for the eye key point detection model used to detect the eye key points of eyes in images.
It can be understood that, with the eye images marked with eye key points and their corresponding calibration information, an eye key point detection model can be trained that detects the equal-division eyelid points of the upper and lower eyelids and the eye corner points of eyes in images. For clarity of presentation, the process of training this eye key point detection model is introduced later.
In another embodiment of the present invention, the step of determining the eye images and their corresponding calibration information as the training data of the eye key point detection model used to detect the eye key points of eyes in images may include: obtaining the real eye opening-closing length corresponding to each face image;
obtaining the measured eye opening-closing length corresponding to each face image, where the measured eye opening-closing length is a length determined based on the upper and lower eyelids of the eye in the target three-dimensional face model corresponding to the face image; the target three-dimensional face model is a face model determined based on the facial feature points in the corresponding face image and a preset three-dimensional face model, and it includes the upper and lower eyelids of the eye constructed based on the labeled eye corner points, the equal-division upper-eyelid points, and the equal-division lower-eyelid points in that face image;
for each face image, calculating the ratio of the real eye opening-closing length corresponding to the face image to the measured eye opening-closing length as the measured deviation corresponding to that face image;
and determining the eye images and their corresponding calibration information as training data for the eye key point detection model used to detect the equal-division eyelid points of the upper and lower eyelids of eyes in images, where the calibration information includes the position information of the eye key points marked in the corresponding eye image and the measured deviation corresponding to the face image from which that eye image was taken.
In one case, when the face images are collected by a multi-image-acquisition-device system, a real three-dimensional face model of a person's face may be constructed based on the images containing that person's face collected at the same moment by the multiple image acquisition devices of the system, and the real three-dimensional face model includes the upper and lower eyelids of the person's eyes. Based on the center eyelid points of the upper and lower eyelids of the person's eyes contained in the real three-dimensional face model, the distance between the upper and lower eyelids of the person's eyes can be determined as the real eye opening-closing length corresponding to each image containing the person's face, that is, the real eye opening-closing length corresponding to each such face image. The real three-dimensional face model can be constructed with any current technique capable of reconstructing a three-dimensional face model of a person from multiple images containing the person's face.
The above process of determining the distance between the upper and lower eyelids of the person's eyes based on the center eyelid points of the upper and lower eyelids contained in the real three-dimensional face model may be as follows:
Taking one eye of the person as an example: obtain the first two-dimensional position information of the first eye corner point of the eye in the face image, the second two-dimensional position information of the second eye corner point, and the third two-dimensional position information of each equal-division eyelid point of the upper and lower eyelids; determine the first three-dimensional position information of the first eye corner point and the second three-dimensional position information of the second eye corner point from the real three-dimensional face model; construct a first eye corner constraint based on the first three-dimensional position information, the second three-dimensional position information, and a preset curve equation, where the first eye corner constraint is expressed by formula (1); construct a second eye corner constraint based on a preset first value, a preset second value, and the first eye corner constraint, where the first value and the second value are used to constrain the value range of the independent variable in the first eye corner constraint; the first value may be 0 and the second value may be 1, and the second eye corner constraint is expressed by formula (2); construct a reprojection error constraint corresponding to the equal-division eyelid points based on the curve equation, the third position information of each eyelid point, and the pose information and intrinsic parameters of each image acquisition device, where the reprojection error constraint may be constructed from the distances between the observed third two-dimensional position information of each equal-division eyelid point and the projection positions, in that face image, of the equal-division eyelid space points in the real three-dimensional face model corresponding to those equal-division eyelid points; construct an ordering constraint based on the ordering of the equal-division eyelid points in the face image, which may be expressed by formula (3); and, based on the first eye corner constraint, the second eye corner constraint, the reprojection error constraint, and the ordering constraint, construct the eyelid space curve equations representing the upper eyelid and the lower eyelid of the eye, that is, solve the equations corresponding to the above four constraints simultaneously to obtain the eyelid space curve equations representing the upper eyelid and the lower eyelid of the eye. Based on the eyelid space curve equations of the upper and lower eyelids, determine the bisecting point of the upper eyelid and the bisecting point of the lower eyelid, and calculate the distance between them as the distance between the upper and lower eyelids of the eye.
Formula (1), which expresses the first eye corner constraint, is given as an image in the original publication and is not reproduced here. In it, (x0, y0, z0) denotes the first three-dimensional position information, (x1, y1, z1) denotes the second three-dimensional position information, a1, a2, a3, b1, b2, b3, c1, c2, and c3 are the coefficients to be solved for, and t is the independent variable.
Formula (2), which expresses the second eye corner constraint, is likewise given as an image in the original publication and is not reproduced here.
Formula (3): 0 ≤ t1 ≤ t2 ≤ … ≤ ti ≤ … ≤ tM ≤ 1. When the eyelid space curve equation representing the upper eyelid is determined, ti corresponds to the third two-dimensional position information of the i-th equal-division eyelid point of the upper eyelid and, correspondingly, M denotes the number of equal-division eyelid points of the upper eyelid; when the eyelid space curve equation representing the lower eyelid is determined, ti corresponds to the third two-dimensional position information of the i-th equal-division eyelid point of the lower eyelid and, correspondingly, M denotes the number of equal-division eyelid points of the lower eyelid.
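As a hedged illustration only (formulas (1) and (2) are not reproduced above, so the parametric form below is an assumption, not the patented curve equation), the sketch assumes each solved eyelid space curve is available as a parametric function of t on [0, 1], with a hypothetical quadratic-in-t coefficient layout, and computes the real eye opening-closing length as the distance between the bisecting points (t = 0.5) of the upper and lower eyelid curves.

```python
import numpy as np

def eyelid_space_point(coeffs, t):
    """Evaluate a parametric eyelid space curve at parameter t in [0, 1].
    coeffs is a hypothetical 3x3 array: row k holds (a_k, b_k, c_k) so that
    coordinate_k(t) = a_k * t**2 + b_k * t + c_k."""
    a, b, c = coeffs[:, 0], coeffs[:, 1], coeffs[:, 2]
    return a * t ** 2 + b * t + c

def eye_opening_length(upper_coeffs, lower_coeffs):
    """Distance between the bisecting points (t = 0.5) of the upper and
    lower eyelid space curves, used as the real eye opening-closing length."""
    upper_mid = eyelid_space_point(upper_coeffs, 0.5)
    lower_mid = eyelid_space_point(lower_coeffs, 0.5)
    return float(np.linalg.norm(upper_mid - lower_mid))
```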
After the first electronic device obtains the real eye opening-closing length corresponding to each face image, it may obtain the measured eye opening-closing length corresponding to each face image. The measured eye opening-closing length may be a length determined based on the upper and lower eyelids of the eye in the target three-dimensional face model corresponding to the face image; the target three-dimensional face model may be a face model determined, using 3DMM (3D Morphable Models) technology, based on the facial feature points in the corresponding face image and a preset three-dimensional face model, and it includes the upper and lower eyelids of the eye constructed based on the labeled eye corner points, the equal-division upper-eyelid points, and the equal-division lower-eyelid points in that face image.
Further, for each face image, the first electronic device calculates the ratio of the real eye opening-closing length corresponding to that face image to the measured eye opening-closing length as the measured deviation corresponding to that face image. The measured deviation indicates the degree of difference between the real eye opening-closing length and the measured eye opening-closing length of the corresponding eye: the larger the measured deviation, the greater that degree of difference. When a face image includes both a left eye and a right eye, the real and measured eye opening-closing lengths corresponding to the left eye and those corresponding to the right eye can be obtained separately; the measured deviation corresponding to the left eye is determined based on the real and measured eye opening-closing lengths of the left eye, the measured deviation corresponding to the right eye is determined based on the real and measured eye opening-closing lengths of the right eye, and together these are taken as the measured deviation corresponding to that face image.
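A short sketch of this ratio computation (the numeric values and units are hypothetical examples, not data from the disclosure):

```python
def measured_deviation(real_length, measured_length):
    """Measured deviation = real eye opening-closing length / measured eye
    opening-closing length (computed per eye)."""
    return real_length / measured_length

# Hypothetical per-image values for the left and right eye (e.g. millimetres).
deviation = {
    "left":  measured_deviation(9.6, 8.0),   # 1.2
    "right": measured_deviation(9.0, 9.0),   # 1.0
}
```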
The first electronic device then determines the eye images and their corresponding calibration information as training data for the eye key point detection model used to detect the equal-division eyelid points of the upper and lower eyelids of eyes in images, where the calibration information includes the position information of the eye key points marked in the corresponding eye image and the measured deviation corresponding to the face image from which that eye image was taken.
Subsequently, the eye key point detection model trained on the eye images and their corresponding calibration information can not only detect the equal-division eyelid points of the upper and lower eyelids of the eyes in an image, but also detect the measured deviation corresponding to the eyes in that image; the measured eye opening-closing length corresponding to the image can then be corrected based on that measured deviation to obtain a more accurate eye opening-closing length. When this more accurate eye opening-closing length is used to perform other tasks, the accuracy of the results of those tasks can be improved.
In another embodiment of the present invention, before the step of obtaining the face images and the labeled eyelid curves corresponding to each face image, the method may further include:
a process of labeling the upper and lower eyelids of the eyes of the person's face in each face image, in which, for each face image, the following steps are performed to label the upper and lower eyelids of the eyes of the person's face in that face image:
obtaining and displaying a first face image, where the first face image contains the eyes of a person's face and is one of the face images;
receiving labeling instructions triggered by an annotator for the upper and lower eyelids of the eyes in the first face image, where each labeling instruction carries the position information of the labeled annotation point;
if it is detected that the annotator has triggered labeling instructions for a specified eyelid in the first face image at least twice, determining, based on the position information of the annotation points carried by the at least two labeling instructions triggered by the annotator and the preset curve fitting algorithm, a specified eyelid curve representing the specified eyelid, where the specified eyelid is the upper eyelid or the lower eyelid of the eye in the first face image;
and displaying the specified eyelid curve in the first face image, so that the annotator can check whether the labeled annotation points are eyelid points or eye corner points on the specified eyelid.
Before the face images and the labeled eyelid curves corresponding to each face image are obtained, a process of labeling the upper and lower eyelids of the eyes of the person's face in each face image may also be included. This labeling process is the process in which the annotator labels the eye corner points and the eyelid points on the upper and lower eyelids of the eyes of the person's face in the face image.
In this embodiment of the present invention, after detecting a face image labeling start instruction triggered by the annotator, the first electronic device may obtain a first face image, where the first face image is any one of the above face images and contains the eyes of a person's face. The first electronic device displays the first face image, and the annotator can label the upper and lower eyelids of the eyes in it; the first electronic device receives the labeling instructions triggered by the annotator for the upper and lower eyelids of the eyes in the first face image, where each labeling instruction carries the position information of the labeled annotation point.
Based on the position information of the labeled annotation point carried in the labeling instruction, the first electronic device displays a preset annotation icon at the corresponding position in the first face image. The first electronic device can count, in real time, the number of labeling instructions triggered by the annotator for a specified eyelid in the first face image. If it is detected that the annotator has triggered labeling instructions for the specified eyelid at least twice, that is, the specified eyelid includes at least two annotation points, the first electronic device determines, based on the position information of the annotation points carried by those labeling instructions and the preset curve fitting algorithm, a specified eyelid curve representing the specified eyelid, and displays it in the first face image. The annotator can observe the specified eyelid curve and check whether it coincides with the specified eyelid in the first face image, that is, check whether the labeled annotation points are eyelid points or eye corner points on the specified eyelid.
Subsequently, when the annotator determines that the position of a labeled annotation point needs to be modified, the annotator can trigger an annotation point position modification instruction, which the first electronic device obtains and which carries the current position information of the annotation point to be modified and the target position information to which it is to be moved. The first electronic device moves that annotation point from the position corresponding to the current position information to the position corresponding to the target position information, that is, it displays the preset annotation icon at the position corresponding to the target position information and deletes the preset annotation icon displayed at the position corresponding to the current position information. The first electronic device then determines, based on the new position information of the modified annotation point of the specified eyelid, the position information of the other annotation points, and the preset curve fitting algorithm, a new specified eyelid curve representing the specified eyelid, and displays it in the first face image, so that the annotator can continue to check whether the labeled annotation points are eyelid points or eye corner points on the specified eyelid.
This continues until the first electronic device detects that the annotator triggers a save instruction for the first face image, at which point the first electronic device saves the first face image, the annotation points it contains at the moment the save instruction is triggered, and the position information of each annotation point. In one case, the annotation points contained in the first face image at the moment the save instruction is triggered may include the two eye corner points of the eye, the upper-eyelid points of the upper eyelid, and the lower-eyelid points of the lower eyelid. The numbers of upper-eyelid points and lower-eyelid points may be the same or different; for example, the number of upper-eyelid points may be 3 and the number of lower-eyelid points may be 4, and so on. The preset annotation icon may be a solid or hollow circle, or a solid or hollow image of another shape; any of these is acceptable.
It can be understood that this labeling process may be performed on the first electronic device or on another electronic device different from the first electronic device; either is acceptable. If the labeling process is performed on another electronic device, then after the annotator finishes labeling the face images, the face images with the eye corner points and eyelid points labeled on the upper and lower eyelids of the eyes may be uploaded to the cloud, so that when the first electronic device performs the labeling of eye key points, it can obtain those face images from the cloud.
This embodiment of the present invention can, to a certain extent, efficiently ensure the accuracy of the eyelid points or eye corner points labeled by the annotator and improve the annotator's labeling efficiency.
Fig. 2A, Fig. 2B, Fig. 2C, and Fig. 2D are schematic diagrams of labeled eyelid curves corresponding to face images. The face image shown in Fig. 2A includes one eye, and that eye can be detected completely; the labeled eyelid curves corresponding to this face image may include the labeled upper eyelid curve and the labeled lower eyelid curve of that eye. The masked positions in Fig. 2A, Fig. 2B, Fig. 2C, and Fig. 2D are where the human face is located.
The face image shown in Fig. 2B includes two eyes, and both eyes can be detected completely; the labeled eyelid curves corresponding to this face image may include the labeled upper eyelid curve and the labeled lower eyelid curve of the left eye, and the labeled upper eyelid curve and the labeled lower eyelid curve of the right eye. The right part of Fig. 2B is a partially enlarged view of the position of the eyes in the left part.
The face image shown in Fig. 2C includes one eye, and the inner corner of that eye is occluded. When the annotator labels the eyelid points and eye corner points of the upper and lower eyelids of a partially occluded eye in this type of face image, in one case the eyelid points and eye corner points at the occluded positions can be labeled directly based on experience. In another case, when the face image is an image collected by a multi-image-acquisition-device system, a three-dimensional face model of the person corresponding to the face image can be reconstructed based on the other face images corresponding to that face image; then, from the three-dimensional face model, the eye space points corresponding to the occluded positions of the eye in the face image are determined and reprojected into the face image to determine the eyelid points and/or eye corner points at the occluded positions. A partially occluded eye may refer to an eye whose occluded area does not exceed a preset area. The right part of Fig. 2C is a partially enlarged view of the position of the eye in the left part.
The face image shown in Fig. 2D includes two eyes, one of which can be detected completely while the other is partially occluded. In this case the annotator can, based on experience, directly label the eyelid points and eye corner points at the occluded positions of the occluded eye. Alternatively, when the face image is an image collected by a multi-image-acquisition-device system, a three-dimensional face model of the person corresponding to the face image can be reconstructed based on the other face images corresponding to that face image; then, from the three-dimensional face model, the eye space points corresponding to the occluded positions of the eye in the face image are determined and reprojected into the face image to determine the eyelid points and/or eye corner points at the occluded positions. The right part of Fig. 2D is a partially enlarged view of the position of the left eye in the left part.
The above face image and the other face images corresponding to it are all images collected by the multi-image-acquisition-device system, and they are collected at the same moment.
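A hedged sketch of the reprojection step mentioned above, assuming a standard pinhole camera model; the camera intrinsics, pose, and the 3D point below are hypothetical placeholders rather than values from the disclosure.

```python
import numpy as np

def reproject_point(point_3d, rotation, translation, intrinsics):
    """Project an eye space point from the reconstructed 3D face model back into
    a face image using the camera pose (R, t) and intrinsic matrix K (pinhole model)."""
    p_cam = rotation @ np.asarray(point_3d, dtype=float) + translation  # world -> camera
    p_img = intrinsics @ p_cam                                          # camera -> image plane
    return p_img[:2] / p_img[2]                                         # perspective divide -> (u, v)

# Hypothetical camera parameters and an occluded inner-corner space point.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])
uv = reproject_point([0.01, -0.02, 0.4], R, t, K)
```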
In order to prevent the first electronic device from misrecognition, for example recognizing an annotation point labeled for the lower eyelid as one labeled for the upper eyelid, or vice versa, the first electronic device may be set to require the annotator to label the annotation points of the upper and lower eyelids of an eye separately. For example, the annotator may first be instructed to label the annotation points of the upper eyelid of the eye, during which the annotator cannot label the annotation points of the lower eyelid of that eye; after an instruction indicating that the annotation points of the upper eyelid have been labeled is detected, the annotator is then instructed to label the annotation points of the lower eyelid of the eye, during which the annotator cannot label the annotation points of the upper eyelid of that eye.
Correspondingly, in one case, when the annotator labels the upper and lower eyelids of an eye in a face image, the intersections of the upper and lower eyelids of the eye are the inner and outer corners of that eye. To ensure that the eyelid curves of the upper and lower eyelids intersect at the corresponding inner and outer eye corners, the first electronic device, after detecting that the annotation points of the upper eyelid have been labeled, may snap an annotation point labeled by the annotator on the lower eyelid from its current position to the position of an already labeled eye corner point when the distance between that annotation point and the already labeled eye corner point is less than a preset distance. The already labeled eye corner points may refer to the annotation points with the smallest and largest horizontal-axis coordinates among the labeled annotation points of the upper eyelid.
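A minimal sketch of this snapping behavior, with a hypothetical preset distance and hypothetical annotation coordinates:

```python
import numpy as np

def snap_to_corner(point, corner_points, snap_distance=5.0):
    """If a lower-eyelid annotation point falls within `snap_distance` pixels of an
    already labeled eye corner point, snap it onto that corner point; otherwise keep it."""
    point = np.asarray(point, dtype=float)
    for corner in corner_points:
        corner = np.asarray(corner, dtype=float)
        if np.linalg.norm(point - corner) < snap_distance:
            return tuple(corner)
    return tuple(point)

# Already labeled corners = upper-eyelid annotation points with min / max x-coordinate.
upper_points = [(100, 210), (120, 195), (160, 196), (180, 212)]
corners = [min(upper_points), max(upper_points)]   # ordered by x (tuple ordering)
snapped = snap_to_corner((101.5, 208.0), corners)  # -> (100.0, 210.0)
```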
Fig. 3 is a schematic flowchart of a training method for an eye key point detection model provided by an embodiment of the present invention. The method is applied to an electronic device, which may be a device with relatively strong computing and processing capability, such as a server. For clarity of presentation, in the following description the electronic device that implements the training method for the eye key point detection model is referred to as the second electronic device. The method specifically includes the following steps:
S301: Obtain training data. The training data includes eye images marked with the equal-division eyelid points of the upper and lower eyelids of the eyes and the labeled eye corner points, as well as calibration information corresponding to each eye image. The calibration information includes the position information of the equal-division eyelid points and the labeled eye corner points marked in the corresponding eye image. The equal-division eyelid points include the equal-division upper-eyelid points of the upper eyelid and the equal-division lower-eyelid points of the lower eyelid of the eyes in the face image, determined based on the labeled eyelid curves corresponding to the face image and the principle of mathematical integration. The labeled eyelid curves include the labeled upper eyelid curve representing the upper eyelid and the labeled lower eyelid curve representing the lower eyelid, generated based on the labeled eye corner points of the eyes and the labeled eyelid points of the upper and lower eyelids marked in the corresponding face image. Each eye image is an image of the region where the eyes are located, cut out from the corresponding face image.
The second electronic device may first obtain the training data, which may include multiple eye images and the calibration information corresponding to each eye image. The eye images are marked with the equal-division eyelid points of the upper and lower eyelids of the eyes and with the labeled eye corner points. For the specific process of obtaining the equal-division eyelid points and the labeled eye corner points, reference may be made to the corresponding process in the above eye key point labeling procedure, which is not repeated here.
S302: Input the eye images, together with the position information of the equal-division eyelid points and the labeled eye corner points included in the calibration information corresponding to each eye image, into an initial eye key point detection model, so as to train an eye key point detection model used to detect the equal-division eyelid points of the upper and lower eyelids and the eye corner points of eyes in images.
In this step, the initial eye key point detection model may be a neural network model based on deep learning. The second electronic device inputs the eye images and the position information of the eye key points included in the calibration information corresponding to each eye image into the initial eye key point detection model, where the eye key points include the equal-division eyelid points of the upper and lower eyelids and the eye corner points of the corresponding eye.
For each eye image, the second electronic device uses the initial eye key point detection model to extract image features from that eye image and, based on the extracted image features, detects the eye key points of the eyes in that eye image and their position information; it then matches the detected position information of the eye key points against the position information of the eye key points in the corresponding calibration information. If the matching succeeds, it is determined that the initial eye key point detection model has converged, and the trained eye key point detection model is obtained. If the matching fails, it is determined that the initial eye key point detection model has not converged; the parameters of the initial eye key point detection model are adjusted, and the step of inputting the eye images and the position information of the equal-division eyelid points and the labeled eye corner points included in the corresponding calibration information into the initial eye key point detection model is executed again, until the matching succeeds, the initial eye key point detection model is determined to have converged, and the trained eye key point detection model is obtained.
There is a corresponding relationship between an eye image and the position information of the detected eye key points, and between an eye image and its calibration information; correspondingly, there is a corresponding relationship between the position information of the detected eye key points and the calibration information.
The above process of matching the detected position information of the eye key points against the position information of the eye key points in the corresponding calibration information may be: using a preset loss function, calculating the loss value between the position information of each detected eye key point and the position information of the corresponding eye key point in the calibration information, and judging whether the loss value is less than a preset loss threshold. If the loss value is judged to be less than the preset loss threshold and, in addition, the number of times the loss value has been judged to be less than the preset loss threshold exceeds a predetermined number, or the ratio of that number of times to the total number of judgments exceeds a preset ratio threshold, the matching is determined to be successful, it can be determined that the initial eye key point detection model has converged, and the trained eye key point detection model is obtained; if the loss value is judged to be not less than the preset loss threshold, the matching is determined to be unsuccessful.
The above process is only one example of determining that the initial eye key point detection model has converged. Embodiments of the present invention may use any determination method capable of characterizing model convergence to determine whether the initial eye key point detection model has converged, and thereby obtain the eye key point detection model through training.
The preset loss function may be a loss function such as smooth L1 loss, wing loss, or KL loss (i.e., KL divergence loss).
In the above process of adjusting the parameters of the initial eye key point detection model, the adjustment may be made based on the principle that the "gap" between the position information of the eye key points detected by the model during training and the position information of the eye key points in the corresponding calibration information becomes smaller and smaller; optimization strategies such as SGD (Stochastic Gradient Descent) or SGDR (stochastic gradient descent with restarts) may be used. During training, the batch size may be 256 and the initial learning rate may be 0.04.
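For illustration only, the following PyTorch sketch shows one training step of a keypoint regression model using smooth L1 loss and SGD with the batch size and initial learning rate mentioned above. The network architecture, the number of key points (NUM_KEYPOINTS), the input resolution, and the random tensors standing in for real eye images and calibration positions are all hypothetical and are not taken from the disclosure.

```python
import torch
import torch.nn as nn

# Hypothetical regression network: 64x64 grayscale eye image -> K key points (x, y).
NUM_KEYPOINTS = 11
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, NUM_KEYPOINTS * 2),
)

criterion = nn.SmoothL1Loss()                                  # smooth L1 loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.04)       # initial learning rate 0.04

def train_step(images, target_points):
    """One optimization step: images (B, 1, 64, 64), target_points (B, K*2)."""
    optimizer.zero_grad()
    predicted = model(images)
    loss = criterion(predicted, target_points)
    loss.backward()
    optimizer.step()
    return loss.item()

# One hypothetical batch of 256 eye images with their calibrated key point positions.
loss = train_step(torch.randn(256, 1, 64, 64), torch.randn(256, NUM_KEYPOINTS * 2))
```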
It can be understood that there is a corresponding relationship between the eye key points detected from an eye image and the eye key points whose position information is included in the calibration information of that eye image. When calculating the loss value between the position information of each detected eye key point and the position information of the corresponding eye key point in the calibration information, the loss value needs to be calculated for each corresponding pair of detected and calibrated eye key point positions.
The greater the number of eye images included in the obtained training data, the higher the stability of the trained eye key point detection model and the higher the accuracy of the detection results obtained with that model.
By applying this embodiment of the present invention, the initial eye key point detection model is trained with training data that includes eye images marked with equal-division eyelid points and labeled eye corner points, together with the corresponding calibration information including the position information of those equal-division eyelid points and labeled eye corner points, so as to obtain an eye key point detection model used to detect the equal-division eyelid points and eye corner points of the upper and lower eyelids of eyes in images. This enables the eye key point detection model to determine eye key points with relatively distinct semantic features from images, and ensures the stability and detection accuracy of the eye key point detection model to a certain extent.
In another embodiment of the present invention, before S302, the method may further include:
for each eye image, performing rectification processing on the eye image to obtain a rectified image, where the rectification processing is processing that makes the vertical coordinates in the position information of the labeled eye corner points in the eye image identical;
and updating, based on the position information of the equal-division eyelid points and the labeled eye corner points in each rectified image, the position information of the equal-division eyelid points and the labeled eye corner points included in the calibration information corresponding to each rectified image.
S302 may then include: inputting each rectified image, together with the position information of the equal-division eyelid points and the labeled eye corner points included in the calibration information corresponding to each rectified image, into the initial eye key point detection model, so as to train an eye key point detection model used to detect the equal-division eyelid points of the upper and lower eyelids and the eye corner points of eyes in images.
In one case, the person's face in a face image may be tilted. In this embodiment, in order to improve the accuracy of the detection results of the trained eye key point detection model, before the initial eye key point detection model is trained with the eye images and their calibration information, the eye images may first be rectified to obtain rectified images, that is, images in which the vertical coordinates in the position information of the labeled eye corner points are identical. The positions of the eye key points in the rectified images are then re-determined, and the position information of the eye key points, that is, of the equal-division eyelid points and the labeled eye corner points, included in the calibration information corresponding to each rectified image is updated based on the eye key points in that rectified image. Each rectified image and its corresponding calibration information are then input into the initial eye key point detection model. This can reduce the burden of training the eye key point detection model to a certain extent, and can to a certain extent improve the accuracy with which the trained model detects the eye key points of eyes in images.
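A hedged OpenCV sketch of this rectification: rotating the eye image about its center so that the two labeled eye corner points end up with the same vertical coordinate, and rotating the key point positions with it. The function name and the assumption that rotation about the image center suffices are illustrative, not the patented procedure.

```python
import cv2
import numpy as np

def rectify_eye_image(eye_image, keypoints, corner_left, corner_right):
    """Rotate the eye image about its centre so that the two labeled eye corner
    points share the same vertical coordinate, and rotate the key points with it."""
    (x0, y0), (x1, y1) = corner_left, corner_right
    angle = np.degrees(np.arctan2(y1 - y0, x1 - x0))       # tilt of the corner-to-corner line
    h, w = eye_image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    rectified = cv2.warpAffine(eye_image, M, (w, h))
    pts = np.asarray(keypoints, dtype=np.float64)
    ones = np.ones((pts.shape[0], 1))
    rotated_pts = (M @ np.hstack([pts, ones]).T).T          # updated calibration positions
    return rectified, rotated_pts
```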
In another embodiment of the present invention, the eye images include left eye images and the right eye images corresponding to the left eye images.
Before the step of inputting each rectified image, together with the position information of the equal-division eyelid points and the labeled eye corner points included in the corresponding calibration information, into the initial eye key point detection model so as to train an eye key point detection model used to detect the equal-division eyelid points of the upper and lower eyelids and the eye corner points of eyes in images, the method may further include: mirroring the left eye image, or the right eye image corresponding to the left eye image, to obtain a mirrored image;
and splicing the mirrored image and the image that has not been mirrored to obtain a spliced image, where, if the left eye image is mirrored, the image that has not been mirrored is the right eye image corresponding to the left eye image, and if the right eye image corresponding to the left eye image is mirrored, the image that has not been mirrored is the left eye image.
The step of inputting each rectified image, together with the position information of the equal-division eyelid points and the labeled eye corner points included in the corresponding calibration information, into the initial eye key point detection model so as to train an eye key point detection model used to detect the equal-division eyelid points of the upper and lower eyelids and the eye corner points of eyes in images may then include:
将每一拼接图像以及每一拼接图像对应的标定信息包括的等分眼睑点和标注眼角点的位置信息,输入初始的眼部关键点检测模型,以训练得到用于检测图像中眼睛的上下眼睑中的等分眼睑点和眼角点的眼部关键点检测模型,其中,标定信息包括:镜像图像中的等分眼睑点和标注眼角点镜像之后的位置信息以及未进行镜像的图像中的等分眼睑点和标注眼角点的位置信息。Input the position information of the divided eyelid points and the marked eye corner points included in each stitched image and the calibration information corresponding to each stitched image into the initial eye key point detection model to train the upper and lower eyelids for detecting the eyes in the image The key point detection model of the eyelid points and the corner points of the eye in, where the calibration information includes: the equally divided eyelid points in the mirror image and the position information of the marked eye corner points after mirroring, and the equal division in the image without mirroring The position information of the eyelid points and the marked eye corner points.
Here, the eye images include an image containing the left eye of the target person, which may be referred to as the left-eye image, and an image containing the right eye of the target person, which may be referred to as the right-eye image corresponding to the left-eye image.

In order to reduce, to a certain extent, the complexity of detecting the eye key points of the eyes in an image with the trained eye key point detection model, and to shorten the detection time the trained model needs to obtain those key points, in this embodiment the left-eye image, or the right-eye image corresponding to the left-eye image, may be mirrored to obtain a mirrored image, and the mirrored image and the unmirrored image may then be spliced to obtain the spliced image corresponding to the eye images.

Each spliced image and its corresponding calibration information are input into the initial eye key point detection model to train it. Mirroring the left-eye image, or the right-eye image corresponding to the left-eye image, turns the left-eye image into a mirrored right-eye image, or the right-eye image into a mirrored left-eye image, which can shorten the training time to a certain extent. Subsequently, when the eye key point detection model is used to detect an image, the left-eye image and the right-eye image determined from that image can first be processed in the same way, i.e. corrected, mirrored and spliced, so that the model detects the eye key points of both human eyes in the processed image at the same time; that is, the eye key points of the upper and lower eyelids of both eyes can be obtained in a single detection pass, which simplifies the detection process based on the eye key point detection model.

The process of splicing the mirrored image and the unmirrored image to obtain the spliced image may be: splicing the mirrored image and the unmirrored image along the spatial dimension or along the channel dimension. Spatial splicing may be left-right or top-bottom splicing. Left-right splicing may be: joining the right edge of the mirrored image to the left edge of the unmirrored image, or joining the left edge of the mirrored image to the right edge of the unmirrored image. Top-bottom splicing may be: joining the upper edge of the mirrored image to the lower edge of the unmirrored image, or joining the lower edge of the mirrored image to the upper edge of the unmirrored image. Channel-dimension splicing may be: stacking the mirrored image and the unmirrored image front to back, i.e. superimposing them channel by channel.
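Purely as an illustrative sketch, and not as a prescribed implementation, the mirroring and splicing options described above could look roughly like this in NumPy, assuming both eye crops are H x W x C arrays that have already been corrected and resized to the same shape:

    import numpy as np

    def mirror_and_splice(left_eye, right_eye, mode="horizontal"):
        # Mirror the left-eye crop and splice it with the right-eye crop.
        # mode: "horizontal" (left-right), "vertical" (top-bottom) or "channel".
        mirrored = np.flip(left_eye, axis=1)                        # horizontal mirror
        if mode == "horizontal":
            return np.concatenate([mirrored, right_eye], axis=1)    # side-by-side
        if mode == "vertical":
            return np.concatenate([mirrored, right_eye], axis=0)    # stacked top-bottom
        return np.concatenate([mirrored, right_eye], axis=2)        # channel-wise stack

If this route is taken, the labeled key points of the mirrored crop would also need their abscissas mirrored (x becomes width - 1 - x) so that the calibration information matches the spliced image, as the embodiment notes.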
Correspondingly, in this case, during the earlier correction processing, the ordinate values of the eye corner points in the original image corresponding to the mirrored image and the ordinate values of the eye corner points in the unmirrored image may all be adjusted to the same value, where the original image corresponding to the mirrored image is the image from which the mirrored image was obtained by mirroring.
In another embodiment of the present invention, the calibration information corresponding to each eye image may further include the measured deviation corresponding to that eye image, where the measured deviation is the ratio of the true eye opening-closing length corresponding to the eye image to the measured eye opening-closing length. The measured eye opening-closing length is a length determined based on the upper and lower eyelids of the eye in the target three-dimensional face model corresponding to the face image, and the target three-dimensional face model is a face model determined based on the facial feature points in the corresponding face image and a preset three-dimensional face model; the target three-dimensional face model includes the upper and lower eyelids of the eye constructed based on the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in that face image.

S302 may include: inputting the eye images, together with the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image and the measured deviation corresponding to each eye image, into the initial eye key point detection model, so as to train an eye key point detection model, where the eye key point detection model is used to detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image and to detect the measured deviation corresponding to the image.

In this implementation, the position information of the equally divided eyelid points in the eye image, the position information of the marked eye corner points and the measured deviation corresponding to the eye image may all serve as the calibration information corresponding to the eye image, and the eye image and this calibration information are then used to train the initial eye key point detection model; that is, the eye image and its corresponding calibration information are input into the initial model to train an eye key point detection model. In this implementation, the trained eye key point detection model can both detect the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image and detect the measured deviation corresponding to the image.
The initial eye key point detection model includes: a feature extraction layer, a first feature classification layer and a second feature classification layer. The feature extraction layer may refer to layers, such as convolutional layers and pooling layers, used to extract image features from an image; the first feature classification layer may refer to a fully connected layer used to detect, based on the image features, the eye key points in the image and their position information; and the second feature classification layer may refer to a fully connected layer used to detect the measured deviation corresponding to the image.
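A minimal sketch of such a two-branch architecture, written here in PyTorch purely for illustration (the layer sizes, input channels and number of key points are assumptions and are not taken from the disclosure), might look like:

    import torch
    import torch.nn as nn

    class EyeKeypointNet(nn.Module):
        # Shared feature extraction layers, a first fully connected head for the
        # key point coordinates, and a second fully connected head for the deviation.
        def __init__(self, num_keypoints=22):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
            )
            self.keypoint_head = nn.Linear(32 * 8 * 8, num_keypoints * 2)  # first classification layer
            self.deviation_head = nn.Linear(32 * 8 * 8, 1)                 # second classification layer

        def forward(self, x):
            feats = self.features(x)
            return self.keypoint_head(feats), self.deviation_head(feats)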
The above process of inputting the eye images and their corresponding calibration information into the initial eye key point detection model to train an eye key point detection model may be as follows: first, each eye image is input into the feature extraction layer to extract the image features of that eye image; the image features are then input into the first feature classification layer to determine the current position information of the eye key points in the eye image; the current position information is then matched against the position information of the eye key points in the corresponding calibration information. If the matching succeeds, the feature point detection branch model in the initial eye key point detection model is determined to have converged, and an intermediate eye key point detection model is obtained, where the feature point detection branch model is the model consisting of the feature extraction layer and the first feature classification layer. If the matching fails, the parameters of the feature extraction layer and the first feature classification layer are adjusted, and the process returns to the step of inputting each eye image into the feature extraction layer to extract its image features, until the matching is determined to succeed and the intermediate eye key point detection model is obtained. The intermediate eye key point detection model can detect the position information of eye key points in an image, and it includes the trained feature extraction layer, the trained first feature classification layer and the untrained second feature classification layer.

Then, the eye image is input into the feature extraction layer of the intermediate eye key point detection model to obtain the image features of the eye image; the image features are input into the first feature classification layer of the intermediate model to determine the position information of the eye key points in the eye image; based on that position information, the current measured deviation corresponding to the eye image is determined; the current measured deviation is input into the second feature classification layer of the intermediate model and matched against the measured deviation in the corresponding calibration information. If the matching succeeds, the intermediate eye key point detection model is determined to have converged, and the trained eye key point detection model is obtained, which includes the trained feature extraction layer, the trained first feature classification layer and the trained second feature classification layer. If the matching fails, the parameters of the second feature classification layer of the intermediate model are adjusted, and the process returns to the step of inputting the eye image into the feature extraction layer of the intermediate model to obtain its image features, until the matching succeeds, the intermediate model is determined to have converged, and the trained eye key point detection model is obtained. The trained eye key point detection model can both detect the eye key points in an image and detect the measured deviation corresponding to the image.
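The disclosure describes the matching and parameter adjustment abstractly; one common way to realize such a two-stage schedule is with regression losses and gradient descent, sketched below for the hypothetical EyeKeypointNet above (the losses, optimizers, epoch counts and loader format are assumptions, not requirements of the method):

    import torch
    import torch.nn.functional as F

    def train_two_stage(model, loader, epochs_stage1=10, epochs_stage2=10, lr=1e-3):
        # loader is assumed to yield (images, flattened keypoint targets, deviation targets).
        # Stage 1: fit the feature point detection branch (features + keypoint head).
        opt1 = torch.optim.Adam(list(model.features.parameters()) +
                                list(model.keypoint_head.parameters()), lr=lr)
        for _ in range(epochs_stage1):
            for images, keypoints, deviation in loader:
                pred_kp, _ = model(images)
                loss = F.mse_loss(pred_kp, keypoints)
                opt1.zero_grad(); loss.backward(); opt1.step()

        # Stage 2: with the first branch fixed, fit the second (deviation) head.
        opt2 = torch.optim.Adam(model.deviation_head.parameters(), lr=lr)
        for _ in range(epochs_stage2):
            for images, keypoints, deviation in loader:
                with torch.no_grad():
                    feats = model.features(images)
                pred_dev = model.deviation_head(feats)
                loss = F.mse_loss(pred_dev.squeeze(1), deviation)
                opt2.zero_grad(); loss.backward(); opt2.step()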
In another case, the process may instead be: first, based on the eye images and the calibration information, containing the position information of the eye key points, corresponding to those eye images, an eye key point detection model capable of detecting the position information of the eye key points of the eyes in an image is trained and used as the intermediate eye key point detection model, where the intermediate model includes the trained feature extraction layer, the trained first feature classification layer and the untrained second feature classification layer. The feature extraction layer may refer to layers of the intermediate model, such as convolutional layers and pooling layers, used to extract features from an image; the first feature classification layer may refer to the fully connected layer of the intermediate model used to detect the eye key points in an image and their position information; and the second feature classification layer may refer to the fully connected layer of the intermediate model used to detect the measured deviation corresponding to the image.

The second electronic device obtains a number of other eye images and obtains the calibration information corresponding to each of these other eye images, where the calibration information corresponding to an other eye image includes the measured deviation corresponding to that image. The other eye images and their corresponding calibration information are input into the intermediate eye key point detection model. For each other eye image, its image features are extracted by the trained feature extraction layer and input into the trained first feature classification layer to obtain the position information of the eye key points of that other eye image.

Based on the position information of the eye key points of that other eye image, the second electronic device determines the current measured deviation corresponding to that image, inputs the current measured deviation into the untrained second feature classification layer of the intermediate eye key point detection model, and matches the current measured deviation against the measured deviation in the corresponding calibration information. If the matching succeeds, the intermediate eye key point detection model is determined to have converged, and the trained eye key point detection model is obtained, which includes the trained feature extraction layer and first feature classification layer described above together with the trained second feature classification layer. If the matching fails, the parameters of the second feature classification layer of the intermediate model are adjusted, and the process returns to the step of inputting the other eye images and their corresponding calibration information into the intermediate model, until the matching succeeds, the intermediate model is determined to have converged, and the trained eye key point detection model is obtained. The trained eye key point detection model can both detect the eye key points in an image and detect the measured deviation corresponding to the image.

For the specific calculation of the current measured deviation, reference may be made to the calculation of the measured deviation in the eye key point labeling procedure described above, which is not repeated here.
In another embodiment of the present invention, after S302, the method may further include:

obtaining an image to be detected, where the image to be detected includes the eyes of a person to be detected;

inputting the image to be detected into the eye key point detection model, and determining the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the image to be detected.
In this embodiment, after the second electronic device obtains the trained eye key point detection model, it can use the trained model to detect the eye corner points and the equally divided eyelid points of the upper and lower eyelids of the eyes of the person to be detected in the image to be detected, together with the position information of those eye corner points and equally divided eyelid points in the image to be detected.

In one implementation, before the image to be detected is input into the eye key point detection model, the facial feature points of the face of the person to be detected may also be detected, where the facial feature points characterize the positions of the parts of that face, and those parts may include the nose, lips, eyebrows, eyes, jaw, cheeks, ears and forehead. The facial feature points of these parts may respectively include: feature points characterizing the position of the nose, such as points on the nose wings, nose bridge and nose tip; feature points characterizing the position of the lips, such as points at the mouth corners and on the outer edges of the lips; feature points characterizing the position of the eyebrows, such as points along the eyebrow edges; feature points characterizing the position of the eyes, such as eye corner points, eye socket points and pupil points; feature points characterizing the position of the jaw, such as points on the jaw contour, i.e. on the chin contour; feature points characterizing the position of the ears, such as points on the ear contours; and feature points characterizing the position of the forehead, such as points on the forehead contour, for example at the junction of the hair and the forehead.
Using these facial feature points, the region where the eyes are located is determined from the image to be detected and cut out from it, yielding the eye image corresponding to the image to be detected, referred to as the eye image to be detected. The eye image to be detected is then input into the trained eye key point detection model, which improves, to a certain extent, the accuracy of the detected eye corner points and equally divided eyelid points of the upper and lower eyelids of the eyes of the person to be detected.
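As a small sketch of this cropping step (the margin and the shape of the landmark array are assumptions; the disclosure only requires that the eye region be cut out based on the facial feature points):

    import numpy as np

    def crop_eye_region(image, eye_feature_points, margin=0.3):
        # Bounding box around the eye-related feature points, expanded by a relative margin.
        pts = np.asarray(eye_feature_points, dtype=np.float64)
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        pad_x, pad_y = (x1 - x0) * margin, (y1 - y0) * margin
        h, w = image.shape[:2]
        xa, ya = max(int(x0 - pad_x), 0), max(int(y0 - pad_y), 0)
        xb, yb = min(int(x1 + pad_x), w), min(int(y1 + pad_y), h)
        return image[ya:yb, xa:xb]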
In one implementation, when the eye image to be detected includes a left-eye image to be detected and a right-eye image to be detected, both may be corrected, i.e. processed so that the ordinate values in the position information of the two eye corner points in the left-eye image to be detected are the same, and so that the ordinate values in the position information of the two eye corner points in the right-eye image to be detected are the same. The corrected eye images to be detected are then input into the trained eye key point detection model, which improves, to a certain extent, the accuracy of the detected eye corner points and equally divided eyelid points of the upper and lower eyelids of the person's eyes, and lowers, to a certain extent, the detection difficulty for the trained model.

In one implementation, the corrected left-eye image to be detected or the corrected right-eye image to be detected is mirrored to obtain a mirrored eye image, and the mirrored eye image and the unmirrored eye image are spliced to obtain a spliced eye image. The spliced eye image is input into the trained eye key point detection model, so that the model can detect, in one pass, the eye key points and their position information in the mirrored eye image and in the unmirrored eye image. Subsequently, the position information of the eye key points in the mirrored eye image is mirrored back, so as to obtain the positions of those key points in the image before mirroring. In this way, the position information of the eye key points in the corrected left-eye image to be detected and in the corrected right-eye image to be detected is obtained.
In another embodiment of the present invention, after the step of inputting the image to be detected into the eye key point detection model and determining the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the image to be detected, the method may further include:

performing edge extraction on the image to be detected using the Sobel algorithm, to obtain a grayscale edge map corresponding to the image to be detected;

based on the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in the image to be detected and a preset curve fitting algorithm, determining the eyelid curves of the upper and lower eyelids of the person to be detected as the eyelid curves to be detected, and drawing the eyelid curves to be detected in the grayscale edge map, where the eyelid curves to be detected include: an upper eyelid curve to be detected characterizing the upper eyelid of the person to be detected and a lower eyelid curve to be detected characterizing the lower eyelid of the person to be detected;

based on the equally divided eyelid points in the eyelid curves to be detected, determining multiple groups of reference points in the grayscale edge map, where each group of reference points includes points corresponding to the equally divided eyelid points in the eyelid curves to be detected;

for each group of reference points, determining the reference curve corresponding to that group based on the group of reference points, the eye corner points in the eyelid curves to be detected and the preset curve fitting algorithm, and drawing the reference curve corresponding to each group of reference points in the grayscale edge map, where the reference curve corresponding to each group of reference points includes: a reference upper eyelid curve characterizing the upper eyelid of the person to be detected and a reference lower eyelid curve characterizing the lower eyelid of the person to be detected;

in the grayscale edge map, for each first upper eyelid curve, determining the sum of the pixel values of the pixels corresponding to that first upper eyelid curve, where the first upper eyelid curves include: the reference upper eyelid curve corresponding to each group of reference points and the upper eyelid curve to be detected;

from the sums of the pixel values of the pixels corresponding to the first upper eyelid curves, determining the largest sum, and determining the first upper eyelid curve corresponding to the largest sum as the target upper eyelid curve characterizing the upper eyelid of the person to be detected;

in the grayscale edge map, for each first lower eyelid curve, determining the sum of the pixel values of the pixels corresponding to that first lower eyelid curve, where the first lower eyelid curves include: the reference lower eyelid curve corresponding to each group of reference points and the lower eyelid curve to be detected;

from the sums of the pixel values of the pixels corresponding to the first lower eyelid curves, determining the largest sum, and determining the first lower eyelid curve corresponding to the largest sum as the target lower eyelid curve characterizing the lower eyelid of the person to be detected;

based on the principle of mathematical integration, integrating the target eyelid curves to determine a third curve length of the target upper eyelid curve and a fourth curve length of the target lower eyelid curve, and determining multiple reference eyelid points from the target upper eyelid curve and from the target lower eyelid curve respectively;

based on the third curve length of the target upper eyelid curve, the fourth curve length of the target lower eyelid curve, the multiple reference eyelid points and a preset number of equal division points, determining, from the target upper eyelid curve and the target lower eyelid curve respectively, the preset number of equal division points minus one equally divided upper eyelid points and the preset number of equal division points minus one equally divided lower eyelid points.
In this embodiment, in order to determine the upper and lower eyelids of the eyes of the person to be detected in the image to be detected more accurately, after obtaining the position information of the eye corner points and of the equally divided eyelid points of the upper and lower eyelids in the image to be detected output by the trained eye key point detection model, the Sobel algorithm can further be used to perform edge extraction on the image to be detected, obtaining the grayscale edge map corresponding to the image to be detected, where the Sobel algorithm can extract all edges in the image to be detected. In the grayscale edge map, the pixel value of the pixels corresponding to the upper and lower eyelids of the eyes may be 255, while the pixel value of pixels corresponding to parts that are not on an edge may be 0, so that the parts of the image to be detected that lie on edges, such as the upper and lower eyelids of the eyes, are characterized.
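As an illustrative sketch of this edge extraction step using OpenCV (the kernel size and threshold are assumptions; the disclosure only specifies the Sobel algorithm and a grayscale edge map in which edge pixels take the value 255 and other pixels 0):

    import cv2
    import numpy as np

    def grayscale_edge_map(image, threshold=100):
        # Sobel gradients in x and y, combined into a magnitude, then binarized to 0/255.
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        magnitude = np.sqrt(gx ** 2 + gy ** 2)
        return np.where(magnitude > threshold, 255, 0).astype(np.uint8)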
Based on the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in the image to be detected and the preset curve fitting algorithm, the eyelid curves of the upper and lower eyelids of the person to be detected can be determined as the eyelid curves to be detected. It can be understood that each pixel in the grayscale edge map has a one-to-one correspondence with a pixel in the image to be detected; based on this correspondence, the determined eyelid curves to be detected are drawn in the grayscale edge map, and the eye corner points and their position information, as well as the equally divided eyelid points and their position information, can be determined on the eyelid curves to be detected drawn in the grayscale edge map.
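A minimal sketch of such a curve fit (the polynomial form and degree are assumptions; the disclosure only refers to a preset curve fitting algorithm) might be:

    import numpy as np

    def fit_eyelid_curve(corner_points, eyelid_points, degree=2):
        # Fit y = f(x) through the two eye corner points and the equally divided eyelid
        # points of one eyelid; returns a callable polynomial that can be sampled
        # column by column to draw the curve on the grayscale edge map.
        pts = np.vstack([corner_points, eyelid_points]).astype(np.float64)
        coeffs = np.polyfit(pts[:, 0], pts[:, 1], degree)
        return np.poly1d(coeffs)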
Based on the equally divided eyelid points in the eyelid curves to be detected, multiple groups of reference points are determined around the eyelid curves to be detected in the grayscale edge map, where each group of reference points includes points corresponding to the equally divided eyelid points in the eyelid curves to be detected. As shown in Fig. 4, at least one group of reference points may be determined above and below the eyelid curves to be detected in the grayscale edge map. It can be understood that the eyelid curves to be detected include: the upper eyelid curve to be detected, characterizing the upper eyelid of the eye, and the lower eyelid curve to be detected, characterizing the lower eyelid. Determining at least one group of reference points above and below the eyelid curves to be detected in the grayscale edge map may be: determining at least one group of reference points above and below the upper eyelid curve to be detected, and determining at least one group of reference points above and below the lower eyelid curve to be detected. In Fig. 4, the white curves indicate the positions of the upper eyelid and the lower eyelid of the eye in the grayscale edge map, and the gray curves indicate the upper eyelid curve to be detected and the lower eyelid curve to be detected, where the solid white points on the gray curves indicate the equally divided eyelid points and eye corner points of the eyelid curves to be detected. As shown in Fig. 4, the equally divided eyelid points of the upper eyelid curve to be detected correspond to two groups of reference points, shown as white hollow points and gray hollow points, and the equally divided eyelid points of the lower eyelid curve to be detected likewise correspond to two groups of reference points, shown as white hollow points and gray hollow points.
For each group of reference points, the reference curve corresponding to that group is determined based on the group of reference points, the eye corner points in the eyelid curves to be detected and the preset curve fitting algorithm, and the reference curve corresponding to each group of reference points is drawn in the grayscale edge map. In the grayscale edge map, for each first upper eyelid curve, the sum of the pixel values of the pixels corresponding to that first upper eyelid curve is determined; that is, the sum of the pixel values of the pixels corresponding to the reference upper eyelid curve of each group of reference points, and the sum of the pixel values of the pixels corresponding to the upper eyelid curve to be detected, are determined.

Since the pixel value of the pixels at the positions of the upper and lower eyelids of the eye in the grayscale edge map is 255, among the sums of pixel values corresponding to the first upper eyelid curves, a larger sum indicates that the corresponding first upper eyelid curve fits the upper eyelid of the eye in the grayscale edge map more closely. Accordingly, the first upper eyelid curve with the largest sum of pixel values is determined as the target upper eyelid curve characterizing the upper eyelid of the person to be detected.

Similarly, in the grayscale edge map, for each first lower eyelid curve, the sum of the pixel values of the pixels corresponding to that first lower eyelid curve is determined; that is, the sum of the pixel values of the pixels corresponding to the reference lower eyelid curve of each group of reference points, and the sum of the pixel values of the pixels corresponding to the lower eyelid curve to be detected, are determined. Since the pixel value of the pixels at the positions of the upper and lower eyelids of the eye in the grayscale edge map is 255, among the sums of pixel values corresponding to the first lower eyelid curves, a larger sum indicates that the corresponding first lower eyelid curve fits the lower eyelid of the eye in the grayscale edge map more closely. Accordingly, the first lower eyelid curve with the largest sum of pixel values is determined as the target lower eyelid curve characterizing the lower eyelid of the person to be detected.
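As a sketch of this selection step (candidate curves are assumed to be the callables returned by the fitting sketch above; the per-column sampling is an assumption about how a drawn curve maps to pixels):

    import numpy as np

    def curve_pixel_sum(edge_map, curve, x_start, x_end):
        # Sum the edge-map pixel values along y = curve(x), one sample per column
        # between the two eye corner points.
        h, w = edge_map.shape[:2]
        xs = np.clip(np.arange(int(x_start), int(x_end) + 1), 0, w - 1)
        ys = np.clip(np.round(curve(xs)).astype(int), 0, h - 1)
        return int(edge_map[ys, xs].sum())

    def pick_target_curve(edge_map, candidate_curves, x_start, x_end):
        # The candidate (reference curve or curve to be detected) with the largest
        # pixel sum is taken as the target eyelid curve.
        sums = [curve_pixel_sum(edge_map, c, x_start, x_end) for c in candidate_curves]
        return candidate_curves[int(np.argmax(sums))]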
Then, based on the principle of mathematical integration, the target eyelid curves are integrated to determine the third curve length of the target upper eyelid curve and the fourth curve length of the target lower eyelid curve. Points are taken densely along the target upper eyelid curve and the target lower eyelid curve, for example a preset number of points from each; that is, multiple reference eyelid points are determined from the target upper eyelid curve and multiple reference eyelid points are determined from the target lower eyelid curve. Then, based on the third curve length of the target upper eyelid curve, the multiple reference eyelid points on the target upper eyelid curve and the preset number of equal division points, the preset number of equal division points minus one equally divided upper eyelid points are determined from the target upper eyelid curve. Based on the fourth curve length of the target lower eyelid curve, the multiple reference eyelid points on the target lower eyelid curve and the preset number of equal division points, the preset number of equal division points minus one equally divided lower eyelid points are determined from the target lower eyelid curve.
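A compact sketch of this length computation and equal division (dense sampling stands in for the integration; the sample count is an assumption):

    import numpy as np

    def equally_divided_points(curve, x_start, x_end, num_divisions, samples=1000):
        # Approximate the arc length of y = curve(x) by dense sampling, then return
        # num_divisions - 1 points that split the eyelid curve into equal-length arcs.
        xs = np.linspace(x_start, x_end, samples)
        ys = curve(xs)
        seg = np.sqrt(np.diff(xs) ** 2 + np.diff(ys) ** 2)   # segment lengths
        cum = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
        total = cum[-1]
        targets = [total * k / num_divisions for k in range(1, num_divisions)]
        idx = np.searchsorted(cum, targets)
        return list(zip(xs[idx], ys[idx]))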
For the process of determining, from the target upper eyelid curve, the preset number of equal division points minus one equally divided upper eyelid points based on the third curve length of the target upper eyelid curve, the multiple reference eyelid points on the target upper eyelid curve and the preset number of equal division points, reference may be made to the process, described above, of determining the preset number of equal division points minus one equally divided upper eyelid points from the marked upper eyelid curve based on the first curve length of the marked upper eyelid curve, the multiple eyelid points to be used and the preset number of equal division points. Likewise, for the process of determining, from the target lower eyelid curve, the preset number of equal division points minus one equally divided lower eyelid points based on the fourth curve length of the target lower eyelid curve, the multiple reference eyelid points on the target lower eyelid curve and the preset number of equal division points, reference may be made to the process, described above, of determining the preset number of equal division points minus one equally divided lower eyelid points from the marked lower eyelid curve based on the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used and the preset number of equal division points, and this is not repeated here.
Corresponding to the above method embodiments, Fig. 5 is a schematic structural diagram of an eye key point labeling apparatus provided by an embodiment of the present invention. The apparatus includes:

a first obtaining module 510, configured to obtain face images and the marked eyelid curve corresponding to each face image, where each face image is marked with the marked eye corner points of the eyes it contains and the marked eyelid points of the upper and lower eyelids, and each marked eyelid curve includes: a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, generated based on the corresponding marked eyelid points and marked eye corner points;

a first determination module 520, configured to, for each marked eyelid curve, integrate the marked eyelid curve based on the principle of mathematical integration to determine a first curve length of the marked upper eyelid curve and a second curve length of the marked lower eyelid curve in that marked eyelid curve, and to determine multiple eyelid points to be used from the marked upper eyelid curve and from the marked lower eyelid curve respectively;

a second determination module 530, configured to, for each marked eyelid curve, determine, from the marked upper eyelid curve and the marked lower eyelid curve in that marked eyelid curve, a preset number of equal division points minus one equally divided upper eyelid points and the preset number of equal division points minus one equally divided lower eyelid points respectively, based on the first curve length of the marked upper eyelid curve, the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used and the preset number of equal division points;

a third determination module 540, configured to determine the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image as the eye key points corresponding to that face image.
By applying the embodiments of the present invention, the marked eyelid curve corresponding to a face image can be integrated to determine the first curve length of the marked upper eyelid curve and the second curve length of the marked lower eyelid curve, and points can be taken densely along the marked upper eyelid curve and the marked lower eyelid curve so as to determine multiple eyelid points to be used from each of them. Then, based on the first curve length of the marked upper eyelid curve, the multiple eyelid points to be used on that curve and the preset number of equal division points, the preset number of equal division points minus one equally divided upper eyelid points are determined from the marked upper eyelid curve; and based on the second curve length of the marked lower eyelid curve, the multiple eyelid points to be used on that curve and the preset number of equal division points, the preset number of equal division points minus one equally divided lower eyelid points are determined from the marked lower eyelid curve. In this way, equally divided eyelid points with relatively obvious semantic features are labeled semi-automatically on the upper and lower eyelids of the eyes contained in the face image, so that eye key points with relatively obvious semantic features are labeled and the labeling efficiency is improved to a certain extent.
In another embodiment of the present invention, the first obtaining module 510 is specifically configured to:

for each face image, fit an upper eyelid curve characterizing the upper eyelid based on the marked eye corner points in that face image, the marked eyelid points of the upper eyelid and a preset curve fitting algorithm; and fit a lower eyelid curve characterizing the lower eyelid based on the marked eye corner points in that face image, the marked eyelid points of the lower eyelid and the preset curve fitting algorithm, so as to obtain the eyelid curves corresponding to the face image.
In another embodiment of the present invention, the apparatus further includes: a cropping module (not shown in the figures), configured to, after the marked eye corner points, equally divided upper eyelid points and equally divided lower eyelid points in each face image are determined as the eye key points corresponding to that face image, crop, for each face image and based on the eye key points in that face image, the image of the region where the eyes are located out of the face image, so as to obtain an eye image labeled with eye key points;

a fourth determination module (not shown in the figures), configured to determine the eye images and their corresponding calibration information as training data for an eye key point detection model used to detect the eye key points of the eyes in an image, where the calibration information includes the position information of the eye key points in the corresponding eye image.
In another embodiment of the present invention, the third determination module 540 is specifically configured to: obtain the true eye opening-closing length corresponding to each face image; obtain the measured eye opening-closing length corresponding to each face image, where the measured eye opening-closing length is a length determined based on the upper and lower eyelids of the eye in the target three-dimensional face model corresponding to the face image, the target three-dimensional face model is a face model determined based on the facial feature points in the corresponding face image and a preset three-dimensional face model, and the target three-dimensional face model includes the upper and lower eyelids of the eye constructed based on the marked eye corner points, equally divided upper eyelid points and equally divided lower eyelid points in that face image; for each face image, calculate the ratio of the true eye opening-closing length corresponding to that face image to the measured eye opening-closing length, as the measured deviation corresponding to that face image; and determine the eye images and their corresponding calibration information as training data for an eye key point detection model used to detect the equally divided eyelid points of the upper and lower eyelids of the eyes in an image, where the calibration information includes: the position information of the eye key points labeled in the corresponding eye image, and the measured deviation corresponding to the face image to which that eye image corresponds.
In another embodiment of the present invention, the apparatus further includes: a labeling module (not shown in the figures), configured to perform, before the face images and the marked eyelid curve corresponding to each face image are obtained, the process of labeling the upper and lower eyelids of the eyes of the person's face in each face image. The labeling module includes: an obtaining and display unit (not shown in the figures), configured to obtain and display a first face image, where the first face image contains the eyes of a person's face and is one of the face images; a receiving unit (not shown in the figures), configured to receive labeling instructions triggered by an annotator for the upper and lower eyelids of the eyes in the first face image, where each labeling instruction carries the position information of the labeled point; a determination unit (not shown in the figures), configured to, if it is detected that the annotator has triggered labeling instructions for a specified eyelid in the first face image at least twice, determine a specified eyelid curve characterizing the specified eyelid based on the position information of the labeled points carried in the at least two labeling instructions triggered by the annotator and a preset curve fitting algorithm, where the specified eyelid is the upper eyelid or the lower eyelid of the eye in the first face image; and a display unit (not shown in the figures), configured to display the specified eyelid curve in the first face image, so that the annotator can check whether the labeled points are eyelid points or eye corner points on the specified eyelid.
Corresponding to the above method embodiments, as shown in Fig. 6, an embodiment of the present invention provides a training apparatus for an eye key point detection model. The apparatus includes:

a second obtaining module 610, configured to obtain training data, where the training data includes eye images labeled with the equally divided eyelid points of the upper and lower eyelids of the eyes and with the marked eye corner points, and the calibration information corresponding to each eye image. The calibration information includes: the position information of the equally divided eyelid points and marked eye corner points labeled in the corresponding eye image. The equally divided eyelid points include: the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid of the eye in the face image, determined based on the marked eyelid curve corresponding to the face image and the principle of mathematical integration. The marked eyelid curve includes: a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, generated based on the marked eye corner points of the eye and the marked eyelid points of the upper and lower eyelids labeled in the corresponding face image. Each eye image is an image of the region where the eyes are located, cut out from the corresponding face image;

an input module 620, configured to input the eye images, together with the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image, into an initial eye key point detection model, so as to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image.
In another embodiment of the present invention, the apparatus further includes: a correction module (not shown in the figure), configured to, before the eye images, together with the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image, are input into the initial eye key point detection model to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, perform correction processing on each eye image to obtain a corrected image, where the correction processing is processing that makes the ordinates in the position information of the marked eye corner points in the eye image all the same;

an update module (not shown in the figure), configured to update, based on the position information of the equally divided eyelid points and marked eye corner points in each corrected image, the position information of the equally divided eyelid points and marked eye corner points included in the calibration information corresponding to that corrected image;

the input module 620 being specifically configured to: input each corrected image, together with the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each corrected image, into the initial eye key point detection model, so as to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image.
In another embodiment of the present invention, the eye images include a left-eye image and a right-eye image corresponding to the left-eye image. The apparatus further includes: a mirroring module (not shown in the figure), configured to, before each corrected image, together with the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each corrected image, is input into the initial eye key point detection model to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, mirror the left-eye image, or the right-eye image corresponding to the left-eye image, to obtain a mirrored image; a splicing module (not shown in the figure), configured to splice the mirrored image and the unmirrored image to obtain a spliced image, where, if the left-eye image is mirrored, the unmirrored image is the right-eye image corresponding to the left-eye image, and if the right-eye image corresponding to the left-eye image is mirrored, the unmirrored image is the left-eye image; the input module 620 being specifically configured to: input each spliced image, together with the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each spliced image, into the initial eye key point detection model, so as to train an eye key point detection model for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, where the calibration information includes: the position information of the equally divided eyelid points and marked eye corner points in the mirrored image after mirroring, and the position information of the equally divided eyelid points and marked eye corner points in the unmirrored image.
In another embodiment of the present invention, the calibration information corresponding to each eye image further includes: a measured deviation corresponding to the eye image, the measured deviation being the ratio of the real eye opening-closing length corresponding to the eye image to the measured eye opening-closing length; the measured eye opening-closing length is a length determined based on the upper and lower eyelids of the eye in a target three-dimensional face model corresponding to the face image, the target three-dimensional face model being a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model, and including the upper and lower eyelids of the eye constructed from the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in the face image;
the input module 620 is specifically configured to: input the eye images, the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image, and the measured deviation corresponding to each eye image, into the initial eye key point detection model, so as to train an eye key point detection model that detects the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image and also detects the measured deviation corresponding to the image.
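To make this extra regression target concrete, the measured deviation attached to each eye image is just a scalar ratio; a toy calculation with made-up numbers:

```python
# Illustrative numbers only: the real opening length comes from ground truth, the
# measured one from the eyelids of the fitted 3D face model for the same image.
real_open_close = 8.4        # real eye opening-closing length for this image
measured_open_close = 7.0    # opening-closing length measured on the 3D model
measured_deviation = real_open_close / measured_open_close   # 1.2, stored in the calibration info
```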
本发明的另一实施例中,所述装置还包括:第三获得模块(图中未示出),被配置为在所述将所述眼睛图像,以及每一眼睛图像对应的标定信息包括的等分眼睑点和标注眼角点的位置信息,输入初始的眼部关键点检测模型,以训练得到用于检测图像中眼睛的上下眼睑中的等分眼睑点和眼角点的眼部关键点检测模型之后,获得待检测图像,其中,所述待检测图像包括待检测人员的眼睛;第五确定模块(图中未示出),被配置为将所述待检测图像输入所述眼部关键点检测模型,确定所述待检测图像中待检测人员的眼睛的上下眼睑的等分眼睑点和眼角点。In another embodiment of the present invention, the device further includes: a third obtaining module (not shown in the figure) configured to include the eye image and the calibration information corresponding to each eye image in the Divide the eyelid points and label the position information of the eye corner points, and input the initial eye key point detection model to train to obtain the eye key point detection model for detecting the divided eyelid points and the corner points of the upper and lower eyelids in the image Afterwards, an image to be detected is obtained, wherein the image to be detected includes the eyes of the person to be detected; a fifth determining module (not shown in the figure) is configured to input the image to be detected into the key point detection of the eye The model determines the equal-divided eyelid points and corner points of the upper and lower eyelids of the eyes of the person to be inspected in the image to be inspected.
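A minimal sketch of how the trained model might be applied at this stage, assuming it is a PyTorch module that maps a normalized eye crop to a flat vector of (x, y) coordinates; the tensor layout and names are assumptions, not the patent's specification:

```python
import torch

def detect_eye_keypoints(model, eye_crop):
    """`eye_crop` is an HxWx3 uint8 numpy array of the eye region to be detected."""
    x = torch.from_numpy(eye_crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        coords = model(x)                  # assumed shape: (1, num_points * 2)
    return coords.view(-1, 2).numpy()      # (num_points, 2) array of (x, y) key points
```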
In another embodiment of the present invention, the apparatus further includes: an extraction module (not shown in the figure), configured to, after the image to be detected has been input into the eye key point detection model and the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected have been determined, perform edge extraction on the image to be detected using the Sobel algorithm to obtain a grayscale edge map corresponding to the image to be detected; a first determining-and-drawing module (not shown in the figure), configured to determine, based on the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in the image to be detected and a preset curve fitting algorithm, the eyelid curves of the upper and lower eyelids of the person to be detected as eyelid curves to be detected, and to draw the eyelid curves to be detected in the grayscale edge map, where the eyelid curves to be detected include an upper eyelid curve to be detected characterizing the upper eyelid of the person to be detected and a lower eyelid curve to be detected characterizing the lower eyelid of the person to be detected; a sixth determining module (not shown in the figure), configured to determine multiple groups of reference points in the grayscale edge map based on the equally divided eyelid points in the eyelid curves to be detected, where each group of reference points includes points corresponding to the equally divided eyelid points of the eyelid curves to be detected; a second determining-and-drawing module (not shown in the figure), configured to, for each group of reference points, determine the reference curve corresponding to that group based on the group of reference points, the eye corner points in the eyelid curves to be detected and the preset curve fitting algorithm, and to draw the reference curve corresponding to each group of reference points in the grayscale edge map, where each reference curve includes a reference upper eyelid curve characterizing the upper eyelid of the person to be detected and a reference lower eyelid curve characterizing the lower eyelid of the person to be detected; a seventh determining module (not shown in the figure), configured to determine, in the grayscale edge map and for each first upper eyelid curve, the sum of the pixel values of the pixels corresponding to that first upper eyelid curve, where the first upper eyelid curves include the reference upper eyelid curve corresponding to each group of reference points and the upper eyelid curve to be detected; an eighth determining module (not shown in the figure), configured to determine the largest of those sums and to take the first upper eyelid curve corresponding to the largest sum as the target upper eyelid curve characterizing the upper eyelid of the person to be detected; a ninth determining module (not shown in the figure), configured to determine, in the grayscale edge map and for each first lower eyelid curve, the sum of the pixel values of the pixels corresponding to that first lower eyelid curve, where the first lower eyelid curves include the reference lower eyelid curve corresponding to each group of reference points and the lower eyelid curve to be detected; a tenth determining module (not shown in the figure), configured to determine the largest of those sums and to take the first lower eyelid curve corresponding to the largest sum as the target lower eyelid curve characterizing the lower eyelid of the person to be detected; an eleventh determining module (not shown in the figure), configured to integrate the target eyelid curves based on the principle of mathematical integration to determine a third curve length of the target upper eyelid curve and a fourth curve length of the target lower eyelid curve, and to determine a plurality of reference eyelid points from the target upper eyelid curve and the target lower eyelid curve respectively; and a twelfth determining module (not shown in the figure), configured to determine, based on the third curve length of the target upper eyelid curve, the fourth curve length of the target lower eyelid curve, the plurality of reference eyelid points and a preset number of equal-division points, from the target upper eyelid curve and the target lower eyelid curve respectively, a number of equally divided upper eyelid points equal to the preset number of equal-division points minus one and a number of equally divided lower eyelid points equal to the preset number of equal-division points minus one.
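These modules chain several standard numerical steps: Sobel edge extraction, scoring candidate eyelid curves by the edge energy their pixels cover, and splitting the winning curve into arcs of equal length. The sketch below shows those steps under simple assumptions (gradient-magnitude edge map, integer pixel samples along each drawn curve, dense sampling for the arc-length integral); every function name is illustrative rather than the patent's API.

```python
import cv2
import numpy as np

def sobel_edge_map(gray):
    """Grayscale edge map of the image to be detected (gradient magnitude)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)

def pick_target_curve(edge_map, candidate_curves):
    """Each candidate is an (N, 2) integer array of (x, y) samples along one drawn
    curve; keep the candidate whose pixels sum to the largest edge value."""
    sums = [edge_map[c[:, 1], c[:, 0]].sum() for c in candidate_curves]
    return candidate_curves[int(np.argmax(sums))]

def equal_division_points(curve_points, k):
    """Approximate the curve integral as a piecewise arc length and return the
    k-1 sampled points closest to splitting the curve into k equal-length arcs."""
    pts = np.asarray(curve_points, dtype=float)
    seg = np.hypot(np.diff(pts[:, 0]), np.diff(pts[:, 1]))   # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])            # cumulative arc length
    targets = cum[-1] * np.arange(1, k) / k                  # equal-length positions
    idx = np.searchsorted(cum, targets)
    return pts[idx]
```

The same equal-division routine mirrors the step used during labeling in the method embodiments, applied here to the refined target eyelid curves.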
The foregoing apparatus embodiments correspond to the method embodiments and have the same technical effects. The apparatus embodiments are derived from the method embodiments; for details, refer to the method embodiments, which are not repeated here.
A person of ordinary skill in the art can understand that the drawings are only schematic diagrams of one embodiment, and that the modules or processes in the drawings are not necessarily required for implementing the present invention.
A person of ordinary skill in the art can understand that the modules of the apparatus in an embodiment may be distributed in the apparatus as described in the embodiment, or may, with corresponding changes, be located in one or more apparatuses different from that embodiment. The modules of the above embodiments may be combined into one module or further split into multiple sub-modules.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A method for labeling eye key points, comprising:
    obtaining face images and a marked eyelid curve corresponding to each face image, wherein each face image is marked with the marked eye corner points of the eyes it contains and the marked eyelid points of the upper and lower eyelids, and each marked eyelid curve includes: a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, both generated based on the corresponding marked eyelid points and marked eye corner points;
    for each marked eyelid curve, integrating the marked eyelid curve based on the principle of mathematical integration to determine a first curve length of the marked upper eyelid curve and a second curve length of the marked lower eyelid curve in the marked eyelid curve, and determining a plurality of eyelid points to be used from the marked upper eyelid curve and the marked lower eyelid curve respectively;
    for each marked eyelid curve, based on the first curve length of the marked upper eyelid curve, the second curve length of the marked lower eyelid curve, the plurality of eyelid points to be used and a preset number of equal-division points, determining, from the marked upper eyelid curve and the marked lower eyelid curve respectively, a number of equally divided upper eyelid points equal to the preset number of equal-division points minus one and a number of equally divided lower eyelid points equal to the preset number of equal-division points minus one;
    determining the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image as the eye key points corresponding to that face image.
  2. The method according to claim 1, wherein after the step of determining the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image as the eye key points corresponding to that face image, the method further comprises:
    for each face image, based on the eye key points in the face image, cropping the region containing the eyes from the face image to obtain an eye image marked with the eye key points;
    determining the eye images and their corresponding calibration information as training data for an eye key point detection model used for detecting the eye key points of the eyes in an image, wherein the calibration information includes the position information of the eye key points in the corresponding eye image.
  3. The method according to claim 2, wherein the step of determining the eye images and their corresponding calibration information as training data for an eye key point detection model used for detecting the eye key points of the eyes in an image comprises:
    obtaining a real eye opening-closing length corresponding to each face image;
    obtaining a measured eye opening-closing length corresponding to each face image, wherein the measured eye opening-closing length is a length determined based on the upper and lower eyelids of the eye in a target three-dimensional face model corresponding to the face image, the target three-dimensional face model being a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model, and including the upper and lower eyelids of the eye constructed from the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in the face image;
    for each face image, calculating the ratio of the real eye opening-closing length corresponding to the face image to the measured eye opening-closing length as the measured deviation corresponding to the face image;
    determining the eye images and their corresponding calibration information as training data for the eye key point detection model used for detecting the equally divided eyelid points of the upper and lower eyelids of the eyes in an image, wherein the calibration information includes the position information of the eye key points marked in the corresponding eye image and the measured deviation corresponding to the face image from which that eye image was cropped.
  4. The method according to any one of claims 1 to 3, wherein before the step of obtaining face images and the marked eyelid curve corresponding to each face image, the method further comprises:
    a process of labeling the upper and lower eyelids of the eyes in each face image, wherein, for each face image, the following steps are performed to label the upper and lower eyelids of the eyes of the person's face in the face image:
    obtaining and displaying a first face image, wherein the first face image contains the eyes of a person's face and is one of the face images;
    receiving labeling instructions triggered by an annotator for the upper and lower eyelids of the eyes in the first face image, wherein each labeling instruction carries position information of the labeled point;
    if it is detected that the annotator has triggered labeling instructions for a specified eyelid in the first face image at least twice, determining a specified eyelid curve characterizing the specified eyelid based on the position information of the labeled points carried by the at least two labeling instructions and a preset curve fitting algorithm, wherein the specified eyelid is: the upper eyelid and the lower eyelid of the eye in the first face image;
    displaying the specified eyelid curve in the first face image, so that the annotator can check whether the labeled points are eyelid points or eye corner points on the specified eyelid.
  5. A method for training an eye key point detection model, comprising:
    obtaining training data, wherein the training data includes eye images marked with the equally divided eyelid points of the upper and lower eyelids of the eyes and with the marked eye corner points, and calibration information corresponding to each eye image, the calibration information including the position information of the equally divided eyelid points and the marked eye corner points marked in the corresponding eye image; the equally divided eyelid points include the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid of the eye in the face image, determined based on the marked eyelid curve corresponding to the face image and the principle of mathematical integration; the marked eyelid curve includes a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, generated based on the marked eye corner points of the eyes and the marked eyelid points of the upper and lower eyelids marked in the corresponding face image; and each eye image is an image of the region containing the eyes cropped from the corresponding face image;
    inputting the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image into an initial eye key point detection model, so as to train an eye key point detection model used for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image.
  6. The method according to claim 5, wherein the calibration information corresponding to each eye image further includes: a measured deviation corresponding to the eye image, the measured deviation being the ratio of the real eye opening-closing length corresponding to the eye image to the measured eye opening-closing length; the measured eye opening-closing length is a length determined based on the upper and lower eyelids of the eye in a target three-dimensional face model corresponding to the face image, the target three-dimensional face model being a face model determined based on the face feature points in the corresponding face image and a preset three-dimensional face model, and including the upper and lower eyelids of the eye constructed from the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in the face image;
    the step of inputting the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model, so as to train the eye key point detection model used for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, comprises:
    inputting the eye images, the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image, and the measured deviation corresponding to each eye image into the initial eye key point detection model, so as to train an eye key point detection model, wherein the eye key point detection model is used for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image and for detecting the measured deviation corresponding to the image.
  7. The method according to claim 5, wherein after the step of inputting the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image into the initial eye key point detection model, so as to train the eye key point detection model used for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image, the method further comprises:
    obtaining an image to be detected, wherein the image to be detected contains the eyes of a person to be detected;
    inputting the image to be detected into the eye key point detection model, and determining the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the image to be detected.
  8. The method according to claim 7, wherein after the step of inputting the image to be detected into the eye key point detection model and determining the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes of the person to be detected in the image to be detected, the method further comprises:
    performing edge extraction on the image to be detected using the Sobel algorithm to obtain a grayscale edge map corresponding to the image to be detected;
    determining the eyelid curves of the upper and lower eyelids of the person to be detected as eyelid curves to be detected, based on the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in the image to be detected and a preset curve fitting algorithm, and drawing the eyelid curves to be detected in the grayscale edge map, wherein the eyelid curves to be detected include an upper eyelid curve to be detected characterizing the upper eyelid of the person to be detected and a lower eyelid curve to be detected characterizing the lower eyelid of the person to be detected;
    determining multiple groups of reference points in the grayscale edge map based on the equally divided eyelid points in the eyelid curves to be detected, wherein each group of reference points includes points corresponding to the equally divided eyelid points of the eyelid curves to be detected;
    for each group of reference points, determining the reference curve corresponding to that group based on the group of reference points, the eye corner points in the eyelid curves to be detected and the preset curve fitting algorithm, and drawing the reference curve corresponding to each group of reference points in the grayscale edge map, wherein the reference curve corresponding to each group of reference points includes a reference upper eyelid curve characterizing the upper eyelid of the person to be detected and a reference lower eyelid curve characterizing the lower eyelid of the person to be detected;
    in the grayscale edge map, for each first upper eyelid curve, determining the sum of the pixel values of the pixels corresponding to that first upper eyelid curve, wherein the first upper eyelid curves include the reference upper eyelid curve corresponding to each group of reference points and the upper eyelid curve to be detected;
    determining, from the sums of pixel values corresponding to the first upper eyelid curves, the largest sum, and determining the first upper eyelid curve corresponding to the largest sum as the target upper eyelid curve characterizing the upper eyelid of the person to be detected;
    in the grayscale edge map, for each first lower eyelid curve, determining the sum of the pixel values of the pixels corresponding to that first lower eyelid curve, wherein the first lower eyelid curves include the reference lower eyelid curve corresponding to each group of reference points and the lower eyelid curve to be detected;
    determining, from the sums of pixel values corresponding to the first lower eyelid curves, the largest sum, and determining the first lower eyelid curve corresponding to the largest sum as the target lower eyelid curve characterizing the lower eyelid of the person to be detected;
    integrating the target eyelid curves based on the principle of mathematical integration to determine a third curve length of the target upper eyelid curve and a fourth curve length of the target lower eyelid curve, and determining a plurality of reference eyelid points from the target upper eyelid curve and the target lower eyelid curve respectively;
    based on the third curve length of the target upper eyelid curve, the fourth curve length of the target lower eyelid curve, the plurality of reference eyelid points and the preset number of equal-division points, determining, from the target upper eyelid curve and the target lower eyelid curve respectively, a number of equally divided upper eyelid points equal to the preset number of equal-division points minus one and a number of equally divided lower eyelid points equal to the preset number of equal-division points minus one.
  9. An apparatus for labeling eye key points, comprising:
    a first obtaining module, configured to obtain face images and a marked eyelid curve corresponding to each face image, wherein each face image is marked with the marked eye corner points of the eyes it contains and the marked eyelid points of the upper and lower eyelids, and each marked eyelid curve includes: a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, both generated based on the corresponding marked eyelid points and marked eye corner points;
    a first determining module, configured to, for each marked eyelid curve, integrate the marked eyelid curve based on the principle of mathematical integration to determine a first curve length of the marked upper eyelid curve and a second curve length of the marked lower eyelid curve in the marked eyelid curve, and to determine a plurality of eyelid points to be used from the marked upper eyelid curve and the marked lower eyelid curve respectively;
    a second determining module, configured to, for each marked eyelid curve, based on the first curve length of the marked upper eyelid curve, the second curve length of the marked lower eyelid curve, the plurality of eyelid points to be used and a preset number of equal-division points, determine, from the marked upper eyelid curve and the marked lower eyelid curve respectively, a number of equally divided upper eyelid points equal to the preset number of equal-division points minus one and a number of equally divided lower eyelid points equal to the preset number of equal-division points minus one;
    a third determining module, configured to determine the marked eye corner points, the equally divided upper eyelid points and the equally divided lower eyelid points in each face image as the eye key points corresponding to that face image.
  10. An apparatus for training an eye key point detection model, comprising:
    a second obtaining module, configured to obtain training data, wherein the training data includes eye images marked with the equally divided eyelid points of the upper and lower eyelids of the eyes and with the marked eye corner points, and calibration information corresponding to each eye image, the calibration information including the position information of the equally divided eyelid points and the marked eye corner points marked in the corresponding eye image; the equally divided eyelid points include the equally divided upper eyelid points of the upper eyelid and the equally divided lower eyelid points of the lower eyelid of the eye in the face image, determined based on the marked eyelid curve corresponding to the face image and the principle of mathematical integration; the marked eyelid curve includes a marked upper eyelid curve characterizing the upper eyelid and a marked lower eyelid curve characterizing the lower eyelid, generated based on the marked eye corner points of the eyes and the marked eyelid points of the upper and lower eyelids marked in the corresponding face image; and each eye image is an image of the region containing the eyes cropped from the corresponding face image;
    an input module, configured to input the eye images and the position information of the equally divided eyelid points and the marked eye corner points included in the calibration information corresponding to each eye image into an initial eye key point detection model, so as to train an eye key point detection model used for detecting the equally divided eyelid points and eye corner points of the upper and lower eyelids of the eyes in an image.
PCT/CN2019/108077 2019-06-21 2019-09-26 Eye key point labeling method and apparatus, and training method and apparatus for eye key point detection model WO2020252969A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910541988.5 2019-06-21
CN201910541988.5A CN110956071B (en) 2019-06-21 2019-06-21 Eye key point labeling and detection model training method and device

Publications (1)

Publication Number Publication Date
WO2020252969A1 true WO2020252969A1 (en) 2020-12-24

Family

ID=69975485

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/108077 WO2020252969A1 (en) 2019-06-21 2019-09-26 Eye key point labeling method and apparatus, and training method and apparatus for eye key point detection model

Country Status (2)

Country Link
CN (1) CN110956071B (en)
WO (1) WO2020252969A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221599B (en) * 2020-01-21 2022-06-10 魔门塔(苏州)科技有限公司 Eyelid curve construction method and device
CN113516705B (en) * 2020-04-10 2024-04-02 魔门塔(苏州)科技有限公司 Calibration method and device for hand key points
CN113743172B (en) * 2020-05-29 2024-04-16 魔门塔(苏州)科技有限公司 Personnel gazing position detection method and device
CN113723214B (en) * 2021-08-06 2023-10-13 武汉光庭信息技术股份有限公司 Face key point labeling method, system, electronic equipment and storage medium
CN113591815B (en) * 2021-09-29 2021-12-21 北京万里红科技有限公司 Method for generating canthus recognition model and method for recognizing canthus in eye image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101583971A (en) * 2006-12-04 2009-11-18 爱信精机株式会社 Eye detecting device, eye detecting method, and program
CN106203262A (en) * 2016-06-27 2016-12-07 辽宁工程技术大学 Eye shape classification method based on eyelid curve similarity and eye shape index

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091150B (en) * 2014-06-26 2019-02-26 浙江捷尚视觉科技股份有限公司 Human eye state judgment method based on regression
CN108229301B (en) * 2017-11-03 2020-10-09 北京市商汤科技开发有限公司 Eyelid line detection method and device and electronic equipment

Also Published As

Publication number Publication date
CN110956071A (en) 2020-04-03
CN110956071B (en) 2022-06-03

Similar Documents

Publication Publication Date Title
WO2020252969A1 (en) Eye key point labeling method and apparatus, and training method and apparatus for eye key point detection model
US10068128B2 (en) Face key point positioning method and terminal
US10255484B2 (en) Method and system for assessing facial skin health from a mobile selfie image
KR102591552B1 (en) Eyelid shape estimation using eye pose measurement
US20200103675A1 (en) Method, device, and computer program for virtually adjusting a spectacle frame
CN101339606B (en) Human face critical organ contour characteristic points positioning and tracking method and device
CN111325846B (en) Expression base determination method, avatar driving method, device and medium
US20220044491A1 (en) Three-dimensional face model generation method and apparatus, device, and medium
CN107358648A (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
JP7015152B2 (en) Processing equipment, methods and programs related to key point data
CN106920277A (en) Simulation beauty and shaping effect visualizes the method and system of online scope of freedom carving
CN106874861A (en) A kind of face antidote and system
CN108875485A (en) A kind of base map input method, apparatus and system
EP3446592B1 (en) Device and method for eyeliner-wearing guide
CN105118023B (en) Real-time video human face cartoon generation method based on human face characteristic point
CN109712080A (en) Image processing method, image processing apparatus and storage medium
EP3182362A1 (en) Method and sytem for evaluating fitness between eyeglasses wearer and eyeglasses worn thereby
CN104809638A (en) Virtual glasses trying method and system based on mobile terminal
CN103208133A (en) Method for adjusting face plumpness in image
CN110147729A (en) User emotion recognition methods, device, computer equipment and storage medium
WO2018079255A1 (en) Image processing device, image processing method, and image processing program
CN108463823A (en) A kind of method for reconstructing, device and the terminal of user's Hair model
CN104123562A (en) Human body face expression identification method and device based on binocular vision
CN108615256A (en) A kind of face three-dimensional rebuilding method and device
CN109993108B (en) Gesture error correction method, system and device under a kind of augmented reality environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19933800

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19933800

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.08.2022)
