CN110837757A - Face proportion calculation method, system, equipment and storage medium - Google Patents


Info

Publication number
CN110837757A
CN110837757A
Authority
CN
China
Prior art keywords
face
point
points
hairline
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810939046.8A
Other languages
Chinese (zh)
Inventor
冯玉娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201810939046.8A priority Critical patent/CN110837757A/en
Publication of CN110837757A publication Critical patent/CN110837757A/en
Pending legal-status Critical Current

Classifications

    • G06V 40/162 (Human faces: detection, localisation, normalisation using pixel segmentation or colour matching)
    • G06F 18/285 (Pattern recognition: selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system)
    • G06V 10/267 (Image preprocessing: segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds)
    • G06V 40/165 (Human faces: detection, localisation, normalisation using facial parts and geometric relationships)
    • G06V 40/171 (Human faces: feature extraction, local features and components, facial parts, e.g. glasses, geometrical relationships)


Abstract

The invention provides a face proportion calculation method, system, equipment and storage medium. Based on a training model, the chin top, the left eye outer hairline point, the right eye outer hairline point, the two division points of the three courts and the four division points of the five eyes are detected in a face image; the top central hairline point in the face image is detected; the face height range is determined from the distance between the top central hairline point and the chin top; the face width range is determined from the distance between the left eye outer hairline point and the right eye outer hairline point; and the proportion information of the three courts and of the five eyes is calculated from the detected feature points. The method improves the detection accuracy of the key facial feature points, thereby improving the accuracy of the calculated three-courts-five-eyes proportion information.

Description

Face proportion calculation method, system, equipment and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a face proportion calculation method, a face proportion calculation system, face proportion calculation equipment and a storage medium.
Background
In recent years, facial plastic surgery has become common in daily life, and the importance of evaluating its results is self-evident. The three-courts-five-eyes model is a way of describing the features of the human face and one of the important bases of facial analysis; accurately obtaining the three-courts-five-eyes proportions of a face is therefore of great significance to the accuracy of facial analysis results.
Existing face three-courts-five-eyes proportions are mainly determined by the following three methods:
(1) directly measuring the actual distance between each point of the face, and calculating to obtain a proportional relation;
(2) directly measuring the actual distance between each point of the facial picture, and calculating to obtain a proportional relation;
(3) the method comprises the steps of obtaining a face image, detecting local features of the face image, positioning each feature point of the face, obtaining pixel distances among all points, and calculating to obtain a proportional relation.
Among the existing methods for determining the face three-courts-five-eyes proportions, method (1), directly measuring the actual distances between the facial feature points, suffers from large errors because both the measured person and the measurer introduce subjective interference, and the manual operation is inconvenient. In method (2), directly measuring the distances between points on a facial photograph, the measurer's subjective judgment still intervenes, human error cannot be avoided, and the manual operation is cumbersome. In method (3), acquiring a face image, detecting local facial features, locating the feature points and calculating the pixel distances between them, the two preliminary operations of detecting local facial features and locating feature points must be realized by particular algorithms to obtain accurate detection and positioning results; the outcome depends on the implementation and accuracy of those algorithms, and the accuracy of the measurement result needs further confirmation.
Disclosure of Invention
The invention aims to provide a face proportion calculation method, system, equipment and storage medium that solve the problems of excessive error and inconvenient operation caused by human factors in the prior-art methods that directly measure a face or a facial photograph, and that solve the over-reliance on local feature detection and feature point positioning algorithms in the prior method of detecting local facial features, locating each feature point and calculating the pixel distance ratios between points, thereby improving the accuracy of the calculated three-courts-five-eyes proportion information.
The embodiment of the invention provides a face proportion calculation method, which comprises the following steps:
detecting the chin top, the left eye outer hairline point, the right eye outer hairline point, the two division points of the three courts and the four division points of the five eyes in the face image based on a training model;
detecting a top central hairline point in the face image;
determining the height range of the human face according to the distance between the top central hairline point and the chin top;
determining a human face width range according to the distance between the left eye outer hairline point and the right eye outer hairline point;
calculating the proportion information of the three courts according to the two division points of the three courts and the face height range;
and calculating the proportion information of the five eyes according to the four division points of the five eyes and the face width range.
Optionally, before detecting the chin top, the left eye outer hairline point, the right eye outer hairline point, the two division points of the three courts and the four division points of the five eyes in the face image, the method further includes the following steps:
collecting a facial image, and detecting a face region in the facial image, wherein the height and the width of the face region are H_F and W_F respectively, and the central point of the face region is the point C_F;
taking the point C_F as the central point, H_F*(1+k1) as the height and W_F*(1+k2) as the width, extracting a face image from the facial image, wherein k1 and k2 are preset expansion ratios, k1 ∈ (0, 0.5) and k2 ∈ (0, 0.5).
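As a rough sketch of this expansion step (a hypothetical helper, not the patent's code; the clamping to the image bounds is our addition):

```python
def expand_and_crop(img_h, img_w, face_x, face_y, face_h, face_w, k1=0.15, k2=0.2):
    """Expand a detected face region (top-left face_x/face_y, size face_w x face_h)
    around its center point C_F by preset ratios k1 (height) and k2 (width),
    clamp to the image bounds, and return the crop rectangle (x, y, w, h)."""
    cx = face_x + face_w / 2.0          # center point C_F
    cy = face_y + face_h / 2.0
    new_h = face_h * (1 + k1)           # H_F * (1 + k1)
    new_w = face_w * (1 + k2)           # W_F * (1 + k2)
    x = max(0, int(round(cx - new_w / 2)))
    y = max(0, int(round(cy - new_h / 2)))
    w = min(img_w, int(round(cx + new_w / 2))) - x
    h = min(img_h, int(round(cy + new_h / 2))) - y
    return x, y, w, h
```

The returned rectangle can then be used to slice the face image out of the facial image.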
Optionally, the detecting the top central hairline point in the face image includes the following steps:
extracting skin color information in the face image to obtain a face skin color distribution map, and performing gray processing on the face skin color distribution map to obtain a skin color gray map;
searching points of which the gray value is higher than a preset gray threshold value and which are located in a preset top central hairline position range in the skin color gray map, and taking the points as alternative top central hairline points;
and the highest point in the alternative top central hairline points is used as the top central hairline point.
Optionally, the preset gray threshold is SkinGray * k3, where SkinGray is the average gray value of the skin color gray map and k3 is a threshold adjustment coefficient, k3 ∈ (0.3, 0.8).
Optionally, a coordinate system is established in the face image, the upper left corner point of the face image is taken as an origin, the width direction is the x-axis direction, and the height direction is the y-axis direction;
the coordinate values of the points within the preset top central hairline position range satisfy the following conditions:
3*faceWidth/8 < X_faceT < 5*faceWidth/8
Y_eyebrow/3 < Y_faceT < Y_eyebrow
wherein faceWidth is the width of the face image, Y_eyebrow is the y-axis coordinate of the eyebrow tail point, and X_faceT and Y_faceT are the x-axis and y-axis coordinates of the top central hairline point, respectively.
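The range test above can be sketched as a small predicate (function and symbol names are ours, with the origin at the top-left and y increasing downwards):

```python
def in_hairline_range(x, y, face_width, y_eyebrow):
    """Return True if point (x, y) lies inside the preset top central hairline
    position range: horizontally within the middle quarter of the face image,
    vertically between Y_eyebrow/3 and Y_eyebrow."""
    in_x = 3 * face_width / 8 < x < 5 * face_width / 8
    in_y = y_eyebrow / 3 < y < y_eyebrow
    return in_x and in_y
```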
Optionally, the division points of the three courts include the subnasal point, the left eyebrow tail point and the right eyebrow tail point, and the coordinate values of the points within the preset top central hairline position range satisfy the following conditions:
3*faceWidth/8 < X_faceT < 5*faceWidth/8
(Y_eyebrowL + Y_eyebrowR)/6 < Y_faceT < (Y_eyebrowL + Y_eyebrowR)/2
wherein faceWidth is the width of the face image, Y_eyebrowL and Y_eyebrowR are the y-axis coordinates of the left and right eyebrow tail points respectively, and X_faceT and Y_faceT are the x-axis and y-axis coordinates of the top central hairline point.
Optionally, the division points of the three courts include the subnasal point and the eyebrow tail point, and the division points of the five eyes include the left eye outer canthus, the left eye inner canthus, the right eye inner canthus and the right eye outer canthus.
Optionally, a coordinate system is established in the face image, the upper left corner point of the face image is taken as an origin, the width direction is the x-axis direction, and the height direction is the y-axis direction;
the method for respectively calculating the proportion of the height values of the three parts to the height value of the face comprises the following steps:
extracting y-axis coordinate values of a chin tip, a top central hairline point, an eyebrow tail point and a subnasal point in the face image;
calculating the face height value H according to the following formula:
H=YfaceB-YfaceT
wherein, YfaceBAnd YfaceTRespectively are the y-axis coordinate values of the lower base point and the top central hairline point in the face image;
the height values of the three fractions H1, H2 and H3 were calculated according to the following formula:
H1=Yeyebrow-YfaceT
H2=Ynose-Yeyebrow
H3=YfaceB-Ynose
wherein, YeyebrowAnd YnoseThe y-axis coordinate values of the tail point and the under-nose point of the eyebrow are respectively;
and respectively calculating the proportion of the height values H1, H2 and H3 of the three parts to the height value H of the human face.
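The three-courts calculation above can be sketched as follows (a hypothetical helper under the same coordinate convention, with the y axis pointing down):

```python
def three_courts_ratios(y_face_t, y_eyebrow, y_nose, y_face_b):
    """Compute the ratios of the three court heights H1, H2, H3 to the face
    height H, given the y coordinates of the top central hairline point,
    the eyebrow tail point, the subnasal point and the chin top."""
    h = y_face_b - y_face_t                  # H  = Y_faceB - Y_faceT
    h1 = y_eyebrow - y_face_t                # H1 = Y_eyebrow - Y_faceT
    h2 = y_nose - y_eyebrow                  # H2 = Y_nose - Y_eyebrow
    h3 = y_face_b - y_nose                   # H3 = Y_faceB - Y_nose
    return h1 / h, h2 / h, h3 / h
```

For the classical ideal face each of the three ratios is close to 1/3.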
Optionally, calculating the ratio of each of the five width values to the face width value includes the following steps:
extracting the x-axis coordinate values of the left eye outer hairline point, the right eye outer hairline point, the left eye outer canthus, the left eye inner canthus, the right eye inner canthus and the right eye outer canthus in the face image;
calculating the face width value W according to the following formula:
W = X_hairR - X_hairL
wherein X_hairR and X_hairL are the x-axis coordinate values of the right eye outer hairline point and the left eye outer hairline point respectively;
calculating the width values W1, W2, W3, W4 and W5 of the five sections according to the following formulas:
W1 = X_eyeL_out - X_hairL
W2 = X_eyeL_in - X_eyeL_out
W3 = X_eyeR_in - X_eyeL_in
W4 = X_eyeR_out - X_eyeR_in
W5 = X_hairR - X_eyeR_out
wherein X_eyeL_out, X_eyeL_in, X_eyeR_in and X_eyeR_out are the x-axis coordinate values of the left eye outer canthus, the left eye inner canthus, the right eye inner canthus and the right eye outer canthus respectively;
and calculating the ratios of the five width values W1, W2, W3, W4 and W5 to the face width value W respectively.
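The five-eyes calculation can be sketched in the same way (a hypothetical helper; names are ours):

```python
def five_eyes_ratios(x_hair_l, x_eye_l_out, x_eye_l_in,
                     x_eye_r_in, x_eye_r_out, x_hair_r):
    """Compute the ratios of the five section widths W1..W5 to the face
    width W, given the x coordinates of the six points from the left eye
    outer hairline point to the right eye outer hairline point."""
    w = x_hair_r - x_hair_l                  # W = X_hairR - X_hairL
    widths = (
        x_eye_l_out - x_hair_l,              # W1
        x_eye_l_in - x_eye_l_out,            # W2
        x_eye_r_in - x_eye_l_in,             # W3
        x_eye_r_out - x_eye_r_in,            # W4
        x_hair_r - x_eye_r_out,              # W5
    )
    return tuple(wi / w for wi in widths)
```

For the classical ideal face each section is close to one eye width, i.e. each ratio is close to 1/5.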
The embodiment of the invention also provides a face proportion calculation system that applies the above face proportion calculation method, the system comprising:
a feature point detection module, used for detecting the chin top, the left eye outer hairline point, the right eye outer hairline point, the two division points of the three courts and the four division points of the five eyes in the face image based on a training model, and for detecting the top central hairline point in the face image;
a face range determining module, used for determining the face height range according to the distance between the top central hairline point and the chin top, and the face width range according to the distance between the left eye outer hairline point and the right eye outer hairline point;
a face range division module, used for dividing the face height range into three parts according to the two division points of the three courts, and the face width range into five parts according to the four division points of the five eyes;
a proportion information calculation module, used for calculating the ratio of each of the three height values to the face height value to obtain the proportion information of the three courts, and the ratio of each of the five width values to the face width value to obtain the proportion information of the five eyes.
The embodiment of the invention also provides face proportion calculation equipment, which comprises a processor; a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the face proportion calculation method via execution of the executable instructions.
The embodiment of the present invention further provides a computer-readable storage medium for storing a program, wherein the program implements the steps of the face proportion calculation method when executed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
The face proportion calculation method, system, equipment and storage medium of the invention have the following advantages:
After the face image is obtained, a trained ASM shape model is used to extract the facial structure feature points: the key feature points used to calculate the face three-courts-five-eyes proportions are extracted and marked in the face image, and the three pixel distances of the three courts, the five pixel distances of the five eyes, the face height pixel distance and the face width pixel distance are calculated. The ratios of the three pixel distances of the three courts to the face height pixel distance, and of the five pixel distances of the five eyes to the face width pixel distance, are then calculated respectively to confirm the face three-courts-five-eyes proportion information, improving the accuracy of feature point detection. Furthermore, the central hairline point at the top of the forehead in the face image can be found and marked through skin color information to serve as the edge point of the face height range, improving the detection accuracy of the top central hairline point.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, with reference to the accompanying drawings.
FIG. 1 is a flow chart of a face proportion calculation method according to an embodiment of the present invention;
FIG. 2 is a diagram of the three-courts-five-eyes feature point labels of a human face according to an embodiment of the present invention;
FIG. 3 is a flow chart of a face proportion calculation method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of face image extraction according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a face proportion calculation system according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a face proportion calculation device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In order to solve the above technical problems, an embodiment of the present invention provides a method for calculating face three-courts-five-eyes proportion information. The three-courts-five-eyes model is a way of describing the features of the human face: the three courts generally refer to the three distances in the height (length) direction of the face delimited by the central hairline point at the top of the forehead, the eyebrow tail point, the subnasal point and the chin top; the five eyes generally refer to the five distances in the width direction of the face delimited by six points: the left eye outer hairline point, the left eye outer canthus, the left eye inner canthus, the right eye inner canthus, the right eye outer canthus and the right eye outer hairline point.
As shown in fig. 1, an embodiment of the present invention provides a face proportion calculation method, where the method includes the following steps:
s110: detecting a chin top, a left eye outer hairline point, a right eye outer hairline point, two partition points of three groups and four partition points of five eyes in the face image by adopting a training model;
s120: detecting a top central hairline point in the face image;
s130: determining the height range of the human face according to the distance between the top central hairline point and the chin top;
s140: determining a human face width range according to the distance between the left eye outer hairline point and the right eye outer hairline point;
s150: calculating proportion information of the three families according to the two segmentation points of the three families and the face height range;
s160: and calculating the proportion information of the five eyes according to the four segmentation points of the five eyes and the face width range.
In this embodiment, the training model is an ASM (Active Shape Model), i.e. a global shape model, an algorithm based on a point distribution model: the geometric shape of objects with similar shapes can be represented by serially connecting the coordinates of several key feature points into a shape vector. In practical application, the ASM is divided into a training part and a searching part; using the trained ASM shape model to automatically mark and extract the facial feature points improves the accuracy and efficiency of facial feature point detection.
In practical applications, other training models may also be used, such as a model obtained by training a convolutional neural network or a support vector machine, all of which are within the scope of the present invention. Compared with other machine-learned models, the active shape model can recognize particular feature points more quickly and accurately.
The invention uses the top central hairline point and the chin top to define the face height range, and the left eye outer hairline point and the right eye outer hairline point to define the face width range. In this embodiment, the division points of the three courts include the subnasal point and the eyebrow tail point, and the division points of the five eyes include the left eye outer canthus, the left eye inner canthus, the right eye inner canthus and the right eye outer canthus. In application, considering that the two eyebrow tail points may differ in height because of the facial expression of the person in the acquired face image, the heights of the left and right eyebrow tail points can both be acquired and averaged to obtain the position of the central point between the two eyebrow tail points.
Fig. 2 shows the labeling of the key three-courts-five-eyes feature points used in the present invention, where A1 denotes the top central hairline point, A2 the left eyebrow tail point, A3 the left eye outer hairline point, A4 the left eye outer canthus, A5 the left eye inner canthus, A6 the subnasal point, A7 the chin top, A8 the right eyebrow tail point, A9 the right eye outer hairline point, A10 the right eye outer canthus, and A11 the right eye inner canthus.
As shown in fig. 3, in this embodiment, before the ASM shape model is used to detect the key feature points, the method further includes the following steps:
s210: acquiring a facial image, specifically shooting an image faceImg including a face by using a camera;
s220: detecting a face region faceRec in the face image faceImg, wherein the height and the width of the face region faceRec are respectively defined as HFAnd WFThe central point of the face area is CFPoint;
since the face region may be a relatively limited region during the face region recognition, and some features of the face edge may be lost, in this embodiment, S260 is further performed: the face area is enlarged, and the integrity of face image extraction is ensured;
in particular, here with CFPoint is the central point, with HFHeight of (1+ k1), in WFAnd (1+ k2) is the width, and the face image faceImage is extracted from the face image faceImage, wherein k1 and k2 are preset expansion ratios, wherein k1 belongs to (0,0.5) and k2 belongs to (0, 0.5).
Then, step S270 is executed: using the trained ASM shape model to identify and label the feature points, that is, S290: marking of each feature point.
Fig. 4 is a schematic diagram of face image extraction according to an embodiment of the present invention, where B1 denotes the facial image, B2 the detected face region, and FC the central point of the face region; taking FC as the central point, FH*(1+0.15) as the height and FW*(1+0.2) as the width, the face image B3 is extracted from the facial image B1. These values of k1 and k2 are merely examples, and the present invention is not limited thereto.
In this embodiment, detecting the face region in the facial image includes using a trained Adaboost classifier. Adaboost is an iterative method whose core idea is to train the same weak classifier on different training sets and then assemble the weak classifiers obtained on those sets into a final strong classifier.
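The reweight-and-combine idea behind Adaboost can be illustrated with a minimal one-dimensional sketch using threshold stumps; this shows only the principle, not the Haar-feature cascade actually used for face detection, and all names are ours:

```python
import math

def train_adaboost(xs, ys, rounds=5):
    """Minimal AdaBoost with threshold stumps on 1-D data.
    xs: feature values, ys: labels in {-1, +1}.
    Returns a list of (alpha, threshold, sign) weak classifiers."""
    n = len(xs)
    w = [1.0 / n] * n                      # sample weights, reweighted each round
    ensemble = []
    candidates = sorted(set(xs))
    for _ in range(rounds):
        best = None
        for thr in candidates:
            for sign in (1, -1):           # stump predicts sign if x > thr else -sign
                preds = [sign if x > thr else -sign for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, sign, preds)
        err, thr, sign, preds = best
        if err == 0:                       # a perfect stump gets a large vote
            ensemble.append((10.0, thr, sign))
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, sign))
        # increase the weight of misclassified samples, decrease the rest
        w = [wi * math.exp(-alpha * p * y) for wi, p, y in zip(w, preds, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of the weak classifiers."""
    score = sum(a * (s if x > t else -s) for a, t, s in ensemble)
    return 1 if score > 0 else -1
```

In the real detector the weak classifiers are thresholds on Haar-like image features rather than on a single scalar, but the weighting scheme is the same.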
In other embodiments, other types of machine learning models may be used instead of the Adaboost classifier when detecting the face region: for example, a support vector machine may learn the features of the face region to obtain a face region recognition model, an AlexNet neural network may learn them to obtain a binary face recognition model, or another deep learning neural network may be used, all of which are within the protection scope of the present invention. Compared with these models, the Adaboost classifier is better suited to face region detection and achieves more accurate and faster detection.
The top central hairline point serves as the edge point of the face height, so its detection is also important: its accuracy directly affects the accuracy of the calculated three-courts-five-eyes proportions. The top central hairline point could also be identified with the trained ASM shape model; in practice, however, detecting it by shape recognition alone is not very accurate. Considering that the hair produces a skin color difference at the top central hairline point, the invention provides a technical scheme for detecting the top central hairline point based on skin color information.
In this embodiment, the detecting the top central hairline point in the face image includes the following steps:
extracting the skin color information in the face image to obtain a face skin color distribution map, and performing gray processing on it to obtain a skin color gray map; specifically, skin color detection may be performed on the face image faceImage with the AdaptiveSkinDetector skin color detection class provided by OpenCV, obtaining a face image face_skin with the detected skin color area;
searching points of which the gray value is higher than a preset gray threshold value and which are located in a preset top central hairline position range in the skin color gray map, and taking the points as alternative top central hairline points;
and taking the highest point in the alternative top central hairline points as the top central hairline point.
The preset gray threshold may be chosen as SkinGray * k3, where SkinGray is the average gray value of the skin color gray map and k3 is the threshold adjustment coefficient; in this embodiment k3 may be chosen as 0.5. However, the invention is not limited to this: other preset gray thresholds, for example values in the range SkinGray*0.3 to SkinGray*0.8, or values in other ranges that still give a good identification of the top central hairline point, may also be used, all within the scope of the present invention.
The top central hairline point lies within a preset top central hairline point position range: in the width direction, roughly at the center of the face image (within a certain error range on either side); in the height direction, roughly above the eyebrow tail points and below an empirically obtained highest position.
In this embodiment, as shown in fig. 2, a coordinate system is established in the face image, with an upper left corner O of the face image as an origin, a width direction as an x-axis direction, and a height direction as a y-axis direction;
the coordinate values of the points within the preset top central hairline position range meet the following conditions:
3*faceWidth/8 < X_faceT < 5*faceWidth/8
Y_eyebrow/3 < Y_faceT < Y_eyebrow
wherein faceWidth is the width value of the face image, Y_eyebrow is the y-axis coordinate of the eyebrow tail point, and X_faceT and Y_faceT are the x-axis and y-axis coordinate values of the top central hairline point, respectively.
The highest position in the height direction, Y_eyebrow/3, is an empirical value which, in practical applications, can be adjusted as desired, for example to Y_eyebrow/4, Y_eyebrow/3.5, etc., and all such values are within the scope of the present invention.
When the positions of the eyebrow tail points are obtained by averaging the left eyebrow tail points and the right eyebrow tail points, the y-axis coordinate value of the top central hairline point in the formula meets the following condition:
(Y_eyebrowL + Y_eyebrowR)/6 < Y_faceT < (Y_eyebrowL + Y_eyebrowR)/2
wherein Y_eyebrowL and Y_eyebrowR are the y-axis coordinates of the left eyebrow tail point and the right eyebrow tail point, respectively.
As shown in fig. 3, in an embodiment of the present invention, when detecting the top central hairline point, after the skin color map is obtained in step S230, step S240 is executed: graying the map to obtain a grayscale map face_SkinGray. Step S250 is then executed: the average gray value SkinGray of the grayscale map face_SkinGray is calculated, and the grayscale map is binarized with a threshold of 0.5 × SkinGray: a pixel whose gray value is below the threshold is set to black (gray value 0), and a pixel whose gray value is above the threshold is set to white (gray value 255), yielding a binary image face_SkinBlack. The binary image face_SkinBlack is scanned line by line from top to bottom; each white point found is recorded as an unknownPoint with coordinates (X, Y), and it is judged whether these coordinates satisfy the preset position range, namely 3*faceWidth/8 < X < 5*faceWidth/8 and (Y_eyebrowL + Y_eyebrowR)/6 < Y < (Y_eyebrowL + Y_eyebrowR)/2. If not, the point is judged to be a false point and discarded, and scanning continues until an unknownPoint satisfying the condition is found; this point is marked as the forehead top central hairline point faceT, thereby realizing S280: marking the top central hairline point.
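Steps S240 to S280 above can be sketched as follows. This is a minimal illustration assuming a NumPy grayscale map; the function and argument names (find_top_hairline_point, y_brow_l, y_brow_r) are hypothetical, not identifiers from the patent.

```python
import numpy as np

def find_top_hairline_point(gray, face_width, y_brow_l, y_brow_r, k3=0.5):
    """Scan a skin-region grayscale map row by row from the top and
    return the first bright point inside the preset position range,
    or None if no candidate is found."""
    thresh = k3 * gray.mean()                    # binarization threshold, 0.5 * SkinGray
    binary = gray > thresh                       # True corresponds to a white pixel
    x_lo, x_hi = 3 * face_width / 8, 5 * face_width / 8
    y_lo, y_hi = (y_brow_l + y_brow_r) / 6, (y_brow_l + y_brow_r) / 2
    for y in range(binary.shape[0]):             # top-to-bottom, line by line
        for x in range(binary.shape[1]):
            if binary[y, x] and x_lo < x < x_hi and y_lo < y < y_hi:
                return x, y                      # forehead top central hairline point
    return None
```

Points failing the coordinate test are simply skipped, mirroring the "false point" rejection in the description.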
In this embodiment, after all the feature points have been recognized, their coordinate values are extracted, and S310 is executed: calculating the pixel spacings of the three sections and the five eyes; then S320: calculating the proportion information of the three sections and the five eyes.
Specifically, calculating the ratios of the height values of the three parts to the height value of the face respectively comprises the following steps:
extracting y-axis coordinate values of a chin tip, a top central hairline point, an eyebrow tail point and a subnasal point in the face image;
calculating the face height value H according to the following formula:
H = Y_faceB - Y_faceT
wherein Y_faceB and Y_faceT are the y-axis coordinate values of the chin tip point and the top central hairline point in the face image, respectively;
the height values of the three fractions H1, H2 and H3 were calculated according to the following formula:
H1=Yeyebrow-YfaceT
H2=Ynose-Yeyebrow
H3=YfaceB-Ynose
wherein, YeyebrowAnd YnoseY-axis seat respectively serving as tail point and subnasal point of eyebrowMarking a value;
the ratios of the three section height values H1, H2 and H3 to the face height value H are calculated respectively:
BH1=H1/H
BH2=H2/H
BH3=H3/H
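The three-section height ratios BH1 to BH3 above reduce to a few subtractions and divisions; a minimal sketch follows, in which the function and argument names are illustrative, not identifiers from the patent.

```python
def three_section_ratios(y_top, y_brow, y_nose, y_chin):
    """Compute BH1..BH3 from the y-coordinates of the top central
    hairline point, eyebrow tail point, subnasal point and chin tip."""
    h = y_chin - y_top        # H  = Y_faceB - Y_faceT, face height
    h1 = y_brow - y_top       # H1 = Y_eyebrow - Y_faceT
    h2 = y_nose - y_brow      # H2 = Y_nose - Y_eyebrow
    h3 = y_chin - y_nose      # H3 = Y_faceB - Y_nose
    return h1 / h, h2 / h, h3 / h
```

For the classical ideal face proportion, each of the three ratios is close to 1/3.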
calculating the ratios of the width values of the five parts to the width value of the face respectively comprises the following steps:
extracting x-axis coordinate values of left eye outer hairline points, right eye outer hairline points, left eye outer canthus, left eye inner canthus, right eye inner canthus and right eye outer canthus in the human face;
calculating the face width value W according to the following formula:
W = X_hairR - X_hairL
wherein X_hairR and X_hairL are the x-axis coordinate values of the right eye outer hairline point and the left eye outer hairline point, respectively;
the width values W1, W2, W3, W4 and W5 of the five sections were calculated according to the following formula:
W1=XeyeL_out-XhairL
W2=XeyeL-in-XeyeL_out
W3=XeyeR_in-XeyeL-in
W4=XeyeR_out-XeyeR_in
W5=XhairR-XeyeR_out
wherein, XeyeL_out,XhairL,XeyeL-in,XeyeR_inAnd XeyeR_outX-axis coordinate values of the left eye external canthus, the left eye internal canthus, the right eye internal canthus and the right eye external canthus respectively;
calculating the ratio of the width values W1, W2, W3, W4 and W5 of the five parts to the width W of the human face respectively:
BW1=W1/W
BW2=W2/W
BW3=W3/W
BW4=W4/W
BW5=W5/W
The finally obtained BH1, BH2, BH3, BW1, BW2, BW3, BW4 and BW5 constitute the confirmed three-section five-eye proportion information.
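Likewise, the five-eye width ratios BW1 to BW5 can be sketched as below, taking the six x-coordinates ordered from left to right across the face; the function and argument names are illustrative assumptions.

```python
def five_eye_ratios(x_hair_l, x_eye_l_out, x_eye_l_in,
                    x_eye_r_in, x_eye_r_out, x_hair_r):
    """Compute BW1..BW5 from the six x-coordinates, ordered from the
    left eye outer hairline point to the right eye outer hairline point."""
    w = x_hair_r - x_hair_l              # W = X_hairR - X_hairL, face width
    widths = (x_eye_l_out - x_hair_l,    # W1
              x_eye_l_in - x_eye_l_out,  # W2
              x_eye_r_in - x_eye_l_in,   # W3
              x_eye_r_out - x_eye_r_in,  # W4
              x_hair_r - x_eye_r_out)    # W5
    return tuple(wi / w for wi in widths)
```

For the ideal five-eye proportion, each ratio is close to 1/5.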
As shown in fig. 5, an embodiment of the present invention further provides a face proportion calculation system, which is applied to the face proportion calculation method, and the system includes:
the feature point detection module 100 is configured to detect a chin top, a left eye outer hairline point, a right eye outer hairline point, two segmentation points of three families, and four segmentation points of five eyes in the face image by using a trained ASM shape model; detecting a top central hairline point in the face image;
the human face range determining module 200 is used for determining a human face height range according to the distance between the top central hairline point and the chin top, and determining a human face width range according to the distance between the left eye outer side hairline point and the right eye outer side hairline point;
the human face range segmentation module 300 is used for dividing the human face height range into three parts according to two segmentation points of three groups, and dividing the human face width range into five parts according to four segmentation points of five eyes;
the proportion information calculation module 400 is used for calculating the proportion of the height values of the three parts of the height range to the height value of the face respectively to obtain the proportion information of three categories; and respectively calculating the ratio of the width value of the five parts of the width range to the width value of the human face to obtain the ratio information of the five eyes.
The embodiment of the invention also provides face proportion calculation equipment, which comprises a processor; a memory having stored therein executable instructions of the processor; wherein the processor is configured to perform the steps of the face proportion calculation method via execution of the executable instructions.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "platform."
An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 6, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different platform components (including the memory unit 620 and the processing unit 610), a display unit 640, etc.
Wherein the storage unit stores program code executable by the processing unit 610 to cause the processing unit 610 to perform the steps according to various exemplary embodiments of the present invention described in the above face proportion calculation method section of this specification. For example, the processing unit 610 may perform the steps shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 via the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
The embodiment of the invention also provides a computer-readable storage medium for storing a program, and the program realizes the steps of the face proportion calculation method when executed. In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the present invention described in the above face proportion calculation method section of this specification, when the program product is run on the terminal device.
Referring to fig. 7, a program product 800 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including but not limited to electromagnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium other than a readable storage medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including object oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
The face proportion calculation method, the face proportion calculation system, the face proportion calculation equipment and the face proportion calculation storage medium have the following advantages:
After a face image is obtained, a trained ASM shape model is adopted to extract the face structure feature points, and the key feature point information used for calculating the three-section five-eye proportion relation of the face is extracted and marked in the face image. The three pixel spacings of the three sections, the five pixel spacings of the five eyes, the face height pixel spacing and the face width pixel spacing are calculated; the ratios of the three section spacings to the face height spacing and the ratios of the five eye spacings to the face width spacing are then calculated respectively, and the three-section five-eye proportion information of the face is confirmed, thereby improving the accuracy of the three-section five-eye feature point detection. Furthermore, the central hairline point at the top of the forehead in the face image can be searched for and marked through the skin color information, serving as the edge point of the face height range and improving the detection accuracy of the top central hairline point.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (12)

1. A face proportion calculation method is characterized by comprising the following steps:
detecting a chin top, a left eye outer hairline point, a right eye outer hairline point, two partition points of three groups and four partition points of five eyes in the face image based on the training model;
detecting a top central hairline point in the face image;
determining the height range of the human face according to the distance between the top central hairline point and the chin top;
determining a human face width range according to the distance between the left eye outer hairline point and the right eye outer hairline point;
calculating proportion information of the three families according to the two segmentation points of the three families and the face height range;
and calculating the proportion information of the five eyes according to the four segmentation points of the five eyes and the face width range.
2. The face proportion calculation method according to claim 1, wherein before detecting the chin top, the left eye outer hairline point, the right eye outer hairline point, the two segmentation points of the three sections, and the four segmentation points of the five eyes in the face image, the method further comprises the steps of:
collecting a face image, and detecting a face region in the face image, wherein the height and the width of the face region are H_F and W_F respectively, and the central point of the face region is the point C_F;
taking the point C_F as the central point, H_F × (1 + k1) as the height and W_F × (1 + k2) as the width, extracting a face image from the face image, wherein k1 and k2 are preset expansion ratios, k1 ∈ (0, 0.5), and k2 ∈ (0, 0.5).
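The expansion described in claim 2 amounts to a crop rectangle centered on C_F; a minimal sketch under the claim's notation follows. The function name and the default k1 = k2 = 0.2 are illustrative assumptions within the claimed (0, 0.5) range.

```python
def expand_face_region(h_f, w_f, c_f, k1=0.2, k2=0.2):
    """Return (x0, y0, x1, y1) of the expanded face image: centered on
    C_F, with height H_F * (1 + k1) and width W_F * (1 + k2)."""
    cx, cy = c_f
    half_h = h_f * (1 + k1) / 2   # half of the expanded height
    half_w = w_f * (1 + k2) / 2   # half of the expanded width
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```

In practice the rectangle would also be clipped to the source image bounds before cropping.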
3. The method of claim 1, wherein the step of detecting the top central hairline point in the face image comprises the steps of:
extracting skin color information in the face image to obtain a face skin color distribution map, and performing gray processing on the face skin color distribution map to obtain a skin color gray map;
searching, in the skin color gray map, for points whose gray value is higher than a preset gray threshold and which are located within a preset top central hairline position range, and taking these points as candidate top central hairline points;
and taking the highest of the candidate top central hairline points as the top central hairline point.
4. The face proportion calculation method according to claim 3, wherein the preset gray threshold is SkinGray × k3, wherein SkinGray is the average gray value of the skin color gray map, k3 is a threshold adjustment coefficient, and k3 ∈ (0.3, 0.8).
5. The face proportion calculation method according to claim 3, wherein a coordinate system is established in the face image, the upper left corner point of the face image is taken as an origin, the width direction is the x-axis direction, and the height direction is the y-axis direction;
the coordinate values of the points within the preset top central hairline position range meet the following conditions:
3*faceWidth/8 < X_faceT < 5*faceWidth/8
Y_eyebrow/3 < Y_faceT < Y_eyebrow
wherein faceWidth is the width value of the face image, Y_eyebrow is the y-axis coordinate of the eyebrow tail point, and X_faceT and Y_faceT are the x-axis and y-axis coordinate values of the top central hairline point, respectively.
6. The face proportion calculation method according to claim 3, wherein the partition points of the three categories include a subnasal point, a left eyebrow tail point and a right eyebrow tail point, and the coordinate values of the points within the preset top central hairline position range satisfy the following condition:
3*faceWidth/8 < X_faceT < 5*faceWidth/8
(Y_eyebrowL + Y_eyebrowR)/6 < Y_faceT < (Y_eyebrowL + Y_eyebrowR)/2
wherein faceWidth is the width value of the face image, Y_eyebrowL and Y_eyebrowR are the y-axis coordinates of the left eyebrow tail point and the right eyebrow tail point respectively, and X_faceT and Y_faceT are the x-axis and y-axis coordinate values of the top central hairline point, respectively.
7. The face proportion calculation method according to claim 1, wherein the segmentation points of the three categories include infranasal points and eyebrow tail points, and the segmentation feature points of the five eyes include a left eye external canthus, a left eye internal canthus, a right eye internal canthus and a right eye external canthus.
8. The face proportion calculation method according to claim 7, wherein a coordinate system is established in the face image, the upper left corner point of the face image is taken as an origin, the width direction is the x-axis direction, and the height direction is the y-axis direction;
the calculating of the proportion information of the three sections according to the two segmentation points of the three sections and the face height range comprises the following steps:
extracting y-axis coordinate values of a chin tip, a top central hairline point, an eyebrow tail point and a subnasal point in the face image;
calculating the face height value H according to the following formula:
H = Y_faceB - Y_faceT
wherein Y_faceB and Y_faceT are the y-axis coordinate values of the chin tip point and the top central hairline point in the face image, respectively;
the height values of the three fractions H1, H2 and H3 were calculated according to the following formula:
H1 = Y_eyebrow - Y_faceT
H2 = Y_nose - Y_eyebrow
H3 = Y_faceB - Y_nose
wherein Y_eyebrow and Y_nose are the y-axis coordinate values of the eyebrow tail point and the subnasal point, respectively;
and respectively calculating the proportion of the height values H1, H2 and H3 of the three parts to the height value H of the human face.
9. The face proportion calculation method according to claim 8, wherein the calculating of the proportion information of the five eyes from the four division points of the five eyes and the face width range comprises the steps of:
extracting x-axis coordinate values of left eye outer hairline points, right eye outer hairline points, left eye outer canthus, left eye inner canthus, right eye inner canthus and right eye outer canthus in the human face;
calculating the face width value W according to the following formula:
W = X_hairR - X_hairL
wherein X_hairR and X_hairL are the x-axis coordinate values of the right eye outer hairline point and the left eye outer hairline point, respectively;
the width values W1, W2, W3, W4 and W5 of the five sections were calculated according to the following formula:
W1=XeyeL_out-XhairL
W2=XeyeL-in-XeyeL_out
W3=XeyeR_in-XeyeL-in
W4=XeyeR_out-XeyeR_in
W5=XhairR-XeyeR_out
wherein, XeyeL_out,XhairL,XeyeL-in,XeyeR_inAnd XeyeR_outX-axis coordinate values of the left eye external canthus, the left eye internal canthus, the right eye internal canthus and the right eye external canthus respectively;
the ratios of the width values W1, W2, W3, W4 and W5 of the five parts to W of the width of the human face are calculated respectively.
10. A face proportion calculation system, applied to the face proportion calculation method according to any one of claims 1 to 9, the system comprising:
the characteristic point detection module is used for detecting a chin top, a left eye outer hairline point, a right eye outer hairline point, two partition points of three families and four partition points of five eyes in the face image based on the training model; detecting a top central hairline point in the face image;
the human face range determining module is used for determining a human face height range according to the distance between the top central hairline point and the chin top and determining a human face width range according to the distance between the left eye outer side hairline point and the right eye outer side hairline point;
the human face range segmentation module is used for dividing the human face height range into three parts according to two segmentation points of three groups and dividing the human face width range into five parts according to four segmentation points of five eyes;
the proportion information calculation module is used for calculating the proportion of the height values of the three parts of the height range to the height value of the face respectively to obtain proportion information of three categories; and respectively calculating the ratio of the width value of the five parts of the width range to the width value of the human face to obtain the ratio information of the five eyes.
11. A face proportion calculation apparatus, comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the face proportion calculation method of any one of claims 1 to 9 via execution of the executable instructions.
12. A computer-readable storage medium storing a program, wherein the program is configured to implement the steps of the face proportion calculation method according to any one of claims 1 to 9 when executed.
CN201810939046.8A 2018-08-17 2018-08-17 Face proportion calculation method, system, equipment and storage medium Pending CN110837757A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810939046.8A CN110837757A (en) 2018-08-17 2018-08-17 Face proportion calculation method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810939046.8A CN110837757A (en) 2018-08-17 2018-08-17 Face proportion calculation method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110837757A true CN110837757A (en) 2020-02-25

Family

ID=69573657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810939046.8A Pending CN110837757A (en) 2018-08-17 2018-08-17 Face proportion calculation method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110837757A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738989A (en) * 2020-06-02 2020-10-02 北京全域医疗技术集团有限公司 Organ delineation method and device
CN113936292A (en) * 2021-09-01 2022-01-14 北京旷视科技有限公司 Skin detection method, device, equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024156A (en) * 2010-11-16 2011-04-20 中国人民解放军国防科学技术大学 Method for positioning lip region in color face image
CN104021550A (en) * 2014-05-22 2014-09-03 西安理工大学 Automatic positioning and proportion determining method for proportion of human face
CN104408462A (en) * 2014-09-22 2015-03-11 广东工业大学 Quick positioning method of facial feature points
CN105844252A (en) * 2016-04-01 2016-08-10 南昌大学 Face key part fatigue detection method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024156A (en) * 2010-11-16 2011-04-20 中国人民解放军国防科学技术大学 Method for positioning lip region in color face image
CN104021550A (en) * 2014-05-22 2014-09-03 西安理工大学 Automatic positioning and proportion determining method for proportion of human face
CN104408462A (en) * 2014-09-22 2015-03-11 广东工业大学 Quick positioning method of facial feature points
CN105844252A (en) * 2016-04-01 2016-08-10 南昌大学 Face key part fatigue detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
He Jun et al.: "Fatigue driving detection based on ASM and skin color model" *
Li Shangguo: "Human eye localization method based on skin color and facial features" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738989A (en) * 2020-06-02 2020-10-02 北京全域医疗技术集团有限公司 Organ delineation method and device
CN111738989B (en) * 2020-06-02 2023-10-24 北京全域医疗技术集团有限公司 Organ sketching method and device
CN113936292A (en) * 2021-09-01 2022-01-14 北京旷视科技有限公司 Skin detection method, device, equipment and medium

Similar Documents

Publication Publication Date Title
CN108229509B (en) Method and device for identifying object class and electronic equipment
US10049262B2 (en) Method and system for extracting characteristic of three-dimensional face image
US10216979B2 (en) Image processing apparatus, image processing method, and storage medium to detect parts of an object
Kalogerakis et al. Learning 3D mesh segmentation and labeling
US20180068461A1 (en) Posture estimating apparatus, posture estimating method and storing medium
CN108701216A (en) A kind of face shape of face recognition methods, device and intelligent terminal
CN106845416B (en) Obstacle identification method and device, computer equipment and readable medium
WO2023010758A1 (en) Action detection method and apparatus, and terminal device and storage medium
CN110909618B (en) Method and device for identifying identity of pet
WO2008154314A1 (en) Salient object detection
US11734954B2 (en) Face recognition method, device and electronic equipment, and computer non-volatile readable storage medium
US9082000B2 (en) Image processing device and image processing method
US20230022554A1 (en) Automatic pressure ulcer measurement
CN106407978B (en) Method for detecting salient object in unconstrained video by combining similarity degree
CN108416304B (en) Three-classification face detection method using context information
CN110837757A (en) Face proportion calculation method, system, equipment and storage medium
JP2015125731A (en) Person attribute evaluation device, person attribute evaluation method and program
CN112818946A (en) Training of age identification model, age identification method and device and electronic equipment
EP3843038A1 (en) Image processing method and system
CN104978583A (en) Person action recognition method and person action recognition device
WO2023068956A1 (en) Method and system for identifying synthetically altered face images in a video
CN109858379A (en) Smile&#39;s sincerity degree detection method, device, storage medium and electronic equipment
CN113780040A (en) Lip key point positioning method and device, storage medium and electronic equipment
CN111967383A (en) Age estimation method, and training method and device of age estimation model
CN111723688A (en) Human body action recognition result evaluation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200225

RJ01 Rejection of invention patent application after publication