CN108509866B - Face contour extraction method - Google Patents


Info

Publication number
CN108509866B
CN108509866B (application CN201810199612.6A)
Authority
CN
China
Prior art keywords: contour, local, curve, face, points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810199612.6A
Other languages
Chinese (zh)
Other versions
CN108509866A (en)
Inventor
Li Guiqing (李桂清)
Cao Xu (曹旭)
Nie Yongwei (聂勇伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201810199612.6A
Publication of CN108509866A
Application granted
Publication of CN108509866B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole

Abstract

The invention discloses a face contour extraction method comprising the following steps: 1) after an image is input, extract the face region with a face detection algorithm and find the rough region of the face contour with a key-feature-point localization algorithm; 2) sample the rough region along the face contour to generate a series of dense overlapping squares that together cover the whole face contour region; 3) in each local square region, extract a parabola-guided local contour curve based on gradient information, forming a local result set composed of the local contour curves; 4) fuse the dense, redundant local contour curves into a global contour curve through a global fusion algorithm based on PCA (principal component analysis), thereby obtaining the complete face contour line. The method is accurate, fast, and fully automatic while also supporting user interaction; the resulting face contour curve is accurate at the pixel level and conforms to the parabolic character of the face contour.

Description

Face contour extraction method
Technical Field
The invention relates to the technical field of image processing, in particular to a face contour extraction method.
Background
In applications involving face images, it is often necessary to automatically locate key feature points of the face, such as the eyes, nose tip, mouth corners and the outline of the face, from an input face image. Existing methods can accurately locate features with distinctive appearance such as the eyes, nose tip and mouth corners, but the face contour along the cheek and chin remains challenging. Most existing feature-point-based face alignment methods place only a handful of feature points on the edge of the face and can hardly capture the bending details of the face contour line. These detail features are important in many computer vision applications such as face recognition, expression recognition, and three-dimensional face reconstruction. Improving the accuracy of the face contour therefore has significant value in digital image processing and computer vision research, yet extracting a continuous, pixel-accurate face contour curve remains a very challenging problem.
Earlier face contour detection methods simply fit the chin area of the face with a parabola, but this approach is too crude: it applies only to the chin area, and a single parabola cannot express the bending of the face contour. The active contour model proposed by Kass et al. [M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," IJCV, vol. 1, no. 4, pp. 321-331, 1988] can describe the contour curve of an object and is mainly applied to shape-based object segmentation. Some researchers [V. Perlibakas, "Automatical detection of face features and exact face contour," Pattern Recognition Letters, vol. 24, no. 16, pp. 2977-2985, 2003] applied this method to extracting face contour lines. However, the active contour model is sensitive to its parameters and converges to different local optima depending on the initialization position, which leads to unsatisfactory results. Active contour methods usually require constant parameter tuning and accurate initialization, which is hard to achieve in natural environments. Moreover, as the active contour model iteratively deforms the curve from the initialized position toward the target contour, the process is usually so disturbed by noise that it often stops halfway or at a local optimum.
Face segmentation can also extract face contours, for example the skin detection and face segmentation method proposed by Wang et al. [B. Wang, X. Chang, and Liu, "Skin detection and segmentation of human face in color images," International Journal of Intelligent Engineering and Systems, vol. 4, no. 1, pp. 10-17, 2011]. Such methods generally divide an image into a face region and a non-face region, so that the boundary between the two regions can serve as the face contour. However, these methods treat the face boundary coarsely, so the boundary curve is too jagged to fit the actual contour of the face; in addition, part of the neck region and the hair adjacent to the face are often included in the face region, so a good face contour curve cannot be obtained.
Face alignment methods search a face picture for predefined key points, such as the eyes, nose tip, mouth corners, eyebrows and the contour points of each facial part, following a supervised learning strategy. In recent years, cascaded shape regression models have performed well on key-feature-point localization; for example, Kazemi et al. [V. Kazemi and J. Sullivan, "One millisecond face alignment with an ensemble of regression trees," in CVPR, 2014, pp. 1867-1874] use a regression model to directly learn a mapping function from the face image to the face shape model, which is simple and efficient and obtains good experimental results both in laboratory scenes and in natural scenes. Face alignment greatly improves the robustness of face key-point localization. However, the key points on the face contour are sparse, and in the complex scenes of real situations they may drift inside or outside the true contour, so such sparse points can hardly represent the features that distinguish the face contour lines of different people.
The methods above are either too constrained by parameters and initialization to run robustly on natural pictures, or can only find the approximate position of the face contour and therefore cannot accurately describe its bending features. In today's rapidly developing vision applications, a method that robustly and quickly acquires a high-precision face contour is needed, providing key contour information for higher-level applications such as face recognition, facial expression recognition, and 3D face reconstruction.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art, and provides a face contour extraction method, which can quickly find a complete continuous face contour curve accurate to the pixel level when any face image is input.
In order to achieve the purpose, the technical scheme provided by the invention is as follows: a face contour extraction method comprises the following steps:
1) face image preprocessing
After an image is input, extracting a region of a human face by using a human face detection algorithm, and finding a rough region of a human face contour by using a key characteristic point positioning algorithm;
2) local square region sampling
Sampling the rough region along the face contour to generate a series of dense overlapping squares that together cover the whole face contour region;
3) local contour curve extraction
In each local square region, extracting a parabola-guided local contour curve based on gradient information, forming a local result set consisting of the local contour curves;
4) global contour curve fusion
Fusing the dense, redundant local contour curves into a global contour curve result through a PCA (principal component analysis)-based global fusion algorithm, thereby obtaining the complete face contour line.
In step 1), the approximate region of the face contour is found through a key-feature-point localization algorithm, providing robustness to the initialization position, specifically as follows:
A face detection algorithm is used to extract the face region in the picture, obtaining a bounding box of the face region together with the position and scale of the face in the picture; a face alignment algorithm is then used to obtain several rough key points on the face contour as initialization, and these rough key points are connected into an initialization curve representing the face contour region;
in step 2), a series of dense squares are generated by sampling the approximate region along the face contour, together covering the whole face contour region, specifically as follows:
An initialization curve representing the face contour region is obtained in step 1). Along this curve, the whole face contour region is divided into a number of overlapping square regions. The center points of the square regions lie on the initialization curve; the side length of each square is set to a preset ratio of the size of the face bounding box extracted by the face detection algorithm; and the overlapping square regions cover the entire face contour region. The direction of each square region is set to the tangential direction of the initialization curve at the current center point, so the resulting local contour curve always starts at the top of the square region and ends at its bottom. Since the initialization curve lies near the real face contour, the overlapping square regions cover the whole real face contour. In addition, densely sampling the face contour region yields a large number of overlapping, redundant local contour curve results, which meets the cross-validation requirement of the global contour curve extraction in step 4);
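The sampling scheme above (overlapping squares centered on the initialization curve, oriented along its tangent, side length a preset fraction of the face bounding box) can be sketched as follows; the function name, the arc-length resampling, and the return format are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def sample_squares(curve, bbox_size, n_squares=70, ratio=0.2):
    """Sample overlapping oriented squares along an initialization curve.

    curve: (M, 2) array of points on the initialization curve.
    Returns a list of (center, tangent_angle, side) tuples.
    (n_squares=70 and ratio=0.2 are the default values named in the patent.)
    """
    curve = np.asarray(curve, dtype=float)
    # cumulative arc length along the polyline
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, s[-1], n_squares)
    side = ratio * bbox_size
    squares = []
    for t in targets:
        i = min(np.searchsorted(s, t), len(curve) - 1)
        i = max(i, 1)
        # linear interpolation of the square center within segment i-1 -> i
        w = (t - s[i - 1]) / max(s[i] - s[i - 1], 1e-12)
        center = (1 - w) * curve[i - 1] + w * curve[i]
        tangent = curve[i] - curve[i - 1]
        angle = float(np.arctan2(tangent[1], tangent[0]))
        squares.append((center, angle, side))
    return squares
```

Sampling at equal arc-length intervals guarantees the squares overlap uniformly along the contour, which is what the cross-validation in step 4) relies on.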
in step 3), a parabola-guided local contour curve based on gradient information is extracted in each local square region, forming a local result set composed of the local contour curves, specifically as follows:
A large number of overlapping square regions are obtained in step 2); each square region contains a segment of the real face contour curve, and the direction of each square region, i.e. the direction perpendicular to one pair of its sides, is consistent with the direction of the initialization curve. In this step, the parabola-guided local contour curve based on gradient information is obtained as follows:
the direction of each square is consistent with the direction of the initialization curve, so each square region is viewed locally in a rectangular coordinate system aligned with its sides, and the local face contour line C is determined to start at the top of the square region and end at its bottom. The square region is an N × N matrix of pixels, so the local contour curve C is represented as N pixels, one from each row:

    C = ⟨p_1, p_2, ..., p_i, ..., p_N⟩

where C is the local contour curve and p_i = (i, j) is the ith point on C, located at row i, column j of the square region, with both i and j in the range [1, N]; p_1 is the first point and p_N is the last point. For two adjacent points p_i and p_{i+1} on the local face contour line, the column indices are required to differ by at most 1, which ensures smoothness between pixels;
the parabola guides the local contour curve based on the gradient information to determine the position by the following energy function:
Figure GDA0002358526170000051
wherein the content of the first and second substances,
Figure GDA0002358526170000052
optimizing to obtain an optimal local contour curve, G (C) representing the gradient value of the local contour curve C, S (C) representing the curvature of the local contour curve C, α value adjusting the smoothness degree of the local contour curve C, wherein the above energy function is difficult to directly solve, so that the problem is solved by using a dynamic programming method, meanwhile, the smoothness S (C) of the whole local contour curve is not directly solved, but a greedy algorithm is used for guiding the result to be more like a parabola, C (C) is used for solving the problem*(i, j) is the local contour curve ending at point (i, j) in the region from line 0 to line i using the dynamic programming algorithm, then it must come from the previous { C }*(i-1,j-1),C*(i-1,j),C*One of the three contour lines (i-1, j +1) }, using di-1,j+Δ(i, j) represents the point (i, j) to the parabola C*(i-1, j + Δ), Δ { -1, 0, 1} distance, using
Figure GDA0002358526170000053
Representing the error from the parabola, the process of the dynamic programming algorithm is represented as follows:
Figure GDA0002358526170000061
where M represents the dynamically programmed matrix, M (i-1, j + Δ) is the energy of the corresponding point in a row of the matrix, g (i, j) represents the gradient value at this point, i is the number of rows, j is the number of columns, ei-1,j+Δ(i, j) error from parabola, α values are parameters controlling smoothness, different α values versus local profile curveThe result C has influence, and an appropriate value is obtained through experiments.
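A minimal sketch of this dynamic programming pass follows. It keeps the path constraints from the text (one pixel per row, column shift Δ ∈ {−1, 0, 1}) and the default α = 0.7, but as a simplifying assumption it replaces the parabola-fit error e with the plain column shift |Δ|, so it is a purely gradient-based variant, not the patent's parabola-guided version:

```python
import numpy as np

def local_contour_dp(grad, alpha=0.7):
    """Find the top-to-bottom path over an N x M gradient-magnitude patch
    that maximizes accumulated gradient minus a smoothness penalty.
    Adjacent rows may shift the column by at most 1 pixel.
    Returns the column index chosen in each row.
    """
    grad = np.asarray(grad, dtype=float)
    n, m = grad.shape
    M = np.full((n, m), -np.inf)        # accumulated energy table
    back = np.zeros((n, m), dtype=int)  # backpointers for path recovery
    M[0] = grad[0]
    for i in range(1, n):
        for j in range(m):
            best, bj = -np.inf, j
            for d in (-1, 0, 1):        # point (i, j) comes from (i-1, j+d)
                jp = j + d
                if 0 <= jp < m and M[i - 1, jp] - alpha * abs(d) > best:
                    best = M[i - 1, jp] - alpha * abs(d)
                    bj = jp
            M[i, j] = grad[i, j] + best
            back[i, j] = bj
    # backtrack from the best end point in the last row
    path = [int(np.argmax(M[-1]))]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```

Replacing `abs(d)` with the distance to a parabola fitted to the partial path would recover the parabola-guided behaviour described above.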
In step 4), the dense redundant local contour curves are fused into a global contour curve result by a global fusion algorithm based on PCA, so as to obtain a complete face contour, specifically as follows:
a square region set on the face contour region is obtained through the dense sampling of step 2), and a local contour curve is then obtained in each square region through step 3), giving a set of short, mutually overlapping local contour curves. Let p_i^k denote the ith point on the kth local contour curve, where M represents the number of local face contour lines and N represents the length of each local face contour line; all such local contour curves are represented by the point set

    P = { p_i^k | i ∈ [1, N], k ∈ [1, M] }

Because step 2) densely samples the face contour region, the set P is highly redundant: it contains not only points on the face contour but also points from failed local contour curves. Global contour curve fusion finds a single-pixel-width accurate global contour curve within P;
the face contour curve usually contains a large amount of bending detail features, and cannot be represented by a simple parameterization form; it is therefore necessary to use a series of points Q ═ Ql}l∈[1,L]Represents the final global profile result, where L is the length of the global profile, while the points in Q are all from P to preserve the curved detail features of the global profile;
at the very beginning the queue Q is empty, it is clear that Q is0I.e. the first point p on the first local contour curve0Then, searching the next point through a PCA algorithm each time;
to calculate points
Figure GDA0002358526170000071
Direction of PCA, first calculate the point
Figure GDA0002358526170000072
The covariance matrix for all other points in P is as follows:
Figure GDA0002358526170000073
wherein the content of the first and second substances,
Figure GDA0002358526170000074
is a point
Figure GDA0002358526170000075
The covariance matrix of (a) is determined,
Figure GDA0002358526170000076
represents the ith point on the kth local profile curve,
Figure GDA0002358526170000077
represent other points, and i' e [1, N]\{i},k'∈[1,M]\ { k }, M represents the number of local face contour lines, and N represents the length of each local face contour line; function theta (-) represents
Figure GDA0002358526170000078
r is the distance between two points, h is a fixed threshold; when the distance between the points is longer, the influence between the points is smaller; so PCA does not act on the entire set of points, using a threshold h to speed up the computation; the eigenvalue of the covariance matrix and the eigenvector corresponding to the maximum eigenvalue obtained by the calculation are the required points
Figure GDA0002358526170000079
Main direction of
Figure GDA00023585261700000710
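The distance-weighted PCA direction estimate described above can be sketched as follows; the Gaussian form of the weight θ is an assumption (the text only states that the weight decays with distance and is cut off at the threshold h):

```python
import numpy as np

def principal_direction(p, points, h=15.0):
    """Estimate the principal (PCA) direction of point p from the cloud of
    local-contour points, using a distance-decaying weight
    theta(r) = exp(-r**2 / h**2) for r <= h and 0 beyond the cutoff h.
    Returns a unit 2-vector: the eigenvector of the weighted covariance
    matrix with the largest eigenvalue.
    """
    p = np.asarray(p, dtype=float)
    pts = np.asarray(points, dtype=float)
    d = pts - p                                   # offsets to all other points
    r = np.linalg.norm(d, axis=1)
    w = np.where(r <= h, np.exp(-(r / h) ** 2), 0.0)
    # weighted sum of outer products (d d^T) over all neighbours
    cov = (w[:, None, None] * (d[:, :, None] * d[:, None, :])).sum(axis=0)
    vals, vecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    return vecs[:, np.argmax(vals)]
```

The cutoff h means only nearby points contribute, matching the remark that PCA does not act on the entire point set.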
Suppose the last point in queue Q is q_l, corresponding to the point p_i^k in P. To obtain the next point q_{l+1}, first use the K-nearest-neighbour algorithm to find the K points nearest to the departure point q_l, with K set to 7. If all K points belong to the kth local contour curve, continue down along that local contour curve; if some point does not belong to the kth local contour curve, then the point whose direction is most consistent with the PCA direction n_i^k must be found, computed by the following formula:

    k* = argmax over k' of ⟨n_{i'}^{k'}, n_i^k⟩

where ⟨·,·⟩ is the inner product between two directions. The point most consistent with the PCA direction n_i^k is obtained by solving for the extremum k*, giving q_{l+1} = p_{i'}^{k*}. This process iterates until no more points are added to queue Q, and all the points of Q then constitute the desired global contour curve.
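One step of the fusion walk (K nearest neighbours, then a direction-consistency test against the PCA direction) might look like the following sketch; the helper name, its signature, and the use of the absolute inner product (PCA directions are sign-ambiguous) are assumptions beyond the patent text:

```python
import numpy as np

def next_point(q, q_dir, candidates, cand_dirs, k=7):
    """Pick, among the k nearest candidate points to q, the one whose
    estimated direction has the largest (absolute) inner product with q's
    PCA direction q_dir. candidates is an (n, 2) array and cand_dirs the
    matching unit direction vectors. Returns the candidate's index.
    """
    candidates = np.asarray(candidates, dtype=float)
    cand_dirs = np.asarray(cand_dirs, dtype=float)
    dist = np.linalg.norm(candidates - np.asarray(q, dtype=float), axis=1)
    nearest = np.argsort(dist)[:k]          # K nearest neighbours (K = 7 in the patent)
    scores = np.abs(cand_dirs[nearest] @ np.asarray(q_dir, dtype=float))
    return int(nearest[np.argmax(scores)])  # most direction-consistent neighbour
```

Iterating this selection until no candidate remains yields the queue Q described above.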
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention densely samples the initial face contour region; squares obtained at different sampling positions contain different image information, so different local contour curve results are found. Where one region fails to fit the real face contour curve, another overlapping region may still yield an accurate result. This cross-validation mechanism ensures the accuracy and robustness of the algorithm.
2. The invention generates local face contour lines in a parabola-guided manner, making full use of the fact that real face contour lines conform to a parabolic shape, and uses a dynamic programming algorithm to find the optimal solution. Parabola guidance not only speeds up the algorithm but also makes the resulting contour curve conform to the real situation.
3. The method obtains a high-quality continuous face curve in three local-to-global steps; the computation is simple, the accuracy high, the precision at pixel level, and the result is not easily disturbed by illumination changes in the picture.
4. In practical applications the method can run fully automatically or with user interaction, making the initialization position of the face contour extraction robust.
Drawings
Fig. 1 is a flow chart of a face contour extraction method.
Fig. 2 is a diagram illustrating results of various steps of the face contour extraction method.
FIG. 3 is an exemplary diagram of preprocessing and sampling square regions.
FIG. 4 is a comparison of local profile curve variations for different α values.
Fig. 5 is a comparison of (a) (b) parabolic guidance and (c) (d) no parabolic guidance.
FIG. 6 is an exemplary graph comparing the present invention with ERT and other methods.
FIG. 7 is a graph comparing the results of the present invention (g) (h) (i) and the ACM method (d) (e) (f) at different initialization positions.
Detailed Description
The present invention will be further described with reference to the following specific examples.
As shown in fig. 1 and fig. 2, the method for extracting a face contour provided by this embodiment includes the following steps:
1) preprocessing the face image: after an image is input, extract the face region with a face detection algorithm and find the rough region of the face contour with a key-feature-point localization algorithm;
2) local square region sampling: generate a series of dense overlapping squares by sampling the rough region along the face contour, together covering the whole face contour region;
3) local contour curve extraction: in each local square region, extract a parabola-guided local contour curve based on gradient information, forming a local result set composed of the local contour curves;
4) global contour curve fusion: fuse the dense, redundant local contour curves into a global contour curve through a PCA (principal component analysis)-based global fusion algorithm, thereby obtaining the complete face contour line.
In step 1), the input image is first preprocessed: the face region in the image is extracted and the approximate region of the face contour is found, initializing the face contour extraction algorithm.
First, a face detection algorithm extracts the face region in the picture, obtaining a bounding box of the face region and determining the scale of the face in the picture. Then, any face alignment algorithm can provide several rough key points on the face contour as initialization for the face contour extraction algorithm of the invention; these rough key points are connected into an initialization curve representing the face contour region.
In the invention, the Ensemble of Regression Trees (ERT) algorithm proposed by Kazemi et al. [V. Kazemi and J. Sullivan, "One millisecond face alignment with an ensemble of regression trees," in CVPR, 2014, pp. 1867-1874] is used. The face contour extraction algorithm does not require the initialization to be highly accurate; it only needs to indicate the rough area where the face contour lies. To speed up the whole algorithm, the invention uses the ERT algorithm to detect 5 key feature points near the face contour, located in the regions of the eyes, the mouth and the chin tip respectively. Fig. 3(a) shows an example of the 5 key feature point locations, from which it can be seen that the invention uses only a very coarse initialization input.
These 5 points are then fitted into an initialization curve representing the face contour region by a curve fitting algorithm, which provides the center positions and directions of the squares for the next step of local square region sampling. The 5 key points alone are very sparse; to ensure that the fitted curve passes through all key points without leaving the face area, the Catmull-Rom algorithm is used in the invention. Fig. 3(b) shows the 5 key feature points fitted into one initialization curve using a Catmull-Rom curve.
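A pure-Python sketch of uniform Catmull-Rom fitting as used here to turn the sparse key points into a dense initialization polyline; duplicating the endpoints so the curve passes through the first and last key point is an implementation choice, not specified in the patent:

```python
def catmull_rom(points, samples_per_seg=20):
    """Fit a uniform Catmull-Rom spline through the given 2D key points
    (e.g. the 5 coarse face-contour points) and return a dense polyline.
    The spline interpolates every key point; endpoints are duplicated so
    the curve starts and ends exactly at the first and last point.
    """
    pts = [points[0]] + list(points) + [points[-1]]
    curve = []
    for i in range(1, len(pts) - 2):
        p0, p1, p2, p3 = pts[i - 1], pts[i], pts[i + 1], pts[i + 2]
        for s in range(samples_per_seg):
            t = s / samples_per_seg
            # standard uniform Catmull-Rom basis, evaluated per coordinate
            curve.append(tuple(
                0.5 * ((2 * p1[d]) + (-p0[d] + p2[d]) * t
                       + (2 * p0[d] - 5 * p1[d] + 4 * p2[d] - p3[d]) * t * t
                       + (-p0[d] + 3 * p1[d] - 3 * p2[d] + p3[d]) * t ** 3)
                for d in range(2)))
    curve.append(tuple(points[-1]))
    return curve
```

Unlike a least-squares fit, Catmull-Rom passes through every control point, which is why it keeps the curve inside the face region even with only 5 keys.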
In step 2), the method for sampling the local square region specifically includes the following steps:
an initialization curve characterizing the face contour region is obtained in step 1); along this curve, the whole face contour region can be divided into a number of small overlapping square regions. The center points of the square regions lie on the initialization curve, and the side length of each square is set to a ratio (in the range 0-0.5) of the size of the face bounding box extracted by the face detection algorithm; in this embodiment it is set to 0.2 times the bounding box size. The direction of each square region is set to the tangential direction of the initialization curve at the current center point, so the resulting local contour curve always starts at the top of the square region and ends at its bottom.
The initialization curve is located near the real face contour, and the overlapped square areas can cover the whole real face contour. For the accuracy of the result, a method of densely sampling the face contour region is adopted, and the default value of the invention is set to 70 square regions. Therefore, a large number of overlapping redundant local contour curve results can be obtained, and the requirement of obtaining a high-precision global contour curve result in the step 4) of extracting the global contour curve is met. Fig. 3(c) shows a densely sampled square region.
In step 3), the method for extracting the local contour curve specifically includes the following steps:
in the previous step, a large number of overlapped square regions are obtained, each square region contains a segment of real face contour curve, and the direction of each square region (the vertical direction of any square side) is consistent with the direction of the initialization curve. In this step a parabolic guiding local contour curve based on gradient information is to be found.
The direction of each square is consistent with the direction of the initialization curve, so each square region is viewed independently in a rectangular coordinate system aligned with its sides, and the local face contour line C is determined to start at the top of the square region and end at its bottom. This curve not only possesses the maximum gradient, but also stays as smooth and parabola-like as possible. The square region is an N × N matrix of pixels, so the local contour curve C can be represented as N pixels, one from each row:

    C = ⟨p_1, p_2, ..., p_i, ..., p_N⟩

where p_i = (i, j) is the ith point on the local contour C, located at row i, column j of the square region, with both i and j in the range [1, N]. p_1 is the first point and p_N is the last point. For two adjacent points p_i and p_{i+1} on the local face contour line, the invention requires the column indices to differ by at most 1, ensuring smoothness between pixels.
The local contour curve possesses large gradient values and is smooth like a parabola. In the invention, the parabola-guided local contour curve based on gradient information determines its position through the following energy function:

    E(C) = G(C) − α·S(C)

and the optimal local contour curve is obtained by optimization as

    C* = argmax_C E(C)

where G(C) represents the gradient value of the local contour curve C, S(C) represents the curvature of the local contour curve C, and the α value adjusts the smoothness of the local contour curve C. The above energy function is difficult to solve directly, so a dynamic programming method is used; meanwhile, the smoothness S(C) of the whole local contour curve is not solved directly, and instead a greedy algorithm guides the result to be more like a parabola.
Let C*(i, j) be the local contour curve ending at point (i, j) in the region from row 0 to row i under the dynamic programming algorithm; it must come from one of the three previous contour lines {C*(i−1, j−1), C*(i−1, j), C*(i−1, j+1)}. Let d_{i−1,j+Δ}(i, j), Δ ∈ {−1, 0, 1}, represent the distance from point (i, j) to the parabola fitted to C*(i−1, j+Δ), and let e_{i−1,j+Δ}(i, j) represent the corresponding error from the parabola. The process of the dynamic programming algorithm is represented as follows:

    M(i, j) = g(i, j) + max over Δ ∈ {−1, 0, 1} of [ M(i−1, j+Δ) − α·e_{i−1,j+Δ}(i, j) ]

where M represents the dynamic programming matrix, M(i−1, j+Δ) is the energy at the corresponding point in the previous row of the matrix, g(i, j) represents the gradient value at this point, i is the row index, j is the column index, and e_{i−1,j+Δ}(i, j) is the error from the parabola. The α value is a parameter controlling smoothness; Fig. 4 shows the effect of different α values on the local contour curve C, and the smoothness parameter α is set to 0.7 in the invention. Fig. 5 compares the results of the parabola-guided algorithm and a purely gradient-based dynamic programming algorithm without parabola guidance.
In step 4), the method for fusing the global contour curve specifically includes the following steps:
the previous step 2) obtained a set of square regions on the face contour region through dense sampling, and step 3) then obtained a local contour curve in each square region, giving a collection of many short, mutually overlapping local contour curves. Let p_i^k denote the ith point on the kth local contour curve, where M denotes the number of local face contour lines and N denotes the length of each local face contour line; all such local contour curves may be represented by the point set

    P = { p_i^k | i ∈ [1, N], k ∈ [1, M] }

Since step 2) densely samples the face contour region, the set P is highly redundant: it contains not only points on the face contour but also points from failed local contour curves. In the process of global contour curve fusion, the invention finds a single-pixel-width accurate global contour curve.
Face contour curves usually contain many bending detail features and cannot be represented in a simple parametric form. A series of points Q = {q_l}, l ∈ [1, L], is therefore used to represent the final global contour result, where L is the length of the global contour; the points in Q all come from P, so as to preserve the bending detail features of the global contour curve.
At the very beginning the queue Q is empty; clearly q_0 is taken as the first point on the first local contour curve, and each subsequent point is then found by the PCA algorithm.
To compute the PCA direction of a point p_i^k, the invention first computes the covariance matrix of p_i^k with respect to all other points in P:

C_i^k = Σ_{i', k'} θ(‖p_{i'}^{k'} − p_i^k‖) · (p_{i'}^{k'} − p_i^k)(p_{i'}^{k'} − p_i^k)^T

where C_i^k is the covariance matrix of point p_i^k; p_i^k represents the ith point on the kth local contour curve; p_{i'}^{k'} represents the other points, with i' ∈ [1, N]\{i} and k' ∈ [1, M]\{k}; M represents the number of local face contour lines, and N represents the length of each local face contour line. The function θ(·) is a weighting function of r, the distance between two points, with h a fixed threshold: the farther apart two points are, the smaller their influence on each other. PCA therefore does not act on the entire point set, and the invention uses the threshold h to speed up the computation. The eigenvalues of the covariance matrix are then computed, and the eigenvector corresponding to the largest eigenvalue is taken as the required main direction n_i^k of point p_i^k.
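The weighted-PCA main-direction estimate described above can be sketched compactly. A Gaussian fall-off θ(r) = exp(−r²/h²) is an assumption here — the text only states that nearer points weigh more and that h is a fixed threshold:

```python
import numpy as np

def pca_direction(p, points, h=10.0):
    """Principal direction at point p from its distance-weighted neighbours.

    Builds the weighted covariance matrix of the offsets from p to the
    other points, using the assumed weight theta(r) = exp(-r^2 / h^2),
    and returns the eigenvector of the largest eigenvalue.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(points, dtype=float)
    d = q - p                                  # offsets to the other points
    r2 = np.sum(d * d, axis=1)
    w = np.exp(-r2 / h**2)                     # theta(r): distant points vanish
    cov = (w[:, None] * d).T @ d               # weighted covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    return vecs[:, np.argmax(vals)]            # main direction (sign arbitrary)
```

For a cloud of points strung along a line, the returned vector is (up to sign) the direction of that line.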
Suppose the last point in queue Q is q_l, corresponding to the point p_i^k in P. To obtain the next point q_{l+1}, the K points nearest to q_l are first found with the K-nearest-neighbour algorithm; in the present invention K is set to 7. If all K points belong to the kth local contour curve, the algorithm simply continues along that local contour curve. If some of them do not belong to the kth local contour curve, then among the candidate points the one whose direction is most consistent with the PCA main direction n_i^k must be found; the consistency of a candidate with the PCA direction is measured by the inner product ⟨·,·⟩ between the candidate's direction and n_i^k. The candidate point most consistent with the PCA direction is obtained by solving for the extremum k* of this inner product over the candidate curves, which yields the next point q_{l+1}. This process is iterated until no more points can be added to queue Q, and the curve formed by all the points of Q is the desired global contour curve.
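The PCA-guided tracing loop described above can be sketched as follows. This is a simplified reading, not the patented implementation: `curves` are the local contour point lists, K-nearest neighbours decide whether to stay on the current curve, and an inner-product test against the local PCA direction chooses jumps between curves. The Gaussian weighting, the sign-orientation of the PCA direction along the tracing motion, and the safety iteration bound are all assumptions not fixed by the text:

```python
import numpy as np

def fuse_contours(curves, K=7, h=10.0):
    """Fuse overlapping local contour curves into one global polyline."""
    curves = [np.asarray(c, dtype=float) for c in curves]
    P = np.vstack(curves)                           # the redundant point set P
    owner = np.concatenate([np.full(len(c), k) for k, c in enumerate(curves)])
    index = np.concatenate([np.arange(len(c)) for c in curves])

    def pca_dir(p):
        d = P - p
        w = np.exp(-np.sum(d * d, axis=1) / h**2)   # nearer points weigh more
        vals, vecs = np.linalg.eigh((w[:, None] * d).T @ d)
        return vecs[:, np.argmax(vals)]             # main direction

    k, i = 0, 0                                     # start on the first curve
    Q = [curves[0][0]]
    for _ in range(2 * len(P)):                     # safety bound on the walk
        q = Q[-1]
        near = np.argsort(np.linalg.norm(P - q, axis=1), kind="stable")[:K]
        if np.all(owner[near] == k):                # all neighbours on current curve
            if i + 1 >= len(curves[k]):             # reached its end
                break
            i += 1                                  # advance along the curve
        else:
            v = pca_dir(q)
            if len(Q) > 1 and np.dot(v, q - Q[-2]) < 0:
                v = -v                              # orient along tracing motion
            best, best_score = None, 0.0
            for n in near:
                kk, ii = int(owner[n]), int(index[n])
                if ii + 1 >= len(curves[kk]):
                    continue                        # neighbour has no successor
                step = curves[kk][ii + 1] - q
                nrm = np.linalg.norm(step)
                if nrm < 1e-9:
                    continue                        # successor coincides with q
                score = float(np.dot(step, v)) / nrm
                if score > best_score:              # most PCA-consistent jump
                    best, best_score = (kk, ii + 1), score
            if best is None:
                break
            k, i = best
        Q.append(curves[k][i])
    return np.array(Q)
```

On two overlapping straight segments, the trace walks the first segment, hops to the second where they overlap, and continues to its end.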
In conclusion, the method combines high precision with high speed. In practical applications it can run fully automatically or with user interaction, and the final face contour curve is accurate at the pixel level while conforming to the parabolic shape of the face contour. A comparison with the prior art is shown in FIG. 6. The method is also robust to the initialization position of face contour extraction: the result is unaffected by moderate changes of the initial position. As shown in FIG. 7, with 3 different initialization positions the result of the invention remains accurate, while the compared ACM method fails to produce a correct result.
The above-mentioned embodiments are merely preferred embodiments of the present invention, and the scope of the present invention is not limited thereto; any change made according to the shape and principle of the present invention shall be covered within the protection scope of the present invention.

Claims (3)

1. A face contour extraction method is characterized by comprising the following steps:
1) face image preprocessing
After an image is input, extracting a region of a human face by using a human face detection algorithm, and finding a rough region of a human face contour by using a key characteristic point positioning algorithm;
2) local square region sampling
sampling the rough area along the face contour to generate a series of dense squares that contain the whole face contour area;
3) local contour curve extraction
in each local square region, extracting a parabola-guided local contour curve based on gradient information, and forming a local result set composed of the local contour curves;
4) global contour curve fusion
And fusing the dense redundant local contour curves into a global contour curve result through a global fusion algorithm based on PCA (principal component analysis), so as to obtain a complete face contour line.
2. The method for extracting a face contour according to claim 1, wherein in step 1), the rough area of the face contour is found by a key feature point positioning algorithm, and the method is robust to an initialized position, and specifically comprises the following steps:
extracting the region of the face in the picture by using a face detection algorithm to obtain a bounding box of the face region, and simultaneously obtaining the position and scale size information of the face in the picture; then using a human face alignment algorithm to obtain a plurality of rough key points on the face contour as initialization, and connecting the rough key points into an initialization curve representing the face contour region;
in step 2), a series of dense squares are generated by sampling the approximate region along the face contour, and the whole face contour region is included therein, specifically as follows:
an initialization curve representing a face contour region is obtained in the step 1), the whole face contour region is divided into a plurality of overlapped square regions along the curve, the center points of the square regions are located on the initialization curve, the size of the square regions is set to be a preset multiplying power of the size of a face bounding box extracted by a face detection algorithm, the overlapped square regions cover all the face contour regions, the direction of the square regions is set to be the tangential direction of the initialization curve of the current center point, the obtained local contour curve result always starts from the top of the square region and ends at the bottom of the square, the initialization curve is located near the real face contour, and the overlapped square regions cover the whole real face contour; in addition, a large number of overlapping redundant local contour curve results can be obtained by adopting a method for densely sampling the face contour region, and the requirement of cross validation in the step 4) of extracting the global contour curve is met;
in step 3), a parabola is extracted from each local square region to guide a local contour curve based on gradient information, and a local result set composed of the local contour curves is formed, specifically as follows:
obtaining a large number of overlapped square areas in the step 2), wherein each square area comprises a section of real face contour curve, and the direction of each square area, namely the vertical direction of any square side, is consistent with the direction of the initialization curve; in this step, a parabolic guiding local contour curve based on gradient information is obtained according to the following method:
the direction of each square is consistent with the direction of the initialization curve; viewing the square region locally, with the sides of the square region as a rectangular coordinate system, the local face contour line C is determined to start from the top of the square region and end at its bottom; the square region is an N × N matrix of pixels, so the local contour curve C is represented as N pixels taken from different rows:

C = ⟨p_1, p_2, ..., p_i, ..., p_N⟩

where C is the local contour curve and p_i = (i, j) is the ith point on the local contour C, located at row i and column j of the square region, with i and j both in the range [1, N]; p_1 is the first point and p_N is the last point; for two adjacent points p_i and p_{i+1} on the local face contour line, the difference between their column indices is required to be no more than 1, which ensures smoothness between pixels;
the parabola-guided, gradient-based local contour curve is positioned by the following energy function:

E(C) = G(C) - α · S(C)

wherein the optimal local contour curve is obtained by optimization as

C* = argmax_C E(C)

G(C) represents the gradient value along the local contour curve C, S(C) represents the curvature of C, and the value of α adjusts the smoothness of C; the above energy function is difficult to solve directly, so the problem is solved with a dynamic programming method; meanwhile, the smoothness S(C) of the whole local contour curve is not solved directly, but a greedy algorithm is used to guide the result towards a parabola; let C*(i, j) be the local contour curve ending at point (i, j) in the region from row 0 to row i obtained with the dynamic programming algorithm, then it must extend one of the three previous contour curves {C*(i-1, j-1), C*(i-1, j), C*(i-1, j+1)}; let d_{i-1, j+Δ}(i, j), Δ ∈ {-1, 0, 1}, denote the distance from point (i, j) to the parabola fitted to C*(i-1, j+Δ), and let e_{i-1, j+Δ}(i, j) denote the corresponding error from the parabola; the recursion of the dynamic programming algorithm is:

M(i, j) = g(i, j) + max_{Δ ∈ {-1, 0, 1}} [ M(i-1, j+Δ) - α · e_{i-1, j+Δ}(i, j) ]

where M represents the dynamic programming matrix, M(i-1, j+Δ) is the energy of the corresponding point in the previous row of the matrix, g(i, j) represents the gradient value at this point, i is the row index, j is the column index, e_{i-1, j+Δ}(i, j) is the error from the parabola, and the value of α is a parameter controlling smoothness; different α values influence the result of the local contour curve C, and an appropriate value needs to be determined through experiments.
3. The face contour extraction method according to claim 1, characterized in that: in step 4), the dense redundant local contour curves are fused into a global contour curve result by a PCA-based global fusion algorithm, so as to obtain a complete face contour, specifically as follows:

a set of square regions on the face contour region is obtained by dense sampling in step 2), and a local contour curve in each square region is then obtained through step 3), yielding a set of short, mutually overlapping local contour curves; let p_i^k denote the ith point on the kth local contour curve, where M represents the number of local face contour lines and N represents the length of each local face contour line; all the local contour curves are represented by the set of points

P = {p_i^k}, i ∈ [1, N], k ∈ [1, M];

because step 2) samples the face contour region densely, the point set P is highly redundant, containing not only points on the face contour but also points on failed local contour curves; a single-pixel-wide, accurate global contour curve is found during global contour curve fusion;

the face contour curve usually contains a large number of curved detail features and cannot be represented in a simple parametric form; the final global contour result is therefore represented by a series of points Q = {q_l}, l ∈ [1, L], where L is the length of the global contour, and the points of Q are all drawn from P so as to preserve the curved detail features of the global contour;

at the very beginning the queue Q is empty; clearly q_0 is taken as the first point on the first local contour curve, and each subsequent point is then found by the PCA algorithm;
to compute the PCA direction of a point p_i^k, first compute the covariance matrix of p_i^k with respect to all other points in P:

C_i^k = Σ_{i', k'} θ(‖p_{i'}^{k'} − p_i^k‖) · (p_{i'}^{k'} − p_i^k)(p_{i'}^{k'} − p_i^k)^T

wherein C_i^k is the covariance matrix of point p_i^k; p_i^k represents the ith point on the kth local contour curve; p_{i'}^{k'} represents the other points, with i' ∈ [1, N]\{i} and k' ∈ [1, M]\{k}; M represents the number of local face contour lines, and N represents the length of each local face contour line; the function θ(·) is a weighting function of r, the distance between two points, with h a fixed threshold; the farther apart two points are, the smaller their influence on each other, so PCA does not act on the entire point set, and the threshold h is used to speed up the computation; the eigenvalues of the covariance matrix are computed, and the eigenvector corresponding to the largest eigenvalue is the required main direction n_i^k of point p_i^k;
when the last point in queue Q is q_l, corresponding to the point p_i^k in P, to obtain the next point q_{l+1}, the K points nearest to q_l are first found using the K-nearest-neighbour algorithm, with K set to 7; if all K points belong to the kth local contour curve, the algorithm continues along that local contour curve; if some point does not belong to the kth local contour curve, then among the candidate points the one whose direction is most consistent with the PCA main direction n_i^k must be found, the consistency being measured by the inner product ⟨·,·⟩ between the candidate's direction and n_i^k; the candidate point most consistent with the PCA direction is obtained by solving for the extremum of this inner product, which yields the next point q_{l+1}; this process is iterated until no more points can be added to queue Q, and the curve formed by all the points of Q is the desired global contour curve.
CN201810199612.6A 2018-03-12 2018-03-12 Face contour extraction method Active CN108509866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810199612.6A CN108509866B (en) 2018-03-12 2018-03-12 Face contour extraction method


Publications (2)

Publication Number Publication Date
CN108509866A CN108509866A (en) 2018-09-07
CN108509866B true CN108509866B (en) 2020-06-19

Family

ID=63377528


Country Status (1)

Country Link
CN (1) CN108509866B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409262A (en) * 2018-10-11 2019-03-01 北京迈格威科技有限公司 Image processing method, image processing apparatus, computer readable storage medium
CN109558880B (en) * 2018-10-16 2021-06-04 杭州电子科技大学 Contour detection method based on visual integral and local feature fusion
CN111667400B (en) * 2020-05-30 2021-03-30 温州大学大数据与信息技术研究院 Human face contour feature stylization generation method based on unsupervised learning
CN113160223A (en) * 2021-05-17 2021-07-23 深圳中科飞测科技股份有限公司 Contour determination method, contour determination device, detection device and storage medium
CN113837067B (en) * 2021-09-18 2023-06-02 成都数字天空科技有限公司 Organ contour detection method, organ contour detection device, electronic device, and readable storage medium

Citations (9)

Publication number Priority date Publication date Assignee Title
CN101136105A (en) * 2007-05-11 2008-03-05 辽宁师范大学 Freely differences calculus and deformable contour outline extracting system
CN101339612A (en) * 2008-08-19 2009-01-07 陈建峰 Face contour checking and classification method
CN102063727A (en) * 2011-01-09 2011-05-18 北京理工大学 Covariance matching-based active contour tracking method
CN106156739A (en) * 2016-07-05 2016-11-23 华南理工大学 A kind of certificate photo ear detection analyzed based on face mask and extracting method
CN106156692A (en) * 2015-03-25 2016-11-23 阿里巴巴集团控股有限公司 A kind of method and device for face edge feature point location
CN106529437A (en) * 2016-10-25 2017-03-22 广州酷狗计算机科技有限公司 Method and device for face detection
CN106650635A (en) * 2016-11-30 2017-05-10 厦门理工学院 Method and system for detecting rearview mirror viewing behavior of driver
CN107403144A (en) * 2017-07-11 2017-11-28 北京小米移动软件有限公司 Face localization method and device
CN107452030A (en) * 2017-08-04 2017-12-08 南京理工大学 Method for registering images based on contour detecting and characteristic matching

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2014165972A1 (en) * 2013-04-09 2014-10-16 Laboratoires Bodycad Inc. Concurrent active contour segmentation


Non-Patent Citations (3)

Title
Zhong Xue et al., "Facial feature extraction and image warping using PCA based statistic model", Proceedings 2001 International Conference on Image Processing, 2001, vol. 2, pp. 689-692 *
Li Yuelong et al., "A survey of facial feature point extraction methods" (人脸特征点提取方法综述), Chinese Journal of Computers, Jul. 2016, vol. 39, no. 7, pp. 1356-1374 *
Chen Pengfei et al., "Face contour extraction based on shape recognition" (基于形状识别的人脸轮廓线提取), Computer Engineering and Design, Mar. 2014, vol. 35, no. 3, pp. 890-894 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant