CN105139007B - Facial feature point positioning method and device - Google Patents


Info

Publication number
CN105139007B
Authority
CN
China
Prior art keywords
point
coordinate
characteristic point
initial characteristics
characteristic
Prior art date
Legal status
Active
Application number
CN201510641854.2A
Other languages
Chinese (zh)
Other versions
CN105139007A (en)
Inventor
张涛
张旭华
张胜凯
Current Assignee
Xiaomi Inc
Original Assignee
Xiaomi Inc
Application filed by Xiaomi Inc
Priority to CN201510641854.2A
Publication of CN105139007A
Application granted
Publication of CN105139007B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a facial feature point positioning method. The method includes: correcting initial feature point coordinates according to a first feature point correction model to obtain first-corrected feature point coordinates; performing center feature point recognition on the plurality of first-corrected feature point coordinates to obtain at least one center feature point coordinate; and performing coordinate mapping on the plurality of first-corrected feature point coordinates according to a feature point mapping function to obtain a plurality of second-corrected feature point coordinates, wherein the feature point mapping function describes the mapping relationship from the center feature point to the second-corrected feature point coordinates. The disclosure can markedly improve the accuracy of facial feature point positioning.

Description

Facial feature point positioning method and device
Technical field
The present disclosure relates to the field of image processing, and more particularly to a facial feature point positioning method and device.
Background technique
SDM (Supervised Descent Method) is an accurate facial feature point positioning algorithm recently developed in the computer vision field. Because SDM positioning is fast, robust, versatile and extensible, it is being applied more and more widely. After the facial feature points have been located by the SDM algorithm, a series of subsequent face-related processing, such as face beautification and face recognition, can be performed very conveniently. However, as the related applications spread, users demand ever higher positioning accuracy for facial feature points, so improving the precision of the SDM algorithm for facial feature point positioning is of growing importance.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides a facial feature point positioning method and device.
According to a first aspect of the embodiments of the present disclosure, a facial feature point positioning method is provided. The method includes:
correcting initial feature point coordinates according to a first feature point correction model to obtain first-corrected feature point coordinates;
performing center feature point recognition on the plurality of first-corrected feature point coordinates to obtain at least one center feature point coordinate; and
performing coordinate mapping on the plurality of first-corrected feature point coordinates according to a feature point mapping function to obtain a plurality of second-corrected feature point coordinates, wherein the feature point mapping function describes the mapping relationship from the center feature point to the second-corrected feature point coordinates.
Optionally, the method further includes:
correcting the plurality of second-corrected feature point coordinates according to a second feature point correction model to obtain a plurality of final corrected feature point coordinates.
Optionally, the method further includes:
performing face region detection on a target picture to obtain a face region; and
obtaining the plurality of initial feature point coordinates in the face region according to the coordinate ratios of the plurality of initial feature points.
Optionally, the coordinate ratios of the initial feature points are obtained by calibrating and measuring the face regions in a plurality of picture samples.
Optionally, the center feature point coordinates are eyeball center point coordinates.
Optionally, correcting the initial feature point coordinates according to the first feature point correction model to obtain the first-corrected feature point coordinates includes:
performing matrix multiplication on the plurality of initial feature point coordinates according to the first feature point correction model to obtain a plurality of first initial feature point coordinates;
performing matrix multiplication on the plurality of first initial feature point coordinates according to the first feature point correction model to obtain a plurality of second initial feature point coordinates;
......
performing matrix multiplication on the plurality of (N-1)-th initial feature point coordinates in the N-th iteration to obtain the plurality of first-corrected feature point coordinates, where N is an integer greater than or equal to 2.
Optionally, correcting the plurality of second-corrected feature point coordinates according to the second feature point correction model to obtain the plurality of final corrected feature point coordinates includes:
performing matrix multiplication on the plurality of second-corrected feature point coordinates according to the second feature point correction model to obtain first final feature point coordinates;
performing matrix multiplication on the first final feature point coordinates according to the second feature point correction model to obtain second final feature point coordinates;
......
performing matrix multiplication on the (M-1)-th final feature point coordinates in the M-th iteration to obtain the final corrected feature point coordinates, where M is an integer greater than or equal to 2.
Optionally, the first feature point correction model describes the mapping relationship between the features and offsets of the plurality of initial feature points and the features and offsets of the plurality of first-corrected feature points, and the first feature point correction model is a projection matrix model.
Optionally, the second feature point correction model describes the mapping relationship between the features and offsets of the plurality of second-corrected feature points and the features and offsets of the plurality of final corrected feature points, and the second feature point correction model is a projection matrix model.
Optionally, the number of initial feature points is 44 or 98.
According to a second aspect of the embodiments of the present disclosure, a facial feature point positioning device is provided. The device includes:
a first correction module configured to correct initial feature point coordinates according to a first feature point correction model to obtain first-corrected feature point coordinates;
a recognition module configured to perform center feature point recognition on the plurality of first-corrected feature point coordinates corrected by the first correction module, to obtain at least one center feature point coordinate; and
a mapping module configured to perform coordinate mapping on the plurality of first-corrected feature point coordinates corrected by the first correction module according to a feature point mapping function, to obtain a plurality of second-corrected feature point coordinates, wherein the feature point mapping function describes the mapping relationship from the center feature point recognized by the recognition module to the second-corrected feature point coordinates.
Optionally, the device further includes:
a second correction module configured to correct, according to a second feature point correction model, the plurality of second-corrected feature point coordinates mapped by the mapping module, to obtain a plurality of final corrected feature point coordinates.
Optionally, the device further includes:
a detection module configured to perform face region detection on a target picture to obtain a face region; and
an obtaining module configured to obtain the plurality of initial feature point coordinates in the face region according to the coordinate ratios of the plurality of initial feature points.
Optionally, the coordinate ratios of the initial feature points are obtained by calibrating and measuring the face regions in a plurality of picture samples.
Optionally, the center feature point coordinates are eyeball center point coordinates.
Optionally, the first correction module includes:
a first calculation submodule configured to perform matrix multiplication on the plurality of initial feature point coordinates according to the first feature point correction model to obtain a plurality of first initial feature point coordinates;
perform matrix multiplication on the plurality of first initial feature point coordinates according to the first feature point correction model to obtain a plurality of second initial feature point coordinates;
......
and perform matrix multiplication on the plurality of (N-1)-th initial feature point coordinates in the N-th iteration to obtain the plurality of first-corrected feature point coordinates, where N is an integer greater than or equal to 2.
Optionally, the second correction module includes:
a second calculation submodule configured to perform matrix multiplication on the plurality of second-corrected feature point coordinates according to the second feature point correction model to obtain first final feature point coordinates;
perform matrix multiplication on the first final feature point coordinates according to the second feature point correction model to obtain second final feature point coordinates;
......
and perform matrix multiplication on the (M-1)-th final feature point coordinates in the M-th iteration to obtain the final corrected feature point coordinates, where M is an integer greater than or equal to 2.
Optionally, the first feature point correction model describes the mapping relationship between the features and offsets of the plurality of initial feature points and the features and offsets of the plurality of first-corrected feature points, and the first feature point correction model is a projection matrix model.
Optionally, the second feature point correction model describes the mapping relationship between the features and offsets of the plurality of second-corrected feature points and the features and offsets of the plurality of final corrected feature points, and the second feature point correction model is a projection matrix model.
Optionally, the number of initial feature points is 44 or 98.
According to a third aspect of the embodiments of the present disclosure, a facial feature point positioning device is provided, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
correct initial feature point coordinates according to a first feature point correction model to obtain first-corrected feature point coordinates;
perform center feature point recognition on the plurality of first-corrected feature point coordinates to obtain at least one center feature point coordinate; and
perform coordinate mapping on the plurality of first-corrected feature point coordinates according to a feature point mapping function to obtain a plurality of second-corrected feature point coordinates, wherein the feature point mapping function describes the mapping relationship from the center feature point to the second-corrected feature point coordinates.
In the above embodiments of the present disclosure, the initial feature point coordinates are corrected by the first feature point correction model to obtain first-corrected feature point coordinates, center feature point recognition is performed on the plurality of first-corrected feature point coordinates to obtain at least one center feature point coordinate, and coordinate mapping is then performed on the plurality of first-corrected feature point coordinates according to the feature point mapping function to obtain a plurality of second-corrected feature point coordinates. Since the feature point mapping function describes the mapping relationship from the center feature point to the second-corrected feature point coordinates, and the center feature point is a more accurately located point recognized on the basis of the first-corrected feature point coordinates, the positioning accuracy of facial feature points can be improved.
In the above embodiments of the present disclosure, the plurality of second-corrected feature point coordinates are further corrected by the second feature point correction model to obtain a plurality of final corrected feature point coordinates. Because the second-corrected feature points are corrected again by the second feature point correction model, the positioning accuracy of facial feature points can be further improved.
In the above embodiments of the present disclosure, face region detection is performed on the target picture to obtain a face region, and the plurality of initial feature point coordinates in the face region are then obtained according to the coordinate ratios of the plurality of initial feature points. Since the coordinate ratios of the initial feature points are obtained by calibrating and measuring the face regions in a plurality of pictures, initial feature points can be set for the target picture quickly and accurately.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Detailed description of the invention
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
Fig. 1 is a flow diagram of a facial feature point positioning method according to an exemplary embodiment;
Fig. 2 is a flow diagram of another facial feature point positioning method according to an exemplary embodiment;
Fig. 3 is a schematic block diagram of a facial feature point positioning device according to an exemplary embodiment;
Fig. 4 is a schematic block diagram of another facial feature point positioning device according to an exemplary embodiment;
Fig. 5 is a schematic block diagram of another facial feature point positioning device according to an exemplary embodiment;
Fig. 6 is a schematic block diagram of another facial feature point positioning device according to an exemplary embodiment;
Fig. 7 is a schematic block diagram of another facial feature point positioning device according to an exemplary embodiment;
Fig. 8 is a schematic structural diagram of a facial feature point positioning device according to an exemplary embodiment.
Specific embodiment
Exemplary embodiments are described in detail here, and examples of them are illustrated in the accompanying drawings. Where the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terms used in the present disclosure are only for the purpose of describing particular embodiments and are not intended to limit the present disclosure. The singular forms "a", "said" and "the" used in the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, third, etc. may be used in the present disclosure to describe various information, this information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while" or "in response to determining".
The SDM (Supervised Descent Method) algorithm is an iterative algorithm that can be used to position facial feature points. Its principle is as follows:
A group of initial feature points is set, an image feature vector is extracted for this group of initial feature points to obtain a group of image feature vectors Y_0, and Y_0 is used to predict the offset delta_X_0 from the current position X_0 of the initial feature points to the next target position. The offset is then added to the current position X_0, and the next iteration starts. The whole iterative process can be expressed by the following formulas:
X_{n+1} = X_n + delta_X_n
delta_X_n = f_n(Y_n)
n = 0, 1, 2, ...
Here, delta_X_n is a multi-dimensional vector, and the way its value, i.e. the offset of each iteration, is computed is the key to the iterative algorithm. When computing the value of delta_X_n, the SDM algorithm generally uses a linear prediction method, i.e. the offset delta_X_n of each iteration is a linear function f_n(Y_n) of the image feature vector Y_n, where:
f_n(Y_n) = A_n * Y_n
In the expression of the linear function f_n(Y_n), A_n is a position prediction matrix used to predict the offset delta_X_n of each iteration. In operation, if p feature points need to be positioned in total, Y_n is a k*p-dimensional vector (a k-dimensional feature vector is extracted for each feature point, and the specific value of k is set according to actual needs), A_n is a 2p x kp matrix, and X_n is a 2p-dimensional vector (each feature point has a 2-dimensional coordinate).
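For illustration only, the iteration just described can be sketched as follows; the feature extractor, the number of iterations, and all function and variable names are assumptions, not part of the patent.

```python
import numpy as np

def sdm_iterate(x0, image, prediction_matrices, extract_features):
    """Run the SDM update X_{n+1} = X_n + A_n * Y_n for one group of points.

    x0                  : (2p,) vector of initial feature point coordinates
    prediction_matrices : list of A_n, each of shape (2p, k*p)
    extract_features    : callable returning the (k*p,) feature vector Y_n
                          for the current point positions
    """
    x = x0.copy()
    for A_n in prediction_matrices:          # one matrix per iteration
        y_n = extract_features(image, x)     # image texture features Y_n
        delta_x = A_n @ y_n                  # predicted offset delta_X_n
        x = x + delta_x                      # X_{n+1} = X_n + delta_X_n
    return x
```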
When the SDM algorithm iterates from the initial feature points described above, the common practice is to use fast face detection to find the approximate position of the face and obtain an initial face frame, set the initial feature points inside that initial face frame, and then iterate with the SDM algorithm to position the facial feature points.
However, in the above scheme, when facial feature points are positioned by the SDM algorithm, the positioning accuracy depends heavily on the position of the initial frame. When the initial frame lies inside the actual face, the variation inside the face is small, so the positioning result after SDM iteration is relatively good; when the initial frame lies outside the actual face, the variation of the external background may be large, which causes the positioning result of the SDM iteration to be inaccurate. It can be seen that, in the related art, the accuracy of the SDM algorithm depends to a large extent on the position of the initial frame, which often leads to inaccurate facial feature point positioning.
To solve the above problem, the present disclosure proposes a facial feature point positioning method: the initial feature point coordinates are corrected by a first feature point correction model to obtain first-corrected feature point coordinates; center feature point recognition is performed on the plurality of first-corrected feature point coordinates to obtain at least one center feature point coordinate; and coordinate mapping is then performed on the plurality of first-corrected feature point coordinates according to a feature point mapping function to obtain a plurality of second-corrected feature point coordinates. Since the feature point mapping function describes the mapping relationship from the center feature point to the second-corrected feature point coordinates, and the center feature point is a more accurately located point recognized on the basis of the first-corrected feature point coordinates, the positioning accuracy of facial feature points can be improved.
As shown in Fig. 1, Fig. 1 illustrates a facial feature point positioning method according to an exemplary embodiment. The method is applied to a server side and includes the following steps:
In step 101, initial feature point coordinates are corrected according to a first feature point correction model to obtain first-corrected feature point coordinates;
In step 102, center feature point recognition is performed on the plurality of first-corrected feature point coordinates to obtain at least one center feature point coordinate;
In step 103, coordinate mapping is performed on the plurality of first-corrected feature point coordinates according to a feature point mapping function to obtain a plurality of second-corrected feature point coordinates, wherein the feature point mapping function describes the mapping relationship from the center feature point to the second-corrected feature point coordinates.
In this embodiment, the server side may include a server, a server cluster or a cloud platform that provides a facial feature point positioning service to users. The first feature point correction model may be a projection matrix model trained on the mapping association between the image texture features and offsets of the initial feature points and the image texture features and offsets of the first-corrected feature points; it can be used to correct the coordinates of the initial feature points to obtain the coordinates of the first-corrected feature points.
For example, the first feature point correction model may be a projection matrix model based on the SDM algorithm. When training the first feature point correction model, the training can be based on initial feature points calibrated in the face regions of a preset number of photo samples.
Taking the case where the first feature point correction model is a projection matrix model based on the SDM algorithm as an example, the training process of the first feature point correction model is described below.
In the preparation stage of training the first feature point correction model, a preset number of photo samples can be prepared. When training the first feature point correction model, face regions can be manually calibrated on all photo samples, and then, within the calibrated face regions, a certain number of uniformly distributed facial feature points are manually calibrated along the facial contour, using unified labels at unified positions. For example, when calibrating the facial feature points, feature points at fixed and uniformly distributed positions in the face region of each photo sample can be used to sketch out the shapes of the face contour, eyebrows, eyes, nose and mouth, depicting all the facial features.
The more uniformly distributed and the more numerous the calibrated feature points are, the more accurate the finally trained model is; however, calibrating too many feature points increases the computation load of the system. Therefore, in implementation, the number of calibrated feature points can follow an engineering experience value or be set according to the actual computing capability or requirements of the system. For example, 50,000 image samples can be prepared, and in the face region of every photo sample a group of 44 or 98 uniformly distributed feature points with fixed positions can be manually calibrated along the facial contour. For example, assuming 98 points are manually calibrated in all picture samples, they can be uniformly labeled with the numbers 0 to 97, and the relative position within the face region of the feature point carrying the same label is fixed across picture samples.
After the facial feature points have been calibrated on all photo samples, these successfully calibrated facial feature points can be used to train the first feature point correction model based on the matrix model training method of the SDM algorithm.
When training the first feature point correction model, after a certain number of feature points have been manually calibrated on all photo samples, the calibrated face region can be taken as the initial region, and a corresponding group of initial feature points can be set, according to the calibrated feature points, in the face region calibrated in each photo sample.
When setting the initial feature points in the face region, the setting can be based on the coordinate ratios of the initial feature points in the face region, and the coordinate ratios are obtained by calibrating and measuring the face regions in the preset number of photo samples.
For example, during the manual calibration of feature points on all photo samples, the coordinate ratio of each feature point within the photo's face region can be measured. After all photo samples have been calibrated, the coordinate ratio data of each feature point measured over all photo samples can be analyzed, a suitable coordinate ratio (for example, the mean value) can be set for each feature point, and that coordinate ratio can be used as the coordinate ratio for setting the initial feature points.
The coordinate ratio can be used to measure the relative position of each feature point within the face region. The size range of the face region differs between photo samples, so even feature points at the same relative position in different photo samples may have different coordinates. Measuring the relative position of a feature point in the face region by a coordinate ratio is therefore more accurate than measuring its position by the feature point coordinates themselves. For example, taking an XY coordinate system, the coordinate ratio can be characterized by the proportions of the feature point's X coordinate and Y coordinate within the face region.
Therefore, when setting the initial feature points in the face region, the coordinates of the corresponding initial feature points can be obtained directly in all photo samples according to the coordinate ratios obtained by calibrating and measuring the face regions in the preset number of photo samples. The number of initial feature points set in this way is consistent with the number of manually calibrated feature points in all photo samples.
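A minimal sketch of how coordinate ratios measured on calibrated samples could be turned into initial point coordinates inside a detected face region follows; the normalization convention (proportions of the region width and height) and all names are assumptions for illustration.

```python
import numpy as np

def coordinate_ratios(landmarks, face_box):
    """Relative positions of calibrated landmarks inside a face box.

    landmarks : (p, 2) array of calibrated (x, y) points
    face_box  : (x0, y0, w, h) of the calibrated face region
    """
    x0, y0, w, h = face_box
    return (landmarks - np.array([x0, y0])) / np.array([w, h])

def initial_points_from_ratios(mean_ratios, detected_box):
    """Place initial feature points in a newly detected face region."""
    x0, y0, w, h = detected_box
    return mean_ratios * np.array([w, h]) + np.array([x0, y0])

# mean_ratios would be the per-landmark average of coordinate_ratios(...)
# over all calibrated photo samples.
```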
Of course, when setting the initial feature points in the face region, besides the approach described above based on the coordinate ratios of the initial feature points in the face region, other approaches can also be used. In another implementation shown in this embodiment, the coordinate mean of the calibrated facial feature points can be computed over the photo samples, and the corresponding initial feature points can then be set separately in the initial face region of each photo sample according to the computed coordinate means.
For example, assuming 98 feature points have been calibrated on all photo samples, the average coordinate value of these 98 feature points over all photo samples can be computed: first the coordinate mean of point No. 1 over all photo samples, then the coordinate mean of point No. 2, and so on. After the average coordinate values of the 98 feature points have been computed, the 98 computed coordinate means can be used as estimates, so that 98 feature points are likewise estimated in the detected initial face region.
It is worth noting that, during the training of the first feature point correction model, when setting the initial feature points for each photo sample, a certain random perturbation can also be added to the set initial feature points as a disturbance value. Adding a disturbance value to the initial feature points can increase the precision of the model finally trained on those initial feature points.
In this embodiment, after the initial feature points have been set, the first feature point correction model can be trained based on these set initial feature points.
As described above, because the offset delta_X_n of each SDM iteration is a linear function f_n(Y_n) of the image feature vector Y_n, and f_n(Y_n) = A_n * Y_n, after the initial feature points have been set, the image texture feature vector Y_n corresponding to these initial feature points can be extracted and the position prediction matrix A_n can be computed.
On the one hand, for the initial feature points X_0 set in all photo samples (X_0 denotes a group of initial feature points that have been set), the corresponding image texture feature vector Y_0 can be extracted.
When extracting the image texture feature vector, a k-dimensional vector can be extracted at each initial feature point as a feature descriptor, and the feature descriptors extracted from all initial feature points can then be concatenated into a k*p-dimensional vector Y_n (p being the number of feature points to be positioned).
There are many possible choices of feature descriptor. In general, the feature descriptor should have a low dimension, express the image content at the feature point concisely, and be robust to illumination changes and geometric changes. Therefore, in one implementation shown in this embodiment, a 3x3 HOG (Histogram of Oriented Gradients) and a 3x3 gray-scale patch can be extracted as the feature descriptor.
It is worth noting that, in practice, an excessively high descriptor dimension usually directly affects the size of the position prediction matrix A_n. Therefore, in order to control the number of parameters that need to be learned, the feature descriptors extracted from the pictures can also be reduced in dimension by a preset dimension-reduction algorithm. For example, the descriptors collected from all labeled facial feature points can be processed with the PCA (Principal Component Analysis) algorithm to obtain a dimension-reduction matrix B (of dimension m x k); the dimension-reduction matrix is then applied to each descriptor in the Y_n vector, obtaining a dimension-reduced image feature vector Z_n (an m*p-dimensional vector). In the subsequent computation of the offset delta_X_n, the image feature vector Z_n can be used in place of Y_n, i.e. the linear function can then be expressed as f_n(Y_n) = A_n * Z_n, where Z_n is obtained by applying B to each descriptor in Y_n.
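The per-point descriptor extraction and PCA reduction described above could look roughly like the following; the descriptor implementation (a plain gradient-orientation histogram plus a 3x3 gray patch) and all function names are assumptions for illustration, not the patent's exact procedure.

```python
import numpy as np

def point_descriptor(gray, x, y, patch=9, bins=8):
    """Concatenate a small orientation histogram and a 3x3 gray patch
    around (x, y) as a simple stand-in for the HOG-style descriptor."""
    h, w = gray.shape
    r = patch // 2
    x = int(np.clip(round(x), r, w - r - 1))
    y = int(np.clip(round(y), r, h - r - 1))
    win = gray[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    gy, gx = np.gradient(win)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi
    hog, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    gray3 = gray[y - 1:y + 2, x - 1:x + 2].astype(float).ravel()
    return np.concatenate([hog, gray3])

def image_feature_vector(gray, points):
    """Y_n: descriptors of all p points concatenated into one k*p vector."""
    return np.concatenate([point_descriptor(gray, x, y) for x, y in points])

def fit_pca(descriptors, m):
    """Dimension-reduction matrix B (m x k) from stacked descriptors."""
    d = descriptors - descriptors.mean(axis=0)
    _, _, vt = np.linalg.svd(d, full_matrices=False)
    return vt[:m]            # rows are the top-m principal directions
```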
On the other hand, for the positions of the initial feature points set in each photo sample, the offset delta_X_0 between them and the manually labeled facial feature points in all photo samples can also be computed. Here delta_X_0 = X* - X_0, where X_0 denotes the position coordinates of the initial feature points set in all photo samples, and X* denotes the position coordinates of the manually labeled facial feature points in all photo samples.
After the image feature vector Y_0 corresponding to the initial feature points set in each picture has been extracted, and the offset delta_X_0 between the initial feature points set in each photo sample and the manually labeled facial feature points in all photo samples has been computed, the position prediction matrix A_0 can be learned by linear fitting, based on the linear relationship that exists between the offset delta_X_0 and the initial feature points X_0. Here, A_0 denotes the position prediction matrix used in the first iteration of the SDM algorithm.
For example, as described above, in the SDM algorithm the linear relationship can be expressed by the linear function delta_X_n = A_n * Y_n, so according to this linear relationship it is easy to obtain delta_X_0 = A_0 * Y_0.
It is easy to see from the above linear function that the computed delta_X_0 and Y_0 impose a constraint on A_0: when delta_X_0 and Y_0 are taken as prediction data, A_0 can be understood as the constraint matrix between delta_X_0 and Y_0.
In this case, when solving for A_0 based on the above linear function, the computed offset delta_X_0 and the extracted image feature vector Y_0 can be taken as prediction data, and A_0 can be solved by least-squares linear fitting.
The process of solving A_0 by least-squares linear fitting is not described in detail in this embodiment; those skilled in the art can refer to the related art when implementing the above technical solution.
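As a hedged illustration of the least-squares step (not the patent's exact procedure), A_0 can be fit by regularized least squares over all training samples; the names and the ridge term are assumptions.

```python
import numpy as np

def fit_prediction_matrix(Y, dX, reg=1e-3):
    """Least-squares fit of A in dX ≈ A @ Y over all training samples.

    Y  : (k*p, S) matrix, one image feature vector per sample (column)
    dX : (2*p, S) matrix, one offset vector delta_X per sample (column)
    reg: small ridge term for numerical stability (an assumption,
         not required by the description above)
    """
    kp = Y.shape[0]
    # Solve A (Y Y^T + reg I) = dX Y^T  =>  A = dX Y^T (Y Y^T + reg I)^-1
    G = Y @ Y.T + reg * np.eye(kp)
    return dX @ Y.T @ np.linalg.inv(G)
```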
After A_0 has been solved by least-squares linear fitting, this A_0 is the position prediction matrix used in the first iteration of the SDM algorithm. Once A_0 has been computed, the offset delta_X_0 of the first iteration can be computed by the above linear function, and adding this offset to X_0 yields the group of feature points X_1 for the next iteration. After the group of feature points X_1 for the next iteration has been computed, the above iteration process can be repeated until the SDM algorithm converges.
It is worth noting that, during the continuing iterations, the position error between the group of facial feature points being located and the group of manually calibrated facial feature points in the photo samples is continuously corrected, and after the SDM algorithm converges this position error is minimal. Therefore, once the SDM algorithm has converged, the training of the first feature point correction model is finished, and the position prediction matrices computed after each iteration in the first feature point correction model can be used for facial feature point positioning on a target picture provided by a user.
The number of iterations performed before the SDM algorithm converges when training the first feature point correction model is not specifically limited in this disclosure. For example, based on engineering experience, facial feature point positioning applications usually need 4 iterations, so the first feature point correction model can provide 4 position prediction matrices A_0 to A_3.
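Putting the pieces together, the training stage described above could be organized roughly as follows; the sample container, the four-iteration choice, and the helpers image_feature_vector and fit_prediction_matrix (from the sketches above) are assumptions.

```python
import numpy as np

def train_correction_model(samples, n_iter=4, reg=1e-3):
    """Learn A_0..A_{n_iter-1} from (image, initial_points, labeled_points)
    triples, following the SDM-style training loop described above."""
    X = np.stack([s["initial_points"].ravel() for s in samples], axis=1)       # (2p, S)
    X_true = np.stack([s["labeled_points"].ravel() for s in samples], axis=1)  # (2p, S)
    matrices = []
    for _ in range(n_iter):
        Y = np.stack([image_feature_vector(s["image"], x.reshape(-1, 2))
                      for s, x in zip(samples, X.T)], axis=1)                  # (k*p, S)
        dX = X_true - X                        # remaining offsets to the labels
        A = fit_prediction_matrix(Y, dX, reg)  # least-squares fit of this stage
        matrices.append(A)
        X = X + A @ Y                          # move every sample's points forward
    return matrices
```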
The above is the detailed process of training the first feature point correction model.
The first feature point correction model is trained with the face region as the initial region, based on the group of initial feature points distributed uniformly along the facial contour in the photo samples.
The trained first feature point correction model can be used to correct the coordinates of the initial feature points set on a target picture, obtaining the coordinates of the first-corrected feature points, so as to achieve precise positioning of the facial feature points. The target picture is the user's picture on which facial feature point positioning needs to be performed.
When performing facial feature point positioning for a target picture, a fast face detection technique (for example a mature face detector such as adaboost) can be used to perform face region detection on the target picture to obtain an initial face region, and the initial feature points are set in that face region.
When setting the initial feature points in the face region detected in the target picture, the setting can still be done according to the coordinate ratios of the initial feature points in the face region, or according to the coordinate means of the feature points calibrated in the photo samples; the detailed process is not repeated here.
After the initial feature points have been set in the face region detected in the target picture, the corresponding image texture feature vector Y_0 can be extracted for this group of initial feature points X_0 set in the target picture, and the extracted image feature vector Y_0 is then iterated with the position prediction matrices A_n provided by the trained first feature point correction model, so that the initial feature points X_0 in the target picture are corrected for the first time, obtaining the first-corrected feature point coordinates.
When iterating the image texture feature vector Y_0 of the target picture with the position prediction matrices A_n provided by the trained first feature point correction model, assume the first feature point correction model provides 4 position prediction matrices A_0 to A_3; then 4 matrix multiplications are performed. First, matrix multiplication is performed on the image texture feature vector Y_0 according to A_0 for the first iteration, obtaining a group of first initial feature point coordinates; then matrix multiplication is performed again on the computed first initial feature point coordinates according to A_1 for the second iteration, obtaining a group of second initial feature point coordinates; then matrix multiplication is performed again on the computed second initial feature point coordinates according to A_2 for the third iteration, obtaining third initial feature point coordinates; and after the third iteration is completed, matrix multiplication is performed on the computed third initial feature point coordinates according to A_3, obtaining the first-corrected feature point coordinates, at which point the iteration is complete.
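Reusing the sketches above, the first-correction stage on a target picture might look like the following; the face-detector call is an assumption (any detector returning a box would do), as are all names.

```python
def first_correction(image_gray, face_detector, mean_ratios, prediction_matrices):
    """Detect the face, place initial points by coordinate ratio, and run the
    trained first correction model (A_0..A_3) to get first-corrected points."""
    face_box = face_detector(image_gray)                 # (x0, y0, w, h), assumed API
    x0 = initial_points_from_ratios(mean_ratios, face_box).ravel()
    x = sdm_iterate(x0, image_gray, prediction_matrices,
                    lambda img, pts: image_feature_vector(img, pts.reshape(-1, 2)))
    return x.reshape(-1, 2)                              # first-corrected coordinates
```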
In this embodiment, when the first feature point correction model is used for facial feature point positioning on a target picture, the detected face frame is used as the initial region, so the positioning accuracy depends heavily on the position of the initial frame: when the initial frame lies inside the actual face, the variation inside the face is small and the positioning result after SDM iteration is relatively good; when the initial frame lies outside the actual face, the variation of the external background may be large, which makes the positioning result of the SDM iteration inaccurate. Therefore, in order to improve the positioning accuracy, after the first feature point correction model has performed facial feature point positioning for the target picture, a second correction can be performed on the plurality of first-corrected feature points obtained from the matrix multiplications, obtaining a preset number of second-corrected feature point coordinates.
When performing the second correction on the first-corrected feature points, center feature point recognition can be performed on the plurality of first-corrected feature points to obtain at least one center feature point coordinate, and coordinate mapping is then performed on the first-corrected feature points according to the mapping relationship between the center feature point coordinates and the second-corrected feature point coordinates, obtaining the preset number of second-corrected feature point coordinates.
In one implementation shown in this embodiment, the center feature points can be the eyeball centers, and the center feature point coordinates can then be the coordinates of the eyeball centers of the two eyes.
When the center feature points are the eyeball centers, the eyeball centers are recognized on the basis of the first-corrected feature points. Because the texture features of the eyeball center points are relatively rich, the coordinates of the first-corrected feature points can be used as auxiliary parameters by a preset iris localization algorithm, which locates the eyeball centers by recognizing the texture features of the eyeball center points. The preset iris localization algorithm is not specifically limited in this embodiment; those skilled in the art can refer to implementations in the related art.
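One simple stand-in for such an iris/eyeball-center locator (not the patent's algorithm) is to search a window around the first-corrected eye landmarks for the darkest pixels; the window margin, the dark quantile, and the names below are all assumptions.

```python
import numpy as np

def eyeball_center(gray, eye_points, margin=4, dark_quantile=0.2):
    """Rough eyeball-center estimate inside the window spanned by the
    first-corrected eye landmarks: centroid of the darkest pixels."""
    xs, ys = eye_points[:, 0], eye_points[:, 1]
    x0, x1 = int(xs.min()) - margin, int(xs.max()) + margin
    y0, y1 = int(ys.min()) - margin, int(ys.max()) + margin
    win = gray[max(y0, 0):y1, max(x0, 0):x1].astype(float)
    thresh = np.quantile(win, dark_quantile)     # pupil/iris pixels are dark
    yy, xx = np.nonzero(win <= thresh)
    return np.array([xx.mean() + max(x0, 0), yy.mean() + max(y0, 0)])
```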
When the coordinates of the two eyeball centers have been recognized on the basis of the coordinates of the first-corrected feature points, coordinate mapping can be performed on the first-corrected feature point coordinates based on the mapping relationship between the eyeball center coordinates and the second-corrected feature point coordinates, so that the first-corrected feature points are corrected a second time, obtaining the preset number of second-corrected feature point coordinates.
The mapping relationship between the eyeball center coordinates and the second-corrected feature point coordinates can be characterized by a preset feature point mapping function, and this feature point mapping function can be learned from the relative distances between the eyeball centers and each of the manually labeled feature points in the preset number of photo samples.
For the preset number of photo samples, although the size and range of the face region differ between photos, within each photo sample the distances between the eyeball centers of the two eyes and each manually labeled feature point are relatively constant. Therefore, when manually labeling the feature points on the photo samples, the distances from the eyeball centers of the two eyes to each labeled feature point can be measured for each photo sample; the measured data is then used as prediction data, and the mapping relationship between the coordinates of the eyeball centers of the two eyes and the coordinates of the other labeled feature points is learned by linear fitting. The learned mapping relationship is then applied to the coordinates of the first-corrected feature points.
Because the distances between the eyeball centers of the two eyes and each manually labeled feature point are relatively constant, performing coordinate mapping on the coordinates of the first-corrected feature points through the feature point mapping function achieves the second correction of the first-corrected feature points, obtaining the preset number of second-corrected feature point coordinates and thus improving the positioning accuracy of the facial feature points.
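A minimal sketch of one way such a mapping function could be fit and applied is given below, expressing each landmark as an offset from the eye-center midpoint in an eye-to-eye coordinate frame; the choice of frame and all names are assumptions, not the patent's exact formulation.

```python
import numpy as np

def eye_frame(left_center, right_center):
    """Origin at the midpoint of the two eyeball centers, x-axis along the
    inter-ocular line, scaled by the inter-ocular distance."""
    mid = (left_center + right_center) / 2.0
    d = right_center - left_center
    scale = np.linalg.norm(d)
    ux = d / scale
    uy = np.array([-ux[1], ux[0]])
    return mid, np.stack([ux, uy]), scale

def fit_mapping(samples):
    """Average eye-frame offsets of each labeled landmark over the samples."""
    offs = []
    for s in samples:
        mid, R, scale = eye_frame(s["left_eye"], s["right_eye"])
        offs.append((s["labeled_points"] - mid) @ R.T / scale)
    return np.mean(offs, axis=0)                  # (p, 2) normalized offsets

def apply_mapping(mean_offsets, left_center, right_center):
    """Second-corrected coordinates predicted from the two eyeball centers."""
    mid, R, scale = eye_frame(left_center, right_center)
    return mean_offsets * scale @ R + mid
```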
In this embodiment, after the second correction of the first-corrected feature points based on the feature point mapping function, the coordinates of the obtained second-corrected feature points can also be corrected again based on the second feature point correction model, obtaining the coordinates of the final corrected feature points.
The second feature point correction model can be a projection matrix model trained on the mapping association between the image texture features and offsets of the second-corrected feature points and the image texture features and offsets of the final corrected feature points; it can be used to correct the coordinates of the second-corrected feature points again, obtaining the coordinates of the final corrected feature points.
For example, the second feature point correction model can still be a projection matrix model based on the SDM algorithm. As described above, the first feature point correction model is trained with the face region as the initial region, based on initial feature points calibrated in the face regions of the preset number of photo samples. Since the coordinates of the second-corrected feature points are obtained from the mapping relationship between the eyeball centers of the two eyes and the second-corrected feature points, when training the second feature point correction model, the training is based on initial feature points calibrated in the region defined by the eyeball centers of the two eyes in the preset number of photo samples, with the eyeball centers of the two eyes as the initial region.
Taking the case where the second feature point correction model is a projection matrix model based on the SDM algorithm as an example, the training process of the second feature point correction model is described below.
When training the second feature point correction model, the photo samples used for training the first feature point correction model can still be used. First, the eyeball centers of the two eyes in all photo samples are calibrated as center feature points. After the eyeball centers of the two eyes have been calibrated, a rectangular frame can be generated from the calibrated eyeball centers, this rectangular frame is taken as the initial region, and a corresponding group of initial feature points is set in this initial region in the photo samples according to the calibrated feature points.
When setting the initial feature points in this initial region, the setting can again be based on the coordinate ratios of the initial feature points within the initial region, and the coordinate ratios are again obtained by calibrating and measuring this initial region in the preset number of photo samples. For example, during the manual calibration of feature points on all photo samples, the coordinate ratio of each feature point within the initial region can be measured; after all photo samples have been calibrated, the coordinate ratio data of each feature point within the initial region measured over all photo samples can be analyzed, a suitable coordinate ratio (for example, the mean value) can be set for each feature point in the initial region, and this coordinate ratio is used as the coordinate ratio for setting the initial feature points.
After the initial feature points have been set, the image feature vector Y_n corresponding to these initial feature points can be extracted and the position prediction matrix A_n can be computed. On the one hand, for the initial feature points X_0 set in all photo samples (X_0 again denoting a group of initial feature points that have been set), the corresponding image feature vector Y_0 can be extracted. On the other hand, for the positions of the initial feature points set in each photo sample, the offset delta_X_0 between them and the manually labeled facial feature points within the initial region in all photo samples can also be computed: delta_X_0 = X* - X_0, where X_0 denotes the position coordinates of the initial feature points set in all photo samples, and X* denotes the position coordinates of the manually labeled facial feature points within the initial region in all photo samples. After the image feature vector Y_0 corresponding to the initial feature points set in each picture has been extracted, and the offset delta_X_0 between the initial feature points set in each photo sample and the manually labeled facial feature points within the initial region in all photo samples has been computed, the position prediction matrix A_0 can be learned by linear fitting, based on the linear relationship that exists between the offset delta_X_0 and the initial feature points X_0.
When learning the position prediction matrix A_0 by linear fitting, least-squares linear fitting can still be used. The detailed process is not repeated here; those skilled in the art may implement it in the same way as the training process of the first feature point correction model described above.
Assuming that, during the training of the second feature point correction model, the SDM algorithm converges after 4 iterations in total, the second feature point correction model can provide 4 position prediction matrices A_0 to A_3.
The above is the training process of the second feature point correction model.
The second feature point correction model is trained with the region defined by the eyeball centers of the two eyes as the initial region, based on the group of initial feature points set within that initial region in the photo samples. The trained second feature point correction model can be used to correct the coordinates of the second-corrected feature points again, obtaining the coordinates of the final corrected feature points, so as to improve the positioning accuracy of the facial feature points.
When correcting the coordinates of the second-corrected feature points according to the second feature point correction model, the corresponding image texture feature vector Y_0 can be extracted for the group of second-corrected feature points X_0 obtained after correction by the feature point mapping function, and the extracted image feature vector Y_0 is then iterated with the position prediction matrices A_n provided by the trained second feature point correction model, so that the second-corrected feature points X_0 are corrected again, obtaining the final corrected feature point coordinates.
When iterating the image texture feature vector Y_0 of the second-corrected feature points with the position prediction matrices A_n provided by the trained second feature point correction model, assume the second feature point correction model still provides 4 position prediction matrices A_0 to A_3; then 4 matrix multiplications are performed. First, matrix multiplication is performed on the image texture feature vector Y_0 according to A_0 for the first iteration, obtaining a group of first final feature point coordinates; then matrix multiplication is performed again on the computed first final feature point coordinates according to A_1 for the second iteration, obtaining a group of second final feature point coordinates; then matrix multiplication is performed again on the computed second final feature point coordinates according to A_2 for the third iteration, obtaining third final feature point coordinates; and after the third iteration is completed, matrix multiplication is performed on the computed third final feature point coordinates according to A_3, obtaining the final corrected feature point coordinates, at which point the iteration is complete.
After the image texture feature vector Y_0 of the second-corrected feature points has been iterated with the position prediction matrices A_n provided by the trained second feature point correction model, the final corrected feature point coordinates obtained at this point are the final result of the facial feature point positioning performed for the target picture.
As can be seen from the above description, in this embodiment the initial feature points set in the above target picture are corrected three times, by the first feature point correction model, the preset feature point mapping function and the second feature point correction model, so the positioning accuracy of the face feature points can be significantly improved.
In the above embodiments of the disclosure, the initial feature point coordinates are corrected by the first feature point correction model to obtain first-correction feature point coordinates; central feature point identification is performed on the plurality of first-correction feature point coordinates to obtain at least one central feature point coordinate; and coordinate mapping is then performed on the plurality of first-correction feature point coordinates according to the feature point mapping function to obtain a plurality of second-correction feature point coordinates. Since the feature point mapping function describes the mapping relationship from the central feature point to the second-correction feature point coordinates, and the central feature point is a more accurate feature point identified on the basis of the first-correction feature point coordinates, the positioning accuracy of the face feature points can be improved.
In the above embodiments of the disclosure, the plurality of second-correction feature point coordinates are corrected by the second feature point correction model to obtain a plurality of final corrected feature point coordinates. Since the second-correction feature points are corrected once more by the second feature point correction model, the positioning accuracy of the face feature points can be further improved.
In the above embodiments of the disclosure, face region detection is performed on the target picture to obtain a face region, and the plurality of initial feature point coordinates in the face region are then obtained according to the coordinate ratios of the plurality of initial feature points. Since the coordinate ratios of the initial feature points are obtained by calibration measurement of the face regions in a plurality of pictures, initial feature points can be set for the target picture quickly and accurately.
As shown in Fig. 2, Fig. 2 illustrates a face feature point positioning method according to an exemplary embodiment, applied to a server side and comprising the following steps:
In step 201, face region detection is performed on a target picture to obtain a face region;
In step 202, a plurality of the initial feature point coordinates in the face region are obtained according to the coordinate ratios of the plurality of initial feature points, the coordinate ratios of the initial feature points being obtained by calibration measurement of the face regions in a plurality of picture samples;
In step 203, the initial feature point coordinates are corrected according to a first feature point correction model to obtain first-correction feature point coordinates;
In step 204, central feature point identification is performed on the plurality of first-correction feature point coordinates to obtain at least one central feature point coordinate;
In step 205, coordinate mapping is performed on the plurality of first-correction feature point coordinates according to a feature point mapping function to obtain a plurality of second-correction feature point coordinates, the feature point mapping function describing the mapping relationship from the central feature point to the second-correction feature point coordinates.
In this embodiment, the server side may include a server, a server cluster or a cloud platform that provides a face feature point positioning service to users. The above first feature point correction model may be a projection matrix model trained from the mapping association between the image texture features and offsets of the initial feature points and the image texture features and offsets of the first-correction feature points, and it can be used to correct the coordinates of the above initial feature points and obtain the coordinates of the above first-correction feature points.
For example, the above first feature point correction model may be a projection matrix model based on the SDM algorithm; when the above first feature point correction model is trained, it can be obtained by training on initial feature points calibrated with respect to the face regions in a preset number of photo samples.
The training process of the above first feature point correction model is described below for the case where the first feature point correction model is a projection matrix model based on the SDM algorithm.
In the preparation stage of training the first feature point correction model, a preset number of photo samples can be prepared. When the above first feature point correction model is trained, face regions can first be manually calibrated on all photo samples; then, within the calibrated face regions, a certain number of uniformly distributed face feature points are manually calibrated along the facial contour, using unified labels at unified positions. For example, when the face feature points are calibrated, the shapes of the face contour, eyebrows, eyes, nose and mouth can be sketched out in the face region of each photo sample by feature points with fixed positions and uniform distribution, so that all facial features of the face are depicted.
The more uniformly the calibrated feature points are distributed and the larger their number, the more accurate the finally trained model; however, calibrating too many feature points increases the computational load of the system. In practice, therefore, the number of calibrated feature points can follow engineering experience or be set according to the actual computing capability or requirements of the system. For example, 50,000 image samples can be prepared, and in the face region of each photo sample a group of 44 or 98 feature points, uniformly distributed along the facial contour and with fixed positions, can be manually calibrated. Assuming, for example, that 98 points are manually calibrated in all picture samples, they can be uniformly labelled with the numbers 0 to 97, and a feature point with a given label has a fixed relative position in the face region of every picture sample.
After the face feature points have been calibrated on all photo samples, these successfully calibrated face feature points can be used to train the above first feature point correction model based on the matrix model training method of the SDM algorithm.
When the above first feature point correction model is trained, after a certain number of feature points have been manually calibrated on all photo samples, the calibrated face region can be used as the initial region, and a group of corresponding initial feature points can be set, according to the calibrated feature points, in the face region calibrated in all photo samples.
When the initial feature points are set in the face region, they can be set according to the coordinate ratios of the above initial feature points in the face region, the coordinate ratios being obtained by calibration measurement of the face regions in the preset number of photo samples.
For example, during the manual calibration of the feature points on all photo samples, the coordinate ratio of each feature point within the face region of the photo can be measured separately. After all photo samples have been calibrated, the measured coordinate ratio data of each feature point over all photo samples can be analysed, an appropriate coordinate ratio can be set for each feature point (for example by taking the mean), and that coordinate ratio can be used as the coordinate ratio for setting the initial feature points.
The above coordinate ratio can be used to measure the relative position of each feature point within the face region. The size ranges of the face regions in different photo samples all differ, so even feature points at the same position in different photo samples may have different coordinates; measuring the relative position of a feature point within the face region by its coordinate ratio is therefore more accurate than measuring its position by its coordinates directly. Taking an XY coordinate system as an example, the above coordinate ratio can be characterised by the respective proportions of the X coordinate and the Y coordinate of the feature point within the face region.
Therefore, when the initial feature points are set in the face region, the coordinates of the corresponding initial feature points in all photo samples can be obtained directly from the coordinate ratios obtained by calibration measurement of the face regions in the preset number of photo samples. The number of initial feature points set in this way is the same as the number of manually calibrated feature points in all photo samples.
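As an illustration only, the following Python sketch shows one way in which the coordinate-ratio placement described above could be implemented. It assumes the ratios are stored as fractions of the face-region width and height; the function name, the (x, y, w, h) box format and the ratio layout are assumptions made for illustration and are not prescribed by this disclosure.

```python
import numpy as np

def place_initial_points(face_box, coord_ratios):
    """Place one group of initial feature points inside a detected face region.

    face_box     -- (x, y, w, h) of the detected face region
    coord_ratios -- array of shape (p, 2); row i holds the averaged fractional
                    (x, y) position of calibrated landmark i inside the face
                    regions of the photo samples
    Returns an array of shape (p, 2) with absolute pixel coordinates.
    """
    x, y, w, h = face_box
    ratios = np.asarray(coord_ratios, dtype=np.float64)
    points = np.empty_like(ratios)
    points[:, 0] = x + ratios[:, 0] * w   # fractional x -> pixel x
    points[:, 1] = y + ratios[:, 1] * h   # fractional y -> pixel y
    return points

# e.g. place_initial_points((120, 80, 200, 200), [[0.30, 0.42], [0.70, 0.42]])
```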
Of course, besides being set according to the coordinate ratios of the above initial feature points in the face region as described above, the initial feature points can also be set in the face region in other ways. In another implementation shown in this embodiment, the coordinate means of the calibrated face feature points can be calculated over the photo samples, and the corresponding initial feature points can then be set in the above initial face region of each photo sample according to the calculated coordinate means.
For example, assuming that 98 feature points have been calibrated on all photo samples, the average coordinate value of each of these 98 feature points over all photo samples can be calculated: the coordinate mean of point No. 1 over all photo samples is calculated, then the coordinate mean of point No. 2, and so on. After the average coordinate values of these 98 feature points have been calculated, the 98 calculated coordinate means can be used as estimates, so that 98 feature points are likewise estimated within the detected initial face region.
It is worth noting that, during the training of the above first feature point correction model, when the initial feature points are set for each photo sample, a certain amount of random perturbation can also be added to the set initial feature points as a disturbance value; adding a disturbance value to the initial feature points can increase the accuracy of the model that is finally trained on the basis of these initial feature points.
In this embodiment, after the initial feature points have been set, the above first feature point correction model can be trained on the basis of these set initial feature points.
As described above, since the offset delta_X_n of each iteration of the SDM algorithm is a linear function f_n(Y_n) of the image feature vector Y_n, with f_n(Y_n) = A_n * Y_n, after the initial feature points have been set, the image texture feature vectors Y_n corresponding to these initial feature points can be extracted and the position prediction matrices A_n can be calculated.
On the one hand, for the initial feature points X_0 set in all photo samples (X_0 denotes the group of initial feature points that has been set), the corresponding image texture feature vectors Y_0 can be extracted.
When the image texture feature vectors are extracted, a k-dimensional vector can be extracted at each initial feature point as a feature descriptor, and the feature descriptors extracted at all initial feature points can then be concatenated into a k*p-dimensional vector Y_n (p is the number of feature points to be located).
There are many possible choices of feature descriptor. In general, the feature descriptor should have a low dimension, express the image content around the feature point concisely, and be robust to illumination changes, geometric changes and the like. In one implementation shown in this embodiment, therefore, a 3x3 HOG (Histogram of Oriented Gradients) and a 3x3 grey-level dot matrix can be extracted as the feature descriptor.
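As an illustration only, the following Python sketch outlines a descriptor of the kind described above, assuming a grey-level image and a single 9-bin orientation histogram over a 3x3 neighbourhood in place of a full HOG implementation; the exact descriptor used in practice, border handling and normalisation are left open here.

```python
import numpy as np

def point_descriptor(gray, pt, bins=9):
    """Build a small descriptor at one landmark: a 9-bin orientation histogram
    over a 3x3 neighbourhood plus the raw 3x3 grey-level patch.

    gray -- 2-D float array (grey-level image); no border handling here
    pt   -- (x, y) landmark position
    """
    x, y = int(round(pt[0])), int(round(pt[1]))
    patch5 = gray[y - 2:y + 3, x - 2:x + 3].astype(np.float64)  # 5x5 window
    gy, gx = np.gradient(patch5)                                # image gradients
    mag = np.hypot(gx, gy)[1:-1, 1:-1]                          # central 3x3
    ang = (np.arctan2(gy, gx) % np.pi)[1:-1, 1:-1]              # unsigned angle
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    grey3 = patch5[1:-1, 1:-1].ravel()                          # 3x3 grey dot matrix
    return np.concatenate([hist, grey3])

def shape_feature(gray, points):
    """Concatenate the k-dimensional descriptors of all p landmarks into Y (k*p)."""
    return np.concatenate([point_descriptor(gray, pt) for pt in points])
```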
It is worth noting that in practical applications an excessively high descriptor dimension usually directly affects the size of the position prediction matrices A_n. In order to control the number of parameters to be learned, the feature descriptors extracted from the pictures can therefore also be reduced in dimension according to a preset dimension-reduction algorithm. For example, all descriptors collected from the labelled face feature points can be processed with the PCA (Principal Component Analysis) algorithm to obtain a dimension-reduction matrix B (of dimension m x k), and this dimension-reduction matrix is then applied to each descriptor in the vector Y_n to obtain a reduced image feature vector Z_n (an m*p-dimensional vector). In the subsequent calculation of the offset delta_X_n, the image feature vector Z_n can be used in place of the above image feature vector Y_n, that is, the linear function f_n(Y_n) can be expressed as f_n(Y_n) = A_n * Y_n = A_n * B(Y_n) = A_n * Z_n.
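A minimal sketch of the PCA dimension-reduction step, assuming the descriptors are stacked row-wise and the principal directions are taken from a singular value decomposition of the centred data; the target dimension m is an assumed parameter.

```python
import numpy as np

def learn_pca_matrix(descriptors, m):
    """Learn an m x k dimension-reduction matrix B from stacked descriptors.

    descriptors -- array of shape (num_descriptors, k), one descriptor per row
    m           -- target dimension, m < k
    """
    mean = descriptors.mean(axis=0)
    centered = descriptors - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:m], mean              # B has shape (m, k)

def reduce_descriptor(B, mean, descriptor):
    """Project one k-dimensional descriptor down to m dimensions."""
    return B @ (descriptor - mean)
```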
On the other hand, for the positions of the initial feature points set in each photo sample, the offsets delta_X_0 between them and the manually labelled face feature points in all photo samples can be calculated, where delta_X_0 = X* - X_0, X_0 denotes the position coordinates of the initial feature points set in all photo samples, and X* denotes the position coordinates of the manually labelled face feature points in all photo samples.
After the image feature vectors Y_0 corresponding to the initial feature points set in each picture have been extracted, and the offsets delta_X_0 between the initial feature points set in each photo sample and the manually labelled face feature points in all photo samples have been calculated, the position prediction matrix A_0 can be learned by linear fitting, based on the linear relationship between the offset delta_X_0 and the initial feature points X_0. Here A_0 denotes the position prediction matrix used in the first iterative calculation of the SDM algorithm.
For example, as described above, the above linear relationship can be expressed in the SDM algorithm by the linear function delta_X_n = A_n * Y_n; according to this linear relationship, it is easy to obtain delta_X_0 = A_0 * Y_0.
It can easily be seen from the above linear function that the calculated delta_X_0 and Y_0 impose a constraint on A_0 in the above linear function; when delta_X_0 and Y_0 are taken as prediction data, A_0 can then be understood as the constraint matrix between delta_X_0 and Y_0.
In this case, when A_0 is solved on the basis of the above linear function, the calculated offsets delta_X_0 and the extracted image feature vectors Y_0 can be taken as prediction data, and A_0 can be solved by least-squares linear fitting.
The process of solving A_0 by least-squares linear fitting is not described in detail in this embodiment; those skilled in the art can refer to existing introductions when implementing the above technical solution.
Once A_0 has been solved by least-squares linear fitting, A_0 is the position prediction matrix used in the first iterative calculation of the SDM algorithm. After A_0 has been calculated, the offset delta_X_0 of the first iteration can be calculated from the above linear function, and adding this offset to the above X_0 yields the group of feature points X_1 for the next iteration. After the group of feature points X_1 for the next iteration has been calculated, the above iterative process can be repeated until the SDM algorithm converges.
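A minimal sketch of this training loop, assuming per-sample feature extraction and offsets flattened into row vectors; the small ridge term added to the least-squares solve is an assumption for numerical stability and is not part of this disclosure.

```python
import numpy as np

def train_sdm_cascade(images, true_shapes, init_shapes, extract, n_iter=4, reg=1e-3):
    """Learn the cascade of position prediction matrices A_0 ... A_{n_iter-1}.

    images      -- list of grey-level training images
    true_shapes -- list of (p, 2) arrays with manually labelled landmarks
    init_shapes -- list of (p, 2) arrays with the initial feature points
    extract     -- function (image, shape) -> 1-D feature vector Y
    Each stage solves delta_X = A_n * Y_n by (ridge-regularised) least squares
    and then applies the learned update before the next stage is trained.
    """
    shapes = [s.copy() for s in init_shapes]
    matrices = []
    for _ in range(n_iter):
        Y = np.stack([extract(img, s) for img, s in zip(images, shapes)])      # (N, d)
        dX = np.stack([(t - s).ravel() for t, s in zip(true_shapes, shapes)])  # (N, 2p)
        d = Y.shape[1]
        # least-squares fit of A_n; a small ridge term keeps the solve stable
        A = np.linalg.solve(Y.T @ Y + reg * np.eye(d), Y.T @ dX).T             # (2p, d)
        matrices.append(A)
        shapes = [s + (A @ y).reshape(s.shape) for s, y in zip(shapes, Y)]
    return matrices
```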
It is worth noting that, during the repeated iterations, the displacement error between the group of located face feature points and the group of manually calibrated face feature points in the photo samples is continuously corrected; after the SDM algorithm converges, the displacement error between the located face feature points and the manually calibrated face feature points in the photo samples is at its minimum. Once the SDM algorithm has converged, the training of the above first feature point correction model is therefore finished, and the position prediction matrices calculated after each iteration in the above first feature point correction model can be used to perform face feature point positioning on a target picture provided by a user.
The number of iterations performed before the SDM algorithm converges during the training of the above first feature point correction model is not particularly limited in this disclosure. For example, based on engineering experience, four iterations are usually needed in face feature point positioning applications, so the above first feature point correction model can provide four position prediction matrices A_0 to A_3.
The above is the detailed process of training the first feature point correction model.
The above first feature point correction model is trained with the face region as the initial region, on the basis of a group of initial feature points uniformly distributed along the facial contour in the photo samples.
The trained first feature point correction model can be used to correct the coordinates of the initial feature points set on the target picture and obtain the coordinates of the above first-correction feature points, so as to achieve precise positioning of the face feature points. The above target picture is the user photo on which face feature point positioning needs to be performed.
When face feature point positioning is performed on the target picture, a fast face detection technique (for example a mature AdaBoost-style face detector) can be used to perform face region detection on the above target picture to obtain an initial face region, and the initial feature points are set in that face region.
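As an illustration only, the following sketch uses OpenCV's pre-trained Haar cascade (available in the opencv-python package) as a stand-in for such a detector; any equivalent face detector could be substituted.

```python
import cv2

def detect_face_region(gray):
    """Detect the largest face in a grey-level image and return (x, y, w, h).

    A pre-trained Haar cascade stands in here for the mature AdaBoost-style
    detector mentioned above; returns None if no face is found.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])  # keep the largest face
```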
When the initial feature points are set in the face region detected in the target picture, they can still be set according to the coordinate ratios of the above initial feature points in the face region, or according to the coordinate means of the face feature points calibrated in the photo samples; the detailed process is not repeated here.
After the initial feature points have been set in the face region detected in the target picture, the corresponding image texture feature vectors Y_0 can be extracted for the group of initial feature points X_0 set in the target picture. The extracted image feature vectors Y_0 are then iterated with the position prediction matrices A_n provided by the trained first feature point correction model, so that the above initial feature points X_0 in the target picture are corrected for the first time and the first-correction feature point coordinates are obtained.
When the above image texture feature vectors Y_0 of the target picture are iterated with the position prediction matrices A_n provided by the trained first feature point correction model, assume that the first feature point correction model provides four position prediction matrices A_0 to A_3, so that four matrix multiplications are performed. First, a matrix multiplication is performed on the above image texture feature vectors Y_0 according to A_0; this is the first iteration and yields a group of first initial feature point coordinates. A matrix multiplication is then performed on the calculated first initial feature point coordinates according to A_1; this is the second iteration and yields a group of second initial feature point coordinates. A matrix multiplication is then performed on the calculated second initial feature point coordinates according to A_2; this is the third iteration and yields the third initial feature point coordinates. After the third iteration is completed, a matrix multiplication is performed on the calculated third initial feature point coordinates according to A_3, which yields the above first-correction feature point coordinates, and the iteration is complete.
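A minimal sketch of applying the trained cascade to one picture; note that it re-extracts the image texture features at the current shape before each matrix multiplication, which is one common reading of the iterative procedure described above.

```python
import numpy as np

def apply_correction_model(gray, init_shape, matrices, extract):
    """Run a trained cascade (e.g. A_0 ... A_3) on one picture.

    gray       -- grey-level target picture
    init_shape -- (p, 2) array with the starting feature point coordinates
    matrices   -- list of position prediction matrices learned during training
    extract    -- the same feature-extraction function used during training
    """
    shape = init_shape.copy()
    for A in matrices:                 # one pass per position prediction matrix
        y = extract(gray, shape)       # re-extract features at the current shape
        shape = shape + (A @ y).reshape(shape.shape)
    return shape
```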
In this embodiment, when the first feature point correction model performs face feature point positioning on a target picture, it uses the detected face frame as the initial region, so the positioning accuracy is highly dependent on the position of that initial frame. When the initial frame lies inside the actual face, the variation inside the face is small and the positioning result after the SDM iterations is relatively good; when the initial frame lies outside the actual face, the variation of the external background may be large, which makes the positioning results of the SDM iterations inaccurate. In order to improve the positioning accuracy, therefore, after the first feature point correction model has performed face feature point positioning on the target picture, a second correction can also be performed on the plurality of first-correction feature points obtained after the matrix multiplication calculations, yielding a preset number of second-correction feature point coordinates.
When the second correction is performed on the above first-correction feature points, central feature point identification can be performed on the plurality of first-correction feature points to obtain at least one central feature point coordinate; coordinate mapping is then performed on the above first-correction feature points according to the mapping relationship between the central feature point coordinates and the above second-correction feature point coordinates, yielding the preset number of second-correction feature point coordinates.
In one implementation shown in this embodiment, the above central feature point can be an eyeball center, and the above central feature point coordinates can then be the coordinates of the eyeball centers of the two eyes.
When the above central feature points are the eyeball centers and the eyeball centers are identified on the basis of the above first-correction feature points, the relatively rich texture features of the eyeball center points can be exploited: a preset eye localization algorithm can take the coordinates of the above first-correction feature points as auxiliary parameters and locate the eyeball centers by identifying the texture features of the eyeball center points. The above preset eye localization algorithm is not particularly limited in this embodiment; those skilled in the art can refer to implementations in the related art.
After the coordinates of the two eyeball centers have been identified on the basis of the coordinates of the above first-correction feature points, coordinate mapping can be performed on the above first-correction feature point coordinates based on the mapping relationship between the coordinates of the eyeball centers and the above second-correction feature point coordinates, so that a second correction is performed on the above first-correction feature points and the coordinates of the preset number of second-correction feature points are obtained.
The mapping relationship between the coordinates of the eyeball centers and the above second-correction feature point coordinates can be characterised by a preset feature point mapping function, and this feature point mapping function can be learned from the relative distances between the eyeball centers and the manually labelled feature points in the above preset number of photo samples.
For the above preset number of photo samples, the sizes and ranges of the face regions differ from photo to photo, but the distances between the eyeball centers of the two eyes and the manually labelled feature points are relatively constant across different photo samples. When the feature points are manually labelled on the above photo samples, the distances from the eyeball centers of the two eyes to the labelled feature points can therefore be measured separately for each photo sample; the measured data are then used as prediction data, and the mapping relationship between the coordinates of the eyeball centers of the two eyes and the coordinates of the other labelled feature points is learned by linear fitting. The coordinates of the above first-correction feature points are then mapped according to the learned mapping relationship.
Since the distances between the eyeball centers of the two eyes and the manually labelled feature points are relatively constant, performing coordinate mapping on the coordinates of the above first-correction feature points through the above feature point mapping function achieves the second correction of the above first-correction feature points and yields the coordinates of the preset number of second-correction feature points, which can improve the positioning accuracy of the face feature points.
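The disclosure does not fix the exact form of the feature point mapping function; as an illustration only, the following sketch assumes a simple linear model, fitted by least squares, from the two eye-center coordinates (plus a bias term) to the flattened landmark coordinates, and maps purely from the eye centers found in the target picture.

```python
import numpy as np

def learn_eye_mapping(eye_centers, labelled_shapes):
    """Fit the feature point mapping function by linear least squares.

    eye_centers     -- array (N, 4): (lx, ly, rx, ry) measured per photo sample
    labelled_shapes -- array (N, p, 2): manually labelled landmarks per sample
    Returns W of shape (5, 2p) so that [lx, ly, rx, ry, 1] @ W approximates
    the flattened landmark coordinates.
    """
    N = eye_centers.shape[0]
    X = np.hstack([eye_centers, np.ones((N, 1))])   # append a bias column
    T = labelled_shapes.reshape(N, -1)              # flatten to (N, 2p)
    W, *_ = np.linalg.lstsq(X, T, rcond=None)
    return W

def map_from_eye_centers(W, left_eye, right_eye, p):
    """Apply the learned mapping to the eye centers found in a target picture."""
    x = np.concatenate([np.asarray(left_eye), np.asarray(right_eye), [1.0]])
    return (x @ W).reshape(p, 2)
```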
In this embodiment, after the second correction has been performed on the above first-correction feature points on the basis of the above feature point mapping function, the coordinates of the obtained second-correction feature points can further be corrected once more on the basis of a second feature point correction model, yielding the coordinates of the final corrected feature points.
The above second feature point correction model may be a projection matrix model trained from the mapping association between the image texture features and offsets of the above second-correction feature points and the image texture features and offsets of the above final corrected feature points, and it can be used to correct the coordinates of the above second-correction feature points once more and obtain the coordinates of the above final corrected feature points.
For example, the above second feature point correction model may still be a projection matrix model based on the SDM algorithm. As described above, the above first feature point correction model is trained with the face region as the initial region, on the basis of initial feature points calibrated with respect to the face regions in the preset number of photo samples. Since the coordinates of the above second-correction feature points are obtained by correction based on the mapping relationship between the eyeball centers of the two eyes and the above second-correction feature points, the above second feature point correction model can be trained with the eyeball centers of the two eyes as the initial region, on the basis of initial feature points calibrated in the region formed by the eyeball centers of the two eyes in the preset number of photo samples.
The training process of the above second feature point correction model is described below for the case where the second feature point correction model is a projection matrix model based on the SDM algorithm.
When the above second feature point correction model is trained, the photo samples used to train the first feature point correction model can still be used. First, the eyeball centers of the two eyes are calibrated in all photo samples as the central feature points. After the calibration of the eyeball centers of the two eyes is complete, a rectangular frame can be generated from the calibrated eyeball centers and used as the initial region, and a group of corresponding initial feature points is set, according to the calibrated feature points, in this initial region in the photo samples.
When the initial feature points are set in this initial region, they can still be set according to the coordinate ratios of the above initial feature points in the initial region, the coordinate ratios again being obtained by calibration measurement of the above initial regions in the preset number of photo samples. For example, during the manual calibration of the feature points on all photo samples, the coordinate ratio of each feature point within the above initial region can be measured separately; after all photo samples have been calibrated, the measured coordinate ratio data of each feature point within the above initial region over all photo samples can be analysed, an appropriate coordinate ratio can be set for each feature point in the initial region (for example by taking the mean), and that coordinate ratio can be used as the coordinate ratio for setting the initial feature points.
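As an illustration only, the following sketch generates such a rectangular initial region from the two calibrated eyeball centers; the construction rule and scale factor are assumptions, since the disclosure does not specify them, and the initial points can then be placed inside the box with the coordinate-ratio helper sketched earlier.

```python
import numpy as np

def eye_region_box(left_eye, right_eye, scale=2.0):
    """Generate a rectangular initial region from the two calibrated eyeball centers.

    The box is centered between the eyes and sized by the inter-ocular distance;
    the exact rule (and the scale factor) is an illustrative assumption.
    """
    left_eye, right_eye = np.asarray(left_eye, float), np.asarray(right_eye, float)
    center = (left_eye + right_eye) / 2.0
    iod = np.linalg.norm(right_eye - left_eye)       # inter-ocular distance
    w = h = iod * scale
    return (center[0] - w / 2.0, center[1] - h / 2.0, w, h)

# Initial points for the second model can then be placed inside this box with the
# coordinate-ratio helper sketched earlier:
#   init_pts = place_initial_points(eye_region_box(le, re), eye_region_ratios)
```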
After the initial feature points have been set, the image feature vectors Y_n corresponding to these initial feature points can be extracted, and the position prediction matrices A_n can be calculated. On the one hand, for the initial feature points X_0 set in all photo samples (X_0 again denotes the group of initial feature points that has been set), the corresponding image feature vectors Y_0 can be extracted. On the other hand, for the positions of the initial feature points set in each photo sample, the offsets delta_X_0 between them and the manually labelled face feature points within the above initial region in all photo samples can be calculated, where delta_X_0 = X* - X_0, X_0 denotes the position coordinates of the initial feature points set in all photo samples, and X* denotes the position coordinates of the manually labelled face feature points within the above initial region in all photo samples. After the image feature vectors Y_0 corresponding to the initial feature points set in each picture have been extracted, and the offsets delta_X_0 between the initial feature points set in each photo sample and the manually labelled face feature points within the above initial region have been calculated, the position prediction matrix A_0 can be learned by linear fitting, based on the linear relationship between the offset delta_X_0 and the initial feature points X_0.
When the position prediction matrix A_0 is learned by linear fitting, least-squares linear fitting can still be used; the detailed process is not repeated here, and those skilled in the art may refer to the training process of the first feature point correction model described above for an equivalent implementation.
Assuming that, during the training of the second feature point correction model, a total of four iterations are performed before the SDM algorithm converges, the above second feature point correction model can provide four position prediction matrices A_0 to A_3.
The above is the training process of the second feature point correction model.
The above second feature point correction model is trained with the eyeball centers of the two eyes as the initial region, on the basis of the group of initial feature points set within that initial region in the photo samples. The trained second feature point correction model can be used to correct the coordinates of the above second-correction feature points once more and obtain the coordinates of the final corrected feature points, thereby improving the positioning accuracy of the face feature points.
When the coordinates of the above second-correction feature points are corrected according to the above second feature point correction model, the corresponding image texture feature vectors Y_0 can be extracted for the group of second-correction feature points X_0 obtained after correction by the above feature point mapping function. The extracted image feature vectors Y_0 are then iterated with the position prediction matrices A_n provided by the trained second feature point correction model, so that the above second-correction feature points X_0 are corrected once more and the final corrected feature point coordinates are obtained.
When the above image texture feature vectors Y_0 of the second-correction feature points are iterated with the position prediction matrices A_n provided by the trained second feature point correction model, assume that the second feature point correction model again provides four position prediction matrices A_0 to A_3, so that four matrix multiplications are performed. First, a matrix multiplication is performed on the above image texture feature vectors Y_0 according to A_0; this is the first iteration and yields a group of first final feature point coordinates. A matrix multiplication is then performed on the calculated first final feature point coordinates according to A_1; this is the second iteration and yields a group of second final feature point coordinates. A matrix multiplication is then performed on the calculated second final feature point coordinates according to A_2; this is the third iteration and yields the third final feature point coordinates. After the third iteration is completed, a matrix multiplication is performed on the calculated third final feature point coordinates according to A_3, which yields the above final corrected feature point coordinates, and the iteration is complete.
After the above image texture feature vectors Y_0 of the second-correction feature points have been iterated with the position prediction matrices A_n provided by the trained second feature point correction model, the final corrected feature point coordinates obtained at this point are the final result of face feature point positioning for the above target picture.
As can be seen from the above description, in this embodiment the initial feature points set in the above target picture are corrected three times, by the first feature point correction model, the preset feature point mapping function and the second feature point correction model, so the positioning accuracy of the face feature points can be significantly improved.
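As an illustration only, the following sketch composes the helpers sketched earlier in this description into the three-stage procedure summarised above; the function names and the locate_eye_centers placeholder (standing in for the eye localization step, which the disclosure leaves open) are assumptions.

```python
def locate_face_feature_points(gray, face_box, ratios, model1, W, model2,
                               extract, locate_eye_centers):
    """Three-stage positioning sketch: first cascade, eye-anchored remap, second cascade."""
    # 1) initial points from the face region, corrected by the first model
    init_pts = place_initial_points(face_box, ratios)
    first_corr = apply_correction_model(gray, init_pts, model1, extract)

    # 2) eyeball centers from the first correction, then the mapping function
    left_eye, right_eye = locate_eye_centers(gray, first_corr)
    second_corr = map_from_eye_centers(W, left_eye, right_eye, p=len(first_corr))

    # 3) final correction by the second (eye-anchored) model
    return apply_correction_model(gray, second_corr, model2, extract)
```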
In the above embodiments of the disclosure, the initial feature point coordinates are corrected by the first feature point correction model to obtain first-correction feature point coordinates; central feature point identification is performed on the plurality of first-correction feature point coordinates to obtain at least one central feature point coordinate; and coordinate mapping is then performed on the plurality of first-correction feature point coordinates according to the feature point mapping function to obtain a plurality of second-correction feature point coordinates. Since the feature point mapping function describes the mapping relationship from the central feature point to the second-correction feature point coordinates, and the central feature point is a more accurate feature point identified on the basis of the first-correction feature point coordinates, the positioning accuracy of the face feature points can be improved.
In the above embodiments of the disclosure, the plurality of second-correction feature point coordinates are corrected by the second feature point correction model to obtain a plurality of final corrected feature point coordinates. Since the second-correction feature points are corrected once more by the second feature point correction model, the positioning accuracy of the face feature points can be further improved.
In the above embodiments of the disclosure, face region detection is performed on the target picture to obtain a face region, and the plurality of initial feature point coordinates in the face region are then obtained according to the coordinate ratios of the plurality of initial feature points. Since the coordinate ratios of the initial feature points are obtained by calibration measurement of the face regions in a plurality of pictures, initial feature points can be set for the target picture quickly and accurately.
Corresponding to the foregoing embodiments of the face feature point positioning method, the disclosure also provides embodiments of a device.
Fig. 3 is a schematic block diagram of a face feature point positioning device according to an exemplary embodiment.
As shown in Fig. 3, a face feature point positioning device 300 according to an exemplary embodiment comprises a first correction module 301, an identification module 302 and a mapping module 303, wherein:
the first correction module 301 is configured to correct initial feature point coordinates according to a first feature point correction model to obtain first-correction feature point coordinates;
the identification module 302 is configured to perform central feature point identification on the plurality of first-correction feature point coordinates obtained by the correction of the first correction module 301, to obtain at least one central feature point coordinate;
the mapping module 303 is configured to perform coordinate mapping, according to a feature point mapping function, on the plurality of first-correction feature point coordinates obtained by the correction of the first correction module 301, to obtain a plurality of second-correction feature point coordinates, wherein the feature point mapping function describes the mapping relationship from the central feature point identified by the identification module 302 to the second-correction feature point coordinates.
In the above embodiment, the initial feature point coordinates are corrected by the first feature point correction model to obtain first-correction feature point coordinates; central feature point identification is performed on the plurality of first-correction feature point coordinates to obtain at least one central feature point coordinate; and coordinate mapping is then performed on the plurality of first-correction feature point coordinates according to the feature point mapping function to obtain a plurality of second-correction feature point coordinates. Since the feature point mapping function describes the mapping relationship from the central feature point to the second-correction feature point coordinates, and the central feature point is a more accurate feature point identified on the basis of the first-correction feature point coordinates, the positioning accuracy of the face feature points can be improved.
It should be noted that, in the above embodiment, the first feature point correction model describes the mapping relationship between the features and offsets of the plurality of initial feature points and the features and offsets of the plurality of first-correction feature points, and the first feature point correction model is a projection matrix model. The number of initial feature points is 44 or 98.
Referring to Fig. 4, Fig. 4 is a block diagram of another device of the disclosure according to an exemplary embodiment. On the basis of the foregoing embodiment shown in Fig. 3, the device 300 may further comprise a second correction module 304, wherein:
the second correction module 304 is configured to correct, according to a second feature point correction model, the plurality of second-correction feature point coordinates obtained by the mapping of the mapping module 303, to obtain a plurality of final corrected feature point coordinates.
In the above embodiment, the plurality of second-correction feature point coordinates are corrected by the second feature point correction model to obtain a plurality of final corrected feature point coordinates. Since the second-correction feature points are corrected once more by the second feature point correction model, the positioning accuracy of the face feature points can be further improved.
It should be noted that, in the above embodiment, the second feature point correction model describes the mapping relationship between the features and offsets of the plurality of second-correction feature points and the features and offsets of the plurality of final corrected feature points, and the second feature point correction model is a projection matrix model.
Referring to Fig. 5, Fig. 5 is a block diagram of another device of the disclosure according to an exemplary embodiment. On the basis of the foregoing embodiment shown in Fig. 4, the device 300 may further comprise a detection module 305 and an acquisition module 306, wherein:
the detection module 305 is configured to perform face region detection on a target picture to obtain a face region;
the acquisition module 306 is configured to obtain the plurality of initial feature point coordinates in the face region according to the coordinate ratios of the plurality of initial feature points.
In the above embodiment, face region detection is performed on the target picture to obtain a face region, and the plurality of initial feature point coordinates in the face region are then obtained according to the coordinate ratios of the plurality of initial feature points. Since the coordinate ratios of the initial feature points are obtained by calibration measurement of the face regions in a plurality of pictures, initial feature points can be set for the target picture quickly and accurately.
It should be noted that, in the above embodiment, the coordinate ratios of the initial feature points are obtained by calibration measurement of the face regions in a plurality of picture samples, and the central feature point coordinates are eyeball center point coordinates. The structures of the detection module 305 and the acquisition module 306 shown in the device embodiment of Fig. 5 may also be included in the device embodiment of Fig. 3, which is not limited by this disclosure.
Referring to Fig. 6, Fig. 6 is a block diagram of another device of the disclosure according to an exemplary embodiment. On the basis of the foregoing embodiment shown in Fig. 3, the first correction module 301 may comprise a first calculation submodule 301A, wherein:
the first calculation submodule 301A is configured to perform matrix multiplication on the plurality of initial feature point coordinates according to the first feature point correction model, to obtain a plurality of first initial feature point coordinates;
perform matrix multiplication on the plurality of first initial feature point coordinates according to the first feature point correction model, to obtain a plurality of second initial feature point coordinates;
……
perform matrix multiplication on the plurality of (N-1)-th initial feature point coordinates according to the first feature point correction model, to obtain the plurality of first-correction feature point coordinates, wherein N is an integer greater than or equal to 2.
It should be noted that the structure of the first calculation submodule 301A shown in the device embodiment of Fig. 6 may also be included in the device embodiments of Figs. 4-5, which is not limited by this disclosure.
Referring to Fig. 7, Fig. 7 is a block diagram of another device of the disclosure according to an exemplary embodiment. On the basis of the foregoing embodiment shown in Fig. 4, the second correction module 304 may comprise a second calculation submodule 304A, wherein:
the second calculation submodule 304A is configured to:
perform matrix multiplication on the plurality of second-correction feature point coordinates according to the second feature point correction model, to obtain first final feature point coordinates;
perform matrix multiplication on the calculated first final feature point coordinates according to the second feature point correction model, to obtain second final feature point coordinates;
……
perform matrix multiplication on the (M-1)-th final feature point coordinates according to the second feature point correction model, to obtain the final corrected feature point coordinates, wherein M is an integer greater than or equal to 2.
It should be noted that the structure of the second calculation submodule 304A shown in the device embodiment of Fig. 7 may also be included in the device embodiments of Fig. 4 or Figs. 5-6, which is not limited by this disclosure. For the implementation process of the functions and effects of the modules in the above devices, refer to the implementation process of the corresponding steps in the above method; details are not repeated here.
Since the device embodiments substantially correspond to the method embodiments, the relevant parts may refer to the description of the method embodiments. The device embodiments described above are merely illustrative: the modules described as separate units may or may not be physically separated, and the components shown as modules may or may not be physical modules, that is, they may be located in one place or distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the disclosure, and those of ordinary skill in the art can understand and implement this without creative effort.
Correspondingly, the disclosure also provides a face feature point positioning device, the device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
correct initial feature point coordinates according to a first feature point correction model to obtain first-correction feature point coordinates;
perform central feature point identification on the plurality of first-correction feature point coordinates to obtain at least one central feature point coordinate;
perform coordinate mapping on the plurality of first-correction feature point coordinates according to a feature point mapping function to obtain a plurality of second-correction feature point coordinates, wherein the feature point mapping function describes the mapping relationship from the central feature point to the second-correction feature point coordinates.
Correspondingly, the disclosure also provides a server side, the server side comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, and the one or more programs include instructions for performing the following operations:
correcting initial feature point coordinates according to a first feature point correction model to obtain first-correction feature point coordinates;
performing central feature point identification on the plurality of first-correction feature point coordinates to obtain at least one central feature point coordinate;
performing coordinate mapping on the plurality of first-correction feature point coordinates according to a feature point mapping function to obtain a plurality of second-correction feature point coordinates, wherein the feature point mapping function describes the mapping relationship from the central feature point to the second-correction feature point coordinates.
Correspondingly, the disclosure also provides a face feature point positioning device, the device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
correct initial feature point coordinates according to a first feature point correction model to obtain first-correction feature point coordinates;
perform central feature point identification on the plurality of first-correction feature point coordinates to obtain at least one central feature point coordinate;
perform coordinate mapping on the plurality of first-correction feature point coordinates according to a feature point mapping function to obtain a plurality of second-correction feature point coordinates, wherein the feature point mapping function describes the mapping relationship from the central feature point to the second-correction feature point coordinates.
Fig. 8 is a block diagram of a face feature point positioning device 8000 according to an exemplary embodiment. For example, the device 8000 may be provided as a server. Referring to Fig. 8, the device 8000 includes a processing component 8022, which further includes one or more processors, and memory resources represented by a memory 8032 for storing instructions executable by the processing component 8022, such as application programs. The application programs stored in the memory 8032 may include one or more modules, each corresponding to a group of instructions. In addition, the processing component 8022 is configured to execute the instructions so as to perform the above face feature point positioning method.
The device 8000 may further include a power supply component 8026 configured to perform power management of the device 8000, a wired or wireless network interface 8050 configured to connect the device 8000 to a network, and an input/output (I/O) interface 8058. The device 8000 can operate based on an operating system stored in the memory 8032, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
Other embodiments of the disclosure will readily occur to those skilled in the art after considering the specification and practising the invention disclosed herein. This application is intended to cover any variations, uses or adaptations of the disclosure that follow the general principles of the disclosure and include common knowledge or conventional techniques in the art not disclosed in the disclosure. The specification and examples are to be regarded as illustrative only, and the true scope and spirit of the disclosure are indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.

Claims (21)

1. A face feature point positioning method, characterized in that the method comprises:
correcting initial feature point coordinates according to a first feature point correction model to obtain first-correction feature point coordinates;
performing central feature point identification on a plurality of the first-correction feature point coordinates to obtain at least one central feature point coordinate;
performing coordinate mapping on the plurality of first-correction feature point coordinates according to a feature point mapping function to obtain a plurality of second-correction feature point coordinates, wherein the feature point mapping function describes the mapping relationship from the central feature point to the second-correction feature point coordinates.
2. The method according to claim 1, characterized in that the method further comprises:
correcting the plurality of second-correction feature point coordinates according to a second feature point correction model to obtain a plurality of final corrected feature point coordinates.
3. The method according to claim 1, characterized in that the method further comprises:
performing face region detection on a target picture to obtain a face region;
obtaining the plurality of initial feature point coordinates in the face region according to coordinate ratios of the plurality of initial feature points.
4. The method according to claim 3, characterized in that the coordinate ratios of the initial feature points are obtained by calibration measurement of the face regions in a plurality of photo samples.
5. The method according to claim 1, characterized in that the central feature point coordinates are eyeball center point coordinates.
6. The method according to claim 1, characterized in that the correcting the initial feature point coordinates according to the first feature point correction model to obtain first-correction feature point coordinates comprises:
performing matrix multiplication on the plurality of initial feature point coordinates according to the first feature point correction model to obtain a plurality of first initial feature point coordinates;
performing matrix multiplication on the plurality of first initial feature point coordinates according to the first feature point correction model to obtain a plurality of second initial feature point coordinates;
……
performing matrix multiplication on the plurality of (N-1)th initial feature point coordinates according to the first feature point correction model to obtain the plurality of first-corrected feature point coordinates, wherein N is an integer greater than or equal to 2 (illustrated by a sketch following the claims).
7. The method according to claim 2, wherein correcting the plurality of second-corrected feature point coordinates according to the second feature point correction model to obtain the plurality of final corrected feature point coordinates comprises:
performing matrix multiplication on the plurality of second-corrected feature point coordinates according to the second feature point correction model to obtain first final feature point coordinates;
performing matrix multiplication on the first final feature point coordinates according to the second feature point correction model to obtain second final feature point coordinates;
……
performing matrix multiplication on the (M-1)th final feature point coordinates according to the second feature point correction model to obtain the final corrected feature point coordinates, wherein M is an integer greater than or equal to 2.
8. The method according to claim 1, wherein the first feature point correction model is a mapping relation between the features and offsets of the plurality of initial feature points and the features and offsets of the plurality of first-corrected feature points, and the first feature point correction model is a projection matrix model (illustrated by a sketch following the claims).
9. The method according to claim 2, wherein the second feature point correction model is a mapping relation between the features and offsets of the plurality of second-corrected feature points and the features and offsets of the plurality of final corrected feature points, and the second feature point correction model is a projection matrix model.
10. The method according to claim 1, wherein the number of initial feature points is 44 or 98.
11. A face feature point positioning device, characterized in that the device comprises:
a first correction module configured to correct initial feature point coordinates according to a first feature point correction model to obtain first-corrected feature point coordinates;
an identification module configured to perform central feature point identification on the plurality of first-corrected feature point coordinates corrected by the first correction module to obtain at least one central feature point coordinate;
a mapping module configured to perform coordinate mapping, according to a feature point mapping function, on the plurality of first-corrected feature point coordinates corrected by the first correction module to obtain a plurality of second-corrected feature point coordinates, wherein the feature point mapping function is a mapping relation from the central feature point identified by the identification module to the second-corrected feature point coordinates.
12. The device according to claim 11, wherein the device further comprises:
a second correction module configured to correct, according to a second feature point correction model, the plurality of second-corrected feature point coordinates mapped by the mapping module, so as to obtain a plurality of final corrected feature point coordinates.
13. The device according to claim 11, wherein the device further comprises:
a detection module configured to perform face region detection on a target picture to obtain a face region;
an obtaining module configured to obtain the plurality of initial feature point coordinates in the face region according to coordinate ratios of the plurality of initial feature points.
14. The device according to claim 13, wherein the coordinate ratios of the initial feature points are obtained by calibration measurement on face regions in a plurality of photo samples.
15. The device according to claim 11, wherein the central feature point coordinate is an eyeball center point coordinate.
16. The device according to claim 11, wherein the first correction module comprises:
a first calculation submodule configured to perform matrix multiplication on the plurality of initial feature point coordinates according to the first feature point correction model to obtain a plurality of first initial feature point coordinates;
wherein the first calculation submodule further performs matrix multiplication on the plurality of first initial feature point coordinates according to the first feature point correction model to obtain a plurality of second initial feature point coordinates;
……
and performs matrix multiplication on the plurality of (N-1)th initial feature point coordinates according to the first feature point correction model to obtain the plurality of first-corrected feature point coordinates, wherein N is an integer greater than or equal to 2.
17. The device according to claim 12, wherein the second correction module comprises:
a second calculation submodule configured to perform matrix multiplication on the plurality of second-corrected feature point coordinates according to the second feature point correction model to obtain first final feature point coordinates;
wherein the second calculation submodule further performs matrix multiplication on the first final feature point coordinates according to the second feature point correction model to obtain second final feature point coordinates;
……
and performs matrix multiplication on the (M-1)th final feature point coordinates according to the second feature point correction model to obtain the final corrected feature point coordinates, wherein M is an integer greater than or equal to 2.
18. The device according to claim 11, wherein the first feature point correction model is a mapping relation between the features and offsets of the plurality of initial feature points and the features and offsets of the plurality of first-corrected feature points, and the first feature point correction model is a projection matrix model.
19. The device according to claim 12, wherein the second feature point correction model is a mapping relation between the features and offsets of the plurality of second-corrected feature points and the features and offsets of the plurality of final corrected feature points, and the second feature point correction model is a projection matrix model.
20. The device according to claim 11, wherein the number of the initial feature points is 44 or 98.
21. A face feature point positioning device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
correct initial feature point coordinates according to a first feature point correction model to obtain first-corrected feature point coordinates;
perform central feature point identification on the plurality of first-corrected feature point coordinates to obtain at least one central feature point coordinate;
perform coordinate mapping on the plurality of first-corrected feature point coordinates according to a feature point mapping function to obtain a plurality of second-corrected feature point coordinates, wherein the feature point mapping function is a mapping relation from the central feature point to the second-corrected feature point coordinates.
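The following Python/NumPy sketches are illustrative additions, not part of the patent or its claims; they show one plausible reading of the claimed steps, and every function name, matrix shape, index set, and toy value in them is an assumption rather than something disclosed in the specification. This first sketch wires together the method of claims 1 and 2: a first correction of the initial coordinates, identification of central feature points (taken as eyeball centers per claim 5), a coordinate mapping defined relative to those centers, and a further second correction.

    import numpy as np

    def first_correction(points, W1):
        # Claim 1, step 1: correct the initial feature point coordinates with the
        # first feature point correction model (a single linear offset step here).
        flat = points.reshape(-1)                   # stack (K, 2) into (2K,)
        return (flat + W1 @ flat).reshape(-1, 2)    # model predicts coordinate offsets

    def identify_centers(points, left_eye_idx, right_eye_idx):
        # Claim 1, step 2 (and claim 5): take the eyeball centers as the central
        # feature points, approximated as the mean of the eye-contour points.
        return np.stack([points[left_eye_idx].mean(axis=0),
                         points[right_eye_idx].mean(axis=0)])

    def map_points(points, centers):
        # Claim 1, step 3: the feature point mapping function is defined relative to
        # the central feature points; here every point is re-expressed in a frame
        # centered on the eye midpoint and scaled by the inter-ocular distance.
        origin = centers.mean(axis=0)
        iod = np.linalg.norm(centers[1] - centers[0]) + 1e-9
        return (points - origin) / iod

    def second_correction(points, W2):
        # Claim 2: a further correction with the second feature point correction model.
        flat = points.reshape(-1)
        return (flat + W2 @ flat).reshape(-1, 2)

    # Toy run with 44 landmarks, one of the point counts mentioned in claim 10.
    K = 44
    rng = np.random.default_rng(0)
    initial = rng.uniform(0.0, 100.0, size=(K, 2))
    W1 = rng.normal(scale=1e-3, size=(2 * K, 2 * K))
    W2 = rng.normal(scale=1e-3, size=(2 * K, 2 * K))
    pts = first_correction(initial, W1)
    centers = identify_centers(pts, list(range(0, 6)), list(range(6, 12)))
    mapped = map_points(pts, centers)
    final = second_correction(mapped, W2)
    print(final.shape)                              # (44, 2)

Representing each correction model as a simple linear offset predictor is only a placeholder; the patent itself specifies nothing beyond matrix multiplication and a projection matrix model (claims 6-9).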
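This second sketch corresponds to claims 3 and 4: initial feature point coordinates are obtained by scaling per-point coordinate ratios, calibrated beforehand on annotated photo samples, into a detected face rectangle. The box format, landmark count, and calibration data are invented for the example.

    import numpy as np

    def calibrate_ratios(face_boxes, annotated_points):
        # Claim 4: measure, over many annotated photo samples, where each landmark
        # falls relative to its face box; the mean relative position per point is
        # its coordinate ratio.
        ratios = []
        for (x, y, w, h), pts in zip(face_boxes, annotated_points):
            ratios.append((pts - np.array([x, y])) / np.array([w, h]))
        return np.mean(ratios, axis=0)          # (K, 2) ratios, roughly in [0, 1]

    def initial_points(face_box, ratios):
        # Claim 3: scale the calibrated ratios into the detected face region.
        x, y, w, h = face_box
        return np.array([x, y]) + ratios * np.array([w, h])

    # Toy calibration on two fake samples with three landmarks each.
    boxes = [(10, 20, 100, 100), (0, 0, 50, 50)]
    samples = [np.array([[40.0, 50.0], [80.0, 50.0], [60.0, 90.0]]),
               np.array([[15.0, 15.0], [35.0, 15.0], [25.0, 40.0]])]
    ratios = calibrate_ratios(boxes, samples)
    print(initial_points((200, 300, 80, 80), ratios))   # landmarks placed in the new box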
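Claims 6 and 7 describe each correction model as a cascade of matrix multiplications: the first stage acts on the incoming coordinates, each later stage acts on the previous stage's output, and the last of the N (or M) stages yields the corrected coordinates. The sketch below implements that cascade with randomly generated stand-in matrices; in practice the stage matrices would be learned offline.

    import numpy as np

    def cascade_correct(points, stage_matrices):
        # Stage 1 takes the incoming coordinates, stage 2 takes stage 1's output,
        # and so on; the final stage produces the corrected coordinates.
        coords = points.reshape(-1)             # stack (K, 2) into (2K,)
        for W in stage_matrices:                # at least two stages (N, M >= 2)
            coords = W @ coords                 # one matrix multiplication per stage
        return coords.reshape(-1, 2)

    K, N = 44, 3
    rng = np.random.default_rng(1)
    initial = rng.uniform(0.0, 100.0, size=(K, 2))
    stages = [np.eye(2 * K) + rng.normal(scale=1e-3, size=(2 * K, 2 * K)) for _ in range(N)]
    first_corrected = cascade_correct(initial, stages)   # first model, claim 6
    # The second model of claim 7 repeats the same scheme with its own M matrices.
    print(first_corrected.shape)                # (44, 2)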
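Claims 8 and 9 characterize each correction model as a projection matrix model relating features and offsets of the input points to those of the output points. One common way to realize such a model, assumed here rather than stated by the patent, is to extract a local feature at each current point and predict all coordinate offsets with a single learned projection matrix; the feature extractor below is a trivial patch-mean placeholder.

    import numpy as np

    def extract_features(image, points, half=4):
        # Placeholder local feature: mean intensity of a small patch around each
        # point (a real system would use a richer descriptor).
        h, w = image.shape
        feats = []
        for x, y in points.astype(int):
            x0, x1 = max(x - half, 0), min(x + half, w)
            y0, y1 = max(y - half, 0), min(y + half, h)
            patch = image[y0:y1, x0:x1]
            feats.append(patch.mean() if patch.size else 0.0)
        return np.array(feats)                  # one scalar feature per point, shape (K,)

    def apply_projection_model(image, points, P):
        feats = extract_features(image, points)     # features of the current points
        offsets = (P @ feats).reshape(-1, 2)        # projection matrix -> offsets
        return points + offsets                     # corrected point coordinates

    K = 44
    rng = np.random.default_rng(2)
    image = rng.uniform(0.0, 255.0, size=(240, 240))
    points = rng.uniform(20.0, 220.0, size=(K, 2))
    P = rng.normal(scale=1e-2, size=(2 * K, K))     # would be learned offline in practice
    print(apply_projection_model(image, points, P).shape)   # (44, 2)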
CN201510641854.2A 2015-09-30 2015-09-30 Man face characteristic point positioning method and device Active CN105139007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510641854.2A CN105139007B (en) 2015-09-30 2015-09-30 Man face characteristic point positioning method and device

Publications (2)

Publication Number Publication Date
CN105139007A (en) 2015-12-09
CN105139007B (en) 2019-04-16

Family

ID=54724350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510641854.2A Active CN105139007B (en) 2015-09-30 2015-09-30 Man face characteristic point positioning method and device

Country Status (1)

Country Link
CN (1) CN105139007B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194980A (en) * 2017-05-18 2017-09-22 成都通甲优博科技有限责任公司 Faceform's construction method, device and electronic equipment
CN108875646B (en) * 2018-06-22 2022-09-27 青岛民航凯亚系统集成有限公司 Method and system for double comparison and authentication of real face image and identity card registration
CN110826372B (en) * 2018-08-10 2024-04-09 浙江宇视科技有限公司 Face feature point detection method and device
CN109903297B (en) * 2019-03-08 2020-12-29 数坤(北京)网络科技有限公司 Coronary artery segmentation method and system based on classification model
CN112629546B (en) * 2019-10-08 2023-09-19 宁波吉利汽车研究开发有限公司 Position adjustment parameter determining method and device, electronic equipment and storage medium
CN110782408B (en) * 2019-10-18 2022-04-08 杭州小影创新科技股份有限公司 Intelligent beautifying method and system based on convolutional neural network
CN111488836B (en) * 2020-04-13 2023-06-02 广州市百果园信息技术有限公司 Face contour correction method, device, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101377814A (en) * 2007-08-27 2009-03-04 索尼株式会社 Face image processing apparatus, face image processing method, and computer program
CN104077585A (en) * 2014-05-30 2014-10-01 小米科技有限责任公司 Image correction method and device and terminal
CN104182718A (en) * 2013-05-21 2014-12-03 腾讯科技(深圳)有限公司 Human face feature point positioning method and device thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8818131B2 (en) * 2010-08-20 2014-08-26 Adobe Systems Incorporated Methods and apparatus for facial feature replacement

Also Published As

Publication number Publication date
CN105139007A (en) 2015-12-09

Similar Documents

Publication Publication Date Title
CN105139007B (en) Man face characteristic point positioning method and device
CN108764048B (en) Face key point detection method and device
CN108549873B (en) Three-dimensional face recognition method and three-dimensional face recognition system
CN111795704B (en) Method and device for constructing visual point cloud map
Rad et al. Bb8: A scalable, accurate, robust to partial occlusion method for predicting the 3d poses of challenging objects without using depth
CN108875524B (en) Sight estimation method, device, system and storage medium
CN107358648B (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
US11315264B2 (en) Laser sensor-based map generation
CN109859305B (en) Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
US9928405B2 (en) System and method for detecting and tracking facial features in images
CN109325437A (en) Image processing method, device and system
JP5924862B2 (en) Information processing apparatus, information processing method, and program
CN111325846B (en) Expression base determination method, avatar driving method, device and medium
WO2017186016A1 (en) Method and device for image warping processing and computer storage medium
CN109086798A (en) A kind of data mask method and annotation equipment
KR20160062572A (en) Method and apparatus for generating personalized 3d face model
CN108154104A (en) A kind of estimation method of human posture based on depth image super-pixel union feature
CN110096925A (en) Enhancement Method, acquisition methods and the device of Facial Expression Image
CN111695431A (en) Face recognition method, face recognition device, terminal equipment and storage medium
CN109598234A (en) Critical point detection method and apparatus
CN108475424A (en) Methods, devices and systems for 3D feature trackings
CN110074788A (en) A kind of body data acquisition methods and device based on machine learning
CN109655011B (en) Method and system for measuring dimension of human body modeling
CN110852257A (en) Method and device for detecting key points of human face and storage medium
CN105787464B (en) A kind of viewpoint scaling method of a large amount of pictures in three-dimensional scenic

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant