CN105488472B - Digital makeup method based on a sample template - Google Patents

Digital makeup method based on a sample template

Info

Publication number
CN105488472B
CN105488472B (application CN201510860633.4A)
Authority
CN
China
Prior art keywords
image
face
point
formula
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510860633.4A
Other languages
Chinese (zh)
Other versions
CN105488472A (en)
Inventor
金连文 (Jin Lianwen)
黄双萍 (Huang Shuangping)
黎小凤 (Li Xiaofeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201510860633.4A priority Critical patent/CN105488472B/en
Publication of CN105488472A publication Critical patent/CN105488472A/en
Application granted granted Critical
Publication of CN105488472B publication Critical patent/CN105488472B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T3/4076 Super resolution, i.e. output image resolution higher than sensor resolution by iteratively correcting the provisional high resolution image using the original low-resolution image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction

Abstract

The present invention provides a digital makeup method based on a sample template. A light-makeup or bare-face photo of a person and a makeup-example photo first undergo face detection, facial feature point localization, and face image deformation alignment. On this basis, the face images are decomposed into layers with an improved guided filter; the difference in resolution between the makeup-example image and the input face image is resolved with a sample-based super-resolution reconstruction algorithm; and the face image layers are finally composited, outputting the made-up version of the light-makeup or bare-face photo. The invention proposes a novel template-based digital makeup method that, by improving the layer decomposition and layer composition algorithms, obtains a more realistic makeup effect and significantly reduces the time complexity of the algorithm, making the digital makeup algorithm practical and real-time.

Description

Digital makeup method based on a sample template
Technical field
The invention belongs to the fields of digital image processing and artificial intelligence, and more particularly relates to a processing method for digital makeup of face images.
Background technique
In modern society, applying makeup has become a daily habit for many people. With the spread of camera-equipped smart mobile terminals and the development of image processing technology, people have begun to use applications on smart mobile terminals to beautify, and even digitally make up, their own images.
Digital makeup technology has enormous application potential in daily life. For example, cosmetics e-commerce vendors can use digital makeup applications to offer customers a virtual try-on function; makeup service providers can use them to recommend an optimal makeup plan to a client; in daily life people can use them to choose a makeup style that suits them, or to make up their own pictures before sharing them on social networks.
At present, the literature on digital face makeup is limited. Zhu Xiuping et al., in the paper "Research on a virtual face makeup system" published in the journal Computer and Information Technology in 2008, proposed building cosmetic color and skin-tone models to simulate face makeup effects, but the system can only provide basic effects such as whitening and lip gloss, while eye shadow, eyeliner and other effects are not obvious. Wai-Shun Tong et al., in the 2007 conference paper "Example-based cosmetic transfer", proposed learning the influence of makeup on facial appearance from a pair of images of the same face before and after makeup, and then adding that influence to another face to achieve makeup transfer. This method has many limitations: for every makeup effect to be reproduced, before-and-after pictures of the same face at nearly the same angle must be collected; the sample image and the target image must have similar face shapes and expressions; and manual operations may even be needed to place the eyes and mouth of the sample picture and the target picture in the same positions. The method is complicated to operate and its practicality is low. Dong Guo et al., in the 2009 conference paper "Digital face makeup by example", proposed a digital face makeup technique based on the makeup-transfer idea that no longer needs a before/after image pair of the sample face; only the face image with makeup is required. Because a thin-plate spline warping algorithm is used to align the sample face with the target face image, the sample face is not required to have a face shape similar to the target face, and misalignment of the facial features of the two faces is not a concern. However, that algorithm marks feature points with the active shape model (ASM), whose accuracy is not high enough, and the user often has to adjust the feature point positions manually.
Summary of the invention
The objective of the invention is to use digital image processing methods to provide a digital makeup method based on a sample template which, after processing the light-makeup or bare-face photo provided by the user, outputs an image of the face with makeup applied.
The technical solution adopted by the present invention is as follows.
In this method of digital makeup for face pictures, the face image captured by a mobile intelligent terminal and the makeup-example image each pass in turn through face detection, facial feature point localization, face deformation alignment and face image layer decomposition; the corresponding layers are then composited to obtain the final output face image with makeup.
The digital makeup method based on a sample template includes:
(1) face detection;
(2) facial feature point detection and localization;
(3) face deformation registration;
(4) face layer decomposition;
(5) face layer composition.
Step (1) is face detection. Its purpose is to detect whether the input image contains a face and, if so, to determine the position, size and number of the faces. The invention detects and marks the face location in the image with an AdaBoost cascade classifier of Haar-like features.
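As a concrete illustration of this step, the following is a minimal sketch using OpenCV's pretrained Haar cascade as a stand-in for the patent's own classifier (file and variable names are assumptions, not part of the patent):

import cv2

img = cv2.imread("input_face.jpg")                     # hypothetical input photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)   # AdaBoost cascade of Haar-like features
print("faces found:", len(faces))                      # number of detected faces
for (x, y, w, h) in faces:                             # position and size of each face
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)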
Step (2) is facial feature point detection and localization. Its purpose is to calibrate the positions of the facial feature points within the face region of the image. The facial feature points are the contour points covering the facial features. The invention performs feature point localization with the active appearance model, AAM (Active Appearance Models). First, prior knowledge such as shape and texture is extracted efficiently: a manually pre-annotated training image set is statistically analysed to obtain a model of the shape and texture of the target object. Then, according to this shape and texture model, iterative search and matching are performed on the test sample while the prior model parameters are adjusted to the actual situation of the test sample, so as to obtain the final, accurate feature point localization output.
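The patent fits an AAM for this step; purely as an illustrative stand-in, a sketch with dlib's 68-point landmark predictor (the model path and file names are assumptions) would play the same role:

import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file
img = dlib.load_rgb_image("input_face.jpg")                                # hypothetical input photo
for rect in detector(img, 1):
    shape = predictor(img, rect)
    # contour points of brows, eyes, nose, mouth and jaw, used later as feature primitives
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]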
Step (3) is deformable registration. Its purpose is to register the makeup-example image to the input face image using an image warping technique. The main steps of the warping algorithm are: specify feature primitives, generate a warping function from the correspondence of the feature primitives, and finally map and interpolate to obtain the output image. The invention uses the facial feature points as the feature primitives and performs the deformation mapping with multilevel free-form deformation (Multilevel Free-form Deformation, MFFD), obtaining the aligned face to be made up.
The concrete operations are as follows. The facial feature points are specified as the feature primitives, and the required warping function is derived from the correspondence of the feature primitives; the makeup-example image is then registered to the input face image. Let Ω denote the deformation target object and let p = (u, v), with 1 ≤ u ≤ m and 1 ≤ v ≤ n, denote any point of Ω. The function w(p) = (x(p), y(p)) denotes the shape obtained after Ω is deformed. For the target object Ω, a control grid Φ of size (m+2) × (n+2) covering Ω is constructed. The control points of the control grid Φ may be at different positions in different states; φ⁰_ij denotes the position of the ij-th control point when the algorithm starts. From the positions of the control points of the control grid Φ, the warping function w required by the deformation is computed. The warping function w is defined as
w(p) = Σ_{k=0..3} Σ_{l=0..3} B_k(s) B_l(t) φ_{(i+k)(j+l)},
where the subscripts satisfy i = ⌊u⌋ − 1, j = ⌊v⌋ − 1, s = u − ⌊u⌋ and t = v − ⌊v⌋ (⌊·⌋ denotes rounding u, v down to an integer), and B_k(s), B_l(t) are the basis functions of the uniform cubic B-spline, given by:
B_0(t) = (−t³ + 3t² − 3t + 1)/6
B_1(t) = (3t³ − 6t² + 4)/6
B_2(t) = (−3t³ + 3t² + 3t + 1)/6
B_3(t) = t³/6
where 0 ≤ t < 1.
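For reference, a short sketch (names and 0-based grid indexing are assumptions of this sketch) that evaluates these basis functions and the warping function w defined above:

import math

def bspline_basis(t):
    # uniform cubic B-spline basis B0..B3 for 0 <= t < 1
    return ((-t**3 + 3*t**2 - 3*t + 1) / 6.0,
            (3*t**3 - 6*t**2 + 4) / 6.0,
            (-3*t**3 + 3*t**2 + 3*t + 1) / 6.0,
            t**3 / 6.0)

def warp_point(phi, u, v):
    # w(p) = sum_{k,l} B_k(s) B_l(t) * phi[i+k][j+l]; phi is a grid of 2-D control point
    # positions, indexed from 0 and assumed padded so that i..i+3, j..j+3 stay in range
    i, j = math.floor(u) - 1, math.floor(v) - 1
    s, t = u - math.floor(u), v - math.floor(v)
    Bs, Bt = bspline_basis(s), bspline_basis(t)
    x = sum(Bs[k] * Bt[l] * phi[i + k][j + l][0] for k in range(4) for l in range(4))
    y = sum(Bs[k] * Bt[l] * phi[i + k][j + l][1] for k in range(4) for l in range(4))
    return (x, y)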
At the start, the control points are at their initial positions; a B-spline curve through two or more collinear control points is linear, so Φ has not yet deformed anything, and we obtain w₀(p) = p.
A single feature point is moved first: suppose the deformation target Ω should be deformed so that a point p in it moves to a specified position q, i.e. w(p) = q; the displacement in this process is Δq = w(p) − w₀(p) = q − p. To simplify the formula, assume p = (u, v) with 1 ≤ u, v < 2; then
Δq = Σ_{k=0..3} Σ_{l=0..3} w_kl Δφ_kl,
where w_kl = B_k(s)B_l(t) with s = u − 1, t = v − 1. Many Δφ_kl satisfy this equation. The invention uses the least-squares solution, namely
Δφ_kl = w_kl Δq / Σ_{a=0..3} Σ_{b=0..3} w_ab².
When several control points are moved: suppose that by moving the control points of the control grid, every point of the point set P representing the target object Ω is deformed, i.e. w(p) = q for any point p in the origin set P, where q is the position of p after deformation in the point set Q. Let P′ = {(u_c, v_c)} be the subset of P satisfying i−2 ≤ u_c < i+2 and j−2 ≤ v_c < j+2, and let φ be the ij-th control point of Φ, with initial position (i, j); then P′ is exactly the set of feature points affected by the displacement of φ. For each point p_c of P′, the displacement Δφ_c of the neighbouring control point needed to move p_c to its given target position is computed as in the formula above. Δφ_c is not necessarily the same for every point of P′, so the displacement Δφ of each control point φ should minimize the distortion error of all the points, so that the displacement of the control point φ does not move other points of P to wrong positions.
The distortion error is defined as
E(Δφ) = Σ_{p_c ∈ P′} (w_c Δφ − w_c Δφ_c)²,
where w_c Δφ is the displacement of the point p_c caused by the displacement Δφ of the control point φ, and w_c Δφ_c is the displacement of p_c caused by the displacements of all the control points needed to move p_c to its designated position. The distortion error is thus the sum, over every point p_c = (u_c, v_c) of P′, of the squared difference between w_c Δφ and w_c Δφ_c, where w_c = B_k(s_c)B_l(t_c) is the B-spline weight of the control point φ at p_c. Differentiating this expression with respect to Δφ and setting the derivative to zero gives
Δφ = Σ_{p_c ∈ P′} w_c² Δφ_c / Σ_{p_c ∈ P′} w_c².
The correct displacement of each control point is computed in this way, and the new positions of the control points after deformation are substituted into the definition of w to obtain the required warping function. Further, to control the ghosting phenomenon that can occur during the above operations (some points overlapping after being mapped into the original image), the control points are moved step by step over several iterations, and the control grid cell size is gradually reduced as the iterations proceed.
For the control grids Φ_0, Φ_1, …, Φ_g with successively smaller cell size, the corresponding warping functions are solved respectively, realizing multilevel free-form deformation. The cell size of the f-th control grid Φ_f at initialization is denoted h_f; here h_0 and h_g are given, and every size satisfies h_f = 2·h_{f+1}. Let w_0, w_1, …, w_g denote the warping functions obtained from the corresponding control grids; the final overall deformation function is the composition w = w_g ∘ w_{g−1} ∘ … ∘ w_0, where w(Ω) = w_g(Ω_g) with Ω_0 = Ω and Ω_{i+1} = w_i(Ω_i). In successive iterations, the result of the previous iteration is the input of the next, so P_{i+1} = w_i(P_i) with initial condition P_0 = P. The error of each iteration is defined from the residuals between the mapped feature points w_i(p_c) and their targets q_c, where q_c is the position in the point set Q, after deformation, that corresponds to the point p_c of the point set P before deformation. After an iteration, if this error is smaller than a given threshold, the algorithm converges and outputs the final result.
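A compact sketch of the multilevel loop just described, assuming a hypothetical per-level solver solve_ffd_level that performs the least-squares control-point update above:

import numpy as np

def mffd_align(P, Q, levels, threshold):
    # P, Q: (N, 2) arrays of source and target feature points; returns the per-level warps
    warps, current = [], np.asarray(P, dtype=float)
    for f in range(levels):
        w = solve_ffd_level(current, Q, level=f)       # hypothetical: fits grid Phi_f (cell size h_f = 2 * h_{f+1})
        current = np.array([w(p) for p in current])    # P_{i+1} = w_i(P_i): one level feeds the next
        warps.append(w)
        residual = np.linalg.norm(current - Q, axis=1).max()
        if residual < threshold:                       # converged: mapped points close enough to their targets
            break
    return warps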
Step (4) is face layer decomposition. Its purpose is to decompose the makeup-example image into layers and separate the makeup information from the sample template. The invention applies edge-preserving smoothing to the makeup-example image with a guided filter and further decomposes it into a facial structure layer and a texture detail layer. For a digital makeup application, different regions of the face take entirely different makeup strategies; the invention therefore improves the guided filter so that, according to position information, it achieves different degrees of smoothing and edge preservation in different regions of the image. This reduces the running time of the digital makeup algorithm based on a sample template and improves the practicality of the algorithm, which can be implemented as an application on the iOS platform.
The concrete operations are as follows: the inputs of the guided filter are a guide image I and an image p to be filtered; the filter output is then
q_i = Σ_j W_ij(I) p_j,
where W is the filter kernel jointly determined by the input guide image I and the image p to be filtered, and i and j are pixel indices. Assume that in a local region the guide image I and the filter output image q satisfy a simple linear model, i.e. the pixel values of the filter output q in a local region are obtained by a linear transform of the pixel values of the guide image I at the corresponding positions, expressed mathematically as
q_i = a_k I_i + b_k, for all i ∈ ω_k,
where ω_k is the filter window centered on pixel k and a_k, b_k are linear coefficients whose values depend on the window ω_k. The filter window used by the guided filter is a square window of radius r. Taking the spatial derivative of q_i = a_k I_i + b_k gives ∇q = a ∇I, so the output inherits the edges of the guide image.
The linear coefficients of the model are found by minimizing the difference between the input image to be filtered and the output image; substituting q_i = a_k I_i + b_k, the cost to be minimized is
E(a_k, b_k) = Σ_{i∈ω_k} ((a_k I_i + b_k − p_i)² + ε a_k²),
where ε is a regularization parameter introduced to prevent the value of a_k from becoming too large. Solving this minimization gives
a_k = ((1/|ω|) Σ_{i∈ω_k} I_i p_i − μ_k p̄_k) / (σ_k² + ε)
b_k = p̄_k − a_k μ_k,
where μ_k and σ_k² are the mean and variance of the guide image I over all pixels of the window ω_k, |ω| is the number of pixels in ω_k, and p̄_k is the mean of the input image p over all pixels of ω_k. Substituting these expressions for a_k and b_k back into q_i = a_k I_i + b_k and averaging over all windows that contain pixel i yields
q_i = ā_i I_i + b̄_i,
where ā_i = (1/|ω|) Σ_{k: i∈ω_k} a_k and b̄_i = (1/|ω|) Σ_{k: i∈ω_k} b_k.
Considering that in real makeup the different regions of the face receive entirely different makeup strategies, the invention improves the filtering process above so that it produces different degrees of smoothing and edge preservation in different regions of the image: the fixed regularization factor ε of the original algorithm is replaced by a parameter β that varies with pixel position. In the invention, the face image is divided into three parts, facial skin, eyebrows and other face regions, and the β parameter takes a different value in each of the three regions: β_{i∈skin} = 1, β_{i∈eyebrow} = 0.7, β_{i∈other} = 0. Further, the β matrix is processed with an erosion algorithm and a Gaussian blur algorithm so that its low-value regions expand across their boundaries, satisfying the requirement of a smooth transition between different face regions while preserving the sharpness of image boundaries. After the β parameter is introduced in place of the fixed parameter ε in the cost function, we obtain
E(a_k, b_k) = Σ_{i∈ω_k} ((a_k I_i + b_k − p_i)² + β_i a_k²).
Solving this gives
a_k = ((1/|ω|) Σ_{i∈ω_k} I_i p_i − μ_k p̄_k) / (σ_k² + β̄_k)
b_k = p̄_k − a_k μ_k,
where β̄_k is the mean of β_i in the window ω_k. This realizes a guided filter whose smoothing strength varies with a position-dependent parameter, and the layer decomposition is completed with this improved guided filter.
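A minimal sketch of this region-adaptive guided filter, assuming single-channel floating-point images and using a box filter for the window means (function and parameter names are illustrative):

import numpy as np
from scipy.ndimage import uniform_filter

def box_mean(x, r):
    return uniform_filter(x, size=2 * r + 1)            # mean over the (2r+1) x (2r+1) window

def guided_filter_beta(I, p, beta, r):
    # I: guide image, p: image to filter, beta: per-pixel regularization map, r: window radius
    mu_I, mu_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mu_I * mu_I             # sigma_k^2
    cov_Ip = box_mean(I * p, r) - mu_I * mu_p            # (1/|w|) sum I_i p_i  minus  mu_k * p_bar_k
    beta_bar = box_mean(beta, r)                         # mean of beta over each window
    a = cov_Ip / (var_I + beta_bar + 1e-8)               # fixed eps replaced by the window mean of beta
    b = mu_p - a * mu_I
    return box_mean(a, r) * I + box_mean(b, r)           # q_i = a_bar_i * I_i + b_bar_i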
Step (5) is face layer composition. Its purpose is to composite the structure layers, detail layers and hue layers of the input image and the makeup-example image, obtaining the face image after makeup. The concrete operations are as follows: after the input image and the makeup-example image have passed in turn through the face detection, feature point localization, face deformation alignment and face image layer decomposition steps, the corresponding structure layers (I_s, E_s), detail layers (I_d, E_d) and hue layers (I_c, E_c) are obtained; each pair of layers is then composited as follows.
The structure layer of the face is the same before and after makeup, so the structure layer of the input face image is used directly as the structure layer of the output face image, i.e. R_s = I_s. In the non-eye regions of the face, the detail layer of the output face image is the weighted sum of the detail layers of the input face image and the makeup-example image, simulating the texture-covering effect of liquid and cream foundation on the skin, i.e.
R_d(p) = δ_I I_d(p) + δ_E E_d(p).
In the eye regions and in non-face regions, the detail of the input face image is used directly, so the detail layer of the output face image is, as a whole, the piecewise function
R_d(p) = δ_I I_d(p) + δ_E E_d(p) for p in the non-eye face region, and R_d(p) = I_d(p) elsewhere.
In the formula, the weights 0 ≤ δ_I ≤ 1 and 0 ≤ δ_E ≤ 1 represent, respectively, the degree to which the detail layer of the input image is preserved and the degree to which the detail layer of the makeup-example image is transferred. δ_I + δ_E is not required to equal 1 or to be less than 1, but δ_I + δ_E must not be too small or close to 0, otherwise the makeup in the output would look unrealistic because of too little texture detail.
The mixing effect of cosmetics is simulated with an alpha blending algorithm, namely
R_c(p) = (1 − γ) I_c(p) + γ E_c(p),
where γ is the weight of the hue layer of the makeup-example image in the hue blending. In the eye regions and in non-face regions, the hue values of the input face image's hue layer are used directly as the hue values of the output face image, i.e. R_c(p) = I_c(p).
After the structure, detail and hue layers of the input face image and the makeup-example image have been composited correspondingly, the three layers of the output made-up image are obtained. The detail layer was obtained by subtracting the structure layer produced by edge-preserving filtering from the lightness layer L, so simply adding the structure layer and the detail layer of the output image yields its lightness layer in the CIELAB color space, i.e. R_L = R_s + R_d. The hue layer R_c of the output image consists of the a* and b* channels of the CIELAB color space, so the a* and b* channels of the output image are obtained directly from R_c. All three channels of the output image in the CIELAB color space are now available; it only remains to convert the output image from the CIELAB color space back to RGB to obtain the output face image after makeup.
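A hedged sketch of this composition step, assuming the decomposition already produced floating-point L*-channel structure and detail layers and a two-channel a*b* hue layer; the weights are illustrative, since the patent does not fix δ_I, δ_E or γ:

import numpy as np
import cv2

def compose_layers(I_s, I_d, I_ab, E_d, E_ab, face_mask, eye_mask,
                   delta_I=0.6, delta_E=0.8, gamma=0.8):           # illustrative weights
    blend = face_mask & ~eye_mask                                   # non-eye face region
    R_s = I_s                                                       # structure unchanged by makeup: R_s = I_s
    R_d = np.where(blend, delta_I * I_d + delta_E * E_d, I_d)       # weighted detail inside the face, input detail elsewhere
    R_ab = np.where(blend[..., None],
                    (1.0 - gamma) * I_ab + gamma * E_ab,            # alpha blend of the a*, b* hue channels
                    I_ab)
    L = np.clip(R_s + R_d, 0, 255).astype(np.uint8)                 # lightness = structure + detail
    lab = np.dstack([L, np.clip(R_ab, 0, 255).astype(np.uint8)])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)                     # back to RGB (BGR in OpenCV) for output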
The basic principle of the invention is to transfer the makeup information of the makeup image onto the face image, achieving a digital makeup effect. The basic steps of the whole digital makeup method are face detection, facial feature point detection and localization, face deformation registration, face layer decomposition, and face layer composition.
Compared with existing digital makeup methods, the present invention has the following advantages and beneficial effects:
(1) the algorithm is more efficient than existing algorithms;
(2) the guided filter is improved for the layer decomposition of face images, giving a very good decomposition result while significantly reducing the time complexity of the algorithm;
(3) the face image detail layer composition algorithm is improved, using different composition strategies for different face regions, which makes the makeup effect more realistic;
(4) the sample-based super-resolution reconstruction algorithm solves the resolution-difference problem encountered when the sample-template-based digital makeup algorithm is applied in practice.
Detailed description of the invention
Fig. 1 is the overall flowchart of the digital makeup algorithm of the invention;
Fig. 2 is the layer decomposition flowchart of the invention;
Fig. 3 is the overall flowchart of the sample-based super-resolution reconstruction algorithm of the invention.
Specific embodiment
The technical solution of the invention has been disclosed in detail above; the invention is further described below with reference to the drawings. A client-server architecture is built with a mobile client based on Apple's iOS platform and a server based on the Microsoft Windows platform, implementing the mobile intelligent terminal digital makeup application of the invention. The server side runs on Windows Server and is written in the programming language C#; it is responsible for session management, invocation of the sample-template-based digital makeup algorithm, database management, and so on. The client is a mobile application written in the programming language Objective-C; it is responsible for image collection, data transmission, and user interaction.
The overall flow of the digital makeup algorithm based on a sample template is shown in Fig. 1. After facial feature points have been located in the makeup-example image and in the face image to be made up, the first step is to use multilevel free-form deformation (Multilevel Free-form Deformation, MFFD) to align the face in the makeup-example image E* to the input face image I to be made up, obtaining E. Next, I and E are each transformed into the CIELAB color space, yielding the lightness layer L* and the two color layers a* and b*; a* and b* together form the face hue layers I_c and E_c in Fig. 1. Then, the improved guided filter is applied to the L* layer of each image to perform the layer decomposition, obtaining the large-scale layers I_s, E_s that represent the facial structure and the detail layers I_d, E_d that carry the makeup. Because the facial structure does not change before and after makeup, the structure layer R_s of the output face image directly equals I_s, while the detail layer R_d and the hue layer R_c of the makeup are computed by weighted averaging and alpha blending, respectively. From the three layers R_s, R_d, R_c the final made-up face image R is obtained. In Fig. 1, the symbol W denotes warping the makeup-example image to the shape of the input image, + denotes the weighted averaging of the detail layers, and α denotes the alpha blending of the hue layers.
The layer decomposition flow of the invention is shown in Fig. 2. The original image is first transformed from RGB into the CIELAB color space, yielding the three channels L*, a* and b*; L* is the lightness channel, and a* and b* together form the hue layer. Guided filtering is applied to the L* channel to obtain the large-scale information layer that represents the facial structure; subtracting the structure layer from the L* layer then gives the texture detail layer.
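Combining the earlier sketches, the Fig. 2 pipeline could be outlined as follows (guided_filter_beta is the region-adaptive filter sketched above; the radius is illustrative):

import numpy as np
import cv2

def decompose(bgr, beta, r=16):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    structure = guided_filter_beta(L, L, beta, r)   # edge-preserving smoothing of the L* channel
    detail = L - structure                          # texture detail layer = L* minus structure
    hue = np.dstack([a, b])                         # a* and b* together form the hue layer
    return structure, detail, hue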
The overall flow of the sample-based super-resolution reconstruction algorithm of the invention is shown in Fig. 3; its main purpose is to solve the problem that the makeup-example image and the input face image to be made up have different resolutions.
First, enough high-resolution images are collected to form a data set, and these images are downscaled to a quarter of their original size to obtain the low-resolution image data set. The high-resolution and low-resolution images are then divided into local patches that correspond one to one. The high-frequency information of every patch obtained in this way is stored, and the high-frequency information of all the patches together constitutes the required training data.
In the high-frequency prediction step, a Markov network is used to model the spatial relations between image patches: the input low-resolution local patches are treated as the visible observation nodes of the Markov network, and the corresponding high-resolution local patch information to be predicted is treated as the hidden nodes. Solving for all the hidden nodes of the Markov network yields the high-frequency information of the output image. The approximate optimal solution of the Markov network is computed with iterative belief propagation (Belief Propagation); three to four iterations are generally sufficient.
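For the training-data step, a sketch of building corresponding low/high-resolution patch pairs; the patch size, the quarter-per-dimension downscale, and the high-pass used here to define "high-frequency" content are all assumptions of this sketch:

import cv2
import numpy as np

def build_training_pairs(hires_images, patch=8):
    pairs = []
    for hi in hires_images:
        h, w = hi.shape[:2]
        lo = cv2.resize(hi, (w // 4, h // 4), interpolation=cv2.INTER_AREA)      # quarter-size version
        lo_up = cv2.resize(lo, (w, h), interpolation=cv2.INTER_CUBIC)            # re-enlarged observation
        hi_freq = hi.astype(np.float32) - cv2.GaussianBlur(hi, (5, 5), 0)        # assumed high-frequency content
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                pairs.append((lo_up[y:y + patch, x:x + patch],    # visible (observation) node patch
                              hi_freq[y:y + patch, x:x + patch])) # hidden high-frequency node patch
    return pairs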
The invention can be implemented as described above and achieves the technical effects stated earlier.

Claims (5)

1. A digital makeup method based on a sample template, characterized in that a light-makeup or bare-face photo collected by a mobile intelligent terminal and a makeup-example photo first undergo face detection, facial feature point detection, and face image deformation alignment; on this basis the face images are decomposed into layers; the resolution difference between the makeup-example image and the input face image is solved with a sample-based super-resolution reconstruction algorithm; finally the face image layers are composited, and the made-up version of the light-makeup or bare-face photo is output;
the method comprising the following steps:
1) face detection: detecting whether the input image contains a face and, if so, determining the position, size and number of the faces; this step detects and marks the face location in the image with an AdaBoost cascade classifier of Haar-like features;
2) facial feature point detection and localization: calibrating the positions of the facial feature points within the face region of the image; the facial feature points are contour points covering the facial features, and this step performs facial feature point localization with the active appearance model, AAM (Active Appearance Models);
3) face image deformable registration: registering the makeup-example image to the input face image with an image warping algorithm; the image warping algorithm comprises: specifying feature primitives, generating a warping function from the correspondence of the feature primitives, and finally mapping and interpolating to obtain the final output image; this step uses the facial feature points as the feature primitives and performs the deformation mapping with multilevel free-form deformation (Multilevel Free-form Deformation, MFFD), obtaining the aligned face to be made up;
the concrete operations of the image warping algorithm of step 3) are as follows: the facial feature points are specified as the feature primitives, and the required warping function is derived from the correspondence of the feature primitives, registering the makeup-example image to the input face image; let Ω denote the deformation target object and p = (u, v) any point of Ω, with 1 ≤ u ≤ m and 1 ≤ v ≤ n indicating the numbers of pixels of the target object in the two directions; the function w(p) = (x(p), y(p)) denotes the shape obtained after Ω is deformed; a control grid Φ of size (m+2) × (n+2) covering Ω is constructed for the target object Ω; the control points of the control grid Φ may be at different positions in different states; φ⁰_ij denotes the position of the ij-th control point when the algorithm starts; from the positions of the control points of the control grid Φ, the warping function w required by the deformation is computed; the warping function w is defined as
w(p) = Σ_{k=0..3} Σ_{l=0..3} B_k(s) B_l(t) φ_{(i+k)(j+l)},
where the subscripts satisfy i = ⌊u⌋ − 1, j = ⌊v⌋ − 1, s = u − ⌊u⌋ and t = v − ⌊v⌋, ⌊·⌋ denoting rounding the variables u, v down to an integer; the functions B_k(s), B_l(t) are the basis functions of the uniform cubic B-spline, defined as follows:
B_0(t) = (−t³ + 3t² − 3t + 1)/6
B_1(t) = (3t³ − 6t² + 4)/6
B_2(t) = (−3t³ + 3t² + 3t + 1)/6
B_3(t) = t³/6
where 0 ≤ t < 1;
at the start, the control points are at their initial positions; a B-spline curve through two or more collinear control points is linear, so Φ has not yet deformed anything, giving w₀(p) = p;
a single feature point is moved first, i.e. it is assumed that the deformation target Ω should be deformed so that a point p in it moves to a specified position q, i.e. w(p) = q, the displacement in this process being Δq = w(p) − w₀(p) = q − p; to simplify the formula, assume p = (u, v) with 1 ≤ u, v < 2, which gives
Δq = Σ_{k=0..3} Σ_{l=0..3} w_kl Δφ_kl,
where w_kl = B_k(s)B_l(t) and s = u − 1, t = v − 1; many Δφ_kl satisfy this equation; the least-squares solution is used, namely
Δφ_kl = w_kl Δq / Σ_{a=0..3} Σ_{b=0..3} w_ab²;
the case of moving several control points at once is considered further: suppose that by moving the control points of the control grid, every point of the point set P representing the target object Ω is deformed, i.e. w(p) = q for any point p of the origin set P, where q is the position of p after deformation in the point set Q; let P′ = {(u_c, v_c)} be the subset of P satisfying i−2 ≤ u_c < i+2 and j−2 ≤ v_c < j+2, and let φ be the ij-th control point of Φ, with initial position (i, j); then P′ is exactly the set of feature points affected by the displacement of φ; for each point p_c of P′, the displacement Δφ_c of the neighbouring control point needed to move p_c to its given target position is computed as in the formula above; Δφ_c is not necessarily the same for every point of P′, so the displacement Δφ of each control point φ should minimize the distortion error of all the points, so that the displacement of the control point φ does not move other points of P to wrong positions;
the distortion error is defined as
E(Δφ) = Σ_{p_c ∈ P′} (w_c Δφ − w_c Δφ_c)²,
where w_c Δφ is the displacement of the point p_c caused by the displacement Δφ of the control point φ, and w_c Δφ_c is the displacement of p_c caused by the displacements of all the control points needed to move p_c to its designated position; the distortion error is therefore the sum, over every point p_c = (u_c, v_c) of P′, of the squared difference between w_c Δφ and w_c Δφ_c, where w_c = B_k(s_c)B_l(t_c) is the B-spline weight of the control point φ at p_c; differentiating this expression with respect to Δφ and setting the derivative to zero gives
Δφ = Σ_{p_c ∈ P′} w_c² Δφ_c / Σ_{p_c ∈ P′} w_c²;
the correct displacement of each control point is computed accordingly, and the new positions of the control points after deformation are substituted into the definition of w to obtain the required warping function; further, to control the ghosting phenomenon that can occur during the above operations, i.e. some points overlapping after being mapped into the original image, the control points are moved step by step over several iterations, and the control grid cell size is gradually reduced as the iterations proceed;
for the control grids Φ_0, Φ_1, …, Φ_g with successively smaller cell size, the corresponding warping functions are solved respectively, thereby realizing multilevel free-form deformation; the cell size of the f-th control grid Φ_f at initialization is denoted h_f, where h_0 and h_g are given and every size satisfies h_f = 2·h_{f+1}; w_0, w_1, …, w_g denote the warping functions obtained from the corresponding control grids, and the final overall deformation function is the composition w = w_g ∘ w_{g−1} ∘ … ∘ w_0, where w(Ω) = w_g(Ω_g) with Ω_0 = Ω and Ω_{i+1} = w_i(Ω_i); in successive iterations the result of the previous iteration is the input of the next, so P_{i+1} = w_i(P_i) with initial condition P_0 = P; the error of each iteration is defined from the residuals between the mapped feature points w_i(p_c) and their targets q_c, where q_c is the position in the point set Q after deformation corresponding to the point p_c of the point set P before deformation;
after an iteration, if this error is smaller than a given threshold, the algorithm converges and gives the final result;
4) face layer decomposition: decomposing the makeup-example image into layers, separating the makeup information from the sample image; edge-preserving smoothing is applied to the makeup-example image with a guided filter, further decomposing it into a facial structure layer and a texture detail layer; for a digital makeup application, the different regions of the face take entirely different makeup strategies, so this step improves the guided filter with adaptively adjusted filter parameters, giving the guided filter different degrees of smoothing and edge preservation in different regions of the image according to position information, reducing the running time of the digital makeup algorithm based on a sample template and improving the practicality of the algorithm, so that it can be implemented as an application on the iOS platform;
5) face layer composition: compositing the structure layers, detail layers and hue layers obtained after decomposition of the input image and the makeup-example image, obtaining the face image after makeup.
2. The digital makeup method according to claim 1, characterized in that step 1) trains a classifier with the Haar-like features of the training samples, thereby obtaining an AdaBoost cascade classifier.
3. The digital makeup method based on a sample template according to claim 1, characterized in that the concrete operations of step 2) are as follows: shape and texture prior knowledge is extracted with the AAM model; first, the shape and texture information of a manually pre-annotated training image set is statistically analysed to obtain a model of the shape and texture of the target object; then, according to this shape and texture model, iterative search and matching are performed on the test sample while the prior model parameters are adjusted to the actual situation of the test sample, so as to obtain the final, accurate feature point localization output.
4. The digital makeup method based on a sample template according to claim 1, characterized in that the concrete operations of step 4) are as follows: assume the inputs of the guided filter are a guide image I and an image p to be filtered; the filter output is then
q_i = Σ_j W_ij(I) p_j,
where i and j are pixel indices, W_ij is the filter kernel value for the pixel index pair ij, and the filter kernel W is jointly determined by the input guide image I and the image p to be filtered; p_j is a pixel value of the image to be filtered and q_i is the filter output at index i; assume that in a local region the guide image I and the filter output image q satisfy a simple linear model, i.e. the pixel values of the filter output q in a local region are obtained by a linear transform of the pixel values of the guide image I at the corresponding positions, expressed mathematically as
q_i = a_k I_i + b_k, for all i ∈ ω_k,
where ω_k is the filter window centered on pixel k and a_k, b_k are linear coefficients whose values depend on the window ω_k; the filter window used by the guided filter is a square window of radius r; taking the spatial derivative of q_i = a_k I_i + b_k gives ∇q = a ∇I, so the filter output preserves the edges of the guide image;
the linear coefficients of the model are obtained by minimizing the difference between the input image to be filtered and the output image; substituting q_i = a_k I_i + b_k, the cost to be minimized is
E(a_k, b_k) = Σ_{i∈ω_k} ((a_k I_i + b_k − p_i)² + ε a_k²),
where ε is a regularization parameter introduced to prevent the value of a_k from becoming too large; solving this minimization gives
a_k = ((1/|ω|) Σ_{i∈ω_k} I_i p_i − μ_k p̄_k) / (σ_k² + ε)
b_k = p̄_k − a_k μ_k,
where μ_k and σ_k² are the mean and variance of the guide image I over all pixels of the window ω_k, |ω| is the number of pixels in ω_k, and p̄_k is the mean of the input image p over all pixels of the window ω_k; substituting these expressions for a_k and b_k back into q_i = a_k I_i + b_k and averaging over all windows yields
q_i = ā_i I_i + b̄_i,
where ā_i = (1/|ω|) Σ_{k: i∈ω_k} a_k and b̄_i = (1/|ω|) Σ_{k: i∈ω_k} b_k;
in real makeup, the different regions of the face take entirely different makeup strategies, so the filtering process is improved so that it produces different degrees of smoothing and edge preservation in the different regions of the image, i.e. the fixed regularization factor ε of the original algorithm is replaced by a parameter β that varies with pixel position; the face image is divided into three parts, facial skin, eyebrows and other face regions, and the β parameter takes a different value in each of the three regions, namely β_{i∈skin} = 1, β_{i∈eyebrow} = 0.7, β_{i∈other} = 0; further, the β matrix is processed with an erosion algorithm and a Gaussian blur algorithm, giving the β matrix a certain boundary expansion, so as to satisfy the requirement of a smooth transition between different face regions while ensuring the sharpness of image boundaries; after the β parameter is introduced in place of the fixed parameter ε in the cost function, we obtain
E(a_k, b_k) = Σ_{i∈ω_k} ((a_k I_i + b_k − p_i)² + β_i a_k²);
solving this gives
a_k = ((1/|ω|) Σ_{i∈ω_k} I_i p_i − μ_k p̄_k) / (σ_k² + β̄_k)
b_k = p̄_k − a_k μ_k,
where β̄_k is the mean of β_i in the window ω_k; this realizes a guided filter whose smoothing strength varies with a position-dependent parameter, and the layer decomposition is completed with this improved guided filter.
5. The digital makeup method based on a sample template according to claim 1, characterized in that the concrete operations of step 5) are as follows: after the input image and the makeup-example image have passed in turn through the face detection, feature point localization, face deformation alignment and face image layer decomposition steps, the corresponding structure layers (I_s, E_s), detail layers (I_d, E_d) and hue layers (I_c, E_c) are obtained; each pair of layers is then composited as follows:
the structure layer of the face is the same before and after makeup, and the structure layer of the input face image is used directly as the structure layer of the output face image, i.e. R_s = I_s; in the non-eye regions of the face, the detail layer of the output face image is the weighted sum of the detail layers of the input face image and the makeup-example image, simulating the texture-covering effect of liquid and cream foundation on the skin, i.e.
R_d(p) = δ_I I_d(p) + δ_E E_d(p),
while in the eye regions and in non-face regions the detail of the input face image is used directly, i.e. R_d(p) = I_d(p); the detail layer of the output face image is therefore given by this piecewise formula,
where the weights 0 ≤ δ_I ≤ 1 and 0 ≤ δ_E ≤ 1 represent, respectively, the degree to which the detail layer of the input image is preserved and the degree to which the detail layer of the makeup-example image is transferred;
the mixing effect of cosmetics is simulated with an alpha blending algorithm, namely
R_c(p) = (1 − γ) I_c(p) + γ E_c(p),
where γ is the weight of the hue layer of the makeup-example image in the hue blending;
in the eye regions and in non-face regions, the hue values of the input face image's hue layer are used directly as the hue values of the output face image, namely R_c(p) = I_c(p);
after the structure layers, detail layers and hue layers of the input face image and the makeup-example image have been correspondingly composited, the three layers of the output made-up image are obtained; the detail layer was obtained by subtracting the structure layer produced by edge-preserving filtering from the lightness layer L, so simply adding the structure layer and the detail layer of the output image yields the lightness layer of the output image in the CIELAB color space, i.e. R_L = R_s + R_d;
the hue layer R_c of the output image consists of the a* and b* channels of the CIELAB color space, so the a* and b* channels of the output image are obtained directly from R_c; once all three channels of the output image in the CIELAB color space have been obtained, it only remains to convert the output image from the CIELAB color space back to the RGB color space to obtain the output face image after makeup.
CN201510860633.4A 2015-11-30 2015-11-30 Digital makeup method based on a sample template Active CN105488472B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510860633.4A CN105488472B (en) 2015-11-30 2015-11-30 Digital makeup method based on a sample template

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510860633.4A CN105488472B (en) 2015-11-30 2015-11-30 Digital makeup method based on a sample template

Publications (2)

Publication Number Publication Date
CN105488472A CN105488472A (en) 2016-04-13
CN105488472B true CN105488472B (en) 2019-04-09

Family

ID=55675444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510860633.4A Active CN105488472B (en) 2015-11-30 2015-11-30 Digital makeup method based on a sample template

Country Status (1)

Country Link
CN (1) CN105488472B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956522A (en) * 2016-04-21 2016-09-21 腾讯科技(深圳)有限公司 Picture processing method and device
JP6720882B2 (en) * 2017-01-19 2020-07-08 カシオ計算機株式会社 Image processing apparatus, image processing method and program
CN107729855B (en) * 2017-10-25 2022-03-18 成都尽知致远科技有限公司 Mass data processing method
CN109871564A (en) * 2017-12-01 2019-06-11 英属开曼群岛商玩美股份有限公司 Method, system and the readable memory medium of cosmetics identification and simulation application
CN108596992B (en) * 2017-12-31 2021-01-01 广州二元科技有限公司 Rapid real-time lip gloss makeup method
CN108509846B (en) * 2018-02-09 2022-02-11 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, computer device, storage medium, and computer program product
CN108257084B (en) * 2018-02-12 2021-08-24 北京中视广信科技有限公司 Lightweight face automatic makeup method based on mobile terminal
JP6908013B2 (en) * 2018-10-11 2021-07-21 カシオ計算機株式会社 Image processing equipment, image processing methods and programs
EP3871194A4 (en) 2018-10-26 2022-08-24 Soul Machines Limited Digital character blending and generation system and method
CN110599534B (en) * 2019-09-12 2022-01-21 清华大学深圳国际研究生院 Learnable guided filtering module and method suitable for 2D convolutional neural network
CN111586424B (en) * 2020-04-28 2022-05-31 永康精信软件开发有限公司 Video live broadcast method and device for realizing multi-dimensional dynamic display of cosmetics
CN111586428A (en) * 2020-04-30 2020-08-25 永康精信软件开发有限公司 Cosmetic live broadcast system and method with virtual character makeup function
CN113344836B (en) * 2021-06-28 2023-04-14 展讯通信(上海)有限公司 Face image processing method and device, computer readable storage medium and terminal

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731417A (en) * 2005-08-19 2006-02-08 清华大学 Method of robust human face detection in complicated background image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7620218B2 (en) * 2006-08-11 2009-11-17 Fotonation Ireland Limited Real-time face tracking with reference images

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731417A (en) * 2005-08-19 2006-02-08 清华大学 Method of robust human face detection in complicated background image

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Liao Wenxin. Data-driven face beautification technology and application development. China Masters' Theses Full-text Database, Information Science and Technology, 2013, No. 05, I138-2054.
Lin Jianchu et al. A high-fidelity makeup transfer method for face images. Computer Applications and Software, 2015, Vol. 32, No. 8, 187-210.
Liang Lingyu. Research on adaptive beautification and rendering of face images. China Doctoral Dissertations Full-text Database, Information Science and Technology, 2014, No. 11, I138-19.
Li Xiaofeng. Research on a digital makeup algorithm based on sample templates. China Masters' Theses Full-text Database, Information Science and Technology, 2015, No. 12, I138-675.

Also Published As

Publication number Publication date
CN105488472A (en) 2016-04-13

Similar Documents

Publication Publication Date Title
CN105488472B (en) Digital makeup method based on a sample template
CN105469407B (en) A face image layer decomposition method based on an improved guided filter
Chen et al. Beautyglow: On-demand makeup transfer framework with reversible generative network
CN108734659B (en) Sub-pixel convolution image super-resolution reconstruction method based on multi-scale label
CN107833183B (en) Method for simultaneously super-resolving and coloring satellite image based on multitask deep neural network
CN108805814B (en) Image super-resolution reconstruction method based on multi-band deep convolutional neural network
CN108830913B (en) Semantic level line draft coloring method based on user color guidance
DE102017010210A1 (en) Image Matting by means of deep learning
CN109325398A (en) A kind of face character analysis method based on transfer learning
Gandhi A method for automatic synthesis of aged human facial images
JP7246811B2 (en) Data processing method, data processing device, computer program, and computer device for facial image generation
CN109166102A (en) It is a kind of based on critical region candidate fight network image turn image interpretation method
CN105550649B (en) Extremely low resolution ratio face identification method and system based on unity couping local constraint representation
CN109035267B (en) Image target matting method based on deep learning
CN108053398A (en) A kind of melanoma automatic testing method of semi-supervised feature learning
CN107909622A (en) Model generating method, the scanning planing method of medical imaging and medical image system
CN116583878A (en) Method and system for personalizing 3D head model deformation
CN116997933A (en) Method and system for constructing facial position map
CN109829507B (en) Aerial high-voltage transmission line environment detection method
CN116648733A (en) Method and system for extracting color from facial image
CN110717978B (en) Three-dimensional head reconstruction method based on single image
CN109345604A (en) Image processing method, computer equipment and storage medium
Yang et al. Elegant: Exquisite and locally editable gan for makeup transfer
Bähr et al. CellCycleGAN: Spatiotemporal microscopy image synthesis of cell populations using statistical shape models and conditional GANs
CN117157673A (en) Method and system for forming personalized 3D head and face models

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant