CN109784281A - Facial-feature-based product recommendation method, apparatus and computer equipment - Google Patents
Facial-feature-based product recommendation method, apparatus and computer equipment
- Publication number
- CN109784281A (application CN201910048289.7A)
- Authority
- CN
- China
- Prior art keywords
- point
- face
- target image
- current fragment point
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to the technical field of face recognition and provides a facial-feature-based product recommendation method, apparatus, storage medium and computer equipment. The method includes: obtaining a 2D target image and rendering the 2D target image into a 3D target image; performing face recognition on the 3D target image and extracting facial features of the 3D target image; obtaining, through a face recognition model, makeup parameters matching the facial features, the face recognition model having been trained on at least one 3D face image bearing makeup; searching for a makeup recommendation information set corresponding to the makeup parameters; and displaying at least one piece of makeup recommendation information contained in the makeup recommendation information set. The displayed makeup recommendations are better targeted, help the wearer improve the makeup effect, and improve recommendation accuracy, solving the prior-art problem that users choose skincare and/or cosmetic products blindly and therefore choose unsuitable products.
Description
Technical field
The invention belongs to the technical field of face recognition, and in particular relates to a facial-feature-based product recommendation method, apparatus, storage medium and computer equipment.
Background art
Face recognition is a relatively mature technology that has been applied in many areas of society and has brought great convenience to people's lives. However, many users still do not know how to choose the skincare and/or cosmetic products best suited to their own faces. At present, some users buy fashionable brands under the influence of advertising, while others buy on the recommendation of friends; in neither case do they choose the most suitable skincare and/or cosmetic products based on the actual condition of their own faces.
In summary, the current process by which users choose skincare and/or cosmetic products involves blind selection, which leads to the choice of unsuitable products.
Summary of the invention
In view of this, embodiments of the present invention provide a facial-feature-based product recommendation method, apparatus, storage medium and computer equipment, to solve the prior-art problem that users choose skincare and/or cosmetic products blindly and therefore choose unsuitable products.
A first aspect of the embodiments of the present invention provides a facial-feature-based product recommendation method, comprising:
obtaining a 2D target image;
rendering the 2D target image into a 3D target image;
performing face recognition on the 3D target image, and extracting facial features of the 3D target image;
obtaining, through a face recognition model, makeup parameters matching the facial features, the face recognition model having been trained on at least one 3D face image bearing makeup;
searching for a makeup recommendation information set corresponding to the makeup parameters;
displaying at least one piece of makeup recommendation information contained in the makeup recommendation information set.
A second aspect of the embodiments of the present invention provides a facial-feature-based product recommendation apparatus, comprising:
a first obtaining module, configured to obtain a 2D target image;
a rendering module, configured to render the 2D target image into a 3D target image;
a recognition module, configured to perform face recognition on the 3D target image and extract facial features of the 3D target image;
a second obtaining module, configured to obtain, through a face recognition model, makeup parameters matching the facial features, the face recognition model having been trained on at least one 3D face image bearing makeup;
a searching module, configured to search for a makeup recommendation information set corresponding to the makeup parameters;
a display module, configured to display at least one piece of makeup recommendation information contained in the makeup recommendation information set.
A third aspect of the embodiments of the present invention provides computer equipment, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, performs the following steps:
obtaining a 2D target image;
rendering the 2D target image into a 3D target image;
performing face recognition on the 3D target image, and extracting facial features of the 3D target image;
obtaining, through a face recognition model, makeup parameters matching the facial features, the face recognition model having been trained on at least one 3D face image bearing makeup;
searching for a makeup recommendation information set corresponding to the makeup parameters;
displaying at least one piece of makeup recommendation information contained in the makeup recommendation information set.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the following steps:
obtaining a 2D target image;
rendering the 2D target image into a 3D target image;
performing face recognition on the 3D target image, and extracting facial features of the 3D target image;
obtaining, through a face recognition model, makeup parameters matching the facial features, the face recognition model having been trained on at least one 3D face image bearing makeup;
searching for a makeup recommendation information set corresponding to the makeup parameters;
displaying at least one piece of makeup recommendation information contained in the makeup recommendation information set.
In the embodiments of the present invention, a 2D target image is obtained and rendered into a 3D target image; face recognition is performed on the 3D target image and its facial features are extracted; makeup parameters matching the facial features are obtained through a face recognition model trained on at least one 3D face image bearing makeup; a makeup recommendation information set corresponding to the makeup parameters is searched for; and at least one piece of makeup recommendation information contained in the set is displayed. Because this scheme first converts the 2D target image into a 3D target image, then performs face recognition on the 3D target image, extracts the facial features, obtains makeup parameters matching those features through the face recognition model, and looks up the corresponding makeup recommendation information set for display, the displayed makeup recommendations are better targeted, help the wearer improve the makeup effect, and improve recommendation accuracy, solving the prior-art problem that users choose skincare and/or cosmetic products blindly and therefore choose unsuitable products.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application environment of the facial-feature-based product recommendation method in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the facial-feature-based product recommendation method provided by Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of the facial-feature-based product recommendation apparatus provided by Embodiment 2 of the present invention;
Fig. 4 is a schematic diagram of the edge processing module in the facial-feature-based product recommendation apparatus provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of the second acquisition unit in the facial-feature-based product recommendation apparatus provided by another embodiment of the present invention;
Fig. 6 is a schematic diagram of the computer equipment provided by Embodiment 3 of the present invention.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and techniques are set forth for purposes of illustration rather than limitation, so that the embodiments of the present invention may be thoroughly understood. However, it will be clear to those skilled in the art that the present invention may also be implemented in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary detail does not obscure the description of the invention.
To illustrate the technical solutions of the present invention, specific embodiments are described below.
Embodiment 1
The facial-feature-based product recommendation method provided by the present application can be applied in the application environment of Fig. 1, in which a client communicates with a server over a network. The client may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer or portable wearable device. The server may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, a facial-feature-based product recommendation method is provided. Taking its application to the server in Fig. 1 as an example, the method is described below.
Fig. 2 shows a schematic flowchart of the facial-feature-based product recommendation method provided by Embodiment 1 of the present invention. As shown in Fig. 2, this facial-feature-based product recommendation method specifically comprises the following steps 101 to 107.
101. Obtain a 2D target image.
102. Render the 2D target image into a 3D target image.
For steps 101 and 102, the 2D target image is obtained by a terminal device. It may be an image shot by the terminal device through an imaging device such as a camera, or an image previously stored on the terminal device. When a user requests the display of makeup recommendation information, the terminal device may start the camera to capture an image containing the user's face and use the captured image containing the user's face as the 2D target image. Optionally, the user may also select from images already stored on the terminal device, and the terminal device uses the selected image as the 2D target image. The 2D target image may show a bare face or a face with makeup.
As an embodiment of the present invention, the terminal device may start the camera to capture multiple images containing the user's face and choose the highest-quality image among them as the target image. Optionally, the terminal device may predefine quality factors of an image and set a weight for each quality factor, where the quality factors may include but are not limited to image sharpness, brightness and face size. The terminal device may obtain the value of each quality factor for each captured image, compute a weighted sum of the quality-factor values and their corresponding weights to obtain a quality score for each image, and choose the image with the highest quality score as the target image.
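The weighted quality-score selection described above can be sketched as follows; the factor names, weights and frame values are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: each captured frame gets a weighted sum of its
# quality-factor values, and the frame with the highest score becomes
# the 2D target image.
def quality_score(factors, weights):
    # factors/weights: dicts keyed by quality-factor name
    return sum(factors[name] * weight for name, weight in weights.items())

def pick_target_image(frames, weights):
    # frames: list of (frame_id, factor_dict) pairs
    return max(frames, key=lambda frame: quality_score(frame[1], weights))[0]

# Illustrative weights for sharpness, brightness and face size
WEIGHTS = {"sharpness": 0.5, "brightness": 0.2, "face_size": 0.3}
FRAMES = [
    ("img_a", {"sharpness": 0.9, "brightness": 0.7, "face_size": 0.6}),
    ("img_b", {"sharpness": 0.6, "brightness": 0.9, "face_size": 0.9}),
]
best = pick_target_image(FRAMES, WEIGHTS)
```

Here "img_a" wins because its weighted score (0.77) exceeds that of "img_b" (0.75), even though "img_b" is brighter and shows a larger face.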
Rendering the 2D target image into a 3D target image specifically includes:
201. Divide the face into subregions based on the reflected-intensity attribute value of each point of the face; within each subregion, further divide the subregion into M child partitions based on the curvature attribute value of each point, each child partition containing N points; randomly select S points in each child partition as sample points.
202. Based on the normal vector, irradiance and depth information of each fragment point, compute each fragment point's front/back attribute and radiance value.
203. Compute the Euclidean distances between each sample point and the other points distributed in its child partition and the corresponding backward points, as well as the color value and color gradient magnitude at the sample point.
204. Construct and train a neural-network regression model.
205. Fit the subsurface-scattering effect of each child partition of the face using the trained neural network, obtaining the color value of each point of the face.
For step 202, suppose that in the m-th child partition the point with index i is selected as the sample point, where m ∈ [0, M-1] and i ∈ [0, S-1], and the points distributed in this sample point's child partition serve as fragment points. Computing the front/back attribute and radiance value of the current fragment point then specifically includes:
301. Draw from the viewpoint position, rendering the current fragment point's normal vector, irradiance and depth information in the world coordinate system into three textures, ENormTex, IrraTex and EDepthTex, respectively;
302. Read the normal vector and irradiance information from the two textures ENormTex and IrraTex, and compute the radiance value of the current fragment point;
303. Read the depth information of the current fragment point from EDepthTex via texture sampling; compute the distance between the current fragment point and the viewpoint and compare it with the depth information: if the distance equals the stored depth, the current fragment point is a forward point; if the distance is greater than the stored depth, the current fragment point is a backward point;
304. Record the fragment point's radiance value according to its front/back attribute.
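The front/back classification of step 303 can be illustrated with a small sketch; the function, the tolerance, and the directly-passed depth value (standing in for the EDepthTex lookup) are assumptions for illustration.

```python
import math

# Sketch of step 303: compare a fragment point's distance from the
# viewpoint with the depth stored for it (passed in directly here,
# standing in for the EDepthTex texture lookup).
def classify_fragment(point, viewpoint, stored_depth, eps=1e-6):
    dist = math.dist(point, viewpoint)
    if abs(dist - stored_depth) <= eps:
        return "forward"   # distance equals the stored depth
    if dist > stored_depth:
        return "backward"  # point lies behind the stored depth
    return "closer"        # nearer than the stored depth (not covered above)

front = classify_fragment((0.0, 0.0, 5.0), (0.0, 0.0, 0.0), 5.0)
back = classify_fragment((0.0, 0.0, 8.0), (0.0, 0.0, 0.0), 5.0)
```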
For step 203, when the point with index i in the m-th child partition is selected as the sample point, computing the Euclidean distance, color value and color gradient magnitude between the current fragment point and the sample point specifically includes:
401. Draw from the viewpoint position to generate a texture IRegTex, which contains the index of the child partition to which the current fragment point belongs, the sample-point index, the fragment-point index, and the Euclidean distance between the current fragment point and the sample point;
402. Draw from the viewpoint position to generate a texture LuminTex containing the color value of the current fragment point; apply a gradient operation to the texture LuminTex to obtain a gradient texture Lumin_gradientTex, which contains the color gradient magnitude of the current fragment point;
403. Load the texture EDepthTex and read the depth information of the current fragment point via texture sampling; compute the distance between the current fragment point and the viewpoint and compare it with the depth information; if the distance is greater than the stored depth, the current fragment point is a backward point;
404. If it is a backward point, compute the Euclidean distance between the point and the sample point; otherwise, further determine whether the current fragment point is the sample point: if it is the sample point, read the color value at the point from LuminTex and the color gradient magnitude at the point from Lumin_gradientTex; otherwise the current fragment point is a forward point, so read the Euclidean distance between the current fragment point and the sample point from IRegTex.
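The gradient texture of step 402 is essentially a per-pixel gradient magnitude of the luminance map; a minimal central-difference sketch, using a list of lists as a stand-in for the texture, might look like:

```python
import math

# Sketch of step 402: gradient magnitude of a 2-D luminance map via
# central differences, computed for interior pixels only (borders stay 0).
def gradient_magnitude(lum):
    h, w = len(lum), len(lum[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (lum[y][x + 1] - lum[y][x - 1]) / 2.0
            gy = (lum[y + 1][x] - lum[y - 1][x]) / 2.0
            out[y][x] = math.hypot(gx, gy)
    return out

grad = gradient_magnitude([[0, 0, 0],
                           [0, 0, 4],
                           [0, 0, 0]])
```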
Step 204 specifically includes: constructing a Hermite-interpolation neural network model comprising 1 input layer, 3 hidden layers and 1 output layer; taking as input-layer data the Euclidean distance x_ij between a sample point p_i and a point p_j in its child partition, together with the Euclidean distance y_ij between p_i and the backward point p_{B,j} corresponding to p_j; taking the color value and color gradient magnitude at the sample point as output-layer node data; using the radiance parameter as the weights between the nodes of the input layer and the first hidden layer; and training the model on the sample set.
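As a rough illustration only (not the patent's Hermite-interpolation network), a forward pass through a small fully connected network with the two distance inputs and two outputs described above might look like:

```python
import math

# Illustrative forward pass: x is the input vector (x_ij, y_ij); each
# layer is a (weight-matrix, bias-vector) pair. tanh is an assumed
# activation; the final layer would yield (color value, color gradient).
def forward(x, layers):
    for weights, biases in layers:
        x = [math.tanh(sum(w * v for w, v in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# One toy layer with zero weights and biases maps any input to (0, 0)
zero_layer = ([[0.0, 0.0], [0.0, 0.0]], [0.0, 0.0])
out = forward([0.3, 0.7], [zero_layer])
```

Training (adjusting the weights against the sample set, as step 204 prescribes) is omitted; the sketch only shows how the distance inputs flow to the two outputs.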
103. Perform face recognition on the 3D target image and extract facial features of the 3D target image.
After the terminal device obtains the target image, it may perform face recognition on the target image, determine the face region in the target image, and extract the facial features within that region. The target image is divided into multiple key-point regions such as the forehead, eye area, cheeks, nose wings, lips and chin, and the skin condition is assessed from skin tone, pore size, presence of blemishes, oiliness, wrinkles and the like. Facial features may include facial contour features, skin-tone features and skin-texture features. Facial contour features can describe the layout of the facial organs and the face shape, and can be composed of facial feature points. Skin-tone features can describe the color and brightness of the facial skin as it appears, and may include color information and brightness information of the skin area within the face region. Skin-texture features can describe the state of the facial skin, and may include texture information, edge information and edge strength of the skin area within the face region. Here, texture information refers to the distribution of the skin's texture, such as its coarseness and density; edge information may include pixels in the skin area where abrupt or ridge-like changes occur; and edge strength may refer to the degree of change at the pixels where abrupt or ridge-like changes occur.
As an embodiment of the present invention, after performing face recognition on the target image, the terminal device may first determine the skin area within the face region, then extract facial features such as skin-tone features and skin-texture features from the skin area. Optionally, the terminal device may compute the mean of each YUV color-space component over all pixels contained in the skin area and use these means as the skin-tone feature of the skin area. The YUV color space comprises a luminance signal Y and two chrominance signals, B-Y (i.e. U) and R-Y (i.e. V), where the Y component represents luminance and may be a grayscale value, and U and V represent chrominance and can be used to describe the color and saturation of the image; in the YUV color space the luminance signal Y and the chrominance signals U and V are separate. The terminal device may compute the means of the Y component, U component and V component over all pixels in the skin area and take these means as the skin-tone feature.
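The skin-tone feature can be sketched as a per-component YUV mean. The RGB-to-YUV coefficients below are the common BT.601-style approximation, which is an assumption on my part since the patent does not specify the conversion.

```python
# Sketch: convert each skin pixel to YUV (Y plus scaled B-Y and R-Y
# chrominance signals) and average each component over the skin area.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)  # scaled B-Y
    v = 0.877 * (r - y)  # scaled R-Y
    return y, u, v

def skin_tone_feature(pixels):
    # pixels: iterable of (R, G, B) tuples from the skin area
    yuv = [rgb_to_yuv(*p) for p in pixels]
    n = len(yuv)
    return tuple(sum(px[i] for px in yuv) / n for i in range(3))

feature = skin_tone_feature([(200, 160, 140), (210, 170, 150)])
```

A pure gray pixel has (near-)zero chrominance under this conversion, which is a quick sanity check on the coefficients.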
Optionally, to extract skin-texture features the terminal device may first perform edge detection on the skin area to obtain the edge information and texture information of the skin area, where the edge information may include information such as the positions and orientations of edge pixels. Edge detection may use a variety of edge-detection operators; the specific methods include first-order edge detection, second-order edge detection, and other edge-detection methods. First-order edge detection includes the Roberts cross operator, the Prewitt operator, the Sobel operator and the Canny operator; second-order edge detection includes the Laplacian operator, the Marr-Hildreth operator and the Laplacian of Gaussian operator; other edge-detection methods include the Spacek, Petrou and Susan operators. Analysis based on edge detection is not easily affected by changes in global illumination intensity, and using edge information makes it easy to highlight target information and simplify processing, which is why many image-understanding methods are based on edges. Edge detection emphasizes image contrast; intuitively, contrast is the magnitude of difference, which for a grayscale image is the difference in gray value (brightness value). These differences can enhance the boundary features in an image, because boundaries are exactly where image contrast is large. This is the general mechanism by which we perceive object boundaries: an object manifests itself as a brightness difference against its surroundings. Such brightness changes can be enhanced by differencing adjacent points. Differencing horizontally adjacent points detects brightness changes in the vertical direction and, after its effect, is commonly called a horizontal edge detector, which can thus detect vertical edges; differencing vertically adjacent points detects brightness changes in the horizontal direction and is commonly called a vertical edge detector, which can thus detect horizontal edges. Combining the horizontal and vertical edge detectors allows vertical and horizontal edges to be detected simultaneously.
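The adjacent-point differencing described above can be sketched directly; images are plain lists of lists here, and the function names are my own.

```python
# Differencing horizontally adjacent pixels (the "horizontal edge
# detector" above): responds to brightness changes across columns,
# i.e. vertical edges.
def horizontal_detector(img):
    return [[row[x + 1] - row[x] for x in range(len(row) - 1)] for row in img]

# Differencing vertically adjacent pixels: responds to horizontal edges.
def vertical_detector(img):
    return [[img[y + 1][x] - img[y][x] for x in range(len(img[0]))]
            for y in range(len(img) - 1)]

step = [[0, 0, 9, 9],
        [0, 0, 9, 9]]
h_response = horizontal_detector(step)
```

For this vertical step edge, the horizontal detector fires at the transition column while the vertical detector stays silent, as the text predicts.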
It can be understood from Taylor-series analysis that the difference between two adjacent points is an estimate of the first derivative. If this is realized by inserting a pixel between two adjacent difference points, it amounts to using the first-order difference of the two adjacent points as the new differentiation level; by Taylor-series analysis, the estimate of the first derivative is then the difference between two points separated by one pixel. The Roberts cross operator is based on first-order edge detection: it uses two templates that compute the difference between two pixels along the diagonals rather than along the coordinate axes.
The Prewitt edge-detection operator resembles a differentiation process: the local changes it detects inevitably respond to both noise and genuine brightness changes in the image, so adding averaging to the edge-detection process must be done with great care. For example, the vertical template Mx can be extended to three rows and the horizontal template My to three columns, which yields the Prewitt edge-detection operator. Further, doubling the weight of the center pixels of the two Prewitt templates yields the Sobel edge-detection operator, whose two masks determine edges along the direction of the gradient vector.
The common form of the Sobel operator combines optimal smoothing along one coordinate axis with optimal differencing along the other. It should be noted that the benefit of a large edge-detection template is a better noise-smoothing effect.
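The Prewitt and Sobel vertical-edge templates just described can be written out explicitly, with Sobel doubling the Prewitt center weights; the tiny convolution helper below is only for illustration.

```python
# 3x3 vertical-edge templates: Sobel is Prewitt with the center-row
# weights doubled.
PREWITT_X = [[-1, 0, 1],
             [-1, 0, 1],
             [-1, 0, 1]]
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def apply_3x3(patch, kernel):
    # patch: a 3x3 neighbourhood of grayscale values
    return sum(patch[i][j] * kernel[i][j] for i in range(3) for j in range(3))

edge_patch = [[0, 0, 1],
              [0, 0, 1],
              [0, 0, 1]]
sobel_response = apply_3x3(edge_patch, SOBEL_X)
```

On this step patch the Sobel response is 4 versus Prewitt's 3, the extra unit coming from the doubled center weight.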
The Canny edge-detection operator was formulated around three main goals: optimal detection with no spurious responses; good localization, minimizing the distance between the detected edge position and the actual edge position; and a single response per edge, suppressing multiple responses. The Gaussian operator is optimal for image smoothing. Canny edge detection is generally processed in the following four steps: apply Gaussian smoothing; apply the Sobel operator; apply non-maximum suppression (which essentially finds the highest points in the edge-strength data); and connect edge points with hysteresis thresholding (which requires two thresholds, an upper threshold and a lower threshold).
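The hysteresis-thresholding step can be sketched in one dimension: strong pixels (at or above the upper threshold) seed edges, and weak pixels (between the two thresholds) join only when connected to an edge pixel. This is an illustrative simplification of the 2-D process.

```python
# 1-D hysteresis: seed from strengths >= high, then grow edges along
# adjacent pixels whose strength is >= low.
def hysteresis(strengths, low, high):
    edge = [s >= high for s in strengths]
    changed = True
    while changed:
        changed = False
        for i, s in enumerate(strengths):
            if not edge[i] and s >= low and (
                (i > 0 and edge[i - 1]) or
                (i + 1 < len(edge) and edge[i + 1])
            ):
                edge[i] = True
                changed = True
    return edge

edges = hysteresis([1, 5, 9, 5, 1], low=4, high=8)
```

The lone strong pixel (9) seeds an edge that then spreads to its weak neighbours (the 5s), while the 1s below the lower threshold stay excluded.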
The premise of first-order edge detection is that differentiation enhances change. The places where the image's rate of change is greatest can be found not only at the extrema of the first-order rate of change but also at the zero crossings of the second-order change. The second derivative can be approximated by the difference of two adjacent first derivatives, which is consistent with the mathematical definition. Combining the horizontal and vertical second-order operators yields a full Laplacian template operator. Marr-Hildreth also uses Gaussian filtering; the surface plot of this operator has the shape of a Mexican hat, so it is sometimes called the "Mexican hat" operator. In fact, combining Gaussian smoothing with the Laplacian operator yields the LoG (Laplacian of Gaussian) operator, which is the basis of Marr-Hildreth.
Through edge detection, the terminal device can find the pixels in the skin area whose gray values undergo abrupt or ridge-like changes; such pixels can be identified as the edge pixels of the skin area. After the terminal device acquires information such as the positions and orientations of the edge pixels, it can compute the texture complexity from this information to obtain the skin-texture feature of the skin area. It should be understood that the way of extracting the facial features of the target image is not limited to the above; other methods may also be used to extract the facial features.
104. Obtain, through a face recognition model, makeup parameters matching the facial features, the face recognition model having been trained on at least one 3D face image bearing makeup.
The terminal device may input the extracted facial features of the 3D target image into a preset face recognition model, analyze the extracted facial features through this model, and obtain makeup parameters matching the facial features. In one embodiment, the makeup parameters may include position parameters and makeup parameters corresponding to the position parameters. A position parameter may consist of pixel coordinate values and can indicate the region where makeup is applied. The makeup parameters may include makeup type, makeup color and so on, where the makeup type may include eye makeup, base makeup, lip makeup, contouring, blush and the like; different makeup types can be represented by different characters, for example base makeup by 1, eye makeup by 2, lip makeup by 3, contouring by 4 and blush by 5, but this is not limiting. The makeup color can be represented by RGB (red, green, blue) values.
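An illustrative encoding of one makeup parameter as described above; all names, coordinates and colour values here are hypothetical.

```python
# Makeup-type codes as enumerated above: 1 base, 2 eye, 3 lip,
# 4 contouring, 5 blush.
MAKEUP_TYPES = {1: "base", 2: "eye", 3: "lip", 4: "contouring", 5: "blush"}

def dressing_param(region_pixels, type_code, rgb):
    # region_pixels: pixel coordinates of the made-up region (position
    # parameter); rgb: makeup colour as an (R, G, B) tuple
    if type_code not in MAKEUP_TYPES:
        raise ValueError("unknown makeup type code")
    return {"region": region_pixels,
            "type": MAKEUP_TYPES[type_code],
            "color": rgb}

lip = dressing_param([(120, 88), (140, 96)], 3, (176, 58, 62))
```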
As an embodiment of the present invention, the terminal device may first construct the face recognition model through machine learning: a large number of 3D face images bearing makeup can be collected and input as samples into the face recognition model, which trains on the input samples and gradually establishes the correspondence between makeup parameters and the facial features in the face images. It should be noted that for the 3D face images bearing makeup, the matching degree between the makeup and the 3D face image is above a preset threshold (for example, 0.98). Optionally, the samples that the terminal device inputs to the model may carry makeup auxiliary labels: each input sample is labeled with auxiliary makeup information such as the made-up positions, the makeup type and the makeup color. The face recognition model can obtain each sample's makeup parameters from these auxiliary labels, extract each sample's facial features, and learn from the obtained makeup parameters and facial features. Optionally, the samples that the terminal device inputs to the face recognition model may also carry no makeup auxiliary labels; the model can then learn from the input samples on its own and extract the makeup parameters, thereby establishing the correspondence between makeup parameters and the facial features in the 3D target image. In this embodiment, because the face images are 3D images, the makeup parameters of all parts of the face can be represented completely, compared with 2D face images.
As an embodiment of the present invention, besides obtaining matching makeup parameters from the facial features of the face in the target image, the terminal device may also obtain makeup parameters from other features. For example, the terminal device may obtain the current season and obtain matching makeup parameters from the current season together with the facial features. For example, winter may suit heavier makeup, so the makeup color in the makeup parameters may correspond to a deeper color value, while spring suits lighter makeup, so the makeup color in the makeup parameters may correspond to a lighter color value. The terminal device may also extract the clothing features of the person in the target image and obtain matching makeup parameters from the clothing features together with the facial features; for example, if the person in the target image wears a dark coat, the makeup color in the makeup parameters may correspond to a deeper color value.
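The seasonal adjustment reads like a simple scaling of the recommended colour's depth. A hedged sketch follows, with the scaling factors chosen arbitrarily for illustration, not taken from the patent.

```python
# Darken the makeup colour for winter, lighten it for spring; the
# factors are illustrative assumptions.
SEASON_FACTOR = {"winter": 0.85, "spring": 1.15}

def adjust_color_for_season(rgb, season):
    factor = SEASON_FACTOR.get(season, 1.0)  # other seasons unchanged
    return tuple(min(255, round(c * factor)) for c in rgb)

winter_color = adjust_color_for_season((100, 100, 100), "winter")
spring_color = adjust_color_for_season((100, 100, 100), "spring")
```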
105. Search for a makeup recommendation information set corresponding to the makeup parameters.
106. Display at least one piece of makeup recommendation information contained in the makeup recommendation information set.
For steps 105 and 106, the terminal device may search a database for the makeup recommendation information set corresponding to the makeup parameters. The makeup recommendation information set may contain one or more pieces of makeup recommendation information, which may include but are not limited to cosmetic product information, makeup step information and makeup precautions. Cosmetic product information may include product type, name, brand, model, color number, capacity, price and so on; for example, product type lipstick, brand A, color number 213, price 220. Makeup step information may include the sequence of steps to follow when applying makeup, the techniques used when applying makeup, and so on. For example: step one, dot sunscreen over the whole face, then press it in with the palm; step two, squeeze out an appropriate amount of BB cream and spread it on the cheeks from the inside outward; step three, dab eye concealer lightly under the eyes with a professional eye concealer and pat it in seamlessly with a piano-playing finger motion; but this is not limiting.
The terminal device may display at least one piece of makeup recommendation information included in the makeup recommendation information set corresponding to the dressing parameter. Optionally, the terminal device may use a preset display mode, which may include, but is not limited to, text, combined text and pictures, audio, video, and so on. Different pieces of makeup recommendation information may use different display modes; for example, cosmetic product information may be displayed as text or as text with pictures, while makeup step data may be displayed as text with pictures, audio, or video, but the modes are not limited thereto. By displaying the makeup recommendation data corresponding to the dressing parameter, the displayed makeup recommendation information fits the facial features and is more targeted.
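The lookup-and-display flow of steps 105 and 106 could be sketched as follows; the in-memory dictionary stands in for the database described above, and all keys and field names are hypothetical:

```python
# Hypothetical stand-in for the database of step 105, keyed by dressing parameter.
MAKEUP_DB = {
    "light_spring": [
        {"kind": "product", "type": "lipstick", "brand": "A",
         "shade": "213", "price": 220},
        {"kind": "steps", "text": "1) dot sunscreen over the whole face; ..."},
    ],
}

def find_recommendations(dressing_param):
    """Step 105: return the recommendation set for a dressing parameter, or []."""
    return MAKEUP_DB.get(dressing_param, [])

def show(recommendations, mode="text"):
    """Step 106: render each item in the chosen display mode.

    Only the text mode is implemented in this sketch; text-with-pictures,
    audio, and video modes are left out.
    """
    return [str(item) for item in recommendations] if mode == "text" else []
```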
As an embodiment of the present invention, in order to better capture facial detail, optionally, before face recognition is performed on the 3D target image and the facial features of the target image are extracted, the method further includes:
107: determining the fill-light orientation of the 3D target image through a luminance compensation model;
108: determining the best fill-light value for that orientation.
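A minimal sketch of step 108, assuming the best fill-light value is chosen by searching candidate values for the one whose compensated brightness is closest to a target level; the target of 0.5 and the `brightness_of` callable are hypothetical, as the disclosure does not specify the selection criterion:

```python
def best_fill_light(candidate_values, brightness_of, target=0.5):
    """Pick the fill-light value whose resulting brightness is closest to target.

    `brightness_of` is a hypothetical callable that simulates the compensated
    brightness of the 3D target image for a given fill-light value.
    """
    return min(candidate_values, key=lambda v: abs(brightness_of(v) - target))
```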
The training method of the luminance compensation model includes: extracting, based on the luminance compensation model, an albedo image, a surface normal image, and illumination features of the face region from a training image; generating an illumination restoration image based on the albedo image, the surface normal image, and the illumination features; and training the luminance compensation model based on the training image and the illumination restoration image.
Specifically, the training device extracts various feature maps from the training image based on the luminance compensation model, and generates a luminance restoration image from the feature maps based on a Lambertian model. The training device may extract the albedo image, surface normal image, and brightness of the face region appearing in the training image, and may generate the luminance restoration image from the albedo image, surface normal image, and brightness based on an autoencoder-style deformation model.
The training device may determine the loss function of the luminance compensation model based on the difference between the training image and the luminance restoration image, and update the parameters of the luminance compensation model based on the loss function, adjusting them so as to reduce that difference. In such an example, the loss function represents a function used to define the error between a desired value and the estimated value output from the luminance compensation model.
The training device may also input the albedo image and surface normal image extracted by the luminance compensation model into the face recognition model, and determine the loss function of the face recognition model based on the difference between the desired value corresponding to the input image and the estimated value output from the face recognition model. For example, the face recognition model may output an ID value based on the input albedo image and surface normal image. The training device may determine the loss function based on the difference between the ID value output from the face recognition model and a desired ID value (for example, a ground-truth ID), and update the parameters of the face recognition model based on the loss function.
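The Lambertian restoration and the reconstruction loss described above can be sketched as follows. This is a simplified stand-in assuming a single directional light, not the disclosure's actual model architecture:

```python
import numpy as np

def lambertian_restore(albedo, normals, light_dir):
    """Lambertian shading: pixel = albedo * max(0, n . l).

    Stands in for 'generate illumination restoration image'. `albedo` is
    (H, W, 3), `normals` is (H, W, 3) unit vectors, `light_dir` is (3,).
    """
    shading = np.clip(normals @ light_dir, 0.0, None)  # (H, W)
    return albedo * shading[..., None]                 # (H, W, 3)

def reconstruction_loss(training_image, restored_image):
    """Mean squared difference between the training image and the
    illumination-restored image, used as the compensation model's loss."""
    diff = training_image.astype(float) - restored_image.astype(float)
    return float(np.mean(diff ** 2))
```

With normals facing the light, the restoration reproduces the albedo exactly and the loss is zero; training would then adjust the model parameters to drive this loss down on real images.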
As an embodiment of the present invention, after the terminal device obtains the makeup step data corresponding to the dressing parameter, it may obtain audio data and/or video data corresponding to the makeup step data from the database and display them. This makes the recommended makeup steps and techniques more intuitive, helps the user apply makeup while following the displayed audio and/or video, and improves makeup efficiency.
In the embodiments of the present invention, the 2D target image is first converted into a 3D target image; face recognition is then performed on the 3D target image to extract the facial features of the target image; a dressing parameter matching the facial features is obtained through the face recognition model; a makeup recommendation information set corresponding to the dressing parameter is searched for; and at least one piece of makeup recommendation information in the set is displayed. The displayed makeup recommendation information is more targeted, helps the wearer improve the makeup effect, and improves the accuracy of recommendation, solving the prior-art problem that users choose skincare and/or cosmetic products blindly and end up with unsuitable products.
It should be understood that the serial numbers of the steps in the above embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
Embodiment Two
Referring to FIG. 3, it shows a schematic diagram of the product recommendation device based on facial features provided by Embodiment Two of the present invention. The device comprises: a first obtaining module 31, a rendering module 32, an identification module 33, a second obtaining module 34, a searching module 35, and a display module 36. The specific functions of each module are as follows:
a first obtaining module 31, for obtaining a 2D target image;
a rendering module 32, for rendering the 2D target image into a 3D target image;
an identification module 33, for performing face recognition on the 3D target image and extracting facial features of the 3D target image;
a second obtaining module 34, for obtaining, through the face recognition model, a dressing parameter matching the facial features, the face recognition model being trained from at least one 3D face image with makeup;
a searching module 35, for searching for a makeup recommendation information set corresponding to the dressing parameter;
a display module 36, for displaying at least one piece of makeup recommendation information included in the makeup recommendation information set.
Optionally, as shown in FIG. 4, the rendering module 32 includes:
a first division unit 321, for dividing the face surface into multiple partitions based on the reflected-intensity attribute value of each point on the face surface;
a second division unit 322, for dividing each partition into multiple sub-partitions based on the curvature attribute value of each point in the partition, the sub-partitions containing fragment points;
a selection unit 323, for randomly selecting a preset number of fragment points in each sub-partition as sample fragment points;
a first computing unit 324, for calculating, based on the normal vector, irradiance, and depth information of a sample fragment point, the front/back attribute and radiance value of the sample fragment point, the front/back attribute indicating whether the current fragment point is a front-facing point or a back-facing point;
a second computing unit 325, for calculating the Euclidean distance between each sample point and its corresponding back-facing point, and the color value at the sample point;
a construction unit 326, for constructing an initial neural network regression model and training it based on the sample fragment points, their color values, and the Euclidean distances between the sample fragment points and their corresponding back-facing points, to obtain a trained neural network regression model;
a fitting unit 327, for fitting the subsurface-scattering effect of each sub-partition of the face surface using the trained neural network, to obtain the color value of each point on the face surface.
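The two-stage division performed by the first and second division units could be sketched as follows; the bin counts and the assumption that both attributes lie in [0, 1) are illustrative, not from the disclosure:

```python
def partition_face(points, n_refl_bins=4, n_curv_bins=4):
    """Two-stage partition of face-surface points.

    First bucket each point by its reflected-intensity attribute, then bucket
    each partition by the curvature attribute, yielding sub-partitions keyed
    by (reflectance_bin, curvature_bin). `points` is a list of dicts with
    'reflectance' and 'curvature' values assumed to lie in [0, 1).
    """
    partitions = {}
    for p in points:
        r_bin = min(int(p["reflectance"] * n_refl_bins), n_refl_bins - 1)
        c_bin = min(int(p["curvature"] * n_curv_bins), n_curv_bins - 1)
        partitions.setdefault((r_bin, c_bin), []).append(p)
    return partitions
```

Each resulting sub-partition is then the pool from which the selection unit randomly draws its sample fragment points.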
Optionally, as shown in FIG. 5, the first computing unit 324 includes:
a drawing subunit 3241, for drawing from the viewpoint position and rendering the normal vector, irradiance, and depth information of the current fragment point under the world coordinate system into three textures, ENormTex, IrraTex, and EDepthTex, respectively;
a first computation subunit 3242, for obtaining the normal vector from the texture ENormTex and the irradiance information from IrraTex, and calculating the radiance value of the current sample fragment point;
a reading subunit 3243, for reading the depth information of the current fragment point from EDepthTex through texture mapping;
a second computation subunit 3244, for calculating the distance between the current fragment point and the viewpoint;
a first determining subunit 3245, for determining that the current fragment point is a front-facing point if that distance is less than or equal to the depth distance;
a second determining subunit 3246, for determining that the current fragment point is a back-facing point if that distance is greater than the depth distance;
a recording subunit 3247, for recording the radiance value according to the front/back attribute of the current fragment point.
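The front/back classification and per-side radiance bookkeeping performed by subunits 3244 through 3247 can be sketched as follows; the field names and the epsilon tolerance are assumptions made for the sketch:

```python
def classify_fragment(frag_dist, depth_map_value, eps=1e-5):
    """A fragment whose distance to the viewpoint is <= the depth recorded in
    EDepthTex is front-facing; otherwise it is a back-facing point."""
    return "front" if frag_dist <= depth_map_value + eps else "back"

def record_radiance(fragments):
    """Record radiance values separately per front/back attribute.

    Each fragment dict carries 'dist' (distance to the viewpoint), 'depth'
    (the value read back from EDepthTex), and 'radiance'.
    """
    record = {"front": [], "back": []}
    for frag in fragments:
        side = classify_fragment(frag["dist"], frag["depth"])
        record[side].append(frag["radiance"])
    return record
```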
Optionally, the product recommendation device based on facial features further includes:
a generation subunit, for generating a texture LuminTex that contains the color value of the current fragment point;
an operation subunit, for performing a gradient operation on the texture map LuminTex to obtain a gradient texture Lumin_gradientTex that contains the color-gradient magnitude of the current fragment point.
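The gradient operation on LuminTex could be sketched with a finite-difference gradient; the actual gradient operator used by the operation subunit is not specified in the disclosure, so `numpy.gradient` here is an assumption:

```python
import numpy as np

def luminance_gradient(lumin_tex):
    """Per-texel color-gradient magnitude of a single-channel LuminTex map,
    i.e. the content stored in Lumin_gradientTex in this embodiment."""
    gy, gx = np.gradient(lumin_tex.astype(float))  # row and column derivatives
    return np.hypot(gx, gy)                        # gradient magnitude
```

A flat texture yields zero gradient everywhere, while a horizontal ramp yields a unit gradient magnitude.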
Optionally, the product recommendation device based on facial features further includes:
a first determining module, for determining the fill-light orientation of the 3D target image through the luminance compensation model;
a second determining module, for determining the best fill-light value for that orientation.
The product recommendation device based on facial features provided by this embodiment of the present invention obtains a 2D target image, renders it into a 3D target image, performs face recognition on the 3D target image, extracts the facial features of the 3D target image, obtains a dressing parameter matching the facial features through the face recognition model (which is trained from at least one 3D face image with makeup), searches for the makeup recommendation information set corresponding to the dressing parameter, and displays at least one piece of makeup recommendation information included in the set. The displayed makeup recommendation information is therefore more targeted, helps the wearer improve the makeup effect, and improves the accuracy of recommendation, solving the prior-art problem that users choose skincare and/or cosmetic products blindly and end up with unsuitable products.
Embodiment Three
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 6. The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database; the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores the data involved in the product recommendation method based on facial features. The network interface of the computer device communicates with external terminals through a network connection. When executed by the processor, the computer program implements a product recommendation method based on facial features.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor. When executing the computer program, the processor implements the steps of the product recommendation method based on facial features in the above embodiments, such as steps 101 to 106 shown in FIG. 2; or, when executing the computer program, the processor implements the functions of the modules/units of the product recommendation device in the above embodiments, such as the functions of modules 31 to 36 shown in FIG. 3. To avoid repetition, details are not repeated here.
In one embodiment, a computer-readable storage medium is provided, storing a computer program. When executed by a processor, the computer program implements the steps of the product recommendation method based on facial features in the above embodiments, such as steps 101 to 106 shown in FIG. 2; or, when executed by a processor, the computer program implements the functions of the modules/units of the product recommendation device based on facial features in the above embodiments, such as the functions of modules 31 to 36 shown in FIG. 3. To avoid repetition, details are not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be completed by instructing relevant hardware through a computer program. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It is apparent to those skilled in the art that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example. In practical applications, the above functions can be allocated to different functional units and modules as needed; that is, the internal structure of the device can be divided into different functional units or modules to complete all or part of the functions described above.
The embodiments described above are merely illustrative of the technical solutions of the present invention, rather than limiting them. Although the present invention has been explained in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be included within the protection scope of the present invention.
Claims (10)
1. A product recommendation method based on facial features, characterized by comprising:
obtaining a 2D target image;
rendering the 2D target image into a 3D target image;
performing face recognition on the 3D target image, and extracting facial features of the 3D target image;
obtaining, through a face recognition model, a dressing parameter matching the facial features, the face recognition model being trained from at least one 3D face image with makeup;
searching for a makeup recommendation information set corresponding to the dressing parameter;
displaying at least one piece of makeup recommendation information included in the makeup recommendation information set.
2. The product recommendation method based on facial features according to claim 1, characterized in that rendering the 2D target image into a 3D target image comprises:
dividing the face surface into multiple partitions based on the reflected-intensity attribute value of each point on the face surface;
dividing each partition into multiple sub-partitions based on the curvature attribute value of each point in the partition, the sub-partitions containing fragment points;
randomly selecting a preset number of fragment points in each sub-partition as sample fragment points;
calculating, based on the normal vector, irradiance, and depth information of a sample fragment point, the front/back attribute and radiance value of the sample fragment point, the front/back attribute indicating whether the current fragment point is a front-facing point or a back-facing point;
calculating the Euclidean distance between each sample point and its corresponding back-facing point, and the color value at the sample point;
constructing an initial neural network regression model, and training it based on the sample fragment points, their color values, and the Euclidean distances between the sample fragment points and their corresponding back-facing points, to obtain a trained neural network regression model;
fitting the subsurface-scattering effect of each sub-partition of the face surface using the trained neural network, to obtain the color value of each point on the face surface.
3. The product recommendation method based on facial features according to claim 2, characterized in that calculating the front/back attribute and radiance value of a sample fragment point based on its normal vector, irradiance, and depth information comprises:
drawing from the viewpoint position, and rendering the normal vector, irradiance, and depth information of the current fragment point under the world coordinate system into three textures, ENormTex, IrraTex, and EDepthTex, respectively;
obtaining the normal vector from the texture ENormTex and the irradiance information from IrraTex, and calculating the radiance value of the current sample fragment point;
reading the depth information of the current fragment point from EDepthTex through texture mapping;
calculating the distance between the current fragment point and the viewpoint;
if the distance is less than or equal to the depth distance, determining that the current fragment point is a front-facing point;
if the distance is greater than the depth distance, determining that the current fragment point is a back-facing point;
recording the radiance value according to the front/back attribute of the current fragment point.
4. The product recommendation method based on facial features according to claim 1, further comprising, after the drawing from the viewpoint position:
drawing from the viewpoint position to generate a texture LuminTex, the LuminTex containing the color value of the current fragment point;
performing a gradient operation on the texture map LuminTex to obtain a gradient texture Lumin_gradientTex, the Lumin_gradientTex containing the color-gradient magnitude of the current fragment point.
5. The product recommendation method based on facial features according to claim 1, further comprising, before performing face recognition on the 3D target image and extracting the facial features of the target image:
determining the fill-light orientation of the 3D target image through a luminance compensation model;
determining the best fill-light value for that orientation.
6. A product recommendation device based on facial features, characterized by comprising:
a first obtaining module, for obtaining a 2D target image;
a rendering module, for rendering the 2D target image into a 3D target image;
an identification module, for performing face recognition on the 3D target image and extracting facial features of the 3D target image;
a second obtaining module, for obtaining, through a face recognition model, a dressing parameter matching the facial features, the face recognition model being trained from at least one 3D face image with makeup;
a searching module, for searching for a makeup recommendation information set corresponding to the dressing parameter;
a display module, for displaying at least one piece of makeup recommendation information included in the makeup recommendation information set.
7. The product recommendation device based on facial features according to claim 6, characterized in that the rendering module comprises:
a first division unit, for dividing the face surface into multiple partitions based on the reflected-intensity attribute value of each point on the face surface;
a second division unit, for dividing each partition into multiple sub-partitions based on the curvature attribute value of each point in the partition, the sub-partitions containing fragment points;
a selection unit, for randomly selecting a preset number of fragment points in each sub-partition as sample fragment points;
a first computing unit, for calculating, based on the normal vector, irradiance, and depth information of a sample fragment point, the front/back attribute and radiance value of the sample fragment point, the front/back attribute indicating whether the current fragment point is a front-facing point or a back-facing point;
a second computing unit, for calculating the Euclidean distance between each sample point and its corresponding back-facing point, and the color value at the sample point;
a construction unit, for constructing an initial neural network regression model and training it based on the sample fragment points, their color values, and the Euclidean distances between the sample fragment points and their corresponding back-facing points, to obtain a trained neural network regression model;
a fitting unit, for fitting the subsurface-scattering effect of each sub-partition of the face surface using the trained neural network, to obtain the color value of each point on the face surface.
8. The product recommendation device based on facial features according to claim 7, characterized in that the first computing unit comprises:
a drawing subunit, for drawing from the viewpoint position and rendering the normal vector, irradiance, and depth information of the current fragment point under the world coordinate system into three textures, ENormTex, IrraTex, and EDepthTex, respectively;
a first computation subunit, for obtaining the normal vector from the texture ENormTex and the irradiance information from IrraTex, and calculating the radiance value of the current sample fragment point;
a reading subunit, for reading the depth information of the current fragment point from EDepthTex through texture mapping;
a second computation subunit, for calculating the distance between the current fragment point and the viewpoint;
a first determining subunit, for determining that the current fragment point is a front-facing point if the distance is less than or equal to the depth distance;
a second determining subunit, for determining that the current fragment point is a back-facing point if the distance is greater than the depth distance;
a recording subunit, for recording the radiance value according to the front/back attribute of the current fragment point.
9. A computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910048289.7A CN109784281A (en) | 2019-01-18 | 2019-01-18 | Products Show method, apparatus and computer equipment based on face characteristic |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109784281A true CN109784281A (en) | 2019-05-21 |
Family
ID=66501652
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910048289.7A Pending CN109784281A (en) | 2019-01-18 | 2019-01-18 | Products Show method, apparatus and computer equipment based on face characteristic |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109784281A (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110245590A (en) * | 2019-05-29 | 2019-09-17 | 广东技术师范大学 | A kind of Products Show method and system based on skin image detection |
CN110458810A (en) * | 2019-07-19 | 2019-11-15 | 苏宁易购集团股份有限公司 | A kind of more classification and Detection method and devices of skin quality based on Face datection |
CN111383054A (en) * | 2020-03-10 | 2020-07-07 | 中国联合网络通信集团有限公司 | Advertisement checking method and device |
CN111444979A (en) * | 2020-04-07 | 2020-07-24 | 深圳小佳科技有限公司 | Face-lifting scheme recommendation method, cloud device and storage medium |
CN111815533A (en) * | 2020-07-14 | 2020-10-23 | 厦门美图之家科技有限公司 | Dressing method, device, electronic apparatus, and readable storage medium |
CN111859122A (en) * | 2020-06-30 | 2020-10-30 | 北京百度网讯科技有限公司 | Method and device for recommending medical and cosmetic products, electronic equipment and readable storage medium |
CN111968248A (en) * | 2020-08-11 | 2020-11-20 | 深圳追一科技有限公司 | Intelligent makeup method and device based on virtual image, electronic equipment and storage medium |
CN112102543A (en) * | 2019-05-31 | 2020-12-18 | 杭州海康威视数字技术股份有限公司 | Security check system and method |
CN112749634A (en) * | 2020-12-28 | 2021-05-04 | 广州星际悦动股份有限公司 | Control method and device based on beauty equipment and electronic equipment |
CN112800884A (en) * | 2021-01-15 | 2021-05-14 | 深圳市鑫海创达科技有限公司 | Intelligent auxiliary method based on cosmetic mirror |
CN112906529A (en) * | 2021-02-05 | 2021-06-04 | 深圳前海微众银行股份有限公司 | Face recognition light supplementing method and device, face recognition equipment and face recognition system |
CN113298593A (en) * | 2020-07-16 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Commodity recommendation and image detection method, commodity recommendation and image detection device, commodity recommendation and image detection equipment and storage medium |
CN113837017A (en) * | 2021-08-31 | 2021-12-24 | 北京新氧科技有限公司 | Cosmetic progress detection method, device, equipment and storage medium |
CN114119154A (en) * | 2021-11-25 | 2022-03-01 | 北京百度网讯科技有限公司 | Virtual makeup method and device |
CN114463217A (en) * | 2022-02-08 | 2022-05-10 | 口碑(上海)信息技术有限公司 | Image processing method and device |
CN115577183A (en) * | 2022-11-09 | 2023-01-06 | 网娱互动科技(北京)股份有限公司 | Cosmetic scheme recommendation method and system |
WO2023061429A1 (en) * | 2021-10-14 | 2023-04-20 | 北京字节跳动网络技术有限公司 | Method and apparatus for determining article acting on face, and device and medium |
CN116797864A (en) * | 2023-04-14 | 2023-09-22 | 东莞莱姆森科技建材有限公司 | Auxiliary cosmetic method, device, equipment and storage medium based on intelligent mirror |
CN117197541A (en) * | 2023-08-17 | 2023-12-08 | 广州兴趣岛信息科技有限公司 | User classification method and system based on convolutional neural network |
CN117495664A (en) * | 2023-12-25 | 2024-02-02 | 成都白泽智汇科技有限公司 | Intelligent auxiliary cosmetic system |
CN112906529B (en) * | 2021-02-05 | 2024-06-04 | 深圳前海微众银行股份有限公司 | Face recognition light supplementing method, device, face recognition equipment and system thereof |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446768A (en) * | 2015-08-10 | 2017-02-22 | 三星电子株式会社 | Method and apparatus for face recognition |
CN106530383A (en) * | 2016-11-01 | 2017-03-22 | 河海大学 | Human face rendering method based on Hermite interpolation neural network regression model |
CN107123027A (en) * | 2017-04-28 | 2017-09-01 | 广东工业大学 | A kind of cosmetics based on deep learning recommend method and system |
CN108229415A (en) * | 2018-01-17 | 2018-06-29 | 广东欧珀移动通信有限公司 | Information recommendation method, device, electronic equipment and computer readable storage medium |
CN108898068A (en) * | 2018-06-06 | 2018-11-27 | 腾讯科技(深圳)有限公司 | A kind for the treatment of method and apparatus and computer readable storage medium of facial image |
CN109117801A (en) * | 2018-08-20 | 2019-01-01 | 深圳壹账通智能科技有限公司 | Method, apparatus, terminal and the computer readable storage medium of recognition of face |
CN109191569A (en) * | 2018-09-29 | 2019-01-11 | 深圳阜时科技有限公司 | A kind of simulation cosmetic device, simulation cosmetic method and equipment |
US20190014884A1 (en) * | 2017-07-13 | 2019-01-17 | Shiseido Americas Corporation | Systems and Methods for Virtual Facial Makeup Removal and Simulation, Fast Facial Detection and Landmark Tracking, Reduction in Input Video Lag and Shaking, and a Method for Recommending Makeup |
-
2019
- 2019-01-18 CN CN201910048289.7A patent/CN109784281A/en active Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446768A (en) * | 2015-08-10 | 2017-02-22 | 三星电子株式会社 | Method and apparatus for face recognition |
CN106530383A (en) * | 2016-11-01 | 2017-03-22 | 河海大学 | Human face rendering method based on Hermite interpolation neural network regression model |
CN107123027A (en) * | 2017-04-28 | 2017-09-01 | 广东工业大学 | Deep-learning-based cosmetics recommendation method and system |
US20190014884A1 (en) * | 2017-07-13 | 2019-01-17 | Shiseido Americas Corporation | Systems and Methods for Virtual Facial Makeup Removal and Simulation, Fast Facial Detection and Landmark Tracking, Reduction in Input Video Lag and Shaking, and a Method for Recommending Makeup |
CN108229415A (en) * | 2018-01-17 | 2018-06-29 | 广东欧珀移动通信有限公司 | Information recommendation method, device, electronic equipment, and computer-readable storage medium |
CN108898068A (en) * | 2018-06-06 | 2018-11-27 | 腾讯科技(深圳)有限公司 | Facial image processing method and apparatus, and computer-readable storage medium |
CN109117801A (en) * | 2018-08-20 | 2019-01-01 | 深圳壹账通智能科技有限公司 | Face recognition method, apparatus, terminal, and computer-readable storage medium |
CN109191569A (en) * | 2018-09-29 | 2019-01-11 | 深圳阜时科技有限公司 | Makeup simulation apparatus, makeup simulation method, and device |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110245590B (en) * | 2019-05-29 | 2023-04-28 | 广东技术师范大学 | Product recommendation method and system based on skin image detection |
CN110245590A (en) * | 2019-05-29 | 2019-09-17 | 广东技术师范大学 | Product recommendation method and system based on skin image detection |
CN112102543A (en) * | 2019-05-31 | 2020-12-18 | 杭州海康威视数字技术股份有限公司 | Security check system and method |
CN110458810A (en) * | 2019-07-19 | 2019-11-15 | 苏宁易购集团股份有限公司 | Multi-class skin quality detection method and device based on face detection |
CN111383054A (en) * | 2020-03-10 | 2020-07-07 | 中国联合网络通信集团有限公司 | Advertisement checking method and device |
CN111444979A (en) * | 2020-04-07 | 2020-07-24 | 深圳小佳科技有限公司 | Face-lifting scheme recommendation method, cloud device and storage medium |
CN111859122B (en) * | 2020-06-30 | 2024-06-11 | 北京百度网讯科技有限公司 | Method, device, electronic equipment and readable storage medium for recommending medical and aesthetic products |
CN111859122A (en) * | 2020-06-30 | 2020-10-30 | 北京百度网讯科技有限公司 | Method and device for recommending medical and cosmetic products, electronic equipment and readable storage medium |
CN111815533B (en) * | 2020-07-14 | 2024-01-19 | 厦门美图之家科技有限公司 | Dressing processing method, device, electronic equipment and readable storage medium |
CN111815533A (en) * | 2020-07-14 | 2020-10-23 | 厦门美图之家科技有限公司 | Dressing method, device, electronic apparatus, and readable storage medium |
CN113298593A (en) * | 2020-07-16 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Commodity recommendation and image detection method, commodity recommendation and image detection device, commodity recommendation and image detection equipment and storage medium |
CN111968248A (en) * | 2020-08-11 | 2020-11-20 | 深圳追一科技有限公司 | Intelligent makeup method and device based on virtual image, electronic equipment and storage medium |
CN112749634A (en) * | 2020-12-28 | 2021-05-04 | 广州星际悦动股份有限公司 | Control method and device based on beauty equipment and electronic equipment |
CN112800884A (en) * | 2021-01-15 | 2021-05-14 | 深圳市鑫海创达科技有限公司 | Intelligent auxiliary method based on cosmetic mirror |
CN112800884B (en) * | 2021-01-15 | 2024-04-30 | 深圳市鑫海创达科技有限公司 | Intelligent auxiliary method based on cosmetic mirror |
CN112906529B (en) * | 2021-02-05 | 2024-06-04 | 深圳前海微众银行股份有限公司 | Face recognition light supplementing method, device, face recognition equipment and system thereof |
CN112906529A (en) * | 2021-02-05 | 2021-06-04 | 深圳前海微众银行股份有限公司 | Face recognition light supplementing method and device, face recognition equipment and face recognition system |
CN113837017A (en) * | 2021-08-31 | 2021-12-24 | 北京新氧科技有限公司 | Cosmetic progress detection method, device, equipment and storage medium |
WO2023061429A1 (en) * | 2021-10-14 | 2023-04-20 | 北京字节跳动网络技术有限公司 | Method and apparatus for determining article acting on face, and device and medium |
CN114119154A (en) * | 2021-11-25 | 2022-03-01 | 北京百度网讯科技有限公司 | Virtual makeup method and device |
CN114463217A (en) * | 2022-02-08 | 2022-05-10 | 口碑(上海)信息技术有限公司 | Image processing method and device |
CN115577183A (en) * | 2022-11-09 | 2023-01-06 | 网娱互动科技(北京)股份有限公司 | Cosmetic scheme recommendation method and system |
CN116797864B (en) * | 2023-04-14 | 2024-03-19 | 东莞莱姆森科技建材有限公司 | Auxiliary cosmetic method, device, equipment and storage medium based on intelligent mirror |
CN116797864A (en) * | 2023-04-14 | 2023-09-22 | 东莞莱姆森科技建材有限公司 | Auxiliary cosmetic method, device, equipment and storage medium based on intelligent mirror |
CN117197541B (en) * | 2023-08-17 | 2024-04-30 | 广州兴趣岛信息科技有限公司 | User classification method and system based on convolutional neural network |
CN117197541A (en) * | 2023-08-17 | 2023-12-08 | 广州兴趣岛信息科技有限公司 | User classification method and system based on convolutional neural network |
CN117495664A (en) * | 2023-12-25 | 2024-02-02 | 成都白泽智汇科技有限公司 | Intelligent auxiliary cosmetic system |
CN117495664B (en) * | 2023-12-25 | 2024-04-09 | 成都白泽智汇科技有限公司 | Intelligent auxiliary cosmetic system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109784281A (en) | Products Show method, apparatus and computer equipment based on face characteristic | |
JP7200139B2 (en) | Virtual face makeup removal, fast face detection and landmark tracking | |
Tewari et al. | Fml: Face model learning from videos | |
US11450075B2 (en) | Virtually trying cloths on realistic body model of user | |
US8908904B2 (en) | Method and system for make-up simulation on portable devices having digital cameras | |
CN109690617A (en) | System and method for digital vanity mirror | |
CN109840825A (en) | Recommender system based on user physical features | |
CN105210110A (en) | Makeup assistance device, makeup assistance system, makeup assistance method, and makeup assistance program | |
US20120309520A1 (en) | Generation of avatar reflecting player appearance | |
EP3335195A2 (en) | Methods of generating personalized 3d head models or 3d body models | |
TW202234341A (en) | Image processing method and device, electronic equipment and storage medium | |
CN101779218A (en) | Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program | |
JP2010507854A (en) | Method and apparatus for virtual simulation of video image sequence | |
WO2018189802A1 (en) | Image processing device, image processing method, and program | |
CN111767817B (en) | Clothing matching method and device, electronic equipment and storage medium | |
CN104794693A (en) | Human image optimization method capable of automatically detecting mask in human face key areas | |
Mould et al. | Developing and applying a benchmark for evaluating image stylization | |
CN108874145A (en) | Image processing method, computing device, and storage medium | |
Jampour et al. | Face inpainting based on high-level facial attributes | |
KR102430740B1 (en) | Apparatus and method for developing style analysis model based on data augmentation | |
CN107692701A (en) | Display method and device for an intelligent mirror | |
CN114155569B (en) | Cosmetic progress detection method, device, equipment and storage medium | |
Koshy et al. | A complexion based outfit color recommender using neural networks | |
AU2021101766A4 (en) | Cartoonify Image Detection Using Machine Learning | |
CN107728981A (en) | Display method and device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||