CN110110611A - Portrait attribute model construction method, device, computer equipment and storage medium - Google Patents
Portrait attribute model construction method, device, computer equipment and storage medium
Info
- Publication number
- CN110110611A CN110110611A CN201910303910.XA CN201910303910A CN110110611A CN 110110611 A CN110110611 A CN 110110611A CN 201910303910 A CN201910303910 A CN 201910303910A CN 110110611 A CN110110611 A CN 110110611A
- Authority
- CN
- China
- Prior art keywords
- data
- face
- trained
- model
- attribute
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
This application relates to the field of artificial intelligence, in particular image recognition, and provides a portrait attribute model construction method, apparatus, computer equipment and storage medium. The method includes: obtaining face detection data and determining a face region of interest; performing portrait attribute labeling on the face region of interest to obtain training samples; training a preset convolutional neural network model with the training samples as input and the portrait attributes as output; and fine-tuning the trained convolutional neural network model by means of a transfer learning algorithm to obtain the portrait attribute model. Throughout the process, the convolutional neural network model is first trained and then adjusted and optimized via transfer learning, which improves model performance; the optimized portrait attribute model can accurately perform portrait attribute recognition.
Description
Technical field
This application relates to the field of portrait recognition technology, and in particular to a portrait attribute model construction method, apparatus, computer equipment and storage medium.
Background art
At present, intelligent marketing generally attracts customers through offline canvassing, which requires a large amount of manpower, is poorly targeted, and is generally ineffective.
At the same time, with the development of artificial intelligence, AI (artificial intelligence) portrait technology has become increasingly mature and brings more enjoyment and convenience to people's lives. Applying AI portrait technology to the field of intelligent marketing, so as to proactively and intelligently identify user groups, has made online intelligent customer acquisition possible.
Building intelligent user portraits is one of the technical problems in applying AI portrait technology to intelligent marketing, and whether portrait attribute features can be accurately extracted is a key factor in user portrait construction. Portrait attribute features are generally extracted on the basis of a portrait attribute model, so whether the portrait attribute model can be reasonably constructed ultimately affects the effect of the entire intelligent user portrait construction. However, traditional portrait attribute model construction is not accurate enough, and portrait attribute features cannot be accurately extracted.
Summary of the invention
Based on this, in view of the above technical problems, it is necessary to provide an accurate portrait attribute model construction method, apparatus, computer equipment and storage medium.
A portrait attribute model construction method, the method comprising:
obtaining face detection data and determining a face region of interest;
recording the result of portrait attribute labeling of the face region of interest to obtain training samples;
training a preset convolutional neural network model with the face region of interest in the training samples as input and the portrait attributes in the training samples as output, to obtain a trained convolutional neural network model;
setting the output of the last fully connected layer of the trained convolutional neural network model to the number of face regions contained in the training samples, setting the learning rate of the last fully connected layer to a preset multiple of that of the other layers of the trained convolutional neural network model, and keeping the learning rate at a preset value, so as to adjust the trained convolutional neural network model and obtain the portrait attribute model.
In one embodiment, recording the result of portrait attribute labeling of the face region of interest to obtain training samples includes:
obtaining a set of attribute indicators to be labeled, the set including multiple attribute indicators: age, gender, presence of bangs, whether glasses are worn, makeup type, whether the eyebrows are penciled, whether lipstick is worn, whether blush is worn, hairstyle, skin condition, face shape, vertical proportions of the facial features, facial thirds proportions, beard type, eyebrow shape, and presence of forehead wrinkles;
identifying each corresponding attribute indicator in the face region of interest and recording the identification results;
associating the recorded results with the face region of interest to obtain training samples.
In one embodiment, obtaining face detection data and determining the face region of interest includes:
obtaining face detection data;
inputting the face detection data into a trained neural network model to determine the face region of interest, the trained neural network model being obtained by taking the face detection data in sample data as input data and the face positions in the sample data as output, and adjusting the preset parameters of the neural network model using a back-propagation algorithm and a cross-entropy loss until the number of training iterations reaches a preset threshold, wherein the cross-entropy loss is obtained by having the neural network model recognize the face detection data in the sample data to obtain predicted face positions and comparing the predicted face positions with the face positions in the sample data.
In one embodiment, obtaining face detection data and inputting the face detection data into the trained neural network model to determine the face region of interest includes:
obtaining face detection data;
inputting the face detection data into the trained neural network model to obtain a face position region;
identifying the edges of the face position region;
extending the edges outward by a preset number of pixels to obtain the face region of interest.
In one embodiment, training a preset convolutional neural network model with the face region of interest in the training samples as input and the portrait attributes in the training samples as output to obtain a trained convolutional neural network model includes:
randomly dividing the training samples into training data and validation data;
training the preset convolutional neural network model with the face region of interest in the training data as input and the portrait attributes in the training data as output;
validating the trained convolutional neural network model against the validation data;
when validation passes, obtaining the trained convolutional neural network model;
when validation fails, reacquiring the training samples.
In one embodiment, after adjusting the trained convolutional neural network model by the transfer learning algorithm to obtain the portrait attribute model, the method further includes:
receiving an input face photo and performing face detection on the face photo;
normalizing the picture obtained by face detection;
inputting the normalized data into the portrait attribute model to extract the user's portrait attribute features.
A portrait attribute model construction apparatus, the apparatus comprising:
a data acquisition module, configured to obtain face detection data and determine a face region of interest;
a labeling record module, configured to record the result of portrait attribute labeling of the face region of interest to obtain training samples;
a model training module, configured to train a preset convolutional neural network model with the face region of interest in the training samples as input and the portrait attributes in the training samples as output, to obtain a trained convolutional neural network model;
a model construction module, configured to set the output of the last fully connected layer of the trained convolutional neural network model to the number of face regions contained in the training samples, set the learning rate of the last fully connected layer to a preset multiple of that of the other layers of the trained convolutional neural network model, and keep the learning rate at a preset value, so as to adjust the trained convolutional neural network model and obtain the portrait attribute model.
In one embodiment, the labeling record module is further configured to obtain a set of attribute indicators to be labeled, identify each corresponding attribute indicator in the face region of interest, record the identification results, and associate the recorded results with the face region of interest to obtain training samples, wherein the set of attribute indicators to be labeled includes multiple attribute indicators: age, gender, presence of bangs, whether glasses are worn, makeup type, whether the eyebrows are penciled, whether lipstick is worn, whether blush is worn, hairstyle, skin condition, face shape, vertical proportions of the facial features, facial thirds proportions, beard type, eyebrow shape, and presence of forehead wrinkles.
A computer device, including a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of the above method.
A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the above method.
In the above portrait attribute model construction method, apparatus, computer equipment and storage medium, face detection data are obtained, a face region of interest is determined, portrait attribute labeling is performed on the face region of interest to obtain training samples, a preset convolutional neural network model is trained with the training samples as input and the portrait attributes as output, and the trained convolutional neural network model is fine-tuned by means of a transfer learning algorithm to obtain the portrait attribute model. Throughout the process, the convolutional neural network model is trained and then adjusted and optimized via transfer learning, improving model performance; the optimized portrait attribute model can accurately perform portrait attribute recognition.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the portrait attribute model construction method in one embodiment;
Fig. 2 is a schematic flowchart of the portrait attribute model construction method in another embodiment;
Fig. 3 is a structural block diagram of the portrait attribute model construction apparatus in one embodiment;
Fig. 4 is an internal structure diagram of the computer device in one embodiment.
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of this application clearer, this application is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
As shown in Fig. 1, a portrait attribute model construction method includes:
S200: obtain face detection data and determine a face region of interest.
Face detection data refer to the data obtained after performing face detection on a user; specifically, face detection can be performed on sample faces to obtain the face detection data. The face detection data are analyzed to determine the face region of interest. Specifically, the face detection data can be input into a trained neural network model, which accurately determines the face region of interest.
S400: record the result of portrait attribute labeling of the face region of interest to obtain training samples.
Portrait attribute labeling can be completed on the basis of expert data. Specifically, the server can send the face region of interest to an expert database server together with a set of portrait attribute indicators. The set contains multiple attribute indicators, specifically 16 attributes: age, gender, presence of bangs, whether glasses are worn, makeup type, whether the eyebrows are penciled, whether lipstick is worn, whether blush is worn, hairstyle, skin condition, face shape, vertical proportions of the facial features, facial thirds proportions, beard type, eyebrow shape, and presence of forehead wrinkles. For the face region of interest obtained in step S200, portrait attribute labeling is performed on it using existing technical means on the basis of expert experience data, and training samples are obtained. In practice, the face regions of interest to be labeled can be pushed to professionals, who perform portrait attribute labeling through their terminals (computers) and return the labeled data, from which the server obtains the training samples.
S600: train a preset convolutional neural network model with the face region of interest in the training samples as input and the portrait attributes in the training samples as output, to obtain a trained convolutional neural network model.
The preset convolutional neural network model is an initial generic model. During training, the face region of interest in the training samples is selected as input and the portrait attributes as output. Optionally, the training samples can be randomly divided into a training set and a validation set; the preset convolutional neural network model is trained on the training set and then validated on the validation set, and the portrait attribute training model is obtained when validation passes. Here, the attributes of only a small number of face regions of interest are labeled, and the portrait attribute model is trained and validated, which reduces the amount of data to be processed while ensuring that the model is accurately constructed. The portrait attribute model can be an initial convolutional neural network model constructed in advance; the obtained training samples are taken as input and the portrait attributes as output to train this initial convolutional neural network model, yielding the trained convolutional neural network model.
S800: set the output of the last fully connected layer of the trained convolutional neural network model to the number of face regions contained in the training samples, set the learning rate of the last fully connected layer to a preset multiple of that of the other layers of the trained convolutional neural network model, and keep the learning rate at a preset value, so as to adjust the trained convolutional neural network model and obtain the portrait attribute model.
Here the trained convolutional neural network model is adjusted in a manner based on transfer learning. (The goal of transfer learning is to use knowledge acquired in one environment to help learning tasks in a new environment.) Specifically, the adjustment process is: set the output of the last fully connected layer of the trained convolutional neural network model to the number of face regions contained in the training samples, set the learning rate of the last fully connected layer to a preset multiple of that of the other layers of the trained convolutional neural network model, and keep the learning rate at a preset value. Optionally, the convolutional neural network model is provided with 8 convolutional layers, 4 down-sampling layers and 2 fully connected layers; during adjustment, the output of the last fully connected layer of the trained convolutional neural network model is set to the number of persons contained in the training samples, the learning rate of the last fully connected layer is 20 times that of the other layers, the learning rate is kept at 0.0001, and a total of 10 rounds of training are performed.
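As one possible sketch of this fine-tuning step (the layer sizes, class count and choice of optimizer below are illustrative assumptions, not taken from the patent), the last fully connected layer can be replaced and given a 20x learning rate via per-parameter-group options in PyTorch:

```python
import torch
import torch.nn as nn

# A stand-in for the trained CNN: two FC layers after a (omitted) conv backbone.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64, 32),   # earlier fully connected layer
    nn.ReLU(),
    nn.Linear(32, 10),   # last fully connected layer, to be replaced
)

num_classes = 16                         # assumed output count for the new task
model[-1] = nn.Linear(32, num_classes)   # reset the last FC layer's output size

base_lr = 0.0001                         # preset learning rate from the description
head_params = list(model[-1].parameters())
head_ids = {id(p) for p in head_params}
base_params = [p for p in model.parameters() if id(p) not in head_ids]

# The last FC layer trains at 20x the learning rate of all other layers.
optimizer = torch.optim.SGD([
    {"params": base_params, "lr": base_lr},
    {"params": head_params, "lr": base_lr * 20},
])
```

The rest of the network keeps the small base learning rate, so fine-tuning mostly reshapes the new classification head while gently adapting the pretrained layers.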
In the above portrait attribute model construction method, face detection data are obtained, a face region of interest is determined, portrait attribute labeling is performed on the face region of interest to obtain training samples, a preset convolutional neural network model is trained with the training samples as input and the portrait attributes as output, and the trained convolutional neural network model is fine-tuned by means of a transfer learning algorithm to obtain the portrait attribute model. Throughout the process, the convolutional neural network model is trained and then adjusted and optimized via transfer learning, improving model performance; the optimized portrait attribute model can accurately perform portrait attribute recognition.
In one embodiment, recording the result of portrait attribute labeling of the face region of interest to obtain training samples includes: obtaining a set of attribute indicators to be labeled, the set including multiple attribute indicators: age, gender, presence of bangs, whether glasses are worn, makeup type, whether the eyebrows are penciled, whether lipstick is worn, whether blush is worn, hairstyle, skin condition, face shape, vertical proportions of the facial features, facial thirds proportions, beard type, eyebrow shape, and presence of forehead wrinkles; identifying each corresponding attribute indicator in the face region of interest and recording the identification results; and associating the recorded results with the face region of interest to obtain training samples.
The attribute labeling process can be completed by the expert database server, which identifies each attribute indicator one by one, in a manner combining big data analysis with expert classification of the data, performs the above 16 portrait attribute recognitions for each face region of interest, and feeds the recognition results back to the server; the server then stores each face region of interest in association with its portrait attribute labeling result.
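As an illustrative sketch of such an associated record (the field names and example values below are hypothetical and not specified by the patent), a labeled training sample tying a face region of interest to its 16 attribute indicators could look like:

```python
# Hypothetical record associating a face region of interest with its labels.
sample = {
    "region_id": "face_0001",   # assumed identifier for the region of interest
    "attributes": {
        "age": 32,
        "gender": "female",
        "bangs": True,
        "glasses": False,
        "makeup_type": "light",
        "penciled_eyebrows": True,
        "lipstick": True,
        "blush": False,
        "hairstyle": "long",
        "skin_condition": "smooth",
        "face_shape": "oval",
        "vertical_proportions": "balanced",
        "facial_thirds": "balanced",
        "beard_type": "none",
        "eyebrow_shape": "arched",
        "forehead_wrinkles": False,
    },
}

# The record carries exactly the 16 attribute indicators listed in the description.
assert len(sample["attributes"]) == 16
```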
In one embodiment, obtaining face detection data and determining the face region of interest includes: obtaining face detection data; inputting the face detection data into a trained neural network model to determine the face region of interest, the trained neural network model being obtained by taking the face detection data in sample data as input data and the face positions in the sample data as output, and adjusting the preset parameters of the neural network model using a back-propagation algorithm and a cross-entropy loss until the number of training iterations reaches a preset threshold, wherein the cross-entropy loss is obtained by having the neural network model recognize the face detection data in the sample data to obtain predicted face positions and comparing the predicted face positions with the face positions in the sample data.
In the present embodiment, the face region of interest is identified from the face detection data by a pre-built, trained neural network model. The trained neural network model is obtained by continuous training with the face detection data in the sample data as input data and the face positions as output. In practical applications, a large amount of face data is collected, and the face positions corresponding to these face data are obtained by conventional means; the face data are input into the initial model as sample face data, the corresponding face positions serve as output, and the initial model is continuously trained to obtain the trained neural network model. The specific training process is: with the face detection data in the sample data as input data and the face positions in the sample data as output, adjust the preset parameters of the neural network using a back-propagation algorithm and a cross-entropy loss until the number of training iterations reaches a preset threshold, wherein the cross-entropy loss is obtained by having the initial convolutional neural network recognize the face detection data in the sample data to obtain predicted face positions and comparing the predicted face positions with the face positions in the sample data.
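The training loop above can be sketched minimally as follows, assuming the face position is discretized into a small number of position classes so that a cross-entropy loss applies (the tensor sizes, learning rate and iteration threshold are all assumptions for illustration):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-ins for the sample data: 32 feature vectors, with
# coarse grid-cell labels standing in for the known face positions.
inputs = torch.randn(32, 20)
positions = torch.randint(0, 4, (32,))

model = nn.Linear(20, 4)             # placeholder for the initial neural network
criterion = nn.CrossEntropyLoss()    # compares predicted vs. true face positions
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

max_iterations = 50                  # preset threshold on training iterations
for step in range(max_iterations):
    optimizer.zero_grad()
    loss = criterion(model(inputs), positions)  # cross-entropy loss
    loss.backward()                  # back-propagation adjusts the parameters
    optimizer.step()
```

Training simply stops once the iteration count reaches the preset threshold, matching the stopping rule in the description.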
In one embodiment, obtaining face detection data and inputting the face detection data into the trained neural network model to determine the face region of interest includes: obtaining face detection data; inputting the face detection data into the trained neural network model to obtain a face position region; identifying the edges of the face position region; and extending the edges outward by a preset number of pixels to obtain the face region of interest.
The face detection data are input into the trained neural network model, which predicts the positions of multiple regions of the face such as the eyes, nose, mouth and head, yielding the face position region. The face position region is then expanded: its edges are extended outward by a preset number of pixels to determine the face region of interest. In practical applications, a user's photo can be input to the server, which performs face detection on it to obtain the face detection data; the server inputs the face detection data into the trained neural network model, which predicts the face position; face head position information is obtained from the face position and expanded accordingly, finally determining the face region of interest.
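A minimal sketch of the edge-extension step (the coordinate convention, margin size and image dimensions are illustrative assumptions):

```python
def expand_region(box, margin, width, height):
    """Extend a face position region outward by `margin` pixels on every edge,
    clipped to the image bounds. `box` is (left, top, right, bottom)."""
    left, top, right, bottom = box
    return (
        max(0, left - margin),
        max(0, top - margin),
        min(width, right + margin),
        min(height, bottom + margin),
    )

# Example: a detected face box in a 640x480 image, extended by 10 pixels.
roi = expand_region((100, 80, 220, 230), margin=10, width=640, height=480)
print(roi)  # (90, 70, 230, 240)
```

Clipping to the image bounds keeps the expanded region valid when the detected face sits near a border of the photo.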
In one embodiment, training the preset convolutional neural network model with the face region of interest in the training samples as input and the portrait attributes in the training samples as output to obtain the trained convolutional neural network model includes: randomly dividing the training samples into training data and validation data; training the preset convolutional neural network model with the face region of interest in the training data as input and the portrait attributes in the training data as output; validating the trained convolutional neural network model against the validation data; when validation passes, obtaining the trained convolutional neural network model; and when validation fails, reacquiring the training samples.
In the present embodiment, the training samples are divided into two parts, training data and validation data. The portrait attribute model is trained with the training data and then validated with the validation data; when validation fails, the training data are reselected and the portrait attribute model is trained again. Reselecting the training data can mean selecting other parts of the previous round's training data, or randomly redividing the training samples into training data and validation data. Optionally, when dividing the training samples into training data and validation data, the larger part can be assigned to the training data and the smaller part to the validation data.
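The random division above can be sketched as follows (the 80/20 split ratio and fixed seed are assumptions for illustration; the patent only requires the training part to be larger):

```python
import random

def split_samples(samples, train_fraction=0.8, seed=42):
    """Randomly divide training samples into a larger training part
    and a smaller validation part."""
    shuffled = samples[:]                    # copy; leave the original intact
    random.Random(seed).shuffle(shuffled)    # random division of the samples
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, val = split_samples(list(range(100)))
print(len(train), len(val))  # 80 20
```

On a failed validation, calling `split_samples` again with a different seed yields a fresh redivision, matching the "redivide and retrain" option in the description.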
As shown in Fig. 2, in one embodiment, after step S800 the method further includes:
S920: receive an input face photo and perform face detection on the face photo;
S940: normalize the picture obtained by face detection;
S960: input the normalized data into the portrait attribute model and extract the user's portrait attribute features.
The server is loaded with the portrait attribute model obtained after the processing of step S800. The server receives a face photo input by the user, performs face detection on it to obtain the post-detection data, inputs the data after face detection into the portrait attribute model to accurately extract the user's portrait attribute features, and feeds the extracted portrait attributes back to the user. After performing face detection on the input photo, the server normalizes the face-region-of-interest picture obtained by the detection, and then inputs the normalized picture into the portrait attribute model. Specifically, the server can normalize the face detection result picture by running MATLAB software, and then input the normalized picture into the optimized portrait attribute model.
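The description names MATLAB for the normalization step; as an illustrative Python equivalent (min-max scaling to [0, 1] is an assumed choice of normalization, since the patent does not fix one), the step could look like:

```python
import numpy as np

def normalize_face(pixels):
    """Scale the pixel values of a face crop to the [0, 1] range
    (min-max normalization)."""
    pixels = np.asarray(pixels, dtype=np.float64)
    lo, hi = pixels.min(), pixels.max()
    if hi == lo:                      # constant image: avoid division by zero
        return np.zeros_like(pixels)
    return (pixels - lo) / (hi - lo)

# A tiny 2x2 grayscale crop standing in for the face-region-of-interest picture.
crop = np.array([[0, 128], [64, 255]])
result = normalize_face(crop)
print(result)
```

Feeding the model values in a fixed range keeps inference consistent with however the training crops were preprocessed.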
In practical applications, the above portrait attribute model construction method can be used to build the portrait attribute model and extract portrait attributes with the model, specifically comprising a model construction stage and a model service stage. In the model construction stage: based on historical data, the server obtains face photos from the historical records, performs face detection on them to obtain face detection data, and determines the face regions of interest; the server performs portrait data labeling on the face regions of interest according to expert experience data to generate training samples; the server obtains a generic convolutional neural network model and trains it on the training samples, with the training samples as input and the portrait attributes as output during training; when training is completed, the trained convolutional neural network model is adjusted by the transfer learning algorithm to obtain the portrait attribute model, completing the operations of the model construction stage. In the model service stage: the server is loaded with the portrait attribute model constructed in the above way; the user inputs a photo to the server; when the server receives the user's photo, it performs face detection to obtain a face-region-of-interest picture, normalizes the obtained picture, and inputs the normalized data into the constructed portrait attribute model, which accurately extracts the portrait features.
It should be understood that although the steps in the flowcharts of Figs. 1-2 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless expressly stated otherwise herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 1-2 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
As shown in Fig. 3, a portrait attribute model construction device includes:
a data acquisition module 200, configured to obtain face detection data and determine a face region of interest;
an annotation recording module 400, configured to record portrait attribute annotation results for the face region of interest to obtain training samples;
a model training module 600, configured to train a default convolutional neural network model, taking the face regions of interest in the training samples as input and the portrait attributes in the training samples as output, to obtain a trained convolutional neural network model; and
a model construction module 800, configured to set the output of the last fully connected layer in the trained convolutional neural network model to the number of face regions contained in the training samples, set the learning rate of the last fully connected layer to a preset multiple of that of the other layers in the trained convolutional neural network model, and keep the learning rate at the preset value, so as to adjust the trained convolutional neural network model and obtain the portrait attribute model.
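The learning-rate arrangement in the model construction module, where the re-sized final fully connected layer trains at a preset multiple of the rate kept by the remaining layers, can be sketched as a per-layer rate table. The layer names, base rate, and multiple below are assumptions for illustration:

```python
def layerwise_learning_rates(layers, last_fc, base_lr=1e-4, multiple=10):
    """Assign the final fully connected layer a learning rate that is
    `multiple` times the base rate kept by all other pretrained layers."""
    return {name: base_lr * (multiple if name == last_fc else 1)
            for name in layers}
```

In a typical fine-tuning setup this lets the freshly re-initialized output layer adapt quickly to the new attribute targets while the pretrained feature layers change only slowly.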
In the above portrait attribute model construction device, the data acquisition module 200 obtains face detection data and determines the face region of interest; the annotation recording module 400 performs portrait attribute annotation on the face region of interest to obtain training samples; the model training module 600 trains a default convolutional neural network model, taking the training samples as input and the portrait attributes as output; and the model construction module 800 fine-tunes the trained convolutional neural network model based on a transfer learning algorithm to obtain the portrait attribute model. Throughout this process, a convolutional neural network model is trained and then adjusted and optimized by means of a transfer learning algorithm, improving model performance, so that the optimized portrait attribute model can accurately recognize portrait attributes.
In one embodiment, the annotation recording module 400 is further configured to obtain a set of attribute indices to be annotated, identify each corresponding attribute index in the face region of interest, record the identification results, and associate the recorded results with the face region of interest to obtain training samples, wherein the set of attribute indices to be annotated includes multiple attribute indices, the attribute indices including age, gender, presence of bangs, whether glasses are worn, makeup type, whether the eyebrows are penciled, whether lipstick is worn, whether blush is worn, hairstyle, skin condition, face shape, upper-and-lower facial section proportions, three-section facial proportions, beard type, eyebrow shape, and presence of forehead wrinkles.
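As a data-structure illustration, the sixteen attribute indices listed above can be held in a single set, with each annotation record associated with its face region of interest. The identifier names and record fields below are hypothetical:

```python
ATTRIBUTE_INDICES = {
    "age", "gender", "bangs", "glasses", "makeup_type", "penciled_eyebrows",
    "lipstick", "blush", "hairstyle", "skin_condition", "face_shape",
    "vertical_proportions", "three_section_proportions", "beard_type",
    "eyebrow_shape", "forehead_wrinkles",
}

def record_annotation(region_id, results):
    """Associate identified attribute values with their face region of interest;
    refuse records that miss any attribute index to be annotated."""
    missing = ATTRIBUTE_INDICES - set(results)
    if missing:
        raise ValueError(f"unannotated attribute indices: {sorted(missing)}")
    return {"region": region_id, "labels": dict(results)}
```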
In one embodiment, the data acquisition module 200 is further configured to obtain face detection data and input the face detection data into a trained neural network model to determine the face region of interest. The trained neural network model takes face detection data in sample data as input and face locations in the sample data as output, and adjusts preset parameters in the neural network model using a back-propagation algorithm and a cross-entropy loss until the number of training iterations reaches a preset threshold, wherein the cross-entropy loss is obtained from data produced by the neural network model recognizing the face detection data in the sample data to obtain predicted face locations and comparing the predicted face locations with the face locations in the sample data.
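The detection network's training rule, comparing predictions with labels, back-propagating a cross-entropy loss, and stopping once the training count reaches the preset threshold, can be illustrated with a toy one-parameter logistic unit. This is a deliberately minimal stand-in for the real network, with assumed defaults:

```python
import math

def train_toy_detector(samples, lr=0.5, preset_threshold=300):
    """Gradient descent on a cross-entropy loss for a 1-D logistic unit,
    stopping once the training count reaches the preset threshold."""
    w, b = 0.0, 0.0
    for _ in range(preset_threshold):
        for x, y in samples:                           # y: 1 = face present
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # predicted probability
            grad = p - y                               # d(cross-entropy)/d(logit)
            w -= lr * grad * x
            b -= lr * grad
    return w, b
```

The real model predicts face locations rather than a single probability, but the loop structure, loss gradient, and iteration-count stopping rule are the same.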
In one embodiment, the data acquisition module 200 is further configured to obtain face detection data; input the face detection data into a trained neural network model to obtain a face location region; identify the edges of the face location region; and extend the edges outward by a preset number of pixels to obtain the face region of interest.
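The edge-extension step can be sketched as growing the detected bounding box by the preset pixel distance on every side, clamped to the image bounds. The box format and names are assumptions:

```python
def expand_region(box, margin, img_w, img_h):
    """Extend the edges of a detected face-location box (x0, y0, x1, y1)
    outward by `margin` pixels to obtain the face region of interest,
    without leaving the image."""
    x0, y0, x1, y1 = box
    return (max(0, x0 - margin), max(0, y0 - margin),
            min(img_w, x1 + margin), min(img_h, y1 + margin))
```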
In one embodiment, the model training module 600 is further configured to randomly divide the training samples into training data and validation data; train the default convolutional neural network model, taking the face regions of interest in the training data as input and the portrait attributes in the training data as output; validate the trained convolutional neural network model on the validation data; obtain the trained convolutional neural network model when the validation passes; and reacquire training samples when the validation fails.
In one embodiment, the above portrait attribute model construction device further includes a feature extraction module configured to receive an input face photo, perform face detection on the face photo, normalize the picture obtained by the face detection, input the normalized data into the portrait attribute model, and extract the user portrait attribute features.
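The feature-extraction flow (receive a photo, detect the face, normalize, run the portrait attribute model) can be sketched as a serving pipeline with each concrete step injected; all names are hypothetical:

```python
def extract_portrait_attributes(photo, detect, normalize, model):
    """Model service stage: face detection, normalization, then attribute
    extraction by the constructed portrait attribute model."""
    roi = detect(photo)      # face region of interest
    data = normalize(roi)    # e.g. scale pixel values, fix input size
    return model(data)       # portrait attribute features
```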
For specific limitations on the portrait attribute model construction device, reference may be made to the limitations on the portrait attribute model construction method above, which are not repeated here. Each module in the above portrait attribute model construction device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 4. The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores sample face photos or face detection data. The network interface of the computer device communicates with external terminals over a network connection. The computer program, when executed by the processor, implements a portrait attribute model construction method.
Those skilled in the art will understand that the structure shown in Fig. 4 is merely a block diagram of a partial structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor. The processor, when executing the computer program, performs the following steps:
obtaining face detection data and determining a face region of interest;
recording portrait attribute annotation results for the face region of interest to obtain training samples;
training a default convolutional neural network model, taking the face regions of interest in the training samples as input and the portrait attributes in the training samples as output, to obtain a trained convolutional neural network model; and
setting the output of the last fully connected layer in the trained convolutional neural network model to the number of face regions contained in the training samples, setting the learning rate of the last fully connected layer to a preset multiple of that of the other layers in the trained convolutional neural network model, and keeping the learning rate at the preset value, so as to adjust the trained convolutional neural network model and obtain a portrait attribute model.
In one embodiment, the processor, when executing the computer program, further performs the following steps:
obtaining a set of attribute indices to be annotated, the set including multiple attribute indices, the attribute indices including age, gender, presence of bangs, whether glasses are worn, makeup type, whether the eyebrows are penciled, whether lipstick is worn, whether blush is worn, hairstyle, skin condition, face shape, upper-and-lower facial section proportions, three-section facial proportions, beard type, eyebrow shape, and presence of forehead wrinkles; identifying each corresponding attribute index in the face region of interest and recording the identification results; and associating the recorded results with the face region of interest to obtain training samples.
In one embodiment, the processor, when executing the computer program, further performs the following steps:
obtaining face detection data; and inputting the face detection data into a trained neural network model to determine the face region of interest, the trained neural network model taking face detection data in sample data as input and face locations in the sample data as output, and adjusting preset parameters in the neural network model using a back-propagation algorithm and a cross-entropy loss until the number of training iterations reaches a preset threshold, wherein the cross-entropy loss is obtained from data produced by the neural network model recognizing the face detection data in the sample data to obtain predicted face locations and comparing the predicted face locations with the face locations in the sample data.
In one embodiment, the processor, when executing the computer program, further performs the following steps:
obtaining face detection data; inputting the face detection data into a trained neural network model to obtain a face location region; identifying the edges of the face location region; and extending the edges outward by a preset number of pixels to obtain the face region of interest.
In one embodiment, the processor, when executing the computer program, further performs the following steps:
randomly dividing the training samples into training data and validation data; training the default convolutional neural network model, taking the face regions of interest in the training data as input and the portrait attributes in the training data as output; validating the trained convolutional neural network model on the validation data; obtaining the trained convolutional neural network model when the validation passes; and reacquiring training samples when the validation fails.
In one embodiment, the processor, when executing the computer program, further performs the following steps:
receiving an input face photo and performing face detection on the face photo; normalizing the picture obtained by the face detection; and inputting the normalized data into the portrait attribute model to extract user portrait attribute features.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. The computer program, when executed by a processor, performs the following steps:
obtaining face detection data and determining a face region of interest;
recording portrait attribute annotation results for the face region of interest to obtain training samples;
training a default convolutional neural network model, taking the face regions of interest in the training samples as input and the portrait attributes in the training samples as output, to obtain a trained convolutional neural network model; and
setting the output of the last fully connected layer in the trained convolutional neural network model to the number of face regions contained in the training samples, setting the learning rate of the last fully connected layer to a preset multiple of that of the other layers in the trained convolutional neural network model, and keeping the learning rate at the preset value, so as to adjust the trained convolutional neural network model and obtain a portrait attribute model.
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
obtaining a set of attribute indices to be annotated, the set including multiple attribute indices, the attribute indices including age, gender, presence of bangs, whether glasses are worn, makeup type, whether the eyebrows are penciled, whether lipstick is worn, whether blush is worn, hairstyle, skin condition, face shape, upper-and-lower facial section proportions, three-section facial proportions, beard type, eyebrow shape, and presence of forehead wrinkles; identifying each corresponding attribute index in the face region of interest and recording the identification results; and associating the recorded results with the face region of interest to obtain training samples.
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
obtaining face detection data; and inputting the face detection data into a trained neural network model to determine the face region of interest, the trained neural network model taking face detection data in sample data as input and face locations in the sample data as output, and adjusting preset parameters in the neural network model using a back-propagation algorithm and a cross-entropy loss until the number of training iterations reaches a preset threshold, wherein the cross-entropy loss is obtained from data produced by the neural network model recognizing the face detection data in the sample data to obtain predicted face locations and comparing the predicted face locations with the face locations in the sample data.
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
obtaining face detection data; inputting the face detection data into a trained neural network model to obtain a face location region; identifying the edges of the face location region; and extending the edges outward by a preset number of pixels to obtain the face region of interest.
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
randomly dividing the training samples into training data and validation data; training the default convolutional neural network model, taking the face regions of interest in the training data as input and the portrait attributes in the training data as output; validating the trained convolutional neural network model on the validation data; obtaining the trained convolutional neural network model when the validation passes; and reacquiring training samples when the validation fails.
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
receiving an input face photo and performing face detection on the face photo; normalizing the picture obtained by the face detection; and inputting the normalized data into the portrait attribute model to extract user portrait attribute features.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as no contradiction exists in a combination of these technical features, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the application, and these all fall within the protection scope of the application. Therefore, the protection scope of the application patent shall be subject to the appended claims.
Claims (10)
1. A portrait attribute model construction method, the method comprising:
obtaining face detection data and determining a face region of interest;
recording portrait attribute annotation results for the face region of interest to obtain training samples;
training a default convolutional neural network model, taking the face regions of interest in the training samples as input and the portrait attributes in the training samples as output, to obtain a trained convolutional neural network model; and
setting the output of the last fully connected layer in the trained convolutional neural network model to the number of face regions contained in the training samples, setting the learning rate of the last fully connected layer to a preset multiple of that of the other layers in the trained convolutional neural network model, and keeping the learning rate at the preset value, so as to adjust the trained convolutional neural network model and obtain a portrait attribute model.
2. The method according to claim 1, characterized in that recording the portrait attribute annotation results for the face region of interest to obtain training samples comprises:
obtaining a set of attribute indices to be annotated, the set comprising multiple attribute indices, the attribute indices including age, gender, presence of bangs, whether glasses are worn, makeup type, whether the eyebrows are penciled, whether lipstick is worn, whether blush is worn, hairstyle, skin condition, face shape, upper-and-lower facial section proportions, three-section facial proportions, beard type, eyebrow shape, and presence of forehead wrinkles;
identifying each corresponding attribute index in the face region of interest and recording the identification results; and
associating the recorded results with the face region of interest to obtain training samples.
3. The method according to claim 1, characterized in that obtaining the face detection data and determining the face region of interest comprises:
obtaining face detection data; and
inputting the face detection data into a trained neural network model to determine the face region of interest, wherein the trained neural network model takes face detection data in sample data as input and face locations in the sample data as output, and adjusts preset parameters in the neural network model using a back-propagation algorithm and a cross-entropy loss until the number of training iterations reaches a preset threshold, the cross-entropy loss being obtained from data produced by the neural network model recognizing the face detection data in the sample data to obtain predicted face locations and comparing the predicted face locations with the face locations in the sample data.
4. The method according to claim 3, characterized in that obtaining the face detection data and inputting the face detection data into the trained neural network model to determine the face region of interest comprises:
obtaining face detection data;
inputting the face detection data into the trained neural network model to obtain a face location region;
identifying the edges of the face location region; and
extending the edges outward by a preset number of pixels to obtain the face region of interest.
5. The method according to claim 1, characterized in that training the default convolutional neural network model, taking the face regions of interest in the training samples as input and the portrait attributes in the training samples as output, to obtain the trained convolutional neural network model comprises:
randomly dividing the training samples into training data and validation data;
training the default convolutional neural network model, taking the face regions of interest in the training data as input and the portrait attributes in the training data as output;
validating the trained convolutional neural network model on the validation data;
obtaining the trained convolutional neural network model when the validation passes; and
reacquiring the training samples when the validation fails.
6. The method according to claim 1, characterized in that, after adjusting the trained convolutional neural network model by the transfer learning algorithm to obtain the portrait attribute model, the method further comprises:
receiving an input face photo and performing face detection on the face photo;
normalizing the picture obtained by the face detection; and
inputting the normalized data into the portrait attribute model to extract user portrait attribute features.
7. A portrait attribute model construction device, characterized in that the device comprises:
a data acquisition module, configured to obtain face detection data and determine a face region of interest;
an annotation recording module, configured to record portrait attribute annotation results for the face region of interest to obtain training samples;
a model training module, configured to train a default convolutional neural network model, taking the face regions of interest in the training samples as input and the portrait attributes in the training samples as output, to obtain a trained convolutional neural network model; and
a model construction module, configured to set the output of the last fully connected layer in the trained convolutional neural network model to the number of face regions contained in the training samples, set the learning rate of the last fully connected layer to a preset multiple of that of the other layers in the trained convolutional neural network model, and keep the learning rate at the preset value, so as to adjust the trained convolutional neural network model and obtain a portrait attribute model.
8. The device according to claim 7, characterized in that the annotation recording module is further configured to obtain a set of attribute indices to be annotated, identify each corresponding attribute index in the face region of interest, record the identification results, and associate the recorded results with the face region of interest to obtain training samples, wherein the set of attribute indices to be annotated comprises multiple attribute indices, the attribute indices including age, gender, presence of bangs, whether glasses are worn, makeup type, whether the eyebrows are penciled, whether lipstick is worn, whether blush is worn, hairstyle, skin condition, face shape, upper-and-lower facial section proportions, three-section facial proportions, beard type, eyebrow shape, and presence of forehead wrinkles.
9. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910303910.XA CN110110611A (en) | 2019-04-16 | 2019-04-16 | Portrait attribute model construction method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110110611A true CN110110611A (en) | 2019-08-09 |
Family
ID=67483892
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910303910.XA Pending CN110110611A (en) | 2019-04-16 | 2019-04-16 | Portrait attribute model construction method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110110611A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110555514A (en) * | 2019-08-20 | 2019-12-10 | 北京迈格威科技有限公司 | Neural network model searching method, image identification method and device |
CN110796089A (en) * | 2019-10-30 | 2020-02-14 | 上海掌门科技有限公司 | Method and apparatus for training face-changing model |
CN111209874A (en) * | 2020-01-09 | 2020-05-29 | 北京百目科技有限公司 | Method for analyzing and identifying wearing attribute of human head |
CN111291632A (en) * | 2020-01-17 | 2020-06-16 | 厦门中控智慧信息技术有限公司 | Pedestrian state detection method, device and equipment |
CN112036249A (en) * | 2020-08-04 | 2020-12-04 | 汇纳科技股份有限公司 | Method, system, medium and terminal for end-to-end pedestrian detection and attribute identification |
CN112163462A (en) * | 2020-09-08 | 2021-01-01 | 北京数美时代科技有限公司 | Face-based juvenile recognition method and device and computer equipment |
CN112307110A (en) * | 2020-10-30 | 2021-02-02 | 京东方科技集团股份有限公司 | User portrait generation method and device, computer equipment and storage medium |
WO2021174820A1 (en) * | 2020-03-03 | 2021-09-10 | 平安科技(深圳)有限公司 | Discovery method and apparatus for difficult sample, and computer device |
CN113553909A (en) * | 2021-06-23 | 2021-10-26 | 北京百度网讯科技有限公司 | Model training method for skin detection and skin detection method |
US20230030740A1 (en) * | 2021-07-29 | 2023-02-02 | Lemon Inc. | Image annotating method, classification method and machine learning model training method |
CN117826448A (en) * | 2024-01-24 | 2024-04-05 | 江苏鸿晨集团有限公司 | Self-correcting high-order aberration lens based on transition region optimization and processing method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295584A (en) * | 2016-08-16 | 2017-01-04 | 深圳云天励飞技术有限公司 | Depth migration study is in the recognition methods of crowd's attribute |
WO2018054283A1 (en) * | 2016-09-23 | 2018-03-29 | 北京眼神科技有限公司 | Face model training method and device, and face authentication method and device |
CN107992783A (en) * | 2016-10-26 | 2018-05-04 | 上海银晨智能识别科技有限公司 | Face image processing process and device |
CN108229268A (en) * | 2016-12-31 | 2018-06-29 | 商汤集团有限公司 | Expression Recognition and convolutional neural networks model training method, device and electronic equipment |
US20190065906A1 (en) * | 2017-08-25 | 2019-02-28 | Baidu Online Network Technology (Beijing) Co., Ltd . | Method and apparatus for building human face recognition model, device and computer storage medium |
CN109409198A (en) * | 2018-08-31 | 2019-03-01 | 平安科技(深圳)有限公司 | AU detection model training method, AU detection method, device, equipment and medium |
WO2019061661A1 (en) * | 2017-09-30 | 2019-04-04 | 平安科技(深圳)有限公司 | Image tamper detecting method, electronic device and readable storage medium |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295584A (en) * | 2016-08-16 | 2017-01-04 | 深圳云天励飞技术有限公司 | Depth migration study is in the recognition methods of crowd's attribute |
CN107766787A (en) * | 2016-08-16 | 2018-03-06 | 深圳云天励飞技术有限公司 | Face character recognition methods, device, terminal and storage medium |
WO2018054283A1 (en) * | 2016-09-23 | 2018-03-29 | 北京眼神科技有限公司 | Face model training method and device, and face authentication method and device |
CN107992783A (en) * | 2016-10-26 | 2018-05-04 | 上海银晨智能识别科技有限公司 | Face image processing process and device |
CN108229268A (en) * | 2016-12-31 | 2018-06-29 | 商汤集团有限公司 | Expression Recognition and convolutional neural networks model training method, device and electronic equipment |
US20190065906A1 (en) * | 2017-08-25 | 2019-02-28 | Baidu Online Network Technology (Beijing) Co., Ltd . | Method and apparatus for building human face recognition model, device and computer storage medium |
WO2019061661A1 (en) * | 2017-09-30 | 2019-04-04 | 平安科技(深圳)有限公司 | Image tamper detecting method, electronic device and readable storage medium |
CN109409198A (en) * | 2018-08-31 | 2019-03-01 | 平安科技(深圳)有限公司 | AU detection model training method, AU detection method, device, equipment and medium |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110555514B (en) * | 2019-08-20 | 2022-07-12 | 北京迈格威科技有限公司 | Neural network model searching method, image identification method and device |
CN110555514A (en) * | 2019-08-20 | 2019-12-10 | 北京迈格威科技有限公司 | Neural network model searching method, image identification method and device |
CN110796089A (en) * | 2019-10-30 | 2020-02-14 | 上海掌门科技有限公司 | Method and apparatus for training face-changing model |
CN110796089B (en) * | 2019-10-30 | 2023-12-12 | 上海掌门科技有限公司 | Method and apparatus for training face model |
CN111209874A (en) * | 2020-01-09 | 2020-05-29 | 北京百目科技有限公司 | Method for analyzing and identifying wearing attribute of human head |
CN111291632A (en) * | 2020-01-17 | 2020-06-16 | 厦门中控智慧信息技术有限公司 | Pedestrian state detection method, device and equipment |
WO2021174820A1 (en) * | 2020-03-03 | 2021-09-10 | 平安科技(深圳)有限公司 | Discovery method and apparatus for difficult sample, and computer device |
CN112036249A (en) * | 2020-08-04 | 2020-12-04 | 汇纳科技股份有限公司 | Method, system, medium and terminal for end-to-end pedestrian detection and attribute identification |
CN112163462A (en) * | 2020-09-08 | 2021-01-01 | 北京数美时代科技有限公司 | Face-based juvenile recognition method and device and computer equipment |
CN112307110A (en) * | 2020-10-30 | 2021-02-02 | 京东方科技集团股份有限公司 | User portrait generation method and device, computer equipment and storage medium |
CN113553909A (en) * | 2021-06-23 | 2021-10-26 | 北京百度网讯科技有限公司 | Model training method for skin detection and skin detection method |
CN113553909B (en) * | 2021-06-23 | 2023-08-04 | 北京百度网讯科技有限公司 | Model training method for skin detection and skin detection method |
US20230030740A1 (en) * | 2021-07-29 | 2023-02-02 | Lemon Inc. | Image annotating method, classification method and machine learning model training method |
CN117826448A (en) * | 2024-01-24 | 2024-04-05 | 江苏鸿晨集团有限公司 | Self-correcting high-order aberration lens based on transition region optimization and processing method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110110611A (en) | Portrait attribute model construction method, device, computer equipment and storage medium | |
CN110135263A (en) | Portrait attribute model construction method, device, computer equipment and storage medium | |
US10489683B1 (en) | Methods and systems for automatic generation of massive training data sets from 3D models for training deep learning networks | |
Han et al. | Heterogeneous face attribute estimation: A deep multi-task learning approach | |
Liu et al. | Wow! you are so beautiful today! | |
US9760935B2 (en) | Method, system and computer program product for generating recommendations for products and treatments | |
WO2020147430A1 (en) | Image identification-based product display method, device, apparatus, and medium | |
CN108960167B (en) | Hairstyle identification method, device, computer readable storage medium and computer equipment | |
CN109508638A (en) | Face Emotion identification method, apparatus, computer equipment and storage medium | |
CN109767261A (en) | Products Show method, apparatus, computer equipment and storage medium | |
CN110110118A (en) | Dressing recommended method, device, storage medium and mobile terminal | |
CN106326857A (en) | Gender identification method and gender identification device based on face image | |
US11507781B2 (en) | Methods and systems for automatic generation of massive training data sets from 3D models for training deep learning networks | |
US11756298B2 (en) | Analysis and feedback system for personal care routines | |
CN112819718A (en) | Image processing method and device, electronic device and storage medium | |
US20220164852A1 (en) | Digital Imaging and Learning Systems and Methods for Analyzing Pixel Data of an Image of a Hair Region of a User's Head to Generate One or More User-Specific Recommendations | |
CN109858022A (en) | A kind of user's intension recognizing method, device, computer equipment and storage medium | |
CN109685713A (en) | Makeup analog control method, device, computer equipment and storage medium | |
JP2023531264A (en) | Systems and methods for improved facial attribute classification and its use | |
CN115862120B (en) | Face action unit identification method and equipment capable of decoupling separable variation from encoder | |
Wei et al. | Deep collocative learning for immunofixation electrophoresis image analysis | |
KR20200031959A (en) | Method for evaluating social intelligence and apparatus using the same | |
CN111325173A (en) | Hair type identification method and device, electronic equipment and storage medium | |
CN115828175A (en) | Resampling method for updating leaf nodes of depth regression forest | |
Tanveez et al. | Facial Emotional Recognition System using Machine Learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||