CN107766787A - Face attribute recognition method, device, terminal and storage medium - Google Patents
Face attribute recognition method, device, terminal and storage medium
- Publication number
- CN107766787A CN107766787A CN201710591603.7A CN201710591603A CN107766787A CN 107766787 A CN107766787 A CN 107766787A CN 201710591603 A CN201710591603 A CN 201710591603A CN 107766787 A CN107766787 A CN 107766787A
- Authority
- CN
- China
- Prior art keywords
- face
- network model
- neural network
- recognition
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Artificial Intelligence (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
A face attribute recognition method is provided. The method includes: pre-training a neural network model; fine-tuning the pre-trained neural network model on a face recognition task; fine-tuning the face-recognition-tuned neural network model on a face attribute recognition task; and performing face attribute recognition on a given image using the neural network model fine-tuned on the face attribute recognition task. The present invention also provides a face attribute recognition device, a terminal and a storage medium. The present invention can train a neural network model suited to face attribute recognition and achieve good face attribute recognition performance.
Description
Technical field
The present invention relates to the technical field of image recognition, and in particular to a face attribute recognition method, device, terminal and storage medium.
Background art
Vision-based face attribute recognition, such as race/ethnicity recognition, gender recognition and age recognition, has a wide range of applications in fields such as video surveillance, face recognition, demographic analysis and business analysis. Traditional recognition algorithms based on hand-crafted features struggle to meet the accuracy requirements of real-world scenarios. In recent years, vision algorithms based on deep learning have made great progress in fields such as image classification, object detection and object segmentation. However, the biggest problem of deep learning is that it requires a very large number of samples to train a model, which makes it difficult for deep learning to clearly outperform traditional algorithms on tasks with limited sample sizes. In addition, studies have shown that the features learned by deep learning are closely tied to the corresponding task and are difficult to apply directly to other tasks.
Summary of the invention
In view of the above, it is necessary to provide a face attribute recognition method, device, terminal and storage medium that can train a neural network model suited to face attribute recognition and achieve good face attribute recognition performance.
A first aspect of the application provides a face attribute recognition method, the method including:
pre-training a neural network model;
fine-tuning the pre-trained neural network model on a face recognition task;
fine-tuning the face-recognition-tuned neural network model on a face attribute recognition task;
performing face attribute recognition on a given image using the neural network model fine-tuned on the face attribute recognition task.
In a possible implementation, pre-training the neural network model includes pre-training the neural network model using natural scene images; fine-tuning the pre-trained neural network model on the face recognition task includes fine-tuning the pre-trained neural network model on the face recognition task using face images; and fine-tuning the face-recognition-tuned neural network model on the face attribute recognition task includes fine-tuning the face-recognition-tuned neural network model on the face attribute recognition task using images annotated with face attributes.
In another possible implementation, the number of the natural scene images is greater than the number of the face images, and the number of the face images is greater than the number of the images annotated with face attributes.
In another possible implementation, the neural network model is a convolutional network model in which each convolutional layer is followed by a ReLU activation function.
In another possible implementation, the face attributes include race, gender, age and expression.
In another possible implementation, in the step of fine-tuning the pre-trained neural network model on the face recognition task and the step of fine-tuning the face-recognition-tuned neural network model on the face attribute recognition task, the learning rate of the last layer of the neural network model is 10 times that of the other layers.
In another possible implementation, in the step of fine-tuning the pre-trained neural network model on the face recognition task and the step of fine-tuning the face-recognition-tuned neural network model on the face attribute recognition task, face detection, face alignment and face normalization are performed on an image before the image is input into the neural network model.
A second aspect of the application provides a face attribute recognition device, the device including:
a pre-training unit, configured to pre-train a neural network model;
a first fine-tuning unit, configured to fine-tune the pre-trained neural network model on a face recognition task;
a second fine-tuning unit, configured to fine-tune the face-recognition-tuned neural network model on a face attribute recognition task;
a recognition unit, configured to perform face attribute recognition on a given image using the neural network model fine-tuned on the face attribute recognition task.
A third aspect of the application provides a terminal. The terminal includes a processor, and the processor implements the steps of the neural network model training method or the face attribute recognition method when executing a computer program stored in a memory.
A fourth aspect of the application provides a computer-readable storage medium on which a computer program is stored, and the computer program implements the steps of the neural network model training method or the face attribute recognition method when executed by a processor.
The present invention pre-trains a neural network model, fine-tunes the pre-trained neural network model on a face recognition task, fine-tunes the face-recognition-tuned neural network model on a face attribute recognition task, and performs face attribute recognition on a given image using the neural network model fine-tuned on the face attribute recognition task. The present invention combines the ideas of deep learning and transfer learning and applies them to face attribute recognition, so that a neural network model suited to face attribute recognition can be trained even with a limited number of samples, and good face attribute recognition performance can be obtained.
Brief description of the drawings
Fig. 1 is a flowchart of the neural network model training method provided by Embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of the training process of the neural network model.
Fig. 3 is a flowchart of the face attribute recognition method provided by Embodiment 2 of the present invention.
Fig. 4 is a structural diagram of the neural network model training device provided by Embodiment 3 of the present invention.
Fig. 5 is a structural diagram of the face attribute recognition device provided by Embodiment 4 of the present invention.
Fig. 6 is a schematic diagram of the terminal provided by Embodiment 5 of the present invention.
The following embodiments will further illustrate the present invention in combination with the above drawings.
Detailed description of the embodiments
To make the above objects, features and advantages of the present invention easier to understand, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, provided there is no conflict, the embodiments of the application and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention. The described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by a person skilled in the technical field of the present invention. The terms used herein in the description of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention.
Preferably, the face attribute recognition method of the present invention is applied in one or more terminals. The terminal is a device capable of automatically performing numerical computation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The terminal may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal may interact with a user through a keyboard, a mouse, a remote control, a touch pad, a voice control device, or the like.
Embodiment one
The neural network model training method of the present invention is described with reference to Fig. 1 and Fig. 2. The neural network model training method is applied to a terminal. Fig. 1 is a flowchart of the neural network model training method provided by Embodiment 1 of the present invention. Fig. 2 is a schematic diagram of the training process of the neural network model.
As shown in Fig. 1, the neural network model training method specifically includes the following steps:
101: Pre-train a neural network model.
In this embodiment, the neural network model is a convolutional neural network model. The convolutional neural network model includes convolutional layers, down-sampling (pooling) layers and fully connected layers. The convolutional neural network model may also include other layers, such as normalization layers and Dropout layers. The number of layers of the neural network model can be configured as needed. For example, if the input images are large, more convolutional layers and down-sampling layers are used; if the input images are small, fewer convolutional layers and down-sampling layers are used.
In a preferred embodiment, the convolutional neural network model includes 8 convolutional layers (Conv11, Conv12, Conv21, Conv22, Conv31, Conv32, Conv41, Conv42), 4 down-sampling layers (Pool 1, Pool 2, Pool 3, Pool 4), 1 Dropout layer (Dropout1) and 2 fully connected layers (Fc6, Fc7). Every two convolutional layers are followed by one down-sampling layer; for example, Pool 1 follows Conv11 and Conv12, and Pool 2 follows Conv21 and Conv22. There are 10 layers with parameters in total (i.e., 8 convolutional layers and 2 fully connected layers). Conv11 is the first layer of the convolutional neural network model, Fc7 is the last layer, and the Dropout layer is located after the convolutional and down-sampling layers and before the fully connected layers. In this embodiment, each convolutional layer is followed by an activation function, which is the ReLU function. In other embodiments, the activation function may be another function, such as the sigmoid function or the tanh function. An illustrative sketch of this architecture is given below.
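For illustration only, a minimal PyTorch sketch of such a network might look as follows; the channel widths, the 112x112 input size and the Fc6 width of 4096 are assumptions made for the sketch and are not specified in this embodiment.

```python
import torch
import torch.nn as nn

class FaceAttributeCNN(nn.Module):
    """Sketch of the 8-conv / 4-pool / 1-dropout / 2-FC model described above.
    Channel widths, the 112x112 input size and the Fc6 width are assumptions."""
    def __init__(self, num_classes=1000):
        super().__init__()
        def block(c_in, c_out):
            # two convolutional layers, each followed by ReLU, then one down-sampling layer
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2))
        self.features = nn.Sequential(
            block(3, 64),      # Conv11, Conv12, Pool 1
            block(64, 128),    # Conv21, Conv22, Pool 2
            block(128, 256),   # Conv31, Conv32, Pool 3
            block(256, 512))   # Conv41, Conv42, Pool 4
        self.dropout = nn.Dropout(0.5)              # Dropout1
        self.fc6 = nn.Linear(512 * 7 * 7, 4096)     # Fc6
        self.fc7 = nn.Linear(4096, num_classes)     # Fc7, the last layer; its output size
                                                    # is changed at each fine-tuning stage

    def forward(self, x):                           # x: (N, 3, 112, 112)
        x = self.features(x)
        x = self.dropout(torch.flatten(x, 1))
        return self.fc7(torch.relu(self.fc6(x)))
```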
Natural scene images may be used to pre-train the neural network model. In particular, in a preferred embodiment, the ImageNet image library is used to pre-train the neural network model. The ImageNet image library contains more than 1,000,000 images with annotated classes covering more than 1,000 categories and is suitable for large-scale network training. In other embodiments, other image libraries may be used for pre-training.
Table 1 lists the specific training parameters used in the neural network model training process of the preferred embodiment of the present invention.
Table 1 Training parameters of the neural network model training process
| Stage | Batch size | Learning rate | Training epochs |
| Pre-training | 128 | 0.01~0.00001 | 40 |
| Face recognition task fine-tuning | 64 | 0.00001 | 20 |
| Face attribute recognition task fine-tuning | 32 | 0.00001 | 10 |
As shown in Table 1, during pre-training of the neural network model, the number of images per batch is 128, the learning rate is stepped down from 0.01 to 0.00001 (i.e., the learning rate takes the values 0.01, 0.001, 0.0001 and 0.00001), and 40 epochs are trained in total. In other embodiments, the training parameters for pre-training may differ. For example, during pre-training of the neural network model, the number of images per batch may be 256, the learning rate may be stepped down from 0.001 to 0.00001 (i.e., 0.001, 0.0001, 0.00001), and the number of training epochs may be 60.
Table 1 also lists the training parameters for the face recognition task fine-tuning and the attribute recognition task fine-tuning of the neural network model, which are described in more detail below.
During pre-training, a neural network training algorithm, such as the back-propagation algorithm, may be used to train the neural network model. Neural network training algorithms are prior art and are not described in detail here. An illustrative pre-training sketch is given below.
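As an illustration of the pre-training stage under the parameters in Table 1, the following sketch uses stochastic gradient descent with a stepped learning-rate schedule; the ImageNet directory path, the use of SGD with momentum, and the exact epochs at which the learning rate is stepped are assumptions, not requirements of this embodiment.

```python
import torch
from torch import nn, optim
from torchvision import datasets, transforms

def pretrain(model, imagenet_root="data/imagenet"):   # placeholder path
    tf = transforms.Compose([transforms.Resize(128),
                             transforms.CenterCrop(112),
                             transforms.ToTensor()])
    loader = torch.utils.data.DataLoader(
        datasets.ImageFolder(imagenet_root, tf), batch_size=128, shuffle=True)
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    # Step the learning rate 0.01 -> 0.001 -> 0.0001 -> 0.00001 over the 40 epochs.
    scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10, 20, 30], gamma=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(40):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()          # back-propagation
            optimizer.step()
        scheduler.step()
```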
102: Fine-tune the pre-trained neural network model on the face recognition task.
Fine-tuning a neural network model means making small local adjustments, on the basis of an already trained model and through a certain training strategy, to the parameters the model has learned, so that the model can express a different objective function.
A face image library may be used to fine-tune the pre-trained neural network model on the face recognition task. When a face image library is used to fine-tune the pre-trained neural network model on the face recognition task, the output of the last layer of the neural network model (e.g., the last fully connected layer) is changed to match the number of persons in the face image library. For example, if the face image library used contains 500 persons, the output of the last layer of the neural network model is changed to 500 (i.e., 500 classes are output). In at least one embodiment, during the face recognition task fine-tuning, the learning rate of the last layer of the neural network model (e.g., the last fully connected layer) may be set to 10 times that of the other layers. For example, the learning rate of the other layers of the neural network model remains 0.00001 while the learning rate of the last layer is 0.0001.
The number of images used for the face recognition task fine-tuning may be smaller than the number of images used for pre-training. For example, the images used for pre-training number more than 1,000,000 and cover more than 1,000 categories, while the face image library used for the face recognition task fine-tuning contains 500 persons with about 40 to 100 images per person (about 20,000 to 50,000 images in total).
As shown in Table 1, during the face recognition task fine-tuning, the number of images per batch is 64, the learning rate is 0.00001, and 20 epochs are trained in total. In other embodiments, the training parameters for the face recognition task fine-tuning may differ. For example, during the face recognition task fine-tuning, the number of images per batch may be 32, the learning rate may be 0.0001, and the number of training epochs may be 15.
As shown in Fig. 2, in this preferred embodiment, during the face recognition task fine-tuning, face detection, face alignment and face normalization are first performed on a face image before the face image is input into the neural network model.
Similarly to pre-training, during the face recognition task fine-tuning, a neural network training algorithm, such as the back-propagation algorithm, may be used to train the pre-trained neural network model. A sketch of the fine-tuning setup follows.
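The following sketch illustrates how the last fully connected layer could be replaced with a 500-way identity classifier and given a learning rate 10 times that of the other layers; the choice of SGD with momentum and the parameter grouping are assumptions made for illustration.

```python
import torch.nn as nn
from torch import optim

def setup_face_recognition_finetune(model, num_identities=500, base_lr=0.00001):
    # Replace the last layer so that its output matches the number of persons
    # in the face image library (500 in the example above).
    model.fc7 = nn.Linear(model.fc7.in_features, num_identities)
    # Give the replaced last layer a learning rate 10 times that of the other layers.
    last_params = list(model.fc7.parameters())
    last_ids = {id(p) for p in last_params}
    other_params = [p for p in model.parameters() if id(p) not in last_ids]
    optimizer = optim.SGD(
        [{"params": other_params, "lr": base_lr},        # 0.00001 for the other layers
         {"params": last_params, "lr": base_lr * 10}],   # 0.0001 for the last layer
        momentum=0.9)
    return optimizer
```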
103: Fine-tune the face-recognition-tuned neural network model on the face attribute recognition task.
Images annotated with face attributes may be used to fine-tune the face-recognition-tuned neural network model on the face attribute recognition task.
The face attributes include race, gender, age, expression, and the like. Specifically, race may be divided into the yellow, white, black and brown races, or into a specific race (e.g., the yellow race) and other races. Gender may be divided into male and female. Age may be divided into child, teenager, youth, middle age and old age, or into different specific ages. Expression may be divided into happy, sad, angry, frightened, surprised, disgusted, and so on. Face attributes may be divided differently according to actual needs.
When performing the face attribute recognition task fine-tuning, images annotated with different face attributes may be used depending on the face attribute of interest.
For example, when the present invention is used for race recognition, images of the yellow, white, black and brown races may be used for the face attribute recognition task fine-tuning, or images of a specific race (e.g., yellow-race images) and images of other races (e.g., non-yellow-race images) may be used.
As another example, when the present invention is used for gender recognition, images of males and images of females may be used for the face attribute recognition task fine-tuning.
As a further example, when the present invention is used for age recognition, images of children, teenagers, youths, middle-aged people and old people may be used for the face attribute recognition task fine-tuning, or images of different specific age groups may be used.
When images annotated with face attributes are used to fine-tune the face-recognition-tuned neural network model on the face attribute recognition task, the output of the last layer of the neural network model (e.g., the last fully connected layer) is changed to match the number of classes of the annotated images. For example, if the annotated images include four classes (e.g., yellow-race, white-race, black-race and brown-race images), the output of the last layer of the neural network model is changed to 4 (i.e., 4 classes are output). In at least one embodiment, during the face attribute recognition task fine-tuning, the learning rate of the last layer of the neural network model (e.g., the last fully connected layer) may be set to 10 times that of the other layers. For example, the learning rate of the other layers of the neural network model remains 0.00001 while the learning rate of the last layer is 0.0001.
The number of images used for the face attribute recognition task fine-tuning may be smaller than the number of images used for the face recognition task fine-tuning. For example, the face image library used for the face recognition task fine-tuning contains 500 persons with about 40 to 100 images per person (about 20,000 to 50,000 images in total), while the images used for the face attribute recognition task fine-tuning include 2 classes (e.g., yellow-race images and non-yellow-race images) with about 1,000 images per class (about 2,000 images in total).
As shown in Table 1, during the face attribute recognition task fine-tuning, the number of images per batch is 32, the learning rate is 0.00001, and 10 epochs are trained in total. In other embodiments, the training parameters for the face attribute recognition task fine-tuning may differ. For example, during the face attribute recognition task fine-tuning, the number of images per batch may be 8, the learning rate may be 0.0001, and the number of training epochs may be 8.
As shown in Fig. 2, in this preferred embodiment, similarly to the face recognition task fine-tuning, during the face attribute recognition task fine-tuning, face detection, face alignment and face normalization are first performed on a face image before the face image is input into the neural network model. A minimal preprocessing sketch is given below.
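The sketch below illustrates one way the detection and normalization step used in both fine-tuning stages could be implemented, using OpenCV's Haar-cascade face detector; landmark-based face alignment is assumed to be handled by a separate component and is omitted here, and the 112x112 output size is an assumption.

```python
import cv2
import numpy as np

# Haar-cascade face detector shipped with OpenCV; landmark-based face alignment is
# assumed to be performed elsewhere and is not shown in this sketch.
_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_face(bgr_image, out_size=112):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                        # no face detected
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])     # keep the largest face
    crop = cv2.resize(bgr_image[y:y + h, x:x + w], (out_size, out_size))
    return crop.astype(np.float32) / 255.0                 # intensity normalization
```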
Similarly to pre-training, during the face attribute recognition task fine-tuning, a neural network training algorithm, such as the back-propagation algorithm, may be used to train the face-recognition-tuned neural network model.
Through the above steps 101 to 103, a neural network model suited to face attribute recognition is obtained; the overall three-stage procedure is sketched below.
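Purely as an illustration, the three stages could be chained as follows, reusing the sketches given earlier in this embodiment; the two-class attribute head (e.g., yellow race vs. non-yellow race), the dataset names and the finetune() training loop are hypothetical and are not part of this disclosure.

```python
from torch import nn, optim

def train_face_attribute_model():
    # Stage 1: pre-train on natural scene images (ImageNet), as in step 101.
    model = FaceAttributeCNN(num_classes=1000)   # architecture sketched earlier
    pretrain(model)

    # Stage 2: face recognition fine-tuning on a 500-person face library (step 102),
    # with the replaced last layer at 10x the base learning rate.
    optimizer = setup_face_recognition_finetune(model, num_identities=500)
    finetune(model, optimizer, data="face_image_library", epochs=20)   # hypothetical loop

    # Stage 3: face attribute fine-tuning (step 103), e.g. a 2-class race attribute,
    # again giving the replaced last layer 10x the learning rate of the other layers.
    model.fc7 = nn.Linear(model.fc7.in_features, 2)
    optimizer = optim.SGD(
        [{"params": [p for n, p in model.named_parameters() if not n.startswith("fc7")],
          "lr": 0.00001},
         {"params": model.fc7.parameters(), "lr": 0.0001}],
        momentum=0.9)
    finetune(model, optimizer, data="attribute_labelled_images", epochs=10)  # hypothetical loop
    return model
```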
The neural network model training method of Embodiment 1 pre-trains a neural network model, fine-tunes the pre-trained neural network model on a face recognition task, and fine-tunes the face-recognition-tuned neural network model on a face attribute recognition task. Embodiment 1 uses a neural network model with a deep learning structure and pre-trains the neural network model following the idea of deep learning; following the idea of transfer learning, it fine-tunes the pre-trained neural network model on the face recognition task to obtain a neural network model suited to face recognition, and fine-tunes the face-recognition-tuned neural network model on the face attribute recognition task to obtain a neural network model suited to face attribute recognition. The neural network model training method of Embodiment 1 therefore combines the ideas of deep learning and transfer learning and applies them to face attribute recognition, so that a neural network model suited to face attribute recognition can be trained even with a limited number of samples.
Embodiment two
Fig. 3 is a flowchart of the face attribute recognition method provided by Embodiment 2 of the present invention. As shown in Fig. 3, the face attribute recognition method specifically includes the following steps:
301: Pre-train a neural network model.
Step 301 of this embodiment is identical to step 101 of Embodiment 1; see the description of step 101 in Embodiment 1 for details, which are not repeated here.
302: Fine-tune the pre-trained neural network model on the face recognition task.
Step 302 of this embodiment is identical to step 102 of Embodiment 1; see the description of step 102 in Embodiment 1 for details, which are not repeated here.
303: Fine-tune the face-recognition-tuned neural network model on the face attribute recognition task.
Step 303 of this embodiment is identical to step 103 of Embodiment 1; see the description of step 103 in Embodiment 1 for details, which are not repeated here.
304: Perform face attribute recognition on a given image using the neural network model fine-tuned on the face attribute recognition task.
When face attribute recognition needs to be performed on a given image, the given image is input into the neural network model fine-tuned on the face attribute recognition task. The neural network model receives the given image, performs recognition, and produces a recognition result. The recognition result is the face attribute determined from the given image, such as race, gender, age or expression. A minimal inference sketch is given below.
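The inference step might look as follows, reusing the hypothetical preprocessing helper sketched in Embodiment 1; the two attribute class names are an illustrative assumption for a 2-class race attribute.

```python
import torch

ATTRIBUTE_NAMES = ["yellow race", "non-yellow race"]   # illustrative 2-class race attribute

@torch.no_grad()
def recognize_face_attribute(model, bgr_image):
    face = preprocess_face(bgr_image)      # detection / alignment / normalization sketch
    if face is None:
        return None                        # no face found in the given image
    x = torch.from_numpy(face).permute(2, 0, 1).unsqueeze(0)   # HWC -> NCHW
    model.eval()
    logits = model(x)
    return ATTRIBUTE_NAMES[logits.argmax(dim=1).item()]
```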
The face attribute recognition method of Embodiment 2 pre-trains a neural network model, fine-tunes the pre-trained neural network model on a face recognition task, fine-tunes the face-recognition-tuned neural network model on a face attribute recognition task, and performs face attribute recognition on a given image using the neural network model fine-tuned on the face attribute recognition task. Embodiment 2 uses a neural network model with a deep learning structure and pre-trains the neural network model following the idea of deep learning; following the idea of transfer learning, it fine-tunes the pre-trained neural network model on the face recognition task to obtain a neural network model suited to face recognition, and fine-tunes the face-recognition-tuned neural network model on the face attribute recognition task to obtain a neural network model suited to face attribute recognition. The face attribute recognition method of Embodiment 2 therefore combines the ideas of deep learning and transfer learning and applies them to face attribute recognition, so that a neural network model suited to face attribute recognition can be trained even with a limited number of samples, and good face attribute recognition results can be obtained with this neural network model.
To verify the effectiveness of the proposed scheme, comparative experiments were carried out on the convolutional neural network model described above, which includes 8 convolutional layers (Conv11, Conv12, Conv21, Conv22, Conv31, Conv32, Conv41, Conv42), 4 down-sampling layers (Pool 1, Pool 2, Pool 3, Pool 4), 1 Dropout layer (Dropout1) and 2 fully connected layers (Fc6, Fc7). The test set contains 500 positive samples and 500 negative samples. The classification results of the different strategies are shown in Table 2.
Table 2 Classification results of the different strategies
Experiment 1: The neural network model was pre-trained on the ImageNet image library, the features of the Fc6 layer were extracted as the attribute representation, and an SVM (support vector machine) was used as the attribute classifier. An SVM is a supervised learning model commonly used for pattern recognition, classification and regression analysis.
Experiment 2: The neural network model was pre-trained on the ImageNet image library and fine-tuned on the face recognition task; the features of the Fc6 layer were extracted as the attribute representation, and an SVM was used as the attribute classifier.
Experiment 3: The neural network model was pre-trained on the ImageNet image library and fine-tuned on the face recognition task and the face attribute recognition task; attribute classification was performed directly with the output of the network model.
Experiment 4: The neural network model was pre-trained on the ImageNet image library and fine-tuned on the face recognition task and the face attribute recognition task; the features of the Fc6 layer were extracted as the attribute representation, and an SVM was used as the attribute classifier. A sketch of this feature-plus-SVM variant is given below.
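The following is a minimal sketch of the Fc6-feature-plus-SVM variant used in Experiments 1, 2 and 4, assuming the model class sketched in Embodiment 1 and scikit-learn's SVC; the linear kernel is an assumption, as the kernel is not specified here.

```python
import torch
from sklearn.svm import SVC

@torch.no_grad()
def extract_fc6_features(model, images):
    """Run the network up to the Fc6 layer and return the features as a numpy array."""
    model.eval()
    x = model.features(images)                    # convolution + pooling stacks
    x = model.dropout(torch.flatten(x, 1))        # identity in eval mode
    return torch.relu(model.fc6(x)).cpu().numpy()

def train_attribute_svm(model, train_images, train_labels):
    features = extract_fc6_features(model, train_images)
    classifier = SVC(kernel="linear")             # linear kernel is an assumption
    classifier.fit(features, train_labels)
    return classifier
```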
As can be seen from Table 2, after the neural network model is pre-trained and then fine-tuned on the face recognition task and the face attribute recognition task, it can recognize face attributes well. Moreover, extracting the features of the Fc6 layer as the attribute representation and using an SVM as the attribute classifier gives better classification results than performing attribute classification directly with the output of the network model.
Aimed at the defects of existing face attribute recognition, namely low accuracy and susceptibility to factors such as illumination, pose and expression, the present invention exploits the advantages of deep learning in image recognition and applies them to the face attribute recognition task. Feature representations based on deep learning require large numbers of samples to train a model, whereas tasks in some specific fields usually only have a small number of samples available. The present invention combines the ideas of deep learning and transfer learning and applies them to face attribute recognition (e.g., race recognition), and can obtain good results even with a limited number of samples.
In the preferred embodiments of the present invention, during model training, first, natural scene images are used to pre-train the neural network model. Natural scene images are very easy to obtain, and there are many large publicly available data sets, such as ImageNet, that can be used to pre-train a neural network model. Second, face images are used to fine-tune the pre-trained neural network model on the face recognition task. Since attribute classification is performed on face images and the model has no prior knowledge of facial features, directly applying features learned from natural scene images to face images would give poor results, so the pre-trained neural network model is fine-tuned on a face recognition task over a certain number of face images. Finally, the features learned in the previous stage can represent facial characteristics well but lack knowledge related to specific face attributes such as race, so the model is further fine-tuned using a small number of images annotated with face attributes. Especially when the sample size is very limited, this method can achieve satisfactory results. The initialization of a neural network is crucial to its training and convergence. When samples are sufficiently abundant, specific initialization algorithms can generally ensure the convergence of the model, whereas training directly under conditions of very limited sample size is likely to cause the network to fall into a local minimum; the parameters learned by pre-training, together with network fine-tuning, can largely ensure that a globally optimal result is obtained. Through the pre-training and fine-tuning of the above stages, the obtained features can classify face attributes relatively well.
The face attribute recognition of the present invention can be applied in many fields, such as video surveillance, demographic analysis and business analysis. For example, the present invention can be used to perform face attribute recognition on face images of pedestrians or drivers captured by traffic monitoring, to determine the face attributes (e.g., gender) of the pedestrians or drivers.
Embodiment three
Fig. 4 is a structural diagram of the neural network model training device provided by Embodiment 3 of the present invention. As shown in Fig. 4, the neural network model training device 10 may include: a pre-training unit 401, a first fine-tuning unit 402 and a second fine-tuning unit 403.
The pre-training unit 401 is configured to pre-train a neural network model.
In this embodiment, the neural network model is the convolutional neural network model described in Embodiment 1 (convolutional layers, down-sampling layers, a Dropout layer and fully connected layers, with a ReLU activation function after each convolutional layer in the preferred embodiment). Natural scene images, for example the ImageNet image library, may be used for pre-training with the parameters shown in Table 1, and a neural network training algorithm such as the back-propagation algorithm may be used. See the description of step 101 in Embodiment 1 for details, which are not repeated here.
The first fine-tuning unit 402 is configured to fine-tune the pre-trained neural network model on the face recognition task.
Fine-tuning a neural network model means making small local adjustments, on the basis of an already trained model and through a certain training strategy, to the parameters the model has learned, so that the model can express a different objective function. A face image library may be used for the face recognition task fine-tuning, with the output of the last layer of the neural network model changed to match the number of persons in the library, the learning rate of the last layer set to 10 times that of the other layers, and face detection, face alignment and face normalization performed on each face image before it is input into the neural network model. See the description of step 102 in Embodiment 1 for details, which are not repeated here.
The second fine-tuning unit 403 is configured to fine-tune the face-recognition-tuned neural network model on the face attribute recognition task.
Images annotated with face attributes (such as race, gender, age or expression) may be used for the face attribute recognition task fine-tuning, with the output of the last layer of the neural network model changed to match the number of annotated classes, the learning rate of the last layer set to 10 times that of the other layers, and face detection, face alignment and face normalization performed on each image before it is input into the neural network model. See the description of step 103 in Embodiment 1 for details, which are not repeated here.
Through the units 401 to 403, the neural network model training device can train a neural network model suited to face attribute recognition.
The neural network model training device of Embodiment 3 pre-trains a neural network model, fine-tunes the pre-trained neural network model on a face recognition task, and fine-tunes the face-recognition-tuned neural network model on a face attribute recognition task. Embodiment 3 uses a neural network model with a deep learning structure and pre-trains the neural network model following the idea of deep learning; following the idea of transfer learning, it fine-tunes the pre-trained neural network model on the face recognition task to obtain a neural network model suited to face recognition, and fine-tunes the face-recognition-tuned neural network model on the face attribute recognition task to obtain a neural network model suited to face attribute recognition. The neural network model training device of Embodiment 3 therefore combines the ideas of deep learning and transfer learning and applies them to face attribute recognition, so that a neural network model suited to face attribute recognition can be trained even with a limited number of samples.
Embodiment four
Fig. 5 is a structural diagram of the face attribute recognition device provided by Embodiment 4 of the present invention. As shown in Fig. 5, the face attribute recognition device 50 may include: a pre-training unit 501, a first fine-tuning unit 502, a second fine-tuning unit 503 and a recognition unit 504.
The pre-training unit 501 is configured to pre-train a neural network model.
The pre-training unit 501 in this embodiment is identical to the pre-training unit 401 in Embodiment 3; see the description of the pre-training unit 401 in Embodiment 3 for details, which are not repeated here.
The first fine-tuning unit 502 is configured to fine-tune the pre-trained neural network model on the face recognition task.
The first fine-tuning unit 502 in this embodiment is identical to the first fine-tuning unit 402 in Embodiment 3; see the description of the first fine-tuning unit 402 in Embodiment 3 for details, which are not repeated here.
The second fine-tuning unit 503 is configured to fine-tune the face-recognition-tuned neural network model on the face attribute recognition task.
The second fine-tuning unit 503 in this embodiment is identical to the second fine-tuning unit 403 in Embodiment 3; see the description of the second fine-tuning unit 403 in Embodiment 3 for details, which are not repeated here.
The recognition unit 504 is configured to perform face attribute recognition on a given image using the neural network model fine-tuned on the face attribute recognition task.
When face attribute recognition needs to be performed on a given image, the given image is input into the neural network model fine-tuned on the face attribute recognition task. The neural network model receives the given image, performs recognition, and produces a recognition result. The recognition result is the face attribute determined from the given image, such as race, gender, age or expression.
The face attribute recognition device of Embodiment 4 pre-trains a neural network model, fine-tunes the pre-trained neural network model on a face recognition task, fine-tunes the face-recognition-tuned neural network model on a face attribute recognition task, and performs face attribute recognition on a given image using the neural network model fine-tuned on the face attribute recognition task. Embodiment 4 uses a neural network model with a deep learning structure and pre-trains the neural network model following the idea of deep learning; following the idea of transfer learning, it fine-tunes the pre-trained neural network model on the face recognition task to obtain a neural network model suited to face recognition, and fine-tunes the face-recognition-tuned neural network model on the face attribute recognition task to obtain a neural network model suited to face attribute recognition. The face attribute recognition device of Embodiment 4 therefore combines the ideas of deep learning and transfer learning and applies them to face attribute recognition, so that a neural network model suited to face attribute recognition can be trained even with a limited number of samples, and good face attribute recognition results can be obtained with this neural network model.
Embodiment five
Fig. 6 is a schematic diagram of the terminal provided by Embodiment 5 of the present invention. The terminal 1 includes a memory 20, a processor 30, and a computer program 40 stored in the memory 20 and executable on the processor 30, such as a neural network model training program or a face attribute recognition program. When executing the computer program 40, the processor 30 implements the steps of the above neural network model training method or face attribute recognition method embodiments, such as steps 101 to 103 shown in Fig. 1 or steps 301 to 304 shown in Fig. 3. Alternatively, when executing the computer program 40, the processor 30 implements the functions of the modules/units of the above device embodiments, such as the units 401 to 403 in Fig. 4 or the units 501 to 504 in Fig. 5.
Exemplarily, the computer program 40 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 20 and executed by the processor 30 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of accomplishing specific functions, and the instruction segments are used to describe the execution process of the computer program 40 in the terminal 1. For example, the computer program 40 may be divided into the pre-training unit 401, the first fine-tuning unit 402 and the second fine-tuning unit 403 in Fig. 4, or into the pre-training unit 501, the first fine-tuning unit 502, the second fine-tuning unit 503 and the recognition unit 504 in Fig. 5; see Embodiment 3 and Embodiment 4 for the specific functions of each unit.
The terminal 1 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. A person skilled in the art will appreciate that the schematic diagram of Fig. 6 is only an example of the terminal 1 and does not constitute a limitation on the terminal 1; the terminal 1 may include more or fewer components than illustrated, combine certain components, or have different components. For example, the terminal 1 may also include input/output devices, network access devices, buses, and the like.
The processor 30 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor. The processor 30 is the control center of the terminal 1 and connects the various parts of the whole terminal 1 through various interfaces and lines.
The memory 20 may be used to store the computer program 40 and/or the modules/units. The processor 30 implements the various functions of the terminal 1 by running or executing the computer program and/or modules/units stored in the memory 20 and by calling data stored in the memory 20. The memory 20 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the terminal 1 (such as audio data or a phone book). In addition, the memory 20 may include a high-speed random access memory, and may also include a non-volatile memory such as a hard disk, an internal memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
If the integrated modules/units of the terminal 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present invention may also be accomplished by a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of the above method embodiments. The computer program includes computer program code, and the computer program code may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
In the several embodiments provided by the present invention, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the terminal embodiments described above are merely schematic; the division of the units is only a division by logical function, and other division manners are possible in actual implementation.
In addition, the functional units in the embodiments of the present invention may be integrated into the same processing unit, each unit may exist physically on its own, or two or more units may be integrated into the same unit. The above integrated units may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from its spirit or essential attributes. Therefore, the embodiments should be regarded in every respect as exemplary and non-restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; all changes falling within the meaning and range of equivalency of the claims are therefore intended to be embraced in the present invention. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. A plurality of units or terminals recited in a terminal claim may also be implemented by the same unit or terminal through software or hardware. Words such as "first" and "second" are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are merely intended to illustrate, rather than to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present invention may be modified or equivalently substituted without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. A face attribute recognition method, characterized in that the method comprises:
pre-training a neural network model;
performing face recognition task fine-tuning on the pre-trained neural network model;
performing face attribute recognition task fine-tuning on the neural network model after the face recognition task fine-tuning;
performing face attribute recognition on a given image using the neural network model after the face attribute recognition task fine-tuning.
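For readers approaching the claims from an implementation angle, the three-stage schedule of claim 1 can be sketched as follows. This is an illustrative sketch only, not code from the patent: the backbone (ResNet-18), the head sizes, the optimiser settings and the data-loader names are all assumptions introduced for the example.

```python
import torch
import torch.nn as nn
from torchvision import models

def run_stage(model, loader, lr, epochs=1):
    """One training stage: plain supervised training with cross-entropy."""
    optimiser = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimiser.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimiser.step()

# Stage 1: pre-train the neural network model (here a ResNet-18) on a large
# natural-scene classification dataset.
model = models.resnet18(num_classes=1000)
# run_stage(model, natural_scene_loader, lr=1e-2)

# Stage 2: swap the classifier head and fine-tune on a face-recognition task.
model.fc = nn.Linear(model.fc.in_features, 10000)   # 10000 identities (assumed)
# run_stage(model, face_identity_loader, lr=1e-3)

# Stage 3: swap the head again and fine-tune on images labelled with face
# attributes; the resulting model is then used for attribute recognition.
model.fc = nn.Linear(model.fc.in_features, 40)      # 40 attribute classes (assumed)
# run_stage(model, face_attribute_loader, lr=1e-3)

# Inference: attribute recognition on a given (preprocessed) image tensor.
# with torch.no_grad():
#     scores = model(image_tensor.unsqueeze(0))
```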
2. The method according to claim 1, characterized in that pre-training the neural network model comprises pre-training the neural network model using natural scene images;
performing face recognition task fine-tuning on the pre-trained neural network model comprises performing the face recognition task fine-tuning on the pre-trained neural network model using face images;
and performing face attribute recognition task fine-tuning on the neural network model after the face recognition task fine-tuning comprises performing the face attribute recognition task fine-tuning on that model using images labelled with face attributes.
3. The method according to claim 2, characterized in that the number of natural scene images is greater than the number of face images, and the number of face images is greater than the number of images labelled with face attributes.
4. The method according to any one of claims 1 to 3, characterized in that the neural network model is a convolutional network model in which each convolutional layer is followed by an activation function.
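As an illustrative sketch of the structure described in claim 4 (not the exact architecture disclosed in the embodiments), the following snippet shows convolutional layers each followed immediately by an activation function; the channel counts and kernel sizes are assumed values.

```python
import torch.nn as nn

# Each convolutional layer is directly followed by an activation function,
# as claim 4 requires; the layer sizes below are illustrative assumptions.
conv_stack = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),   # activation after the first convolution
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),   # activation after the second convolution
    nn.MaxPool2d(kernel_size=2),
)
```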
5. The method according to any one of claims 1 to 3, characterized in that the face attributes include race, gender, age, and expression.
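One common way to handle the four attribute groups listed in claim 5 is a shared feature extractor with one classification head per attribute. The sketch below is a hypothetical illustration only (the claim does not prescribe a head layout), and the per-attribute class counts are assumptions.

```python
import torch.nn as nn

class AttributeHeads(nn.Module):
    """Separate classification heads for race, gender, age and expression."""

    def __init__(self, feature_dim=512):
        super().__init__()
        self.race = nn.Linear(feature_dim, 4)        # assumed 4 race classes
        self.gender = nn.Linear(feature_dim, 2)      # assumed 2 gender classes
        self.age = nn.Linear(feature_dim, 8)         # assumed 8 age brackets
        self.expression = nn.Linear(feature_dim, 7)  # assumed 7 expression classes

    def forward(self, features):
        # `features` is the output of the shared backbone for one batch.
        return {
            "race": self.race(features),
            "gender": self.gender(features),
            "age": self.age(features),
            "expression": self.expression(features),
        }
```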
6. The method according to any one of claims 1 to 3, characterized in that in the steps of performing face recognition task fine-tuning on the pre-trained neural network model and performing face attribute recognition task fine-tuning on the neural network model after the face recognition task fine-tuning, the learning rate of the last layer of the neural network model is 10 times that of the other layers.
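The 10x learning-rate ratio of claim 6 is typically expressed with optimiser parameter groups. The snippet below is a sketch under assumptions: a ResNet-style stand-in model whose final layer is `fc`, and an arbitrary base rate.

```python
import torch
from torchvision import models

model = models.resnet18(num_classes=40)  # stand-in model; its final layer is `fc`
base_lr = 1e-3                           # assumed base learning rate

# Split parameters: everything except the final classifier vs. the final layer.
backbone_params = [p for name, p in model.named_parameters()
                   if not name.startswith("fc")]

optimiser = torch.optim.SGD(
    [
        {"params": backbone_params, "lr": base_lr},             # other layers
        {"params": model.fc.parameters(), "lr": base_lr * 10},  # last layer at 10x
    ],
    momentum=0.9,
)
```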
7. The method according to any one of claims 1 to 3, characterized in that in the steps of performing face recognition task fine-tuning on the pre-trained neural network model and performing face attribute recognition task fine-tuning on the neural network model after the face recognition task fine-tuning, face detection, face alignment, and face normalization are performed on an image before the image is input into the neural network model.
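The preprocessing chain of claim 7 (detection, alignment, normalisation) can be sketched as below. The `detect_face` and `align_face` callables are placeholders for whichever detector and landmark-based aligner an implementation uses, and the normalisation constants are assumptions.

```python
import numpy as np

def preprocess(image, detect_face, align_face, size=(224, 224)):
    """Run the claim-7 preprocessing chain on an HxWx3 uint8 image array."""
    box = detect_face(image)              # 1. face detection (placeholder callable)
    face = align_face(image, box, size)   # 2. face alignment to a canonical pose/size
    face = face.astype(np.float32) / 255.0
    face = (face - 0.5) / 0.5             # 3. normalisation to roughly [-1, 1]
    return face
```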
8. A face attribute recognition device, characterized in that the device comprises:
a pre-training unit, configured to pre-train a neural network model;
a first fine-tuning unit, configured to perform face recognition task fine-tuning on the pre-trained neural network model;
a second fine-tuning unit, configured to perform face attribute recognition task fine-tuning on the neural network model after the face recognition task fine-tuning;
a recognition unit, configured to perform face attribute recognition on a given image using the neural network model after the face attribute recognition task fine-tuning.
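As a purely hypothetical illustration of how the four units of claim 8 might be wired together in software (the claim defines functional units, not a concrete class), a minimal sketch could look like the following; the stage callables are placeholders standing in for training and inference code such as the sketches above.

```python
class FaceAttributeRecognitionDevice:
    """Minimal sketch of the unit layout in claim 8 (placeholder stage callables)."""

    def __init__(self, pretrain_fn, finetune_face_fn, finetune_attr_fn, recognize_fn):
        self.pretraining_unit = pretrain_fn              # pre-trains the model
        self.first_finetuning_unit = finetune_face_fn    # face-recognition fine-tuning
        self.second_finetuning_unit = finetune_attr_fn   # attribute fine-tuning
        self.recognition_unit = recognize_fn             # attribute recognition

    def build_and_recognize(self, model, data, image):
        self.pretraining_unit(model, data["natural_scenes"])
        self.first_finetuning_unit(model, data["faces"])
        self.second_finetuning_unit(model, data["labelled_attributes"])
        return self.recognition_unit(model, image)
```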
9. A terminal, characterized in that the terminal comprises a processor, and the processor is configured to execute a computer program stored in a memory to implement the steps of the face attribute recognition method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the face attribute recognition method according to any one of claims 1 to 7.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610674170.7A CN106295584A (en) | 2016-08-16 | 2016-08-16 | Crowd attribute recognition method based on deep transfer learning |
CN2016106741707 | 2016-08-16 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107766787A true CN107766787A (en) | 2018-03-06 |
CN107766787B CN107766787B (en) | 2023-04-07 |
Family
ID=57678081
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610674170.7A Pending CN106295584A (en) | 2016-08-16 | 2016-08-16 | Crowd attribute recognition method based on deep transfer learning |
CN201710591603.7A Active CN107766787B (en) | 2016-08-16 | 2017-07-19 | Face attribute identification method, device, terminal and storage medium |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610674170.7A Pending CN106295584A (en) | 2016-08-16 | 2016-08-16 | Crowd attribute recognition method based on deep transfer learning |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN106295584A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109190514A (en) * | 2018-08-14 | 2019-01-11 | 电子科技大学 | Face character recognition methods and system based on two-way shot and long term memory network |
CN109271884A (en) * | 2018-08-29 | 2019-01-25 | 厦门理工学院 | Face character recognition methods, device, terminal device and storage medium |
CN109726170A (en) * | 2018-12-26 | 2019-05-07 | 上海新储集成电路有限公司 | A kind of on-chip system chip of artificial intelligence |
CN110008841A (en) * | 2019-03-08 | 2019-07-12 | 中国华戎科技集团有限公司 | A kind of Expression Recognition model building method and system |
CN110110611A (en) * | 2019-04-16 | 2019-08-09 | 深圳壹账通智能科技有限公司 | Portrait attribute model construction method, device, computer equipment and storage medium |
WO2020034902A1 (en) * | 2018-08-11 | 2020-02-20 | 昆山美卓智能科技有限公司 | Smart desk having status monitoring function, monitoring system server, and monitoring method |
CN111325311A (en) * | 2018-12-14 | 2020-06-23 | 深圳云天励飞技术有限公司 | Neural network model generation method and device, electronic equipment and storage medium |
CN111651989A (en) * | 2020-04-13 | 2020-09-11 | 上海明略人工智能(集团)有限公司 | Named entity recognition method and device, storage medium and electronic device |
CN112069898A (en) * | 2020-08-05 | 2020-12-11 | 中国电子科技集团公司电子科学研究院 | Method and device for recognizing human face group attribute based on transfer learning |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295584A (en) * | 2016-08-16 | 2017-01-04 | 深圳云天励飞技术有限公司 | Depth migration study is in the recognition methods of crowd's attribute |
CN107220600B (en) * | 2017-05-17 | 2019-09-10 | 清华大学深圳研究生院 | A kind of Picture Generation Method and generation confrontation network based on deep learning |
CN108932459B (en) * | 2017-05-26 | 2021-12-10 | 富士通株式会社 | Face recognition model training method and device and face recognition method |
CN107704877B (en) * | 2017-10-09 | 2020-05-29 | 哈尔滨工业大学深圳研究生院 | Image privacy perception method based on deep learning |
CN107730497B (en) * | 2017-10-27 | 2021-09-10 | 哈尔滨工业大学 | Intravascular plaque attribute analysis method based on deep migration learning |
CN108108662B (en) * | 2017-11-24 | 2021-05-25 | 深圳市华尊科技股份有限公司 | Deep neural network recognition model and recognition method |
CN107862387B (en) * | 2017-12-05 | 2022-07-08 | 深圳地平线机器人科技有限公司 | Method and apparatus for training supervised machine learning models |
CN109934242A (en) * | 2017-12-15 | 2019-06-25 | 北京京东尚科信息技术有限公司 | Image identification method and device |
CN108256447A (en) * | 2017-12-29 | 2018-07-06 | 广州海昇计算机科技有限公司 | A kind of unmanned plane video analysis method based on deep neural network |
CN108427972B (en) * | 2018-04-24 | 2024-06-07 | 云南佳叶现代农业发展有限公司 | Tobacco leaf classification method and system based on online learning |
CN108596138A (en) * | 2018-05-03 | 2018-09-28 | 南京大学 | A kind of face identification method based on migration hierarchical network |
CN110516514B (en) * | 2018-05-22 | 2022-09-30 | 杭州海康威视数字技术股份有限公司 | Modeling method and device of target detection model |
CN110738071A (en) * | 2018-07-18 | 2020-01-31 | 浙江中正智能科技有限公司 | face algorithm model training method based on deep learning and transfer learning |
CN109214386B (en) * | 2018-09-14 | 2020-11-24 | 京东数字科技控股有限公司 | Method and apparatus for generating image recognition model |
CN109377501A (en) * | 2018-09-30 | 2019-02-22 | 上海鹰觉科技有限公司 | Remote sensing images naval vessel dividing method and system based on transfer learning |
CN109300170B (en) * | 2018-10-18 | 2022-10-28 | 云南大学 | Method for transmitting shadow of portrait photo |
CN109743579A (en) * | 2018-12-24 | 2019-05-10 | 秒针信息技术有限公司 | A kind of method for processing video frequency and device, storage medium and processor |
CN109743580A (en) * | 2018-12-24 | 2019-05-10 | 秒针信息技术有限公司 | A kind of method for processing video frequency and device, storage medium and processor |
CN110009059B (en) * | 2019-04-16 | 2022-03-29 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating a model |
CN111383357A (en) * | 2019-05-31 | 2020-07-07 | 纵目科技(上海)股份有限公司 | Network model fine-tuning method, system, terminal and storage medium adapting to target data set |
CN110569780A (en) * | 2019-09-03 | 2019-12-13 | 北京清帆科技有限公司 | high-precision face recognition method based on deep transfer learning |
CN110879993B (en) * | 2019-11-29 | 2023-03-14 | 北京市商汤科技开发有限公司 | Neural network training method, and execution method and device of face recognition task |
CN112801054B (en) * | 2021-04-01 | 2021-06-22 | 腾讯科技(深圳)有限公司 | Face recognition model processing method, face recognition method and device |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101021900A (en) * | 2007-03-15 | 2007-08-22 | 上海交通大学 | Method for making human face posture estimation utilizing dimension reduction method |
CN104217216A (en) * | 2014-09-01 | 2014-12-17 | 华为技术有限公司 | Method and device for generating detection model, method and device for detecting target |
CN104408470A (en) * | 2014-12-01 | 2015-03-11 | 中科创达软件股份有限公司 | Gender detection method based on average face preliminary learning |
CN104751140A (en) * | 2015-03-30 | 2015-07-01 | 常州大学 | Three-dimensional face recognition algorithm based on deep learning SDAE theory and application thereof in field of finance |
CN106295584A (en) * | 2016-08-16 | 2017-01-04 | 深圳云天励飞技术有限公司 | Depth migration study is in the recognition methods of crowd's attribute |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11836631B2 (en) * | 2018-08-11 | 2023-12-05 | Kunshan Meizhuo Intelligent Technology Co., Ltd. | Smart desk having status monitoring function, monitoring system server, and monitoring method |
CN113792625A (en) * | 2018-08-11 | 2021-12-14 | 昆山美卓智能科技有限公司 | Intelligent table with state monitoring function, state monitoring system and server |
US20210326585A1 (en) * | 2018-08-11 | 2021-10-21 | Kunshan Meizhuo Intelligent Technology Co., Ltd. | Smart desk having status monitoring function, monitoring system server, and monitoring method |
WO2020034902A1 (en) * | 2018-08-11 | 2020-02-20 | 昆山美卓智能科技有限公司 | Smart desk having status monitoring function, monitoring system server, and monitoring method |
CN109190514B (en) * | 2018-08-14 | 2021-10-01 | 电子科技大学 | Face attribute recognition method and system based on bidirectional long-short term memory network |
CN109190514A (en) * | 2018-08-14 | 2019-01-11 | 电子科技大学 | Face character recognition methods and system based on two-way shot and long term memory network |
CN109271884A (en) * | 2018-08-29 | 2019-01-25 | 厦门理工学院 | Face character recognition methods, device, terminal device and storage medium |
CN111325311A (en) * | 2018-12-14 | 2020-06-23 | 深圳云天励飞技术有限公司 | Neural network model generation method and device, electronic equipment and storage medium |
CN111325311B (en) * | 2018-12-14 | 2024-03-29 | 深圳云天励飞技术有限公司 | Neural network model generation method for image recognition and related equipment |
CN109726170A (en) * | 2018-12-26 | 2019-05-07 | 上海新储集成电路有限公司 | A kind of on-chip system chip of artificial intelligence |
CN110008841A (en) * | 2019-03-08 | 2019-07-12 | 中国华戎科技集团有限公司 | A kind of Expression Recognition model building method and system |
CN110110611A (en) * | 2019-04-16 | 2019-08-09 | 深圳壹账通智能科技有限公司 | Portrait attribute model construction method, device, computer equipment and storage medium |
CN111651989A (en) * | 2020-04-13 | 2020-09-11 | 上海明略人工智能(集团)有限公司 | Named entity recognition method and device, storage medium and electronic device |
CN111651989B (en) * | 2020-04-13 | 2024-04-02 | 上海明略人工智能(集团)有限公司 | Named entity recognition method and device, storage medium and electronic device |
CN112069898A (en) * | 2020-08-05 | 2020-12-11 | 中国电子科技集团公司电子科学研究院 | Method and device for recognizing human face group attribute based on transfer learning |
Also Published As
Publication number | Publication date |
---|---|
CN106295584A (en) | 2017-01-04 |
CN107766787B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107766787A (en) | Face character recognition methods, device, terminal and storage medium | |
Chen et al. | Two-layer fuzzy multiple random forest for speech emotion recognition in human-robot interaction | |
Wen et al. | Ensemble of deep neural networks with probability-based fusion for facial expression recognition | |
Liu et al. | Deepfood: Deep learning-based food image recognition for computer-aided dietary assessment | |
CN106445919A (en) | Sentiment classifying method and device | |
CN107844784A (en) | Face identification method, device, computer equipment and readable storage medium storing program for executing | |
WO2022042043A1 (en) | Machine learning model training method and apparatus, and electronic device | |
CN108776774A (en) | A kind of human facial expression recognition method based on complexity categorization of perception algorithm | |
CN104679863A (en) | Method and system for searching images by images based on deep learning | |
CN104850617B (en) | Short text processing method and processing device | |
CN110110719A (en) | A kind of object detection method based on attention layer region convolutional neural networks | |
CN109918642A (en) | The sentiment analysis method and system of Active Learning frame based on committee's inquiry | |
Bertrand et al. | Bark and leaf fusion systems to improve automatic tree species recognition | |
CN115907001B (en) | Knowledge distillation-based federal graph learning method and automatic driving method | |
CN110210027B (en) | Fine-grained emotion analysis method, device, equipment and medium based on ensemble learning | |
Cai et al. | MIFAD-net: multi-layer interactive feature fusion network with angular distance loss for face emotion recognition | |
CN108021908A (en) | Face age bracket recognition methods and device, computer installation and readable storage medium storing program for executing | |
CN110415071A (en) | A kind of competing product control methods of automobile based on opining mining analysis | |
Xiao | Artificial intelligence programming with Python: from zero to hero | |
CN108009248A (en) | A kind of data classification method and system | |
Bian et al. | Ensemble feature learning for material recognition with convolutional neural networks | |
Li et al. | Hierarchical knowledge squeezed adversarial network compression | |
CN111310820A (en) | Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration | |
Al-Tamimi | Combining convolutional neural networks and slantlet transform for an effective image retrieval scheme | |
CN114625908A (en) | Text expression package emotion analysis method and system based on multi-channel attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||