CN109325443A - Face attribute recognition method based on multi-instance multi-label deep transfer learning - Google Patents
- Publication number
- CN109325443A (application CN201811093395.9A)
- Authority
- CN
- China
- Prior art keywords
- face
- tag
- network model
- image data
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a face attribute recognition method based on multi-instance multi-label deep transfer learning, comprising the following steps: prepare a face image dataset; for each face image, extract features from multiple neural layers of a deep convolutional neural network transfer model and combine them into a multi-layer face feature; build a network model for extracting multi-label relationship features, train it with the multi-layer face feature as input and multiple face attribute labels as ground truth, and fix the network model parameters; design a linear binary classifier for each face attribute, transfer the trained multi-label relationship feature network model as a feature extractor to the multi-attribute classifier model, and train each linear binary classifier on the face image dataset. The invention adopts transfer learning to quickly and efficiently transfer a powerful pre-trained model to the selected dataset, builds and trains a structurally simple multi-label relationship feature model, and trains the linear binary classifiers of multiple face attributes simultaneously.
Description
Technical field
The invention belongs to the field of computer vision and relates to a face attribute recognition method implemented with deep learning.
Background art
Face attribute recognition is an important problem and can support research in the field of face recognition, such as face verification and face detection. The face and its neighboring regions in a face image carry many attributes; those most often considered include gender, hair style, expression, and whether glasses or a hat are worn. Recognizing face attributes in images captured outside the laboratory is considerably more complex and difficult, because many imaging conditions vary: face pose differs, occlusions occur, and resolution, brightness and color all change. Research on face attribute recognition in such unconstrained scenes is nevertheless of great significance, because these images are closer to the face pictures obtained in real life. Using computer technology to automatically annotate the face attributes of a given face picture has important practical value in many fields, for example surveillance systems and criminal case investigation.
Feature extraction from face images is essential in face attribute recognition: the quality of the extracted face features often directly determines how well an attribute classifier can recognize face attributes. Traditional hand-designed features, such as the LBP operator, perform poorly for face attribute recognition. Using deep learning to learn more expressive and more effective features has therefore become one of the popular topics in computer vision. However, building a network from scratch and training a face attribute recognition model requires a large amount of labeled image data, training is very time-consuming, the parameters to be tuned are difficult to determine for different situations, and the practicality of this approach in real applications is limited.
Summary of the invention
Object of the invention: in view of the above prior art, a face attribute recognition method based on multi-instance multi-label deep transfer learning is proposed.
Technical solution: a face attribute recognition method based on multi-instance multi-label deep transfer learning comprises the following steps:
Step 1: prepare a face image dataset in which each face image carries its corresponding face attribute labels; extract the key point coordinates of each face image and use the key points to perform face alignment; then randomly divide the face image dataset into two parts, FaceA and FaceB.
Step 2: for each face image, extract features from multiple neural layers of a deep convolutional neural network transfer model and combine them into a multi-layer face feature.
Step 3: build a network model for extracting multi-label relationship features; with the multi-layer face feature extracted in step 2 as input and multiple face attribute labels as ground truth, train the model on the FaceA part of the face image dataset and fix the parameters of the network model for extracting multi-label relationship features.
Step 4: design a linear binary classifier for each face attribute; retain the network model of multi-label relationship features trained in step 3 up to its second-to-last layer, transfer it as a feature extractor to the multi-attribute classifier model, and train each linear binary classifier on the FaceB part of the face image dataset.
Beneficial effects: some previous methods build and train a huge convolutional neural network from scratch; the training results are often unsatisfactory and a large amount of data is needed. The present invention instead adopts transfer learning and quickly and efficiently transfers a powerful pre-trained model to its own dataset. Unlike previous approaches that extract only the last layer of the transferred model, this method extracts multiple hidden layers of the neural network. To make the model better fit the target dataset, the method builds and trains a structurally simple multi-label relationship feature model and trains the linear binary classifiers of multiple face attributes simultaneously, which not only further increases training speed but also yields better face attribute recognition results.
Specific embodiment
The present invention is further explained below.
A face attribute recognition method based on multi-instance multi-label deep transfer learning comprises the following steps:
Step 1: prepare a face image dataset in which each face image carries its corresponding face attribute labels; extract the key point coordinates of each face image and use the key points to perform face alignment; then randomly divide the face image dataset into two parts, FaceA and FaceB, at a 1:1 ratio. These two face datasets are used separately in the following steps.
The face image dataset selected in this embodiment is CelebA, which has already been face-aligned and contains more than 200,000 face images together with ground-truth values for 40 face attributes. To reduce the imbalance of the individual attribute labels as far as possible, this embodiment selects the following 11 face attributes: Arched Eyebrows, Attractive, Heavy Makeup, High Cheekbones, Male, Mouth Slightly Open, Oval Face, Pointy Nose, Smiling, Wavy Hair and Wearing Lipstick. The selection criterion is that each attribute has more than 50,000 images with positive labels and more than 50,000 with negative labels.
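As an illustration of the attribute selection and the 1:1 FaceA/FaceB split described above, the following sketch assumes a CelebA-style 0/1 label matrix; the array shape and random labels are placeholders rather than the actual data, and real CelebA labels are stored as +1/-1 and would first be mapped to 0/1.

```python
import numpy as np

# Hypothetical stand-in for the CelebA attribute labels: one row per image,
# one column per attribute, entries in {0, 1}.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=(202_599, 40))

# Keep attributes whose positive and negative label counts both exceed 50,000,
# mirroring the selection rule stated above.
pos = labels.sum(axis=0)
neg = labels.shape[0] - pos
selected_attrs = np.where((pos > 50_000) & (neg > 50_000))[0]

# Random 1:1 split of the image indices into the FaceA and FaceB parts.
perm = rng.permutation(labels.shape[0])
face_a_idx, face_b_idx = np.array_split(perm, 2)
```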
Step 2: for each face image, features are extracted from multiple neural layers of the deep convolutional neural network transfer model and combined into a multi-layer face feature. Here the deep convolutional neural network transfer model is Google's Inception v3 model, whose network architecture is expressed as:
input -> conv_1 -> pool_1 -> conv_2 -> pool_2 -> mixed_1 -> mixed_2 -> ... -> mixed_9 -> mixed_10 -> pool_3 -> fc -> softmax
where mixed_i (i = 1, 2, ..., 10) denotes an inception module composed of several convolution layers and pooling layers of different structures, which saves a large number of parameters, speeds up computation and alleviates over-fitting. Different modules capture different characteristic features. The neural layer features are extracted as
feature_i = [f_k(pool_k(x_i)), g_k(mixed_k(x_i))], concatenated over the selected layers k, for i = 1, 2, ..., N,
where feature_i denotes the neural layer feature extracted from the deep convolutional neural network transfer model, N denotes the number of images in the face image dataset, x_i denotes the i-th face image, pool_k and mixed_k denote the functions that, with x_i as input, extract the last-layer feature of the k-th pool module and the k-th mixed module of the transfer model, and f_k and g_k denote the functions that down-sample those layer outputs and convert them into one-dimensional vectors.
The parameter information required to extract the neural layer features, including the name scope of each neural network layer used, the size of the corresponding neural layer, and the down-sampling parameter settings, is shown in Table 1.
Table 1
Regarding the pooling layers, it should be noted that average pooling is used throughout and the padding parameter is always set to VALID.
The features extracted from each layer, already converted into one-dimensional vectors, are spliced into the multi-layer face feature, whose size is 14336 (4*4*64 + 2*2*192 + 2*2*256 + 2*2*288 + 2*2*288 + 768*5 + 1280 + 2048*2).
Step 3: build the network model for extracting multi-label relationship features; with the multi-layer face feature extracted in step 2 as input and multiple face attribute labels as ground truth, train the model on the FaceA part of the face image dataset and fix the parameters of the network model for extracting multi-label relationship features, so that this model can be further transferred to the multi-attribute classifier model.
Since the multi-layer face feature is a one-dimensional feature, the network model for extracting multi-label relationship features is designed as a stack of fully connected layers, each using a nonlinear rectified activation function. The size of the input layer of the network model equals the size of the multi-layer face feature, and the number of neurons in the output layer equals the number of selected face attributes. The network model for extracting multi-label relationship features is expressed as:
dropout → fc → fcc → logits
where dropout denotes a dropout layer with input size 14336; fc, fcc and logits are fully connected layers whose input sizes are 14336, 2048 and 2048 and whose output sizes are 2048, 2048 and 11, respectively.
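A sketch of this network model under the stated sizes (14336 -> 2048 -> 2048 -> 11); the dropout rate and the use of ReLU for the nonlinear rectified activation are assumptions not fixed by the text.

```python
import tensorflow as tf

Q = 11  # number of selected face attributes

relationship_net = tf.keras.Sequential([
    tf.keras.Input(shape=(14336,)),
    tf.keras.layers.Dropout(0.5, name="dropout"),             # rate not given in the text
    tf.keras.layers.Dense(2048, activation="relu", name="fc"),
    tf.keras.layers.Dense(2048, activation="relu", name="fcc"),
    tf.keras.layers.Dense(Q, name="logits"),                   # one output per attribute
])
```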
The FaceA part of the face image dataset is randomly divided into a training set and a validation set at a 9:1 ratio. The multi-layer face features obtained in step 2 for the FaceA images are used as the input of the network model for extracting multi-label relationship features, with the face attribute labels as ground truth, to train this network model. The training set is used for the model training of this step; to prevent over-fitting and improve training speed, training tricks such as dropout are used. When the loss value on the validation set reaches a desirable balance, training is stopped and the model parameters are saved.
For each face image x_i of the FaceA part, the multi-layer face feature is first extracted through step 2 and is then fed into the dropout layer of the network model for extracting multi-label relationship features, so the input size of the dropout layer is 14336. The last fully connected layer, logits, corresponds to the prediction of the multiple face attributes, so its output size equals the number of face attributes selected in this embodiment, Q = 11. To speed up the training of the neural network, parameters are updated with mini-batch gradient updates; the number of samples per training batch is set to batch and the samples of each batch are drawn randomly. An optimizer implementing the mini-batch gradient descent algorithm is selected, with the initial learning rate set to 0.01.
Since the multi-label relationship features trained here must take the relationships between different face attributes into account, the loss function used to train the network model for extracting multi-label relationship features is defined in terms of the following quantities: N1 denotes the number of face images in the FaceA part of the face image dataset, Q denotes the number of face attributes, the ground-truth label y_ik indicates whether the i-th face image contains the k-th face attribute, p_ik denotes the probability that the i-th face image contains the k-th face attribute, fcc_ik denotes the k-th neuron value of the last layer of the network model for extracting multi-label relationship features, and fcc_im denotes the m-th neuron value of that last layer.
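The loss formula itself is not reproduced here, so the sketch below is one hedged reading of the definitions above: p_ik is taken as a softmax over the last-layer values fcc_i1, ..., fcc_iQ and the loss averages -y_ik log p_ik over images and attributes; if the original formula used per-attribute sigmoid cross-entropy instead, tf.nn.sigmoid_cross_entropy_with_logits would replace the softmax. The mini-batch gradient descent optimizer with initial learning rate 0.01 follows the text, and relationship_net refers to the sketch above.

```python
import tensorflow as tf

def relationship_loss(y_true, logits):
    """Assumed multi-label loss: softmax over the Q outputs of each image,
    then the mean over images of -sum_k y_ik * log(p_ik)."""
    y = tf.cast(y_true, tf.float32)
    p = tf.nn.softmax(logits, axis=-1)
    return -tf.reduce_mean(tf.reduce_sum(y * tf.math.log(p + 1e-8), axis=-1))

relationship_net.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                         loss=relationship_loss)

# Hypothetical training call on the FaceA multilayer features and labels,
# with the 9:1 train/validation split described above:
# relationship_net.fit(facea_features, facea_labels,
#                      batch_size=64, validation_split=0.1)
```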
When training ends, the multi-label relationship feature model is saved. It should be noted that the number of neurons of the last layer, logits, equals the number of face attribute labels; this layer is removed when the model is transferred to the multi-attribute classifier model, and the transferred model is retained up to the second-to-last layer of the network, the fcc layer.
Step 4: this embodiment selects 11 face attribute labels and designs one linear binary classifier for each face attribute. The network model of multi-label relationship features trained in step 3 is retained up to its second-to-last layer and transferred as a feature extractor to the multi-attribute classifier model; its network model parameters are fixed and shared across the different face attribute classifiers. Each linear binary classifier is trained on the FaceB part of the face image dataset.
Specifically, the FaceB part of the face image dataset is first randomly divided into a training set, a validation set and a test set at an 8:1:1 ratio, where the training set and the validation set are used to train the linear binary classifiers in this step and the test set is used to evaluate them. The multi-layer face features obtained in step 2 for the FaceB images are then fed into the network model for extracting multi-label relationship features, and the multi-label relationship features output by that network model are used as the input of each linear binary classifier to train the classifiers. Throughout this process the parameters of the first two models are fixed and only the parameters of the 11 classifiers are trained, which improves training speed.
The network structure of each face attribute classifier is simple: it consists of only one fully connected layer whose input size matches the multi-label feature output by the network model for extracting multi-label relationship features, namely 2048, and whose output size is 2. The loss function of the k-th linear binary classifier is defined in terms of the following quantities: N2 denotes the number of face images in the FaceB part of the face image dataset, the ground-truth label y_ik indicates whether the i-th face image contains the k-th face attribute, p_ik denotes the probability that the i-th face image contains the k-th face attribute, logits_im denotes the output layer of the classifier, and m indexes the output layer neurons.
The multiple linear binary classifiers are trained simultaneously, and the importance of each face attribute is taken to be the same, so they are integrated into an overall loss function in which Q denotes the number of face attributes and λ_k denotes the weight of each linear binary classifier. When a satisfactory recognition result is obtained on the validation set, training is stopped and the parameters of these face attribute classifiers are fixed.
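A sketch of the classifier stage of step 4, assuming the relationship network above truncated at its fcc layer as the frozen feature extractor, one linear two-way classifier per attribute, equal weights λ_k, and standard two-class cross-entropy on each classifier's logits (the per-classifier loss formula is not reproduced in the text).

```python
import tensorflow as tf

Q = 11

# Frozen feature extractor: the trained relationship network kept up to its
# second-to-last (fcc) layer, as described above.
feature_extractor = tf.keras.Model(relationship_net.input,
                                   relationship_net.get_layer("fcc").output)
feature_extractor.trainable = False

# One linear two-way classifier per attribute, all fed by the 2048-dim feature.
features_in = tf.keras.Input(shape=(2048,))
attr_logits = [tf.keras.layers.Dense(2, name=f"attr_{k}")(features_in) for k in range(Q)]
classifiers = tf.keras.Model(features_in, attr_logits)

lambdas = [1.0] * Q  # equal importance assumed for every attribute

def total_loss(y_true, logits_list):
    """Weighted sum over attributes of two-class cross-entropy losses.
    y_true: (batch, Q) 0/1 labels; logits_list: Q tensors of shape (batch, 2)."""
    losses = []
    for k, logits in enumerate(logits_list):
        ce = tf.keras.losses.sparse_categorical_crossentropy(
            tf.cast(y_true[:, k], tf.int32), logits, from_logits=True)
        losses.append(lambdas[k] * tf.reduce_mean(ce))
    return tf.add_n(losses)
```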
Step 5: the models trained and fixed in the preceding steps are integrated into one complete neural network model that takes a face image as input and outputs 11 predicted face attribute values.
The effectiveness of the method is evaluated on the test set split from the FaceB part in step 4. A face image x_i is taken as input; the multi-layer face feature is extracted by the deep convolutional neural network transfer model, the multi-label feature is extracted by the network model for extracting multi-label relationship features, and finally the 11 linear binary face attribute classifiers output 11 face attribute predictions. This complete face attribute recognition model is saved, and whenever a new face image appears, each face attribute value is obtained directly from the model's prediction.
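A usage sketch of the assembled pipeline of step 5, reusing feature_extractor and classifiers from the sketches above; a random vector stands in for the precomputed 14336-dimensional multi-layer face feature of a new image.

```python
import numpy as np
import tensorflow as tf

# Stand-in for the multi-layer face feature of one new face image.
multilayer = np.random.rand(1, 14336).astype("float32")

relationship = feature_extractor(multilayer)   # (1, 2048) multi-label feature
attr_logits = classifiers(relationship)        # list of Q tensors, each (1, 2)
predictions = [int(tf.argmax(l, axis=-1).numpy()[0]) for l in attr_logits]
print(predictions)                             # 11 predicted attribute values
```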
In this method, a face attribute recognition method based on multi-instance multi-label deep transfer learning is designed. Face attribute recognition is treated as a multi-instance multi-label problem: the connection between multiple instances is taken into account by improving the way features are extracted from the transfer model so that multi-layer features are obtained, and the relationships between different face attributes are taken into account by building and training a model for extracting multi-label relationship features. Finally, these more effective multi-instance multi-label features are used to train multiple face attribute classifiers simultaneously, which not only improves speed but also achieves good accuracy.
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (6)
1. A face attribute recognition method based on multi-instance multi-label deep transfer learning, characterized by comprising the following steps:
Step 1: preparing a face image dataset in which each face image carries its corresponding face attribute labels; extracting the key point coordinates of each face image and performing face alignment using the key points; and randomly dividing the face image dataset into two parts, FaceA and FaceB;
Step 2: for each face image, extracting features from multiple neural layers of a deep convolutional neural network transfer model and combining them into a multi-layer face feature;
Step 3: building a network model for extracting multi-label relationship features; with the multi-layer face feature extracted in step 2 as input and multiple face attribute labels as ground truth, training the model on the FaceA part of the face image dataset and fixing the parameters of the network model for extracting multi-label relationship features;
Step 4: designing a linear binary classifier for each face attribute; retaining the network model of multi-label relationship features trained in step 3 up to its second-to-last layer, transferring it as a feature extractor to the multi-attribute classifier model, and training each linear binary classifier on the FaceB part of the face image dataset.
2. The face attribute recognition method based on multi-instance multi-label deep transfer learning according to claim 1, characterized in that in step 2 the deep convolutional neural network transfer model is Google's Inception v3 model, and the neural layer features feature_i, i = 1, 2, ..., N, are obtained by down-sampling, converting into one-dimensional vectors and concatenating the outputs pool_k(x_i) and mixed_k(x_i) of the selected layers, where feature_i denotes the neural layer feature extracted from the deep convolutional neural network transfer model, N denotes the number of images in the face image dataset, x_i denotes the i-th face image, pool_k and mixed_k denote the functions that, with x_i as input, extract the last-layer feature of the k-th pool module and the k-th mixed module of the transfer model, and f_k and g_k denote the functions that down-sample those layer outputs and convert them into one-dimensional vectors.
3. The face attribute recognition method based on multi-instance multi-label deep transfer learning according to claim 1, characterized in that in step 3 the network model for extracting multi-label relationship features consists of fully connected layers, each fully connected layer using a nonlinear rectified activation function; the size of the input layer of the network model equals the size of the multi-layer face feature, and the number of neurons in the output layer equals the number of selected face attributes; the network model for extracting multi-label relationship features is composed of, in sequence, a dropout layer, an fc layer, an fcc layer and a logits layer; the FaceA part of the face image dataset is randomly divided into a training set and a validation set at a 9:1 ratio, and the multi-layer face features obtained in step 2 for the FaceA part are used as the input of the network model for extracting multi-label relationship features, with the face attribute labels as ground truth, to train the network model for extracting multi-label relationship features.
4. The face attribute recognition method based on multi-instance multi-label deep transfer learning according to claim 1, characterized in that step 4 comprises the following specific steps: first randomly dividing the FaceB part of the face image dataset into a training set, a validation set and a test set; then using the multi-layer face features obtained in step 2 for the FaceB part as the input of the network model for extracting multi-label relationship features; and then using the multi-label relationship features output by the network model for extracting multi-label relationship features as the input of each linear binary classifier and training each linear binary classifier.
5. The face attribute recognition method based on multi-instance multi-label deep transfer learning according to claim 3, characterized in that the loss function of the network model for extracting multi-label relationship features to be trained is defined in terms of the following quantities: N1 denotes the number of face images in the FaceA part of the face image dataset, Q denotes the number of face attributes, the ground-truth label y_ik indicates whether the i-th face image contains the k-th face attribute, p_ik denotes the probability that the i-th face image contains the k-th face attribute, fcc_ik denotes the k-th neuron value of the last layer of the network model for extracting multi-label relationship features, and fcc_im denotes the m-th neuron value of the last layer of the network model for extracting multi-label relationship features.
6. The face attribute recognition method based on multi-instance multi-label deep transfer learning according to claim 4, characterized in that the loss function of the k-th linear binary classifier is defined in terms of the following quantities: N2 denotes the number of face images in the FaceB part of the face image dataset, the ground-truth label y_ik indicates whether the i-th face image contains the k-th face attribute, p_ik denotes the probability that the i-th face image contains the k-th face attribute, logits_im denotes the output layer of the classifier, and m indexes the output layer neurons; the multiple linear binary classifiers are trained simultaneously with the importance of each face attribute taken to be the same, and they are integrated into an overall loss function in which Q denotes the number of face attributes and λ_k denotes the weight of each linear binary classifier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811093395.9A CN109325443B (en) | 2018-09-19 | 2018-09-19 | Face attribute identification method based on multi-instance multi-label deep migration learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811093395.9A CN109325443B (en) | 2018-09-19 | 2018-09-19 | Face attribute identification method based on multi-instance multi-label deep migration learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109325443A true CN109325443A (en) | 2019-02-12 |
CN109325443B CN109325443B (en) | 2021-09-17 |
Family
ID=65266207
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811093395.9A Active CN109325443B (en) | 2018-09-19 | 2018-09-19 | Face attribute identification method based on multi-instance multi-label deep migration learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109325443B (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109829441A (en) * | 2019-02-19 | 2019-05-31 | 山东大学 | A kind of human facial expression recognition method and device based on course learning |
CN110097033A (en) * | 2019-05-15 | 2019-08-06 | 成都电科智达科技有限公司 | A kind of single sample face recognition method expanded based on feature |
CN110660478A (en) * | 2019-09-18 | 2020-01-07 | 西安交通大学 | Cancer image prediction and discrimination method and system based on transfer learning |
CN110689025A (en) * | 2019-09-16 | 2020-01-14 | 腾讯医疗健康(深圳)有限公司 | Image recognition method, device and system, and endoscope image recognition method and device |
CN110827545A (en) * | 2019-11-14 | 2020-02-21 | 北京首汽智行科技有限公司 | Optimal vehicle number prediction method |
CN111275107A (en) * | 2020-01-20 | 2020-06-12 | 西安奥卡云数据科技有限公司 | Multi-label scene image classification method and device based on transfer learning |
CN111368917A (en) * | 2020-03-04 | 2020-07-03 | 西安邮电大学 | Multi-example ensemble learning method for criminal investigation image classification |
CN111507263A (en) * | 2020-04-17 | 2020-08-07 | 电子科技大学 | Face multi-attribute recognition method based on multi-source data |
CN111598000A (en) * | 2020-05-18 | 2020-08-28 | 中移(杭州)信息技术有限公司 | Face recognition method, device, server and readable storage medium based on multiple tasks |
CN111652256A (en) * | 2019-03-18 | 2020-09-11 | 上海铼锶信息技术有限公司 | Method and system for acquiring multidimensional data |
CN111932561A (en) * | 2020-09-21 | 2020-11-13 | 深圳大学 | Real-time enteroscopy image segmentation method and device based on integrated knowledge distillation |
CN112052709A (en) * | 2019-06-06 | 2020-12-08 | 搜狗(杭州)智能科技有限公司 | Face attribute identification method and device |
CN112069898A (en) * | 2020-08-05 | 2020-12-11 | 中国电子科技集团公司电子科学研究院 | Method and device for recognizing human face group attribute based on transfer learning |
CN112149556A (en) * | 2020-09-22 | 2020-12-29 | 南京航空航天大学 | Face attribute recognition method based on deep mutual learning and knowledge transfer |
CN112200260A (en) * | 2020-10-19 | 2021-01-08 | 厦门大学 | Figure attribute identification method based on discarding loss function |
CN112215162A (en) * | 2020-10-13 | 2021-01-12 | 北京中电兴发科技有限公司 | Multi-label multi-task face attribute prediction method based on MCNN (multi-core neural network) |
CN112329598A (en) * | 2020-11-02 | 2021-02-05 | 杭州格像科技有限公司 | Method, system, electronic device and storage medium for positioning key points of human face |
CN112800869A (en) * | 2021-01-13 | 2021-05-14 | 网易(杭州)网络有限公司 | Image facial expression migration method and device, electronic equipment and readable storage medium |
CN113642541A (en) * | 2021-10-14 | 2021-11-12 | 环球数科集团有限公司 | Face attribute recognition system based on deep learning |
CN113657486A (en) * | 2021-08-16 | 2021-11-16 | 浙江新再灵科技股份有限公司 | Multi-label multi-attribute classification model establishing method based on elevator picture data |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107993191A (en) * | 2017-11-30 | 2018-05-04 | 腾讯科技(深圳)有限公司 | A kind of image processing method and device |
US20180253539A1 (en) * | 2017-03-05 | 2018-09-06 | Ronald H. Minter | Robust system and method of authenticating a client in non-face-to-face online interactions based on a combination of live biometrics, biographical data, blockchain transactions and signed digital certificates. |
-
2018
- 2018-09-19 CN CN201811093395.9A patent/CN109325443B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180253539A1 (en) * | 2017-03-05 | 2018-09-06 | Ronald H. Minter | Robust system and method of authenticating a client in non-face-to-face online interactions based on a combination of live biometrics, biographical data, blockchain transactions and signed digital certificates. |
CN107993191A (en) * | 2017-11-30 | 2018-05-04 | 腾讯科技(深圳)有限公司 | A kind of image processing method and device |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109829441A (en) * | 2019-02-19 | 2019-05-31 | 山东大学 | A kind of human facial expression recognition method and device based on course learning |
CN111652256B (en) * | 2019-03-18 | 2024-02-02 | 上海铼锶信息技术有限公司 | Method and system for acquiring multidimensional data |
CN111652256A (en) * | 2019-03-18 | 2020-09-11 | 上海铼锶信息技术有限公司 | Method and system for acquiring multidimensional data |
CN110097033A (en) * | 2019-05-15 | 2019-08-06 | 成都电科智达科技有限公司 | A kind of single sample face recognition method expanded based on feature |
CN110097033B (en) * | 2019-05-15 | 2023-04-07 | 成都电科智达科技有限公司 | Single-sample face recognition method based on feature expansion |
CN112052709B (en) * | 2019-06-06 | 2024-04-19 | 北京搜狗科技发展有限公司 | Face attribute identification method and device |
CN112052709A (en) * | 2019-06-06 | 2020-12-08 | 搜狗(杭州)智能科技有限公司 | Face attribute identification method and device |
CN110689025A (en) * | 2019-09-16 | 2020-01-14 | 腾讯医疗健康(深圳)有限公司 | Image recognition method, device and system, and endoscope image recognition method and device |
CN110689025B (en) * | 2019-09-16 | 2023-10-27 | 腾讯医疗健康(深圳)有限公司 | Image recognition method, device and system and endoscope image recognition method and device |
CN110660478A (en) * | 2019-09-18 | 2020-01-07 | 西安交通大学 | Cancer image prediction and discrimination method and system based on transfer learning |
CN110827545A (en) * | 2019-11-14 | 2020-02-21 | 北京首汽智行科技有限公司 | Optimal vehicle number prediction method |
CN111275107A (en) * | 2020-01-20 | 2020-06-12 | 西安奥卡云数据科技有限公司 | Multi-label scene image classification method and device based on transfer learning |
CN111368917A (en) * | 2020-03-04 | 2020-07-03 | 西安邮电大学 | Multi-example ensemble learning method for criminal investigation image classification |
CN111507263A (en) * | 2020-04-17 | 2020-08-07 | 电子科技大学 | Face multi-attribute recognition method based on multi-source data |
CN111507263B (en) * | 2020-04-17 | 2022-08-05 | 电子科技大学 | Face multi-attribute recognition method based on multi-source data |
CN111598000A (en) * | 2020-05-18 | 2020-08-28 | 中移(杭州)信息技术有限公司 | Face recognition method, device, server and readable storage medium based on multiple tasks |
CN112069898A (en) * | 2020-08-05 | 2020-12-11 | 中国电子科技集团公司电子科学研究院 | Method and device for recognizing human face group attribute based on transfer learning |
CN111932561A (en) * | 2020-09-21 | 2020-11-13 | 深圳大学 | Real-time enteroscopy image segmentation method and device based on integrated knowledge distillation |
CN112149556A (en) * | 2020-09-22 | 2020-12-29 | 南京航空航天大学 | Face attribute recognition method based on deep mutual learning and knowledge transfer |
CN112149556B (en) * | 2020-09-22 | 2024-05-03 | 南京航空航天大学 | Face attribute identification method based on deep mutual learning and knowledge transfer |
CN112215162A (en) * | 2020-10-13 | 2021-01-12 | 北京中电兴发科技有限公司 | Multi-label multi-task face attribute prediction method based on MCNN (multi-core neural network) |
CN112215162B (en) * | 2020-10-13 | 2023-07-25 | 北京中电兴发科技有限公司 | Multi-label and multi-task face attribute prediction method based on MCNN (media channel network) |
CN112200260A (en) * | 2020-10-19 | 2021-01-08 | 厦门大学 | Figure attribute identification method based on discarding loss function |
CN112200260B (en) * | 2020-10-19 | 2022-06-14 | 厦门大学 | Figure attribute identification method based on discarding loss function |
CN112329598A (en) * | 2020-11-02 | 2021-02-05 | 杭州格像科技有限公司 | Method, system, electronic device and storage medium for positioning key points of human face |
CN112329598B (en) * | 2020-11-02 | 2024-05-31 | 杭州格像科技有限公司 | Method, system, electronic device and storage medium for positioning key points of human face |
CN112800869B (en) * | 2021-01-13 | 2023-07-04 | 网易(杭州)网络有限公司 | Image facial expression migration method and device, electronic equipment and readable storage medium |
CN112800869A (en) * | 2021-01-13 | 2021-05-14 | 网易(杭州)网络有限公司 | Image facial expression migration method and device, electronic equipment and readable storage medium |
CN113657486B (en) * | 2021-08-16 | 2023-11-07 | 浙江新再灵科技股份有限公司 | Multi-label multi-attribute classification model building method based on elevator picture data |
CN113657486A (en) * | 2021-08-16 | 2021-11-16 | 浙江新再灵科技股份有限公司 | Multi-label multi-attribute classification model establishing method based on elevator picture data |
CN113642541B (en) * | 2021-10-14 | 2022-02-08 | 环球数科集团有限公司 | Face attribute recognition system based on deep learning |
CN113642541A (en) * | 2021-10-14 | 2021-11-12 | 环球数科集团有限公司 | Face attribute recognition system based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN109325443B (en) | 2021-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109325443A (en) | A kind of face character recognition methods based on the study of more example multi-tag depth migrations | |
Han et al. | Heterogeneous face attribute estimation: A deep multi-task learning approach | |
Wen et al. | Ensemble of deep neural networks with probability-based fusion for facial expression recognition | |
CN110399821B (en) | Customer satisfaction acquisition method based on facial expression recognition | |
Shao et al. | Feature learning for image classification via multiobjective genetic programming | |
Arora et al. | AutoFER: PCA and PSO based automatic facial emotion recognition | |
CN106570521B (en) | Multilingual scene character recognition method and recognition system | |
CN106504064A (en) | Clothes classification based on depth convolutional neural networks recommends method and system with collocation | |
CN109344759A (en) | A kind of relatives' recognition methods based on angle loss neural network | |
CN108121975A (en) | A kind of face identification method combined initial data and generate data | |
CN108446601A (en) | A kind of face identification method based on sound Fusion Features | |
CN109886154A (en) | Most pedestrian's appearance attribute recognition methods according to collection joint training based on Inception V3 | |
CN109886153A (en) | A kind of real-time face detection method based on depth convolutional neural networks | |
CN104298974A (en) | Human body behavior recognition method based on depth video sequence | |
CN110378208A (en) | A kind of Activity recognition method based on depth residual error network | |
CN106203313A (en) | The clothing classification of a kind of image content-based and recommendation method | |
CN103617609B (en) | Based on k-means non-linearity manifold cluster and the representative point choosing method of graph theory | |
CN112668486A (en) | Method, device and carrier for identifying facial expressions of pre-activated residual depth separable convolutional network | |
CN114863572A (en) | Myoelectric gesture recognition method of multi-channel heterogeneous sensor | |
CN100365645C (en) | Identity recognition method based on eyebrow recognition | |
CN114663766A (en) | Plant leaf identification system and method based on multi-image cooperative attention mechanism | |
Dong et al. | Brain cognition-inspired dual-pathway CNN architecture for image classification | |
Ruan et al. | Facial expression recognition in facial occlusion scenarios: A path selection multi-network | |
Cai et al. | Performance analysis of distance teaching classroom based on machine learning and virtual reality | |
CN103336974B (en) | A kind of flowers classification discrimination method based on local restriction sparse representation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||