CN109117797A - A kind of face snapshot recognition method based on face quality evaluation - Google Patents
A kind of face snapshot recognition method based on face quality evaluation
- Publication number: CN109117797A
- Application number: CN201810940587.2A
- Authority
- CN
- China
- Prior art keywords
- face
- quality
- key point
- human
- method based
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V40/166 — Human faces: Detection; Localisation; Normalisation using acquisition arrangements
- G06N3/08 — Neural networks: Learning methods
- G06V40/171 — Human faces, feature extraction: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
- G06V40/172 — Human faces: Classification, e.g. identification
Abstract
The invention discloses a face capture and recognition method based on face quality evaluation. Using deep learning, face quality score regression training is performed on three typical datasets that differ in quality grade, yielding a face quality evaluation model. The model is applied in a face capture system: for multi-face capture scenes, in order to reduce system load without losing accuracy, the received video stream is divided into capture periods of a fixed, configurable number of frames; within each capture period the capture quality requirement is adjusted dynamically, and the highest-quality face within a period is output and supplied to the face recognition algorithm for recognition. By scoring face quality with deep learning and coupling it with face capture to filter low-quality faces and select the optimal face, the method better serves the face recognition system, achieving a higher face recognition rate while occupying fewer system hardware resources.
Description
Technical field
The invention belongs to the technical field of computer vision and relates to a face capture and recognition method based on face quality evaluation.
Background technique
The input to face recognition in video surveillance scenes is a large number of captured face images output by an intelligent face capture system. Because the scene is uncontrolled, the quality of the multiple pictures of the same person varies widely under non-ideal conditions; common influencing factors include low luminance contrast, excessive head pose, facial occlusion, strong expression changes, low resolution, image blur, and noise. To improve recognition accuracy and reduce the false classification rate, and thereby improve system performance, face quality evaluation is needed to select good faces.
Chinese invention patent CN20111017191.X, "monitoring face image quality at client acquisition terminal in real time", filters faces by judging whether glasses are worn, whether the face is occluded, and whether the pose affects image quality, and prompts accordingly. Its drawback is that the quality factors considered are relatively few, and it cannot distinguish faces of different quality under the same factor. CN201710774182.1, "face quality discrimination and the method and device of picture enhancing", computes an overall face evaluation score from face sharpness, face size, and eye-opening degree. CN201610674387.8, "a filtering selection method and system balancing real-time performance and face quality", scores face angle, face size, and backlight degree separately and then weights them into a final score. Similarly, CN201610870876.0, "a face selection method and system based on visible light", scores and weights the face's three-dimensional rotation angle, sunlit area, sharpness, and occluded area.
Although these methods combine multiple evaluation indices and thus, to some extent, avoid the limitation of single-factor quality evaluation, they are based on traditional image processing and template matching: feature selection remains largely subjective, the computation is complex, parameters such as weights still need training and tuning, and recognition accuracy cannot be guaranteed across varied and changing scenes.
CN201710121461.8, "face quality evaluation method and system based on deep convolutional neural networks", and CN201711439458.7, "method and apparatus for assessing face image quality based on convolutional neural networks", both adopt face quality evaluation based on deep convolutional neural networks. Compared with conventional methods, convolutional neural networks (CNNs) have superior feature learning ability and can express complex features and contextual information, so they reach higher accuracy. However, the specific image-quality label values these methods need for training must still be produced by the conventional methods above, or by investing substantial labor in manual annotation, which makes the process more complicated, more time-consuming, or more labor-intensive. CN201711180270.5, "face image quality evaluation method and device based on face comparison", instead uses the similarity obtained by comparing a face image against a standard reference image as the quality label value. This method is directly oriented to recognition and avoids the labeling defect above, but brings new problems: first, for the mass of data required by deep learning training, acquiring and selecting a large number of reference images is relatively difficult; second, the trained quality evaluation model may serve well only the specific face recognition model it was built against, and the optimal faces it selects may perform worse under a different recognition model, so the approach lacks universality.
Therefore, it is necessary to provide a face quality evaluation method, based on deep learning, that is easy to operate and adapts to multiple scenes, together with a matching video-surveillance face capture and recognition system, to solve the face quality evaluation problem in face recognition.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a face capture and recognition method based on face quality evaluation.
The technical solution adopted by the present invention to solve the technical problem is as follows:
Step 1: initialize the system and maintain face states.
Step 2: the video image acquisition device obtains the current frame image.
Step 3: the face detector obtains the coordinates of all face rectangular areas in the image.
Step 4: the face regions are input to the key point localization algorithm to obtain face key points.
Step 5: according to the face key points, the face region images are rotated and scaled to a fixed size.
Step 6: the images from step 5 are input to the face quality evaluation model to obtain face scores.
Step 7: a tracking algorithm links faces across frames, updating each face's maximum score, optimal face image, capture count, and recognition confidence history.
Step 8: within one capture period, if the same face has been captured many times, the quality threshold is raised; if the maximum of the recognition confidence history exceeds a certain threshold, a "do not capture recently" flag is set; otherwise, when the tracking state becomes lost, the optimal face image is output.
Step 9: face comparison and recognition is carried out with the optimal face image.
The face quality evaluation model is established as follows:
Prepare three datasets whose overall face quality differs by grade: non-face, average-quality face, and high-quality face.
Run the face detector on the face-containing dataset pictures to obtain face rectangle coordinates.
Input the face rectangles to the key point localization algorithm to obtain face key points.
According to the face key points, rotate and scale the dataset pictures to a fixed size, obtaining normalized face image data.
Assign the three datasets the labels 0, 1, and 2 by quality, and feed the labels together with the normalized face image data into a convolutional neural network as training data for training and parameter tuning, obtaining the face quality evaluation model.
Beneficial effects of the present invention:
In the face quality evaluation training stage, the present invention needs no complicated per-image quality-score labeling; quality-score labels are assigned only at the level of whole datasets. This greatly reduces the difficulty of preparing training data and the computational complexity of the preparation stage, saving substantial time and labor cost.
The present invention learns face image quality evaluation with a convolutional neural network. Compared with current traditional image algorithms, it requires no hand-designed features, is more robust, expresses features better, and achieves higher accuracy.
The present invention selects the optimal face of each person within a video sequence for recognition, reducing the chance that low-quality pictures participate in recognition and raise the false classification rate. Unnecessary recognitions are avoided, time is saved, and the speed of the whole recognition system improves, while the high-quality faces further improve recognition accuracy.
Description of the drawings
Fig. 1 is the flowchart of the face quality evaluation training stage.
Fig. 2 is the structure of the quality evaluation network.
Fig. 3 is the flowchart of video face capture and recognition.
Specific embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
The technical scheme adopted by the invention is: face quality score regression training is performed with deep learning on three typical datasets that differ in quality grade, obtaining a face quality evaluation model, which is used in a face capture system. For multi-face capture scenes, in order to reduce system load without losing accuracy, the received video stream is divided into capture periods of a fixed, configurable number of frames; the capture quality requirement is adjusted dynamically within each capture period, and the highest-quality face within a period is output and supplied to the face recognition algorithm for recognition. In addition, since quality evaluation by the evaluation model is itself an auxiliary means, the invention further introduces an error metric for the quality evaluation algorithm based on the inversion number weighted by recognition similarity.
The invention comprises the following technical contents:
1. Face quality evaluation training stage, see Fig. 1:
1-1. Prepare three datasets whose overall face quality differs by grade: non-face, average-quality face, and high-quality face.
1-2. Run the face detector on the face-containing dataset pictures to obtain face rectangle coordinates.
1-3. Input the face rectangles to the key point localization algorithm to obtain face key points.
1-4. According to the face key points, rotate and scale the dataset pictures to a fixed size, obtaining normalized face image data.
1-5. Assign the three datasets the labels 0, 1, and 2 by quality, and feed the labels together with the normalized face images into a convolutional neural network as training data for training and parameter tuning, obtaining the face quality evaluation model.
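A key point of steps 1-1 through 1-5 is that every image simply inherits the quality label of the dataset it belongs to, with no per-image annotation. A minimal sketch of that pairing, assuming the three datasets live in folders named `nonface`, `average`, and `high` (folder names and layout are illustrative, not from the patent):

```python
import os

# Hypothetical folder-name-to-label mapping; the patent only fixes the labels 0, 1, 2.
QUALITY_LABELS = {"nonface": 0, "average": 1, "high": 2}

def build_training_list(root):
    """Collect (image_path, label) pairs; every image inherits its dataset's label."""
    samples = []
    for dataset, label in QUALITY_LABELS.items():
        folder = os.path.join(root, dataset)
        if not os.path.isdir(folder):
            continue
        for name in sorted(os.listdir(folder)):
            samples.append((os.path.join(folder, name), label))
    return samples
```

The resulting list is what would be fed, after the normalization of step 1-4, to the network of step 1-5.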
For the above model, the captured images output with the quality evaluation model are compared against face gallery template images to obtain similarities; the multiple face quality scores of the same person are sorted in ascending order of the corresponding similarity, the inversion number of the resulting quality-score sequence is taken as the error, and each inversion pair is weighted by its similarity difference to obtain the final error.
2. Video face capture and recognition stage:
2-1. Initialize the system and the maintained face states.
2-2. The video image acquisition device obtains the current frame image.
2-3. The face detector obtains the coordinates of all face rectangles in the image.
2-4. The face regions are input to the key point localization algorithm to obtain face key points.
2-5. According to the face key points, the face region images are rotated and scaled to a fixed size.
2-6. The images from step 2-5 are input to the face quality evaluation model to obtain face scores.
2-7. A tracking algorithm links faces across frames, updating states such as each face's maximum score, optimal face image, capture count, and recognition confidence history.
2-8. Within one capture period, if the same face has been captured many times, the quality threshold is raised; if the maximum of the recognition confidence history exceeds a certain threshold, a "do not capture recently" flag is set; otherwise, when the tracking state becomes lost, the optimal face image is output.
2-9. Face comparison and recognition is carried out with the optimal face image.
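The per-face bookkeeping of steps 2-7 and 2-8 can be sketched as a small track object. The fields mirror the states named in the text (maximum score, optimal face image, capture count, confidence history), while the class and method names are our own:

```python
import math

class FaceTrack:
    """Sketch of the per-ID state the capture stage maintains (names assumed)."""
    def __init__(self, track_id):
        self.track_id = track_id
        self.best_score = -math.inf     # maximum score initialized to -infinity
        self.best_face = None           # optimal face image seen so far
        self.capture_count = 0
        self.confidence_history = []    # recognition confidence history

    def update(self, face_img, score):
        """Called each frame the tracker links this face; keeps the best face."""
        self.capture_count += 1
        if score > self.best_score:
            self.best_score, self.best_face = score, face_img

    def on_lost(self, quality_threshold):
        """When the tracking state becomes lost, output the optimal face
        only if it clears the current capture quality threshold."""
        if self.best_score >= quality_threshold:
            return self.best_face
        return None
```

A real system would also carry the region coordinates and recognition feature vector listed later in the embodiment.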
Embodiment:
The face detection algorithm used in this embodiment is a face detector trained with Faster-RCNN; the key point localization algorithm is a 5-point localization model trained with a deep network; the tracking algorithm performs feature matching with a lightweight recognition model. The input picture size of the quality evaluation network model is normalized to 64 pixels wide by 128 pixels high.
The face quality evaluation training stage:
1. Prepare three datasets whose overall face quality differs by grade: non-face, average-quality face, and high-quality face. The face data quality should follow a roughly Gaussian distribution, and the actual distribution should not be too uniform. The non-face data are cut by a program at random from pictures of various scenes, ensuring that the ratio of intersection area to union area (IoU) between each cut rectangle and any face rectangle detected by the algorithm is less than 0.2; the cut images are finally scaled to 64*128. The average-quality faces are taken from the public dataset ALFW, and the high-quality faces from an ID-photo face database.
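The IoU < 0.2 check used to validate non-face crops follows directly from the definition (ratio of intersection area to union area); boxes are assumed here to be `(x1, y1, x2, y2)` tuples:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) rectangles."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_valid_nonface(crop_box, face_boxes, max_iou=0.2):
    """A random crop counts as non-face data only if it overlaps no detected
    face rectangle by IoU >= max_iou (0.2 in the embodiment)."""
    return all(iou(crop_box, f) < max_iou for f in face_boxes)
```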
2. Run the face detector on the face-containing dataset pictures to obtain face rectangle coordinates. The average-quality dataset and the high-quality dataset serve as the face detection input, yielding all face locations in every picture of each dataset.
3. The face regions are input to the key point localization algorithm to obtain face key points. The face regions determined by the face locations are input to the deep 5-point localization network, which returns the key points for each face location: the left eye, the right eye, the nose tip, the left mouth corner, and the right mouth corner, five points in total.
4. According to the face key points, rotate and scale the dataset pictures to a fixed size. The main purpose of this step is to align-preprocess the faces, reducing the interference of rotation angle, scale, etc. on the network input. Specific implementation: from the key points obtained in the previous step, compute the minimum circumscribed circle C with radius r; the line l through the two eye points forms an angle a with the horizontal; rotate the face by a degrees so that the direction of l becomes horizontal; expand around the center of circle C to obtain a new 3r*6r rectangular face region; then scale that region to the fixed size 64*128.
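The geometry of this alignment step can be sketched as follows. An exact minimum circumscribed circle would need e.g. Welzl's algorithm or OpenCV's `cv2.minEnclosingCircle`; here, as a stated simplification, the circle is approximated by the key-point centroid and the farthest key point:

```python
import numpy as np

def alignment_params(keypoints):
    """keypoints: (5, 2) array [left eye, right eye, nose, left mouth, right mouth].
    Returns (center, r, angle_deg) for the 3r x 6r crop described in the text.
    The minimum circumscribed circle is approximated by the centroid plus the
    farthest key point, not computed exactly."""
    pts = np.asarray(keypoints, dtype=float)
    center = pts.mean(axis=0)
    r = np.linalg.norm(pts - center, axis=1).max()
    left_eye, right_eye = pts[0], pts[1]
    dx, dy = right_eye - left_eye
    angle_deg = np.degrees(np.arctan2(dy, dx))  # rotate by -angle to level the eyes
    return center, r, angle_deg
```

A full implementation would then build a rotation about `center` (e.g. `cv2.getRotationMatrix2D`), warp the image, crop the 3r-wide by 6r-high region, and resize it to 64*128.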
5. Assign the three datasets the labels 0, 1, and 2 by quality and input them, together with the image data from step 4, into the network as training data for training and parameter tuning. The quality evaluation network (see Fig. 2) mainly consists of four consecutive convolution-BatchNorm-ReLU-average-pooling blocks, one fully connected layer with ReLU, a final fully connected layer, and a Euclidean-distance loss layer. In this embodiment, the score regression of the quality evaluation network is tuned by iterating the EuclideanLoss loss function with stochastic gradient descent and backpropagation. For the finally trained model, the scores regressed for the three datasets form three peaks, with overlap between adjacent peaks.
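A PyTorch sketch of the described topology: four convolution-BatchNorm-ReLU-average-pooling blocks, a fully connected layer with ReLU, and a final fully connected output producing one regression score. The channel widths and kernel sizes are our assumptions, since the patent does not specify them; the Euclidean-distance loss corresponds to `nn.MSELoss` at training time:

```python
import torch
import torch.nn as nn

class QualityNet(nn.Module):
    """Sketch of the quality evaluation network (widths assumed, not from the patent)."""
    def __init__(self):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in (16, 32, 64, 128):          # assumed channel widths
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.BatchNorm2d(out_ch),
                       nn.ReLU(inplace=True),
                       nn.AvgPool2d(2)]           # halves H and W each block
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        # input 64 wide x 128 high -> after four /2 poolings: 4 x 8 feature maps
        self.fc1 = nn.Linear(128 * 8 * 4, 256)
        self.fc2 = nn.Linear(256, 1)              # single quality score

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.fc2(torch.relu(self.fc1(x)))
```

At training time this would be paired with `nn.MSELoss()` (the EuclideanLoss analogue) and `torch.optim.SGD`, regressing toward the dataset labels 0, 1, and 2.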
Training effect evaluation: for the above evaluation model, the captured images output by the model are compared against face gallery template images to obtain similarities; the multiple face quality scores of the same person are sorted in ascending order of the corresponding similarity; the inversion number of the resulting quality-score sequence is taken as the error, and each inversion pair is weighted by its similarity difference to obtain the final error. The inversion number of a sequence is defined (from Wikipedia) as follows: let A be an ordered set of n numbers (n > 1), all distinct. If there exist positive integers i, j with 1 ≤ i < j ≤ n and A[i] > A[j], then the ordered pair <A[i], A[j]> is called an inversion pair (or inversion) of A; the number of inversion pairs is the inversion number. In this embodiment, for a specific recognition model, the error metric standard of the quality evaluation model is: the higher the similarity between the face selected by the model and the gallery reference face, the higher the face quality is considered to be. Ideally, after the face images are sorted in ascending order of comparison similarity, the scores output by the quality evaluation model form an ascending ordered sequence, and the error is 0. If two faces at similarity ranks i and j (i < j) have similarities Si and Sj and quality scores Qi and Qj with Qi > Qj, then the error of that inversion pair is (Sj - Si) * 1, and the sum of the errors of all inversion pairs is the final error. The smaller the error, the better the quality evaluation model serves recognition.
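The weighted inversion-number metric can be implemented directly from this definition; `similarities` and `quality_scores` are parallel lists over one person's captured faces:

```python
def quality_inversion_error(similarities, quality_scores):
    """Weighted inversion-number error: faces are sorted by ascending comparison
    similarity; each inversion pair (i < j but Q[i] > Q[j]) contributes the
    similarity gap (S[j] - S[i]). A quality model perfectly consistent with
    the recognizer yields 0."""
    order = sorted(range(len(similarities)), key=lambda k: similarities[k])
    s = [similarities[k] for k in order]
    q = [quality_scores[k] for k in order]
    error = 0.0
    n = len(s)
    for i in range(n):
        for j in range(i + 1, n):
            if q[i] > q[j]:          # inversion pair
                error += s[j] - s[i]
    return error
```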
Video face capture and recognition stage, see Fig. 3:
1. Initialize the system and the maintained face states: each face's maximum score is initialized to -∞. The maintained state should include, for each face, the coordinates of the face region, the assigned ID, the capture quality threshold, the tracking state, the optimal face image, the capture count, the recognition feature vector, the recognition confidence history, etc.
2. The video image acquisition device obtains the current frame image. The acquisition device may be an IPC network camera, an embedded device such as a mobile phone, a USB camera, etc.
3. The face detector obtains the coordinates of all face rectangles in the image. The face detector trained with Faster-RCNN performs fast and accurate face localization.
4. The face regions are input to the key point localization algorithm to obtain face key points. The deep 5-point localization network model returns the five key points corresponding to each face location.
5. The face region images are uniformly transformed to a fixed size according to the face key points. As in the training stage, this aligns the faces to reduce the interference of rotation angle, scale, etc. on the network input: from the key points of the previous step, compute the minimum circumscribed circle C with radius r; the line l through the two eye points forms an angle a with the horizontal; rotate the face by a degrees so that the direction of l becomes horizontal; expand around the center of circle C to obtain a new 3r*6r rectangular face region; then scale that region to the fixed size 64*128.
6. The images from step 5 are input to the face quality evaluation model to obtain face scores. Since the training-stage labels are 0, 1, and 2, the actual output is amplified 50 times, so the obtained scores fall mainly between 0 and 100.
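The 50x amplification of step 6 is a one-liner; the clamp to [0, 100] is our addition, since regression outputs can stray slightly outside the label range:

```python
def to_display_score(raw):
    """Map the regressed label-scale output (roughly 0..2) to a 0-100 score via
    the 50x amplification described in the embodiment; clamping is an assumed
    convenience, not from the patent."""
    return max(0.0, min(100.0, raw * 50.0))
```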
7. A tracking algorithm links faces across frames, updating states such as each face's maximum score, optimal face image, capture count, and recognition confidence history. Tracking uses the lightweight recognition network: recognition feature vectors are extracted from the faces detected in adjacent frames and matched according to the maximum-similarity principle.
8. Within one capture period, if the same face has been captured many times, the quality threshold is raised; if the maximum of the recognition confidence history exceeds a certain threshold, a "do not capture recently" flag is set; otherwise, when the tracking state becomes lost, the optimal face image is output. This step balances the load on the recognition system. In the embodiment, the face quality threshold of a specific ID grows by a ratio of 1.1, and the recognition-verification confidence is set to 0.99.
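The two load-balancing rules of step 8 reduce to two small functions; the 1.1 growth ratio and the 0.99 verification confidence come from the embodiment, while the function names are ours:

```python
def next_threshold(current, growth=1.1):
    """Raise the per-ID quality threshold after each capture so later captures
    must beat earlier ones (1.1 growth ratio per the embodiment)."""
    return current * growth

def should_skip_capture(confidence_history, verify_conf=0.99):
    """Once any past recognition reached the verification confidence, set the
    'do not capture recently' flag and stop submitting this face."""
    return bool(confidence_history) and max(confidence_history) >= verify_conf
```

Together these make frequently seen, already-verified faces progressively harder to re-submit, which is the load-balancing effect the text describes.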
9. Face comparison and recognition is carried out with the optimal face region image. The optimal-quality face ensures the accuracy of face recognition.
In summary, against the shortcomings of the quality evaluation schemes in existing face recognition systems, whose modeling is either too simple or too complex, whose labeling is cumbersome and labor-intensive, and whose practical feasibility is weak, the present invention provides a face quality evaluation and capture method based on convolutional neural networks that is easy to train and deploy and highly operable. It scores face quality with deep learning and couples this with face capture to filter low-quality faces and select the optimal face, so that the face recognition system is better served, achieving a higher face recognition rate while occupying fewer system hardware resources.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention. It should be understood that the present invention is not limited to the implementations described herein; these implementations are described to help those skilled in the art practice the present invention.
Claims (7)
1. A face capture and recognition method based on face quality evaluation, characterized in that the method comprises the following steps:
Step 1: initialize the system and maintain face states;
Step 2: the video image acquisition device obtains the current frame image;
Step 3: the face detector obtains the coordinates of all face rectangular areas in the image;
Step 4: the face regions are input to the key point localization algorithm to obtain face key points;
Step 5: according to the face key points, the face region images are rotated and scaled to a fixed size;
Step 6: the images from step 5 are input to the face quality evaluation model to obtain face scores;
Step 7: a tracking algorithm links faces across frames, updating each face's maximum score, optimal face image, capture count, and recognition confidence history;
Step 8: within one capture period, if the same face has been captured many times, the quality threshold is raised; if the maximum of the recognition confidence history exceeds a certain threshold, a "do not capture recently" flag is set; otherwise, when the tracking state becomes lost, the optimal face image is output;
Step 9: face comparison and recognition is carried out with the optimal face image;
wherein the face quality evaluation model is established as follows:
prepare three datasets whose overall face quality differs by grade: non-face, average-quality face, and high-quality face;
run the face detector on the face-containing dataset pictures to obtain face rectangle coordinates;
input the face rectangles to the key point localization algorithm to obtain face key points;
according to the face key points, rotate and scale the dataset pictures to a fixed size, obtaining normalized face image data;
assign the three datasets the labels 0, 1, and 2 by quality, and feed the labels together with the normalized face image data into a convolutional neural network as training data for training and parameter tuning, obtaining the face quality evaluation model.
2. The face capture and recognition method based on face quality evaluation according to claim 1, characterized in that: the face key points in step 4 comprise five points in total: the left eye, the right eye, the nose tip, the left mouth corner, and the right mouth corner.
3. The face capture and recognition method based on face quality evaluation according to claim 1, characterized in that, in the establishment of the face quality evaluation model:
the non-face data are cut at random from pictures of various scenes, ensuring that the ratio of the intersection area to the union area between each cut rectangle and any detected face rectangle is less than 0.2, after which the cut images are scaled to 64*128;
the average-quality face data are selected from the ALFW dataset;
the high-quality face data are selected from an ID-photo face database.
4. The face capture and recognition method based on face quality evaluation according to claim 1, characterized in that the pictures are rotated and scaled to a fixed size as follows:
from the face key points, compute the minimum circumscribed circle C with radius r; the straight line l through the two eye points forms an angle a with the horizontal; rotate the face by a degrees so that the direction of line l becomes horizontal; expand around the center of circle C to obtain a new 3r*6r rectangular face region; then scale that region to the fixed size 64*128.
5. The face capture and recognition method based on face quality evaluation according to claim 1, characterized in that: the convolutional neural network mainly consists of four consecutive convolution-BatchNorm-ReLU-average-pooling blocks, one fully connected layer with ReLU, a final fully connected layer, and a Euclidean-distance loss layer; the EuclideanLoss loss function is iterated with stochastic gradient descent and backpropagation to tune the network parameters, finally training the face quality evaluation model.
6. The face capture and recognition method based on face quality evaluation according to claim 1, characterized in that: in step 7, recognition feature vectors are extracted from the faces detected in adjacent video frames and matched according to the maximum-similarity principle.
7. The face capture and recognition method based on face quality evaluation according to any one of claims 1 to 6, characterized in that: for the face quality evaluation model, the captured images output with the quality evaluation model are compared against face gallery template images to obtain similarities; the multiple face quality scores of the same person are sorted in ascending order of the corresponding similarity; the inversion number of the resulting quality-score sequence is taken as the error, and each inversion pair is weighted by its similarity difference to obtain the final error.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810940587.2A CN109117797A (en) | 2018-08-17 | 2018-08-17 | A kind of face snapshot recognition method based on face quality evaluation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810940587.2A CN109117797A (en) | 2018-08-17 | 2018-08-17 | A kind of face snapshot recognition method based on face quality evaluation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109117797A true CN109117797A (en) | 2019-01-01 |
Family
ID=64853456
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810940587.2A Pending CN109117797A (en) | 2018-08-17 | 2018-08-17 | A kind of face snapshot recognition method based on face quality evaluation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109117797A (en) |
2018-08-17: Application CN201810940587.2A filed in China; published as CN109117797A; status: Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102170563A (en) * | 2011-03-24 | 2011-08-31 | 杭州海康威视软件有限公司 | Intelligent person capture system and person monitoring management method |
CN106897748A (en) * | 2017-03-02 | 2017-06-27 | 上海极链网络科技有限公司 | Face quality evaluation method and system based on deep convolutional neural networks |
CN108269254A (en) * | 2018-01-17 | 2018-07-10 | 百度在线网络技术(北京)有限公司 | Image quality assessment method and apparatus |
Non-Patent Citations (2)
Title |
---|
VIGNESH S et al.: "Face Image Quality Assessment for Face Selection in Surveillance Video using Convolutional Neural Networks", 2015 IEEE Global Conference on Signal and Information Processing * |
CHEN Zhenghao: "Research on Checkpoint Face Quality Assessment Methods Based on Multi-Feature Fusion", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109740555A (en) * | 2019-01-10 | 2019-05-10 | 上海向素智能科技有限公司 | A stranger recognition method based on face quality and face tracking |
CN109800704A (en) * | 2019-01-17 | 2019-05-24 | 深圳英飞拓智能技术有限公司 | Method and device for face detection in snapshot video |
CN109948564A (en) * | 2019-03-25 | 2019-06-28 | 四川川大智胜软件股份有限公司 | A face image quality classification and assessment method based on supervised deep learning |
CN110070010A (en) * | 2019-04-10 | 2019-07-30 | 武汉大学 | A face attribute association method based on pedestrian re-identification |
CN110070010B (en) * | 2019-04-10 | 2022-06-14 | 武汉大学 | Face attribute association method based on pedestrian re-recognition |
CN110059634B (en) * | 2019-04-19 | 2023-04-18 | 山东博昂信息科技有限公司 | Large-scene face snapshot method |
CN110059634A (en) * | 2019-04-19 | 2019-07-26 | 山东博昂信息科技有限公司 | A large-scene face snapshot method |
CN109978884A (en) * | 2019-04-30 | 2019-07-05 | 恒睿(重庆)人工智能技术研究院有限公司 | Multi-person image scoring method, system, equipment and medium based on face analysis |
CN109978884B (en) * | 2019-04-30 | 2020-06-30 | 恒睿(重庆)人工智能技术研究院有限公司 | Multi-person image scoring method, system, equipment and medium based on face analysis |
CN110321843A (en) * | 2019-07-04 | 2019-10-11 | 杭州视洞科技有限公司 | A face optimization (best-shot selection) method based on deep learning |
CN110321843B (en) * | 2019-07-04 | 2021-11-09 | 杭州视洞科技有限公司 | Face optimization method based on deep learning |
CN110363126A (en) * | 2019-07-04 | 2019-10-22 | 杭州视洞科技有限公司 | A multi-face real-time tracking and best-shot selection method |
CN110610127B (en) * | 2019-08-01 | 2023-10-27 | 平安科技(深圳)有限公司 | Face recognition method and device, storage medium and electronic equipment |
CN110610127A (en) * | 2019-08-01 | 2019-12-24 | 平安科技(深圳)有限公司 | Face recognition method and device, storage medium and electronic equipment |
CN110427888A (en) * | 2019-08-05 | 2019-11-08 | 北京深醒科技有限公司 | A face quality evaluation method based on feature clustering |
CN112307855A (en) * | 2019-08-07 | 2021-02-02 | 北京字节跳动网络技术有限公司 | User state detection method and device, electronic equipment and storage medium |
CN110751043A (en) * | 2019-09-19 | 2020-02-04 | 平安科技(深圳)有限公司 | Face recognition method and device based on face visibility and storage medium |
CN110751043B (en) * | 2019-09-19 | 2023-08-22 | 平安科技(深圳)有限公司 | Face recognition method and device based on face visibility and storage medium |
CN110866471A (en) * | 2019-10-31 | 2020-03-06 | Oppo广东移动通信有限公司 | Face image quality evaluation method and device, computer readable medium and communication terminal |
WO2021083241A1 (en) * | 2019-10-31 | 2021-05-06 | Oppo广东移动通信有限公司 | Facial image quality evaluation method, feature extraction model training method, image processing system, computer readable medium, and wireless communications terminal |
CN111028477A (en) * | 2019-12-06 | 2020-04-17 | 哈尔滨理工大学 | Intelligent tumble detection device and method based on convolutional neural network |
CN111160307A (en) * | 2019-12-31 | 2020-05-15 | 帷幄匠心科技(杭州)有限公司 | Face recognition method and face recognition card punching system |
CN111340213B (en) * | 2020-02-19 | 2023-01-17 | 浙江大华技术股份有限公司 | Neural network training method, electronic device, and storage medium |
CN111340213A (en) * | 2020-02-19 | 2020-06-26 | 浙江大华技术股份有限公司 | Neural network training method, electronic device, and storage medium |
CN111738059A (en) * | 2020-05-07 | 2020-10-02 | 中山大学 | Non-sensory scene-oriented face recognition method |
CN111738059B (en) * | 2020-05-07 | 2024-03-29 | 中山大学 | Face recognition method oriented to non-inductive scene |
CN112069887B (en) * | 2020-07-31 | 2023-12-29 | 深圳市优必选科技股份有限公司 | Face recognition method, device, terminal equipment and storage medium |
CN112069887A (en) * | 2020-07-31 | 2020-12-11 | 深圳市优必选科技股份有限公司 | Face recognition method, face recognition device, terminal equipment and storage medium |
CN111914781B (en) * | 2020-08-10 | 2024-03-19 | 杭州海康威视数字技术股份有限公司 | Face image processing method and device |
CN111914781A (en) * | 2020-08-10 | 2020-11-10 | 杭州海康威视数字技术股份有限公司 | Method and device for processing face image |
CN112215156B (en) * | 2020-10-13 | 2022-10-14 | 北京中电兴发科技有限公司 | Face snapshot method and system in video monitoring |
CN112215156A (en) * | 2020-10-13 | 2021-01-12 | 北京中电兴发科技有限公司 | Face snapshot method and system in video monitoring |
CN112329665A (en) * | 2020-11-10 | 2021-02-05 | 上海大学 | Face snapshot system |
CN112631896B (en) * | 2020-12-02 | 2024-04-05 | 武汉旷视金智科技有限公司 | Equipment performance test method and device, storage medium and electronic equipment |
CN112631896A (en) * | 2020-12-02 | 2021-04-09 | 武汉旷视金智科技有限公司 | Equipment performance testing method and device, storage medium and electronic equipment |
CN112637487A (en) * | 2020-12-17 | 2021-04-09 | 四川长虹电器股份有限公司 | Television intelligent photographing method based on time stack expression recognition |
CN112597916B (en) * | 2020-12-24 | 2021-10-26 | 中标慧安信息技术股份有限公司 | Face image snapshot quality analysis method and system |
CN112597916A (en) * | 2020-12-24 | 2021-04-02 | 中标慧安信息技术股份有限公司 | Face image snapshot quality analysis method and system |
CN112597909A (en) * | 2020-12-25 | 2021-04-02 | 北京芯翌智能信息技术有限公司 | Method and equipment for evaluating quality of face picture |
CN113435443A (en) * | 2021-06-28 | 2021-09-24 | 中国兵器装备集团自动化研究所有限公司 | Method for automatically identifying landmark from video |
CN116386119A (en) * | 2023-05-09 | 2023-07-04 | 北京维艾狄尔信息科技有限公司 | Fitness-trail-based identity recognition method, system, terminal and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109117797A (en) | A face snapshot recognition method based on face quality evaluation | |
CN104517104B (en) | A face recognition method and system for surveillance scenarios | |
CN106845357B (en) | A video face detection and recognition method based on a multi-channel network | |
CN104866829B (en) | A cross-age face verification method based on feature learning | |
CN100361138C (en) | Method and system for real-time detection and continuous tracking of human faces in video sequences | |
CN102214291B (en) | Method for fast and accurate face detection and tracking based on video sequences | |
CN104063719B (en) | Pedestrian detection method and device based on deep convolutional networks | |
CN105069472B (en) | An adaptive vehicle detection method based on convolutional neural networks | |
CN103902961B (en) | Face recognition method and device | |
CN108986064A (en) | A pedestrian flow statistics method, device and system | |
CN109819208A (en) | A dense-crowd security monitoring and management method based on artificial-intelligence dynamic monitoring | |
CN107871100A (en) | Face model training method and device, and face authentication method and device | |
CN108197587A (en) | A method for multi-modal face recognition via face depth prediction | |
CN109657609A (en) | Face recognition method and system | |
CN107403168A (en) | A facial-recognition security system | |
CN110363124A (en) | Rapid expression recognition and application method based on face key points and geometric deformation | |
CN103886305B (en) | Specific-face search method for grassroots policing, stability maintenance and counter-terrorism | |
CN109815874A (en) | A person identity recognition method, device, equipment and readable storage medium | |
CN104504362A (en) | Face detection method based on convolutional neural network | |
CN104951773A (en) | Real-time face recognition and monitoring system | |
CN107122707A (en) | Video pedestrian re-identification method and system based on compact representation of macroscopic features | |
CN109886141A (en) | A pedestrian re-identification method based on uncertainty optimization | |
CN101833654B (en) | Sparse-representation face recognition method based on constrained sampling | |
CN109614907A (en) | Pedestrian re-identification method and device based on feature-enhancement-guided convolutional neural networks | |
CN107066969A (en) | A face recognition method
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190101 |