CN107704813A - Face living body identification method and system - Google Patents
Face living body identification method and system
- Publication number
- CN107704813A (application CN201710852804.8A; granted as CN107704813B)
- Authority
- CN
- China
- Prior art keywords
- characteristic point
- picture
- face
- face characteristic
- dimensional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention discloses a face living body identification method, comprising: performing face feature training and three-dimensional (3D) pose estimation training separately on training images; setting up a two-level CNN for facial feature points, the two main-level CNNs forming a cascaded convolutional neural network; first processing the feature pre-processed picture with the first main-level CNN to obtain a network output giving the rough positions of the facial feature points together with a processed picture, then correcting the facial feature points of the processed picture with the second main-level CNN and outputting a facial feature point model; performing 3D pose estimation training on the input 3D pre-processed pictures and their 3D angle estimates, and outputting a 3D angle model; estimating and detecting live facial expressions with the facial feature point model, and estimating and detecting nodding and head-shaking with the 3D angle model. The method and system effectively integrate liveness detection with face detection and reduce spoofing attacks.
Description
Technical field
The present invention relates to the field of face recognition, and in particular to a face living body identification method and system based on facial feature points and 3D pose estimation.
Background art
In current face recognition systems, identification is mostly performed on photos or video, but such systems can be fooled with a photo or a video of the user. Liveness detection can effectively resist this kind of attack. Existing liveness detection systems, however, are not very accurate at estimating user actions and consume considerable resources. The present invention uses a lightweight network structure for deep-learning-based facial landmark detection and 3D face pose estimation; it reaches very high accuracy while the total model size does not exceed 3 MB, and it runs at more than 25 fps on embedded devices (Android phones, iPhones, etc.), fully meeting real-time requirements.
Summary of the invention
The purpose of the present invention is to overcome the above problems in the prior art by providing a face living body identification method and system.
To achieve this goal, the invention provides a face living body identification method based on facial feature points and 3D pose estimation, comprising:

performing face feature training and 3D pose estimation training separately on training images, obtaining feature pre-processed pictures and 3D pre-processed pictures respectively; mapping the facial feature points onto the corresponding positions of the feature pre-processed pictures; and applying data augmentation to the 3D pre-processed pictures;

setting up a two-level CNN for facial feature points, the two main-level CNNs forming a cascaded convolutional neural network; first processing the feature pre-processed pictures with the first main-level CNN to obtain a network output giving the rough positions of the facial feature points together with a processed picture, then correcting the facial feature points of the processed picture with the second main-level CNN, and outputting a facial feature point model;

performing 3D pose estimation training on the input 3D pre-processed pictures and 3D angle label values, and outputting a 3D angle model;

estimating and detecting live facial expressions with the facial feature point model, and estimating and detecting nodding and head-shaking with the 3D angle model.
Further, in the face living body identification method above, performing face feature training on the training images specifically comprises: performing face detection on the training images with OpenCV, enlarging the face box by a factor of 1.2, and then scaling it to 96 × 96.
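As a rough illustration, the box enlargement and rescaling above reduce to simple geometry. The sketch below assumes OpenCV-style `(x, y, w, h)` boxes and clamps at the image border (the patent does not say how out-of-image boxes are handled); the detection call itself (e.g. a Haar cascade) is omitted.

```python
def expand_and_clamp(box, scale, img_w, img_h):
    """Enlarge a face box about its center by `scale`, clamped to the image.

    `box` is an OpenCV-style (x, y, w, h) tuple; the clamping policy is
    an assumption, since the patent only states the enlargement factor.
    """
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = w * scale, h * scale
    x1 = max(0, int(round(cx - nw / 2.0)))
    y1 = max(0, int(round(cy - nh / 2.0)))
    x2 = min(img_w, int(round(cx + nw / 2.0)))
    y2 = min(img_h, int(round(cy + nh / 2.0)))
    return x1, y1, x2 - x1, y2 - y1

# A 20x20 box centered at (50, 50) grows to 24x24; the crop would then
# be resized to 96x96 (e.g. with cv2.resize) before training.
```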
Further, in the face living body identification method above, performing 3D pose estimation training on the training images specifically comprises: performing face detection on the training images with OpenCV, enlarging the face box by a factor of 1.25, and then scaling it to 64 × 64.
Further, in the face living body identification method above, obtaining the network output of the rough facial feature point positions in the feature pre-processed picture through the first main-level CNN specifically comprises: feeding the feature pre-processed picture and 68 coordinate points into the first main-level CNN; extracting features with the first five CNN sub-layers, combining the features with two sparse fully connected sub-layers, performing regression with a final fully connected layer, and obtaining the network output and a processed picture containing the rough positions of the 68 coordinate points.
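A sketch of how the final regression output might be decoded into 68 points. The patent does not specify the output encoding, so the normalized `x1, y1, x2, y2, ...` ordering and the `[0, 1]` range are assumptions:

```python
import numpy as np

def decode_landmarks(raw, img_size=96):
    """Reshape a flat 136-value regression output into 68 (x, y) points.

    Assumes (hypothetically) normalized coordinates in [0, 1], ordered
    x1, y1, x2, y2, ..., scaled back to the 96x96 input picture.
    """
    pts = np.asarray(raw, dtype=np.float64).reshape(68, 2)
    return pts * img_size
```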
Further, in the face living body identification method above, correcting the facial feature points of the processed picture through the second main-level CNN specifically comprises: first cropping the processed picture at the corresponding positions according to the network output; extracting features with the first five CNN sub-layers of the second main-level CNN, then combining the features with two sparsified fully connected sub-layers, where the output size of the last fully connected layer equals the number of facial feature points at the corresponding position.
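The per-part cropping in the second stage could look like the following: take the coarse points belonging to one part (an eye, the mouth, the nose), compute their bounding box, and pad it. The index set and the 4-pixel margin are illustrative guesses; the patent gives neither.

```python
import numpy as np

def part_crop_box(landmarks, idx, margin=4, img_size=96):
    """Padded bounding box (x1, y1, x2, y2) around one part's coarse points.

    `idx` selects the landmark indices of the part; `margin` is a
    hypothetical padding value not specified in the patent.
    """
    pts = np.asarray(landmarks)[list(idx)]
    x1 = max(0, int(pts[:, 0].min()) - margin)
    y1 = max(0, int(pts[:, 1].min()) - margin)
    x2 = min(img_size, int(pts[:, 0].max()) + margin)
    y2 = min(img_size, int(pts[:, 1].max()) + margin)
    return x1, y1, x2, y2
```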
Further, in the face living body identification method above, the 3D pose estimation training specifically comprises: extracting features from the 3D pre-processed pictures with five convolutional layers with batch normalization, combining the features with one sparse fully connected layer, followed by a fully connected layer with an output size of 3, which outputs the predicted 3D angles.
The present invention also provides a face living body identification system, which estimates and detects live facial expressions with the facial feature point model, and estimates and detects nodding and head-shaking with the 3D angle model.
The above technical solutions of the present invention have the following beneficial effects:
High speed: the 3D pose estimation model reaches 200 fps on a single CPU core, and the cascaded facial landmark model reaches 50 fps on a single CPU core, meeting real-time requirements.
Small model: dimensionality-reduction compression shrinks the model size by more than 100 times.
High accuracy: the facial landmark estimation reaches very high accuracy on the HELEN, AFW, and 300-W datasets.
Anti-spoofing: the method and system effectively integrate liveness detection with face detection and reduce spoofing attacks.
Brief description of the drawings
To explain the embodiments of the present application or the technical solutions in the prior art more clearly, the accompanying drawings used in the embodiments are briefly described below. Obviously, the drawings described below illustrate only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them.
Fig. 1 is a flow chart of the face living body identification method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the 3D angles in an embodiment of the present invention;
Fig. 3 is a flow chart of the detection method in an application example of the present invention;
Fig. 4 shows the network structure used in an embodiment of the present invention.
Detailed description of the embodiments
To help those skilled in the art better understand the technical solutions of the present invention, the invention is further detailed below in conjunction with the accompanying drawings.
As shown in Fig. 1, the present invention provides a face living body identification method based on facial feature points and 3D pose estimation, comprising:

performing face feature training and 3D pose estimation training separately on the training images. The face feature training specifically comprises: performing face detection on the training images with OpenCV, enlarging the face box by a factor of 1.2 (expanding the box 1.2 times on the basis of the detected face box), then scaling it to a 96 × 96 resolution; if the crop is smaller than 96 × 96, it is scaled up to 96 × 96. The 3D pose estimation training specifically comprises: performing face detection on the training images with OpenCV, enlarging the face box by a factor of 1.25, then scaling it to 64 × 64. This yields the feature pre-processed pictures and the 3D pre-processed pictures respectively. The facial feature points are mapped onto the corresponding positions of the feature pre-processed pictures, and data augmentation is applied to the 3D pre-processed pictures; the augmentation may include, but is not limited to, flipping, blurring, and lighting changes.
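The three augmentations named above (flipping, blurring, lighting changes) can be sketched with NumPy alone; a real pipeline would use OpenCV or a dedicated augmentation library. The sign flips applied to yaw and roll under mirroring, the 3x3 box blur, and the fixed brightness offset are all illustrative choices, not taken from the patent.

```python
import numpy as np

def augment(img, angles):
    """Toy flip / blur / brightness augmentations for pose training data.

    `img` is an (H, W) grayscale array and `angles` a (yaw, pitch, roll)
    tuple in degrees. Mirroring negates yaw and roll (a common convention);
    the blur is a naive 3x3 box filter applied to the interior only.
    """
    img = np.asarray(img, dtype=np.float64)
    yaw, pitch, roll = angles
    flipped = (img[:, ::-1], (-yaw, pitch, -roll))
    H, W = img.shape
    blurred = img.copy()
    blurred[1:-1, 1:-1] = sum(
        img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    lit = np.clip(img + 25.0, 0.0, 255.0)  # fixed offset for determinism
    return flipped, blurred, lit
```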
A two-level CNN for facial feature points is set up, and the two main-level CNNs form a cascaded convolutional neural network. First, the first main-level CNN processes the feature pre-processed picture to obtain the network output giving the rough positions of the facial feature points together with a processed picture. Specifically: the feature pre-processed picture and 68 coordinate points are fed into the first main-level CNN, the 68 coordinate points being predicted by the trained convolutional neural network; features are extracted by the first five CNN sub-layers of the first main-level CNN, combined by two sparse fully connected sub-layers, and a final fully connected layer performs regression, yielding the network output and a processed picture containing the rough positions of the 68 coordinate points. The second main-level CNN then corrects the facial feature points of the processed picture. Specifically: the processed picture is first cropped at the corresponding positions according to the network output; features are extracted by the first five CNN sub-layers of the second main-level CNN and combined by two sparsified fully connected sub-layers, where the output size of the last fully connected layer equals the number of facial feature points at the corresponding position (eyes, mouth, nose); finally the facial feature point model is output.
During training, the first two fully connected layers are first trained with a large number of parameters; here, "the first two fully connected layers" refers specifically to the first two fully connected layers of the convolutional neural network. The network structure used in training is shown in Fig. 4, where fc6 and fc7 are these first two fully connected layers and the third fully connected layer fc8 outputs the predicted facial feature points. This speeds up training and lets the parameters of the convolutional part adapt well. In deployment, however, overly large fully connected layers make the model very big and slow. Therefore, after the convolutional parameters are well trained, the fully connected parameters are reduced and the model is fine-tuned; with accuracy unchanged, the model size is compressed by more than 100 times while the speed is also improved.
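One common way to realize this kind of fully connected compression, shown here as an assumption since the patent does not name the exact dimensionality-reduction method, is a truncated-SVD low-rank factorization: replace an m x n weight matrix W with two factors A (m x r) and B (r x n), cutting the parameter count from m*n to r*(m + n).

```python
import numpy as np

def compress_fc(W, rank):
    """Low-rank factorization of a fully connected weight matrix.

    Returns A (m x r) and B (r x n) with A @ B the best rank-`rank`
    approximation of W; in a network, one large FC layer becomes two
    small ones, and fine-tuning then recovers any lost accuracy.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]  # scale left singular vectors by S
    B = Vt[:rank]
    return A, B

W = np.random.default_rng(0).standard_normal((1024, 1024))
A, B = compress_fc(W, 4)
original = W.size              # 1048576 parameters
compressed = A.size + B.size   # 8192 parameters, a 128x reduction
```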
Performance is assessed with the average deviation index err, defined over the M facial feature points as the mean distance between each predicted position p and its labeled position g, normalized by the distance between the left-eye center l and the right-eye center r. With this method the landmark err index reaches about 5.2.
3D pose estimation training is performed on the input 3D pre-processed pictures and the 3D angle label values. Specifically: features are extracted from the 3D pre-processed pictures by five convolutional layers with batch normalization and combined by one sparse fully connected layer, followed by a fully connected layer with an output size of 3 that outputs the predicted 3D angles; the 3D angle model is then output. The definition of the 3D angles is shown in Fig. 2.
The 3D pose estimation is evaluated with two indices. One is the average prediction deviation, i.e. the mean of the differences between each predicted value and its label value; the other is the prediction accuracy, where a deviation within 15° counts as a correct prediction and anything larger as an error. With the compressed model, the prediction accuracy on all three angles is above 90%. The specific average deviation and accuracy per 3D angle are shown in the table below:
 | roll | pitch | yaw |
---|---|---|---|
deviation | 5.2° | 9.3° | 6.7° |
accuracy | 95% | 90% | 92% |
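The two indices above, re-stated as code (a straightforward reading of the text, with the 15° tolerance as given):

```python
import numpy as np

def pose_metrics(pred_deg, gt_deg, tol=15.0):
    """Mean absolute deviation and within-`tol` accuracy for one angle."""
    err = np.abs(np.asarray(pred_deg, dtype=np.float64)
                 - np.asarray(gt_deg, dtype=np.float64))
    return float(err.mean()), float((err <= tol).mean())
```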
Live facial expressions are estimated and detected with the facial feature point model, and nodding and head-shaking are estimated and detected with the 3D angle model.
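The patent does not give the decision rule that turns a sequence of 3D angles into a nod/shake verdict; a minimal hypothetical rule would threshold the range of motion per axis:

```python
def classify_head_action(yaw_seq, pitch_seq, thresh=12.0):
    """Toy nod / shake classifier over per-frame angle sequences (degrees).

    Calls 'nod' when pitch varies more than `thresh` and dominates, and
    'shake' when yaw does; the threshold value is a guess, not a value
    from the patent.
    """
    yaw_range = max(yaw_seq) - min(yaw_seq)
    pitch_range = max(pitch_seq) - min(pitch_seq)
    if pitch_range > thresh and pitch_range >= yaw_range:
        return "nod"
    if yaw_range > thresh:
        return "shake"
    return "none"
```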
The present invention also provides a face living body identification system, which estimates and detects live facial expressions with the facial feature point model, and estimates and detects nodding and head-shaking with the 3D angle model.
Application example:

The system can be divided into three modules: face detection, face action recognition, and face recognition. For an input test picture, the position of the face is located with OpenCV face detection; if multiple faces are found in the effective region, an error is prompted. If exactly one face is detected, the flow proceeds to the next step, face action recognition, i.e. liveness detection: the user performs actions according to the system's prompts, and if the action recognition succeeds, the flow enters the face recognition step. The specific detection method is shown in Fig. N.
In the face recognition step, the photo of the live person is compared with the photo on the identity card using a face recognition algorithm based on deep features, which returns the similarity of the two photos; if the similarity is greater than the set threshold, the comparison passes.
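The comparison step can be sketched as below. The patent only says the deep-feature similarity must exceed a set threshold; cosine similarity and the 0.6 default are illustrative assumptions, and `feat_a` / `feat_b` stand for embeddings produced by the (unspecified) deep feature extractor.

```python
import numpy as np

def faces_match(feat_a, feat_b, threshold=0.6):
    """Compare two deep feature vectors by cosine similarity.

    Returns (similarity, passed); the comparison passes when the
    similarity exceeds `threshold`.
    """
    a = np.asarray(feat_a, dtype=np.float64)
    b = np.asarray(feat_b, dtype=np.float64)
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sim, sim > threshold
```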
Only some exemplary embodiments of the present invention have been described above by way of explanation. Undoubtedly, those of ordinary skill in the art can modify the described embodiments in various ways without departing from the spirit and scope of the present invention. Therefore, the above drawings and description are inherently illustrative and should not be construed as limiting the claims of the present invention.
Claims (7)
- 1. A face living body identification method, characterized by comprising: performing face feature training and 3D pose estimation training separately on training images, obtaining feature pre-processed pictures and 3D pre-processed pictures respectively; mapping facial feature points onto the corresponding positions of the feature pre-processed pictures; applying data augmentation to the 3D pre-processed pictures; setting up a two-level CNN for facial feature points, the two main-level CNNs forming a cascaded convolutional neural network; first processing the feature pre-processed pictures with the first main-level CNN to obtain a network output giving the rough positions of the facial feature points together with a processed picture, then correcting the facial feature points of the processed picture with the second main-level CNN, and outputting a facial feature point model; performing 3D pose estimation training on the input 3D pre-processed pictures and 3D angle label values, and outputting a 3D angle model; estimating and detecting live facial expressions with the facial feature point model, and estimating and detecting nodding and head-shaking with the 3D angle model.
- 2. The face living body identification method according to claim 1, characterized in that performing face feature training on the training images specifically comprises: performing face detection on the training images with OpenCV, enlarging the face box by a factor of 1.2, and then scaling it to 96 × 96.
- 3. The face living body identification method according to claim 1, characterized in that performing 3D pose estimation training on the training images specifically comprises: performing face detection on the training images with OpenCV, enlarging the face box by a factor of 1.25, and then scaling it to 64 × 64.
- 4. The face living body identification method according to claim 1, characterized in that obtaining the network output of the rough facial feature point positions in the feature pre-processed picture through the first main-level CNN specifically comprises: feeding the feature pre-processed picture and 68 coordinate points into the first main-level CNN; extracting features with the first five CNN sub-layers of the first main-level CNN, combining the features with two sparse fully connected sub-layers, performing regression with a final fully connected layer, and obtaining the network output and a processed picture containing the rough positions of the 68 coordinate points.
- 5. The face living body identification method according to claim 1, characterized in that correcting the facial feature points of the processed picture through the second main-level CNN specifically comprises: first cropping the processed picture at the corresponding positions according to the network output; extracting features with the first five CNN sub-layers of the second main-level CNN, then combining the features with two sparsified fully connected sub-layers, the output size of the last fully connected layer being equal to the number of facial feature points at the corresponding position.
- 6. The face living body identification method according to claim 1, characterized in that the 3D pose estimation training specifically comprises: extracting features from the 3D pre-processed pictures with five convolutional layers with batch normalization, combining the features with one sparse fully connected layer, followed by a fully connected layer with an output size of 3 that outputs the predicted 3D angles.
- 7. A face living body identification system using the method of any one of claims 1 to 6, characterized in that the system estimates and detects live facial expressions with the facial feature point model, and estimates and detects nodding and head-shaking with the 3D angle model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710852804.8A CN107704813B (en) | 2017-09-19 | 2017-09-19 | Face living body identification method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710852804.8A CN107704813B (en) | 2017-09-19 | 2017-09-19 | Face living body identification method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107704813A true CN107704813A (en) | 2018-02-16 |
CN107704813B CN107704813B (en) | 2020-11-17 |
Family
ID=61171660
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710852804.8A Active CN107704813B (en) | 2017-09-19 | 2017-09-19 | Face living body identification method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107704813B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105447473A (en) * | 2015-12-14 | 2016-03-30 | Jiangsu University | PCANet-CNN-based arbitrary-pose facial expression recognition method |
CN105631439A (en) * | 2016-02-18 | 2016-06-01 | Beijing Megvii Technology Co., Ltd. | Face image collection method and device |
CN106203395A (en) * | 2016-07-26 | 2016-12-07 | Xiamen University | Face attribute recognition method based on multi-task deep learning |
CN106599830A (en) * | 2016-12-09 | 2017-04-26 | Institute of Automation, Chinese Academy of Sciences | Method and apparatus for locating face key points |
CN106951840A (en) * | 2017-03-09 | 2017-07-14 | Beijing University of Technology | Facial feature point detection method |
CN106980812A (en) * | 2016-12-14 | 2017-07-25 | Sichuan Changhong Electric Co., Ltd. | Three-dimensional facial feature point localization method based on cascaded convolutional neural networks |
CN107038429A (en) * | 2017-05-03 | 2017-08-11 | Sichuan Yuntu Ruishi Technology Co., Ltd. | Multi-task cascaded face alignment method based on deep learning |
Non-Patent Citations (3)
Title |
---|
HAOXIANG LI et al.: "A Convolutional Neural Network Cascade for Face Detection", IEEE * |
SUNI S S et al.: "A Real Time Decision Support System using Head Nod and Shake", IEEE * |
ZENG Cheng et al.: "Multi-attribute detection of face liveness based on multi-task CNN", Science Technology and Engineering * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108830901A (en) * | 2018-06-22 | 2018-11-16 | Vivo Mobile Communication Co., Ltd. | Image processing method and electronic device |
CN108921131A (en) * | 2018-07-26 | 2018-11-30 | China UnionPay Co., Ltd. | Method and device for generating a face detection model and three-dimensional face images |
CN108921131B (en) * | 2018-07-26 | 2022-05-24 | China UnionPay Co., Ltd. | Method and device for generating face detection model and three-dimensional face image |
CN109670452A (en) * | 2018-12-20 | 2019-04-23 | Beijing Megvii Technology Co., Ltd. | Face detection method, device, electronic equipment and face detection model |
CN111860055A (en) * | 2019-04-29 | 2020-10-30 | Beijing Eyecool Intelligent Technology Co., Ltd. | Face silence living body detection method and device, readable storage medium and equipment |
CN111860055B (en) * | 2019-04-29 | 2023-10-24 | Beijing Eyecool Intelligent Technology Co., Ltd. | Face silence living body detection method, device, readable storage medium and equipment |
CN111860078A (en) * | 2019-04-30 | 2020-10-30 | Beijing Eyecool Intelligent Technology Co., Ltd. | Face silence living body detection method and device, readable storage medium and equipment |
CN111860078B (en) * | 2019-04-30 | 2024-05-14 | Beijing Eyecool Intelligent Technology Co., Ltd. | Face silence living body detection method, device, readable storage medium and equipment |
CN110147776A (en) * | 2019-05-24 | 2019-08-20 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and apparatus for determining face key point positions |
CN110147776B (en) * | 2019-05-24 | 2021-06-11 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and device for determining positions of key points of human face |
Also Published As
Publication number | Publication date |
---|---|
CN107704813B (en) | 2020-11-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107704813A (en) | Face living body identification method and system | |
CN111091075B (en) | Face recognition method and device, electronic equipment and storage medium | |
WO2018188453A1 (en) | Method for determining human face area, storage medium, and computer device | |
TWI686774B (en) | Human face live detection method and device | |
WO2020107847A1 (en) | Bone point-based fall detection method and fall detection device therefor | |
CN109902546A (en) | Face identification method, device and computer-readable medium | |
CN105260726B (en) | Interactive video liveness detection method and system based on face pose control | |
WO2020248780A1 (en) | Living body testing method and apparatus, electronic device and readable storage medium | |
CN110569756A (en) | face recognition model construction method, recognition method, device and storage medium | |
WO2020192112A1 (en) | Facial recognition method and apparatus | |
CN106203242A (en) | Similar image recognition method and device | |
CN108875907B (en) | Fingerprint identification method and device based on deep learning | |
WO2022252642A1 (en) | Behavior posture detection method and apparatus based on video image, and device and medium | |
CN110956082B (en) | Face key point detection method and detection system based on deep learning | |
CN105335719A (en) | Living body detection method and device | |
WO2019153175A1 (en) | Machine learning-based occluded face recognition system and method, and storage medium | |
US20220262093A1 (en) | Object detection method and system, and non-transitory computer-readable medium | |
CN110135277B (en) | Human behavior recognition method based on convolutional neural network | |
CN112597885A (en) | Face living body detection method and device, electronic equipment and computer storage medium | |
CN111754391A (en) | Face correcting method, face correcting equipment and computer readable storage medium | |
CN104376312A (en) | Face recognition method based on word bag compressed sensing feature extraction | |
CN110008825A (en) | Palm grain identification method, device, computer equipment and storage medium | |
CN113298158A (en) | Data detection method, device, equipment and storage medium | |
CN112149517A (en) | Face attendance checking method and system, computer equipment and storage medium | |
CN109409322B (en) | Living body detection method and device, face recognition method and face detection system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20200930 Address after: Room 502, 5 / F, No.9 Zhichun Road, Haidian District, Beijing 100191 Applicant after: BEIJING EWAY DACHENG TECHNOLOGY Co.,Ltd. Address before: 100082 Beijing city Haidian District Xitucheng Road 10, High-Tech Mansion BUPT, 1209 Applicant before: BEIJING FEISOU TECHNOLOGY Co.,Ltd. |
GR01 | Patent grant | ||