CN109977867A - Infrared liveness detection method based on machine-learning multi-feature fusion - Google Patents
- Publication number
- CN109977867A CN109977867A CN201910232449.3A CN201910232449A CN109977867A CN 109977867 A CN109977867 A CN 109977867A CN 201910232449 A CN201910232449 A CN 201910232449A CN 109977867 A CN109977867 A CN 109977867A
- Authority
- CN
- China
- Prior art keywords
- feature
- infrared
- point
- face
- multiple features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Abstract
The invention discloses an infrared liveness detection method based on machine-learning multi-feature fusion, comprising the following steps: S1, machine-learning feature extraction, comprising: S11, locating 68 facial feature points with the SDM algorithm and extracting LBP features at the 68 points; S12, extracting eye-region CNN features; S13, extracting face-region CNN features; S2, deep-learning multi-feature fusion, comprising: S21, feature fusion: joining the 68-point LBP features, the eye-region CNN features and the face-region CNN features through the concat layer of the deep-learning framework Caffe into a new feature layer; S22, training the model at a low learning rate to select the most discriminative features and outputting the prediction from those features. The invention improves liveness recognition while keeping recognition imperceptible to the user and fast, achieves a high recognition rate in complex environments, and generalizes well.
Description
Technical field
The present invention relates to the field of infrared liveness detection for smart security terminals, and in particular to an infrared liveness detection method based on machine-learning multi-feature fusion.
Background technique
With the rapid growth of deep learning, face recognition has made breakthrough progress and is used in more and more security products. Face recognition brings great convenience to daily life, but that convenience also raises concerns about product safety: access-control and payment systems, for example, can be attacked with a covertly captured face shown on a screen or printed on paper. Important access-control systems therefore need to judge, alongside face recognition, whether the current face is live; the recognition result is valid only when it is. Liveness detection thus plays a crucial role in the security of face recognition. Many liveness detection algorithms exist, each with advantages and disadvantages, so a single algorithm alone rarely achieves a good recognition result. Several typical liveness detection methods are described below:
(1) Interactive (challenge-response) liveness detection: the most basic face liveness detection algorithm. After the face-detection algorithm finds a face image, facial feature points are regressed with SDM (68 points) or with the 3000fps algorithm, and eye openness, the three head-pose angles pitch, roll and yaw, and mouth openness are computed from the feature points. The machine (smart security terminal) then issues a random instruction and requires the current face to perform the corresponding action; when the system recognizes that the face has performed the required action, the face currently being identified is judged to be live. Advantages of this algorithm: high security, simple and easy to implement. Disadvantages: the user must cooperate, and recognition takes a long time.
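The eye-openness quantity used by this challenge-response scheme can be computed from the six eye landmarks of the 68-point model. The patent gives no formula, so the eye-aspect-ratio sketch below (the function name and the synthetic landmark points are illustrative assumptions) shows one common choice:

```python
import numpy as np

def eye_openness(eye_pts):
    """Eye aspect ratio from the six eye landmarks of the 68-point model:
    mean of the two vertical gaps divided by the horizontal width.
    Higher values mean a more open eye."""
    p = np.asarray(eye_pts, dtype=float)
    v1 = np.linalg.norm(p[1] - p[5])   # first vertical gap
    v2 = np.linalg.norm(p[2] - p[4])   # second vertical gap
    h = np.linalg.norm(p[0] - p[3])    # horizontal width
    return (v1 + v2) / (2.0 * h)

# Synthetic landmarks: an open eye vs. a nearly closed one
open_eye = [(0, 0), (1, -2), (2, -2), (3, 0), (2, 2), (1, 2)]
closed_eye = [(0, 0), (1, -0.2), (2, -0.2), (3, 0), (1.9, 0.2), (1.1, 0.2)]
```

A threshold on this ratio (calibrated per camera) can then decide whether a requested blink actually happened.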
(2) Optical-flow liveness detection on dynamic video: this method exploits the statistics of optical flow in the face region. Under paper and screen-photo attacks the flow directions tend to be consistent, while the flow of a real face is not. Advantages of this method: being dynamic, it is imperceptible to the user. Disadvantages: a real person facing the camera without moving may fail to pass, recognition takes a long time, and screen-video attacks can defeat visible-light systems, so optical-flow liveness judgment is rarely used in mainstream products.
(3) Traditional machine learning, LBP + SVM: after the detection algorithm locates the face, LBP features are extracted from the face image with traditional machine-learning methods; the face sample library is converted into an LBP feature library and fed to SVM training, and the trained SVM model then predicts liveness. Advantages of this method: imperceptible to the user, fast prediction. Disadvantages: in complex scenes with rich samples, the learning capacity of LBP features with an SVM is insufficient.
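As an illustration of this prior-art pipeline (not taken from the patent), the sketch below trains a linear SVM on stand-in LBP histograms; scikit-learn's `LinearSVC` and the synthetic 59-bin histograms are assumptions, and real features would be computed from live and attack face crops:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Stand-in 59-bin uniform-LBP histograms; synthetic, for illustration only.
live = rng.normal(0.6, 0.1, size=(100, 59))
spoof = rng.normal(0.4, 0.1, size=(100, 59))
X = np.vstack([live, spoof])
y = np.array([1] * 100 + [0] * 100)   # 1 = live, 0 = attack

clf = LinearSVC(C=1.0).fit(X, y)       # the trained SVM then predicts liveness
acc = clf.score(X, y)                  # training accuracy on the toy data
```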
(4) Deep-learning liveness detection with CNN-extracted features: after the face-detection algorithm finds a face, real and attack samples are labelled and the face-region image is fed directly into a CNN for learning. Compared with an SVM classifier, the deep-learning approach learns the features of the input samples automatically through the CNN, which is simpler to design, and when the data volume is very large a CNN fits the data better. Advantages of this method: imperceptible to the user, fast prediction, able to fit complex data. Disadvantages: a large amount of training data is required to guarantee generalization.
(5) Liveness detection based on a structured-light face depth map: with the aid of a structured-light device, a spatial depth map is constructed and aligned with the visible-light picture to find the depth-map position corresponding to the visible-light face. Liveness is then judged directly from the intrinsic depth information of the face, so bent-paper and flat-panel attacks are easily blocked. Advantages of this method: fast, imperceptible to the user, very high accuracy. Disadvantages: extra hardware is required, at high cost.
Summary of the invention
To solve the above technical problems, the present invention provides an infrared liveness detection method based on machine-learning multi-feature fusion, so as to realize a real-time machine-learning infrared liveness detection algorithm based on multi-feature fusion in an embedded ARM access-control system. It improves liveness recognition while keeping recognition imperceptible to the user and fast, achieves a high recognition rate in complex environments (including different day and night lighting), and generalizes well.
To achieve the above object, the technical solution adopted by the present invention is as follows:
An infrared liveness detection method based on machine-learning multi-feature fusion, comprising the following steps:
S1, machine-learning feature extraction, comprising:
S11, locating 68 facial feature points with the SDM algorithm and extracting LBP features at the 68 points;
S12, extracting eye-region CNN features;
S13, extracting face-region CNN features;
S2, deep-learning multi-feature fusion, comprising:
S21, feature fusion: joining the 68-point LBP features, the eye-region CNN features and the face-region CNN features through the concat layer of the deep-learning framework Caffe into a new feature layer;
S22, training the model at a low learning rate to select the most discriminative features and outputting the prediction from those features.
Further, step S11 specifically comprises: first locating the face region of the infrared image with the face-detection algorithm, then feeding the face region to the SDM algorithm to locate the 68 facial feature points, normalizing the face image to 96*96, and finally extracting the corresponding 68-point LBP features on the 96*96 face image.
Further, step S12 specifically comprises: taking the eye region located by the 68 feature points of step S11 as one feature-dimension input; the eye-region image is first normalized to 64*64, the eye patches are labelled and organized into an eye-image sample library, and the library is fed to a CNN for training to extract the eye-region CNN features.
Further, step S13 specifically comprises: taking the face region located by the 68 feature points of step S11 as one feature-dimension input; the face-region image is first normalized to 96*96, the face patches are labelled and organized into a face-image sample library, and the library is fed to a CNN for training to extract the face-region CNN features.
Further, step S22 specifically comprises: feeding the LBP features, the 64*64 eye image and the 96*96 face image into the low-learning-rate model for retraining to obtain a fully connected layer after the feature-fusion layer, which outputs 2 neurons in fully connected fashion; the 2 neurons give the probabilities of 0 and 1, which represent non-live and live respectively; the most representative infrared liveness classification features are then selected and the prediction result is output from those features.
Compared with the prior art, after adopting the above technical solution the present invention has the following advantages:
1) The invention fully exploits the outstanding liveness discriminability of infrared face images, and after feature fusion uses low-learning-rate training to pick the most representative liveness classification features for classification. The recognition achieved with the fused features is better than with any single feature, recognition remains imperceptible to the user and fast, and the model generalizes better.
2) The invention points infrared face liveness detection in a new direction: existing liveness classification methods all classify on a single feature, whereas recognition after the multi-feature fusion of the invention is better and the recognition rate in complex environments is higher. This can guide more researchers to solve engineering problems in liveness detection with multi-feature extraction and fusion, and the invention therefore has great promotion value.
Detailed description of the invention
Fig. 1 is the overall flowchart of the infrared liveness detection method based on machine-learning multi-feature fusion of the present invention;
Fig. 2 shows the 68 feature points of an infrared face;
Fig. 3 is a diagram of the multi-feature fusion network.
Specific embodiment
To make the technical problems to be solved, the technical solutions and the advantages clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.
As shown in Fig. 1, the infrared liveness detection method based on machine-learning multi-feature fusion disclosed by the present invention comprises the following steps:
S1, machine-learning feature extraction, comprising:
S11, locating 68 facial feature points with the SDM algorithm and extracting LBP features at the 68 points. Specifically, as shown in Fig. 2, the face region of the infrared image is first located by the face-detection algorithm and then fed to the SDM algorithm to locate the 68 facial feature points. To keep faces of different sizes from introducing excessive noise into the LBP extraction, the face image is normalized to 96*96, and the corresponding 68-point LBP features are finally extracted on the 96*96 face image. The existing LBP extraction scheme computes a code for every pixel of the whole 96*96 picture (face and non-face parts alike); on an infrared face image, however, many fine textures are not visible, and the most salient features lie mainly along the face contour and near the eyes, nose and mouth, which is exactly where the 68 feature points sit. Extracting the corresponding 68-point LBP features on the 96*96 face image as the main features therefore both captures the infrared face features well and speeds up LBP extraction.
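The per-landmark LBP extraction of step S11 can be sketched as follows. This minimal NumPy version is an assumption, since the patent does not fix the LBP variant; it uses a 3*3 neighbourhood, the ">= centre" convention, and clips landmarks to keep the window inside the image:

```python
import numpy as np

def lbp_code(img, x, y):
    """8-neighbour LBP code of pixel (x, y) in a 2-D uint8 image."""
    c = img[y, x]
    nbrs = [img[y-1, x-1], img[y-1, x], img[y-1, x+1], img[y, x+1],
            img[y+1, x+1], img[y+1, x], img[y+1, x-1], img[y, x-1]]
    return sum((1 << i) for i, v in enumerate(nbrs) if v >= c)

def landmark_lbp(face96, landmarks):
    """LBP codes at each landmark of a normalized face crop
    (68 landmarks in the patent's pipeline)."""
    h, w = face96.shape
    codes = []
    for (x, y) in landmarks:
        x = int(np.clip(x, 1, w - 2))  # keep the 3x3 window in bounds
        y = int(np.clip(y, 1, h - 2))
        codes.append(lbp_code(face96, x, y))
    return np.array(codes, dtype=np.uint8)

# Synthetic horizontal-gradient "face" as a stand-in for a real 96x96 crop
face = np.tile(np.arange(96, dtype=np.uint8), (96, 1))
codes = landmark_lbp(face, [(48, 48), (10, 90)])
```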
S12, extracting the eye-region CNN features. Specifically, the eye region located by the 68 feature points of step S11 is taken as one feature-dimension input: indoor infrared eye imaging shows the bright-pupil effect, and outdoors the eyes of a real person differ strongly in texture from the eyes in a paper attack, so the eye region is chosen as a feature dimension. The eye-region image is first normalized to 64*64, the eye patches are labelled and organized into an eye-image sample library, and the library is fed to a CNN for training to extract the eye-region CNN features.
S13, extracting the face-region CNN features. Specifically, the face region located by the 68 feature points of step S11 is taken as one feature-dimension input: the face as a whole carries richer information, changes spread over the entire face can be learned, and the face feature is an integrated one. Since the face is larger than the eye, the face-region image is first normalized to 96*96, the face patches are labelled and organized into a face-image sample library, and the library is fed to a CNN for training to extract the face-region CNN features.
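The patent does not disclose the CNN topology of this branch. A hypothetical Caffe prototxt fragment for the face branch (all layer names and sizes are assumptions, shown only to make the 96*96 input concrete) could look like:

```protobuf
# Hypothetical face-branch definition; not taken from the patent.
layer { name: "face_data" type: "Input" top: "face_data"
        input_param { shape { dim: 1 dim: 1 dim: 96 dim: 96 } } }
layer { name: "face_conv1" type: "Convolution" bottom: "face_data" top: "face_conv1"
        convolution_param { num_output: 16 kernel_size: 3 stride: 1 } }
layer { name: "face_relu1" type: "ReLU" bottom: "face_conv1" top: "face_conv1" }
layer { name: "face_pool1" type: "Pooling" bottom: "face_conv1" top: "face_pool1"
        pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layer { name: "face_feat" type: "InnerProduct" bottom: "face_pool1" top: "face_feat"
        inner_product_param { num_output: 128 } }
```

The eye branch would mirror this with a 64*64 input; the `face_feat` blob is what enters the fusion of step S21.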
S2, deep-learning multi-feature fusion, as shown in Fig. 3, comprising:
S21, feature fusion: the 68-point LBP features, the eye-region CNN features and the face-region CNN features are joined through the concat layer of the deep-learning framework Caffe into a new feature layer.
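For flat feature vectors, the concat-layer fusion amounts to plain concatenation along the channel axis. The sketch below assumes illustrative widths (one LBP code per landmark, 128-dimensional CNN descriptors), which the patent does not specify:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in per-sample features; real ones come from steps S11-S13.
lbp_feat = rng.random(68)     # one LBP code per landmark
eye_feat = rng.random(128)    # eye-branch CNN descriptor
face_feat = rng.random(128)   # face-branch CNN descriptor

# Caffe's Concat layer joins the three bottom blobs into one top blob;
# for 1-D vectors this is ordinary concatenation.
fused = np.concatenate([lbp_feat, eye_feat, face_feat])
```

In prototxt terms this corresponds to a single `Concat` layer with the three feature blobs as bottoms.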
S22, the model is trained at a low learning rate to select the most discriminative features, from which the prediction result is output. Specifically, the LBP features, the 64*64 eye image and the 96*96 face image are fed into the low-learning-rate model for retraining to obtain a fully connected layer after the feature-fusion layer, which outputs 2 neurons in fully connected fashion; the 2 neurons give the probabilities of 0 and 1, which represent non-live and live respectively; the most representative infrared liveness classification features are then selected and the prediction result is output from those features.
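A minimal sketch of the two-neuron, fully connected output described in step S22, with random stand-in weights (the trained weights are of course not disclosed); a softmax turns the two logits into the non-live/live probabilities:

```python
import numpy as np

rng = np.random.default_rng(1)
fused = rng.standard_normal(324)           # fused feature vector from S21
W = rng.standard_normal((2, 324)) * 0.01   # fully connected layer, 2 outputs
b = np.zeros(2)

logits = W @ fused + b
p = np.exp(logits - logits.max())          # numerically stable softmax
p /= p.sum()                               # p[0]: non-live, p[1]: live
is_live = int(p.argmax())
```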
Considering cost-effectiveness, the present invention builds the classifier by multi-feature fusion, improving liveness recognition while keeping recognition imperceptible to the user and fast, and realizes a real-time machine-learning infrared liveness detection algorithm based on multi-feature fusion in an embedded ARM access-control system.
The SDM algorithm above is a feature-point location algorithm common in the field of face recognition and is not detailed in this embodiment.
The Caffe framework above provides a complete toolkit for training, testing, fine-tuning and developing models, together with well-documented examples. The main characteristics and advantages of Caffe are:
Modularity: Caffe is built to be as modular as possible, which makes new data formats, network layers and loss functions easy to add. Layers and loss functions are declaratively defined, and numerous examples show how these parts form recognition systems for different situations.
Separation of representation and implementation: Caffe model definitions are written as configuration files in the Protocol Buffer language. Caffe supports networks in any directed-acyclic-graph form. On instantiation, Caffe reserves the memory the network needs, drawing it from the host or the GPU, and switching between CPU and GPU requires only a single function call.
Compared with other deep-learning tools, Caffe differs mainly in two respects: (1) Caffe is implemented entirely in C++, which makes it easy to port, free of hardware and platform limitations, and suitable for both commercial development and scientific research. (2) Caffe provides many pretrained models; by fine-tuning (Fine-Tuning) these models, new applications can be developed quickly and efficiently without rewriting large amounts of code. The present invention therefore achieves very fast recognition by using the Caffe framework.
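A hypothetical Caffe solver fragment for the low-learning-rate retraining of step S22; all hyper-parameter values and the net file name are assumptions, since the patent discloses none:

```protobuf
# Hypothetical solver.prototxt; values are illustrative, not from the patent.
net: "fusion_train.prototxt"
base_lr: 0.0001        # low rate so pretrained branch weights move slowly
lr_policy: "step"
stepsize: 10000
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
max_iter: 50000
solver_mode: GPU
```

A small `base_lr` is the standard Caffe way to fine-tune pretrained layers without destroying their learned features, which matches the "low learning rate" retraining described above.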
The above describes and illustrates preferred embodiments of the present invention. It should be understood that the invention is not limited to the forms disclosed here, which should not be regarded as excluding other embodiments; the invention can be used in various other combinations, modifications and environments, and can be altered within the scope of the inventive concept described here through the above teachings or the skill or knowledge of the related field. Modifications and changes made by those skilled in the art that do not depart from the spirit and scope of the invention shall all fall within the protection scope of the appended claims.
Claims (5)
1. An infrared liveness detection method based on machine-learning multi-feature fusion, characterized by comprising the following steps:
S1, machine-learning feature extraction, comprising:
S11, locating 68 facial feature points with the SDM algorithm and extracting LBP features at the 68 points;
S12, extracting eye-region CNN features;
S13, extracting face-region CNN features;
S2, deep-learning multi-feature fusion, comprising:
S21, feature fusion: joining the 68-point LBP features, the eye-region CNN features and the face-region CNN features through the concat layer of the deep-learning framework Caffe into a new feature layer;
S22, training the model at a low learning rate to select the most discriminative features and outputting the prediction from those features.
2. The infrared liveness detection method based on machine-learning multi-feature fusion of claim 1, characterized in that step S11 specifically comprises: first locating the face region of the infrared image with the face-detection algorithm, then feeding the face region to the SDM algorithm to locate the 68 facial feature points, normalizing the face image to 96*96, and finally extracting the corresponding 68-point LBP features on the 96*96 face image.
3. The infrared liveness detection method based on machine-learning multi-feature fusion of claim 2, characterized in that step S12 specifically comprises: taking the eye region located by the 68 feature points of step S11 as one feature-dimension input; the eye-region image is first normalized to 64*64, the eye patches are labelled and organized into an eye-image sample library, and the library is fed to a CNN for training to extract the eye-region CNN features.
4. The infrared liveness detection method based on machine-learning multi-feature fusion of claim 3, characterized in that step S13 specifically comprises: taking the face region located by the 68 feature points of step S11 as one feature-dimension input; the face-region image is first normalized to 96*96, the face patches are labelled and organized into a face-image sample library, and the library is fed to a CNN for training to extract the face-region CNN features.
5. The infrared liveness detection method based on machine-learning multi-feature fusion of claim 3, characterized in that step S22 specifically comprises: feeding the LBP features, the 64*64 eye image and the 96*96 face image into the low-learning-rate model for retraining to obtain a fully connected layer after the feature-fusion layer, which outputs 2 neurons in fully connected fashion; the 2 neurons give the probabilities of 0 and 1, which represent non-live and live respectively; the most representative infrared liveness classification features are then selected and the prediction result is output from those features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910232449.3A CN109977867A (en) | 2019-03-26 | 2019-03-26 | Infrared liveness detection method based on machine learning multi-feature fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109977867A true CN109977867A (en) | 2019-07-05 |
Family
ID=67080650
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910232449.3A Pending CN109977867A (en) | 2019-03-26 | 2019-03-26 | Infrared liveness detection method based on machine learning multi-feature fusion
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109977867A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110348385A (en) * | 2019-07-12 | 2019-10-18 | 苏州小阳软件科技有限公司 | Living body faces recognition methods and device |
CN111079659A (en) * | 2019-12-19 | 2020-04-28 | 武汉水象电子科技有限公司 | Face feature point positioning method |
CN112329612A (en) * | 2020-11-03 | 2021-02-05 | 北京百度网讯科技有限公司 | Living body detection method and device and electronic equipment |
CN113191189A (en) * | 2021-03-22 | 2021-07-30 | 深圳市百富智能新技术有限公司 | Face living body detection method, terminal device and computer readable storage medium |
CN114333011A (en) * | 2021-12-28 | 2022-04-12 | 北京的卢深视科技有限公司 | Network training method, face recognition method, electronic device and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105844206A (en) * | 2015-01-15 | 2016-08-10 | 北京市商汤科技开发有限公司 | Identity authentication method and identity authentication device |
CN106295566A (en) * | 2016-08-10 | 2017-01-04 | 北京小米移动软件有限公司 | Facial expression recognizing method and device |
CN106971164A (en) * | 2017-03-28 | 2017-07-21 | 北京小米移动软件有限公司 | Shape of face matching process and device |
CN108664880A (en) * | 2017-03-27 | 2018-10-16 | 三星电子株式会社 | Activity test method and equipment |
CN108898087A (en) * | 2018-06-22 | 2018-11-27 | 腾讯科技(深圳)有限公司 | Training method, device, equipment and the storage medium of face key point location model |
CN109344716A (en) * | 2018-08-31 | 2019-02-15 | 深圳前海达闼云端智能科技有限公司 | Training method, detection method, device, medium and equipment of living body detection model |
CN109460704A (en) * | 2018-09-18 | 2019-03-12 | 厦门瑞为信息技术有限公司 | A kind of fatigue detection method based on deep learning, system and computer equipment |
Non-Patent Citations (1)
Title |
---|
Li Cheng, "Research and Implementation of Face Liveness Detection Technology," China Master's Theses Full-text Database, Information Science & Technology *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190705 |