CN112749687A - Image quality and silence living body detection multitask training method and equipment - Google Patents
- Publication number
- CN112749687A (application CN202110132064.7A)
- Authority
- CN
- China
- Prior art keywords
- label
- layer
- living body
- training
- living
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a picture quality and silent living body detection multi-task training method and equipment, wherein the method comprises the following steps: obtaining a plurality of training samples, each corresponding to a quality label and a living body label; fusing the quality label and the living body label of each training sample to obtain a living body quality label for each training sample; training a pre-established MobileFaceNet model through the plurality of training samples with living body quality labels to obtain a multi-task judgment model; acquiring a sample image of an object to be detected; inputting the sample image into the multi-task judgment model to obtain the judgment probability corresponding to the sample image; and, if the judgment probability is greater than a preset threshold, determining that the multi-task detection is passed. The multi-task learning in this scheme reduces model complexity, improves model generalization, and reduces the overall time consumption of the system. In addition, a new activation function, SquareAct, is introduced, which makes the quality judgment and living body detection tasks more effective.
Description
Technical Field
The invention relates to the technical field of picture quality and silent living body detection training, and in particular to a picture quality and silent living body detection multi-task training method and equipment.
Background
At present, face recognition technology is widely applied in scenarios such as stations, payment and authorization. Face recognition depends heavily on the quality of the captured face image, and that quality significantly affects both recognition efficiency and the recognition success rate.

To improve the recognition success rate and to counter attempts to deceive the face recognition system with photos and similar media, existing technical schemes generally pass through two stages before the face recognition task itself: picture quality filtering and living body detection, which together eliminate pictures that do not meet the requirements (low quality, prosthesis/spoof attacks, and so on).

However, in the prior art the face image quality judgment and the living body detection judgment are performed independently and separately, which causes problems: carrying out the two stages separately ignores the relationship between the tasks, increases the time consumption of the system, and thereby reduces the final face recognition efficiency.

For this reason, a better solution to the problems of the prior art is needed.
Disclosure of Invention
The invention provides a picture quality and silent living body detection multi-task training method and equipment, which solve the technical problem of low efficiency in the prior art.
The technical scheme for solving the technical problems is as follows:
An embodiment of the invention provides a picture quality and silent living body detection multi-task training method, which comprises the following steps:
obtaining a plurality of training samples; wherein each training sample corresponds to a quality label and a living body label;
fusing the quality label and the living body label of each training sample to obtain a living body quality label of each training sample;
training a pre-established MobileFaceNet model through a plurality of training samples with the living body quality labels to obtain a multi-task judgment model; wherein the decoding layer in the MobileFaceNet model consists of a fully connected layer and the activation function SquareAct;
acquiring a sample image of an object to be detected;
inputting the sample image into the multi-task judgment model to obtain the judgment probability corresponding to the sample image;
and if the judgment probability is larger than a preset threshold value, determining that the multi-task detection is passed.
In a specific embodiment, the quality label value corresponding to the quality label ranges from 0 to 1, and the living body label value corresponding to the living body label is 0 or 1; the living body quality label value corresponding to the living body quality label ranges from 0 to 1.
In a specific embodiment, the living body quality label value corresponding to the living body quality label is the product of the quality label value corresponding to the quality label and the living body label value corresponding to the living body label.
In a specific embodiment, the MobileFaceNet model further includes: an input layer, a convolution layer, a Sigmoid activation layer and a cross entropy loss function layer; wherein,

the input layer is connected with the convolution layer, the convolution layer is connected with the decoding layer, the decoding layer is connected with the Sigmoid activation layer, and the Sigmoid activation layer is connected with the cross entropy loss function layer;

the cross entropy loss function layer is connected back to the convolution layer.
In a specific embodiment, the method further comprises: and if the judgment probability is not greater than a preset threshold value, determining that the multi-task detection fails.
An embodiment of the invention also provides a picture quality and silent living body detection multi-task training device, which comprises:
a first acquisition module, configured to obtain a plurality of training samples; wherein each training sample corresponds to a quality label and a living body label;
a fusion module, configured to fuse the quality label and the living body label of each training sample to obtain a living body quality label of each training sample;
a training module, configured to train a pre-established MobileFaceNet model through a plurality of training samples with the living body quality labels to obtain a multi-task judgment model; wherein the decoding layer in the MobileFaceNet model consists of a fully connected layer and the activation function SquareAct;
the second acquisition module is used for acquiring a sample image of the object to be detected;
the judging module is used for inputting the sample image into the multi-task judging model to obtain the judging probability corresponding to the sample image;
and the determining module is used for determining that the multitask detection is passed when the judgment probability is greater than a preset threshold value.
In a specific embodiment, the quality label value corresponding to the quality label ranges from 0 to 1, and the living body label value corresponding to the living body label is 0 or 1; the living body quality label value corresponding to the living body quality label ranges from 0 to 1.
In a specific embodiment, the living body quality label value corresponding to the living body quality label is the product of the quality label value corresponding to the quality label and the living body label value corresponding to the living body label.
In a specific embodiment, the MobileFaceNet model further includes: an input layer, a convolution layer, a Sigmoid activation layer and a cross entropy loss function layer; wherein,

the input layer is connected with the convolution layer, the convolution layer is connected with the decoding layer, the decoding layer is connected with the Sigmoid activation layer, and the Sigmoid activation layer is connected with the cross entropy loss function layer;

the cross entropy loss function layer is connected back to the convolution layer.
In a specific embodiment, the apparatus further comprises:
and the processing module is used for determining that the multitask detection fails when the judgment probability is not greater than a preset threshold value.
The invention has the beneficial effects that:
In this scheme, the training data used by the two tasks of face quality judgment and living body detection are similar, so the bottom-layer features of the deep network can be shared; the multi-task learning reduces model complexity, improves model generalization, and reduces the overall time consumption of the system. In addition, compared with activation functions commonly used in CNNs (convolutional neural networks) such as PReLU, ReLU and tanh, the new activation function SquareAct introduced by the invention is more effective for the quality judgment and living body detection tasks.
Drawings
Fig. 1 is a schematic flowchart of a picture quality and silent living body detection multi-task training method according to an embodiment of the present invention;

Fig. 2 is a schematic flowchart of a picture quality and silent living body detection multi-task training method according to an embodiment of the present invention;

Fig. 3 is a schematic diagram of the model structure used in the picture quality and silent living body detection multi-task training method according to an embodiment of the present invention;

Fig. 4 is a schematic structural diagram of a picture quality and silent living body detection multi-task training device according to an embodiment of the present invention;

Fig. 5 is a schematic structural diagram of a picture quality and silent living body detection multi-task training device according to an embodiment of the present invention;
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
The method and the device are used for detecting picture quality and a living body in the silent state. Specifically, a living body in the silent state means a user who keeps silent: the user neither speaks nor performs any preset action, such as blinking or shaking the head.
Example 1
An embodiment of the present invention provides a picture quality and silent living body detection multi-task training method, as shown in fig. 1 or fig. 2, including the following steps:

101, obtaining a plurality of training samples; wherein each training sample corresponds to a quality label and a living body label;

Specifically, pre-collected images are preprocessed to generate the training samples; each training sample carries two label values, a quality label and a living body label. The preprocessing may include cleaning the images by removing unqualified pictures, and may also include manually annotating the values of the quality label and the living body label. The quality label value indicates the image quality of the training sample, for example whether the image is sufficiently clear and whether the light intensity is appropriate; the living body label indicates whether the object in the training sample is a living body, and it takes only two values, one indicating a living body and the other a non-living body.
102, fusing the quality label and the living body label of each training sample to obtain a living body quality label of each training sample;
specifically, the range of the quality label value corresponding to the quality label is 0-1, wherein the larger the quality label value is, the higher the image quality of the training sample in which the quality label is represented is; the value of the living body label corresponding to the living body label is 0 or 1; specifically, a value of 0 for the living body label indicates that the object in the training sample is not a living body, and a value of 1 for the living body label indicates that the object in the training sample is a living body; in order to train more effectively, the range of the living body quality label value corresponding to the living body quality label is between 0 and 1, and the range of the living body quality label value and the range of the quality label value are set to be in the same range of the first-level living body label value, so that the numerical range of a subsequent training model does not need to be changed, and the processing efficiency is improved.
Furthermore, the fusion can also be performed in the following manner:
specifically, let the living body label be S { + S, -S }, where true human is + S and prosthesis is-S.
The image quality label (calculated according to the ambiguity and the like) is q, and the new fusion label calculation method is
That is, the live tag 1/0 is replaced with s/-s, s >0 being a scale value, multiplied by the mass tag q. Note that when S ═ S, it should be multiplied by (1-q), a specific principle is a picture of low quality, whose fusion tag value is also low. And subsequently, the new label is manufactured after Sigmoid normalization to be between [0 and 1 ].
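The signed-label fusion above can be sketched in Python as follows; note that the scale value s = 4.0 is an illustrative choice made here, since the text only requires s > 0:

```python
import math


def sigmoid(z: float) -> float:
    """Standard logistic function used to normalize the fused label into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))


def fuse_signed(q: float, is_live: bool, s: float = 4.0) -> float:
    """Fuse the image quality label q in [0, 1] with the signed living body
    label +s / -s and squash with Sigmoid.

    A live sample contributes s * q; a spoof contributes -s * (1 - q),
    so a low-quality picture always ends up with a low fused label.
    The default s = 4.0 is illustrative, not taken from the source."""
    z = s * q if is_live else -s * (1.0 - q)
    return sigmoid(z)
```

A high-quality live image (q close to 1) maps close to 1, while a low-quality spoof maps close to 0, matching the stated fusion principle.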
In a specific embodiment, the living body quality label value corresponding to the living body quality label is the product of the quality label value corresponding to the quality label and the living body label value corresponding to the living body label.
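A minimal sketch of this product fusion, assuming the label conventions stated above (quality label in [0, 1], living body label 0 or 1):

```python
def fuse_labels(quality: float, live: int) -> float:
    """Living body quality label as the product of the quality label value
    and the binary living body label value."""
    if not (0.0 <= quality <= 1.0) or live not in (0, 1):
        raise ValueError("quality must lie in [0, 1] and live must be 0 or 1")
    return quality * live
```

Under this rule, any non-living sample fuses to 0 regardless of quality, while a living sample keeps its quality score, so the fused label stays in [0, 1] as required.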
103, training a pre-established MobileFaceNet model through a plurality of training samples with the living body quality labels to obtain a multi-task judgment model; wherein the decoding layer in the MobileFaceNet model consists of a fully connected layer and the activation function SquareAct;
Specifically, data enhancement processing may be performed during training; specifically, a new training sample and a new label are constructed by linear interpolation through Mixup (a data enhancement method), with the new sample and label computed as follows:

x̃ = λ·xi + (1 − λ)·xj,
ỹ = λ·yi + (1 − λ)·yj,

where (xi, yi) and (xj, yj) are two training samples from the raw data set together with their corresponding labels, and λ ∈ [0, 1] is a parameter drawn from a Beta distribution, λ ~ Beta(α, α), α ∈ (0, +∞).
The loss function when performing the binary classification task takes the form:

Lm = λ·Ci(yp, yi) + (1 − λ)·Cj(yp, yj),

where Lm is the loss function value and C denotes the cross entropy under the continuous label, specifically:

Ci = −yi·log(yp) − (1 − yi)·log(1 − yp),
Cj = −yj·log(yp) − (1 − yj)·log(1 − yp),

where yp is the class probability predicted by the model, and yi, yj are the continuous image labels produced according to the method above.
In the general binary classification problem, the label in Mixup is yi,j ∈ {0, 1}. In this scheme, based on the multiple tasks, the labels are continuous, yi,j ∈ [0, 1]; combined with the Mixup method, this yields a more robust classification result.
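The Mixup construction and the continuous-label cross entropy above can be sketched in pure Python as follows; the value alpha = 0.2 is an illustrative choice, since the source only constrains α ∈ (0, +∞):

```python
import math
import random


def mixup(x_i, y_i, x_j, y_j, alpha=0.2):
    """Build a new sample/label pair by linear interpolation of two training
    pairs; the mixing weight lam is drawn from Beta(alpha, alpha)."""
    lam = random.betavariate(alpha, alpha)
    x_new = [lam * a + (1.0 - lam) * b for a, b in zip(x_i, x_j)]
    y_new = lam * y_i + (1.0 - lam) * y_j
    return x_new, y_new, lam


def cross_entropy(y_p, y):
    """Binary cross entropy under a continuous label y in [0, 1]:
    C = -y*log(y_p) - (1 - y)*log(1 - y_p)."""
    eps = 1e-12  # numerical guard against log(0)
    return -y * math.log(y_p + eps) - (1.0 - y) * math.log(1.0 - y_p + eps)


def mixup_loss(y_p, y_i, y_j, lam):
    """L_m = lam * C_i(y_p, y_i) + (1 - lam) * C_j(y_p, y_j)."""
    return lam * cross_entropy(y_p, y_i) + (1.0 - lam) * cross_entropy(y_p, y_j)
```

With lam = 1 the loss collapses to the plain cross entropy against yi, so the Mixup loss is a strict generalization of the usual binary objective.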
Specifically, as shown in fig. 3, a MobileFaceNet model is constructed in advance. MobileFaceNet is a network model for face recognition that can run on mobile devices; a single network model is only about 4 MB yet achieves very high accuracy. The decoding layer consists of a fully connected layer and the activation function SquareAct, which is expressed as: S(x) = x², where x is the output of the fully connected layer, S is shorthand for the SquareAct activation, and SquareAct is an element-wise operation. For example, if the fully connected layer output is x = [1, 2, 3, 4, 5, ...], then S = [1, 4, 9, 16, 25, ...].
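SquareAct is simple enough to express in a few lines; this sketch just squares each element of the fully connected layer's output:

```python
def square_act(x):
    """SquareAct activation: element-wise square, S(x) = x ** 2."""
    return [v * v for v in x]


# Reproduces the example from the description:
# square_act([1, 2, 3, 4, 5]) -> [1, 4, 9, 16, 25]
```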
In addition, the MobileFaceNet model further includes: an input layer, a convolution layer, a Sigmoid activation layer and a cross entropy loss function layer; the input layer is connected with the convolution layer, the convolution layer with the decoding layer, the decoding layer with the Sigmoid activation layer, and the Sigmoid activation layer with the cross entropy loss function layer; the cross entropy loss function layer is connected back to the convolution layer. In a specific training process, the cross entropy loss function layer gives feedback to the convolution layer, for example by gradient back-propagation, so that the convolution layer is adjusted and training continues until the set number of training iterations is reached or the cross entropy loss function layer no longer provides feedback.
104, acquiring a sample image of an object to be detected;
specifically, the step 104 is not necessarily performed after the step 103, and may be performed simultaneously with the aforementioned steps 101-103. The specific object to be detected may be a user, and the sample image thereof may identify whether the user exists, and parameters such as quality of the sample image, so as to facilitate subsequent execution of determination.
105, inputting the sample image into the multi-task judgment model to obtain the judgment probability corresponding to the sample image;

and 106, if the judgment probability is greater than the preset threshold, determining that the multi-task detection is passed.
Specifically, the preset threshold may be set according to actual detection needs and experience. When the judgment probability is greater than the preset threshold, it is determined that the multi-task detection is passed; if the judgment probability is not greater than the preset threshold, it is determined that the multi-task detection is not passed.
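The final decision step reduces to a threshold comparison. A sketch follows, in which the default threshold 0.9 is only an illustrative value, since the source leaves the threshold to practical needs and experience:

```python
def multitask_detection_passes(probability: float, threshold: float = 0.9) -> bool:
    """Pass the combined quality + liveness check only when the model's
    judgment probability exceeds the preset threshold."""
    return probability > threshold
```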
Example 2
Embodiment 2 of the present invention further discloses a picture quality and silent living body detection multi-task training device, as shown in fig. 4, including:
a first obtaining module 201, configured to obtain a plurality of training samples; wherein each training sample corresponds to a quality label and a living body label;
a fusion module 202, configured to fuse the quality label and the living body label of each training sample to obtain a living body quality label of each training sample;
a training module 203, configured to train a pre-established MobileFaceNet model through a plurality of training samples with the living body quality labels to obtain a multi-task judgment model; wherein the decoding layer in the MobileFaceNet model consists of a fully connected layer and the activation function SquareAct;
the second obtaining module 204 is configured to obtain a sample image of the object to be detected;
a decision module 205, configured to input the sample image into the multi-task decision model, so as to obtain a decision probability corresponding to the sample image;
a determining module 206, configured to determine that the multitask detection is passed when the decision probability is greater than a preset threshold.
In a specific embodiment, the quality label value corresponding to the quality label ranges from 0 to 1, and the living body label value corresponding to the living body label is 0 or 1; the living body quality label value corresponding to the living body quality label ranges from 0 to 1.
In a specific embodiment, the living body quality label value corresponding to the living body quality label is the product of the quality label value corresponding to the quality label and the living body label value corresponding to the living body label.
In a specific embodiment, the MobileFaceNet model further includes: an input layer, a convolution layer, a Sigmoid activation layer and a cross entropy loss function layer; wherein,

the input layer is connected with the convolution layer, the convolution layer is connected with the decoding layer, the decoding layer is connected with the Sigmoid activation layer, and the Sigmoid activation layer is connected with the cross entropy loss function layer;

the cross entropy loss function layer is connected back to the convolution layer.
In a specific embodiment, as shown in fig. 5, the device further includes:
and the processing module 207 is used for determining that the multitask detection fails when the judgment probability is not greater than a preset threshold value.
The embodiment of the invention discloses a picture quality and silent living body detection multi-task training method and equipment, wherein the method comprises the following steps: obtaining a plurality of training samples, each corresponding to a quality label and a living body label; fusing the quality label and the living body label of each training sample to obtain a living body quality label of each training sample; training a pre-established MobileFaceNet model through a plurality of training samples with the living body quality labels to obtain a multi-task judgment model, wherein the decoding layer in the MobileFaceNet model consists of a fully connected layer and the activation function SquareAct; acquiring a sample image of an object to be detected; inputting the sample image into the multi-task judgment model to obtain the judgment probability corresponding to the sample image; and, if the judgment probability is greater than a preset threshold, determining that the multi-task detection is passed. In this scheme, the training data used by the two tasks of face quality judgment and living body detection are similar, so the bottom-layer features of the deep network can be shared; the multi-task learning reduces model complexity, improves model generalization, and reduces the overall time consumption of the system. In addition, compared with activation functions commonly used in CNNs (convolutional neural networks) such as PReLU, ReLU and tanh, the new activation function SquareAct introduced by the invention is more effective for the quality judgment and living body detection tasks.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A picture quality and silence live detection multitask training method is characterized by comprising the following steps:
obtaining a plurality of training samples; wherein each training sample corresponds to a quality label and a living body label;
fusing the quality label and the living body label of each training sample to obtain a living body quality label of each training sample;
training a pre-established MobileFaceNet model through a plurality of training samples with the living body quality labels to obtain a multi-task judgment model; wherein the decoding layer in the MobileFaceNet model consists of a fully connected layer and the activation function SquareAct;
acquiring a sample image of an object to be detected;
inputting the sample image into the multi-task judgment model to obtain the judgment probability corresponding to the sample image;
and if the judgment probability is larger than a preset threshold value, determining that the multi-task detection is passed.
2. The method of claim 1, wherein the quality label value corresponding to the quality label ranges from 0 to 1 and the living body label value corresponding to the living body label is 0 or 1; the living body quality label value corresponding to the living body quality label ranges from 0 to 1.
3. The method according to claim 1 or 2, wherein the living body quality label value corresponding to the living body quality label is the product of the quality label value corresponding to the quality label and the living body label value corresponding to the living body label.
4. The method of claim 1, wherein the MobileFaceNet model further comprises: an input layer, a convolution layer, a Sigmoid activation layer and a cross entropy loss function layer; wherein,

the input layer is connected with the convolution layer, the convolution layer is connected with the decoding layer, the decoding layer is connected with the Sigmoid activation layer, and the Sigmoid activation layer is connected with the cross entropy loss function layer;

the cross entropy loss function layer is connected back to the convolution layer.
5. The method of claim 1, further comprising: and if the judgment probability is not greater than a preset threshold value, determining that the multi-task detection fails.
6. A picture quality and silence liveness detection multitask training device, comprising:
a first acquisition module, configured to obtain a plurality of training samples; wherein each training sample corresponds to a quality label and a living body label;
a fusion module, configured to fuse the quality label and the living body label of each training sample to obtain a living body quality label of each training sample;
a training module, configured to train a pre-established MobileFaceNet model through a plurality of training samples with the living body quality labels to obtain a multi-task judgment model; wherein the decoding layer in the MobileFaceNet model consists of a fully connected layer and the activation function SquareAct;
the second acquisition module is used for acquiring a sample image of the object to be detected;
the judging module is used for inputting the sample image into the multi-task judging model to obtain the judging probability corresponding to the sample image;
and the determining module is used for determining that the multitask detection is passed when the judgment probability is greater than a preset threshold value.
7. The apparatus of claim 6, wherein the quality label value corresponding to the quality label ranges from 0 to 1 and the living body label value corresponding to the living body label is 0 or 1; the living body quality label value corresponding to the living body quality label ranges from 0 to 1.
8. The apparatus of claim 6 or 7, wherein the living body quality label value corresponding to the living body quality label is the product of the quality label value corresponding to the quality label and the living body label value corresponding to the living body label.
9. The apparatus of claim 6, wherein the MobileFaceNet model further comprises: an input layer, a convolution layer, a Sigmoid activation layer and a cross entropy loss function layer; wherein,

the input layer is connected with the convolution layer, the convolution layer is connected with the decoding layer, the decoding layer is connected with the Sigmoid activation layer, and the Sigmoid activation layer is connected with the cross entropy loss function layer;

the cross entropy loss function layer is connected back to the convolution layer.
10. The apparatus of claim 6, further comprising:
and the processing module is used for determining that the multitask detection fails when the judgment probability is not greater than a preset threshold value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110132064.7A CN112749687B (en) | 2021-01-31 | 2021-01-31 | Picture quality and silence living body detection multitasking training method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110132064.7A CN112749687B (en) | 2021-01-31 | 2021-01-31 | Picture quality and silence living body detection multitasking training method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112749687A true CN112749687A (en) | 2021-05-04 |
CN112749687B CN112749687B (en) | 2024-06-14 |
Family
ID=75653391
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110132064.7A Active CN112749687B (en) | 2021-01-31 | 2021-01-31 | Picture quality and silence living body detection multitasking training method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112749687B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115512427A (en) * | 2022-11-04 | 2022-12-23 | 北京城建设计发展集团股份有限公司 | User face registration method and system combined with matched biopsy |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110991249A (en) * | 2019-11-04 | 2020-04-10 | 支付宝(杭州)信息技术有限公司 | Face detection method, face detection device, electronic equipment and medium |
CN111241925A (en) * | 2019-12-30 | 2020-06-05 | 新大陆数字技术股份有限公司 | Face quality evaluation method, system, electronic equipment and readable storage medium |
CN111611851A (en) * | 2020-04-10 | 2020-09-01 | 北京中科虹霸科技有限公司 | Model generation method, iris detection method and device |
WO2020187160A1 (en) * | 2019-03-15 | 2020-09-24 | 北京嘉楠捷思信息技术有限公司 | Cascaded deep convolutional neural network-based face recognition method and system |
CN112215043A (en) * | 2019-07-12 | 2021-01-12 | 普天信息技术有限公司 | Human face living body detection method |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115512427A (en) * | 2022-11-04 | 2022-12-23 | 北京城建设计发展集团股份有限公司 | User face registration method and system combined with cooperative living body detection |
CN115512427B (en) * | 2022-11-04 | 2023-04-25 | 北京城建设计发展集团股份有限公司 | User face registration method and system combined with cooperative living body detection |
Also Published As
Publication number | Publication date |
---|---|
CN112749687B (en) | 2024-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111242900A (en) | Product qualification determination method and device, electronic equipment and storage medium | |
CN112446869A (en) | Unsupervised industrial product defect detection method and device based on deep learning | |
CN113591674B (en) | Edge environment behavior recognition system for real-time video stream | |
CN111199238A (en) | Behavior identification method and equipment based on double-current convolutional neural network | |
CN116740384B (en) | Intelligent control method and system of floor washing machine | |
CN111145145A (en) | Image surface defect detection method based on MobileNet | |
CN117011274A (en) | Automatic glass bottle detection system and method thereof | |
Gupta et al. | Progression modelling for online and early gesture detection | |
Krithika et al. | MAFONN-EP: A minimal angular feature oriented neural network based emotion prediction system in image processing | |
CN112749687B (en) | Picture quality and silence living body detection multitasking training method and device | |
KR20200038072A (en) | Entropy-based neural networks partial learning method and system | |
ViswanathReddy et al. | Facial emotions over static facial images using deep learning techniques with hysterical interpretation | |
CN115761576A (en) | Video motion recognition method and device and storage medium | |
CN114666571A (en) | Video sensitive content detection method and system | |
Mobsite et al. | A Deep Learning Dual-Stream Framework for Fall Detection | |
CN116958615A (en) | Picture identification method, device, equipment and medium | |
Karthik et al. | GrapeLeafNet: A Dual-Track Feature Fusion Network with Inception-ResNet and Shuffle-Transformer for Accurate Grape Leaf Disease Identification | |
Xenya et al. | Intruder detection with alert using cloud based convolutional neural network and Raspberry Pi | |
CN113435248A (en) | Mask face recognition base enhancement method, device, equipment and readable storage medium | |
CN110991366A (en) | Shipping monitoring event identification method and system based on three-dimensional residual error network | |
Vasudeva et al. | Comparative Analysis of Techniques for Recognising Facial Expressions | |
Gupta et al. | Facial Expression Recognition with Combination of Geometric and Textural Domain Features Extractor using CNN and Machine Learning | |
KARADAĞ | An adversarial framework for open-set human action recognition using skeleton data | |
Oyeniran et al. | Review of the application of artificial intelligence in sign language recognition system | |
CN113780091B (en) | Video emotion recognition method based on body posture change representation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||