CN109101925A - Liveness Detection Method - Google Patents
Liveness detection method
- Publication number
- CN109101925A (application CN201810924142.5A)
- Authority
- CN
- China
- Prior art keywords
- sample
- color
- robust features
- training set
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/162—Detection; Localisation; Normalisation using pixel segmentation or colour matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Abstract
The present invention proposes a liveness detection method, relating to the field of face recognition. It addresses the low security of existing identification schemes that distinguish live from non-live faces using only a single color space. The key points of the solution are as follows: for each sample in the training set, separately compute the color Speeded-Up Robust Features (color SURF) of the image in the HSV color space and of the image in the YCbCr color space, and fuse the two color SURF descriptors of each training sample by vector fusion into a color SURF feature group; input the color SURF feature group of each training sample into a Gaussian mixture model (GMM), and compute the Fisher Vector (FV) encoding of each training sample through the GMM; normalize the FV encoding of each training sample, convert it into the format required by the classifier, feed it into the classifier for training to obtain a trained model, and use the remaining samples as a test set to adjust the model's parameters.
Description
Technical field
The present invention relates to face recognition technologies, and in particular to techniques for more securely identifying live faces in access control systems or advertising kiosks.
Background art
Computer vision has seen key breakthroughs in recent years. Face recognition is a contactless technology: it is visual and matches people's habits of thinking, and it can therefore be widely applied in fields such as commerce and security. Face recognition has gradually become a popular research field.
The United States was the first country to develop face recognition technology and the first to apply it, and its technology remains at the international forefront. In 2014 the FBI launched its new-generation electronic identification system, with a total investment of more than one billion US dollars, used to lock onto suspects through surveillance and pursue them across the whole network. The US Department of Defense and the Department of Homeland Security have also increased investment in artificial-intelligence identification technology to prevent terrorists from threatening public safety. Major airports in Japan have introduced systems that confirm identity through computer-based face recognition; before the Tokyo Olympic and Paralympic Games, these systems are expected to make immigration inspection for Japanese citizens unmanned and to greatly shorten the entry inspection time for foreign tourists. The video-surveillance face recognition technology released by Hitachi in 2015 can scan at a speed of 36 million images per second, identify passers-by with high precision, store their face images immediately, and group faces of similar appearance.
In China, face recognition technology started in the late 1990s and has gone through five stages: technology introduction, professional-market import, technical refinement, technical application, and adoption across industries. Domestic face recognition technology is now relatively mature and is increasingly applied to the security field, with products such as attendance machines and access control machines extending to more than twenty categories, comprehensively covering coal mines, buildings, banks, the military, welfare, e-commerce, and security defense. The era of comprehensive application of face recognition has arrived.
However, face-based biometric recognition systems are vulnerable to impersonation attacks: a false face presented with a printed photograph, a video display, or a mask can be used to attack the system.
To solve this impersonation problem, reference is made to patent application No. 201310041766.X, entitled "A living face detection method based on HSV color space statistical features". In that method, the image containing a face obtained from a camera is first transformed from the RGB color space to the YCrCb color space; the image then successively undergoes skin-color segmentation, denoising, morphological processing, and connected-region boundary calibration to obtain the coordinates of the face rectangle; the face image to be detected is extracted from the original image according to those coordinates; the face image is divided into image blocks, and the feature values of the three color components of all blocks are obtained; finally, the normalized feature values are fed as samples into a trained support vector machine, which determines whether the image containing the face is a live, real face image. Its advantages are reduced latency of the face authentication system, lower computational complexity, and improved detection accuracy.
However, that method still identifies and judges samples under a single color space: the image is divided into many sub-image blocks under one color space, and a corresponding model is trained after combining them in various ways to distinguish live from non-live faces. Against more sophisticated attacks, an identification scheme that only uses live and non-live faces under a single color space can still be broken, so its security is not high.
Summary of the invention
The object of the present invention is to provide a liveness detection method that solves the problem that current identification schemes, which distinguish live from non-live faces under only a single color space, can be broken and are therefore not sufficiently secure.
The present invention solves this technical problem with the following technical solution: the liveness detection method includes the following steps.
Step 1: collect samples and set each sample's picture label value. The samples include multiple live face images and multiple non-live face images; a live face image has label value 1 and a non-live face image has label value 0.
Step 2: randomly select part of the collected samples as the training set, and convert each sample in the training set into an image in the HSV color space and an image in the YCbCr color space.
Step 3: for each sample in the training set, separately compute the color SURF of the image in the HSV color space and the color SURF of the image in the YCbCr color space, and fuse the two computed color SURF descriptors of each training sample by vector fusion into a color SURF feature group.
Step 4: input the color SURF feature group of each training sample into a Gaussian mixture model (GMM), and compute the FV encoding of each training sample through the GMM.
Step 5: normalize the FV encoding of each training sample, convert it into the format required by the classifier, feed it into the classifier for training to obtain a trained model, and set the model's initial output range. The initial output range means that when a training sample is a live face image, the difference between the model's output and the corresponding label value 1 is within a specified range, and when a training sample is a non-live face image, the difference between the model's output and the corresponding label value 0 is within a specified range.
Step 6: use the remaining samples as the test set, and convert each sample in the test set into an image in the HSV color space and an image in the YCbCr color space.
Step 7: for each sample in the test set, separately compute the color SURF of the image in the HSV color space and the color SURF of the image in the YCbCr color space, and fuse the two computed color SURF descriptors of each test sample by vector fusion into a color SURF feature group.
Step 8: input the color SURF feature group of each test sample into the GMM, and compute the FV encoding of each test sample through the GMM.
Step 9: normalize the FV encoding of each test sample, convert it into the format required by the classifier, and feed it into the trained model for computation. Judge whether the computed result lies within the initial output range; if it does, record the model's current parameters; if it does not, compute the loss of the result, back-propagate the loss into the model, and adjust the model's parameters according to the loss.
Specifically, in step 5 and/or step 8, the model is a neural network model.
Further, in step 3 and/or step 7, for each sample in the training set and/or test set, the method for computing the color SURF of the image in the HSV color space and the color SURF of the image in the YCbCr color space includes the following steps.
Step A1: within the defined rectangular area, centered on a feature point, divide the 20s × 20s image along the principal direction into 4 × 4 subregions, where s is the scale of the feature point.
Step A2: compute the responses of each subregion using a Haar wavelet template of size 2s.
Step A3: extract the horizontal and vertical responses and combine them to form the feature vector of each subregion:
V_j = [Σdx, Σdy, Σ|dx|, Σ|dy|]
where j denotes any one subregion and takes an integer from 1 to 16; dx and dy are the Haar wavelet responses in the horizontal and vertical directions, respectively; when dx is greater than 0, dx is accumulated, and when dx is less than 0, |dx| is accumulated; dy is treated likewise.
Step A4: connect the feature vectors extracted from all subregions to form a 64-dimensional color SURF descriptor:
SURF = [V_1, ..., V_16]
Specifically, between step 3 and step 4 the method further includes: performing PCA (principal component analysis) on the fused color SURF feature group; in step 4, what is input into the GMM is the color SURF feature group of each training sample after PCA.
Still further, between step 7 and step 8 the method further includes: performing PCA on the fused color SURF feature group; in step 8, what is input into the GMM is the color SURF feature group of each test sample after PCA.
The invention has the following advantages. With the above liveness detection method, for each sample in the training set the color SURF of the image in the HSV color space and of the image in the YCbCr color space are computed separately, and the two color SURF descriptors of each training sample are fused by vector fusion into a color SURF feature group. Because features from two color spaces are merged, the representation has higher robustness. The color SURF feature group of each training sample is then input into the GMM, and the FV encoding of each training sample is computed through the GMM; the FV encoding of each training sample is normalized, converted into the format required by the classifier, and fed into the classifier for training to obtain the model, and the remaining samples serve as the test set for adjusting the model's parameters.
Under more sophisticated attacks, even if the parameters associated with a single color space are cracked, the judgment result is not affected. This avoids the problem that a model trained only on live and non-live face samples under a single color space has low security, causing the liveness detection result to be affected.
Specific embodiment
The technical solution of the present invention is described in detail below.
The liveness detection method of the present invention includes the following steps.
Step 1: collect samples and set each sample's picture label value, where the samples include multiple live face images and multiple non-live face images; a live face image has label value 1 and a non-live face image has label value 0.
Step 2: randomly select part of the collected samples as the training set, and convert each sample in the training set into an image in the HSV color space and an image in the YCbCr color space, for example as sketched below.
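As an illustration only (the patent does not prescribe an implementation), the step-2 conversion could be done with OpenCV; the function name `to_color_spaces` is an assumption, and note that OpenCV's `cv2.COLOR_BGR2YCrCb` yields the Y/Cr/Cb channel ordering, so the channels are reordered here:

```python
import cv2

def to_color_spaces(bgr_image):
    """Convert a BGR sample image into its HSV and YCbCr representations."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # OpenCV names this conversion YCrCb; reorder channels to Y, Cb, Cr.
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    ycbcr = ycrcb[:, :, [0, 2, 1]]
    return hsv, ycbcr
```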
Step 3: for each sample in the training set, separately compute the color SURF of the image in the HSV color space and the color SURF of the image in the YCbCr color space, and fuse the two computed color SURF descriptors of each training sample by vector fusion into a color SURF feature group. Because the brightness and chrominance information of the HSV and YCbCr color spaces differ, fusing the color SURF from the two color spaces benefits from the potential complementarity between them (a fusion sketch follows).
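The patent does not specify the fusion operator. A minimal sketch, assuming "vector fusion" means concatenating the per-keypoint descriptors from the two color spaces (the function name and the use of NumPy are illustrative):

```python
import numpy as np

def fuse_color_surf(surf_hsv, surf_ycbcr):
    """Fuse two (n_keypoints, 64) color SURF arrays into one feature group.

    Assumes the same keypoints are used in both color spaces, so row i of
    each array describes the same point; concatenation gives (n_keypoints, 128).
    """
    return np.concatenate([surf_hsv, surf_ycbcr], axis=1)
```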
Step 4: input the color SURF feature group of each training sample into a Gaussian mixture model (GMM), and compute the FV (Fisher Vector) encoding of each training sample through the GMM. The purpose of computing the FV encoding is to make the fused color SURF feature group more robust and more resistant to attack; a sketch is given after this paragraph.
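The patent names the GMM and FV encoding without giving formulas. The sketch below uses the common FV formulation (gradients with respect to the GMM means and standard deviations); the use of scikit-learn, the diagonal covariances, and the omission of weight gradients are assumptions, not part of the patent:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(all_training_descriptors, n_components=16):
    """Fit a diagonal-covariance GMM on descriptors pooled over the training set."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag").fit(all_training_descriptors)

def fisher_vector(descriptors, gmm):
    """Simplified FV encoding of one sample's (n, d) descriptor group."""
    q = gmm.predict_proba(descriptors)            # (n, K) soft assignments
    n = descriptors.shape[0]
    mu = gmm.means_                               # (K, d)
    sigma = np.sqrt(gmm.covariances_)             # (K, d), diagonal covariances
    w = gmm.weights_                              # (K,)
    diff = (descriptors[:, None, :] - mu[None]) / sigma[None]   # (n, K, d)
    g_mu = (q[..., None] * diff).sum(axis=0) / (n * np.sqrt(w)[:, None])
    g_sigma = (q[..., None] * (diff ** 2 - 1)).sum(axis=0) / (n * np.sqrt(2 * w)[:, None])
    return np.concatenate([g_mu.ravel(), g_sigma.ravel()])      # 2*K*d dimensions
```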
Step 5: normalize the FV encoding of each training sample, convert it into the format required by the classifier, feed it into the classifier for training to obtain a trained model, and set the model's initial output range. The initial output range means that when a training sample is a live face image, the difference between the model's output and the corresponding label value 1 is within a specified range, and when a training sample is a non-live face image, the difference between the model's output and the corresponding label value 0 is within a specified range. A normalization sketch follows.
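The patent does not say which normalization is used. A common choice for FV encodings, shown here purely as an assumption, is signed square-root (power) normalization followed by L2 normalization:

```python
import numpy as np

def normalize_fv(fv):
    """Stabilize an FV encoding: power normalization, then L2 normalization."""
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)
```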
Step 6: use the remaining samples as the test set, and convert each sample in the test set into an image in the HSV color space and an image in the YCbCr color space.
Step 7: for each sample in the test set, separately compute the color SURF of the image in the HSV color space and the color SURF of the image in the YCbCr color space, and fuse the two computed color SURF descriptors of each test sample by vector fusion into a color SURF feature group.
Step 8: input the color SURF feature group of each test sample into the GMM, and compute the FV encoding of each test sample through the GMM.
Step 9: normalize the FV encoding of each test sample, convert it into the format required by the classifier, and feed it into the trained model for computation. Judge whether the computed result lies within the initial output range; if it does, record the model's current parameters; if it does not, compute the loss of the result, back-propagate the loss into the model, and adjust the model's parameters according to the loss (see the sketch after this paragraph). Here, the purpose of normalizing the FV encoding is to further improve the stability of the data.
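A sketch of the step-9 check-and-adjust loop, mirroring the patent's record-or-backpropagate logic under the assumption of a PyTorch binary classifier; the tolerance value, network shape, optimizer, and `fv_dim` are illustrative choices not fixed by the patent:

```python
import torch
import torch.nn as nn

fv_dim = 2 * 16 * 128            # illustrative: 2*K*d from the FV sketch above
model = nn.Sequential(nn.Linear(fv_dim, 128), nn.ReLU(),
                      nn.Linear(128, 1), nn.Sigmoid())
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def test_and_adjust(x, y, tolerance=0.1):
    """x: (batch, fv_dim) float tensor; y: (batch, 1) labels (1 live, 0 non-live).

    If every output is within the initial output range of its label, record the
    parameters; otherwise back-propagate the loss and update the parameters.
    """
    out = model(x)
    if torch.all((out - y).abs() <= tolerance):
        return {k: v.clone() for k, v in model.state_dict().items()}
    loss = criterion(out, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return None
```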
The liveness detection method of the present application thus fuses features from two color spaces before training the model, giving it higher robustness.
In the above method, in step 5 and/or step 8, the model is preferably a neural network model, here an artificial neural network model, because artificial neural networks have the following advantages: parallel distributed processing, high robustness and fault tolerance, distributed storage and learning ability, and the capacity to closely approximate complex nonlinear relationships. Since this application requires exactly such high robustness, choosing an artificial neural network model here improves the rate at which whole faces are correctly identified as live or non-live.
Preferably, in step 3 and/or step 7, for each sample in the training set and/or test set, the method for computing the color SURF of the image in the HSV color space and the color SURF of the image in the YCbCr color space includes the following steps.
Step A1: within the defined rectangular area, centered on a feature point, divide the 20s × 20s image along the principal direction into 4 × 4 subregions, where s is the scale of the feature point.
Step A2: compute the responses of each subregion using a Haar wavelet template of size 2s.
Step A3: extract the horizontal and vertical responses and combine them to form the feature vector of each subregion:
V_j = [Σdx, Σdy, Σ|dx|, Σ|dy|]
where j denotes any one subregion and takes an integer from 1 to 16; dx and dy are the Haar wavelet responses in the horizontal and vertical directions, respectively; when dx is greater than 0, dx is accumulated, and when dx is less than 0, |dx| is accumulated; dy is treated likewise.
Step A4: connect the feature vectors extracted from all subregions to form a 64-dimensional color SURF descriptor:
SURF = [V_1, ..., V_16]
A sketch of steps A3 and A4 is given below.
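A minimal sketch of steps A3 and A4, assuming the standard SURF sampling of a 20 × 20 grid of Haar responses around the feature point (5 × 5 samples per subregion); the input arrays and function name are illustrative:

```python
import numpy as np

def surf_descriptor(dx, dy):
    """Build the 64-D descriptor from (20, 20) grids of Haar wavelet
    responses dx, dy, split into 4x4 subregions of 5x5 samples each."""
    feats = []
    for i in range(4):
        for j in range(4):
            bx = dx[5 * i:5 * (i + 1), 5 * j:5 * (j + 1)]
            by = dy[5 * i:5 * (i + 1), 5 * j:5 * (j + 1)]
            # V_j = [sum(dx), sum(dy), sum(|dx|), sum(|dy|)]
            feats.append([bx.sum(), by.sum(),
                          np.abs(bx).sum(), np.abs(by).sum()])
    return np.asarray(feats).ravel()   # SURF = [V_1, ..., V_16], 64 dims
```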
Preferably, between step 3 and step 4 the method further includes: performing PCA (principal component analysis) on the fused color SURF feature group; in step 4, what is input into the GMM is the color SURF feature group of each training sample after PCA.
Preferably, between step 7 and step 8 the method further includes: performing PCA on the fused color SURF feature group; in step 8, what is input into the GMM is the color SURF feature group of each test sample after PCA.
PCA is added to find the principal data in the fused color SURF feature group and to replace the entire fused feature group with its most important components. This reduces the vector dimensionality of the fused color SURF feature group, simplifies the overall algorithm, and thus shortens the liveness detection time. A one-line sketch follows.
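As an assumption-laden sketch (the patent does not state the retained dimensionality), the PCA reduction might look like the following with scikit-learn; the component count and variable names are illustrative:

```python
from sklearn.decomposition import PCA

pca = PCA(n_components=32)                      # 32 is an illustrative choice
reduced_group = pca.fit_transform(fused_group)  # fused_group: (n, 128) array from the fusion sketch
```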
Claims (5)
1. A liveness detection method, characterized by comprising the following steps:
Step 1: collecting samples and setting each sample's picture label value, the samples including multiple live face images and multiple non-live face images, a live face image having label value 1 and a non-live face image having label value 0;
Step 2: randomly selecting part of the collected samples as the training set, and converting each sample in the training set into an image in the HSV color space and an image in the YCbCr color space;
Step 3: for each sample in the training set, separately computing the color SURF of the image in the HSV color space and the color SURF of the image in the YCbCr color space, and fusing the two computed color SURF descriptors of each training sample by vector fusion into a color SURF feature group;
Step 4: inputting the color SURF feature group of each training sample into a Gaussian mixture model (GMM), and computing the FV encoding of each training sample through the GMM;
Step 5: normalizing the FV encoding of each training sample, converting it into the format required by the classifier, feeding it into the classifier for training to obtain a trained model, and setting the model's initial output range, the initial output range meaning that when a training sample is a live face image, the difference between the model's output and the corresponding label value 1 is within a specified range, and when a training sample is a non-live face image, the difference between the model's output and the corresponding label value 0 is within a specified range;
Step 6: using the remaining samples as the test set, and converting each sample in the test set into an image in the HSV color space and an image in the YCbCr color space;
Step 7: for each sample in the test set, separately computing the color SURF of the image in the HSV color space and the color SURF of the image in the YCbCr color space, and fusing the two computed color SURF descriptors of each test sample by vector fusion into a color SURF feature group;
Step 8: inputting the color SURF feature group of each test sample into the GMM, and computing the FV encoding of each test sample through the GMM;
Step 9: normalizing the FV encoding of each test sample, converting it into the format required by the classifier, feeding it into the trained model for computation, and judging whether the computed result lies within the initial output range; if it does, recording the model's current parameters; if it does not, computing the loss of the result, back-propagating the loss into the model, and adjusting the model's parameters according to the loss.
2. The liveness detection method according to claim 1, characterized in that in step 5 and/or step 8, the model is a neural network model.
3. The liveness detection method according to claim 1, characterized in that in step 3 and/or step 7, for each sample in the training set and/or test set, the method for computing the color SURF of the image in the HSV color space and the color SURF of the image in the YCbCr color space comprises the following steps:
Step A1: within the defined rectangular area, centered on a feature point, dividing the 20s × 20s image along the principal direction into 4 × 4 subregions, where s is the scale of the feature point;
Step A2: computing the responses of each subregion using a Haar wavelet template of size 2s;
Step A3: extracting the horizontal and vertical responses and combining them to form the feature vector of each subregion:
V_j = [Σdx, Σdy, Σ|dx|, Σ|dy|]
where j denotes any one subregion and takes an integer from 1 to 16; dx and dy are the Haar wavelet responses in the horizontal and vertical directions, respectively; when dx is greater than 0, dx is accumulated, and when dx is less than 0, |dx| is accumulated; dy is treated likewise;
Step A4: connecting the feature vectors extracted from all subregions to form a 64-dimensional color SURF descriptor:
SURF = [V_1, ..., V_16]
4. The liveness detection method according to claim 1 or 3, characterized in that between step 3 and step 4 the method further comprises: performing PCA (principal component analysis) on the fused color SURF feature group; in step 4, what is input into the GMM is the color SURF feature group of each training sample after PCA.
5. The liveness detection method according to claim 1 or 3, characterized in that between step 7 and step 8 the method further comprises: performing PCA on the fused color SURF feature group; in step 8, what is input into the GMM is the color SURF feature group of each test sample after PCA.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201810924142.5A | 2018-08-14 | 2018-08-14 | Liveness detection method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201810924142.5A | 2018-08-14 | 2018-08-14 | Liveness detection method
Publications (1)

Publication Number | Publication Date
---|---
CN109101925A | 2018-12-28
Family
ID: 64849665

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201810924142.5A (pending) | Liveness detection method | 2018-08-14 | 2018-08-14

Country Status (1)

Country | Link
---|---
CN | CN109101925A (en)
Patent Citations (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN107992842A | 2017-12-13 | 2018-05-04 | 深圳云天励飞技术有限公司 | Liveness detection method, computer device and computer-readable storage medium

2018-08-14: application CN201810924142.5A filed (CN); published as CN109101925A, status pending
Non-Patent Citations (4)

- 刘呈云, "基于纹理分析的活体人脸检测算法研究" ("Research on texture-analysis-based live face detection algorithms"), 中国优秀硕士学位论文全文数据库信息科技辑 (China Masters' Theses Full-text Database, Information Science and Technology).
- 刘奕, "多模生物特征融合关键技术研究" ("Research on key technologies of multimodal biometric fusion"), 中国优秀硕士学位论文全文数据库信息科技辑.
- 张二磊 等, "一种改进的SURF彩色遥感图像配准算法" ("An improved SURF registration algorithm for color remote-sensing images"), 液晶与显示 (Chinese Journal of Liquid Crystals and Displays).
- 邹承明 等, "基于多特征组合的细粒度图像分类方法" ("A fine-grained image classification method based on multi-feature combination"), 计算机应用 (Journal of Computer Applications).
Cited By (6)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN110008820A | 2019-01-30 | 2019-07-12 | 广东世纪晟科技有限公司 | Silent liveness detection method
CN110135259A | 2019-04-15 | 2019-08-16 | 深圳壹账通智能科技有限公司 | Silent liveness image recognition method, apparatus, computer device and storage medium
WO2020211396A1 | 2019-04-15 | 2020-10-22 | 深圳壹账通智能科技有限公司 | Silent living body image recognition method and apparatus, computer device and storage medium
CN110298230A | 2019-05-06 | 2019-10-01 | 深圳市华付信息技术有限公司 | Silent liveness detection method, apparatus, computer device and storage medium
CN110427828A | 2019-07-05 | 2019-11-08 | 中国平安人寿保险股份有限公司 | Face liveness detection method, apparatus and computer-readable storage medium
CN110427828B | 2019-07-05 | 2024-02-09 | 中国平安人寿保险股份有限公司 | Face liveness detection method, apparatus and computer-readable storage medium
Similar Documents

Publication | Title
---|---
CN108596041B | Face liveness detection method based on video
CN106204779B | Class attendance checking method based on a multi-face data collection strategy and deep learning
CN109101925A | Liveness detection method
US6661907B2 | Face detection in digital images
CN108537743A | Face image enhancement method based on generative adversarial networks
CN110516616A | Dual-authentication face anti-spoofing method based on large-scale RGB and near-infrared datasets
CN110472519B | Face liveness detection method based on multiple models
CN109858439A | Face-based liveness detection method and device
CN101390128B | Detecting method and detecting system for positions of face parts
CN107403142A | Micro-expression detection method
CN110008793A | Face recognition method, device and equipment
CN105844245A | Fake face detecting method and system for realizing same
CN109740572A | Face liveness detection method based on local color texture features
CN108805008A | Community vehicle security system based on deep learning
CN110543848B | Driver action recognition method and device based on a three-dimensional convolutional neural network
CN111639580A | Gait recognition method combining a feature separation model and a viewing-angle conversion model
CN109117752A | Face recognition method based on grayscale and RGB
CN113033305A | Liveness detection method, apparatus, terminal device and storage medium
Guetta et al. | Dodging attack using carefully crafted natural makeup
Davis et al. | Facial recognition using human visual system algorithms for robotic and UAV platforms
CN109635712A | Spontaneous micro-expression type discrimination method based on homogeneous networks
CN113468954B | Face forgery detection method based on local-area features under multiple channels
CN114202775A | Substation dangerous-area pedestrian intrusion detection method and system based on infrared images
Abdallah et al. | A new color image database for benchmarking of automatic face detection and human skin segmentation techniques
CN112487926A | Scenic-spot feeding behavior recognition method based on spatio-temporal graph convolutional networks
Legal Events

Date | Code | Title
---|---|---
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination
| RJ01 | Rejection of invention patent application after publication

Application publication date: 20181228