CN109522877A - Offline multi-face recognition method and computer device based on an Android device - Google Patents
Offline multi-face recognition method and computer device based on an Android device
- Publication number
- CN109522877A (application CN201811531723.9A)
- Authority
- CN
- China
- Prior art keywords
- face
- data
- module
- presentation data
- human
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
An offline multi-face recognition method and computer device based on an Android device. Specifically: an interface module receives, in real time, face image data acquired by a camera; a business module schedules a tool module to decode the data and convert its format, producing formatted data; a face detection module converts the formatted data into a face region coordinate set; a liveness detection module filters non-living data out of the face region coordinate set to obtain a live face data set; a feature extraction module performs feature extraction on the formatted data and the live face data set to obtain a face feature set; a face recognition module compares the face feature set against the face feature data stored in the business module, obtains a similarity result, and returns it to the business module; the business module outputs the current face and the similarity result to the user interface through the interface module. The present invention achieves fast face recognition on low-performance devices without introducing extra sensors, and is applicable to scenarios such as face unlocking, face-based access control, and face-based attendance.
Description
Technical field
The present invention relates to an offline multi-face recognition method and a computer device based on an Android device.
Background technique
Mainstream face recognition on Android currently falls into two categories, offline and online. Owing to hardware limitations such as device performance and camera quality, offline schemes cannot reach a sufficiently fast recognition speed while still meeting commercial accuracy requirements; the problem is especially acute when recognizing multiple faces at once, or when matching a single face against many registered faces. Existing offline face recognition systems also place high demands on the camera, because conventional face recognition is based on histogram-of-oriented-gradients (HOG) features, which require high image resolution and generalize poorly: when the captured image resolution is low, faces simply fail to be recognized. This inevitably raises the hardware cost of the device.
Summary of the invention
The first technical problem to be solved by the present invention is to provide an offline multi-face recognition method based on an Android device.
The present invention is implemented as follows. An offline multi-face recognition method based on an Android device first configures an interface module, a tool module, a business module, a face detection module, a liveness detection module, a feature extraction module and a face recognition module; face feature data is stored in the business module. The method comprises the following steps:
Step S1: the interface module receives the acquired face image data in real time, encodes it, and transmits it to the business module;
Step S2: the business module schedules the tool module to decode the data and convert its format, producing formatted data;
Step S3: the face detection module converts the formatted data into a face region coordinate set, and sends the formatted data and the face region coordinate set to the liveness detection module;
Step S4: the liveness detection module filters non-living data out of the face region coordinate set to obtain a live face data set, and sends the formatted data and the live face data set to the feature extraction module;
Step S5: the feature extraction module performs feature extraction on the formatted data and the live face data set to obtain a face feature set;
Step S6: the face recognition module compares the face feature set against the face feature data in the business module, obtains a similarity result, and returns it to the business module;
Step S7: the business module outputs the current face image data and the similarity result to the user interface through the interface module.
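The seven steps above can be sketched as a plain function pipeline. This is a minimal illustration only: every function name here is a hypothetical stand-in for the corresponding module, and the detection, liveness and comparison models are replaced by trivial stubs.

```python
# Hypothetical stand-ins for the modules of steps S1-S7; the real
# detectors and networks are replaced by trivial stubs.

def decode_to_formatted(encoded_frame):          # S2: tool module
    return encoded_frame                         # decode/convert stub

def detect_faces(formatted):                     # S3: face detection module
    return [(10, 10, 50, 50)]                    # (x, y, w, h) regions

def filter_liveness(regions):                    # S4: liveness detection
    return list(regions)                         # keep all (stub)

def extract_features(formatted, live_regions):   # S5: feature extraction
    return [[0.1, 0.2, 0.3] for _ in live_regions]

def compare_features(features, stored):          # S6: face recognition
    return [1.0 for _ in features]               # similarity per face (stub)

def recognize(encoded_frame, stored_features):
    formatted = decode_to_formatted(encoded_frame)    # S2
    regions = detect_faces(formatted)                 # S3
    live = filter_liveness(regions)                   # S4
    feats = extract_features(formatted, live)         # S5
    return compare_features(feats, stored_features)   # S6 -> S7 output

print(recognize(b"frame", [[0.1, 0.2, 0.3]]))  # → [1.0]
```

Each stub returns a fixed value so the data flow between modules stays visible without any model weights.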
Further, in step S3, the face detection module uses an MTCNN network model to convert the formatted data into the face region coordinate set.
Further, in step S5, the feature extraction module performs feature extraction on the formatted data and the live face data set using the face shape alignment and cropping methods in the dlib library, obtaining the face feature set.
Further, in step S6, the face recognition module uses a ResNet network model to compare the face feature set against the face feature data; the ResNet returns a similarity result, which is returned to the business module.
Further, the system also comprises an acquisition unit, the acquisition unit comprising at least one infrared LED fill light and an infrared camera fitted with an infrared-pass filter;
step S01 precedes step S1:
the acquisition unit dynamically controls a PWM wave to vary the intensity of the infrared LED fill light and acquires face images under different light intensities; face regions are extracted from the face images by the MTCNN network model, then aligned and cropped by the dlib library, yielding the face image data under different light intensities; the face image data under different light intensities is described by the following formula:
I(x) = I_a + Σ I_i (i = 1 … n) + I_x
where:
I(x) is the face image data under different light intensities;
I_a is the face image data under natural light;
I_i is the face image data under the irradiation of each of the surrounding light sources;
n is the number of surrounding light sources;
I_x is the face image data under the PWM wave with brightness duty ratio x;
x is the brightness duty ratio.
Further, the liveness detection module comprises a residual computation unit and a residual recognition unit;
step S4 operates as follows:
Step S4-1: the residual computation unit performs a residual computation on the face image data in the face region coordinate set to obtain residual image data, where the residual computation is:
I_d = I_high - I_low
I_d is the residual image data before and after the light intensity change;
I_high is the face image data acquired under strong light;
I_low is the face image data acquired under dim light;
Step S4-2: the residual recognition unit normalizes the residual image data and feeds it into a trained CNN network model for recognition; the trained CNN network model outputs a confidence F that the residual image data comes from a living body; F ranges from 0 to 1; when 0.5 ≤ F ≤ 1, the residual image data is live face data; when 0 ≤ F < 0.5, the residual image data is non-living data and is deleted.
The second technical problem to be solved by the present invention is to provide a computer device.
The present invention is implemented as follows. A computer device comprises a memory, a processor, and a computer program stored in the memory and runnable on the processor; when executing the program, the processor performs the following steps:
An interface module, a tool module, a business module, a face detection module, a liveness detection module, a feature extraction module and a face recognition module are first configured; face feature data is stored in the business module. The method comprises the following steps:
Step S1: the interface module receives the acquired face image data in real time, encodes it, and transmits it to the business module;
Step S2: the business module schedules the tool module to decode the data and convert its format, producing formatted data;
Step S3: the face detection module converts the formatted data into a face region coordinate set, and sends the formatted data and the face region coordinate set to the liveness detection module;
Step S4: the liveness detection module filters non-living data out of the face region coordinate set to obtain a live face data set, and sends the formatted data and the live face data set to the feature extraction module;
Step S5: the feature extraction module performs feature extraction on the formatted data and the live face data set to obtain a face feature set;
Step S6: the face recognition module compares the face feature set against the face feature data in the business module, obtains a similarity result, and returns it to the business module;
Step S7: the business module outputs the current face image data and the similarity result to the user interface through the interface module.
Further, the device also comprises an acquisition unit, the acquisition unit comprising at least one infrared LED fill light and an infrared camera fitted with an infrared-pass filter;
step S01 precedes step S1:
the acquisition unit dynamically controls a PWM wave to vary the intensity of the infrared LED fill light and acquires face images under different light intensities; face regions are extracted from the face images by the MTCNN network model, then aligned and cropped by the dlib library, yielding the face image data under different light intensities; the face image data under different light intensities is described by the following formula:
I(x) = I_a + Σ I_i (i = 1 … n) + I_x
where:
I(x) is the face image data under different light intensities;
I_a is the face image data under natural light;
I_i is the face image data under the irradiation of each of the surrounding light sources;
n is the number of surrounding light sources;
I_x is the face image data under the PWM wave with brightness duty ratio x;
x is the brightness duty ratio.
Further, the liveness detection module comprises a residual computation unit and a residual recognition unit;
step S4 operates as follows:
Step S4-1: the residual computation unit performs a residual computation on the face image data in the face region coordinate set to obtain residual image data, where the residual computation is:
I_d = I_high - I_low
I_d is the residual image data before and after the light intensity change;
I_high is the face image data acquired under strong light;
I_low is the face image data acquired under dim light;
Step S4-2: the residual recognition unit normalizes the residual image data and feeds it into a trained CNN network model for recognition; the trained CNN network model outputs a confidence F that the residual image data comes from a living body; F ranges from 0 to 1; when 0.5 ≤ F ≤ 1, the residual image data is live face data; when 0 ≤ F < 0.5, the residual image data is non-living data and is deleted.
The present invention has the following advantages. It relies on only a single camera and achieves fast face recognition even on low-performance devices. Face detection is performed with a deep-learning-based MTCNN network model and face feature comparison with a deep-learning-based ResNet network model, yielding good generalization ability; images are preprocessed during recognition, so the resolution of an ordinary low-end camera suffices. The present invention is applicable to scenarios such as face unlocking, face-based access control, face-based attendance, and photo classification.
Description of the drawings
The present invention is further described below with reference to the accompanying drawings and embodiments.
Fig. 1 is a block diagram of the offline multi-face recognition system of the present invention.
Fig. 2 is a flowchart of the method for converting formatted data into a face region coordinate set in the present invention.
Specific embodiment
Fig. 1 is a block diagram of the offline multi-face recognition system of the present invention.
Referring to Fig. 1, an offline multi-face recognition method based on an Android device first configures an acquisition unit, an interface module, a tool module, a business module, a face detection module, a liveness detection module, a feature extraction module and a face recognition module; face feature data is stored in the business module. The acquisition unit comprises at least one infrared LED fill light and an infrared camera fitted with an infrared-pass filter.
The method comprises the following steps:
Step S01:
The acquisition unit dynamically controls a PWM (pulse-width modulation) wave to vary the intensity of the infrared LED fill light and acquires face images under different light intensities; face regions are extracted from the face images by the MTCNN network model, then aligned and cropped by the dlib library, yielding the face image data under different light intensities; the face image data under different light intensities is described by the following formula:
I(x) = I_a + Σ I_i (i = 1 … n) + I_x
where:
I(x) is the face image data under different light intensities;
I_a is the face image data under natural light;
I_i is the face image data under the irradiation of each of the surrounding light sources;
n is the number of surrounding light sources;
I_x is the face image data under the PWM wave with brightness duty ratio x;
x is the brightness duty ratio.
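Reading the formula above as a sum of the defined terms (this additive reading is an assumption based only on the variable definitions, since the original equation image is not reproduced in the text), step S01's acquisition model can be sketched as:

```python
def face_intensity(i_a, surrounding, i_x):
    """I(x) = I_a + sum of I_i over the n surrounding sources + I_x.

    i_a: contribution under natural light (I_a)
    surrounding: list of contributions, one per surrounding source (I_i)
    i_x: contribution of the fill light at brightness duty ratio x (I_x)
    """
    return i_a + sum(surrounding) + i_x

# natural light plus two ambient sources plus the PWM-driven fill light
print(face_intensity(0.30, [0.05, 0.02], 0.40))
```

Sweeping `i_x` over several duty ratios would yield the "face image data under different light intensities" that step S01 collects.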
Step S1: the interface module receives the acquired face image data in real time, encodes it, and transmits it to the business module; the face image data may be encoded in NV21 format.
Step S2: the business module schedules the tool module to decode the data and convert its format, e.g. converting NV21-format data to RGB-format data.
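The NV21-to-RGB conversion mentioned in step S2 can be sketched with NumPy using the BT.601 full-range equations. This is a simplified per-frame illustration with nearest-neighbour chroma upsampling, not the tool module's actual implementation.

```python
import numpy as np

def nv21_to_rgb(nv21: bytes, w: int, h: int) -> np.ndarray:
    """Convert an NV21 frame (Y plane + interleaved V/U plane) to an
    (h, w, 3) uint8 RGB array using the BT.601 full-range equations."""
    data = np.frombuffer(nv21, dtype=np.uint8)
    y = data[:w * h].reshape(h, w).astype(np.float32)
    vu = data[w * h:].reshape(h // 2, w // 2, 2).astype(np.float32)
    # upsample chroma to full resolution (nearest neighbour)
    v = np.repeat(np.repeat(vu[:, :, 0], 2, axis=0), 2, axis=1)
    u = np.repeat(np.repeat(vu[:, :, 1], 2, axis=0), 2, axis=1)
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)

# a 2x2 grey frame: Y=128 everywhere, V=U=128 (no chroma)
frame = bytes([128] * 4 + [128, 128])
print(nv21_to_rgb(frame, 2, 2)[0, 0])  # → [128 128 128]
```

NV21 stores a full-resolution Y plane followed by a half-resolution interleaved V/U plane, which is why the chroma data is reshaped to (h/2, w/2, 2) and upsampled before mixing.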
Step S3: the face detection module uses an MTCNN (multi-task cascaded convolutional neural network) network model to convert the formatted data into a face region coordinate set, following the flow shown in Fig. 2, and sends the formatted data and the face region coordinate set to the liveness detection module.
Step S4: the liveness detection module uses a CNN network to judge which regions in the face region coordinate set belong to living bodies and which do not, filters the non-living data out of the face region coordinate set to obtain the live face data set, and sends the formatted data and the live face data set to the feature extraction module.
The liveness detection module comprises a residual computation unit and a residual recognition unit;
step S4 operates as follows:
Step S4-1: the residual computation unit performs a residual computation on the face image data in the face region coordinate set to obtain residual image data, where the residual computation is:
I_d = I_high - I_low
I_d is the residual image data before and after the light intensity change;
I_high is the face image data acquired under strong light;
I_low is the face image data acquired under dim light;
Step S4-2: the residual recognition unit normalizes the residual image data and feeds it into a trained CNN network model for recognition; the trained CNN network model outputs a confidence F that the residual image data comes from a living body; F ranges from 0 to 1; when 0.5 ≤ F ≤ 1, the residual image data is live face data; when 0 ≤ F < 0.5, the residual image data is non-living data and is deleted.
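Steps S4-1 and S4-2 can be sketched as follows; the trained CNN is replaced by a hypothetical `confidence_fn` argument, since the model itself is not part of the text.

```python
import numpy as np

def liveness_filter(i_high, i_low, confidence_fn, threshold=0.5):
    """S4-1: residual I_d = I_high - I_low between strong-light and
    dim-light captures; S4-2: normalize, score with a trained CNN
    (stubbed here), keep the face only if confidence F >= threshold."""
    i_d = i_high.astype(np.float32) - i_low.astype(np.float32)   # S4-1
    rng = i_d.max() - i_d.min()
    normed = (i_d - i_d.min()) / rng if rng > 0 else np.zeros_like(i_d)
    f = confidence_fn(normed)                                    # S4-2
    return f >= threshold   # True: live face data; False: delete

high = np.array([[200, 180], [190, 170]], dtype=np.uint8)
low = np.array([[90, 80], [85, 75]], dtype=np.uint8)
# a real face reflects the IR fill light strongly, so assume a high score
print(liveness_filter(high, low, lambda x: 0.9))   # → True
print(liveness_filter(high, low, lambda x: 0.2))   # → False
```

The cast to float32 before subtraction avoids uint8 wraparound when the dim-light pixel is brighter than the strong-light one.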
Step S5: the feature extraction module performs feature extraction on the formatted data and the live face data set using the face shape alignment and cropping methods in the dlib library, obtaining the face feature set.
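Step S5 depends on dlib's landmark-based face alignment and cropping; as a rough stand-in (the dlib shape-predictor pipeline itself is not reproduced here), cropping the detected region and resizing it to a fixed network input size can be sketched as:

```python
import numpy as np

def crop_and_resize(image, box, size=32):
    """Crop the face region (x, y, w, h) and nearest-neighbour resize to
    (size, size) - a simplified stand-in for dlib's align-and-crop step."""
    x, y, w, h = box
    face = image[y:y + h, x:x + w]
    rows = np.arange(size) * face.shape[0] // size
    cols = np.arange(size) * face.shape[1] // size
    return face[rows][:, cols]

img = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
aligned = crop_and_resize(img, (8, 8, 48, 48))
print(aligned.shape)  # → (32, 32)
```

A real alignment step would first rotate the crop so the eye landmarks lie on a horizontal line, which dlib derives from its 68-point shape predictor.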
Step S6: the face recognition module uses a ResNet (deep residual network) network model to compare the face feature set against the face feature data; the ResNet returns a similarity result, which is returned to the business module.
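Step S6 compares the extracted feature set against the stored face feature data. One common convention (an assumption here; the patent does not give the exact score mapping) converts the Euclidean distance between embeddings into a similarity in (0, 1]:

```python
import numpy as np

def similarity(query, registered):
    """Return (index, score) of the best-matching registered embedding;
    similarity = 1 / (1 + Euclidean distance). The score mapping is an
    assumption for illustration, not the patent's formula."""
    dists = [float(np.linalg.norm(query - r)) for r in registered]
    scores = [1.0 / (1.0 + d) for d in dists]
    best = int(np.argmax(scores))
    return best, scores[best]

registered = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
idx, score = similarity(np.array([0.9, 0.1, 0.0]), registered)
print(idx)  # → 0
```

With one query embedding per detected live face, this comparison runs once per face in the multi-face case.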
Step S7: the business module outputs the current face image data and the similarity result to the user interface through the interface module.
The present invention targets face recognition on Android devices in the offline state and proposes a face recognition method that relies on only a single camera, achieves fast face recognition even when device performance is low, and can learn subtle changes in facial appearance offline. The present invention performs face detection with a deep-learning-based MTCNN network model and face feature comparison with a deep-learning-based ResNet network model, achieving good generalization ability; images are preprocessed during recognition, so an ordinary low-end camera supporting CIF-level resolution meets the requirements. The present invention is applicable to scenarios such as face unlocking, face-based access control, face-based attendance and photo classification; the face detection service can also be provided separately, for scenarios such as passenger flow statistics, photo beautification and camera face focusing.
The present invention adopts a silent (passive) detection mode: it does not require the system to introduce multiple sensors, as 3D structured-light schemes do, nor does it place higher demands on the computing power of the system, as multi-frame analysis does; a single infrared camera with an infrared-pass filter and an infrared LED fill light suffice for face liveness detection. Hardware cost is low and recognition accuracy is good. The present invention is easy to integrate, is unaffected by ambient background light during recognition, and generalizes well; because it uses NIR technology together with an infrared fill light, it also works normally in weak light.
The present invention substitutes an infrared-pass filter for the lens cover of the infrared camera, so that only infrared light is admitted: whether an attacker uses a color photograph or a black-and-white photograph, the image acquired by the infrared camera is effectively monochrome, because the filtered image carries a single color channel. The present invention uses a single infrared camera for image acquisition and applies near-infrared (NIR) spectral techniques to intercept electronic-screen imaging attacks, i.e. electronic photo replay, electronic video replay and electronic face-substitution attacks. The present invention uses the residual technique under the CNN network together with infrared imaging to intercept printed-photo attacks.
Claims (9)
1. An offline multi-face recognition method based on an Android device, characterized in that: an interface module, a tool module, a business module, a face detection module, a liveness detection module, a feature extraction module and a face recognition module are first configured, face feature data being stored in the business module; the method comprises the following steps:
Step S1: the interface module receives the acquired face image data in real time, encodes it, and sends it to the business module;
Step S2: the business module schedules the tool module to decode the data and convert its format, producing formatted data;
Step S3: the face detection module converts the formatted data into a face region coordinate set and sends the formatted data and the face region coordinate set to the liveness detection module;
Step S4: the liveness detection module filters non-living data out of the face region coordinate set to obtain a live face data set, and sends the formatted data and the live face data set to the feature extraction module;
Step S5: the feature extraction module performs feature extraction on the formatted data and the live face data set to obtain a face feature set;
Step S6: the face recognition module compares the face feature set against the face feature data in the business module, obtains a similarity result, and returns it to the business module;
Step S7: the business module outputs the current face image data and the similarity result to the user interface through the interface module.
2. The offline multi-face recognition method based on an Android device according to claim 1, characterized in that: in step S3, the face detection module uses an MTCNN network model to convert the formatted data into the face region coordinate set.
3. The offline multi-face recognition method based on an Android device according to claim 1, characterized in that: in step S5, the feature extraction module performs feature extraction on the formatted data and the live face data set using the face shape alignment and cropping methods in the dlib library, obtaining the face feature set.
4. The offline multi-face recognition method based on an Android device according to claim 1, characterized in that: in step S6, the face recognition module uses a ResNet network model to compare the face feature set against the face feature data; the ResNet returns a similarity result, which is returned to the business module.
5. The offline multi-face recognition method based on an Android device according to claim 1, characterized in that: the method further comprises an acquisition unit, the acquisition unit comprising at least one infrared LED fill light and an infrared camera fitted with an infrared-pass filter;
step S01 precedes step S1:
the acquisition unit dynamically controls a PWM wave to vary the intensity of the infrared LED fill light and acquires face images under different light intensities; face regions are extracted from the face images by the MTCNN network model, then aligned and cropped by the dlib library, yielding the face image data under different light intensities; the face image data under different light intensities is described by the following formula:
I(x) = I_a + Σ I_i (i = 1 … n) + I_x
where:
I(x) is the face image data under different light intensities;
I_a is the face image data under natural light;
I_i is the face image data under the irradiation of each of the surrounding light sources;
n is the number of surrounding light sources;
I_x is the face image data under the PWM wave with brightness duty ratio x;
x is the brightness duty ratio.
6. The offline multi-face recognition method based on an Android device according to claim 5, characterized in that: the liveness detection module comprises a residual computation unit and a residual recognition unit;
step S4 operates as follows:
Step S4-1: the residual computation unit performs a residual computation on the face image data in the face region coordinate set to obtain residual image data, where the residual computation is:
I_d = I_high - I_low
I_d is the residual image data before and after the light intensity change;
I_high is the face image data acquired under strong light;
I_low is the face image data acquired under dim light;
Step S4-2: the residual recognition unit normalizes the residual image data and feeds it into a trained CNN network model for recognition; the trained CNN network model outputs a confidence F that the residual image data comes from a living body; F ranges from 0 to 1; when 0.5 ≤ F ≤ 1, the residual image data is live face data; when 0 ≤ F < 0.5, the residual image data is non-living data and is deleted.
7. A computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that: when executing the program, the processor performs the following steps:
An interface module, a tool module, a business module, a face detection module, a liveness detection module, a feature extraction module and a face recognition module are first configured, face feature data being stored in the business module; the method comprises the following steps:
Step S1: the interface module receives the acquired face image data in real time, encodes it, and sends it to the business module;
Step S2: the business module schedules the tool module to decode the data and convert its format, producing formatted data;
Step S3: the face detection module converts the formatted data into a face region coordinate set and sends the formatted data and the face region coordinate set to the liveness detection module;
Step S4: the liveness detection module filters non-living data out of the face region coordinate set to obtain a live face data set, and sends the formatted data and the live face data set to the feature extraction module;
Step S5: the feature extraction module performs feature extraction on the formatted data and the live face data set to obtain a face feature set;
Step S6: the face recognition module compares the face feature set against the face feature data in the business module, obtains a similarity result, and returns it to the business module;
Step S7: the business module outputs the current face image data and the similarity result to the user interface through the interface module.
8. The computer device according to claim 7, characterized in that: it further comprises an acquisition unit, the acquisition unit comprising at least one infrared LED fill light and an infrared camera fitted with an infrared-pass filter;
step S01 precedes step S1:
the acquisition unit dynamically controls a PWM wave to vary the intensity of the infrared LED fill light and acquires face images under different light intensities; face regions are extracted from the face images by the MTCNN network model, then aligned and cropped by the dlib library, yielding the face image data under different light intensities; the face image data under different light intensities is described by the following formula:
I(x) = I_a + Σ I_i (i = 1 … n) + I_x
where:
I(x) is the face image data under different light intensities;
I_a is the face image data under natural light;
I_i is the face image data under the irradiation of each of the surrounding light sources;
n is the number of surrounding light sources;
I_x is the face image data under the PWM wave with brightness duty ratio x;
x is the brightness duty ratio.
9. The computer device according to claim 8, characterized in that: the liveness detection module comprises a residual computation unit and a residual recognition unit;
step S4 operates as follows:
Step S4-1: the residual computation unit performs a residual computation on the face image data in the face region coordinate set to obtain residual image data, where the residual computation is:
I_d = I_high - I_low
I_d is the residual image data before and after the light intensity change;
I_high is the face image data acquired under strong light;
I_low is the face image data acquired under dim light;
Step S4-2: the residual recognition unit normalizes the residual image data and feeds it into a trained CNN network model for recognition; the trained CNN network model outputs a confidence F that the residual image data comes from a living body; F ranges from 0 to 1; when 0.5 ≤ F ≤ 1, the residual image data is live face data; when 0 ≤ F < 0.5, the residual image data is non-living data and is deleted.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201811531723.9A (CN109522877A) | 2018-12-14 | 2018-12-14 | Offline multi-face recognition method and computer device based on an Android device |
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201811531723.9A (CN109522877A) | 2018-12-14 | 2018-12-14 | Offline multi-face recognition method and computer device based on an Android device |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN109522877A (en) | 2019-03-26 |
Family
ID=65795622
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN201811531723.9A (Pending) | Offline multi-face recognition method and computer device based on an Android device | 2018-12-14 | 2018-12-14 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN (1) | CN109522877A (en) |
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN110991239A | 2019-10-30 | 2020-04-10 | 珠海格力电器股份有限公司 | Identity verification method, device, equipment and computer readable storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103593598A (en) * | 2013-11-25 | 2014-02-19 | 上海骏聿数码科技有限公司 | User online authentication method and system based on living body detection and face recognition |
JP2016152029A (en) * | 2015-02-19 | 2016-08-22 | 大阪瓦斯株式会社 | Face authentication device, image processing device, and living body determination device |
CN106203305A (en) * | 2016-06-30 | 2016-12-07 | 北京旷视科技有限公司 | Human face in-vivo detection method and device |
CN107545243A (en) * | 2017-08-07 | 2018-01-05 | 南京信息工程大学 | Yellow race's face identification method based on depth convolution model |
CN108549873A (en) * | 2018-04-19 | 2018-09-18 | 北京华捷艾米科技有限公司 | Three-dimensional face identification method and three-dimensional face recognition system |
CN108629305A (en) * | 2018-04-27 | 2018-10-09 | 朱旭辉 | A kind of face recognition method |
CN108710831A (en) * | 2018-04-24 | 2018-10-26 | 华南理工大学 | A kind of small data set face recognition algorithms based on machine vision |
CN108875559A (en) * | 2018-04-27 | 2018-11-23 | 中国科学院自动化研究所 | The face identification method and system shone based on certificate photo and scene |
CN108985134A (en) * | 2017-06-01 | 2018-12-11 | 重庆中科云丛科技有限公司 | Face In vivo detection and brush face method of commerce and system based on binocular camera |
2018-12-14: application CN201811531723.9A filed in China (CN); published as CN109522877A; status: active, Pending
Non-Patent Citations (5)
Title |
---|
ESTEBAN VAZQUEZ-FERNANDEZ 等: "BUILT-IN FACE RECOGNITION FOR SMART PHOTO SHARING IN MOBILE DEVICES", 《2011 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO》 * |
TEDDY MANTORO 等: "Multi-Faces Recognition Process Using Haar Cascades and Eigenface Methods", 《2018 6TH INTERNATIONAL CONFERENCE ON MULTIMEDIA COMPUTING AND SYSTEMS(ICMCS)》 * |
XUDONG SUN 等: "Context Based Face Spoofing Detection Using Active Near-Infrared Images", 《2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR)》 * |
LI DEYI et al.: "Introduction to Artificial Intelligence (人工智能导论)", 30 September 2018 * |
LI SHUOHAO: "Design of a multi-face recognition system based on OMAP3530 digital image processing", Microcomputer & Its Applications (《微型机与应用》) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108921041A (en) | A kind of biopsy method and device based on RGB and IR binocular camera | |
US20150020181A1 (en) | Personal authentication method and personal authentication device | |
US11281892B2 (en) | Technologies for efficient identity recognition based on skin features | |
CN104933344A (en) | Mobile terminal user identity authentication device and method based on multiple biological feature modals | |
CN102360420A (en) | Method and system for identifying characteristic face in dual-dynamic detection manner | |
Li et al. | 3D face mask presentation attack detection based on intrinsic image analysis | |
EP3751505A1 (en) | Image coloring method and apparatus | |
CN109977846B (en) | Living body detection method and system based on near-infrared monocular photography | |
CN112818722A (en) | Modular dynamically configurable living body face recognition system | |
CN110287787A (en) | Image-recognizing method, device and computer readable storage medium | |
RU2556417C2 (en) | Detecting body movements using digital colour rear projection | |
CN104217503A (en) | Self-service terminal identity identification method and corresponding house property certificate printing method | |
Chopra et al. | Unconstrained fingerphoto database | |
CN109522877A (en) | A kind of offline plurality of human faces recognition methods and computer equipment based on Android device | |
CN205644823U (en) | Social security self -service terminal device | |
KR101344851B1 (en) | Device and Method for Processing Image | |
CN104217504A (en) | Identity recognition self-service terminal and corresponding certificate of house property printing terminal | |
Yusuf et al. | Human face detection using skin color segmentation and watershed algorithm | |
WO2018185574A1 (en) | Apparatus and method for documents and/or personal identities recognition and validation | |
CN109635746A (en) | A monocular (single-camera) face liveness detection method based on NIR residual images, and a computer-readable storage medium | |
KR20110032846A (en) | Apparatus for detecting face | |
Gangopadhyay et al. | FACE DETECTION AND RECOGNITION USING HAAR CLASSIFIER AND LBP HISTOGRAM. | |
KR101082842B1 (en) | Face recognition method by using face image and apparatus thereof | |
CN112926367A (en) | Living body detection equipment and method | |
Borah et al. | A human face detection method based on connected component analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||