CN109977794A - Method for face recognition using a deep neural network - Google Patents
Method for face recognition using a deep neural network
- Publication number
- CN109977794A (application CN201910164908.9A)
- Authority
- CN
- China
- Prior art keywords
- face
- recognition
- depth
- camera
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a method for face recognition using a deep neural network, comprising the following steps: capturing a 2D RGB picture of the face region with a first camera, and capturing a 3D point cloud of the face region in real time with a second camera; cropping and scaling the face region in the captured 2D RGB picture to reduce the influence of distance; projecting the captured 3D point cloud onto a two-dimensional plane, replacing depth information with grayscale values to form a two-dimensional depth map, and cropping the face region from it; feeding the cropped 2D RGB picture into a 2D feature extraction network and the cropped two-dimensional face depth map into a 3D feature extraction network to extract feature vectors; and performing face recognition from the extracted feature vectors in combination with environmental information. By combining 2D and 3D face recognition, the invention both preserves accuracy in the normal case (frontal, unoccluded faces) and improves the robustness of the system under adverse conditions as well as its security.
Description
Technical field
The present invention relates to face recognition technology, and in particular to a method for face recognition using a deep neural network.
Background technique
Face recognition in unconstrained images is at the frontier of the algorithmic perception revolution. Indeed, the spread of unconstrained face recognition not only opens broader application prospects for the technology, but also lets it move from formal application scenarios into everyday life; the FaceID feature of the iPhone is a typical example. In this process, the robustness of face recognition technology must improve. Current state-of-the-art two-dimensional face recognition performs well on standard datasets and in strictly controlled application scenarios; however, its performance drops sharply in unfavorable conditions (profile views and occlusion).
To address the low accuracy of traditional 2D networks on profile faces, the prior art collects landmark positions on the face, compares them with the landmark positions of a generic three-dimensional face model, estimates the shooting angle of the photograph, rotates the two-dimensional image back to a frontal view, and then performs recognition. This approach has several problems. First, locating landmarks on a profile face is much harder than on a frontal face. Second, the landmark distribution of the subject depends on the relative positions of that individual's own facial features and is only roughly similar to the generic three-dimensional model, which limits the precision of the angle estimate. Most seriously, rotating a 2D picture from profile to frontal inevitably exposes regions that were occluded in the original image (e.g. behind the nose); no original data exists for these regions, and they can only be filled in by some completion scheme, which degrades recognition. Moreover, rotating a 2D photo is an operation in three-dimensional space, and without depth data it inevitably deforms the face. Even setting aside the unsatisfactory results, the scheme involves too many operations based on designer experience, such as completing the rotated image and estimating the angle, and is therefore hard to generalize and deploy at scale.
For the occlusion problem, the prior art divides the face into regions and designs feature extraction networks covering different combinations of regions, hoping that the regions covered by some subset of the networks simultaneously exclude the occluded area and include as much of the unoccluded face as possible. In practice, however, the information used by 2D face recognition depends heavily on the eyes and their surroundings, while skin regions with little color variation are disturbed by lighting conditions and yield few effective features. As a result, the recognition rate is very low when the user wears glasses and the eye region cannot be identified, yet this situation is very common in real application scenarios.
In summary, the prior art struggles to perform face recognition well under unconstrained conditions.
Summary of the invention
An object of the present invention is to address the poor recognition of profile and occluded faces by traditional 2D neural networks, together with the fact that 3D networks, trained on less data, recognize frontal faces less well than traditional 2D networks, by proposing a novel face recognition scheme that combines 2D RGB images with 3D depth maps in a cascaded decision.
To achieve the above object, the present invention provides a method for face recognition using a deep neural network, comprising the following steps:
capturing a 2D RGB picture of the face region with a first camera, and capturing a 3D point cloud of the face region in real time with a second camera;
cropping and scaling the face region in the captured 2D RGB picture to reduce the influence of distance; projecting the captured 3D point cloud onto a two-dimensional plane, replacing depth information with grayscale values to form a two-dimensional depth map; and cropping the face region from it;
feeding the cropped 2D RGB picture into a 2D feature extraction network and the cropped two-dimensional face depth map into a 3D feature extraction network to extract feature vectors; and performing face recognition from the extracted feature vectors in combination with environmental information.
The present invention combines the features extracted by 2D and 3D recognition into a comprehensive decision, which both preserves accuracy in the normal case (frontal, unoccluded faces) and improves the robustness of the system under adverse conditions. Meanwhile, the addition of a 3D face detection system also prevents spoofing with photographs, improving the security of the system.
Detailed description of the invention
Fig. 1 is a flow diagram of a method for face recognition using a deep neural network according to an embodiment of the present invention;
Fig. 2 is a flow diagram of the face recognition part of the method shown in Fig. 1.
Specific embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The present invention addresses the poor recognition of profile and occluded faces by traditional 2D neural networks, together with the lower frontal-face recognition of 3D networks trained on less data, by proposing a novel face recognition scheme that combines 2D RGB images with 3D depth maps in a cascaded decision.
Fig. 1 is a flow diagram of a method for face recognition using a deep neural network according to an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step 1: using a nearly coaxial RGB digital camera and a structured light camera, capture a 2D RGB color picture and a 3D point cloud of the face region in real time. Crop and scale the face region in the captured 2D RGB color picture to reduce the influence of distance. Project the 3D point cloud onto a two-dimensional plane, replacing depth information with grayscale values to form a two-dimensional depth map. Apply a subsequent smoothing operation to the two-dimensional depth map to obtain a better-quality depth map, and crop the face region from it. The cropped 2D picture and depth map of the face serve as the subsequent inputs.
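The projection and smoothing in step 1 can be sketched as follows. The orthographic projection over the cloud's bounding box, the 8-bit grayscale encoding (nearer points brighter), and the 3x3 mean filter are illustrative assumptions; the patent does not fix these details:

```python
import numpy as np

def point_cloud_to_depth_map(points, width=128, height=128):
    """Project a 3D point cloud (N x 3, camera coordinates, z = depth)
    onto a 2D grid, encoding depth as 8-bit grayscale.
    Orthographic projection over the cloud's x/y bounding box
    (a hypothetical simplification of the patent's projection step)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    eps = 1e-9
    # Map x/y onto pixel coordinates over the bounding box of the cloud.
    u = ((x - x.min()) / (x.max() - x.min() + eps) * (width - 1)).astype(int)
    v = ((y - y.min()) / (y.max() - y.min() + eps) * (height - 1)).astype(int)
    # Normalize depth to 0..255: nearest point -> brightest gray.
    g = 255 - (z - z.min()) / (z.max() - z.min() + eps) * 255
    depth = np.zeros((height, width), dtype=np.uint8)
    # Keep the nearest (brightest) point when several fall on one pixel.
    np.maximum.at(depth, (v, u), g.astype(np.uint8))
    return depth

def smooth_depth(depth, k=3):
    """k x k mean filter standing in for the 'subsequent smoothing
    operation' of step 1: fills speckle and reduces sensor noise."""
    pad = k // 2
    padded = np.pad(depth.astype(float), pad, mode='edge')
    out = np.zeros(depth.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + depth.shape[0], dx:dx + depth.shape[1]]
    return (out / (k * k)).astype(np.uint8)
```

In a real pipeline the projection would use the structured light camera's intrinsics rather than a bounding-box fit; the bounding-box form keeps the sketch self-contained.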
Step 2: design the feature extraction networks according to the picture format actually in use, and train the 3D feature extraction network (mixnet-3D) on 3D depth maps. Each layer contains convolution kernels of different scales so as to extract both global and local features of the picture.
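A minimal sketch of one such mixed-scale layer follows. Random kernels stand in for learned weights, and the kernel sizes are illustrative; the actual mixnet-3D architecture is not specified in this document:

```python
import numpy as np

def conv2d(img, kernel):
    """2D cross-correlation with 'same' zero padding (what deep learning
    frameworks call a convolution layer, without the learned bias)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def mixed_scale_block(img, kernel_sizes=(3, 5, 7)):
    """One mixed-scale layer: run convolutions with kernels of several
    sizes in parallel and stack the responses as channels, so that small
    kernels capture local detail while large kernels capture broader
    facial structure in the same layer."""
    rng = np.random.default_rng(0)
    channels = []
    for k in kernel_sizes:
        kernel = rng.standard_normal((k, k)) / (k * k)
        channels.append(np.maximum(conv2d(img, kernel), 0))  # ReLU
    return np.stack(channels, axis=0)  # shape: (n_scales, H, W)
```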
Step 3: feed the 2D RGB face image and the depth map of the recognition target into the 2D feature extraction network and the mixnet-3D network respectively for feature extraction, then perform face recognition on the extracted feature vectors in combination with other environmental information.
Fig. 2 is a flow diagram of the face recognition part of the method shown in Fig. 1. As shown in Fig. 2, the face recognition part operates as follows:
First, face detection is performed on the processed two-dimensional depth map to determine whether a 3D face is present, and the depth information is read to judge whether the face lies within the operating distance (for example, between 30 and 50 cm). If it does not, no decision is made and no recognition result is returned, to guard against possible photo spoofing. The operating distance is converted according to the resolution of the specific camera in use.
For input within the operating range, the system estimates the illumination condition from the brightness of the RGB picture. For input captured under poor illumination (for example, illumination intensity below an illumination threshold), only the depth information is used for face recognition and the final result is derived from it, avoiding the interference that commonly afflicts traditional 2D face recognition. For input with acceptable illumination, the 2D and 3D feature extraction networks extract features from the 2D RGB portrait and the 3D depth portrait respectively; the extracted feature vectors are fed into a 2D classifier and a 3D classifier, which output 2D and 3D posterior probabilities for each class, yielding the class with the highest 2D posterior and the class with the highest 3D posterior. The system then checks whether these two classes agree, and the agreeing and disagreeing cases are handled separately according to the specific posterior probability values.
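The cascade above can be sketched as follows. The working range, the illumination threshold, and the fusion rule for the disagreeing case (averaging the posteriors and accepting only above a threshold) are assumptions, since the patent leaves the exact handling to the specific posterior values:

```python
import numpy as np

def cascade_decide(depth_mm, brightness, p2d, p3d,
                   work_range=(300, 500), light_thresh=40.0,
                   accept_thresh=0.5):
    """Sketch of the cascaded decision in Fig. 2.
    depth_mm    -- detected face distance in millimetres
    brightness  -- mean RGB luminance of the picture
    p2d, p3d    -- per-class posterior vectors from the 2D / 3D classifiers
    Returns the decided class index, or None to refuse (anti-spoofing /
    low confidence). Thresholds are illustrative values."""
    if not (work_range[0] <= depth_mm <= work_range[1]):
        return None                      # outside operating distance: refuse
    if brightness < light_thresh:
        return int(np.argmax(p3d))       # poor light: depth-only decision
    c2d, c3d = int(np.argmax(p2d)), int(np.argmax(p3d))
    if c2d == c3d:
        return c2d                       # both classifiers agree
    fused = (np.asarray(p2d) + np.asarray(p3d)) / 2
    best = int(np.argmax(fused))
    return best if fused[best] >= accept_thresh else None
```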
When identifying a profile image, a traditional 2D network suffers because the features of the profile do not match those of the frontal image, so it is hard to extract the same feature vector for the frontal and profile views of the same person, which makes face recognition difficult. For a 3D network, however, especially one taking a depth map as input, grayscale represents depth, and the network readily extracts relative distance information between landmarks, such as the distance from the tip of the nose to the corner of the left eye. On a depth map, such distance information does not change with the shooting angle. As long as enough positive sample pairs composed of frontal and profile views are included when training the network, it converges to extracting these invariant quantities in a suitable way. Compared with 2D pictures, the depth map preserves features that are invariant under relative rotation, making 3D face recognition almost immune to rotation of the viewing angle.
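The pose-invariant landmark distance mentioned above can be illustrated as follows; the calibration constants mapping pixels and gray levels to millimetres are hypothetical:

```python
import numpy as np

def landmark_distance_3d(depth, p1, p2, mm_per_px=1.0, mm_per_gray=4.0):
    """Recover the 3D Euclidean distance between two landmarks from a
    grayscale depth map, e.g. nose tip to left eye corner.
    p1, p2      -- (row, col) pixel coordinates of the two landmarks
    mm_per_px   -- lateral scale (hypothetical calibration constant)
    mm_per_gray -- depth scale per gray level (hypothetical)
    Measured in 3D, this distance is approximately invariant to the head
    pose at capture time, unlike its 2D projection."""
    (r1, c1), (r2, c2) = p1, p2
    dx = (c2 - c1) * mm_per_px
    dy = (r2 - r1) * mm_per_px
    dz = (float(depth[r2, c2]) - float(depth[r1, c1])) * mm_per_gray
    return float(np.sqrt(dx * dx + dy * dy + dz * dz))
```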
As for occlusion, the features extracted by the 3D network are more general, since variation in height is present everywhere on the face. Some features that are hard to discover with a traditional 2D network, such as the bulge of the cheek, can also be extracted by the 3D network, which to some extent also resolves the recognition failures caused by occlusion of key regions such as the eyes.
However, the precision of the structured light camera on which 3D recognition relies falls far short of an RGB digital camera, so in practical applications the performance of 3D recognition on unoccluded frontal faces is inferior to 2D recognition. We therefore combine the features extracted by 2D and 3D recognition into a comprehensive decision, which both preserves accuracy in the normal case (frontal, unoccluded faces) and improves the robustness of the system under adverse conditions. Meanwhile, the addition of a 3D face detection system also prevents spoofing with photographs, improving the security of the system.
We verified our conclusions on the public Bosphorus and Texas datasets, which contain frontal, profile, and occluded faces, and compared with existing schemes.
The specific embodiments described above further explain the objects, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the foregoing is only a specific embodiment of the present invention and is not intended to limit its scope of protection; any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (5)
1. A method for face recognition using a deep neural network, characterized by comprising the following steps:
capturing a 2D RGB picture of the face region with a first camera, and capturing a 3D point cloud of the face region in real time with a second camera;
cropping and scaling the face region in the captured 2D RGB picture to reduce the influence of distance; projecting the captured 3D point cloud onto a two-dimensional plane, replacing depth information with grayscale values to form a two-dimensional depth map; and cropping the face region from it;
feeding the cropped 2D RGB picture into a 2D feature extraction network and the cropped two-dimensional face depth map into a 3D feature extraction network to extract feature vectors; and performing face recognition from the extracted feature vectors in combination with environmental information.
2. The method according to claim 1, characterized in that the step of performing face recognition from the extracted feature vectors in combination with environmental information comprises:
performing face detection on the two-dimensional depth map to determine whether a face is present, and reading the depth information to judge whether the face lies within the operating distance; if it does not, making no decision and returning no recognition result, to guard against possible photo spoofing;
the operating distance being converted according to the resolution of the specific camera in use; for input within the operating range, estimating the illumination condition from the brightness of the RGB picture; for input captured under poor illumination, using only the depth information for face recognition and returning the final result from it, to avoid the interference that commonly afflicts traditional 2D face recognition; and for input with acceptable illumination, returning the final recognition result by combining the feature vectors extracted from the 2D picture and the depth picture.
3. The method according to claim 1, characterized in that the 2D feature extraction network is designed according to the picture format actually in use, the 3D feature extraction network is trained on 3D depth maps, and each layer contains convolution kernels of different scales to extract both global and local features of the picture.
4. The method according to claim 1, characterized in that a subsequent smoothing operation is applied to the two-dimensional depth map to form a high-quality two-dimensional depth map, from which the face region is cropped.
5. The method according to claim 1, characterized in that the first camera is an RGB digital camera, and the second camera is a structured light 3D camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910164908.9A CN109977794A (en) | 2019-03-05 | 2019-03-05 | Method for face recognition using a deep neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910164908.9A CN109977794A (en) | 2019-03-05 | 2019-03-05 | Method for face recognition using a deep neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109977794A true CN109977794A (en) | 2019-07-05 |
Family
ID=67077931
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910164908.9A Pending CN109977794A (en) | 2019-03-05 | 2019-03-05 | Method for face recognition using a deep neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109977794A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110686652A (en) * | 2019-09-16 | 2020-01-14 | 武汉科技大学 | Depth measurement method based on combination of depth learning and structured light |
CN111626241A (en) * | 2020-05-29 | 2020-09-04 | 北京华捷艾米科技有限公司 | Face detection method and device |
CN113205058A (en) * | 2021-05-18 | 2021-08-03 | 中国科学院计算技术研究所厦门数据智能研究院 | Face recognition method for preventing non-living attack |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971137A (en) * | 2014-05-07 | 2014-08-06 | 上海电力学院 | Three-dimensional dynamic facial expression recognition method based on structural sparse feature study |
CN106326867A (en) * | 2016-08-26 | 2017-01-11 | 维沃移动通信有限公司 | Face recognition method and mobile terminal |
CN106600640A (en) * | 2016-12-12 | 2017-04-26 | 杭州视氪科技有限公司 | RGB-D camera-based face recognition assisting eyeglass |
CN107944435A (en) * | 2017-12-27 | 2018-04-20 | 广州图语信息科技有限公司 | Three-dimensional face recognition method and device and processing terminal |
CN108197587A (en) * | 2018-01-18 | 2018-06-22 | 中科视拓(北京)科技有限公司 | A kind of method that multi-modal recognition of face is carried out by face depth prediction |
CN108520204A (en) * | 2018-03-16 | 2018-09-11 | 西北大学 | A kind of face identification method |
CN108549886A (en) * | 2018-06-29 | 2018-09-18 | 汉王科技股份有限公司 | A kind of human face in-vivo detection method and device |
CN108875546A (en) * | 2018-04-13 | 2018-11-23 | 北京旷视科技有限公司 | Face auth method, system and storage medium |
2019
- 2019-03-05 CN CN201910164908.9A patent/CN109977794A/en active Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971137A (en) * | 2014-05-07 | 2014-08-06 | 上海电力学院 | Three-dimensional dynamic facial expression recognition method based on structural sparse feature study |
CN106326867A (en) * | 2016-08-26 | 2017-01-11 | 维沃移动通信有限公司 | Face recognition method and mobile terminal |
CN106600640A (en) * | 2016-12-12 | 2017-04-26 | 杭州视氪科技有限公司 | RGB-D camera-based face recognition assisting eyeglass |
CN107944435A (en) * | 2017-12-27 | 2018-04-20 | 广州图语信息科技有限公司 | Three-dimensional face recognition method and device and processing terminal |
CN108197587A (en) * | 2018-01-18 | 2018-06-22 | 中科视拓(北京)科技有限公司 | A kind of method that multi-modal recognition of face is carried out by face depth prediction |
CN108520204A (en) * | 2018-03-16 | 2018-09-11 | 西北大学 | A kind of face identification method |
CN108875546A (en) * | 2018-04-13 | 2018-11-23 | 北京旷视科技有限公司 | Face auth method, system and storage medium |
CN108549886A (en) * | 2018-06-29 | 2018-09-18 | 汉王科技股份有限公司 | A kind of human face in-vivo detection method and device |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110686652A (en) * | 2019-09-16 | 2020-01-14 | 武汉科技大学 | Depth measurement method based on combination of depth learning and structured light |
CN110686652B (en) * | 2019-09-16 | 2021-07-06 | 武汉科技大学 | Depth measurement method based on combination of depth learning and structured light |
CN111626241A (en) * | 2020-05-29 | 2020-09-04 | 北京华捷艾米科技有限公司 | Face detection method and device |
CN111626241B (en) * | 2020-05-29 | 2023-06-23 | 北京华捷艾米科技有限公司 | Face detection method and device |
CN113205058A (en) * | 2021-05-18 | 2021-08-03 | 中国科学院计算技术研究所厦门数据智能研究院 | Face recognition method for preventing non-living attack |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210287386A1 (en) | Methods and Systems to Modify a Two Dimensional Facial Image to Increase Dimensional Depth and Generate a Facial Image That Appears Three Dimensional | |
US11037281B2 (en) | Image fusion method and device, storage medium and terminal | |
CN107852533B (en) | Three-dimensional content generation device and three-dimensional content generation method thereof | |
US8385638B2 (en) | Detecting skin tone in images | |
CN104834898B (en) | A kind of quality classification method of personage's photographs | |
CN105631861B (en) | Restore the method for 3 D human body posture from unmarked monocular image in conjunction with height map | |
CN102332095B (en) | Face motion tracking method, face motion tracking system and method for enhancing reality | |
WO2021159767A1 (en) | Medical image processing method, image processing method, and device | |
CN109583304A (en) | A kind of quick 3D face point cloud generation method and device based on structure optical mode group | |
CN110175558A (en) | A kind of detection method of face key point, calculates equipment and storage medium at device | |
CN102147852B (en) | Detect the method for hair zones | |
CN109977794A (en) | Method for face recognition using a deep neural network | |
EP4036790A1 (en) | Image display method and device | |
US8917317B1 (en) | System and method for camera calibration | |
CN103902958A (en) | Method for face recognition | |
TW201005673A (en) | Example-based two-dimensional to three-dimensional image conversion method, computer readable medium therefor, and system | |
CN104318603A (en) | Method and system for generating 3D model by calling picture from mobile phone photo album | |
CN106570447B (en) | Based on the matched human face photo sunglasses automatic removal method of grey level histogram | |
CN108416291B (en) | Face detection and recognition method, device and system | |
CN102024156A (en) | Method for positioning lip region in color face image | |
CN109523622A (en) | A kind of non-structured light field rendering method | |
CN113128428A (en) | Depth map prediction-based in vivo detection method and related equipment | |
CN109587394A (en) | A kind of intelligence patterning process, electronic equipment and storage medium | |
CN109685892A (en) | A kind of quick 3D face building system and construction method | |
CN110348344A (en) | A method of the special facial expression recognition based on two and three dimensions fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-07-05