CN109460733A - Face recognition liveness detection method and system based on a single camera, and storage medium - Google Patents

Face recognition liveness detection method and system based on a single camera, and storage medium Download PDF

Info

Publication number
CN109460733A
CN109460733A CN201811323383.0A
Authority
CN
China
Prior art keywords
image
face
recognition
single camera
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811323383.0A
Other languages
Chinese (zh)
Inventor
周孺
丁建华
王栋
邱建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Athena Eyes Science & Technology Co ltd
Original Assignee
Beijing Athena Eyes Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Athena Eyes Science & Technology Co ltd
Priority to CN201811323383.0A priority Critical patent/CN109460733A/en
Publication of CN109460733A publication Critical patent/CN109460733A/en
Pending legal-status Critical Current

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face recognition liveness detection method based on a single camera, comprising the following steps: Step S100: acquiring a video stream with a single camera; Step S200: extracting images of different sizes from the video stream; and Step S300: judging whether the images show a live face using a deep learning neural network, wherein images of different sizes are evaluated by different deep learning models. In the face recognition liveness detection method and system based on a single camera and the computer-readable storage medium of the invention, the images of different sizes are fed into a multi-model-fusion deep learning neural network and each image size is detected by a different deep learning model, which not only ensures detection accuracy but also improves detection efficiency, and can effectively resist photo attacks and video replay attacks.

Description

Face recognition liveness detection method and system based on a single camera, and storage medium
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face recognition liveness detection method and system based on a single camera.
Background art
With the accumulation of internet data and the development of deep learning, face recognition has been deployed in an increasingly wide range of application scenarios, such as financial payment and social security pension collection. As a card-free, password-free application that is simple and fast, face recognition is increasingly favored by financial institutions. However, the biggest concern for financial institutions is how, while remaining convenient and efficient, to guarantee both the accuracy of face recognition and the reliability of face liveness detection.
Currently, the main attack means against face recognition are photo attacks and video replay attacks using a mobile phone or tablet, both of which degrade the accuracy of face recognition. Many vendors defend against attacks with interactive actions such as blinking and head shaking, but this approach harms the user experience and cannot resist video replay attacks from a phone or tablet. High-end phones such as the iPhone X and Xiaomi Mi 8 list face liveness detection as a main selling point, but their liveness detection relies mainly on hardware upgrades, for example obtaining 3D information through a depth camera. However, the environments in which most users perform face recognition include PCs, low- and mid-range phones, and access control systems; the production cost of a depth camera is too high, and it is not suitable for single-camera visible-light application scenarios.
Summary of the invention
The present invention provides a face recognition liveness detection method, system and storage medium based on a single camera, to solve the technical problems that existing face recognition liveness detection methods cannot prevent video replay attacks and are not suitable for single-camera visible-light application scenarios.
According to one aspect of the present invention, a face recognition liveness detection method based on a single camera is provided,
comprising the following steps:
Step S100: acquiring a video stream with a single camera;
Step S200: extracting images of different sizes from the video stream; and
Step S300: judging whether the images show a live face using a deep learning neural network, wherein images of different sizes are evaluated by different deep learning models.
Further, step S300 specifically comprises the following steps:
Step S301: feeding the images of different sizes into the deep learning neural network for classification;
Step S302: for each input image size, outputting a normalized first score and a normalized second score from the deep learning neural network; and
Step S303: combining the scores of the different image sizes to give a final decision.
Further, in step S301 different classifiers are used to classify images of different sizes.
Further, at least three image sizes are extracted in step S200: the face region is cropped from the original frontal face image with face detection as the first-size image; the first-size image is expanded by at least 20% in each of the four directions (up, down, left, right) as the second-size image; and the original face image is used as the third-size image.
Further, the deep learning neural network detects distortion information of the first-size image using a dilated MobileFaceNet network architecture.
Further, the dilated MobileFaceNet network architecture is obtained from the MobileFaceNet network architecture by dilating its 3x3 convolutional layers so that their receptive field reaches 7x7.
Further, the deep learning neural network detects edge information of the second-size image and the third-size image using a reduced ResNet network architecture.
Further, the reduced ResNet network architecture is obtained from the ResNet-18 network architecture by reducing the input image size and halving the number of channels.
The present invention also provides a face recognition liveness detection system based on a single camera, which uses the face recognition liveness detection method described above and comprises:
a single camera (100) for capturing video;
an image extraction module (200) for extracting images of different sizes from the captured video; and
a face recognition detection module (300) for judging whether the images show a live face.
The present invention also provides a computer-readable storage medium for storing a computer program for face recognition liveness detection based on a single camera, the computer program executing the following steps when run on a computer:
Step S100: acquiring a video stream with a single camera;
Step S200: extracting images of different sizes from the video stream; and
Step S300: judging whether the images show a live face using a deep learning neural network.
The invention has the following advantages:
In the face recognition liveness detection method based on a single camera of the invention, images of different sizes are fed into a multi-model-fusion deep learning neural network and each image size is detected by a different deep learning model, which not only ensures detection accuracy but also improves detection efficiency, and can effectively resist photo attacks and video replay attacks.
The face recognition liveness detection system based on a single camera of the invention has the same advantages.
In addition to the objects, features and advantages described above, the present invention has other objects, features and advantages, which are described in further detail below with reference to the drawings.
Brief description of the drawings
The accompanying drawings, which form a part of this application, are provided for a further understanding of the present invention. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a flow diagram of the face recognition liveness detection method based on single-camera visible light according to a preferred embodiment of the present invention;
Fig. 2 is a sub-flow diagram of step S300 in Fig. 1 according to the preferred embodiment of the present invention;
Fig. 3 is a flow diagram of the machine learning training of the deep learning neural network in Fig. 1 according to the preferred embodiment of the present invention;
Fig. 4 is a block diagram of the face recognition liveness detection system based on single-camera visible light according to another embodiment of the present invention.
Description of reference numerals:
100, single camera; 200, image extraction module; 300, face recognition detection module.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below with reference to the accompanying drawings, but the present invention can be implemented in many different ways as defined and covered below.
As shown in Fig. 1, a preferred embodiment of the present invention provides a face recognition liveness detection method based on single-camera visible light, which performs face liveness detection, can prevent video replay attacks, and is suitable for single-camera visible-light application scenarios. The face recognition liveness detection method comprises the following steps:
Step S100: acquiring a video stream with a single camera;
Step S200: extracting images of different sizes from the video stream; and
Step S300: judging whether the images show a live face using a deep learning neural network.
It can be understood that in step S100 a face video is captured with the single camera of a laptop or mobile phone.
It can be understood that in step S200 at least one frame whose face is larger than 160x160 is selected from the video stream as the original face image; the face region is then cropped from the original face image with face detection as the first-size image; the first-size image is expanded by at least 20% in each of the four directions (up, down, left, right) as the second-size image; and the original face image is used as the third-size image. Of course, to further improve detection accuracy, more size types can be added as needed, for example expanding the first-size image, which contains only the face region, by 25% or 30% in each of the four directions.
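As a concrete illustration of this three-size extraction, the following sketch crops patch1 from a detected face box, pads it by 20% on each side for patch2, and keeps the full frame as patch3. It is a minimal sketch, assuming OpenCV and a face bounding box supplied by an arbitrary face detector; the 160-pixel size check, the 20% expansion and the 112x112/64x64 network input sizes come from the description, while the function and variable names are illustrative.

```python
import cv2

def extract_patches(frame, face_box, expand_ratio=0.2):
    """Build the three image sizes described in step S200.

    frame    : full BGR frame taken from the video stream (source of patch3).
    face_box : (x, y, w, h) from any face detector (assumed given here).
    Returns (patch1, patch2, patch3) or None if the face is too small.
    """
    x, y, w, h = face_box
    if w < 160 or h < 160:          # the description requires a face larger than 160x160
        return None

    # patch1: the face region itself (first-size image)
    patch1 = frame[y:y + h, x:x + w]

    # patch2: the face region expanded by 20% in each of the four directions,
    # clipped to the frame borders (second-size image)
    dx, dy = int(w * expand_ratio), int(h * expand_ratio)
    H, W = frame.shape[:2]
    x0, y0 = max(x - dx, 0), max(y - dy, 0)
    x1, y1 = min(x + w + dx, W), min(y + h + dy, H)
    patch2 = frame[y0:y1, x0:x1]

    # patch3: the original full image (third-size image)
    patch3 = frame

    # resize to the network input sizes given later in the description
    patch1 = cv2.resize(patch1, (112, 112))
    patch2 = cv2.resize(patch2, (64, 64))
    patch3 = cv2.resize(patch3, (64, 64))
    return patch1, patch2, patch3
```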
As shown in Fig. 2, step S300 specifically comprises the following steps:
Step S301: feeding the images of different sizes into the deep learning neural network for classification;
Step S302: for each input image size, outputting a normalized first score and a normalized second score from the deep learning neural network; and
Step S303: combining the scores of the different image sizes to give a final decision.
It can be understood that in step S301, preferably, different classifiers are used for images of different sizes rather than a single shared classifier. This is particularly suited to the single-camera case: a single camera cannot recover a 3D model, so information can only be obtained from the flat image. Because their scales differ, images of different sizes carry different information; a small-scale image carries more information about the reflections and moire patterns of an attack sample, while a large-scale image contains more edge information of an attack sample. Classifying images of different sizes with different classifiers therefore effectively improves the accuracy of liveness detection and the ability to resist attacks. For example, three softmax classifiers are used, one for each of the three image sizes.
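The one-classifier-per-size arrangement can be pictured as three independent backbone-plus-softmax heads, one per patch size, as in the sketch below. Only the structure (a separate two-class softmax classifier for each image size) is taken from the text; the backbones shown here are trivial placeholders standing in for the dilated MobileFaceNet and reduced ResNet described later, and all names are illustrative.

```python
import torch
import torch.nn as nn

class LivenessHead(nn.Module):
    """One backbone plus a two-class softmax classifier for a single patch size."""
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone          # e.g. dilated MobileFaceNet or reduced ResNet
        self.fc = nn.Linear(feat_dim, 2)  # class 0: live (score1), class 1: non-live (score2)

    def forward(self, x):
        return torch.softmax(self.fc(self.backbone(x)), dim=1)

# Placeholder backbones; each patch size gets its own model and its own classifier.
heads = {
    "patch1": LivenessHead(nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128)), feat_dim=128),
    "patch2": LivenessHead(nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128)), feat_dim=128),
    "patch3": LivenessHead(nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128)), feat_dim=128),
}
```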
It can be understood that in step S302 the image of each size is fed into its deep learning network, and the image of each size outputs a normalized first score score1 and a normalized second score score2, where the first score score1 is the score that the input is a live face and the second score score2 is the score that the input is not a live face.
It can be understood that in step S303 the scores of the images of the multiple sizes are combined: the first scores score1 of the images of the multiple sizes are averaged to obtain a first average score S1, and the second scores score2 of the images of the multiple sizes are averaged to obtain a second average score S2. If the first average score S1 is greater than the second average score S2, the final decision is that the input is a live face; if the first average score S1 is less than the second average score S2, the final decision is that the input is not a live face.
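A minimal sketch of the fusion rule of step S303, assuming each size-specific classifier has already produced its softmax pair (score1, score2); only the averaging and the comparison of S1 against S2 come from the text, and the variable names and example numbers are illustrative.

```python
import numpy as np

def fuse_scores(per_size_scores):
    """per_size_scores: list of (score1, score2) pairs, one per image size,
    where score1 is the live score and score2 the non-live score (step S302)."""
    scores = np.asarray(per_size_scores, dtype=float)   # shape (num_sizes, 2)
    s1 = scores[:, 0].mean()   # first average score S1 (live)
    s2 = scores[:, 1].mean()   # second average score S2 (non-live)
    return "live" if s1 > s2 else "non-live"

# e.g. softmax outputs of the patch1, patch2 and patch3 classifiers:
print(fuse_scores([(0.91, 0.09), (0.75, 0.25), (0.83, 0.17)]))   # -> live
```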
As shown in Fig. 3, the deep learning neural network in step S300 is trained by the following steps:
Step S401: data collection. Videos are shot with a laptop camera or a mobile phone camera, and the face pictures exported from these videos are labeled 0 as positive samples; if a person has only one picture, that picture alone is used as the positive sample for that ID. It can be understood that multiple pictures can also be collected as positive samples to improve detection accuracy. Some of the positive-sample photos are printed on paper, videos of the paper pictures are then shot with a laptop camera or mobile phone camera, and the face pictures exported from these videos are labeled 1 as negative samples. The positive-sample videos are also stored on a mobile phone or tablet, the positive-sample video playing on the phone or tablet is re-recorded with a laptop camera or another mobile phone camera, and the face pictures exported from these videos are labeled 2 as negative samples. Preferably, to ensure detection accuracy, the above actions are repeated with different capture parameters, such as different illumination, shooting angles and backgrounds, to obtain positive samples 0, negative samples 1 and negative samples 2. When exporting face pictures from a video, 2 to 3 frames are taken per second, and the distance between the camera and the target is varied slowly during each recording to prepare for the subsequent multi-size training.
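As an illustration of the frame-export rule in step S401 (2 to 3 frames per second, labels 0/1/2 for genuine, printed-photo and screen-replay captures), the sketch below samples whole frames from a labeled video with OpenCV; face cropping is left to step S402. The file paths, directory layout and sampling rate are illustrative assumptions.

```python
import cv2
import os

def export_frames(video_path, label, out_dir, frames_per_second=2):
    """Sample roughly 2-3 frames per second from a capture video and save them
    with the liveness label (0 = genuine, 1 = printed photo, 2 = screen replay)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(int(fps // frames_per_second), 1)   # keep every `step`-th frame
    os.makedirs(out_dir, exist_ok=True)

    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"{label}_{saved:05d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# hypothetical usage: a genuine laptop capture and a phone screen re-capture
export_frames("genuine_laptop.mp4", label=0, out_dir="data/live")
export_frames("replay_phone.mp4", label=2, out_dir="data/replay")
```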
Step S402: data augmentation. For each image in positive samples 0, negative samples 1 and negative samples 2, the face region is cropped with face detection as patch1, patch1 is expanded by 20% in each of the four directions (up, down, left, right) as patch2, and the original image is used as patch3.
Step S403: training. For patch1, the dilated MobileFaceNet network architecture is used and the input image size is set to 112x112. The network architecture is shown in Table 1 below, where the Input column gives the input size x channels of the current layer, the Operation column gives the layer type (Conv3x3 denotes a 3x3 convolution), C is the number of output channels, N is the number of times the current layer is repeated, and S is the stride. The secondary imaging of a photo attack, or of a phone or display, exhibits distortion cues such as reflections, ghosting and moire patterns, so for patch1 the key is to capture these secondary-imaging distortion cues, which requires a large receptive field. All convolutional layers of MobileFaceNet are therefore replaced with dilated convolutional layers: after dilation, a 3x3 convolutional layer has a 7x7 receptive field while its computation stays unchanged, and it is roughly 18% faster than directly using a 7x7 convolution. Finally, softmax is used for classification. By using dilated convolutional layers, a larger receptive field is obtained, the reflections, ghosting, moire patterns and other distortion cues produced by the secondary imaging of a photo attack or of a phone or display can be detected effectively, and detection accuracy is improved.
Table 1: dilated MobileFaceNet network architecture
Input Operation C N S
112²x3 Conv3x3 64 1 2
56²x64 Depthwise Conv3x3 64 1 1
56²x64 Conv3x3 64 5 2
28²x64 Conv3x3 128 1 2
14²x128 Conv3x3 128 6 1
14²x128 Conv3x3 128 1 2
7²x128 Conv3x3 128 2 1
7²x128 GlobalMaxPool 128 1 7
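The receptive-field claim in step S403 can be checked with a single dilated convolution: a 3x3 kernel with dilation 3 covers a 7x7 window while keeping the cost of a 3x3 kernel. The sketch below is one assumption about how the dilated layers of Table 1 might be realized in PyTorch; the patent does not state the dilation rate, and the value 3 is inferred from the 3x3-to-7x7 statement.

```python
import torch
import torch.nn as nn

# Plain 3x3 convolution: 9 weights per filter, 3x3 receptive field.
conv_plain = nn.Conv2d(64, 64, kernel_size=3, padding=1)

# Dilated 3x3 convolution: still 9 weights per filter (same compute), but with
# dilation=3 the taps span a 7x7 window, i.e. receptive field d*(k-1)+1 = 3*2+1 = 7,
# matching the 3x3 -> 7x7 statement in the description.
conv_dilated = nn.Conv2d(64, 64, kernel_size=3, padding=3, dilation=3)

x = torch.randn(1, 64, 56, 56)
assert conv_plain(x).shape == conv_dilated(x).shape == (1, 64, 56, 56)

# Both layers have identical parameter counts; only the sampling pattern differs.
print(sum(p.numel() for p in conv_plain.parameters()),
      sum(p.numel() for p in conv_dilated.parameters()))   # same number
```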
For patch2, the reduced ResNet network architecture is used; it is shown in Table 2 below, where the Input column gives the input size x channels of the current layer, the Operation column gives the layer type (Conv3x3 denotes a 3x3 convolution), C is the number of output channels, N is the number of times the current layer is repeated, and S is the stride. There are only 7 convolutional layers, and the input image size is set to 64x64. Because the edges of a phone or tablet are distinct, they can serve directly as a decision cue, and since patch2 largely includes the frame of the phone or tablet, the key for patch2 is the edge information of the image. Edge information does not require a very deep network, so the smallest ResNet, ResNet-18, is reduced: the input image is made smaller and the number of channels is halved. With the reduced ResNet network architecture, the edge information of the image can be detected accurately while the amount of computation is greatly reduced, which improves detection speed.
Table 2: reduced ResNet network architecture
Input Operation C N S
64²x3 Conv3x3 64 1 2
32²x64 Conv3x3 64 1 1
32²x64 Conv3x3 64 1 2
16²x64 Conv3x3 64 1 1
16²x128 Conv3x3 64 1 2
8²x128 Conv3x3 128 1 1
8²x128 Conv3x3 128 1 1
8²x128 GlobalMaxPool 128 1 8
For patch3, the same reduced ResNet network architecture as for patch2 is used; the only difference is that the input becomes the full image.
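Below is one way the reduced ResNet of Table 2 might be written in PyTorch: 64x64 input, seven 3x3 convolutional layers with channel counts roughly halved relative to ResNet-18, a global max pool and a two-class softmax head. The table's per-row channel counts are not fully consistent, and the residual shortcuts of ResNet-18 are not listed in it, so this is a hedged reconstruction rather than the exact network.

```python
import torch
import torch.nn as nn

def conv_bn_relu(c_in, c_out, stride):
    """3x3 convolution + BN + ReLU, the repeated unit of Table 2."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class ReducedResNet(nn.Module):
    """Sketch of the 7-conv-layer reduced ResNet used for patch2/patch3 (64x64 input).
    Channel counts follow Table 2 where they are consistent; the residual shortcuts
    of ResNet-18 are not listed in the table and are omitted here."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            conv_bn_relu(3, 64, stride=2),    # 64x64 -> 32x32
            conv_bn_relu(64, 64, stride=1),
            conv_bn_relu(64, 64, stride=2),   # 32x32 -> 16x16
            conv_bn_relu(64, 64, stride=1),
            conv_bn_relu(64, 128, stride=2),  # 16x16 -> 8x8
            conv_bn_relu(128, 128, stride=1),
            conv_bn_relu(128, 128, stride=1),
            nn.AdaptiveMaxPool2d(1),          # GlobalMaxPool over the final 8x8 map
        )
        self.fc = nn.Linear(128, num_classes)  # softmax head: live vs. non-live

    def forward(self, x):
        z = self.features(x).flatten(1)
        return torch.softmax(self.fc(z), dim=1)

scores = ReducedResNet()(torch.randn(1, 3, 64, 64))   # tensor of shape (1, 2)
```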
Step S404: testing. Six groups of data are collected: live-person data captured with a laptop camera, live-person data captured with a mobile phone camera, printed-photo data recorded with a laptop camera, printed-photo data recorded with a mobile phone camera, phone-video data recorded with a laptop camera, and phone-video data recorded with a mobile phone camera. The six groups of data are fed into the deep learning network for training and testing, and the test accuracies are 99.96%, 99.63%, 99.12%, 99.54%, 98.93% and 99.27%, respectively. It can be seen that, through the above deep learning process, the deep learning network of the invention achieves an accuracy of 98% to 99%, can effectively resist photo attacks and video replay attacks, and is suitable for single-camera application scenarios.
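The per-group figures reported in step S404 amount to plain accuracy computed separately over each of the six capture conditions; a small sketch of that bookkeeping follows. The group names and the run_model helper are illustrative assumptions, not part of the disclosure.

```python
def group_accuracy(predictions, labels):
    """Fraction of frames whose live/non-live prediction matches the ground truth."""
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

# Six capture conditions from step S404 (names are illustrative).
groups = ["live/laptop", "live/phone", "photo/laptop",
          "photo/phone", "video/laptop", "video/phone"]

# `run_model(group)` is a hypothetical helper returning (predictions, labels)
# for all test frames of one group; per-group accuracy would then be:
# for g in groups:
#     preds, labels = run_model(g)
#     print(g, round(group_accuracy(preds, labels), 4))
```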
In the face recognition liveness detection method based on single-camera visible light of the invention, images of different sizes are fed into a multi-model-fusion deep learning neural network; the deep learning neural network classifies images of different sizes with different types of classifiers, which improves detection accuracy, and detects images of different sizes with different deep learning models, which not only ensures detection accuracy but also improves detection efficiency, and can effectively resist photo attacks and video replay attacks. For example, patch1, or the input first-size image, which is prone to distortion cues such as reflections, ghosting and moire patterns, is detected by a deep learning model containing the dilated MobileFaceNet network architecture; its larger receptive field effectively captures the reflections, ghosting, moire patterns and other distortion cues produced by the secondary imaging of a photo attack or of a phone or display, which improves detection accuracy. For patch2 and patch3, or the second-size image and third-size image, the image is larger and largely includes the frame of the phone or tablet, and since the edges of a phone or tablet are distinct they can serve directly as a decision cue; these images are detected by a deep learning model containing the reduced ResNet network architecture, which not only detects the edge information of the image accurately but also greatly reduces the amount of computation and improves detection speed.
As shown in Fig. 4, another embodiment of the present invention provides a face recognition liveness detection system based on single-camera visible light, which preferably uses the face recognition liveness detection method described above. The face recognition liveness detection system based on single-camera visible light comprises a single camera 100 for capturing video, an image extraction module 200 for extracting images of different sizes from the captured video, and a face recognition detection module 300 for judging whether the images show a live face, where the image extraction module 200 is connected to the single camera 100 and the face recognition detection module 300. The single camera 100 is a laptop camera, a mobile phone camera or a tablet camera. It can be understood that the face recognition detection module 300 is equipped with the deep learning neural network described above to detect the images.
Another embodiment of the present invention also provides a computer-readable storage medium for storing a computer program for face recognition liveness detection based on single-camera visible light, the computer program executing the following steps when run on a computer:
Step S100: acquiring a video stream with a single camera;
Step S200: extracting images of different sizes from the video stream; and
Step S300: judging whether the images show a live face using a deep learning neural network.
Common forms of computer-readable media include: floppy disks, flexible disks, hard disks, magnetic tape, any other magnetic medium, CD-ROMs, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), flash erasable programmable read-only memory (FLASH-EPROM), any other memory chip or cartridge, or any other medium from which a computer can read. Instructions may further be transmitted or received over a transmission medium. The term transmission medium includes any tangible or intangible medium that can be used to store, encode or carry instructions for execution by a machine, and includes digital or analog communication signals or other intangible media that facilitate communication of such instructions. Transmission media include coaxial cables, copper wire and optical fiber, including the wires of a bus used to transmit a computer data signal.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A face recognition liveness detection method based on a single camera, characterized in that:
it comprises the following steps:
Step S100: acquiring a video stream with a single camera;
Step S200: extracting images of different sizes from the video stream; and
Step S300: judging whether the images show a live face using a deep learning neural network, wherein images of different sizes are evaluated by different deep learning models.
2. The face recognition liveness detection method according to claim 1, characterized in that:
step S300 specifically comprises the following steps:
Step S301: feeding the images of different sizes into the deep learning neural network for classification;
Step S302: for each input image size, outputting a normalized first score and a normalized second score from the deep learning neural network; and
Step S303: combining the scores of the different image sizes to give a final decision.
3. The face recognition liveness detection method according to claim 2, characterized in that:
in step S301, different classifiers are used to classify images of different sizes.
4. The face recognition liveness detection method according to claim 1, characterized in that:
at least three image sizes are extracted in step S200: the face region is cropped from the original face image with face detection as the first-size image; the first-size image is expanded by at least 20% in each of the four directions (up, down, left, right) as the second-size image; and the original face image is used as the third-size image.
5. The face recognition liveness detection method according to claim 4, characterized in that:
the deep learning neural network detects distortion information of the first-size image using a dilated MobileFaceNet network architecture.
6. The face recognition liveness detection method according to claim 5, characterized in that:
the dilated MobileFaceNet network architecture is obtained from the MobileFaceNet network architecture by dilating its 3x3 convolutional layers so that their receptive field reaches 7x7.
7. The face recognition liveness detection method according to claim 4, characterized in that:
the deep learning neural network detects edge information of the second-size image and the third-size image using a reduced ResNet network architecture.
8. The face recognition liveness detection method according to claim 7, characterized in that:
the reduced ResNet network architecture is obtained from the ResNet-18 network architecture by reducing the input image size and halving the number of channels.
9. A face recognition liveness detection system based on a single camera, using the face recognition liveness detection method according to any one of claims 1 to 8, characterized in that it comprises:
a single camera (100) for capturing video;
an image extraction module (200) for extracting images of different sizes from the captured video; and
a face recognition detection module (300) for judging whether the images show a live face.
10. A computer-readable storage medium for storing a computer program for face recognition liveness detection based on a single camera, characterized in that the computer program executes the following steps when run on a computer:
Step S100: acquiring a video stream with a single camera;
Step S200: extracting images of different sizes from the video stream; and
Step S300: judging whether the images show a live face using a deep learning neural network.
CN201811323383.0A 2018-11-08 2018-11-08 Face recognition liveness detection method and system based on a single camera, and storage medium Pending CN109460733A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811323383.0A CN109460733A (en) 2018-11-08 2018-11-08 Face recognition liveness detection method and system based on a single camera, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811323383.0A CN109460733A (en) 2018-11-08 2018-11-08 Face recognition liveness detection method and system based on a single camera, and storage medium

Publications (1)

Publication Number Publication Date
CN109460733A true CN109460733A (en) 2019-03-12

Family

ID=65609666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811323383.0A Pending CN109460733A (en) Face recognition liveness detection method and system based on a single camera, and storage medium

Country Status (1)

Country Link
CN (1) CN109460733A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180004635A (en) * 2016-07-04 2018-01-12 한양대학교 에리카산학협력단 Method and device for reconstructing 3d face using neural network
CN107122744A (en) * 2017-04-28 2017-09-01 武汉神目信息技术有限公司 A kind of In vivo detection system and method based on recognition of face
CN107358157A (en) * 2017-06-07 2017-11-17 阿里巴巴集团控股有限公司 A kind of human face in-vivo detection method, device and electronic equipment
CN107220635A (en) * 2017-06-21 2017-09-29 北京市威富安防科技有限公司 Human face in-vivo detection method based on many fraud modes
CN108182409A (en) * 2017-12-29 2018-06-19 北京智慧眼科技股份有限公司 Biopsy method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHENG CHEN ET AL.: "MobileFaceNets: Efficient CNNs for Accurate Real-Time Face Verification on Mobile Devices", arXiv.org *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070047A (en) * 2019-04-23 2019-07-30 杭州智趣智能信息技术有限公司 A kind of face control methods, system and electronic equipment and storage medium
WO2020258120A1 (en) * 2019-06-27 2020-12-30 深圳市汇顶科技股份有限公司 Face recognition method and device, and electronic apparatus
CN112861586A (en) * 2019-11-27 2021-05-28 马上消费金融股份有限公司 Living body detection, image classification and model training method, device, equipment and medium
CN111310724A (en) * 2020-03-12 2020-06-19 苏州科达科技股份有限公司 In-vivo detection method and device based on deep learning, storage medium and equipment
CN112560819A (en) * 2021-02-22 2021-03-26 北京远鉴信息技术有限公司 User identity verification method and device, electronic equipment and storage medium
CN113343889A (en) * 2021-06-23 2021-09-03 的卢技术有限公司 Face recognition system based on silence live body detection

Similar Documents

Publication Publication Date Title
CN109460733A (en) Face recognition liveness detection method and system based on a single camera, and storage medium
CN108334848B (en) Tiny face recognition method based on generation countermeasure network
Wen et al. Face spoof detection with image distortion analysis
WO2020151489A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN108549886A (en) A kind of human face in-vivo detection method and device
US11151397B2 (en) Liveness testing methods and apparatuses and image processing methods and apparatuses
US10055843B2 (en) System and methods for automatic polyp detection using convulutional neural networks
CN108549854B (en) A kind of human face in-vivo detection method
Raja et al. Video presentation attack detection in visible spectrum iris recognition using magnified phase information
CN108229376B (en) Method and device for detecting blinking
CN112052186B (en) Target detection method, device, equipment and storage medium
CN109325933A (en) A kind of reproduction image-recognizing method and device
CN108229308A (en) Recognition of objects method, apparatus, storage medium and electronic equipment
CN105069448A (en) True and false face identification method and device
CN107609463A (en) Biopsy method, device, equipment and storage medium
CN110287862B (en) Anti-candid detection method based on deep learning
Noman et al. Mobile-based eye-blink detection performance analysis on android platform
CN108416797A (en) A kind of method, equipment and the storage medium of detection Behavioral change
Eyiokur et al. A survey on computer vision based human analysis in the COVID-19 era
CN110188602A (en) Face identification method and device in video
CN109410138A (en) Modify jowled methods, devices and systems
US11699162B2 (en) System and method for generating a modified design creative
CN106446837B (en) A kind of detection method of waving based on motion history image
CN112348112B (en) Training method and training device for image recognition model and terminal equipment
CN115565097A (en) Method and device for detecting compliance of personnel behaviors in transaction scene

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 410205 14 Changsha Zhongdian Software Park Phase I, 39 Jianshan Road, Changsha High-tech Development Zone, Yuelu District, Changsha City, Hunan Province

Applicant after: Wisdom Eye Technology Co., Ltd.

Address before: 100193 4th Floor 403, Building A, Building 14, East Courtyard, 10 Northwest Wanglu, Haidian District, Beijing

Applicant before: ATHENA EYES SCIENCE & TECHNOLOGY CO., LTD.

CB02 Change of applicant information
RJ01 Rejection of invention patent application after publication

Application publication date: 20190312

RJ01 Rejection of invention patent application after publication